Why Every Editor, Colorist, and VFX Artist Needs to Understand ACES

Editor’s note: Thanks to our friends at Mixing Light for sharing their expertise for this article and the Frame.io Workflow Guide.

When it comes to the future of filmmaking, few topics in recent memory have caused so much excitement and confusion as ACES.

If you read the marketing material, it’s easy to understand why ACES is creating so much buzz: it promises to deliver uncompromising image quality from set to screen, and every step in between.

The official technical documentation, on the other hand, tells a different story altogether, where ACES seems to be just a baffling avalanche of acronyms and technical charts.

But fear not! Understanding and adopting ACES doesn’t require a PhD in color science or a blockbuster-sized budget.

In today’s article, we’re breaking down what ACES is, what ACES isn’t, why it just might transform the future of post-production, and how you can start planning for it today.

ACES: An Overview

ACES, or the Academy Color Encoding System, is an open color management and interchange system developed by the Academy of Motion Picture Arts and Sciences (AMPAS) together with industry partners like Technicolor, ARRI, RED, and many others.


ACES aims to standardize the color science used across a huge breadth of software and hardware tools, and enable projects of all types to preserve the highest level of image quality throughout the entire workflow, including production, post-production, distribution/presentation, and archiving.

The History Of ACES

In 2004, when the ACES project started, the Academy and many industry experts realized a threat was looming over the future of the cinematic arts.

Decades of innovation in digital workflows had led to dozens of different camera systems, encoding options, display devices, and other hardware and software tools with very little standardization. While this rapid technological advancement brought tremendous benefits to the industry, the Academy realized the lack of commonality might potentially result in the irreparable loss of thousands of digital films.

Thus, the ACES project began. The goal was to build a standardized system capable of managing color in a precise, yet straightforward way, no matter the camera or display being used. The standard also needed to be open, allowing easy adoption in all parts of the industry, yet also robust and adaptable enough to handle the complexity of modern digital production.

ACES Today

After over 15 years of far-reaching collaboration between the industry and the Academy, ACES is slowly but steadily gaining momentum as the standard color management system across all types of projects.

Since version 1.0 was released in 2014, ACES has been used on dozens of Hollywood blockbusters, and is making its way into TV series and independent films. ACES pipelines are also being adopted in animation and VFX workflows, where the improved image quality and interoperability is critical to producing accurate results.

ACES is also being adopted by independent manufacturers and developers who aren’t founding partners with AMPAS on the standard. ACES compatibility is already available in many high-end software applications, including DaVinci Resolve, Filmlight’s Baselight, and Nucoda, with new programs adding it regularly.


This adoption is encouraging, but some might question why this sort of top-down standardization is necessary, especially considering existing alternatives and workflow processes.

While it is true that many software developers have built color management systems from the ground up for their own applications, and camera manufacturers pride themselves on the color science baked into their hardware, none of these proprietary solutions can close the compatibility gap that AMPAS predicts, because no proprietary solution is ever likely to be adopted industry-wide.

ACES, on the other hand, involves input from a diverse group of industry partners with scientists and artists in every field AMPAS represents. This open approach allows private developers and manufacturers to make their own existing solutions compatible with ACES, which gives users more creative choice and technological flexibility.

In short, ACES offers the best chance of delivering a universal color management solution for the future of film and video workflows.

How ACES Works

It’s important to understand that ACES is not just a program or plugin you can download. Nor is it just a firmware update for your camera or monitor. ACES is also not a particular workflow, a creative look for your footage, or a file format for your project.

Rather, ACES is a collection of rules for encoding and transforming data, along with metadata definitions for that data. ACES also encompasses a set of developer tools for integrating those data specifications into software and hardware. Finally, all of ACES’ rules, guidelines, and tools are formulated based on a set of standards laid down by the Society of Motion Picture & Television Engineers (SMPTE) and the International Organization for Standardization (ISO).

To really come to grips with what ACES is, let’s take a look at what it does inside your workflow.

Understanding Capture & Scene-Referred Data

One of the most important things to understand about ACES is that its processing pipeline utilizes capture-referred data—i.e. the color science (and sometimes secret sauce) that each camera system uses and bakes into the signal.

ACES reverse-engineers that data (through what’s known as an ACES Input Transform, or IDT) back into the pure linear light information from the actual scene in front of the camera, theoretically without any camera bias.

This is why ACES is often discussed as being scene-referred or, with the more technical phrase, scene-linear. Abstracting away the camera’s bias allows us to get closer to the actual real-world scene that the camera was pointing at.
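To make that concrete, here’s a minimal sketch of what an IDT conceptually does. Everything in it is made up for illustration: real IDTs are published per camera as CTL programs with measured curves and matrix coefficients, not the fictional log curve and placeholder identity matrix used here.

```python
# Illustrative sketch only: real IDTs are per-camera CTL programs with
# measured curves and matrix coefficients. Everything here is made up.

def decode_hypothetical_log(code_value):
    """Invert a simple, fictional log camera curve back to scene-linear light."""
    return 2.0 ** (code_value * 14.0 - 8.0)  # maps [0, 1] codes across ~14 stops

# A real IDT supplies a measured camera-RGB -> ACES AP0 matrix.
# The identity matrix stands in for it here.
CAMERA_TO_AP0 = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def idt(rgb_codes):
    """Camera code values -> scene-linear ACES: decode the curve, then matrix."""
    linear = [decode_hypothetical_log(c) for c in rgb_codes]
    return [sum(CAMERA_TO_AP0[row][i] * linear[i] for i in range(3))
            for row in range(3)]
```

The two-step shape (undo the camera’s encoding curve, then rotate the camera’s primaries into ACES) is the part that carries over to real IDTs.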

Additionally, ACES uses a color space so large it actually encompasses the entire spectral locus (every color humans can see) and even colors we can’t see! Even the smaller working color spaces of ACES (which we’ll cover later) are much larger than Rec. 709 and slightly larger than Rec. 2020.

ACES AP0 color space encompasses all the colors humans can see, dwarfing the range of colors Rec. 709 is capable of encoding.

ACES is a unifying standard, which allows you to transform footage captured from many different cameras, into a scene-referred color science with a common linear starting point.

ACES is also broad enough to encompass nearly every color space, without the limitations of smaller gamuts. Because the ACES color space is so large, “future-proof” formats for client handoffs and archival processes become possible.

But the scene-linear approach of ACES is only part of the overall pipeline.

Because the human eye doesn’t respond to light in a linear fashion, projects will ultimately be viewed on TVs, displays, and projectors that apply gamma or EOTF (Electro-Optical Transfer Function) curves. These curves aren’t linear, and our current display devices can accurately reproduce only a fraction of the color space ACES encompasses.
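As a quick illustration of just how non-linear these display curves are: BT.1886 models a typical flat-panel display’s EOTF as (approximately) a pure 2.4 power function, so a half-scale code value emits far less than half the light.

```python
# BT.1886 approximates a typical display's EOTF as a pure 2.4 power
# function (assuming a zero black level; the full standard adds offsets).
def eotf_bt1886(code_value, gamma=2.4):
    """Display EOTF: non-linear code value in [0, 1] -> linear light emitted."""
    return code_value ** gamma

# A code value of 0.5 emits only ~19% of peak light, not 50%:
print(round(eotf_bt1886(0.5), 3))  # 0.189
```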

ACES already takes this into account and incorporates display-referred color management into the overall pipeline. Linear ACES data is parsed through different transforms for different color spaces and display devices, so that an image appears accurately in a variety of viewing contexts.

That means you can capture and work with footage in the highest quality all the way until final delivery, and then ACES makes it easy to adapt footage for almost any screen.

Unlike traditional DI and video workflows, ACES maintains maximum image quality until conversion for the intended viewing formats. Image © Academy of Motion Picture Arts and Sciences.

When it comes to precision, ACES uses OpenEXR 16-bit half-float processing, which yields 30+ stops of scene-referred exposure.

Keep in mind that even though EXR is used, often this processing is just internal to the app you’re using and no EXR files are created for you to manage—except in renders.
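The “30+ stops” figure follows directly from the half-float format itself. A quick back-of-the-envelope check, using only the format’s normal range (denormals extend it further):

```python
import math

# OpenEXR half floats: 1 sign bit, 5 exponent bits, 10 mantissa bits.
HALF_MAX = 65504.0            # largest finite half-float value
HALF_MIN_NORMAL = 2.0 ** -14  # smallest normal (non-denormal) value

stops = math.log2(HALF_MAX / HALF_MIN_NORMAL)
print(round(stops, 1))  # 30.0 stops across the normal range alone
```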

In sum, these are the benefits of ACES pipelines:

  • Camera System Color Science Unification – Because of the capture-referred, scene-linear transform at the start of an ACES processing pipeline, there is a common color science starting point that all ACES-capable applications can use and understand, no matter which camera(s) were used on a project.
  • No Guessing For VFX/CGI Workflows – One reason that ACES has been embraced by VFX/CGI heavy films is that they’re compositing and working in linear anyway! And that linear data can then be rendered back to whatever is appropriate to the project, or kept linear and given back to a colorist to simply reapply their existing grade with no look shifting.
  • Ready For Wide Gamut/High Dynamic Range – Now and in the future. Because ACES is capable of retaining 30+ stops of image data and the gamuts are so large, it’s a great match for HDR, wide gamut projects, however those concepts may develop in the future.
  • Evergreen Digital Masters – One major thing that the Academy pushes about ACES is that it allows for a true evergreen digital master because of the ultrawide/high dynamic range nature of the system.

The ‘Parts’ Of ACES

Even though ACES and its various transforms are quite mathematically complex, you can understand ACES better by understanding what each part or transform in the pipeline does.

Here’s the terminology for each of these transforms:

  • ACES Input Transform (aka: IDT or Input Device Transform) – The Input Transform takes the capture-referred data of a camera and transforms it into scene linear, ACES color space. Camera manufacturers are responsible for developing IDTs for their cameras but the Academy tests and verifies the IDTs. In future versions of ACES, the Academy may take on more control in the development of IDTs. IDTs, like all ACES transforms, are written using the CTL (Color Transform Language) programming language. It’s also possible to utilize different IDTs for a camera system to compensate for different camera settings that might have been used.
  • ACES Look Transform (aka: LMT or Look Modification Transform) – The first part of what’s known as the ACES Viewing Transform (the Viewing Transform is a combination of LMT, RRT, & ODT transforms). LMTs provide a way to apply a look to a shot in a similar way to using a Look Up Table (LUT). It’s important to note that the LMT happens after color grading of ACES data and not every tool supports the use of LMTs.
  • RRT (Reference Rendering Transform) – Think of the RRT as the render engine component of ACES. The RRT converts scene-referred linear data into an ultrawide display-referred data set, and works in combination with the ODT to create viewable data for displays and projectors. While the Academy publishes the standard RRT, some applications can use customized RRTs (written with CTL), but many color correction systems do not provide direct access to the RRT.
  • ACES Output Transform (also known as the ODT or Output Device Transform) – The final step in the ACES processing pipeline is the ODT. This takes the ultrawide, high dynamic range data from the RRT and transforms it for different devices and color spaces like P3 or Rec 709, 2020, etc. ODTs like IDTs and RRTs are written with CTL.

There are also three main subsets of ACES used for finishing workflows called ACEScc, ACEScct and ACEScg:

  • ACEScc uses logarithmic color encoding, which makes color grading tools respond much the way they do in the log spaces many colorists prefer.
  • ACEScct is just like ACEScc, but adds a ‘toe’ to the encoding so that when using lift operations the response feels more similar to traditional log film scans. This quasi-logarithmic behavior is described as being more “milky,” or “foggier.” ACEScct was added with the ACES 1.03 specification and is meant as an alternative to ACEScc based on the feedback of many colorists.
  • ACEScg utilizes linear color encoding and is designed for VFX/CGI artists so their tools behave more traditionally.

ACEScct provides colorists a more familiar feel for grading compared to ACEScc.
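For the curious, the encoding curves behind these working spaces are published in the ACES specifications. Here’s a sketch of the linear-to-ACEScc and linear-to-ACEScct functions; the constants are reproduced from the published specs as best we recall them, so verify against the official documents before relying on them.

```python
import math

def lin_to_acescc(lin):
    """Scene-linear -> ACEScc (pure log, with special handling near zero)."""
    if lin <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if lin < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52
    return (math.log2(lin) + 9.72) / 17.52

def lin_to_acescct(lin):
    """Scene-linear -> ACEScct (same log above the cut, linear 'toe' below)."""
    if lin <= 0.0078125:
        return 10.5402377416545 * lin + 0.0729055341958355
    return (math.log2(lin) + 9.72) / 17.52

# 18% grey lands near 0.414 in both encodings; they differ only in the shadows,
# where ACEScct's linear toe lifts values above ACEScc's log curve.
print(round(lin_to_acescc(0.18), 3), round(lin_to_acescct(0.18), 3))
```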

While ACEScc, ACEScct and ACEScg and the transforms they use are what you’ll most often see when it comes to ACES, there are some additional terms you may encounter:

  • APD (Academy Printing Density) – AMPAS-supplied reference densities for calibrating film scanners.
  • ADX (Academy Density Exchange) – Used for scanning film and getting those scans into ACES – similar to the Cineon system for scanning.
  • ACESproxy – A logarithmic, integer, range-limited version of ACEScc, meant to be used on set with compatible equipment over SDI.

The ACES Pipeline

Now that we’ve defined the transforms used for ACES, understanding how the various transforms combine to form an ACES processing pipeline is pretty straightforward:

Camera Data -> Input Transform -> Color Grading -> Look Transform (optional) -> Reference Rendering Transform -> Output Transform
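In code terms, the chain above is just function composition. The stage bodies below are trivial stand-ins (the real transforms are CTL programs); the point is the ordering, not the math inside each stage.

```python
# Trivial stand-ins for each ACES stage; only the composition order is real.
def input_transform(x):   return x * 2.0           # IDT: camera data -> scene-linear
def grade(x):             return x * 1.1           # colorist's adjustments in ACES
def look_transform(x):    return x                 # optional LMT (identity here)
def rrt(x):               return min(x, 1.0)       # render to display-referred
def output_transform(x):  return x ** (1.0 / 2.4)  # ODT: encode for a display

def aces_pipeline(camera_value):
    """Compose the stages in the order the chain above describes."""
    return output_transform(rrt(look_transform(grade(input_transform(camera_value)))))
```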

As mentioned, ACES is a hybrid color management system of scene referred/scene linear and display referred data.

In the graphic below, the various ACES transforms fit into the scene referred (top section) and display referred (bottom section) part of the pipeline:

ACES Pipeline

ACES Color Spaces

When ACES is discussed, you’ll often hear terms like 2065-1, AP0, AP1, and Rec. 2020+ thrown around.

What do these terms mean?

SMPTE ST 2065-1 is the SMPTE standardization of ACES.

While this standard has many parts, in daily usage ACES 2065 has come to mean the full linear version of ACES, whose set of RGB primaries defines a gamut larger than the spectral locus itself. ACES 2065 uses a set of primaries known as AP0 (ACES Primaries 0).

While ACES AP0 encompasses a much larger range of colors than almost any other gamut, ACES AP1 is only slightly larger than Rec. 2020, lending it the nickname “Rec. 2020+”.


2065-1/AP0 is mainly meant for archival and file exchange. For grading, VFX, and editing, AP1 is currently the more likely choice. ACEScc, ACEScct, and ACEScg all utilize AP1 primaries.
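You can sanity-check these size claims with the chromaticity coordinates of each gamut’s primaries. Comparing the triangle areas on the CIE xy diagram (a rough proxy for gamut size) shows AP0 dwarfing everything while AP1 sits just above Rec. 2020. The coordinates below are the commonly published values; double-check them against the specs before reuse.

```python
def gamut_area(primaries):
    """Area of the RGB triangle on the CIE xy diagram (shoelace formula)."""
    (rx, ry), (gx, gy), (bx, by) = primaries
    return abs(rx * (gy - by) + gx * (by - ry) + bx * (ry - gy)) / 2.0

# (x, y) chromaticities for the R, G, B primaries, as commonly published.
AP0     = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]
AP1     = [(0.7130, 0.2930), (0.1650, 0.8300), (0.1280, 0.0440)]
REC2020 = [(0.7080, 0.2920), (0.1700, 0.7970), (0.1310, 0.0460)]
REC709  = [(0.6400, 0.3300), (0.3000, 0.6000), (0.1500, 0.0600)]

for name, prims in [("AP0", AP0), ("AP1", AP1),
                    ("Rec2020", REC2020), ("Rec709", REC709)]:
    print(name, round(gamut_area(prims), 3))
# AP0 covers roughly 3.5x the xy area of Rec. 709; AP1 edges out Rec. 2020.
```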

Using ACES

As you can see, ACES represents a huge leap over the color management systems you might be used to in your everyday workflow. At this point, you might be thinking “Yes, I want that now. How do I get it?”

As mentioned earlier, several post-production applications are already offering ACES compatibility, so if you use any of those programs regularly, you’re only a few steps away from diving into ACES.

That said, as we’ve discussed, ACES is a pipeline. For it to be valuable, every step of your workflow needs to follow the ACES rules.

So how you adopt ACES depends very much on what you are working on and what you need ACES to do.

For example, VFX artists can already viably use ACES for improved compositing accuracy and handing off work between teams. The increased color accuracy can add incredible levels of realism to your scenes, and solves a myriad of technical issues artists have been battling for years.

But for many creatives, who deal with live-action footage on a daily basis, complete adoption of ACES will depend on production teams embracing the new standard. Of course, this is largely influenced by what transforms are available for which camera manufacturers. That’s the necessary link for incorporating camera formats into the ACES system.

Obviously, it will take some time for hardware manufacturers to catch up before more workflows can adopt ACES in earnest. But that doesn’t mean everyone has to wait around to get their hands dirty with ACES.

Open Source Development

Because ACES is an open source project, anyone can help bring the standard to life in new ways.

While AMPAS actively manages and curates the development of the ACES system, the code is hosted in a GitHub repository that anyone can contribute to. ACES transforms are written in Color Transform Language (CTL), so if you’re mathematically inclined and have experience in color science, you can contribute there.

If you’re less technically inclined but still want to contribute, you can visit ACES Central, an AMPAS-run community. There you can connect with other film and video professionals who are exploring the future of ACES, and learn how other end users and ACES-certified partners are adopting the technology.


The good news about ACES’ continuous development is that new versions are intended to be backwards compatible. Data encoded with an older version of ACES can be opened in a newer version, though the image may render slightly differently than in the original project. Many applications let you switch the version of ACES being used, so you can match the version to the original project.

The most up-to-date official version of ACES is 1.1. Many applications do not yet support it, so version 1.03 remains the most widely supported version currently.

Unless there are major future changes and/or needs for end-users as determined by AMPAS, we will probably only see the major release versions appear in software, while the minor .0x releases may just be implemented behind the scenes as performance and bug fixes.


We know this is a lot to take in, but it is worth your time to get a basic understanding of ACES now so that you can adopt it as it becomes viable for your workflow in the future.

ACES isn’t a magic bullet that will fix every technical issue or creative challenge of filmmaking. No technology will ever do that.

That said, ACES does offer some incredible advances for the future of video and cinema. This is a future we could only dream about in the not too distant past, so it’s exciting to see it just over the horizon.

Time will tell how fast it gets here.

If you want to dig deeper into every aspect of film and video workflow, from capture to conform to delivery, be sure to check out the Frame.io Workflow Guide. At over 100,000 words, it’s the most comprehensive website dedicated to film and video workflow.

Thank you to Ben Bailey for contributing this article.

Interested in contributing?

This blog relies on people like you to step in and add your voice. Send us an email: blog at frame.io if you have an idea for a post or want to write one yourself.