Why Every Editor, Colorist, and VFX Artist Needs to Understand ACES

Few filmmaking topics cause as much excitement and confusion as ACES.

If you read the marketing material, it’s easy to understand why ACES is creating so much buzz. It promises to deliver uncompromising image quality from set to screen, and every step in between.

The official technical documentation, on the other hand, tells a different story altogether. Here, ACES seems to be just a baffling avalanche of acronyms and technical charts.

But fear not! Understanding and adopting ACES doesn’t require a PhD in color science or a blockbuster-sized budget.

In today’s article, we’re breaking down what ACES is, what ACES isn’t, and why it just might transform the future of post-production.

ACES: An overview

ACES, or the Academy Color Encoding System, is an open color management and interchange system developed by the Academy of Motion Picture Arts & Sciences (AMPAS) and industry partners like Technicolor, ARRI, RED, and many others.

The system aims to standardize the color science used across a huge breadth of software and hardware tools. It also promises to preserve the highest level of image quality throughout the entire workflow, including production, post-production, distribution/presentation, and archiving.

The history of ACES

In 2004, when the ACES project started, a cloud loomed over the cinematic arts.

Decades of innovation in digital workflows had led to innumerable cameras, codecs, displays, and hardware and software tools, but with very little standardization. This rapid technological advancement brought tremendous benefits to the industry, but the Academy realized that the lack of commonality might result in the irreparable loss of thousands of digital films.

Thus, the ACES project began. The Academy wanted a standardized system capable of managing color in a precise yet straightforward way, one that could be easily adopted by hardware and software partners. The standard also needed to be open, allowing easy implementation in all parts of the industry, while being robust and adaptable enough to handle the ever-changing complexity of modern workflows.

ACES today

After over 15 years of far-reaching collaboration between the industry and the Academy, ACES is slowly but steadily gaining momentum. Soon it could be the standard color management system across all types of projects.

Since version 1.0 was released in 2014, ACES has been used on dozens of Hollywood blockbusters, and is making its way into TV series and independent films. Animation and VFX workflows are also adopting ACES pipelines, where improved image quality and interoperability are critical.

ACES is also being adopted by independent manufacturers and developers who aren’t founding partners with AMPAS on the standard. ACES compatibility is already available in many high-end software applications, including DaVinci Resolve, FilmLight’s Baselight, and Nucoda. And more programs are adding support.

Why ACES?

This adoption is encouraging, but some might question why this sort of top-down standardization is necessary, especially considering existing alternatives and established workflows.

While it is true that many software developers have built color management systems from the ground up for their own applications, and camera manufacturers pride themselves on the color science baked into their hardware, none of these solutions can close the compatibility gap that concerns AMPAS. Proprietary solutions have little hope of ever being adopted industry-wide.

ACES, on the other hand, incorporates input from industry partners, scientists, and artists across every field AMPAS represents. This open approach allows private developers and manufacturers to make their existing solutions compatible with ACES, which gives users more creative choice and technological flexibility.

In short, ACES offers the best chance of a universal color management solution for future film and video projects.

How ACES works

It’s important to understand that ACES is not just a program or plugin you can download. Nor is it just a firmware update for your camera or monitor. ACES is also not a particular workflow, a creative look for your footage, or a file format for your project.

Rather, ACES is a collection of rules for encoding and transforming data, along with metadata definitions for that data. ACES also encompasses a set of developer tools for integrating those data specifications into software and hardware. Finally, all of ACES’ rules, guidelines, and tools are formulated based on a set of standards laid down by the Society of Motion Picture & Television Engineers (SMPTE) and the International Organization for Standardization (ISO).

To really come to grips with what ACES is, let’s take a look at what it does inside your workflow.

Understanding capture- & scene-referred data

One of the most important things to understand about ACES is that its processing pipeline starts from capture-referred data. In other words, the signal each camera system records, with its own color science (and sometimes secret sauce) baked in.

ACES reverse-engineers that data (through what’s known as an ACES Input Transform, or IDT) back into the pure linear light information of the actual scene in front of the camera, theoretically without any camera bias.

ACES is often described as scene-referred or scene-linear. Abstracting away the camera’s bias allows us to get closer to the actual real-world scene that the camera records.
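
To make “scene-linear” concrete, here’s a minimal sketch (plain Python/NumPy, no ACES libraries, with made-up values) of why linear data is so convenient: in scene-linear values, opening up one stop of exposure is a simple multiplication by 2, while the same multiplication on gamma-encoded, display-referred values does something quite different to the light it represents.

```python
import numpy as np

# A few scene-linear pixel values (proportional to light in the scene).
# 0.18 is the conventional 18% gray reference.
linear = np.array([0.02, 0.18, 1.0, 4.0])

# Opening up one stop in scene-linear space is a simple multiply by 2.
one_stop_up = linear * 2.0

# The same "multiply by 2" applied to gamma-encoded (display-referred)
# values does NOT correspond to a clean one-stop change in actual light.
gamma = 2.4  # a typical display gamma, used purely for illustration
encoded = linear ** (1.0 / gamma)
naive_push = encoded * 2.0              # a "x2" on the encoded data...
resulting_light = naive_push ** gamma   # ...decoded back to linear light

print("scene-linear, +1 stop:", one_stop_up)
print("light multiplier caused by x2 in gamma space:", resulting_light / linear)
```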

Additionally, ACES uses a color space so large it actually encompasses the entire visual locus (every color humans can see) and even colors we can’t see! Even the smaller working color spaces of ACES (which we’ll cover later) are much larger than Rec. 709 and Rec. 2020.

Unification

ACES is a unifying standard that allows you to transform footage captured on many different cameras into scene-referred data with a common linear starting point.

ACES is also broad enough to encompass almost every other color space, without the limitations of smaller gamuts. Because the ACES color space is so large, it offers “future-proof” formats for client handoffs and archival processes.

But the scene-linear approach of ACES is only part of the overall pipeline.

The eyes have it

Your eyes don’t work in a linear fashion, and your project will be viewed on TVs, displays, and projectors. These devices all apply gamma or EOTF (Electro-Optical Transfer Function) curves. These curves aren’t linear, and current display devices are much more limited in the color spaces they can accurately reproduce (compared to ACES).

ACES already takes this into account and incorporates display-referred color management into the overall pipeline. Linear ACES data passes through different transforms for different color spaces and display devices, so an image appears accurately in a variety of viewing contexts.

That means you can capture and work with footage in the highest quality all the way until final delivery. Then ACES makes it easy to adapt footage for almost any screen.
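
As a rough illustration of that last step (not actual ACES Output Transform math), here’s how the very same scene-linear value ends up encoded differently depending on the display’s transfer characteristic. The simple power-law gammas below are common nominal figures used only for this sketch; real display encodings (BT.1886, sRGB, PQ, and so on) and real ACES Output Transforms are more involved.

```python
# The same scene-linear mid-gray value...
mid_gray = 0.18

# ...encoded for displays with different (simplified) transfer functions.
# Nominal power-law approximations for illustration only.
targets = {"gamma 2.2": 2.2, "gamma 2.4": 2.4, "gamma 2.6 (cinema)": 2.6}

for name, gamma in targets.items():
    code_value = mid_gray ** (1.0 / gamma)
    print(f"{name}: linear 0.18 -> encoded {code_value:.3f}")
```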

When it comes to precision, ACES uses OpenEXR 16-bit half-float processing, which results in 30+ stops of scene-referred exposure.

This processing is often internal to the app you’re using, so no EXR files are created for you to manage (except in renders).
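
Here’s a quick back-of-the-envelope check on that “30+ stops” figure, assuming standard IEEE 16-bit half-float behavior (the half type OpenEXR uses):

```python
import numpy as np

# Largest finite half-float value and smallest positive *normal* value.
largest = float(np.finfo(np.float16).max)            # 65504.0
smallest_normal = float(np.finfo(np.float16).tiny)   # 2**-14, about 6.1e-05

# Stops are doublings of light, so log2 of the ratio gives the usable
# scene-referred range (denormal values extend it even further).
stops = np.log2(largest / smallest_normal)
print(f"approximate half-float dynamic range: {stops:.1f} stops")
```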

Benefits

In sum, these are the benefits of ACES pipelines:

  • Camera System Color Science Unification.
    Because of the capture-referred, scene-linear transform at the start of an ACES processing pipeline, there is a common color science starting point that all ACES-capable applications can use and understand, no matter which camera(s) were used on a project.
  • No Guessing For VFX/CGI Workflows.
    One reason ACES has been embraced by VFX/CGI-heavy films is that those artists are compositing and working in linear anyway! That linear data can then be rendered back to whatever is appropriate for the project, or kept linear and handed back to a colorist, who can simply reapply their existing grade with no look shifting.
  • Ready For Wide Gamut/High Dynamic Range.
    Because ACES is capable of retaining 30+ stops of image data and its gamuts are so large, it’s a great match for HDR and wide-gamut projects, now and in the future, regardless of how these concepts may develop.
  • Evergreen Digital Masters.
    One major point the Academy pushes about ACES is that it allows for a true evergreen digital master, thanks to the ultrawide-gamut, high-dynamic-range nature of the system.

The ‘parts’ of ACES

Even though ACES and its various transforms are quite mathematically complex, you can understand ACES better by understanding what each part or transform in the pipeline does.

Here’s the terminology for each of these transforms:

ACES Input Transform (aka: IDT or Input Device Transform)

The Input Transform takes the capture-referred data of a camera and transforms it into the scene-linear ACES color space. Camera manufacturers are responsible for developing IDTs for their cameras, but the Academy tests and verifies them. In future versions of ACES, the Academy may take more control over the development of IDTs. IDTs, like all ACES transforms, are written in the CTL (Color Transform Language) programming language. It’s also possible to use different IDTs to compensate for different camera settings that might have been used.
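
To make the idea concrete, here’s a toy sketch of what an Input Transform conceptually does: undo the camera’s log encoding, then matrix the result into ACES primaries. The log curve and matrix below are invented for illustration only; a real IDT uses the manufacturer’s published curve and a camera-specific matrix, and is written in CTL rather than Python.

```python
import numpy as np

def hypothetical_log_decode(code_values):
    """Undo a made-up camera log curve (NOT a real manufacturer curve)."""
    return np.power(2.0, code_values * 14.0 - 6.0) - 0.01

# Placeholder 3x3 matrix from "camera RGB" to ACES AP0 primaries.
# A real IDT ships with a matrix derived from the camera's characterization.
CAMERA_TO_AP0 = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def toy_input_transform(encoded_rgb):
    """Camera code values -> scene-linear ACES pixel (conceptual only)."""
    linear_camera_rgb = hypothetical_log_decode(encoded_rgb)
    return linear_camera_rgb @ CAMERA_TO_AP0.T

pixel = np.array([0.45, 0.40, 0.38])   # one log-encoded camera pixel
print(toy_input_transform(pixel))
```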

ACES Look Transform (aka: LMT or Look Modification Transform)

The Look Transform is the first part of what’s known as the ACES Viewing Transform (a combination of the LMT, RRT, and ODT transforms). LMTs provide a way to apply a look, in a similar way to a Look Up Table (LUT). It’s important to note that the LMT happens after color grading of the ACES data. Also, not every tool supports the use of LMTs.
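
Conceptually, a Look Transform is just another transform applied to the ACES data before it is rendered for a display. As a stand-in for the idea (not an Academy-published LMT), here’s a simple ASC CDL-style slope/offset/power adjustment, which is one common way a “look” gets expressed:

```python
import numpy as np

def cdl_style_look(rgb, slope, offset, power):
    """Apply a simple slope/offset/power look (CDL-style), per channel."""
    adjusted = rgb * slope + offset
    # Clip negatives before the power function to avoid NaNs.
    return np.power(np.clip(adjusted, 0.0, None), power)

# A hypothetical "warm, slightly lifted" look.
look = dict(slope=np.array([1.05, 1.00, 0.92]),
            offset=np.array([0.010, 0.005, 0.000]),
            power=np.array([0.95, 0.97, 1.00]))

pixel = np.array([0.18, 0.18, 0.18])   # neutral mid-gray
print(cdl_style_look(pixel, **look))
```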

RRT (Reference Rendering Transform)

Think of the RRT as the render engine component of ACES. The RRT converts scene-referred linear data into an ultrawide display-referred data set, and works in combination with the ODT to create viewable data for displays and projectors. While the Academy publishes the standard RRT, some applications can use customized RRTs (written with CTL), but many color correction systems do not provide direct access to the RRT.

ACES Output Transform (also known as the ODT or Output Device Transform)

The final step in the ACES processing pipeline is the ODT. It takes the high-dynamic-range data from the RRT and transforms it for different devices and color spaces, such as P3, Rec. 709, or Rec. 2020. Like IDTs and RRTs, ODTs are written with CTL.
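
The real RRT and ODT math is far more sophisticated, but as a conceptual sketch, the “render for a display” step boils down to a tone curve that compresses the enormous scene-referred range, followed by clamping and encoding for the target display. The curve and gamma below are placeholders, not the Academy’s transforms:

```python
import numpy as np

def toy_tone_curve(scene_linear):
    """Very rough filmic-style compression of scene-linear values toward 0..1.
    Stands in for the RRT's tone scale; not the real ACES math."""
    return scene_linear / (scene_linear + 0.6)

def toy_output_transform(scene_linear, display_gamma=2.4):
    """Tone-map, clamp, and gamma-encode for a hypothetical display."""
    display_linear = np.clip(toy_tone_curve(scene_linear), 0.0, 1.0)
    return np.power(display_linear, 1.0 / display_gamma)

# A range of scene exposures, from deep shadow to a bright highlight.
scene = np.array([0.01, 0.18, 1.0, 8.0, 64.0])
print(toy_output_transform(scene))
```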

Derivative standards

There are also three main subsets of ACES used for finishing workflows called ACEScc, ACEScct and ACEScg:

  • ACEScc uses logarithmic color encoding, which makes color grading tools feel much more like they do in the log spaces many colorists prefer.
  • ACEScct is just like ACEScc, but adds a ‘toe’ to the encoding. This means that lift operations respond similarly to traditional log film scans; this quasi-logarithmic behavior is often described as milkier, or foggier. ACEScct was added with the ACES 1.03 specification as an alternative to ACEScc, based on the feedback of many colorists (see the sketch after this list).
  • ACEScg uses linear color encoding and is designed for VFX/CGI artists, so their tools behave more traditionally.
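
The difference between ACEScc and ACEScct is easiest to see near black. Below are the two encoding curves as published in the Academy specifications (S-2014-003 for ACEScc, S-2016-001 for ACEScct); treat the constants as something to verify against the current documents. Notice how the ACEScct ‘toe’ keeps values near black on a gentle linear ramp above zero, where ACEScc dips negative:

```python
import numpy as np

def lin_to_acescc(lin):
    """Linear ACES value -> ACEScc (per the S-2014-003 specification)."""
    if lin <= 0.0:
        return (np.log2(2.0**-16) + 9.72) / 17.52
    if lin < 2.0**-15:
        return (np.log2(2.0**-16 + lin * 0.5) + 9.72) / 17.52
    return (np.log2(lin) + 9.72) / 17.52

def lin_to_acescct(lin):
    """Linear ACES value -> ACEScct (per S-2016-001): the same log curve,
    but with a linear 'toe' spliced in below the breakpoint."""
    X_BRK = 0.0078125
    A, B = 10.5402377416545, 0.0729055341958355
    if lin <= X_BRK:
        return A * lin + B
    return (np.log2(lin) + 9.72) / 17.52

for lin in [0.0, 0.001, 0.0078125, 0.18, 1.0]:
    print(f"linear {lin:<9} ACEScc {lin_to_acescc(lin):+.4f}   "
          f"ACEScct {lin_to_acescct(lin):+.4f}")
```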

Additional terms

While ACEScc, ACEScct and ACEScg and the transforms they use are what you’ll most often see when it comes to ACES, there are some additional terms you may encounter:

  • APD (Academy Printing Density) – an AMPAS-supplied reference density for calibrating film scanners.
  • ADX (Academy Density Exchange) – used for scanning film and getting those scans into ACES, similar to the Cineon system for scanning.
  • ACESproxy – uses logarithmic color encoding and is an integer, range-limited version of ACEScc. ACESproxy is meant to be used on set with compatible equipment over SDI.

The ACES pipeline

Now that we’ve defined the transforms used for ACES, understanding how the various transforms combine to form an ACES processing pipeline is pretty straightforward:

Camera Data -> Input Transform -> Color Grading -> Look Transform (optional) -> Reference Rendering Transform -> Output Transform

As mentioned, ACES is a hybrid color management system, combining scene-referred (scene-linear) and display-referred data.
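
Expressed as code, that chain is simply function composition. The functions here are trivial stand-ins (the earlier sketches show what each stage might actually involve); the point is the order of operations and that the Look Transform is optional:

```python
# Trivial stand-in stages; each would be a real ACES transform in practice.
def input_transform(x):           return x            # camera -> scene-linear ACES
def grade(x):                     return x * 1.1      # colorist's adjustments
def look_transform(x):            return x * 0.95     # optional creative look
def rrt_and_output_transform(x):  return min(x / (x + 0.6), 1.0)  # render for display

def aces_pipeline(camera_value, use_look=True):
    x = input_transform(camera_value)
    x = grade(x)
    if use_look:                  # the Look Transform is optional
        x = look_transform(x)
    return rrt_and_output_transform(x)

print(aces_pipeline(0.18))
```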

In the graphic below, the various ACES transforms fit into the scene-referred (top section) and display-referred (bottom section) parts of the pipeline:

ACES color spaces

When ACES is discussed, you’ll often hear terms like 2065-1, AP0, AP1, and Rec. 2020+ thrown around.

What do these terms mean?

SMPTE ST 2065-1 is the SMPTE standardization of ACES.

While this standard has many parts, in daily usage ACES 2065 has come to mean the full linear version of ACES, built on a set of RGB primaries whose gamut is even larger than the visual locus. ACES 2065 uses a set of primaries known as AP0 (ACES Primaries 0).

While ACES AP0 encompasses a much larger range of colors than most gamuts, ACES AP1 is only slightly larger than Rec. 2020, lending it the name Rec. 2020+.

2065-1/AP0 is mainly meant for archival and file exchange. For grading, VFX, and editing, AP1 is currently more likely to be used. ACEScc, ACEScct, and ACEScg all use AP1 primaries.
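
For reference, here are the published AP0 and AP1 chromaticity coordinates (both share the same roughly-D60 white point of x = 0.32168, y = 0.33767), with a crude size comparison of the gamut triangles. Area on the xy chromaticity diagram isn’t a perceptual measure, but it’s enough to show why AP1 earns the “Rec. 2020+” nickname while AP0 dwarfs everything else:

```python
# xy chromaticities of the red, green, and blue primaries.
PRIMARIES = {
    "ACES AP0":  [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)],
    "ACES AP1":  [(0.7130, 0.2930), (0.1650, 0.8300), (0.1280, 0.0440)],
    "Rec. 2020": [(0.7080, 0.2920), (0.1700, 0.7970), (0.1310, 0.0460)],
    "Rec. 709":  [(0.6400, 0.3300), (0.3000, 0.6000), (0.1500, 0.0600)],
}

def triangle_area(points):
    """Shoelace formula for the area of a gamut triangle on the xy diagram."""
    (x1, y1), (x2, y2), (x3, y3) = points
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

for name, prims in PRIMARIES.items():
    print(f"{name:<10} gamut triangle area: {triangle_area(prims):.3f}")
```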

Using ACES

As you can see, ACES represents a huge leap over the color management systems you might be used to. At this point, you might be thinking “Yes, I want that now. How do I get it?”

As mentioned earlier, several post-production applications are already offering ACES compatibility. So if you use any of those programs regularly, you’re only a few steps away from diving into ACES.

That said, as we’ve discussed, ACES is a pipeline. For it to be valuable, every step of your workflow needs to follow the ACES rules.

Adoption

So, how you can truly adopt ACES depends on what you’re working on and what you need ACES to do.

For example, VFX artists can already viably use ACES for improved compositing accuracy and handing off work between teams. The increased color accuracy can add incredible levels of realism to your scenes. It also solves myriad technical issues artists have been battling for years.

But for many creatives, who deal with live-action footage on a daily basis, complete adoption of ACES will depend on production teams embracing the new standard. Of course, this is largely influenced by which Input Transforms are available from camera manufacturers, since those are the necessary link for incorporating camera formats into the ACES system.

Obviously, it will take some time for hardware manufacturers to catch up before more workflows can adopt ACES in earnest. But that doesn’t mean everyone has to wait around to get their hands dirty with ACES.

Open source development

Because ACES is an open source project, anyone can help bring the standard to life in new ways.

While AMPAS actively manages and curates the development of the ACES system, the project is hosted in a GitHub repository where anyone can contribute. ACES transforms are written in Color Transform Language (CTL), so if you’re mathematically inclined and have experience in color science, you can contribute there.

If you’re less technically inclined but still want to contribute, you can visit ACES Central, an AMPAS-run community. There you can connect with other film and video professionals who are exploring the future of ACES, and you can learn how other end-users and ACES-certified partners are adopting the technology.

Versions

The good news about ACES’ continuous development is that new versions are always intended to be backwards compatible. Data encoded with an older version of ACES can be opened by an application using a newer version, though this may cause some differences in how the image is rendered compared to the original project. Many applications let you switch the version of ACES being used, so you can match the version to the original project.

The most up-to-date official version of ACES is 1.1. Many applications do not yet support it, so version 1.03 remains the most widely supported version currently.

Unless there are major future changes and/or needs for end-users as determined by AMPAS, we will probably only see the major release versions appear in software, while the minor .0x releases may just be implemented behind the scenes as performance and bug fixes.

Conclusion

We know this is a lot to take in, but it is worth your time to get a basic understanding of ACES now so that you can adopt it as it becomes viable for your workflow in the future.

ACES isn’t a magic bullet that will fix every technical issue or creative challenge of filmmaking. No technology will ever do that.

That said, ACES does offer some incredible advances for the future of video and cinema. This is a future we could only dream about in the not too distant past, so it’s exciting to see it just over the horizon.

Time will tell how fast it gets here.

Ben Bailey

Editor, explorer. Ben is a member of the Frame.io team.