Creating a look in a color grading suite.

LUTs vs Transforms: How They’re Different and Why It Matters

In light of current events, many of us are taking a step back to learn new filmmaking skills and patch holes in our workflow.

As you deepen your knowledge and sharpen your mind, may I suggest you take a moment to consider how you handle color in your workflow? I suggest this topic not just because color is one of the most important factors in how viewers perceive your work, but also because there is significant change on the horizon that you should be aware of.

Image transforms are here, and they’re going to change how we capture, manipulate, and convert digital footage. But how are they different from LUTs? What advantages do they offer? When and how can filmmakers start using them?

In today’s article, we’ll answer each of these questions so you can understand the debate about LUTs vs transforms.

If you ever use LUTs in your workflow, read on.

What is a LUT?

A LUT, or lookup table, is a table of numerical values used to adjust an image’s visual properties.

LUTs are powerful creative tools, but they’re just lists of fixed plaintext numbers. In fact, you can open most LUT files in standard text editing applications to view the columns and rows of numbers.

This might seem too simple for such a useful creative tool, but that easily readable plaintext data is what makes LUT files so powerful.
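
For example, here is what a tiny identity 3D LUT looks like in the widely used .cube text format. A real grading LUT is structured the same way, just with far more rows (33 or 65 points per axis is common):

LUT_3D_SIZE 2
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
0.0 0.0 1.0
1.0 0.0 1.0
0.0 1.0 1.0
1.0 1.0 1.0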

When a LUT is applied to an image, every pixel from that image is adjusted with the values from the table to change the visual look. The pixel can be made to look brighter, or more contrasty, or more saturated, or whatever combination is specified.

This process often yields impressive visual changes, but it’s remarkably simple. It just takes RGB values in, adjusts them by the parameters in the table, and then spits them back out. It’s very basic math.

The extent of this process varies depending on the type of LUT specification being used (1D vs 3D), but the idea is the same. And it’s this simplicity that makes LUTs a great tool for a variety of functions.
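
To make that "basic math" concrete, here is a rough Python sketch (illustrative, not production code) of how an application might apply a simple 1D LUT to a single normalized channel value, blending between the two nearest table entries. A 3D LUT does the same thing in three dimensions with trilinear interpolation:

def apply_1d_lut(value, table):
    # `value` is a normalized 0-1 channel value; `table` is a list of
    # output values sampled at evenly spaced input positions.
    position = value * (len(table) - 1)     # where the input lands in the table
    lower = int(position)                   # nearest table entry below
    upper = min(lower + 1, len(table) - 1)  # nearest table entry above
    weight = position - lower
    # blend the two nearest entries (linear interpolation)
    return table[lower] * (1 - weight) + table[upper] * weight

# a tiny five-point "contrast" curve: crushes shadows, lifts highlights
contrast_lut = [0.0, 0.18, 0.5, 0.82, 1.0]
print(apply_1d_lut(0.25, contrast_lut))  # 0.18 -- a dim pixel gets pushed darker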

For example, you can easily build a LUT that transforms footage from one color format to another, say from log to Rec. 709. Or you can build a LUT to emulate the look of your favorite filmstock, and then use that LUT in-camera to capture footage with that look.
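
As a sketch of how such a conversion LUT might be built: sample the conversion math at evenly spaced input values and store the results as a table. The log_to_display() curve below is a made-up placeholder, not any camera manufacturer's actual formula:

def log_to_display(code_value):
    # placeholder curve standing in for a real log-to-Rec. 709 conversion;
    # a real version would use the camera maker's published math
    linear = (10 ** (code_value * 2.0) - 1.0) / 99.0   # fake "log" decode
    return min(1.0, linear) ** (1.0 / 2.4)             # simple display gamma

# bake the curve into a 17-point 1D LUT by sampling it at even intervals
lut_size = 17
baked_lut = [log_to_display(i / (lut_size - 1)) for i in range(lut_size)]
# these 17 numbers could then be written out as a plaintext LUT file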

But this simplicity comes with a drawback: LUTs are “dumb.”

Because a LUT is just a fixed table of values, it can't adapt its results to the image being fed into it. So if a LUT is designed to increase overall brightness by 10 percent, but an image is already overexposed, the LUT will just make it worse.

Remember, LUTs only adjust the overall look of an image, and do not replace the need for secondary color correction or finer grading.

Why we’ve used LUTs historically

As filmmakers, we strive to give our work unique or stylized looks that fit the story being told.

But as Michael Cioni and Shane Hurlbut recently discussed, ensuring the visual style you envisioned makes it to the big screen can be difficult. Not only do we have to communicate our creative vision to other stakeholders in the production, but we also have to ensure the look survives for the intended audience.

This is where LUTs come into play. While you could manually grade and regrade every single shot through every step in the pipeline, it’s easier to use LUTs to partially automate the process.

For many smaller teams (and even medium-sized ones), LUTs provide a more sustainable color workflow, where fewer hours are spent in costly color grading suites.

LUTs are also logistically necessary for technical conversions, like converting for broadcast or projection standards. Sure, you could re-grade an entire film by hand, but a couple clicks will apply a technical LUT that yields basically the same result.

Why LUTs are so common during production

It makes sense that LUTs are useful for post-production teams, but why exactly are they so prevalent on set during production?

In short, LUTs make life easier for the director and DP.

Because LUTs are so easy to process, they can be applied in real time to almost any video signal. That gives the top creative stakeholders better information for making on-the-fly creative decisions. On top of that, using LUTs is super simple.

It only takes a small hardware box to apply LUTs to camera outputs. For example, the AJA LUT Box, the Teradek Colr, or BoxIO units from Flanders Scientific are inexpensive solutions that let directors and DPs preview the visual look of a film in real time on set.

LUTs help bridge the gap between the creative vision of a project and the real-world environment it’s captured in.

A director and DP can work with a colorist before shooting even starts to create an initial look for the film that is previewed on set during principal photography. It’s a significant benefit that production teams have come to depend on.

Because of this, LUTs will remain a key production tool well into the future.

Limitations of LUTs

As we consider the utility of LUTs, it's important that we recognize their limitations.

Aside from the functional limitations of their simplicity, they also have a few technical issues that can cause real problems, especially in final finishing.

Banding

Banding is the result of quantization error. Quantization is the process of assigning discrete digital values to the brightness and color information recorded in a file. Basically, it's an error caused by a lack of information, and LUTs can introduce it when used incorrectly.

For every brightness or color value in an image, a numerical value has to be assigned before it can be stored in a video file. And for a LUT to work, you need a matching value in the table for every value in the image.

If there aren't enough entries in the LUT, unmatched values in the image are simply assigned the nearest available value, and that mismatch causes the artifacts we call banding.

It's the same type of artifact we're all familiar with from low-quality web images, where fine gradients turn into blocky, noisy steps that look like distinct bands.

This issue becomes much worse with high-end video formats, like 16-bit DPX and 12-bit ProRes. If a LUT was created for 8-bit H.264, there will be a massive number of unmatched values when it's applied to these high-end source files. Those values will have to be interpolated, which dramatically increases the potential for banding.
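
A rough way to see the problem in code: push a smooth 10-bit-style gradient through a deliberately tiny LUT using nearest-value lookup (the sizes here are exaggerated for illustration) and count how many distinct levels survive:

# a smooth ramp of 1024 values, standing in for a 10-bit gradient
gradient = [i / 1023 for i in range(1024)]

# a deliberately tiny 17-entry LUT applied with nearest-value lookup
lut = [i / 16 for i in range(17)]

def nearest_lookup(value, table):
    index = round(value * (len(table) - 1))  # snap to the closest table entry
    return table[index]

output = [nearest_lookup(v, lut) for v in gradient]
print(len(set(gradient)), "input levels ->", len(set(output)), "output levels")
# 1024 input levels -> 17 output levels: the smooth ramp collapses into bands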

Out-of-Gamut Color

Another technical limitation of LUTs is how out-of-gamut color is handled.

Any input color or brightness level that is outside the range of the intended color space, or not specified in the LUT parameters, is generally just converted to the nearest available value.

For example, super bright highlights that look fine in a Rec. 2020 file will be “clipped out” when converted to Rec. 709 with a technical LUT.

You might not notice this issue when first applying a LUT, but when you try to bring back detail in those shots later, you’ll really start seeing the damage.

And this isn't just true for brightness. Strictly LUT-based workflows can also lose much of the rich color gradation you saw on set or in camera. This is simply a limitation of how applications deal with out-of-gamut data when using a LUT.
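
In code terms, the clipping described above looks something like this simplified sketch (not how any particular application implements it):

def clip_to_display_range(rgb):
    # anything outside the 0-1 range of the target space is snapped to the edge
    return tuple(max(0.0, min(1.0, channel)) for channel in rgb)

bright_highlight = (1.6, 1.2, 0.9)              # detail above the target's ceiling
print(clip_to_display_range(bright_highlight))  # (1.0, 1.0, 0.9)
# the difference between 1.6 and 1.2 is gone, so lowering exposure later
# cannot bring that highlight detail or color separation back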

Too much faith in LUTs

The last issue with LUTs is less of a technical limitation and more of a cultural error.

For some reason, many in the filmmaking community believe LUTs are much more capable than they truly are. They download or create a single LUT that looks amazing for certain shots, and then try to use it for everything. Of course, this leads to frustration and disappointment when it doesn’t produce the same results, but I see misused LUTs in far too many projects.

This is yet another reminder that “a single shot tests naught,” and a sequence of multiple shots with a variety of conditions is always the best for testing any workflow or technology.

LUTs are powerful, flexible tools, but filmmakers need to test them throughout their workflows before depending on them for a project.

Thankfully, LUTs are not the only tools we have at our disposal when it comes to manipulating the look of our image. Enter transforms.

What is a Transform?

Image transforms are a “smart” way of adjusting the color of an image.

Basically, transforms are complex mathematical functions that manipulate an image based on the values coming in from the image itself, rather than on fixed values in a table.

In this process, pixel data from the image is plugged into the formula, and new values are derived from the originals.

Whereas LUTs are just human-readable text files, transforms are mathematically far more sophisticated.

Because transforms are mathematical functions, they can resolve fractional values down to the maximum bit depth the system supports. This increases the accuracy of the resulting image changes, and retains far greater detail in gradients and other fine image features.

These increased capabilities make transforms more precise tools that offer filmmakers significantly more control, which is why post-production pipelines have been steadily integrating transforms over the last decade.
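
One way to picture the difference: a transform is a function you can evaluate exactly at any input value, while a LUT can only interpolate between its stored entries. A toy Python comparison (the gamma curve here is an arbitrary example, not a real camera transform):

def transform(value):
    # a transform evaluates the actual math at full floating-point precision
    return value ** (1.0 / 2.2)

# the same curve baked into a coarse nine-point LUT and read back by interpolating
lut = [transform(i / 8) for i in range(9)]

def lut_lookup(value, table):
    position = value * (len(table) - 1)
    lower = int(position)
    upper = min(lower + 1, len(table) - 1)
    weight = position - lower
    return table[lower] * (1 - weight) + table[upper] * weight

shadow = 0.03
print(transform(shadow))        # ~0.203, the true value
print(lut_lookup(shadow, lut))  # ~0.093, the coarse LUT drifts badly in the shadows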

Why transforms are “better” than LUTs

Transforms were largely pioneered and promoted in the early days by the Academy of Motion Picture Arts and Sciences with their ACES system.

ACES is designed to be the master color space that all filmmaking tools use. It is, in essence, the great equalizer between cameras, computer applications, and displays. If a piece of software or hardware is correctly using ACES, it removes a lot of the guesswork and inaccuracy of older color workflows.

One of the key incentives for adopting ACES is the ability to seamlessly use multiple types of cameras on a production.

Since ACES uses transforms specifically designed for different cameras and standards, it’s relatively easy to convert footage into the ACES color space. That means any camera can be profiled in ACES (by the manufacturer), and then you can match it almost automatically to footage from other ACES-profiled sources. So how exactly does this work?

Instead of using input and output LUTs for image conversions (a method many filmmakers have previously used to match incoming formats and output to various devices), ACES uses "Input Device Transforms," or IDTs, and "Output Device Transforms," or ODTs.

These device-specific transforms can be used to match footage from an Arri Alexa to a RED Weapon without any of the heavy grading it would usually take.
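
Conceptually, that multi-camera workflow is just a chain of transforms. The sketch below is purely illustrative: the IDT, grade, and ODT functions are empty placeholders for the real ACES math, which lives in tools like DaVinci Resolve or OpenColorIO rather than in a few lines of Python:

def idt_alexa(camera_rgb):
    # placeholder: a real IDT would convert ARRI LogC footage into ACES
    return camera_rgb

def idt_red(camera_rgb):
    # placeholder: a real IDT would convert RED log footage into ACES
    return camera_rgb

def creative_grade(aces_rgb):
    # the grade is applied once, in the shared ACES working space
    return aces_rgb

def odt_rec709(aces_rgb):
    # placeholder: a real ODT would map ACES out to the Rec. 709 standard
    return aces_rgb

# both cameras land in the same space, take the same grade, and exit the same way
alexa_shot = odt_rec709(creative_grade(idt_alexa((0.20, 0.31, 0.42))))
red_shot = odt_rec709(creative_grade(idt_red((0.20, 0.31, 0.42))))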

On top of that, you could use ACES to create a CDL (Color Decision List) to pre-visualize the exact look you want inside the cameras or from their outputs.
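
The CDL itself is simple math: the ASC CDL applies a slope, offset, and power per channel, followed by an overall saturation adjustment. A minimal Python sketch, with made-up example values:

def apply_cdl(rgb, slope, offset, power, saturation):
    # per-channel slope, offset, and power, as defined by the ASC CDL
    out = [max(0.0, c * s + o) ** p
           for c, s, o, p in zip(rgb, slope, offset, power)]
    # then a single saturation adjustment around Rec. 709 luma weights
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return tuple(luma + saturation * (c - luma) for c in out)

# an illustrative warm, slightly lifted, low-contrast look
graded = apply_cdl((0.40, 0.40, 0.40),
                   slope=(1.05, 1.00, 0.95),
                   offset=(0.02, 0.01, 0.00),
                   power=(0.95, 0.95, 0.95),
                   saturation=0.90)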

ACES has immediately borne fruit in mixed-camera environments, but it's also useful for single-camera shoots that need to be matched to B-roll or external footage. That lets multi-unit production teams, or long-term projects that change out gear along the way, more easily maintain a consistent visual look.

ACES even allows "custom recipe" transforms, which can be used to ingest footage from various sources and then process it in whatever manner is desired.

This level of sophistication vividly illustrates what transform-based color management systems can do. Not only do they overcome the drawbacks of LUT-based workflows, but they open up new opportunities for teams to work more efficiently and creatively than ever before.

Peaceful coexistence?

In the near future, we can assume that LUTs will stick around in a variety of applications, especially on set and in smaller-scale post pipelines.

But given the increased processing requirements of transforms, it's unlikely LUTs will be replaced before a new generation of production hardware debuts. We also need to keep in mind that our industry can be slow to adopt change, as illustrated by the 16+ years it's taken for ACES to roll out.

And after all, LUTs do in fact work when used correctly. Their drawbacks don't tangibly impact creative vision, as long as you know what their capabilities are. So for now, their ease of use is well worth any potential downside.

But as filmmakers become familiar with transforms, it’s likely they’ll take over most post-production processes. With the support of the Academy, ACES workflows will be implemented in many high-end films, and those lessons will filter down into smaller productions.

We're in the messy transition phase now, where LUTs are passed along the pipeline while experiments with transforms are also happening. There will be frustrations and unexpected issues. Shots will be matched unsuccessfully, and teams will get stuck and give up on transforms for one project, only to try again a few shoots later.

But that’s just the nature of workflows. They evolve.

And it’s our job as filmmakers to evolve with them.

Header photo by Alicia Slough

Charles Haine

Charles Haine is the Interim Program Director for the Feirstein Graduate School of Cinema at Brooklyn College. He has been a filmmaker and entrepreneur working in the motion picture industry since 1999, and received his MFA from USC in 2006. Haine founded the Academy Award- and Emmy-nominated production company Dirty Robber in 2008, directed the feature film Angels Perch and the web series Salty Pirate, and has made countless short-form projects, including a music video for Fitz & The Tantrums.
