The Essential Guide to Color Spaces


It seems like the term color space is thrown around more than ever these days, but still many of us (even seasoned post professionals) are a bit fuzzy on what it specifically refers to, and the role it plays in the acquisition and manipulation of video.

But why has “color space” become so prevalent on set and in our post pipelines, and why is the knowledge gap among professionals so wide?

This is a byproduct of the explosive advancements of digital cameras, displays, and workflows we’ve seen in the last 10-15 years. Virtually every professional-grade camera out there now lets users choose between multiple capture color spaces, which makes it an upfront concern for production teams.

On the post-production side, intermediate color spaces have to be selected to make sure the exchange of footage, VFX, and graphics assets between facilities and teams runs smoothly. Additionally, with HDR and other next-gen display formats on the rise, we’re faced with even more choices when it comes time for delivery.

To add even more fuel to the fire, it’s becoming increasingly necessary to choose the color space during pre-production, at the same time as the codecs, resolutions, and other technical image specifications are decided, as it’s a key detail necessary to plotting out a project’s workflow. That means more people than ever before are concerned with color spaces, and need to understand what they are and how they should be used.

So how do we choose between color spaces? What makes one better or worse for our content and/or workflow? And what are the hidden opportunities and hazards hiding behind each of these technical choices?

Today we’ll explore these questions and definitively explain what a color space is. Along the way, we’ll discover why your choice of color space may be the single most important factor in capturing and delivering beautiful motion images. Let’s dive in!

What is color space?

Let’s start with a clear, simple definition of what color space refers to. A color space describes a specific, measurable, and fixed range of possible colors and luminance values. Its most basic practical function is to describe the capabilities of a capture or display device to reproduce color information.

For example, if I have a camera that shoots in an arbitrary “Color Space A,” anything I capture which has color and/or luminance values beyond what that space can define will not be accurately captured. These “out of bounds” values can be dealt with in different ways (some more visually pleasing than others), but they cannot be accurately captured as they appear in the real world.

The same is true of a display device designed to reproduce “Color Space B” — any color or luminance value stored in a digital image that falls outside “Color Space B” cannot be accurately shown on that particular screen. In both cases, larger color spaces mean a greater range of colors can be accurately captured and/or reproduced.

Furthermore, the only good way to ensure accurate reproduction of a scene captured in Color Space A and rendered in Color Space B is to be aware of the discrepancy between capture and delivery space, and to mathematically transform the signal from one color space to the other. Providing a means of ensuring color accuracy from capture to display is the other fundamental function of explicitly defined color spaces.

Some color spaces whose names you may already be familiar with include Rec. 709, Rec. 2020, DCI-P3, Arri LogC, and RedWideGamutRGB.

When viewed as 3D plots, the concept of color “space” begins to make more intuitive sense.

Ok, so we now know that a color space is an explicitly defined range of colors and luminances. But we also know that no two humans see things the same way—so how do you explicitly define something as slippery as color? The maddening answer is that a color space can only be defined in reference to another color space.

So where does the madness end? Better yet, where does it begin? Is there a foundational color space against which all others can be defined?

Thankfully, there is.

In 1931, the International Commission on Illumination, or the CIE (from the French “Commission internationale de l’éclairage”) defined an all-encompassing color space based on human perception, using averaged data from experiments conducted with a small set of test subjects. Nearly a century later, this space, CIE 1931, remains the standard reference used to describe all other color spaces.

Why should I care about this?

Great question! The truth is that, for most of the history of motion imaging, none of this was important to anyone but the image scientists who designed film stocks, and later the engineers responsible for standardizing video capture and broadcast. The rest of us got to stand on their shoulders, operating within preconceived pipelines which ran from capture all the way through delivery. There was virtually no choice or control involved on the part of the content creator.

Today, those fixed pipelines and workflows are a thing of the past. Source material can come from one or several of the dozens of available capture formats — iPhone, GoPro, Canon, Sony, RED, Alexa, 35mm, etc. — many of which themselves offer multiple choices of color space.

On the delivery side, a given piece of content may need to be played in theaters, as well as on SDR and/or HDR TVs, not to mention the ever-growing, ever-changing list of mobile devices and VR sets.

This is a complicated state of affairs, but it points to a simple truth:

The color space(s) we capture to, the way we guide footage to our delivery color space(s), and where and how in that process we choose to do our grading is up to us, and can have as much impact on our images as the grade itself.

This is why it’s so important to understand color space—because to be ignorant of it is to risk creating inferior images.

Defining a color space

As we learned earlier, we have the ability to define a color space against a standardized reference, thanks to the CIE.

But in practical terms, how do we clearly and simply invoke these definitions to describe a capture device or display? The most common way is to specify a gamut, gamma, and white point.


Gamut

A color gamut defines a range of chromaticities: essentially a set of possible hues and their respective maximum saturations.

Think of the gamut as the borders of a color space, like Rec. 709.

As you can see in the above image, gamut can easily be plotted on a 2D graph, but in doing so we haven’t yet fully defined our color space. To do that we need a third dimension—luminance.
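Since a gamut plots as a triangle on the CIE 1931 chromaticity diagram, testing whether a given chromaticity falls inside one is simple geometry. Here's a minimal Python sketch of the idea using the published Rec. 709 primaries (the helper names are my own, not from any grading tool):

```python
# Rec. 709 primaries as CIE 1931 xy chromaticity coordinates (per ITU-R BT.709)
R = (0.640, 0.330)
G = (0.300, 0.600)
B = (0.150, 0.060)

def sign(p, a, b):
    """Signed-area test: which side of the edge a->b does point p fall on?"""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_rec709_gamut(xy):
    """True if the xy chromaticity lies inside the Rec. 709 triangle."""
    d1, d2, d3 = sign(xy, R, G), sign(xy, G, B), sign(xy, B, R)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

print(in_rec709_gamut((0.3127, 0.3290)))  # D65 white: True
print(in_rec709_gamut((0.170, 0.797)))    # Rec. 2020 green primary: False
```

Note that the Rec. 2020 green primary falls outside Rec. 709, which is exactly what we mean when we say one gamut is larger than another.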

Gamma/Tone Mapping

A gamma curve or tone mapping curve is concerned with defining a specific non-linear distribution of luminance values. Different curves are designed for different purposes.

For example, a log curve (such as Arri LogC) is designed to store maximum dynamic range, while a Gamma 2.4 curve is designed to encode luminance values that feel perceptually linear to the human eye.
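As a rough illustration, here's a short Python sketch contrasting the two curve types. The Log C constants are ARRI's published LogC3 (EI 800) parameters; the function names are my own:

```python
import math

def gamma24_encode(x):
    """Display gamma: perceptually even steps for an SDR display."""
    return x ** (1 / 2.4)

# ARRI LogC3 (EI 800) published constants
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc_encode(x):
    """Camera log: packs a wide dynamic range into a small code range."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_decode(t):
    """Inverse of logc_encode, recovering linear scene light."""
    return (10 ** ((t - D) / C) - B) / A if t > E * CUT + F else (t - F) / E

# 18% gray lands at ~0.391 in Log C, but ~0.489 under Gamma 2.4
print(round(logc_encode(0.18), 3))   # 0.391
print(round(gamma24_encode(0.18), 3))  # 0.489
```

The point isn't the specific constants; it's that each curve redistributes the same linear light values for a different job, and each is fully invertible.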

White point

In our day-to-day lives, we have no problem recognizing white. If I hand you a sheet of paper inside a fluorescent-lit office, then later show you the same piece of paper outside on a sunny day, you’ll identify it as white in both scenarios, despite the fact that any digital color measurement device — including a camera — would get vastly different readings in these respective lighting conditions.

This is because our eyes are constantly adapting to our environment, using context and visual cues to define white. Digital sensors and displays are (generally) not designed to do this, so we have to provide this information for them.

This also means we need this information in order to precisely define a color space. In the case of a camera, we basically need a numerical color value to explain what the camera saw as white at the time of capture, since a display may have a different target white point.

It can be tricky to think of white as a non-fixed variable, but that’s the reality when it comes to imaging and displays. White point is often expressed as a color temperature, such as 3200K or 5600K, or as one of a list of standard illuminants defined by the CIE.

To make things slightly more confusing, white point is often implied, rather than explicitly denoted, when describing color spaces. And as we’ll see below, in the case of cameras, it is variable depending on the original scene.
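To make this concrete, here's a tiny Python sketch (mine, not from any standard tool) showing how two common white points, expressed as CIE xy chromaticities, expand into measurably different tristimulus values even though both are "white":

```python
# CIE 1931 xy chromaticities of two common white points
D65 = (0.3127, 0.3290)    # SDR/HDR video white
D60 = (0.32168, 0.33767)  # ACES white point

def white_xy_to_XYZ(xy):
    """Expand an xy white chromaticity to XYZ tristimulus values with Y = 1."""
    x, y = xy
    return (x / y, 1.0, (1.0 - x - y) / y)

# Same "white," different numbers -- which is exactly why white point
# has to be specified rather than assumed.
print([round(v, 4) for v in white_xy_to_XYZ(D65)])  # [0.9505, 1.0, 1.0891]
print([round(v, 4) for v in white_xy_to_XYZ(D60)])
```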

Now that we’ve discussed these three parameters, here are some practical examples:

  • An Arri Alexa records media in Arri Wide Color Gamut, with an Arri Log C tone mapping curve, and a white point ranging from 2,000K to 11,000K.
  • A RED Dragon captures media in RedWideGamutRGB gamut, with a Log3G10 tone mapping curve, and a white point ranging from 1,700K to 10,000K (other gamut and gamma choices are available).
  • A cinema projector has a DCI-P3 gamut, a Gamma 2.6 tone mapping curve, and a DCI white point of roughly 6300K (sometimes loosely called D63).
  • An SDR TV has a Rec 709 gamut, a Gamma 2.4 tone mapping curve, and a standard illuminant D65 white point.

Principles and best practices

Now that we’ve got a solid technical foundation, here are some key practical principles to understand when thinking about color space in your workflows.

1. Any color space can be transformed into any other color space with the right math.

One important caveat to this statement: when transforming from a larger space into a smaller one, there will be so-called “out of gamut” values to contend with that can’t be reproduced in the destination space. There are a number of methods for addressing these values, but there’s no getting around them. You can’t cheat mathematics and physics.
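To see this concretely, here's a hedged Python sketch using the commonly published 3x3 matrix for converting linear Rec. 2020 RGB to linear Rec. 709 RGB (values rounded). A saturated Rec. 2020 green lands outside Rec. 709, and a hard clip is the crudest — not the best — way of dealing with it:

```python
# Linear-light BT.2020 -> BT.709 conversion matrix (commonly published, rounded)
M = [[ 1.6605, -0.5876, -0.0728],
     [-0.1246,  1.1329, -0.0083],
     [-0.0182, -0.1006,  1.1187]]

def bt2020_to_bt709(rgb):
    """Apply the 3x3 gamut conversion to a linear RGB triplet."""
    return tuple(sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3))

# A pure Rec. 2020 green has no legal Rec. 709 representation...
out = bt2020_to_bt709((0.0, 1.0, 0.0))
print(out)  # negative red and blue components: out of gamut

# ...and the crudest remedy is a hard clip (smarter gamut mapping exists)
clipped = tuple(min(max(v, 0.0), 1.0) for v in out)
print(clipped)  # (0.0, 1.0, 0.0)
```

The negative values are the math telling us the color simply doesn't exist in the smaller space; every gamut-mapping method is just a different strategy for deciding what to show instead.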

There are a wide variety of tools for transforming color space, but my favorite is the aptly-named Color Space Transform plug-in inside of DaVinci Resolve.

Color Space Transform plug-in

In the above screenshot, the Color Space Transform plug-in takes an Arri Log C/Arri Wide Color Gamut image and transforms it to RedWideGamutRGB/Log3G10. The tone and gamut mapping features are primarily useful when the destination color space is significantly smaller than the source color space, as noted above. While the usage of these features is beyond the scope of this article, suffice it to say you can safely leave them off by default. For more information, you can refer to Blackmagic’s documentation.

Note that the Color Space Transform does not transform white point, so you’ll need to take this into account if your source and destination color space don’t share a common white point. Thankfully, as of Resolve 16, there is a second plug-in designed for this purpose called Chromatic Adaptation.

Chromatic Adaptation plug-in

In the above screenshot, the Chromatic Adaptation plug-in takes an input image with a white point of standard illuminant D60, and transforms it to a white point of standard illuminant D65.

Note that you need to specify your current color space’s gamut and gamma. If we were to do this operation immediately following the instance of the Color Space Transform plug-in mentioned above, this would be RedWideGamutRGB/Log3G10. The “method” drop-down allows users to choose between several algorithms for making this transformation, but the differences between them are relatively minor, particularly for small adjustments like this. It can generally be left on its default value of CAT02.
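For the curious, the underlying math is a von Kries-style scaling of cone responses. The sketch below uses the Bradford matrix, a close relative of CAT02, with my own helper functions; it illustrates the principle, and isn't a claim about Resolve's actual implementation:

```python
# Bradford cone-response matrix (standard published values)
BRADFORD = [[ 0.8951,  0.2664, -0.1614],
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]

def mat_vec(M, v):
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

def inv3(M):
    """Invert a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def adapt(XYZ, src_white, dst_white):
    """Von Kries-style adaptation: scale cone responses by the white ratio."""
    s = mat_vec(BRADFORD, src_white)
    d = mat_vec(BRADFORD, dst_white)
    cone = mat_vec(BRADFORD, XYZ)
    scaled = [cone[k] * d[k] / s[k] for k in range(3)]
    return mat_vec(inv3(BRADFORD), scaled)

D60 = [0.9526, 1.0, 1.0088]  # XYZ of the D60 white
D65 = [0.9505, 1.0, 1.0891]  # XYZ of the D65 white

# By construction, the source white maps exactly onto the destination white
print([round(v, 4) for v in adapt(D60, D60, D65)])  # [0.9505, 1.0, 1.0891]
```

The different "methods" in the drop-down amount to different cone-response matrices standing in for BRADFORD here, which is why their results differ only subtly.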

Remember, to successfully move between color spaces, you need a total of six pieces of information: source gamut, gamma, and white point; as well as destination gamut, gamma, and white point. Without each of these pieces, you’re introducing unwanted guesswork into your transform.

2. In nearly all cases, a given piece of content will need to have its color space transformed at least once before it’s delivered.

At a bare minimum, you’ll almost certainly need to move from camera color space to display color space before delivering your content, unless these happen to be one and the same, which is increasingly rare in professional workflows. The takeaway? Even in the simplest of workflows, it pays to understand color space!

3. Within any grading environment, the same knobs and tools will have different effects depending on the color space you’re operating in.

Under the hood, every knob and tool is driven by simple math, and as a result their behavior is subject to the influence of any math that precedes and follows them—including the math used to transform color spaces.

This means that in order to get consistent behavior and results, the ideal approach is to introduce a third color space into your workflow, between capture color space and delivery color space. We’ll call this an intermediate, or grading space. The idea is that all source material gets mapped into this space, and our final deliverable is created by applying a single transform to get to our display color space.

Why not skip the intermediate space and do all our grading after mapping into the unified delivery space? There are multiple reasons, but maybe the biggest one is that color grading works better when done upstream in a larger color space. Results come faster, look better, and feel more natural. Think of it as the difference between tweaking the ingredients in a cake before it goes into the oven versus after.

4. In any image pipeline, color space ideally gets progressively smaller as we move from capture to delivery.

Why? Because once our image is in a given color space, any color outside of it is gone forever.

We want to accurately capture and preserve as much color as possible for as long as possible, with compromises made only for the sake of the end display, where they can’t be avoided.

With this approach, you end up with a master that has a large, future-proof color space, which can be easily transformed to target other displays. In a moment we’re going to take a detailed look at this type of workflow.

Color spaces vs color models

We’ve talked a lot in this article about what color spaces are, but it’s also important to touch on what they’re not. An overlapping but distinct concept is that of color models, such as RGB, LAB, HSV, CMY, and XYZ.

Unlike color spaces, color models aren’t about expressing different ranges of color and luminance, but about differently expressing the same range of color and luminance.

We’re generally most familiar with the RGB color model, wherein we describe a given color in terms of its proportions of red, green, and blue. Other color models simply plot and target colors in alternative ways — for example, HSV describes a color in terms of its hue, saturation, and value (roughly equivalent to brightness).

As with color spaces, we can formulaically transform from one color model to another — the difference is that this won’t yield a visual change. That said, there are still many creative and technical reasons for transforming between different color models — but that’s a topic for another day.
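Python's standard library makes the "same color, different coordinates" point easy to demonstrate:

```python
import colorsys

# The same color described in two models: nothing changes but the coordinates
r, g, b = 1.0, 0.5, 0.0  # an orange, in RGB

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(round(h, 3), s, v)  # 0.083 1.0 1.0

# Round-tripping back to RGB recovers the identical color
rt = colorsys.hsv_to_rgb(h, s, v)
print(tuple(round(c, 6) for c in rt))  # (1.0, 0.5, 0.0)
```

No color information is created or destroyed along the way; the round trip lands exactly where it started, which is precisely what distinguishes a color-model conversion from a color-space transform.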

A sample workflow

Now for the fun part! We’re going to put together everything we’ve learned into a practical real-world example.

In this hypothetical scenario, we’re working on a two-hour documentary made up of multiple sources — Arri Alexa, Sony FS7, and archival video — which will need to deliver for SDR broadcast and streaming, HDR broadcast and streaming, as well as theatrical release. This presents us with several challenges right from the start:

  • How do we deal with the mismatch in source color spaces?
  • What color space should we be round-tripping our VFX in?
  • What color space should we request graphics be provided in?
  • Do we have to forego any global look LUTs or filters, since they’ll yield different results depending on the source color space?
  • With limited time and thousands of shots to grade, how do we establish a grading environment whose controls we feel comfortable and confident with?
  • How do we ensure the most consistent look possible across our various deliverables?
  • How can we avoid driving ourselves crazy by constantly relying on subjective compensations and guesswork to make everything feel unified?

The answer to each of these questions is the same: harness our understanding of color spaces and employ a color-managed workflow that eliminates as much guesswork and subjective compensation as possible.

What’s a color-managed workflow? Nothing but a fancy term for a workflow that’s designed in a color space-conscious way.

Here’s what it looks like in flowchart form:

Talking through this, we’re going to start by mapping our source material into a large, unified grading color space. In this case I’ve chosen ACES, but the only vital factors are that the space be sufficiently large, and that everything be correctly transformed into it. We could just as easily use Arri WCG/Arri Log C for our grading space.

Once we’ve correctly mapped each source into ACES with its own color space transform, we’re ready to grade. Since everything is now in a single color space, we’ll have less initial eye-matching to do, and our controls and tools will have more consistent “feel” across each source. We also now have the ability to deploy LUTs or plug-ins not only on individual shots, but across entire scenes, or even the entire film if we so desire.

None of this is to say that we’ve magically made archival video look like Alexa (or vice versa), but we’re much closer to a film that visually flows, simply by managing our color spaces effectively at the outset. We’ll also be shipping out any shots requiring VFX in a single space, and can request they be delivered back in that same space.

Throughout our grading process, we have a downstream color space transform moving us from ACES into the color space of our mastering display, which we’re using to make our creative choices. At any time, we can connect our grading system to another display and switch this transform if we want to see how our work translates across to our other destination displays.

We may find we want to make small subjective adjustments tailored specifically to one deliverable or another, but this will be the exception rather than the rule, and they’ll be easy for us to spot, since we won’t be fatigued by chasing down and eye-adjusting discrepancies at every turn.

In the end, we essentially have a single graded film, with separate masters targeting each of our three delivery spaces. To top it off, we’re future-proofed for additional deliverables we may need to strike next week or next year, regardless of the color space required.


Congratulations on seeing this one through — color space can be a very daunting topic at first. If you’re confused, or you have questions, that’s good! That means you’re learning. Take another lap through the article, and post your questions below.

It can take years to become fluent in these concepts, but every bit you learn is a huge addition to your arsenal as a filmmaker. Understanding color space is more important today than ever, and its value will only grow as our cameras and screens continue to multiply.

Remember, in the end this isn’t about memorizing technical data, but about shaping your thinking. Committing yourself to these concepts will lead to a clearer, more authoritative command of your craft, and to creating the kind of eye-popping images that drew you to filmmaking in the first place.

Cullen Kelly

Cullen Kelly is a Los Angeles-based senior colorist with credits spanning film, television, and commercials, for clients and outlets including Netflix, HBO, Hulu, Microsoft, McDonald’s, and Sephora. With a background in image science as well as the arts, he’s passionate about the intersection of the creative and technical, and its role in great visual storytelling. In addition to his grading work, Cullen is an educator and proven thought leader, with platforms including his podcast The Color Code as well as his YouTube channel.