Grading for Mixed Delivery: Cinema, Home, and Every Screen in Between

At some point, every content creator is confronted with the problem of inconsistent renderings of their images. Content looks one way in the color suite, and a thousand different ways out in the wild, spanning phones, computers, tablets, TVs, and other screens in every conceivable viewing environment.

In those moments, it feels like we’ve got no control over viewers’ experience of our work. Is it pointless to pretend otherwise? Is there anything we can do to remedy this problem?

Turns out there is. While we lack control over a number of critical factors in the visual experience of our work, there are several other factors we can control, or at least influence.

In today’s article, we’re going to tackle this issue head-on, starting with understanding what causes images to translate poorly in the first place. Armed with this, we’ll discuss practical strategies we can employ for more consistent results across various screens and viewing environments. We’ll also touch on best practices when you don’t have access to a proper grading suite, as well as the impact of HDR in this area.

Before we go any further, we need to define our goal. It would be great if we could make every image look identical across every possible device and environment, but, for reasons we’re going to discuss, this is simply not possible. So what should our goal be?

As a colorist, my aim is always to give the greatest number of viewers the most faithful reproduction possible of the creator’s visual intent.

Let’s dive in!

Why images translate poorly

Without understanding the causes of our problems, we’re unlikely to solve them. So let’s start by looking at the key factors that cause images to translate poorly from the grading suite to the end viewer.

Varied display standards

One of the key concepts we’ll be returning to throughout this article is that all content is mastered to a specific target gamut and gamma.

In simplest terms, gamut describes a finite range of possible colors, and gamma describes a finite range of luminance and contrast. These properties can describe the capabilities of a display, a capture device, or even an abstract intermediate container such as ACES.

Common display gamut/gamma pairs include Rec. 709 Gamma 2.4, sRGB Gamma 2.2, and P3 Gamma 2.6. Common capture gamut/gamma pairs include ARRI Wide Gamut LogC, Sony S-Gamut3 S-Log3, and REDWideGamutRGB Log3G10.
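To make the gamma half of this pairing concrete, here’s a toy Python sketch of a pure power-law transfer function (a simplification: real-world curves like sRGB’s include a linear toe segment, and the sample values are purely illustrative):

```python
def encode(linear, gamma):
    """Encode linear light (0-1) to a code value with a pure power-law gamma."""
    return linear ** (1.0 / gamma)

def decode(code_value, gamma):
    """Invert the encoding, recovering linear light from a code value."""
    return code_value ** gamma

# A patch at 25% linear light, encoded for a Gamma 2.4 display:
code = encode(0.25, 2.4)   # ~0.56, well above 0.25
light = decode(code, 2.4)  # back to 0.25 on a matching display
```

The encode/decode pair only cancels out when both sides agree on the same gamma, which is exactly why the pairing matters.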

This concept leads to the first cause of poor image translation across devices: a mismatch between the mastering gamut/gamma pair of a piece of content and the native gamut/gamma pair of the display it’s being viewed on.

For example, if I grade a film on a P3 Gamma 2.6 theatrical projector, it will not appear correctly on a Rec. 709 Gamma 2.4 TV.

The combinations here are endless, but suffice it to say that any mismatch between mastering gamut/gamma and display gamut/gamma is a problem.
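To put a hypothetical number on that mismatch, here’s what a single midtone does when a Gamma 2.4 master lands on a Gamma 2.6 projector with no conversion (treating both curves as pure power laws):

```python
code = 0.5  # a midtone code value from a Rec. 709 Gamma 2.4 master

intended = code ** 2.4  # linear light the colorist saw: ~0.19
shown = code ** 2.6     # linear light from a mismatched Gamma 2.6 projector: ~0.16

# Every midtone renders darker than intended, here by roughly 13%:
print(f"midtones at {shown / intended:.0%} of intended brightness")
```

The same arithmetic runs in reverse, too: Gamma 2.6 content shown on a 2.4 display renders brighter and flatter than intended.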

Varied viewing environments

This one may sound obvious, but that’s also what makes it so easy to forget: a piece of content viewed outside on a sunny day will look much dimmer than on that same device in a darkened room.

This concept applies not only to the overall amount of ambient light, but also to the color of ambient light. The same piece of content viewed on the same device in a daylit room versus a tungsten-lit room will appear respectively warmer or cooler, because of the adaptive nature of our vision—our eyes are forever white-balancing to our environment.

So what are the ideal viewing conditions? In a perfect world, we’d have a fixed amount of artificial light, no brighter than our display, whose color temperature matches the white point of that display. The further from this ideal we get, the greater difference we’ll perceive from what was seen in the color suite.

A color grading suite with controlled artificial light. The lighting is dimmer than the mastering display, and matches its white point in color.

Varying display accuracy

We’ve talked about the importance of matching up mastering gamut/gamma and display gamut/gamma. But there’s another wrinkle here: just because a display targets a particular standard doesn’t mean it actually hits it.

Display accuracy refers to how tightly a display actually adheres to its purported standard. When a display adheres poorly, you get color casts, hue shifts, highlight/shadow clipping, and oversaturated colors. Displays that fail to hit their targeted standards are another key factor in images translating poorly.

This is where calibration comes into play—special instruments are used to measure the accuracy of a display’s gamut and gamma, and adjustments are made to minimize any inaccuracies.

In any color workflow, there are two displays whose accuracy we need to consider.

End viewer display

The average end-viewer display is almost certain to be wildly inaccurate. While there are many reasons for this, most can be boiled down to the reality that accuracy doesn’t sell displays. Most consumers are going to be drawn to the brightest, most colorful display in the store, and manufacturers know it.

If I can make a TV that produces an image 5% brighter than my competitors’, I have a far better chance of outselling them. Who cares if I’m exceeding the technical boundaries of the display’s target gamut/gamma?

While this dynamic isn’t likely to change, we can be optimistic about the growing demand from consumers and image-makers for a mode or preset that prioritizes display accuracy. This is precisely the agenda of the “filmmaker mode” TV setting being spearheaded by the UHD Alliance in partnership with Martin Scorsese, Christopher Nolan, and other filmmaking luminaries.

Mastering display

Having discussed the unpredictable accuracy of the average end-viewer display, we can start to understand just how important it is to have an accurate mastering display.

The reason is simple: any inaccuracies in our mastering display will compound with those of the end viewer display, creating a far larger maximum error than either display has individually.

This demand for accuracy in professional mastering displays is the key reason why they’re significantly more expensive than consumer models—they have lower tolerances and are harder to manufacture. Even with a professional display, it’s necessary to regularly calibrate to account for “drift” as the display ages. Since this is the only display we have direct control over, we need it to be as dead-on as possible.
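As a back-of-the-envelope sketch with hypothetical numbers, treating each display’s brightness error as a simple multiplier shows how the two errors stack:

```python
def viewer_luminance(mastering_gain, viewer_gain):
    """Luminance the viewer sees relative to creative intent (1.0 = perfect).

    If the mastering display runs bright by mastering_gain, the colorist
    unknowingly grades the signal 1/mastering_gain too dark; the viewer's
    display then applies its own gain on top of that.
    """
    return viewer_gain / mastering_gain

# Mastering display 5% too bright, viewer display 8% too dim:
error = viewer_luminance(1.05, 0.92)
print(f"viewer sees {error:.1%} of intended luminance")
```

With an accurate mastering display (gain 1.0), the viewer’s 8% error is the whole story; with an inaccurate one, the worst-case error grows beyond either display’s error alone.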

Encoding and metadata

Video and image files carry not only picture and audio data, but metadata, which is exactly what it sounds like: data about data. Metadata for a given file can contain hundreds of entries, anything from the program title to the name of the camera assistant who loaded the magazine.

It can also contain information about the color gamut and/or gamma curve of the content, which some devices and software will detect and use to make image compensations. A simple example would be an image mastered in Rec. 709 unintentionally encoded with metadata identifying it as a Rec. 2020 master. Certain software and/or hardware will read this Rec. 2020 metadata and make a compensation intended to best reproduce a Rec. 2020 master gamut within a Rec. 709 display gamut.

These images are identical in every way except for their metadata tags — the image on the left is tagged Rec. 2020, which causes QuickTime Player to saturate it in an attempt to give the most accurate reproduction possible.

Problems can arise not only from the presence of incorrect metadata tags, but from the absence of correct ones. For example, if we were to master an image in Rec. 2020, but failed to encode appropriate metadata, many programs and devices will assume a gamut of Rec. 709, resulting in a desaturated rendering of our content.
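To see why a wrong tag oversaturates, here’s a sketch of the compensation a color-managed player applies, using the standard linear-light BT.2020-to-BT.709 matrix (from ITU-R BT.2087) and ignoring the gamma-curve handling a real player would also perform. The sample color is hypothetical:

```python
# Linear-light BT.2020 -> BT.709 conversion matrix (ITU-R BT.2087).
BT2020_TO_BT709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def convert(rgb, matrix):
    """Apply a 3x3 color matrix to a linear RGB triplet."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in matrix]

# A linear Rec. 709 color that's been wrongly tagged as Rec. 2020.
# A color-managed player trusts the tag and "converts" it for a 709 display:
original = [0.8, 0.4, 0.2]
shown = convert(original, BT2020_TO_BT709)  # ~[1.08, 0.35, 0.17]

# The channel spread widens from 0.6 to ~0.91: visibly oversaturated,
# with red now clipping above 1.0.
```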

Strategies for more consistent images

Now that we’ve got a grasp on what causes images to translate poorly, we can look at strategies for fixing the problem.

Articulate your deliverables

Take an inventory of the outlets for which you’re delivering. If your content is destined for screens of more than one gamut/gamma standard, you need more than one gamut/gamma encoded deliverable.

For example, if I need a QuickTime from which to strike a DCP, as well as a QuickTime for Vimeo upload, these will need to be separate assets: one targeting P3 Gamma 2.6 (for DCP), and one targeting Rec. 709 Gamma 2.2 (for Vimeo). If you try to deliver the same file to both outlets, you’ll end up with at least one bad rendering.

Work with a colorist in a proper grading environment

I’ll openly admit my bias here, as I’m a colorist by trade. But my take is that the best single advantage you can give yourself in the battle for consistent renderings of your images is grading with a calibrated display in the proper environment with someone who knows the tools. We know from the prior section that working with an accurate mastering display in the correct environment will minimize the inaccuracies your end-viewers experience.

The colorist is optional, provided you don’t mind spending years mastering their tools and techniques yourself.

Use a color-managed workflow

Using a color-managed workflow, you can target alternate gamut/gamma pairs for your renders without needing to make creative adjustments or guesses. It also means you can target display spaces without needing to monitor for them.

For example, if I do a color-managed grade on a film targeting my Rec. 709 display, I can later switch that target to P3 Gamma 2.6, and even though I’m not able to monitor that color space, I can be confident it will look the same on a P3 Gamma 2.6 projector as it did on my Rec. 709 display.
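Conceptually, the display transform at the heart of this is simple: decode the source encoding back to linear light, then re-encode for the new target. Here’s a gamma-only sketch (real color management also converts gamut and may tone map; the numbers are illustrative):

```python
def retarget(code_value, source_gamma, target_gamma):
    """Re-encode a display-referred code value for a display with a
    different gamma, so the viewer sees the same linear light."""
    linear = code_value ** source_gamma    # light the source display emits
    return linear ** (1.0 / target_gamma)  # code that makes the target match

# A midtone from a Gamma 2.4 master, re-targeted for a Gamma 2.6 projector:
print(retarget(0.5, 2.4, 2.6))  # ~0.527: codes brighten to cancel the darker curve
```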

Targeted web deliverables

Perhaps the trickiest outlet for ensuring accurate renderings is the web, simply because it encompasses so many platforms and devices whose display standards can significantly vary from one another.

The web is challenging for color grading, as different devices and platforms can render colors very differently, even when watching the exact same video.

Unfortunately, there’s no cut-and-dried answer for the perfect gamut/gamma target for the web, but in my experience you’re better off targeting sRGB’s gamma of 2.2 rather than the TV standard of gamma 2.4 or BT.1886. This will get you a closer match to your master on nearly any screen that’s not a TV — phone, laptop, tablet, etc.

Be mindful of encoded metadata

As discussed above, incorrect or incomplete metadata can cause a wildly inaccurate reproduction of your images. At a minimum, you should use a metadata inspector/editor to ensure that there are no incorrect entries, such as a flag for Rec. 2020 when your mastering gamut was Rec. 709.

Additionally, you can ensure that there are no absent entries pertaining to mastering gamut or gamma. Digital Rebellion’s Pro Media Tools is a great product for reviewing and editing metadata as needed.

Educate your viewers

While we can’t directly control the devices and environments on which viewers experience our content, we can still influence these factors by engaging with our audience. I’ll never forget reading the back of the Boogie Nights DVD wherein director PT Anderson left a personal note encouraging viewers to “use those color bars, get those whites white and those blacks black and turn it up loud.”

I’m sure many viewers didn’t read these instructions or bother to follow them, but I sure as hell did. Give viewers who care about your images the chance to experience them in the very best way possible.

Lower your expectations

Yep, you read that correctly. It’s important to remember that if you have this problem, so does everyone else. The goal is not to miraculously do away with a reality that’s been around as long as visual media itself. The goal is to give the greatest number of viewers the most faithful reproduction possible of the creator’s visual intent.

Cost-effective solutions

You may ask yourself: “What if I can’t afford a colorist or calibrated display?” Sadly, that’s a reality: budget and logistics don’t always make it feasible to give a piece of content the love it deserves in a color suite. If you find yourself in this jam, here’s my cheat sheet:

  1. Work in an environment with constant lighting: Block out as much daylight as possible, as daylight changes color and intensity and will affect your perception of images.
  2. Use a color-managed workflow: This will put less demand on your inaccurate display.
  3. Watch your scopes: Ensure deep shadows and peak highlights on your histogram are respectively reaching 0 and 100 without spilling too far below or above. Balance your red, green, and blue peaks for neutral white balance. Ensure your colors aren’t reaching the edge of the vectorscope and clipping out.
  4. Use a light touch. Take a conservative approach to your grade. Bold or aggressive moves are more likely to translate poorly in the wild than small adjustments.
  5. Accept that you’ve chosen a compromise that can’t be completely negated. By grading on an inaccurate display, the maximum possible inaccuracy of your content is a lot higher.
Clipped (top) vs contained (bottom) luminance as seen on a histogram.

Clipped (left) vs contained (right) chroma as seen on a vectorscope.
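The spirit of the scope check in step 3 can be expressed numerically. Here’s a toy sketch (real scopes analyze full images on IRE scales; the sample values below are made up):

```python
def clipping_report(pixels, low=0, high=255):
    """Report the fraction of 8-bit values pinned to the extremes,
    the numeric equivalent of spikes at the edges of a histogram."""
    crushed = sum(1 for p in pixels if p <= low) / len(pixels)
    clipped = sum(1 for p in pixels if p >= high) / len(pixels)
    return crushed, clipped

# A luma channel with blown highlights:
luma = [12, 40, 128, 200, 255, 255, 255, 230, 90, 255]
crushed, clipped = clipping_report(luma)
print(f"{crushed:.0%} crushed, {clipped:.0%} clipped")  # 0% crushed, 40% clipped
```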

HDR and beyond

We’ve been talking primarily in SDR terms throughout this article, but everything we’ve discussed becomes even more essential to understand with the arrival of HDR’s new universe of encoding and display standards.

Now is the perfect moment to wrap your head around the concepts we’ve discussed here, because they’re the only sensible way to navigate this increasingly complex jungle of standards, devices, and viewing environments.

In closing

This topic can feel complicated and intimidating, but if you’ve come this far, you’re well on your way to more consistent images.

A final word of advice: before you dive straight into the strategies above, take another lap through the “why” section. Without a good grasp of its concepts, the strategies that follow can easily compound your problems even further. But once you understand the causes of the problems you’re addressing, the solutions are easy to apply, evaluate, and tailor.

Remember the goal: to give the greatest number of viewers the most faithful reproduction possible of the creator’s visual intent.

Thank you to Cullen Kelly for contributing this article.

Cullen Kelly is a Los Angeles-based colorist with credits spanning film, television, and commercials, for clients including Netflix, Microsoft, American Airlines, and McDonald’s. Currently a staff colorist at Apache, he’s passionate about the vital role of workflow and image pipeline in great visual storytelling.

Interested in contributing?

This blog relies on people like you to step in and add your voice. Send us an email: blog at frame.io if you have an idea for a post or want to write one yourself.