ARRI REVEAL Color Science Explained

In 2022, ARRI introduced the ALEXA 35 camera, which came equipped with a brand new Super 35 sensor dubbed the ALEV IV. To keep up with the new sensor, ARRI had to reimagine its entire image processing pipeline from start to finish. That new system was eventually named ARRI REVEAL.

The motivation for this new color technology is probably best expressed by Art Adams, ARRI’s cine lens specialist. “When I look at color, I have an emotional response to it,” he explains on the company’s YouTube channel. “But I always want to know why I have that response because then I can reproduce it… With REVEAL, we have probably come as close to matching human perception as a camera or color science has ever been able to achieve.”

That’s a big claim, and it’s one that could be profoundly useful for filmmakers. According to ARRI, a DP using ARRI REVEAL can confidently say that their ALEXA images will be a 100% match to what they see by eye, or so close to it that the difference doesn’t matter.

But Art’s quote left me with some questions. First of all, what makes ARRI REVEAL so different from all the other color sciences on the market? Is REVEAL, as ARRI claims, really more accurate to the human eye? If so, what metrics does ARRI use to make that claim? And is REVEAL actually better than those other color sciences?

Let’s investigate.

What is ARRI REVEAL?

ARRI REVEAL can be broken down into five basic building blocks. They are: 

  1. ADA-7: ARRI Debayering Algorithm
    • A new debayering process that can “see” more detail in images.
  2. ACE4: ARRI Color Engine
    • A completely rewritten color engine that uses new mathematical models to map color and brightness.
  3. AWG4: ARRI Wide Gamut 4
    • A smaller but more accurately mapped color space for faster and easier grading.
  4. LogC4
    • A new log curve that works with the ALEV IV sensor’s expanded dynamic range.
  5. ALF-4: ARRI Look File 4
    • A new look system that separates a project’s creative intent from its technical output transform.

Perceptual brightness

Before we unpack the technology behind ARRI REVEAL, let’s look at how the new color science affects images. To do that, we first need to understand perceptual brightness.

Humans do not perceive all colors as equally bright. Every color has a perceived brightness, a measure of how strongly the eye registers its light. In general, humans see blue as dark, green as bright, and red as somewhere in the middle, because the eye resolves detail in green far better than it does in red or blue.

Green comprises about 70% of a human’s sense of brightness. Red, on the other hand, is about 20-22%, and blue is only 8%. This is why night vision imaging usually displays pictures in shades of green. It is also why most camera sensors have twice the number of green-sensitive photosites as red or blue. More green information in a digital image means there is more detail for the human eye to see. But those smaller amounts of red and blue can create big problems when it comes to color science.
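
Those percentages line up closely with the luma weights used in video standards. Here’s a minimal Python sketch using the Rec.709 coefficients; other standards use slightly different weights, but green always dominates:

```python
# Perceived brightness (luma) as a weighted sum of R, G, and B.
# These are the Rec.709 luma coefficients; Rec.601 and Rec.2020 use
# slightly different weights, but green always carries most of the load.
REC709_WEIGHTS = (0.2126, 0.7152, 0.0722)  # red, green, blue

def luma(r: float, g: float, b: float) -> float:
    """Approximate perceived brightness of a linear RGB triplet."""
    wr, wg, wb = REC709_WEIGHTS
    return wr * r + wg * g + wb * b

# A pure green patch reads roughly ten times brighter than an equally
# intense blue one.
print(luma(0.0, 1.0, 0.0))  # 0.7152
print(luma(0.0, 0.0, 1.0))  # 0.0722
```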

A page from Bayer pattern creator Bryce Bayer’s notebook. Image © Eastman Kodak Company

A nudge towards blue

ARRI admits that, in the past, it had to cheat a little to get its color science to work. This was one of the main motivators in creating ARRI REVEAL. Over email, Art Adams tells me, “It was a give-and-take process. The perceptual brightness of a color was something that occasionally had to give. For example, we might sacrifice cherry red to make skin tone perfect.”

ARRI’s traditional color science takes reds and nudges them toward the blue side of the spectrum, which can cause saturated reds to appear more purple after image processing than they do in real life.

Also, because ARRI’s earlier color space, AWG3, reaches beyond ACES AP0 in the blue region, original ALEXA footage sometimes clips in the blues when mapped into ACES. That can look disturbing, especially since blue LED lights are far more common now than they were when the first ALEXA came out over ten years ago.

What does ARRI REVEAL look like?

The great thing about ARRI REVEAL is that it is compatible with both ALEV III and ALEV IV sensors. That means you can use REVEAL with the ALEXA 35 camera and any ARRI camera that uses the ALEV III sensor, like the ALEXA LF and MINI LF. You won’t get all of REVEAL’s benefits unless you shoot on the ALEXA 35, but you’ll get a lot of them, and I’ll go into what those benefits are a bit later.

For now, let’s look at some images to see how REVEAL is different from ARRI’s previous color science. Please note that these images were all shot on the ALEXA MINI LF. They were then post-processed through either ARRI’s previous color science (marked Mini LF Original) or REVEAL (Mini LF REVEAL). They are only meant to show how REVEAL handles color, not the capabilities of the ALEXA 35’s new sensor.

In REVEAL, colors come through a little more honestly. Compare the color charts above and you’ll see a greater range of blues and purples in the REVEAL version. Notice the shadows on the red peppers, too. They seem more contrasty, right? That’s because REVEAL maps those red brightness values more accurately; in the original color science, the same shadows look slightly lifted. REVEAL no longer sacrifices reds in order to get perfect skin tones.

Highlights, neon tubes, and brake lights

Digital cameras usually struggle with highlight colors. Brake lights on the backs of cars sometimes come out looking white instead of red, and neon signs can appear less vibrant on camera than in real life. Bracketing exposure doesn’t always help with highlight color rendering either, because digital cameras can have color shifts as they ramp through exposures.

The differences in REVEAL are subtle but important, especially in the red highlights. They look less purple in REVEAL, right? Image © ARRI

In general, color tracks very well in REVEAL, with good color saturation from hot highlights to deep shadows. And those highlights still have the same gentle rolloff that ARRI is famous for. But in REVEAL, that rolloff happens higher up in the exposure curve, giving color information more room to come through in the brightest areas of the frame.

ARRI REVEAL also allows surface colors to shine through when they’re lit by color-saturated lights. In the REVEAL example below, you can see that the actor’s skin tone and blue shirt are coming through underneath the purple light. ARRI’s traditional color science renders the actor and his shirt as uniformly purple, the same color as the purple light.

More accurate color mapping also gives objects added dimensionality. Compare the subjects below. Which face looks fuller, with more contouring and depth? Pay special attention to the embroidered rose on the actor’s shirt. Which rose has more complexity? This is an example of what can be achieved when colors are mapped more accurately. 

The standard observer

Clearly, ARRI REVEAL reproduces colors a bit differently. But how do we know that REVEAL is, as ARRI claims, “better” at reproducing what human eyes can see?

To judge color perception, camera manufacturers like ARRI use a system called the CIE 1931 color space, which is a map that numerically defines every color that the human eye can see.

If you’re curious to learn more about CIE color spaces, Ben Bailey and Cullen Kelly have both written great articles on the history of color science. The important thing to take away is that the CIE gamut is the industry standard for what colors the human eye can see. It’s the foundation that ARRI and other camera manufacturers use to map out their digital colors.

But doesn’t vision vary from person to person? That’s true. So how do camera manufacturers like ARRI determine what the average human eye can see? That’s where the standard observer comes in. 

The standard observer is a formula that “standardizes” the human response to the visual spectrum. It’s the part of the color calculation that represents the visual system of a “standard” person with “color normal” vision. It was first established by two researchers from the UK using light, a small hole, and lots of human volunteers.

Although there are multiple standard observers, ARRI maps its color with the 2 Degree Standard Observer because that is the observer used in monitor calibration.
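
Concretely, the standard observer is a set of three color-matching functions, written x̄(λ), ȳ(λ), and z̄(λ). Weight a light’s spectral power distribution S(λ) by those functions and integrate across the visible range, and you get its XYZ tristimulus values (up to a normalization constant):

```latex
X = \int_{380}^{780} S(\lambda)\,\bar{x}(\lambda)\,d\lambda,\qquad
Y = \int_{380}^{780} S(\lambda)\,\bar{y}(\lambda)\,d\lambda,\qquad
Z = \int_{380}^{780} S(\lambda)\,\bar{z}(\lambda)\,d\lambda
```

The chromaticity coordinates plotted on the CIE horseshoe are then simply x = X/(X+Y+Z) and y = Y/(X+Y+Z).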

The difference, according to ARRI, is that the company starts with the human eye as the goal and builds its equipment toward that standard, as opposed to profiling its cameras around existing hardware limitations.

So, when ARRI claims that REVEAL is closer to the human eye, part of what they are describing is how their color science maps values inside the CIE gamut as defined by the 2 Degree Standard Observer. If REVEAL can match those parameters more accurately than other sciences, then it is reasonable for ARRI to claim that REVEAL also matches the human eye better. 

Admittedly, it gets a lot more complicated than that. A deeper explanation of color science would quickly turn into a discussion of matrix math and nonlinear transfer curves. But, as we’ll see in the next section, ARRI has designed REVEAL both to help real-life color grading and to improve on the shortcomings of the company’s previous color science. By leveraging new technology, ARRI has brought its ALEXA images closer to what we currently believe human eyes can see.

The building blocks of REVEAL

Let’s dive into each of REVEAL’s building blocks and see what ARRI has changed to create more color-accurate images.

ADA-7: ARRI Debayering Algorithm 

ADA-7 is ARRI REVEAL’s improved debayering algorithm. It is designed to make the most of the ALEXA 35’s new sensor, but, like everything else in REVEAL, it can be used with the ALEV III sensor as well.

Here’s a fun fact: digital cameras do not actually capture color. Each photosite records a single brightness value behind a red, green, or blue filter, and that “bayered” mosaic must then be “debayered” into a full-color image. If you want to learn more about debayering, Ben Bailey has a great explanation in his A Beginner’s Guide to Shooting Raw article.

ARRI’s previous debayering algorithm, ADA-5, did not perform well in green- or bluescreen work. It sometimes produced artifacting, especially at the edges between a subject and the background. So, ADA-7 was designed to minimize aliasing on edges and contouring artifacts, making it a perfect tool for our VFX-heavy world.

How an image is debayered and processed for viewing is a critical link in the image-capture chain. ARRI is one of the only camera companies to offer different ways of debayering its ARRIRAW footage, so if users don’t like how ADA-7 performs, they are free to choose another mode that works better for them.
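
ARRI’s debayering algorithms are proprietary, but a toy example shows what any debayering step has to do: estimate the two missing color values at every photosite from its neighbors. Here is a minimal Python sketch of the simplest approach, bilinear interpolation. It is emphatically not ADA-7, which works much harder at edges:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic of an RGGB Bayer mosaic (H x W) into RGB (H x W x 3).

    Each channel is recorded only at its own photosites; the gaps are
    filled by averaging the nearest recorded neighbors. Algorithms like
    ADA-7 work much harder at edges to avoid aliasing and color fringing.
    """
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = [
        (y % 2 == 0) & (x % 2 == 0),  # red photosites
        (y % 2) != (x % 2),           # green photosites (twice as many)
        (y % 2 == 1) & (x % 2 == 1),  # blue photosites
    ]
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        sparse = raw.astype(np.float64) * mask
        # Normalized convolution: average whichever neighbors were recorded.
        num = convolve2d(sparse, kernel, mode="same")
        den = convolve2d(mask.astype(np.float64), kernel, mode="same")
        rgb[..., c] = num / np.maximum(den, 1e-8)
        rgb[..., c][mask] = raw[mask]  # keep the measured samples untouched
    return rgb
```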

ACE4: ARRI Color Engine

Let’s take a step back and ask, “What is a color engine?”

In general, color engines convert camera information into something a human can view. Cameras don’t “see” the world like we do, so they need an extra calibration step to create their images. This step is what we call a color engine. It converts the camera’s native RGB signal into a calibrated or colorimetric state.

In REVEAL, ACE4 transforms the camera’s RGB signal into the ARRI Wide Gamut 4 color space. It brings what the camera ‘sees’ closer to human color perception as defined by the 2 Degree Standard Observer.

Previous ARRI color engines applied a single transform to all image colors, which contributed to the purple reds and clipped blues I mentioned earlier. ACE4 allows for local optimization, letting REVEAL calibrate for natural skin tones and color-saturated highlights at the same time.
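
To picture the basic mechanism, here is a hedged sketch: for linear data, moving between RGB encodings boils down to a 3x3 matrix multiply per pixel. The matrix below converts linear Rec.709 RGB to CIE XYZ purely as an illustration; ARRI publishes the actual AWG4 matrices in its LogC4 documentation, and ACE4’s advance is the locally optimized processing it layers on top, which no single matrix can express:

```python
import numpy as np

# Illustrative only: linear Rec.709 RGB -> CIE XYZ (D65).
# A color engine applies the same kind of 3x3 transform to move sensor RGB
# into a working space such as ARRI Wide Gamut 4, then (in ACE4's case)
# adds locally optimized corrections on top.
RGB709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def apply_matrix(pixels: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color matrix to an (..., 3) array of linear RGB values."""
    return pixels @ matrix.T

mid_gray = np.array([0.18, 0.18, 0.18])       # mid-gray in linear light
print(apply_matrix(mid_gray, RGB709_TO_XYZ))  # ~[0.171, 0.180, 0.196]
```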

Like ADA-7, this new calibration engine isn’t only limited to the ALEXA 35 camera. Users can also bring older ARRIRAW footage from the ALEXA LF and Mini LF into REVEAL to take advantage of the updated ACE4 color engine.

AWG4: ARRI Wide Gamut 4

Take a look at the CIE horseshoe above. As you can see, ARRI’s new color gamut, ARRI Wide Gamut 4 (AWG4), is smaller than its predecessor, ARRI Wide Gamut 3 (AWG3), and much smaller than ACES AP0. But there’s a good reason for that. If you look closely, you’ll see that both AWG3 and ACES overshoot the CIE horseshoe by quite a bit. That means those color gamuts might attempt to encode colors that the human eye cannot see.

Trying to encode “virtual” or “imaginary” colors typically results in a less-than-faithful representation of saturated colors. If the camera tries to encode an “imaginary” color, the signal will hit the limit of the CIE horseshoe and stay there, resulting in a color that doesn’t match what was in front of the lens.

AWG4 solves this problem by being smaller and better calibrated to the CIE graph. Via email, Art Adams explains that “ARRI’s old math forced us to pull and twist certain primaries outside of the spectral locus to place important hues in the right spots… We don’t need to overshoot so much anymore: the new math puts things right where they belong.”

ARRI designed AWG4 to reproduce even the most highly saturated colors. This new gamut no longer tries to encode colors that the human eye cannot see. It firmly maps every color to the inside of the observable CIE spectrum.

Also, AWG4’s new primaries—meaning the definition of what red, green, and blue values represent—were chosen after extensive conversations with colorists. So AWG4 is ARRI-tested and colorist-approved.

LogC4

LogC4 is probably REVEAL’s biggest change, and it has to be, to accommodate the sensor-level changes in the ALEXA 35. The ALEXA 35 offers 17 stops of dynamic range, whereas other ARRI cameras offer around 14.5 stops. That difference requires a new LogC4 curve with a lower middle gray. So, while users can process footage from any previous ARRI ALEXA camera through ARRI REVEAL color science, they will not benefit from the full potential of REVEAL’s expanded dynamic range without shooting on the ALEXA 35.

In designing the LogC4 curve, ARRI also decided to fix some issues that users had with its previous LogC3 curve.

When the original ALEXA came out, the common workflow was to record ARRIRAW, pre-render it into 10-bit DPX, and then take those files into a DI color grade. However, that 10-bit quantization created a bit of a bottleneck, and LogC3 was designed to smooth out that process. But that 10-bit encoding required LogC3 to have different linear slopes for different exposure indexes, mainly to eliminate banding when capturing HDR images.

“We may have overcompensated a bit,” Art Adams admits. “ARRI tries not to do things that will produce issues later on that will get caught in post and result in reshoots.” The old log curves made processing and converting LogC3 a bit difficult because white values would shift as ISO settings changed, and each ISO had its own log curve. In LogC4, those values stay put: every exposure index uses the same curve.

The new LogC4 image itself also looks quite different from a LogC3 one. It’s darker by eye because the curve needs more headroom for the ALEXA 35’s expanded dynamic range, and since the whole curve is flatter, mid-grays appear darker. But don’t worry: there are more than enough code values in LogC4 images. The ALEXA 35’s ARRIRAW is encoded in 13-bit rather than 12-bit, doubling the number of available code values, and in Apple ProRes the LogC4 curve always gets encoded in 12-bit.
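
For the technically curious, here is a minimal Python sketch of the LogC4 encoding curve. The constants follow ARRI’s published LogC4 specification as I read it, so verify them against the official document before relying on them; the takeaway is that a single curve now covers every exposure index, with middle gray landing near 28%:

```python
import math

# Constants per ARRI's public LogC4 specification (reproduced from memory;
# verify against the official document before using in production).
A = (2**18 - 16) / 117.45
B = (1023 - 95) / 1023
C = 95 / 1023
S = (7 * math.log(2) * 2 ** (7 - 14 * C / B)) / (A * B)
T = (2 ** (14 * (-C / B) + 6) - 64) / A

def logc4_encode(x: float) -> float:
    """Relative scene exposure -> normalized LogC4 signal (0..1)."""
    if x < T:  # linear extension below black
        return (x - T) / S
    return ((math.log2(A * x + 64) - 6) / 14) * B + C

print(round(logc4_encode(0.18), 3))  # ~0.278 -> middle gray sits near 28%
print(round(logc4_encode(0.0), 3))   # ~0.093 -> black maps to 10-bit code value 95
```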

ALF-4: ARRI Look File 4

ALF-4 is ARRI’s new and improved system for handling creative looks, and it works a bit differently from what came before.

ALF-2

To understand ALF-4, we need to understand the previous version, ALF-2. The ALF-2 workflow uses what ARRI calls a log-to-video transform for its look files. This transform takes a Log C signal and converts it directly for a specific display, such as a monitor calibrated for Rec709. That means an ALF-2 file becomes a fusion of two things: the creative intent built by DPs, DITs, and colorists, and the technical transform that allows the image to be viewed on a specific display device.

That’s a fine workflow until you need to provide multiple deliverables. A feature film, for example, might ask a colorist to provide a Rec709 SDR transform for monitoring, a DCI P3 transform for a theatrical release, and a Rec2100 transform for at-home HDR streaming. In the old ALF-2 workflow, you need a new file for every deliverable because the creative intent is fused with the output transform. That’s time-consuming, and it can endanger the creative look down the line, since the look is married to an output transform that may eventually become obsolete.

It can also become a problem on set. A cinematographer using an ALF-2 workflow would only be able to monitor a single live transform at a time. They would have to choose to see an SDR or an HDR image, for example, and sacrifice the confidence that their creative intent is going to survive in both versions.

ALF-4

ALF-4 solves these issues by decoupling the creative look file from the display transform. It turns what used to be one transform into two: the Creative Modification Transform (CMT) and the Display Rendering Transform (DRT). The CMT holds the creative look, while the DRT makes the image look correct on a specific viewing monitor.

ALF-4 uses what ARRI calls a log-to-log transform. It’s a bit like the LMTs (Look Modification Transforms) in ACES. It turns both the input and the output of the transform into LogC4, allowing DPs and colorists to utilize the format’s larger dynamic range all the way through to delivery or final display, whether it is in SDR or HDR.
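
To make the split concrete, here is a toy sketch of the viewing chain. The function names and transforms are purely illustrative, not ARRI’s actual math or APIs; the point is that the creative look stays log-to-log, so one look can feed any number of display renderings:

```python
import numpy as np

# Hypothetical sketch of the ALF-4 split (names and curves are illustrative,
# not ARRI's). The CMT is log-to-log: LogC4 in, LogC4 out. The DRT is applied
# afterwards, once per deliverable.

def example_cmt(logc4: np.ndarray) -> np.ndarray:
    """Toy creative look: a gentle contrast move, still in LogC4."""
    pivot = 0.28  # roughly where LogC4 places middle gray
    return np.clip(pivot + (logc4 - pivot) * 1.1, 0.0, 1.0)

def example_sdr_drt(logc4: np.ndarray) -> np.ndarray:
    """Toy display rendering: a stand-in for an ARRI LogC4-to-Rec709 LUT."""
    return np.clip((logc4 - 0.093) / (1.0 - 0.093), 0.0, 1.0) ** 0.6

frame = np.random.rand(4, 4)   # pretend these are LogC4-encoded pixels
graded = example_cmt(frame)    # one creative look...
sdr = example_sdr_drt(graded)  # ...rendered for an SDR monitor
# An HDR deliverable would reuse `graded` with a different DRT, and the
# creative look itself never has to be rebuilt.
```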

Benefits of the split

Splitting these transforms also adds flexibility on set. If you’re using the ALEXA 35, you can assign a different DRT to each of the camera’s SDI outputs, so with ALF-4 a DP and a DIT can monitor SDR and HDR simultaneously. Users shooting on the ALEXA 35 can also adjust the intensity of their show LUTs in-camera: the CMT can be dialed up or down in 10% steps to strengthen or soften any look. In essence, you can now adjust looks on set without having to stop the show and load a new file into the camera.

ARRI designed ALF-4 to be future-proof. Art Adams explains it like this: “We’re separating the creative intent from the technical color space transform, so when new displays or projection technologies come along, you won’t have to rebuild your looks for each new device. All you will do is render a new display LUT. For the creative look, you’ll just have to choose your favorite log-to-log LUT from your archives, so your creative intent stays intact.”

And, like the rest of REVEAL, any ARRI camera with the ALEV III sensor can use the ALF-4 workflow. So, you can already start future-proofing your creative looks even if you’re not using the ALEXA 35.

What’s the catch? 

ARRI REVEAL certainly has a lot of advancements, especially when combined with the ALEXA 35. But this ALEXA 35 workflow comes with a bit of a catch: the looks DPs, DITs, and colorists have been using with their ALEXA footage for the past twelve years don’t tend to work with the ALEXA 35’s LogC4 footage.

In this ARRI Tech Talk, senior colorist Florian “Utsi” Martin explains that “because LogC4 has a new encoding, you need different lookup tables… All the existing lookup tables you’ve been using cannot be used with the ALEXA 35.” Users were instead encouraged to work with a DIT or colorist to build new LUTs for LogC4 that match their existing LogC3 LUTs. You can learn how to recreate a LogC3 Look File for LogC4 here.

Not everybody was happy about this at release, and colorist and Frame.io contributor Cullen Kelly is among the critics. Via email, Kelly tells me, “I always found this to be creatively unacceptable. If you happen to like the REVEAL transform as a foundation, it makes sense to develop a look underneath it. But what if you don’t like it, or you already have a transform that you like? Building your LUT under this constraint is inefficient at best and visually compromised at worst.”

Why is this?

ARRI had a good reason for asking its customers to abandon their old LUTs when using the ALEXA 35. ARRI’s previous log curve, LogC3, didn’t have enough headroom to handle the ALEV IV sensor’s expanded dynamic range, so ARRI had to develop LogC4. 

Unfortunately, the introduction of this new LogC4 curve means that LUTs built for LogC3 are not compatible with ALEXA 35 footage. Using a LogC3 LUT on LogC4 footage will make the image look contrasty, crunchy, and generally dark. To get all the benefits of the ALEV IV sensor, ARRI recommends using the ARRI-branded lookup tables hosted on the company’s website.

…to a LogC4 image using the same LogC3 LUT. The LogC4 image appears much darker because LogC4 utilizes a different log curve. Images © ARRI

Update available

But what if you’re like Cullen Kelly and you already have a transform that you like? Do you just have to throw out the LogC3 transforms you love and rebuild them from scratch?

Not so fast. You have another great option: you can completely customize your color pipeline using Custom Color Management, which became available earlier this year with the ALEXA 35 SUP 1.2.1 and ART 1.4.0 updates. You can read about the SUP 1.2.1 update on ARRI’s website, and you can learn how to create a custom ARRI Look File here.

A word of warning

ARRI’s new Custom option comes with a big warning: you really have to know what you’re doing if you want to use Custom Color Management. ARRI created this workflow for users who want full control over their entire color pipeline, usually to work with print film emulations (PFEs), ACES, or their own color management scheme.

You should only choose “Custom Color Management” if you know exactly what you are doing so you don’t, as ARRI says in their Release Notes, “royally mess things up.” I encourage users seeking more information to watch ARRI’s Tech Talk video on the subject.

Lift, Gamma, Gain: How REVEAL changes the game for colorists

So far, we’ve looked at how REVEAL’s new color science renders images, and we’ve discussed how the technology behind REVEAL brings those changes to life. But how does all this affect working colorists? 

The main change comes from LogC4. As I mentioned before, LogC4 maps middle gray to a different part of the log curve than it did in LogC3. LogC3 mapped middle gray to 39%, but LogC4’s middle gray is now 28% to handle the ALEV IV’s wider dynamic range.

Because of this change, Lift, Gamma, and Gain do not work the same way on LogC4 footage.

In LogC3, pulling the blacks down with lift, gamma, and gain would increase saturation and essentially act as a homemade Rec709 transform. That no longer works in LogC4, because middle gray now sits lower on the curve; there simply isn’t as much room to pull the blacks down.

Grade LogC4 footage with that old method and you’ll lose the gentle shadow roll-off that ARRI builds into its display LUTs. The image will lose shadow detail, and the blacks will start to look very crunchy.
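
A quick worked example shows why the old muscle memory misfires. The numbers use a simple power-style gamma move purely for illustration, not ARRI’s actual transforms:

```python
# Illustration only: the same "gamma" move lands differently when middle
# gray sits at ~0.39 (LogC3) than when it sits at ~0.28 (LogC4).
gamma = 1.2  # a modest darkening of the mids

logc3_mid = 0.39
logc4_mid = 0.28

print(round(logc3_mid ** gamma, 3))  # ~0.323 -> mid gray drops about 17%
print(round(logc4_mid ** gamma, 3))  # ~0.217 -> mid gray drops about 22%
# The same habit digs deeper into LogC4's shadows, crushing detail that
# ARRI's display LUTs would otherwise roll off gently.
```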

Textures

Along with REVEAL, the ALEXA 35 introduced a new set of in-camera tools called Textures. Although Textures are not directly tied to the color pipeline, they introduce image manipulations that permanently affect color and brightness, so they are important for colorists and VFX artists to understand.

Until recently, image characteristics like overall sharpness and a camera’s particular noise “fingerprint” were pre-set at the factory. With the ALEXA 35, cinematographers now have a choice when it comes to their image’s texture.

You can find Textures under the Image menu on the ALEXA 35. The default option, K445, matches the look of other cameras in the ALEXA family. Other options add different amounts of grain-like noise or other local contrast enhancements.

It’s important to note that Textures are applied not only before the EI setting but also before debayering. This setting is not metadata that travels with clip files—it is a fundamental image characteristic that is “baked” into the footage.

There can be only one

There is always one ARRI Texture active in any ARRI digital camera, but the ALEXA 35 allows you to change the grain and contrast behavior in many ways. Some Textures give the footage a vintage look by increasing grain and lowering contrast. Others are strictly for shooting in low light, and so on.

Users hoping to generate their own Textures are out of luck for now. According to ARRI, each Texture setting is created by a combination of around thirty image processing settings that all interact with one another. However, ARRI assures us that Textures are “an ongoing process, and additional Textures will be added over time.”

Because Textures influence the noise in the image, they affect the signal-to-noise ratio (SNR) and could potentially reduce overall image quality. Some productions, especially VFX-heavy ones, should be wary of baking anything into their footage that might complicate their workflow down the line. As a camera assistant, I must recommend that all users test and view any image manipulations in the highest-quality viewing environment possible before permanently baking them into the footage.

If you want to check out some ARRI Textures sample footage, see it here on ARRI’s Frame.io account!

SkyPanel X

In the near future, colorists and DITs may have a say in how DPs light their sets. That’s because digital color science doesn’t just start in the sensor anymore—it starts in the lights themselves.

Recently, ARRI released the SkyPanel X, a modular LED panel light with modes specifically calibrated to the spectral sensitivity of ALEXA cameras. For the first time, lights and cameras are speaking the same language when it comes to reproducing colors exactly.

ARRI warns against using this ALEXA-calibrated mode when shooting with anything other than ARRI ALEXA cameras, specifically those with ALEV III and ALEV IV sensors. Colors produced by the SkyPanel X in ALEXA mode may not be accurate on sensors using different color sciences. Instead, users should choose the other modes (CRI, TLCI, TM30, etc.) when filming with non-ALEXA cameras.

This new direction for ARRI is interesting to watch. It makes sense for lights to have dedicated modes that match the camera the crew is working with. Soon, we may have to flip our lights to “Sony” when shooting on the VENICE, or to “RED” when using the V-RAPTOR.

Final points

It’s easy to see why Art Adams and his ARRI colleagues are excited about REVEAL.

“REVEAL delivers the best color match to human visual perception I’ve ever seen,” Adams says. “If I look at a scene lit by a 5600K light through an Alexa 35, my brain tells me that the colors on the monitor and the colors I see by eye are the same, or so close that it doesn’t matter.”

“That might sound dull,” Adams admits, “but it is incredibly hard for digital cameras to produce colors that are so close to what the eye sees. And when you can get that close, a lot of things change. You have more information to push around in post, plus a DP can look at a set by eye and see how the colors will interact without necessarily having to look at a monitor.”

As for how REVEAL impacts color grading, Adams says, “It’s always easier to grade from a proper image to an abstract image than it is to grade from one abstract image to another abstract image.”

In general, Cullen Kelly agrees but notes that “accurate reproduction isn’t nearly as straightforward as it seems, and it has little to do with the camera. Cameras don’t capture data correctly or incorrectly; they simply respond to real-world stimuli in a specific way. Accurately rendering the captured data is a matter of having a model for that particular camera’s response, and accounting for that response in the formation of the final image.”

Wrapping up 

So, does REVEAL match what the human eye can see? The answer seems to be yes, or at least that ARRI appears to be closer to that goal than ever before. And even though working with the ALEXA 35 probably means rebuilding your LogC3 LUTs for LogC4, it’s definitely worth doing so to fully utilize REVEAL and the ALEXA 35’s updated sensor. 

But what about my other question? Is REVEAL better than other color sciences? I’ve learned through the course of writing this article that “better” is really a matter of taste more than anything else, and while color accuracy and expanded EI are important, they don’t necessarily equal attractiveness. But with REVEAL’s new Custom Color Management, ARRI’s users can do whatever they want when it comes to looks. Although it might mean learning a new tool, I see no downsides to using REVEAL’s improved color science now that ARRI has made it fully customizable.

We should all remember that an artist isn’t good at painting just because they own a lot of brushes. Artists should understand how their tools work and why they were created. Hopefully, the information outlined here will help you to create the kinds of evocative, emotion-inducing images that drew you to filmmaking in the first place.

Jay Kidd

Jay Kidd is a camera assistant and writer based in New York City. He’s snapped slates on shows like The Good Wife, Smash, White Collar, The Affair, The Marvelous Mrs. Maisel, and many more. When he’s not working he’s probably writing or talking to a stranger’s dog.
