How to Choose the Right Codec for Every Project

Codecs don’t need to be hard. No, really, they don’t.

By the end of this article, you will be able to pick the best codec for you on each project. My goal is to empower you to make your own informed decisions about codecs, instead of relying on what worked for someone else.

I’m going to walk you through every step in the process of making a video. Click on a heading to jump to that section.

At each stage, I’ll explain which factors you should be considering as you choose a codec, and I’ll give you some examples of the most commonly-used codecs for that stage.

Along the way, we’ll cover why low-end codecs and high-end codecs can each slow down your editing, the reasons for a proxy/offline edit, a real-world project walkthrough, some storage-saving strategies, and an explanation for why transcoding cannot improve your image quality.

The benefits of optimizing your codecs can be huge. The right codec will preserve your images at the highest quality, help you work faster, and enable you to take the best advantage of your computer and storage. You’ll be able to work faster on a laptop than many can on a high-end tower.

What a Codec Does

A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don’t really need, and codecs are pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It’s not required viewing to understand this article, but it certainly won’t hurt.

How Codecs Work – Tutorial.

If you’re skipping the video, here are some very basic explanations:

  • Chroma subsampling: Throws away some color data (4:4:4 is no chroma subsampling, 4:2:2 is some chroma subsampling, 4:2:0 is lots of chroma subsampling). Bad if you’re doing color-correction. Really bad if you’re doing green screen or VFX work.
  • Macro-Blocking: Finds blocks (varying size) of similar colors and makes them all the same color. Bad for VFX and color-correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.
  • Temporal compression: Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.
  • Bit depth: The number of possible colors. Deeper bit-depth (larger numbers) is good for color-correction and VFX.
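To make two of those bullets concrete, here’s a quick numeric sketch. (The 4x2-pixel-block framing is a simplification of how the J:a:b chroma notation works, but the sample counts come out right.)

```python
# Chroma subsampling: samples stored per 4x2 block of pixels.
# All 8 pixels keep their luma (Y); only the color (Cb/Cr) is thinned out.
def samples_per_4x2(scheme: str) -> int:
    per_channel = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}[scheme]
    return 8 + per_channel * 2        # Y samples + Cb + Cr

# Bit depth: shades per channel double with every extra bit.
def shades(bits: int) -> int:
    return 2 ** bits

print(samples_per_4x2("4:2:0"))   # 12 -- half the data of 4:4:4's 24
print(shades(10) // shades(8))    # 4x the shades going from 8-bit to 10-bit
```

So 4:2:0 stores half the samples of 4:4:4, and a 10-bit file distinguishes four times as many shades per channel as an 8-bit file. That’s the headroom color-correction and VFX feed on.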

Codec Comparison Table

I’ve also pulled together a list of all of the most common codecs used in the postproduction world. This list can help you compare different codecs against each other and make the best decision for your project.

There are many different codecs that can be used in the editing process, but the ones I’ve included are by far the most common. There is a significant advantage to using popular codecs – they are more likely to work on your system, your client’s system, your system-in-five-years, etc. And it’s easier to find help if something goes wrong.

Open the table in a new tab, and think about which codecs might be a good fit for you as you read through the article.

codec table image

Check out the table

Lossyness

One of the columns in the table is “lossyness,” which is an important concept with codecs. When I’m talking about lossyness, I don’t necessarily mean what your eye sees. I mean the amount of data that is retained by the codec, only some of which you can see. The question is: If I had an uncompressed image, and then I compressed it with this codec, how similar would the new image be to the old image? How much information is lost in the transcode? If the two images are very similar, then the codec is not very lossy, and if they’re pretty different, then it’s more lossy.

The lossyness is a combination of the techniques that the particular codec uses and its bitrate. A more lossy codec is not necessarily “bad.” In some cases (when viewing online, for instance), it’s really not necessary to retain 100% of the original image. Using a more lossy codec can be a really smart move because of how much space it saves.

If the image looks just as good to my eye, then why should I care if it’s technically ‘lossy’?

You should care because you may want to change the image. If you are doing any sort of color correction, then you will be changing the image, allowing you to see elements of the image that weren’t visible (or prominent) when you captured it.

For example here is an image that was captured raw.

Orig

Here is a screengrab of it compressed with H.264, using standard YouTube-recommended settings.

h264 grab

And then compressed with DNxHD 350x:

DNX grab

They all look pretty much the same, don’t they? The visual quality is just about the same, and the H.264 file is a fraction of the size of the DNxHD file. This is why it’s the recommended setting for YouTube. It looks just about as good to the eye, and the file is much easier to upload to the internet.

The trouble with the H.264 version, however, comes when you try to make changes to the image. What if you wanted to increase the exposure?

Sunset graded 2 Sunset graded 1

Now we can see where the highly-compressed image falls apart. Her hair and shirt look terrible in the H.264 image, and the buildings by the river look all mushy.

Sunset graded shoulder 2 Sunset graded shoulder 1 Sunset graded crop3 Sunset graded crop4

This is why you really want a high-quality codec when you capture the image – because you will probably want to make changes later on, but you don’t know yet what those changes might be. You’ll want to tweak the color and contrast, maybe tweak the speed, maybe add some VFX. A highly-compressed file doesn’t allow for those changes without breaking down.

This is why it’s a good idea to capture your footage in 10-bit even if you may be outputting an 8-bit file in the end – you don’t know, when you shoot, which bits you’re going to want.

 

The Codec Journey

the codec journey

Now that we’ve gotten some of the foundational ideas out of the way, it’s time to walk through the different stages that you’ll encounter in each project.

Every project starts with a codec that you capture in the camera, and ends with a codec that you export (the delivery codec) and hand off to your client or upload to the web. In the simplest case, you do all of your editing and color-correction right on the camera files and then export to your delivery codec, so you’re only ever using two codecs.

But most of the time it gets a little bit more complicated. You might transcode to a different codec for editing, and potentially for color-correction, and definitely for VFX. But it all starts with…

the codec you shoot

 

The Codec You Shoot With

(back to the index)

 

 

This is your capture codec (also called the “camera native codec” or “acquisition codec”).

Generally speaking, you should aim for the highest-quality codec that your camera (or your budget) can capture. When I say “highest quality”, I mean that you want to capture as much information as possible, so you want less-lossy codecs: less compression, higher bit-depth, and less chroma subsampling. The more information you have when you capture, the more flexibility you will have later, especially in color-correction and VFX (if you’re doing that).

Of course, you also have to consider a lot of other, practical factors in this decision, otherwise we would always be shooting 8K raw, right?

Cost

The first consideration is obviously cost. Generally speaking, the more expensive the camera, the higher-quality the codecs available on it. I say generally because there are some “sweet spot” cameras that offer excellent codecs at a reasonable price. Panasonic’s GH series (especially in the early days, when the GH2 was hacked) was known for offering better codecs than other cameras in its price range.

Tip: Better Codecs with External Recorders
One way that people (myself included) have found to capture higher-quality codecs on cheaper cameras is to use an external recorder.

These devices (many of which can double as external monitors) take an uncompressed signal from the camera, via HDMI or SDI, and compress it separately. So you end up with two copies of your footage – one copy heavily compressed on the camera, and a second copy lightly compressed on the external recorder. The key thing here is that the camera sends the signal out to the recorder before compressing it.

One important note here is that many cheaper cameras only output 8-bit, and often not in 4:4:4. An external recorder might be able to compress to a 12-bit codec, but if the camera is only sending 8 bits, the recorder can only record 8 bits. Some cheaper cameras may also not output a “clean” HDMI signal that is suitable for recording. We call an output signal “clean” when it’s just the pure image with no camera interface overlays.

Storage

The second factor to consider is storage space. High-quality codecs tend to be higher bit-rate, which means that the files are larger. You need to be prepared to store and back up all of those files as you’re shooting, and you may also have to upgrade your memory cards in order to be able to record the high-bitrate data. If you’re shooting solo, then you may end up choosing a lower-quality codec because it allows you to change memory cards less often and focus on the story instead.

Finishing

Another factor to consider is how much color-correction and VFX (collectively referred to as finishing) you plan to do. If you’re going to be doing very minimal color-correction and no VFX, then you can probably get away with the lower bit-depth, extra chroma subsampling, and macro-blocking that come with lower-quality capture codecs.

Editing Hardware

The last factor to consider is your editing machine, because most capture codecs are not well suited to editing without a high-performance computer. H.264 and some raw files require a powerful CPU/GPU to edit smoothly, and very-high-bitrate codecs may require high-speed hard drives or data servers. Unless you happen to be shooting an edit-friendly codec, you may have to transcode your files to another codec before editing, which can take time. For most people, transcoding the footage isn’t a huge issue because it can be done overnight or on a spare computer. If you’re working on very tight turn-around times, however, you may choose a codec that will allow you to start editing immediately after a shoot, even if that means a higher cost or a sacrifice in image quality. I explain which codecs are best for editing in the next section.

the codec you edit

 

The Codec You Edit With

(back to the index)

Alright, you’ve shot your film, and you’ve got all of your files onto your computer. Now you need to decide whether you’re going to edit with these files, or whether you want to transcode into another format.

Why should I transcode before editing? Can’t I just edit the files that came out of the camera?

Well, it depends. Pretty much all of the major software packages can now edit any codec that your camera creates (unless you’re a badass shooting on a brand-new camera with brand-new technology). But while it’s almost always possible to edit the codecs that your camera shot, it’s not always the best idea.

If you are lucky enough to be shooting on a codec that is great for editing (see the codec chart), then you can skip this step.

For many of us, however, the capture codec isn’t going to be optimized for editing. There are two main factors you need to consider when choosing your edit codec: compression type and bit rate.

Highly-Compressed codecs can Slow Down Your Editing

(back to the index)

Most lower to mid-range cameras record with codecs that use temporal compression, also known as long-GOP compression. I will give you a simple explanation here, but if you’re interested in learning in more detail, check out my codecs video, starting at 19:00.

The simple explanation of a long-GOP is that, for each frame, the codec only captures what has changed between this frame and the previous frame. If the video doesn’t include a lot of motion, then this means that the new file can be a LOT smaller than the original. The difference between this frame and the last frame is just a few pixels, so all you need to store is a few pixels. That’s great!

temporal compression

The issue, however, is that these codecs tend only to work well when played forward. (If you’re curious why, take a look at the video). That’s great for viewing on YouTube and your DVD player, but it’s not great for editing, because when you’re editing you’re often jumping around, or playing a clip backward. It takes a lot more processing power to do those things quickly with a long-GOP codec. A high-end computer might have no trouble, but even a mid-range computer will lag and stutter when you skim through the footage quickly or jump around.
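The “store only what changed” idea can be sketched in a few lines. This is a toy illustration of delta encoding, not how any real codec is implemented:

```python
# Toy sketch of long-GOP's core trick: keep the first frame whole
# (an "I-frame"), then store only the pixels that changed ("P-frames").
def delta_encode(frames):
    encoded = [("I", dict(enumerate(frames[0])))]   # full first frame
    for prev, cur in zip(frames, frames[1:]):
        diffs = {i: px for i, px in enumerate(cur) if px != prev[i]}
        encoded.append(("P", diffs))                # only the changes
    return encoded

# A mostly-static 8-pixel "shot" where one pixel changes per frame:
frames = [[0] * 8, [0] * 7 + [9], [0] * 6 + [5, 9]]
sizes = [len(data) for _, data in delta_encode(frames)]
print(sizes)   # [8, 1, 1] -- each P-frame stores 1 pixel instead of 8
```

Notice what this implies for playback: to show the third frame you must first decode the two frames before it. That’s cheap when playing forward, but expensive when jumping around the timeline or playing in reverse, which is exactly what editing involves.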

Codecs that aren’t long-GOP (a.k.a. intra-frame codecs), however, can play backwards just as easily as forwards, and even a mid-range computer can skip around very smoothly. If you’ve only ever edited clips straight from the camera, you might not realize what you’re missing!

The other thing that can cause issues with playback is raw video. Raw video needs to be converted before it can be displayed (sort of like a codec does), and some computers can’t decode the raw file fast enough, especially if it’s 4K. Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit!

High-Bitrate codecs can Slow Down Your Editing

(back to the index)

For low to mid-range codecs, you don’t have to worry about the bitrates at all. Once you start moving up the ladder, however, high bitrate codecs can cause issues with editing, especially if you’re working on everyday computers.

The reason is that your computer needs to be able to read the data from your hard drive at a bitrate that is at least as high as your codec’s bitrate. It makes sense: if your codec is 50Mb/s (fifty megabits per second), then your computer needs to be able to read that file from your hard drive at 50Mb/s, or else it’ll fall behind and stutter.

(Note that Mb/s stands for megabits per second, while MB/s stands for megabytes per second. There are eight bits in a byte, so you need to multiply by 8 when converting from MB/s to Mb/s.)
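In code, the conversion is a single multiply or divide. A trivial sketch:

```python
# Mb/s (megabits) vs MB/s (megabytes): 8 bits per byte.
def MBps_to_Mbps(mb_per_s: float) -> float:
    return mb_per_s * 8

def Mbps_to_MBps(mbit_per_s: float) -> float:
    return mbit_per_s / 8

print(MBps_to_Mbps(100))   # a 100 MB/s drive can sustain 800 Mb/s
print(Mbps_to_MBps(50))    # a 50 Mb/s codec needs only 6.25 MB/s
```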

The good news is that hard drives are getting faster every day, so 50Mb/s is never going to cause any problems. But what if you’re editing ProRes 422HQ at 4K, which is 734Mb/s? The average external hard drive is only just barely fast enough to play that back, and some cheaper hard drives won’t manage it. And then, what if you’re editing a multicam with three cameras? Suddenly you need 3x that data rate: 2,202Mb/s! At that point, you’re going to need to invest in some high-performance hard drives or RAIDs.

Here are some rough guidelines for common data storage speeds, though of course there will always be certain models that underperform or overperform.

  • Standard spinning drive: 100-120MB/s
  • Professional spinning drive: 150-200MB/s
  • Standard SSD: 400-500 MB/s
  • Low-end RAID: 200-300 MB/s
  • High-end RAID: 1000-2000 MB/s
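You can sanity-check a setup with simple arithmetic: divide the codec’s bitrate by 8 to get the required MB/s, then multiply by the number of simultaneous streams and compare against the drive speeds above. A rough sketch (an approximation: real-world throughput also depends on seeking and drive condition):

```python
# Minimum sustained read speed (MB/s) needed to play back
# `streams` simultaneous copies of a codec at `codec_Mbps`.
def needed_MBps(codec_Mbps: float, streams: int = 1) -> float:
    return codec_Mbps * streams / 8   # 8 bits per byte

# One stream of 4K ProRes 422 HQ (~734 Mb/s):
print(needed_MBps(734))        # ~92 MB/s: a standard spinning drive just copes
# A three-camera multicam of the same codec:
print(needed_MBps(734, 3))     # ~275 MB/s: time for an SSD or a RAID
```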

Shooting in log can slow down your editing

Shooting in a log color space is a way of preserving as much of your dynamic range as possible, allowing you to capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks. Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film, and so shooting in log can help make your footage feel more cinematic. Now that log profiles are available even on most prosumer cameras, it’s an extremely popular way to work.

 

log example 1

The downside is that the image that comes out of the camera doesn’t look so great, so you need to add in a bunch of contrast and saturation in order to get even close to the final image. The most common way to do that is to add a LUT to your footage, which is essentially a simple preset color correction that brings your footage back to a “normal” look.

log example 1 corrected

If you’re shooting in a log color space, then you need to apply a LUT to your footage in order to preview it with normal color and contrast. This means that your editor will need to apply the appropriate LUT to all of the clips when editing. This can be annoying to manage, and it can also slow down the computer a bit, because it needs to first decode each frame and then apply the LUT before displaying it. It’s certainly possible to edit the log footage without any LUT, but it’s not ideal. The color of two shots may influence how you intercut them.

If you’re going to transcode your files before editing them, then you can apply the LUT during the transcode process. That way, the editor is always working with footage that has good contrast and color and never has to bother with LUTs. Note that you should only do this if you are using a Proxy workflow, not a Direct Intermediate workflow (described below).

Consider time spent encoding

The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. If you have a lot of footage to go through, and your computer isn’t particularly fast, it may take a long time. If you’re not in a big hurry, you can let the transcode run overnight, potentially on multiple computers if you have access to them, but that’s not always ideal.

When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, sometimes on very tight schedules. I would usually shoot in 4K in a long-GOP log format, and edit them on a MacBook Pro. Editing 4K long-GOP with a LUT (to correct for the log footage) on a laptop would mean I could play the video back just fine in Premiere Pro, but I couldn’t zoom around the timeline as fast as I wanted without stuttering. But that didn’t bother me too much because the edit was extremely simple – just a few cuts, maybe some music, a title, and I was done. Even though my editing speed wasn’t ideal, I would have spent more time in the transcode than I would have saved in editing speed, so I just used the original files.

If I were editing a longer piece with the same setup, however, I would transcode to DNxHD or ProRes. Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time.

Proxy Edit

(back to the index)

If you’re going to transcode the native camera files before you edit them, then you’ll use an “intermediate” codec. It’s called intermediate because it comes between the capture codec and the export codec. There are two common ways of working with intermediate codecs:

The first is the “proxy” workflow or “offline edit.” This means that you are transcoding your captured footage into an intermediate format, editing with that format, and then re-linking back to the original camera files before exporting. Because you will use the camera files to export and not the proxy files, you don’t need to worry so much about picking a proxy codec with great image quality – lossy codecs are fine. You can optimize for editing speed and storage convenience instead.

The proxy workflow is so common that many high-end cameras record a high-end raw file *and* a ProRes or DNxHD proxy file at the same time. After the shoot, the raw files are backed up and put in storage, while the proxy files are sent off to the editors and to the director/producers for dailies.

When choosing a proxy codec, you want to go for one that does not use temporal compression (aka inter-frame compression or long-GOP compression), and you want to pick one that has a lower bitrate. The low bitrate means that the files are much smaller, so you can use fewer/smaller/cheaper hard drives, simplifying your workflow. Woot!

While the proxy files are great for editing, you shouldn’t do more than basic color-correction with proxy files. If you are going to do all of your color-correction inside of your editing software, then it’s best to re-link back to your camera files because your proxy files may have lower color quality.

The good news is that most editing software today can switch between the camera files and the proxy files in just a couple clicks, so you can even go back and forth if you need to.

We’ve published detailed guides for proxy workflows in each of the major NLEs:

Some good choices for proxy codecs

By far the most common proxy codecs are DNxHD/DNxHR and ProRes. They have both been around for years, so they’re very widely supported. Everyone knows how to handle them. They are both very well suited to a proxy workflow (ProRes even has a preset called “proxy”), and are nearly interchangeable when used for proxies.

Since DNxHD is made by Avid, and ProRes is made by Apple, it makes sense that DNxHD would work better in Media Composer and ProRes would work better in Final Cut Pro X. That certainly used to be true, but nowadays both codecs work very smoothly in all modern editors (including Premiere Pro). There may be a slight speed increase in using the codec that was designed for the system, but it’s very slight.

The only significant difference between the two for a proxy workflow is the fact that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform. The only officially-supported way to create ProRes on a PC is with Assimilate Scratch. There are some other unsupported methods for creating ProRes files on a PC, but they’re not always reliable. PCs can easily play back and edit ProRes files, but you can’t encode new ProRes files on a PC as easily as DNxHD, and so some editors prefer a DNxHD workflow for that reason.

Regardless of which of the two codecs you pick, you also have to pick which flavor you want. This is really going to depend on your storage constraints – it’s a tradeoff between image quality and file size. The good news is that you don’t need tip-top image quality when you’re editing, so you can choose a low-bitrate codec.

Start off with the smallest ProRes or DNx codec in the same resolution as your capture codec. Look at the GB/hr column and multiply it by the number of hours of footage you have. If you have enough storage space, then you’re good – use that codec. If you have lots of extra storage space, think about using the next largest flavor.
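If your codec chart lists bitrate rather than GB/hr, the conversion is simple arithmetic. A rough sketch, in decimal units, ignoring audio and container overhead:

```python
# Back-of-the-envelope storage math: bitrate (Mb/s) -> GB per hour,
# then multiply by the hours of footage. (Decimal GB: 1 GB = 1000 MB.)
def gb_per_hour(bitrate_Mbps: float) -> float:
    return bitrate_Mbps / 8 * 3600 / 1000

def total_gb(bitrate_Mbps: float, hours: float) -> float:
    return gb_per_hour(bitrate_Mbps) * hours

# e.g. 4K ProRes 422 Proxy at ~145 Mb/s, 40 hours of footage:
print(gb_per_hour(145))     # 65.25 GB/hr
print(total_gb(145, 40))    # 2610.0 GB -- call it 2.6 TB
```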

If you don’t have enough storage space, or if you’re on an underpowered machine, then take the resolution down a notch. A lot of huge-budget Hollywood films were edited in 480p just a few years ago, so don’t sweat it if you need to lower your resolution from 4K down to 720p for the edit.

Direct Intermediate

The other type of intermediate workflow is something that I’m calling “Direct Intermediate.” This means that you transcode your camera files into a codec that is both good for editing and very high-quality (not very lossy). Because the codec is very high quality, almost all of the original information from the camera files has been preserved, and so it’s not necessary to re-link back to the camera files – you can just export directly from the intermediate files. There will be some theoretical loss of information when you transcode, but if you pick a good enough intermediate codec, it’ll be small enough that you don’t need to worry about it.

(Note: I’m calling this process “Direct Intermediate” because there isn’t a common name for this workflow. People usually just call this “intermediate,” but that can be confusing because proxy workflows are also a kind of intermediate workflow. Some people will also call this an “online” workflow, but this is also confusing because that term was created to describe a workflow that includes an offline and an online edit, not a workflow that’s online from start to finish.)

The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec. The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. If you capture your footage on a DSLR like a Sony A7Sii at 4K, then you will be recording in a 4:2:0, 8-bit, Long-GOP codec at 100Mbps. You want an intermediate codec that is at least 4:2:0 and 8-bit. Going beyond these values (e.g. to 4:4:4 and 12-bit) won’t hurt, but it also won’t help at all, so it’s probably not worth the extra storage space.

Let’s say, for example, that we want to go with a ProRes codec. We have 4 options to choose from that are 4:2:2 and 10-bit.

  • 145Mb/s ProRes 422 Proxy
  • 328Mb/s ProRes 422 LT
  • 471Mb/s ProRes 422
  • 707Mb/s ProRes 422 HQ

You might think that all you need is to match the camera bitrate (100Mbps), but you actually need to greatly exceed the camera bitrate. This is because h.264 is a much more efficient codec than ProRes. Because h.264 uses long-GOP compression, it can pack a lot more information into those 100 megabits than ProRes can. In order for ProRes to match the image quality of h.264, you need a much higher bitrate. I would recommend only using ProRes 422 or ProRes 422 HQ if you’re starting with a 100Mbps h.264 codec. ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes 422 HQ will have a slight edge.

While it’s fine to simply match the bit-depth and color sampling when choosing an intermediate, you should always increase the bitrate at least a little. If you’re going from long-GOP to a non-long GOP codec, then you should increase the bitrate a lot.
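Those rules of thumb can be codified. The multipliers below are my own assumptions for illustration (chosen to roughly match the ProRes recommendation above), not an official formula:

```python
# Heuristic check: is an intermediate codec "at least as good" as the
# capture codec? Chroma is encoded ordinally (420 < 422 < 444).
def intermediate_ok(capture, intermediate, long_gop_to_intra=True):
    factor = 4.0 if long_gop_to_intra else 1.2   # assumed multipliers
    return (intermediate["bit_depth"] >= capture["bit_depth"]
            and intermediate["chroma"] >= capture["chroma"]
            and intermediate["Mbps"] >= capture["Mbps"] * factor)

a7sii = {"bit_depth": 8, "chroma": 420, "Mbps": 100}        # 4K h.264
prores_422 = {"bit_depth": 10, "chroma": 422, "Mbps": 471}  # 4K ProRes 422
print(intermediate_ok(a7sii, prores_422))   # True
```

With a 4x multiplier, ProRes 422 LT (328Mb/s) would fail the check while ProRes 422 (471Mb/s) passes, which lines up with the recommendation above.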

Side note: If you wanted to go with DNxHD instead of ProRes, you have similar options, except that DNxHD also offers an 8-bit version for the lower-end codecs. Since our footage is 8-bit to start with, that won’t hurt us at all.

The proxy workflow sounded pretty good. Why do the Direct Intermediate?

Part of the reason the Direct Intermediate workflow is common is that it used to be a lot harder to use a proxy workflow. Some of the major software providers didn’t make it particularly easy to relink back to the original camera files, and so people would choose a Direct Intermediate workflow instead. Nowadays, however, it’s pretty easy to do in any editing package. The main exception is when you have a lot of mixed footage types. If you have multiple frame rates and frame sizes in the same project, switching back and forth between the proxies and the capture codecs can be a headache.

If you are using some third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process more tricky. One common example might be software that automatically syncs audio tracks or multicam shoots.

Another reason why you might want to use a Direct Intermediate workflow is that you can move right on to the color-correction and VFX (“finishing”) process without swapping around any files. Keep reading, and I’ll explain more about why that’s convenient in the Color-Correction and VFX sections.

One downside, however, is that you can’t “bake in” the LUTs for your editor – you’re going to need to apply a LUT via a color-correction effect in your editing software. If you were to include the LUT in your transcode for Direct Intermediate workflow, you would be losing all of the benefits of recording in log in the first place.

The other obvious downside is that you need to store all of these (much larger) files.

An intermediate codec will never make your images better

(back to the index)

This is very important, because it is very commonly misunderstood, and there is a lot of misinformation online. Transcoding your footage before you edit will never increase the quality of the output. There are some extra operations that you could do in the transcode process (such as doing an up-res) that could increase the image quality, but a new codec by itself will never increase the quality of your image.

If you choose the right codec, you can avoid hurting your image, but you can never improve it.

That includes going from h.264 to DNxHD or ProRes. That includes going from 8-bit to 10-bit. That includes going from 4:2:0 to 4:4:4.

Here is an illustration that can help you understand this concept:

This is a photo of a rose reflected in a water droplet. It’s 4 megapixels, and it looks pretty nice on my 27-inch monitor.

Rose 1

Now what if I take a photo of my monitor with a Red Helium 8K camera? This is a beast of a camera. I shot the photo of the rose a few years ago with a cheapo Canon Rebel DSLR, worth about $250 today. The Red Helium setup costs about $50,000; it’s 35 megapixels, it’s raw, and it has one of the best camera sensors ever produced.

Rose 2

Which will be a better image – the 4 megapixel photo, or the 35 megapixel photo?

The Red camera has more megapixels, right? It’s raw, and it has all of the Red digital magic, right? But since I’m using my high-resolution camera to take a photo of the photo, not a photo of the rose, my fancy new image will never be better than the first one. I have a file that is technically higher-resolution, but it does not capture any more of my subject (the rose) than the first one did.

This is what you’re doing when you’re transcoding. You are making a copy of a copy, taking a photo of a photo. If you use a fancy high resolution camera to take a photo of a photo, you will be able to preserve pretty much all of the information in the original image, but you won’t be able to add anything more.

The big caveat is that if you are doing any processing, any transformation of the image (adding a LUT, for instance), then you definitely do want to transcode into a higher-quality codec, which will retain the new information. But if you’re not altering the image, then transcoding will not make your image somehow “better.”

I’ll explain this in some more detail in another post, coming soon.

A Real-World Example

(back to the index)

Let’s say you’re editing a documentary that was shot in 4K on a Sony A7Sii, recording in the long-GOP version of XAVC-S. Not ideal for editing. If the shoot produced 40 hours of footage for your feature-length documentary, you’d end up with about 2.7TB of camera files, which can fit on one hard drive easily (though you’ve made other, separate backups, of course!).

You could convert that to a high-quality, not-very-lossy codec for a Direct Intermediate workflow, maybe ProRes 422 HQ in 4K.

The downside is that you would need about 12.7TB in order to store all of those ProRes files. You would have to use an expensive RAID setup (at least $1,000) in order to have easy access to all of that footage in one project. Peanuts for a big facility, but a significant investment for a solo editor.

So you might decide to use a Proxy workflow instead and transcode your files to the ProRes 422 Proxy 4K format. Then your footage would only take up 2.8TB, just barely more than your captured footage. You can then easily edit off of a single hard drive, and your workflow gets a lot simpler. (For instructions on how to calculate bitrates and file sizes, check out this article: The Simple Formula to Calculate Video Bitrates).

Let’s say that you’re working with another editor who’s on the other side of the country. You might decide to transcode the footage even further down to ProRes 422 Proxy HD, shrinking your footage to just 640GB, which becomes feasible to send over the Internet if you have a fast connection (about 18 hours to download on an 80Mb/s connection).
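That 18-hour figure is easy to verify. A sketch of the ideal-case transfer math (real transfers rarely sustain the full line rate):

```python
# How long does it take to send footage over the Internet?
def transfer_hours(size_GB: float, link_Mbps: float) -> float:
    megabits = size_GB * 1000 * 8     # GB -> Mb (decimal units)
    return megabits / link_Mbps / 3600

# 640 GB of ProRes 422 Proxy HD over an 80 Mb/s connection:
print(round(transfer_hours(640, 80)))   # ~18 hours
```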

When the edit is all done, you just re-link your project back to the original camera files and export. Even though you and your remote editor have been working in a pretty lossy codec, the final export bypasses it, so you don’t lose any quality.

The Codec You Color-Correct

(back to the index)

Ok, now you’ve got your video edited, and it’s time for color-correction. Everything we’re talking about here will apply whether you are color-correcting inside your editing application, or whether you are sending your edit to dedicated color-correction software.

The big question at this point is whether you want to color-correct straight on the original camera files, or whether you want to transcode. If you did a proxy/offline edit, then you definitely don’t want to color-correct the proxy files, because they have a lower image quality. In order to make good decisions about color, you need the highest quality image that you have available, because you need to be able to see exactly what you have to work with.

So we need to work with high-quality images, and we have a few different options:

A. Grade the camera files

This is certainly the simplest option. If you did a proxy edit, you can relink to the camera files for the finishing process and go to town. This gives you maximum image quality, but remember that camera files can be slow to work with. They may slow the process down a little, but depending on the software you use and the amount of work you need to do, you might decide that the simplicity is worth a bit of potential slowdown. If you have a short edit without a lot of complexity, this can be a great and easy workflow.

Let's assume that the color-correction slowdown bothers you, so you need a codec that is easier to work with. You could transcode all of your footage to a high-image-quality codec, link to those files, and then start doing your color-correction. But… that kind of defeats the purpose of a proxy workflow, doesn't it? We used proxies because we didn't want to deal with the large files that a full transcode would create. Fortunately, there is another option.

B. Consolidate and Transcode

(back to the index)

If you used a proxy/offline workflow for the edit but don’t want to color-correct the camera files, one good option is to relink to the camera files, consolidate your project and then transcode to a high-end codec.

When you consolidate a project, your editing software will make a copy of your project along with a copy of the media, but only the particular files that you ended up using in your sequence. So if you shot 7 takes but only used one of them in the edit, it’ll only copy that one take. This cuts down on the storage a lot, which comes in handy at this stage. You can also consolidate down even further so that you only keep the specific portions of each take that you actually used in the edit, discarding the rest. In this case, the software will usually include a few seconds before and after each take (called “handles”), in case you want to add a fade or motion tracking.

Now you can take this new consolidated project (after relinking to the originals) and transcode all of these files to a very high-quality, high-bitrate codec, and start color-correcting. This is different from the Direct Intermediate workflow because you are not transcoding all of your footage – just the footage that made it into the final edit, which might be 1/20th or 1/50th the length of the footage that you originally shot. Now it doesn’t sound so bad to transcode to a high-bitrate codec because you don’t have to store so much of it. Even at ProRes 4444 4K, a full-length feature film will only be about 2TB – pretty manageable.
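To see why the consolidated transcode stays manageable, you can estimate its size from the final runtime plus handles. Here's a sketch, with an assumed ~1,500 Mbps average for ProRes 4444 4K (the clip count and handle length are illustrative too; check Apple's published figures for your exact frame size and rate):

```python
def consolidated_size_tb(runtime_min, num_clips, handle_sec, bitrate_mbps):
    """Size of a consolidated project: the final cut's runtime, plus a
    handle on both ends of every clip, at the finishing codec's bitrate."""
    total_sec = runtime_min * 60 + num_clips * 2 * handle_sec
    return total_sec * bitrate_mbps * 1e6 / 8 / 1e12

# A 100-minute feature cut from 800 clips, with 2-second handles,
# transcoded to an assumed ~1,500 Mbps codec -> roughly 1.7 TB:
print(consolidated_size_tb(100, 800, 2, 1500))
```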

Now you can finish your film with top-quality images and speedy processing, on a hard drive that fits in your pocket. Woot!

C. Carry on the Direct Intermediate

The third option is to go with the Direct Intermediate editing workflow, in which case you’re good to go. You already transcoded all of your files to a high-quality codec before you started editing, so you can just carry on with those same files for color-correction. That is also convenient because those files are good both for editing and for color-correcting and VFX (see below).

If you are handing off the project to an external colorist or VFX person, then you can either give them all of your high-quality footage (potentially annoying because of the size), or you can use the same consolidation tip that we used above. Handing off the consolidated project can help you move faster and save your colorist’s time as well.

In addition to the simplicity of the Direct Intermediate workflow (you use only one set of files), you have one other advantage: going back and forth between editing and color-correcting is simpler.

Imagine you've finished your proxy edit – you consolidate and transcode, send it off to your colorist, and then decide that you need to make some changes to the edit. Now you've got to go back to the proxies to make the edit and then re-consolidate and re-send the footage. The mechanics of that can get pretty messy. In a high-end post-production workflow, there is usually a "lock" on the edit so that the finishing processes can start. This means that (unless bad things happen) you will try very hard not to go back and make changes to the edit. But hey, bad things happen, so it's best to be prepared.

And now we find another good reason for a Direct Intermediate edit. If you are going to do some of your color work and your editing work simultaneously, or at least are going to go back and forth a couple times, then it can be simpler to use one codec for both. This is especially convenient if you are doing your editing and finishing in the same software package (or set of packages, e.g. Creative Cloud).

The Codec You Send to VFX

(back to the index)

If you’re doing any VFX work, then you’re probably going to need to send files to another program (potentially another machine, for another artist). If you’re doing all of your VFX work in your editor (which is becoming more and more viable for simple jobs), then you can skip this section. Just use the same codec as you used for your color correction.

For most of us, however, we need to set up a “round-trip” process that sends clips from the editor to the VFX software and then back again when they’re finished. This happens on a shot-by-shot basis, so you’re not sending the entire sequence to VFX, like you probably did for color grading. The question of when in the process you send your shots to VFX depends very much on the particular workflow. Some people will send to VFX after the edit is locked and color-correction finished, but time pressure can force you to start sending off shots before then.

If you’re editing in Premiere Pro and doing mild VFX in After Effects with Dynamic Link, then you can also skip this section. Dynamic Link automatically does the round-tripping for you. If you’re doing a lot of VFX work, you may still want to use the techniques in this section, because Dynamic Link can be a little bit finicky with too many projects. Adobe is always working on those bugs, however, and so it’s partly up to personal taste.

Go big or go home

In the VFX process, you tend to use very high-end (high-bitrate) codecs for two main reasons. The first is simply that VFX artists need all the information you can give them in order to do their job well. VFX artists are some of the pickiest people when it comes to codecs, and for good reason. Everyone wants high-quality images, but image issues often pose more of a problem for VFX than they do for editing, color-correction, and final export.

Many tasks in VFX work require very detailed analysis of the image on a pixel-by-pixel level, which most editors never need to do. For instance, if you’re doing a green-screen extraction, you want the edge between your character and the greenscreen to be as clean as possible. We’ve all seen awful greenscreen shots where the edges of the character are all choppy or blurred out. These problems often arise because of image compression artifacts that are invisible to the naked eye. 4:2:2 or 4:2:0 color subsampling, for instance, has almost no visible impact on the image. The human eye cares mainly about contrast and seldom notices low color resolution, but the greenscreen extraction process relies primarily on color values. If the codec has thrown away a large portion of the color values by using 4:2:0 chroma subsampling, a good color key may be impossible.
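The arithmetic behind that lost color data is straightforward. For every 2x2 block of pixels, a 4:2:0 codec stores only one sample for each chroma channel, so a keyer gets half the total data, and only a quarter of the color data, of a 4:4:4 source. A quick sketch:

```python
# Samples stored per 2x2 pixel block under common chroma subsampling schemes.
# Luma (Y) is always full resolution; only the chroma channels (Cb, Cr) shrink.
def samples_per_4px(scheme):
    luma = 4
    chroma = {"4:4:4": 4, "4:2:2": 2, "4:2:0": 1}[scheme]
    return luma + 2 * chroma  # Y samples + Cb samples + Cr samples

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, samples_per_4px(s), "samples per 4 pixels")
```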

The second reason why you want to use high-end codecs is generation loss. In the VFX process, your file will probably be compressed multiple times. You compress it once when you send it off. Then, if multiple specialists need to pass the file between them, they may compress it two or three more times before sending it back. When a file is compressed repeatedly like this, we call the resulting damage generation loss. If you are using a low-end codec, the image will get progressively worse with each re-compression. One of the great things about really high-quality codecs is that you can compress with them a few times without losing much quality. While it's always better to avoid compressing a video multiple times, if you're using very high-quality codecs, you're usually fine.
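Generation loss is easy to simulate. The toy model below is my own illustration, not any real codec: each "generation" applies a small grade (a 3% gain) and then quantizes the pixel values, the way a lossy encode rounds data away. The coarser the quantization, the further the result drifts from the ideal over five generations:

```python
def regrade(signal, generations, step):
    """Each generation: apply a tiny grade (3% gain), then quantize to `step`.
    A coarse step stands in for a low-end codec; a fine step for a high-end one."""
    s = list(signal)
    for _ in range(generations):
        s = [round(x * 1.03 / step) * step for x in s]
    return s

signal = [10, 57, 123, 201]              # a handful of pixel values
ideal = [x * 1.03 ** 5 for x in signal]  # the same grade with no re-compression

coarse = regrade(signal, 5, 8)  # "low-end codec": quantize to multiples of 8
fine = regrade(signal, 5, 1)    # "high-end codec": quantize to whole values

worst = lambda out: max(abs(a - b) for a, b in zip(out, ideal))
print(worst(coarse), worst(fine))  # the coarse codec drifts much further
```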

Some high-end VFX workflows will only use lossless compression for this reason. The good news is that your VFX shots are usually only a few seconds per clip, which means your file sizes will be small even with high-end codecs. So go big! If you captured 4:4:4 in the camera, then definitely send 4:4:4 to VFX. Otherwise, I would pick a top-of-the-line 4:2:2 codec (ProRes 422 HQ or DNxHQX).

And of course, you should always communicate beforehand with VFX about what codec to send. If you think they’re making a bad choice, send them this article 🙂

The Codec You Export

(back to the index)

Now you’ve finished the editing, the color, and the VFX – you’re ready to export. You will usually do the final export from the software that you used for color-correction, using the codec that you used in the color-correction process.

If your client is in the media business, they should know what codec they want, so you can skip the rest of this section!

If your client is not a video expert, they may not know what they want, so you need to make some decisions for them. Most of the time, your client is going to want a video to upload to YouTube and/or other social media sites. You may be tempted to choose a codec that is good for streaming on the Internet. But you would be wrong! The reason: these sites do not stream the file you upload directly to your viewers – they compress it *again* before streaming it, and you have absolutely no control over the settings they use. This means that, if you upload a low-quality codec, you're in the scenario we talked about earlier: taking a low-quality photo of a low-quality photo. Bad! Avoid!

As a general rule, if you want the best-quality result, you should upload the best-quality source. They're going to compress it again anyway, so giving them more data to work with can't hurt, right? If you have a fast enough connection, you could upload ProRes 422. Some people have reported slightly (only slightly) better results when uploading ProRes instead of the recommended h.264. If you are delivering a file to a client for them to upload to YouTube, though, I would not give them ProRes, since you don't know what kind of bandwidth they'll have. Fortunately, these sites tend to publish recommended upload specs (just Google it). I personally take whatever bitrate they recommend and multiply it by about 1.5x to 2x.
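That padding rule is simple enough to write down. The 45 Mbps figure below is only a placeholder; look up the platform's current published recommendation for your resolution and frame rate:

```python
def upload_bitrate_mbps(platform_recommended_mbps, factor=1.75):
    """Rule of thumb from the article: pad the platform's recommended upload
    bitrate by roughly 1.5x-2x, since the platform re-compresses anyway."""
    return platform_recommended_mbps * factor

print(upload_bitrate_mbps(45))       # placeholder recommendation -> 78.75
print(upload_bitrate_mbps(45, 2.0))  # at the aggressive end -> 90.0
```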

Your client may also want a file that they can embed directly into their website (though I would dissuade them, if you can). Generally speaking, you want a very heavily-compressed h.264. If you’re curious what a good bitrate is, my reasoning is that, if anyone knows what the sweet-spot bitrate is, it’s YouTube. I periodically download a video from YouTube and check its bitrate, and use that as a benchmark.

If the video is not public, they may also want a small file that they can email or link directly to their own clients so that they can download it. In these cases, it may be appropriate to deliver two separate files, especially if it's a long video, since the file they should upload to YouTube will be too large to email conveniently. For the second file, I will usually down-res and compress very heavily. You also have to be realistic and decide whether your client will actually understand the difference between the two files.

If I need to deliver more than one file, I will usually put "HD" in one filename and "small" or "not HD" in the other. If you try to describe the different codecs to them, I can almost guarantee they'll have forgotten the difference by next week, but they'll probably remember what "HD" and "not HD" mean.

The Codec You Archive

(back to the index)

You’ve delivered the file(s) to your client, so now you can sit back and relax… almost.

As any working professional in this industry knows, the day when you deliver the finished product to your client is often not the last time you touch a project. Sometimes a client wants to go back and change something weeks later, or they want a higher-quality codec, or maybe you want to add it to your personal reel. In any of these cases, you may have moved on to a different machine or to different software, making it a headache to open up the original project and re-export.

This is where it comes in handy to have a great archive of the finished project in an extremely high quality codec. If your client requested a very high-quality codec for delivery, then you’re generally set. Just keep a copy of that file, and you’re good. If they need a delivery codec that’s not tip-top quality, however, it’s always good to do your own export with a codec that is lossless or as close to lossless as you can afford, given the space it will take. I will generally export to a very high-bitrate 4:4:4 codec – either DNxHD/HR or ProRes.

Got questions? Comment away.

I actually read every comment. This article is a work in progress, and I’ll be updating it with more explanations and examples based on your feedback. If you have some private feedback or questions, shoot me an email: david at frame dot io.

Want to write for the Frame.io Blog? Email me: blog at frame dot io.

Many thanks to Larry Jordan, Shane Ross, and Philip Hodgetts for their input on this article!

  • gavingreenwalt

    “And of course, you should always communicate beforehand with VFX about what codec to send. If you think they’re making a bad choice, send them this article”

    Or… maybe just listen to and trust the VFX supe if everything you know about codecs is from one blog post. ?

  • Anders Kjaer

    Thanks David,
    Very insightful article and clear explanation. I realize that you wrote with the higher-end professional. Could you touch on the best workflow for us still shooting with a dslr, AvcHD low bitrate codec.
    Is it best to choose the “Direct Intermediate” method for editing of is sticking with the camera format better? And for archival purpose what is the best codec to use?

    Anders

    • David

      Hi Anders,

      All of the principles in this article apply to DSLR shooters as well as high-end professionals. If you’re shooting AVCHD on a DSLR, then you can still get a lot of benefit from a proxy or direct intermediate workflow, because it will speed up your editing.

      As far as whether to choose a proxy or direct intermediate workflow, that’s up to you. Both can work well for a DSLR project, and it often just depends on what storage you have available.

I would recommend trying out a proxy workflow on a small project. Try editing off of the camera files, then switch to the proxies, and edit off of them for a bit. If you see a noticeable increase in speed, you'll know the workflow is worth adopting on your bigger projects.

  • Tamzid Farhan Mogno

    I did a 20 sec motion design project a few month back with flat colors. Exported it from After effects to H.264 for Vimeo. But vimeo would give me warning about the colors & suggested I upload in Quicktime YUV 4-2-2. Whatever I uploaded ended up looking a little pale in vimeo no matter what.

    What would you suggest I should have done to avoid this?

    https://vimeo.com/194081180

    • Did you find any answer? I’m getting very bright colors on Youtube with H.264 that they can hurt my eyes… The blue goes green. This just happens to motiongraphics done on After Effects. I don’t see big changes on the footage’s colors.

  • Pingback: The Simple Formula to Calculate Video Bitrates - Frame.io Blog()

  • Pingback: Compare 50 Intermediate Codecs on One Page - Frame.io Blog()

  • Pingback: The Ultimate Guide to FCP X Proxies: Make Final Cut Faster - Frame.io Blog()

  • Pingback: Why DaVinci Resolve is the Ultimate (Free) Tool to Sync Clips - Frame.io Blog()

  • Jireh

    BEST IN DEPTH EXPLANATION THUS FAR!
    I have almost zero knowledge when it comes to these tech. I would just film, drag, edit and export.
    After reading such vast information, I wouldn’t say I have complete mastery of the whole process, but I do get the gist of it.

    I have a scenario which Id love you to help me out David.

    I film on my A6000, AVCHD either 24fps or 60fps.
    I am using FCPX to edit on my MacBookAir 1.4 GHz, i5, 8GB.
    My edit will revolve around basic color correction like adjusting contrast, saturation, adding some letterbox and lower thirds for effect. My mac is pretty empty, with over 150GB free space.

    I use clipwrap to transcode. Which format do you think best suit my criteria? My final destination will be youtube. I have tried exploring before but the preview will look ok but looks blurry and noisy after exporting to H.264. I wasnt sure what I did wrong. I transcoded it to prores 422 Proxy, drag into fcpx and edit.

    From what I’ve read, I believe Direct Intermediate best suits me. Should I transcode it to prores 422 instead before editing? My main concern would be the stutterness. I have tried editing on davinci resolve, lightworks, hitfilm and all gave me terrible experience while editing on the timline. It was just TOO LAGGY. FCPX definitely provided me with an obvious improvement in editing smoothness but I’d like to maximize what I can do with it.

    Are there any information do you need me to provide for you to better understand my situation in helping me? I’d love to receive a respond from you David!

    Your student,
    Jay

  • Pingback: Working with Proxy Media in Avid Media Composer()

  • Matheus Meireles

Your work is wonderful, David; it takes real generosity to teach this way. It's a gift to have found it.
    I'd like to know which lossless codecs are best for archiving. And, if possible, a further article about lossless codecs and even about DCP would be great.

    • David

      Thank you very much for the kind words, Matheus! I will add it to my list, and hopefully we can cover that in a future article. This one is long enough as it is 🙂

  • David, thanks for putting this together. There’s lots of great information here, and I’m glad someone is distilling all of this knowledge together and making it freely available.

    With that said—I have a few quibbles!

    In The Codec You Color Correct With, Section B: Consolidate and Transcode, I think you missed a description of conforming to the original camera negatives. This seems like it’s missing a description of the conform, because you wouldn’t send off the proxy files for color-grading—you’d first conform to the OCNs and then transcode those OCNs, either as discrete clips with handles, or as a preconformed flattened media file with an EDL, into a DI format.

    For what you’re calling the “Direct Intermediate” workflow, you’ve indeed eliminated the necessary step of conforming to the OCNs, but your description for the online/offline workflow reads like it would be suitable to merely transcode your proxies to a DI format, and send that off to color, and I don’t think that’s what you mean.

In the video on Vimeo that includes an explanation of chroma subsampling, you might want to include what the chroma subsampling notation actually indicates. It’s not just that a 4:2:2 scheme has half resolution of the chroma signal as the luma signal, and that the 4:2:0 scheme has a quarter of the resolution of the chroma signal as the luma signal—it’s that the second two digits are referring to the chroma subsampling per scanline. This is a holdover from the days of SD on CRTs. The first digit is full-resolution luma, always 4, the second digit is the chroma factor relative to the first digit, and the third digit is the same as the second digit or 0 to reflect the second scanline. For a deep dive, you should consult Charles Poynton’s Digital Video and HD, Second Edition (2012).

    • David

      Hi Seth,

      You’re definitely correct in pointing out that you must conform to the original camera files. I think you may have missed that? In the section you’re referring to, I say, “one good option is to relink to the camera files, consolidate your project and then transcode to a high-end codec.”

      When I say “relink to the camera files” I’m just using the term “relink” instead of “conform.”

Regarding the Vimeo video, you are correct, but I decided to simplify the explanation to the essential points that people need to know to make practical decisions. There is of course much more detail I could have given, but I didn’t want to go too long.

  • Slackline Slacklife

    This is one of the best articles about Video I ever read. And for sure the best one about codec.

    EXCELENT! I sent to every people I know. Thank you.

  • Pingback: Homepage()

  • Todor Bradev

    beautiful! i learned more watching the video and reading the article than i did in 3 years of university!

    one question i have: except for trial-and-error, is there a more foolproof way of deciding what DNxHD bitrate to transcode an H.264 video to? i get 1080p / 50 Mbps VBR / 8-bit / 4:2:0 / GOP M=1, N=6 with 2 RefFrames from my Sony HDR-CX405 and I don’t know if I should use DNxSQ (115) or DNxHQ (175) for editing and/or master archiving. my eyes couldn’t detect any difference at all and I even stacked them in Premiere with the top one’s blending mode set to Difference.

    thank you, i’ll be following frame.io from now on 🙂

    • David

      Hi Todor, unfortunately there isn’t a fool-proof way to choose which DNx flavor to use, partly because it may actually depend on the complexity of the footage itself – the amount of motion, contrast, detail, etc.

      There are best practices, and you’ll find various opinions if you google your question.

      It also, of course, depends on what your workflow is. If you’re using a proxy workflow, then you can choose a low-bitrate DNx codec without worrying.

  • Pingback: sscn bkn 2017()

  • Pingback: The Best of Filmmaking & Video Production May 2017 • Filmmaking Lifestyle()

  • Chris

    THANK YOU! I’m a total noob with no experience who was looking for foundational knowledge in order to move to (a very slightly) higher level. I put on classical music to keep the ADD at bay [SQUIRREL!] and read thru every line in 1 sitting, and IT ANSWERED ALL OF MY QUESTIONS! Never have I gained so much knowledge from 1 single source, in 1 single sitting. I’m ready to go work at a pro house now! (ok, too far, but I’m super excited that after a year of poking around the internet trying to figure this stuff out I finally did it in about an hour).
    Even though I’m at a consumer level I feel like thanks to you I now have a really good grasp of the basic processes involved in video production and how it all fits together from start to finish, and it’s all very applicable to even an entry level enthusiast.
    Would you please teach literally everything to literally everyone? YOU. ARE. AMAZING.

    • David

      I’m so glad that it was helpful to you, Chris! Feel free to suggest other topics that you’re particularly interested in learning about – I’m always open.

  • Lee

    This is the clearest, most comprehensive explanation I’ve ever read. Thank you so much!

    • David

      Thanks for the kind words, Lee 🙂

    • Giacinto

      I totally agree!! Great article!

  • Pingback: Why ProRes on Windows is Still Not a Good Option for Editors - Frame.io Blog()

  • Aaron Jones

    Thank you David, Just one question….since it is possible to record in ProRes on a Ninja Inferno, that seems the simplest way to go but is ProRes a good capture codec?

    • David

      Hi Aaron, ProRes can definitely be a good capture codec if it’s available to you, and if you have enough recording media to support the high bitrates on-set. It can simplify your workflow greatly. If you’re capturing 4K, though, you may still want to do a proxy edit at 1080p, depending on your editing setup.

  • Pingback: Why Increasing Bit Depth Will Not Help Your Footage - Frame.io Blog()

  • sudhanshu soni

    Hi David!!!

    I, soon going to start a film production house. So i need some serious advice regarding the equipment i should purchase.

    I have gone through some of your articles about codec, production and Vfx. That drove me to share my project details with you and to ask for few technical difficulties i will face in my project.

    My first project will be a pure VFX project of 90 mins duration. Entire Video will be needed to shoot at 4K/2K resolution with 24 FPS and with 12 bit color depth. Also as it is a pure VFX project, so each frame of it will be edited with Green Screen Chroma keying. It will also contain some slow motion shots, can be 30 shots, each of 30-40 seconds in length. Sound will be created as 7.1/10.1 channel. Finally the project will be exported as a “Digital Cinema Package”

    Please guide me with the following :-
    1. Which Camera i should go with?
    2. Which codec is suitable for Recording?
    3. Which external recorder will be good?
    4. Which codec is suitable for Green Screen Chroma keying and Color Correction?
    5. Which external recorder is suitable for sound recording?

    6. Which mics should be used for Dialogue recoring and Sound Effects recording?

    7. Which format is suitable for recording the entire sound of the project?

    8. Which channels should be dedicated to Dialogues, channels for environment effects sounds, channels for shoe tap, hand claps, breathing, gunshot sounds, channels for background score (Relative to 7.1/10.1 channel setup)?

    9. Is the above discussed technical information enough to create a perfect JPEG2000 2k 24p 250 Mbps “DCP”? (This is the main reason to record the video in 4K/2K resolution with 24 FPS and with 12 bit color depth)

    I am looking for mid price range stuff!!!!!

    Thanks in Advance!!!

  • Pingback: Why DaVinci Resolve May Be the Single Most Powerful Tool in Post()

  • Jody Glover

    This article was the best yet that I’ve read online. You clearly explained a lot of missing pieces for me. Thanks!

  • Pingback: The 9 Most Common Crashes in Premiere Pro, and How to Fix Them()