How to Choose the Right Codec for Every Project

Codecs don’t need to be hard. No, really, they don’t. All that matters is that you choose the right codec.

By the end of this article, you will be able to pick the best codec for you on each project. My goal is to give you what you need to make your own informed decisions about codecs. So you can choose the right codec for yourself, instead of relying on what worked for someone else.

I’m going to walk you through every step in the process of making a video: the codec you shoot with, the codec you edit with, the codecs you use for color-correction and VFX, the codec you export, and the codec you archive.

At each stage, I’ll explain which factors you should be considering as you choose a codec. I’ll also give you some examples of the most commonly-used codecs for that stage.

Along the way, we’ll cover why low-end codecs and high-end codecs can each slow down your editing, the reasons for a proxy/offline edit, a real-world project walkthrough, some storage-saving strategies, and an explanation for why transcoding cannot improve your image quality.

The benefits of optimizing your codecs can be huge. Choose the right codec and you’ll preserve your images in the highest quality. It can also make your work faster, and lets you take the best advantage of your computer and storage. You’ll be able to work faster on a laptop than many can on a high-end tower.

What a Codec Does

A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don’t really need. And they’re pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It’s not required viewing to understand this article, but it certainly won’t hurt.

If you’re skipping the video, here are some very basic explanations:

  • Chroma subsampling: Throws away some color data (4:4:4 is no chroma subsampling. 4:2:2 is some chroma subsampling. 4:2:0 is lots of chroma subsampling). Bad if you’re doing color-correction. Really bad if you’re doing green screen or VFX work.
  • Macro-blocking: Finds blocks (of varying size) of similar colors and makes them all the same color. Bad for VFX and color-correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.
  • Temporal compression: Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.
  • Bit depth: The number of possible colors. Deeper bit-depth (larger numbers) is good for color-correction and VFX.
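
To make the storage impact of these choices concrete, here is a small sketch (assuming an uncompressed 1920×1080 frame, purely for illustration) that compares per-frame sizes under each chroma subsampling scheme and counts the shades available at different bit depths:

```python
# Rough per-frame sizes for an uncompressed 1920x1080 image (illustrative only;
# real codecs add plenty of further compression on top of chroma subsampling).

WIDTH, HEIGHT = 1920, 1080

def frame_bytes(subsampling: str, bit_depth: int) -> float:
    """Approximate bytes per uncompressed frame for a given chroma scheme."""
    # One luma sample per pixel, plus chroma (Cb + Cr) at reduced resolution.
    chroma_per_pixel = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[subsampling]
    samples = WIDTH * HEIGHT * (1 + chroma_per_pixel)
    return samples * bit_depth / 8  # bits -> bytes

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(f"{scheme} @ 10-bit: ~{frame_bytes(scheme, 10) / 1e6:.1f} MB per uncompressed frame")

for depth in (8, 10, 12):
    print(f"{depth}-bit: {2 ** depth} shades per channel, {(2 ** depth) ** 3:,} possible colors")
```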

Codec Comparison Table

I’ve also pulled together a list of all of the most common codecs used in the postproduction world. This list can help you compare different codecs against each other and make the best decision for your project.

There are many different codecs that can be used in the editing process. The ones I’ve included are by far the most common. There is a significant advantage to using popular codecs. They are more likely to work on your system, your client’s system, your system-in-five-years, etc. And it’s easier to find help if something goes wrong.

Open the table in a new tab. That way you can compare codecs as you read through the article.


Lossyness

One of the columns in the table is “lossyness,” which is an important concept with codecs. When I’m talking about lossyness, I don’t necessarily mean what your eye sees. I mean the amount of data that is retained by the codec, only some of which you can see. The question is: If I had an uncompressed image, and then I compressed it with this codec, how similar would the new image be to the old image? How much information is lost in the transcode? If the two images are very similar, then the codec is not very lossy. And if they’re pretty different, then it’s more lossy.

The lossyness is a combination of the techniques that the particular codec uses and its bitrate. A more lossy codec is not necessarily bad. In some cases (when viewing online, for instance), it’s really not necessary to retain 100% of the original image. Using a more lossy codec can be a really smart move because of how much space it saves.

If the image looks just as good to my eye, then why should I care if it’s technically ‘lossy’?

You should care because you may want to change the image. If you are doing any sort of color correction, then you will be changing the image. The result might be that you see elements of the image that weren’t visible (or prominent) when you captured it.

For example, here is an image that was captured raw.

Here is a screengrab of it compressed with H.264, using the standard YouTube-recommended settings.

And then compressed with DNxHD 350x:

They all look pretty much the same, don’t they? The visual quality is about the same, and the H.264 file is a fraction of the size of the DNxHD file. This is why it’s the recommended setting for YouTube. It looks just about as good to the eye, and the file is much easier to upload to the internet.

The trouble with the H.264 version, however, comes when you try to make changes to the image. What if you wanted to increase the exposure?

Now we can see where the highly-compressed image falls apart. Her hair and shirt look terrible in the h.264 image, and the buildings by the river look all mushy.

This is why you really want a high-quality codec when you capture the image. Because you will probably want to make changes later on, but you don’t know yet what those changes might be. You’ll want to tweak the color and contrast, maybe tweak the speed, maybe add some VFX. A highly-compressed file doesn’t allow for those changes without breaking down.

This is why it’s a good idea to capture your footage in 10-bit even if you may be outputting an 8-bit file in the end. You don’t know, when you shoot, which bits you’re going to want.

The Codec Journey

Now it’s time to walk through the different stages that you’ll encounter in each project.

Every project starts with a codec that you capture in the camera, and ends with a codec you export (the delivery codec) and hand to your client or upload to the web. In the simplest case, you do all of your editing and color-correction right on the camera files and then export to your delivery codec, so you’re only ever using two codecs.

But most of the time it gets a little bit more complicated. You might transcode to a different codec for editing, and potentially for color-correction, and definitely for VFX. But it all starts with…

The Codec You Shoot With

This is your capture codec (also called the “camera native codec” or “acquisition codec”).

Generally speaking, you should aim for the highest-quality codec that your camera (or your budget) can capture. When I say “highest quality”, I mean that you want to capture as much information as possible. So you want less-lossy codecs: less compression, higher bit-depth, and less chroma subsampling. The more information you have when you capture, the more flexibility you will have later. Especially in color-correction and VFX (if you’re doing that).

Of course, you also have to consider a lot of other, practical factors in this decision. Otherwise we would always be shooting 8K raw, right?

Cost

The first consideration is obviously cost. Generally speaking, the more expensive the camera, the higher quality codecs are available on it. I say generally because there are some “sweet spot” cameras that can offer excellent codecs at a reasonable price. Panasonic’s GH series (especially in the early days when the GH2 was hacked) was known for offering better codecs than the other cameras in its price range.

Tip: Better Codecs with External Recorders
One way to capture higher-quality codecs on cheaper cameras is to use an external recorder.

These devices (many of which can double as external monitors) take an uncompressed signal from the camera, via HDMI or SDI, and compress it separately. So you end up with two copies of your footage. One copy heavily compressed on the camera, and a second copy lightly compressed on the external recorder. The key thing here is that the camera sends the signal out to the recorder before compressing it.

One important note here is that many cheaper cameras only output 8-bit, and often not in 4:4:4. An external recorder might be able to compress to a 12-bit codec. But if the camera is only sending 8 bits, the recorder can only record 8 bits. Some cheaper cameras may also not output a “clean” HDMI signal that is suitable for recording. We call an output signal “clean” when it’s just the pure image with no camera interface overlays.

Storage

The second factor to consider is storage space. High-quality codecs tend to be higher bit-rate, which means that the files are larger. You need to be prepared to store and back up all of those files as you’re shooting. And you may also have to upgrade your memory cards in order to be able to record the high-bitrate data. If you’re shooting solo, then you may end up choosing a lower-quality codec because it allows you to change memory cards less often.

Finishing

Another factor to consider is how much color-correction and VFX (collectively referred to as finishing) you plan to do. If you’re going to be doing very minimal color-correction and no VFX, then you can probably get away with the lower bit-depth, the chroma subsampling, and the macro-blocking that come with lower-quality capture codecs.

Editing Hardware

The last factor to consider is your editing machine. Most capture codecs are not well suited to editing without a high-performance computer. H.264 and some raw files require a powerful CPU/GPU to edit smoothly. And very-high-bitrate codecs may require high-speed hard drives or data servers. Unless you’re shooting an edit-friendly codec, you may have to transcode your files to another codec before editing. And this can take time. For some, transcoding the footage isn’t a huge issue because it can be done overnight or on a spare computer. If you’re working on very tight turn-around times, however, you may choose a codec that will allow you to start editing immediately after a shoot, even if that means a higher cost or a sacrifice in image quality. I explain which codecs are best for editing in the next section.

The Codec You Edit With

Alright, you’ve shot your film, and you’ve got all of your files onto your computer. Now you need to decide whether you’re going to edit with these files. Or whether you want to transcode into another format.

Why should I transcode before editing? Can’t I just edit the files that came out of the camera?

Well, it depends. Pretty much all of the major software packages can now edit any codec that your camera creates. (Unless you’re a badass shooting on a brand-new camera with brand-new technology). But while it’s almost always possible to edit the codecs that your camera shot, it’s not always the best idea.

If you’re lucky enough to be shooting on a codec that is great for editing, you can skip this step.

For many of us, however, the capture codec isn’t going to be optimized for editing. There are two main factors you need to consider when choosing your edit codec: compression type and bit rate.

Highly-Compressed Codecs Can Slow Down Your Editing

Most lower to mid-range cameras record with codecs that use temporal compression, also known as long-GOP compression. I will give you a simple explanation here, but if you’re interested in learning in more detail, check out my codecs video.

The simple explanation of long-GOP compression is that, for each frame, the codec only captures what has changed between this frame and the previous frame. If the video doesn’t include a lot of motion, then the new file can be a LOT smaller. The difference between this frame and the last frame is just a few pixels, so all you need to store is a few pixels. That’s great!
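
If it helps to see the idea in code, here is a toy sketch of that principle (my own illustration, not how any real codec is implemented): store the first frame whole, then store only the pixels that changed in each following frame. Real long-GOP codecs are far more sophisticated, with motion vectors and bidirectional frames, but the storage win comes from the same idea.

```python
# Toy illustration of temporal (long-GOP-style) compression:
# keep the first frame whole, then store only changed pixels afterwards.

def encode_sequence(frames):
    """frames: list of lists of pixel values. Returns (keyframe, list of deltas)."""
    keyframe = list(frames[0])
    deltas = []
    prev = keyframe
    for frame in frames[1:]:
        # Record only (index, new_value) pairs where the pixel changed.
        delta = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        deltas.append(delta)
        prev = frame
    return keyframe, deltas

def decode_sequence(keyframe, deltas):
    """Rebuild every frame by playing the deltas forward -- which is why
    long-GOP footage is cheap to play forward but costly to scrub backward."""
    frames = [list(keyframe)]
    for delta in deltas:
        frame = list(frames[-1])
        for i, v in delta:
            frame[i] = v
        frames.append(frame)
    return frames

# A mostly-static "clip": only one pixel changes per frame, so each delta is tiny.
clip = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 1, 2, 0]]
key, deltas = encode_sequence(clip)
assert decode_sequence(key, deltas) == clip
print(deltas)  # [[(1, 1)], [(2, 2)]]
```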

Always forward

The issue, however, is that these codecs tend only to work well when played forward. (If you’re curious why, take a look at the video.) That’s great for viewing on YouTube or your DVD player, but it’s not great for editing. When you’re editing, you’re often jumping around or playing a clip backward, and it takes a lot more processing power to do those things quickly with a long-GOP codec. A high-end computer might have no trouble, but a mid-range computer will lag and stutter when you skim through the footage quickly or jump around.

Codecs that aren’t long-GOP (a.k.a. intra-frame codecs) can play backwards just as easily as forwards, so even a mid-range computer can skip around very smoothly. If you’ve only ever edited clips straight from the camera, you might not realize what you’re missing!

The other thing that can cause issues with playback is raw video. Raw video needs to be converted before it can be displayed (sort of like a codec does). And some computers can’t decode the raw file fast enough, especially if it’s 4K. Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit!

High-Bitrate Codecs Can Slow Down Your Editing

For low to mid-range codecs, you don’t have to worry about the bitrates at all. Once you start moving up the ladder, however, high bitrate codecs can cause issues with editing, especially if you’re working on everyday computers.

The reason is that your computer needs to be able to read the data from your hard drive at a bitrate that is at least as high as your codec’s bitrate. It makes sense: if your codec is 50Mb/s (fifty megabits per second), then your computer needs to be able to read that file from your hard drive at 50Mb/s, or else it’ll fall behind and stutter.

(Note that Mb/s stands for megabits per second, while MB/s stands for megabytes per second. There are eight bits in a byte, so you need to multiply by 8 when converting from MB/s to Mb/s.)

Some good news

The good news is that hard drives are getting faster every day. So 50Mb/s is never going to cause any problems. But what if you’re editing ProRes 422HQ at 4K, which is 734Mb/s? The average external hard drive is only just barely fast enough to play that back. Some cheaper hard drives won’t manage it at all. And then, what if you’re editing a multicam with three cameras? Suddenly you need 3x that data rate: 2,202Mb/s! At that point, you’re going to need to invest in some high-performance hard drives or RAIDs.

Here are some rough guidelines for common data storage speeds. (There will always be certain models that underperform or overperform.)

  • Standard spinning drive: 100-120 MB/s
  • Professional spinning drive: 150-200 MB/s
  • Standard SSD: 400-500 MB/s
  • Low-end RAID: 200-300 MB/s
  • High-end RAID: 1000-2000 MB/s
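
Putting the bit/byte conversion and these drive speeds together, a quick back-of-the-envelope check might look like this (a sketch using the 734Mb/s ProRes 422 HQ 4K figure from above and rough midpoints of the drive-speed ranges):

```python
# Will a given drive keep up with playback? Compare the drive's read speed
# (in MB/s, megaBYTES) against the codec's bitrate (in Mb/s, megaBITS).

BITS_PER_BYTE = 8

def required_drive_speed_mb_per_s(codec_mbps: float, num_streams: int = 1) -> float:
    """Minimum sustained read speed in MB/s to play `num_streams` of this codec."""
    return codec_mbps * num_streams / BITS_PER_BYTE

drives = {
    "Standard spinning drive": 110,   # MB/s, rough midpoints of the ranges above
    "Standard SSD": 450,
    "High-end RAID": 1500,
}

prores_hq_4k = 734  # Mb/s, the figure used in this article
for streams in (1, 3):  # a single clip vs. a three-camera multicam
    need = required_drive_speed_mb_per_s(prores_hq_4k, streams)
    print(f"{streams} stream(s): need ~{need:.0f} MB/s sustained")
    for name, speed in drives.items():
        print(f"  {name}: {'OK' if speed >= need else 'too slow'}")
```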

Shooting in Log Can Slow Down Your Editing

Shooting in log is a way of preserving as much of your dynamic range as possible. It lets you capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks. Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film, so shooting in log can help make your footage feel more cinematic. Now that log profiles are available even on most prosumer cameras, it’s an extremely popular way to work.

The downside

The downside is that the image that comes out of the camera doesn’t look so great. You’ll need to add contrast and saturation in order to get even close to the final image. The most common way to do that is to add a LUT to your footage. This is essentially a simple preset color correction that brings your footage back to a “normal” look.

If you’re shooting in a log color space, then you need to apply a LUT to your footage in order to preview it with normal color and contrast. This means that your editor will need to apply the appropriate LUT to all of the clips when editing. This can be annoying to manage, and it can also slow down the computer a bit. This is because it needs to first decode each frame and then apply the LUT before displaying it. It’s certainly possible to edit the log footage without any LUT, but it’s not ideal. The color of two shots may influence how you intercut them.

If you’re going to transcode your files before editing them, then you can apply the LUT during the transcode process. That way, the editor is always working with footage that has good contrast and color and never has to bother with LUTs. Note that you should only do this if you are using a Proxy workflow, not a Direct Intermediate workflow (described below).
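
As a rough illustration of what baking a LUT into proxies can look like, here is a sketch that shells out to ffmpeg (assuming ffmpeg is installed; the folder names and `my_lut.cube` are placeholders for your own paths and your camera’s normalizing LUT). It applies the LUT with ffmpeg’s `lut3d` filter while encoding ProRes 422 Proxy files:

```python
# Sketch: batch-create ProRes Proxy files with a viewing LUT baked in.
# Assumes ffmpeg is on the PATH; paths and my_lut.cube are placeholders.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("camera_originals")
PROXY_DIR = Path("proxies")
LUT_FILE = "my_lut.cube"  # the normalizing LUT for your camera's log profile

PROXY_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.MP4")):
    out = PROXY_DIR / (clip.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", f"lut3d={LUT_FILE}",              # bake the LUT into the proxy picture
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes 422 Proxy
        "-c:a", "copy",                          # keep the original audio untouched
        str(out),
    ], check=True)
```

Again, this only makes sense for proxies that you will relink away from before export; the camera originals stay untouched.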

Consider time spent encoding

The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. If you have a lot of footage to go through, and your computer isn’t fast, it may take a long time. If you’re not in a big hurry, you can let the transcode run overnight, potentially on multiple computers if you have access to them, but that’s not always ideal.

When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, often on very tight schedules. I would usually shoot in 4K in a long-GOP log format and edit them on a MacBook Pro. Editing 4K long-GOP with a LUT (to correct for the log footage) on a laptop meant I could play the video back just fine in Premiere Pro, but I couldn’t zoom around the timeline as fast as I wanted without stuttering.

But that didn’t bother me too much because the edit was extremely simple. Just a few cuts, maybe some music, a title, and I was done. Even though my editing speed wasn’t ideal, I would have spent more time in the transcode than I would have saved in editing speed, so I just used the original files.

If I were editing a longer piece with the same setup, however, I would transcode to DNxHD or ProRes. Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time.

Proxy Edit

If you’re going to transcode the native camera files before you edit them, then you’ll use an “intermediate” codec. It’s called intermediate because it comes between the capture codec and the export codec. There are two common ways of working with intermediate codecs:

The first is the “proxy” workflow or “offline edit.” This means that you are transcoding your captured footage into an intermediate format, editing with that format, and then re-linking back to the original camera files before exporting. Because you will use the camera files to export and not the proxy files, you don’t need to worry so much about picking a proxy codec with great image quality – lossy codecs are fine. You can optimize for editing speed and storage convenience instead.

The proxy workflow is so common that many high-end cameras record a high-end raw file *and* a ProRes or DNxHD proxy file at the same time. After the shoot, the raw files are backed up and put in storage. The proxy files are sent off to the editors and to the director/producers for dailies.

Avoid temporal compression

When choosing a proxy codec, you want to go for one that does not use temporal compression (aka inter-frame compression or long-GOP compression), and you want to pick one that has a lower bitrate. The low bitrate means that the files are much smaller, so you can use fewer/smaller/cheaper hard drives, simplifying your workflow. Woot!

While the proxy files are great for editing, you shouldn’t do more than basic color-correction with proxy files. If you are going to do all of your color-correction inside of your editing software, then it’s best to re-link back to your camera files because your proxy files may have lower color quality.

The good news is that most editing software today can switch between the camera files and the proxy files in just a couple clicks, so you can even go back and forth if you need to.

We’ve published detailed guides for proxy workflows in each of the major NLEs.

Some good choices for proxy codecs

By far the most common proxy codecs are DNxHD/DNxHR and ProRes. They have both been around for years, so they’re very widely supported. Everyone knows how to handle them. They are both very well suited to a proxy workflow (ProRes even has a preset called “proxy”), and are nearly interchangeable when used for proxies.

DNxHD is made by Avid, and ProRes is made by Apple, so it makes sense that DNxHD would work better in Media Composer and ProRes would work better in Final Cut Pro X. That certainly used to be true, but nowadays both codecs work very smoothly in all modern editors (including Premiere Pro). There may be a slight speed increase in using the codec that was designed for the system, but it’s very slight.

The only significant difference between the two for a proxy workflow is the fact that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform. The only officially-supported way to create ProRes on a PC is with Assimilate Scratch. There are some other unsupported methods for creating ProRes files on a PC, but they’re not always reliable. PCs can easily play back and edit ProRes files, but you can’t encode new ProRes files on a PC as easily as DNxHD, and so some editors prefer a DNxHD workflow for that reason.

Pick a lane

Regardless of which of the two codecs you pick, you also have to pick which flavor you want. This is really going to depend on your storage constraints – it’s a tradeoff between image quality and file size. The good news is that you don’t need tip-top image quality when you’re editing, so you can choose a low-bitrate codec.

Start off with the smallest ProRes or DNx codec in the same resolution as your capture codec. Look at the GB/hr column and multiply it by the number of hours of footage you have. If you have enough storage space, then you’re good – use that codec. If you have lots of extra storage space, think about using the next largest flavor.

But if you don’t have enough storage space, or if you’re on an underpowered machine, then take the resolution down a notch. A lot of huge-budget Hollywood films were edited in 480p just a few years ago, so don’t sweat it if you need to lower your resolution from 4K down to 720P for the edit.
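
If you’d rather let a script do the multiplication, here is a sketch of that estimate (the bitrates are approximate figures for a few ProRes flavors; substitute the GB/hr numbers from the table for the codecs you’re actually considering, and the 4TB drive is just a stand-in for your own storage):

```python
# Estimate how much space a proxy transcode will need.

def gb_per_hour(bitrate_mbps: float) -> float:
    """Convert a codec bitrate in Mb/s into storage in GB per hour of footage."""
    return bitrate_mbps / 8 * 3600 / 1000  # Mb/s -> MB/s -> MB/hr -> GB/hr

hours_of_footage = 40
candidates = {
    "ProRes 422 Proxy HD": 36,    # Mb/s (approximate; check the table for your flavor)
    "ProRes 422 Proxy 4K": 145,
    "ProRes 422 LT 4K": 328,
}

available_tb = 4.0  # whatever your drive actually holds

for name, mbps in candidates.items():
    total_tb = gb_per_hour(mbps) * hours_of_footage / 1000
    verdict = "fits" if total_tb <= available_tb else "too big"
    print(f"{name}: ~{gb_per_hour(mbps):.0f} GB/hr, ~{total_tb:.1f} TB total ({verdict})")
```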

Direct Intermediate

The other type of intermediate workflow is something that I’m calling Direct Intermediate. This means that you transcode your camera files into a codec that is both good for editing and very high-quality (not very lossy). Because the codec is very high quality, almost all of the original information from the camera files has been preserved, and so it’s not necessary to re-link back to the camera files – you can just export directly from the intermediate files. There will be some theoretical loss of information when you transcode, but if you pick a good enough intermediate codec, it’ll be small enough that you don’t need to worry about it.

(Note: I’m calling this process “Direct Intermediate” because there isn’t a common name for this workflow. People usually just call this “intermediate,” but that can be confusing because proxy workflows are also a kind of intermediate workflow. Some people will also call this an “online” workflow, but this is also confusing because that term was created to describe a workflow that includes an offline and an online edit, not a workflow that’s online from start to finish.)

Preservation is key

The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec. The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. If you capture your footage on a DSLR like a Sony A7Sii at 4K, then you will be recording in a 4:2:0, 8-bit, Long-GOP codec at 100Mbps. You want an intermediate codec that is at least 4:2:0 and 8-bit. Going beyond these values (e.g. to 4:4:4 and 12-bit) won’t hurt, but it also won’t help at all. So it’s probably not worth the extra storage space.

Let’s say, for example, that we want to go with a ProRes codec. We have 4 options to choose from that are 4:2:2 and 10-bit.

  • 145Mb/s ProRes 422 Proxy
  • 328Mb/s ProRes 422 LT
  • 471Mb/s ProRes 422
  • 707Mb/s ProRes 422 HQ

Over and above

You might think that all you need is to match the camera bitrate (100Mbps), but you actually need to greatly exceed the camera bitrate. This is because h.264 is a much more efficient codec than ProRes. Because h.264 uses long-GOP compression, it can pack a lot more information into those 100 megabits than ProRes can. In order for ProRes to match the image quality of h.264, you need a much higher bitrate. I would recommend only using ProRes 422 or ProRes 422 HQ if you’re starting with a 100Mbps h.264 codec. ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes 422 HQ will have a slight edge.

While it’s fine to simply match the bit-depth and color sampling when choosing an intermediate, you should always increase the bitrate at least a little. If you’re going from long-GOP to a non-long GOP codec, then you should increase the bitrate a lot.

Side note: If you wanted to go with DNxHD instead of ProRes, you have similar options, except that DNxHD also offers an 8-bit version for the lower-end codecs. Since our footage is 8-bit to start with, that won’t hurt us at all.
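
Here is that decision spelled out in a short sketch, using the 4K ProRes bitrates listed above. The safety factor for long-GOP sources is my own rough rule of thumb, not an official figure:

```python
# Sketch: pick the smallest ProRes flavor that preserves an h.264 capture codec.
# Bitrates are the 4K (UHD) figures used in this article; the "safety factor"
# is a rough personal rule of thumb, not an official recommendation.

PRORES_4K = [  # (name, Mb/s) in ascending order
    ("ProRes 422 Proxy", 145),
    ("ProRes 422 LT", 328),
    ("ProRes 422", 471),
    ("ProRes 422 HQ", 707),
]

def pick_intermediate(capture_mbps: float, long_gop: bool, safety_factor: float = 4.0):
    """Return the first flavor whose bitrate is comfortably above the capture codec.
    Long-GOP sources pack more picture into each megabit, so they need the full
    safety factor; intra-frame sources only need a modest bump."""
    target = capture_mbps * (safety_factor if long_gop else 1.5)
    for name, mbps in PRORES_4K:
        if mbps >= target:
            return name
    return PRORES_4K[-1][0]  # nothing is big enough; take the largest available

print(pick_intermediate(100, long_gop=True))   # 100Mbps long-GOP source -> ProRes 422
```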

The proxy workflow sounded pretty good. Why do the Direct Intermediate?

Part of the reason why the Direct Intermediate workflow is common is that it used to be a lot harder to use a proxy workflow. Some of the major software providers didn’t make it particularly easy to relink back to the original camera files, and so people would choose a Direct Intermediate workflow instead. Nowadays, however, it’s pretty easy to do in any editing package. The main exception is when you have a lot of mixed footage types. If you have multiple frame rates and frame sizes in the same project, switching back and forth from the proxies to the capture codecs can be a headache.

If you are using some third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process more tricky. One common example might be software that automatically syncs audio tracks or multicam shoots.

No swapping required

Another reason why you might want to use a Direct Intermediate workflow is that you can move right on to the color-correction and VFX (“finishing”) process without swapping around any files. Keep reading, and I’ll explain more about why that’s convenient in the Color-Correction and VFX sections.

One downside, however, is that you can’t “bake in” the LUTs for your editor – you’re going to need to apply a LUT via a color-correction effect in your editing software. If you were to include the LUT in your transcode for Direct Intermediate workflow, you would be losing all of the benefits of recording in log in the first place.

The other obvious downside is that you need to store all of these (much larger) files.

An intermediate codec will never make your images better

This is very important, because it is very commonly misunderstood, and there is a lot of misinformation online. Transcoding your footage before you edit will never increase the quality of the output. There are some extra operations that you could do in the transcode process (such as using sophisticated up-res tools) that could increase the image quality in some cases, but a new codec by itself will never increase the quality of your image.

If you choose the right codec, you can avoid hurting your image, but you can never improve it.

That includes going from h.264 to DNxHD or ProRes. It also includes going from 8-bit to 10-bit. And going from 4:2:0 to 4:4:4.

Here is an illustration that can help you understand this concept:

This is a photo of a rose reflected in a water droplet. It’s 4 megapixels, and it looks pretty nice on my 27-inch monitor.

Now what if I take a photo of my monitor with a Red Helium 8K camera? This is a beast of a camera. I shot the photo of the rose a few years ago with a cheapo Canon Rebel DSLR, worth about $250 today. The Red Helium setup costs about $50,000, it’s 35 megapixels, it’s raw, and it has one of the best camera sensors ever produced.

Which will be a better image – the 4 megapixel photo, or the 35 megapixel photo?

A capture of a capture

The Red camera has more megapixels, right? It’s raw, and it has all of the Red digital magic, right? But since I’m using my high-resolution camera to take a photo of the photo, not a photo of the rose, my fancy new image will never be better than the first one. I have a file that is technically higher-resolution, but it does not capture any more of my subject (the rose) than the first one did.

This is what you’re doing when you’re transcoding. You are making a copy of a copy, taking a photo of a photo. If you use a fancy high resolution camera to take a photo of a photo, you will be able to preserve pretty much all of the information in the original image, but you won’t be able to add anything more.

The big caveat is that, if you are doing any processing, any transformation of the image (adding a LUT, for instance), then you definitely do want to transcode into a higher-quality codec, which will retain the new information. But if you’re not altering the image, then transcoding will not make your image somehow “better.”

A Real-World Example

Let’s say you’re editing a documentary that was shot in 4K on a Sony A7sii, recording in the long-GOP version of XAVC-S. Not ideal for editing. If the crew shot 40 hours of footage for your feature-length documentary, you’d end up with about 2.7TB of camera files, which can fit on one hard drive easily (though you’ve made other, separate backups, of course!).

You could convert that to a high-quality, not-very-lossy codec for a Direct Intermediate workflow, maybe ProRes 422 HQ in 4K.

The downside is that you would need about 12.7TB in order to store all of those ProRes files. You would have to use an expensive RAID setup (at least $1,000) in order to have easy access to all of that footage in one project. Peanuts for a big facility, but a significant investment for a solo editor.

Choosing proxy

So you might decide to use a Proxy workflow instead and transcode your files to the ProRes 422 Proxy 4K format. Then your footage would only take up 2.8TB, just barely more than your captured footage. You can then easily edit off of a single hard drive, and your workflow gets a lot simpler. (For instructions on how to calculate bitrates and file sizes, check out this article: The Simple Formula to Calculate Video Bitrates).

Let’s say that you’re working with another editor who’s on the other side of the country. You might decide to transcode the footage even further down to ProRes 422 Proxy HD, which would shrink your footage down to just 640GB, which becomes more feasible to send over the Internet if you have a fast connection. (18hrs to download on an 80Mbps connection)
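
If you want to double-check those numbers, the arithmetic is simple enough to script (a sketch using the approximate bitrates behind the figures above, counting video only):

```python
# Recreate the rough storage and transfer numbers from this example.

def terabytes(bitrate_mbps: float, hours: float) -> float:
    """Storage in TB for a given bitrate (Mb/s) and duration (hours)."""
    return bitrate_mbps / 8 * 3600 * hours / 1_000_000  # MB -> TB

hours = 40
print(f"ProRes 422 HQ 4K (707 Mb/s):    ~{terabytes(707, hours):.1f} TB")        # ~12.7 TB
print(f"ProRes 422 Proxy 4K (145 Mb/s): ~{terabytes(145, hours):.1f} TB")        # ~2.6 TB
print(f"ProRes 422 Proxy HD (36 Mb/s):  ~{terabytes(36, hours) * 1000:.0f} GB")  # ~650 GB

# How long does ~640 GB take to download on an 80 Mb/s connection?
gigabytes, connection_mbps = 640, 80
hours_to_download = gigabytes * 8 * 1000 / connection_mbps / 3600
print(f"Download time: ~{hours_to_download:.0f} hours")  # ~18 hours
```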

When the edit is all done, you just re-link your project back to the original camera files and export. Even though you and your remote editor have been working in a pretty lossy codec, the final export bypasses it, so you don’t lose any quality.

The Codec You Color-Correct

Ok, now you’ve got your video edited, and it’s time for color-correction. Everything we’re talking about here will apply whether you are color-correcting inside your editing application, or whether you are sending your edit to dedicated color-correction software.

The big question at this point is whether you want to color-correct straight on the original camera files, or whether you want to transcode. If you did a proxy/offline edit, then you definitely don’t want to color-correct the proxy files, because they have a lower image quality. In order to make good decisions about color, you need the highest quality image that you have available, because you need to be able to see exactly what you have to work with.

So we need to work with high-quality images, and we have a few different options:

A. Grade the camera files

This is certainly a simple option. If you did a proxy edit, you can relink to the camera files for the finishing process and go to town. This will give you maximum image quality, but remember how the camera files can be slow to work with? The camera files may slow down the process a little, but depending on the software you use and the amount of work you need to do, you might decide that the simplicity is worth a little bit of potential slowdown. If you have a short edit without a lot of complexity, then this can be a great and easy workflow.

Let’s assume that the color-correction slow-down bothers you, so you need a codec that is easier to work with. You could transcode all of your footage to a high-image-quality codec, link to those files, and then start doing your color-correction. But… that kind of defeats the purpose of a proxy workflow, doesn’t it? We used proxies because we didn’t want to have to deal with the large files that a high-quality transcode would create. Fortunately, there is another option.

B. Consolidate and Transcode

If you used a proxy/offline workflow for the edit but don’t want to color-correct the camera files, one good option is to relink to the camera files, consolidate your project and then transcode to a high-end codec.

When you consolidate a project, your editing software will make a copy of your project along with a copy of the media, but only the particular files that you ended up using in your sequence. So if you shot 7 takes but only used one of them in the edit, it’ll only copy that one take. This cuts down on the storage a lot, which comes in handy at this stage. You can also consolidate down even further so that you only keep the specific portions of each take that you actually used in the edit, discarding the rest. In this case, the software will usually include a few seconds before and after each take (called “handles”), in case you want to add a fade or motion tracking.

Start your grade

Now you can take this new consolidated project (after relinking to the originals) and transcode all of these files to a very high-quality, high-bitrate codec, and start color-correcting. This is different from the Direct Intermediate workflow because you are not transcoding all of your footage – just the footage that made it into the final edit, which might be 1/20th or 1/50th the length of the footage that you originally shot. Now it doesn’t sound so bad to transcode to a high-bitrate codec because you don’t have to store so much of it. Even at ProRes 4444 4K, a full-length feature film will only be about 2TB – pretty manageable.

Now you can finish your film with top-quality images and speedy processing, on a hard drive that fits in your pocket. Woot!

C. Carry on the Direct Intermediate

The third option is to go with the Direct Intermediate editing workflow, in which case you’re good to go. You already transcoded all of your files to a high-quality codec before you started editing, so you can just carry on with those same files for color-correction. That is also convenient because those files are good both for editing and for color-correcting and VFX (see below).

If you are handing off the project to an external colorist or VFX person, then you can either give them all of your high-quality footage (potentially annoying because of the size), or you can use the same consolidation tip that we used above. Handing off the consolidated project can help you move faster and save your colorist’s time as well.

Another advantage

In addition to the simplicity of the Direct Intermediate workflow (you use only one set of files), you have one other advantage: going back and forth between editing and color-correcting is simpler.

Imagine you’ve finished your proxy edit – you consolidate and transcode, send it off to your colorist, and then decide that you need to make some changes to the edit. Now you’ve got to go back to the proxies to make the edit and then re-consolidate and re-send the footage. The mechanics of that can get pretty messy. In a high-end post-production workflow, there is usually a “lock” on the edit so that the finishing processes can start. This means that (unless bad things happen) you will try very hard not to go back and make changes to the edit. But hey, bad things happen, so it’s best to be prepared.

And now we find another good reason for a Direct Intermediate edit. If you are going to do some of your color work and your editing work simultaneously, or at least are going to go back and forth a couple times, then it can be simpler to use one codec for both. This is especially convenient if you are doing your editing and finishing in the same software package (or set of packages, e.g. Creative Cloud).

The Codec You Send to VFX

If you’re doing any VFX work, then you’re probably going to need to send files to another program (potentially another machine, for another artist). If you’re doing all of your VFX work in your editor (which is becoming more and more viable for simple jobs), then you can skip this section. Just use the same codec as you used for your color correction.

For most of us, however, we need to set up a “round-trip” process that sends clips from the editor to the VFX software and then back again when they’re finished. This happens on a shot-by-shot basis, so you’re not sending the entire sequence to VFX, like you probably did for color grading. The question of when in the process you send your shots to VFX depends very much on the particular workflow.

Some people will send to VFX after the edit is locked and color-correction finished, but time pressure can force you to start sending off shots before then.

If you’re editing in Premiere Pro and doing mild VFX in After Effects with Dynamic Link, then you can also skip this section. Dynamic Link automatically does the round-tripping for you. If you’re doing a lot of VFX work, you may still want to use the techniques in this section, because Dynamic Link can be a little bit finicky with too many projects. Adobe is always working on those bugs, however, and so it’s partly up to personal taste.

Go big or go home

In the VFX process, you tend to use very high-end (high-bitrate) codecs for two main reasons. The first is simply that VFX artists need all the information you can give them in order to do their job well. VFX artists are some of the pickiest people when it comes to codecs, and for good reason. Everyone wants high-quality images, but image issues often pose more of a problem for VFX than they do for editing, color-correction, and final export.

Many tasks in VFX work require very detailed analysis of the image on a pixel-by-pixel level, which most editors never need to do. For instance, if you’re doing a green-screen extraction, you want the edge between your character and the greenscreen to be as clean as possible. We’ve all seen awful greenscreen shots where the edges of the character are all choppy or blurred out. These problems often arise because of image compression artifacts that are invisible to the naked eye. 4:2:2 or 4:2:0 color subsampling, for instance, has almost no visible impact on the image. The human eye cares mainly about contrast and seldom notices low color resolution, but the greenscreen extraction process relies primarily on color values. If the codec has thrown away a large portion of the color values by using 4:2:0 chroma subsampling, a good color key may be impossible.

Generation loss

The second reason why you want to use high-end codecs is generation loss. In the VFX process, you will probably have to compress your file multiple times. You will compress the file once when you send it to the VFX artists, and then, if they need to pass the file between multiple specialists, they may compress it two or three more times before they send it back. The quality lost each time a file is re-compressed is called generation loss.

If you are using a low-end codec, the image will get progressively worse each time you re-compress it. One of the great things about the really high-quality codecs is that you can compress them a couple times without losing much quality. While it’s always better to avoid compressing a video multiple times, if you’re using very high-quality codecs, you’re usually pretty fine.

Some high-end VFX workflows will only use lossless compression for this reason. The good news is that your VFX shots are usually only a few seconds per clip, which means your file sizes will be small even with high-end codecs. So go big! If you captured 4:4:4 in the camera, then definitely send 4:4:4 to VFX. Otherwise, I would pick a top-of-the-line 4:2:2 codec (ProRes 422 HQ or DNxHR HQX).

And of course, you should always communicate beforehand with VFX about what codec to send. If you think they’re making a bad choice, send them this article 🙂

The Codec You Export

Now you’ve finished the editing, the color, and the VFX – you’re ready to export. You will usually do the final export from the software that you used for color-correction, using the codec that you used in the color-correction process.

If your client is in the media business, they should know what codec they want, so you can skip the rest of this section!

If your client is not a video expert, they may not know what they want, so you need to make some decisions for them. Most of the time, your client is going to want a video to upload to YouTube and/or other social media sites. You may be tempted to choose a codec that is good for streaming on the Internet. But you would be wrong!

The reason why: these sites do not stream the same file that you upload to your viewers – they compress the file *again* before streaming it, and you have absolutely no control over the settings that they use. This means that, if you upload a low-quality file, you end up in the scenario we talked about earlier: taking a low-quality photo of a low-quality photo. Bad! Avoid!

Aim for best quality

As a general rule, if you want the best quality result, you should upload the best quality source. They’re going to compress it again anyway, so giving them more data to work with can’t hurt, right? If you have a fast enough connection, you could upload a ProRes 422 file. Some people have reported slightly (only slightly) better results when uploading ProRes instead of the recommended h.264.

If you are delivering a file to a client for them to upload to YouTube, then I would not give them ProRes, since you don’t know what kind of bandwidth they’re going to have. Fortunately, these sites tend to publish recommended upload specs (just Google it). I personally will take whatever bitrate they recommend and multiply by about 1.5x to 2x.
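
In practice, that calculation is just a couple of lines (a sketch; the 45Mb/s recommendation is a made-up stand-in, since the platforms’ published numbers change over time):

```python
# Turn a platform's recommended upload bitrate into a delivery target and file size.
recommended_mbps = 45     # hypothetical: whatever the platform currently suggests
safety_multiplier = 1.5   # per the rule of thumb above (1.5x to 2x)
duration_minutes = 5

target_mbps = recommended_mbps * safety_multiplier
file_size_gb = target_mbps / 8 * duration_minutes * 60 / 1000

print(f"Export target: ~{target_mbps:.0f} Mb/s -> ~{file_size_gb:.1f} GB for {duration_minutes} minutes")
```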

Your client may also want a file that they can embed directly into their website (though I would dissuade them, if you can). Generally speaking, you want a very heavily-compressed h.264. If you’re curious what a good bitrate is, my reasoning is that, if anyone knows what the sweet-spot bitrate is, it’s YouTube. I periodically download a video from YouTube and check its bitrate, and use that as a benchmark.

Going small

If the video is not public, they may also want a small file that they can email or link directly to their own clients so that they can download it. In these cases, it may be appropriate to deliver two (or more) separate files, especially if it’s a long video. The file they should upload to YouTube will be too large to email conveniently, so in this case I will usually down-res the file and compress it very heavily. You also have to be realistic and decide whether you think that your client will actually understand the difference between the two files.

If I need to deliver more than one file, I will usually call one of them “HD” in the filename and the other one “small” or “not HD” in the filename. If you try to describe the different codecs to them, I can almost guarantee they’ll have forgotten the difference by next week, but they’ll probably remember what “HD” and “not HD” mean.

The Codec You Archive

You’ve delivered the file(s) to your client, so now you can sit back and relax… almost.

As any working professional in this industry knows, the day when you deliver the finished product to your client is often not the last time you touch a project. Sometimes a client wants to go back and change something weeks later, or they want a higher-quality codec, or maybe you want to add it to your personal reel. In any of these cases, you may have moved on to a different machine or to different software, making it a headache to open up the original project and re-export.

That’s handy

This is where it comes in handy to have a great archive of the finished project in an extremely high-quality codec. If your client requested a very high-quality codec for delivery, then you’re generally set. Just keep a copy of that file, and you’re good. If they need a delivery codec that’s not tip-top quality, however, it’s always good to do your own export with a codec that is lossless or as close to lossless as you can afford, given the space it will take. I will generally export to a very high-bitrate 4:4:4 codec – either DNxHD/HR or ProRes.

Many thanks to Larry Jordan, Shane Ross, and Philip Hodgetts for their input on this article!

David Kong

Filmmaker, teacher, coder. Head of Product Intelligence. You can learn more about my various adventures at www.davidkong.net
