8 Pitfalls to Avoid in an Offline-Online Video Workflow

What’s an offline-online workflow?

Offline-online workflows date back to the film days, when it was important to preserve the original camera negatives as pristinely as possible. Reels of film straight from the camera would be duplicated to other reels and cut into workprints. Editors could freely cut up and splice these “offline” reels just to make the edit decisions, and then the editing decisions reflected in the workprints could be methodically conformed back to a fresh copy from the original camera negatives.

Even though modern, digital, file-based workflows prevent degradation of the “original digital negatives,” there are still good reasons to adopt an offline-online workflow. Original digital negatives can be large and processor-intensive, they may use gammas and gamuts not intended for direct viewing, and they're often not yet synced to dual-system sound.

The modern digital dailies process serves to separate technical work from creative work by preparing proxies that are color-corrected, synced to dual-system sound, and in a format that’s easy on the CPU and GPU without taking up much disk space.

I’m a big advocate of the offline-online workflow. Although it’s robust and efficient, it has lots of moving parts and can get quite complicated in practice, and there are many potential pitfalls. What follows are some battle-hardened warnings for offline-online workflows.

My experience is with DaVinci Resolve for dailies and conform, and Premiere Pro for offline editing, but the insights should translate to other NLEs and color-grading applications.

RELATED: Sign up for advance notice of the most comprehensive workflow guide ever! From capture to delivery, everything you need to know in one 100,000+ word guide.

#1: Not Using Timecode

Timecode is the glue that holds an offline-online workflow together. Premiere Pro is relatively good at keeping track of clips that don’t have timecode, but Resolve more or less requires it. Premiere Pro can assign a kind of “fake” timecode inside a Premiere Pro project, but it won’t translate properly into Resolve via XML.

A typical offline-online workflow involving Premiere Pro and DaVinci Resolve consists of:

  1. Preparing proxies in Resolve
  2. Doing the offline editing inside Premiere, with the timeline referencing offline media
  3. Kicking out an XML from Premiere
  4. Importing a folder of original digital negatives into a Resolve project
  5. Relinking the imported timeline from the XML to the online clips inside the Resolve project

Without timecode, that timeline inside Resolve has no reliable way of identifying the proper in and out points for each clip.

Resolve might happen to place some clips into the sequence at the correct in and out points, but there’s no guarantee that the points for all clips will be correct. With no timecode to match against, every in and out point is effectively arbitrary.

Unfortunately, not all video file formats can hold timecode. The MP4 container, a very common container for H.264, is not capable of storing timecode. This can be a bit confusing, because in many post-production apps, the clips display a count that looks like timecode, but it’s not actually timecode. So, if you want to use an offline-online workflow, you first need to be familiar with which formats can and cannot contain timecode.

Timecode is more than the particular count on your media that shows up in the source monitor. Just as your clip has a video stream composed of still images in a particular sequence, and an audio stream composed of audio samples in a particular sequence, any professional format that contains SMPTE-compliant timecode assigns each frame a particular hour, minute, second, and frame number, according to the SMPTE 12M-2 standard.
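
To make that concrete, here’s a minimal sketch in Python of how non-drop-frame timecode maps between a human-readable address and an absolute frame count. The 24 fps rate is an assumption for the example; drop-frame rates like 29.97 follow different counting rules:

```python
# Minimal sketch: converting SMPTE non-drop-frame timecode to an absolute
# frame count and back, at an assumed frame rate of 24 fps. Drop-frame
# timecode (29.97/59.94 fps) follows different rules and is not handled here.

FPS = 24  # assumed frame rate for this example

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' into an absolute frame number."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_tc(total: int, fps: int = FPS) -> str:
    """Convert an absolute frame number back into 'HH:MM:SS:FF'."""
    frames = total % fps
    seconds = (total // fps) % 60
    minutes = (total // (fps * 60)) % 60
    hours = total // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(tc_to_frames("01:00:05:00"))  # 86520 -- one hour, five seconds in
print(frames_to_tc(86520))          # 01:00:05:00
```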

ProRes files wrapped in their native MOV container, and DNxHR files wrapped in MOV, MXF OP1a, or MXF OP-Atom, can all carry timecode, but you’ll need to check the exact specifications. Different cameras will list in their technical specs whether or not they capture with SMPTE-compliant timecode. So, always double-check to make sure your gear can handle this workflow.

As long as the proxies you transcode out of Resolve have timecode that matches the timecode in the original digital negatives from the camera, each post-production application should be able to switch properly between the online and offline formats as they’re needed. In Premiere, you can check the properties of a clip in the QuickTime container, and it’ll show you whether there’s a timecode stream.
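
If you want to spot-check this yourself outside of an NLE, here’s a minimal sketch that reads the embedded start timecode of a proxy and its camera original with ffprobe and compares them. It assumes ffprobe (part of ffmpeg) is installed and that the container exposes timecode as a format or stream tag; the paths are hypothetical:

```python
# Minimal sketch: comparing the embedded start timecode of a proxy against
# its camera original. Assumes ffprobe (part of ffmpeg) is installed and
# that the container exposes timecode as a format or stream tag.
import json
import subprocess

def start_timecode(path: str) -> str | None:
    """Return the clip's embedded start timecode, or None if absent."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-of", "json",
         "-show_entries", "format_tags=timecode:stream_tags=timecode", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    tc = info.get("format", {}).get("tags", {}).get("timecode")
    if tc is None:
        for stream in info.get("streams", []):
            tc = stream.get("tags", {}).get("timecode")
            if tc:
                break
    return tc

# Hypothetical paths, following the folder structure from pitfall #2:
ocn_tc = start_timecode("OCN/20180723/A001/Clip001.mov")
proxy_tc = start_timecode("Proxies/20180723/A001/Clip001.mov")
print("match" if ocn_tc == proxy_tc and ocn_tc else f"mismatch: {ocn_tc} vs {proxy_tc}")
```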

#2: No Organized Folder Structure

One of the most important parts of mastering an offline-online workflow is to make sure that the folder structures for the Original Camera Negatives (OCNs) and the proxies are identical, and that those folders are tightly organized.

The reasoning for having an organized folder structure goes deeper than just keeping the offline-online workflow tame. It pertains to the same underlying reason you’re using an offline-online workflow in the first place: both are methods that allow many disparate collaborators to work on a project together quickly and efficiently.

Let’s say you have two collaborators who are separated by geography—this might be a director and an editor in different cities, or two editors working on different deliverables who require the same source material.

If their folder structure is well-organized and identical, one collaborator can quickly direct the other to any particular shot. Both parties can recall the asset instantly. If different collaborators just start throwing their own assets into their own folder structures ad hoc, then projects will stall when collaborators waste time merely trying to recall particular assets.

The vast majority of my work is documentary, so shooting can occur over many days, with many different cameras.

Not Having Unique Folder Names for Camera Rolls

It’s important that a folder structure separates out every single camera memory card by a unique name. Our convention at Freethink is for every camera on a particular project to get its own letter. We then number each lettered camera’s cards in the order the footage was shot.

So, if we had three cameras, we’d call one camera A, one camera B, and one camera C. If we shot 5 cards on A, 2 on B, and 4 on C, the full list of camera cards would be:

  • Camera A: A001, A002, A003, A004, and A005
  • Camera B: B001 and B002
  • Camera C: C001, C002, C003, and C004

This is one method to help alleviate the problem of camera files that might have name collisions. Even if you’re not able to rename such problem files, having each camera card with its own unique name means that if you need to go digging into an XML or a project file to check what the last good file path of a file might have been, you would quickly be able to see the difference between /A003/Clip001.mov and /F002/Clip001.mov.

Not Separating Camera Cards by Date

For documentary work, wherein you might have directors or producers trying to remember something that they shot on a particular date, it’s best not only to have each card separated out by a unique camera letter and number, but also by date.

So the folder structure at Freethink for a folder of camera originals and proxies has been standardized to look something like:

  • Original Digital Negatives
    • 20180723
      • 20180723_Sound
      • A001
      • A002
      • B001
      • B002
      • C001
    • 20180724
      • A003
      • A004
      • B003
      • B004
      • C002

Dates are in ISO 8601 format (YYYYMMDD), so the filesystem lists them in chronological order by default.

When a director is collaborating with an editor who wasn’t on the shoot, the director can refer to footage in the natural way the human mind remembers it: “Hey, so we had a great shot of this on Tuesday morning…”
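
Because the whole premise is that the proxy tree mirrors the OCN tree exactly, it’s worth spot-checking that mirror whenever new media lands. Here’s a minimal sketch in Python; the two root paths are hypothetical, and you should expect legitimate differences like the _Sound folders and camera sidecar files to show up:

```python
# Minimal sketch: verifying that the proxy tree mirrors the camera-original
# tree. The two root paths are hypothetical; adjust for your drives. Expect
# legitimate differences (the _Sound folders, camera sidecar files); the
# point is to surface the surprises.
from pathlib import Path

def clip_stems(root: Path) -> set[Path]:
    """Relative paths of all files under root, minus extensions."""
    return {p.relative_to(root).with_suffix("")
            for p in root.rglob("*")
            if p.is_file() and not p.name.startswith(".")}

ocn_root = Path("/Volumes/RAID/Original Digital Negatives")
proxy_root = Path("/Volumes/RAID/Proxies")

ocn, proxies = clip_stems(ocn_root), clip_stems(proxy_root)

for missing in sorted(ocn - proxies):
    print(f"No proxy for:           {missing}")
for orphan in sorted(proxies - ocn):
    print(f"Proxy with no original: {orphan}")
```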

#3: Not Locking Picture Before You Conform

The offline-online workflow is designed to optimize work for each individual kind of collaborator. The offline editor gets color-corrected intraframe files at a low data rate, synced to sound, so that they don’t have to worry about syncing, color management, etc. The offline editor should just be able to play footage and set the order of the clips.

Broadly speaking, the offline editor’s deliverable is a recipe for how particular clips should be arranged on a timeline. That deliverable might technically be a .prproj file, or an XML, or an EDL, but the format doesn’t matter. The goal is for the editor to hand off the particular recipe for the particular editing decisions they’ve made.

Online collaborators have very different goals, though. Colorists, VFX artists, sound editors, and sound designers aren’t finessing the exact timing of particular cuts—in fact, they’re taking many precautions to ensure that the cuts on the piece they’re working on don’t change at all. It would be a dire technical mistake for someone in the online editing process to make a timing change to any particular clip.

Given how online applications are designed, once a piece has been conformed for color grading and sound mixing, even minor tweaks to the timing of edits can quickly spiral into time-consuming headaches to correct.

Imagine, for instance, if you’ve already conformed a picture-locked cut, and it’s been sent out for color, VFX, and sound. Let’s say a director or producer wants to just tweak something “simple,” and just wants to swap one particular shot for another.

For the uninitiated, this seems like it would be extremely easy to do, because in an editing session, an editor could make the swap in a matter of seconds. However, if the piece has already gone to other collaborators, such a request creates a time-wasting headache compounded across multiple people. Here’s what would ensue for an assistant editor:

  1. The director, editor, and/or producer would need to sit down and specify the exact sequence timecodes for what needs to change.
  2. The AE would go grab the replacement clip and upload it.
  3. The colorist would need to be informed and given the exact in and out points on the timeline.
  4. The shot would need to be sent to the colorist—hopefully, the colorist hadn’t already spent a lot of time on that shot, because any such work will now have been for naught.
  5. The VFX artist would need to be informed and given the exact in and out points on the timeline.
  6. The shot would need to be sent to the VFX artist—hopefully, the VFX artist hadn’t already spent a lot of time on that shot, because any such work will now have been for naught.
  7. The sound editor would need to be informed and given the exact in and out points on the timeline.
  8. The audio for the clip would need to be sent to the sound editor—hopefully, the sound editor hadn’t already spent a lot of time on that shot, because any such work will now have been for naught.
  9. The sound designer would need to make sure that any sound design elements are swapped accordingly—hopefully, the sound designer hadn’t already spent a lot of time on that shot, because any such work will now have been for naught.

Are you seeing the pattern and problem? Making timing changes after picture lock is a great way to turn what would have been a 5-second task in the offline edit into up to an hour or more of wasted time across different collaborators.

There’s a quotation that I love from Alexis Van Hurkman, author of the Color Correction Handbook and the DaVinci Resolve manual. In the Color Correction Handbook, Van Hurkman writes, “Locking the edit should not be looked at as a technological limitation but as a scheduling milestone. Sooner or later, the director and producer will have to make up their minds, be done with the edit, and allow the project to go through finishing.”

RELATED: 3 Ways to Work with Proxies in Premiere Pro

#4: Clips Without Unique File Names

When relinking from the offline media used in Premiere to the online media in DaVinci Resolve, the filename itself is one of the primary ways the app matches a clip. As the application traverses a specified directory recursively, it’s looking for particular filename matches.

Now, many “prosumer” cameras like DSLRs, GoPros, and drones simply aren’t able to keep track of clip numbering across card changes. In professional RED, Canon, Sony, and Arri cameras, the numbering of a new card will pick up from whatever the last clip of the previous card was numbered; but in a “prosumer” camera, every time a card is inserted and formatted, the numbering restarts. This could result in a single day’s worth of footage that has 10 different clips all named “C000,” 10 clips named “C001,” 10 clips named “C002,” etc.
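
Before the offline edit begins, you can scan a day’s folder for exactly this kind of collision. Here’s a minimal sketch in Python, assuming the dated folder structure from pitfall #2 and that clips sit directly inside each card folder:

```python
# Minimal sketch: scanning one shoot day for clip-name collisions across
# camera cards. The date folder and the set of extensions are assumptions
# for the example, and clips are assumed to sit directly in card folders.
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)  # filename -> cards it appears on
for clip in Path("20180723").rglob("*"):
    if clip.is_file() and clip.suffix.lower() in {".mov", ".mp4", ".mxf"}:
        seen[clip.name].append(clip.parent.name)

for name, cards in sorted(seen.items()):
    if len(cards) > 1:
        print(f"Collision: {name} appears on {', '.join(cards)}")
```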

One way to avoid this would be to stick to cameras that can keep track of numbering across cards—but with tight budgets, that’s not always possible.

Another solution is simply to rename the clips so that every single one in the entire project is unique. Sony Catalyst Prepare can rename camera files as it ingests, while properly keeping all the sidecar metadata intact for Sony formats. And since OS X Yosemite (10.10) in 2014, the Mac Finder has had a handy built-in function to batch-rename files.
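
If you don’t have a dedicated ingest tool handy, the renaming logic itself is simple. Here’s a minimal sketch in Python that prefixes each clip with its card folder’s name; note that renaming camera originals can break sidecar-metadata references, which tools like Catalyst Prepare handle and this sketch does not:

```python
# Minimal sketch: prefixing each clip with its card folder's name, so
# A001/C0001.MP4 becomes A001/A001_C0001.MP4. Renaming camera originals
# can break sidecar-metadata references; dedicated ingest tools handle
# that, and this sketch does not.
from pathlib import Path

card = Path("A001")  # hypothetical card folder
for clip in sorted(card.glob("*.MP4")):
    target = clip.with_name(f"{card.name}_{clip.name}")
    if not target.exists():
        clip.rename(target)
        print(f"{clip.name} -> {target.name}")
```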

#5: Not Using the Right Offline Format

If you’re going to go through the whole process of preparing proxies for the offline editor, wherein you’re syncing audio, color correcting from log to Rec. 709, and meticulously keeping your folder structure for the proxies identical to the folder structure of the camera originals, you might as well use the correct format.

One of the primary reasons to use proxies in the offline edit is to make those proxies portable and easy to work with. Selecting a proper format designed for this is crucial.

A good proxy format accomplishes two objectives:

  1. Relatively low data rate
  2. Easy to serve up frames—typically by using only intraframe compression (compression based on information contained within a single frame) and entirely avoiding interframe compression (compression that references preceding and succeeding frames), which takes more computing power to decode.

A format with a low data rate will allow for more footage on fewer hard drives, and intraframe compression will allow the footage to be played on low-powered systems without very powerful CPUs or GPUs.
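
A little back-of-the-envelope arithmetic shows why the data rate matters. The rates below are illustrative assumptions, not measured figures for any particular codec:

```python
# Back-of-the-envelope sketch: one hour of footage at a proxy-like data
# rate versus a camera-original-like rate. Both rates are illustrative
# assumptions, not measured figures for any particular codec.
def gb_per_hour(mbps: float) -> float:
    """Gigabytes consumed by one hour of footage at a given megabits/sec."""
    return mbps * 3600 / 8 / 1000  # Mb/s -> Mb/hour -> MB/hour -> GB/hour

print(f"~45 Mbps proxy:      {gb_per_hour(45):>5.0f} GB/hour")   # ~20 GB
print(f"~880 Mbps original:  {gb_per_hour(880):>5.0f} GB/hour")  # ~396 GB
```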

Remember, these proxies are just for the editor to figure out the timing of the edits; the format is not meant to preserve the image fidelity. Preserving image fidelity will be handled later when conforming the cut to the camera originals or a suitable digital intermediate format in preparation for the final color grade.

Two good options for proxies are Apple ProRes 422 Proxy and DNxHR LB.
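
As an illustration of what the transcode itself looks like, here’s a minimal sketch using ffmpeg’s prores_ks encoder (profile 0 is Proxy) with uncompressed PCM audio. The paths are hypothetical, and a real dailies pass in Resolve would also handle color, audio sync, and burn-ins:

```python
# Minimal sketch: transcoding a camera original to Apple ProRes 422 Proxy
# with uncompressed PCM audio via ffmpeg (prores_ks profile 0 is Proxy).
# Assumes ffmpeg is installed; paths are hypothetical. A dailies tool like
# Resolve would also handle color, audio sync, and burn-ins in this pass.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "OCN/20180723/A001/Clip001.mov",    # camera original
    "-c:v", "prores_ks", "-profile:v", "0",   # ProRes 422 Proxy
    "-c:a", "pcm_s24le",                      # keep audio uncompressed
    "Proxies/20180723/A001/Clip001.mov",
], check=True)
```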

#6: Not Using Data Burn-ins

A tightly organized folder structure is one part of the workflow that allows for quick recall among different collaborators, but it’s not the only part. Data burn-ins on the proxies are also crucial. (Data burn-ins are the metadata and/or timecode information related to a clip, burned onto the image itself.)

The folder structure discussed above allows a collaborator to identify particular shots, camera cards, and shoot days; burn-ins carry that same identifying information on every frame of the proxies.

Freethink’s practice is to include data burn-ins on the proxies of the following:

  • Clip name
  • SRC TC [Resolve’s term for the embedded timecode of the video track]
  • AUD TC [the embedded timecode of the audio track, in case the production sound mixer didn’t jam sync perfectly, or if something went wrong with the timecode in the camera]
  • Date [the date the clip was shot]
  • Card [the camera card assigned, e.g. “A001”]

With data burn-ins on all the proxies, any collaborator can quickly call up a shot in the Source Monitor and, within seconds, navigate to any frame in a whole project. This scales from a short commercial that might only have an hour of footage all the way up to a documentary with hundreds of hours of footage.
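
For illustration, here’s a minimal sketch of a burn-in pass using ffmpeg’s drawtext filter; in practice, Resolve’s Data Burn-In palette handles all of this for you. The clip name, start timecode, frame rate, and paths are hypothetical, and the sketch assumes an ffmpeg build with fontconfig so no explicit font file is required:

```python
# Minimal sketch: burning a clip name and running timecode onto a proxy
# with ffmpeg's drawtext filter. The clip name, start timecode, frame
# rate, and paths are hypothetical. Assumes an ffmpeg build with
# fontconfig, so no explicit fontfile option is needed.
import subprocess

burnin = (
    "drawtext=text='A001 Clip001':fontcolor=white:fontsize=24:x=20:y=20,"
    "drawtext=timecode='01\\:00\\:00\\:00':rate=24:"
    "fontcolor=white:fontsize=24:x=(w-tw)/2:y=h-th-20"
)

subprocess.run([
    "ffmpeg", "-i", "Proxies/20180723/A001/Clip001.mov",
    "-vf", burnin,
    "-c:v", "prores_ks", "-profile:v", "0",
    "-c:a", "copy",
    "Burnins/20180723/A001/Clip001.mov",
], check=True)
```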

#7: Not Reconciling Sizing Between Apps

For a long time, I didn’t understand the relationship between how different programs treated the reframing of clips. I would pull an XML from Premiere Pro into DaVinci Resolve, and the positioning data would be wildly inconsistent; some shots would match up perfectly, but others would be way too small, way too big, or placed completely wrong within the frame.

Then, I happened on Patrick Inhofer’s excellent series on Mixing Light that goes in-depth on how Premiere Pro and DaVinci Resolve treat resizing. For my particular workflow, this was a godsend. You should definitely go check out the full series from Mixing Light, but in short, there are particular combinations of settings across both applications which, if you adhere to them rigorously, will allow you to consistently and reliably transfer your repositioning data out of Premiere Pro and into DaVinci Resolve via XML.

Instead of spending hours and hours going shot by shot in Resolve and fixing positioning errors, I can now trust my shots to come into Resolve with proper placement.

This will obviously vary if you’re using other NLEs like FCP X or Media Composer—but the lesson here is that you can, and should, systematically run your own experiments to figure out a reliable combination of settings that works for you.

#8: Not Syncing Audio Losslessly

Recall that the purpose of the proxy workflow is organizational—to allow for each individual collaborator to get exactly what they need to get to work creatively. How does audio fit into this?

When syncing audio and then rendering your proxies, it’s best practice to take the interleaved [a.k.a. polyphonic] WAV files from the production sound mixer and losslessly rewrap them into the proxies.
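
In concrete terms, “losslessly rewrap” means stream-copying the PCM audio rather than re-encoding it. Here’s a minimal sketch with ffmpeg; the paths are hypothetical, and it assumes picture and sound already start in sync, whereas a real dailies pass in Resolve aligns them by timecode first:

```python
# Minimal sketch: muxing the production sound mixer's polyphonic WAV into
# a proxy with stream copy, so the PCM audio is rewrapped bit-for-bit
# rather than re-encoded to AAC. Paths are hypothetical; the sketch
# assumes picture and sound already start in sync, whereas real dailies
# syncing aligns them by timecode first.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "Proxies/20180723/A001/Clip001.mov",  # picture
    "-i", "20180723_Sound/S01T01.wav",          # polyphonic WAV, all ISO tracks
    "-map", "0:v", "-map", "1:a",               # video from input 0, audio from input 1
    "-c:v", "copy", "-c:a", "copy",             # no re-encode on either stream
    "Synced/20180723/A001/Clip001.mov",
], check=True)
```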

It’s tempting to transcode to a lossy, compressed audio format like AAC, but it doesn’t really accomplish anything. By losslessly rewrapping the uncompressed audio, not only will the offline editor have access to each individual microphone’s own isolated track, but when it comes time to conform the sound files for sound mixing and sound design, there won’t be any additional work in tracking down the original files from the production sound mixer’s field recorder.

Relative to video files, audio files—even uncompressed audio files—are tiny, so transcoding into a lossy format like AAC serves no purpose other than to create pointless additional work and to waste time. If you’ve already set up your storage to be able to handle video files, you should still have plenty of storage space for uncompressed audio.

Within the Avid suite, there are indeed tools to go fetch the original field recorder files, but if you just include all of the uncompressed ISO tracks, you can skip that step altogether.

Conclusion

The offline-online workflow, when properly implemented, is robust and has been serving filmmakers well for about a century. It enables collaboration and gains from the division of labor. In the modern age of file-based workflows, however, we have to keep track of much more than physical reels of film. As you craft and refine the offline-online workflow for your next project, heed these warnings and don’t fall prey to these mistakes. Your collaborators will thank you for keeping everything running smoothly and efficiently.

If you want to learn more about offline-online workflows, check out Frame.io’s Workflow: The Definitive Guide to Film and Video Workflows, launching December 11. Sign up below to be notified when it launches.

Seth Goldin

Seth Goldin is Director of Production Technology and Workflow at Freethink Media. He oversees the post-production for Freethink's short-form documentaries, from digital dailies through finishing.