
How to Keep Your Sound Editor Happy (According to a Sound Editor)

With workflows getting more complex and post-production schedules getting tighter, it is more important than ever to have a clear, well-thought-out sound workflow in place. It is particularly crucial to establish this workflow with your sound editor well before post-production starts.

As a sound editor and re-recording mixer for television and film, it is often my responsibility to interface with the picture department to make sure I get what I need to do my job properly. The less time I spend dealing with turnover issues, the more time I can devote to making a project sound amazing.

This article collects a set of best practices for the turnover to sound, particularly as it applies to larger film and television projects.

Digital Field Recorders

In order to understand post-production audio workflows properly, we must start on the set and understand how the audio gets recorded. Today, production sound mixers (PSM) have access to tools that mixers in the past would only have dreamt about. One of the main tools a PSM uses is a digital multitrack recorder. Many of these newer digital sound recorders have superior audio quality, timecode recording/jam syncing, and of course the ability to record more tracks simultaneously than ever before. In fact, it’s not uncommon to see track counts of 4, 8, 16 and more! These audio tracks typically include a boom mic (or several boom mics), isolated lavalier microphones (often called ISOs) as well as any plant mics placed around the set by the PSM.

Using some or all of these individual microphones, the PSM will also create an additional mixdown track commonly used for dailies and editorial. This mix track, along with the individual mics, is recorded to a polyphonic broadcast WAV file for use by the post-production team.
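The mixdown itself is conceptually just a gain-weighted sum of the individual mics. As a rough illustration (the `mixdown` helper and the gain values are hypothetical; a real PSM rides these levels live on a mixing surface rather than applying fixed gains):

```python
def mixdown(tracks, gains=None):
    """Sum several mono tracks (lists of float samples in -1.0..1.0)
    into a single mono mix track, applying a fixed gain per track."""
    if gains is None:
        gains = [1.0] * len(tracks)
    length = max(len(t) for t in tracks)
    mix = []
    for i in range(length):
        # sum every track that still has samples at this position
        s = sum(g * t[i] for g, t in zip(gains, tracks) if i < len(t))
        mix.append(max(-1.0, min(1.0, s)))  # clamp to full scale
    return mix

boom = [0.2, 0.4, -0.1]   # toy sample data, not real audio
iso1 = [0.1, -0.2, 0.3]
mix = mixdown([boom, iso1], gains=[1.0, 0.5])
```

The real recorder writes this mix as one channel of the polyphonic WAV alongside the untouched ISOs, which is why post can always go back to the raw mics.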

Typical layout for a Sound Devices SD788t


In addition to the audio tracks, these polyphonic WAV files include metadata, which is essential to most post-production workflows. This metadata can include the project name, scene, take, shoot date, sample rate, bit depth, timecode, tape ID, channel name, and many other useful bits of information.
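Much of that metadata lives in the Broadcast WAV `bext` chunk defined by EBU Tech 3285 (scene and take usually travel in a separate iXML chunk). As a rough illustration of how software pulls it out, here is a minimal RIFF-chunk parser; the `read_bext` helper is hypothetical and error handling is kept to a bare minimum:

```python
import struct

def read_bext(path):
    """Scan a broadcast WAV's RIFF chunks for the 'bext' chunk
    (EBU Tech 3285) and return a few fixed-position metadata fields."""
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a RIFF/WAVE file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # reached EOF without finding a bext chunk
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                data = f.read(chunk_size)
                # TimeReference: 64-bit sample count since midnight
                time_ref = struct.unpack_from("<Q", data, 338)[0]
                return {
                    "description": data[0:256].split(b"\0")[0].decode("ascii", "replace"),
                    "originator": data[256:288].split(b"\0")[0].decode("ascii", "replace"),
                    "origination_date": data[320:330].decode("ascii", "replace"),
                    "time_reference": time_ref,
                }
            # skip other chunks; RIFF chunks are word-aligned
            f.seek(chunk_size + (chunk_size & 1), 1)
```

Dividing `time_reference` by the sample rate gives seconds since midnight, which is how a DAW reconstructs the original timecode of each take.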

Once the audio files are turned over to editorial, a decision must be made on how to ingest the WAV files into the NLE (non-linear editor, i.e. your editing software) for use during the edit while keeping the metadata completely intact. This is important because the metadata is later used by the audio post-production department when it comes time for the turnover to sound (more on this later).

A few notes about ingesting your audio and specific NLE systems.

  • I suggest that you sync your field recorder audio files in your NLE and not in Resolve or any other third-party software utility during the transcode, which I know is a common workflow. Often, these utilities are only good for making “review” dailies and do not pass the original sound metadata into the synced clips, making re-linking later impossible.
  • One of the advantages of using Media Composer to sync your production audio is the AutoSync tool. This tool allows you to keep your metadata intact throughout the edit process and gives you the option to easily switch your audio from the mix track to the individual boom, ISO, or plant tracks should you need them.
  • By contrast, when syncing clips in Premiere using the merge command, much of the metadata is stripped out, making a conform later impossible. Adobe, please fix this! I recommend using the multiclip functionality when syncing in Premiere instead.
Sync Selection

Conforming the Production Audio Tracks

In an ideal workflow, after ingest and syncing, the video editor chooses to use only one track (often the boom or the mix track) from the original production audio. They then remove any scratch audio contained in the video file and begin the edit. Once the edit is locked, the video editor exports a QuickTime file with reference audio as well as an AAF with externally linked WAV files. They send these files to the sound department along with the original field recorder files from the shoot.

The sound department, usually the dialog editor, then imports the AAF into their digital audio workstation (DAW) and uses it to conform the production audio to the editor’s tracks. This re-creates the video editor’s audio timeline in their DAW using the field recorder files, giving the sound department access to all the tracks recorded on set.

Field Recorder
The Pro Tools field recorder match criteria window, which lets the sound editor choose which metadata to use when matching the sequence audio to the field recorder audio.

So why would you (and your sound department) prefer to conform from a single production track rather than using all of the individual boom, ISO, and plant mics from the original production audio files directly in your timeline? Assuming your metadata is intact, there are some very practical reasons:

  • As stated earlier, most production audio recorders today can record many more tracks than they have been able to in the past. It can be cumbersome in NLEs to deal with these 4, 8, 16 or more tracks of audio all the while managing a complicated video edit. Using so many tracks can also increase the likelihood that you will run out of track space on your timeline, especially after adding music and sound effects.
  • Your sound department can link your single mixdown track to the full tracks from the production audio files. This ensures that they have everything that was recorded on set and gives them access to the full metadata that was entered by the production sound mixer. This gives the sound editor the ability to pick alternate mics or takes and gives you more options during the mix.
  • It also allows the dialog editor to have the full length of the files, not just whatever audio is included in the handles from the AAF. This can be helpful for your dialog editor when looking for room tone or alternate takes.

Best practices for your audio timeline

There’s nothing worse than sending your audio post team a cluttered and disorganized AAF filled with low-res audio. Try to keep things as tidy as possible:

  • Separate temp VO, dialog, sound effects, and music onto their own sets of tracks. This means keeping mono and stereo elements separate as well. Lay them out from top to bottom in the following order: mono scratch VO, mono dialog, mono sound effects, stereo sound effects, and stereo music. With stereo tracks, make sure you remain consistent with which track is left and which track is right. In other words, don’t do this:
Not cool, keep your lefts and rights consistent.
  • Avoid using adaptive tracks in Premiere as they can create duplicate tracks when exporting the AAF.
  • You never know which audio from your edit will make it to the mix stage, so make sure you have the best possible quality from the beginning. Use the highest-quality VO, sound effects, and music you can get. This means no MP3 or AAC-encoded files. See if you can record your scratch VO on anything other than a smartphone. Yes, I know… I said temp VO. It never really is temp audio, is it? I can’t tell you how many times a director has asked me to make temp VO (which they recorded on their iPhone in the edit room) match audio recorded in a professional studio. Also, get high-quality WAV or AIFF files from your sound effects or music source whenever you can, rather than an MP3 or AAC. You might not hear a difference in the edit room, but you will definitely hear one on the mixing stage.
  • Be sure there are no muted audio regions on your timeline. If you want to keep a region on the timeline but you don’t want it audible, pull the volume on the region all the way down. Muted audio can often cause errors when exporting the AAF.
  • Whenever possible, avoid using keyframes (volume automation) for fades, as they can cause unwanted clicks and pops. Use the dissolve/fade tool instead. All other automation is welcome and encouraged.
The dissolve/fade tool is your friend.
  • Start your sequence at 00:58:00:00 and, to help maintain sync, place an audio 2-pop (generally one frame of 1 kHz tone) on each track of your sequence, corresponding to a visual cue such as an Academy leader. Depending on your project, the 2-pop should fall on either 00:59:58:00 with FFOA (first frame of action) at 01:00:00:00 (TV) or 01:00:06:00 with FFOA falling on 01:00:08:00 (film). If splitting out to reels, make sure the reel number corresponds to the hour of the timecode. As an added safety, place an additional 2-pop at the tail of the project or at the end of each reel.
See the 2? That’s where your 2-pop goes.

Exporting the AAF and Reference video

After finishing your edit, you’ll need to export an AAF of your NLE’s audio timeline. Here are my suggested best practices:

  • Duplicate your sequence and delete any video tracks. Copy any audio clips that have effects on them to another track and render them, as most audio effects don’t translate over AAF. Then turn the volume all the way down on the original clips. This gives the sound department access to not only your effected audio but the original audio as well.
  • Export an audio-only AAF with consolidated external WAV files (embedding can strip needed metadata from the audio).
  • Export at 48 kHz/24-bit with at least 10-second handles.
  • Often an assistant editor will export an additional AAF with only the production audio tracks referenced directly to the field recorder files. This saves the sound editor from having to do their own conform, but take great care that you include all tracks and that the sync is spot on.

Now you need to export a reference video file for the sound department:

  • Make sure there is a timecode burn-in window matching your sequence timecode. Make it as small and as unobtrusive as possible while still being easy to read, preferably outside the video frame.
  • Pick a computationally efficient codec such as DNxHD or ProRes Proxy at 1920×1080. You’ll want to pick something that will result in an approximate 2 to 8 gigabyte file per 20 minutes of program material.
  • Be sure to include the rough mix from your NLE on the QuickTime. This is useful for the sound department to check sync against the AAF. If the QuickTime will also be used for ADR sessions, pan the production audio to the left side and everything else to the right.
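That 2-to-8-gigabyte-per-20-minutes target is just bitrate arithmetic. A quick sanity check, assuming ProRes 422 Proxy runs at roughly 45 Mb/s at 1080p (a figure in the ballpark of Apple's published data rates; check the white paper for your exact frame rate):

```python
def file_size_gb(bitrate_mbps, minutes):
    """Approximate file size in gigabytes for a given average
    video bitrate (megabits per second) and duration in minutes."""
    bytes_per_second = bitrate_mbps * 1_000_000 / 8
    return bytes_per_second * minutes * 60 / 1_000_000_000

size = file_size_gb(45, 20)  # ~6.8 GB, comfortably inside the 2-8 GB window
```

Running the same arithmetic in reverse tells you the bitrate ceiling: an 8 GB cap over 20 minutes works out to roughly 53 Mb/s.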


Copy your exported AAF(s) and their folders of media files to a separate hard drive for delivery. Include the QuickTime reference file as well as ALL of the production audio files from the production sound mixer, along with their notes. I also like to ask for picture and sound EDLs (CMX 3600), the director’s and/or editor’s notes, a list of character names and the actors who played them, and an up-to-date lined script. As each production is different, talk to the audio department to see what additional materials they prefer.


As mentioned, it’s critical to talk to your audio post-production department as early as possible to plan your workflow. This will get everyone on the same page and help your sound department make your project sound its best.

This guide is based on my personal experience and is not the end-all, be-all of audio post-production workflows. There is no single right or wrong here, and as with any technology, things move fast. I welcome any questions or comments below.

Thanks to Steve “Major” Giammaria, Brian Gates, John Purcell, and Avid Technologies for their input into this article.

Jeff Hinton

Jeff is a NYC-based sound editor and re-recording mixer working on television, film, and commercial projects. He has mixed films and TV shows for The History Channel, A&E, Discovery, SyFy, MTV, VH1, and many others. Additionally, his work can be heard in commercials for Nike, Pepsi, Dodge Ram, Mercedes Benz, Heineken, ESPN, Verizon, and others. You can listen to his work at: