Timecode and Frame Rates: Everything You Need to Know

If you’ve grown up entirely in the digital age, timecode might seem like a relic of broadcast TV and celluloid film. But it isn’t.

Timecode is a way of recording exactly what happened when, from notes on set to the cuts in the edit to the dialogue, music, and sound effects.

Think of it as a reference tool. When you browse the web, you wander from page to page, searching on Google or clicking on links in an email. Most of the time, you don’t pay much attention to the URL in the top bar because you’re focused on the page itself. But when you want to remember a page, or share a page with someone, or communicate about a page, then you need that URL which records the exact location. Timecode does the same for video frames.

Suppose you’re shooting a music video and you’re on the 15th take of one shot; all the takes look identical but there’s this one, perfect moment. Will you remember that spot after shooting 30 identical takes? Wouldn’t it just be easier to have an exact timestamp of the moment you want so that you can find it immediately once you’re in the cutting room?

If you’re using Frame.io to get detailed feedback and your client suggests shortening a cut, their comment appears on your timeline in precisely the right place. You know it’s the right place because of timecode. Timecode runs behind the scenes much of the time, but it’s the glue that holds everything together.

It also, crucially, helps keep the sound and the picture in sync during the shoot.

But why can’t I just sync everything in post?

Our digital tools are so sophisticated that it’s become routine to think that everything can be fixed in post. If you’re an editor, you know how false that premise is. You can’t fix everything in post—it just isn’t possible. And when you can, it’s often very expensive.

Yes, if your clips aren’t too long, there are applications you can use in post to sync audio and video, including FCP X’s built-in audio/video sync, Premiere Pro CC, Red Giant’s PluralEyes, or even Audacity. But no matter how advanced, the programs you’re using in post are looking at nothing other than digital references to the material. So when you see video and audio that aren’t synced up, or worse, an audio take and a video take that have been matched up in error but are totally unrelated, the computer only sees timecode and doesn’t know that there’s a problem.

Red Giant’s PluralEyes in conjunction with Adobe Premiere Pro CC.

Now suppose your non-synced audio and video footage weren’t recorded at the same frame rate, so your timecode doesn’t even run at the same rate between the two sources. What if, on top of that, your source material doesn’t have slates? You see where this is going.

With timecode, you might know what the obvious problems are right at the beginning, but most of the time many issues only become apparent after you’ve done a fair amount of work, in which case you end up having to redo what you already did or throw it out and start again. That’s a lot of lost effort, time, and money that you have to make up because deadlines don’t move and renders can’t go any faster.

A basic search online for “timecode problems” shows that it’s not foolproof and that you can run into issues both with timecode you thought was fine but isn’t, and with timecode that is fine but where other issues come up because of transcoding or unknown bugs with the programs you’re using.

For example, not long ago, QTChange was using a value of 23.98 instead of 23.976 to calculate its frame rate and timecode, which caused problems for its users. Premiere Pro also had an issue in which round-tripping audio files altered their start timecode. Those programs are written by human beings who can make mistakes and can’t think of every possible situation.

Remember that timecode is native to your files, so bad timecode is basically like a genetic defect.

You should regard sync in post as an absolute last resort and plan for the timecode you record on set to be as perfect as possible because you still might have problems with it in post anyway.

If you’re not on set but are involved in production prep, talk to whoever is going to be handling the camera about what settings they’ll be using, and make requests if you can.

If you’re not involved with the production at all before the material gets to post, talk to the crew who shot the material and get as much information as you can about the camera settings. You may not be able to prevent recording issues, but the more information you have, the better prepared you’ll be to work out any problems you encounter.

The upside is that if you read on, you should be able to set up your timecode to be as trouble-free as possible or at least understand what problems you might run into depending on how the footage has been shot. None of this guarantees that post will be problem-free but it should definitely reduce the number of issues you’ll encounter.

What is Timecode?

Timecode is a way of precisely labeling all the frames in a recording so you can know the exact location of a recorded point.

The primary timecode we’ll concern ourselves with is SMPTE, developed in the 1960s by the Society of Motion Picture and Television Engineers, hence the acronym. SMPTE timecode can be recorded with both audio and video signals.

Timecode is used to synchronize and reference all types of audiovisual media—video files, sound files, captions, visual effects, and more.

A normal timecode/SMPTE display reads HOURS:MINUTES:SECONDS:FRAMES


Hours, Minutes, and Seconds are numbered like they are on a regular clock. Seconds can be divided up into frames, which are single images and the smallest increment you can have in timecode.

By giving every frame a unique identifier based on the length of the recording or the time at which it was recorded, you can find any specific frame in a recording if you have its timecode reference. But timecode can only count in whole rather than fractional frames.
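To make that labeling concrete, here’s a minimal sketch in Python of how a frame count maps to non-drop timecode (the function name and structure are mine, purely for illustration):

```python
def frames_to_timecode(frame_count, fps):
    """Label a frame number as non-drop HH:MM:SS:FF timecode.

    fps must be a whole number of frames per labeled second (24, 25, 30...).
    """
    frames = frame_count % fps
    total_seconds = frame_count // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# One hour of true 24fps material is 24 x 60 x 60 = 86,400 frames:
print(frames_to_timecode(86400, 24))  # 01:00:00:00
print(frames_to_timecode(86399, 24))  # 00:59:59:23  (last frame of hour one)
```

Note that the frames field is the smallest increment: there’s no fractional frame, which is exactly why the odd NTSC rates cause the trouble described below.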

Depending on the project you’re working on, there are two primary ways of running the timecode: Free Run and Record Run.

Free Run Timecode is used when you want to know the actual time at which events that are being recorded occurred, so it’s useful on a documentary or on music or sports events that last a few days. In Free Run, the recording device runs timecode continuously whether you are shooting or not. The timecode runs at the frame rate set on the device. It’s common to set the clock to the time of day, but it’s important to remember that because the timecode counts in frames and regular clocks do not, there may be drift between the camera’s clock and the actual time of day.


Free Run is now commonly used when you have multiple cameras and audio recorders so you don’t have to worry about stopping and starting all your devices at precisely the same time.

In Record Run, the recording device runs timecode only when the camera is recording. It counts only the frames recorded so that the total runtime is a reflection of the total usable footage. Record Run is used less frequently these days but is a good option on shoots with a single camera and audio recorder. Many digital cameras simply output TOD (Time of Day) timecode and do not have a Record Run option.

Once you know if the timecode is going to be in Free Run or in Record Run, the next step is identifying what frame rate to use.

Picking the Right Frame Rate

Different parts of the world use different timecode frame rates. The most common ones are:

  • 24 frame/sec (film, ATSC, 2k, 4k, 6k)
  • 25 frame/sec (PAL (used in Europe, Uruguay, Argentina, Australia), SECAM, DVB, ATSC)
  • 29.97 (30 ÷ 1.001) frame/sec (NTSC American System (US, Canada, Mexico, Colombia, etc.), ATSC, PAL-M (Brazil))
  • 30 frame/sec (ATSC)

Why the different frame rates around the world? Because originally television was only live and not recorded, so the only way to ensure sync between studio cameras and home TV sets was to sync the signal to the electrical mains. Mains were 60Hz (30fps) in the US and 50Hz (25fps) in Europe. But television was originally in black and white, so when color came along, SMPTE engineers wanted to keep the color signal backward compatible with black and white TVs. To do that, they had to “slide” a color signal between the existing black and white signal and slightly alter the frame rate from 30fps to 30/1.001 = 29.970fps to avoid artifacts, thereby creating the NTSC color standard.

Because the smallest increment in timecode is a whole rather than a fractional frame, timecode running at 29.97fps cannot account for the 0.03 frames that are missing in every second; so a device running timecode at 29.97fps runs a little slower than a normal clock.

It’s important to remember that, even though they are related, timecode and frame rate are not the same thing. Timecode is a way to label frames in a recording and frame rate is the speed at which images have been recorded or are played back.

Since the NTSC standard had a peculiar frame rate, a special kind of timecode had to be invented so that editors would know how much real time had passed with a simple frame-labeling method. This is how Drop Frame Timecode was created.

What is Drop Frame Timecode?

Remember that the NTSC frame rate is 29.97fps instead of 30fps, which means that .03 frames are unaccounted for every second. Since timecode can only count in whole frames, after an hour there should be 30fps x 60sec/min x 60min/hr = 108,000 frames. Because NTSC is 29.97fps, after an hour there will be 29.97fps x 60sec/min x 60min/hr = 107,892 frames.

So there’s a discrepancy of 108 frames in NTSC, which means that after one hour of real time, the timecode on your recording would be behind by 3.6 seconds (108 frames ÷ 30 frames/sec = 3.6 sec). The timecode count would read 00:59:56:12 rather than 01:00:00:00.
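You can check that count with a couple of lines of Python (illustrative only): labeling the 107,892 recorded frames with 30-frame, non-drop seconds lands short of the one-hour mark.

```python
# After one hour of real time at 29.97fps, only 107,892 frames exist.
# Label them with 30-frame, non-drop seconds:
seconds, ff = divmod(107_892, 30)
mm, ss = divmod(seconds, 60)
hh, mm = divmod(mm, 60)
print(f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}")  # 00:59:56:12 -- 3.6 seconds short
```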

Drop Frame Timecode works by dropping two frame numbers from each minute except every tenth minute. Your recording is unaffected because it drops frame numbers, not actual frames! Because it drops those numbers, at one hour in real time, your timecode will increase by exactly one hour.

It looks like this:

Note that drop-frame timecode is written with a semicolon before the last number, instead of a colon.
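The drop-frame counting scheme described above can be sketched in a few lines of Python. This is an illustrative implementation of the standard SMPTE pattern (skip two frame numbers at the start of every minute except each tenth minute); the function name is mine:

```python
def frames_to_df_timecode(frame_count, fps=30, drop=2):
    """Label a frame number as 29.97fps drop-frame timecode (HH:MM:SS;FF).

    Drops frame *numbers*, never actual frames: the first `drop` numbers
    of every minute are skipped, except in minutes 0, 10, 20, 30, 40, 50.
    """
    frames_per_min = fps * 60 - drop           # 1,798 labels in a "dropped" minute
    frames_per_10min = fps * 600 - drop * 9    # 17,982 labels per ten minutes
    tens, rem = divmod(frame_count, frames_per_10min)
    if rem < fps * 60:
        extra_minutes = 0                      # still in the no-drop (tenth) minute
    else:
        extra_minutes = 1 + (rem - fps * 60) // frames_per_min
    # Add back the skipped numbers, then format as if counting normally.
    total = frame_count + drop * (tens * 9 + extra_minutes)
    frames = total % fps
    seconds = (total // fps) % 60
    minutes = (total // (fps * 60)) % 60
    hours = total // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d};{frames:02d}"

# One hour of real time at 29.97fps = 107,892 recorded frames:
print(frames_to_df_timecode(107892))  # 01:00:00;00 -- back in step with the clock
print(frames_to_df_timecode(1800))    # 00:01:00;02 -- numbers ;00 and ;01 were skipped
```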

Logically, you use Drop Frame (DF) timecode when you shoot material at 29.97fps or 59.94i (59.94 interlaced) because it’s meant for TV broadcast. The general confusion around all these identical-looking frame rates means that sometimes people still refer to this as 30fps or 60i even though that’s technically incorrect. If you look back at the various frame rates and the standards they apply to, the only one left that is 30fps is ATSC, which is compatible with 29.97fps / 59.94i. 30fps and 60i are uncommon but unfortunately some recording devices do record in those formats, so it’s important to make sure that your frame rate is exactly what you think it is.

Non-Drop Frame Timecode is straightforward: for every frame of recording, a timecode frame is recorded. The ratio of frame recording to timecode count is 1:1.

Non-Drop Frame timecode.

Why am I hassling you with calculations? Because the more familiar you are with what different kinds of timecode look like, the more quickly you’ll be able to tell if there’s a problem with your source material.

When you make a decision about what timecode to use, you have to take the circumstances into consideration. For example, if you’re shooting a long event in Free Run because you’re keeping a log of the real time at which different things happened, and you’re shooting at 29.97fps, you’ll have to use Drop Frame timecode so that your timecode clock stays in sync with real clocks.

If you used Non-Drop Frame timecode, you’d be behind by 3.6 seconds every hour, which means that after 24 hours your timecode would be 24hrs x 3.6sec/hr  = 86.4 seconds behind—nearly a minute and a half!

You can use Non Drop Frame timecode for most other situations which don’t involve the 29.97fps NTSC standard.

If you’re using multiple recording devices on the same shoot, make sure they’re all set to the same kind of timecode—DF or NDF. Any situation in which you mix them will result in massive post headaches for the simple reason that nothing will line up and you’ll be spending your time trying to sync footage that has no matching timecode. That may not seem like a big deal if you have slates, but it still is; think about the number of takes you’d have to manually sync on a whole show.

Occasionally, when using multiple devices that you can’t sync while recording, you might have to identify the weakest link in the chain and adjust your timecode settings on the other devices to match the weakest one. Your aim is to have timecode that counts in the same increments across your recording devices. Camera timecode that doesn’t match clock time may be annoying, but camera timecode that doesn’t match another camera can be a nightmare.

Drop Frame and Non Drop Frame timecode are relatively straightforward once you know how they came to be and what you use them for. But what about 23.98 vs 24fps; what’s the story there?

23.98fps vs. 24fps

We’re all familiar with the 24fps standard because we’ve all seen movies made on film. The idea that 24 frames go into a second of filmed material is so deeply ingrained that the fractional alternatives can cause major confusion for people getting into post.

Movies were shot on film at a rate of 24fps, but video is broadcast at 29.97fps (the NTSC standard). In order to properly fit the 24fps of film into a 29.97fps video signal, you first have to convert the 24fps frame rate into 23.976fps.

So 23.976fps, rounded up to 23.98fps, started out as the format for dealing with 24fps film in an NTSC post environment.

23.98fps now exists as a standalone HD video format. But logically it’s only used in NTSC countries so you wouldn’t find it in a PAL 25fps country.

Just to get an idea of the numbers: with a camera shooting in Free Run at 23.98fps, the drift will also be about 3.6 seconds after one hour of real time, so the timecode will read roughly 00:59:56:09 instead of 01:00:00:00.

Though it would be nice to be able to compensate for the 0.024 frames unaccounted for every second, there is no Drop Frame standard for 23.98fps because there’s no pattern of whole frame numbers that can be dropped from the timecode to make it fit neatly into real time. We got lucky with 29.97, but it just doesn’t work with 23.98.
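A little arithmetic shows why (Python, purely illustrative). For 29.97fps, the deficit is 1.798 frames per minute, which the drop-2-per-minute-except-every-tenth pattern absorbs almost exactly (18 numbers dropped per 10 minutes versus 17.982 needed). For 23.976fps, the deficit is 1.439 frames per minute, 14.386 per ten minutes, and no whole-number drop pattern comes out even:

```python
# Deficit between the nominal label rate and the actual NTSC-style rate.
for label_fps, actual_fps in [(30, 30000 / 1001), (24, 24000 / 1001)]:
    deficit_per_min = (label_fps - actual_fps) * 60
    deficit_per_10min = deficit_per_min * 10
    print(f"{actual_fps:.3f}fps: {deficit_per_min:.3f} frames/min, "
          f"{deficit_per_10min:.3f} frames/10min")
# 29.970fps: 1.798 frames/min, 17.982 frames/10min  -> ~18, droppable
# 23.976fps: 1.439 frames/min, 14.386 frames/10min  -> no clean whole number
```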

When using an external sound recorder, it too must have 23.98fps as an available choice, or there will be drift between the sound and picture.

Many older sound recorders only have real-time timecode clocks or can only do 24fps timecode. If that’s the case, then you should probably shoot at 24fps instead of 23.98fps to help keep the audio and picture timecodes in sync.

If you have to shoot video at 23.98fps but your sound recorder can’t record at 23.98fps, there is a workaround which I’ve included at the end of the article.

Whatever you do, be consistent. Don’t mix footage with different frame rates.

If possible, check what your post options are before you shoot to optimize your post time instead of spending a lot of time and money on conversions.

If you’re not part of the production before post, try to find out everything you can about how the material was shot before you do anything with it.

Let’s look at some of the physical tools available so that if you have any say over what’s happening on set, you can ask the proper questions, make the right kinds of requests, or even operate the recorders yourself. If you don’t, you can still ask if these tools were used as long as you understand what they are.

True Sync: Timecode In/Out and Genlock

Again, timecode is a way of labeling frames in a recording. When handled properly, it can be used to sync devices while shooting, but that’s not its primary purpose. The terms sync and timecode are often used interchangeably, but timecode alone is not a reliable way of maintaining sync between devices.

Why not? Because many cameras and audio recorders have quartz crystal or similar kinds of clocks which are neither highly precise nor consistent across devices. This means that two different cameras may count a second slightly differently. It doesn’t take long, sometimes as little as half an hour, for two devices to drift apart enough to have visible sync issues. This quickly becomes a problem considering one frame of drift is enough to notice a lip sync error.

It’s less significant with a single camera (although still possible between your camera and audio recorder), but can be quite dramatic when using multiple cameras.

Most cameras have a timecode in/out port. To connect multiple cameras through their timecode in/out ports, you first have to set them to “Free Run” timecode. The simplest way to roughly sync two cameras is to connect the primary (master/hero) camera’s timecode out port to the secondary (slave) camera’s timecode in port. But remember what we just said about drift between different devices? Once both cameras start running, the secondary camera will run timecode however it normally does, which is why you can still have drift between the two cameras. This is where Genlock comes in.


Just a note before we start on Genlock: Some of the newer cameras don’t have timecode in/out ports. If they don’t and you need to record sync sound on a separate audio recorder which has a timecode out port, you can record the timecode output from the audio recorder as an audio signal on your camera’s audio tracks. While these won’t line up perfectly with your video recording, they will be easier to match up in post than if you leave the two devices running timecode independently of each other. Remember this in case you know the cameras being used don’t have TC In/Out ports and the camera operator isn’t familiar with this trick.

Genlock stands for generator locking. It is sometimes referred to as Sync Lock. Genlock sends out a regular, metronome-like “beat” to all of the cameras. The cameras use that signal instead of their own internal clocks, preventing possible drift from unreliable clocks. So Genlock synchronizes frames. Timecode by itself doesn’t synchronize anything; it’s a reference tool used to sync material in post after the recording devices have been synced with Genlock. Hopefully that difference is now clear.

A sync pulse generator.

By synchronizing frames, Genlock keeps multiple devices from drifting apart. So with multiple cameras, audio recorders, etc. for true sync, you have to use both the timecode in/out ports and the Genlock port. All the recording devices have to be fed a common timecode source and a sync (beat/pulse) source that is locked to the timecode. Since the sync and master timecode are locked, the cameras can’t run their own slightly different timecodes and there can’t be drift.

The downside is that all the recording devices have to be hardwired to the sync and timecode source. With multiple cameras, that can mean a lot of cable and a lot of camera calibration if you’re using different length cables. This isn’t a problem in a studio environment but it can be very tricky when shooting on location (which is a good reason to double check what was done if you weren’t on set). The best scenario is to connect each camera to a reliable sync device such as an Ambient Recording Lockit Box or a Sound Devices recorder with sync output.


You might be wondering why I haven’t mentioned jam-sync. Jam-sync refers to loading (jamming) the timecode from a master device into a slave device, which then continues to run timecode on its own internal clock; if the master’s timecode signal drops out, the slave fills in with its own generator. So jam-sync is not necessarily true sync if there’s any discrepancy between the slave’s timecode and the master device’s timecode. Each device is still relying on its own internal clock, so devices that are jam-synced have to be re-jammed frequently, otherwise their timecode starts to drift apart.

It’s important to find out if TC In/Out Ports and Genlock were used on a multiple camera shoot if you weren’t on set because you’ll know right away to expect drift between the cameras.

If it’s impractical to hardwire all your recording devices to your sync and timecode source, you can try to feed timecode to your recording devices using a wireless audio transmitter. And if you weren’t on set while wireless timecode was used, here’s what you should know about it before you start working on the material in post.

Wireless Timecode

Timecode such as SMPTE 12M LTC can be passed as an analog audio signal which makes a specific, recognizable sound when played through speakers. A number of companies make apps and other products that promise fantastic wireless timecode and sync. It’s a developing field which will hopefully become more reliable as wireless networks become more reliable.

In the meantime, it’s good to remain slightly skeptical for the following reasons:

  • As we all know from the use of our cellphones and other wireless devices, wi-fi networks aren’t foolproof and there can be a loss of signal when audio or video is sent wirelessly.
  • More importantly, if the timecode signal drops for any reason, the camera will revert to its internal clock which may cause any of the drift problems discussed above.
  • If there are only small signal drops, the problem may not get noticed until it’s too late to be able to do anything about it.
  • If anything goes wrong, troubleshooting over a wi-fi network without timecode hardware may make any already existing problems worse.

On the other hand, companies like Ambient make increasingly solid and reliable wireless hardware based on temperature-compensated, voltage-controlled crystal oscillator (TCVCXO) technology, which drifts by less than one frame per day of shooting.

Again, if you’re on set, you have to weigh your options and decide what’s best for you under the circumstances.

To be able to help yourself before post or in post where timecode is concerned, you need to:

  1. Understand the difference between timecode and sync.
  2. Understand what frame rate will be or was used.
  3. Understand how you can best match your video frame rate to your audio frame rate, whether on set or if you need to do a conversion on some of your material in post.
  4. On set, have a reliable timecode source to avoid drift: learn not to simply rely on your devices’ internal clocks, understand how to connect your timecode in/out ports, and use Genlock for true sync. In post, know to ask whether a timecode source was used, which one, and whether Genlock was used as well.
  5. On set, hope for better wireless timecode soon, since more and more prosumer cameras don’t have a Genlock port. In post, know to ask if wireless timecode was used, particularly if the material was shot with multiple cameras on location.

Addendum on shooting at 23.98fps

If you have to shoot video at 23.98fps but your sound recorder doesn’t offer the option of recording at 23.98fps, the closest thing you can do is record sound at 29.97fps Non Drop Frame if possible. That way, the two timecode clocks run slow at the same rate, so the drift between the video and the audio will be far smaller than if you run the sound recorder at any other frame rate or off a real-time internal clock.

Here’s a breakdown of how this works:

Shooting at whole frame rates means that your timecode stays locked to real time. If you shoot video at 23.98fps, over an hour your picture timecode will fall behind real time by 3.6 seconds (3 seconds and 14 frames, counted at 24fps). But if you record audio at a whole frame rate like 24fps, or at 29.97fps with Drop Frame timecode, the audio timecode will match real time and there will be no drift on the audio side.

  • Video timecode drift after 1hr @ 23.98fps: 3.6 seconds behind real time
  • Audio timecode drift after 1hr @ 24fps or 29.97fps DF: none

The difference between your video and your audio timecode will therefore be about 3.6 seconds after every hour of recording, and it keeps growing.

If instead you record audio at 29.97fps Non Drop Frame, the audio timecode also falls behind real time by 3.6 seconds per hour, exactly the same rate as your 23.98fps video timecode (both run slow by the factor 1.001).

  • Video timecode drift after 1hr @ 23.98fps: 3.6 seconds behind real time
  • Audio timecode drift after 1hr @ 29.97fps NDF: 3.6 seconds behind real time

Because the two clocks lose time at the same rate, they stay in step with each other. The only mismatch is in how each labels the fractional second (0.6 seconds shows as 18 frames at 30fps but 14 frames at 24fps, an apparent difference of 4 frames), and that mismatch doesn’t accumulate over time.

It’s not perfect, but it’s as close as you can get without shooting at the exact same rate of 23.98fps.
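As a sanity check on the addendum’s numbers, here’s a quick Python sketch (function and variable names are mine) of how much each timecode clock loses per hour of real time:

```python
def ndf_drift_per_hour(label_fps, actual_fps):
    """Seconds a non-drop timecode clock falls behind per real hour."""
    return 3600 * (1 - actual_fps / label_fps)

video_2398 = ndf_drift_per_hour(24, 24000 / 1001)  # 23.976fps picture
audio_2997 = ndf_drift_per_hour(30, 30000 / 1001)  # 29.97fps NDF sound
audio_24 = ndf_drift_per_hour(24, 24)              # true 24fps sound

print(f"video @23.98:     {video_2398:.4f} s/hr behind")  # 3.5964
print(f"audio @29.97 NDF: {audio_2997:.4f} s/hr behind")  # 3.5964 -- identical rate
print(f"audio @24:        {audio_24:.4f} s/hr behind")    # 0.0000
```

Because the 23.98 picture clock and the 29.97 NDF sound clock lose time at the same rate, the gap between them never grows; any other recorder setting drifts away from the picture at up to 3.6 seconds per hour.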

Thank you to Hilda Saffari for contributing this article.

Hilda Saffari is a media and technology consultant with experience in feature post-production and telecommunications. Her feature credits include the first 3D digital intermediate (Spy Kids 3D), Terminator 3: Rise of the Machines and Master and Commander: The Far Side of the World.

Interested in contributing?

This blog relies on people like you to step in and add your voice. Send us an email: blog at frame.io if you have an idea for a post or want to write one yourself.