The 11 Biggest Backup Mistakes That Editors Make

As filmmakers in the digital age, our data is our most valuable asset. If you lose your data, you lose your work, and possibly your professional reputation. And yet, for some reason, many editors and filmmakers don’t keep robust backups. The culprit is a simple psychological trap: backing up properly requires immediate, clear sacrifices (time and money) in order to protect against an uncertain future scenario. It’s easy to get lazy about backups. But hard drives fail. All the time. If you haven’t had a hard drive die on you yet, you probably haven’t been in the business for very long. This article isn’t going to teach you a comprehensive strategy for handling your backups, but I will cover what I feel are the most important concepts, listed in priority order. First, go watch this video. It’s quite entertaining, and it also shows you how real this danger is.

Alright, here are the big mistakes that you need to avoid:

1. You only have one backup

Did you watch the Pixar video? They nearly lost all of Toy Story 2 (which would have killed Pixar) when both their main system and their backup were lost at the same time. Actually, they didn’t fail at exactly the same time. The backup failed at some unknown point in time, and they only discovered the issue when their main system failed. If your backups are offline, you won’t know if they fail “silently.” Simultaneous failure happens. Always keep multiple backups.

What are the chances that two drives will fail?

It’s nearly impossible to calculate the exact likelihood that you will have two drives fail simultaneously because each hard drive model is different, but it definitely happens. The Toy Story example is perhaps the most famous one, but there are hundreds of stories of people losing data because they only had two copies. It’s always a trade-off between the security of your data and the inconvenience or cost of safety precautions, and the trade-off will depend on the value of your data. If I’m just playing around, shooting a project for fun that I will probably never publish, then I don’t worry so much. If I am doing work for a paying client, however, I will follow all of the principles in this article. You get to make your own decisions, but after reading this article, you will at least be making informed decisions.

2. Your backups are connected

If you have two copies of a file, and they are connected to the same machine (even via a network), you should not consider one of them to be a backup of the other because a single issue could erase them both. A virus could delete all of the files that your computer has access to, or a strong power surge could fry all of the devices you have connected to the same electrical system. Each of your backups should be electrically separate from the others.

3. Your backups are all in one place

At my last company, a gang of very professional burglars cleaned out our entire office: laptops, monitors, and (you guessed it) hard drives. Tens of thousands of dollars worth of equipment. They took everything that looked expensive (and most of it was), and we never got any of it back. Luckily, we had off-site or cloud backups of all of our most important data. In spite of all of the warnings around the internet, I still see people keeping all of their backups in the same building. No matter how many copies of your footage you have, if they are all in the same physical location, your data is very vulnerable. A fire, a flood, an earthquake, even a powerful magnet can destroy all of your backups at once. Always keep offsite backups.

4. Your RAID is your local backup

RAIDs are wonderful things. They give you extra transfer speeds, they allow you to combine multiple drives to simplify your setup, and they can also help protect against drive failure. It can be tempting to think that storing your data on your RAID gives you an automatic backup. The problem is that a RAID only protects against drive failure, which is only one of many ways that you can lose data. You could knock it over, or spill a cup of coffee into it. The logic board could fail, corrupting the data on both drives. Most RAIDs don’t even have the ability to recover from file deletion. If you click “delete”, your RAID will delete the file from both drives simultaneously. Some RAID configurations (like RAID 1 and RAID 10) provide robust protection against hard drive failure, which is wonderful, but consider that an added bonus. A RAID is still one device, and so it can fail all at once. A RAID is great, but you still need two other copies of your data.

5. You’re using RAID 5

RAID 5 is a particularly volatile RAID configuration that can trick you into a false sense of security. Because of how RAID 5 works, double failures (where two disks have errors simultaneously) are easy to miss until it’s too late and your data is gone. RAID 5 sounds amazing on paper, but in practice, it doesn’t even offer the promised protection against a single drive failure. Essentially, what happens is that one file on one disk gets corrupted silently. Neither you nor the system realizes this, because the rest of the files on that disk look just fine. You carry on happily. Then, a different disk fails. You stick in a new disk and then the system discovers the previous failure (which may have happened months ago), and then your entire rebuild process fails. In the best case, you lose that one file, and the rest of your data is fine. In the worst case, you lose all of your data. Don’t use RAID 5 unless you are comfortable losing the entire RAID.

6. You’re using RAID 0

RAID 0 offers wonderful speed and simplicity: with just two drives, you can double your read and write speeds (RAID 1, on the other hand, only doubles your read speeds, not your write speeds). A RAID 0 is a striped RAID, which means that every file is broken up into small pieces which are written alternately to each hard drive. This means that a RAID 0 with two drives gives you double the read and write speed of a single drive, but it also gives you double the chance of failure. If either drive fails, then you lose ALL of your data. A RAID 0 can serve a legitimate use case, but most people don’t realize that RAID 0 is extra-volatile. Not only does a RAID 0 give you no data protection, it is twice as likely to fail as a single hard drive. RAID 10 is an excellent alternative to RAID 0: it gives you fast reads and writes, plus the ability to recover from a drive failure. The downside is that it requires twice as many drives as RAID 0, which is exactly why RAID 0 is so tempting. It can be a great idea to use a RAID 0 because of its speed and simplicity, but if you do, you have to adjust your backup strategy accordingly, to compensate for the higher likelihood of a RAID 0 failure. Don’t use RAID 0 unless you have a robust backup system to recover from that higher likelihood of failure.

7. You don’t know which backups are where

You’re pretty sure that you have backups—lots of them. In fact, pretty much everything is backed up somewhere. You think. Maybe? If you don’t know where your backups are, that can create all kinds of problems. You could accidentally overwrite a backup that you need to keep. Your memory could be wrong, and you might not even have the backups that you think you have. You may have misplaced the drive, without realizing it. You could end up with two backups on the same drive (I have done this…). Keeping track of your backups is very easy—there’s no need to get complicated. Just keep a list of each project and the locations of its backups. I highly recommend giving each hard drive a unique name or number and writing it on the drive itself. That gives you a quick and simple mapping system. Keep track of your backups.
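Here is a minimal sketch of what that list can look like in practice: a plain-text manifest with one tab-separated line per backup copy, searchable with grep. All of the project names, drive labels, and paths below are made-up examples.

```shell
# backup-manifest.txt: one line per copy -- project, drive label, path.
# Every name here is a hypothetical example; use your own labels.
cat > backup-manifest.txt <<'EOF'
SmithWedding	DRIVE-07	/Volumes/DRIVE-07/smith_wedding
SmithWedding	DRIVE-12	/Volumes/DRIVE-12/archive/smith_wedding
CorpPromo	DRIVE-07	/Volumes/DRIVE-07/corp_promo
EOF

# List every copy of a given project:
grep '^SmithWedding' backup-manifest.txt
```

Because the drive label in the second column matches the name written on the physical drive, finding any backup is a one-line lookup.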

8. You’re not doing continuous project-file backups

Project files (the ones generated by your NLE) are so small nowadays that you’d be foolish not to keep running backups. If you use Dropbox or Google Drive (or another cloud backup service that uses watch folders), tell your NLE to put its auto-saves inside the backed-up folder. You can keep your main project file wherever you like, but your backups will be synced to the cloud. If you spill coffee on your computer, you still have all of your work saved online. In Premiere: You can change the autosave location in Project Settings under Scratch Disks. In FCP X: Go to Library Properties, under Storage Locations, click Modify Settings and then choose the backups folder. Media Composer and Resolve don’t allow the user to specify the location of autosaves; they are kept in a particular folder that can’t be changed. You could, however, keep the main project file itself in Dropbox, and it would upload a new version every time you saved the project. Or you could use a scripting tool like AppleScript (or PowerShell on Windows) to regularly copy the auto-backups from the default folder into your Dropbox/Drive folder. Takeaway: Point your auto-saves to Dropbox, Drive, or another cloud backup.
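If you go the scripting route on a Mac, the copy step can be as small as the sketch below. Both paths are examples, not real defaults: point SRC at wherever your NLE writes its auto-saves, and DST at a folder inside your synced cloud folder.

```shell
#!/bin/sh
# Sketch: mirror NLE auto-saves into a cloud-synced folder.
# SRC and DST are example paths -- adjust them for your setup.
SRC="$HOME/Movies/AutoSave"
DST="$HOME/Dropbox/ProjectBackups/AutoSave"

mkdir -p "$DST"
# -a preserves timestamps and permissions;
# --update skips files that haven't changed since the last run.
rsync -a --update "$SRC/" "$DST/"
```

Run it every few minutes from cron (or launchd) and your cloud service will pick up each new auto-save automatically.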

9. You don’t mount your memory cards in read-only mode

This tends to be a problem more for shooters and camera people than for editors, which is why it’s not at the top of this list, but I’m including it here because many editors do also shoot and have to deal with juggling memory cards. When you take the memory card out of your camera, you are in a very vulnerable state from a data perspective. Unless your camera can simultaneously record to two different memory cards, you only have one copy of your data in existence. Your first action should be to make a backup of that card, and when you do so, you should always mount it as read-only, if possible. This is one of the reasons why I love the SD card format. The convenient switch on the side of the card allows you to lock the card to read-only mode as soon as you remove it from the camera. Read-only mode protects you from errors that could damage the files on your card if you suddenly lost power or accidentally pulled the card out of your computer. It also protects you from accidentally deleting files from your memory card when you thought you were deleting files from a different folder on your computer. If you’re savvy with the terminal, you can mount any hard drive or memory card in read-only mode with these commands on a Mac. First, figure out which disk you’ve just plugged in.

diskutil list

And then unmount it and remount it with the readOnly flag.

diskutil unmount /dev/yourdisk

diskutil mount readOnly /dev/yourdisk

It’s also possible to mount a drive as read-only on Windows, but it’s a bit more complicated. Takeaway: Mount your memory cards as read-only if possible.
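For the curious, the Windows route uses the built-in diskpart utility from an Administrator command prompt. The disk number below is just an example; check the output of list disk to find the one you just plugged in.

```
diskpart
list disk
select disk 2
attributes disk set readonly
```

Note that this sets a read-only attribute on the disk itself, which persists until you clear it with attributes disk clear readonly.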

10. Your memory card is your backup

It can be very tempting to think of your memory card from your camera as a backup of your footage, and technically it is. For now. The problem is that, for most people, you need to keep using those memory cards, which means that you have to keep erasing them. If you have enough memory cards that you can afford to set them aside for long periods of time, and if you have a robust system for keeping track of exactly what footage is currently on each of your cards, then it can be okay to consider your memory card to be a backup of your footage. But in my experience, that is a rare situation and occurs mainly on higher-budget shoots. Takeaway: Your memory card should be a temporary backup only until you have time to copy to hard drives.

11. Your backups are in the cloud, and the download speed is slow

There are some excellent cloud services that allow you to back up very large amounts of data, and some people use them instead of backing up to hard drives. That can be a good strategy in some cases—but it’s important to consider the time it would take to download your files if your local copies fail. You might be able to store your 2TB of project data in the cloud, but if you can only download that data at 10 Mbps, it will take you 18 days to download it all! (Use these formulas to calculate the download time). If you’re on a tight schedule, that could cause huge problems. Takeaway: If you can’t download your files quickly, only use cloud backups for long-term archiving.
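The arithmetic behind that number: 2 TB is 16 trillion bits, and a 10 Mbps connection moves 10 million bits per second. You can check it in one line of shell arithmetic:

```shell
# 2 TB = 2 * 8 * 1000^4 bits; a 10 Mbps link moves 10 * 1000^2 bits/second.
# Dividing by 86,400 (seconds per day) gives the number of days.
echo $(( 2 * 8 * 1000**4 / (10 * 1000**2) / 86400 ))   # prints 18
```

Swap in your own data size and connection speed to see whether a cloud-only restore fits your schedule.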

Roll your own

At the end of the day, unfortunately, many people only start implementing robust backup practices once they’ve lost important data. I personally have had hard drives fail on 5 different occasions, and that doesn’t count the number of times that I’ve accidentally deleted files that I needed. These are recommendations, not rules. You can break them if you need to, but consider carefully before you do. Consider the value of the data that you are protecting. If your professional reputation is on the line, then handle your files with care.

  • Nice overview as always. Though I think it’s worth proposing (and especially encouraging) that RAID arrays are old-skool and irrelevant in the era of SSDs. You can certainly archive to spinning platters (arrayed or not), but almost any project you’re working on will fit on a 1TB SSD (or a few) which of course kills the throughput of even the most ambitious RAID array. And, it’ll cost less.

    • B Gracey

      I love the idea of SSD and can’t wait until it is as affordable as RAID, but I’m not able to recommend just any SSD in place of HDD, and for that reason I am not able to find an SSD solution that costs less than HDD RAID.

      I bought some Intel SSDs for film shoots and while Intel’s warranty was good (five years) 6 of the 8 units we had were bad before a year. The most common failure mode for those specific devices meant a complete loss of footage before any backups could be made. I stopped making warranty claims on them because the replacements weren’t any better.

      I use nothing but Samsung Pro series SSDs now, but they do not represent a less expensive alternative to RAID. I use unRAID for backup which keeps costs low, but has drawbacks in the speed department. Still, this combination has led to no loss of footage and no downtime, so overall it’s been a success.

      • Sorry to hear that you had SSD troubles, truly. But from my experience, it’s nearly unheard of, and with capacity being the only limitation, so long as you can swap cartridges (e.g., I use Vantec removable cartridges at native SATA-III with Samsung 850s), an SSD is simply faster and preferable to old-skool spinning platters in redundant/vulnerable RAID arrays (last-decade tech!).

    • David

It’s a good point – SSDs are getting cheaper and cheaper, and their throughput tends to be excellent. An SSD by itself doesn’t give you the ability to recover from a hard drive failure like many RAIDs do, though. I’m a big fan of SSDs, but I don’t think that they’re ready to *entirely* replace RAIDs – they can replace RAIDs for some cases.

Maybe I should do an article on RAIDs vs SSDs for video editors… hm…

  • gantico

I would add point 12. You don’t use any UPS and you never check its battery level: the batteries need to be replaced regularly over the years. A major cause of physical damage to HDDs and electronics in general is hiccups on the power line. Working under a good UPS is really fundamental, especially for external hard drives that are completely unprotected. Internal HDDs have at least a minimum of protection offered by the power supply with its fuse, but an uninterruptible power supply is a far better option in order to avoid the risk of any loss of power while a HDD is writing.

  • Luke Stirtz

Curious if anyone has recommendations for scheduled software backup solutions? (both mac and pc). I’ve used carbon copy before, but I believe it’s only for mac. Also curious to hear the pros/cons of using something like carbon copy (which simply mirrors the file structure onto another drive) vs. a software that consolidates each backup into a single file (which you would “unwrap” in the event that you need to retrieve something).

    Great post, thanks!

I’ve found the Windows options to be surprisingly poor. Native Windows Backup, AOMEI and EaseUS are really awful and overly simplistic. That said, cloud backup from Windows and Mac remains surprisingly affordable, even after CrashPlan abandoned their core market, with Backblaze becoming the unlimited provider of choice.

    • B Gracey

      I’ve had bad luck with many solutions so I wholeheartedly recommend using a solution similar to carbon copy where the files are mirrored separately. Single-file compilations where the archive checked okay a month prior had a habit of being corrupt by the time I needed them.

      At least when you keep files in their native, usable state on your backup you won’t lose everything if one or two files get corrupted on your volume.

      I like rsync and robocopy used with scheduled tasks for the flexibility but it means learning how to set up the tasks and options for each tool on your own. I’ve been using Syncovery on Windows recently and it seems to be half-decent with options for watch-folder type backups, which is nicer than trying to cobble that kind of backup together with scripts and schedulers.

      • Luke Stirtz

        Thanks for the input @bgracey:disqus @focuspulling:disqus
        I’ll look into those options: rsync, robocopy and Syncovery, as well as the backblaze cloud backup options.

  • Wout Boekeloo

    Great post David, thanks! But what about LTO solutions?