The 5 Most Powerful AI Tools in DaVinci Resolve (and When to Use Them)

It’s clear that machine learning (ML) and artificial intelligence (AI) have found a natural home in post-production workflows.

Software developers like Adobe, Blackmagic Design, and FilmLight have all been quick to add features that rely on these emerging technologies, like Speech to Text in Premiere Pro. And this trend shows no signs of slowing down.

In my daily work as a colorist and finishing artist, I’ve started regularly reaching for these tools in DaVinci Resolve, and thought I’d share five ways that you might introduce them into your own workflow (as well as a few examples of when they fall flat).

What smart tools are available in DaVinci Resolve?

Blackmagic Design started their AI journey with Neural Engine, introduced with DaVinci Resolve 16 Studio in 2019.

It was their first step into providing AI/machine learning tools within the application, and included automated effects features like Face Refinement, Auto Color, and Optical Flow Speed Warp.

With the release of DaVinci Resolve 17 Studio, Blackmagic Design took this several steps further with multiple Neural Engine-powered features, such as Magic Mask, Scene Cut Detection, Smart Deinterlacing, Smart Object Removal, and Smart Reframe.

So let’s take a look at my favorites.

1. Face Refinement

One of my favorite AI-driven effects within DaVinci Resolve has to be Face Refinement.

This nifty tool makes fast work of touching up skin, removing eye bags, enhancing and sharpening eyes, and even allowing a subtle relighting of the subject’s face (when appropriate).

When you first apply the Face Refinement ResolveFX and start its analysis, the tool tracks an outline of the talent’s face, allowing you to independently adjust facial features, which have been arranged in convenient sections.
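
Blackmagic doesn’t document what the Neural Engine is doing under the hood, but the first step, finding and tracking the face, is a classic computer vision problem. Here’s a minimal Python/OpenCV sketch of old-school face detection, just to illustrate the kind of analysis an effect like this builds on (the file names are hypothetical, and Resolve’s neural approach is far more capable):

```python
# A classical stand-in, not Resolve's Neural Engine: detect faces in a frame
# so a refinement effect could track and adjust them. Requires OpenCV
# (pip install opencv-python).
import cv2

# Haar cascade that ships with OpenCV; "frame.png" is a hypothetical still
# exported from the timeline.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Each (x, y, w, h) box is a detected face the effect could refine per feature.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.png", frame)
```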


Importantly, if the automated detection struggles with skin tones or facial features, you can override the effect manually, either by feeding it an alpha channel to tell it where the face is, or by tweaking the settings in the OpenFX menu to tell the software what the skin tone looks like.

The critically important aspect for me is access to individual effect controls, which offers quick and easy adjustments that ensure consistency between shots without too much fiddling on my part.

Unfortunately, the drawback of this effect is that the analysis process must be run manually on every single clip, which can dramatically slow down your workflow and, potentially worse, is easily forgotten, resulting in the infamous “floating eyes” output.


This occurs when the effect is copied from another shot and reuses the tracking data from that original shot.

2. Detect Scene Cuts

DaVinci Resolve may have had a scene detection workflow previously, but it’s nothing like the smarter Neural Engine-powered tool introduced with Resolve 17.

The previous Scene Cut Detection tool, available only on the media page prior to import, required a lot of hand-holding or a perfectly prepared edit decision list (EDL) from editorial, which I don’t always receive. Honestly, it wasn’t much better than going through with a razor tool and slicing, which I often found myself doing as a faster alternative, especially if the timeline included transitions.

Luckily, the new Detect Scene Cuts feature is fast, easy, and intuitive for anyone familiar with Resolve’s non-linear editing environment. Simply select the clip you’d like to detect cuts within on the timeline, go to the Timeline menu, and select Detect Scene Cuts.


A dialog box appears while Resolve processes, and once it’s complete you’ll see the new cuts on your timeline. If you need to add or remove a cut, you can easily do so using Resolve’s native tools after the AI-driven process finishes.
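
As far as I know, the Detect Scene Cuts command itself lives only in the menu and isn’t exposed to scripting. Once it has run, though, you can sanity-check the results with Resolve Studio’s built-in Python API. A minimal sketch (run from the Console, or externally with the scripting environment configured):

```python
# Minimal sketch: list the timeline items that exist after Detect Scene Cuts,
# using Resolve Studio's built-in scripting module.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
timeline = project.GetCurrentTimeline()

# Each detected cut splits the clip, so every item on V1 begins at a cut point.
for item in timeline.GetItemListInTrack("video", 1):
    print(item.GetName(), item.GetStart(), item.GetEnd())
```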

I can’t emphasize enough how useful this feature is for boosting workflow efficiency.

While it’s not as sexy as some of the other Neural Engine-powered features like Magic Mask or Face Refinement, Detect Scene Cuts has become a daily-use tool for me.

Whether it’s a few pick-up shots or a revised output from editorial, Detect Scene Cuts eliminates the need to round-trip between editing applications, which saves me time for more important creative tasks.

3. Magic Mask

If you ask me, DaVinci Resolve’s Magic Mask fits neatly into Arthur C. Clarke’s quotation about technology…

Any sufficiently advanced technology is indistinguishable from magic.

Once activated on a clip, Resolve’s Magic Mask requires the user to draw a stroke to identify one or more subjects to track throughout the shot. The AI-powered algorithm, presumably built on a convolutional neural network, then attempts to track the selected subject throughout the clip, completely isolating it for precise adjustment.
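
Blackmagic hasn’t published how the model works, but the stroke-seeded workflow has a classical ancestor. Here’s a rough Python/OpenCV analogue using GrabCut, where a hand-drawn stroke seeds the foreground isolation (file names are hypothetical); the neural version handles motion, occlusion, and fine edges far more gracefully:

```python
import cv2
import numpy as np

# Hypothetical inputs: a video frame plus a rough stroke painted over the subject.
frame = cv2.imread("frame.png")
stroke = cv2.imread("stroke.png", cv2.IMREAD_GRAYSCALE)  # white where user drew

# Everything starts as "probably background"; the stroke marks sure foreground.
mask = np.full(frame.shape[:2], cv2.GC_PR_BGD, np.uint8)
mask[stroke > 0] = cv2.GC_FGD

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(frame, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# Collapse GrabCut's four labels into a binary matte of the isolated subject.
matte = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
).astype(np.uint8)
cv2.imwrite("matte.png", matte)
```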


Magic Mask can take this several steps further, using its model of human features to intelligently isolate specific sections of the subject’s body—or even just the wardrobe or exposed skin tone on a specific area of the talent.

For example, you can choose to isolate highly specific areas of a subject, like only the exposed skin, only clothing, only the face, or only the torso. You can even be as specific as a hat or shoes, but it tends to fail with things it hasn’t been well trained on, such as animals.

I’ve saved a lot of time using a hybrid strategy to isolate a character’s white suit on a recent show, relying on the Magic Mask’s human identification to track the subject through challenging situations, and combining it with traditional isolation methods like luma qualifiers and Power Windows to further refine the results.
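
Conceptually, that hybrid combine is just matte multiplication. A toy sketch in Python, with hypothetical arrays standing in for the Magic Mask matte, the luma qualifier, and the Power Window:

```python
import numpy as np

# Hypothetical mattes, each normalized to 0.0-1.0 at the frame's resolution:
# the AI subject matte, a luma qualifier keyed on the bright suit, and an
# elliptical Power Window limiting the effect to the region that matters.
magic_mask = np.load("magic_mask.npy")
luma_qualifier = np.load("luma_qualifier.npy")
power_window = np.load("power_window.npy")

# Multiplying the mattes keeps only pixels all three methods agree on, which
# is how the qualifier and window clean up the AI matte's edges.
final_matte = magic_mask * luma_qualifier * power_window
np.save("final_matte.npy", final_matte)
```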

While it’s possible to achieve the same objective manually with Resolve’s qualifiers and Power Windows, having a computer do the heavy lifting is something I frequently take advantage of.

Not perfect

There are, of course, limitations. I’ve found Magic Mask to be slower and more computationally intensive than traditional isolation techniques.

In my experience, I’d estimate using a Magic Mask takes about three times as long as other methods, but that’s still faster than manually keyframing or rotoscoping in many circumstances. That makes it a great tool to keep in your toolkit when you’re on a tight deadline and don’t have time to roto.

Your mileage may vary, especially if you’re working on a low-powered machine. Most of my experience is with workstation-level hardware, with dozens of CPU cores and a beefy GPU with plenty of VRAM, but I still see the occasional stutter or loss of responsiveness.

If you’re on a less-powerful desktop or laptop with fairly low-spec hardware, I’d suggest avoiding these tools, as they’re far more likely to slow down your workflow than improve it.

That said, Magic Mask works very well alongside other tools, and I highly recommend pairing it with traditional isolation methods like qualifiers and Power Windows.

4. Speed Warp

One of my personal pet peeves about slow motion or retimed footage is when it stutters.

I immediately feel manipulated and it pulls me out of whatever I’m watching, so I like to keep it as smooth as possible.

Of course, a decision to slow down a shot might not be made until editorial, so you might only have 24 fps to work with, which isn’t ideal for speed ramps. Luckily, Resolve’s optical-flow-powered retiming combined with the Speed Warp motion interpolation mode should provide you with great-looking results. Well, as good as you’re going to get, at least.
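
To get a feel for what’s happening, optical flow estimates a motion vector for every pixel between two real frames, then warps along those vectors to synthesize the in-betweens. Here’s a deliberately crude Python/OpenCV sketch of that idea (hypothetical file names; Speed Warp’s Neural Engine version is far more sophisticated):

```python
import cv2
import numpy as np

# Hypothetical adjacent frames from a 24 fps clip we want to retime.
f0 = cv2.imread("frame_0.png")
f1 = cv2.imread("frame_1.png")
g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)

# Dense optical flow: a per-pixel motion vector from frame 0 to frame 1.
flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Backward-warp frame 0 halfway along the flow to approximate the in-between.
# This is a crude approximation; real interpolators also handle occlusions.
h, w = g0.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - flow[..., 0] * 0.5).astype(np.float32)
map_y = (grid_y - flow[..., 1] * 0.5).astype(np.float32)
midframe = cv2.remap(f0, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame_0_5.png", midframe)
```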

You can find these retiming and motion estimation settings within the clip attributes and project settings, but Speed Warp can only be set on a clip-by-clip basis within clip attributes.

It’s certainly not a fast render, but it provides a smooth retime that clients love, especially if they’ve been approving references based on the lower-quality methods such as nearest or frame-blending.

In finishing, I’m matching the offline edit as closely as possible, so I usually reproduce offline’s speed adjustments, smoothing things out only if the retime is feeling harsh.

5. Object Removal

Let’s say you’re working in post on this amazing television show, set in the Dark Ages.

Production was careful to avoid anything modern and shot with limited practical lighting. But in the rush to capture everything using as much natural light as possible, a few small mistakes were made. A boom mic snuck into view or a highly recognizable branded coffee cup was left in frame, for example.

Several years ago, this type of problem would automatically be a job for VFX, no matter the complexity. Today, our color and finishing applications have evolved quite a bit, offering a variety of tools that let a multi-faceted artist perform object removal during the color and finishing process, instead of requiring an entirely separate pipeline.

For example, DaVinci Resolve Studio offers AI-assisted object removal on the Color page for simpler fixes, with Fusion available for more advanced, manual work.

In my work, I’ve recently used the basic Object Removal ResolveFX to remove logos, signs, and address numbers from buildings, along with elements from the sky.

This effect utilizes the Neural Engine to generate its own clean plates, but you can also override it and input your own if you prefer. I sometimes do this because a still paint-out from Photoshop can be tracked back into place fairly easily for small removals.
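
For a sense of what the automated version is doing, classical inpainting fills a masked region from its surroundings, while Resolve’s neural approach goes further and synthesizes a temporally stable clean plate. A minimal Python/OpenCV sketch of the classical idea, with hypothetical file names:

```python
import cv2

# Hypothetical inputs: a frame and a white-on-black mask covering the object
# (say, that branded coffee cup) we want painted out.
frame = cv2.imread("frame.png")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting fills the masked region from surrounding pixels; the
# goal is the same as the neural version: plausible background where the
# object used to be.
cleaned = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("frame_cleaned.png", cleaned)
```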

Still a way to go

For all that’s been said about AI-based tools, I still personally feel this specific area is in its infancy.

I had to do a fairly significant amount of clean-up on most shots where I used AI-powered assistance, and the time saved by the automated process didn’t always outweigh the time it would have taken to just do it manually.

There were also times when it would just go haywire, likely because the shots were too complex for the algorithm in its current form. So manual effort is sometimes still required.

If you’re an artist with a well-developed skill (like roto, object removal, or paint-outs) and you’re worried the AI-powered tools will rob you of work, you can sleep soundly, at least for a little while. The machines have a way to go, yet.

Further reading

I’ve only scratched the surface with Resolve 17, its AI-powered feature set, and where this might all go from here, but I hope these five tips helped point you in the right direction.

Or at least got you intrigued about some new tools you might consider adding to your workflow and skillset. You never know when someone might forget something on-set and no one notices until it reaches your hands. It happens more often than you’d think.

If you have any additional questions about any of the features mentioned above, I highly recommend you check out the official user manual for Resolve, which you can access in the help menu. At over 3,500 pages long, it’s an incredibly comprehensive guide that details every feature in Resolve quite well.

Honestly, I’ve read the entire thing and I still pick up a few things every time I go through it for reference.

Alternatively, you can head to Blackmagic’s training portal, where you’ll find PDF guides that run the gamut of DaVinci Resolve’s features along with lesson files and exams to test your skills.

Jason Bowdach

Jason Bowdach is a professional colorist and finishing artist based in Los Angeles, California. His credits span film, television, and streaming platforms, with clients including Adobe, Google, Fox, and Disney. Outside his color and finishing work, Jason is a Blackmagic certified trainer and has produced training for companies such as FMC, Mixing Light, Lowepost, Ripple Training, BorisFX, and Digital Anarchy. He’s also the founder and creator of PixelTools, a company dedicated to streamlining the color grading process.