The term “virtual production” covers a broad spectrum of workflows, from LED volume/in-camera VFX to motion capture and visualization. It also differs from the traditional editing pipeline in several significant ways.
In this article, let’s take a look at some of the new opportunities and challenges that virtual production presents specifically to editors, so you can get a head start on this exciting new world.
First things first
In any modern project, editors need to control an evolving and highly complex assembly of media that’s in different states of completion throughout the post-production pipeline. Virtual production takes this to a different level by adding entirely new layers of content that you’ll forge into the final product.
More importantly, it also requires a significant shift in mindset, from the familiar paradigm of “fix it in post” to “fix it in pre.” In other words, instead of deferring the completion of imagery to compositing in post, everything needs to be camera-ready for the production phase. This is especially true for LED volume productions, which are increasingly replacing green-screen shoots.
With this in mind, you’ll need to be engaged in the pre-production stages, where you can evaluate things like previs. You’ll also be able to guide temporary edits sooner because assets are created much earlier than in a conventional post-rendered pipeline.
There’s no sense in delaying an editor’s engagement until the production or post-production stages of a virtual production pipeline; it simply won’t work. So expect to be involved from the outset.
Your role as an editor is still to judge what shot best tells the story, and how to cut it. But now you also need to consider the various flavors of previs, postvis, techvis, and other forms of virtual production in the mix. So it’s up to you to understand how to best use these new assets.
By understanding how virtual production techniques operate, you can take full advantage of them. Being the editor who understands virtual production and gets involved early will lead to greater agency, inclusion, and creative satisfaction.
Philosophies of virtual production
Most of us will be familiar with Robert Bresson’s quote, sometimes paraphrased as “A film is made three times: in the writing of the script, in production, and in post.” It’s a great observation of the creative process and how stories are shaped during their creation. But it’s also an oversimplification.
Everyone who contributes to a production—the screenwriter, the actors, the previs artists, the cinematographer, the editor—rewrites the movie to some degree with each action they take. And virtual production only increases the potential for collaborators to shape the story dramatically, because its assets are highly realistic and infinitely adjustable. They’re also immediately visible on set, unlike on a green-screen shoot.
But this immediacy doesn’t happen without one thing—preparation.
Conventional production workflows can be equated with building a skyscraper one floor at a time and constantly revising the structure throughout the project. Filmmakers often start production armed with only a screenplay, and we all know how vague that can be.
Having a shared vision of a project’s final form is one of the most challenging aspects of filmmaking.
Virtual production tools like previs help, especially when they use real-world physics and high-quality real-time rendering. But previs is still only a best guess to help people get on the same page.
That the director, actors, cinematographer, and editor each imagine a completely different project when they read a script isn’t just commonplace; it can ultimately lead to a compromised final version as those differing visions collide. Hollywood is littered with stories of “creative differences.”
Of course, having a less-defined path can also lead to different visions combining to create positive outcomes. Virtual production makes this second possibility more likely because the provisional imagery will look closer to the final than simple storyboards, animatics, and other preview methods, giving every stakeholder a clearer view of the Big Picture.
So let’s dive into the three forms of virtual production you’re most likely to encounter as a virtual production editor. These are already in use at the higher end of the production spectrum, but are rapidly proliferating in the corporate, event, music video, commercial, and indie spaces. So it’s just a matter of time before you’ll be expected to work with them.
1. In-camera VFX with LED volumes
While it may not have been the first commercial use of in-camera VFX with LED volumes, Disney’s The Mandalorian definitely brought this technique into the public eye. The project dropped green screens and expansive physical sets in favor of live, real-time visual effects backgrounds displayed on a massive LED volume.
Now, many other TV series and features use this technique, from Ant-Man 3 to How I Met Your Father. There are many unique tenets of successful LED volume work, and an editor should be proactive and involved with all of them to achieve the best results.
“Welcome to The Volume. QUANTUMANIA Prep. Pinewood Studios, Iver Heath. May 2021.” pic.twitter.com/ih2c55MbWD (Peyton Reed, @MrPeytonReed, May 17, 2021)
Virtual scouting is one of the critical initial phases of LED volume work. In this phase, key creatives explore a 3D environment using various interactive methods, ranging from viewing the environment in a multiuser game mode on different screens to exploring it in fully immersive VR goggles.
As an editor, you should push to participate in these sessions to familiarize yourself with the story’s environments and make editing-specific recommendations for improving them. There’s plenty of room for experimentation here because the environments are highly malleable, without the associated cost of changing a physical environment.
The primary advantage of LED volume production is eliminating the incomplete footage editors usually have to work with in green-screen productions. And it’s worth pausing for a moment to consider that.
What sounds better to you? A complete take with in-camera VFX, which can be edited and evaluated like any scene shot on a finished set or an actual location? Or a green-screen shot of actors in a green void, with story-specific elements utterly absent?
And to make the second option even less appealing, any change to a completed VFX shot carries an additional cost in money and time, right down to the frame. It’s easy to see the allure of in-camera VFX in this context.
Eliminating the dreaded “shot missing” title card from an editing timeline carries many benefits. It means less uncertainty, because complete shots can be cut like regular live-action rather than as unfinished shots with missing VFX. It also becomes easier to see the final timing and evaluate whether a cut is working.
Virtual production also raises far fewer concerns about the frame-by-frame costs of turning non-finalized imagery over to VFX for rendering. Ultimately, virtual production with LED volumes provides you with more creative freedom to follow the story instead of being entangled in the VFX rendering process.
2. Editing motion capture
Motion capture, or mocap, refers to various techniques used to capture the movements of actors or objects. Use cases include actors puppeteering the bodies and faces of digital characters, or bringing physical elements into a virtual world. It’s used both in live-action movies with digital characters and in fully animated productions.
Mocap is generally captured via optical volumes with tracking markers from companies like Vicon and OptiTrack or via sensor suits such as Xsens and Rokoko.
Famous (and highly successful) examples of mocap include Avatar, Planet of the Apes, Lord of the Rings’ Gollum, and Thanos in Marvel’s Avengers movies. But this technique is also filtering down to more modest productions, as indie/DIY YouTuber Matt Workman demonstrates with a makeshift setup in his basement. And creator Cory Strassburger takes mocap to a whole new level for indies with Xanadu.
Editors can interact with motion capture in a surprising number of ways. Decoupling motion from image capture means that, once a performance has been recorded, shots can be redone entirely from new camera angles. In other words, if an editor is working with a motion-captured shot as CG animation and believes that a different angle or composition would better tell the story, it’s entirely possible to reprocess the existing motion data and retarget it to a different shot.
That process is far more straightforward and cost-effective than going back into live-action production. This alone gives the editor a lot of power to keep iterating instead of being limited by production schedules and budgets.
The power to suggest entirely new shots from existing motion also requires evaluating movement for believability. Several different processing techniques can be deployed to refine raw motion capture data, so if a shot looks somehow too perfect, or not perfect enough, the editor can request that the data be reprocessed or the motion recaptured to deliver a more credible image.
Often it’s possible to recapture brief moments of motion with much less expensive stand-ins or stunt performers if the original actors are no longer available or would be too costly to bring back.
Attending the mocap session—or at least viewing it remotely—should also be a priority for editors. Just as spending time on set with the team gives you a sense of the production’s vibe that can help in editorial, so does being present during mocap. It will also give you valuable insight into the process, allowing you to knowledgeably ask for pickups or new material when you get to editorial.
3. Editing a conventional VFX pipeline with visualization services
Last, but not least, the form of virtual production that may be most familiar to editors is a more traditional post-rendered VFX pipeline aided by visualization services. In other words, there’s no LED wall and there are no mocap characters, but there are green-screen shots and set extensions that are previsualized first.
Previs is generally used as provisional imagery to visually express how a scene will be captured, helping the various departments prepare all the needed elements. Basically, it’s a moving storyboard.
These days, previs can offer greatly improved visual quality thanks to real-time tools such as Unity and Unreal Engine. And because these tools generate new visuals in real time, previs can take place while everyone watches in person, or even remotely, as often happened during the pandemic.
You’ll also find that previs imagery will often find its way into temp edits—sometimes even in the final edit—as placeholder imagery. So weighing in on its creation will be essential for you as the editor.
Post visualization, or postvis, is another crucial visualization area for the editor to master. Postvis takes shots—like green screen, set enhancement, and digital characters—and adds provisional effects to help guide the post process for timing, the cutting of story beats, and more.
Generating postvis often costs considerably less than final-quality VFX and is therefore much more accessible within a budget.
Editors may often work with a VFX supervisor or directly with postvis artists to refine shots. In real-time pipelines, the postvis imagery often forms the direct basis of the final imagery.
Try to make the most of postvis before it becomes locked into the final, more costly render, where additional changes may be impossible. It’s worth an editor’s time to invest in the development of postvis rather than treating it as a throwaway placeholder.
It’s a brave new world
As we’ve seen throughout this story, virtual production presents incredible new opportunities for editors to collaborate and influence the story far in advance of post-production. Taking the time to become well-versed in the different forms of virtual production will pay off during development, production, and post-production revisions.
You will still be the arbiter of what makes the cut and what doesn’t. Virtual production will not change that. In fact, your work as an editor is more relevant than ever. There’s so much more content to sort through with virtual production, both provisionally and in final form. The achievement of a locked picture has never been more complex or more rewarding for those up to the challenge.
Featured image courtesy of Diamond View