RED In-Camera Integrations: The Next Step in Cloud-First Productions
As a person who always looks to the future, I can’t help coming back to my favorite Shakespeare quote: “What’s past is prologue.”
Today, the Frame.io Camera to Cloud in-camera integrations with the RED V-RAPTOR and V-RAPTOR XL become officially available. The significance of this step toward a more creatively collaborative cloud-first workflow compels me to reflect a little on what’s gotten us here—and on how that influences what lies ahead.
Twelve years ago, David Fincher made The Social Network. It was the first movie shot with a RED ONE camera equipped with the Mysterium-X sensor and the company I founded, Light Iron, handled the digital intermediate and finishing. After that project, David asked what it would take to be able to automatically capture and play back timecode-accurate clips transmitted from within the camera without having to download first—which, at the time, was a completely new concept.
Here we are, 12 years later, and thanks to pioneers like Emery Wells and John Traver, who created Frame.io, and the team at RED, whose company ethos is to run toward challenges, we’ve finally achieved a truly frictionless workflow, one that I believe points toward the future of all filmmaking. To say that this integration is groundbreaking is an understatement.
So how can we be sure this integration is truly groundbreaking? Because we’ve been using this workflow ourselves, and the ease, speed, and reliability are beyond impressive. But what really blew us away is that we now have full creative control over our projects while we’re still on set—because we can collaborate with all our key creatives in real time, as we’re working.
We no longer have to wait for camera cards to be offloaded to get our takes to any of the people who will benefit from having immediate access. Our editor can start cutting while we’re shooting. Producers or other creatives can view and comment on what they’re seeing. We can get elements to a second unit shooting concurrently. Our marketing team can begin activation while the production is still happening. In fact, when we were making the video to announce this new integration, we uploaded more than 2TB of data from the camera directly to the cloud on day one of our shoot and more than 4TB total—without ever downloading a mag or backing up to a local or on-prem medium.
What we did that day was a technological first. But what’s more relevant is why we did it—and what it means to our industry over the next few years and beyond.
A shared mission
Emery and John created Frame.io because, as creatives themselves, they deeply understood how important accurate, frictionless collaboration is to the creative process. Their mission was to create the standard cloud operating system that centralizes creators with their collaborators and their content. When Frame.io hit the market in 2015, there was really nothing else like it.
When I joined the company in 2019, it was with the express purpose of developing Camera to Cloud, something I’d been working toward since those early days with David Fincher and Light Iron. The synergy between Frame.io’s mission and my efforts was essential and undeniable. And because we have our own in-house production team, our engineers have been able to develop this workflow by putting it through the most rigorous real-world test cases. Our production team lives and works in Frame.io day in and day out, using it from the moment we have a creative brief until we deliver the final assets.
We endorse this technology not because we built it, but because we are our ideal customer. We know, in our DNA, every challenge that our customers experience. When you see one of our productions, you see top-quality work. But what you don’t see is how profoundly these new workflows are affecting how we create it.
Pulling back the curtain
So when we did our shoot for our big RED announcement, we decided it was time to show the world how we do what we do.
On every one of our shoots, we set new challenges for ourselves so we can test a particular workflow, use case, or feature. On this shoot, we were committed to using the V-RAPTOR and V-RAPTOR XL to automatically transmit 8K REDCODE RAW R3D files along with customized CDLs from our DIT, a ProRes proxy file (ProRes LT), a WAV file, and a custom LUT associated with each take directly to the cloud—more than 4TB of 8K REDCODE RAW right to Frame.io—without ever downloading a single media card.
In fact, there were no hard drives anywhere on the set. Additionally, we wanted to demonstrate how this new RED integration allows you to accurately record and play back off-speed footage with perfect parity between proxies and OCF, which is why we creatively added the element of falling snow.
Not only did it work, it worked with complete reliability. The engineers at RED have built the first cinema camera capable of automatically offloading RAW and proxy files directly to the cloud over a network. Once the media is in Frame.io, it’s checksummed and sharable. According to our experts, the probability that it’s safe is 99.999999999 percent (eleven nines), because Frame.io servers are backed up multiple times.
“Not only did it work, it worked with complete reliability.”
When we decided to wipe the mags while we were still on set, we could see our media right there in Frame.io, neatly organized into folders. At one point our internet connection even went out, but the moment it was restored the takes uploaded automatically and accurately.
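To make that integrity step concrete, here’s a minimal sketch of how checksum verification typically works in a media pipeline. The hash algorithm, chunk size, and function names are illustrative assumptions, not Frame.io’s actual implementation:

```python
import hashlib

def file_checksum(path, chunk_size=1 << 20):
    """Compute a SHA-256 checksum by streaming the file in 1 MB chunks,
    so even multi-gigabyte camera originals never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_upload(local_path, remote_checksum):
    """Compare the local file's checksum against the one reported by the
    server after upload; a match means the bytes arrived intact."""
    return file_checksum(local_path) == remote_checksum
```

A verified match between the local and cloud checksums is the kind of guarantee that makes wiping a mag on set a rational choice rather than a leap of faith.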
This workflow is also possible with the RED KOMODO, RED’s smallest cinema camera—and one of its most widely used. The beta firmware is now available, making it possible to shoot up to 6K RAW or 4K ProRes straight from your KOMODO to Frame.io.
Until now, there’s been a traditional order to physical media workflows that has to be executed in a specific sequence. Media cards need to be downloaded to hard drives, hard drives need to be shipped (and received), dailies need to be processed and sent to an editor, and then someone needs to ingest them into an NLE. The creative process has historically been bound by a series of events that couldn’t really be reordered, so the flow of creativity was largely limited by the physical world.
What this workflow does, and why it’s so important, is break down the barriers that process imposes, using the cloud to open up collaboration in ways that enable greater creative control. What’s really thrilling is that we’re only just starting to explore the possibilities, and with time we’ll discover ways of working that we haven’t even thought of yet.
But not only does this process give you peace of mind and security, it also means that post can start while the shoot is ongoing. If you look at our production, we had our editor and assistant editor cutting with the takes as soon as they were available in Premiere Pro. We even had our colorist previewing looks on the 8K files for our director while he was still shooting.
Our editors often attend the shoots and work on set. But just as often they don’t, and the truth is that it doesn’t matter because the speed at which they receive the takes is the same whether they’re physically there or not. The point is that we have the flexibility to decide what works best on any given shoot and use the technology in service of the creative.
Now that the barrier between production and post has been eliminated, communication is direct and instantaneous. As soon as a take is uploaded to Frame.io, any of the collaborators or project stakeholders you choose to give access to can comment on it or work with it.
Imagine that your editor is looking at a take and wants to recommend a different framing that might better match into a scene. Or that a VFX supervisor needs to compare a plate to a foreground element to see if the lighting works. Or a second-unit director needs to reference what the main unit is shooting. Now they can communicate that to you in the moment, when you have the ability to make any necessary adjustments.
“Getting original assets to the cloud is the key to unlocking faster, more streamlined, less location-dependent workflows.”
Getting original assets to the cloud is the key to unlocking faster, more streamlined, less location-dependent workflows. And it’s not only camera-originated material—it’s also audio files, location photos, lighting settings, camera reports, and script notes—that makes the cloud ecosystem so powerful.
Comfort with the uncomfortable
When I think back on the momentous changes in workflow that have occurred in our industry—the shift from film to tape, from tape to files, from SD to HD—I think about how, at first, it was uncomfortable to make those changes. Even if you knew that making them would improve the way you worked, you still had to get accustomed to doing things differently.
Early adopters of new technologies are people who don’t mind taking a chance. It’s one of the reasons the RED partnership has proven to be so successful. RED has been challenging the status quo since they came on the scene in 2006. They’re comfortable with being uncomfortable because they know that the discomfort is going to lead them to something that’s better—whether it’s a higher-quality image or a faster workflow, or both.
In 2014, as Light Iron was being acquired by Panavision, RED was in the early stages of developing their first 46mm large-format 8K sensor. Panavision had just built their early large-format lenses, the Panavision Primo 70 Series. Light Iron had a booth at Cinegear, where we shot and played back the first 8K footage on a 105″ 8K panel. We even gave out commemorative shirts that read “8K Possible” to capture the significance of the moment. Back then, not everyone saw the potential of this technology. Some even laughed. “No one needs to shoot 8K on a large format sensor!”
The point is that the period of being uncomfortable unlocks new potential in creating images, which leads to that becoming an industry standard. And once it becomes the standard it then becomes easy. Today, large-format and 8K sensors are a standard. We’re already exploring 12K and beyond.
The shift from novelty (or perceived fad) to business as usual doesn’t happen instantly. Look at some of the milestones of the past 20 years and how long full adoption took: we started talking about HD broadcasting 20 years ago, and it took approximately ten years for most people to actually own HD televisions. We first started hearing about digital projection approximately 20 years ago, but it took ten years for DLP projectors to become ubiquitous in theaters. Five years ago we started talking about the transition from SDR to HDR, but only in the past two years has it become commonplace.
Camera to Cloud is no different. The first feature film to use it shot in July 2020. Since then, more than 6,000 productions have used it, uploading more than 25,000 hours of content.
But that doesn’t mean it’s for everyone today. The next technological hurdle we need in order to fully leverage the cloud is internet bandwidth.
We really can’t discuss this workflow without addressing the issue of bandwidth. Because today, at the end of 2022, the biggest limitation in this workflow is internet availability and speed. The good news is that with the continued growth of 5G, satellite internet, and WiFi 6, by 2031 bandwidth will have increased to the point where moving RAW files is common practice. For now, however, it’s important to understand what the limitations are—and what solutions currently exist for sending files to the cloud.
Looking at the chart below, you can see what kind of bandwidth you need to upload the various file formats in real time.
On our shoot, we used a QNAP 5GBASE-T adapter to convert the camera’s USB-C data into Ethernet. This hardline ran into a network switch at the stage, which had about 750 Mbps of upload speed. At that speed we weren’t quite hitting real time to the cloud, especially with off-speed shooting, but we were able to keep up because uploads were happening between takes.
Most people will elect to upload ProRes LT in HD, which requires 82 Mbps for real-time uploading. But consider this: even if you have only 40 Mbps of upload bandwidth, that’s not really an obstacle, because it typically takes 6-10 hours to shoot a total of one or two hours of material. During the downtime, when you’re changing setups or there’s a pause in the action, you can take advantage of the available bandwidth to upload.
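The arithmetic here is easy to sanity-check. Using the numbers above (82 Mbps ProRes LT in HD, a hypothetical 40 Mbps uplink, and two hours of material over a shoot day), a quick calculation shows the slower uplink still keeps up:

```python
def upload_hours(material_hours, stream_mbps, link_mbps):
    """Hours needed to push `material_hours` of footage recorded at
    `stream_mbps` over a link with `link_mbps` of upload bandwidth."""
    return material_hours * stream_mbps / link_mbps

# Two hours of ProRes LT HD (82 Mbps) over a 40 Mbps uplink:
hours = upload_hours(2, 82, 40)  # about 4.1 hours
# comfortably inside a 6-10 hour shoot day, since uploads run between takes
```

The same function shows why a 750 Mbps stage connection can absorb even heavy RAW days: the upload time shrinks in direct proportion to link speed.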
It’s still a lot faster than waiting for the card to be pulled so you can first download it and then upload those files to the cloud. And because these RED cameras transmit files directly to the cloud themselves, takes sit in a queue and automatically start uploading as soon as the camera stops recording and sits idle.
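Conceptually, that camera-side behavior is an idle-triggered queue. The sketch below is purely illustrative (it is not RED’s firmware, and the class and method names are invented) but it captures the idea: finished takes accumulate while you roll, and drain to the cloud whenever the camera is idle.

```python
from collections import deque

class TakeUploader:
    """Toy model of an idle-triggered upload queue: finished takes are
    queued as they're recorded and drain whenever the camera isn't rolling."""

    def __init__(self):
        self.queue = deque()     # takes waiting to upload
        self.uploaded = []       # takes already in the cloud
        self.recording = False

    def start_take(self):
        # While rolling, nothing drains; recording always has priority.
        self.recording = True

    def stop_take(self, clip_name):
        # A finished take joins the queue, and the idle camera drains it.
        self.queue.append(clip_name)
        self.recording = False
        self._drain()

    def _drain(self):
        while self.queue and not self.recording:
            self.uploaded.append(self.queue.popleft())
```

The practical consequence is the one described above: no one has to remember to push anything, because idle time between setups is automatically converted into upload time.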
But even with “just” 200 Mbps, you can easily upload the ProRes proxies and use hard drives for the RAW, which still gives you a jumpstart on editorial and timely stakeholder feedback. And because the R3D RAW files and the ProRes files have perfect parity, you can edit the ProRes and relink to the RAW.
Over the past two years of using Camera to Cloud, we’ve discovered a few networking solutions for boosting your internet speed—or for getting connectivity in more remote locations. Providers like Sclera Digital, Mr. Net, and First Mile Technologies offer an array of battery-powered, bonded, prioritized LTE, 5G, and satellite mobile hotspots, and are experts in bringing the internet to your location.
You can read more about the impact of 5G here.
What lies ahead
In a short two-year time frame we went from having two initial hardware partners to now having more than a dozen hardware and software integrations. And the fact that we went from needing external hardware to power the Camera to Cloud workflow to having the first in-camera integration in that time is nothing short of astonishing.
This rate of adoption and integration is important not just because it expands access to, and the functionality of, the workflow. What’s more significant is that it validates the predictions that have circulated widely through the industry since the release of the MovieLabs 2019 white paper, which laid out the vision for what would occur technologically during the 2020s.
What we’re observing is that we’re tracking toward the prediction that over the next eight years, every media and entertainment workflow will make the permanent shift to cloud-first technology that will increase access, speed, manipulation, and creative control in ways never before possible. The days of recording to a camera card and downloading it only to have to upload to the cloud will be over. A physical medium will no longer be required to get your OCF to the cloud, where all your creative collaborators can access it and work with it, immediately, no matter where they are in the world.
“Over the next eight years, every media and entertainment workflow will make the permanent shift to cloud-first technology.”
New workflows and inventions require visionaries who are willing to take risks. The RED community has been on the bleeding edge of technology advancements for years, and together we want to make this new way of working…not new. We all understand that becoming more efficient means we can work more creatively.
At Frame.io, we see the proof of this every day on our own productions, and are starting to see how filmmakers are embracing new workflows on their productions. When I think back to David Fincher wanting to watch dailies on his iPad and I see films like Devotion receiving HDR dailies on their iPhones or iPads, I feel a combination of excitement, gratification, and validation.
As always, we want to grow this community. We welcome new pioneers and partners to join in our efforts, and invite you to reach out to us so we can continue to accelerate adoption and help more creators embrace the freedom that cloud-based workflows provide.
You can learn more about this integration and download the firmware at frame.io/c2c/red