Panavision Millennium DXL2 with C2C shooting on set.

5 Time-Saving Workflows Powered by Cloud-Connected Cameras

When we first launched our Camera to Cloud technology, we knew our customers would push this innovative new tool beyond what we could imagine.

After only four months of public availability, we’ve been blown away by the incredible ingenuity of the hundreds of teams who are already using C2C.

See, there’s this magical moment when someone first uses a cloud-connected camera. It’s the lightbulb moment, when they get a shot in their hands instantly and realize this new capability can fundamentally change how they work. They know they’re not bound to the same tools, processes, or timelines as before, and they’re connecting the dots to create entirely new workflows of their very own.

That freedom and customization is a core part of Camera to Cloud’s value. It unlocks a new world of possibilities that helps your team save time and money, and gives you more control over your production.

So today, let’s look at some practical examples of how cloud-connected production tools help your efficiency, creativity, and budget.

What makes cloud-connected cameras so valuable?

First, let’s talk about some of the core experiences of cloud-connected production.

At a basic level, C2C gives video teams access to instant playback, remote review, and instant dailies.

Each of these features is powered by the high-quality proxy files that are automatically generated and uploaded to the cloud after the camera cuts. But they each solve for a different challenge.

Most scripted feature and episodic productions have a crew member who’s responsible for playing back takes and rehearsals. While this role may show up on call sheets under a few different names, it’s often referred to as VTR.

The VTR helps stakeholders review a take by cueing it up and playing it back. But the VTR is vital for many reasons beyond playback as well: they can do simple composites to preview green and blue screen, capture camera metadata (like lens information or spatial data) for VFX, and are responsible for setting up and maintaining video village. Some VTR technicians might also be responsible for providing internet and WiFi to the set.

Today, every type of business relies on video content—for both external and internal publishing. And that content comes in many forms, shot by crews of many sizes.

Productions with lower budgets that aren’t union feature films or episodics may not have a dedicated VTR tech, but that doesn’t mean their stakeholders don’t also need access to high-quality instant playback.

Camera to Cloud provides an inexpensive, automated way for productions to have instant playback.

Small crews without a VTR tech no longer need to worry about whether they’ve got their shot—authorized stakeholders can now instantly play the take back on the web or on their mobile device using the iOS app. This also means that they’re in complete control of what they play back, when, and where.

Think about cases in which certain members of the team may be overseeing and working on several different productions at once—many of which might even be shooting concurrently across town or across the globe. But they need to be connected to, and involved with, each production.

Because Camera to Cloud instantly captures and uploads proxies to the cloud, those takes are available for instant playback from anywhere.

That means that anyone who’s remote and not physically on set can have the benefit of instant review and give instant feedback.

These assets are also available for editorial use in a timeline.

C2C proxies contain all the original camera filenames and timecode, so editors can import the files into their NLE and start cutting while the shoot is still underway.

While some productions—like narrative features or episodics—may still want (or need) to use hero dailies created from the original camera files in their edit, many others can leverage the proxies directly.

For example, productions with extremely tight turnarounds can begin cutting even while they’re still shooting.

And because these proxies have original camera filenames and timecode, they can be relinked to original camera files in the NLE, or conformed to them in a finishing tool.
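Because the proxies share filenames with the original camera files, relinking can be as simple as matching base names. Here’s a minimal sketch of that matching step in Python — the function name and the proxy/original extensions are hypothetical, and real NLEs handle this internally, but it illustrates why the shared filename matters:

```python
from pathlib import Path

def match_proxies_to_originals(proxy_dir, ocf_dir):
    """Map each C2C proxy to its original camera file by shared clip name.

    A minimal sketch: it assumes (as described above) that proxies keep
    the original camera filenames and differ only in extension/container.
    """
    # Index the original camera files by their base name (stem)
    originals = {p.stem: p for p in Path(ocf_dir).iterdir() if p.is_file()}
    mapping = {}
    for proxy in sorted(Path(proxy_dir).iterdir()):
        if proxy.is_file() and proxy.stem in originals:
            mapping[proxy.name] = originals[proxy.stem].name
    return mapping
```

A finishing tool performs the same kind of match (plus a timecode check) when it conforms a proxy-based cut back to the camera originals.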

Remote commercial production

Let’s look at an example.

Imagine a commercial shoot for a web company that’s spread across the globe. They’re shooting in Los Angeles, but the producer is in New York, the editor is in San Francisco, and the company headquarters are in London.

The camera rolls in LA and, seconds after “cut” is called on set, the take is available in the cloud. Since there’s no VTR on the shoot, the director immediately plays the take back in the iOS app and determines what she wants to change for take two.

The editor in San Francisco pulls the take into his timeline. He plays the take back in the cut and leaves a comment letting the director know that the talent’s action doesn’t match a previous take. So the director takes the note and talks to the talent about adjusting their performance.

Meanwhile, the producer in New York is also monitoring the project during the shoot and plays back the take. She notices that another company’s logo appears in the background of the shot. She draws a circle around it and leaves a comment tagging the legal team in London, asking if they need to remove it.

The legal team in London receives the comment and reviews the same take. They reply to the producer’s comment and let her and the director on set in LA know that the logo should be removed. The director in LA sees the feedback and removes the logo from the frame. She then replies to the same comment thread, letting the editor in San Francisco know not to use the take with the logo in it.

And all this happens before the next take rolls.

In this scenario, team members several time zones apart were able to collaborate and act on a note from the set in real time. The note was addressed and a major issue was avoided, saving everyone time, money, and effort.

In many instances, footage could be captured hundreds—or thousands—of miles away from where post-production is taking place.

In a traditional workflow, before anyone in post can view the footage, production needs to download all the original camera files to hard drives on set. Those drives then have to be shipped to post-production, which takes a day at best, or up to several days if the production is shooting in a different country. Once the drives arrive at post-production, the media must be offloaded and transcoded to an NLE-friendly format. Only after all of that can the post team begin working.

Every hour is precious during any production, but tight turnarounds make anything that saves time incredibly valuable.

With cloud-connected cameras, teams don’t have to wait for media to be offloaded, shipped, transcoded, and transferred.

This means that for each day of shooting, cloud-connected production could save a day or more in post-production, giving the editorial and creative teams both more time to fine-tune and more control over the final deliverable.


Interview shoots

Let’s now look at how cloud-connected cameras can help one of the most common types of video shoot: interviews.

Interviews appear in numerous sorts of content, from documentaries to commercials, sports, news, and all manner of scripted and unscripted productions.

Interview shoots typically capture fewer takes, but they’re often quite long. Productions may let the camera roll for 30, 45, 60 minutes or more, and might use multiple cameras.

That’s a lot of content. And since an interview shoot isn’t broken down into traditional takes, editors need to know what’s in a shot and where it is.

Interviewees may also be speaking in a different language or dialect than the final delivery, and therefore may need to be translated or subtitled.

This is where transcription comes in. Transcription is the process of converting the words spoken in a video or audio asset into written words that the creatives can read and use to create a “paper” edit.

Typical transcription workflows require the footage from the cameras to be downloaded to a drive, transported to editorial or to a lab, and transcoded to a dailies format. Then, the editorial team brings those clips into their NLE, identifies the clips that need transcription, and, in many cases, creates new transcodes of those clips into a smaller codec.

These new transcodes are then sent to a transcription service, where a human or AI listens to the audio and writes the words into a file that is then sent back to the editors. It’s labor intensive and can take several days to complete.
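Once the transcript comes back, the creatives comb it for the lines they want, noting timecodes for a “paper” edit. As a rough illustration of that step, here’s a hypothetical Python sketch that flags transcript segments matching an editor’s search terms — the tuple shape and function name are assumptions, not any transcription service’s actual output format:

```python
def paper_edit(segments, keywords):
    """Build a rough 'paper edit' by flagging transcript segments that
    mention any of the editor's keywords.

    segments: (timecode_in, timecode_out, text) tuples, one plausible
    shape for a parsed transcript; keywords: terms the creatives are
    searching for. A hypothetical sketch, not a real service's API.
    """
    picks = []
    for tc_in, tc_out, text in segments:
        # Case-insensitive match against every keyword
        if any(k.lower() in text.lower() for k in keywords):
            picks.append(f"{tc_in} - {tc_out}  {text}")
    return picks
```

The resulting list of timecoded lines is what an editor would use to pull selects in the NLE.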

With Camera to Cloud, the interview clips are uploaded to the cloud as soon as the camera cuts. This means that while production is shooting the next take of the interview, the editorial team can send the C2C proxy directly to the transcription house to start the process. This compresses the original workflow loop into a fraction of the time.

Getting faster access to the content of your interview means you know sooner what you have—and what you don’t—so you can be sure to get what you need.


Sports video

Sports teams both generate and consume a lot of video content, from training films to in-game highlights to post- and pre-game interviews and much, much more.

Given their action-packed schedules, teams have to move fast, so all of this video content has to be turned around as quickly as possible.

Training films need to be ready for review at the end of the day, in-game highlights need to be posted to social media channels immediately after they happen, and interviews are frequently conducted on the road, making it difficult to get footage to an editor.

That’s why so many sports teams are embracing cloud-connected cameras to streamline their processes.

With C2C, footage is available as quickly as teams can shoot it.

For training films, footage can go to coaches and other stakeholders who can’t be on the field. Additionally, all those clips can be aggregated or edited together by the end of the day, providing the team with a comprehensive review of that same day’s game.

During a game, camera operators with C2C-equipped rigs can give the video team access to an especially exciting moment immediately. It can then be pushed straight to social channels minutes later. Even if they’re shooting with multiple cameras and need to do a quick edit, they can still get the content out to fans who are eager to see the highlights immediately.

Between and after games, the video team can shoot player interviews while they’re on the road. These interviews can go straight to editors, who can make their cuts without having to wait for camera files to be downloaded and shipped, creating content that’s far more timely.

News and political content

The world of news and political video content combines the volume of interview-heavy productions with the immediacy of sports video, while also adding the complication of getting legal clearances.

Camera to Cloud workflows help both campaigns and third-party producers stay on track and in the loop.

Producers of news and political content need to make the most of their opportunities, since it’s difficult to get busy and in-demand candidates or other talent in front of the camera. With instant remote review, campaign stakeholders can make sure they get what they need from the on-screen talent while they’re still shooting.

During a campaign, timelines are incredibly short, given the pace of the rapidly changing zeitgeist. By leveraging cloud-based media, content can be shot and immediately dropped into a timeline for editing.

Pieces such as responses to another campaign’s comments, current events, or even gaffes can move as quickly as possible to get ahead of pundits and opponents. With takes going directly to the cloud, editorial teams can get time-critical content cut and delivered to keep pace with the 24/7 news cycle.

Political ads are also subject to intense legal scrutiny. In fact, taking legal action is often used as a tactic by rival campaigns. Since content is uploaded immediately to a shared cloud workspace, teams can review takes in real time to identify anything that might trigger potential legal challenges.

By catching these ahead of time, content can get out even sooner, keeping the campaign as up-to-date and relevant as possible.

Big-budget scripted productions

We’ve already covered the many types of productions that can leverage cloud-connected tools to enhance their workflows.

But big-budget union productions can also use C2C to enhance their workflows even though their use cases may be a bit more specific.

Productions such as feature films and narrative episodics are executed by highly skilled, specialized teams, often with union crews. Every crew member on a union set has a vital and well-defined role.

So let’s start by saying that Camera to Cloud is meant to enhance, not replace, positions or workflows that currently exist.

As we’ve discussed, the VTR technician is responsible for playback on set and cues up takes for creatives and stakeholders watching at video village. Because of the fast-paced nature of a union set, VTR is necessary to streamline the review process while on set so that the director, cinematographer, and producers can stay in motion.

However, most VTR setups are designed to work locally on the set. Camera to Cloud can expand a VTR technician’s toolkit by allowing them to provide a playback solution for stakeholders who can’t be on the set.

Additionally, while most big-budget productions have their dailies processed from the original camera files overnight, there may be instances or workflows where access to an instant proxy is useful.

One example might be a pick-up shoot. The editor (who can be anywhere) can pull a proxy into a timeline to make sure the additional shot will cut together with footage from principal photography.

With a cloud-connected workflow, this can happen while the crew is there on location, ensuring they’ve got what they need before moving on or striking a set. The hero dailies for editorial would still be processed by the lab and delivered the next day, but C2C brings the editor to the set in a way that wasn’t previously possible.

Another example would be for visual effects teams, who need high-quality files that they can load into their tools and pipelines.

These files are usually DPX or EXR transcodes made from the original camera files. VFX requests these transcodes from the post house in a process called pulling; a pull request generally consists of a list file (like an EDL or XML) and reference files so that the post house knows which clips to pull from archive and transcode.

But editorial needs to tell the VFX team which clips, specifically, require work. This means that VFX teams can’t start requesting pulls—a time-intensive process—until editorial receives and organizes their dailies, which can take a couple of days.

VFX teams know ahead of time which scenes will require VFX work, but not the clip names of the takes that make up each scene.

By using C2C, VFX producers can access proxies of the clips they know they’ll need to pull as they’re being shot on set. And because these proxies carry the original camera filenames and timecode, they can start generating their pull lists on the same day the footage or plates are being shot.
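To make the pull-list idea concrete, here’s a hedged Python sketch that writes a minimal CMX3600-style EDL from clip metadata a VFX producer might collect off the proxies. The function name, the event dictionary shape, and the exact column spacing are assumptions for illustration — a production pull list would come from the NLE or a dedicated tool, not a script like this:

```python
def edl_pull_list(title, events):
    """Write a minimal CMX3600-style EDL for a VFX pull request.

    events: dicts with 'reel' (the original camera clip name carried by
    the C2C proxy) plus source and record in/out timecodes. The column
    layout is a sketch of the CMX3600 convention, not a validated file.
    """
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    for i, e in enumerate(events, start=1):
        # One video cut event per clip to pull
        lines.append(
            f"{i:03d}  {e['reel']:<8} V     C        "
            f"{e['src_in']} {e['src_out']} {e['rec_in']} {e['rec_out']}"
        )
    return "\n".join(lines)
```

Because the reel names match the camera originals, the post house can resolve each event straight to the clip it needs to transcode.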

They could even start using the proxies to rough-in their work, then link to the EXR or DPX later. This gives the VFX artists more time to work on shots or gives them the opportunity to deliver their shots earlier, which accelerates and enhances the revision process.

The future of cloud-based production

Because C2C is so new, we’ve only just begun to scratch the surface of what’s possible.

Creators are not only saving time and money, but discovering new and better ways of working, because they’re creative not just in what they produce but in how they produce it.

We developed Camera to Cloud to work the way you want to work. So we’re excited to learn how you are using these tools for real productions. Meanwhile, we’re busy expanding the C2C ecosystem and welcome your input.

What do you want to do with C2C? We’d love to hear from you.

Robert Loughlin

Robert Loughlin is a technology specialist on the Innovation team. Based in the New York area, he has spent the past decade at the cutting edge of dailies and post-production.