Cellphone vs. Cinema. How Did We Get Here, and Does It Matter?
It’s a commonly held belief that we’re still waiting for technology to mature before cellphone cameras can rival digital cinema cameras. But with radical new advancements in cellphone technology, some would argue that we’re already there.
Film vs. digital is not a new argument. Take Star Wars: Attack of the Clones for example. It’s well known for being shot using a pretty elementary codec. The Sony HDW-F900 used for the live action elements of this movie was infamously a good camera hamstrung by the mediocre HDCAM digital tape format. But it enabled groundbreaking new workflows, and the movie itself returned six times its production budget.
When you consider it in context, film was barely achieving higher resolutions at the time. Before digital enabled high-quality, bit-for-bit copying at the press of a button, movie theaters were all playing copies of copies of copies. And while different technologies succeed and fail in different ways, traditional film distribution at this time ended up with a resolution of only 1.5K at best.
So if you’re looking at this in terms of just resolution, the phone in your pocket has been able to do better than film for a while. And if you’re looking for capture resolutions that match current movie formats, it’s just a matter of time. Hardly surprising, given that billions of us spend so much of our salaries on personal tech each year.
How it all started
Videophones were an early ambition and there’s no easily identifiable beginning to the mobile video revolution. Olympus’s Deltis VC-1100 camera could send images over the cellular network in 1994. The first cellphone built with an integrated camera was probably the Kyocera VP-210 in 1999. It could store 20 pictures or stream a staggering two frames per second over Japan’s 1900MHz “personal handy-phone” protocol.
Sharp followed in late 2000 with the J-SH04, launched to the Japanese market—where it stayed. This phone boasted a 256-color display and a tenth-of-a-megapixel camera. That’s not even standard definition, but crucially, the J-SH04 could send its photos using cellular communications.
An event that’s often claimed as a first, but probably wasn’t quite the epoch-making moment it’s often reported to be, is Philippe Kahn’s 1997 experiment. On June 11 of that year, Kahn, the entrepreneur behind Borland, shot a princely 320 x 240px image of his newborn daughter Sophie on a Casio QV stills camera, and combined a Motorola StarTAC with a Toshiba laptop to get it online.
Not by any definition
Those early pictures weren’t exactly cinema-ready, but they were higher resolution than contemporary moving picture formats found elsewhere. For example, Commodore had developed CDXL for its CDTV and Amiga computer line in the late 80s, which, on a single-speed CD-ROM drive, could achieve 160 by 100px resolution at 12 frames per second.
Commodore was actually ahead of the curve; it wasn’t until 1994 that the game Wing Commander III: Heart of the Tiger forced audiences to squint through sequences of full-motion, 320 x 165px video at 15 frames per second. It still credited a cinematographer (Virgil L. Harper, of Tremors) and won multiple awards, so it’s hard to point the finger.
It’s not clear exactly when the idea of shooting single-camera drama on a phone arose, but it’s clearly been possible for longer than it’s been popular.
Interpreting what happened next is a matter of opinion. We can either be impressed by the skyrocketing capability of cellphone cameras, or horrified at the gap in capability that existed, at least initially, between phones and mainstream photography. For instance, it’s worth remembering that Apple’s first iPhone in 2007 was not a market-leading photographer’s tool. It only captured 2-megapixel photographs, which compared poorly to Nokia’s contemporaneous 5-megapixel N95.
Things get interesting
Step forward to 2010, and Nokia’s N8 was a clear bid to reestablish a technological lead after a chilly reaction to the N97. By this time, digital cinema was entering the mainstream, with interest in full-scale cinema cameras approximating Super-35mm film already established. ARRI had been toying with the D20 and D21 for a while and launched the Alexa EV at NAB the same year.
While the N8 was far from Super-35mm, its 1/1.83” sensor was larger than its peers and competed ably with dedicated point-and-shoot cameras offering 12-megapixel images and 720p video. The ambitious short The Commuter, starring Pamela Anderson, Charles Dance and Dev Patel, was shot to promote the release.
Nokia followed up with some even bigger numbers on the 808 PureView in 2012, with its 1/1.2”, 41-megapixel sensor, which remained the highest resolution camera in any phone until the Honor View 20 in 2019. But the watershed moment here was not the 41-megapixel sensor in itself, but the implication that the final image wouldn’t necessarily be derived directly from the sensor data.
Instead, it signaled a reliance on some degree of processing, or computational photography, which would turn out to be a huge part of the reason that modern phone images look as good as they do. These clever in-camera techniques alter the raw data coming from the image sensor before you ever see it.
Some of these techniques are intended to minimize problems, like noise, while others exist to mimic the behavior of dedicated still and movie cameras by simulating limited depth of field. It’s the success of those techniques which has led to exactly the speculation we started with: that it’s increasingly possible to shoot material that looks like a real movie, on a cellphone.
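To make that concrete, here’s a toy sketch of one of the simplest computational tricks: multi-frame averaging for noise reduction. This is an illustration in NumPy, not any phone vendor’s actual pipeline, and the noise figures are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)  # an idealized, noise-free patch of "scene"

# Simulate a burst capture: the same scene, each frame with random sensor noise
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

single = frames[0]                 # what one exposure looks like
stacked = np.mean(frames, axis=0)  # averaging N frames cuts noise by ~sqrt(N)

print("single-frame noise:", (single - scene).std())
print("stacked noise:     ", (stacked - scene).std())
```

With 16 frames the noise should drop by roughly a factor of four. Real burst pipelines add frame alignment and motion rejection on top, since handheld exposures never line up perfectly.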
Why you really should shoot on a cellphone
Whatever your stance on the phone vs camera debate, it’s impossible to deny their performance in the right circumstances. Getting the most out of them, as with so many things, means playing to their strengths.
Making phones more portable has been a goal ever since the briefcase-sized options of the 1990s, and that pursuit has become almost a science. Most hover around 150 grams, which makes almost every cellphone tiny next to even the smallest DSLR or mirrorless camera.
Sure, the Sigma fp is tiny, and manages to pack a full-frame sensor into its diminutive body. But add a lens that covers that sensor and the end result will outweigh, outsize, and out-price almost any smartphone.
Anyone who’s hauled a full-size camera package around an airport or a movie set knows why portability matters. It saves time on a single-camera drama setup, and it saves sweat on documentary shoots. And these things matter.
“If there’s a genuine need to scramble up a forbidding escarpment, which would you prefer?”
There are documentary directors whose ideas lean toward camera positions at the top of misty, distant mountaintops. (Experienced crew generally know how to persuade those directors to change their minds.) But if there’s a genuine need to scramble up a forbidding escarpment, which would you prefer? An iPhone 14 (barely 180 grams) in a ruggedized case, or a rented Alexa 65 (10,500 grams, body only) that empties 150-watt-hour batteries in 25 minutes?
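The arithmetic behind that battery comparison is worth spelling out. Using the figures above (the phone-side numbers are back-of-envelope assumptions, not a spec):

```python
# Alexa 65 figures from the text: 150 Wh battery, roughly 25 minutes per charge
alexa_battery_wh = 150.0
alexa_runtime_h = 25.0 / 60.0
alexa_draw_w = alexa_battery_wh / alexa_runtime_h  # average power draw

# A phone's internal cell holds around 13 Wh (assumed) and might sustain
# a couple of hours of continuous recording -- single-digit watts
phone_draw_w = 13.0 / 2.5

print(f"Alexa 65 average draw: {alexa_draw_w:.0f} W")  # 360 W
print(f"Phone (assumed) draw:  {phone_draw_w:.1f} W")
```

Two orders of magnitude separate the two, which is the real reason the camera truck exists.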
From anywhere, to anywhere
But that’s just the practicalities. Phones can go places Alexas can’t in more ways than one. Most of them are small and light enough to double as action cameras, assuming you’re willing to put them in harm’s way. Even just achieving a very low angle above the ground is easier when the camera is less than half an inch from front to back. If you need to mount them on grip equipment, that equipment can be small and light, and a lot of phones have good enough stabilization that gimbals and grips might not even be necessary.
Another, less obvious but powerful advantage is that everyone has a phone, so nobody’s surprised to see one. The option to shoot almost anywhere without attracting the attention of someone in a high-vis vest can be a huge boon to documentarians. It doesn’t hurt dramatic productions keen to steal a difficult location, either, though the legality of that approach can be complicated.
Assuming that you’re shooting ethically, responsibly, and within the bounds of local laws, being able to do so without being noticed can be a powerful tool.
But the standout advantage has to be the presence of all that communications hardware, particularly if you’re moving towards a camera-to-cloud workflow. Some camera companies—notably JVC with its Connected Cam range—have approached this by building cellular connectivity into traditional broadcast cameras, while others, like Teradek and Atomos, have developed hardware that bridges the gap between camera and cloud.
The industry is changing fast but for now, the cellphone with a camera and always-on internet connectivity has the advantage. And streaming directly to an audience or a remote server can also make citizen journalism very hard to suppress.
And they’re everywhere
If there’s one advantage of phones that almost no other camera could possibly match, it’s ubiquity. Modern news reports bulge with phone footage, but not because the phone camera is better than an ENG camera. It isn’t. The phone wins because it’s there, and in the right circumstances, concerns over technical quality wilt in the face of what’s happening in front of the lens.
Because of that, the 2010s and beyond will likely be the best-recorded period in human history. Moving-picture records of the past were rare, silent, and flickery, and are frequently degraded by time. Peter Jackson’s They Shall Not Grow Old added color and sound to WWI archive footage to make the people depicted relatable as, well, people, rather than distant shadows in the original material.
Modern phone footage is ubiquitous, looks better and lasts longer, and that’s going to make a difference to a society that will last longer than any of us.
Why you really shouldn’t shoot on a cellphone
Let’s not get too excited. People who regularly use cinema cameras will already be leaping for the comments section to say that you can’t portray the phone as something it is not: a high-end cinema camera.
I agree. It’s absolutely not.
Taking an Alexa from its typical working environment and replacing it with a cellphone often works shockingly well. But that’s more about the environment than about the capabilities of the device itself.
Without the professional lighting (and the professionals who set up that lighting) no-one’s going to choose the cellphone footage over the Alexa.
The phone won’t be able to match a cinema camera’s dynamic range. The data that’s discarded to fit the phone’s limited storage will be a colorist’s nightmare. And you lose many of the benefits of computational photography when the lights go out—the low-light performance of modern cinema cameras eclipses even the most capable noise reduction algorithms anyway.
The point is that no-one would choose to shoot a scene on a cellphone if they had the option to shoot on a cinema camera instead. It’s just not a reasonable comparison.
Even the cellphone’s portability argument has its flaws. For example, you might be able to set up that low angle shot more easily with a cellphone, but framing and focus can be a little tricky when your viewfinder is pointing in the wrong direction.
And battery life is a definite factor. It’s not something most people consider, because a phone typically lasts all day—assuming it spends most of that day riding in your pocket. But use a phone as a production camera—or a lighting controller, or a remote monitor—and it quickly becomes clear that it can only handle a few hours of full-time work. The lack of a swappable battery means USB power banks or a trailing charger: an easy-to-dislodge cable that sacrifices much of the compactness and portability we wanted in the first place.
“Use a phone as a production camera and it quickly becomes clear that it can only handle a few hours of full-time work.”
There are other problems, from audio recording to offloading footage when storage is full. While you can mitigate these by having several phones in rotation, this adds the issue of swapping or duplicating any rigs you’ve set up for lenses, follow focus, and other devices. Another time sink or cost.
So if you need a cinema camera, use a cinema camera.
Vive la différence
There shouldn’t be any great controversy in recognizing the differences between these two very different pieces of technology. Nobody’s ever going to take a high-end cinema camera to steal dubiously legal shots on a subway, and nobody’s going to shoot the next Fast and Furious on an iPhone.
Regardless, the concerns about using phones as production cameras, whatever the kind of production, aren’t just about picture quality. And where picture quality is the concern, the production camera will always have the upper hand.
The lasting problems are issues of practicality, ergonomics, and engineering. Solving them might mean your cellphone becoming less a phone and more a camera—or your production camera moving closer to your phone—which changes the whole discussion.
But this tension between the two sides should be welcomed. Competition can lead to rapidly advancing technology and new features, and it’s not a stretch to imagine computational photography features making the jump from stills to motion. Whether these changes are welcome is down to you.
For the moment, what matters is that the tech you choose to shoot with is not the limiting factor it once was. You can shoot great material with a phone just as you can shoot terrible footage with a cinema camera. We live in a world where we’re less restricted by the technology at our disposal, and I’d call that progress.