Design for the Edges: Video for the Blind and Deaf
[Editor’s note: as the audience for the Frame.io Insider grows, so will the variety of content we cover. We will always provide the deep, tactical, and practical articles geared towards the post-production professional. But we will soon begin providing additional stories, articles, “tips and tricks,” and even op-eds that are relevant to the industry as a whole. Today’s fantastic article by Katie is a step in that direction. Let us know if you’d like to see more articles like this. And if you think you can write an industry-related story that our audience would appreciate, email us at firstname.lastname@example.org to give us your pitch.]
- All of the discussion about inclusion and diversity in the industry is overlooking a huge population—those with sensory-based disabilities.
- Blind audiences are the most overlooked by producers, yet they are watching many of the same shows as able-bodied audiences.
- Preparing your productions to be well-received by a blind audience starts with gathering great audio in the field.
- It’s surprising how much an audio description can add to the story without being disruptive.
- There’s an art to making an audio description that preserves the same emotional beats without giving away too much.
- The benefits of closed captioning extend far beyond just serving the hearing impaired.
- As powerful as speech-recognition AI is, live humans are still needed to help train and improve the AI’s results.
I do a lot of writing and speaking on issues of diversity, particularly when it comes to the workplace in the Screen Industry. I am someone who looks for what’s missing—I seek out and lift up those who often get left behind. Recently I realized that while diversity and inclusion initiatives seem to be increasingly popular both on and off-screen, we’re still overlooking one large sector of our community. We’re missing people with disabilities.
In thinking about this, I realized I had a few questions to answer. I set out to explore them within my local Screen Production community. I wanted to know:
- How do blind people watch television and movies?
- How are deaf people consuming and producing media?
As an able-bodied person, I didn’t want to write a piece speaking on behalf of anyone. Instead, I decided to share my journey of answering these questions, and a few of the many things I learned on the way.
According to a 2010 OECD study, globally, there are more than 1 billion people with some form of disability—that is about 15% of the world’s population (or one in seven people).
We as people without disabilities often see the physical barriers those with disabilities face—but we often underestimate the additional barriers faced due to social misconceptions and attitudes.
With so many people around me living with disabilities, my first and most logical step toward learning more would be to ask a friend or colleague.
A block from the post-production house where I work is a production company called Attitude, which specializes in creating accessible content about people living with disabilities. One-third of the staff there are disabled themselves, so it seemed like an obvious first place to go.
I knew Dan Buckingham and Jai Waite already; we generally hang out together at industry events. Jai and I can talk endlessly about technology and hot sauce. I brought donuts, knowing that Dan had just participated in the New York Marathon. Dan is tall, handsome and well-spoken. He’s played rugby for most of his life–first as an able-bodied athlete through college, then as a wheelchair athlete, winning a Gold and a Silver medal at the Paralympics. He started his television career as a researcher, eventually becoming a Post Supervisor, Producer, and is now General Manager of Attitude Live.
We started our conversation by talking about some of the things Attitude did to make their own content more accessible to a wide range of people. Dan told me that while they knew they would have to consider ways to make content for people who are blind and deaf, it was working with and creating content for people with intellectual disabilities that surprised him the most.
“Even though I’d worked in the sector for many years, I still had some preconceived ideas,” he told me, referring to when they first launched their video content on the web. During user testing, he noticed how easy it is for people with intellectual disabilities to navigate online, where they’re given the time to work things out. “We did things like make sections color-coded so people could navigate easier, used clear concise wording, and incorporated lots of iconography.”
He also noted that plenty of white space, consistency across pages, and good color contrast helped people with visual impairments better navigate the site, whether via a screen-reading device or with partial vision.
Making Videos for the Blind
An estimated 4% of the U.S. population identifies as blind. Dan told me that blind audiences are the most overlooked by producers when it comes to accessibility, but that in fact, blind audiences are watching many of the same shows as I am. They do this through Audio Descriptions added to the soundtrack.
Across the table from me was Jai Waite. Like most editors I know, he is as nerdy as he is artistic. Right at home in a small cutting room, he’s often quiet until the conversation turns to something he can geek out about. Jai said that making content better for blind audiences is something he does in the edit.
“I like listening to audio documentaries to keep myself in that space of being aware that sound is so important. Blind people really like lots of natural sounds in a scene to help paint the picture.”
He encourages production crews to capture as much good, clean location sound as possible. He then consciously leaves spaces in the edit where he can, to allow for audio descriptions. “In the edit, it’s a balance. You want to try and leave more space, but you also don’t want too much dead air. But you can leave space in sequences, like musical interludes—just do what you can to make it accessible and enjoyable to all.”
The guys told me that it’s worth trying to watch something with the audio description on. It’s amazing how creative it can be and how much it can add to the story without being obtrusive. It struck me what a creative process it must be to write and produce these elements, so I went to see how it’s done.
Creating Audio Descriptions
Netflix and YouTube are providing a growing number of videos with Audio Descriptions. Here in New Zealand, where I currently live and work, Able is a non-profit organization providing subtitles and audio description services for content on local television. I met James Kupa, whose smooth, enunciated radio voice and bright disposition make it clear that he is an Audio Describer, and he loves his job.
James and his colleagues describe a wide range of content for blind audiences, from live events, to crime procedurals, even The Simpsons. The day I met him, James was in the middle of describing a BBC Drama Mini-Series.
“We get the media, which is this here,” he says, pointing to his monitor which has an NLE timeline of the show itself in a single part. “This is what goes to air. We find breaks in the dialogue, and we describe what’s happening in the scene or what’s pertinent to someone who might not be able to see what’s on screen. So if you didn’t watch this,” he continues, hitting play, “you wouldn’t see any of this.”
I watch a long shot of a man driving his car down a deserted road. There’s no dialogue, but in true British Drama style, a lot is going on. James goes back to the start of the shot, and narrates the scene in a way that feels like he’s reading the story as a novel.
“Rosie and Jack are riding their bikes down a narrow road. David passes them in his car. He pulls over, and gets out.”
After stopping the recording, I turn to James and say, “This sounds like the shooting script.” On his second monitor, I see James has a script that he’s constantly writing, amending, and annotating. I ask him if he ever uses the stage direction or script to make his job easier. He tells me he gets the scripts from Production, but that his script is quite different. Not only does he have to literally describe what’s on screen, but he needs to avoid describing too much of the characters’ motivation or feelings. “Everybody experiences watching a show differently. So you want people to be able to make up their own minds about who’s guilty and who they should be sympathetic with.”
I watch as James continues his work. It’s interesting how much he holds back. There are places where he has time to describe, but chooses not to. He wants to ensure the blind audience gets to experience the same emotional beats, pauses and tension as the director intended. He tells me that there are rules he follows, too. He doesn’t use a character’s name until their name is used. Instead, he describes them using physical traits. He doesn’t want to give away anything in the story; but sometimes, he says, there’s a look in a character’s eye that wants to tell the audience “that dude looks shady”.
Before taking up too much of his work day, I wanted to ask him one more question.
“What is the hardest thing to describe?”
“Comedy,” he says immediately. Able has audio-described The Simpsons, which is dialogue-heavy, full of visual gags, and of course, weird. Describing the opening “couch gag” alone was an enormous creative challenge for their writers and describers, and it took two hours to write the audio description script for episode 1.
I wanted to know more, so I went to talk to Wendy Youens, Able’s CEO. She outlined the creative process that goes into describing, as efficiently as the descriptions themselves.
“The creative process comes down to what are the key elements that are important to the story in this visual shot, what’s just happened, what’s happened since the last description, what’s going to happen before the next one, and what do people really need to know?”
She explains that the describers are constantly making judgment calls that affect how much an audience can enjoy the content. They must determine what is the most important thing to describe in the small gaps between lines of dialogue. I asked her, “If a picture paints a thousand words, how do you fit a thousand words into four seconds?”
“You just can’t,” she replied. “There’s only so much you can describe.”
Like Jai Waite, Wendy and James also told me that content creators need to leave gaps for audio descriptions. I’m reminded that adding accessibility features is good for everyone, especially when it enables a wider audience to enjoy the work I’ve created.
YouDescribe is a great online tool that crowdsources audio descriptions, allowing volunteer sighted describers to take a YouTube video and easily create an audio description soundtrack for it so that blind audiences can enjoy the same viral videos, movie trailers and cultural content.
Just down the hallway from the audio describers at Able, men and women were training machines to help them create captions for the 80% of deaf and hearing audiences who, for various reasons, enhance their media consumption with subtitles.
Audio for the Deaf is Video for Everyone
Subtitles, captions, timed text—we’re all familiar in some way with these terms, especially as they are becoming more common than ever in the content we create and consume. In 2014, captions became part of iOS. They’re now automatically part of most online social content, and part of the standard deliverables to streaming services such as Netflix, Hulu, Amazon and BBC iPlayer. There are laws in most countries that require certain film and television content to be captioned, and these days it’s considered best practice to do so.
Official statistics suggest an estimated 4% of the U.S. population is deaf—the same proportion as those who are blind. This encompasses those whose hearing is impaired enough that they have trouble following normal conversation, even with a hearing aid.
Captions are not only used by deaf and hearing-impaired audiences. They are also popular with people who are trying to learn or improve their skills in a second language, or those seeking to consume content from other cultures. They’re also becoming ubiquitous among those of us who watch videos on mobile devices—85% of videos on Facebook are viewed without sound. Using captions allows for second-screen or less disruptive viewing in public environments.
YouTube allows the uploader to select automatic captions, created using speech-to-text AI, or to upload captions created manually. Amara crowdsources captioning and translation for video content, making it more accessible and affordable for productions. Subtitle Edit is a free online tool for creating caption files, and of course Avid has SubCap as part of the NLE.
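If you need to create a caption file yourself, the SRT format that most of these tools and platforms accept is just plain text: a sequence number, a time range, and the caption text for each cue. As a rough sketch (the cue timings and text here are invented for illustration), a few lines of Python are enough to build one:

```python
# Minimal sketch: writing captions in the widely supported SRT format.
# Each cue is a sequence number, a time range (HH:MM:SS,mmm --> HH:MM:SS,mmm),
# and one or more lines of text, separated by blank lines.

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(cues) -> str:
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Hypothetical example cues:
cues = [
    (1.0, 3.5, "Rosie and Jack are riding their bikes."),
    (4.0, 6.0, "David passes them in his car."),
]
print(build_srt(cues))
```

The resulting file can be uploaded directly to YouTube or Vimeo, or opened in a tool like Subtitle Edit for fine-tuning.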
Able has a large captioning department, where captioners specifically work to create subtitles for a wide range of content and platforms. Wendy told me that the most important things to consider when creating captions are readability, accuracy, and placement on the screen. Make sure they aren’t too fast, that you’ve checked the spelling and grammar, and that the captions aren’t obscuring important action on the screen. She suggests you ask your viewers for feedback and follow up on it where you can.
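Wendy’s first point, readability, can be roughly checked by measuring reading speed. A common rule of thumb is to keep captions under about 15–20 characters per second; the threshold in this sketch is an assumption for illustration, not a figure from Able:

```python
# Sketch of a reading-speed check for caption cues.
# MAX_CPS (characters per second) is a common rule-of-thumb value,
# not a number given by Able; adjust it to your house style guide.

MAX_CPS = 17.0

def too_fast(text: str, start: float, end: float, max_cps: float = MAX_CPS) -> bool:
    """Return True if the cue demands a faster reading speed than max_cps."""
    duration = end - start
    if duration <= 0:
        return True  # zero or negative duration is always unreadable
    return len(text) / duration > max_cps

# 40 characters over 2 seconds = 20 cps: too fast for most viewers.
print(too_fast("x" * 40, 0.0, 2.0))
# 23 characters over 2 seconds = 11.5 cps: comfortable.
print(too_fast("Rosie and Jack ride on.", 0.0, 2.0))
```

A check like this can be run over every cue in a file before delivery, flagging only the ones that need to be split or retimed.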
Captioning has been around in broadcast content since the 1980s. Recently, technology has changed how it’s done, enabling audiences to watch much more with subtitles. Wendy explained to me that there’s some really interesting technology augmenting people’s abilities, especially for the deaf.
Hearing Loops are an older technology, yet still very popular. In a movie theater, for example, the Hearing Loop allows a person to connect their hearing aid device directly to the audio output feed. Also in participating cinemas, there are screens that plug into the seat, showing captions in sync with the movie. There are apps that use voice recognition to automatically caption on your phone what is being spoken on the movie screen; but both the app and the screen, Wendy noted, are not ideal. “It’s not great, having to have your phone screen on in a cinema. It’s disruptive and not exactly comfortable to have to look at both screens at once.”
We walked down the hall, past rows of captioners typing text and watching videos, and I asked how much of their work is done using AI. I was surprised when she told me that they use speech-to-text and Machine Learning in an unexpected way.
In a corner of a room was a woman wearing headphones and speaking to a computer as though it were alive, but just learning English. A cup of tea sat on the desk. I felt witness to a great deal of patience. “That’s one of our re-speakers, teaching the computer,” Wendy said.
We stepped out of the room, and Wendy explained that they work around the errors and limitations of speech-recognition AI by using a small team of individuals, called re-speakers. Each re-speaker works with their own copy of speech-recognition software called Dragon, taking time every day to train it to learn their individual patterns of speech. Then, when a live production goes to air, every time a person speaks, the re-speaker repeats it to the computer. The software is so well-trained on that one person’s speech that it captions almost without errors. The re-speakers also say all the necessary punctuation, getting mostly accurate text on screen with minimal delay. Wendy explained that this used to be done by specialist typists, and audiences had to accept that it would be relatively slow, and prone to errors.
As we strolled back through the office, I paused at a bookshelf across one wall. “Are these dictionaries?” I asked. Wendy smiled. “Yes, those are a blast from the past. We use the Oxford Online dictionary for all our spelling. But we like to keep the old library from before.” The bookshelf contained more dictionaries than I knew existed. There were books of slang, culinary terms, and thesauruses. The collection even contained different editions of favorite volumes, for what I imagined were simply nostalgic purposes known to librarians, linguists, and captioners.
As I was leaving, I asked Wendy whether any of their staff were blind or deaf. “My job is the only one here that doesn’t require a person to have good eyesight and hearing,” she explained. “That’s the thing about diverse hiring though. You have to consider what physical attributes really are required. Most desk jobs, especially, don’t require a person to be able-bodied.”
Wendy helped me realize that making content more accessible is easier than I thought. Aside from her practical solutions, she urged me to think about simply being a better ally to people with disabilities, using whatever influence I have to bring positive change. “The best thing you can do is influence those you’re dealing with to consider accessibility. Talk to your distribution chain. Ask yourself where is your film going, and how will it be made accessible throughout its life cycle? Ask whether screenings of your film will be accessible, and if not, why not? Do what you can to raise awareness about the importance of accessibility for everyone.”
One thing we can all easily do is consider how each production we work on can be made more accessible in some way. (See the addendum at the end of the article for ways of creating captions for the hearing impaired).
“Some people worry that once they start trying to be more accessible, lots of these questions come up and they’re opening up a can of worms. But just start where you can. Work with the feedback that you get and make changes as you can.” -Wendy Youens
Dan Buckingham, as a producer, also acknowledged that even when the production wants to make their content as accessible as possible, sometimes you’re on a deadline and you have to make a compromise. In terms of seeing more people with disabilities in the office, I haven’t stopped thinking about how much value would be added to the team by someone who approaches everything in life from a different perspective, and who is by necessity an expert problem-solver. The changes we need to make to have more inclusive workplaces are so minimal, and would benefit the whole team. Dan told me the best thing we can do is “design for the edges”, that when you make something more accessible for someone, you make it better for everyone.
If our content is going on YouTube, we can use the automatic captions, which are created by speech-to-text AI, but I’m told we should always review and correct them because they can be quite error-prone. You can edit the automatic captions by going to Video Manager and clicking Edit > Subtitles and CC. In this menu, you can also click Add new subtitles or CC, and either upload or manually type in your own.
YouTube Caption Editor
In Vimeo, when uploading a video, go to the Advanced tab in Settings, select choose file and upload your SRT, WebVTT, DFXP/TTML, SCC, or SAMI file.
If you are cutting in Avid, you can use the SubCap effect to create your subtitles and export .stl or .txt files, which you can then convert using BelleNuit or an online tool. There are also online tools that can extract subtitle files from text layers in FCP. In Premiere Pro, go to File > New > Captions, or right-click in your project window and select New Item > Captions. When you’re done editing your subtitles in Premiere, you can export an appropriate file format by going to File > Export > Media, and then down to the Captions options tab.