Captions provide access for deaf and hard of hearing individuals and many others. They can benefit visual learners, emerging readers, non-native speakers, and those in noisy environments. Captions communicate spoken dialogue, sound effects, and speaker identification.
Types of Captioning
Many captioning options are available, and some are more effective than others. Listed below are some of the most common in use today:
- Offline captions are added in post-production and are typically used for prerecorded video. Offline captioning allows for the most accurate captions possible because the text can be edited to match the audio exactly.
- Speech-to-text is an umbrella term for services that convert spoken communication and other auditory information into text in real time. A service provider transcribes what is heard, and the text appears on a screen for the consumer to read.
- Real-time captioning refers to captions created as an event takes place. A student may choose this option as a classroom alternative to a sign language interpreter. Other examples include emergency announcements and conference presentations, where a screen shows live images of a speaker with captions at the bottom. Because these captions are produced live, they may have a higher rate of errors than scripted captioning.
- Open captions are a permanent part of the video image - they cannot be turned off.
- Closed captions are encoded in the video signal. They can be turned on or off.
- Subtitles translate the dialogue of foreign-language materials into another language for the viewer. They do not depict all audible sounds (music or someone knocking at the door).
- Subtitles for the deaf and hard of hearing include information on collateral sounds (doors slamming, coughing, music) and indicate who is speaking. Sometimes “SDHH” is used to refer to captions that may not include non-vocal sounds. Check the media to ensure that captions, rather than subtitles, are present.
Are There Alternate Accommodations?
Only time-synced, verbatim captions provide full and equitable access to video content. Replacing captions with another accommodation such as interpreting, real-time captioning, or a transcript does not always provide complete access.
- Substituting an interpreter does not provide equivalent access because it forces the deaf or hard of hearing student to toggle between watching the interpreter and the video, sometimes losing important information. Interpreters are also of no benefit if the DHH student does not use sign language.
- Similarly, speech-to-text services for a video do not provide equivalent access. Viewers must divide their attention between the visuals in the video and reading the dialogue. Real-time captioning is also likely to include errors that offline (post-production) captioning eliminates.
- A paper transcript of a video is not equivalent access, either. When a student is given a script to read while watching the video, doing both simultaneously is nearly impossible. There is also no easy way to track where the script matches the video, so the student has to choose one - text or visuals.
Ensuring Media is Captioned
Choosing media that is already captioned before classes begin is the most efficient way to ensure access. If an instructor waits until a DHH student registers for the class, or worse, until the media is about to be shown for the first time, the process becomes more complex. The availability of captions varies greatly by media type. What to expect concerning captioning:
- Commercial media by large production companies is often already captioned.
- Smaller or independent production companies may not include captions but may be willing to add captions upon request.
- The automatic captions on YouTube videos are notoriously inaccurate because they are generated by automatic speech recognition. They cannot be relied upon for access.
- Instructor-produced media will most likely not have captions.
If captioning is not readily available, there are three options:
- Creating the captions in-house
- Outsourcing to a captioning vendor
- Choosing comparable media that is already captioned
Many institutions utilize a combination of methods depending on the demand and staff availability to fulfill requests. A well-prepared institution will have reasonable timelines for requesting captioning, regardless of how captioning is generated.
Elements of In-House Captioning
Determining the individual or department responsible for captioning is the first step in the process. This is most often managed by a disability office or ADA compliance officer. Some disability offices can caption videos; others turn the task over to their institution’s media center. Institutions deciding whether to take on captioning in-house must consider the labor involved in each step of the captioning process. Can current staff meet the demand for captioned videos? Will additional staff be needed? As a general rule of thumb, 30 minutes of video requires 7 to 10 hours of captioning work. Training for captioning and technical support must be factored into the overall labor cost as well.
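The rule of thumb above scales linearly, so a rough labor estimate can be computed directly. The sketch below is illustrative only; the function name and the assumption of linear scaling are ours, and real effort varies with audio quality and content complexity.

```python
def captioning_hours(video_minutes, low=7.0, high=10.0):
    """Rough labor estimate from the rule of thumb that
    30 minutes of video takes 7 to 10 hours of captioning.
    Returns a (low, high) range of staff hours."""
    factor = video_minutes / 30.0
    return factor * low, factor * high

print(captioning_hours(90))  # a 90-minute video -> (21.0, 30.0)
```

For example, a semester with ten 30-minute lecture videos implies roughly 70 to 100 staff hours, which is the kind of figure an institution needs before deciding between in-house captioning and outsourcing.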
The basic process for creating captions includes:
- creating a verbatim transcript
- dividing the transcript into 32-character lines
- using captioning software to add audio-synchronization time codes
- importing the completed caption file into the video
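The first three steps above can be sketched in code. This is a simplified illustration, not production captioning software: it wraps a transcript into 32-character lines and emits SubRip (SRT) cues with evenly spaced time codes. The fixed cue length is an assumption for demonstration; real captioning software synchronizes each cue to the actual audio.

```python
import textwrap

def to_srt(transcript, cue_seconds=3.0, max_chars=32):
    """Sketch of caption creation: wrap a verbatim transcript
    into 32-character lines, pair two lines per cue, and stamp
    each cue with evenly spaced (not audio-synced) time codes."""
    lines = textwrap.wrap(transcript, width=max_chars)
    cues = [lines[i:i + 2] for i in range(0, len(lines), 2)]

    def ts(sec):
        # SRT timestamp format: HH:MM:SS,mmm
        h, rem = divmod(int(sec), 3600)
        m, s = divmod(rem, 60)
        ms = int((sec - int(sec)) * 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    out = []
    for n, cue in enumerate(cues):
        start, end = n * cue_seconds, (n + 1) * cue_seconds
        out.append(f"{n + 1}\n{ts(start)} --> {ts(end)}\n" + "\n".join(cue))
    return "\n\n".join(out)

print(to_srt("Captions communicate spoken dialogue, "
             "sound effects, and speaker identification."))
```

The resulting `.srt` file is what the final step imports into the video; closed-caption workflows deliver it alongside the video, while open-caption workflows burn it into the image.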
Quality captions are accurate, consistent, clear, readable, and equal. Cost and quality vary considerably. Institutions can shop around to find high-quality captions that fit their budget.
The FCC sets quality standards for closed captions on television. New regulations require captions that are accurate, synchronous, complete, and properly placed.