The Closed Captioning Handbook is All-Inclusive
By Vicki Kipp
SBE CHAPTER 24 NEWSLETTER
March 1, 2003
If you want to know just about anything and everything about closed captioning for broadcast, then The Closed Captioning Handbook is the book for you. Author Gary D. Robson has compiled details on all areas of closed captioning in this 352-page paperback published in 2004.
Why Closed Captioning?
Robson begins by making the case for why content should be closed captioned. The first reason that comes to mind for most of us is that closed captioning makes content accessible to deaf and hard-of-hearing viewers. Captions are also a helpful tool in teaching literacy, and video can be captioned in multiple languages to broaden its reach. In addition to all of these accessibility reasons, there is the legal requirement that broadcast television be captioned.
Timeline
As the self-appointed captioning industry “chronicler,” Gary Robson prepared a historical timeline of closed captioning development. The Telecommunications Act of 1996 called upon the FCC to mandate captions for broadcast television. Caption phase-in deadlines under the Telecommunications Act of 1996:
• 2006: Virtually 100% of all “new” English-language programming must be captioned.
• 2007: 1,350 hours per quarter (roughly 15 hours per day) of “new” Spanish-language programming.
• 2008: At least 75% of all “old” (first aired in 1997 or earlier) English-language programming.
• 2010: 100% of all “new” Spanish-language programming.
• 2012: 75% of all “old” Spanish-language programming.
Although the FCC mandates broadcast captions and has a random audit policy, it relies mainly on consumer complaints to track broadcaster compliance.
Critiquing Closed Captioning: Translation and Transmission Rates
When a captioner covers an event, the incoming caption strokes are constantly checked against a translation dictionary. A percentage is assigned to the captioning based on the number of strokes that match dictionary entries. Captioners with a 99% translation rate (one translation error every 20 lines of captioning at five words per row) are considered ready to caption on the air. Much like a video game, the real-time captioning software displays the translation rate on screen so captioners can gauge their performance. Captioners hit a milestone when they achieve their first “hundred percent” translation.
As useful as the translation rate is as a metric, it does not account for mistranslates or misfingerings. The Total Error Rate (TER) is determined when captions are reviewed after a broadcast for correct meaning and grammatical errors, so the TER score is usually a little lower than the translation rate. A respectable TER of 98% equals one error per ten caption lines.
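The arithmetic behind these percentages is simple; a minimal sketch, assuming five words per caption line as in the figures above:

```python
def accuracy_rate(total_words, errors):
    """Percentage of correctly rendered words. The same formula serves the
    translation rate and the TER; they differ only in what counts as an error."""
    return 100.0 * (total_words - errors) / total_words

# One error per 20 lines at 5 words per line = 1 error in 100 words
print(accuracy_rate(20 * 5, 1))  # 99.0 (translation rate)

# One error per 10 lines = 1 error in 50 words
print(accuracy_rate(10 * 5, 1))  # 98.0 (TER)
```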
Both the translation rate and the TER are somewhat subjective. It is difficult to consistently determine caption quality.
Another factor in caption readability is the transmission rate at which captions are displayed for the viewer. According to a study by Carl Jensema, “Viewer Reaction to Different Captioned Television Speeds,” 145 words per minute (WPM) is a common preference for caption transmission rate.
Caption Conventions
While there is no FCC standard for caption quality and no industry standard for caption presentation, there is a de facto reference. The federally funded Described and Captioned Media Program (DCMP) maintains captioning guidelines for its own videos: DCMP’s Caption Key: Guidelines and Preferred Techniques details caption conventions. The guide is downloadable from http://www.dcmp.org.
Both The Closed Captioning Handbook and the DCMP Caption Key cover captioning nuances such as all uppercase versus mixed case captioning, verbatim versus edited captions, speaker identification for rollup and pop-on captions, acronyms and abbreviations, sound effects and onomatopoeias, and other caption conventions.
Roll-up captions are visible as they are entered, while pop-on captions appear all at once to the viewer.
ALL CAPS versus Mixed Case
Captions have historically been entered in all uppercase. While capital letters may have been necessary with early character generators and caption decoders, this technology limitation disappeared long ago. People expect mixed case captioning for multimedia content. Some broadcast caption providers have recently switched to mixed case captions.
Verbatim versus Edited Captions
Is it okay to edit the dialogue when captioning in order to increase the reading speed and comprehension of the spoken word? It depends on who you ask.
The verbatim captioning crowd feels that there is no good reason to present less information to the deaf and hard of hearing viewers than hearing viewers receive. Verbatim supporters feel that captioning should provide equal access to communications, not partial access based on someone else’s interpretation of what they need.
Those who support editing captions feel that paraphrasing increases accessibility without decreasing the amount of information portrayed, as long as the meaning of the captioning is preserved. On occasion, caption editing may be required for high-burst-speed dialogue (e.g., talk shows) to accommodate the bandwidth limitations of line 21.
Consumer Caption Decoders
The Closed Captioning Handbook touches on consumer caption decoders. Legacy external caption decoders include the National Captioning Institute’s TeleCaption series, Teknova, ViewCom, MYCAP USA, and SoftTouch. The SoftTouch MagHubcap product performs standard decoder duties and can remove the black background behind captions, send caption text to a computer, and act as a broadcast-quality character generator. The TV Guardian Caption Decoder and Obscenity Filter and ProtecTV decoder can censor captions.
The Television Decoder Circuitry Act (TDCA) of 1990 effectively ended development of external consumer caption decoders. The TDCA mandated that every television set 13” or larger manufactured for sale in the United States after July 1, 1993, must have an internal caption decoder.
Some external caption decoders send captions to computers using a serial or USB connection. Internal computer video cards with TV tuners, such as the ATI All-in-Wonder video capture card, allow users to view television programs and captions on a computer. Some cards display the captions in a separate caption window, instead of placing text over the video.
Some capture cards allow viewers to store captions on their computers for review later. Viewers could even make their own transcripts. Viewers may use caption cards to monitor stations for caption compliance. Some cards come with a keyword search that triggers an audible or visible alert, begins saving captions, starts recording video, or begins other pre-programmed tasks when the software detects a keyword.
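A hypothetical sketch of the keyword-trigger idea (the function and watch list below are illustrative, not taken from any card’s actual software):

```python
WATCH_LIST = {"tornado", "recall"}  # hypothetical keywords to monitor

def scan_caption_line(line, on_match):
    """Invoke on_match for each watched keyword found in a decoded caption line."""
    words = {word.strip(".,!?\"").lower() for word in line.split()}
    for keyword in words & WATCH_LIST:
        on_match(keyword, line)  # e.g., sound an alert, save captions, start recording

hits = []
scan_caption_line("A TORNADO warning is in effect.", lambda kw, line: hits.append(kw))
# hits now contains "tornado"
```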
Newer televisions may include a feature that automatically turns captions on when the audio is muted. Most televisions can’t display captions and the On-Screen Display (OSD) menu simultaneously, and many televisions with Picture-in-Picture (PIP) can’t display captions for the program shown in the PIP window.
Based on the premise that most live shows use roll-up captions and commercials use pop-on captions, some hobbyists have built a “Mute on Commercial” decoder for their television sets. By modifying your television set or sacrificing a remote control, you can make your television mute the audio when it senses a pop-on caption and resume audio when a roll-up caption appears. This logical approach doesn’t work so well when commercials are uncaptioned, or during the few-second delay after a commercial break before a show’s roll-up captions begin.
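The hobbyist logic amounts to a two-state switch; a sketch under the article’s assumptions (the caption-type labels are illustrative):

```python
POP_ON, ROLL_UP = "pop-on", "roll-up"

def next_mute_state(caption_type, muted):
    """Mute on pop-on captions (assumed to be commercials); unmute on roll-up."""
    if caption_type == POP_ON:
        return True
    if caption_type == ROLL_UP:
        return False
    return muted  # no caption data: hold state (uncaptioned ads slip through here)

muted = False
muted = next_mute_state(POP_ON, muted)   # commercial detected: mute
muted = next_mute_state(ROLL_UP, muted)  # show's roll-up captions resume: unmute
```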
Broadcast Data Recovery Decoders
Broadcasters and multimedia developers who need to decode closed captions often use external decoders manufactured by EEG, Link Electronics, or Norpak. The Closed Captioning Handbook contains a list of decoder command codes that can be helpful for troubleshooting. Computer caption decoder cards include the ATI All-in-Wonder, the Adrienne PCI-21/RDR, and the Viewcast Osprey. The Osprey card has a cross-platform programming interface for custom development. For serious caption monitoring, the Norpak decoder, WHAZ-it software, and line of data-recovery cards can monitor dozens of channels at a time, logging the streams to a SQL database for analysis and checking for loss of captions, V-Chip data, XDS, station identification, and time-of-day packets.
Movie Theater Captions
The Closed Captioning Handbook has photos of various movie theater captioning systems. Special open-caption showings of new releases are occasionally available. Scrolling LED signs displaying caption text have been poorly received in theaters. The National Center for Accessible Media’s (NCAM) Rear Window Captioning, in which mirror-reversed captions are displayed in bright LEDs on a sign at the back of the theater and patrons view them with a transparent reflector panel, could catch on.
Cinematic Captioning Systems has developed a similar system that uses a sign at the back of the theater and a mirror that clips on to the back of theater seats.
Personal Captioning Systems, Inc. (PCS) uses a transmitter and wireless PocketPC PDA receiver called the Palm Captioning Display. PCS also sells a Clip-On Captioning Display that uses the same concept, but it clips onto the patron’s glasses and displays text through a prism suspended in front of the wearer’s eye.
DTV Loophole
The Closed Captioning Handbook discusses DTV captions in depth. After implementation of the TDCA, it seemed safe to assume that all new televisions would be able to decode closed captions.
However, when television receivers began to split from the display device, as with computer receive cards or DTV set top boxes, the receivers were exempt from the TDCA because they did not have a display bigger than 13 inches—or any display at all. The FCC closed this loophole effective July 1, 2002, with Report and Order FCC-00-059 “Closed Captioning and Video Description of Video Programming, Implementation of Section 305 of the Telecommunications Act of 1996, Video Programming Accessibility.” As a result, all DTV tuners in the US must meet minimum caption decoder requirements, regardless of whether they include a display device. The same Report and Order mandated enhancements to the EIA-708 caption format for DTV.
DTV CC Enhancements include:
• Decoders must support standard, large, and small caption sizes.
• Providers can choose a caption size and the viewer can choose an alternate caption size.
• Decoders must support eight fonts.
• Providers can choose a font and the viewer can choose an alternate font.
• Decoders must support eight character and character background colors (white, black, red, green, blue, yellow, magenta, and cyan).
• Providers can choose a caption foreground and background color and the viewer can choose alternate colors, including a translucent background.
• Decoders must allow viewers to alter the appearance of the caption character edge.
• Decoders must be able to decode and process data for six standard services, but only one caption service need be displayed at a time.
• Decoders must have a default option that displays captions as intended by the captioner. Decoders must also include an option that allows the viewer’s chosen setting to remain until the viewer alters these settings, including when the television is shut off.
• Cable providers and other multicasters must transmit captions in a digital cable television set compatible format when transmitting to digital television devices.
• Since DTV screen resolutions can vary from set to set, captions are displayed in a caption window, instead of in a fixed location. The position of a caption window is determined by the coordinates at which its anchor point locks to the screen. Each caption service may have up to eight active windows displayed at a time.
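The windowed model in the last item can be pictured as a small data structure (the field names here are illustrative, not the terminology of the EIA-708 specification):

```python
from dataclasses import dataclass

MAX_WINDOWS_PER_SERVICE = 8  # per the Report and Order summarized above

@dataclass
class CaptionWindow:
    anchor_x: int      # horizontal coordinate where the window locks on
    anchor_y: int      # vertical coordinate where the window locks on
    anchor_point: int  # which corner/edge of the window sits at (anchor_x, anchor_y)
    rows: int          # window height in caption rows
    columns: int       # window width in characters

# A lower-third caption window anchored near the bottom of the screen
active = [CaptionWindow(anchor_x=10, anchor_y=85, anchor_point=0, rows=2, columns=32)]
assert len(active) <= MAX_WINDOWS_PER_SERVICE
```

Because the window is positioned by its anchor coordinates rather than a fixed pixel location, the same caption data can render sensibly on screens of different resolutions.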
Preserving Captions When Encoding
While there is no mandate requiring captioning of videos, DVDs, media player files, or streaming media on the Internet unless the content is provided by the Federal Government, media accessibility should be a priority in the digital domain as well. The FCC caption mandate applies only to broadcast programs. However, Section 508 of the Rehabilitation Act (as amended in 1998) mandates that training and informational videos developed or used by a federal government agency be accessible, both with captions and with audio description. Distance-learning programs used or funded by the Federal Government are also subject to the Act. Section 508 could be interpreted as a mandate for captioned multimedia presentations and fully accessible web sites.
There is software available for captioning digital video files for DVDs and media players. Software is more likely to support broadcast-quality external hardware encoders than encoder cards, because external hardware versions don’t change as often as cards do.
Captions for DVD
When Japanese anime DVDs were released in America, they often had Japanese subtitles, but not English subtitles.
While creating the initial captions for a program is a labor-intensive task, modifying existing captions is not. Using a DVD ripper, fan subtitlers (“fansubbers”) ripped a DVD, translated the text-track dialogue, and then burned a new DVD. Web sites distributed the fansub translations in file formats compatible with various DVD burners. The fansub community drove improvements in the subtitling capability of DVD rippers and burners for home users. Some applications import captions in SAMI, XML-like, or HTML-like formats, sparing the user the tasks of rendering and generating TIFF files for DVD captions. The fansub culture fizzled out when new Japanese anime releases began including English subtitles; however, the DVD caption-modification software tools remain available.
Captioning Media Player Files
Since no industry organization came forward to set a standard for streaming synchronized captions, each media player company developed its own. Microsoft developed SAMI. RealNetworks developed RealText, which is used with the W3C’s SMIL format. Apple’s QuickTime text format works with SMIL files. Adobe Flash uses XML files and often requires users to download the HiCaption Viewer plugin. These separate formats mean that a separate caption file must be prepared for each major media player, and variations in each player’s caption handling mean that a different process must be followed to make captions accessible in each one.
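To illustrate what preparing one of these files involves, here is a sketch that renders a list of timed cues as a minimal SAMI document (simplified; a production file would also carry a style block and language classes):

```python
def to_sami(cues, title="Captions"):
    """Render (start_ms, text) cues as a bare-bones SAMI document."""
    syncs = "\n".join(
        f"<SYNC Start={start}><P>{text}</P></SYNC>" for start, text in cues
    )
    return (f"<SAMI>\n<HEAD><TITLE>{title}</TITLE></HEAD>\n"
            f"<BODY>\n{syncs}\n</BODY>\n</SAMI>")

doc = to_sami([(0, "Hello, and welcome back."), (2500, "Today: closed captioning.")])
```

A RealText or QuickTime deployment would need the same cue list re-rendered in its own syntax, which is exactly the duplication of effort described above.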
On the upside, captions are displayed in their own area of a media player instead of covering the video, and the appearance of captions can be customized more easily. The caveat is that content providers must select a widely available font to ensure that it is likely to be installed on the playback computer.
Recommendation
If you would like to learn more about any of the topics covered here, then The Closed Captioning Handbook is for you. Signed copies are available for purchase from http://www.captioncentral.com/handbook.