2000 Conference Proceedings


From A(nalog) to D(igital): Access to New and Emerging Media

Larry Goldberg
Director of Media Access and
Director of the CPB/WGBH National Center for Accessible Media (NCAM)
WGBH Educational Foundation
125 Western Avenue
Boston, MA 02134
Email: larry_goldberg@wgbh.org

The Media Access department at WGBH consists of Access Services (The Caption Center and Descriptive Video Service) and the CPB/WGBH National Center for Accessible Media (NCAM). Together, these departments within the nation's most successful public broadcasting organization have pioneered and delivered accessible media to disabled students, adults, and their families, teachers, and friends since 1972.

NCAM is a research and development facility dedicated to the issues of media technology for disabled people in their homes, schools, workplaces, and communities. NCAM's mission is: to expand access to present and future media for people with disabilities; to explore how existing access technologies may benefit other populations; to represent its constituents in industry and legislative circles; and to provide access to educational and media technologies for special needs students. NCAM is also pioneering the use of accessible media in the classroom through projects which empower students, educate software and hardware developers, design new media access devices and procedures, and in general help assure that disabled students are able to reap the benefits of existing and impending educational media.

Information about all of NCAM's projects is available on the web at: http://www.wgbh.org/ncam

A History of Media Access

In 1939, television was first demonstrated to the public at the New York World's Fair and regular TV broadcasts began. This milestone was preceded by at least a decade of laboratory experiments and trial and error. But it wasn't until 1972 -- 33 years later -- that the first television program was (open) captioned for deaf and hard-of-hearing viewers. Finally, in 1980, the public was introduced to closed captioning -- four decades after it was introduced to television itself. And in 1985, the first public demonstration of video description for blind and visually impaired people was broadcast, and the world's most pervasive medium finally had the means to become universally available and accessible.

The history of access to media for people with sensory impairments is a history not only of technology, but of policy and economics and politics. And as is true of most of the technological developments of this century, timing and luck are key aspects of success. Now as we approach the 21st century, we have in our possession many of the tools and much of the understanding needed to assure that the myriad emerging media and information technologies can be designed and developed with full accessibility for people with disabilities. This, of course, doesn't mean that they will in fact be developed accessibly; awareness, understanding, and acceptance remain low among the leaders and creators of our new media.

Recent Work at NCAM

Demonstrations of innovative access solutions for media such as CD-ROM and Web-based multimedia and interactive and digital television are essential to ensure that accessibility does not lag far behind the introduction of these media and their successors in the marketplace. NCAM has developed a series of projects, with public- and private-sector funding, which show that even the most complex media can be designed and proliferated in forms accessible to people who are deaf or blind or who have other disabilities.

This presentation will explain and demonstrate the work of such NCAM media access projects as:


The Motion Picture Access Project, known as MoPix, gives people who are deaf, hard-of-hearing, blind or visually impaired access to closed captioning and descriptive narration in movie theaters through the development of the Rear Window (R) Captioning (RWC) and DVS Theatrical (R) systems.

The patented Rear Window Captioning System displays reversed captions on a light-emitting diode (LED) text display which is mounted in the rear of a theater. Deaf and hard-of-hearing patrons use transparent acrylic panels attached to their seats to reflect the captions so that they appear superimposed on the movie screen. The reflective panels are portable and adjustable, enabling the caption user to sit anywhere in the theater. The Rear Window System was co-developed by WGBH and Rufus Butler Seder of Boston, Massachusetts.

DVS Theatrical delivers descriptive narration via infrared or FM listening systems, enabling blind and visually impaired moviegoers to hear the descriptive narration on headsets without disturbing other audience members. The descriptions provide narrated information about key visual elements such as actions, settings, and scene changes, making movies more meaningful to people with vision loss.

These technologies have been available in specialty theaters--such as large format movie theaters and theme parks--for several years. The digital audio technology used in many movie theaters--Digital Theater Systems (DTS)--enabled WGBH to bring these technologies to conventional movie theaters.

DTS is the national leader in digital sound for feature films, providing multi-channel digital audio on CD-ROM. A reader attached to the film projector reads a timecode track printed on the film and signals the DTS player to play the audio in sync with the film. For the Motion Picture Access Project, DTS adapted its technology to include the caption and descriptive narration tracks on a separate CD-ROM, which plays alongside the other discs in the DTS player. In turn, the DTS player sends the captions to the LED display and the descriptive narration to the infrared or FM emitter.

"Cornerstones": Closed Captioning for Literacy

Cornerstones is a balanced approach to literacy development for children who are deaf and hard-of-hearing. It recognizes that such children encounter tremendous difficulty in understanding the meaning of print, learning words, and identifying words in written form. The technology that supports the approach is a multimedia library of motivating materials, centering on a collection of video stories, as well as lesson guides for teachers that focus on both higher-order and lower-order literacy processes. The library will be delivered to classroom computers (or computer/television hybrids) via the Internet, digital television, or CD-ROM. A user-friendly interface will help teachers navigate and retrieve materials easily and effectively.


An interactive prototype of Arthur, the Emmy award-winning show based on author Marc Brown's popular children's books, will also be demonstrated. The prototype explores new educational opportunities provided by digital television. Enhancements are designed to address the literacy needs of deaf children but will also offer useful and expressive learning opportunities for all children. Sign language alternate presentations, two levels of captions, read-along captions, and a Spanish audio track are all accommodated in the digital TV prototype.

Digital Television Access

NCAM's DTV Access Project addresses an urgent, time-sensitive need to improve the effectiveness of Digital Television systems to deliver high-quality closed captioning and video description services to individuals who are deaf, hard of hearing, blind or visually impaired. Previous research conducted by NCAM has investigated the access needs of blind and deaf people seeking to use analog television resources. NCAM's most recent project, DTV Captioning, developed an interim closed-captioning standard for DTV and gathered consumer input to the development process.

With initial funding from the National Institute on Disability and Rehabilitation Research, this three-year project will build on research and development previously undertaken by WGBH and other industry partners to design and implement DTV captioning and video description processes.

Specifically, the DTV Access Project will achieve the following objectives:

  1. Serve as an information resource to public broadcasters and the industry at large to assist in the transition of access services for DTV.
  2. Develop and disseminate data files that test DTV systems for quality and accuracy in handling DTV captions and video descriptions (audio services for people who are blind). These files will provide tools for encoding, transmitting and decoding in accordance with accepted industry standards, official minimum requirements and a full range of advanced features.
  3. Work within industry organizations to draft standards, recommended practices and engineering guidelines to support the effective delivery of DTV captions and video descriptions.
  4. Evaluate the effectiveness of DTV receivers to decode DTV captions and video descriptions, and measure implementation of advanced features.

Convergent Media

The goal of this Project is to make it possible for people who are blind or visually impaired to effectively use convergent media by influencing industry standards and developing new media delivery technologies.

Convergent media refers to programming and services growing out of the intersection of broadcast and cable television, DTV, PC and Internet technologies. Ensuring that blind and low-vision consumers have access to the interactive features arising from convergence requires that we pay attention now to the standards which are under development for digital set-top boxes, to the software which will run on them, and to the content-creation tools and paradigms to come.

The amount of interactive content available to consumers is expected to explode in the very near future as digital technologies are deployed by cable, television, telephone, Internet and satellite providers. Convergent media will soon penetrate the homes, classrooms, workplaces and communities of all Americans, offering interactive educational, civic, marketplace, and entertainment services. In order to ensure that this technology doesn't create a society of the information-rich and the information-poor, all of these technologies must be accessible to and usable by people with disabilities.

The highly visual nature of these media dictates that access solutions for blind and low-vision consumers should be a high priority. Blind and visually impaired users of convergent media must be able to track available program and service options at any given time. Access solutions must anticipate how blind and visually impaired users will interact with the choices offered.

The integration of access solutions into this new industry will offer significant gains to blind and low vision users in every home and in every environment where there is a television.

Toward this end, the Project will identify barriers to access, influence standards, and create a prototype program guide system for a digital set-top box. In the process, the Project will enlist the participation of a diverse community within the advanced telecommunications industry, people personally affected by access barriers because of vision loss, and developers of access solutions.

By proposing extensions to existing industry standards and by producing a successful prototype, this Project will foster integration of accessible solutions into the next generation of digital set-top boxes and other devices and will lay the groundwork for an accessible digital infrastructure.

Accessible Web-based Multimedia

NCAM has been experimenting with ways to provide captioning and audio descriptions for Web-based multimedia through the use of Apple's QuickTime, the W3C's Synchronized Multimedia Integration Language (SMIL) and Microsoft's Synchronized Accessible Media Interchange (SAMI).

Apple's QuickTime software makes it easy to add accessibility features to movies. Digital movie clips will be demonstrated in which captions have been added as a text track, and descriptions have been recorded onto a separate audio track (in addition to the program audio and video tracks).
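To illustrate the approach described above, a caption track can be authored as a plain text file in QuickTime's text-descriptor format and imported as a text track alongside the video and audio tracks. The timecodes, styling values, and dialogue below are illustrative, not taken from an actual NCAM clip:

```
{QTtext}{font:Geneva}{size:12}{textColor:65535,65535,65535}
{backColor:0,0,0}{timeScale:30}{width:320}{height:48}
[00:00:01.00]
Welcome to the demonstration.
[00:00:04.00]
Captions appear in sync with the picture.
[00:00:07.00]
```

Each bracketed timecode marks when the following line of text is displayed; the curly-brace descriptors at the top set the track's font, colors, and dimensions.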

The World Wide Web Consortium released version 1.0 of the Synchronized Multimedia Integration Language (SMIL) in 1998. SMIL multimedia presentations are made up of elements -- sound, pictures, text -- which are stored separately and then synchronized as the clip is played. In addition to making it easier to create and play multimedia, SMIL can be used to add captions and descriptions to multimedia presentations. NCAM's Physics Interactive Tutor project with MIT will be demonstrated: streaming video played in RealNetworks' G2 player, incorporating captions and descriptions.
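A minimal SMIL 1.0 presentation of the kind described above might look like the following sketch; the file names and region geometry are illustrative:

```xml
<smil>
  <head>
    <layout>
      <region id="videoregion" top="0" left="0" width="320" height="240"/>
      <region id="textregion" top="240" left="0" width="320" height="60"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- program video -->
      <video src="lecture.rm" region="videoregion"/>
      <!-- caption text stream, shown when the player's caption preference is on -->
      <textstream src="captions.rt" region="textregion" system-captions="on"/>
      <!-- pre-recorded audio descriptions -->
      <audio src="descriptions.rm"/>
    </par>
  </body>
</smil>
```

The `par` element plays its children in parallel, and the `system-captions` test attribute lets the player include or omit the caption stream based on the user's settings.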

Microsoft has released version 1.0 of its Synchronized Accessible Media Interchange (SAMI) format. As with the W3C's SMIL format, SAMI works by synchronizing separate media elements as a movie is playing. However, SAMI was designed specifically to help make multimedia accessible through the use of captions and, soon, audio descriptions. Examples of SAMI-based accessible media will also be demonstrated.
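A SAMI caption file pairs each caption with a start time in milliseconds; a player displays whichever caption was most recently reached. The sketch below uses illustrative timings, text, and class names:

```xml
<SAMI>
<HEAD>
  <TITLE>Sample captions</TITLE>
  <STYLE TYPE="text/css"><!--
    P { font-family: Arial; color: white; background-color: black; }
    .ENUSCC { Name: "English Captions"; lang: en-US; SAMIType: CC; }
  --></STYLE>
</HEAD>
<BODY>
  <!-- each SYNC element gives a caption's start time in milliseconds -->
  <SYNC Start=1000><P Class=ENUSCC>Welcome to the demonstration.</P></SYNC>
  <SYNC Start=4000><P Class=ENUSCC>Captions appear in sync with the video.</P></SYNC>
</BODY>
</SAMI>
```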


Directions for the future and anticipated challenges and barriers will be discussed during the concluding question and answer period.


Reprinted with author(s) permission. Author(s) retain copyright.