2004 Conference Proceedings



Jan Richards
Adaptive Technology Resource Centre
University of Toronto
130 St. George St.
Toronto, Ontario
Tel: (416) 946-7060
Email: jan.richards@utoronto.ca

Charles Silverman
Centre for Learning Technologies
Ryerson University
350 Victoria St.
Toronto, Ontario
M5B 2K3
Tel: 416.979.5000 ext. 7110
Email: dfels@ryerson.ca

Deborah Fels
School of Information Technology Management
Ryerson University
350 Victoria St.
Toronto, Ontario
M5B 2K3
Tel: 416.979.5000 ext. 7619
Email: dfels@ryerson.ca

The World Wide Web began as an almost purely textual domain, and despite the growing ability to incorporate various multimedia components, text and static displays remain the dominant design approach. This is especially apparent when the Web is compared with television, which consists mostly of moving images and spoken and gestural communication. In general, the textual nature of the Web is beneficial because character encoding systems (ASCII, UNICODE, etc.) have been developed for most written languages. These encoding systems are highly efficient and enable automated transformations into other forms (e.g. text-to-speech, text-to-Braille, separating presentation from content, etc.) that benefit accessibility [1]. In fact, the Web Content Accessibility Guidelines (W3C, 2003) include the provision of text equivalents among their highest priority recommendations.

However, in the case of languages (spoken or signed) that lack a written form, the textual nature of the Web presents a barrier. Sign languages, such as American Sign Language (ASL), Langue des Signes Québécoise (LSQ) and British Sign Language (BSL), lack printable encodings [2], but nevertheless are employed by significant numbers of people in countries where Web access is widespread. For example, it is estimated that in the United States, ASL is used by between 500,000 and 2 million people (Lane, Hoffmeister and Bahan, 1996).

Some Web sites already provide sign language content. However, the sign language videos used on these sites are usually surrounded by navigation mechanisms and supplemental content provided exclusively in a second, textual language (e.g. English, French, etc.). Signing users must, therefore, continually switch between their language of choice and a second language, which in some cases may present a literacy barrier. In fact, it may be most fitting to describe this signed content as being "on" the Web, rather than "of" the Web.

Signing Webs

The Web may be seen as a collection of sub-webs, each created by a particular linguistic community (e.g., "English Web", etc.). Signed content will only be "of" the Web when Signing Webs have begun to form. A Signing Web is a collection of Web pages created by a particular signing community (e.g., "ASL Web", "LSQ Web", etc.) and bound together by sign-language-based connections rather than text-based ones. Although content authors tend to link to content in the same language, nothing prevents cross-over points, in which pages in one Signing Web link to pages in other Signing Webs or in text-based Webs.

Each page on a Signing Web centers on a single video or animation, featuring whatever signed content the author wishes to create, whether informational (e.g. government services, etc.), commercial (e.g. product catalog, etc.), personal (e.g. personal Web page, "blog", etc.), artistic (e.g. poetry, theatre, etc.) or even whimsical.

The key to the development of these Signing Webs is a navigation system based on moving, gestural signs that enables users to browse the Signing Web, moving from one piece of signed content to the next without any need for text in a second language. We have termed this mechanism SignLinking.

SignLinking: Signed Language Based Web Pages

It may be said that the hyperlink enabled the Web. Ironically, the design chosen by Tim Berners-Lee for the hyperlink was thought by some to be overly simple, but it has proved extraordinarily effective. The design involves associating a string of text in a source document with the URL of a single target document. The user clicks the mouse on the text string, which is typically represented by browser applications in blue with an underline, to load the target page. SignLinking involves the equivalent design concept, but realized through moving images such as video and animation rather than text. The meaning of the SignLink is embedded in the moving images, rather than in a static representation.

In the SignLink system, the blue underlined text string is replaced by a red rectangular outline around the current signer's upper torso, head and arms, displayed for a time interval of the video (e.g., 2 seconds) that the author judges sufficient to convey the purpose of the link. Red was chosen instead of blue to emphasize current links (just as some text-based Web pages change the color of links under the mouse pointer), while blue is reserved to indicate non-current links.
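As a rough sketch (not the authors' implementation; all names here are hypothetical), a SignLink can be modeled as a record pairing a video time interval and a highlight region with a target URL. The second function picks out the links whose intervals cover the current playback time, i.e. those that should currently be outlined in red:

```javascript
// Hypothetical model of a SignLink: a time interval in the video,
// a highlight region around the signer, and a target URL.
function makeSignLink(startSec, durationSec, region, targetUrl) {
  return {
    start: startSec,               // when the red outline appears
    end: startSec + durationSec,   // when the red outline disappears
    region: region,                // {x, y, width, height} around torso, head and arms
    target: targetUrl              // page loaded if the user follows the link
  };
}

// Returns the links whose interval covers the current playback time.
function activeLinks(links, currentTime) {
  return links.filter(l => currentTime >= l.start && currentTime < l.end);
}
```

A player would call `activeLinks` on each playback tick and draw a red rectangle for each result.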

This addresses the appearance of individual links, but because multiple text hyperlinks can easily be placed in text and scanned in a single viewing (by sighted users), they combine to enable a gestalt-type perception of the linked content on the part of the viewer. Specifically, by paying attention to the number and clustering of blue underlined text strings, users can quickly come to understand whether a page is higher or lower in a site structure, what navigation tools are available and which groups of links might be thematically related. Once users understand a page, they can make better navigational choices with a single click.

Replicating this parallelism in a video based hyperlinking system is a challenge because a video is essentially a serial medium. As with a screen reader, video output can move along at a steady pace or be jumped arbitrarily forward or backward, but the user's view of the content remains serial.

In an attempt to bring in some of the Web page gestalt, two views of the hyperlinks are extracted from the video material in SignLinking. The first view is a set of thumbnail images arranged in a table near the video, and the second is a link density bar that shows how many SignLinks are present in the video.

For each hyperlink in the video, there is a thumbnail image that may be either a frame captured from the linked video interval or an image of the author's choosing. Each image is given focus when the corresponding hyperlink occurs in the video. Selecting the thumbnail image replays the linked time interval in the main video, clarifying the meaning of the potentially ambiguous static thumbnail. If the user wishes to follow a link, they select a navigation button below the image. In addition, authors can set a second, longer time interval that provides an expanded context for the link, conveying more of its meaning. When an author uses this feature, a second button appears underneath the thumbnail image. The list of thumbnails is always available, allowing a user who is familiar with a page to find and select known hyperlinks in a parallel fashion.
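The thumbnail behaviour described above might be sketched as a small selection function (hypothetical names; the `extended` field stands in for the author-defined longer context interval):

```javascript
// Hypothetical: given a SignLink, choose the video interval to replay
// when the user selects its thumbnail. If the author supplied an
// extended-context interval and the user asked for it (the second
// button), use that; otherwise replay the basic link interval.
function intervalToPlay(link, wantExtended) {
  if (wantExtended && link.extended) {
    return link.extended;                     // {start, end} set by the author
  }
  return { start: link.start, end: link.end }; // basic link interval
}
```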

The link density bar provides the user with a graphical depiction of the number, length and distribution of all the hyperlinks in the video. Clicking on the different link indicators in the density bar plays the corresponding link interval in the video.
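A minimal sketch of how such a density bar might be computed, assuming each link carries `start`/`end` times in seconds (hypothetical names, not the system's actual code): each link interval is mapped proportionally onto a bar of fixed pixel width, so the number, length and distribution of links are visible at a glance.

```javascript
// Hypothetical: map each link's time interval onto a density bar of
// fixed pixel width. Each segment's position and width are proportional
// to the link's position and duration in the video.
function densityBarSegments(links, videoDurationSec, barWidthPx) {
  return links.map(l => ({
    left: (l.start / videoDurationSec) * barWidthPx,        // segment offset in px
    width: ((l.end - l.start) / videoDurationSec) * barWidthPx, // segment length in px
    target: l.target                                        // for click handling
  }));
}
```

Clicking a segment would then seek the main video to that segment's link interval.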

Text Equivalents

In order to support bilingual applications and signers who may be losing their sight, two optional text features are available. The first is an optional text label that can be added next to the link icon, below each thumbnail. The label is a hyperlink with the same URL as the SignLink.

The second text feature is an optional text content area. How this area is used is left up to the author, but some possibilities include: keywords for search engines, a short description, a full alternate text version, or form controls, if user input is required. The text can include hyperlinks, but only to target documents that are also linked with the SignLinking.
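The constraint that text hyperlinks may only target documents already linked through SignLinking could be checked as follows (a hypothetical sketch with invented names, not the system's actual validation code):

```javascript
// Hypothetical: verify that every hyperlink URL in the text content
// area also appears as the target of some SignLink on the page.
function validTextLinks(textLinkUrls, signLinks) {
  const allowed = new Set(signLinks.map(l => l.target));
  return textLinkUrls.every(url => allowed.has(url));
}
```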

Authoring SignLinked Pages

One of the most important features of the Web is its participatory nature. Anyone with a Web connection can create a personal Web page or "blog". This process is facilitated by a range of authoring tools, including tools that conceal technical details, allowing authors to focus on content.

In order to facilitate adoption of the SignLink mechanism, we have developed an authoring tool that requires no knowledge of the underlying Web technologies (HTML, QuickTime, and JavaScript).

The authoring tool allows the user to import existing videos or to capture new video from a Webcam, if available. From there, an author can perform basic video editing operations. Once this is complete, authors can add SignLinks to the video and, if they wish, optional text content.

The authoring tool also includes a novel help feature: ASL tool tips. Throughout the authoring interface, small "help" icons are provided. When the user clicks on one of these buttons, a signing avatar [3] appears in a separate window and provides a short context-sensitive explanation in ASL. Testing of this system, including an examination of the effectiveness of the avatar, is ongoing.


While the development of non-western character encodings has empowered linguistic communities all over the world to create their own Webs, the textual nature of the medium has, until now, prevented the development of Signing Webs. We have presented a mechanism, SignLinking, that may facilitate the development of such Webs without requiring any degree of bilingualism with a written language.

End Notes:

[1] Not applicable to characters "painted" directly into images.

[2] Some printable encodings for sign languages have been proposed (e.g. SignWriting, Hamburg Sign Language Notation System, ASCII-Stokoe Notation), but none have been widely adopted within Deaf culture.

[3] Created with Vcom3D SignStudio (http://www.vcom3d.com).


Lane, H., Hoffmeister R., and Bahan, B. (1996) A journey into the deaf-world. San Diego: DawnSignPress.


Funding for this project is provided by the Applied Research in Interactive Media program of CANARIE, grant number CP13. The authors also wish to gratefully acknowledge The Canadian Hearing Society, Marblemedia, the University of Toronto and Ryerson University, Cindy Carey, Darlene Kent, Danny Lee, Catherine MacKinnon, Sima Soudian, Jutta Treviranus, Katie Varney, Laurel Williams, and Daniel Yang for all of their work on this project.


Reprinted with author(s) permission. Author(s) retain copyright.