2002 Conference Proceedings



SIGN LANGUAGE 3D ANIMATION SOFTWARE AUTHORING TOOL: PROVIDING ACCESS TO DIGITAL MEDIA

Danny Roush, M.A. CI/CT
Senior Linguist
Vcom3D, Inc.
3452 Lake Lynda Drive, Suite 260
Orlando, FL 32817
407.737.7310

The Need

The growing availability of multimedia computers at work, at school, and at home presents an opportunity to enrich the lives of Deaf and hearing individuals alike through new and improved interactions between Deaf and Hard of Hearing people and their hearing acquaintances and colleagues. While audio capabilities now enhance a hearing person's computer interactions by providing cues and feedback in education, entertainment, and communication applications, Deaf and Hard of Hearing individuals generally do not benefit from these enhancements. Moreover, although graphical interfaces, text, and images presented in software programs and on Web pages are all visual, being visual does not make them equally accessible to Deaf and Hard of Hearing individuals. Research indicates that the median reading comprehension of seventeen- and eighteen-year-old Deaf and Hard of Hearing students is at a fourth-grade level (Holt et al., 1997; Gallaudet Research Institute, 1996). Thus, English text, although visual, is not necessarily accessible. Many Deaf and Hard of Hearing individuals prefer to communicate in some variant of American Sign Language (ASL), a visual-gestural language whose grammar differs from that of English and which is often their first language, or ideally to have English captioning presented concurrently with the ASL.

Although video playback and video teleconferencing with interpreters are available to Deaf and Hard of Hearing individuals, they address only part of the problem. Playback from disk, which provides sufficient quality for sign visualization, is limited to prerecorded media and does not support significant user interaction. Video teleconferencing, unless carried over prohibitively expensive high-speed links, does not provide sufficient quality to communicate signs effectively. Even if these limitations were overcome, they would not improve the ability of those not trained in sign language to communicate with Deaf persons, and they would not enrich the interactions of Deaf and Hard of Hearing individuals with interactive, non-video-based entertainment and education software.

SigningAvatar(tm) Technology

SigningAvatar(tm) educational CD-ROM and server-based products, now in use in over 30 school systems, have demonstrated the benefits of using computer-generated three-dimensional (3D) animated characters (also known as avatars) that can communicate in variants of ASL to provide access and increase English literacy for Deaf and Hard of Hearing children. The use of animation data, as opposed to video formats, offers the following key advantages over other graphical representations of sign currently available over the Internet and on CD-ROM:
* The scripts that tell the avatar what to sign can be stored and transmitted using only about 2% of the data or bandwidth required for comparable video representations; the signed translation of a hundred pages of text can be stored on a floppy disk or downloaded in minutes (see the rough calculation below).
* Whole sentences of signs can be linked together smoothly, without abrupt jumps or collisions between successive signs.
* Manual signs and facial expressions can be combined in any desired manner.
* The author or user can easily control speed and repetition of signing content.
* New content can be developed cost-effectively using only the authoring tool or a text editor.
* Signs animated on one character can be easily applied to other characters, including characters of different ages, ethnicities, and genders, as well as cartoon and animal characters, as may be appropriate for children's stories or to add entertainment value.
Students' comprehension of the stories on the CD-ROM increases significantly when the stories are presented with both English text and variants of ASL (Sims, 2000).
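
As a back-of-envelope illustration of the first advantage above: the text gives no specific video bitrate, so the short Python sketch below simply applies the stated 2% ratio to the standard 1.44 MB floppy capacity to show how much comparable video a full floppy of script data would stand in for.

    # Back-of-envelope check of the ~2% claim. The 1.44 MB floppy capacity is
    # standard and the 2% ratio comes from the text; no video bitrate is assumed.
    floppy_mb = 1.44
    script_to_video_ratio = 0.02
    equivalent_video_mb = floppy_mb / script_to_video_ratio
    print(f"A full floppy of script data corresponds to roughly {equivalent_video_mb:.0f} MB of video.")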

SigningAvatar(tm) Script

Until now, SigningAvatar(tm)-enabled content could only be authored by highly trained users who are skilled in sign language and know the conventions of the SigningAvatar(tm) script. This script, or code, includes commands that control the avatar's signs, facial expressions, eye gaze, and timing. Authoring this script directly can be a tedious task. Once the script is created, it is embedded in the HTML code of a Web document and accessed through a Web browser.
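
The script syntax itself is proprietary and is not reproduced here. Purely as an illustration, the Python sketch below suggests the kind of information each scripted utterance must specify; every name and value is an assumption made for explanation, not the actual SigningAvatar(tm) script format.

    # Hypothetical sketch only: names and values are illustrative, not the
    # actual SigningAvatar(tm) script syntax.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SignCommand:
        gloss: str                               # manual sign to perform, e.g. "MOVIE"
        duration: float = 1.0                    # relative timing control
        facial_expression: Optional[str] = None  # e.g. a brow raise for a yes/no question
        eye_gaze: Optional[str] = None           # e.g. toward the viewer or a point in space

    # Even a short question requires coordinating all of these channels for
    # every sign, which is why authoring the script by hand is tedious.
    utterance: List[SignCommand] = [
        SignCommand("YOU", facial_expression="brow-raise", eye_gaze="viewer"),
        SignCommand("FINISH", facial_expression="brow-raise"),
        SignCommand("SEE", facial_expression="brow-raise"),
        SignCommand("MOVIE", facial_expression="brow-raise", duration=1.3),
    ]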

Automated Scripting

The SigningAvatar(tm) Authoring Tool will allow individuals to rapidly create SigningAvatar(tm) scripts for sign-enabled content. The Authoring Tool lets the user either import text from another document or type English sentences directly. It will recognize over 7,500 English words that are mapped onto 2,000 ASL signs. For English words with multiple meanings (e.g., watch, wind, left), the Authoring Tool will automatically select the most grammatically probable sense of the word and its associated sign. (There are different signs for the two senses of the word left: one meaning the direction opposite of right, the other meaning the past tense of the verb to leave.) This process is called disambiguation. If an entered word is not recognized, the Authoring Tool will fingerspell it. At this point, the rendered signed content can be considered a word-for-word translation, or transliteration, of the English text. For some situations, transliterated content is sufficient for effective communication and access, depending on the preferences and needs of the consumers viewing the content.
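
A minimal Python sketch of this lookup, disambiguation, and fingerspelling fallback is shown below. The toy lexicon, the trivial disambiguation heuristic, and all names are assumptions made for illustration; the actual Authoring Tool uses its own 7,500-word dictionary and grammatical analysis.

    # Illustrative sketch only; names, data, and the disambiguation heuristic
    # are hypothetical stand-ins for the Authoring Tool's own methods.
    from typing import Dict, List

    # Toy lexicon: each English word maps to one or more candidate sign glosses.
    LEXICON: Dict[str, List[str]] = {
        "left": ["LEFT-DIRECTION", "LEAVE-PAST"],
        "watch": ["WATCH-TIMEPIECE", "WATCH-LOOK-AT"],
        "store": ["STORE-SHOP"],
    }

    def fingerspell(word: str) -> List[str]:
        """Fall back to spelling an unrecognized word letter by letter."""
        return [f"FS-{letter.upper()}" for letter in word if letter.isalpha()]

    def choose_sense(word: str, candidates: List[str], context: List[str]) -> str:
        """Stand-in for disambiguation. The real tool selects the grammatically
        most probable sense; here a trivial heuristic fakes that decision."""
        if word == "left" and "turn" in context:
            return "LEFT-DIRECTION"
        return candidates[0]

    def transliterate(sentence: str) -> List[str]:
        """Produce a word-for-word gloss sequence (a transliteration)."""
        words = sentence.lower().split()
        glosses: List[str] = []
        for word in words:
            candidates = LEXICON.get(word)
            if candidates:
                glosses.append(choose_sense(word, candidates, words))
            else:
                glosses.extend(fingerspell(word))  # unknown words are fingerspelled
        return glosses

    print(transliterate("turn left at the store"))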

Advanced Authoring Controls

If the goal of the signed content is grammatical ASL, the Authoring Tool provides an interface to edit the transliteration. The interface layout is designed to be non-linear, meaning that the user can insert, copy, move, paste, and delete content at any point without having to re-enter it (much like using a word processor versus a typewriter). The interface also has separate horizontal tracks of information situated along a timeline, which allows the user to easily insert and arrange concurrent commands: there is a track for manual signs, a track for facial expressions, and a track for eye gaze behavior (a simple model of this track layout is sketched after the list below). Many other features assist the user in refining the signed content, including:

* Auto-complete feature for inserting text
* Auto-scroll of the word index preview
* Graphical preview and description of 24 possible facial expressions
* Ability to graphically select the gaze point to track the viewer, the hand or a selected point in 3-D space
* Ability to graphically "stretch" facial expressions and eye gaze behavior over multiple signs
* Ability to graphically tweak the timing of the signs
* A 3-D viewer window to preview the signed content
* Controls to change the avatar (nine characters are available), the viewing position, and the background color
* Ability to select smaller sections of the signed content for previewing.
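
As noted above, the track-and-timeline layout might be modeled roughly as in the Python sketch below; the class and field names are assumptions made for illustration, not the tool's actual data structures.

    # Rough sketch of a track-based timeline; all names are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TrackItem:
        label: str    # a sign gloss, a facial expression, or a gaze target
        start: float  # start time on the shared timeline, in seconds
        end: float    # end time on the shared timeline, in seconds

    @dataclass
    class Timeline:
        signs: List[TrackItem]
        facial_expressions: List[TrackItem]
        eye_gaze: List[TrackItem]

        def stretch_expression(self, index: int, new_end: float) -> None:
            """'Stretch' a facial expression so it spans several signs, as the
            graphical editor allows."""
            self.facial_expressions[index].end = new_end

    # A brow raise initially covering one sign, then stretched over two signs,
    # with gaze held toward the viewer throughout.
    timeline = Timeline(
        signs=[TrackItem("YOU", 0.0, 0.6), TrackItem("FINISH", 0.6, 1.2)],
        facial_expressions=[TrackItem("brow-raise", 0.0, 0.6)],
        eye_gaze=[TrackItem("viewer", 0.0, 1.2)],
    )
    timeline.stretch_expression(0, 1.2)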

Using some of these features, the user can make the following grammatical changes:

* Use of grammatical ASL facial expression
* Use of grammatical eye gaze
* Omission of articles, prepositions and "to be" verbs
* Emphasis in the form of change in speed, or holding of a sign
* Use of signs that help organize a list of objects or persons using the fingers of the non-dominant hand
* Modified sentence structure (e.g., subject-verb-object to object-subject-verb).

Plans for future development include graphical tools that will allow the user to spatially inflect signs.

Publishing Enabled Content

After the user has refined the transliteration, the SigningAvatar(tm) script can be exported as a text file or pasted directly into the HTML code of a Web document. Templates will be provided that allow authors to easily integrate this technology into their HTML documents. The SigningAvatar(tm) script can also be embedded in other digital media using ActiveX controls, and used to develop CD-ROM software that is accessible to Deaf and Hard of Hearing individuals.

Conclusion

The SigningAvatar(tm) Authoring Tool will not only empower users to make current digital media accessible, but will also allow new educational content and activities to be created rapidly, bringing students' independent learning to a higher level. This technology will increase access to digitally based information for Deaf and Hard of Hearing children and adults and will promote inclusive approaches to education and employment. This accords with the language and intent of the New Freedom Initiative, recent amendments to Section 508 of the Rehabilitation Act of 1973, the Americans with Disabilities Act, and Section 255 of the Telecommunications Act, and thus will have broad societal benefits.

References

Gallaudet Research Institute (1996). Stanford Achievement Test, 9th Edition, Form S, Norms Booklet for Deaf and Hard-of-Hearing Students (including conversions of raw score to scaled score and grade equivalent and age-based percentile ranks for Deaf and Hard-of-Hearing students). Washington, DC: Gallaudet University.

Holt, Judith A., Traxler, Carol B., and Allen, Thomas E. (1997). Interpreting the Scores: A User's Guide to the 9th Edition Stanford Achievement Test for Educators of Deaf and Hard-of-Hearing Students (Gallaudet Research Institute Technical Report 97-1). Washington, DC: Gallaudet University.

Sims, E. (2000). SigningAvatars: Final Report for SBIR Phase II Project (Contract #ED-98-0045). U.S. Department of Education.




Reprinted with author(s) permission. Author(s) retain copyright.