2001 Conference Proceedings



Alisha Magilei and Lisa White
Words+, Inc.

The purpose of this paper is to demonstrate a methodology for rapidly transitioning individuals from a symbol-based AAC system to a word-based, text-to-speech AAC system. Individuals who follow this literacy continuum learn to become more effective and efficient communicators.

Non-speaking persons who are able to see but who are unable to spell or read often use a dynamic graphic display to communicate. A dynamic graphic display is one that can present a changing set of symbols to the user; computer screens with graphic capabilities are dynamic graphic displays. Modern systems allow rapid changing of the display and the manipulation of system features, which gives the programmer the ability to easily transition the individual from a pictographic-based system to a word-based system.

In this presentation, we will discuss how a physically disabled, non-speaking individual can participate in the multisensory approach of SEE IT, HEAR IT, SAY IT, and DO IT using an AAC system. We will present the literacy transition of an AAC user from the TuffTalker, an augmentative communication system using pictographic software manufactured by Words+, into a literate text-to-speech user. In the field of AAC, much research has been published on how to successfully use AAC equipment (e.g. how to program a device, strategies for building overlays, and organizing language effectively on dynamic displays). For the purposes of this paper, it will be understood that the speech pathologist or AT specialist has a basic understanding of these AAC techniques.

It is essential to remember that without a process, there is no product, and that literacy occurs on a continuum. It is a very simple process to change a symbol into a word format without the non-literate user losing the ability to communicate. However, most therapists, teachers, parents, and children do not know the process for transitioning from a symbol-based system to a word-based (literate) system.

Individuals with a physical disability often cannot write or speak. They are unable to participate in a multimodality approach to language because they cannot physically perform the kinesthetic (hands-on) or verbal aspects involved. In school, most of us learned to read and write using a multisensory (visual, auditory, verbal, and kinesthetic) approach. In reviewing the multisensory approach, we know that teachers provide a list of spelling words to students in the classroom (a visual approach: SEE IT). Often this list is written on the board so the students can copy the words down (kinesthetic: DO IT). At home, the parents say the words aloud to the child, who then spells each word aloud (auditory and verbal: HEAR IT and SAY IT). The teacher can give a spelling test in which the students write the words (auditory and kinesthetic: HEAR IT and DO IT). Then the teacher engages the students in a spelling bee (auditory and verbal: HEAR IT and SAY IT). The teacher has thus effectively used a multisensory approach (SEE IT, HEAR IT, DO IT, and SAY IT) to teach her students literacy.

In our research, we have found that there are four steps involved in teaching the process of literacy to an individual using an AAC system. The first step is to identify the target word (e.g. WANT). The second step is to pair the target word with a picture/symbol of "want"; the student thus learns to associate the picture of "want" with the written word (SEE IT). The third step is to program that symbol to say "want" along with a verbal message (e.g. "I want that") (HEAR IT and SAY IT). The fourth and final step is to program an alphabet/letters page so that the user can physically spell the word "want," either through use of a switch or by direct selection (DO IT). The individual can then touch the word "want" to SAY IT. At this point, the individual is using all four modalities (SEE IT, HEAR IT, DO IT, and SAY IT).
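The four-step process above can be sketched as a simple data model. This is a minimal, hypothetical illustration only: the names (`Cell`, `Page`) and their fields are assumptions made for clarity, not the TuffTalker's actual programming interface.

```python
# Hypothetical sketch of a dynamic-display cell and page, modeling the
# four-step literacy process described above. Illustrative only; not the
# TuffTalker's real API.

from dataclasses import dataclass

@dataclass
class Cell:
    target_word: str     # Step 1: identify the target word (e.g. "want")
    symbol: str          # Step 2: pair the word with a picture/symbol (SEE IT)
    spoken_message: str  # Step 3: program the verbal message (HEAR IT / SAY IT)

class Page:
    def __init__(self, show_symbols=True):
        self.cells = []
        self.show_symbols = show_symbols  # turned off once automaticity develops

    def add(self, cell):
        self.cells.append(cell)

    def render(self, cell):
        # While transitioning, the symbol is displayed alongside the word;
        # once symbols are turned off, only the written text remains.
        if self.show_symbols:
            return f"[{cell.symbol}] {cell.target_word}"
        return cell.target_word

    def select(self, cell):
        # Activating the cell speaks the programmed message (SAY IT).
        return cell.spoken_message

# Step 4 (DO IT) corresponds to a separate alphabet/letters page where the
# user physically spells the word by switch or direct selection.
want = Cell(target_word="want", symbol="picture-of-want",
            spoken_message="I want that")
page = Page()
page.add(want)

print(page.render(want))   # symbol paired with the written word
page.show_symbols = False  # transition: symbols off, only text remains
print(page.render(want))
print(page.select(want))
```

The key design point is that hiding the symbol changes only the presentation, so the non-literate user never loses the ability to communicate during the transition.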

After the individual has been using the TuffTalker with this multisensory approach, automaticity will develop quickly. At this point, the programmer can turn off the symbols on the page so that only the written text/target words remain. This process can then be easily applied to a text-to-speech program. Using EZ Keys for Windows software (text-to-speech word-prediction software) on the TuffTalker, we can demonstrate written phrases built from the same key sight words that the user has already mastered; for example, the phrase "I WANT that" is already in the AAC user's pictographic vocabulary. Since the AAC user has not fully transitioned into a literate user, they probably will not comprehend the written words "I" and "that" in the phrase. However, using this same multisensory process, the AAC user develops automaticity and can begin to transfer into using written text through basic knowledge of key target words. They will then be able to combine the key target words (e.g. "I," "want," "that") into other sentences (e.g. "I want McDonalds"), assuming that "McDonalds" was a sight word paired with a symbol on the AAC user's dynamic display.

Use of this multisensory approach (SEE IT, HEAR IT, DO IT, and SAY IT) is an effective method of teaching literacy to students in the regular education curriculum as well as to students with special needs. This multisensory process needs to be shared with teachers, parents, therapists, and AT specialists in order to effectively promote literacy among our non-literate AAC users.


Reprinted with author(s) permission. Author(s) retain copyright.