2001 Conference Proceedings



NOW I UNDERSTAND

Jamie Judd-Wall, Executive Director
Kathleen King, Consultant
Technology and Inclusion 
Box 150878, Austin, TX 78715-0878 
Phone: voice/message: (512) 280-7235 
FAX: (512) 291-1113
Email: jamie@taicenter.com 
Web site: www.taicenter.com


Abstract 

Our center is a non-profit assistive technology training center in central Texas. We work with Medicaid waiver programs, school districts and rehabilitation programs. We refer to the folks we work with as clients. Our clients range in age from 3 to 55 years old, though in the past we have had clients who were both younger and older. We see clients with a wide range of handicapping conditions.

Like many of you, we work with students and adults with autism and other severe communication delays. We followed the best-practice strategies of making information visual, using demonstrations prior to performance and providing appropriate prompts. However, our clients (both students and adults) weren't making the progress we wanted. We identified four main areas in which our clients continued to experience difficulty:
- following a sequence of instructions
- answering simple comprehension questions (who, what, when & where)
- using yes and no with meaning
- reducing echolalic speech

With a lot of trial and error, we developed a technology-based training program to help our clients progress in each of these areas. It is our belief that without the use of technology in this process, our clients would not have been successful. Certainly, they had seen flash cards and picture cues in the past. However, these tools alone had not been enough to create success. In our training we developed a repeatable process that not only takes advantage of best practices like technology and visual strategies, but also provides a learning design that can be applied to any new environment or set of information. We provided pictorial and voice support in the initial stages and faded that support as skills grew. The training strategies we used in following directions and yes-no teaching were repeated in the comprehension teaching. We had developed a reliable training cycle that gave our clients both a successful experience with content and a successful experience with faded support.


Here is a short description of the training of each of the identified areas of difficulty:

1. Following Directions: Our program starts with strategies for following directions. It was our assumption that if we wanted to teach or impart information, we had to gain the client's attention and they had to act on the specific information we provided ... follow our directions. Because we wanted to make the process fun and functional, we worked in a drawing environment using commonly available software such as Kid Pix Studio. We started with following a sequence. We made templates of letters or numbers and the client had to draw a line to follow the sequence. We quickly discovered that our clients had extreme difficulty moving across the screen. They had specific directionality and locational preferences ... and those preferences varied from client to client. They couldn't draw a line from 1 to 2 to 3 to 4!

We wondered if our clients were putting so much effort into maintaining movement and pressure with the mouse that they couldn't attend to other aspects of the task. If that was the case, using mouse alternatives might offer a solution.

We started using alternate mouse methods: trackballs, joysticks and touch screens. Of these, the least successful was the touch screen. Our clients couldn't sustain the pressure needed to draw a line on the touch screen, just as they couldn't sustain pressure on the mouse. We needed another approach, so we used trackballs and joysticks to eliminate the requirement of continuous pressure. These tools included programmable buttons that could be set to a 'drag' function, eliminating the need for continuous pressure.

Using structured, multisensory training with alternate mouse tools, we taught following directions. Our clients were taught to follow directions to create items with a sequence of up to 10 steps ... and they had fun doing it. We made and solved numbered mazes, shuffled the letters of vocabulary words and the digits of phone numbers, and drew lines connecting them in sequence. We required that our clients create the sequence in one long line from item to item, verbalizing the sequence as they went.

From sequencing letters and numbers, we moved to sequencing the phrases of various tasks: coming home from school, walking in the hallway and so forth. As the clients drew lines from item to item, they read or repeated the phrases in sequence. In a short period of time, they were repeating the phrases and using the information in those situations ... and reducing unwanted behaviors in those situations. When I would ask about what to do in the hallway, my clients could tell me, in sequence, what to do. Over time, we have also been able to use this information and this strategy in problem solving when a step was skipped.



2. Using Yes and No with Meaning
It was our belief that our students with autism and severe communication delays, especially those with many unwanted behaviors, had experienced so much compliance training that they had learned to respond 'yes' to almost everything. 'No' was used rarely, and almost always as a last option accompanied by strong emotions.
We wanted to teach what we referred to as a neutral no. It's like the response you'd give if someone asked if you wanted another piece of pizza or if you were cold and wanted a sweater: 'no, thanks' ... not anger, not refusal, just communicating the concept of 'that's not it'.

We started with listing items in favorite categories. We eased the language load by using software that combined text with pictures, like PixWriter. The trainer would create a bank of 12-15 pictures and type the heading "Is it ... (food, clothing, colors, etc.)?" The client would click on each item and then type yes or no. Each 'no' response was combined with comments about the silliness of including the specific item, e.g., "Do you eat bricks? No, no, no, that's so silly!" The software offered the additional feature of voice output, so all our 'silly lists' were repeated aloud and we could laugh at ourselves over and over.
We quickly moved to silly sentences, asking each other "Is that right?" and answering "No, that's silly." Our clients' parents and caregivers reported that 'no' responses increased in general conversation ... occasionally accompanied by short explanations: "No, it tastes yucky."

We started to see meaningful use of yes and no emerge ... although I cannot say that all our parents, teachers and caregivers were as excited by the increase in 'no' responses as I was!

3. Answering Simple Comprehension Questions (who, what, when & where)
In examining the errors made by our clients in responding to simple questions ... those questions whose response was contained in the text of the sentence ... we found that responses were locationally based. In other words, a client answered with either the item at the beginning of the sentence or the item at the end of the sentence. The response was not related to the content of the question.
We began to think that our clients didn't understand the categorization of responses: who is a person, where is a place and so forth. If they didn't understand that when is a time and that table isn't a time, then they couldn't answer a question about the sentence "Sit at the table at 2:00."

We restructured our 'yes-no' training activity to address the basic concepts represented in simple questioning: "Is table a time? No, that's silly, a table is a thing." Even our highly literate clients found this activity challenging. We decided to start every client on this activity with a picture-based word processor like PixWriter or Writing with Symbols. They really needed the support of the graphics because of the language load of categorizing abstract concepts. It was much easier to identify the bus driver as a person when there was a picture with the text. We were able to gradually fade the pictures and rely on the yes-no strategy as the clients gained comfort with the tasks.
After categorizing, we moved to sentence writing with the picture-based word processor; the client could then identify the person, time, place, etc. in each sentence. The final step was the move to a text-only process.


4. Reducing Echolalic Speech
Unlike most of our colleagues, at TAI we chose to work with echolalic speech rather than suppress it. It was our belief that echolalic speech is a step in language development that will be abandoned when no longer needed. Our job was to create sufficient language support that the client could confidently abandon echoing as a communication strategy.
We tried to set the language stage so that echoed speech provided information or reinforcement to the client. We acknowledged the echoed speech as a comment, occasionally echoing the client's speech to show that their message had gotten across.
By combining echoed speech with information sequencing, yes-no messages and categorical comprehension, we were able to direct echolalic speech into functional use. Part of that process was the scripting of echoed speech on the computer. As the client echoed our speech, we would type their words into a voice-output word processing program, like IntelliTalk or PixWriter. We would then perform one of several different tasks with their echoed message:
- re-order the text to create the desired statement 
- type yes-no comments to accept or negate the message 
- answer a question about the contents of the message
- expand on the contents to elaborate on the message.

In each case we used the voice-output software to assist in the process. With the voice-output software, voicing the message became a task separate from the training. The trainer was no longer the voice of the client; the client could create his or her own voice via the technology. It was easily and clearly understood when the message was the client's and when it was the trainer's. Sure enough, echoed speech reduced in length and frequency.


References:

Kid Pix Studio, by Brøderbund Software
Joystick Plus, by Penny & Giles
Joystick with keyguard, by RJ Cooper
Trackball, by Kensington
PixWriter, by Slater Software
Speaking Dynamically and Writing with Symbols, by Mayer-Johnson Company
Touch Window, by Edmark Corporation
IntelliTalk, by IntelliTools




Reprinted with author(s) permission. Author(s) retain copyright.