2001 Conference Proceedings



HOW DOES A BLIND SWITCH USER WRITE? Orthographic output for people with profound multiple disabilities

Paul Blenkhorn - University of Manchester Institute of Science and Technology
Paul Hawes - Sensory Software International

Introduction

In the provision of access to computer systems, any one disability tends to make the access systems devised for people with a different disability more difficult to use. Switch-operated input, although slow, does not present a huge barrier, provided that you can see the screen. Similarly, many blind people operate computers extremely effectively by touch-typing with the aid of a speech synthesis system.

However, for someone with a double disability, trying to cope simultaneously with a switch or voice input system while relying entirely on speech feedback can be extremely confusing, and success is rare.

A few years ago, I gave a paper (Communication Matters, 1994) on people with multiple disabilities using AAC by means of different combinations of adaptive software. Most of these solutions depended on one disability or another being less profound. Solutions for the intelligent, totally blind switch user remain partial at best. Most telling of all was the very small number of success stories. In over 20 years of working in the field of assistive technology, I can recount only a handful of users who have been able to use such systems effectively.

The no-compromise solution

In theory, one could use an on-screen keyboard with auditory scanning to drive a standard word processor, with a screen reader providing the speech feedback from the word processor. In practice this does not work.

First, such a system requires an enormous effort to learn (remember, the user must recall every function from memory), and with the complexity of modern software it is easy to get lost if you cannot see the screen. A further difficulty arises from the differing features and behaviour of the various assistive systems, and from their interaction with application programs. It is a serious technical challenge to control a screen reader and an application simultaneously with a switch input program.

The total compromise solution

The opposite approach is to write software that handles the whole of the input and output of the computer in a single program, with all the speech prompts built in and unwanted complications kept out. The advances in modern development tools make this a more attractive option than in the past, and a number of such programs have appeared over the years, including an abandoned project by the author.

The major problem with this is the absolute limitation that it places on the capabilities of the system. You have one, and only one, application that you can ever use.

Our solution

There is a clear need for systems that are easy to teach to users, and yet provide a sensible level of functionality. A middle way is to use an on-screen keyboard that supports auditory scanning, and to use this with speech-enabled applications. In this way, the user is presented with a manageable system from the outset, but is not prevented from using additional applications in the future.

In practice there are still many problems, such as conflicts between the speech used in different programs, incomplete speech output from talking software, and lack of functionality in on-screen keyboards.

We had recently completed two separate projects: an advanced on-screen keyboard and a talking word processor for use by young blind typists. At the same time we were working with three people who had complex and challenging problems arising from MS. This was the perfect opportunity to modify both programs to ensure that they could be operated together seamlessly.

Some case histories

It would be worth pausing for a while to look at some success stories from the past.

Shirley

Shirley lost both her hand function and her vision to MS, and she used switch input and voice output even before personal computers as we know them existed. She used a machine called a Microwriter, which had a chord keyboard similar to that on the present-day BAT keyboard. The machine was modified with a scanning circuit with lights and differently pitched beeps to replace the keys. This was plugged into an early Votrax synthesizer, which had to be disconnected to make way for the printer when her document was finished. The system took her a year to master, after which she used it to produce personal correspondence and to become the secretary for a support organisation for switch users.

Jonathan

Jonathan has cerebral palsy and is totally blind. He had no literacy skills and no method of effective communication until the age of about 15. He then learnt, like Shirley, to use a scanning Microwriter with speech. Later, he was accepted for a college course and needed a system that would allow him to operate WordStar on a PC. He learnt a new coded input system based on the CID 2 switch code. This uses two switches that are held down while a counted number of beeps are sounded; for example, holding the left switch for one beep and the right switch for two might enter a particular letter. The system was transparent to the PC, and allowed him to operate the Hal screen reader together with one of the many simplified WordStar clones then in use.

Jonathan now lives in his own flat.

Mark

Mark was profoundly deaf and totally blind. He had no speech and severe physical impairments.

His communication system consisted of a portable computer, a Morse key and a vibrator placed against his neck. He communicated with other people by entering Morse code, and the text could then be read on the computer screen. To talk back, you typed a message on the keyboard, and it was tapped out in Morse code on Mark’s neck by the vibrator.

There are more, but not a great many more. It must be obvious that people like Shirley, Jonathan and Mark possess a degree of determination that is not given to many. The challenge of mastering such systems is great.

Our solution

As explained in the introduction, we were anxious to produce a solution that would be easier to learn, but that would also provide a reasonable degree of functionality to the user. Whilst completing the software, we worked with some users, addressing the practical problems as they occurred. The talking word processor needed to be controlled entirely by the on-screen keyboard, and the speech feedback needed to be total.

Getting the features in the programs right was only a part of the task. It was equally important to work out the methodology that would work effectively for people.

Getting adequate speech support

AllWrite is a talking word processor that was originally written specifically to help blind children learn typing and literacy skills. Using Word and a full-featured screen reader is a tall order for any blind newcomer to computers. Although there are plenty of educational word processors with speech output, they are all aimed at reinforcing literacy, and do not allow the detailed interaction between speech and text that a blind user needs. Talking cursor and editing keys are needed, along with talking menus and, most importantly, the option to use a high-quality text-to-speech program. AllWrite has these features, and can be used with DECtalk. DECtalk is ideal as it is not only highly intelligible but also very fast in its response. (Technical note: we do not use the SAPI version.)

Grids suitable for auditory scanning

HandsOff is an advanced on-screen keyboard with many customisable features that allow it to be configured to both the user and the application program being run.

The first task is to provide auditory scanning. Typically, this has meant laboriously scanning the alphabet one letter at a time. However, with a suitable grid layout, auditory scanners can use row/column scanning too. The grid is made with the vowels at the beginning of each line, thus:

A a b c d
E e f g h
I i j k l m n
O o p q r s t
U u v w x y z

As the rows scan, the speech says "A row", "E row", and so on. It is easy for people to remember after which vowel the letter they want occurs. When the row is selected, it reads that part of the alphabet in order. The locations on the right-hand end of the short rows were used for a few other common characters, such as space, enter and punctuation.
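To make the mechanics concrete, here is a minimal sketch, in Python, of row/column auditory scanning over this vowel-anchored grid. It is only an illustration under assumptions of our own, not the HandsOff implementation: speak() stands in for the speech synthesizer and simply prints what would be spoken, and the user's two switch presses are simulated by supplying the target letter in advance.

    # Vowel-anchored grid: each row starts with a vowel, as described above.
    GRID = [
        ["a", "b", "c", "d"],
        ["e", "f", "g", "h"],
        ["i", "j", "k", "l", "m", "n"],
        ["o", "p", "q", "r", "s", "t"],
        ["u", "v", "w", "x", "y", "z"],
    ]

    def speak(text):
        # Placeholder for the speech synthesizer.
        print("SPEECH:", text)

    def scan_to(target):
        """Announce the scan a user would hear while selecting `target`."""
        for row in GRID:
            speak(row[0].upper() + " row")      # "A row", "E row", ...
            if target in row:                   # first switch press: pick the row
                for letter in row:
                    speak(letter)
                    if letter == target:        # second press: pick the letter
                        return letter
        return None

    scan_to("k")    # speaks "A row", "E row", "I row", then "i", "j", "k"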

When the user has become thoroughly accustomed to the layout, a change is made to the grid settings, so that the initial letters are replaced by word prediction.

A separate grid handles the main commands needed to operate AllWrite. Again, intelligent use is made of the row/column scanning.

FILE    new     open    save      print
EDIT    cut     copy    paste     undo      bold      underline
ARROWS  up      down    left      right     home      end
READ    letter  word    to cursor line      sentence  document
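One way to picture such a grid is as plain data: each row begins with a spoken category, and each cell pairs a spoken label with whatever the on-screen keyboard sends to the word processor when that cell is selected. The sketch below uses ordinary Windows editing shortcuts purely for illustration; AllWrite's actual bindings, and its internal reading commands, are not specified here.

    # Hypothetical representation of the command grid; the keystrokes are
    # generic Windows shortcuts, not necessarily those used by AllWrite.
    COMMAND_GRID = {
        "FILE":   [("new", "Ctrl+N"), ("open", "Ctrl+O"),
                   ("save", "Ctrl+S"), ("print", "Ctrl+P")],
        "EDIT":   [("cut", "Ctrl+X"), ("copy", "Ctrl+C"), ("paste", "Ctrl+V"),
                   ("undo", "Ctrl+Z"), ("bold", "Ctrl+B"),
                   ("underline", "Ctrl+U")],
        "ARROWS": [("up", "Up"), ("down", "Down"), ("left", "Left"),
                   ("right", "Right"), ("home", "Home"), ("end", "End")],
        # The READ commands are application-specific; None marks that gap.
        "READ":   [("letter", None), ("word", None), ("to cursor", None),
                   ("line", None), ("sentence", None), ("document", None)],
    }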

Getting the right switch methods

As well as getting the grids right, it is also important to have sophisticated switch operation. A blind user cannot watch the scan, and so finds it much harder to anticipate a coming switch press. As a result, overruns and missed scans are common unless a very low scan rate is adopted. HandsOff allows the user to reverse the direction of the scan at any time, allowing the scan rate to be higher. You can even have a slower rate when scanning backwards.
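A sketch of that idea, again in Python and under our own assumptions about timings, might look as follows: one callable reports a selection, another reports a request to reverse, and the dwell time is longer when travelling backwards. How the reversal is actually triggered by the user's switch (or switches) is left open here.

    import time

    FORWARD_DWELL = 1.0    # seconds per item scanning forwards (assumed value)
    BACKWARD_DWELL = 1.5   # slower rate when scanning backwards (assumed value)

    def scan(items, select_pressed, reverse_pressed, speak=print):
        """Scan `items` aloud; one input selects, the other reverses.

        `select_pressed` and `reverse_pressed` are callables polled once per
        item; in a real system they would read the user's switches.
        """
        index, step = 0, 1
        while True:
            speak(items[index])
            if select_pressed():
                return items[index]
            if reverse_pressed():
                step = -step                    # overshot: turn the scan around
            time.sleep(FORWARD_DWELL if step > 0 else BACKWARD_DWELL)
            index = (index + step) % len(items)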

Nonetheless, not every user will manage to use scanning, and a two-switch method is also available.

The coded option

Coded input does away with the need for auditory scanning, and allows the more able switch user to enter text far more quickly.

The attraction of Morse code is that it is very adaptable for switch users. Normally, pressing down a Morse key for a short or long time enters the dots or dashes. However, if the user can manage two switches, then the dots can be entered with one switch and the dashes with another.

The drawback with Morse (or any other learned code) is that there are so many functions needed that the codes become very cumbersome and hard to learn.

In HandsOff, this is overcome by embedding the Morse code within a normal scanning grid. The user’s Morse file can be edited in Notepad, allowing additional codes to be added for basic punctuation, selection of prediction and so on. However, these additional codes are kept to a minimum. When the user wishes to select a rarely used character or a special command function, he enters a special Morse code that drops back to auditory scanning.
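The decoding side of this can be sketched very simply. The illustration below assumes the standard international Morse alphabet: a left-switch press enters a dot, a right-switch press enters a dash, and a pause ends the character. The escape code that drops back to auditory scanning is shown as a made-up six-dot sequence, not the code HandsOff actually uses.

    # Standard international Morse alphabet (letters only, for brevity).
    MORSE = {
        ".-": "a",   "-...": "b", "-.-.": "c", "-..": "d",  ".": "e",
        "..-.": "f", "--.": "g",  "....": "h", "..": "i",   ".---": "j",
        "-.-": "k",  ".-..": "l", "--": "m",   "-.": "n",   "---": "o",
        ".--.": "p", "--.-": "q", ".-.": "r",  "...": "s",  "-": "t",
        "..-": "u",  "...-": "v", ".--": "w",  "-..-": "x", "-.--": "y",
        "--..": "z",
        "......": "<scan>",   # hypothetical escape back to auditory scanning
    }

    def decode(events):
        """Turn switch events into text.

        Each event is "L" (left switch, dot), "R" (right switch, dash) or
        "pause" (end of the current character).
        """
        out, code = [], ""
        for event in events + ["pause"]:
            if event == "L":
                code += "."
            elif event == "R":
                code += "-"
            elif code:
                out.append(MORSE.get(code, "?"))
                code = ""
        return "".join(out)

    print(decode(["L", "R", "pause", "R", "L", "L", "L"]))   # prints "ab"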

Communication function

A fairly large proportion of the users of these systems also have a speech problem.

The talking word processor can be used for real-time conversation, but stored phrases make life much easier. HandsOff allows grids to be used as message banks, with phrases stored in the cells. To make this usable, the spoken prompt is a shortened version of the message.

The prompts may be kept private if necessary, as they are spoken only on the right-hand channel of the sound system. Thus an earphone may be connected to this channel and a speaker to the left.
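The channel separation itself is straightforward to express. The fragment below is only an illustration of the principle, with names of our own choosing: the private prompt is panned hard to the right channel (the earphone) and the public message hard to the left (the loudspeaker), with the message bank shown as a simple mapping from short prompt to full phrase.

    import numpy as np

    # Hypothetical message bank: short spoken prompt -> full stored phrase.
    MESSAGES = {
        "tea":    "Could I have a cup of tea, please?",
        "nurse":  "Please could you call the nurse for me.",
        "thanks": "Thank you very much for your help.",
    }

    def route_stereo(prompt_audio, message_audio):
        """Pan the prompt hard right and the message hard left.

        Both inputs are mono arrays of samples; the result is an (n, 2)
        stereo array ready to be sent to the sound card.
        """
        n = max(len(prompt_audio), len(message_audio))
        stereo = np.zeros((n, 2))
        stereo[:len(message_audio), 0] = message_audio   # left  = loudspeaker
        stereo[:len(prompt_audio), 1] = prompt_audio     # right = earphone
        return stereo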

Conclusions

The people who have been using this combination of software and techniques are progressing rapidly. One has graduated from total passivity in the residential facility where she lives to active participation and writing articles for the newsletter. The elements that have given success are:

- Ensuring that the programs really work in harmony, requiring some specific software fixes.
- Re-thinking the way that the grid design interacts with the target application.
- Patient support from a trainer/helper who is regularly available.

A great benefit of the system is the possibility for future expansion. Not only can new grids be made for other speech-enabled software, but HandsOff can also plug many of the gaps in the speech output of other talking programs. You don’t need to have a talking menu in your application if you can make your own virtual menu in a HandsOff grid.

Another advantage is that users can practise with the system without the trainer being present all the time. With training time being so precious, this is important.

There is no doubt that people with profound multiple handicaps have frequently depended on systems that are complex to learn, and limited in functionality. The effort to get the software right, and to develop effective ways to use it, is already paying dividends.




Reprinted with author(s) permission. Author(s) retain copyright.