Go to Table of Contents for 1993 Virtual Reality Conference
Hugh MacMillan Rehab Centre
Derrick De Kerckhove
McLuhan Program in Culture and Technology
Interactive Artist, Toronto, Canada
Development in the field of alternative computer access (as an accessible method of controlling communication, writing, or vocational and educational pursuits) has reached a bottleneck. For individuals with severe physical restrictions, alternative access systems continue to be slow, effortful and cognitively taxing. The field of alternative computer access is badly in need of a brainstorm.
Virtual Reality provides the impetus to explore radical new approaches. Artists who use virtual reality as an artform have the freedom to dream. The artist, unlike other developers, can believe in things that might not survive intense rational scrutiny. Working without deadlines and without consciousness of the bottom line, the artist can pursue directions not available to developers or engineers. The artist is therefore in a position to work on long-term goals without having to justify their viability in the short term. The combined skills, knowledge and perspective of artists, adaptive technology consumers and alternative access developers may be the ingredients for the inspiration needed to drive this field beyond the present impasse.
Why Virtual Reality?
Three important aspects of virtual reality systems offer new possibilities to users with disabilities, namely: how they are controlled, what feedback is given and what is controlled.
How they are Controlled
Present alternate computer access systems accept only one, or at most two, modes of input at a time. The computer can be controlled by single modes such as pressing keys on a keyboard, pointing to an on-screen keyboard with a head pointer, or hitting a switch when the computer presents the desired choice; but present computers do not recognize facial expressions or idiosyncratic gestures, nor can they monitor actions from several body parts at once. Most computer interfaces accept only precise, discrete input. Thus many communicative acts are ignored and the subtlety and richness of the human communicative gesture is lost. The result is slow, energy-intensive computer interfaces.
Virtual reality systems open the input channel for individuals with physical impairments (Shein, Brownlow, Treviranus, Parnes, 1990). The potential is there to monitor movements or actions from any body part or many body parts at the same time. All properties of the movement can be captured, not just contact of a body part with an effector.
Given that these actions are monitored, why can the user control more in the virtual world than in the real world? In the virtual environment these actions or signals can be processed in a number of ways. They can be translated into other actions which have more effect on the world being controlled; for example, virtual objects could be pushed by blowing, pulled by sipping and grasped by jaw closure. Proportional properties such as force, direction and speed could become interchangeable, allowing the person with arthritic joints to push something harder, without the associated pain, by simply moving faster. Signals could be filtered to achieve a cleaner result. With work presently being done on processing noisy signals, computers can glean the intention from poorly controlled spastic movements. Thus the virtual hand would move without the tremor or spasm of the real hand, and the user could concentrate on the task rather than on controlling the involuntary spasms. Dysarthric speech could be reprocessed to achieve speech which is more intelligible to untrained listeners. Actions can be amplified, so that movement of the index finger could control a virtual arm swinging a tennis racket. Alternatively, movements could be attenuated, giving the individual with large, poorly controlled movements more precise control of finer actions.
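The filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not any particular system's implementation: an exponential moving average is one simple low-pass filter that suppresses fast tremor while letting the slow, intended motion through. The signal values and the smoothing constant are invented for the example.

```python
# Hypothetical sketch: smoothing a tremor-affected 1-D movement signal so a
# virtual limb follows the intended motion rather than the spasm. A small
# alpha suppresses fast jitter while tracking the slow underlying movement.

def smooth(samples, alpha=0.1):
    """Exponential moving average: a minimal low-pass filter."""
    filtered = []
    estimate = samples[0]
    for x in samples:
        estimate = alpha * x + (1 - alpha) * estimate
        filtered.append(estimate)
    return filtered

# A slow drift from 0 toward 10 with an alternating +/-2 "tremor" on top.
raw = [i * 0.1 + (2 if i % 2 else -2) for i in range(100)]
clean = smooth(raw)
# The filtered trace follows the underlying ramp; the sample-to-sample
# jitter is reduced to a fraction of its raw amplitude.
```

A real system would use more sophisticated signal processing, but the principle is the same: the virtual effector is driven by the filtered estimate, not the raw movement.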
Despite the increased number of input channels and the possible reprocessing, for many individuals who cannot speak, the number of communicative acts that they can produce will fall far short of the representational set required to control the virtual world. This will mean that an intelligent code must be developed. In order to reduce mental load and increase speed the computer will also need to assist by making intelligent guesses as to the intentions of the user. Developers designing the coding interface could study the paralinguistic components of communication to devise coding systems which are natural or instinctual to the user.
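One form the "intelligent guessing" described above can take is word prediction: given the letters selected so far, the computer offers likely completions so the user can select a whole word with one more signal instead of spelling it out. The toy vocabulary and frequencies below are invented for illustration.

```python
# Hypothetical word-prediction sketch: rank candidate completions of the
# current prefix by how often the user has produced them.

VOCAB = {"hello": 120, "help": 90, "held": 15, "hat": 40}  # toy frequencies

def predict(prefix, n=2):
    """Return the n most frequent vocabulary words starting with prefix."""
    matches = [w for w in VOCAB if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -VOCAB[w])[:n]

print(predict("hel"))   # ['hello', 'help']
```

Systems of this kind reduce the number of selections, and hence the mental and physical load, at the cost of requiring the user to scan the offered guesses.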
Researchers in augmentative communication have identified the phenomenon of the familiar assistant or facilitator. Augmentative communication users communicate very fluently with certain people who know them well and who are very sensitive to their needs.
Communication facilitated by this person is usually much faster, much more satisfactory to all parties, with fewer communication breakdowns and a lower stress level. The familiar facilitator takes on the role of interpreter or intervener. It has been suggested that the familiar partner can be used as a model for the human computer interface. By carefully analyzing what signals the partner monitors, what information the partner draws upon to interpret the signals and in what way and at what time the partner assists communication, a formula for human computer interfaces can be derived. To facilitate efficient control, communication between the user and the computer must at minimum be multimodal, fluent, and evolving, all of which present computer interfaces are not but which virtual reality systems promise to be.
Because VR systems display feedback in multiple modes, feedback and prompts can be translated into alternate senses for users with sensory impairments. Rather than manipulating graphic representations of objects on a screen, the person who is blind could control design programs through touch and receive feedback regarding the color and brightness of objects through auditory signals. The user who is blind could have an extendable virtual arm which could stretch to feel the entire environment. The environment could be reduced in size to give the larger or overall perspective (without the "looking through a straw" effect usually experienced when using screen readers or tactile displays). Objects and people could show speech bubbles for the person who is deaf. Sounds could be translated into vibrations or into a register which is easier to pick up. Environmental noises could be selectively filtered out. The user with a spinal cord injury, with no sensation in her hands, could receive force and density feedback at the shoulder, neck or head.
For the individual with unimpaired senses, multimodal feedback ensures that the visual channel is not overloaded (Baecker & Buxton, 1987). Vision is the primary feedback channel of present-day computers; frequently the message is further distorted and alienated by representation through text. It is very difficult to represent force, resistance, density, temperature, pitch, etc. through vision alone. Virtual reality presents information in alternate ways and in more than one way. Sensory redundancy promotes learning and the integration of concepts (Baecker & Buxton, 1987).
What is Controlled
The final advantage is what is controlled. Until the last decade computers were used to control numbers and text by entering numbers and text using a keyboard. Recent direct manipulation interfaces have allowed the manipulation of iconic representations of text files or two dimensional graphic representations of objects through pointing devices such as mice (Brownlow, Shein, Thomas, Milner & Parnes, 1989). The objective of direct manipulation environments was to provide an interface which more directly mimics the manipulation of objects in the real world. The latest step in that trend, virtual reality systems, allow the manipulation of multisensory representations of entire environments by natural actions and gestures.
This last step may make accessible valuable experiences missed due to physical or sensory impairments. These experiences may include early object-centered play and early independent mobility. Exploratory play, independent hypothesis testing and independent mobility are believed to be necessary precursors to the development of later academic skills. Current theory and research support the notion that early motor and sensory experiences are needed and must be adequately integrated before higher-order cognitive skills such as reading and writing can be developed. It has been shown that sensory or experiential deprivation can alter neural organization, in some cases permanently (Dore & Dumas). There appears to be a consensus among researchers that these cognitive building blocks cannot be gained through passive observation but must be child controlled (Acredolo, Adams & Goodwyn, 1984; Berenthal, Campos, & Barret, 1984; Butler, 1986; Gustafson, 1984).
In an adapted virtual environment the toddler or infant with limited mobility can experience such concepts as under, around and through. She can throw a toy down to be retrieved, spill and pour liquids, bang objects together, explore textures, shapes and densities, and actively test hypotheses. The challenge is to accurately read the child's intentions communicated through available voluntary movements. The second challenge is to act upon these intentions without unnecessarily misleading the child about the properties of the real world. What fallacies are we imprinting about the nature of the world and the laws of physics if lifting an eyelid can throw a ball into the air? Perhaps the intelligent assistance provided by the VR system can be represented as a responsive parent or helping hand which acts upon the child's signals.
In virtual environments we can simulate inaccessible or risky experiences, allowing the user to extract the lessons to be learned without the inherent risk. Virtual reality systems can allow users to extend their world knowledge. Many children with disabilities are deprived of rich and varying experiences because much of the world is inaccessible and because of the large amount of time it takes to go through mundane daily routines. Conceptualization and visualization of abstract concepts require previous experiences: we draw upon previous experiences to relate to and integrate new ideas (Baecker & Buxton, 1987). Through VR a mobility-impaired child can climb Mount Everest or even explore the local junkyard with her peers.
Children with and without disabilities can share in the experience of defying laws of physics and seeing the world from previously impossible perspectives. Participants can also manipulate their identity and their relationship with others. Virtual reality has the potential to radically expand the frequently restrictive horizons of individuals with physical and/or sensory impairments.
Why Interactive Artists?
A new breed of artist is appearing: one who has chosen to explore the man-machine interface as artwork. These are the people concerned with "interactive arts." To maintain proficiency and keep developing new work, these artists have had to develop competencies in computer programming, interactive robotics and tele-robotics, graphic design, audio-visual virtual environments, and many other highly technical skills which put them on a par with professional engineers, while still benefiting from their artistic sensibility. The artist is aware of and receptive to intangibles. Artists can add a new and perhaps richer perspective on the needs of people with or without disabilities. Although the artist works with dreams and intangibles, this does not mean that the process will not result in something practically useful.
Indeed, art and rehabilitation research and development at their best have at least five concerns in common:
1. A primary focus and emphasis on persons rather than on technology or expertise: interactive arts bring the user to the center of attention; the highly individualized nature of the aesthetic relationship between a person and an artwork also reflects the personalized relationship between the clinician and the patient.
2. An informed and searching interest in the senses and sensory modalities: the words defining the realms of art and of therapy are almost identical because they refer to the same common body of knowledge; aesthetics is at once the science of pleasurable perception and the science of sensory inputs and outputs.
3. An understanding of interactive technologies as extensions and projections of the human body: by focusing their attention on the behavior of people in their environment, interactive artists, like rehabilitation experts, are brought to recognize that the human body maintains direct and indirect relationships with the environment which go beyond the limits of the skin and the reach of the senses.
4. An interest in developing techniques to monitor, process, and interpret human input: just as rehabilitation is based on restoring individual autonomy by improving access and control, interactive arts explore the potential of technology to help users achieve expanded control over their surroundings.
5. A common desire to promote the quality of life and the richness of the human experience: the condition for success in the aesthetic experience is the same as that for the therapeutic one; artists, like clinicians, are motivated by a deeper sense of what being human is all about.
Art is a form of social therapy just as rehabilitation is a form of therapeutic artistry. Unfortunately there are few forums which bring these two cultures together. The McLuhan Program in Culture and Technology is establishing a network, or user group, in which artists, adaptive technology consumers, clinicians and adaptive technology developers will work together to generate fresh solutions to man-machine interface challenges.
The following are examples of the ideas, products and programs generated when these combined talents are brought together.
The Multimedia Camp
Virtual reality developments may produce a more accessible artform or means of creative expression. This artform could be unencumbered by inaccessible tools or instruments, allowing a more immediate link between the artist's intentions, actions and the artwork. A pilot project launched by the Microcomputer Applications Programme at the Hugh MacMillan Rehabilitation Centre will explore the use of multimedia technology and virtual reality technology to give children and adults, who cannot use traditional tools and instruments of creative expression, access to a full range of media in an integrated setting.
Art and creative expression were chosen as the focus of the camp because these are areas of endeavor without well-defined measures, leaving the greatest opportunity for pure, non-judgemental play. As stated by John Pfeiffer, art and play both entail "imitation, pretending, a measure of fantasy, the freedom to improvise, to make and break rules and create surprise." The programme will provide access to computer-mediated art not only through traditional alternative access techniques, which require discrete or digital control, but also through Virtual Reality systems which capture all of the artist's movements, allowing more fluid and forgiving control of the media. The technology gives the artist the freedom to concentrate not on what she has but on what can be created.
Tools to be used in the pilot program include the Mandela system by Vivid and the Very Nervous System by David Rokeby. Both systems process the video image of the user in real time. The Very Nervous System captures movement of any body part and translates the movement into music or manipulation of a visual image. The speed, distance, size of moving part, direction of movement, relationship of one movement with another, timing of movement and location of movement in space, can all be used to craft the music or visual image produced. The Mandela System uses video capturing technology to extract the image of the user from the background and superimpose the image on a screen displayed virtual environment.
The image of the user can touch and interact with virtual objects on the screen or take on new properties itself. Using accessible tools such as the Mandela system and the Rokeby system, the user's body can become the paint brush or the musical instrument. All of space becomes responsive to the artist's actions. Thus poorly controlled limb movements, eyegaze and facial expression all have a part to play in a virtual orchestra or dynamic visual canvas.
Because traditional assumptions about cause and effect, and about ability and skill, are put into question, it is hoped that participants will be led to explore and reshape established views about their role and relationship to others in society. It is hoped that virtual reality environments and the act of creative expression will breed new perspectives on inclusion and socially imposed divisions. Using tools such as morphing software and identity toolkits (which allow users to manipulate and disguise images of themselves), participants will be given the opportunity to play with their identity, to explore the self as a work of art, to role play and to take on other identities. Through these and other creative activities it is hoped that participants will surprise and challenge perceptions of themselves and of others. The organizers must craft a milieu which allows new creative and social possibilities.
Computer Access System
A computer input system designed for a client with locked-in syndrome illustrates how virtual reality monitoring systems designed by interactive artists have been put to use as adaptive tools. Due to a localized CVA in the motor pathways, a university student had only one remaining voluntary action: lowering and raising her eyelids. She was unable to control her gaze. She required a communication system both for communicating with unfamiliar partners and for writing. She communicated face-to-face with familiar partners using a partner-assisted auditory scan of the alphabet, where acknowledgement of the desired group or letter was signaled by raising an eyelid.
System specifications were that the system must monitor and detect the occurrence and duration of the eyelid-up signals. This task was compounded by occasional neck and facial spasms and constant nystagmus in both eyes. The system must therefore filter out noise in the form of eyeblinks and facial or neck spasms. An additional specification was that nothing be mounted on or around the client's head or face as instruments such as eyeglass frames or EMG surface electrodes triggered severe migraines.
It was determined that the client could control eyelid-up signals of two durations reliably and a third duration occasionally. Thus the task of the monitoring system was to interpret the eyelid movement as one of four signals: short eyelid up, long eyelid up, extended eyelid up, or noise (blink or spasm). These were to be used in a uniform-length binary code aided by word disambiguation.
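The four-way interpretation described above amounts to thresholding the measured eyelid-up duration. A minimal sketch follows; the threshold values are invented for illustration, since in practice they would be tuned to the durations the client can reliably produce.

```python
# Hypothetical duration thresholds, in seconds.
MIN_VALID = 0.3    # anything shorter is treated as a blink or spasm (noise)
SHORT_MAX = 1.0    # upper bound of a "short" eyelid up
LONG_MAX = 2.5     # beyond this, the signal counts as "extended"

def classify(eyelid_up_seconds):
    """Map a measured eyelid-up duration to one of the four signals."""
    if eyelid_up_seconds < MIN_VALID:
        return "noise"
    if eyelid_up_seconds <= SHORT_MAX:
        return "short"
    if eyelid_up_seconds <= LONG_MAX:
        return "long"
    return "extended"

print([classify(t) for t in (0.1, 0.6, 2.0, 4.0)])
# ['noise', 'short', 'long', 'extended']
```

The short and long signals would then carry the binary code, with the occasionally available extended signal reserved for less frequent commands.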
No commercially available technology met her needs. Infrared switches mounted around the eye were very sensitive to changes in mounting position and were frequently triggered by nystagmus. EMG signals were too weak to allow the use of EMG switches. Other interfaces which detect ocular movement were set off by the client's nystagmus and spasms. All of these interfaces also required mounting hardware on or around the face.
A remote unencumbering monitoring system was required. David Rokeby was approached to determine whether his technology could fulfill the specifications. Rokeby was not new to the field of alternative access, having provided the technology for a group of musicians with quadriplegia, called Supercussion in 1988.
Rokeby's Very Nervous System was designed to detect and interpret changes in movement. The initial prototype monitored the presence and location of the rolling eyeball (in constant motion under the influence of nystagmus). An eyelid-up state was assumed when there was continuous movement in a certain confined area of the image. This initial system had two major disadvantages: it was too dependent on camera and head position, requiring very precise setup and adjustment; and it was dependent on a symptom which could fade over time.
A new approach, less dependent on setup and the presence of motion, was devised. The system processes the video image of the client's face to extract eyebrow and eyelash features. It then analyzes the distance between the two features. The height of the eyelid across time is interpreted to determine how long the eyelid is in the up state. This information is made available to custom communication software running in HyperCard.
Feature detection is achieved by averaging the pixel values across each horizontal line of the image. This results in two distinct dark lines across a largely blank space, the lines being the eyelash and the eyebrow. The information is filtered to remove noise and to simplify and clarify the two bands. The peaks of the two bands are found and used to determine the distance between the eyelid and eyebrow over time. The entire process is achieved in real time on dedicated video processor hardware which attaches to a Macintosh via a SCSI interface.
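The row-averaging step can be sketched as follows. This is an illustrative reconstruction of the idea, not the actual Very Nervous System code: darker pixels have lower brightness values, so the eyelash and eyebrow appear as the two rows with the lowest average brightness, and the lid height is simply the gap between them.

```python
# Illustrative sketch of row-average feature detection on a synthetic image.

def band_rows(image):
    """Return the row indices of the two darkest horizontal bands, top first."""
    row_means = [sum(row) / len(row) for row in image]
    darkest = sorted(range(len(row_means)), key=lambda r: row_means[r])[:2]
    return sorted(darkest)

def lid_to_brow_distance(image):
    """Distance in rows between the eyebrow band and the eyelash band."""
    brow, lash = band_rows(image)
    return lash - brow

# Synthetic 8-row "face" image, brightness 0-255: row 2 plays the eyebrow,
# row 6 the eyelash; everything else is bright background.
frame = [[200] * 10 for _ in range(8)]
frame[2] = [40] * 10   # eyebrow band
frame[6] = [60] * 10   # eyelash band
print(lid_to_brow_distance(frame))   # 4
```

Tracking this distance frame by frame yields the eyelid-height-over-time signal that the duration classifier interprets.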
The system is able to filter out noise in the form of blinks or face and neck spasms which result in movement of the head. If the lash comes down for less than a certain period of time it is assumed to be a blink rather than a deliberate lash down. The feature detection routine will quickly reestablish the position of the eyelid and eyebrow following a shift in head position. If the "lash" moves further than physically possible, or if the "lash" is too close to or too far away from the brow, it is presumed to be an anomaly and is ignored.
The thresholds which indicate a lash up or down state constantly adapt to the present state. As soon as the feature is detected the system establishes a minimum and maximum range. If the client tires or is not comfortable with the level of eyelid-up the threshold will adapt to the range she is adopting.
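An adaptive threshold of this kind can be sketched as below. This is a hedged reconstruction of the idea rather than the deployed implementation: the running minimum and maximum of the lid-to-brow distance define the client's current range, the up/down decision point sits midway between them, and a decay factor (an assumption here) lets the range shrink again if the client tires.

```python
# Hypothetical adaptive up/down threshold for the lid-to-brow distance.

class AdaptiveThreshold:
    def __init__(self, decay=0.99):
        self.low = None      # running minimum of the client's range
        self.high = None     # running maximum of the client's range
        self.decay = decay   # how slowly the range forgets old extremes

    def update(self, distance):
        """Feed one distance sample; return True if the lid is judged 'up'."""
        if self.low is None:
            self.low = self.high = float(distance)
        # Let the tracked range drift slowly toward the current sample...
        self.low += (distance - self.low) * (1 - self.decay)
        self.high += (distance - self.high) * (1 - self.decay)
        # ...but expand it immediately when a new extreme is observed.
        self.low = min(self.low, distance)
        self.high = max(self.high, distance)
        midpoint = (self.low + self.high) / 2
        return distance > midpoint

detector = AdaptiveThreshold()
states = [detector.update(d) for d in [4, 4, 10, 10, 4]]
print(states)   # [False, False, True, True, False]
```

Because both ends of the range track the client's recent behavior, a weaker eyelid-up later in the session is still recognized, without any manual recalibration.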
The system provides immediate auditory feedback upon lash up, upon passing the time threshold from short to long and from long to extended eyelid up, and when the lash comes down. The feedback can be reconfigured by the user.
The advantages of this system are: the client's head does not need to remain stable; no mounting is required on or around the client's head (which has both functional and cosmetic advantages); it effectively handles involuntary spasms and blinking; it is flexible without requiring complex setup; and it will interpret the target gesture across a range of camera angles and lighting situations.
Virtual Communication Book
The third example is a project still in the dreaming, planning and experimentation stage of development. For several decades designers of augmentative and alternative communication (AAC) systems have been grappling with how to adequately represent the possible messages AAC users may wish to communicate. This is especially difficult when the user is a very young child who is not yet literate and may not yet have the cognitive skills needed to interpret symbols or pictorial representations of vocabulary units. Abstract concepts and verbs provide the greatest challenge as they are difficult to represent using static images. Incorporating all the vocabulary items necessary, as well as manageable methods of retrieving or navigating through the vocabulary, is frequently impossible. Another challenge when designing communication systems for very young AAC users without pointing abilities is to provide control methods which can be mastered without operational concepts beyond the child's developmental level (Light & Lindsay, 1990).
VR gives us the opportunity to create virtual communication books. These could be navigational worlds into which the young augmentative communicator could invite his listeners in order to share his message. These worlds could be rich communication environments structured in a way that makes sense to the child. Thus live things would be alive and could perform verbs. Hot would feel hot and cold would feel cold, music would play music. The world could be manipulated and items chosen using actions the child is able to control. The rooms or places in this world would mirror the child's real and imagined worlds, but with labels to help the child make the transition from "concrete" representation to more symbolic or abstract representation of messages.
The possibility of virtual realities has led us to reexamine our assumptions about the limits of the world, our skills and our relation to others. This period of transition may supply the perfect milieu in which to devise new approaches to the challenges experienced by individuals with disabilities. Marshall McLuhan, in his book Understanding Media, queried: "if men were able to be convinced that art is precise advance knowledge of how to cope with the psychic and social consequences of the next technology, would they all become artists? Or would they begin a careful translation of new art forms into social navigation charts?" Perhaps the artist as visionary and interpreter has a role to play in exploiting the full potential of the new technology for users with and without disabilities.