Return to 1995 VR Table of Contents
Kenneth Nemire, Ph.D. and Rebecca Crane
Interface Technologies Corporation
1840 Forty-First Avenue, Suite 102
Capitola, CA 95010
The objective of this research was to design a means of interacting in virtual worlds that uses the motor capabilities of people with cerebral palsy. This effort is part of a larger project to design the Virtual Environment Science Laboratory (VESL(TM)) for these users. Six people with cerebral palsy participated in this experiment. Spatial tracking technology and special prediction software were used to help them touch selected targets in a virtual world. Results indicated that two different kinds of prediction software enabled the participants to reach targets faster than they could without software enhancements, and with no overall increase in the number of misses (the participants made very few misses). The authors' attempts to improve the performance of users' interactions in a 3D virtual world were successful.
People with cerebral palsy (CP) and other physical disabilities have been excluded from many of life's experiences because of motor control limitations. The goal of this project is to design means for interacting in virtual environments that can be used by people with CP and the nondisabled alike. Previously, the Virtual Environment Science Laboratory (VESL(TM)) was created to enhance science education for all students and to accommodate persons with spinal cord injuries and other types of paralyses (Nemire, 1995a, 1995b; Nemire et al., 1994a). This paper describes further efforts to redesign VESL(TM) so that those with CP can manually interact in the virtual worlds. The diagnosis of CP indicates problems with motor control resulting from damage to the motor areas of the brain. An estimated 9000 infants develop CP every year (National Advisory on Neurological Disorders and Stroke Council, 1990). Two of every 1000 school-aged children in the United States have cerebral palsy (Paneth and Kiely, 1984; Rosen and Dickenson, 1992). More than 90% of these children live to adulthood (Evans, Evans, & Alberman, 1991), and children with CP constitute almost 25% of children whose primary reported disability is physical (McNeil, 1993). Spasticity is the most frequently observed disorder in CP and occurs in about 60-70% of this population (Eicher & Batshaw, 1993; Manske, 1990; Price et al., 1991).
Spasticity is generally observed as increased muscle tone with limited movement and control. Of the spastic patterns, quadriplegia usually involves all extremities, trunk, mouth, and head. It is the most common type, accounting for 27% of all people with CP. Diplegia involves the trunk and legs with only fine motor incoordination of the arms. It is observed in 21% of all those with CP. Hemiplegia involves one side of the body only, with the upper extremity frequently more affected. It is observed in another 21% of all people with CP (Eicher & Batshaw, 1993). Another major type of CP is dyskinesia. There are two types: choreoathetoid and dystonia.
Choreoathetoid CP is characterized by slow, writhing motions in the upper extremities and distal muscles, facial grimacing, and speech difficulties. A twisted, extended torso and upper extremity posture is observed. Dystonia is characterized by exaggerated and sudden movements. Dyskinesia occurs in 20% of all cases.
A third type, occurring in about 1% of people with CP, is ataxia. It is characterized by balance difficulties and a broad-based gait (Blackman, 1987; Eicher & Batshaw, 1993). The remaining 10% of all cases of CP are of a mixed or rigid type.
1. Description of Motor Control in People with CP. Many people with CP meet insurmountable barriers when attempting to use computers because of inadequate human interfaces. Arm and hand movements of people with CP differ from those of non-physically disabled people. Although some general patterns of movement are common to a specific type of CP, movement patterns often vary between individuals with the same type of disorder (Fetters, 1991). Those with CP may be unable to turn on a computer monitor; put a floppy disk into a drive; provide computer input through standard keyboard, joystick, or mouse-driven controls; or interact with elements of the software interface. Even modifications to standard manual or speech input devices may not be sufficient to allow many persons with CP to use a computer. Motor control problems that make it difficult to interact with computers, as they are currently designed, may include one or more of the following:
* Speech dysfunction(s) can prevent a person with CP from using voice recognition systems.
* When reaching toward an object, a person with CP may move the arm as one unit and have difficulty moving and coordinating the upper arm, forearm, wrist and fingers, resulting in a loss of control. Involuntary arm movements also are observed in people with CP. These characteristics often impair the ability to perform movements requiring timing and accuracy. A slow or irregular reaction time makes time-dependent input, such as that required to play video games, unreliable.
* A person with CP may have difficulty processing information regarding arm or head position, range and angle of movement, acceleration of upper extremity parts, muscle length and tension, and speed. As a result, people with CP often cannot precisely position their arms and head.
* Spasticity makes it difficult to access and manipulate conventional controls, and may even cause one to overshoot a target when reaching for it.
* Poor upper extremity coordination and minimal force exertion (under 100 grams or approximately the force required to type on a fairly stiff keyboard) may make it impossible to operate some physical controls.
* Hand dysfunction might prevent the operation of switches or controls that require a twist or push motion, a grasp, or a thumb to finger contact. This lack of fine motor control can undermine one's ability to effectively handle and operate mouse and touchpad input devices. Limited movement control also can cause one to strike unwanted keys when typing.
2. Computer System Solutions. There are several methods of using software and hardware adaptations to accommodate the motor capabilities of people with CP, and facilitate interaction with computers. While adaptive hardware capitalizes on a disabled person's most consistent motor abilities, adaptive software can minimize the physical demands placed on the user by allocating the bulk of the tasks to the computer, and by making predictions of the user's intentions. Input devices are selected based on an individual's controllable motor sites. As an example, if a person with dyskinesia has poor mouth, eye gaze, and upper extremity control, but has good head control, a head-controlled mouse might be chosen as an appropriate input device. Examples of adaptive input devices are listed below.
2.A. Switches. A variety of switches has been developed to meet physical requirements and mounting needs of those with CP. These can be triggered by small or large muscle movements and are made in different sizes and shapes. In each case, special switches make use of at least one muscle over which the person has voluntary control (e.g., head, finger, knee, mouth). Hundreds of switches and switch sites can be used to tailor input devices to individual needs.
2.B. Modified keyboards and key guards. These allow one to rest and stabilize the hand or enlarge the surface area of each key.
2.C. Software displays. Adaptations include enlarged character sizes, color contrasting and highlighting mechanisms, digitized or synthesized speech output, and voice recognition.
2.D. Software controls. Adaptive software programs can engage 'sticky' keys to activate multiple key combinations when the user can only depress one key at a time. Keyboard emulation is available for persons with severe movement limitations.
2.E. Word prediction programs. These can minimize keystrokes by predicting words and phrases.
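To make the idea in 2.E concrete, here is a minimal sketch of prefix-based word prediction. The vocabulary and frequency counts are made up for illustration; real systems draw on much larger language models:

```python
# Minimal word prediction sketch: rank candidate completions of a typed
# prefix by frequency, so a user can select a whole word with one action
# instead of typing every letter. Vocabulary and counts are illustrative.
WORD_FREQUENCIES = {
    "school": 150, "science": 120, "screen": 95,
    "scanning": 60, "scratch": 20,
}

def predict(prefix, vocab=WORD_FREQUENCIES, n=3):
    """Return up to n completions of `prefix`, most frequent first."""
    matches = [w for w in vocab if w.startswith(prefix)]
    return sorted(matches, key=lambda w: vocab[w], reverse=True)[:n]

print(predict("sc"))  # ['school', 'science', 'screen']
```

A real predictor would also weigh the preceding words and adapt to the user's own writing, but the keystroke-saving principle is the same.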
2.F. Scanning. In scanning input, cursors or lights scan symbols or letters displayed on computer screens or external devices. External displays and controls could be attached to one's body for easy activation. To make selections, individuals use switches triggered by movements of body parts and functions like the head, breath, finger, and foot. Scanning and other special input programs can be designed to run in the background or under multitasking. A relatively new approach to adaptive technology is Virtual Reality Technology (VRT).
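As an illustration of the scanning input described in 2.F, the following sketch steps a highlight through a symbol set and selects whichever symbol is highlighted when the single switch fires. The switch is simulated here by a function of the scan step, an assumption made for demonstration; a real system would read hardware switch events and pace the highlight at a fixed dwell interval:

```python
# Single-switch scanning sketch: the highlight advances through the symbols
# one scan step at a time; a switch press selects the currently highlighted
# symbol. Pacing (dwell time per symbol) is omitted from this simulation.
def scan_select(symbols, switch_pressed, max_steps=1000):
    """Return the symbol highlighted when switch_pressed(step) first
    returns True, or None if the switch never fires within max_steps."""
    for step in range(max_steps):
        current = symbols[step % len(symbols)]  # wrap around the symbol set
        if switch_pressed(step):
            return current
    return None

letters = ["A", "B", "C", "D", "E"]
# Simulated user: fires the switch on scan step 7, when "C" is highlighted.
print(scan_select(letters, switch_pressed=lambda step: step == 7))  # C
```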
3. Problems and Opportunities with Virtual Reality Technology. The traditional adaptive hardware and software input mechanisms, such as scanning devices and switch arrays, while good for two dimensional (2D) interaction, may not be the best options for interacting in three-dimensional (3D) virtual worlds. The 2D features of these older technologies need to be extended to 3D so the control devices can be more useful for virtual interactions. One difficulty that people with CP have in using VRT involves their ability to manipulate virtual objects. Current maneuvering in VR employs devices like joysticks, forceballs, data gloves, and spatial trackers. Some of these technologies require a high level of motor control and visual-motor coordination. Research conducted by other organizations to enhance the accessibility of virtual worlds for people who are severely physically disabled has taken different approaches to these problems.
3.A. Data gloves. A gesture recognition system used fiber optic sensors on a glove to measure finger position when signing in American Sign Language (ASL) (Newby, 1994). This has been useful for people who are hearing impaired, but the technology has not been adapted for the idiosyncratic arm movements of people with CP. Newby (1994) also used spatial trackers to track arm position and orientation (see 3.B.).
3.B. Spatial trackers. One project uses spatial trackers to track real-time arm movements in one dimension (1D) to provide people with CP an interface to a video game (Conway et al., 1994). This interface design would need to be expanded to a 3D prototype for most effective use in VR.
3.C. Software predictions. Intelligent VR combines human and robotic skills by employing Virtual Assistive Agents (VAA). VAA are logical constructs, similar to blueprints for complex system behaviors, that are designed to predict the user's actions. VAA systems work together to minimize the number of physical tasks performed by the user by executing the most logical action (Andaleon et al., 1994). Intelligent VR could take over some of the physical functions required to interact in a virtual environment.
4. Opportunities. Most of the research with VR input devices is in a developmental stage. Bringing VR from its infancy toward maturation requires designing technology that meets the needs of the people using the system, does not compromise the quality or efficiency of computer interaction for a wide range of users, and does not require the development of separate equipment for physically challenged users (Delaney, 1994; NIDRR, 1994; Nemire, 1994b; Scadden, 1993). The focus of the current project is to meet these requirements for manually interacting in VR. The authors decided to expand on previous research using 1D spatial tracking (Conway et al., 1994) and predictive software (Andaleon et al., 1994). This was accomplished by asking users to touch a target at different locations in 3D space. Spatial tracking technology and proprietary software for predicting arm movements were used to facilitate target acquisition by the users.
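The prediction software used in the study is proprietary and is not described in this paper. Purely as an illustration of the general idea, one plausible heuristic is to infer the intended target from the hand's instantaneous direction of motion, scoring each target by the cosine similarity between the movement direction and the hand-to-target direction. Every name and value below is hypothetical:

```python
import math

# Hypothetical target prediction sketch (NOT the paper's algorithm): score
# each candidate target by how well the hand's velocity points at it, so an
# assistive system could attract the virtual hand toward the likely target.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def predicted_target(hand_pos, hand_velocity, targets):
    """Return the target with the highest cosine similarity between the
    hand's velocity and the hand-to-target direction."""
    def score(t):
        to_target = [ti - pi for ti, pi in zip(t, hand_pos)]
        denom = (math.sqrt(dot(to_target, to_target))
                 * math.sqrt(dot(hand_velocity, hand_velocity)))
        return dot(to_target, hand_velocity) / denom if denom else -1.0
    return max(targets, key=score)

# Hand at the origin, moving mostly along +x: the +x target is predicted.
targets = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(predicted_target((0.0, 0.0, 0.0), (0.9, 0.1, 0.0), targets))
```

A real assist would accumulate such evidence over several samples, so that brief involuntary movements do not capture the prediction.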
5. Preliminary Investigation.
5.A. Participant Users. Six persons with CP volunteered as participants: one with spastic diplegia, four with spastic quadriplegia, and one with spastic hemiplegia. Selection criteria included good or corrected vision, access to good postural stabilization for performing controlled arm movements, and a minimum age of ten years. Five of the participants were children (10 to 15 years) and one was an adult. Three were male and three were female. Users had mild to severe motor impairments, and none had speech or cognitive impairments.
5.B. Apparatus. The components of the VESL(TM) prototype used for this study were an image generating system, a visual display, and electromagnetic spatial trackers on the back of the hand and on the forearm near the elbow. Visual stimuli were one white target star and two blue distracter stars on solid red rectangles positioned in a 3D landscape so that participants could reach them.
5.C. Procedure. After reading and signing an informed consent form, participants were given instructions. Sensors were attached to the forearm and the back of the hand. The virtual world was presented on a head-mounted color display for the first three participants. Because two of these participants complained of discomfort, the last three participants viewed the targets on a 17-inch color monitor. Seating adjustments were made when necessary to ensure that participants were positioned in the most supportive manner for reaching with the hand. An auditory cue signaled the start of each task. Upon hearing the signal, participants moved their arm and hand, and a virtual representation of their arm and hand moved accordingly. Their task was to touch their virtual hand to the white star on the virtual target, as accurately as possible, and to avoid the two blue stars. Gentle feedback about success or failure, and a prompt to move back to the starting position, were provided on the screen after each trial. Participants were given fifteen practice trials before starting; these were the same as in the unenhanced (Control) condition described below. There were three experimental conditions with fifteen trials per condition. Two conditions used two different versions of predictive software to assist participants in touching the target (Predict 1 and Predict 2).
The third condition was an unenhanced condition (Control) in which participants touched the target without any assistance. A two-minute rest interval was given between conditions. The experimental sessions were approximately one hour long. Participants' performances were recorded on videotape, and performance data were collected by computer. The data discussed in this paper are (1) the total time participants took to move their hand from the starting position to the target (movement time), and (2) the number of times participants selected the wrong target (misses). A one-way within-subjects Analysis of Variance (ANOVA), in which each of the six participants was tested under each of the three experimental conditions, was performed. Planned comparisons were performed to determine whether the movement times or misses during the two enhanced reaching conditions (Predict 1 and Predict 2) differed from those during the unenhanced reaching condition (Control). We predicted that the two versions of predictive software would result in faster movement times and fewer errors in reaching the correct target than the unenhanced movement method.
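The analysis just described can be sketched in a few lines. The data matrix below is illustrative only (the paper does not report per-subject raw scores); with six subjects and three conditions it reproduces the reported degrees of freedom, F(2,10):

```python
# One-way within-subjects (repeated-measures) ANOVA sketch: each subject is
# measured under every condition, so between-subject variability is removed
# from the error term before computing F.
def within_subjects_anova(data):
    """data[s][c] = score of subject s under condition c.
    Returns (F, df_conditions, df_error)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    cond_means = [sum(row[c] for row in data) / n for c in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subjects = k * sum((m - grand) ** 2 for m in subj_means)
    ss_conditions = n * sum((m - grand) ** 2 for m in cond_means)
    ss_error = ss_total - ss_subjects - ss_conditions
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_conditions / df_cond) / (ss_error / df_err)
    return f_stat, df_cond, df_err

# Illustrative movement times (seconds) for 6 subjects under Predict 1,
# Predict 2, and Control -- made-up numbers, not the study's data.
times = [[6, 3, 15], [7, 4, 18], [5, 3, 14], [8, 3, 17], [6, 2, 16], [6, 4, 16]]
f_stat, df1, df2 = within_subjects_anova(times)
print(f"F({df1},{df2}) = {f_stat:.2f}")  # F(2,10) = 423.42 for these data
```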
6. Results.
6.A. Movement times. The average (mean) movement times to the target (and standard errors) for each condition were as follows: Predict 1, 6.3 seconds (0.87); Predict 2, 3.11 seconds (0.40); Control, 16.1 seconds (1.66). The ANOVA indicated significant effects of the enhanced movement conditions on movement time (F(2,10) = 22.70, p < .01). The planned comparisons showed that users moved to the target faster during Predict 1 than during the unenhanced condition (t(10) = 4.89, p < .01), and faster during Predict 2 than during the unenhanced condition (t(10) = 6.46, p < .01).
6.B. Number of misses. The average numbers of misses (and standard errors) for each condition were as follows: Predict 1, 3.5 (0.05); Predict 2, 4.2 (0.05); Control, 1.2 (0.03). The ANOVA indicated no significant effect of the predictive conditions on number of misses (F(2,10) = 3.53, p > .05). Despite the nonsignificant omnibus finding, we performed the planned comparisons to find out whether one predictive condition resulted in fewer misses than another. These comparisons indicated that users did not miss the target more during Predict 1 than during the unenhanced condition (t(10) = 1.97, p > .05), but that users did miss the target more during Predict 2 than during the unenhanced condition (t(10) = 2.53, p < .05).
7. Discussion. The authors' attempts to improve the time and accuracy of users' interactions in the 3D virtual world were successful. Results of the data analyses showed that both prediction procedures enabled the users to touch the target in a shorter time than the unenhanced condition did. The results also showed that, overall, users did not miss the target more during the prediction conditions than during the unenhanced condition.
Further analyses revealed that users missed the target more during Predict 2 than during the unenhanced condition, but that the numbers of misses during Predict 1 and the unenhanced condition did not differ. These results are being used to further refine the predictive software, and they provide a solid foundation for testing with a wider range of conditions and users. By building on these technologies, the authors expect to create new and effective ways to make virtual worlds accessible to all people, including those with CP.
This material is based upon work supported by the U. S. Department of Education under contract number RA94129013 and the U. S. Department of Defense under contract number M67004-95-C-0016. We thank John F. McLaughlin, M.D., Kathy Appenrod, O.T., and the participants for their generous contributions to this study. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the U. S. Department of Education, U. S. Department of Defense, Dr. McLaughlin, or Ms. Appenrod.
Andaleon, D. D., Maples, C. C., Miller, K., & Stansfield, S. A. (1994). VR-based training research at Sandia National Laboratories. Proceedings of the Second Annual International Conference on Virtual Reality and Persons with Disabilities. Northridge, CA: California State University.
Blackman, J. (1987). Disorders of motor development. In M. L. Wolraich (Ed.), The Practical Assessment and Management of Children with Disorders of Development and Learning (pp. 164-193). Chicago, IL: Year Book Medical Publishers, Inc.
Conway, M., Vogtle, L., & Pausch, R. (1994). One-dimensional motion tailoring for the disabled: A user study. Presence, 3(3), 244-251.
Delaney, B. (1994). Building on VR. CyberEdge Journal, 4(5), 1, 4-6.
Eicher, P. S., & Batshaw, M. L. (1993). Cerebral palsy. Pediatric Clinics of North America, 40(3), 537-551.
Evans, P. M., Evans, S. J. W., & Alberman, E. (1991). Cerebral palsy: Why we must plan for survival. Archives of Disease in Childhood, 65, 1329-1333.
Fetters, L. (1991). Measurement and treatment in cerebral palsy: An argument for a new approach. Physical Therapy, 71(3), 244-247.
Manske, P. R. (1990). Cerebral palsy of the upper extremity. Hand Clinics, 6(4), 697-709.
McNeil, J. (1993). Americans with Disabilities: 1992. Data from the Survey of Income and Program Participation (P70-33). Washington, DC: U.S. Department of Commerce.
National Advisory on Neurological Disorders and Stroke Council. (1990). Implementation Plan: Decade of the Brain. Bethesda, MD: National Institutes of Health.
National Institute on Disability and Rehabilitation Research. (1994). Focus group on universal design: Report of proceedings, July 19-20. Washington, DC: U.S. Department of Education, OSERS, NIDRR.
Nemire, K. (1995a). Learning in a virtual environment: Access by students with physical disabilities. Proceedings of the Tenth Annual International Conference on Technology and Persons with Disabilities. Northridge, CA: California State University.
Nemire, K. (1995b). Virtual Environment Science Laboratory for students with physical disabilities. Ability, 15, 22-23.
Nemire, K., Burke, A., & Jacoby, R. (1994a). Human factors engineering of a virtual laboratory for students with physical disabilities. Presence, 3(3), 216-226.
Nemire, K. (1994b). Building usable virtual environment products. CyberEdge Journal, 4(5), 8-10, 12, 14.
Newby, G. (1994). Gesture recognition based upon statistical similarity. Presence, 3(3), 236-243.
Paneth, N., & Kiely, J. (1984). The frequency of cerebral palsy: A review of population studies in industrialized nations since 1950. Clinics in Developmental Medicine, 87, 46-56.
Price, R., Bjornson, K. F., Lehmann, J. F., McLaughlin, J. F., & Hays, R. M. (1991). Quantitative measurement of spasticity in children with cerebral palsy. Developmental Medicine and Child Neurology, 33, 585-595.
Rosen, M. G., & Dickenson, J. C. (1992). The incidence of cerebral palsy. American Journal of Obstetrics and Gynecology, 167, 417-423.
Scadden, L. A. (1993). Maximizing market share through design. CE Network News, January, 10-11.