1993 VR Conference Proceedings



Touching Reality: A Robotic Fingerspelling Hand for Deaf-Blind Persons

Deborah Bessonny Gilden, Ph.D.
Rehabilitation Engineering Research Center
The Smith-Kettlewell Eye Research Institute
2232 Webster Street
San Francisco, CA 94115
(415) 561-1665

Brad Smallridge
Upstart Robots
567 Belvedere Street
San Francisco, CA 94117
(415) 550-0588

Abstract

Virtual reality engages our senses. But what if we could not hear or see? Can a deaf-blind person also enjoy a virtual reality?

To help alleviate the social and informational isolation associated with deaf-blindness, a computer-controlled robotic hand, called Dexter, is being developed. Letters typed on a computer keyboard signal Dexter to form the corresponding letters of the one-hand manual alphabet used by deaf persons. The deaf-blind individual places a hand on Dexter and "reads" the message being sent to him or her one letter at a time. In effect, Dexter becomes a "virtual hand" for communicating with a deaf-blind individual.

This paper also touches on the communication possibilities of a robotic hand for able-bodied individuals to enrich a "traditional" virtual reality system.


Starting from Scratch: Touch and Grow

Vision and hearing have received the lion's share of sensory research for centuries (Boring, 1957), and have long been the darlings of technology for entertainment. In keeping with this tradition, the field of virtual reality has been putting most of its energy into visual and auditory displays.

But perhaps we are missing out on an important sensory channel. Touch is a basic and powerful modality for both object identification and communication. On the evolutionary scale it wins "first" awards in two categories: the first sense developed in single-celled organisms, and the first means of communication. Protozoa, which have no ability to see or hear, locate and identify organisms and objects by literally bumping into them. This theme of the "basicness" of touch also appears in an article aptly entitled "The First Language" (Miller, 1992), which states that touch is the human neonate's first communication with the outside world. Juxtaposing these evolutionary firsts with the neonatal first presents a variation on biology's famous observation that "ontogeny recapitulates phylogeny." And we know that interacting with the world through touch stimulates neural development. The need for touch to ensure normal development in human infants was learned from observing institutionalized babies. Now animal studies "suggest that the type and location of touch early in life affect neural connections in the brain that are closely tied to behavior and cognitive skills" (Miller, p. 33). The role of touch in inducing positive physiological effects later in life as well has recently become the subject of medical and other scientific study (Ackerman, 1990; Miller, 1992; Ornish, 1982).


A Sense of Reality: Touch and Know

Because it directly connects us to the object in question, touch is often better than vision or hearing at allowing us to identify the essence of a thing. Touch tells us if that little chirping ball of yellow fluff is a live baby animal or merely an electronic chicken, if the image within the picture frame is actually an expanse of crinkled paper or simply a painting, or if the contents of the fruit bowl are delicious and nourishing or merely decorative.

Oliver Sacks recounts an interesting anecdote about how a newly sighted man, still very dependent on touch, related to a piece of wax fruit, in contrast to people who had always been able to see (Sacks, p. 72). In many instances it would be more accurate to say "feeling is believing" than "seeing is believing."

An unusual approach using technology to, in effect, extend blind persons' ability to touch objects was initiated at the Smith-Kettlewell Eye Research Institute in 1967. Researchers used the skin as a retina by displaying images from the environment on the backs of blind people via a matrix of vibrating pins. Research on this Tactile Vision Substitution System (TVSS) showed it to provide a perceptual breakthrough: for the first time, congenitally blind subjects were able to understand aspects of perspective such as foreshortening and vanishing points. It also allowed them to experience dynamic events such as the flickering of a flame (Collins, 1970; Gilden, 1991).
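
To make the TVSS concept concrete, the fragment below is a minimal sketch in C of how a camera frame might be reduced to an on/off grid of vibrating pins. The grid size, threshold, and all names are our own illustrative assumptions, not specifications of the actual TVSS hardware.

    /* Illustrative sketch: map a grayscale camera frame onto an on/off
       grid of vibrating pins, in the spirit of the TVSS. Grid size and
       threshold are assumptions chosen for illustration only. */
    #include <stdint.h>

    #define CAM_W 320          /* assumed camera resolution */
    #define CAM_H 240
    #define TACT_W 20          /* hypothetical tactor columns */
    #define TACT_H 20          /* hypothetical tactor rows */
    #define THRESHOLD 128      /* assumed light/dark cutoff */

    /* A tactor vibrates (1) if the mean brightness of its image
       region exceeds the threshold, and is still (0) otherwise. */
    void frame_to_tactors(const uint8_t img[CAM_H][CAM_W],
                          uint8_t tactors[TACT_H][TACT_W])
    {
        int bw = CAM_W / TACT_W, bh = CAM_H / TACT_H;
        for (int ty = 0; ty < TACT_H; ty++) {
            for (int tx = 0; tx < TACT_W; tx++) {
                long sum = 0;
                for (int y = ty * bh; y < (ty + 1) * bh; y++)
                    for (int x = tx * bw; x < (tx + 1) * bw; x++)
                        sum += img[y][x];
                tactors[ty][tx] = (uint8_t)((sum / (bw * bh)) > THRESHOLD);
            }
        }
    }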


Feeling Emotional: Touch and Glow

Touch is enmeshed with emotions and language in interesting ways. For example, the word "feel" refers not only to tactile perception but to one's emotional state. The interconnections among touch, communication and emotions are reflected in a wealth of English expressions including: "I don't feel good," "a touching scene," "that feels right," "say it with feeling," "he's feeling his oats," "your words really touch me," "he's out of touch with reality," "I feel like I'm with you," "keep in touch," "my, aren't we feeling touchy today," etc. These phrases affirm that touch is fundamental in connecting us to our environment, to ourselves and to each other.

Related to this is the silent communication embodied in physical contact. An encouraging pat on the shoulder or a warm squeeze of the hand convey powerful emotional messages. Ackerman (1990) reports on a research study which showed that in certain situations simple hand contact is interpreted as a smile -- even though none was displayed.


A Word in the Hand: Tactile Communication Systems

Not only does touch have an inherent quality to express emotions, but it can also be used to convey sophisticated linguistic meaning. In fact, it serves as the communication medium for thousands of Americans who, unable to see or hear, rely on touch for virtually all of their information about the world, as well as for all of their interpersonal communication. For people who are deaf and blind the world is encompassed within arm's reach, and the touch of a hand holds the meaning of everything.

The leading cause of deaf-blindness in the United States is a condition called Usher's syndrome (Duncan et al., 1988). People with Usher's syndrome typically have an early hearing loss followed by vision loss later in life. Most children with Usher's syndrome are not aware that they will experience vision loss. They are educated in special programs for the deaf, where they generally learn fingerspelling and sign language, but not braille.

Fingerspelling refers to use of the one-hand manual alphabet used by deaf persons, with each letter represented by a different hand configuration (Figure 1). When their vision loss becomes severe enough to interfere with "reading" signs, these individuals usually prefer switching to tactile (hand-on-hand) fingerspelling rather than learning braille.

There are other tactile communication systems besides tactile fingerspelling. These include Print-on-Palm (POP), a variant of POP that uses each finger to represent a different vowel, Morse code, and Tadoma (a system in which the deaf-blind person places a hand on the speaker's neck, chin, and lips and "feels the speech"). Tactile fingerspelling is usually the system of choice, however. Compared to visual fingerspelling and signing, tactile fingerspelling is slow, but people tend to prefer using a code they already know. Unfortunately, this particular code limits the deaf-blind person's input to messages from those few individuals who know how to fingerspell and who are willing to engage in a communication system requiring continuous touch -- a restricting, fatiguing process. In an attempt to remedy this situation a variety of mechanical fingerspelling robotic hands have been developed.


From Hand to Hand: A Brief History of Dexter

In 1975 the Southwest Research Institute in San Antonio, Texas, initiated the idea of a robotic hand to be used to communicate with deaf-blind people (Laenger, 1987). They built a solenoid-driven device which resembled a hand and could form the letters of the one-hand manual alphabet as the letters were typed on a keyboard.

In 1983, Dr. Deborah Gilden of the Smith-Kettlewell Eye Research Institute resurrected this approach. She collaborated with mechanical engineering students at Stanford University and with the Veterans Administration Medical Center in Palo Alto on the design and construction of a more advanced robotic hand, which she named "Dexter" (Gilden, 1987a, 1987b, 1987c; Gilden and Jaffe, 1988). This first Dexter (Dexter I) reaffirmed that a machine could reproduce the configurations of most of the letters of the one-hand manual alphabet accurately enough to be read tactilely by deaf-blind persons. There were, however, some major limitations with this prototype. Special problems arose in configuring letters which require wrist flexion, wrist rotation, or finger crossing. Nor could the hand move with reasonable speed or fluidity, as it had to return to a "neutral position" (all fingers straight) between letters. In addition, inaccurate and inconsistent hand configurations limited reliable legibility. Finally, because the first Dexter was constructed of heavy metal and needed a tank of compressed gas to power the drive mechanism, it was too heavy, bulky, and difficult to operate for general consumer use.

Another attempt, Dexter II, was sponsored by the VA and designed by a later class of mechanical engineering students at Stanford University, with guidance from Smith-Kettlewell. It used a small servomechanism which made the system much more compact and practical than Dexter I. The VA subsequently built a few other fingerspelling hands, although these do not perform the wrist rotations that are standard in forming the letters J and Z.

Upstart Robots of San Francisco is developing the third generation of Dexter. The device will consist of a human-like robotic hand connected to a drive box. Stepper motors inside the box move the fingers, thumb, and wrist into the gestures of the American one-hand manual alphabet of the deaf. This will allow deaf-blind users to tactilely read all letters transmitted to them by someone typing the letters on an interfaced keyboard. The long-term objective is to give deaf-blind users communication access to computer and telephone systems.
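
As a rough sketch of this keyboard-to-handshape flow, the C fragment below maps each typed letter to a stored table of 16 target motor positions, one per tendon. The table layout and function name are our own illustrative assumptions, not Dexter's actual firmware.

    /* Illustrative sketch: each typed letter indexes a table of target
       positions for the 16 tendon motors. The values themselves would
       be tuned empirically for legible letter shapes. */
    #include <ctype.h>
    #include <stdint.h>

    #define NUM_TENDONS 16

    /* One row per letter A..Z: target step counts for the 16 motors. */
    static uint16_t letter_shapes[26][NUM_TENDONS];

    /* Called for each character arriving from the interfaced keyboard;
       returns the target shape, or a null pointer for non-letter keys. */
    const uint16_t *shape_for_key(char key)
    {
        if (!isalpha((unsigned char)key))
            return 0;
        return letter_shapes[toupper((unsigned char)key) - 'A'];
    }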

While communication technology is rapidly linking computer users, most deaf-blind people remain socially and informationally isolated. We need to build a robotic hand capable of making the fluid movements seen in the letter transitions of "natural" (human) fingerspelling, and this must be performed at a reasonable speed. We anticipate that this sophisticated hand, complete with computer interface, will have to sell for around $6,000.

Dexter III is tendon driven, with no actuators within the hand. Our experience indicates that 16 tendons are needed for proper letter formation, with each tendon driven by its own motor. Each finger is controlled by two motors, one for the metacarpal-phalangeal joint (the joint that connects the finger to the palm) and one for the interphalangeal joints (the two more distal joints). Having two motors for each finger allows the fingers to "flap" at the metacarpal-phalangeal joints as well as "curl" at the interphalangeal joints.

The other eight motors are deployed as follows: four motors control the thumb; two motors control the wrist to allow for expressing H, J, P, Q, and Z; and two motors jointly control the abduction or adduction of the index, middle, and ring fingers for the proper formation of F, R, V, and W.
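
The allocation described in the last two paragraphs can be summarized as a channel map. The sketch below is purely bookkeeping; the counts come from the text, while the ordering and names are our own assumptions.

    /* Illustrative channel map for the 16 tendon motors. Only the
       counts (two per finger, four for the thumb, two for the wrist,
       two for abduction/adduction) come from the design described
       above; the ordering and names are hypothetical. */
    enum tendon_channel {
        /* two motors per finger: metacarpal-phalangeal "flap"
           and interphalangeal "curl" */
        INDEX_MCP,  INDEX_IP,
        MIDDLE_MCP, MIDDLE_IP,
        RING_MCP,   RING_IP,
        PINKY_MCP,  PINKY_IP,
        /* four motors control the thumb */
        THUMB_1, THUMB_2, THUMB_3, THUMB_4,
        /* two wrist motors, needed for H, J, P, Q, and Z */
        WRIST_1, WRIST_2,
        /* two motors spread or close the index, middle, and ring
           fingers for F, R, V, and W */
        SPREAD_1, SPREAD_2,
        NUM_CHANNELS    /* == 16 */
    };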

Many different driving systems were considered, but we decided to use stepper motors in order to keep the parts cost per tendon driver under about $50. RC servo motors are also in this price range, but we do not believe that this motor technology will transfer to a commercial product. We use hefty M25 frame motors that have a lot of power to overcome the "weight" of the reader's hand pressing against Dexter.

The hand was also designed to be built inexpensively. It is constructed of Delrin™ plastic that is contoured and tied together in such a way as to allow rotation of "member" cylinders around "knuckle" cylinders. It is quite literally a stack of plastic. This design allowed the hand to be built with unsophisticated shop tools. The clevis and axle joint designs may need to be evaluated to determine whether more durability or less friction is possible.

Motion control will be accomplished by a single Dallas Semiconductor DS5000 microprocessor, an upgraded version of the 8051. The DS5000 has battery-backed RAM integrated into the chip's package, which allows us to modify the letter shapes and lets the system remember them when the power is turned off. Optocouplers isolate the electrical noise of the motor side of the circuitry from the microprocessor circuitry.
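
Below is a hedged sketch of the kind of update loop such a controller might run, with step_motor() standing in for the DS5000 port writes that pulse a stepper driver. All names are hypothetical; the paper does not describe the actual firmware.

    /* Illustrative sketch: advance every motor one step toward the
       target position for the current letter, so that all 16 tendons
       approach the letter shape together. */
    #include <stdint.h>

    #define NUM_TENDONS 16

    static uint16_t target[NUM_TENDONS];   /* letter shape, copied from
                                              the battery-backed table */
    static uint16_t position[NUM_TENDONS]; /* current step count */

    /* Hypothetical stand-in for the hardware-specific port writes
       that pulse one stepper driver by one step. */
    static void step_motor(int channel, int direction)
    {
        (void)channel;
        (void)direction;
    }

    /* Called periodically from the main loop or a timer interrupt. */
    static void update_motors(void)
    {
        for (int ch = 0; ch < NUM_TENDONS; ch++) {
            if (position[ch] < target[ch]) {
                step_motor(ch, +1);
                position[ch]++;
            } else if (position[ch] > target[ch]) {
                step_motor(ch, -1);
                position[ch]--;
            }
        }
    }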


Robots, Touch, and Virtual Reality: Feeling for the Future

Telerobotics as a means of manipulating objects displayed on goggles or screens has been a central theme of virtual reality. The feedback for this manipulation has been primarily visual.

Tactile feedback could facilitate telerobotic manipulations, as well as add "presence" to any virtual reality experience. Little is known about what is needed to simulate a realistic tactile experience, but there is an incredibly diverse array of approaches currently being explored. These include a matrix of vibrating tactors, single vibrating joysticks, pneumatic gloves, etc. These approaches are all attempts to display virtual objects via touch and/or proprioception.

The communication methods of deaf-blind people have demonstrated that in addition to being ideally suited for near object recognition, touch can also be an effective receptor of linguistic information. Dexter is one example of a machine which has demonstrated that robotic hands are not only tools for grasping objects, but that they also enable people to grasp ideas. Although Dexter is designed for use by deaf-blind persons, it may have important implications for able-bodied people as well, suggesting an additional means of receiving information, especially for people concentrating on auditory and visual displays. A combination of linguistic and nonlinguistic "messages" conveyed via touch, in addition to tactile representations of virtual objects, could prove to be a powerful enhancement to auditory/visual virtual reality experiences.

More creative applications of tactile displays suggest exciting possibilities. Imagine reaching out to shake the hand of a virtual President Clinton but actually contacting a mechanical yet lifelike hand. Or imagine storing all of the mathematical parameters of your own handshake so that your great-great-grandchildren will be able to experience this tangible aspect of you while simultaneously seeing and hearing you through recorded data long after you are gone.

Because tactile displays can carry so many different types of information about the environment, as well as transmit both emotional and linguistic information, a multifaceted tactile display featuring a sort of tactile "combination plate" is conceivable. Such a rich display could form the basis of a virtual reality system for deaf-blind people, and allow them to enjoy a greatly expanded universe through a plethora of virtual experiences.

Technology has eliminated distance as an obstacle to communicating and to receiving information for those who can see and/or hear. Virtual reality has expanded opportunities for dealing with time and space even more by simulating elements of the real world and enabling us to interact with them through simulated grasping, flying, etc. Our challenge is to use technology to also expand the experiential universe of those who can neither see nor hear -- not only to put deaf-blind people in touch with the rest of the world, but to also provide them with experiences to encourage their imaginations to soar.


References


Acknowledgments

We would like to acknowledge Dr. David Boonzaier, University of Cape Town, for his role in helping with the development of Dexter I.

This research was supported by grants from the National Institute on Disability and Rehabilitation Research and The Smith-Kettlewell Eye Research Institute.

Figure 1. The One-Hand Manual Alphabet



Reprinted with author(s) permission. Author(s) retain copyright.