1992 VR Conference Proceedings


Center On Disabilities
Virtual Reality Conference 1992


Introduction of Jaron Lanier

Founder and Chief Scientist,
VPL, Inc., Redwood City, CA

by Dr. Harry Murphy, Director,
Seventh Annual International Conference,
"Technology and Persons with Disabilities"

The part of the conference that deals with Virtual Reality and Disabilities, next up on our agenda, is sponsored by the U.S. Department of Education, Fund for the Improvement of Post-Secondary Education (F.I.P.S.E.). Representing F.I.P.S.E. is Brian Lekander from Washington D.C. Would you stand please? (applause)

I have the honor now of introducing Jaron Lanier, who is one of the leading spokespersons and one of the most visible persons in the world on the subject of Virtual Reality. Jaron is widely quoted. You will see him in the press and you will see him on television quite frequently. He is the founder of VPL in Redwood City and after serving in several administrative roles he now acts as chairman of the board and the chief scientist.

Virtual Reality has to do with three dimensional interactive computer imaging. That's a very shorthand way to think about it. It is much more than that and much more exciting than that.

As we began to explore Virtual Reality, I came across Jaron's name again and again. Finally, I asked if I could come and visit him about the potential for Virtual Reality with persons with disabilities. Jaron called to confirm the visit and asked if he could invite a colleague, Dr. Walter Greenleaf, President of Greenleaf Medical, Inc. in Palo Alto. This has become a very strong friendship and the springboard for CSUN's Office of Disabled Student Services to submit a grant proposal to the U. S. Department of Education in the area of Virtual Reality. This proposal is strongly supported by Jaron and Walter. I believe that we will soon be doing some good work together.

So, I now have the honor of introducing someone who is a strong supporter of CSUN, who believes in the potential of people with disabilities and who has come to this conference to share his work with us and to learn more about our work. Jaron Lanier.

Harry J. Murphy, Director
Office of Disabled Student Services
California State University, Northridge
18111 Nordhoff Street - DVSS
Northridge, CA  91330
(818) 885-2578  Phone
(818) 885-4929  FAX

Keynote Address: Virtual Reality and Persons With Disabilities

Jaron Lanier


I have about twenty minutes here and I think what I can do is try to introduce Virtual Reality and explain it. I want to stress that I'm really here to listen more than to speak, so I hope that some of you will come to the Question and Answer period this afternoon. It's probably more important for me to hear what you have to say than what little I can tell you. What I hope I can do is give you a clear idea of what exactly Virtual Reality is.

This is especially important right now because the movie, "Lawnmower Man," sort of shows what it isn't. So, it's a new situation in which all of a sudden there is a lot of inaccurate information around, whereas before there wasn't very much information around at all.

To begin with, Virtual Reality is a part of computer science and it represents a new approach to computer science. Instead of treating the computer as a box that's out there that is supposed to accomplish something, you put a human being in the center and say, "Let's look at the human being closely. Let's see how people perceive the world or how they act. Let's design a computer to fit very closely around them, like a glove, you might say. Let's match up the technology to exactly what people are good at."

Now let's look at how you perceive the world. You have sense organs: eyes, ears, skin and so forth and then you have motor ability, so you're able to manipulate the world.

Virtual Reality uses the approach of designing clothing devices, "computer clothing," which is worn directly over the sense organs. The objective is to instrument this clothing in such a way that you can provide exactly the stimulus to the person's sense organs that they would receive if they were in fact in an alternate environment. By programming a computer in a certain way, it is actually possible to create the illusion: a sort of experience-creating machine.

Let's go into a bit more detail. Most of the major sense organs are at the surface of the body and so they can be addressed by devices that are built into clothing.

Head Mounted Display

Let's start with the eyes. For the eyes there is a thing called a "Head Mounted Display," which is a sort of goggle that you wear. There is a little display screen in front of each eye. It completely fills your visual field with three-dimensional images. The images are wide enough that you feel like you're inside of them rather than looking at a device from the outside, like a television.

Now, there is a very critical thing about these visual images that you see when you wear a Head Mounted Display. They are generated by a computer. The computer is also measuring how you move your head so if you turn your head, for instance, the simulated place that you might find yourself to be inside of will rotate. The reason it has to do this is it has to compensate for your head-movement and that's what creates the illusion that it's real.

So, the fact that the world that you perceive is constantly changing in response to your own movements is the essence of what a Virtual Reality system is, and that's why these images have to be generated by a computer. It can't be done with videotape or film, because videotape or film always plays back the same way. Now, over the ears you wear headphones, which are ordinary headphones, but there's a special sound computer that's able to make the sound three-dimensional. Now it's possible to move sounds around in 3-D.
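The head-motion compensation just described can be caricatured in a few lines of code. This is a toy sketch with hypothetical numbers, not part of any actual VPL system: each frame, the renderer subtracts the measured head rotation, so a fixed virtual landmark appears to hold still as the head turns.

```python
# Toy sketch of head-motion compensation (hypothetical numbers):
# the renderer subtracts the measured head rotation each frame, so a
# fixed virtual landmark appears to stay put as the head turns.

def view_angle(world_angle, head_angle):
    """Angle (degrees) at which a world feature appears in the display."""
    return world_angle - head_angle  # compensate for head rotation

# A landmark sits 30 degrees to the left of the initial gaze direction.
LANDMARK = 30.0

# As the wearer turns their head toward it, the landmark drifts to the
# center of the display, exactly as a real object would.
for head in [0.0, 10.0, 20.0, 30.0]:
    print(f"head={head:5.1f}  landmark appears at {view_angle(LANDMARK, head):5.1f}")
```

The point of the sketch is the dependence on measured head pose: a videotape has no `head_angle` input, which is why, as the talk notes, the illusion cannot be produced by prerecorded media.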


The next important peripheral is the glove. With the glove, the computer measures your physical hand well enough to create a virtual hand, a computer-generated living sculpture of your hand, inside Virtual Reality. This lets you pick up imaginary things as if they were real and manipulate them. So, basically you get the idea: you have a general synthesizer of experience.

What Virtual Reality Isn't

Now, let me go over what it isn't. First of all it's not remotely possible to confuse it with physical reality. The reason is, since everything you see has to be generated by computers, it takes on a "computerish" or "cartoonie" look. Even though it feels real, it's like a real place that's simpler than physical reality. The other thing to be extremely clear about is that it's purely a computer interface. In the movie "Lawnmower Man" it's connected with psychic energy and telekinesis and smart monkeys and endless, endless weird things. Virtual Reality has nothing to do with any of those.

Virtual Reality as Communication

Let me talk about a couple of reasons why this stuff is exciting, important and of interest to people here. First of all, let's ask, what is the use of this thing? Aside from being an interesting toy, why would we care about it?

I think the primary reason is that it would seem to be an incredible new device and technique for communication between people. If you think about it you can take two different people who are each using Virtual Reality equipment and network them together. The illusion is that they are inside the same virtual place.

For instance, they could play a game of catch with a ball or go ballroom dancing or something inside this alternate place. Philosophically it's an incredible thing to consider because it would seem that this new shared virtual place is the first new, objectively shared thing since the visible world. It's a new kind of platform for having shared experiences that has a lot of the qualities of the physical world, except that it's totally programmable, it's totally under our control, it's fluid. To me, this has enormous possibilities both for work and for our culture.

Virtual Reality in Architecture and Medicine

In terms of using it for work I will just give you a very basic sampling of some of the things we're doing at VPL. Our systems have been used in the last year to help redesign a subway station in Berlin, where two architects, two members of the city council, and members of the community were able to go inside a simulation of how the subway might be built, talk about it and try it out. They were able to share ideas "experientially" instead of just conceptually. That's one example.

We're working on the same sort of thing in medicine. We have a big project to help doctors be able to demonstrate operating procedures using the data gathered from a real patient, so you can use it as a fantastic personalized medical simulator.

Human-Centered Tool for Culture

These are some of the ideas for the use of this thing. If you consider it as a "tool for culture," I think it's truly fantastic. It's like a form of shared dreaming in a waking state. What sorts of possibilities are there for sharing between people with Virtual Reality? What sorts of culture would arise?

One of the reasons Virtual Reality has gotten as much attention as it has is that it's human-centered instead of machine-centered. What I hope Virtual Reality is, is one of many examples of a philosophy of science and technology that believes things should be centered around people. The other idea is that technological power for its own sake is really a self-destructive idea, a sort of dangerous underlying idea for science.

What we really need to focus on is not so much the ultimate power of our computers, or increasing the ultimate power of our technology, but rather asking: how can we empower each other, how can we create bridges between us?

Virtual Reality and Persons with Disabilities

To me, technologies are for everybody. In a sense, the work that we might be able to do with Virtual Reality and people with disabilities is absolutely continuous and not different at all from any of the useful work we do with technology anywhere else.

It's very important to understand that for me there's no distinction and I also want to say something that you might not realize! In the history of Virtual Reality development, the community of researchers building Virtual Reality machines and Virtual Reality software has been, in many cases, almost the same community as the people working on tools for disabilities.

There's been an incredible overlap between the two communities, and I think the reason for that is very obvious: our goals are almost the same. The goal is to see how you can use technology and mold it to a person instead of asking the person to come to the technology. Again, how do you make things human-centered? Furthermore, when you work with Virtual Reality you discover that the amount of individual variation in the way people's senses work is pretty high, and so in the future Virtual Reality systems will be individualized anyway.

So in our field, I don't know if there ever will be a distinction of what work is for people with disabilities and what isn't. I think it's really all part of the same thing.

Commercial Virtual Reality Systems

One of the other things I want to make clear from a reality point of view is that Virtual Reality is still a pretty young field. Commercial Virtual Reality systems have only been available for about three years now. Most of the Virtual Reality systems are very expensive. The very good ones, the ones that begin to feel real, are extremely expensive. That's a very frustrating situation for everybody.

And obviously that means, as a practical matter, there are a lot of barriers to the use of the technology. However, that's improving very rapidly. I would predict that within five years there will start to be good-quality systems that are fairly low cost. There are some lower-cost systems becoming available now.

Concluding Remarks

Then, the other thing I wanted to mention is that really what I urge you to do is to give me a hard time in the Question and Answer period this afternoon. What I like to do is speak as little as possible and listen a lot. So, this afternoon, you are invited to ask hard questions. Let's try and share some ideas and brainstorm the problems you face.

I've reached a natural resting point for this topic so I will conclude now. I am grateful to Harry Murphy for inviting me here and for the effort that's gone into this conference. I keep on learning more about what CSUN has done and it's really awesome. I'm also grateful to have contact with all of you so I thank you very, very much.

Jaron Lanier
Original Artists 
8523 Broadway, Suite 1901 
New York, NY  10003 
212-254-1234 Phone
212-254-3121 FAX


Virtual Reality and the Physically Disabled: Speculations of the Future

A.S. Akins


This paper reports the results of an experiment in which twenty-four participants on an intra-company computer conferencing system speculated on how Virtual Reality may, in the future, impact the lives of the physically disabled. Two qualitative techniques for thinking about the future, the ninebox and the timeline, were used.

The participants felt that future Virtual Reality technology could have a very significant impact on the lives of the physically disabled. Such technology could provide environments where physical disabilities no longer matter. A major concern expressed by the participants was whether the technology would be inexpensive enough to impact significant portions of the population.

Using the results of the survey, the paper concludes with a short scenario describing a possible future in which Virtual Reality technology is available to the physically disabled.

What is Virtual Reality?

Virtual Reality, a relatively new area in computer science, is the concept of computer-generated three-dimensional simulated models combined with devices that track the movement of a user's eyes, head, hands or body. The system monitors changes in the user's movement or in the model and updates the model accordingly.

Virtual Reality is based on the theory that humans do not directly experience reality. Humans receive external stimuli, such as light or sound, which are then interpreted by the brain as reality. If a computer can send the various external stimuli the brain interprets, then the computer-generated reality is potentially indistinguishable from reality.

The current state of Virtual Reality technology requires expensive computing power. Users wear data helmets, which envelop the user in sight and sound, and data gloves or suits, which allow the computer to track the user's hand or body movements. Today's computer-generated Virtual Realities are crude in resolution and richness when compared to the real world, but they represent the first stages of increasingly complex Virtual Realities, which may one day be as real as the world in which we live today.

Studying the Future

Futurists often use qualitative tools to help them think about the future. This experiment asked people to consider the future of Virtual Reality and the physically disabled by using two such tools. Participants were shown how to use a ninebox to describe their opinion of the interplay of probability and impact for an issue or trend. Participants were also shown how to use a timeline to describe the strength of a second issue over time. Finally, participants were asked to supply any comments along with their responses.

Responses were analyzed in the following manner. Two response types were considered: the consensus response (the response that reflects the opinion of the majority of participants) and the outliers (responses which lie outside of the consensus). Identifying outlier responses is important, as their authors' opinions and comments may point to new ideas or concepts that the "group" may not have considered.

Ninebox Questions and Analysis

The ninebox, a three-by-three grid, is used to consider the interplay of probability and impact for a trend or issue. The vertical axis maps the probability of the trend or issue occurring, while the horizontal axis maps its expected impact. An individual places an x in the box that best matches their opinion of the combined probability and impact. The following issue was posed to the audience:

The impact upon society as the technology of direct mind interaction implants is developed and used.

A total of 24 responses were collected. The collected responses are shown in the following ninebox chart:

The consensus response indicates that the impact of direct mind interaction on society would definitely be high. It implies that the probability of direct mind interaction is medium, as the greatest number of participants (9) selected a medium probability. Two outlier groups were identified, both indicating a belief that the probability of direct mind interaction is low. The outlier groups commented that direct mind interaction provided numerous opportunities for misuse, and that the possibility of injury to individuals made this use of the technology too risky to consider.
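Since the ninebox chart itself is not reproduced here, the tallying procedure can be sketched as follows. Apart from the stated totals (24 responses, with 9 in the medium-probability/high-impact cell), the distribution below is hypothetical, and the outlier threshold is an arbitrary choice for illustration.

```python
# Illustrative tally of ninebox responses. Only the totals stated in
# the paper (24 responses, 9 at medium probability / high impact) are
# real; the rest of the distribution is hypothetical.
from collections import Counter

# Each response is a (probability, impact) cell.
responses = (
    [("medium", "high")] * 9 +    # consensus cell (stated in the paper)
    [("high", "high")] * 8 +      # hypothetical
    [("low", "high")] * 3 +       # hypothetical
    [("low", "medium")] * 2 +     # hypothetical outlier group
    [("low", "low")] * 2          # hypothetical outlier group
)

tally = Counter(responses)
consensus = tally.most_common(1)[0]                    # the majority cell
outliers = [cell for cell, n in tally.items() if n <= 2]

print(f"total responses: {sum(tally.values())}")
print(f"consensus cell:  {consensus[0]} ({consensus[1]} responses)")
print(f"outlier cells:   {outliers}")
```

With this invented distribution, both outlier cells fall in the low-probability column, mirroring the two outlier groups the paper describes.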

Timeline Question and Analysis

A timeline is a two-dimensional chart used to consider the impact of a technology or trend over time. The vertical axis shows the impact of the trend or issue, from low to medium to high. The horizontal axis shows the time span being considered, in this case in 5-year increments from 1990 to 2015. Participants are asked to draw a line or curve that fits their idea of how the impact of the trend will change over time. The following trend was posed to the audience:

Increasing interest in the Virtual Reality field will lead to inexpensive and available Virtual Reality technology that will enhance the lives of the physically handicapped.

A total of 24 responses were collected. The responses map to five major curve types as shown on the following graph.

The responses show a relatively wide spread and no clear consensus. Over half (13) of the participants felt that Virtual Reality could have a very significant impact on the lives of the handicapped by providing an environment, a Virtual Reality, in which they would not be handicapped. A second, smaller group (10) felt that the expense of Virtual Reality technology would almost assuredly keep it from having a large impact on the lives of the handicapped. This group felt that the expense of the technology would lead to a temporarily greater separation between the physically handicapped and the rest of the population: Virtual Reality technology would first be available to those who could afford it, a group which would probably not include the majority of the physically handicapped population. The outlier opinion shows strong initial growth for Virtual Reality, until its own popularity leads to a downturn. The outliers felt that the temptation to use Virtual Reality as an escape from reality, as an electric narcotic or drug, would lead to a move away from the use and research of the technology.
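The three opinion groups above can be caricatured as impact curves over the 1990-2015 span. The classification rule and the sample curves below are hypothetical illustrations of the curve types, not the participants' actual drawings.

```python
# Hypothetical sketch of the timeline exercise: each response is an
# impact estimate (0=low, 1=medium, 2=high) at 5-year increments,
# 1990 through 2015. The curves and the rule are invented examples.

def curve_type(curve):
    """Classify a drawn curve by its overall shape."""
    peak = max(curve)
    if curve[-1] < peak:
        return "rise then fall"  # backlash scenario
    return "rising to high" if peak == 2 else "rising but limited"

# Sample curves echoing the three opinions in the text (invented data).
optimist = [0, 0, 1, 2, 2, 2]   # VR grows to high impact
skeptic  = [0, 0, 0, 1, 1, 1]   # expense keeps impact modest
outlier  = [0, 1, 2, 2, 1, 0]   # popularity, then a downturn

for name, curve in [("optimist", optimist), ("skeptic", skeptic), ("outlier", outlier)]:
    print(f"{name:8s} -> {curve_type(curve)}")
```

Grouping free-form drawn curves into a handful of shape classes like this is essentially what the paper's "five major curve types" analysis does by hand.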

Speculations about the Future of Virtual Reality

In this section a scenario is used to present a possible alternative future in which the technology of Virtual Reality plays a prominent role. In this alternative future the progress and acceptance of Virtual Reality technology has been rapid. In particular, a great deal of Virtual Reality research has been conducted by the medical world. Virtual Reality is viewed as a technology that could help those unable to walk to walk, or the blind to see. It is seen as a liberating technology and its first beneficiaries are the handicapped.

In the Virtual World

I can run, fast as any man. By today's standards I am judged ruggedly handsome, as I bear the scars of a professional sports career. I have friends the world over. We meet every Sunday, to trade tales of the work week, to remember who we were and to talk about what we have become.

I work as a teacher and educator. I use computers to help people experience the joy of learning. They come to my classroom anxious, nervous, afraid. They see how the world has changed and continues to change, they see how people communicate in the virtual world, and they wish to become a part of it, but they are afraid. Afraid of the companion, the small, no larger than a dime, computer that is placed under the skin, that allows them to communicate in the virtual worlds. They are afraid of technology, they know of Orwell's 1984 and of later works that suggest how technology could be used to control people. Their fear hides their eyes from the freedom that awaits them.

My classroom exists in the mind's eye. I can only suggest how it may look. In each student's mind the classroom takes on a different sense of reality, of substance. In my classroom I attempt to take away their fear of being connected. I try to show the worlds that await. Worlds in which they can do things beyond their imagination, worlds unlimited by day to day reality, worlds that will allow their minds to function in new and innovative ways, worlds that will allow them to work in an unrestricted manner, to be as productive as they dare. I have a sales pitch, I give it, and some are convinced. They pull off the data helmet and body suits, and sign up for the implant, they have seen the light. Others are not so easily convinced. With them I am forced to show more and more of what Virtual Reality can provide. I show them the future and the past, I let them take part in it. I show them how to travel to any place at any point in time. I show them how to truly experience every aspect of the virtual worlds. And yet, there are still some who are unconvinced. For them, there is one final sales pitch.

I ask the unconvinced to come visit me in the real world. I give them my address and I await them. Some never show up, for what reason I do not know. For those who do come, a surprise awaits them. They arrive at the front door, it asks them to identify themselves, and once it realizes they are invited guests, it opens and beckons them to enter. In the front hallway, two newspaper articles are framed and displayed. The first article shows me at the age of 22, accepting the Heisman Trophy as the outstanding college football player of the year 2005. The second article describes how I was nearly killed in an automobile accident three years later. It describes how I lost my right arm and leg from the injuries I suffered. It also describes how I was paralyzed from the neck down, my spinal cord was severed. Finally, it describes how I sank into a coma shortly after the accident.

I stayed in that coma for five years, awaiting a miracle. A technology that I knew little about would provide my miracle. Virtual Reality was a buzzword in the nineties, great advances were made by the turn of the century. As the technology became more prevalent, people began to think how the technology could be applied. A small group saw Virtual Reality as a technology that could free those imprisoned by their own bodies and minds. As I lay in a coma, they worked to free me.

In the year 2013 I was reborn. An operation placed a companion in my neck and attached my brain to its connections. I exploded into a new world, a world where I was not bedridden, a world where I was whole again. I could walk, run, laugh, just like any other man. I was freed from the wrecked form of my own body. After they have a chance to read both newspaper articles, my visitors are shown to my room, where I lie in bed, unmoving, unconscious. They stare at my physical self, and for the first time they realize the freedom that the virtual world and its richness can provide. In my bedroom, I lie still, unmoving. But, through my companion, and in the virtual worlds in which I live, I move without restriction, with freedom.

IBM Federal Sector Division
MC 6402
3700 Bay Area Blvd.
Houston, TX  77058-1199


The Age of Magic

James R. Fruchterman


What lies beyond the Information Age? We are on the threshold of the Age of Magic, which offers astonishing possibilities for all of humanity, especially people with disabilities. Advances in computer hardware and software, medicine, nanotechnology and Virtual Reality will create a world where the individual will have extensive control over his or her personal environment and communications with other people.

Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. We are increasingly crossing into an era where technology will be able to accomplish almost anything we can imagine. Imagine uttering an arcane spell and turning your mother-in-law into a turnip! Fortunately (or unfortunately!), the place in the future where the most magical events will be occurring will not be the real world, but the almost real world of cyberspace. Cyberspace is the space inside your computer, where Virtual Reality makes almost anything possible. Still, technology will soon make many things possible in the real world as well that would be considered magic today.

The areas of technological advancement that will create the Age of Magic are computer hardware and software, nanotechnology, medicine, telecommunications and Virtual Reality. The progress in each area is worth considering, along with the likely implications of expected advances. After examining the foundation of the Age of Magic, we will step into it and examine what it seems likely to hold for people with disabilities.

The contents of this paper are based on a book project in process which I am writing jointly with David Ross, and many of the ideas expressed here were developed with or by him.

Computer Hardware and Software Advances

Most people are familiar with the amazing pace of price/performance improvements in computer hardware. Computers are getting smaller and more powerful, with more memory, more storage, and more ability to communicate and talk. Less than six years ago I was with a company that delivered a breakthrough product in Optical Character Recognition for $40,000. It is now possible to purchase the same capability for less than $500. These trends seem to be unchanging, and we can confidently make predictions based on what is likely to happen over the next few decades. If we extrapolate current trends, we can project that in 2025 it will be possible to purchase a desktop computer with processing power roughly equivalent to a human being's. In just twenty-five more years, we can expect that a desktop computer will have the processing power of the entire human race. This is somewhat frightening and something I will touch upon again at the end of this paper.
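The extrapolation above can be checked with a back-of-envelope doubling calculation. The 18-month doubling period below is an assumption of mine (roughly the Moore's-law pace of the era), not a figure from the paper.

```python
# Back-of-envelope check of the extrapolation above. The doubling
# period is an assumption (roughly Moore's-law pace), not a figure
# from the paper.
DOUBLING_YEARS = 1.5

def growth_factor(years):
    """Factor by which hardware capability grows over the given span."""
    return 2 ** (years / DOUBLING_YEARS)

# 1992 -> 2025 is 33 years; 2025 -> 2050 is 25 more.
print(f"1992 -> 2025: capability x {growth_factor(33):,.0f}")
print(f"2025 -> 2050: capability x {growth_factor(25):,.0f}")
```

A sustained 18-month doubling compounds to a factor of millions over three decades, which is the kind of multiplier the human-equivalence projection relies on; whether that multiplier maps onto "human processing power" is, of course, the speculative part.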

We are going to need a tremendous amount of processing power to deliver the promise of the Age of Magic. The computational effort needed to generate realistic Virtual Reality systems is extreme. We will also need a great deal of processing capability to equip computers with the senses and analysis they need to carry out our wishes. We should expect that voice recognition and extensive image recognition technology will tax the processing power of the steadily advancing state of the art of computer systems for quite a while.

Not only do we need to sustain the rapid pace of computer hardware advances, but we need to speed up the pace of software development. Unlike the relentless progress of hardware advances, software development has only made marginal productivity improvements over the last twenty years. At some level, a human programmer is still needed to precisely explain to a computer how to do a task. In the case of tough problems like voice recognition, many people have worked on the problem without yet delivering the algorithms that solve the task.

Several advances seem likely to improve the situation. Computer-Aided Software Engineering ("CASE") will lead to much higher productivity through the reuse of programming effort and higher-level direction of the software development process. Ways of teaching computers how to do a task have also advanced considerably; examples are learning systems such as neural nets and today's optical character recognition.


Nanotechnology

Nanotechnology is the creation of machines the size of viruses. These nanomachines have the ability to manipulate molecules according to a specified program. These tools will let us grow computer processors and revolutionary drugs at will, cure genetic diseases by repairing damaged genes, and rebuild the damaged arteries and nerve cells of stroke victims. In time, they will let us modify our bodies by design.


Medicine

Medicine will be revolutionized by the combination of nanotechnology and genetic engineering. Genetic engineering lets us create plants that produce human insulin and human growth hormone. Soon, we will inject modified genes into ourselves to treat muscular dystrophy and diabetes. In the next century, genetic diseases such as Tay-Sachs and Sickle-Cell Anemia will follow smallpox to extinction. We will cure horrors like AIDS before they become epidemics and reverse aging as easily as today we prevent and cure diphtheria. Understanding of the human genome applied with nanotechnology tools will make it possible to regenerate damaged organs and limbs.


Telecommunications

In the coming years we can expect the web of communication interconnection that already exists to become even more pervasive. By expanding the amount of data that can be moved around and by drawing an ever-increasing proportion of the world into the web, we will find ourselves plugged into the primary means of communication. It will become the place where business is transacted as information becomes the main commodity exchanged in our society.

Virtual Reality

Virtual Reality is the science of creating artificial worlds. Today, a real-estate developer can don a set of goggles and walk through an architect's design of a building before it is built. Doctors can plan complex surgery on computer models of their patients, "seeing" in artificial images each of the muscles and nerves involved. These cartoon-quality images are the beginning of our move into cyberspace, the artificial realm inside our computers. Cyberspace is ultimately a universe where all sights, sounds, tastes, smells, and touches are created by computer but seem as real as our everyday world. To many twenty-first century business people and technologists, cyberspace will be the everyday world.

Today's Virtual Reality systems are visually oriented with hand gestures being the primary control mechanism. Tomorrow's Virtual Reality systems will be far more general and powerful. One of the main drivers of these advances will be improvements in artificial perception. When computers can understand our speech and analyze a scene effectively, some exciting things happen. People who do not speak each other's language will understand each other with instantaneous translation. Computers will need to recognize very subtle gestures or voice commands to control Virtual Reality.

To some extent, the commercial opportunity of Virtual Reality will drive further advances in artificial perception. These advances will have a larger impact on the adaptive technologies field than the first major artificial perception technology, optical character recognition. The voice recognition technologies developed for natural interaction with Virtual Reality systems will deliver the number one technology desired by deaf people: real-time text translation from voice to visual characters. Scene analysis will advance to the point where real-time descriptive video will be generated for blind people. Gesture recognition will extend to recognizing American Sign Language or whatever gestures a person wishes to use. All of these advances will be useful both inside and outside Virtual Reality.

The combination of Virtual Reality and the expanded telecommunication network will make it possible for most people to work from wherever they choose to be through telecommuting. This kind of telecommuting will define a very unusual office environment. The office will become a defined place in cyberspace, where you indeed occupy an office in a shared environment. You can walk down the hall to the next office and chat with your co-worker, who could be located twenty, two hundred, two thousand or twenty thousand miles away from you!

The Age of Magic

The Age of Magic will come about in stages. The first stages will occur in cyberspace. Because the environment is completely under computer control, virtually anything can happen. The method of controlling this environment will likely be voice control (incantations) or gestures. The parallels between the terminology of technology and the magic of literary fame will grow increasingly close.

What kind of magic can we work? Inside Virtual Reality, almost anything we desire can be made to happen. This power carries both risks and rewards. I want to make it clear that Virtual Reality will not make disabilities magically disappear. However, it will become a powerful tool for many people with disabilities. To a great extent, employment in a virtual office will remove many of the barriers that exist today. Advances in telecommunications and Virtual Reality will redefine the workplace of the future. Imagine an employment process where you will be evaluated primarily on what you can accomplish rather than trying to overcome de facto discrimination.

The primary benefit of Virtual Reality will be increased control. There will be options that do not exist today in the areas of travel, communication, employment, work environment, and information access. The power to control your personal work environment and method of communication will be great. You will be able to control that environment in the way that you prefer and be portrayed in the way that you desire.

The powerful artificial perception technologies will find applications inside and outside cyberspace. Because of the commercial profit motive that will drive these technology advances, people with sensory disabilities will then take advantage of artificial perception to remap information between senses. Blind people will be able to receive much more visual information audibly or tactually; deaf people will be able to receive audible information visually.

Nanotechnology and advances in medicine will create the next stage of the Age of Magic. Much of the magic worked in cyberspace will find expression in the real world. Nanotechnology offers the possibility of building almost anything given sufficient time, materials, processing power, and energy. The understanding of human genetic engineering will give us the specification of how to modify our bodies. Depending on the pace of technological advances, opportunities may arise to integrate artificial perception technologies more closely with people than is possible today. This sort of sensory augmentation will not be limited exclusively to people with disabilities, since greater-than-normal sensory capabilities will have a wide appeal.

The other possibility is that we will be able to rebuild damaged organs according to the code in a person's DNA, or correct the genetic program that specifies organs with defects. I personally believe this possibility will occur before sensory augmentation, although I expect both to occur.

Space does not permit a wider discussion of the implications of the powerful capabilities that will define the Age of Magic. Humanity will face some major decisions in choosing the direction of the future when the power of our technology gives us the ability to work magic of all kinds. As a technologist who plans to work on some of the technologies that will bring about the Age of Magic, my goal is to build tools that give people with disabilities more options in pursuing their lives the way they see fit. The Age of Magic will be a revolutionary age for people with disabilities.

James R. Fruchterman
Arkenstone, Inc.
1185 Bordeaux Drive
Suite D
Sunnyvale, CA 94089
(800) 444-4443  Phone
(408) 752-2200
(408) 745-6739  Fax


DataGlove, DataSuit and Virtual Reality:
Advanced Technology For People With Disabilities

Walter J. Greenleaf, Ph.D.

Meeting the Challenge-Advanced Technology For People With Disabilities

For people with disabilities, technology can be a great equalizer. The wide availability and multiple applications of computers, for example, have had a profound impact upon the world of the physically and mentally challenged. Many of the new applications are expensive and still under development. But some rather remarkable products have progressed beyond prototype. Voice Recognition and Speech Synthesis devices, for example, are available now to enhance the capabilities of people with disabilities.

New Technology to Magnify Movement - The DataGlove and DataSuit Technology

The DataGlove and DataSuit provide a dramatic new method for the measurement and amplification of human motion.

The DataGlove is a thin cloth glove with fiber optic cables running along its surface. When the joints of the hand bend, the fibers bend and the angular movement is recorded by the sensors. These recordings are digitized and forwarded to the computer, which calculates the angle at which each joint is bent. On screen, an image of the hand moves in real time, shadowing the movements of the hand in the DataGlove and immediately replicating even the most subtle actions. The DataSuit is a customized body suit fitted with the same sophisticated fiber optic sensors found in the DataGlove.
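The joint-angle computation described above can be sketched in a few lines of present-day code. The two-pose calibration scheme and the sensor values below are purely illustrative assumptions, not VPL's actual implementation:

```python
# Hypothetical sketch: mapping raw fiber-optic sensor readings to joint
# angles via a per-sensor linear calibration (gain, offset), derived
# from two known hand poses. The real DataGlove electronics and
# calibration are proprietary; this only illustrates the idea.

def calibrate(raw_flat, raw_bent, angle_bent=90.0):
    """Derive a linear gain/offset from readings in two known poses."""
    gain = angle_bent / (raw_bent - raw_flat)
    offset = -gain * raw_flat
    return gain, offset

def joint_angle(raw, gain, offset):
    """Convert one raw sensor reading to a joint angle in degrees."""
    return gain * raw + offset

# Example: a sensor that reads 100 with the finger flat, 300 fully bent.
gain, offset = calibrate(raw_flat=100, raw_bent=300)
print(joint_angle(200, gain, offset))  # halfway bent -> 45.0 degrees
```

Running one such mapping per joint, every frame, yields the joint angles that drive the on-screen hand.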

The DataGlove and DataSuit collect data dynamically in three-dimensional space. Their fiber optic sensors are able to track the full range of motion of the person wearing the Glove or Suit as he/she bends, moves, grasps or waves.

Applications Under Development

Our research group (Greenleaf Medical) is currently developing products to enhance the productivity of individuals with disabilities and to improve quantitative movement assessment. We currently have under development applications in the following areas:

The Gesture Control System will improve the functional capabilities of individuals with disabilities. The Gesture Control System will enable individuals wearing the DataGlove to perform complicated tasks through simple hand gestures. Recognized via the fiber-optic sensors of the DataGlove, the user's simple gestures will correspond to a pre-programmed set of instructions.

As an example of the use of the gesture control system, Greenleaf Medical has created a prototype "Gesture Controlled Switchboard" that uses the DataGlove and a Macintosh computer to control a telephone receptionist station. Using hand gestures, the receptionist will instruct the computer to answer and route telephone calls, or to activate pre-recorded messages to "speak" with the caller.

The GloveTalker, which "speaks" for the user, is an extension of the Gesture Control System. In this application the wearer of the DataGlove is able to speak by signaling the computer with his/her personalized set of gestures: the DataGlove recognizes hand positions (gestures), and this information is passed to the computer's voice synthesis system, which speaks for the DataGlove wearer. The voice output can be sent easily over a computer network or a telephone system, thus enabling vocally impaired individuals to communicate verbally over a distance. The system allows for a programmed amount of freedom in interpreting each gesture, so that those capable of only relatively gross muscle control will still benefit from the system.
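The gesture matching the GloveTalker performs, including its programmed tolerance for imprecise gestures, might be sketched as follows. The vocabulary, joint count, and distance measure are all invented for illustration and are not Greenleaf Medical's code:

```python
# Hypothetical sketch of the GloveTalker idea: match measured joint
# angles against a user's personalized gesture vocabulary, with a
# tolerance so imprecise (gross-motor) gestures still match.

def match_gesture(angles, vocabulary, tolerance=25.0):
    """Return the phrase whose gesture template is nearest to `angles`,
    or None if nothing falls within the tolerance (in degrees)."""
    best_phrase, best_dist = None, tolerance
    for phrase, template in vocabulary.items():
        # Mean absolute difference across joints, in degrees.
        dist = sum(abs(a - t) for a, t in zip(angles, template)) / len(template)
        if dist < best_dist:
            best_phrase, best_dist = phrase, dist
    return best_phrase

vocabulary = {
    "hello":   [0, 0, 0, 0, 0],       # open hand
    "yes":     [90, 90, 90, 90, 45],  # fist
    "help me": [0, 90, 90, 90, 45],   # index finger extended
}

print(match_gesture([5, 85, 88, 92, 40], vocabulary))  # -> help me
```

The matched phrase would then be handed to the voice synthesizer; widening the tolerance gives the "programmed amount of freedom" the text describes.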

The Movement Analysis System will adapt the DataGlove's fiber optic technology and link it with new software to create a tool for quantitative assessment of upper-extremity function. The Movement Analysis System will provide new methods of collecting data and performing quantitative analyses that will be critical to employers for proper job site assessment and design.

Virtual Reality - New, Emerging Technology

Virtual Reality is an emerging technology that radically alters how individuals interact with computers. By donning clothing mounted with sensors that communicate movement and location to a computer and by wearing a helmet allowing the user to "see" via a sophisticated graphics system, the user "breaks through" the computer screen and enters a three-dimensional world. Inside Virtual Reality, we can walk through a virtual house, drive a virtual car, run a marathon in a park still under design. In Virtual Reality, unlike television, the user is not a voyeur but an actor: this technology demands that the user interact with his surroundings.

The experience of entering this computer-generated world is compelling. Recent advances in computer processor speed and graphics make it possible to create very realistic environments. The practical applications are far reaching: today, within Virtual Reality, architects design office buildings, NASA controls robots at remote locations, and physicians plan and practice difficult operations.

How Can Virtual Reality Help People With Disabilities?

The utility of Virtual Reality for people with disabilities lies in its unique ability to let them accomplish tasks and have experiences that would otherwise be denied them because of physical limitations.

Within a virtual environment, the user can maneuver and affect objects without the limitations he or she would normally experience in the real world. For example, an individual with cerebral palsy who is confined to a wheelchair can, within a virtual environment, run a telephone switchboard, play hand ball, dance. The variety of experiences is limited only by our imagination.

Walter J. Greenleaf, Ph.D.
Greenleaf Medical Systems
2248 Park Blvd.
Palo Alto, CA  94306
(415)321-6135 Phone


Biocontrollers for the Physically Disabled:
a Direct Link from Nervous System to Computer

R. Benjamin Knapp
Hugh S. Lusted


Over the course of the past 5 years an experimental biological signal processing system called the BioMuse has been developed. The BioMuse is a real-time computer controller that uses the bioelectric signals generated by the eyes, the muscles, and the brain to control both video and music [1], thus creating a direct link from nervous system to computer. Previous reports on the BioMuse have focused on the application of this system for real-time control of electronic musical instruments [2][3]. In this paper several other applications currently under investigation are described. Specifically, the three areas of clinical diagnosis and bioelectric signal research, medical rehabilitation, and computer interfaces for the physically disabled will be discussed.

Clinical Diagnosis and Bioelectric Research

A) Strabismus (Eye Alignment) Research and Diagnosis
Strabismus is a condition of the eye muscles in which the individual eyes are not aligned and do not move in conjunction with each other. The present method for measuring and correcting for strabismus involves the use of prisms to empirically align the eyes [4]. The number of prism diopters necessary to achieve this alignment is then recorded and used to determine the type and amount of correction to the muscles that will be performed under surgery. This measurement technique is neither very reproducible nor very accurate. The BioMuse has the capability of accurately tracking and recording the vertical and horizontal composite position of both eyes even if the eyes are misaligned [5][6]. While typical users of the BioMuse use composite position tracking for control of a computer video display (see below and figure #1), a physician may use the individual eye tracking capabilities to accurately determine the degree of misalignment in a patient with strabismus. The patient is asked to track with their eyes an object on a computer's video screen. The object is moved to various locations on the display while the exact position of each eye is recorded. A complete muscle analysis is then printed out.
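The geometry of expressing a measured misalignment in prism diopters is standard (one prism diopter corresponds to a deflection of one centimeter at a distance of one meter, i.e. 100 times the tangent of the angle). The sketch below illustrates only that conversion, with invented gaze angles, and says nothing about the BioMuse's actual signal processing:

```python
import math

# Illustrative sketch: given per-eye horizontal gaze angles (in degrees)
# recorded while the patient fixates a target, compute the misalignment
# and express it in prism diopters: 1 PD = 1 cm of deflection at 1 m,
# i.e. 100 * tan(angle). Only the geometry here is standard.

def deviation_prism_diopters(left_deg, right_deg):
    """Angular misalignment between the two eyes, in prism diopters."""
    misalignment = math.radians(abs(left_deg - right_deg))
    return 100.0 * math.tan(misalignment)

# Example: one eye deviates 10 degrees from the other's line of gaze.
print(round(deviation_prism_diopters(0.0, 10.0), 1))  # -> 17.6
```

Repeating this computation at each target location on the display would yield the per-direction deviation figures for the printed muscle analysis.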

B) Robotic Manipulation
Another area of research with the BioMuse is the development of robotic manipulators. To accurately design a robotic arm, the movement of the human arm must be extremely well understood. Typical Virtual Reality "data gloves" record the movement and absolute position of the hand and arm. They do not, however, record the amount of muscle tension present. This is important because many actions of the hand and arm are isometric, i.e., there is no actual movement, but rather an increase or decrease of tension. For example, grasping and holding or crushing an object depends solely on muscle tension, not on physical motion. The BioMuse can learn and recognize specific muscle tension patterns, which the engineer can then use to design the robotic arm. In addition, the capability of the BioMuse to perform real-time neural network time-space-frequency pattern recognition also enables the user to control the robotic arm in real time.

C) EEG Cognitive Pattern Recognition
The ability to understand the relationship between the electroencephalogram (EEG) and specific cognitive activities has been the subject of on-going research [7][8]. One of the principal areas of research has been automatic sleep scoring. During sleep a person transitions through six sleep stages. Physicians "score" sleep (determine which sleep stage a person is in) in order to understand both normal and abnormal sleep behaviors. The capability of the BioMuse to analyze sleep states in real time using both neural network and fuzzy set pattern recognition may eventually allow for automatic monitoring and sleep scoring in a clinical setting. It is expected that information learned from this research will lead to the real-time recognition of other, more complicated cognitive activities.

Medical Rehabilitation

A) Orthoptics
As mentioned previously, the BioMuse has the capability of tracking individual eye position for the measurement of eye misalignment. With this capability the strabismus patient can control two separate objects on the video display. If the two objects align, then the patient's eyes are also aligned. Thus a "video game" could be created to train the patient to align the objects on the screen and to keep them aligned as the gaze is shifted. Training and exercising the eye muscles in this fashion may eliminate the need for surgery in mild strabismus cases.

B) Physical Therapy
Another use of the BioMuse system is for physical therapy. Two important aspects of physical therapy are having a therapy program tailored to an individual's needs and making the therapy enjoyable so that it is performed consistently. Because the BioMuse may be customized to detect any level of muscle activity for most muscle groups, it can easily be adapted to an individual's needs. As in the case of orthoptics described above, the therapy can be set up as a video game. For example, a video game may be used where a bird drops a ring onto a dolphin. The speed of the dolphin is controlled by the tension in the user's forearm. The tension must be just right to make the speed of the dolphin equal to the speed of the bird. If the tension is too light the ring will fall in front of the dolphin. If the tension is too great the ring will fall behind the dolphin. This creates an engaging exercise that requires a prescribed amount of exertion while avoiding over-exertion.
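The game logic described above amounts to a band test on the measured tension level; a therapist would prescribe the target and the width of the band. All the numbers in this sketch are made up for illustration:

```python
# Toy sketch of the therapy-game logic: the dolphin's speed tracks the
# measured forearm tension, and the ring lands on the dolphin only when
# the tension sits inside a prescribed band around the target level.

def ring_outcome(tension, target=0.5, band=0.1):
    """Where does the ring land, given the user's tension level (0..1)?"""
    if tension < target - band:
        return "falls in front"   # too little effort, dolphin too slow
    if tension > target + band:
        return "falls behind"     # over-exertion, dolphin too fast
    return "lands on dolphin"     # effort inside the prescribed range

print(ring_outcome(0.3))   # -> falls in front
print(ring_outcome(0.55))  # -> lands on dolphin
print(ring_outcome(0.8))   # -> falls behind
```

Raising `target` over successive sessions would increase the prescribed exertion while the `band` continues to guard against over-exertion.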

Man-Machine Interface

A) Two and Three Dimensional Eye Tracker
As mentioned previously, the BioMuse can be used as an eye-tracking device. Thus it can be used as a real-time man-machine interface (an "eye mouse") for the disabled. Wherever the user looks, the cursor on the video display moves to that point. The system has an automatic calibration procedure which adjusts for the screen size and the person's distance from the monitor. Two applications for this kind of capability are using menu-driven software and controlling an intelligent word processor, such as the one available from Words Plus. Using this word processor, a disabled individual can quickly write complete documents without having to spell out every word. Combining the eye tracker with the capability to use jaw muscle tension as a "button click," a user can write complete documents using only the BioMuse as the input device.
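A calibration of this kind can be sketched as a two-point linear map per axis, plus a simple threshold on jaw-muscle tension for the "button click." The raw readings, screen size, and threshold below are invented for illustration; the BioMuse's actual calibration procedure is not public:

```python
# Hypothetical sketch of the "eye mouse": the user fixates two known
# screen corners, and a linear map from raw eye-position readings to
# pixels is derived for each axis. A jaw clench above a threshold
# serves as the button click.

def make_axis_map(raw_a, raw_b, pix_a, pix_b):
    """Linear map raw reading -> pixel, from two calibration fixations."""
    scale = (pix_b - pix_a) / (raw_b - raw_a)
    return lambda raw: pix_a + scale * (raw - raw_a)

# Calibration: top-left corner reads (2.0, 1.0), bottom-right (8.0, 5.0),
# on an invented 640 x 480 display.
to_x = make_axis_map(2.0, 8.0, 0, 640)
to_y = make_axis_map(1.0, 5.0, 0, 480)

def click(jaw_emg, threshold=0.7):
    """Treat a jaw-muscle tension level above threshold as a press."""
    return jaw_emg > threshold

print(to_x(5.0), to_y(3.0))  # gaze at screen centre -> 320.0 240.0
print(click(0.9))            # firm clench -> True
```

Re-running the two fixations whenever the user sits down re-derives the map, which is how the automatic adjustment for screen size and viewing distance described above can work.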

Because both eyes can be tracked individually the convergence of the eyes can be used to move an object on a three-dimensional video display in the "z" dimension [5], i.e., in or out of the display. In this way, the user may select an object "up close" or "far away" simply by looking at it, creating an extremely fast and natural way of moving through a virtual environment.

B) Muscle Controller
By adjusting parameters of the BioMuse, any muscle of the body can be used to control any external electronic device. To date, this capability has been used to allow disabled individuals to play standard MIDI-controlled electronic instruments. As mentioned previously, the muscle controller, coupled with a "data glove" style device, may be used to control a robotic limb or any other external mechanical device.

C) Controller Using Brain Activity
The use of brain activity for controlling computers is still in the research stage. While simple brainwave detection has been used to control the timbre of a music synthesizer [3], it will be many years before the user of the BioMuse will be able to think of a violin and hear a violin sound, or think about a word processor and have it appear on the screen. In the not too distant future, however, it may be possible to augment the pattern recognition of muscle and eye movement with pattern recognition of the EEG to create an interface that comes even closer to being a perfect link from the nervous system to a computer.


  1. R.B. Knapp and H.S. Lusted, "Biopotential Controller for Music and Video Applications," Patent Pending.
  2. R.B. Knapp and H.S. Lusted, "A Real-time Digital Signal Processing System for Bioelectric Control of Music," Proceedings of the IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Vol. 5, pp. 2493-2495, 1988.
  3. R.B. Knapp and H.S. Lusted, "A Bioelectric Controller for Computer Music Applications," Computer Music Journal, MIT Press, Vol. 14, No. 1, pp. 42-47, Spring 1990.
  4. E. Krimsky, "Method for Objective Investigation of Strabismus," J. Am. Med. Assoc., Vol. 145, No. 8, pp. 539-544, Feb. 1951.
  5. R.B. Knapp, L.E. Hake, and H.S. Lusted, "A Method for 3-D Eye Tracking and Strabismus Measurement," Patent Pending.
  6. L.E. Hake and R.B. Knapp, "A New Method for the Measurement of Strabismus," presented at Ophthalmology Conf., Univ. of Calif., San Francisco, Feb. 6, 1992.
  7. A.H. Hiraiwa, K. Shimohara, and Y. Tokunaga, "EEG Topography Recognition by Neural Networks," IEEE Eng. Med. Bio. Mag., Vol. 9, No. 3, pp. 39-42, Sept. 1990.
  8. J. Hu and R.B. Knapp, "Electroencephalogram Pattern Recognition Using Fuzzy Logic," Proc. IEEE 25th Conf. on Signals, Systems, and Computers, Vol. 2, pp. 805-807, Nov. 1991.
Benjamin Knapp
San Jose State University
129 Tenth St.
San Jose, CA  95192-1248
Hugh Lusted
BioControl Systems Inc.
430 Cowper Street
Palo Alto, CA 94301
(415)329-8494 Phone
(415)321-5973 FAX


Virtual Realities: From the Concrete to the Barely Imaginable

Stephen Marcus, Ph.D.
Suddenly I don't have a body anymore. At least I know where I left it. It's in a room in a building in a town in California. But I -- or "I" -- am in cyberspace, a universe churned up from computer code, then fed into my eyes by a set of goggles through whose twin video screens I see this new world. All that remains of my corporeal self is a glowing, golden hand floating before me like Macbeth's dagger. I point my finger and drift down its length to the bookshelf on the office wall....

-- John Perry Barlow

Forward to the Basics

At various laboratories and art installations around the country, and in some malls and video arcades, you can now enter worlds that resemble ones you're generally familiar with, but in which the laws of the universe are sometimes modified or eliminated.

To enter these worlds, you typically wear special headgear that tracks head movements and provides a three-dimensional image (a kind of Walkman for the eyes). There is also a glove or other device that tracks the position and configuration of your hand. Both pieces of apparatus are connected to a computer that provides the visual displays and responds to your head and hand movements. (In some cases, you're also treated to sounds, pressure, and resistance to movement).

These technologies are used to create a "Virtual Reality," a simulation that you seem to enter physically. It's a kind of deep medium. Randall Walser, of the Autodesk Research Lab, suggests that if text "tells, and video and film show, then a virtual reality embodies the world it creates." Virtual Realities dissolve the line between the interface and the innerface of the worlds you wish to explore.

Although I'll be describing several very different kinds of systems below, I'll be limiting my discussion here more to the areas of reading and writing. I can only hope to sketch out the What? So What? and Now What? dimensions of Virtual Reality.

John Perry Barlow, whose experience with the Autodesk system introduces this discussion, tries to "grab a book, but my hand passes through it." After some guidance, he succeeds: "When I move my hand again the book remains embedded in it. I open my hand and withdraw it. The book remains suspended above the shelf." Since the only part of himself he sees in this new world is "a glowing, golden hand," he realizes that he doesn't "seem to have a location exactly. In this pulsating new landscape, I've been reduced to a point of view. The whole subject of `me' yawns into a chasm of interesting questions."

A very different setup, David Rokeby's "A Very Nervous System," uses no glove or goggles: your body's movements are tracked by video cameras and translated into music. Space itself is now the interface between you and the computer; space has become a medium in itself. People who have experienced this sort of environment sometimes note that the relationship between cause and effect seems to disappear. Rob Appleford, for example, noted that "(As I walked back to the university) a truck splashed through a puddle, and I caught myself trying to make that neat noise again by retracing my steps." The ability to make "neat noises" is at the heart of Timothy McGuiness's application of this system for quadriplegics, who, with very limited movements of fingers or of a mouth stick, have become part of Supercussion, a performing jazz group.

With a related kind of system, "Mandal," from Vivid Effects, you stand in front of a video camera with a blank background behind you. Your image is transferred into a computer screen that has special images on it (e.g., musical instrument, floating alphabet letters, or a hockey rink). As you move in front of the camera, your image on the screen can interact with the objects there: playing the instruments, grabbing letters to form words, blocking shots. You (or is it your image?) have become a virtual icon, interacting with other icons on the screen.

A different goal is being addressed by the MCC consortium in Texas, where they're developing ways to represent large and complex bodies of knowledge in a computer-generated virtual space. In this case, navigation through a body of knowledge involves being able to yaw, pitch, and roll as if you were flying a helicopter, a rather different set of skills than knowing how to turn pages or to use an index or table of contents.

Closer to Home

It's hard to communicate, in print, the look and feel of entering virtual worlds. (Can you easily describe to a non-reader what it feels like to be "lost in a good book"?) Having tried, however, to make strange things familiar, it might be instructive at this point to make some familiar things strange.

Consider this. Text simulates thought. It's an artifact, representing with its own particular richness and its own unique limitations the ineffably complex workings of the human mind going about its business of making sense of things. Text is a version of the thought that the text represents. It's a working model of what's on our minds. As with other kinds of simulations, we can interact with and affect text (aka reading and writing). Text is virtual thought.

We've long since become accustomed to a range of materials that embody text, paper being the predominant one (which some people, interestingly, equate with being "carved in stone").

Now consider this. A word processor creates virtual paper. The technology seems, in effect, to fill a new medium with an old content. We tend to think of a word processor as a kind of way-station on the way to a printout, "hard copy." We imagine there's a sort of paper scroll moving past the "window" of the monitor screen. This simulated paper has, however, powers and abilities far beyond those of normal paper. The "videotext" sometimes blinks, ripples, and slides. It can disappear and reappear. It can change its shape and sometimes its color. It can have embedded in it "buttons" (and I am talking about word processors here) that will play sounds and recorded voices or show moving pictures. This sort of virtual paper is less and less designed to be "printed out" (although more and more we're hearing the phrase, "print to video"). Increasingly, this sort of document's home is in the computer environment in which it was created.

Many people's thinking includes pictures and sounds as well as words - unless, like Einstein, you also think in colored patterns and muscle sensations (contrary to my high school English teacher's dictum that "if you can't put it into words, you can't be thinking it"). Thus, these kinds of technology-enabled multimedia and "hypermedia" environments, ones that allow a variety of "documents" to be stored, linked, intermixed, displayed, and altered by the "writer" and the "reader," are steps toward the creation of virtual texts.

And Down the Road

There are, to be sure, "virtual books" that are currently available commercially. The illustrated children's stories in the Discs collection, on CD ROM discs, can read themselves to you in English or Spanish, by word or phrase of your choice. They can provide you with definitions of each word and will remember the words you didn't know. You can have the books instantly "rewritten" in different typographies. The recently released Expanded Books, from the Voyager company, provide moving images and sounds to accompany the text, along with special text searching and annotating features. The "books" are stored on diskettes, for use with Apple Computer's PowerBook. And for younger children, there are now the Living Books, from Broderbund, each "page" of which contains a variety of interactive possibilities.

These three examples illustrate small, carefully designed steps toward the development of virtual books and writing implements. Combined with the more thought-provoking examples above, they suggest that we can begin to ask a new set of design questions regarding future reading and writing environments.

What should a "smart book" do? Learn your reading level and adjust itself accordingly? Shouldn't your word processor learn your writing style and bring into play a set of tools adjusted to your habits and needs? (In point of fact, four major software publishers have announced development plans along these lines.) How do you turn the page of a virtual book? With a wave of your hand? Does it even have pages? Can the notion of a book, set in type, with a fixed point of view, be expanded to include a more fluid entity, one that is shaped by each reader, with the construction of meaning derived from conscious and explicit interactions between the text, the reader, and previous readers who have become contributing authors? What about sound tracks for books?

Akram Midani, Dean of Fine Arts at Carnegie-Mellon University, notes that our involvement with new technologies generally moves from an ambivalent relationship with augmented abilities to the "dawning of irreversible change." Think of word processing in this regard. In the teaching of writing, there are still widespread uncertainties about its value, its effects on writing, the manner in which it should be taught, and the ancillary computer-based tools that should be used in conjunction with it (prewriting software, spelling and style checkers, keyboarding tutorials). Yet, are there many people who regularly use a word processor who would willingly give it up?

Virtual Realities provide a very curious set of technologies, ones that have implications for the basic tools and substance of our work. Barlow, quoted above, felt himself to be a "traveler in a realm that will ultimately be bounded only by human imagination." At this point, however, the territory of virtual realities remains a kind of unreal estate.

Developments in Virtual Reality hardware and software -- as evidenced by the presentations at this conference -- are both deriving from, and enriching the use of technology for persons with disabilities. Those who are exploring the nature and applications of Virtual Reality are helping raise our expectations and expand our visions for integrating technology into education and our daily lives.

Stephen Marcus
University of California
SCWRIP, Graduate School of Education
Santa Barbara, CA 93106
(805)893-4422  Phone  
(805)893-8061 FAX


Matching Virtual Reality Solutions to Special Needs

Teresa Middleton

In our world today we are surrounded by a wonderful array of technologies. We have assimilated many of these technologies into our everyday lives, sometimes with difficulty - have you tried programming your VCR recently? - and at other times, almost unconsciously - did we notice the transition from dialing a telephone to touchtone "dialing" with anything but relief?

Increasingly, these different technologies are becoming integrated to provide new capabilities and services. And most frequently a computer is the heart of this integration. This is the case with Virtual Reality - a so-called technology which actually is a very sophisticated integration of a number of technologies.

What is a Virtual Reality System?

First let's provide some kind of definition of Virtual Reality (VR) which will help us more easily discuss it. We can say that VR is a computer-based technology which incorporates specialized input and output devices to allow the user to interact with and experience an artificial environment as if it were the real world. A VR system permits the user to explore a three-dimensional virtual - or artificial - environment and to interact with lifelike and/or fantasy elements created by the designer. In the virtual world, the user can do things as routine as throwing a ball or as fantastic as flying through space. And these things can be made to occur by something as simple as a hand gesture or a nod or (one day) a sound.

The specialized input and output devices these systems currently use include gloves, a body suit, a head-mounted display (e.g., eyescreens), and receivers that allow three-dimensional spatialized sound. Future systems will allow voice input and other advanced interface devices. A feeling of presence is important, so the user wears sensors that tell the system about his position and movement; the display he sees and the sounds he hears can then change appropriately as he turns his head - just as his perception of the real world would change as he moved.

Data gloves or body suits provide information to the system; they also serve as a representation of the user within the environment to connect him to the virtual world, giving him a feeling of presence. The user sees a representation of his hand (in the case of the data glove). If he lifts his hand and points to the right, he will see the representation of his hand do the same thing within the virtual world.

In the past five or six years, developers have carried VR technology to the state where we, the potential users, can visualize applications for our own special needs. Applications for VR will include: communication, education and training, design, recreation and entertainment, and teleoperation. The purpose of this paper is to identify promising matches between VR technology and the special needs of people with disabilities within the framework of these applications.

Attributes of Virtual Reality

When we talk about technologies, we tend to think of technology products - computers, input devices, trackers, and so on - just as I have done by listing the VR components. But to identify matches between a technology and the special needs of users, we have to think in terms of the attributes of those technologies rather than their components - in other words, what the technology will do, rather than what it is physically. The main attributes of a VR system are discussed below.

Perhaps the most significant feature of a VR system is its ability to present any kind of world, including unreal worlds. In the real world we are stuck with what we have; the sky is blue on a cloudless day, buildings remain at the height they were built, shadows are defined by the brightness of the light, and so on. A virtual world, on the other hand, can be created any way we want - and we will see later how important this can be to the disabled user.

Regarding the other attributes, when we are in a virtual world, we must have a feeling of presence - we must believe we are "there." Unlike television, VR presents the environment and its objects with three-dimensional images, so they have substance. 360-degree sound (that is sound that comes from all around the user) further enhances this perception and body tracking allows the system to know where the user is in the world and thus to present images and sounds in the most realistic way possible. VR allows the user to interact with the world, pick up and move objects, open doors, walk around corners just as he would in the real world. In this interactive environment, haptic feedback - a feature of VR that is in its infant stages - will allow the user to sense the weight or the tactile feel of an object. For example, in the future when you pick up a cup of hot coffee in a virtual world, you may be able to feel its weight and the heat of the coffee. Finally, because a world can get to be a lonely place if you are the only person in it, VR supports multi-user capabilities.

Special Needs

All people - with and without any disabling condition - might ultimately benefit from some aspect of VR. For the purposes of this paper, however, I am focusing particularly on the needs of hearing-impaired persons, people with visual impairments, and those with physical or motoric impairments. Some needs of these populations are identified below.

Each of these needs is discussed with reference to VR's attributes and in relation to specific disabilities.

Safe Environments for Learning and Practicing Skills

All of us, children and adults alike, need to learn and practice many skills during our lifetime. Many tasks are best learned "on the job," including the tasks a little child learns, for a small child's play is her "job." Depending on the task to be performed, for someone with a disability, learning a new skill and spending time practicing it may require enormous effort, and may actually be impossible to do on the job - at least in the real world. The turning point may be in providing that person with an artificial environment in which he can learn and practice a skill in his own way, at his own pace, and in complete safety. To illustrate how VR can help support learning and practice, let's take a look at how apprentices are best trained. In apprentice training, a technique called "scaffolding" is frequently used. In this process the scaffolding, in the form of training help (e.g., a trainer, reference manuals, a mentor), is constructed as the beginner starts to learn the task. As she becomes familiar with elements of the task, the scaffolding (or training support) is removed, little by little, until finally, when the task is completely learned, all scaffolding has been removed and the apprentice is on her own, doing the job.

In the real world, the scaffolding takes the form of add-ons - trainers, self-help manuals, look-up references, and so on - because the world itself cannot be changed. In a virtual environment, we can construct worlds any way we want. We will be able, for example, to construct quite simple worlds within which tasks are performed and then add complexity to the world as the user becomes familiar with the task and is ready to move ahead with the training. Thus, we have transferred the scaffolding function to the VR designer, who will simulate the task within an environment that can be changed on demand.

Further, the environment can be designed with a particular disability in mind. Take, for example, a person who is visually impaired, who perhaps sees in shadows and light, and who needs to learn how to operate equipment and tools in a particular setting. With VR, the setting can be built in a virtual world, and all equipment and tools in that world can be given distinct profiles (such as heavy, blackened outlines) so the user can more easily distinguish them while he is learning how to operate them safely within that setting. The flexibility of VR will allow the world to be changed - to add, for example, equipment one piece at a time - and to gradually become more like the real world (outlines will decrease and disappear) as the user becomes more confident with the task and the use of equipment and tools. In the end he will be able to perform the task in the real world.
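To make the scaffolding idea concrete, here is a minimal sketch in code. Everything in it - the level structure, the success-rate thresholds, the outline and equipment settings - is an illustrative assumption, not part of any system described in this paper:

```python
# Hypothetical "scaffolding" for a virtual training world: the world
# starts simple (heavy outlines, one piece of equipment) and gradually
# approaches the real world as the learner's success rate improves.
# Levels and thresholds are illustrative assumptions.

LEVELS = [
    {"outlines": "heavy", "equipment": 1},  # maximum scaffolding
    {"outlines": "light", "equipment": 3},  # partial scaffolding
    {"outlines": "none",  "equipment": 6},  # the full, realistic world
]

def level_for(success_rate):
    """Choose a world configuration from the learner's recent success rate (0-1)."""
    if success_rate < 0.6:
        return LEVELS[0]
    if success_rate < 0.9:
        return LEVELS[1]
    return LEVELS[2]

# A beginner (40% success) trains with heavy outlines and one machine;
# a near-expert (95% success) sees the unmodified setting.
```

The point of the sketch is that the "scaffolding" lives in the world description itself, not in add-on trainers or manuals, so it can be withdrawn automatically as performance improves.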

Now let's examine VR's ability to support learning of communication skills. Children without disabilities tend to learn these skills quite naturally by picking up visual and spoken cues from friends, family, teachers, and, of course, television. A child with a hearing loss (or one who is profoundly deaf) does not hear the casual conversation which provides so many clues for socialization; a child who is vision impaired also misses valuable visual aids to assist him as he learns to socialize and communicate. VR could help such children; the virtual world could be constructed to provide extra cues depending on the child's needs. For the hearing-impaired child the virtual world will be greatly enhanced when it incorporates a speech recognition component. I am sure many of you are aware of products already on the market to help hearing-impaired and deaf children monitor their speech by letting them see their speech output on a computer screen. With these products students are presented with a model of correct sound, visualize their own speech pattern, and modulate it to match the model. Visualization will be the key to expanding this concept in a VR environment. Imagine having a whole world that will react to your voice - if you speak the words correctly! Doors will open, flowers will bloom, birds will fly - anything that is in the designer's creative imagination can be made to happen on command.
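The compare-and-react mechanism just described can be sketched minimally. Everything here - the contour representation, the threshold, the triggered event - is an illustrative assumption, not a description of any existing product:

```python
# Hypothetical sketch of a world that "reacts to correct speech":
# compare a student's speech contour (e.g., loudness samples over time)
# against a model contour, and trigger a virtual-world event when the
# match is close enough. Contours, threshold, and events are assumptions.

def match_score(model, attempt):
    """Mean absolute difference between two equal-length contours (lower is better)."""
    return sum(abs(m - a) for m, a in zip(model, attempt)) / len(model)

def world_reacts(model, attempt, threshold=5.0):
    """Return the event the virtual world performs for this attempt."""
    return "door opens" if match_score(model, attempt) <= threshold else "try again"
```

A real system would of course work from acoustic features rather than raw samples, but the structure - model, comparison, world event - would be the same.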

Another important aspect of VR technology is its ability to allow the user to "see the unseeable." It is particularly hard for a child who has no hearing to understand abstractions. For example, in a real room, you cannot see what happens when a light switch is thrown because electric wires are buried behind the wall. In a virtual environment, the walls could become transparent and you could trace the flow of electricity after a switch is set. This ability to "zoom" into the environment permits exploration of structures in ways not possible in the natural world. The user can perceive phenomena that normally are not perceptible at all - the user can see the invisible.

Entertainment and Recreation

Does VR have a place in entertainment for persons with disabilities? Absolutely. Consider the pervasiveness of Nintendo in today's child's life. A child with no sight can't take part in those games; they are "aim and shoot" kinds of games that rely very heavily on vision. Virtual Reality incorporates 360-degree sound, and this could provide the basis for audio-based games for vision-impaired youngsters and adults, who could interact with lifelike or fantasy elements in worlds that are as exciting as those provided in video games now.

For physically impaired persons, the possibilities are perhaps even more exciting. A virtual reality system permits the user to explore a three-dimensional virtual environment and to interact with a world in his own time, at his own pace. As I have said, in the virtual world, the user can do things as routine as throwing a ball or as fantastic as flying through space by something as simple as a hand gesture or a nod, or (one day) a sound. With virtual reality technology, people with physical impairments will be able to experience things that have been denied them in the past. The technology will enable them to exert control over environments in which they will get to choose what to explore; and as the technology advances and refinements such as tactile feedback are built in, there will be opportunities for them to experience sensations that they have never before been able to have.

Improved Equipment and Building Design

VR technology is already being used for architectural purposes. Buildings are constructed in a virtual world before they are completed in the real world so architects and their clients can "walk through" the building and come to agreement on final design issues. With the passing of the Americans with Disabilities Act (ADA) we can expect that many of these design issues may involve access for persons with disabilities.

Much of the work that is now being performed in VR development labs supports the human factors modeling for equipment or environment design. This, of course, has direct bearing on persons with physical disabilities. For example, a new tracking device from Ascension called "Flock of Birds" simultaneously measures the position and orientation of up to 10 receivers. (Typically, at present, only two devices are used, one on top of the head and one on a glove to track head and hand movements.) The 10 receivers can be placed on various parts of the body to provide head and body-segment tracking in support of functions such as biomechanical analysis. This and other future products will allow for the accurate design of equipment that is particularly responsive to individual needs.

Designers are already thinking about ways to let a wheelchair-bound user wander through a virtual world using his chair. They are trying to simulate the progression of a wheelchair in a virtual world by placing the chair on rollers, which, like the sensors on a data glove, become the input device that enables the computer to sense where the wheelchair is at any given time.
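The roller-as-input-device idea can be illustrated with standard differential-drive dead reckoning: each roller reports how far its wheel has rolled, and the difference between the two wheels gives the turn. This is a hypothetical sketch - the function names and the assumed track width are illustrative, not part of any system described here:

```python
import math

# Hypothetical wheelchair-on-rollers odometry. Each roller reports how
# far its wheel has rolled since the last update; standard differential-
# drive dead reckoning turns that into a pose in the virtual world.

TRACK_WIDTH = 0.55  # metres between the two drive wheels (assumed)

def update_pose(x, y, heading, d_left, d_right):
    """Advance the virtual-world pose from wheel displacements (metres)."""
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / TRACK_WIDTH   # change in heading (radians)
    heading += d_theta
    x += d_center * math.cos(heading)
    y += d_center * math.sin(heading)
    return x, y, heading

# Rolling both wheels equally moves the chair straight ahead;
# rolling them in opposite directions spins it in place.
```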


VR technology is still in a very early stage of development, but we can expect to see many new developments during the coming years.

Finally, as we have seen, VR has the potential of supporting a wide variety of applications for persons with disabilities - and I have given you just a few examples. VR has caught people's imagination, particularly in the areas of entertainment, training and design, and future prospects for further investment in research and development are very good. Many of the components of a VR system have significance for a variety of disabling conditions and these will be tracked with interest by all of us working in the area of technology and persons with disabilities.

One word of caution. In its present configuration, a VR experience can be very disorienting for hearing-impaired or deaf persons. Before putting persons with little or no hearing into a virtual environment, make sure they know exactly what to expect, since they will receive no cues from the "real world" once they are immersed. (Most of us can stay in touch with the real world by listening for instructions which we can still hear, even though we are wearing earphones and listening to sounds in the virtual world.)

VR has already made inroads into the area of entertainment. In San Francisco a theater performance called "Invisible Site: A Virtual Show," incorporating many VR techniques (including 3-D effects), has received very good reviews and is playing to enthusiastic crowds. The film "The Lawnmower Man" is about VR, and it incorporates many of the graphics processes used in VR.

First itemized in Middleton (1991)


  1. Koved, L., "Architectures for Industrial Strength Virtual Worlds," presented at The Virtual Worlds Conference, SRI International, June 1991.
  2. Marcus, E.A., "How to Make VR Feel Real," presented at The Virtual Worlds Conference, SRI International, June 1991.
  3. Middleton, T., "The Potential of Virtual Reality Technology for Training," Journal of Interactive Instruction Development, Warrenton, VA. 1992.
  4. Middleton, T., "Advanced Technologies for Enhancing the Education of Students with Disabilities," Journal of Microcomputer Applications, Academic Press Limited, London, England, 1992.
  5. Middleton, T., "Who Needs Virtual Reality?" presented at Third National Conference on College Teaching and Learning, Jacksonville, FL April 1992.
  6. Middleton, T., Means, B., "Exploring Technologies for the Education of Handicapped Infants, Children, and Youth," SRI International, Menlo Park, CA, June 1991.
  7. Piantanida, T.P., Means, B., "Feasibility of Using Virtual-Environment Technology in Air Force Maintenance Training," proposal for research, December 1990.
  8. Robinett, W., "Perceiving the Imperceptible," presented at The Virtual Worlds Conference, SRI International, June 1991.
  9. Schlager, M., Middleton, T., Boman, D., Wilcox, G., "Behavioral Requirements for Training and Rehearsal in Virtual Environments," unpublished document, SRI International, June 1991.
  10. Weghorst, S.J., "Biomedical Uses of Inclusive Visualization," presented at The Virtual Worlds Conference, SRI International, June 1991.
  11. Wenzel, E.M., "Three Dimensional Acoustic Displays," presented at The Virtual Worlds Conference, SRI International, June 1991.
Teresa Middleton
SRI International
333 Ravenswood Ave.
Menlo Park, CA 94025
(415)859-3382  Phone
(415)859-2861  FAX

Return to the Table of Contents 

Return to the Table of Proceedings 

Access for Persons with Disabilities Through Hypermedia
and Virtual Reality

Alice Rose, MA


Due to the infancy of its development and high costs, complete Virtual Reality systems (three-dimensional computer-generated graphic environments that one can enter through the use of a head-mounted display, body suit, and gloves [Rheingold, 1991]) are presently inaccessible not only to people with disabilities, but to the public at large. However, sophisticated hypermedia and video-projection programs and technologies can be explored now and can give us a peek into the promised benefits of those complex systems. The design and implementation strategies employed by the developers of the more readily available systems can familiarize us with, and further our experience of, concepts and representations of interactivity (modes and degrees of cause and effect) and virtuality (the degree to which something seems real).

The important components of access, interactivity, and complete virtual environments are essentially the same; all address the issues of freedom of choice, unrestricted expression, understandable conventions and information, and meaningful participation and interaction, and all have effect through control in an environment. By exploring the concept of access we come to understand interactivity and virtual environments better, by exploring interactivity we come to understand virtual environments and access better, and so on.

To begin this exploration the following products, research tools, prototypes and programs will be used for illustrative purposes: QuickTime Navigable Movies, explorable two-dimensional digital environments developed by Apple Computer, Inc.; Guides, a multimedia navigation tool prototype from Apple Computer, Inc.; Life Forms, an interactive software package to be used in the creation of dance movement and composition from MacroMedia; Parents . . . Join The Discussion, an interactive laserdisc virtual discussion group prototype produced by graduate students at San Francisco State University; two programs adopting aspects of the landmark METAPLAY and VIDEOPLACE environments designed by Myron Krueger - Very Nervous System, an interactive gestural music system produced by David Rokeby, and the Mandala Virtual Reality System, a complete human interface multimedia package, from The Vivid Group.

Navigable Movies

Gaining access to environments, navigating through them, and interacting with objects are issues that were addressed by Apple Computer, Inc.'s QuickTime Human Interface design team. QuickTime, the multimedia extension to the Macintosh operating system, enables users to store and play back digital movies from within applications. The QuickTime 1.0 Developer's CD contains a collection of movies originally captured by a pan and tilt camera. Combining this approach to image representation with clever software, a user can explore a particular digitized environment by clicking and moving the mouse over the image on the screen. As the mouse is moved to the right, it is as if the user has looked to the right; as the mouse is moved downward, it seems that the user's gaze is moving down the wall and toward the floor. The feeling is of being there, of being in the center of a room, atrium, etc., and looking around. A team of interface designers is currently working to create manipulable object movies that can be a part of a navigable movie. With a navigable scene, the user is fixed and has control over where to look from a central point; the manipulable object spins on its center. With this combination, one could explore a location and even select and "pick up" an object to look at all its sides. For individuals who find a particular environment inaccessible, whether due to mobility restrictions, location, or structure, the implementation of this technology could enable them to wander at leisure through that given space.
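One plausible way such a system could map mouse position to a stored frame is sketched below. The frame counts, window size, and function name are illustrative assumptions - this is not Apple's actual implementation:

```python
# Hypothetical sketch of a navigable movie lookup. Assume the movie was
# captured by a pan/tilt camera as a grid of frames: PAN_STEPS views
# around the horizon by TILT_STEPS views from floor to ceiling.
# (Frame counts and window size are illustrative assumptions.)

PAN_STEPS = 36    # one frame every 10 degrees of pan
TILT_STEPS = 9    # one frame every 10 degrees of tilt
WIN_W, WIN_H = 640, 480

def frame_for_mouse(mx, my):
    """Return (pan_index, tilt_index) for a mouse position in the window."""
    pan = int(mx * PAN_STEPS / WIN_W) % PAN_STEPS      # wraps around 360 degrees
    tilt = min(int(my * TILT_STEPS / WIN_H), TILT_STEPS - 1)
    return pan, tilt

# Moving the mouse right selects frames further around the panorama;
# moving it down selects frames tilted toward the floor.
```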


Guides

How to reduce the cognitive load incurred by users trying to navigate through more abstract databases, such as large bodies of textual information, was a challenge taken on by Apple Computer's Advanced Technology Group [Oren, Solomon, Kreitman, and Don, 1990]. These designers developed a strategy that could help in information retrieval by offering the user the opportunity to form new alliances with virtual computer and video interface agents called Guides. In a demonstration project about American History from 1800-1850, these guides represented different points of view by virtue of their identity as a settler woman, an Indian, a slave, a preacher, and so on, and offered information that could alter the user's course through the data. Ostensibly, these anthropomorphized interface agents were there to act on behalf of the user in a context and form that is already familiar to most of us - a character - and to help in the more unfamiliar territory, the computer-based virtual environment [Laurel, 1990].

A variation of this concept was incorporated into an interactive videodisc prototype on AIDS prevention for persons with mental retardation [Kraus and Rose, 1990]. The users were instructed to choose a guide (a blind person with AIDS, a doctor, a person who had taken AIDS prevention education classes, or a concerned neighbor) to be with them through their journey through the database. In extensive user-testing [Rose, 1990], students reported that they liked choosing someone they could trust to help them learn.

Parents . . . Join The Discussion

Interacting with characters who form the database itself, as opposed to characters who help one navigate through the database, was the pivotal design element in Parents . . . Join The Discussion. This program, designed and developed by a team of six graduate students from San Francisco State University's Center for Educational Technology, is an interactive videodisc program (level-three) prototype intended to help meet the special needs of parents of children with disabilities. The purpose of this innovative HyperCard-based media project was to provide a meaningful supplement to the counseling and referral resources of social service agencies concerned with families that experience disabling conditions. In communities where support groups for parents do not exist, where there may be no others facing like challenges, or when a parent is hesitant to join a support network or approach others in similar circumstances, this `virtual discussion group' can serve as a needed resource, a bridge to others like themselves, or to community resources.

With this special program, the parent user (either independently, with a counselor or with other parents) `sits in' on a support group discussion in progress. The interface features a graphic representation of group participants around a table where the issues to be discussed are literally and figuratively on the table. Because of this design, users report that they feel as if they are pulling up their chairs to join the group. A user can access and explore the database from a variety of perspectives by clicking with a mouse on the digitized images of any participant and on the issue of choice. Issues addressed include empowerment, expectations, family life, attitudes towards professionals, self-esteem, support, and the disability community. Support group participants include: parents of young and grown children with disabilities discussing their parenting; young and grown siblings of individuals with disabilities talking about how having a disabled brother or sister affects their lives; teens and adults with disabilities expressing their thoughts and feelings about how they were parented and treated by professionals; and professionals giving their perspectives on how to provide positive experiences for all involved.

Life Forms

Just as Parents . . . Join The Discussion draws the user into a virtual discussion group, MacroMedia's Life Forms enables the user to dance on its stage through the use of a surrogate dancer or corps of dancers. While doubters might scoff at the suggestion that a person with a significant physical disability can dance with power and grace, on the virtual stage and in the virtual time of Life Forms, one can dance if one can choreograph. Life Forms software for the Macintosh provides the user with a stage that can be rotated through 360 degrees, body outlines, menus of positions and movements, and editing tools that enable the user to choreograph every major body part in time and space, from all points of view. The program provides automatic interpolation and playback for viewing works in progress, placing much value on the iterative process [Schiphorst, Calvert, Lee, Welaman, and Gaudet, 1990].

De Anza Community College, in cooperation with The High Tech Center Training Unit of the California Community Colleges, is exploring the use of this software to give students with physical disabilities the opportunity to participate meaningfully in the movement arts. The program under consideration is an instructional setting in which students with and without physical disabilities can, in an integrated class, explore, experience, and choreograph dance within the Life Forms computer-generated environment.

Very Nervous System

While Life Forms enables the user to take on the characteristics of another body to create art, some systems incorporate users' bodies into the interface. David Rokeby, a Toronto artist, has created Very Nervous System, a real-time interactive virtual environment using video cameras, image processors, computers and a sound system. In this environment of `human-scaled physical space' [Rokeby, 1991] even the smallest of movements or gestures, such as the blink of an eye or the bending of a finger, can be translated into MIDI information for the creation of intricate series of sound and/or music. The `unencumbering' [Krueger, 1991] technology employed by Rokeby requires no head-mounted displays, bodysuits or wiring, and relies solely upon video tracking. It can be used by one or many users at the same time and requires no keyboard, no mouse, and no computer monitor for data input or output.
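The underlying technique - video frame differencing mapped to MIDI - can be sketched simply. This is not Rokeby's implementation; the frame representation, threshold, and note mapping are all illustrative assumptions:

```python
# Minimal sketch of the video-tracking idea behind systems like Very
# Nervous System: compare successive camera frames, and map the location
# of movement to MIDI note numbers. Frames here are plain lists of pixel
# brightness values; a real system uses camera input and a MIDI interface
# (both assumed away in this illustration).

THRESHOLD = 30  # brightness change that counts as "movement" (assumed)

def movement_to_notes(prev_frame, curr_frame, width):
    """Return MIDI note numbers for image columns where motion occurred."""
    notes = []
    for i, (a, b) in enumerate(zip(prev_frame, curr_frame)):
        if abs(a - b) > THRESHOLD:
            column = i % width
            # Map horizontal position to a note in the MIDI range 36-96.
            notes.append(36 + (column * 60) // max(width - 1, 1))
    return sorted(set(notes))
```

Even this toy version shows why the approach is "unencumbering": the only input is the camera image, so a gesture as small as a bent finger can produce a note without any worn hardware.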

In 1988, David Rokeby combined artistic efforts with Timothy McGuiness and a musical band, Supercussion, comprised of four people who are quadriplegic. Using the Very Nervous System, members have had success demonstrating freedom of expression through music by using methods such as wiggling a drum stick from the mouth and bending fingers. As legitimate artists, they have performed publicly to rave reviews.

Mandala Virtual Reality System

Just as the Very Nervous System uses the human body to produce effects in an invisible but audible virtual `world', Vincent John Vincent and the Vivid Group have commercialized a product that goes further in incorporating the user in a virtual space. The Mandala Virtual Reality System uses a video capturing process to project one or more users into a variety of digitized video environments that they can see on a monitor or screen in front of them. When they `touch' an object in that video environment, they can expect dramatic responses, such as sounds and animation. In one of the scenes, for example, the user can `play' an ensemble of percussion instruments with any body part or object; in another `landscape' the user finds that bubbles can be transformed into birds with a mere touch of the hand [Verbum 5.2, 1991]. As with the Very Nervous System, there is no `encumbering' technology with which the user needs to bother.

At the Association for Computing Machinery, SIG Computer Human Interface Conference in New Orleans (1991), Vincent John Vincent talked about the motivational effect his system has had upon individuals with both mental and physical disabilities in rehabilitation therapy programs. It is not difficult to see how users could be inspired to engage in movement activities in order to become a part of a magical animated world, effecting magical changes that can be seen and heard.


These examples of hypermedia and video-projection systems demonstrate that for people with disabilities, as well as for those without disabilities, interactivity and access can be broadened now to include environments and characters that are virtually there.


  1. Kraus, L., and Rose, A. "Using Interactive Videodisc Technology for AIDS Prevention for Persons with Mental Retardation", Proceedings of the Fifth Annual Conference `Technology and Persons with Disabilities,' March 1990.

  2. Krueger, M. W. Artificial Reality II, Reading, Massachusetts: Addison Wesley Publishing Company: 1991.

  3. Laurel, B. Computers As Theater, Reading, Massachusetts: Addison Wesley Publishing Company: 1991.

  4. Laurel, B., "Interface Agents Metaphors with Character," The Art of Human-Computer Interface Design, B. Laurel, ed. Menlo Park, CA: Addison-Wesley, 1990.

  5. Oren, T., Solomon, G., Kreitman, K., and Don, A., "Guides: Characterizing the Interface," The Art of Human-Computer Interface Design, B. Laurel, ed. Menlo Park, CA: Addison-Wesley, 1990.

  6. Rheingold, H. Virtual Reality, New York: Summit Books: 1991.

  7. Rokeby, D. "Interactive Kunst," Electronica International Compendium of the Computer Arts, 1991

  8. Rose, A. "The Effectiveness of the InfoUse Interactive Videodisc Program on AIDS Prevention for Adults With Developmental Disabilities: A Comparative Study of Two Control Options," Master's Thesis, San Francisco State University, San Francisco, CA, Fall 1990.

  9. Schiphorst, T., Calvert, T., Lee, C., Welaman, C., and Gaudet, S., "Tools for Interaction with the Creative Process of Composition," ACM CHI Proceedings, April 1990.

  10. Verbum, Inc., "Gallery I," Verbum Journal of Personal Computer Aesthetics 5.2, Fall/Winter 1991.

Alice Rose
High Tech Center Training Unit
California Community Colleges
21050 McClellan Road
Cupertino, CA 95014
(408)996-4636 Phone
(408)599-6042 FAX


Orientation Enhancement Through Integrated Virtual Reality and Geographic Information Systems

Erik G. Urdang
Rory Stuart


People with certain disabilities do without sensory information that others use for orienting themselves and getting around the world. One of the most difficult yet crucial tasks for the visually disabled traveller upon arriving in unfamiliar surroundings is to perform this orientation in order to begin navigation. Once a person knows where s/he is and which way s/he is facing, it is possible to count city blocks, listen for predictable landmarks, and avoid known obstacles, but until s/he is oriented these activities are either impossible or futile. The system we propose in this paper helps to solve this problem by providing for the visually impaired person the same sort of cues used by the sighted person for rapid orientation in a new location. We will also briefly discuss how related Virtual Reality systems could help a hearing-impaired user.

Imagine that you are visually impaired and that you have just arrived for the first time in New York City. Before leaving on your trip, using raised-relief maps, braille, or with the help of a sighted person, you carefully studied the layout of the metropolis and know where your ultimate destination is with respect to your anticipated point of arrival. You also have a good mental map of the important landmarks with respect to both of these points. Now, suppose you are wearing a device comprised of a directional antenna attached to a small processor and a pair of micromonitors or earphones. Suppose further that a number of well-known landmarks (say, The Empire State Building, The World Trade Center, The Statue of Liberty, The CitiCorp Building, and The Chrysler Building) all have radio beacons transmitting audio signals representing their names repeated at two-second intervals: "Empire State Building...Empire State Building..." Non-speech auditory icons could also be used to represent landmarks although this might be better suited to travellers who are most familiar with the area. As you get off the bus, you switch on your device, "look around", and pick up the signals being transmitted from the various landmarks.

A Taxonomy of Virtual Reality Systems

Before describing the application in more detail, it is worth considering some elements of a taxonomy of Virtual Reality systems and where, within this framework, our proposed system resides [4]. Please see Figure I.

Integration

One parameter by which a VR system may be characterized is the degree to which actual reality "intrudes into" (or is perceivable from within) the simulated world. When using a typical Head-Mounted Display (HMD) with stereo goggles, no matter where you look, all you see is the simulation. With a head-up display (e.g., the Private Eye TM), the image appears as if it were overlaid upon the actual scene. This dimension can be described as integration -- the degree to which the real world is integrated with the simulated reality [6]. In the auditory domain, an integrated Virtual Reality would let the user hear the sounds generated by the VR system, but would permit users to simultaneously hear the real-world sounds around them. Alternatively, an "exclusionary" (or non-integrated) auditory VR would block out the sounds of the outside world (as micro-monitors do), and let the user hear only the system-generated sounds. (If an integrated auditory VR is desired using micro-monitors, this can be achieved by mixing in sounds from the external world that are captured with small microphones.)


Immersion

Another parameter by which a VR system can be described is immersion. In the visual domain, the conventional stereo goggle setup is totally immersive in that the user sees the virtual world no matter which direction s/he looks. An example of a much less immersive system is one using CrystalEyes™, in which the 3-D image is seen only when the user gazes at the computer screen. In the auditory realm, a non-immersive virtual world would have sound coming only from a particular direction, whereas an immersive virtual world would give the user the impression that sounds could come from any direction in space.


Synesthetic Representation

Synesthetic representation refers to the representation of information from one sensory modality in another. An example of this in an existing system is the representation of touch via sound in the NASA Ames system [10]. Synesthetic representation can be iconic or can be achieved by isomorphic mapping.

The System

Phase I
In the simplest version of the system, a small number of tall landmarks (perhaps 10 to 20, each around 200 meters tall) would be chosen in order to create an irregular but dispersed grid of points. Each one would have attached to it one or more omnidirectional antennas transmitting at the same frequency as all of the other beacons. The frequency selected for these transmitters should be around 10 MHz in order to minimize reflections from buildings and to maximize transmission distance. Each of these beacons, as noted above, would emit, at frequent intervals, an audio message stating the common name of the landmark. The traveller would, upon arrival, locate one or more of these simply by turning his/her head until one of the signals could be heard. At that moment the device, along with the user's face, would be pointing directly at the landmark, and the user would therefore be oriented with respect to that landmark (please see Figure II). Ideally the user would be able to locate more than one beacon and would thereby be able to triangulate and ascertain with fair accuracy his/her location and orientation.
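The triangulation step can be sketched in a few lines. The following is an illustrative calculation only (the flat-ground approximation, the landmark coordinates, and the function name are our own assumptions, not part of the proposed device): given compass bearings to two landmarks at known positions, the user's position is the intersection of the two bearing lines.

```python
import numpy as np

def triangulate(l1, b1, l2, b2):
    """Estimate the user's (x, y) position from compass bearings
    (degrees clockwise from north) to two landmarks at known positions.
    Solves P + t1*d1 = l1 and P + t2*d2 = l2 for the position P."""
    l1, l2 = np.asarray(l1, float), np.asarray(l2, float)
    d1 = np.array([np.sin(np.radians(b1)), np.cos(np.radians(b1))])
    d2 = np.array([np.sin(np.radians(b2)), np.cos(np.radians(b2))])
    # t1*d1 - t2*d2 = l1 - l2  ->  solve the 2x2 linear system for t1, t2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, l1 - l2)
    return l1 - t[0] * d1

# User due south of landmark A (bearing 0 = north) and due west of B (bearing 90 = east)
print(triangulate((0.0, 5.0), 0.0, (5.0, 0.0), 90.0))  # -> approximately [0. 0.]
```

A real device would of course also have to cope with bearing error, for which hearing a third beacon gives a useful consistency check.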

Phase II
With the placement of a multitude of small, low-power, high-frequency (greater than 3 GHz), extremely-short-range transmitters (perhaps one at each street corner), the user could hear local information about which street corner was being passed. If tied in with the traffic light system, the user could also hear WALK/DON'T WALK information. By combining this with the beacons mentioned in Phase I, the user could know, for example: "I am currently standing at East 43rd St. and 5th Ave. [from the local source], facing the Empire State Building, with the Statue of Liberty ahead and to the right." Please see Figure III.

At this range, transmitter localization would be much more feasible and this would permit the use of Head-Related Transfer Functions (HRTFs) to simulate what would happen if real speakers were positioned in the external environment. The intended sensation is that the appropriately equipped objects are each calling out their names from the direction of their actual locations. HRTFs can be implemented in the form of Finite Impulse Response (FIR) filters to mimic the pinnae (outer ear) transforms that would occur in a human listener responding to a naturally occurring audio point source, thus making use of the human perceptual capabilities in sound localization [1, 2, 3]. Although pinnae characteristics vary from person to person, it has been shown that people can localize well with non-individualized transfer functions [8, 9].
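The FIR filtering step can be illustrated in miniature. The impulse responses below are toy stand-ins (a pure delay plus attenuation), not measured pinna transforms; a real system would substitute measured HRTF filter coefficients for the chosen source direction.

```python
import numpy as np

def spatialize(mono, h_left, h_right):
    """Render a mono signal as a stereo pair by convolving it with
    per-ear FIR impulse responses (toy stand-ins for measured HRTFs)."""
    left = np.convolve(mono, h_left)
    right = np.convolve(mono, h_right)
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Toy "HRTFs" for a source on the listener's right: the right ear hears
# the sound earlier and louder than the head-shadowed left ear.
h_right = np.zeros(32); h_right[0] = 1.0
h_left = np.zeros(32); h_left[20] = 0.5   # ~20-sample interaural delay

signal = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)
stereo = spatialize(signal, h_left, h_right)
```

Listening over earphones, the interaural delay and level difference alone are enough to push the source toward the right; the pinna-dependent spectral shaping that full HRTFs add is what resolves front/back and elevation ambiguities.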

Phase III
The third phase of this system would incorporate a number of additional technologies both to provide redundant location information and to enhance the user interface. As for location, low-cost Global Positioning System receivers are now available which could provide latitude and longitude input to the device with accuracy down to several meters. The device could also contain a small Geographic Information System which would have been pre-loaded with geographically registered data for the user's destination city. This could contain information about public transportation, landmarks, hazards, even restaurants and theaters. If this system were to come into wide use, one can even imagine vendors vying for advertising space in commercially available GIS data layers!
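As a sketch of how the GIS lookup might work, the hypothetical code below finds landmarks within a given radius of a GPS fix using great-circle distance. The layer contents and function names are illustrative assumptions, not part of any deployed system, and the coordinates shown are approximate.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(gis_layer, lat, lon, radius_m):
    """Return points of interest within radius_m of the GPS fix, nearest first."""
    hits = [(haversine_m(lat, lon, p["lat"], p["lon"]), p["name"])
            for p in gis_layer]
    return sorted((d, n) for d, n in hits if d <= radius_m)

# Illustrative data layer; coordinates are approximate.
layer = [
    {"name": "Empire State Building", "lat": 40.7484, "lon": -73.9857},
    {"name": "Statue of Liberty", "lat": 40.6892, "lon": -74.0445},
]
print(nearby(layer, 40.7527, -73.9772, 2000))  # landmarks within 2 km of the fix
```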

By augmenting the GIS technology with Artificial Intelligence enhancements, including speech recognition, the system could respond to user inquiries regarding directions, schedules, and path planning; again, the synthesized speech responses could be presented in spatialized audio and could be integrated with both real-world sounds and synthesized sounds from the beacons. Rather than try to specify the design of the actual user interface, we recommend working interactively with a number of potential users of the system in order to create an interface that will best meet their needs [7].

For Further Study: Hearing Impaired Users

We have focussed here on a system that would help a visually impaired user with orientation and navigation. The problems faced by the hearing impaired in acquiring information about the world are different, but could also be addressed by an integrated, immersive Virtual Reality system that uses synesthetic representation. A particular challenge is how best to graphically represent sound events (the ring of a doorbell, the honking of an oncoming car, the sound of a pot boiling over), especially those that are not in the user's line of sight.

We can imagine a number of approaches, using virtual framed mirrors, motion, and color coding, as well as the use of other modalities (e.g. tactile output), but iterative prototyping and evaluation will be needed to determine what representation works best for the target user population.

What these cases have in common is that integration of the virtual and real worlds is critical. Visually impaired users certainly will not want a system that cuts them off from the sounds of the world around them; nor will hearing-impaired users accept a system that prevents them from seeing the real world around them. Immersion is important, and synesthetic representation of information can best take advantage of the unimpaired senses of the user.

Using special circuitry in the device, these messages will be presented in 3-D audio to help you better localize them. As a result, you will be able to position and orient yourself in terms of your mental map and more autonomously travel to your destination. We will present the system in three phases with the first being the simplest and least expensive to implement and each successive stage being more refined and more useful. In addition to being ordered in terms of complexity these phases will be presented in such a way that the development of each could reasonably build upon the previous one.

There is a range of possible representations in VR (see, e.g., [5]), and determination of the best representation should take into account the task, the user, and the context.


  1. Begault, D.R., and Wenzel, E.M. "Techniques and Applications for Binaural Sound Manipulation in Human-Machine Interfaces." NASA Technical Memorandum 102279, Moffett Field, California, August 1990.
  2. Begault, D.R., and Wenzel, E.M. "Headphone Localization of Speech Stimuli." In Proceedings of the Human Factors Society, San Francisco, California, September 1991, 82-86.
  3. Foster, S.H., Wenzel, E.M., and Taylor, R.M. "Real Time Synthesis of Complex Acoustic Environments." In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, N.Y., Oct. 20-23, 1991.
  4. Stuart, R., and Kellogg, W.A. "A Taxonomy of Virtual Realities." Paper in process, 1992.
  5. Stuart, R., and Thomas, J.C. "The Implications of Education in Cyberspace." In Multimedia Review, Volume 2, Issue 2, Summer 1991.
  6. Stuart, R. "Virtual Reality: Directions in Research and Development." In Interactive Learning International, April 1992.
  7. Urdang, E.G. "How to Design a GIS User Interface...Don't Bother: User Interface Design Principles for Geographic Information Systems." In User Interfaces for Geographic Information Systems: Position Papers for the Specialist Meeting of the National Center for Geographic Information and Analysis, Research Initiative #13, June 23-26, 1991, Buffalo, New York.
  8. Wenzel, E.M., Wightman, F.L., Kistler, D.J., and Foster, S.H. "Acoustic Origins of Individual Differences in Sound Localization Behavior." Journal of the Acoustical Society of America, 84 (1988).
  9. Wenzel, E.M., Wightman, F.L., and Kistler, D.J. "Localization with Non-individualized Virtual Acoustic Display Cues." In Proceedings of CHI '91, ACM Conference on Human Factors in Computing Systems, New Orleans, Louisiana, April 27-May 2, 1991.
  10. NASA. "Virtual Reality Videotape." Broadcast & AudioVisual Branch, Code PMD, Washington, D.C. 20546, 1991.

Erik Urdang
Rory Stuart
NYNEX Science & Technology, Inc.
251 Locke Drive, Room S2B80
Marlborough, MA 01752
(508)624-1502 Phone
(508)624-4728 FAX

Return to the Table of Contents 

Return to the Table of Proceedings 

Access Issues Related to Virtual Reality for People with Disabilities

Gregg C. Vanderheiden, Ph.D.
John Mendenhall, B.S.
Tom Andersen, M.S.


In examining the application of Virtual Reality with individuals who have disabilities, there are new potentials and opportunities, as well as a potential for creating new barriers. In some cases, what represents a potential new opportunity for individuals with one type of disability will create barriers for individuals with other types of disabilities. In some cases, the barriers are artificial, and can be overcome through careful design and implementation of the Virtual Realities. In other cases, the barriers are inherent in the Virtual Reality itself.

In order to help identify those Virtual Reality applications which are most likely to cause access problems, as well as to indicate possible solution strategies, it is useful to divide the applications into two major categories or classes: those applications where Virtual Reality is used to construct a metaphor for the presentation and manipulation of information which could also be provided in other forms (e.g., verbally), and those applications which are inherently three-dimensional or multi-sensory in nature.

This paper will first lay a foundation for looking at Virtual Reality applications from the perspective of these two classes, and then go on to discuss the impact of the two classes on each of four major disability groups. The paper will then conclude with a discussion of some problems and areas in which focused efforts may help to maximize the benefit of Virtual Reality and minimize its potential for creating new barriers.

A Two-Class Model for Dealing with Virtual Reality Applications

The two-class model presented here is an extension of the same model used to discuss graphical user interfaces (GUIs). In discussing the application of this model to Virtual Realities (VR), it is useful to draw parallels between the fairly mature technology of the GUIs and the still-developing technologies of VR. In this fashion, some of the access issues, as well as potential applications, can be highlighted. The two classes as they apply to graphical user interfaces are:

  1. Class 1: information that is presented using graphic metaphors in the GUI, but which could also be presented in words. This includes text and visual metaphors for concepts that can be expressed verbally (e.g. a scroll bar, which could be replaced by a series of commands such as Page Up, Page Down, Set Position, etc. and an indicator of the individual's location within the overall document as a percentage).
  2. Class 2: information which is inherently graphic and cannot be described easily and completely in words.
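To make the scroll-bar example in Class 1 concrete, a verbal-command equivalent might be sketched as follows (the class and method names are our own, chosen purely for illustration):

```python
class Document:
    """Verbal-command equivalent of a scroll bar: the same underlying
    operations (Page Up, Page Down, Set Position) plus a percentage
    readout of the current location within the document."""

    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.page = 1

    def page_down(self):
        self.page = min(self.page + 1, self.total_pages)

    def page_up(self):
        self.page = max(self.page - 1, 1)

    def set_position(self, percent):
        # Jump to an approximate location, like dragging the scroll thumb
        self.page = max(1, round(percent / 100 * self.total_pages))

    def position(self):
        pct = round(self.page / self.total_pages * 100)
        return f"Page {self.page} of {self.total_pages} ({pct}%)"

doc = Document(total_pages=40)
doc.set_position(50)
doc.page_down()
print(doc.position())  # -> Page 21 of 40 (52%)
```

Nothing visual is required: a screen reader or braille display can present the same state that the scroll bar renders graphically.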

Class 1 Applications in Virtual Reality

Class 1 VR applications would similarly be those applications where either textual information is being presented or where a three-dimensional VR metaphor is being used to present information which is not inherently 3-D in nature and could be expressed in words. Looking at the parallel between the graphical user interface and the Virtual Reality applications is useful here.

For instance, on the GUI, the iconic representation of the desktop with folders and files could also be handled with a hierarchical directory in a text format. The information itself is essentially verbal and requires no visual component to effectively convey its meaning. This type of information is therefore categorized as Class 1, since it is inherently verbal in content and no meaning is lost when interpreting these iconic representations verbally. Some users may find the visual metaphor easier to use and some may find the verbal easier.

In a similar manner, Class 1 applications within VR would involve the use of Virtual Reality as an alternate way of presenting or manipulating information which could otherwise be presented in verbal fashion. Similar to the example from the GUI, the process of filing a document in Virtual Reality might involve using a 3-D desktop with folders and file drawers. However, this function could also be handled as a hierarchical directory using text format and commands. As the VR technologies develop, some users may find it easier to interact with and manipulate some types of information using the virtual metaphors, just as many users of GUIs find that a visual metaphor facilitates carrying out some types of computer operations. However, this is not necessary, and the information/manipulation could be handled in other ways if a user were unable to use the VR environment.

Class 2 Applications in Virtual Reality

The Class 2 applications would then be those applications which are not metaphorical, and which involve the presentation and manipulation of information which is distinctly 3-D in nature (visual, auditory, tactual, or combinations of these three). For example, we might use the VR display to study the difference between laminar flow and turbulence near the wall of a tube. This would be an application of VR technology that could not be expressed verbally in a way that would convey the same information as is made available through the VR presentation. A second example involves the manipulation of molecules to study molecular forces. This is again distinctly VR in nature and could not be accomplished or experienced in the same way through verbal commands and verbal feedback.

The Importance of This Distinction

Drawing a distinction between these two types of VR applications (those which are metaphorical and could be expressed through verbal commands and feedback, and those dependent upon the VR presentation format) is useful when considering the implications of VR for people with disabilities. In particular, it is useful to separate those applications of VR which will fall into Class 1. For individuals with disabilities which limit their ability to access some aspects of the Virtual Reality (e.g., individuals who are blind would be unable to access the visual component; individuals who are deaf would be unable to access the auditory component), alternate mechanisms for presenting the same information can be employed. For example, if the Virtual Reality display is being used to present some underlying concepts in a 3-D metaphor, individuals who are unable to access that metaphor may be able to access and control the underlying concepts through a different metaphor that does not involve the particular sense or ability which they lack.

The converse can also be true. For some disabilities, it may be that activities which are now carried out in verbal fashion (e.g., text) could be converted into a Virtual Reality metaphor, thereby making the activities easier to understand. For instance, individuals with cognitive or language disabilities who have difficulty dealing with systems involving verbal commands and verbal feedback may find it easier to carry out the same commands when they are presented in a Virtual Reality made up of salient 3-D metaphors.

Applications of Class 1 and Class 2 VR with Different Types of Disabilities

To examine some of the different applications and implications of Virtual Reality with people with disabilities, let us take a quick overview of VR from the perspective of four major disability groups.

VR and Visual Impairment

Currently, the most highly developed aspect of Virtual Reality displays is the visual component. For individuals who are blind, this means that current Virtual Reality displays are of limited value. In those applications where the Virtual Reality is used simply as a metaphor, individuals who are blind could have access to the same systems if the underlying concepts being displayed and command structures required for operation were made available. Individuals who are blind could then use a verbal or other nonvisual mechanism to carry out the same tasks accomplished via the Virtual Reality visual interface. This would be similar to the provision of verbal access to the graphical user interfaces on modern computer systems.

Access to Class 2 applications, or the Class 2 aspects of an application in Virtual Reality, would remain elusive, however, as long as the Virtual Reality is dependent upon the visual display of information. As Virtual Realities add sound, touch, and force feedback, the value and accessibility of these environments for people who are blind will increase. Their access to these Virtual Realities would then approach their access to everyday reality. The difference would be that, since the Virtual Reality is computer-generated, information such as color or visual texture could be presented verbally or converted into tactually discernible information to make it "visible" to the individual who is blind. In addition, the individual could also manipulate the overall size of the environment. Familiarizing themselves with the layout of a building might be accomplished by shrinking the building to a small size, which the individual could quickly explore with their hands. This would be much faster and more effective than walking around on the floor to get an orientation for corridor layout, etc. Similarly, detail on very small objects, which could be examined by sight but not by fingertip, could be easily enlarged so that it became tactually distinct.
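The environment-scaling idea amounts to a simple geometric transform on the virtual model. This is an illustrative fragment only; the vertex representation and the 1:200 factor are assumptions made for the example.

```python
def scale_model(vertices, factor, origin=(0.0, 0.0, 0.0)):
    """Uniformly scale a virtual model about an origin point -- e.g. shrinking
    a building so its layout can be explored by hand, or enlarging a small
    object until its detail becomes tactually distinct."""
    ox, oy, oz = origin
    return [(ox + (x - ox) * factor,
             oy + (y - oy) * factor,
             oz + (z - oz) * factor) for x, y, z in vertices]

# Shrink a roughly 100 m x 60 m x 30 m building to a 1:200 hand-scale model
building = [(0, 0, 0), (100, 0, 0), (100, 60, 0), (0, 60, 30)]
print(scale_model(building, 1 / 200))
```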

The ability to create special virtual guides or to enhance objects tactually to facilitate their location could also be used. For example, individuals trying to locate an object could request that the object put out a taut string to the individual, which the individual could then follow back to the object. Similarly, the individual could request that an object be made larger or in some other way tactually distinct, and/or emit a sound, so that it could more easily be located.

Thus, at the same time that Virtual Realities can threaten to create a larger gap between the abilities of people with sight and those without, they can also, as sound and tactile technologies are refined, provide new capabilities and opportunities for people who are blind to explore and manipulate things in their world.

Virtual Reality and Cognitive Impairments

While the translation of metaphorical applications of VR to verbal form was discussed for people who are blind, because they could bypass the virtual (visual) metaphor and access the same concepts verbally, just the opposite may be true for individuals with cognitive or language impairments. Activities which are now carried out through verbal (text) commands and feedback may be much easier for people with cognitive or language impairments if they were rendered as Virtual Reality metaphors. For example, activities or devices which currently require the individual to react to information presented in written text (e.g., following the directions for cooking) might be easier if the instructions were presented as graphic sequences. They may be easier yet if the information is presented as actual three-dimensional representations, or if the activities could be carried out through manipulation of three-dimensional metaphorical objects.

In addition, Class 2 applications of Virtual Reality could be used to help individuals with cognitive impairments by allowing them to practice in a less complex and more forgiving environment (for instance, developing skills associated with activities of daily living). Again, this would require that the Virtual Reality technology had achieved a very high degree of visual, tactile and sonic reality. Once high fidelity VR is available, environments where mistakes are less catastrophic, and the overall stimuli are reduced could be used to train individuals to carry out activities of daily living. Slowly, the environment could become more complex and realistic bringing the individual to the point of being able to function safely and independently in the real world.

Virtual Reality and Hearing Impairment

Here, the use of visual metaphors for auditory events could serve both as an alternate presentation for auditory events and as a way to help teach some concepts, such as sound directionality, to individuals who are congenitally deaf. Individuals in virtual environments (or virtual environments which overlay real environments) could use new techniques that allow sound events to be presented visually. Visual sound waves emanating from a ringing phone, or sound arrows emanating from devices which are making noise, provide salient information about 3-D auditory sources. The shape, thickness, and color of the arrows could provide additional information regarding the sound's character. These, as well as other techniques for presenting sonic information visually, could be used to provide access for people who are deaf or who have severe hearing impairments within both Class 1 and Class 2 VR applications.
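One way the sound-arrow encoding might be represented in software is sketched below. The particular mappings (loudness to thickness, urgency to color) and all names are illustrative assumptions; prototyping with deaf users would be needed to find encodings that actually work.

```python
from dataclasses import dataclass

@dataclass
class SoundArrow:
    """Visual stand-in for a sound event: the arrow points toward the
    source, while thickness and color encode the sound's character."""
    source: str
    azimuth_deg: float   # direction of the sound source from the user
    thickness: int       # illustrative mapping: proportional to loudness
    color: str           # illustrative mapping: urgency of the sound

def arrow_for(source, azimuth_deg, loudness_db, urgent):
    thickness = max(1, int(loudness_db / 10))
    color = "red" if urgent else "blue"
    return SoundArrow(source, azimuth_deg, thickness, color)

horn = arrow_for("car horn", azimuth_deg=225.0, loudness_db=90, urgent=True)
phone = arrow_for("ringing phone", azimuth_deg=40.0, loudness_db=60, urgent=False)
print(horn)   # thick red arrow from behind and to the left
print(phone)  # thinner blue arrow from ahead and to the right
```

Crucially, the azimuth lets a sound from outside the user's field of view still be rendered at the edge of the display, addressing the line-of-sight problem noted above.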

In addition to providing individuals who are deaf or have severe hearing impairments with access to regular VR applications, there are a number of specific applications which might be of particular benefit to people with hearing impairments. For example, the ability to have a fully animated cut-away view of an individual's mouth and throat could greatly facilitate the ability of an individual who is congenitally deaf to learn to speak, by observing the inner workings of the oral cavity of others and/or themselves.

Virtual Reality and Physical Impairment

With regard to the presentation of information, individuals with physical impairments have all of the same abilities to access and use Virtual Realities as individuals without physical impairments. The primary difficulty would involve their ability to manipulate the virtual objects. However, since the manipulation is actually being carried out through some sensing of the controlled movements of the user, it should be possible to allow individuals with physical disabilities to use alternate control sites or strategies to carry out the same manipulations. Thus, where an able-bodied individual would use their hand to reach out, pick up a virtual flask, and pour out the contents, an individual who is paralyzed may be able to use a combination of head, mouth, and facial movements to control the fingers of the virtual hand and carry out the same manipulations. Individuals who have movement, but who are very weak or who have restricted range of motion, could use a virtual environment to increase both their reach and their virtual strength in manipulating objects. In addition, the virtual environment provides everyone with the ability to move about, fly through the air, and otherwise maneuver within the virtual environments in ways which are unrelated to the physical constraints of our human bodies. When maneuvering by pointing one's hand, for example, it makes no difference whether one is sitting in an ordinary chair or a wheelchair. An individual can move about and manipulate objects within the virtual environment with the same ease.

Finally, individuals who, because of a physical disability or medical support systems, find it difficult to physically travel to conferences or for sightseeing may find that Virtual Realities and telepresence will allow them to more easily participate in virtual conferences and/or virtual travel.


As with most advancing technologies, Virtual Reality offers opportunities and dangers. In many cases, the dangers can be avoided if attention is directed toward them early on. Three areas that should be kept in mind as VR evolves are:

  1. Where Virtual Reality is used as a metaphor, the underlying constructs should also be made available. Just as with graphical user interfaces, Virtual Reality metaphors are likely to appear in the computer and information systems of the future. As long as the underlying concepts being represented metaphorically are also available, they can be presented in other forms (e.g., nonvisual forms for people who are blind, nonauditory forms for people who are deaf), and access to these systems can be maintained. Similarly, commands or manipulations which are carried out through VR metaphors requiring physical dexterity or eye-hand coordination should also be executable via commands or other mechanisms demanding less dexterity or coordination, so that they can be accessed by individuals with physical or visual impairments.
  2. As long as the VR environment is essentially visual, it will preclude participation by individuals who are blind. As sonic and tactile technologies are brought on-line, the accessibility of these environments to people with visual impairments can be greatly enhanced.
  3. As these capabilities come on-line, it will be important to present information redundantly, using as many senses as possible, in order to facilitate the participation of users with mild and moderate, as well as severe, visual or hearing impairments.

Although the VR environments allow individuals to utilize all of their physical and sensory abilities, in this respect they are biased toward individuals who have all of these abilities intact. However, because the VR environment is artificial, it is also possible to provide sensory substitution or alternate physical control in ways that would be very difficult or impossible in the real environment. The extent to which this alternate sensory and physical presentation and control is possible with future Virtual Reality environments will be a function of how open the underlying structure of these systems is, and how well they have been designed to support alternate presentation and manipulation interfaces. Our experience with the current graphical user interfaces suggests that unless these issues are raised early, the user interfaces will be optimized for individuals with full sensory and physical capabilities, and the ability to tap into the VR architectures at a level appropriate for using alternate access strategies will not exist except on a post-hoc or "patch" basis. Although virtual reality now appears to be something that will only be practical some time off in the future, experience has taught us that the future creeps up on us at ever-increasing rates. Now is the time to ensure that these basic concepts and the importance of these alternate presentation and control strategies are clearly seated in the awareness of those at the forefront of developing this new technology.

Gregg Vanderheiden, John Mendenhall, and Tom Andersen
Trace R&D Center, University of Wisconsin
S-151 Waisman Center
1500 Highland Avenue
Madison, WI 53705
(608)262-6966 Phone
(608)262-8848 FAX

Return to the Table of Contents 

Return to the Table of Proceedings 

The SLARTI System: Applying Artificial Neural Networks to Sign Language Recognition

Peter Vamplew and Anthony Adams


Hand-sensing technology developed for Virtual Reality applications has also created new possibilities for using computers as a communications interface between deaf or nonvocal individuals and non-signing persons. Manual speech systems based on this technology have already been developed. The SLARTI (Sign Language Recognition) system aims to build on these existing systems with the overall goal of being capable of recognizing the hand gestures involved in Auslan (Australian Sign Language) and converting them into a format suitable for use by a voice synthesizer.


A communications barrier exists between deaf people and the hearing community. Whilst the deaf can communicate effectively amongst themselves using sign languages, the vast majority of hearing persons are ignorant of such languages. Although methods such as hand-written messages and simple gestures can be used as a means of communication, such mechanisms are not always adequate or convenient. These difficulties can restrict the extent to which deaf people become involved in society on both a professional and personal level.

Until recently, technological attempts to overcome this communications barrier have required much adaptation on the part of the deaf user, mainly due to the problem of measuring the elements of sign language. For example, devices such as the TDD allow a non-vocal person to communicate, but only by an unnatural means of communication such as a keyboard. Whilst techniques for recording and analyzing typing or speech are well-developed, the technology for sensing hand position and movement has been lacking. The human visual system is capable of discriminating between the hand motions that constitute signs, but computer equivalents of this system lack the sophistication to perform this task. However, recent interest in the field of Virtual Reality has led to the development of sensor-equipped gloves which may provide the key to the creation of automated sign language translators. Most VR systems now use a glove equipped with sensors measuring finger bend and hand position and movement to track the gestures made by the user. At the very least, such gloves allow the first steps to be taken along a path leading to a more natural communication device.

Existing Hand - Gesture to Speech Systems

With the development of such gloves, research into computerized recognition of gestures and sign language became possible. The two most advanced systems so far created are those built by Sidney Fels of the University of Toronto and Jim Kramer of Stanford University.

Glove-Talk is a prototype system developed at the University of Toronto, based around the VPL DataGlove™. Gestures made by the user wearing the glove are converted into text and then passed to a voice synthesizer. The system provides the user with the ability to control the rate at which words are spoken as well as the stress placed on a word. However, the gestures recognized by Glove-Talk, whilst loosely based on American Sign Language, do not constitute a real language. The handshape made by the user selects a root word from a list of 66 words, and the direction of movement of the hand chooses an ending for that word. The major weakness of this system is that it is not based on an actual sign language, and hence more training is involved in learning how to use it [1,2].

The Talking Glove developed by Jim Kramer of Stanford University is part of a planned two-way integrated portable system for communication between vocal and non-vocal individuals. The glove allows the deaf person's hand movements to be converted to speech, whilst a voice recognition unit linked to either an alphanumeric or handheld Braille display converts the hearing person's speech into a recognizable format. The Talking Glove makes use of the CyberGlove™, which was designed by Kramer specifically for this project. The initial aim of the system was to recognize and interpret Pidgin Signed English (PSE), but this had to be scaled down to the task of converting one-hand fingerspelling into speech. Common words can also be assigned to particular handshapes in order to speed up communication. Therefore, the existing system deals only with the static handshape produced by the user and is not concerned with any movements of the hand or fingers. However, Kramer's ongoing research aims to add these facilities to the system and expand it to meet the original goal of converting PSE into speech [4,5].

The SLARTI System


The SLARTI system is based around a more advanced version of the CyberGloveTM used for Kramer's Talking Glove project. The modified glove incorporates position and motion detectors which will provide as inputs to SLARTI all four manual components of Auslan signs (handshape, place of articulation, orientation, and movement), as opposed to the existing systems, which deal almost exclusively with static or quasi-static handshapes with only limited amounts of motion. These additional measuring facilities will allow SLARTI to take the next step along the path to full sign-language-to-English translation. However, it is important to note that the SLARTI system itself will not perform such translation: whilst it will be capable of recognizing Auslan signs, the context-dependent task of translating these signs into English is left for future research.


In recent years there has been growing interest amongst researchers in the fields of computer science, neurophysiology, and psychology in artificial neural networks. These networks are essentially very simplistic models of the interaction of neurons in the brain. Each neuron receives weighted inputs from several other neurons, and produces an output varying in level depending on the combined magnitude of these inputs. Neural networks have proved extremely useful for the task of pattern recognition and classification because of their ability to learn from example data and hence generalize to previously unseen examples. This makes them more suitable than traditional programming methodologies for problems such as sign language recognition where the exact algorithm to perform the required task is unknown.
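The weighted-sum behaviour of a single artificial neuron described above can be sketched in a few lines; the sigmoid activation function and the particular weights and bias are illustrative only:

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs is passed
    through a sigmoid 'squashing' function, so the output level varies
    with the combined magnitude of the inputs."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A stronger combined input drives the output closer to 1.
weak = neuron_output([0.1, 0.2], [0.5, 0.5], bias=-1.0)
strong = neuron_output([0.9, 0.8], [0.5, 0.5], bias=-1.0)
```

Networks of many such units, with the weights adjusted from example data, provide the generalization ability referred to above.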

Fels' Glove-Talk system was developed using neural networks, and demonstrated that this paradigm could be applied to the task of gesture recognition. The SLARTI system aims to apply this neural network architecture to the more complex task of recognizing genuine signing.


The SLARTI system will consist of a number of linked sub-networks, each performing a particular sub-task, as illustrated in Figure 1. For example, there will be a network associated with handshape, classifying each set of data presented to it as one of the 31 handshapes used in Auslan, based on the information provided by the CyberGloveTM [3]. The early networks dealing with the basic features such as handshape, orientation and location will serve as pre-processors for the motion network, which will itself be a pre-processor for the main sign-classification network. Dividing the system into a series of smaller networks in this manner has a number of advantages. Firstly, the networks can be trained independently of each other, which both facilitates the detection of errors in the system and reduces the amount of training required. After training, the networks can be interconnected either by the `connectionist glue' approach of adding and training additional connection neurons [8] or via standard iterative code (thereby creating a hybrid system). In practice a combination of these methods will probably be used. Secondly, this structure makes the system easier to extend when new signs need to be added to its vocabulary, as the pre-processing networks do not require retraining.
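The "standard iterative code" style of glue can be sketched as follows, with each trained sub-network modeled as a plain function; the feature names, sensor values and sign table here are purely illustrative, not taken from SLARTI itself:

```python
# Hypothetical stand-ins for trained sub-networks: each maps a raw
# sensor frame to a symbolic feature class.
def handshape_net(frame):
    return "fist" if frame["flex"] > 0.5 else "flat"

def orientation_net(frame):
    return "palm_up" if frame["roll"] > 0 else "palm_down"

def location_net(frame):
    return "chest" if frame["y"] < 1.0 else "head"

# Iterative-code glue: combine the sub-network outputs into a feature
# tuple and look it up in the final sign-classification stage.
SIGN_TABLE = {
    ("flat", "palm_up", "chest"): "GIVE",
    ("fist", "palm_down", "head"): "THINK",
}

def classify_sign(frame):
    features = (handshape_net(frame), orientation_net(frame),
                location_net(frame))
    return SIGN_TABLE.get(features, "unknown")
```

Adding a new sign to the final stage requires no change to the pre-processing stages, which is the extensibility advantage noted above.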

The division of the system into sub-networks based on our knowledge of the structure of the problem also allows the selection of the appropriate network architecture for each sub-task, rather than demanding a homogeneous architecture over the entire network. For example, the handshape, location and orientation networks need not be concerned with the temporal component of the signal; therefore a standard feed-forward network should be the best topology for these networks. In contrast, the motion and sign-classification networks will be required to make use of temporal information and hence will require a more sophisticated architecture, such as the Time Delay Neural Network topology [7].


The task of sign language recognition is obviously a major one, and it would be foolish to attempt to develop the entire SLARTI system in one leap. Development therefore will be a gradual process starting with simplified problems and building up to the full task.

There are a number of different features of the problem which can be simplified for the purposes of system development. The first is the variability found in human signing. As in any human action, perfect repeatability is not found in signing, and therefore the same sign performed twice by one person (or by different people) will not be identical. To overcome this difficulty, the initial pilot study of SLARTI is being trained on simulated perfect data and then tested on noisy data, before being extended to handle the variability of genuine signs recorded from actual signers.
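The noisy test data can be produced by perturbing the simulated frames, along these lines; the 18-sensor frame size and the 5% noise level are assumptions for illustration, not figures from the study:

```python
import random

def add_noise(frame, level=0.05):
    """Perturb each simulated sensor value by up to +/- level,
    mimicking the variability of genuine signing."""
    return [x + random.uniform(-level, level) for x in frame]

random.seed(0)                 # repeatable runs
clean = [0.5] * 18             # one simulated glove frame
noisy = add_noise(clean)
```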

One of the major difficulties foreseen is the motion component of signs. For this reason the first systems will be trained on static signs for which movement is not a distinguishing feature. Signs involving motion will then be added to the training set at a later stage in the project.

The blurring of one spoken word into the next during continuous speech has posed problems for voice recognition systems and it is anticipated that similar difficulties will occur with sign recognition. Therefore the pilot system will be limited to the task of recognizing isolated signs, rather than attempting to identify the individual signs amidst a stream of continuous signs. At a later stage the system will be extended to also deal with continuous signing.

Progress report

Progress so far has been limited due to unavoidable delays in obtaining the CyberGloveTM. However, work has commenced on simulated handshapes, created from estimates of the outputs of the glove's sensors. Whilst it is difficult to know how accurate these simulations are in the absence of real data for comparison, results so far have been encouraging.

Twenty separate single-hidden-layer feed-forward networks trained using back-propagation on the simulated data made no errors when tested on the same data. In the presence of significant levels of noise added to the test data, the networks still performed well, as can be seen from the results in Table 1. Superior results were obtained by employing the "committee system", in which several networks are trained and presented with the same data, and the output selected by the most networks is taken as the system's output. This use of committees is particularly effective when high levels of noise are present [6].
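The committee system described above amounts to a simple majority decision across the trained networks; a minimal sketch:

```python
from collections import Counter

def committee_vote(predictions):
    """Return the classification chosen by the most networks in the
    committee (ties resolved in favour of the earliest-listed class)."""
    return Counter(predictions).most_common(1)[0][0]

# Five networks classify the same noisy handshape; two err,
# but the majority still recovers the correct label.
votes = ["fist", "fist", "point", "fist", "flat"]
```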

Note that these tests were performed only on static handshapes, whilst in real life the hand is constantly moving from one shape to the next. In an effort to replicate this situation, the same network has been tested on inputs gained by interpolating between randomly chosen handshapes from the initial training set. It was found that whilst the appropriate output nodes peaked in the correct locations corresponding to genuine handshapes, false peaks were sometimes observed between these locations, as the simulated data derived by interpolation bore a resemblance to another handshape. To eliminate these false handshapes, a pair of post-processing thresholds was added to the network. A handshape is output only if the activation of its output node remains above the magnitude threshold for a certain amount of time determined by the temporal threshold. It was found that the magnitude threshold greatly improved the performance of the network, particularly in the presence of noise. The temporal threshold was of little use, with a value of 1 producing the best results for all examples.

The limited effect of the temporal threshold was found to be due to two shortcomings in the simulation. Firstly, the number of interpolated steps between handshapes was low, meaning that it was not possible to make sufficiently fine variations in the time threshold. This was addressed by increasing the number of interpolated steps to a more realistic value, based on the sampling rate of the CyberGloveTM. Secondly, the interpolation used was linear, producing an unnatural jerkiness in the simulated movement of the hand. The linear interpolation was replaced by a quadratic scheme producing more samples around the region of genuine handshapes, to more accurately simulate the smooth movement of the hand between different handshapes. These modifications increased the effectiveness of the time threshold, allowing the network to detect the genuine handshapes with a very low error rate even with high levels of noise in the inputs, as shown in the last column of Table 2.
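The difference between the interpolation schemes can be sketched as below; a smoothstep-style easing curve stands in here for the paper's quadratic scheme, since both cluster samples near the genuine handshapes at either end of the movement:

```python
def interpolate(a, b, steps, ease=True):
    """Generate intermediate sensor frames between handshapes a and b.
    With ease=False the spacing is linear (jerky motion); with
    ease=True samples cluster near the endpoints, mimicking the hand
    dwelling on a genuine handshape before moving smoothly away."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        if ease:
            t = t * t * (3.0 - 2.0 * t)   # slow at both ends
        frames.append([x + t * (y - x) for x, y in zip(a, b)])
    return frames

eased = interpolate([0.0], [1.0], steps=4)
```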

These improvements in performance were not gained without cost. The modified network occasionally records two `hits' for the same genuine handshape, as the output level temporarily falls below the threshold before rising above it again. It should be possible to overcome this problem with the addition of a second magnitude threshold at a lower level. Once a node's output has risen above the higher threshold, the corresponding handshape is not considered to have ended until the output falls below the lower threshold. In this way, temporary variations in the output level will not lead to multiple recognitions of the same handshape.
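A sketch of the complete post-processing stage, combining the temporal threshold with the proposed pair of magnitude thresholds; all threshold values here are illustrative:

```python
def count_hits(activations, high=0.8, low=0.6, min_frames=2):
    """Count distinct handshape recognitions in one output node's
    activation trace. Recognition requires the activation to stay
    above `high` for `min_frames` consecutive frames (the temporal
    threshold); the handshape is not considered ended until the
    activation drops below `low`, so brief dips between the two
    magnitude thresholds cannot cause a second recognition."""
    hits, above, active = 0, 0, False
    for a in activations:
        if not active:
            above = above + 1 if a >= high else 0
            if above >= min_frames:
                hits += 1
                active = True
        elif a < low:
            active, above = False, 0
    return hits

# One genuine handshape whose output briefly dips to 0.7:
trace = [0.2, 0.9, 0.95, 0.7, 0.9, 0.3]
```

With a single threshold (low equal to high) the same trace is counted twice; the lower threshold removes the duplicate hit.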


Ideally the SLARTI system would be capable of recognizing full Auslan and converting it into spoken English output. If Auslan were merely a manual representation of English, this task would involve only recognition of the signs and vocalization of the corresponding English words. However, Auslan is a true language in its own right, with grammar and syntax separate from those of English. Hence the system would also require the capability of translating from Auslan into English. Although great strides have been made, the problem of automatic translation has yet to be solved, even for similar written languages such as English and French. The task of translating from Auslan to English, two languages without a common medium, is well beyond current knowledge, and hence for now the SLARTI system will produce only the gloss associated with each sign recognized rather than attempting full translation. However, as more progress is made in machine translation it is envisioned that SLARTI could be incorporated into a more comprehensive translation system, as illustrated in Figure 2.

In the meantime SLARTI will augment conversations between a signer and a hearing person with some knowledge of signing. The combination of signing and spoken glosses should be a more effective means of communication than the signing alone. The system's effectiveness will be enhanced by the fact that even when conversing with a hearing person with some knowledge of signing, native signers will generally use a pidgin form of signing, rather than full Auslan [3]. Therefore the spoken output will tend to more closely resemble English than if the signers were to use the normal form of Auslan.

In addition to this purpose, SLARTI may help to break down the communications barrier from the other side as well, by serving as a feedback tool to aid hearing persons in learning Auslan. One problem in learning signing is the difficulty in knowing whether the signs are being performed correctly in the absence of feedback from an experienced signer. SLARTI may be capable of providing such feedback and hence allowing the novice signer more opportunities to practice. This would be particularly useful if SLARTI was combined with recently developed video-disc systems for displaying the correct formation of signs. In practice, this is more likely to be the initial role of SLARTI as it will be useful as a training device even with a vocabulary which is insufficient for it to be utilized as a means for communication.


The hand-measuring gloves developed for the realm of Virtual Reality have also inspired interest in the development of computer systems capable of providing a means of communication between deaf and non-vocal persons and non-fluent signers. The research conducted by Fels and Kramer has shown that the glove technology can be used to translate hand gestures into speech. The challenge now is to build on this foundation by developing systems more closely tied to existing means of manual communication. In the near future such technology will serve to augment communication across the manual/vocal language barrier, and in the long run it may even break down this barrier by providing full translation facilities.


The authors wish to acknowledge the funding received for this project from the University of Tasmania and the Department of Computer Science.


  1. Fels, S. and Hinton, G., "Building Adaptive Interfaces with Neural Networks: The Glove Talk Pilot Study," in Daiper D., Gilmore, D., Cockton, G. and Shackel, B. (eds.), Proceedings of the IFIP TC-13 Third International Conference on Human-Computer Interaction, pp 683-688, North-Holland, Amsterdam
  2. Fels, S., "Building Adaptive Interfaces with Neural Networks: The Glove Talk Pilot Study," Department of Computer Science, University of Toronto, Technical Report CRG-TR-90-1, February 1990
  3. Johnston, T., Auslan Dictionary: A Dictionary of the Sign Language of the Australian Deaf Community, Deafness Resources Australia Ltd, Petersham, New South Wales, 1989
  4. Kramer, J. and Leifer, L., "The Talking Glove: An Expressive and Receptive "Verbal" Communication Aid for the Deaf, Deaf-Blind, and Nonvocal," in Murphy, Harry J. (ed.), Proceedings of the Third Annual Conference on Computer Technology/Special Education/Rehabilitation, California State University, Northridge, October 15-17, 1987
  5. Kramer, J. and Leifer, L., "The Talking Glove: A Speaking Aid for Nonvocal Deaf and Deaf-Blind Individuals," RESNA 12th Annual Conference, New Orleans, Louisiana, 1989
  6. Vamplew, P. and Adams, A., "Real World Problems in Backpropagation," Department of Computer Science, University of Tasmania, Technical Report R91-4, December, 1991
  7. Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K., "Phoneme Recognition Using Time-Delay Neural Networks," in IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol 37 No 3, March 1989, pp 328-339
  8. Waibel, A., Sawai, H. and Shikano, K., "Modularity and Scaling in Large Phonemic Neural Networks," in IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol 37 No 12, December 1989, pp 1888-1897

Peter Vamplew
Anthony Adams
University of Tasmania, Computer Science
GPO Box 252C
Hobart, Tasmania 7001


Displaced Temperature Sensing System For Use in Prosthetic Limbs Research, Virtual Reality and Teleoperated Robots

Mike Zerkus
Bill Becker
Jon Ward
Lars Halvorsen


A new concept has been developed for transmitting temperature information to humans. The Displaced Temperature Sensing System (DTSS) is both an information display and a device that can replace lost sensory capabilities in disabled persons.

The basic function of the DTSS is to transmit a temperature from a remote location to some part of the human body where the temperature can be felt. An electronic temperature sensor is used to gather remote temperature data. The output from the sensor is fed to a computer control network that, in turn, drives a small thermoelectric heatpump. The thermoelectric heatpump is placed in contact with the human body. A feedback sensor, integral with the thermoelectric heatpump, transmits the temperature feedback to the control computer, thus allowing proper regulation of the temperature on the sensing area of the body.
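The signal path can be sketched as a simple feedback loop; the first-order thermal response and the gain value here are deliberate simplifications for illustration, not the DTSS's actual control law:

```python
def dtss_step(remote_temp, thermode_temp, gain=0.5):
    """One control cycle: the remote sensor reading is the setpoint,
    the feedback sensor reports the thermode's current temperature,
    and the heatpump is driven in proportion to the error. The
    thermode is assumed to respond directly to the drive signal."""
    error = remote_temp - thermode_temp
    return thermode_temp + gain * error

# The thermode, starting near skin temperature, is pulled toward a
# 40-degree remote reading over successive cycles.
temp = 33.0
for _ in range(20):
    temp = dtss_step(40.0, temp)
```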

Presenting Temperature Information

Information is of two basic types, inherent and abstract. Inherent information is information that is common to all humans. For example, hot, cold, loud, rough, smooth . . . are common to all humans, regardless of how they may be expressed. Abstract information is text, graphics, and other things that require interpretation and prior knowledge.

The notion of temperature is implicit in the language we use to describe reality . . . a summer day, a winter storm, a cup of coffee, or a drink at the water fountain. Thermal sensation gives other cues to the nature of things in the environment around us; for example, the average person can easily tell the difference between metal and wood because the difference in their thermal conductivities is felt as apparent cold. Temperature is inherent information and is therefore best displayed as hot and cold, i.e. felt as hot and cold. Reality is not complete without temperature; it fills in our picture of reality with the details that make everything seem correct.

What is a Thermode?

A thermode is an assembly consisting of a thermoelectric heat pump, a temperature sensor and a heat sink. The heat pump moves heat into or out of the heat sink to produce a temperature at the surface of the thermode. Using feedback from the sensor the DTSS regulates the temperature of the thermode.

A thermode can also serve as an input; sensing temperature and surface thermal conductivity.

The basic physical configuration of a thermode is shown in figure 1. A thin film temperature sensor is mounted on top of the thermoelectric heatpump. The temperature sensor provides feedback to the control network. The heat sink is in contact with ambient temperature air.


Our first DTSS product is the model X/10. The X/10 is designed as a research unit for those who want to add temperature to their work. The X/10 has 8 thermode channels. Each channel is software-programmable as an input or an output. The inputs can be "mapped" to outputs, such that the output temperature tracks the input temperature; this is called analog track mode. Any input can be mapped to any output or group of outputs. The DTSS can be operated from the front panel or remotely via RS-232. A front panel is provided so the unit can be used in a stand-alone configuration. The front panel also makes troubleshooting easier in situations where the X/10 is part of a larger system.
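Analog track mode amounts to a routing table from input channels to output channels. A sketch of the idea (the channel numbers, readings and data format are hypothetical, not the X/10's actual firmware interface):

```python
# Input channel -> list of output channels that should track it.
channel_map = {0: [4], 1: [5, 6]}

def track(inputs, channel_map, n_channels=8):
    """Copy each mapped input temperature onto its output channels;
    unmapped channels are left unset (None)."""
    outputs = [None] * n_channels
    for src, dests in channel_map.items():
        for d in dests:
            outputs[d] = inputs[src]
    return outputs

# Channel 0 reads room air, channel 1 a warm object; channel 1 is
# mapped to a group of two output thermodes.
readings = [21.5, 36.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```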

Zero to 5 volt differential analog inputs are provided so the X/10 can track a signal from some external device.

Proportional Integral Derivative (PID) control law is used for closed loop control of thermode temperature. The gains of each part of the control law (P, I, and D) are software adjustable via the front panel or the serial communications port.

Apple Macintosh® or IBM P.C.® compatible demonstration software (with source code) is included to provide examples for interfacing the X/10 to other systems.


The comfort zone for humans is from 13°C to 46°C, with pain below and above these limits. The average human can feel a temperature change of as little as 0.1°C over their entire body; however, at the fingertip a sensitivity of 1°C is typical. Exact numbers vary from person to person.

A thermoelectric heat pump used to stimulate thermal sensation at the fingertips has several inherent safety problems.

The finger contains heat which must be dissipated in order for a person to feel cool. Because heat is convected to the air more slowly than it is conducted from the finger, the heat sink of the thermoelectric heatpump must be large enough, with sufficient surface area, that it is not overwhelmed. If the heatsink is overwhelmed (usually because the heatpump was operated in cooling mode for an extended period of time), the heat pump cannot maintain the temperature difference; the heat stored in the heatsink will come back through the heat pump and burn the finger. The heatsink must also not protrude in such a manner that the virtual explorer is cut or hurt when they close their hand.

A typical thermoelectric heatpump can produce up to a 67°C temperature difference with ambient temperature. The heat pumped (which produces the temperature difference) is roughly proportional to the current through the heatpump. Heat, in effect, has inertia, and therefore temperature cannot be changed instantly. In order to change temperature as quickly as possible, maximum current is sometimes needed, so tight closed-loop control of the current through the heatpump is required.

Another potential safety problem occurs if the heat pump is operated in cooling mode for an extended period of time and the power to the unit fails. In such a situation the unpowered heat pump becomes a sandwich of ceramic and metal (with good heat conductivity), and the heat in the heatsink flows back through the heat pump and burns the finger.

The DTSS X/10 has the following safety features.

The DTSS X/10 temperature reproduction range is 1°C to 45°C, with an ambient temperature operating range from 1°C to 35°C. By operating within the comfort zone for humans, the temperature differences are kept small, which allows for better use of energy.

The heatsinks are designed for maximum surface area.

Power to the thermode has to be actively engaged by the computer after power-up.

A non-computer safety circuit zeros the input to the power amp (which causes the heatpump to move to a neutral temperature setting) if the temperature range is violated.

Redundant safety software zeros the input to the thermode if the operating range is exceeded.

Control System

Figure 2 shows a block diagram of the control system used for the DTSS. The goal of the control system is to have the temperature at the fingertip follow the temperature command.

An early DTSS prototype used a proportional control law. It was found that in order to achieve an effective response time the gain had to be very high, but this caused temperature ringing at the fingertip (a very weird physical sensation). The DTSS X/10 uses a Proportional Integral Derivative (PID) control law. The control law is implemented in software. The constants for the control law are adjustable from the serial port.
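A textbook discrete PID law of the kind described, applied to a crude first-order thermode model; the gains and the thermal model are illustrative, not the X/10's actual constants:

```python
class PID:
    """Discrete PID control law: output is the sum of terms
    proportional to the error, its running integral, and its
    rate of change."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt=1.0):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a crude first-order thermode model toward a 30-degree setpoint.
pid = PID(kp=0.4, ki=0.05, kd=0.1)
temp = 20.0
for _ in range(200):
    temp += 0.2 * pid.update(30.0, temp)
```

Raising only the proportional gain speeds the response at the cost of ringing; the integral and derivative terms let the loop settle quickly without it.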


In a telerobotics application temperature sensors could be placed in the fingers of remote manipulators. Temperature signals would be sent to the DTSS and drive thermodes on the fingers of the operator. The DTSS X/10 can accept analog input as well as serial digital input.

In a prosthetics research application, the DTSS X/10 can be used by researchers to explore the application of displaced sensing to prosthetics. Temperature sensors could be placed in the fingers of the prosthetic limb, and the displaced sensing system would transmit the temperature felt by the prosthetic fingers to some point on the body where it could be felt.

A Virtual Reality application would not require a temperature sensor input; the DTSS would take serial digital commands from the computer controlling the simulation. For example, thermodes would be placed on the fingers of the virtual explorer and a temperature value assigned to objects or locations in the virtual world; as the hand moved near these objects, commands would be sent via RS-232 to the DTSS to change the temperature of the thermodes.
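The virtual-world side of such an application might assign temperatures to objects and pick a thermode setpoint from hand proximity, along these lines; the objects, temperatures and reach distance are invented for illustration, and the actual RS-232 command format is not shown:

```python
import math

# Each virtual object: (position, assigned temperature). Values are
# hypothetical.
objects = {
    "campfire": ((2.0, 0.0), 42.0),
    "ice_block": ((5.0, 1.0), 5.0),
}
AMBIENT = 25.0

def thermode_setpoint(hand_pos, objects, reach=1.5):
    """Return the temperature of the nearest in-reach object, or
    ambient temperature when no object is close to the hand."""
    best, best_dist = AMBIENT, reach
    for pos, temp in objects.values():
        d = math.dist(hand_pos, pos)
        if d < best_dist:
            best, best_dist = temp, d
    return best
```

Each simulation frame, the chosen setpoint would be sent over the serial link to the DTSS.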


Another building block for the virtual world has been developed, thus another aspect of reality can be simulated.


We would like to acknowledge the help and contributions of Randy Martin, Liz Eichelman, Nan Crowhurst, and Gloria Zerkus.

Apple Macintosh® is a registered trademark of Apple Computer, Inc.
IBM P.C.® is a registered trademark of IBM, Inc.

This is not strictly true: thermoelectric heatpumps are not linear, but do have regions where they are near linear. There is also a performance difference between heating and cooling.


  1. Application Notes for Thermoelectric Devices, Melcor Corp., Trenton NJ., 1985
  2. J. H. Seely, Elements of Thermal Technology, Marcel Dekker, Inc., 1981.
  3. M. Kutz, Temperature Control, John Wiley & Sons 1968.
  4. J. J. DiStefano III, A. R. Stubberud and I. J. Williams, Feedback and Control Systems, McGraw-Hill, 1967.
  5. T. R. McKnight, "The effects of sinusoidal ripple current upon the temperature difference across a thermoelectric cooling device," a report prepared by the U. S. Naval Ordinance Laboratory, White Oak, Maryland, March 1965.
  6. G. Kirby, "Fast Cool Down Thermoelectric Cooler," Nuclear Systems, Inc.; prepared for the Night Vision Laboratory, Ft. Belvoir, VA, October 1973.
Mike Zerkus
Bill Becker
Jon Ward
Lars Halvorsen
CM Research
2815 Forest Hill League City, Texas 77573
(800)262-1CMR Phone
713-334-4860 FAX


Reprinted with author(s) permission. Author(s) retain copyright.