1993 VR Conference Proceedings


Intelligent Assistive Technologies

Philip Barry
George Mason University, Fairfax, VA
John Dockery, Defense Information Systems Agency (JIEO),
Reston, VA, and C3I Center,
George Mason University, Fairfax, VA
David Littman, Datamat Systems Research, Clifton, VA
Melanie Barry, General Research Corporation, Vienna, VA


Intelligent Assistive Technology (IAT) refers to the integration of existing assistive technology with Artificial Intelligence (AI) techniques and advanced environment interfaces. AI can be used to integrate a great deal of sensor information to amplify primary intent as well as to conduct background tasks. Through the application of AI we believe that we can level the playing field for persons with disabilities and maximize existing and future capabilities of assistive devices. By beginning a dialogue now between the developers of AI and Assistive Technology, IAT can become a reality.

Introduction: Motivation and Goals

A Preamble

In this paper we shall be dealing with a concept for the design of future generations of assistive devices for the disabled. We will propose the infusion of certain software technology into the mainstream of hardware technology. The capstone of this infusion may be virtual reality, but as part of a system; and hence in a form other than it is now. Nonetheless, virtual reality itself plays a role that is potentially very special. At a workshop hosted by Dockery and Littman in February of 1993 near Washington, DC, the virtual reality camp summarized the subject as follows:

"Virtual reality is inherently barrier free."

Outline of the Case

We begin by casting a searchlight on the current market. In the United States today, there are more than 20,000 assistive devices on the market (source: Able Database). Of necessity, these assistive devices are constructed with a design methodology that uses relatively straightforward approaches to control an action. That is, the user of the device is responsible for 1) deciding when to use it, 2) deciding what to do with it, and 3) deciding how to control it while it is in action.

Consider, for example, a device to assist the mobility impaired: the powered wheelchair. The chair is controlled by a joystick that the user must move forward, backward, or sideways to initiate some action. The chair is perfectly obedient and perfectly ignorant; if the user elects to move the chair in front of a bus, the chair will happily comply. The chair cannot provide even the simplest interpretation of the user's actions or their consequences.

In the past decade the high-tech community has focused on the development of several areas of technology that offer the possibility of adding artificial intelligence (AI) to assistive devices. The military has developed sophisticated ways of assisting problem solvers in data-intensive or real-time situations (such as air battle) to make decisions, deploy resources, and monitor the effectiveness of the deployed resources. Examples of this technology are well known: Aegis, the Autonomous Land Vehicle, and the Pilot's Associate readily come to mind. These applications of Artificial Intelligence have the goal of helping users 1) decide when to use the technology, 2) decide how to use it, and 3) decide how to control it when it is in action. These are exactly the problems that are typically left to the users of assistive technologies.

What is happening? The user is off-loading onto an interface some of the cognitive tasks necessary to control his interaction with a given situation. Think of them as housekeeping tasks associated with running the equipment. With the introduction of such a "smart" interface, he is freed to concentrate on higher-order task requirements, such as finding a target in the case of the pilot. In the wheelchair example, navigation might, for instance, be taken over by the interface.

If pilots, whom we might characterize as hyperable, need assistive technology, what then of persons with disabilities? The comparison is dramatic. A far greater percentage of their remaining motor skills can be consumed by the interface requirements of the assistive device at issue. In fact, the requirements can overwhelm the intended user. What we need, then, is to concentrate on building the interface in such a manner that the user's intent, however muffled it may be, is interpreted, amplified, and executed. This latter idea of intent amplification is what powers our concept of adding AI to virtual reality to produce what we call intelligent virtual reality (IVR), about which we shall say more later. IAT is a generalization of IVR.

In our view, the time is right to begin to formulate a strategy that will allow developers of assistive technology to reap the benefits of the huge effort in intelligent technology research. Our inspiration for this idea comes from the Japanese Fifth Generation Project, which has had a significant influence on the plans of governmental agencies to ensure civilian uses of high tech [FEI83]. With proper planning, the marriage of AI, advanced environment interfaces, and assistive technology could produce the next generation of assistive technology: Intelligent Assistive Technology (IAT).

[In Figure 1, which follows, we show the possible components needed to produce an intelligent assistive technology. Shown is the basic (and current) device, namely a powered wheelchair. To this are added Artificial Intelligence techniques and advanced interface possibilities. The result is an Intelligent Assistive Device (Technology) solution.]

Figure 1: Intelligent Assistive Technology Example

Two main reasons undergird our confidence in the proposed union of AI and assistive technology. First, AI devices are out of the laboratory and are being fielded now. Second, government agencies are coming under increasing pressure to show "dual use", or transfer their technologies to the private and public sectors. The main purpose of this paper, therefore, is to open a dialogue between the communities that develop intelligent technologies and the users and developers of assistive devices. It is the latter whom we hope might wish to take advantage of intelligent technologies to make their devices "smarter".

Might we not then introduce the discipline of Intelligent Assistive Engineering, with the avowed goal of "leveling the playing field"? All users of high technology operate under some disadvantage, such as an inability to type or a vision impairment. Intelligent Assistive Devices should be designed with the purpose of reducing differences in the abilities of individuals to access opportunities, resources, and benefits in our society.

In this paper, we begin to define and explore the concept and applications of Intelligent Assistive Technologies. First, we present the concept of Intelligent Assistive Technology. Second, we suggest a Framework for Development and illustrate this with a hypothetical IAT device, the Mobile Conveyance. We then examine Advanced Environment Interfaces and conclude with some possibilities for the future.

The Intelligent Assistive Technology Concept

As we have already said, IAT refers to the integration of existing assistive technology with AI techniques and advanced interfaces. Some examples of current assistive technologies are shown in Table 1a, from [MAN93]. Table 1b lists some AI technology counterparts for comparison. The gap between Tables 1a and 1b is considerable, to say the least; it thus becomes a measure of the challenge. (The tables follow below.)

We see three levels of IAT, which correspond roughly to the time it will take to implement them. Level one consists of integrating existing low technology with intelligent databases. An example of this would be a wheelchair with preprogrammed avoidance and minimal-path access in public places. For level one systems, there is minimal research and development. Most of the work is in system integration.
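As an illustration of the level-one idea, minimal-path access can be sketched with an ordinary breadth-first search over a stored floor-plan grid. The grid encoding, function name, and map format below are our own hypothetical choices, not part of any fielded device:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search for a minimal path on a stored floor plan.

    grid: list of strings; '#' marks an obstacle, '.' is passable.
    Returns the list of (row, col) cells from start to goal, or None
    if the goal cannot be reached.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}           # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Breadth-first search suffices here because every move costs the same; a map with weighted terrain would call for Dijkstra's algorithm instead.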

Level two consists of existing higher technology coupled with reasoning and learning systems. For example, this could be a neural net or expert system integrated with a computer voice-input device that anticipates commands and provides recommendations. Level two would require some research and development to merge disparate technologies in new ways. However, the technologies being used are mature in their own right; working systems currently exist.

Level three consists of state-of-the-art research technology integrated with next-generation AI. An example of this would be an environment populated by semi-autonomous agents with robotic connections to the physical world to accomplish everyday tasks for the severely disabled. Level three systems will be highly complex, composed of components that may exist only in concept or in highly experimental form. The key reason for designing level three systems is to provide a road map and a pointer for future research and development of these technologies.

[Table 1a lists a category of current assistive technology paired to the left with multiple examples. It is immediately followed by pairings of AI techniques and possible environmental interfaces in Table 1b.]

Table 1a: Current Assistive Technology Categories and Examples:

- Mobility/Balance
- Visual Aids: modification of everyday devices through enlargement or voice output
- Telephone/Communication Accessories
- Safety

Table 1b: AI Technology Examples

Framework for Development

We view the construction of a traditional system as consisting of two layers: the functional layer and the environment interface layer. However, for an IAT system we propose three major layers: the functional layer, the advanced environment interface, and the IAT layer. The IAT layer will provide a translation between the functional layer and the user layer. This is the layer onto which repetitive cognitive tasks will be off-loaded, as we previously alluded to. Thus, IAT will provide tracking and monitoring, reasoning and learning, and metaphorical representation to the user. The value added by IAT is that the technology will allow for non-literal or incomplete commands as well as metaphorical representation of complex commands. The next section describes the layers in more detail.

The Functional Layer

This layer provides all of the "behind the scenes" functionality. For a computer program, this would be the computation and algorithms. For a wheelchair, this would consist of the translation of the user input to electromechanical energy. The operative point to keep in mind is that the functional layer is user independent.

User interfaces, the interrelated subject of the other two layers, can be designed to capitalize on the user's abilities.

The (Advanced Environment) Interface Layer

The second general layer is that of the user interface. Whether advanced or not, the user interface provides the environmental mechanism between the user and the system. Unfortunately, in the past, user interfaces have been more often exclusive than inclusive. For example, the use of a computer via keyboard is prohibitive for those individuals without full motor control of their hands. To alleviate this, voice recognition or eyegaze techniques have been employed to give these individuals the use of computers. While clever and interesting, these techniques are costly because they modify existing designs after the fact. Further, they still follow the stimulus-response model discussed earlier; we have merely modified the stimulus mechanism. In this layer the operative issue is: who is in charge? Does the interface accommodate the user, or is the reverse true?

The Intelligent Assistive Technology Layer

The operative reason for introducing this layer between the previous two is this: we can off-load tasks onto the interface and know that the functional layer will be properly instructed. One wishes to say "I'm thirsty" rather than "Bring me a glass of water from the tap." We propose that IAT will provide tracking and monitoring, reasoning and learning, and metaphorical representation to and from the user. Key to this is that the IAT layer will require more information, and consequently more input, than traditional stimulus-response systems.

IAT Tracking and Monitoring

The first subcomponent of IAT is the tracking system. The tracking system will be comprised of sensors and a database that stores the current and recent situational information. For example, suppose that we want to construct an intelligent mobile conveyance. The tracking and monitoring system will assess the current location of the conveyance in relation to other objects, both fixed and moving. This information will be stored in a database that is updated at a rate deemed appropriate for the situation. This technology is not fringe; roboticists have already developed robots that can navigate through hospitals and deliver food to patients.
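A sketch of what such a situational database might look like, assuming nothing more than timestamped sensor readings and a bounded recent history (all class and method names here are illustrative):

```python
import time

class TrackingDatabase:
    """Hypothetical store of current and recent situational readings."""

    def __init__(self, history_size=100):
        self.history_size = history_size
        self.current = {}   # latest (timestamp, value) per sensor
        self.history = []   # recent (timestamp, sensor, value) tuples

    def update(self, sensor, value, timestamp=None):
        """Record a new reading, keeping only the most recent history."""
        timestamp = time.time() if timestamp is None else timestamp
        self.current[sensor] = (timestamp, value)
        self.history.append((timestamp, sensor, value))
        # Discard everything but the newest history_size entries.
        del self.history[:-self.history_size]
```

The update rate the paper mentions would simply be the frequency at which the sensors call `update`; the reasoner would read `current` for the latest picture and `history` for recent trends.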

IAT Reasoning and Learning

The next subcomponent of the IAT layer is reasoning and learning. This component will compare the goals inferred from the user interface with the current situation and develop a course of action to accomplish them. The exact form of the reasoning can vary; whether an expert system, case-based reasoner, or neural network is applied is irrelevant. The key is that the system makes decisions and judgments based upon the data gained from the tracking and monitoring system as well as the user's goals.

To continue with the ongoing wheelchair example, suppose the user would like to go downstairs and greet guests at the front door. The user would submit this as a goal, which would be evaluated by the reasoner. The reasoner would plan a route from the bedroom to the elevator, operate the elevator, and proceed through the living room to the front door. The user did not have to tell the conveyance that the front door was down the stairs, that there are couches and chairs in the living room, or that there was a wall between the user and the hall to the elevator.

Consider a variant of the example. Suppose that the user decides that he does not want to go through the living room on a Tuesday because his wife has her computer science meetings that night. Once again, the doorbell rings on a Tuesday night and the user instructs the conveyance to go to the door. However, he has informed the conveyance that on Tuesday evenings the route should go through the kitchen. This results in a rule modification in the expert system of the route planner. Let's say the first rule looked like this:

IF the goal is the front door
AND the current location is the elevator
THEN proceed through the living room.

This rule would be modified to look like:

IF the goal is the front door
AND the current location is the elevator
AND the day is not Tuesday
THEN proceed through the living room
ELSE proceed through the kitchen.
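Encoded in software, the rule and its Tuesday modification might look like the following sketch, which assumes rules are stored as ordered condition/route pairs so that a new, more specific rule can be placed ahead of the old one (all identifiers are hypothetical):

```python
# Each rule is a (condition, route) pair; the first matching rule fires.
# The Tuesday rule is listed first, so it takes precedence when it applies.
RULES = [
    (lambda s: s["goal"] == "front door" and s["location"] == "elevator"
               and s["day"] == "Tuesday",
     "through the kitchen"),
    (lambda s: s["goal"] == "front door" and s["location"] == "elevator",
     "through the living room"),
]

def plan_route(situation):
    """Return the route chosen by the first rule whose condition holds."""
    for condition, route in RULES:
        if condition(situation):
            return route
    return None  # no rule applies to this situation
```

The user's instruction "on Tuesday evenings go through the kitchen" amounts to prepending the first rule to the list; the original behavior survives unchanged as the fallback.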

IAT Metaphorical Representation

The final component of the IAT layer is metaphorical representation. This is a mapping, or a translation, of the mechanics of the action or information to a format that is more readily comprehended by the user. It works in two ways. The user can represent his goals to the device with the utmost economy of communication. Intent amplification is the process by which these spare communications are made specific and put into context. The simplest form of metaphor is the null metaphor. In the mobile conveyance example, if the user is required to move the wheel to initiate motion, direct physical manipulation results in forward or backward progress.

What IAT can do is provide further levels of metaphorical abstraction to encompass and encapsulate other processes. Suppose that a representation of answering the front door is an icon of a door on a computer screen. By "opening" the iconic door, the user can activate a remote monitoring system to view who is at the door. By placing a cartoon face over the door, the user can activate the route-planning device in the IAT layer and move the conveyance to the door.
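One way to picture such a metaphor layer is as a simple mapping from iconic gestures to device actions. The icon and gesture names below come from the door example above, while the mapping itself is purely an illustrative sketch:

```python
# Hypothetical metaphor table: (icon, gesture) pairs map to device actions.
METAPHORS = {
    ("door icon", "open"): "activate remote door monitor",
    ("door icon", "place cartoon face"): "plan route to front door",
}

def interpret_gesture(icon, gesture):
    """Translate a metaphorical gesture into a concrete device action."""
    return METAPHORS.get((icon, gesture), "no action")
```

In a real system each action string would trigger the corresponding functional-layer routine; the point of the table is that the user manipulates the metaphor, never the machinery behind it.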

Three Layers in One?

What may be the ultimate representation technique of this genre, which could be employed at the IAT layer, is Intelligent Virtual Reality (IVR). It was first described by Dockery and Littman in [DOC192], [DOC292], and [DOC392]. IVR as envisioned by the authors weaves together elements of all three layers.

Basically, IVR consists of mating AI and VR. On the input side we have sensors that sample both the environment and the user. In the middle is the VR, enhanced with AI so that the VR environment becomes reactive and capable of amplifying intent so as to present the proper VR decision environment. On the output side, IVR consists of a system of distributed agents that can communicate and cooperate, all in service to the user.

Some agents are concerned with user presentation, while other agents are concerned with lower-level reasoning schemes. For a given instance of IVR, it will be crucial to be able to modularly improve, as well as create, new features. Further, the elements in the VR will certainly need to exhibit intelligent behavior, especially for metaphorical interpretation. To achieve this level of intricacy requires a distributed system where reasoning schemes, and their associated knowledge bases, are local to agents. Further, agents should not be locked into a given reasoning scheme, but should have access to the most appropriate one. Since both the reasoning schemes and the behavior of the knowledge bases vary over time, any but the simplest system must be able to evolve and customize itself to the user.

With IVR, users can enter into a world that will allow intelligent agents to simulate real-world behavior. This can provide the most realistic and nondeterministic training situations. IVR also can employ significant metaphorical representation to allow the user to perform even the most complicated actions, presented in a readily comprehensible manner. Further, IVR allows the user to experience activities they could not otherwise experience. If IVR is the ultimate, we still need a path to it. That path runs through the general subject of advancements in interfaces.

Advanced Environment Interfaces

The computer lies behind all of the conceptualizing in this paper. To proceed we must look more at the computer interface than at its processing power. In a certain sense, in using a computer every human is an individual with a disability. The major challenge is to facilitate interaction by customizing the interface to capitalize on the users' abilities, rather than requiring abilities that they do not have.

Suppose the user can control nothing but his eyes. The Eyegaze Computer System(TM) is a vision-controlled system. If the user interface can be driven by the eyes, then there is no reason why this basic functionality cannot be expanded to full eye control. A similar argument can be made for a head-mounted system.

The next step in user interfaces is to provide an intelligent interface for the user. Current interfaces are comprised of object-action composites. For example, point at an icon with a mouse and double-click on it, and the window will expand. If the user cannot manipulate the mouse and point, the window will never open. However, by providing an interpretive system, which perhaps employs fuzzy logic, a near miss could be interpreted as a sign that the user wants to expand the window.
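A minimal sketch of such near-miss interpretation, using a linear fuzzy membership on the distance of the click from the icon's center (the threshold and fall-off rate are arbitrary illustrative choices, not any particular system's):

```python
def click_intent(click, icon_center, icon_radius):
    """Fuzzy degree (0..1) that a click was aimed at an icon.

    Inside the icon the degree is 1; it falls off linearly and reaches
    0 at three radii out, so a near miss still registers partial intent.
    """
    dx = click[0] - icon_center[0]
    dy = click[1] - icon_center[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance <= icon_radius:
        return 1.0
    degree = 1.0 - (distance - icon_radius) / (2.0 * icon_radius)
    return max(0.0, degree)

def should_expand(click, icon_center, icon_radius, threshold=0.5):
    """Open the window whenever the fuzzy intent crosses the threshold."""
    return click_intent(click, icon_center, icon_radius) >= threshold
```

A rigid interface accepts only the `degree == 1.0` case; the fuzzy version accepts any click whose membership exceeds the threshold, which is exactly the accommodation a user with imperfect pointing control needs.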

An intelligent interface control would also be interactive and customize itself to the capabilities of the user. For example, most current computers are hardwired for one or two user interfaces, such as keyboard and mouse or keyboard and voice input. Suppose, however, the computer had a preprocessor expert system that evaluated how the user was doing with the given interfaces and proposed the most effective interface to the user. The computer could then configure itself to the user instead of the user configuring himself to the computer.
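Such a preprocessor might, in the simplest case, just track error rates per interface and recommend the best performer. The sketch below assumes exactly that and nothing more; a real expert system would weigh fatigue, task type, and context as well (all names are hypothetical):

```python
class InterfaceSelector:
    """Hypothetical preprocessor that scores how well each interface works."""

    def __init__(self, interfaces):
        self.stats = {name: {"attempts": 0, "errors": 0}
                      for name in interfaces}

    def record(self, interface, error):
        """Log one interaction attempt and whether it failed."""
        self.stats[interface]["attempts"] += 1
        if error:
            self.stats[interface]["errors"] += 1

    def propose(self):
        """Recommend the interface with the lowest observed error rate."""
        def error_rate(name):
            s = self.stats[name]
            # Untried interfaces score worst so proven ones are preferred.
            return s["errors"] / s["attempts"] if s["attempts"] else 1.0
        return min(self.stats, key=error_rate)
```

The proposal would be offered to the user, not imposed; the user remains in charge, per the "who is in charge?" question raised above.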

Another area of improvement for user interfaces would be syntax.

Current computer interfaces use a rigid syntax, whether it be via command line, voice input, or constrained GUI. For example, speech recognition systems map speaker commands to existing programs. Lee [LEE89] describes a system called Sphinx which employs a predictive model of speech recognition, the Hidden Markov Model (HMM). HMM-based recognition achieves excellent results by modeling speech as a sequence of vector-quantized symbols. However, this does not address the semantic meaning of the words. If we train the system to delete a file and then tell it to get rid of the file, the system will not understand. Further, if the file contains sensitive financial data, current systems will make no differentiation between that and a drawing done on a scratch pad.
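A first step toward semantic interpretation, well short of the deep domain knowledge discussed below, is a table of phrasings mapped to canonical commands. The phrase lists here are invented for illustration:

```python
# Hypothetical intent table: many phrasings map to one canonical command.
INTENTS = {
    "delete": {"delete", "remove", "get rid of", "erase", "trash"},
    "open": {"open", "bring up", "show"},
}

def interpret_command(utterance):
    """Map a free-form utterance to a canonical command, or None."""
    text = utterance.lower()
    for command, phrasings in INTENTS.items():
        if any(phrase in text for phrase in phrasings):
            return command
    return None  # utterance matched no known intent
```

With such a table, "get rid of the file" reaches the same program action as "delete the file"; a rigid-syntax recognizer would reject the former outright.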

We propose that the next generation of user interfaces should attempt to translate user intent. As we have already indicated, user interfaces should become intent amplification devices. The next-generation interface should be able to assess the importance of a given file and delete it if it is unimportant, or save it automatically if it is important. Obviously, deep domain knowledge of both the user and the application is required to accomplish this. Consequently, the next-generation user interface must have access to a large knowledge base of domain knowledge as well as a large database containing state-space information.

The next-generation user interface should also be embedded in the environment and able to conduct processing on several levels. For example, as regards our mobile conveyance, let it be connected via a microwave link to the phone system in the house. While the user is moving to the door, an incoming phone call is received. By consulting a list of disapproved numbers and finding no match, the conveyance judges the phone call acceptable and routes it through to the user.

While this is happening, the user's wife steps in front of the conveyance on its way to the door. The conveyance stops forward progress and activates the phone's messaging system while the user talks to his wife. After the wife leaves, the conveyance asks the user whether the trip should continue, and also informs him that he received a phone call and from whom.

In Table 2, which follows, we summarize the state of advanced environmental interfaces by listing across the page an aspect of a user interface, its current features, and its projected features. The display is adapted from the April 1993 Communications of the ACM [GRU93].

Table 2: Aspects of Advanced User Interfaces (after Grudin, 1993).


Conclusions

We have defined a concept of integrating Artificial Intelligence techniques with existing assistive technologies to result in IAT. We have shown that IAT provides "value added" to the user through enhancing assistive technology capabilities and providing additional new capabilities. Our example showed how IAT can go beyond existing assistive technologies through the incorporation of intelligent user interfaces and a monitoring, reasoning, and learning capability.

However, the question remains: why should a manufacturer of assistive devices or software want to participate in this venture? The answer is clear on both moral and legal grounds. With the introduction of the Americans with Disabilities Act (ADA), it became necessary to provide access to any individual with a disability. This can require tremendous funding if existing equipment must be retrofitted. However, if the hooks in existing systems are clear and can be attached to other user interfaces, retrofitting is far less expensive. The dual-use technology emphasis will facilitate the introduction of defense-developed technology into the commercial world for use in assistive technology.

The government can participate by sponsoring research and products that promote expandable systems and systems that employ IAT concepts. Due to the volatility of the Federal budget, direct funding may be difficult to identify. However, with the prospect of rising taxes, we believe that tax breaks to industries and developers that design and implement such products would be sufficient inducement to yield tangible results. The layered approach to IAT will promote the design of inclusive systems that benefit all users, since the only difference in system development will be the user interface; the IAT layer will be common.

We believe that voluntary cooperation will yield better results in a creative industry than coerced partnerships. Further, since the spin-offs from the basic research will contribute to other areas of society, the scientific/technical community will benefit from spin-off products and research. In essence, the full development of IAT will generate good results for the scientific, academic, product development and user community.



Reprinted with author(s) permission. Author(s) retain copyright.