1994 VR Conference Proceedings


Autonomous Wheelchair Control by an Approximate Virtual Reality System

By: G. Powell, T. McJunkin, R. Gunderson
Center for Self-Organizing and Intelligent Systems, Utah State University

Abstract

Researchers at the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) are working on an innovative approach to advanced control of wheeled vehicles. CSOIS has produced a unique approximate virtual control system architecture that minimizes the amount of virtual environment processing while still providing a sufficient representation of the relevant information to the human operator. This paper describes the system and its potential benefits to persons with certain disabilities.

Hazard detection is based on sensor information obtained by imaging a fan of five laser planes with a pair of CCD cameras. Only a few rows of pixels, called scan lines, are used by the virtual algorithm, which greatly reduces the processing requirements. The key to the system is a virtual map that represents the real environment surrounding the wheelchair to a tele-operator via a graphical interface, indicating which areas around the wheelchair are impassable. The map is structured so that it can be presented to an artificial neural network that can be trained by, and eventually replace, the human tele-operator. The map provides a simple memory that allows the system to make appropriate decisions, yet reduces the necessary processing enough that the system could economically be integrated into a wheeled vehicle.


The majority of people who use a wheelchair do not need a completely automated vehicle that moves them from point to point autonomously, but would appreciate a vehicle that makes their travel more efficient and pleasant: a system that would allow them to "window-shop" instead of concentrating on maneuvering around every obstacle, and that would make an emergency stop before going over a curb or down a stairway. Such a vehicle would require on-board obstacle detection and avoidance, and would make course corrections based on intelligent decision-making algorithms.

CSOIS is working with NASA's Jet Propulsion Laboratory (JPL) on a remote planetary exploration vehicle, referred to as the "Rocky" rover [Note 1]. The center is also working for the Idaho National Engineering Laboratory (INEL) on hazardous waste control using intelligent vehicles. These vehicles have many of the same control problems as a wheelchair. Therefore, research at CSOIS, in cooperation with the Center for Persons with Disabilities (CPD), also located at Utah State University, is converting state-of-the-art vehicle control algorithms to wheelchairs, with the goal of providing a vehicle that improves control and navigation for a person with limited mobility.

Current virtual environment techniques require large amounts of complex processing, with emphasis on conveying a high level of personal presence within the environment. This often leads to much irrelevant processing of data, complicating the original goal of controlling the system. CSOIS researchers have devised a method of vehicle control using an approximate virtual "map" display, which allows tele-operation (control of the vehicle from a remote location) and, eventually, on-board autonomous control of a wheeled vehicle.

A virtual map couples the real environment surrounding the wheelchair to a tele-operator by updating regions as to their passability or impassability. The tele-operator can eventually be replaced with an artificial neural network or a decision rule base (e.g., a fuzzy logic rule base), creating a vehicle with on-board adaptive decision-making capability and "on the fly" processing.

Obstacle Detection Method

The wheelchair will use an optical sensor suite to detect obstacles which, at first, will be similar to that of the rover vehicles on which CSOIS is currently working. The hardware for the system consists of a CCD camera on each side of the vehicle and five striping lasers that project a fan of light in front of the wheelchair. Each laser forms a plane of light whose stripe follows the terrain. If the terrain is flat, the lasers appear as straight lines; if an obstacle appears in the path of a laser, the stripe bends over it, creating a deviation in the path of the laser in the camera image. The amount of deviation is a function of the abruptness and distance of the obstacle.

The process is streamlined by using a limited number of pixel rows in the detection of obstacles. This reduces the image processing to scanning only those rows for the locations of the lasers, and limits the amount of information to a tractable size. The rows of pixels, called scan lines, are selected at critical distances in front of the wheelchair, such as one at one vehicle length ahead and another at two vehicle lengths ahead. This enables the wheelchair to detect detrimental hazards in its path.

The position of an obstacle can be determined from the geometry of the system and the location of a laser point on a scan line. The following equations give the coordinates of the laser spot with respect to the vehicle for a perfect pinhole camera:

[Note: Equations (1), (2), and (3) are not included here due to limitations on displaying graphics in universally accessible formats.]

The geometric constants are defined as follows: hc is the height of the camera; θc is the declination of the camera from horizontal; θln is the angle of the nth laser from the vehicle's centerline; df is the distance forward from the position of the cameras to the point where the lasers intersect; do is the distance the cameras are offset from the center of the vehicle (where a ± appears, the top sign is used for the right camera and the bottom for the left); and Nx and Ny are the number of pixels in the horizontal and vertical dimensions of the CCD array. Using Equations (1), (2), and (3), and knowing the pixel location of a laser (xp, yp) and the laser ln responsible for it, the coordinates (xr, yr, zr) of the illuminated terrain can be found. The vertical pixel location, yp, is fixed by the scan line choice.
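Since the original equations are omitted, the geometry they describe can only be sketched from the surviving definitions. The following is one plausible reconstruction, not the authors' exact formulas: it intersects the camera ray through pixel (xp, yp) with the vertical plane of laser n, and it introduces assumed horizontal and vertical camera fields of view, φh and φv, which do not appear in the original text.

```latex
% One plausible reconstruction (NOT the authors' exact equations).
% alpha, beta: ray azimuth and depression for pixel (x_p, y_p);
% phi_h, phi_v are ASSUMED camera fields of view.
\begin{align*}
  \alpha &= \Big(\frac{x_p}{N_x}-\frac{1}{2}\Big)\phi_h, \qquad
  \beta  = \theta_c + \Big(\frac{y_p}{N_y}-\frac{1}{2}\Big)\phi_v \\
  y_r &= \frac{\pm d_o + d_f\tan\theta_{ln}}{\tan\theta_{ln}-\tan\alpha} \tag{1}\\
  x_r &= \big(y_r - d_f\big)\tan\theta_{ln} \tag{2}\\
  z_r &= h_c - y_r\,\frac{\tan\beta}{\cos\alpha} \tag{3}
\end{align*}
```

Here (1) comes from equating the horizontal projection of the camera ray, x = ±do + y tan α, with the laser plane x = (y − df) tan θln; (2) substitutes yr back into the laser plane; and (3) drops the ray from camera height hc over the slant range yr/cos α.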

The system can then detect obstacles at various distances and locations from the vehicle. Holes or ridges, such as a stairway or curb, can be detected in a similar manner. A shallow depression in the path will cause the laser-stripe to bend in the opposite direction from the way it bends for a protruding obstacle. The laser reflection from a deep hole or stairway, on the other hand, is too dim to detect, so the absence of a laser return can be taken to indicate a dangerous depression.
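The three cases above can be summarized in a small sketch. This is illustrative only, not code from the paper; the noise threshold and the convention that a protruding obstacle produces a negative deviation are assumptions consistent with the surrounding discussion.

```python
def classify_return(deviation, laser_found, noise=3):
    """Classify one scan-line/laser reading.

    deviation   : pixel deviation from the expected flat-terrain location
                  (ASSUMED convention: negative = protruding obstacle)
    laser_found : False when no laser return was detected on the scan line
    noise       : deviations smaller than this are treated as flat terrain
    """
    if not laser_found:
        # reflection from a deep hole/stairway is too dim to detect
        return 'assume deep drop-off'
    if deviation < -noise:
        return 'protruding obstacle'
    if deviation > noise:
        return 'shallow depression'
    return 'flat terrain'
```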

Figure 1. Typical image from the sensors, with superimposed scan lines, as the vehicle approaches a doorway.

Figure 1 shows an image of a doorway taken from the vehicle. The image is distorted by the low-resolution cameras being used; however, the laser-stripes are clearly visible. The three white horizontal dashed lines are the locations of the scan lines processed by the algorithm, as explained above. The middle laser passes directly through the doorway and does not intercept any obstacles, so there are no deviations in that laser-stripe. In contrast, all of the other laser-stripes intercept an object, such as the wall or the door. Each interception shows up in the image as a bend in the laser-stripe, or a change in its expected pixel location. From this image it can easily be concluded that there is enough information to guide the vehicle safely through the doorway.

Initial research on wheelchair navigation is being done using the same five fixed laser-stripes as the other research vehicles, but future research will investigate the use of other types of sensors, such as a single scanning sonic sensor. This will reduce the sensor suite cost and improve obstacle detection range.

Linguistic Data, Fuzzy Logic, and the Navigation Algorithm

The approximate virtual navigation algorithms are based on processing obstacle information described in the previous section. From this information, an approximate "virtual" map is generated.

Figure 2. Approximate Map Regions.

The virtual map accumulates and displays sensor information. The objective is to display the data according to its linguistic features, for example, the degree of its impassability or passability. These linguistic descriptions can be captured using fuzzy logic and formally manipulated to provide meaningful visual information for use by either a human or an automatic decision-control processor. The design process first trains a human controller to use visual cues, provided by the display, to successfully avoid obstacles and progress toward the target position. In the next step a neural network is trained to mimic the expert human response for given display patterns. By using rule-based fuzzy logic (inference), color can be brought into the display as an effective cue to the degree of impassability. The display is "virtual" in the sense that it presents information on the terrain around the vehicle via the map, placing the operator in the midst of the vehicle's environment.

Fuzzy Mapping

The key to CSOIS's system is the implementation of a localized approximate mapping scheme. The map will be referred to as the "fuzzy" map (see Figure 2) because it describes the terrain surrounding the vehicle in two fuzzy ways. First, the area around the vehicle is divided into overlapping regions that can be loosely drawn as ellipses, corresponding to general regions around the wheelchair, such as far to the left or near to the right. Second, the regions are classified according to how passable they are, that is, their membership value (in the range [0, 1]) in the set of passable regions or, inversely, in the set of impassable regions. For example, a region with no obstacles or a couple of small stones would have a low membership value (near 0) in the set of impassable regions, while a region with a large boulder or a curb-type drop-off will certainly be given full membership (near 1).

Figure 2 shows the locations of the regions of the approximate map. The regions are represented as ellipses surrounding a rectangle representing the vehicle. Laser-stripes fan out from the front of the vehicle and scan lines, shown as vertical lines in front of the vehicle, intersect with the lasers.

The regions in front of the wheelchair are associated with intersections of lasers and scan lines. Those regions gain their membership values from the obstacle data provided by the sensors described previously. As before, the height and distance of an obstacle can be determined approximately from the equations in the previous section, or heuristically from the pixel deviation of known obstacles away from the location expected for flat terrain. The pixel deviation maps an obstacle to a fuzzy membership value through a membership function, described below, which is defined according to intuition at first and fine-tuned to give an accurate representation of reality.

Figure 3. Membership Value μ vs. Pixel Deviation

Consider now the shape of the membership function for a protruding obstacle. Figure 3 shows the piece-wise linear shape of the membership function for a sensor input (pixel deviation) versus the membership value in the set of impassable regions. Suppose a protruding obstacle causes a negative pixel deviation in a particular camera. A small pixel deviation could be the result of noise, but as the deviation increases beyond a given threshold, the size and nearness of the obstacle increase proportionally, up to the point where the obstacle's size and location would prevent safe entry into the associated region. Between the point where the obstacle signal emerges from the noise and the point where it becomes a menacing obstacle, the membership function should increase monotonically from 0 to 1. From there the function should stay at 1 until the obstacle's position lies inside the next region, making it the responsibility of the next scan line. At that point the function for the first region should decrease, becoming zero when the deviation would put a vertical obstacle at a membership of 1 for the next closest region; this places the obstacle completely in the nearer region. The interpolation of the membership functions between these points will be linear for simplicity unless it is deemed necessary to make the functions more complex.
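The trapezoid-like shape just described can be sketched as a piece-wise linear function of the deviation magnitude. The breakpoint names below (noise threshold, full-membership point, hand-off points) are hypothetical parameters, to be tuned as the text describes; this is an illustration, not the authors' code.

```python
def membership(dev, d_noise, d_full, d_handoff, d_gone):
    """Piece-wise linear membership in the 'impassable' set for one
    scan-line/laser pair, as a function of pixel-deviation magnitude.

    d_noise   : deviation below which the signal is treated as noise
    d_full    : deviation at which the obstacle blocks this region (mu = 1)
    d_handoff : deviation at which the obstacle starts entering the nearer region
    d_gone    : deviation at which the obstacle lies wholly in the nearer region
    """
    if dev <= d_noise:
        return 0.0
    if dev <= d_full:
        return (dev - d_noise) / (d_full - d_noise)    # ramp 0 -> 1
    if dev <= d_handoff:
        return 1.0                                     # plateau at 1
    if dev <= d_gone:
        return (d_gone - dev) / (d_gone - d_handoff)   # ramp 1 -> 0
    return 0.0                                         # now the nearer region's job
```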

The membership values are also additive: as the vehicle moves and takes another sample of the environment, the new sensor input is added to the previous membership value. In this way information is accumulated and abstracted from the sensor data into the fuzzy map instead of being arduously tracked as obstacle coordinate pairs.

What's more, the fuzzy map provides a local memory of past obstacles by translating values, so that regions on the map without any direct sensor input retain a part of the past sensor information. This translation works by adding a percentage of one region's membership value to the one(s) behind it, with respect to the wheelchair's movement, and then subtracting the same percentage from the original region. Values also revolve to adjacent regions with the turning motion of the vehicle. A certain amount of decay is also provided during translation, allowing errors in obstacle detection to fade away when not reinforced by successive detections.
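The translation-with-decay step might look like the following sketch. The region adjacency table, transfer fraction, and decay rate are all assumed parameters, not values from the paper.

```python
def translate(map_vals, behind_of, alpha=0.3, decay=0.05):
    """Shift a fraction of each region's membership to the region(s) behind it
    (with respect to the vehicle's motion) and apply decay so unreinforced
    detections fade away.

    map_vals  : dict region -> membership value in [0, 1]
    behind_of : dict region -> list of regions behind it for this movement
    alpha     : fraction of membership transferred rearward (assumed)
    decay     : amount subtracted each step so stale evidence flows away
    """
    new_vals = dict(map_vals)
    for region, mu in map_vals.items():
        trailing = behind_of.get(region, [])
        if not trailing:
            continue
        moved = alpha * mu
        new_vals[region] -= moved
        for b in trailing:
            # split the transferred membership among trailing regions
            new_vals[b] = min(1.0, new_vals[b] + moved / len(trailing))
    return {r: max(0.0, v - decay) for r, v in new_vals.items()}
```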

For several general references to fuzzy logic, the creation of Lotfi A. Zadeh, refer to the reference section of this paper [Notes 2, 3, 4, 5]. The fuzzy map takes some semblance from Bart Kosko's Fuzzy Cognitive Map (FCM) [Note 6] idea of modeling processes by connecting related variables.

Simplified Algorithm Summary

This fuzzy map accumulation can be summarized in the following algorithm:

Main loop: (while still in operating condition)

  1. Collect obstacle data.
  2. Determine the pixel deviation, yp, for each scan line/laser pair.
  3. Evaluate the membership function μR(yp) for each sensor input.
  4. Add the result to the current value, bounded to [0, 1], for each region R with a sensor input.
  5. Display the map to the decision scheme or operator for action to be taken.
  6. Instruct the vehicle to perform the movement operation.
  7. Translate the map according to the movement made, updating each region appropriately per Equation (5) [omitted here], with the α's chosen appropriately for the adjacent regions in a particular translation; the summation is over adjacent regions, i.
  8. Repeat.
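One pass of the loop above can be sketched in code. This is a minimal illustration with hypothetical helper signatures (membership, translate, decide, and move are all stand-ins for the sensor, map, decision, and motion components), not the authors' implementation.

```python
def fuzzy_map_step(map_vals, deviations, membership, translate, decide, move):
    """One pass of the fuzzy-map accumulation loop (steps 2-7 above).

    map_vals   : dict region -> current membership value in [0, 1]
    deviations : dict region -> pixel deviation from its scan-line/laser pair
    membership : function deviation -> membership increment (step 3)
    translate  : function (map, command) -> translated map (step 7)
    decide     : function map -> movement command (human or AI, step 5)
    move       : function command -> None, commands the vehicle (step 6)
    """
    # Steps 3-4: evaluate memberships and accumulate, bounded to [0, 1]
    for region, dev in deviations.items():
        map_vals[region] = min(1.0, map_vals[region] + membership(dev))
    # Steps 5-6: present the map, obtain a decision, and act on it
    command = decide(map_vals)
    move(command)
    # Step 7: shift memberships rearward for the movement just made
    return translate(map_vals, command)
```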

Fuzzy Map User Interface

While a computer system is very comfortable with numerical representations of the fuzzy map, a human operator would struggle with a list of numbers even when printed in the appropriate orientation to each other. A graphical user interface supplies a good tool for a human to interact with the system.

The user interface generated by CSOIS portrays the regions around the vehicle as ellipses, as previously shown in Figure 2. The color of each ellipse varies with the membership value associated with its region, ranging from green to red. Green means no obstacles, obviously triggering the "go" instincts of a driver; in contrast, red means "stop": lots of big obstacles. The green passes through mixes of yellow before reaching red, completing the stop-light comparison.
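The stop-light cue might be implemented as a simple color ramp. The paper does not give the exact ramp; the function below is one plausible choice.

```python
def region_color(mu):
    """Map a membership value in [0, 1] to a green -> yellow -> red RGB
    triple, mimicking the stop-light cue (one simple choice of ramp; the
    paper does not specify the exact colors)."""
    mu = min(1.0, max(0.0, mu))  # clamp to the valid membership range
    if mu <= 0.5:
        # green toward yellow: ramp the red channel up
        return (int(255 * 2 * mu), 255, 0)
    # yellow toward red: ramp the green channel down
    return (255, int(255 * 2 * (1.0 - mu)), 0)
```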

From this display a tele-operator can drive the vehicle with only the information supplied by the map. Data can be collected pairing map values with a human's "intelligent" response. This data can then be used to supervise the training of a neural network to emulate the decisions made by the human operators. The decision data could also be applied to a fuzzy clustering algorithm to develop a fuzzy rule base for a similar decision system.

The Value of the Map with Artificial Intelligence Control Scheme

The map provides several features that are conducive to developing a tractable autonomous navigation system at a reasonable cost.

As mentioned previously, the map provides a minimal memory and processing scheme for tracking sensor information.

The whole point of creating an approximate representation of the terrain around the vehicle is to create a tractable image to present to an autonomous system.

The map creates a representation of the world that can be presented to a human operator. If a human operator can negotiate obstacles referring only to the approximate map, it is highly probable that a reasonable system can be designed to perform a similar process.

Since the map compresses the information about the environment, the size and complexity of the decision system can be made computationally tractable. This is extremely important if the attempt to mimic human behavior is taken on by a neural network or fuzzy rule base, since the complexity of the structure increases greatly with the addition of more inputs. The reduction from a full image (64,000 pixels for a 320 by 200 image) to the map (28 regions) is therefore extremely valuable.

Another benefit is that the map translates values rearward, so it can give an approximate indication of when an obstacle has passed out of the immediate region of concern for a particular movement. Thus the map can also indicate that the vehicle has just entered, or is about to enter, a trap, and would, in some circumstances, recognize the situation before it was too late to simply back out. The system also remembers an obstacle that has passed to the side and can consider this information when the desired maneuver is in that direction.

The visual memory is also in a form that can be analyzed. The regions to the side of and behind the vehicle can be visually inspected against the positions of obstacles around the test vehicle to assure that the system is remembering in an accurate and valuable manner. Finally, the structure of the map allows it to link easily to several intelligent decision-making schemes, such as neural networks, traditional logic, or fuzzy logic inference systems.

These possibilities are the topic of the next section of this paper.

Implementation of Intelligent Vehicle Control

Two schemes are being researched for interpreting the map and making a navigation decision. The first being explored is a fuzzy rule-base approach. In this method an expert, or panel of experts, will drive the vehicle through various obstacle fields via the approximate map. The decisions based on the map will then be accumulated into a rule base by associating an explanation, or linguistic rule, with each decision. A statement like "I turned right because the regions in the front left of the vehicle were highly impassable according to the map" is easily turned into a fuzzy rule. After these rules are accumulated for many scenarios, they are grouped and common denominators are found to decrease the number of rules. The system is then tested on this rule base, and iterations and modifications are made based on experimental results and analysis of the rule base.
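A rule of the kind in the quoted statement could be encoded as follows. The region names and the choice of taking the rule strength as the maximum membership over the antecedent regions are illustrative assumptions, not details from the paper.

```python
def rule_turn_right(map_vals):
    """One fuzzy rule of the kind described above (illustrative only):
    IF the front-left regions are impassable THEN turn right, with the
    rule strength taken as the max membership over the antecedent regions.
    """
    front_left = ('near_front_left', 'far_front_left')  # hypothetical region names
    strength = max(map_vals[r] for r in front_left)
    return ('turn_right', strength)
```

In a full rule base, every rule would fire with some strength and the strongest (or a weighted blend of) recommended movement would be selected.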

The second idea would take away the need for a human to describe why or how a decision was made. A neural network could be trained on the map image versus the human decision. In this way, the network learns patterns that should trigger a certain response. Assuming the human training the network makes good decisions, the network would then be able to make similar decisions for similar situations on its own.

Both ideas, and possibly a hybrid of the two, are being considered by CSOIS.

Current Development Status

A good portion of the control algorithm has been, and is being, developed for other applications. This greatly aids in bringing the wheelchair controller to maturity. All of the funding for this research, so far, has come from internal sources at CSOIS and from the CPD, which has limited the development schedule.

A prototype of the autonomous wheelchair control system is being developed from old instrumentation extracted from unused test vehicles. Current plans place initial testing of a wheelchair system at the end of August 1994.

[Note 1] T. R. McJunkin, G. Powell, and R. W. Gunderson. A unique approach to advanced obstacle detection and avoidance. Accepted to the 2nd International Symposium on Missions, Technologies and Design of Planetary Mobile Vehicles, Moscow, Russia, May 1994.

[Note 2] L. A. Zadeh. Outline of a new approach to the analysis of complex systems and decisions process. IEEE Transactions on Systems, Man, and Cybernetics, 3(1):28-44, January 1973.

[Note 3] C. C. Lee. Fuzzy logic in control systems: Fuzzy logic controller--Part I. IEEE Transactions on Systems, Man, and Cybernetics, 20(2):404-418, March/April 1990.

[Note 4] C. C. Lee. Fuzzy logic in control systems: Fuzzy logic controller--Part II. IEEE Transactions on Systems, Man, and Cybernetics, 20(2):419-435, March/April 1990.

[Note 5] Daniel McNeill and Paul Freiberger. Fuzzy Logic. Simon and Schuster, 1993.

[Note 6] Bart Kosko. Fuzzy Cognitive Maps. International Journal of Man-Machine Studies, 24:65-75, January 1986.


Reprinted with author(s) permission. Author(s) retain copyright.