1993 VR Conference Proceedings


Personal Guidance System for the Visually Impaired using GPS, GIS, and VR Technologies

Jack M. Loomis
Department of Psychology

Reginald G. Golledge
Department of Geography
Roberta L. Klatzky
Department of Psychology
University of California, Santa Barbara CA 93106


We describe a prototype of a navigation aid for the visually impaired. Our plans are for a portable, self-contained system that will indicate the positions of environmental landmarks to a blind traveler by having their labels, spoken by a speech synthesizer, appear as virtual sounds at the correct locations within the auditory space of the traveler. The system consists of the following components: (1) a differential GPS (Global Positioning System) receiver that will provide the traveler's longitude and latitude with an accuracy on the order of 2-5 m, (2) a Geographic Information System comprising a detailed database of the surrounding environment (with locations, names, and attribute information about buildings, walkways, large permanent obstacles, etc.) and functions for selecting information to be displayed, and (3) a user interface consisting of an acoustic virtual display and an input device to allow the user to change modes of operation, select routes, and interrogate the database.

Vision serves several functions for human travelers. A primary function is in connection with geographic orientation. Before setting out, the traveler may consult a map for determining an optimal route. Once underway, the traveler uses vision to sense distant landmarks for maintaining goal orientation and to sense nearer landmarks that signify a familiar route or that can be compared with those depicted in the map. Vision also provides access to optical flow for velocity-based dead reckoning. A second primary function is in connection with the local aspects of wayfinding, such as keeping the observer on the selected path, detecting hazards such as dropoffs, and avoiding stationary and moving obstacles.

Obviously, a traveler lacking vision is at a considerable disadvantage, for aids like tactual maps are rarely available, the traveler has little information about remote landmarks and paths that permit the selection of novel routes while underway, and obstacles and other hazards are encountered with little advance warning. In connection with geographic orientation, a missed cue can result in the traveler's becoming momentarily disoriented or indeed lost, even in familiar environments. As a consequence, the traveler can experience stress, fear, and even panic, emotions that would rarely beset a sighted traveler except perhaps in the most remote wilderness or in densest fog.

The seeing-eye dog, the long cane, and a number of electronic travel aids (e.g., the laser cane and ultrasonic sensors) assist the blind traveler with the local aspects of wayfinding, such as obstacle avoidance. In contrast, aids for geographic orientation do not exist, apart from a few in the experimental stage (e.g., the "talking signs" of Loughborough, 1979, and Kelly, 1981). This paper briefly describes a project at the University of California, Santa Barbara (UCSB), the goal of which is to contribute toward development of a practical navigation aid for the visually impaired. The project constitutes the second phase of National Eye Institute support, the first phase of which was devoted to understanding the spatial competencies of the blind and the feasibility of their being aided by such a system (Klatzky et al., 1990; Loomis et al., 1993). The aid we have in mind will inform a traveler of his or her current position and orientation with respect to the environment being navigated, will provide information about the immediate surroundings, and will assist in route planning.

The system we are developing, which we refer to as a Personal Guidance System, is really a test bed for trying out different design options, for assessing the potential of such an aid, and for indicating potential problems. It will also be used in our research on the mental representations underlying non-visual navigation. The prototype being developed consists of a laptop computer and peripheral hardware, most of it to be worn in a backpack. (Well into the future, one can assume that miniaturization of the electronics will reduce its size so that most of the hardware will fit into a small pack worn at the waist.) Functionally, the system consists of three modules, each involving a different new technology. The first module, which determines the position and orientation of the traveler, will take as its primary input signals from a GPS (Global Positioning System) receiver. The second module is software constituting a Geographic Information System (GIS) and consists of a detailed spatial database representing the surrounding environment and functions for planning routes and for selecting information from the database to be displayed to the traveler. The third module is the user interface, which will give the traveler control over the functioning of the device and will display desired navigational information. One design of the system involves a virtual acoustic display, a product of Virtual Reality (VR) technology (Loomis et al., 1990; Wenzel, 1992). With such a design, the navigation aid will indicate the positions of environmental landmarks by having their labels, spoken by a speech synthesizer, appear as virtual sounds at the correct locations within the auditory space of the traveler, as first proposed by Loomis (1985) and more recently by Urdang and Stuart (1992).

Because the navigation aid is not intended to replace travel aids for sensing the near environment, the user will still need to make use of such aids (e.g., long cane, seeing-eye dog) for avoiding obstacles, pathkeeping, and so forth. However, if a successful implementation involves the use of headphones, the inclusion of signals for detecting obstacles from an ultrasonic sensor might be feasible.

Our hope is that if such a navigation aid ever becomes practical, it will allow blind users to travel without assistance over unfamiliar territory and will instill in them feelings of independence and confidence that are lacking in all but the most competent and adventurous of blind travelers. We also are hopeful that it might permit the user to develop better cognitive representations of the environment than is currently possible.

Because the aid will provide users with information about the layout of surrounding landmarks, there is a real possibility that they will be able to form reasonably accurate internal representations of surrounding space. However, as with any new technology, there is a potential downside as well. Aside from the obvious potential negatives, such as risks accompanying faulty operation, unreliability, high cost, and cosmetic undesirability, there is the possibility that the traveler will become dependent upon the aid and allow his or her normal travel skills to languish, so that when the aid is not available, the person's wayfinding ability is actually impaired.

Module I: Determining Position and Orientation

The purpose of this module is to provide the computer with orientation and location information, which can then be converted to the coordinates of a local digital map. The primary means of determining position will be a GPS receiver with differential correction. The possibility of using GPS in a navigation system for the blind was first suggested by Collins (1985) and then considered in greater detail by Loomis (1985).

Very shortly now, the full complement of 21 satellites (and 3 replacements) will be in orbit, thus allowing localization of a GPS receiver with uniform accuracy over the earth's surface. A low-cost commercial grade GPS receiver provides spatial accuracy of about 100 m when Selective Availability (the military's deliberate perturbation of the satellite signals) is in effect. A means of obtaining much higher accuracy is differential correction. Here one uses two receivers communicating by a radio data link. One receiver (the base) has fixed and known coordinates while the other is mobile. Errors in the signals arriving at the base receiver are computed and then used to correct the signals at the mobile one. With Differential GPS one can obtain 1-3 m accuracy within several miles of the base station. Our project uses a Trimble Navigation real-time DGPS configuration, consisting of a base station, spread spectrum radios, and a portable roving receiver. Differentially corrected coordinates accurate to several meters will serve as primary input to the laptop computer running the GIS and user-interface software.
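The differential correction just described can be sketched in a few lines. Real DGPS corrects per-satellite pseudoranges rather than final coordinates, so the position-domain version below, with hypothetical coordinates throughout, only illustrates the central idea: the base station's known error is subtracted from the rover's raw fix.

```python
# Position-domain sketch of differential GPS correction. All
# coordinates are hypothetical; real DGPS applies corrections to
# per-satellite pseudoranges before the position is computed.

BASE_TRUE = (34.41400, -119.84500)   # surveyed base-station position

def correction(base_measured, base_true=BASE_TRUE):
    """Error in the base station's raw GPS fix (lat, lon degrees)."""
    return (base_measured[0] - base_true[0],
            base_measured[1] - base_true[1])

def apply_correction(rover_measured, corr):
    """Subtract the base-station error from the rover's raw fix."""
    return (rover_measured[0] - corr[0],
            rover_measured[1] - corr[1])

# Within a few miles of the base, both receivers see roughly the
# same error (e.g., from Selective Availability), so it cancels.
corr = correction(base_measured=(34.41423, -119.84481))
fix = apply_correction(rover_measured=(34.41523, -119.84381), corr=corr)
```

The cancellation works because the dominant error sources are strongly correlated between two receivers that are near each other, which is why accuracy degrades with distance from the base station.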

One of the major limitations of GPS for our application is the loss of signal occasioned by nearby buildings, foliage, etc. For GPS to ever be useful in such an application, it will have to be supplemented by some other means of position determination. At later stages in the project we hope to experiment with velocity sensors for performing dead reckoning or accelerometers for performing inertial navigation.
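As a rough illustration of velocity-based dead reckoning, the position estimate can be advanced from heading and speed while GPS is unavailable. The update below is a minimal flat-earth sketch in a local (east, north) frame; the sensor values are hypothetical.

```python
import math

def dead_reckon(position, heading_deg, speed_mps, dt):
    """Advance an (east, north) position estimate in meters from a
    compass heading and a speed reading during a GPS outage."""
    h = math.radians(heading_deg)
    return (position[0] + speed_mps * math.sin(h) * dt,
            position[1] + speed_mps * math.cos(h) * dt)

# Hypothetical outage: 10 seconds walking due east at 1.2 m/s,
# integrated at 1 Hz; the estimate drifts about 12 m east.
pos = (0.0, 0.0)
for _ in range(10):
    pos = dead_reckon(pos, heading_deg=90.0, speed_mps=1.2, dt=1.0)
```

In practice the error of such an estimate grows with time, which is why dead reckoning is a bridge across GPS outages rather than a replacement for GPS.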

In order to specify the azimuth of a spatial landmark relative to the traveler, we will use a fluxgate compass worn by the observer and interfaced to the computer. For a conventional synthesized speech display, body orientation is sufficient, and a single compass mounted on the torso is all that is needed. In the case of the virtual acoustic display, head orientation is required, in which case the compass will be mounted on the strap of the headphones worn by the user. Because we expect some problems associated with local distortions of the earth's magnetic field, we are contemplating the use of mechanical or optical gyros as potential alternate orientation sensors.
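Computing a landmark's azimuth relative to the traveler reduces to subtracting the compass heading from the bearing to the landmark. A minimal sketch in a local (east, north) frame, with hypothetical coordinates:

```python
import math

def bearing(traveler, landmark):
    """Compass bearing (degrees, 0 = north) from traveler to landmark
    in a local flat-earth (east, north) frame measured in meters."""
    de = landmark[0] - traveler[0]
    dn = landmark[1] - traveler[1]
    return math.degrees(math.atan2(de, dn)) % 360.0

def relative_azimuth(traveler, landmark, heading):
    """Azimuth of the landmark relative to the traveler's heading,
    in (-180, 180]; positive means the landmark is to the right."""
    rel = (bearing(traveler, landmark) - heading) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

# A landmark 10 m east of a north-facing traveler lies 90 degrees to
# the right; the display would place its spoken label at that azimuth.
az = relative_azimuth((0.0, 0.0), (10.0, 0.0), heading=0.0)
```

With a torso-mounted compass, `heading` is body orientation, sufficient for spoken directions; with a head-mounted compass, it is head orientation, as the virtual acoustic display requires.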

Module II: Geographic Information System

The primary purpose of the second module is to provide a spatial database of the environment, information from which can be communicated to the traveler by means of the user interface. A second purpose, which we will deal with in later stages of the project, is to select optimal routes of travel.

This second component is a GIS, for it links spatial data about objects, such as their shapes and locations, to nonspatial attributes such as the object's category or its value. A GIS can be used to graphically portray an area to a user or to compute information such as the number of objects of a given type within a given region (e.g., the number of Chinese restaurants within one mile of the traveler). As this example indicates, a GIS allows data retrieval to be constrained both spatially and semantically.
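A retrieval constrained both spatially and semantically can be sketched as a filter over a small attribute table. The records, categories, and coordinates below are hypothetical stand-ins for the layered campus database described next:

```python
import math

# Minimal database: each record links spatial data (a location in
# local east/north meters) to nonspatial attributes (category, name).
# All entries are hypothetical.
DATABASE = [
    {"name": "Library",       "category": "building",   "loc": (120.0, 40.0)},
    {"name": "Golden Dragon", "category": "restaurant", "loc": (300.0, -80.0)},
    {"name": "Fountain",      "category": "obstacle",   "loc": (15.0, 20.0)},
    {"name": "Panda Garden",  "category": "restaurant", "loc": (2000.0, 0.0)},
]

def query(traveler, category, radius_m):
    """Retrieve names of records matching a semantic constraint
    (category) and a spatial constraint (within radius of traveler)."""
    def dist(loc):
        return math.hypot(loc[0] - traveler[0], loc[1] - traveler[1])
    return [r["name"] for r in DATABASE
            if r["category"] == category and dist(r["loc"]) <= radius_m]

# Restaurants within one mile (about 1609 m) of the traveler.
print(query((0.0, 0.0), "restaurant", 1609.0))  # ['Golden Dragon']
```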

We have been developing a spatial database for our test site, the campus of the University of California, Santa Barbara (Golledge et al., 1991). The database consists of a number of layers, corresponding to entities such as potential routes, buildings, and large permanent obstacles.

Our GIS module is intended to provide functions different from those in most existing GISs and to do so in support of real-time navigation. Types of functions the database should support can be seen by considering an apparently simple task: directing the traveler along a predetermined path.

In a particularly simple design, the traveler might be led along the predetermined path by a succession of virtual auditory beacons, sounds that are presented through headphones but appear externalized within the auditory space of the traveler. For travel along straight-line segments, virtual beacons can be positioned just beyond the ends of the various segments. By homing on each beacon, the traveler can walk to the end of that segment, at which point a new beacon will be activated. Obviously, for this implementation the spatial database must include possible routes as well as objects. The GIS software will keep track of the traveler's position with respect to the to-be-navigated route, determining when the traveler has arrived at the current segment endpoint and then determining the location of the next beacon.
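The bookkeeping for such beacon-following can be sketched as follows. For simplicity the beacon is placed at each segment endpoint rather than just beyond it, and the arrival radius is an assumed parameter, not a value from the project:

```python
import math

ARRIVAL_RADIUS = 2.0   # assumed: traveler counts as "arrived" within 2 m

def active_beacon(route, position, idx):
    """Return (beacon_location, idx), advancing to the next waypoint
    once the traveler comes within ARRIVAL_RADIUS of the current one."""
    if idx < len(route) - 1:
        tx, ty = route[idx]
        if math.hypot(position[0] - tx, position[1] - ty) <= ARRIVAL_RADIUS:
            idx += 1
    return route[idx], idx

# Hypothetical route of straight-line segments, in local meters:
# origin -> (0, 50) -> (30, 50).
route = [(0.0, 0.0), (0.0, 50.0), (30.0, 50.0)]
beacon, idx = active_beacon(route, position=(0.5, 49.0), idx=1)
# The traveler has reached the first endpoint, so the active beacon
# jumps to the end of the next segment, (30, 50).
```

Each call would be driven by a fresh DGPS fix, and the returned beacon location, converted to a head-relative azimuth, is where the virtual sound is rendered.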

The situation becomes somewhat more complex when we consider alternative means of directing the traveler along the pathway, such as the use of natural-language commands (go five paces forward, turn right 30 degrees, etc.) rather than homing. In this case, the functions required to repeatedly determine the traveler's position are the same as previously, but new functions are required to generate the appropriate linguistic description.
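Generating such a command from the same positional data might look like the following sketch; the phrasing and rounding conventions are assumptions for illustration, not the project's actual design:

```python
import math

def movement_command(position, heading, target):
    """Phrase the next move as 'turn ..., then go forward ...'.
    position/target are (east, north) meters; heading is in degrees."""
    de, dn = target[0] - position[0], target[1] - position[1]
    rel = (math.degrees(math.atan2(de, dn)) - heading) % 360.0
    if rel > 180.0:
        rel -= 360.0
    turn = ("turn right %d degrees" % round(rel) if rel >= 0.0
            else "turn left %d degrees" % round(-rel))
    return "%s, then go forward %d meters" % (turn, round(math.hypot(de, dn)))

# North-facing traveler at the origin, next waypoint 10 m east
# and 10 m north.
print(movement_command((0.0, 0.0), 0.0, (10.0, 10.0)))
# turn right 45 degrees, then go forward 14 meters
```

The string would then be passed to the speech synthesizer in place of positioning a virtual beacon.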

More complex still is a situation in which the traveler is to be informed about landmarks in surrounding space. This would occur, for example, if the traveler had a "mental model" of the space and wished to pursue some trajectory that was defined in terms of known landmarks. The spatial database must now perform functions that select a group of items to display and determine the sequence of item presentations.

Toward the latter stages of the initial project period we hope to incorporate a modest route selection algorithm that will connect source and destination locations with linear segments along unobstructed paths. Subsequent work will then attempt to implement algorithms for optimizing route selection under a variety of constraints (e.g., minimum path distance, minimum travel time).
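Optimizing route selection for minimum path distance is a shortest-path problem over the network of unobstructed segments, and a standard approach is Dijkstra's algorithm; the walkway network below is hypothetical:

```python
import heapq

def shortest_route(graph, source, dest):
    """Dijkstra's algorithm over a path network represented as
    {node: [(neighbor, distance_m), ...]}; returns the node sequence
    of a minimum-distance route, or None if none exists."""
    pq = [(0.0, source, [source])]
    best = {}
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == dest:
            return path
        if node in best and best[node] <= d:
            continue
        best[node] = d
        for nbr, w in graph.get(node, []):
            heapq.heappush(pq, (d + w, nbr, path + [nbr]))
    return None

# Hypothetical campus walkway network (edge weights in meters).
net = {"Gate": [("Quad", 120.0), ("LotA", 60.0)],
       "LotA": [("Quad", 100.0)],
       "Quad": [("Library", 80.0)]}
print(shortest_route(net, "Gate", "Library"))  # ['Gate', 'Quad', 'Library']
```

Other constraints, such as minimum travel time, amount to reweighting the same edges, so the search itself need not change.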

Module III: The User Interface

The user interface will provide the user with two-way communication with the GIS module. We will be comparing two display alternatives, namely, a conventional speech display through headphones or speaker and a virtual acoustic display through binaural headphones. The virtual display would indicate the positions of landmarks by having their labels, spoken by a speech synthesizer, appear as virtual sounds at the correct locations within the auditory space of the traveler, whereas the conventional display would give spoken instructions for movement and spoken descriptions of the traveler's surroundings. The use of either display expands the potential range of transmitted data beyond position information; for example, the function, occupancy, or layout of a nearby building could be provided. Our research will consider not only the relative merits of the two displays, but also the effects of presenting route information only (e.g., by way of auditory beacons) versus presenting information about off-route landmarks. For the virtual display, we plan to use the analog hardware device developed by Loomis, Hebert, and Cicinelli (1990). However, an alternative is one of the commercial virtual acoustic displays, such as the Convolvotron (Wenzel, 1992), a digital signal processing board that does high-speed convolution of the head-related transfer function with the signal from a monaural input source.

The input component of the user interface will allow the user to select a destination, add landmarks to the database, change the display mode (e.g., from auditory beacons to the spoken names of landmarks), and change display parameters, such as map scale. This part of the interface will probably be implemented using either a small keypad or voice input in conjunction with limited-domain speech recognition.



Reprinted with author(s) permission. Author(s) retain copyright.