2006 Conference General Sessions

INTERACTIVE LOCALIZATION AND RECOGNITION OF OBJECTS FOR THE BLIND

Presenter(s)

Andreas Hub

University of Stuttgart

Universitaetsstrasse 38

Stuttgart 70569 Germany

Day Phone: 0049 711 7816 259
Fax: 0049 711 7816 340
Email: Andreas.Hub@informatik.uni-stuttgart.de

Introduction
The foundation of any orientation and navigation system for the blind is an accurate determination of the user's position and of the location of objects within the user's environment. Inaccurate localization is also the reason why most available systems have weaknesses or even fail. In order to address this problem, we have so far focused on indoor environments. One reason for this approach is that localization within indoor environments is more difficult, since GPS signals are normally unavailable there. The second reason is that, if we can solve the localization problem indoors, it should also be possible to solve it outdoors by combining our assistant with the Global Positioning System.

Within the framework of developing a widely applicable assistant system for the blind [2, 10], we have recently improved the accuracy of interactive object localization within indoor environments. This was accomplished by combining data from cameras and from local inertial sensors with 3D building model information.

Related Work
Commercially available navigation systems for the blind that cover large areas, as well as navigation systems for sighted pedestrians, usually suffer from inexact localization of points of interest, including an imprecise determination of the user's own position. For many outdoor systems (such as [8] or [13]), the position error may amount to ten meters (approx. 33 feet) or more, even if GPS signals are available and not blocked by trees or buildings. Other positioning systems that allow high spatial resolution often require a special infrastructure or a system of pre-placed electronic beacons or visual markers. This restricts the usable area and necessitates time-consuming installations [1, 6, 7, 12]. Integrated systems that may be used both indoors and outdoors are under development [9, 11]; however, their indoor use also requires special infrastructure.

Components of the Assistant System
Our current prototype of a navigation assistant system for the blind consists of a sensor module and a portable computer. The sensor module, connected by cable to the portable computer, consists of a stereo camera and an inertial sensor (MT9B by Xsens) that includes a 3D compass, a 3D gyroscope, and a 3D acceleration sensor. The portable computer can be carried in a backpack. Using a keyboard, the blind user can send inquiries concerning navigation or objects either locally to the portable computer or, via wireless connection, to a server platform. Navigation advice and/or object descriptions are transmitted acoustically to the blind user over a text-to-speech engine and loudspeaker. For the deaf-blind, the information is presented on a portable Braille display [4]. In both cases, the information is available in different languages [5]. We have developed two different versions of the sensor module: one is hand-guided and can be moved like a combination flashlight/cellular phone; the other is head-guided, integrated into a bicycle helmet at the request of blind users. Both versions of the sensor module have their advantages and disadvantages. The head-guided model affords hands-free operation and provides easier interpretation of sensor signals, since head movements are typically less complex than movements of the hand. However, the helmet is not as discreet as the hand-guided module.
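To make this interaction concrete, the following Python sketch outlines how such a keyboard inquiry could be routed and answered. It is a minimal illustration only; all class, key, and method names (ObjectInfo, handle_inquiry, identify_pointed_object, request_route_advice, present) are our assumptions, not the actual interface of the prototype.

from dataclasses import dataclass

@dataclass
class ObjectInfo:
    name: str          # e.g. "office door"
    distance_m: float  # distance from the sensor module to the object
    description: str   # color, size, warnings, occupant, etc.

def handle_inquiry(key, local_model, server, output_device):
    """Route a keyboard inquiry either to the 3D model held on the
    portable computer or, over the wireless link, to the server platform."""
    if key == "o":                      # hypothetical key: identify object
        info = local_model.identify_pointed_object()
    elif key == "n":                    # hypothetical key: navigation advice
        info = server.request_route_advice()
    else:
        return
    # Present the answer via text-to-speech or, for deaf-blind users,
    # on the portable Braille display.
    output_device.present(
        f"{info.name}, {info.distance_m:.1f} meters. {info.description}")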

 

Modeling for Object Recognition

To allow for object recognition by the blind, a 3D model of the indoor environment is generated that includes all object features, such as color and size. This information is linked to other navigation support data, such as room/office numbers and their occupants, as well as warnings about stairways, revolving doors, and other potentially dangerous locations [3].
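A minimal sketch of how one entry of such an augmented model could be structured is given below; the field names are illustrative assumptions rather than the actual schema used in the prototype.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ModelObject:
    name: str                             # e.g. "door, room 1.245"
    position: Tuple[float, float, float]  # center in model coordinates (m)
    size: Tuple[float, float, float]      # bounding-box extents (m)
    color: str                            # dominant surface color
    occupant: Optional[str] = None        # linked data, e.g. office occupant
    warning: Optional[str] = None         # e.g. "stairway", "revolving door"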

 

Localization and Object Identification

By using corresponding feature points within the stereo images of the real indoor environment, we can detect walls and surfaces in these rooms and determine the depth of these points and objects. This depth information enables us to draw conclusions about the current position. The viewing direction is measured simultaneously using the inertial sensor. We then register the 3D model to this depth and orientation information by minimizing the differences between the measured distances and the model geometry. This step of the algorithm can be performed close to real time, provided that feature points can be found within the images. In the worst case of homogeneously colored walls without any texture, the user may be asked to turn slightly until an edge or part of a texture can be detected. Under normal office conditions the matching can be done in less than a second, and the blind user can immediately begin to identify objects by pointing or looking at them.
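The following sketch illustrates the matching step under the stated assumptions: the rotation is taken as known from the inertial sensor, and only the user's position is estimated by minimizing the differences between the measured feature-point positions and the model geometry. The helper distance_to_model is hypothetical, standing in for a query against the 3D building model.

import numpy as np
from scipy.optimize import least_squares

def refine_position(points_cam, R, t0, distance_to_model):
    """points_cam: (N, 3) feature points triangulated from the stereo pair.
    R: 3x3 rotation matrix taken from the inertial sensor.
    t0: initial position guess, e.g. the last known location.
    distance_to_model: hypothetical helper mapping a world point to its
    distance from the nearest model surface."""
    def residuals(t):
        world = points_cam @ R.T + t       # camera -> world coordinates
        return np.array([distance_to_model(p) for p in world])
    return least_squares(residuals, t0).x  # refined user position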

 

Results

We have developed an improved prototype of our orientation and navigation assistant for the blind, which allows the user to detect objects interactively. If the object is part of a 3D environment model and has the same position as the real object, its name and pertinent features can be transmitted immediately to the blind user over the text-to-speech engine or the portable Braille display. Using our system, blind people are able to identify objects simply by pointing at them. If there is no model of the object, we can at least give information about its distance, size, and color. We can also detect whether a door is open or closed. Compared to our previously presented prototype [2], the depth information of feature points significantly improves localization accuracy. Because our method is image-based, accuracy depends upon the distance to the object, the lighting conditions, and the texture of the environment. In normal lighting and in a typical room environment, an accuracy of up to twenty centimeters can be achieved. It should be emphasized that no further infrastructure is needed when the starting point (e.g., the room number) within the model is known.
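As an illustration of pointing-based identification, the sketch below casts a ray from the estimated position along the measured viewing direction and reports the first model object (using the ModelObject sketch above) whose bounding box it hits. The simple ray-marching scheme and all names are our assumptions, not the prototype's actual algorithm.

import numpy as np

def identify_pointed_object(position, direction, objects,
                            max_range=10.0, step=0.05):
    """position: estimated user position (3-vector); direction: unit viewing
    vector from the inertial sensor; objects: list of ModelObject entries."""
    for s in np.arange(step, max_range, step):  # march along the ray
        p = position + s * direction
        for obj in objects:
            half = np.asarray(obj.size) / 2.0
            if np.all(np.abs(p - np.asarray(obj.position)) <= half):
                return obj, s                   # first object hit + distance
    return None, None                           # no model object on the ray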

 

Discussion and Future Work

Based on these results, improved navigation and orientation options can be offered to the blind, provided that the many maps and 3D models already in existence worldwide are made accessible to them. When the starting point is known, the blind user can immediately start to use our system. In the future we will try to identify the starting point as well, by image comparison and plausible assumptions based upon the history of previous locations. To provide additional cues regarding the current location, we are using WiFi signals indoors and GPS signals outdoors. One of the next steps will be to update the model to reflect any changes, e.g., if a chair is moved to a new location. The comparison of the image content with orientation and acceleration information can also be used to distinguish between movements of the user and movements within the environment, such as approaching persons. Furthermore, we have started to work on recognizing persons and faces; to do this reliably and interactively with portable devices, we must wait for the next generation of more powerful processors. We are aware that the ergonomics of our system must be optimized so that the device can be used more discreetly. This could be achieved through integration into clothing, a pair of glasses, or even jewelry.

Conclusion
By integrating sensors comparable to human senses with detailed 3D environment models, we have come one step closer to our goal of an assistant system for the blind that is largely independent of any special infrastructure. Our assistant system can therefore serve as the basis of a worldwide navigation system that allows blind people to explore unknown environments on their own, without help from other persons.

Acknowledgments
This project is funded by the Deutsche Forschungsgemeinschaft within the Center of Excellence 627 "Spatial World Models for Mobile Context-Aware Applications". We would like to thank all blind and deaf-blind persons and mobility teachers, especially our colleague Alfred Werner, who also tested our previous prototypes and offered important suggestions for improvements.

References
[1] Coroama, V. “The Chatty Environment — A World Explorer for the Visually Impaired”. Adjunct Proceedings of Ubicomp, 2003.
[2] Hub, A., Diepstraten, J., Ertl, T. "Design and Development of an Indoor Navigation and Object Identification System for the Blind". Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2004), Atlanta, GA, USA, 147-152, 2004.
[3] Hub, A., Diepstraten, J., Ertl, T. "Augmented Indoor Modeling for Navigation Support for the Blind". Proceedings of the 2005 International Conference on Computers for People with Special Needs, Las Vegas, Nevada, USA, 54-59, 2005.
[4] Hub, A., Diepstraten, J., Ertl, T. "Design of an Object Identification and Orientation Assistant for the Deafblind". Proceedings of the 6th DbI European Conference on Deafblindness, Presov, Slovakia, 97, 2005.
[5] Hub, A., Diepstraten, J., Ertl, T. “Learning Foreign Languages by using a New Type of Orientation Assistant for the Blind”. To appear in: Proceedings of the International Council for Education of People with Visual Impairment, Chemnitz, Germany, 2005.
[6] Knapp, M., Reitmayr, G., Schmalstieg, D. "SignPost 2 - Mobile AR Navigation System". http://www.studierstube.org/projects/mobile/SignPost2/
[7] Kulyukin, V., Gharpure, C., DeGraw, N. "Human-Robot Interaction in a Robotic Guide for the Visually Impaired". AAAI Spring Symposium, 158-164, 2004.
[8] Loomis, J., Golledge, R. G., Klatzky, R. L. "Navigation System for the Blind: Auditory Display Modes and Guidance". Presence, Vol. 7, No. 2, 192-203, 1998.
[9] Navigation and Guidance for the Blind.
http://www.vtt.fi/tuo/53/projektit/noppa/noppaeng.htm
[10] Nexus. http://www.nexus.uni-stuttgart.de/index.en
[11] Ran, L., Helal, S., Moore, S. “Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service”. Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications 2004, Orlando, FL, USA, 23-32, 2004.
[12] Sonnenblick, Y. “An Indoor Navigation System for Blind Individuals” Proceedings of the 13th Annual Conference on Technology and Persons with Disabilities, 1998.
[13] VisuAide Inc., VisuAide Trekker. http://www.visuaide.com/gpssol.html




Reprinted with author(s) permission. Author(s) retain copyright