2003 Conference Proceedings



VISION BASED MOBILITY SYSTEM FOR OBJECT IDENTIFICATION

Presenters
S. Rahman
Q. K. Hassan
School of Computing Science, Middlesex University, UK.
Email: s.rahman@mdx.ac.uk 
Email: q.hassan@mdx.ac.uk

Introduction

Traditionally, the long cane and the guide dog have been the primary mobility aids for visually impaired and blind people. However, there are also secondary mobility aids, and these fall into three distinct categories: electronic navigation systems (ENS), electronic travel aids (ETA) and vision based systems (VBS). VBS research in turn focuses on two main areas: the artificial vision/eye (interfacing a camera directly to the human visual cortex or optic nerve), and dynamic scene analysis (interpreting and analysing objects from moving images).

Object identification from moving images is always a challenging task. Gavrila and Philomin (1999) and Broggi et al. (2000) developed safe vehicle driving systems that identify objects such as traffic signs and pedestrians. Dobelle (2000) reported an artificial vision system, and New Scientist (Graham-Rowe 2000) reported that Professor Claude Veraart and his team had developed an artificial eye at the Catholic University of Louvain. In both of these cases, surgical operations are needed to implant the devices for the mobility aid. Other research has focused on analysing images, which does not involve any surgery (e.g. Meijer 1992, Snaith et al. 1998, Everingham et al. 1999).

Research Objective

In the context of previous and existing research in this area, our work focuses on dynamic scene analysis and has examined different techniques for identifying objects commonly found on the road or footpath. In this paper we present our research activities and findings towards a vision-based mobility aid. The proposed system will help blind and visually impaired people to navigate along roads. The main objective of the research is to integrate the scene-analysis algorithm into a visually guided system. This involves identifying not only the usual objects found on the road but also other specific features such as footpath edges, walls at the side of the road, the curve ending of the road, and side-barriers on the road.

Object Recognition Method

Two types of features are usually encountered on the road: static and moving. We have looked at the static features only. The basic static features considered in this research are lamp posts, bus stops, traffic signals, post boxes, waste bins, trees and edges; the specific features are footpath edges, walls at the side of the road, the curve ending of the road, and side-barriers on the road.

We applied the following algorithms to identify the basic static features: grey level template matching, shape model matching, scale invariant shape model matching and edge based shape model matching. Shape model matching gave the best results for the basic static features (Rahman & Hassan 2002). This technique prepares shape templates from the selected features: once a region of interest (ROI) is selected from the test image plane, the shape of the feature is generated with the aid of the contrast feature and background information. The shape models are correlated with the moving images, and the best match is given by the highest matching score. Although this method produced reasonably good results (a 75% successful matching rate), we had anticipated better; in particular, it did not produce satisfactory results when applied to the specific features. Our next research phase therefore looked at ways to improve the matching performance and to identify the specific features with greater accuracy.

In this next phase we applied the optimized generalised Hough transform (GHT) (Ulrich et al. 2001) to identify the objects, also taking pixel information/statistics (radiometric information) into account. This approach is novel in the sense that radiometric information has not been used alongside the GHT in previous research.
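
The core of the GHT step can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the authors' optimized implementation (Ulrich et al. 2001); the function names, the gradient-based edge detector and the angle quantisation are our assumptions.

```python
import numpy as np
from collections import defaultdict

def edge_points(img, thresh=50.0):
    """Return edge pixel coordinates and their gradient directions."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)
    phi = np.arctan2(gy[ys, xs], gx[ys, xs])   # gradient direction per edge pixel
    return ys, xs, phi

def build_r_table(model, n_bins=36, thresh=50.0):
    """Index displacement vectors to the reference point by quantised gradient angle."""
    ys, xs, phi = edge_points(model, thresh)
    ref_y, ref_x = np.array(model.shape) / 2.0   # reference point = model centre
    table = defaultdict(list)
    bins = ((phi + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for b, y, x in zip(bins, ys, xs):
        table[b].append((ref_y - y, ref_x - x))
    return table

def ght_vote(search, table, n_bins=36, thresh=50.0):
    """Accumulate votes; the accumulator peak marks the best candidate position."""
    acc = np.zeros(search.shape)
    ys, xs, phi = edge_points(search, thresh)
    bins = ((phi + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for b, y, x in zip(bins, ys, xs):
        for dy, dx in table[b]:
            ry, rx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape), acc
```

Each edge pixel in the search image votes for all reference-point positions consistent with its gradient direction, so an instance of the model shape produces a sharp peak in the accumulator even when parts of the object are occluded.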

Radiometric Properties Integration

Once the required object has been found within the search image, the recognised object is often not quite the same as the desired object. This is because the optimized GHT considers only the shape of the object in the recognition phase; it does not take into account any other pixel information or the radiometric properties of the object. To eliminate the possibility of false matching on the basis of shape alone, we have integrated the radiometric properties with the optimized GHT. Suppose the object model consists of n near-homogeneous areas; then statistical parameters such as the mean (M) and standard deviation (Sd) over each of these areas are evaluated as the radiometric properties.

The radiometric values for all the near-homogeneous parts of the object are then normalised to construct a set of values valid for images acquired under different weather, lighting and seasonal conditions. Once an object has been recognised within the search image, these normalised values are compared with the statistical parameters to finally confirm the recognised object.
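
A minimal sketch of this radiometric check follows. The paper specifies the mean (M) and standard deviation (Sd) over each near-homogeneous region; the particular min-max normalisation and the tolerance value below are our assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def region_stats(img, masks):
    """Mean (M) and standard deviation (Sd) of each near-homogeneous region,
    where masks is a list of boolean arrays selecting the n regions."""
    return [(float(img[m].mean()), float(img[m].std())) for m in masks]

def normalise(values):
    """Scale region means to [0, 1] so that images acquired under different
    lighting, weather and seasonal conditions become comparable."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def radiometric_match(model_means, candidate_means, tol=0.15):
    """Confirm a shape-based match by comparing normalised region means."""
    m = normalise(model_means)
    c = normalise(candidate_means)
    return bool(np.all(np.abs(m - c) <= tol))
```

Because only the normalised pattern of region intensities is compared, a uniform brightness shift (e.g. an overcast day) does not upset the confirmation, while a candidate whose regions have the wrong relative intensities is rejected despite a matching shape.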

Relative position of the Object

Once an object has been identified on the pathway, it is important to find its position relative to the person. We have used a novel approach here that we call the 'Rectangular Zoning' identification method. We draw a rectangular buffer zone of a specific width at the centre of the image frame, along with a vertical line. The region to the left of this rectangle is considered the 'left zone', the region to its right the 'right zone', and the area between the left and right boundaries the central zone. The centroid of the recognised object is then monitored to determine which zone it falls within. The outcome of this comparison uniquely identifies whether the object is at the left, right or centre of the walking path. This information is then used to alert the user to the existence of an object as well as its relative position.
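
The zoning test reduces to a comparison of the centroid's horizontal coordinate against the buffer boundaries. The sketch below illustrates this; the zone width and frame width are free parameters, and the values used here are assumptions.

```python
def zone_of(centroid_x, frame_width, zone_width=80):
    """Classify an object centroid as 'left', 'centre' or 'right' relative
    to a central buffer zone of the given width around the frame midline."""
    mid = frame_width / 2.0
    left_bound = mid - zone_width / 2.0    # left boundary of the buffer zone
    right_bound = mid + zone_width / 2.0   # right boundary of the buffer zone
    if centroid_x < left_bound:
        return 'left'
    if centroid_x > right_bound:
        return 'right'
    return 'centre'
```

For a 320-pixel-wide frame with an 80-pixel buffer, for example, a centroid at x = 10 is reported as 'left' and one at x = 160 as 'centre', which is then relayed to the user as an audible or tactile alert.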

Object Database based on Synthetic Images

Our research has looked at two ways of creating the object database. In the first phase we used object models created directly from the moving images. The matching results obtained with these models were not satisfactory; one reason was that additional edge information was produced once they had been segmented. To avoid this problem we created specific object models based on synthetic images. We created thirteen synthetic object models and used a sequence of 15,000 image frames, acquired under different weather, lighting and seasonal conditions, to test our matching algorithms. The matching results obtained from our research are illustrated in Table 1.

                                      Optimized GHT                  Optimized GHT with
                                                                     radiometric information
                                      Successful  False  No          Successful  False  No
                                      matching  matching  matching   matching  matching  matching
Normal lighting condition                75%       15%      10%         90%        6%       4%
Specific lighting conditions             70%       20%      10%         85%       10%       5%
(shades, poor lighting and
varying weather conditions)

Table 1: Results based on the optimized GHT and the optimized GHT with radiometric information

Conclusions and Continuing Work

In this paper we have discussed methods for the automatic detection of objects on the road/footpath. We have investigated different ways of identifying objects and their positions for a VBS. This ongoing research will look at suitable hardware devices (DSP and fast video processors) to interface with the prototype VBS. In conclusion, the results obtained are very promising and form the basis for further research on a vision based mobility aid.

Acknowledgement

This research project has been funded by the Engineering and Physical Sciences Research Council (EPSRC), UK.

References

A. Broggi, M. Bertozzi, A. Fascioli and M. Sechi, 2000, Shape-based pedestrian detection, Proceedings of IEEE Intelligent Vehicles Symposium, 215-220

D. Graham-Rowe (reported by), 2000, Hitting the nerve, New Scientist Magazine, 29th April issue, 10

D. M. Gavrila and V. Philomin, 1999, Real-time object detection for smart vehicles, Proceedings of IEEE International Conference on Computer Vision, 87-93

M. R. Everingham, B. T. Thomas and T. Troscianko, 1999, Head-mounted mobility aid for low vision using scene classification techniques, International Journal of Virtual Reality, 3(4), 3-12

M. Snaith, D. Lee, P. Probert, 1998, A low-cost system using sparse vision for navigation in the urban environment, Image and Vision Computing, 16(4), 225-233

M. Ulrich, C. Steger, A. Baumgartner, H. Ebner, 2001, Real-time object recognition in digital images for industrial applications, Optical 3-D Measurement Techniques, Armin Grun, Heribert Kahmen (Editors), 308-318

P. B. L. Meijer, 1992, An experimental system for auditory image representations, IEEE Transactions on Biomedical Engineering, 39(2), 112-121

S. Rahman and Q. K. Hassan, 2002, Object detection with vision based system: a secondary aid for visually impaired and blind people, 6th World Conference on Systemics, Cybernetics and Informatics (SCI 2002), Image Processing and Vision based Applications session; Proceedings, International Institute of Informatics and Systemics (IIIS), Florida, USA

W. H. Dobelle, 2000, Artificial vision for the blind by connecting a television camera to the visual cortex, American Society of Artificial Internal Organs (ASAIO) Journal, 46, 3-9




Reprinted with author(s) permission. Author(s) retain copyright.