CSUN Assistive Technology Conference

    The Premier AT Conference

Enabling Autonomous Walking with Wearables and Sensor Fusion

Walking independently from point A to point B in the real world is one of the more challenging tasks for blind and visually impaired people. Sensor fusion combines data from a device's different onboard sensors to produce a more accurate final output. Autonomous vehicles, for example, pair LiDAR sensors with traditional cameras to improve overall accuracy: their AI models are trained not on images alone but on a combination of image and sensor data. We're seeing a similar trend with smartphones and wearables. The newer iPhones (12 Pro and above), iPads, and some wearables now come bundled with sensors like LiDAR.

We'll explore, at a very high level, how today's cutting-edge AI models work with data from these sensors alongside regular 2D image data. This combination of AI models and sensors on wearables means blind and visually impaired people will be able to get around far more independently than before. Advances in Object Detection and Augmented Reality, combined with LiDAR, will make it possible to detect obstacles accurately, simply because images and sensor data can be fused. The presentation will walk attendees through this specific scenario, which is possible today. We'll also discuss advances in Image Segmentation, how autonomous cars use segmentation today to differentiate between the things they see, and how it could apply to the autonomous walking problem by segmenting different items on a sidewalk and informing users accordingly. The audience will experience how this works through a simple demo on a smartphone.
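To make the fusion idea concrete: one simple way to combine a 2D object detection with LiDAR data is to look up the depth readings inside the detection's bounding box and report a robust distance estimate. The function below is a minimal illustrative sketch, not code from the session; the `depth_map` array and `obstacle_distance` helper are hypothetical names, and a real pipeline would get the box from a detection model and the depth map from the device's LiDAR API.

```python
import numpy as np

def obstacle_distance(depth_map, box):
    """Fuse a 2D detection box with a per-pixel LiDAR depth map (meters).

    box is (x_min, y_min, x_max, y_max) in pixel coordinates.
    Returns an estimated distance to the obstacle, or None if the
    LiDAR had no valid readings inside the box.
    """
    x0, y0, x1, y1 = box
    region = depth_map[y0:y1, x0:x1]
    valid = region[region > 0]  # assume 0 marks pixels with no depth return
    if valid.size == 0:
        return None
    # Median is robust to background pixels that leak into the box
    return float(np.median(valid))

# Toy example: a 6x6 depth map, background ~8 m, obstacle patch ~2 m
depth = np.full((6, 6), 8.0)
depth[2:5, 2:5] = 2.0
print(obstacle_distance(depth, (2, 2, 5, 5)))  # → 2.0
```

The same estimate could then be spoken to the user ("obstacle, two meters ahead"), which is the kind of scenario the demo illustrates.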
  • Information & Communications Technology
  • Research & Development
  • Transportation
Audience Level
Session Summary (Abstract)
The session showcases how the AI and sensors being built today for autonomous cars can be adapted to help blind and visually impaired people get from point A to point B independently. Accessible AR combined with the inbuilt sensors on wearables may make this a reality in the near future.
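The segmentation idea mentioned above, labeling every pixel of a scene with a class such as "sidewalk" or "pole", reduces at its core to a per-pixel argmax over class scores produced by a model. The sketch below is a toy illustration under that assumption; the class list, `segment`, and `announce` names are hypothetical, and a real system would take its scores from a trained segmentation network.

```python
import numpy as np

# Hypothetical class list for a sidewalk scene
CLASSES = ["sidewalk", "person", "pole", "vegetation"]

def segment(scores):
    """scores: (H, W, C) array of per-pixel class scores from a model.
    Returns an (H, W) map of class indices (the per-pixel argmax)."""
    return np.argmax(scores, axis=-1)

def announce(label_map):
    """Summarize which classes appear in the scene, e.g. for speech output."""
    present = sorted(set(label_map.flatten()))
    return [CLASSES[i] for i in present]

# Toy scores: a 2x2 image, left column is sidewalk, right column is a pole
scores = np.zeros((2, 2, 4))
scores[:, 0, 0] = 1.0   # sidewalk class wins on the left
scores[:, 1, 2] = 1.0   # pole class wins on the right
print(announce(segment(scores)))  # → ['sidewalk', 'pole']
```

Announcing the detected classes (and, with LiDAR, their distances) is how such a system could inform a user about what lies on the sidewalk ahead.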
Session Type
General Track  
  • Artificial Intelligence (AI) & Machine Learning (ML)
  • Independent Living
  • Virtual Reality (VR), Augmented Reality (AR), & Cross/Mixed Reality (XR)


  • Karthik Kannan
