AI-based navigation and handling assistance

Description

A heterogeneous and dynamic environment often poses a great challenge for people with visual impairments. Although they rely on their remaining senses and learn basic techniques for orientation and mobility, many scenarios arise in everyday life in which they depend on external support. The concrete goal of the project is the development of a wearable assistance system that is not tailored to one particular application or situation, but rather covers different scenarios in different areas of everyday life. The assistance system comprises input devices (cameras and a microphone), a computing unit, and output devices (headphones and tactile devices) that relay information to the wearer. In this research project, the aim is to develop scene understanding and decision making algorithms for wearable devices that can assist visually impaired and blind people, based on artificial intelligence (AI) and machine learning (ML). These algorithms support people with visual impairments in perceiving their surroundings and in performing targeted actions for orientation, movement and action. The AI algorithms shall not only capture the dynamic environment, but also observe the wearer's behavior and analyze it in relation to the current situation. Compared to the state of the art, the algorithms should be not only fast but also energy-efficient, in order to keep the weight of the device down and thus not impair the wearer's mobility.
 

Goals

  • Train and evaluate deep/machine learning models for semantic scene segmentation and object detection (a minimal inference sketch follows this list)
  • Model the detected static and dynamic objects in an interaction graph and build graph neural networks for scene understanding (see the graph sketch after this list)
  • Represent the interaction graph as a spatio-temporal graph of the environment that can be used for safe planning and decision making
  • Evaluate the developed models and publish the results
  • Test the developed algorithms in real and complex scenarios
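As a starting point for the first goal, the sketch below runs off-the-shelf pretrained torchvision networks for semantic segmentation and object detection on a single camera frame. The specific models (DeepLabV3, Faster R-CNN) and the random input tensor are illustrative assumptions only, standing in for the project's own trained, energy-efficient networks.

import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

# Pretrained networks as placeholders for the project's own task-specific models.
seg_weights = DeepLabV3_ResNet50_Weights.DEFAULT
seg_model = deeplabv3_resnet50(weights=seg_weights).eval()

det_weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
det_model = fasterrcnn_resnet50_fpn(weights=det_weights).eval()

frame = torch.rand(3, 480, 640)  # stand-in for one RGB camera frame with values in [0, 1]

with torch.no_grad():
    # Semantic segmentation: per-pixel class indices of the scene.
    seg_logits = seg_model(seg_weights.transforms()(frame).unsqueeze(0))["out"]
    seg_map = seg_logits.argmax(dim=1)  # shape (1, H', W')

    # Object detection: bounding boxes, class labels and confidence scores.
    detections = det_model([det_weights.transforms()(frame)])[0]
    boxes, labels, scores = detections["boxes"], detections["labels"], detections["scores"]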
     
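For the graph-based goals, the following sketch illustrates one possible way to encode detected objects as nodes of a small interaction graph and pass it through a two-layer graph convolutional network with PyTorch Geometric. The node features, the fully connected edge set and the SceneGNN class are hypothetical choices; temporal edges linking the same object across successive frames would extend this toward the spatio-temporal graph named above.

import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Hypothetical per-object features for one frame: [x, y, vx, vy, class_id].
node_features = torch.tensor([
    [1.0, 2.0,  0.0, 0.0, 3.0],   # static object, e.g. a parked car
    [0.5, 1.5,  0.2, 0.1, 1.0],   # dynamic object, e.g. a pedestrian
    [2.0, 0.5, -0.1, 0.0, 0.0],   # the wearer
], dtype=torch.float)

# Edges connect objects whose interaction matters; here simply fully connected.
edge_index = torch.tensor([[0, 0, 1, 1, 2, 2],
                           [1, 2, 0, 2, 0, 1]], dtype=torch.long)

scene_graph = Data(x=node_features, edge_index=edge_index)

class SceneGNN(torch.nn.Module):
    """Two graph-convolution layers producing per-object scene embeddings."""
    def __init__(self, in_dim=5, hidden_dim=16, out_dim=8):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        return self.conv2(h, data.edge_index)

model = SceneGNN()
embeddings = model(scene_graph)  # (num_objects, out_dim), input to planning and decision making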

Keywords

Scene Understanding
Decision Making
Computer Vision
Graph Neural Network
Autonomous Systems
 

Contact

Prof. Dr.-Ing. Naim Bajcinca
Phone: +49 (0)631/205-3230
Mobile: +49 (0)172/614-8209
Fax:  +49 (0)631/205-4201
naim.bajcinca(at)mv.uni-kl.de