Developing automatic mobility assistance systems for the safe navigation of visually impaired pedestrians in unstructured environments presents several complex challenges. The dynamic, unpredictable conditions and the diverse range of obstacles encountered by a pedestrian walking on an unstructured road add to the difficulties of automatic navigation. Another significant requirement for safe mobility assistance is real-time performance, which demands lightweight architectures and fast processing. In this paper, we propose an obstacle detection framework that combines a road object detection model and a road anomaly detection model, using parallel processing for fast real-time performance. The models are based on convolutional neural network backbones, use transfer learning, and are trained on custom datasets manually collected in unstructured environments. The proposed system addresses the complexities of camera-based automatic navigation by detecting obstacles and, based on their position and size, alerting the user via audio feedback.
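To make the described pipeline structure concrete, the following Python sketch shows one way the two detectors could be run in parallel on each camera frame and their outputs turned into position- and size-based audio alerts. It is an illustrative sketch only: the detector functions, thresholds, and alert rules are hypothetical placeholders and are not taken from the paper.

```python
# Illustrative sketch: the detector functions, thresholds, and alert rules are
# hypothetical stand-ins, not the models or parameters described in the paper.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str        # e.g. "vehicle", "pothole"
    x_center: float   # normalized horizontal position in the frame, in [0, 1]
    area: float       # normalized bounding-box area, in [0, 1]


def detect_road_objects(frame) -> List[Detection]:
    """Placeholder for the road object detection model (CNN backbone)."""
    return []


def detect_road_anomalies(frame) -> List[Detection]:
    """Placeholder for the road anomaly detection model (CNN backbone)."""
    return []


def position_to_direction(x_center: float) -> str:
    """Map the horizontal position of a detection to a spoken direction."""
    if x_center < 0.33:
        return "left"
    if x_center > 0.66:
        return "right"
    return "ahead"


def process_frame(frame, executor: ThreadPoolExecutor) -> List[str]:
    """Run both detectors in parallel on one frame and build audio alert strings."""
    objects_future = executor.submit(detect_road_objects, frame)
    anomalies_future = executor.submit(detect_road_anomalies, frame)
    detections = objects_future.result() + anomalies_future.result()

    alerts = []
    for det in detections:
        # Hypothetical size threshold: only large (i.e. nearby) obstacles trigger an alert.
        if det.area > 0.05:
            alerts.append(f"{det.label} {position_to_direction(det.x_center)}")
    return alerts


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=2) as executor:
        dummy_frame = None  # stands in for a camera frame (e.g. a NumPy image array)
        for alert in process_frame(dummy_frame, executor):
            print(alert)  # in a real system this would be sent to a text-to-speech engine
```

In this sketch a thread pool is used purely to illustrate running the two models concurrently; an actual real-time implementation might instead use separate processes or GPU streams, depending on the deployment hardware.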
Camera-based Mobility Framework for Visually Impaired Pedestrians in Unstructured Environments
2024-06-02
1,922,301 bytes
Conference paper
Electronic Resource
English
Tactile surfaces for visually impaired pedestrians
British Library Conference Proceedings | 1992
Safety of Elderly and Visually Impaired Pedestrians
British Library Conference Proceedings | 2001