We present an integrated multi-modal safety assessment framework for autonomous mobile robots navigating complex environments. Our system combines traffic sign recognition, dynamic object detection, and semantic environment segmentation to build a thorough understanding of the scene and thereby support safety assessment. By integrating multiple perception modalities, we achieve a more robust approach to safe navigation than single-module systems provide. A custom-built Convolutional Neural Network is trained on an augmented dataset that merges the German Traffic Sign Recognition Benchmark (GTSRB) with newly collected Albanian traffic sign images, achieving state-of-the-art performance with reduced computational complexity. To guide navigation in complex terrains with dynamic objects, we implement a YOLO-based algorithm optimized for pedestrian and vehicle detection. A DeepLabv3+-based semantic segmentation component classifies the scene into contextual areas relevant to safe navigation. The outputs of the individual modules are fused by a network-based safety classifier that labels scenes as "safe" or "unsafe" for navigation. The resulting multi-modal system yields improved navigation safety assessment, accurate object classification and detection, and context-aware navigation.
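The fusion step the abstract describes lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation: it assumes each perception module is reduced to a fixed-length feature vector (the GTSRB's 43 sign classes, a hypothetical 8-value pedestrian/vehicle detection summary, and a 19-class segmentation histogram as in Cityscapes) and that a small feed-forward network maps their concatenation to a safe/unsafe decision. All class counts, layer sizes, and names here are assumptions.

```python
# Sketch (not the authors' code) of a late-fusion safety classifier:
# outputs of the three perception modules are concatenated and fed to a
# small feed-forward network that labels a scene safe or unsafe.
import torch
import torch.nn as nn

class SafetyFusionClassifier(nn.Module):
    # Feature dimensions are assumptions: 43 = GTSRB sign classes,
    # 8 = hypothetical detection summary, 19 = Cityscapes-style classes.
    def __init__(self, n_sign_classes=43, n_det_features=8, n_seg_classes=19):
        super().__init__()
        in_dim = n_sign_classes + n_det_features + n_seg_classes
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # logits for {safe, unsafe}
        )

    def forward(self, sign_probs, det_summary, seg_fractions):
        # sign_probs:    (B, n_sign_classes) softmax scores from the sign CNN
        # det_summary:   (B, n_det_features) e.g. counts / nearest distances
        #                of pedestrians and vehicles from the YOLO detector
        # seg_fractions: (B, n_seg_classes) per-class pixel fractions taken
        #                from the DeepLabv3+ segmentation map
        x = torch.cat([sign_probs, det_summary, seg_fractions], dim=1)
        return self.net(x)

# Example forward pass with dummy inputs.
model = SafetyFusionClassifier()
sign = torch.softmax(torch.randn(1, 43), dim=1)
det = torch.randn(1, 8)
seg = torch.softmax(torch.randn(1, 19), dim=1)
label = model(sign, det, seg).argmax(dim=1)  # 0 = safe, 1 = unsafe
```

A late-fusion design of this kind keeps the three modules independently trainable: the classifier only sees their summarized outputs, so any one module can be swapped out without retraining the others.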
Safety Analysis for Autonomous Mobile Robot Navigation Using Traffic Sign Recognition, Dynamic Object Detection, and Semantic Segmentation
18.12.2024
4,209,715 bytes
Article (conference)
Electronic resource
English
Similar items:
Semantic Segmentation to Develop an Indoor Navigation System for an Autonomous Mobile Robot
BASE | 2020
Object Detection Techniques Applied on Mobile Robot Semantic Navigation
BASE | 2014
Traffic Sign Detection and Recognition Using Ensemble Object Detection Models
Springer Verlag | 2023
Sign detection for autonomous navigation
SPIE | 2003