In object detection for autonomous driving and robotic applications, conventional RGB cameras often fail to sense objects under extreme illumination and on texture-less surfaces, while LIDAR sensors often fail to sense small or thin objects located far from the sensor. For these reasons, an intuitive choice for perception system designers is to install multiple sensors of different modalities in order to increase, at least in theory, detection robustness. In this paper we analyze an object detector that performs early fusion of RGB images and LIDAR 3D points. Our goal is to go beyond the intuition that simply adding more sensor modalities improves performance, and instead to analyze, quantify, and understand the performance differences, strengths, and weaknesses of the object detector under three modalities: 1) RGB-only, 2) LIDAR-only, and 3) early fusion (RGB and LIDAR), and under two key scene variables: 1) distance of objects from the sensor, which governs LIDAR point density, and 2) illumination (darkness). We also propose methodologies to generate 2D weak semantic training masks, as well as a methodology to evaluate object detection performance separately at different distance ranges; the latter provides a more reliable performance measure and correlates well with per-object LIDAR point density.
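As a rough, hypothetical illustration of the distance-range evaluation described above (not code from the paper), the sketch below bins ground-truth objects by their distance from the sensor and reports recall per bin; the bin edges, function name, and toy data are assumptions chosen for clarity.

import numpy as np

# Hypothetical distance bins in metres; the exact ranges used in the paper are
# not reproduced here, these edges are assumptions for illustration only.
DISTANCE_BINS = [(0.0, 20.0), (20.0, 40.0), (40.0, float("inf"))]

def recall_per_range(gt_distances, gt_matched):
    """Compute detection recall separately for each distance range.

    gt_distances: Euclidean distance (m) from the sensor to each ground-truth object.
    gt_matched:   True where the object was matched by a detection (e.g. IoU above
                  the evaluation threshold), False where it was missed.
    """
    gt_distances = np.asarray(gt_distances, dtype=float)
    gt_matched = np.asarray(gt_matched, dtype=bool)
    recalls = []
    for lo, hi in DISTANCE_BINS:
        in_bin = (gt_distances >= lo) & (gt_distances < hi)
        # Recall within this range; NaN if no ground-truth object falls in it.
        recalls.append(float(gt_matched[in_bin].mean()) if in_bin.any() else float("nan"))
    return recalls

# Toy example: only the distant object (55 m, sparse LIDAR returns) is missed.
print(recall_per_range([5.0, 15.0, 25.0, 55.0], [True, True, True, False]))

Such a per-range breakdown separates easy near-range cases (dense LIDAR returns) from hard far-range ones (sparse returns), which a single aggregate score would hide.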
Understanding Strengths and Weaknesses of Complementary Sensor Modalities in Early Fusion for Object Detection
2020 IEEE Intelligent Vehicles Symposium (IV), pp. 1785-1792
2020-10-19
2598425 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2020
Understanding the strengths and weaknesses of Britain's road safety performance | TIBKAT | 2016