We propose a new camera-lidar fusion method for road detection in which a spherical coordinate transformation is introduced to reduce the gap between the camera image data and the point cloud of the 3D lidar. The camera’s color data and the 3D lidar’s height data are transformed into the same spherical coordinate system and then input to a convolutional neural network for segmentation. Faster segmentation is possible due to the reduced size of the input data. To increase detection accuracy, a modified SegNet that expands the receptive field of the network is used. Using the KITTI dataset, we present experimental results that show the usefulness of the proposed method.
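The abstract describes projecting the lidar data into a spherical (azimuth–elevation) grid so that its height values share a 2D representation with the camera data before being fed to the network. The sketch below illustrates one way such a projection could look; the function name, grid resolutions, and vertical field of view are illustrative assumptions (roughly a Velodyne HDL-64E setup), not the paper's exact parameters.

```python
import numpy as np

def lidar_to_spherical_grid(points, h_res_deg=0.2, v_res_deg=0.4,
                            v_fov_deg=(-24.9, 2.0)):
    """Project 3D lidar points (N, 3) onto a 2D spherical grid of heights.

    Each cell stores the point's z value (height). The camera color
    channels would be resampled into the same azimuth-elevation grid so
    that both modalities can be stacked as CNN input channels.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)

    # Azimuth (horizontal angle) and elevation (vertical angle) in degrees.
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))

    # Discretize the angles into grid indices.
    col = ((azimuth + 180.0) / h_res_deg).astype(int)
    row = ((v_fov_deg[1] - elevation) / v_res_deg).astype(int)

    n_cols = int(360.0 / h_res_deg)
    n_rows = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg) + 1

    # Fill the grid; points mapping to the same cell simply overwrite
    # each other in this sketch.
    grid = np.full((n_rows, n_cols), np.nan, dtype=np.float32)
    valid = (row >= 0) & (row < n_rows) & (col >= 0) & (col < n_cols)
    grid[row[valid], col[valid]] = z[valid]  # height channel
    return grid
```

Because the grid covers only the lidar's angular field of view at the sensor's angular resolution, it is much smaller than a full-resolution camera image, which is consistent with the faster segmentation claimed in the abstract.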
Fast Road Detection by CNN-Based Camera–Lidar Fusion and Spherical Coordinate Transformation
IEEE Transactions on Intelligent Transportation Systems; 22, 9; 5802-5810
01.09.2021
3177719 bytes
Article (Journal)
Electronic Resource
English
Multi-Stage Residual Fusion Network for LIDAR-Camera Road Detection
British Library Conference Proceedings | 2019
Vehicle detection based on LiDAR and camera fusion
IEEE | 2014