Road detection is an important task in autonomous navigation systems. In this paper, we propose a road detection framework induced by the inverse depth of a LiDAR point cloud. The framework fuses a 3-D LiDAR with a monocular camera: the 3-D LiDAR point cloud is projected onto the camera's image frame to exploit both range and color information. For road detection, we propose an inverse-depth-aware fully convolutional neural network operating on the image, together with a line-scanning strategy operating on an inverse-depth histogram of the LiDAR point cloud. A conditional random field fusion method then integrates the two road detection results. Our method is evaluated on the KITTI-Road benchmark. Experiments demonstrate that it achieves state-of-the-art performance among all comparable methods that have reported results on this benchmark.
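As a rough illustration of the inverse-depth projection step described in the abstract, the sketch below projects a LiDAR point cloud into the camera image and builds a sparse inverse-depth map. The calibration matrices, the near-plane threshold, and the function name are placeholder assumptions for illustration only, not values or code from the paper.

```python
import numpy as np

# Hypothetical KITTI-style calibration: camera intrinsics K and
# LiDAR-to-camera extrinsics [R | t]. Values are placeholders.
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # rotation  LiDAR -> camera
t = np.array([0.0, -0.08, -0.27])   # translation LiDAR -> camera (meters)

def project_to_inverse_depth(points_lidar, image_shape):
    """Project LiDAR points into the image plane and return a sparse
    inverse-depth map: 1/z at every pixel hit by a projected point."""
    h, w = image_shape
    # Transform points from the LiDAR frame into the camera frame.
    pts_cam = points_lidar @ R.T + t
    # Keep only points clearly in front of the camera (assumed 0.5 m near plane).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.5]
    # Perspective projection onto the image plane.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    # Discard projections that fall outside the image.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    inv_depth_map = np.zeros((h, w), dtype=np.float32)
    inv_depth_map[v[inside], u[inside]] = 1.0 / pts_cam[inside, 2]
    return inv_depth_map

# Usage with synthetic points; a real pipeline would load a LiDAR scan.
points = np.random.uniform([-10, -2, 2], [10, 2, 40], size=(5000, 3))
inv_depth = project_to_inverse_depth(points, image_shape=(375, 1242))
```

Such a sparse inverse-depth map could then feed both the inverse-depth-aware network and a per-row histogram for line scanning, as the abstract outlines; the details of those stages are described in the paper itself.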
3-D LiDAR + Monocular Camera: An Inverse-Depth-Induced Fusion Framework for Urban Road Detection
IEEE Transactions on Intelligent Vehicles, vol. 3, no. 3, pp. 351-360
2018-09-01
Article (Journal)
Electronic Resource
English