Obstacle avoidance is a vital component of safe autonomous driving. When a vehicle travels from an arbitrary start position to a target position in its environment, the planned route must avoid both static and moving obstacles. Accurate depth for each obstacle in the scene contributes directly to obstacle avoidance. In recent years, precise depth estimation systems have emerged thanks to notable advances in deep neural networks and hardware. Many depth estimation methods for autonomous vehicles rely on lasers, structured light, or other reflections from object surfaces to capture depth point clouds, reconstruct surfaces, and estimate scene depth maps. However, estimating precise depth maps remains challenging because these processes are computationally complex and time-consuming. In contrast, image-based depth estimation approaches have recently attracted attention and apply to a broad range of applications. The vast majority of camera-based methods estimate a depth map for the entire input image using binocular or 3D cameras, which is also time-consuming. In this paper, a novel approach is proposed that predicts the depth of the obstacle ahead using only a single 2D monocular camera. In the first stage, obstacle bounding boxes are extracted by a deep neural network. Instead of computing a depth map over all image pixels, the average depth within each bounding box is computed and assigned as its label. The labels and feature vectors (the four bounding-box values) then serve as training data for the proposed network, which maps each bounding-box feature vector to an estimated depth value. The results suggest that the model can reasonably predict obstacle depths on the KITTI dataset.
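
The abstract describes the two-stage pipeline only at a high level. As a rough illustration, the sketch below shows how the stage-two labels could be built by averaging a ground-truth depth map inside each detected bounding box, and how a small regressor could map the four bounding-box values to a depth estimate. The framework (PyTorch), the helper and class names (box_mean_depth, DepthFromBoxNet), and all layer sizes and hyperparameters are assumptions chosen for illustration; the abstract does not specify KDepthNet's actual architecture.

import numpy as np
import torch
import torch.nn as nn

def box_mean_depth(depth_map: np.ndarray, box: tuple) -> float:
    # Average the ground-truth depth inside one (x1, y1, x2, y2) box;
    # zero-valued pixels (no LiDAR return) are ignored. How the paper
    # handles missing returns is an assumption.
    x1, y1, x2, y2 = (int(v) for v in box)
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[patch > 0]
    return float(valid.mean()) if valid.size else 0.0

class DepthFromBoxNet(nn.Module):
    # Illustrative regressor from a 4-value box vector to one depth value.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        return self.net(boxes).squeeze(-1)

# Toy training step: the box would come from a stage-one detector,
# the label from averaging a KITTI ground-truth depth map inside it.
depth_gt = np.zeros((375, 1242), dtype=np.float32)   # KITTI-sized depth map (toy values)
depth_gt[150:220, 100:180] = 12.7                    # pretend depth, in metres
label = box_mean_depth(depth_gt, (100, 150, 180, 220))

model = DepthFromBoxNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
boxes = torch.tensor([[100.0, 150.0, 180.0, 220.0]]) # (x1, y1, x2, y2)
labels = torch.tensor([label])
loss = nn.functional.mse_loss(model(boxes), labels)
opt.zero_grad(); loss.backward(); opt.step()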


    Title: KDepthNet: Mono-Camera Based Depth Estimation for Autonomous Driving

    Additional title: SAE Technical Papers

    Contributors: Tavakolian, Niloofar / Fekri, Pedram / Zadeh, Mehrdad et al.

    Conference: WCX SAE World Congress Experience ; 2022

    Publication date: 2022-03-29

    Type of media: Conference paper

    Type of material: Print

    Language: English




    Similar titles:

    KDepthNet: Mono-Camera Based Depth Estimation for Autonomous Driving

    Tavakolian, Niloofar / Fekri, Pedram / Zadeh, Mehrdad et al. | British Library Conference Proceedings | 2022


    Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios

    Zheng, Lianqing / Ai, Wenjin / Ma, Zhixiong | SAE Technical Papers | 2023


    Towards Depth Perception from Noisy Camera based Sensors for Autonomous Driving

    Nagiub, Mena / Beuth, Thorsten | TIBKAT | 2022


    A variational approach for estimation of monocular depth and camera motion in autonomous driving

    Hu, Huijuan / Hu, Chuan / Zhang, Xuetao | SAGE Publications | 2022
