UAVs are widely used in practical applications such as reconnaissance and search and rescue, missions that typically require experienced operators. Autonomous drone navigation could help in situations where the environment is unknown, GPS or radio signals are unavailable, and no 3D model exists from which to preplan a trajectory. Traditional navigation methods employ multiple sensors: LiDAR, sonar, inertial measurement units (IMUs), and cameras, which increases the weight and cost of the drone. This work focuses on autonomous drone navigation from point A to point B in a simulator using only visual information from a monocular camera. The solution uses a depth estimation model to build an occupancy grid map of the surrounding area and an A* path planning algorithm to find optimal paths to the goal while steering around obstacles. The simulation is conducted with AirSim in Unreal Engine. With this work, we propose a framework and scenarios in three open-source virtual environments of varying complexity for testing and comparing vision-based autonomous UAV navigation methods. For each environment, depth estimation models fine-tuned on synthetic RGB and depth image data were used, yielding a noticeable improvement in accuracy: the Mean Absolute Percentage Error (MAPE) fell from 120.45% to 33.41% in AirSimNH, from 70.09% to 8.04% in Blocks, and from 121.94% to 32.86% in MSBuild2018. While the proposed navigation framework reaches its goals in 38.89%, 87.78%, and 13.33% of runs in the AirSimNH, Blocks, and MSBuild2018 environments, respectively, when fed depth images directly from AirSim, the variant using pre-trained depth estimation models fails to reach any end point of the scenarios. The fine-tuned depth estimation models improve on this, raising the share of reached goals by 3.33% for AirSimNH and 72.22% for Blocks. These findings highlight the benefits of adapting vision-based models to specific environments, improving UAV autonomy in visually guided navigation tasks.
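The abstract names three computational steps: evaluating depth estimates with MAPE, projecting an estimated depth image into a top-down occupancy grid map, and planning with A* over that grid. Below is a minimal, self-contained Python sketch of those steps. Everything here is illustrative: the function names (`mape`, `mark_obstacles`, `astar`), grid geometry, and parameters are assumptions rather than the paper's code, and the projection treats each depth value as range along the pixel ray (AirSim also offers planar depth, which would first need conversion).

```python
import heapq
import math
import numpy as np

def mape(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean Absolute Percentage Error (%) between predicted and ground-truth depth."""
    mask = true > 0  # skip invalid (zero) ground-truth pixels
    return 100.0 * float(np.mean(np.abs(pred[mask] - true[mask]) / true[mask]))

def mark_obstacles(grid: np.ndarray, depth_row: np.ndarray,
                   hfov_deg: float = 90.0, cell_m: float = 0.5,
                   max_range_m: float = 40.0) -> None:
    """Project one depth-image row into a top-down occupancy grid.

    Assumes the camera sits at row 0, centre column, looking toward
    increasing row indices, and that depth is range along each pixel ray.
    """
    n = depth_row.shape[0]
    f = (n / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)  # focal length, px
    for i, d in enumerate(depth_row):
        if not np.isfinite(d) or d <= 0 or d > max_range_m:
            continue
        bearing = math.atan2(i - n / 2.0, f)  # angle off the optical axis
        row = int(d * math.cos(bearing) / cell_m)                 # forward
        col = int(d * math.sin(bearing) / cell_m) + grid.shape[1] // 2
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = 1  # mark cell as occupied

def astar(grid: np.ndarray, start: tuple, goal: tuple):
    """4-connected A* over a binary occupancy grid; returns a cell path or None."""
    def h(a, b):  # Manhattan distance is admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(h(start, goal), 0, start, None)]  # (f, g, cell, parent)
    parents, best_g = {}, {start: 0}
    while frontier:
        _, g, cur, par = heapq.heappop(frontier)
        if cur in parents:        # stale entry: already expanded more cheaply
            continue
        parents[cur] = par
        if cur == goal:           # walk parent links back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and g + 1 < best_g.get(nxt, math.inf)):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt, goal), g + 1, nxt, cur))
    return None  # no free path to the goal

if __name__ == "__main__":
    grid = np.zeros((80, 80), dtype=np.uint8)
    # Synthetic depth row: a flat wall 25 m ahead of the camera.
    mark_obstacles(grid, np.full(640, 25.0))
    print(astar(grid, start=(0, 40), goal=(79, 40)))
```

The A* here uses lazy deletion (stale heap entries are skipped when popped) together with the Manhattan heuristic, so the returned path is optimal in cell steps; a real navigation stack would additionally inflate obstacles by the drone's radius before planning.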


    Title: A Framework for Autonomous UAV Navigation Based on Monocular Depth Estimation

    Contributors:

    Publication date: 2025

    Media type: Journal article

    Format: Electronic resource

    Language: Unknown

    Similar titles:

    Autonomous Navigation Algorithm of Monocular UAV Based on Depth Estimation and Robust VIO

    Zhen, XiangYu / Deng, ZhongLiang / Lou, BoYang et al. | IEEE | 2024


    Monocular SLAM Position Scale Estimation for Quadrotor Autonomous Navigation

    Nieto-Hernandez, L. / Gomez-Casasola, Angel A. / Rodriguez-Cortes, H. | IEEE | 2019


    MoNA Bench: A Benchmark for Monocular Depth Estimation in Navigation of Autonomous Unmanned Aircraft System

    Pan, Yongzhou / Liu, Binhong / Liu, Zhen et al. | DOAJ | 2024


    Autonomous aerial navigation using monocular visual‐inertial fusion

    Lin, Yi / Gao, Fei / Qin, Tong et al. | British Library Online Contents | 2018


    Autonomous Robust Navigation System for MAV Based on Monocular Cameras

    Caldas, Kenny A. Q. / Benevides, Joao R. S. / Inoue, Roberto S. et al. | IEEE | 2022