Unmanned aerial vehicle (UAV) swarms have seen extensive deployment across a spectrum of military and civilian applications in recent years. The success of UAV missions is contingent upon robust communication and collaboration among the UAVs, which has become a pivotal area of technical research. However, in environments rife with communication uncertainties, both subjective and objective environmental factors can disrupt UAV communication and collaboration. Such interference can prevent UAVs from accurately transmitting and receiving information, thereby jeopardizing the success of collaborative missions. To address this challenge, a fault-tolerant UAV collaboration method grounded in reinforcement learning and semantic communication was developed for the leader-follower UAV mission pattern in environments with limited communication capabilities. To enhance the follower UAV's reinforcement learning-based following strategy, a semantic communication mechanism coupled with a Proximal Policy Optimization (PPO) method was implemented, enabling the prediction of the leader UAV's actions. Under normal communication conditions, the follower UAV received data transmitted by the leader and executed the corresponding command operations. When communication was subject to interference, the follower UAV leveraged historical flight and communication data to extract semantic information, which it then used to autonomously predict the leader UAV's future flight paths. By integrating the learned and predicted behavior patterns of the leader, the follower UAV was able to make informed decisions. The proposed scheme, which required no additional anti-interference equipment, enabled the UAV swarm to counteract communication interference and improve collaboration efficiency in challenging, obstructed communication contexts. Experimental studies show that, compared with benchmark methods, the proposed scheme not only withstands complex environments with interference but also significantly improves the efficiency of UAV leader-following operations and the overall mission success rate. This research provides valuable insights into viable solutions for future UAV swarm collaboration within communication-constrained and interfered environments.
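The abstract describes a follower that executes leader commands when the link is healthy and falls back to predicting the leader's behavior from historical data when communication is interfered. The sketch below illustrates that decision logic only; the class and function names (FollowerAgent, semantic_features, the `policy` callable standing in for a PPO-trained actor) are assumptions for illustration and not the authors' implementation, whose semantic encoding and network details are not given in the abstract.

    # Minimal sketch of the fault-tolerant follower decision loop, assuming a
    # pre-trained PPO actor is available as a callable `policy`.
    from collections import deque
    import numpy as np

    class FollowerAgent:
        def __init__(self, policy, history_len=16):
            # `policy` stands in for a PPO-trained actor mapping a semantic
            # summary of recent leader behavior to a predicted leader action.
            self.policy = policy
            # Recent (leader_state, leader_action) pairs received over the link.
            self.history = deque(maxlen=history_len)

        def observe(self, leader_state, leader_action):
            """Record successfully received leader data for later use."""
            self.history.append((np.asarray(leader_state), np.asarray(leader_action)))

        def semantic_features(self):
            """Compress the stored flight/communication history into a fixed-size
            feature vector. A simple mean over recent leader states is used here
            as a placeholder for the paper's semantic information extraction."""
            states = np.array([s for s, _ in self.history])
            return states.mean(axis=0)

        def act(self, own_state, received_command, link_ok):
            if link_ok and received_command is not None:
                # Normal communication: execute the leader's transmitted command.
                return np.asarray(received_command)
            # Communication interfered: predict the leader's next action from
            # semantic information extracted from historical data, then follow it.
            features = self.semantic_features()
            return self.policy(np.concatenate([np.asarray(own_state), features]))

Usage would alternate between `observe()` on successful receptions and `act()` at every control step, so the follower degrades gracefully from command-following to prediction-based following as the link quality drops.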


    Title:
    Semantic communication aware reinforcement learning for communication fault-tolerant UAV collaborative control

    Contributors:
    ZHANG Yang (author) / GU Hongyu (author) / FENG Bohao (author) / WANG Ran (author)

    Publication date:
    2024

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    Unknown





    Delay/Disruption Tolerant Reinforcement Learning Aurora based Communication System (DREAMS)

    Stottler, Richard | British Library Conference Proceedings | 2022


    Fault-Tolerant Formation Control for Heterogeneous Vehicles Via Reinforcement Learning

    Zhao, Wanbing / Liu, Hao / Valavanis, Kimon P. et al. | IEEE | 2022


    Reinforcement Learning-based Fault-Tolerant Control for Unmanned Aerial Vehicles

    Wang, Guoqi / Wang, Xudong / Li, Yang et al. | IEEE | 2024


    Soft Actor-Critic Deep Reinforcement Learning for Fault Tolerant Flight Control

    Dally, Killian / Kampen, Erik-Jan Van | TIBKAT | 2022