In physical-layer security, a key interest of the community is the development of practical approaches to reliable and secure communication, such as model-free approaches, which do not require the gradient of the channel model. This paper proposes a new model-free approach based on information-theoretic metrics. We train the encoder with deep reinforcement learning, using a policy-gradient algorithm whose loss function is computed by a feed-forward neural network. Simulation results show that our model keeps the eavesdropper's BLER high while driving the legitimate receiver's BLER to nearly zero.
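Since the paper itself is not reproduced in this record, the following is only a minimal, hypothetical sketch of the kind of model-free, policy-gradient encoder training the abstract describes: the channel is treated as a black box (no gradient flows through it), the encoder explores via a Gaussian policy, and a per-sample loss fed back from the receiver weights a REINFORCE-style surrogate. All names, dimensions, the AWGN stand-in channel, and the placeholder receiver loss are assumptions, and the security objective (keeping the eavesdropper's BLER high) is omitted for brevity.

    # Hedged sketch, not the authors' code: model-free policy-gradient training of an
    # encoder when the channel gradient is unavailable. Dimensions and losses are assumed.
    import torch
    import torch.nn as nn

    K, N = 4, 8          # assumed: K information bits per block, N channel uses
    SIGMA_PI = 0.15      # assumed: std of the Gaussian exploration policy

    class Encoder(nn.Module):
        """Feed-forward encoder mapping a bit block to N channel symbols."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, N))

        def forward(self, b):
            x = self.net(b)
            # Normalize to satisfy an average power constraint.
            return x / x.norm(dim=1, keepdim=True) * N ** 0.5

    def black_box_channel(x):
        """Stand-in for the real channel whose gradient is unknown (AWGN for illustration)."""
        return x + 0.5 * torch.randn_like(x)

    def receiver_loss(y, b):
        """Placeholder per-sample loss fed back from the legitimate receiver
        (e.g., its decoder's cross-entropy); used only as a scalar reward signal."""
        return ((torch.tanh(y[:, :K]) - (2 * b - 1)) ** 2).mean(dim=1)

    encoder = Encoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for step in range(200):
        b = torch.randint(0, 2, (256, K)).float()
        mu = encoder(b)                               # deterministic encoder output
        x = mu + SIGMA_PI * torch.randn_like(mu)      # Gaussian exploration (the policy)
        with torch.no_grad():
            y = black_box_channel(x)                  # no gradient flows through the channel
            per_sample_loss = receiver_loss(y, b)     # scalar feedback per example
        # REINFORCE surrogate: log-probability of the sampled symbols weighted by the loss.
        log_prob = -((x.detach() - mu) ** 2).sum(dim=1) / (2 * SIGMA_PI ** 2)
        surrogate = (per_sample_loss * log_prob).mean()
        opt.zero_grad()
        surrogate.backward()
        opt.step()

Descending on this surrogate estimates the gradient of the expected receiver loss with respect to the encoder parameters without ever differentiating through the channel, which is the defining property of the model-free setting the abstract refers to.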


    Title: Deep Reinforcement Learning For Secure Communication

    Contributors:

    Publication date: 2022-09-01

    Size: 379,283 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Deep Reinforcement Learning-Based Resource Allocation for Secure RIS-aided UAV Communication

    Iqbal, Amjad / Al-Habashna, Ala'a / Wainer, Gabriel et al. | IEEE | 2023


    Joint optimization via deep reinforcement learning for secure-driven NOMA-UAV networks

    DENG, Danhao / WANG, Chaowei / XU, Lexi et al. | Elsevier | 2025
