In physical layer security, one focus of the community is developing practical approaches to achieve reliable and secure communication, such as model-free approaches, in which no gradient of the channel model is required. This paper proposes a new model-free approach based on information-theoretic metrics. We train the encoder with deep reinforcement learning, using a policy-gradient descent algorithm whose loss function is built around a feed-forward neural network. Simulation results show that our model keeps the eavesdropper's block error rate (BLER) high while driving the legitimate receiver's BLER to nearly zero.
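The sketch below is not the authors' code; it is a minimal illustration of the general idea described in the abstract: training a feed-forward encoder with a policy-gradient (REINFORCE-style) update from scalar reward feedback, so that no gradient of the channel model is needed. The message size `M`, block length `N_SYMBOLS`, exploration noise `SIGMA`, and the placeholder `reward` function (standing in for BLER-based feedback from the legitimate receiver and the eavesdropper) are all assumptions made only to keep the example self-contained and runnable.

```python
# Minimal policy-gradient sketch for model-free encoder training (illustrative only).
import torch
import torch.nn as nn

M = 16          # number of possible messages per block (assumed)
N_SYMBOLS = 8   # complex channel uses per block, encoded as 2*N real values (assumed)
SIGMA = 0.15    # std of the Gaussian exploration policy (assumed)

class Encoder(nn.Module):
    """Feed-forward encoder: one-hot message -> power-normalized channel symbols."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(M, 64), nn.ReLU(),
            nn.Linear(64, 2 * N_SYMBOLS),
        )

    def forward(self, onehot):
        x = self.net(onehot)
        # Normalize average power per transmitted block.
        return x / x.norm(dim=1, keepdim=True) * (2 * N_SYMBOLS) ** 0.5

def reward(x_transmitted, msgs):
    """Placeholder reward: +1 when a surrogate legitimate receiver decodes correctly,
    -1 when a surrogate eavesdropper decodes correctly. In practice this would be
    BLER-based feedback measured over a real or simulated channel; the random stub
    only keeps the sketch runnable."""
    bob_ok = torch.randint(0, 2, (msgs.shape[0],), dtype=torch.float32)
    eve_ok = torch.randint(0, 2, (msgs.shape[0],), dtype=torch.float32)
    return bob_ok - eve_ok

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(1000):
    msgs = torch.randint(0, M, (256,))
    onehot = torch.nn.functional.one_hot(msgs, M).float()
    x = encoder(onehot)

    # Stochastic policy: Gaussian exploration around the encoder output.
    x_tx = (x + SIGMA * torch.randn_like(x)).detach()   # actually transmitted symbols
    r = reward(x_tx, msgs)                               # scalar feedback, no channel gradient

    # REINFORCE surrogate loss: log-probability of the sampled perturbation,
    # weighted by the baseline-subtracted reward.
    logp = -((x_tx - x) ** 2).sum(dim=1) / (2 * SIGMA ** 2)
    loss = -((r - r.mean()) * logp).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

This is a sketch under the stated assumptions; the paper's actual method additionally uses information-theoretic metrics in the loss, which are not reproduced here.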
Deep Reinforcement Learning For Secure Communication
2022-09-01
379283 bytes
Conference paper
Electronic Resource
English