In emergency communication scenarios, task nodes often need to perform rescue missions in areas lacking basic network coverage. To ensure reliable communication, an emergency communication relay network is constructed, which requires relay nodes to establish multiple relay links and dynamically adjust both the network topology and the number of nodes. Traditional relay deployment schemes, which rely on heuristic algorithms, often lack flexibility and can become trapped in local optima. To address these issues, we propose an Intelligent Relay Nodes Deployment Method (INDM). The method combines the Twin Delayed Deep Deterministic Policy Gradient (TD3) and Double Deep Q-Network (DQN) algorithms to optimize the number and placement of nodes in the relay links. Additionally, an adaptive segmented reward function is designed to guide agent training, while a noise exploration mechanism and an adaptive experience replay algorithm enable the agents to explore the solution space fully and identify optimal strategies. Simulation results demonstrate that the proposed algorithm effectively reduces relay node energy consumption and the total number of nodes while ensuring high communication quality for the task nodes.
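The abstract does not specify the exact form of the adaptive segmented reward function, but a minimal sketch helps make the idea concrete. The snippet below assumes a piecewise reward driven by the worst relay-link quality, with node-count and energy costs applied only once coverage is reliable; the thresholds, weights, and the `segmented_reward` signature are illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch of an adaptive segmented (piecewise) reward, assuming the
# agent is scored by communication-quality bands. All thresholds and weights
# here are hypothetical, not taken from the paper.

SINR_GOOD = 10.0   # dB threshold for "reliable" links (assumed)
SINR_MIN = 3.0     # dB threshold below which a link is unusable (assumed)

def segmented_reward(min_link_sinr: float, n_relays: int, energy: float,
                     w_nodes: float = 0.1, w_energy: float = 0.01) -> float:
    """Piecewise reward: a large constant penalty when any link is unusable,
    a shaped gradient in the transition band, and cost terms for node count
    and energy consumption once coverage is reliable."""
    if min_link_sinr < SINR_MIN:
        # Segment 1: infeasible topology -- strong constant penalty.
        return -10.0
    if min_link_sinr < SINR_GOOD:
        # Segment 2: feasible but fragile -- reward grows with link quality.
        return (min_link_sinr - SINR_MIN) / (SINR_GOOD - SINR_MIN)
    # Segment 3: reliable coverage -- now minimize node count and energy.
    return 1.0 - w_nodes * n_relays - w_energy * energy
```

Splitting the reward into infeasible, transitional, and reliable segments gives the agents a dense gradient toward feasibility first, then shifts pressure onto minimizing node count and energy consumption, which matches the optimization goals the abstract states.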
An Intelligent Relay Nodes Deployment Method Based on Multi-Agent Deep Reinforcement Learning for Emergency Communications
2024-10-07
Conference paper