This study investigates an orbital multi-player “encirclement-capture” game in which multiple pursuers in an encirclement configuration aim to capture an evader, while the evader tries to escape all of them. First, an elliptic encirclement configuration is designed so that the pursuers can exploit their initial positional advantage. Then, the pursuit-evasion process for capturing the evader is formulated as a discrete Markov game. To acquire superior pursuit-evasion strategies, a distributed distributional deep deterministic policy gradient algorithm is adapted to the multi-player game. Its main structure is reorganized into a parallel adversarial-learning framework for efficient two-sided training, and the policy networks and policy-gradient calculation are redesigned to achieve decentralized decision coordination among the pursuers. Simulations show that the pursuers and the evader trained with the proposed algorithm learn an ‘active-cooperation’ pursuit strategy and a ‘multi-target’ evasion strategy, respectively, and that the learned strategies outperform traditional pursuit-evasion strategies.
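The record contains no code; the short Python sketch below is only an illustration of the kind of decentralized-actor, distributional-critic policy-gradient update the abstract describes. All names and values (N_PURSUERS, OBS_DIM, ACT_DIM, the 51-atom value support, pursuer_policy_gradient_step, network sizes) are assumptions made for this sketch, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): decentralized pursuer actors with a
# distributional critic over the joint action, in the spirit of a D4PG-style update
# adapted to a multi-pursuer vs. single-evader Markov game. Dimensions are assumed.
import torch
import torch.nn as nn

N_PURSUERS, OBS_DIM, ACT_DIM = 3, 6, 3      # assumed relative-orbit state and thrust action sizes
N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0     # support of the categorical return distribution

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

# Each pursuer keeps its own policy network (decentralized decision-making).
pursuer_actors = [mlp(OBS_DIM, ACT_DIM) for _ in range(N_PURSUERS)]
# The evader would have its own policy, trained in a parallel adversarial loop (not shown).
evader_actor = mlp(OBS_DIM, ACT_DIM)

# The critic scores the joint pursuer observation/action with a categorical value distribution.
critic = mlp(OBS_DIM * N_PURSUERS + ACT_DIM * N_PURSUERS, N_ATOMS)
atoms = torch.linspace(V_MIN, V_MAX, N_ATOMS)

def pursuer_policy_gradient_step(obs, actor_optimizers):
    """One illustrative actor update: each pursuer's gradient flows only through its own
    action, while the critic evaluates the joint action's expected distributional value.
    (A full algorithm would also train the critic with a distributional TD target.)"""
    actions = [torch.tanh(actor(obs[i])) for i, actor in enumerate(pursuer_actors)]
    critic_in = torch.cat(list(obs) + actions, dim=-1)
    value_dist = torch.softmax(critic(critic_in), dim=-1)
    expected_value = (value_dist * atoms).sum(-1).mean()
    loss = -expected_value                   # ascend the expected value of the distribution
    for opt in actor_optimizers:
        opt.zero_grad()
    loss.backward()
    for opt in actor_optimizers:
        opt.step()
    return loss.item()

# Dummy batch of transitions just to show the call signature.
obs = [torch.randn(8, OBS_DIM) for _ in range(N_PURSUERS)]
opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in pursuer_actors]
print(pursuer_policy_gradient_step(obs, opts))
```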
Orbital Multi-Player Pursuit-Evasion Game with Deep Reinforcement Learning
J Astronaut Sci
18 December 2024
Article (journal)
Electronic resource
English
Springer Verlag | 2024
Orbital Three-Player Pursuit-Evasion Game
Springer Verlag | 2025
Three-Player Pursuit and Evasion Conflict
AIAA | 2014