This paper addresses the problem of robotic capture of an uncooperative spinning target spacecraft. To do so, a computationally lightweight, real-time implementable guidance, navigation, and control architecture that relies on deep learning and pseudospectral optimization is proposed and experimentally validated. Specifically, a convolutional neural-network-driven stereovision pose determination system is first combined with a deep-reinforcement-learning-based guidance algorithm and a pose tracking controller to cancel, in real time, the relative motion between a chaser platform and an uncooperative spinning target platform. Then, a robotic arm is deployed by tracking, in real time, a pseudospectral-based optimal guidance law generated offline, which minimizes the overall attitude corrections required to keep the target in view. The integrated experiment carried out using Carleton University’s Spacecraft Proximity Operations Testbed (a state-of-the-art planar air bearing facility, introduced in this work) demonstrates the performance of the developed deep learning architecture.
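The abstract describes a closed-loop pipeline in which stereovision pose estimates feed a deep-reinforcement-learning guidance policy, whose output is tracked by a pose controller. Below is a minimal Python sketch of that data flow only; all class names, function signatures, placeholder implementations, and the PD-style control law are illustrative assumptions, not the authors' code or algorithms.

```python
# Hypothetical sketch of the pipeline outlined in the abstract:
# CNN stereovision pose estimation -> deep-RL guidance -> pose tracking control.
# All names and implementations below are illustrative placeholders.

from dataclasses import dataclass

import numpy as np


@dataclass
class RelativePose:
    """Chaser-to-target relative pose estimate from the vision front end."""
    position: np.ndarray      # 3-vector, metres
    attitude: np.ndarray      # quaternion (w, x, y, z)
    angular_rate: np.ndarray  # 3-vector, rad/s


def estimate_pose(left_image: np.ndarray, right_image: np.ndarray) -> RelativePose:
    """Stand-in for the CNN-driven stereovision pose determination step."""
    # The paper would run a trained convolutional network on the stereo pair;
    # here a dummy estimate is returned so the sketch stays self-contained.
    return RelativePose(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3))


def rl_guidance(pose: RelativePose) -> np.ndarray:
    """Stand-in for the deep-RL guidance policy (observation -> reference state)."""
    # A trained policy network would map the relative-motion observation to a
    # guidance command; here the origin is commanded (relative-motion cancellation).
    return np.zeros(6)  # commanded relative position (3) and attitude error (3)


def pose_tracking_control(pose: RelativePose, reference: np.ndarray,
                          kp: float = 1.0, kd: float = 0.5) -> np.ndarray:
    """Simple PD-style tracking law as a placeholder for the paper's controller."""
    position_error = reference[:3] - pose.position
    rate_damping = -kd * pose.angular_rate
    return np.concatenate([kp * position_error, rate_damping])


# One cycle of the guidance, navigation, and control loop.
left, right = np.zeros((480, 640)), np.zeros((480, 640))
pose = estimate_pose(left, right)
reference = rl_guidance(pose)
command = pose_tracking_control(pose, reference)
print(command)
```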
Optimal Capture of Spinning Spacecraft via Deep Learning Vision and Guidance
1 May 2025
Journal article
Electronic resource
English