Evaluation of New Assurance Tools for Airborne Machine Learning-Based Functions

As part of the DARPA Assured Autonomy program, our team has developed or evaluated a number of technologies to address gaps in traditional hardware and software assurance processes that make it difficult or impossible to demonstrate the correctness and safety of machine learning (ML) components. These include new approaches for testing and completeness metrics, formal analysis of neural networks, input domain shift assessment, and run-time monitoring and enforcement architectures. Although many of these tools and methods were successfully applied to demonstration platforms, most have not been evaluated on real-world product development efforts in a certification context. In this paper, we describe our evaluation of these new assurance methods and tools applied to ML-based systems that will soon be undergoing certification.
29 September 2024
1,007,177 bytes
Conference paper
Electronic resource
English