Accurate and interpretable satellite health monitoring systems play a crucial role in keeping a satellite operational. With potentially hundreds of sensors to monitor, identifying when and how a component exhibits anomalous behavior is essential to the longevity of a satellite's mission. Detecting these anomalies early can protect million-dollar assets and their missions by preventing minor issues from escalating into system failure. Traditional anomaly detection methods rely on expert domain knowledge to produce generally accurate and easy-to-interpret results; however, many are cost- and labor-intensive, and their scope is usually limited to a subset of anomalies [1].

Over the past decade, satellites have become increasingly complex, posing a significant challenge to these dated methods. In response, state-of-the-art machine learning algorithms have been proposed, including high-dimensional clustering [2], [3], large forest decision trees [4], and Long Short-Term Memory (LSTM) recurrent neural networks [5], [6], [7]. Although these newer models have shown improved accuracy, they lack interpretability, that is, insight into how a model makes its decisions. Satellite operators are cautious about entrusting multi-million-dollar decisions solely to machine learning models that lack transparency, and this missing trust leads to continued reliance on dated, semi-reliable algorithms despite the risk of missing catastrophic anomalies.

To bridge the gap between high detection accuracy and human interpretability, this paper explores explainability methods incorporated with machine learning. Our investigation involves two steps: the implementation of machine learning and the development of explainability. First, we apply current state-of-the-art machine learning algorithms to telemetry data from a previously flown Air Force Research Laboratory (AFRL) satellite to classify anomalies. Then, we apply and evaluate three explainability methods, namely SHAP (Shapley Additive Explanations) [8], LIME (Local Interpretable Model-Agnostic Explanations) [9], and LRP (Layer-wise Relevance Propagation) [10]. We propose the use of non-classifier machine learning models combined with post-hoc explainability methods to foster trust in machine learning by providing explanations that help satellite operators make more informed decisions.
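
As a rough illustration only (not the authors' implementation, which is not part of this record), the sketch below shows what pairing a detector with a post-hoc SHAP explanation can look like on telemetry-style tabular data; the model choice, feature names, and synthetic data are all placeholder assumptions.

```python
# Minimal sketch, not the paper's code: a generic classifier on synthetic
# "telemetry-like" features, explained post hoc with SHAP. Feature names,
# thresholds, and data are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["bus_voltage", "battery_temp", "wheel_speed", "panel_current"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 1.2).astype(int)  # stand-in anomaly labels

# Any sufficiently accurate detector could sit here; a random forest keeps the example small.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc explanation: per-feature attributions for one flagged sample,
# the kind of output an operator could review alongside the alarm itself.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])
print(attributions)
```

In the same spirit, LIME or LRP would plug into that final step: the detection model is chosen for accuracy, and the explanation is generated separately for each sample an operator needs to act on.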





    Title:

    An Explainable Machine Learning Approach for Anomaly Detection in Satellite Telemetry Data


    Contributors:
    Kricheff, Seth (author) / Maxwell, Emily (author) / Plaks, Connor (author) / Simon, Michelle (author)

    Published in:

    Publication date:

    2024-03-02


    Format / extent:

    7258967 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    EVALUATING ANOMALY DETECTION IN SATELLITE TELEMETRY DATA

    Nalepa, Jakub / Benecki, Pawel / Andrzejewski, Jacek et al. | TIBKAT | 2022


    Supporting Anomaly Detection from Satellite Telemetry Data by Regression Trees

    Nakatsugawa, M. / Yairi, T. / Isihama, N. et al. | British Library Conference Proceedings | 2004


    A Deep Learning Anomaly Detection Framework for Satellite Telemetry with Fake Anomalies

    Yakun Wang / Jianglei Gong / Jie Zhang et al. | DOAJ | 2022

    Open access

    European Space Agency Benchmark for Anomaly Detection in Satellite Telemetry

    Kotowski, Krzysztof / Haskamp, Christoph / Andrzejewski, Jacek et al. | ArXiv | 2024

    Open access

    Telemetry Anomaly Detection System Using Machine Learning to Streamline Mission Operations

    Fernandez, Michela Munoz / Yue, Yisong / Weber, Romann | IEEE | 2017