Deep learning models that exceed human capability are nonetheless prone to error when facing adversarial attacks. Recent work on adversarial reprogramming has shown that a machine-learning model can be repurposed for another task without changing the model parameters. In this paper, we demonstrate that a single machine-learning model can be repurposed to solve multiple tasks by adding adversarial-reprogramming functions. Adversarial reprogramming can also serve as a solution for memory-limited devices, which save space by storing a single model instead of multiple task-specific models. In our experiments, adversarial reprogramming achieves 95% prediction accuracy with a model trained on a different domain. Even in the cross-modal setting, adversarial reprogramming reaches 83% accuracy on sentence-level sentiment analysis using a model that takes images as its input. We also ran the experiment on a low-resource language, Bahasa Indonesia, and achieved 75% accuracy without changing the model parameters.
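
To make the idea concrete, below is a minimal sketch of adversarial reprogramming under stated assumptions, not the authors' exact setup: a frozen pretrained image classifier is repurposed for a new task by training only an additive input "program" and mapping the source labels onto the target labels, so the base model's parameters never change. The choice of ResNet-18, the modulo label-mapping rule, and the class name AdversarialReprogramming are illustrative assumptions.

# Illustrative sketch only: the base network, label-mapping rule, and class
# name are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torchvision.models as models

class AdversarialReprogramming(nn.Module):
    def __init__(self, num_target_classes=2, canvas_size=224):
        super().__init__()
        # Frozen pretrained source model; its parameters are never updated.
        self.base = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.base.eval()
        for p in self.base.parameters():
            p.requires_grad = False
        self.canvas_size = canvas_size
        self.num_target_classes = num_target_classes
        # The only trainable component: an additive "program" over the input.
        self.program = nn.Parameter(torch.zeros(1, 3, canvas_size, canvas_size))
        # Fixed many-to-one mapping from the 1000 source classes to target classes.
        self.register_buffer("label_map", torch.arange(1000) % num_target_classes)

    def forward(self, x_small):
        # Embed the small target-task input (3-channel) in the centre of a
        # blank canvas of the source model's expected input size.
        b, _, h, w = x_small.shape
        canvas = torch.zeros(b, 3, self.canvas_size, self.canvas_size,
                             device=x_small.device)
        top = (self.canvas_size - h) // 2
        left = (self.canvas_size - w) // 2
        canvas[:, :, top:top + h, left:left + w] = x_small
        # Add the learned program; tanh keeps the perturbation bounded.
        source_logits = self.base(canvas + torch.tanh(self.program))
        # Aggregate source-class logits into target-class logits via the mapping.
        target_logits = torch.zeros(b, self.num_target_classes,
                                    device=x_small.device)
        target_logits.index_add_(1, self.label_map, source_logits)
        return target_logits

# Training updates only the program; the base model stays frozen.
model = AdversarialReprogramming(num_target_classes=2)
optimizer = torch.optim.Adam([model.program], lr=0.05)
x = torch.rand(4, 3, 28, 28)            # dummy batch of small 3-channel inputs
y = torch.randint(0, 2, (4,))           # dummy binary labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

The same recipe extends, in principle, to the cross-modal case reported in the abstract: a sentence is encoded into the spatial input of the frozen image model, and only the program and label mapping adapt it to sentiment analysis.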





    Title:

    Adversarial Reprogramming as Natural Multitask and Compression Enabler


    Contributors:


    Publication date:

    01.06.2023


    Format / Extent:

    4,140,543 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Bayes-Based Distributed Estimation in Adversarial Multitask Networks

    Wang, Tiantian / Li, Yuhan / Chen, Feng et al. | IEEE | 2022




    RITES: Infrastructure enabler

    Mehrotra, Rajeev | IuD Bahn | 2012


    Enabler Operator Station

    A. Bailey / J. Kietzman / S. King et al. | NTIS | 1992


    Enabler operator station

    Bailey, Andrea / Kietzman, John / King, Shirlyn et al. | NTRS | 1992