Deep learning models that exceed human capability are nonetheless prone to error when facing adversarial attacks. Recent work on adversarial reprogramming has shown that a machine-learning model can be repurposed for another task without changing its parameters. In this paper, we demonstrate that a single machine-learning model can be repurposed to solve multiple tasks by adding adversarial-reprogramming functions. Adversarial reprogramming can also serve as a compression technique on memory-limited devices, which save space by storing one model instead of multiple task-specific models. In our experiments, adversarial reprogramming achieves 95% prediction accuracy with a model trained on a different domain. Even in the cross-modality setting, it achieves 83% accuracy on sentence-level sentiment analysis using a model that takes images as input. We also ran the experiment on a low-resource language, Bahasa Indonesia, and achieved 75% accuracy without changing the model parameters.
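The abstract describes the core mechanism: a learned input transformation (the "program") wraps the target-task input so that a frozen pretrained model solves the new task, with the source labels remapped to target labels. Below is a minimal sketch of that idea, assuming PyTorch with a frozen ResNet-18 and a small 10-class image task; the sizes, the tanh-bounded additive program, and the identity label mapping are illustrative assumptions, not the paper's actual configuration.

    # Minimal adversarial-reprogramming sketch (hypothetical shapes and mapping).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class Reprogram(nn.Module):
        def __init__(self, frozen_model, target_size=28, input_size=224,
                     n_target_classes=10):
            super().__init__()
            self.model = frozen_model.eval()
            for p in self.model.parameters():
                p.requires_grad_(False)  # pretrained weights stay fixed
            # Trainable "program": a universal perturbation the size of the
            # frozen model's input.
            self.program = nn.Parameter(torch.zeros(1, 3, input_size, input_size))
            # The mask zeros the program where the small target image sits,
            # so the program only fills the border around it.
            mask = torch.ones(1, 3, input_size, input_size)
            s = (input_size - target_size) // 2
            mask[:, :, s:s + target_size, s:s + target_size] = 0
            self.register_buffer("mask", mask)
            self.s, self.t = s, target_size
            self.n_target_classes = n_target_classes

        def forward(self, x_small):
            # Embed the target-task image in the center of a blank canvas.
            b = x_small.size(0)
            canvas = torch.zeros(b, 3, self.program.size(2), self.program.size(3),
                                 device=x_small.device)
            canvas[:, :, self.s:self.s + self.t, self.s:self.s + self.t] = x_small
            adv_input = canvas + torch.tanh(self.program) * self.mask
            logits = self.model(adv_input)
            # Hard label mapping: source class i stands in for target class i.
            return logits[:, :self.n_target_classes]

    # Usage: only the program is optimized; the backbone never changes.
    reprogram = Reprogram(models.resnet18(weights=None))
    optimizer = torch.optim.Adam([reprogram.program], lr=0.05)

Note that only the program tensor receives gradients during training; the pretrained weights stay untouched, which is what allows one stored model to serve several tasks.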
Adversarial Reprogramming as Natural Multitask and Compression Enabler
01.06.2023
4140543 bytes
Conference paper
Electronic resource
English