This paper considers an outlier detection problem for a collection of vehicles or agents. The agents are represented by Markov decision processes, and their trajectory data are assumed to be available. The work aims to learn the intentions, or reward functions, of the agents and to infer the anomalous agents whose intentions differ from the majority. To achieve this, we propose a joint inverse reinforcement learning framework, which enables learning of a common reward function that captures the behavior of the majority, as well as individual reward functions for normal and abnormal agents. An example of the detection and analysis of driving behaviors is provided, demonstrating the effectiveness of the proposed framework.
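To illustrate the common-plus-individual reward decomposition described in the abstract, the following is a minimal sketch, not the paper's actual algorithm: it replaces full inverse reinforcement learning with a simplified surrogate in which each agent is summarized by an empirical feature-expectation vector, the shared reward weights and group-sparse per-agent deviations are estimated jointly, and agents with nonzero deviations are flagged as anomalous. All names (`mu`, `lam`, the synthetic data) and the alternating-minimization formulation are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-agent trajectory statistics (hypothetical):
# each row is an agent's empirical feature expectation estimated from its
# demonstrated trajectories; most agents share one underlying reward, a few do not.
n_agents, n_features = 20, 5
w_true = rng.normal(size=n_features)
mu = np.tile(w_true, (n_agents, 1)) + 0.05 * rng.normal(size=(n_agents, n_features))
mu[[3, 17]] += rng.normal(scale=1.5, size=(2, n_features))  # two anomalous agents

# Joint estimation with w_i = w_common + d_i and a group-sparse penalty on d_i:
#   minimize  sum_i ||mu_i - (w + d_i)||^2 + lam * sum_i ||d_i||_2
lam = 1.0
w = mu.mean(axis=0)
d = np.zeros_like(mu)
for _ in range(100):
    # block soft-thresholding update for the per-agent deviations
    r = mu - w
    norms = np.linalg.norm(r, axis=1, keepdims=True)
    shrink = np.clip(1.0 - lam / (2 * np.maximum(norms, 1e-12)), 0.0, None)
    d = shrink * r
    # closed-form update for the common reward weights
    w = (mu - d).mean(axis=0)

# Reward-based anomaly score: size of each agent's deviation from the common reward
scores = np.linalg.norm(d, axis=1)
flagged = np.where(scores > 1e-6)[0]
print("common reward weights:", np.round(w, 2))
print("flagged agents:", flagged.tolist())
```

The group-sparse penalty drives the deviation of well-behaved agents exactly to zero, so the common reward is estimated robustly and the surviving nonzero deviations serve as the anomaly indicators; the actual framework in the paper operates on MDP trajectories rather than precomputed feature vectors.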
Outlier-robust Inverse Reinforcement Learning and Reward-based Detection of Anomalous Driving Behaviors
08.10.2022
1059500 bytes
Article (Conference)
Electronic Resource
English
SAE Technical Papers | 2021
British Library Conference Proceedings | 2021