The work presented here is a preliminary study on the feasibility of using the output of a Generalized Labeled Multi-Bernoulli (GLMB) filter as input to online imitation learning via Deep Inverse Reinforcement Learning, with the end goal of predicting the next states of each trajectory output by the filter. The labeled state trajectories are sampled from the filter and discretized to create episodes for learning. The multi-target dynamics are assumed to be unknown but stochastic, acting to maximize some unknown reward function. Because the ultimate goal is to predict the multi-target motion using only observations of the targets, Deep Q-learning is used to learn the dynamics; however, since this algorithm depends on the unknown reward function, Deep Inverse Reinforcement Learning is used to learn the rewards. Owing to the coupled nature of the two learning problems, their solutions are iterated in an alternating fashion, and upon convergence future states can be predicted over a given time horizon. The results are preliminary, and many extensions to this work are outlined.
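
To make the alternating scheme concrete, the following Python/PyTorch sketch shows one plausible way to interleave the two learners on discretized GLMB trajectories: an IRL step that nudges a reward network toward the demonstrated states, and a Deep Q-learning step that refits a Q-network to Bellman targets built from that learned reward, followed by a greedy rollout to predict future states over a horizon. It is a minimal illustration, not the paper's implementation; the discretization sizes, network shapes, episode format, and all names (alternate_irl_dqn, predict_states, etc.) are assumptions.

"""Minimal sketch (not the paper's code) of the alternating IRL / Q-learning
scheme described in the abstract. Assumes the labeled GLMB trajectories have
already been sampled and discretized into integer (state, action, next_state)
episodes. The IRL step here is a simple expert-vs-policy reward margin; all
sizes and names are illustrative assumptions."""
import random
from collections import defaultdict

import torch
import torch.nn as nn
import torch.nn.functional as F

N_STATES, N_ACTIONS, GAMMA = 100, 9, 0.9   # discretization sizes (assumed)


def one_hot(idx, n=N_STATES):
    return F.one_hot(torch.as_tensor(idx), n).float()


def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))


def build_lookup(episodes):
    """Empirical transition lookup (s, a) -> observed next states, built from
    the discretized demonstrations; used to roll the greedy policy forward."""
    next_of = defaultdict(list)
    for ep in episodes:
        for s, a, s2 in ep:
            next_of[(s, a)].append(s2)
    return next_of


def alternate_irl_dqn(episodes, n_outer=30, n_inner=50):
    """episodes: list of lists of (s, a, s_next) tuples, one per labeled
    trajectory sampled from the GLMB filter output."""
    reward_net, q_net = mlp(N_STATES, 1), mlp(N_STATES, N_ACTIONS)
    opt_r = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
    opt_q = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    trans = [t for ep in episodes for t in ep]
    s = one_hot([t[0] for t in trans])
    a = torch.as_tensor([t[1] for t in trans])
    s2 = one_hot([t[2] for t in trans])
    next_of = build_lookup(episodes)

    for _ in range(n_outer):
        # IRL step: demonstrated next states should out-score the states the
        # current greedy policy would reach (empirical-rollout surrogate).
        with torch.no_grad():
            greedy = q_net(s).argmax(dim=1).tolist()
        policy_s2 = [random.choice(next_of[(t[0], g)]) if (t[0], g) in next_of else t[2]
                     for t, g in zip(trans, greedy)]
        loss_r = reward_net(one_hot(policy_s2)).mean() - reward_net(s2).mean()
        opt_r.zero_grad()
        loss_r.backward()
        opt_r.step()

        # Deep Q-learning step: refit Q to Bellman targets under the learned reward.
        for _ in range(n_inner):
            with torch.no_grad():
                target = reward_net(s2).squeeze(-1) + GAMMA * q_net(s2).max(dim=1).values
            q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss_q = F.mse_loss(q_sa, target)
            opt_q.zero_grad()
            loss_q.backward()
            opt_q.step()

    return reward_net, q_net, next_of


def predict_states(q_net, next_of, s0, horizon=10):
    """Predict future discretized states for one labeled trajectory by rolling
    the learned greedy policy forward from its last filtered state s0."""
    out, s = [], s0
    for _ in range(horizon):
        act = q_net(one_hot([s])).argmax(dim=1).item()
        s = random.choice(next_of.get((s, act), [s]))  # stay put if (s, act) unseen
        out.append(s)
    return out

In this sketch the transition model is purely empirical, rebuilt from the demonstrations, so the "dynamics" the Q-network learns are only those supported by the sampled trajectories; the paper's deep variant and its convergence criterion may differ.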


    Title: Inverse Reinforcement Learning for Generalized Labeled Multi-Bernoulli Multi-Target Tracking

    Contributors:

    Publication date: 2021-03-06

    Format / Extent: 3492496 bytes

    Media type: Conference paper

    Format: Electronic resource

    Language: English




    Similar titles:

    GENERALIZED LABELED MULTI-BERNOULLI SPACE-OBJECT TRACKING WITH JOINT PREDICTION AND UPDATE

    Jones, Brandon A. / Vo, Ba-Tuong / Vo, Ba-Ngu | British Library Conference Proceedings | 2016




    Target Tracking Method with Box-Particle Generalized Label Multi-Bernoulli Filtering

    Miao, Yu / Song, Liping / Ji, Hongbing | British Library Online Contents | 2017