Dynamic Motor Primitives (DMP) are nowadays widely used as a movement parametrization for learning trajectories, because of their linearity in the parameters, robustness to rescaling, and continuity. However, when learning a movement with DMP, a set of Gaussians distributed along the trajectory is used to approximate an acceleration excitation function, so a large number of Gaussian basis functions must be fitted. Summing them over all joints yields too many parameters to explore, thus requiring a prohibitive number of experiments/simulations to converge to a solution with a (locally or globally) optimal reward. We propose here two strategies to reduce this dimensionality: the first is to explore only the most significant directions in the parameter space, and the second is to add a reduced second set of Gaussians that only optimizes the trajectory after fixing the Gaussians that approximate the demonstrated movement.
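To make the setting concrete, below is a minimal Python sketch (using NumPy) of the ideas summarized above, assuming a standard DMP forcing term f(x) = (sum_i psi_i(x) w_i / sum_i psi_i(x)) x with Gaussian basis functions psi_i. The SVD-based projection used for the first strategy, the corrective second set of Gaussians, and all names and constants are illustrative assumptions, not the paper's exact method.

# A minimal sketch (not the paper's implementation) of the two strategies,
# assuming a standard one-dimensional DMP forcing term built from Gaussian
# basis functions. Names, constants and the SVD projection are assumptions.
import numpy as np

def gaussian_basis(x, centers, widths):
    # Gaussian basis functions psi_i(x) of the canonical phase variable x.
    return np.exp(-widths * (x - centers) ** 2)

def forcing_term(x, weights, centers, widths):
    # Normalized, phase-gated combination: f(x) = (sum_i psi_i w_i / sum_i psi_i) * x
    psi = gaussian_basis(x, centers, widths)
    return (psi @ weights) / (psi.sum() + 1e-10) * x

# One DMP per joint with n_basis Gaussians -> n_joints * n_basis parameters to explore.
n_joints, n_basis = 7, 20
centers = np.linspace(0.0, 1.0, n_basis)
widths = np.full(n_basis, 0.5 * n_basis ** 2)      # heuristic width choice

rng = np.random.default_rng(0)
# Stand-in for weight vectors collected from earlier rollouts (one row per rollout).
sampled_weights = rng.normal(size=(50, n_joints * n_basis))

# Strategy 1 (sketch): explore only the k most significant directions of the
# parameter space, here taken as the top right-singular vectors of the samples.
k = 10
centered = sampled_weights - sampled_weights.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = vt[:k]                                 # k x (n_joints * n_basis)
exploration = rng.normal(scale=0.1, size=k) @ projection
new_weights = sampled_weights.mean(axis=0) + exploration

# Strategy 2 (sketch): freeze the Gaussians fitted to the demonstration and add
# a small second set whose weights are the only parameters left to optimize.
n_extra = 5
extra_centers = np.linspace(0.0, 1.0, n_extra)
extra_widths = np.full(n_extra, 0.5 * n_extra ** 2)
extra_weights = rng.normal(scale=0.1, size=n_extra)

def corrected_forcing(x, demo_weights):
    # Demonstration term (fixed) plus a low-dimensional corrective term.
    return (forcing_term(x, demo_weights, centers, widths)
            + forcing_term(x, extra_weights, extra_centers, extra_widths))

print(corrected_forcing(0.5, new_weights[:n_basis]))

In this sketch the exploration noise lives in a k-dimensional subspace instead of the full n_joints * n_basis space, and the corrective set adds only n_extra parameters per joint, which is the dimensionality reduction the abstract refers to.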



Handling high parameter dimensionality in reinforcement learning with dynamic motor primitives
Colomé, Adrià / Alenyà, Guillem / Torras, Carme | 2013


