This chapter focuses on the position/force control problem. The position/force controller has two loops: an internal position control loop and an external force control loop, with the force control designed as an impedance control. Obtaining the position/force control involves three steps: estimating the environment parameters, designing the desired impedance model via a linear quadratic regulator (LQR), and applying the position/force control law. The chapter compares the four reinforcement learning methods given in this chapter: Q-learning, Sarsa, Q(λ), and Sarsa(λ). It uses reinforcement learning (RL) to learn the optimal desired force, which is equivalent to the optimal impedance model; an admittance control law then guarantees both force and position tracking through an inner position control law. The RL methods discretize the action space, so optimization over the actions reduces to enumeration.
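
The control structure described above can be illustrated with a minimal sketch. The code below assumes a one-dimensional spring contact environment and a quasi-static admittance correction; all parameter values, the contact model, and the 10 N target force are illustrative assumptions, not the chapter's implementation. Tabular Q-learning selects the desired contact force from a discretized action set by simple enumeration (argmax), and the resulting force error is converted into a position correction for the inner position loop.

# Illustrative sketch only: discrete-action Q-learning choosing a desired force,
# with an assumed admittance correction feeding an assumed inner position loop.
import numpy as np

np.random.seed(0)

F_ACTIONS = np.linspace(0.0, 20.0, 21)   # discretized desired forces [N] (assumed range)
N_STATES = 50                            # discretized force-error states (assumed)
Q = np.zeros((N_STATES, len(F_ACTIONS)))

ke, xe = 500.0, 0.0                      # assumed environment stiffness [N/m] and rest position [m]
k_adm = 100.0                            # assumed admittance stiffness [N/m]
alpha, gamma, eps, dt = 0.1, 0.95, 0.1, 0.01

def state_index(force_error):
    """Map a force-tracking error [N] to a discrete state bin."""
    return int(np.clip((force_error + 10.0) / 20.0 * (N_STATES - 1), 0, N_STATES - 1))

x, xd = 0.0, 0.0                         # actual position and corrected reference position
s = state_index(0.0)
for step in range(5000):
    # Epsilon-greedy choice of the desired force; optimization over the
    # discretized action set is plain enumeration (argmax).
    a = np.random.randint(len(F_ACTIONS)) if np.random.rand() < eps else int(np.argmax(Q[s]))
    f_d = F_ACTIONS[a]

    f_ext = ke * max(x - xe, 0.0)        # simple spring contact model (assumption)
    e_f = f_d - f_ext                    # force-tracking error
    xd += (e_f / k_adm) * dt             # quasi-static admittance correction (sketch)
    x += 0.5 * (xd - x)                  # inner position loop assumed to track xd

    reward = -abs(10.0 - f_ext)          # assumed target contact force of 10 N
    s2 = state_index(e_f)
    Q[s, a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2
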
Reinforcement Learning for Robot Position/Force Control
2021-10-05
21 pages
Article/Chapter (Book)
Electronic Resource
English
Hybrid Position/Force Control of Robot Manipulators
NTRS | 1982
Robust Hybrid Position/Force Control for Robot Manipulators
British Library Online Contents | 1993