Reinforcement learning has become increasingly popular in robotics for acquiring feedback controllers. Many approaches aim to learn a controller from scratch, i.e., purely data-driven without any model of the physical plant. However, stability properties of the closed loop are often not considered, or are established only a posteriori or ad hoc. We propose to employ reinforcement learning in the context of model-based control, which allows a controller to be learned within a framework of stabilizing controllers built from only limited prior model knowledge. In this way, the action space is structured for safe learning of a feedback controller that compensates for uncertainties due to model mismatch or external disturbances. The resulting scheme is built around a decentralized PD feedback controller; given such a controller, the proposed method can therefore also add a learning module for performance enhancement. We demonstrate the approach both in simulation and in a hardware experiment on a two-degree-of-freedom robot manipulator.
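The abstract describes a learned correction layered on top of a stabilizing decentralized PD controller, with the action space structured so that learning stays safe. As a rough illustration of that general idea only (not the paper's actual parameterization of stabilizing controllers), the following minimal Python sketch adds a bounded learned residual torque to a per-joint PD law for a two-joint manipulator; the gains KP and KD, the bound U_LEARN_MAX, and the policy callable are all illustrative assumptions.

```python
import numpy as np

# Hypothetical decentralized PD baseline for a 2-DoF manipulator:
# each joint is regulated independently from its own tracking error.
KP = np.array([50.0, 30.0])   # assumed proportional gains (illustrative)
KD = np.array([5.0, 3.0])     # assumed derivative gains (illustrative)
U_LEARN_MAX = 2.0             # assumed bound on the learned correction [Nm]

def pd_baseline(q, qd, q_ref, qd_ref):
    """Decentralized PD feedback: one independent PD loop per joint."""
    return KP * (q_ref - q) + KD * (qd_ref - qd)

def control(q, qd, q_ref, qd_ref, policy):
    """Total torque: stabilizing PD term plus a bounded learned residual.

    The RL policy only shapes a correction inside a fixed bound, so the
    PD loop remains dominant. Clipping is one simple way to structure the
    action space; the paper's controller parameterization is an assumption
    not reproduced here.
    """
    u_pd = pd_baseline(q, qd, q_ref, qd_ref)
    x = np.concatenate([q_ref - q, qd_ref - qd])   # tracking-error state
    u_rl = np.clip(policy(x), -U_LEARN_MAX, U_LEARN_MAX)
    return u_pd + u_rl
```

In this sketch, `policy` would be any callable mapping the tracking-error state to a two-element torque correction, e.g. a small neural network trained by the RL algorithm; with the residual bounded, the closed loop inherits the behavior of the PD baseline when the learned term is small.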
A robust stability approach to robot reinforcement learning based on a parameterization of stabilizing controllers
2018-01-31
Article (Journal)
Electronic Resource
English
Design methodology for robust stabilizing controllers
AIAA | 1987
A design methodology for robust stabilizing controllers
AIAA | 1986
Parameterization of reliable nonlinear H∞ controllers
British Library Online Contents | 2002