This paper presents a new method for learning visual docking skills for non-holonomic vehicles through direct interaction with the environment. The method is based on a reinforcement learning algorithm that speeds up Q-learning by applying memory-based sweeping and enforcing the “adjoining property”, a filtering mechanism that permits transitions only between states separated by a fixed distance. The method overcomes some limitations of reinforcement learning techniques when they are applied to continuous nonlinear systems such as car-like vehicles; in particular, a good approximation to the optimal behaviour is obtained with a small look-up table. The algorithm is tested on a docking task within an image-based visual servoing framework. Training took less than one hour on the real vehicle, and the experiments demonstrate the satisfactory performance of the algorithm.
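The abstract does not give the update rules, so the following is only a minimal sketch of how tabular Q-learning with memory-based (prioritized) sweeping and a distance-based transition filter could be organised. The grid-like state representation, the hyperparameters, and all names (ADJOIN_DIST, update, adjoining, etc.) are assumptions for illustration, not the authors' implementation.

```python
import heapq
from collections import defaultdict

# Hypothetical parameters; the paper's actual values are not given in the abstract.
GAMMA = 0.95        # discount factor
ALPHA = 0.5         # learning rate
THETA = 1e-3        # priority threshold for sweeping
N_SWEEPS = 10       # backups performed per real interaction
ADJOIN_DIST = 1     # "adjoining property": max allowed state-to-state distance

Q = defaultdict(float)           # small look-up table: Q[(state, action)]
model = {}                       # remembered transitions: (s, a) -> (reward, next state)
predecessors = defaultdict(set)  # next state -> set of (s, a) that led to it
pqueue = []                      # priority queue of pending backups (negated priority)

def adjoining(s, s_next):
    """Filter: accept only transitions between states within a fixed distance."""
    return all(abs(a - b) <= ADJOIN_DIST for a, b in zip(s, s_next))

def best_q(s, actions):
    return max(Q[(s, a)] for a in actions)

def update(s, a, r, s_next, actions):
    """One Q-learning backup plus memory-based sweeping over remembered transitions."""
    if not adjoining(s, s_next):         # enforce the adjoining property
        return
    model[(s, a)] = (r, s_next)
    predecessors[s_next].add((s, a))
    td = r + GAMMA * best_q(s_next, actions) - Q[(s, a)]
    if abs(td) > THETA:
        heapq.heappush(pqueue, (-abs(td), (s, a)))
    for _ in range(N_SWEEPS):
        if not pqueue:
            break
        _, (ps, pa) = heapq.heappop(pqueue)
        pr, ps_next = model[(ps, pa)]
        Q[(ps, pa)] += ALPHA * (pr + GAMMA * best_q(ps_next, actions) - Q[(ps, pa)])
        # propagate the value change backwards through remembered predecessors
        for (qs, qa) in predecessors[ps]:
            qr, _ = model[(qs, qa)]
            p = abs(qr + GAMMA * best_q(ps, actions) - Q[(qs, qa)])
            if p > THETA:
                heapq.heappush(pqueue, (-p, (qs, qa)))
```

A call such as `update((0, 0), 'fwd', -1.0, (0, 1), ['fwd', 'left', 'right'])` would record the transition, back it up, and sweep the change through previously stored transitions, which is the mechanism the abstract credits for keeping the look-up table small while speeding up convergence.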
Learning visual docking for non-holonomic autonomous vehicles
2008 IEEE Intelligent Vehicles Symposium, pp. 1015-1020
1 June 2008
Conference Paper
Electronic Resource
English
Learning Visual Docking for Non-Holonomic Autonomous Vehicles (British Library Conference Proceedings, 2008)
Elliptical trajectories for non-holonomic vehicles (British Library Conference Proceedings, 2007)
Elliptical trajectories for non-holonomic vehicles (IEEE, 2007)
Neural-network-based docking of autonomous vehicles (IEEE, 2006)