Robots often have to operate in complex environments, which requires them to autonomously acquire information about those environments and use it for path planning; path planning, in turn, directly determines whether a task can be completed. When a robot lacks global information, it can only analyze the local environment information provided by its sensors and plan its path from that. The most critical issue in this process is how to use such limited information for path planning in an unknown environment. In this paper, a reinforcement learning algorithm is used to train a Mecanum-wheeled unmanned vehicle for path planning in an unknown environment; path planning for the vehicle is carried out on the basis of the Q-learning algorithm. This approach reduces the time the unmanned vehicle wastes on maneuvers that are not needed for obstacle avoidance, shortens training time, and shortens the travel distance, thereby improving efficiency.
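For illustration, the sketch below shows tabular Q-learning applied to path planning on a small grid world; it is a minimal, assumed example, not the paper's implementation, and the grid size, reward values, and hyperparameters are placeholders chosen for clarity.

```python
# Minimal tabular Q-learning sketch for grid-based path planning.
# All values (grid size, rewards, ALPHA/GAMMA/EPSILON) are assumptions for
# illustration only and are not taken from the paper.
import random
from collections import defaultdict

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1           # assumed hyperparameters

def step(state, action, goal, obstacles, size):
    """Move one cell; penalize blocked moves, reward reaching the goal."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in obstacles or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
        return state, -5.0, False               # blocked: stay put, penalty
    if nxt == goal:
        return nxt, 100.0, True                 # goal reached
    return nxt, -1.0, False                     # step cost favors short paths

def train(goal, obstacles, size=10, episodes=500, max_steps=400):
    q = defaultdict(float)                      # Q[(state, action_index)]
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(max_steps):              # cap episode length
            if random.random() < EPSILON:       # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
            nxt, reward, done = step(state, ACTIONS[a], goal, obstacles, size)
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            # Q-learning update rule
            q[(state, a)] += ALPHA * (reward + GAMMA * best_next - q[(state, a)])
            state = nxt
            if done:
                break
    return q

q_table = train(goal=(9, 9), obstacles={(3, 3), (3, 4), (4, 3)})
```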
Reinforcement learning based path planning for Mecanum wheeled unmanned vehicles in unknown environments
International Conference on Mechatronic Engineering and Artificial Intelligence (MEAI 2024) ; 2024 ; Shenyang, China
Proc. SPIE ; 13555
2025-04-18
Conference paper
Electronic Resource
English
Unmanned Aerial Vehicles Path Planning Based on Deep Reinforcement Learning
Springer Verlag | 2019