Local planning is a critical and difficult task for intelligent vehicles in dynamic transportation environments. In this paper, a new method, Suppress Q Deep Q Network (SQDQN), which combines the traditional deep reinforcement learning method Deep Q Network (DQN) with information entropy, is proposed for local planning in automatic driving. In the proposed approach, the local planning strategy for complex traffic environments is established by an actor–critic network based on DQN; the method follows an execute-action, evaluate-action, update-network loop to explore the optimal local planning strategy. The proposed strategy does not rely on accurate modeling of the scene, so it is suitable for complex and changeable traffic scenes. At the same time, information entropy is used to evaluate the update process and determine the update range, addressing a common problem in such networks: over-estimation of action values degrades the performance of the strategy. Suppressing this over-estimation improves strategy performance. The trained local planning strategy is evaluated in three simulation scenarios: overtaking, following, and driving in hazardous situations. The results illustrate the advantages of the proposed SQDQN method in solving the local planning problem.
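The abstract does not give the exact entropy-based suppression rule, so the sketch below is only an illustrative guess at the general idea: the information entropy of the next-state action distribution is used to decide how much of the bootstrapped max-Q target to trust, damping over-estimated targets. The function name entropy_scaled_target and the trust factor are hypothetical, not taken from the paper.

import numpy as np

def softmax(x):
    # Numerically stable softmax over action values.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_scaled_target(q_next, reward, gamma=0.99):
    # Hypothetical SQDQN-style target: the Shannon entropy of the
    # next-state action distribution scales how much of the max-Q
    # bootstrap is trusted, suppressing over-estimated targets.
    p = softmax(q_next)                      # action preference distribution
    h = -np.sum(p * np.log(p + 1e-12))       # information entropy of the distribution
    h_max = np.log(len(q_next))              # entropy of a uniform distribution
    trust = 1.0 - h / h_max                  # low entropy -> confident -> full bootstrap
    return reward + gamma * trust * q_next.max()

# A next state with nearly uniform Q-values (high entropy) receives a
# heavily damped bootstrap, while a confident state keeps most of the max.
print(entropy_scaled_target(np.array([1.0, 1.01, 0.99]), reward=0.5))
print(entropy_scaled_target(np.array([5.0, 0.1, -2.0]), reward=0.5))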
Local Planning Strategy Based on Deep Reinforcement Learning Over Estimation Suppression
Int. J. Automot. Technol.
International Journal of Automotive Technology; 25(4); 837-848
2024-08-01
12 pages
Article (Journal)
Electronic Resource
English
Springer Verlag | 2024