In highway traffic environments, lane-change decisions are a critical and difficult task for an autonomous agent. The traditional deep Q-network (DQN) has been the main architecture used for this task. However, DQN still faces two major challenges in lane-change decision making for autonomous driving: 1) instability in the training process leads to frequent collisions, and 2) a single reward function limits the agent's ability to learn representative domain knowledge about highways. To address these issues, this paper proposes a multi-reward DQN for highway driving decisions that combines long short-term memory (LSTM) and self-attention; we call this model LAMRDQN. It incorporates an LSTM and a self-attention mechanism into the Q-network to reduce frequent collisions. In addition, the single reward is replaced with multiple reward functions, each emphasizing a specific factor such as speed, obeying traffic rules, or lane changing. The multiple rewards enable the model to capture and optimize distinct aspects of the highway driving task, thereby acquiring more representative domain knowledge. The experimental results show that, in the highway environment, the method effectively mitigates the frequent collisions and poor learning performance that occur when vehicles change lanes, and achieves better overtaking performance than related reinforcement learning algorithms.
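To make the described architecture concrete, the following is a minimal PyTorch sketch of a Q-network that feeds a history of observations through an LSTM and a self-attention layer, together with an illustrative multi-component reward. All layer sizes, the action set, and the reward terms and weights are assumptions for illustration; the paper's exact design is not given here.

```python
import torch
import torch.nn as nn


class LAMRDQNet(nn.Module):
    """Sketch of a Q-network combining an LSTM with self-attention.

    Dimensions and the action set are hypothetical, not the paper's.
    """

    def __init__(self, obs_dim: int, hidden_dim: int = 128,
                 n_heads: int = 4, n_actions: int = 5):
        super().__init__()
        # LSTM summarizes the recent history of traffic observations.
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # Self-attention re-weights the LSTM outputs across time steps.
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads,
                                          batch_first=True)
        # Linear head maps attended features to one Q-value per discrete
        # action (e.g., keep lane, change left/right, speed up/slow down).
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim)
        h, _ = self.lstm(obs_seq)
        a, _ = self.attn(h, h, h)      # self-attention: query = key = value
        return self.head(a[:, -1])     # Q-values from the last time step


def multi_reward(speed, speed_limit, rule_violation, lane_changed,
                 w_speed=1.0, w_rule=2.0, w_lane=0.5):
    """Hypothetical multi-reward: one term per driving factor.

    The weights and functional forms are illustrative assumptions only.
    """
    r_speed = speed / speed_limit              # favor faster (legal) driving
    r_rule = -1.0 if rule_violation else 0.0   # penalize rule violations
    r_lane = 0.1 if lane_changed else 0.0      # small bonus for lane changes
    return w_speed * r_speed + w_rule * r_rule + w_lane * r_lane
```

The point of the separate reward terms is that each driving factor can be tuned and diagnosed independently before being combined, rather than entangling speed, rule compliance, and lane-change behavior in one scalar.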
Lane-Change Decision of Automatic Driving Based on Reinforcement Learning Framework
Transportation Research Record: Journal of the Transportation Research Board, Vol. 2679, No. 2, pp. 187–198
2024-09-09