Stable and reliable wireless communications are essential for safe, efficient, and comfortable autonomous driving, and they demand substantial channel resources. Cognitive radio (CR) alleviates spectrum shortage by exploiting spectrum resources left unused by primary users (PUs). Conventional CR-based vehicle-to-everything (V2X) communications, however, still suffer from spectrum scarcity when the demand for wireless resources is excessive. Most previous analyses of this problem have adopted oversimplified V2X models that assume constant vehicle velocity and coarsely discretize vehicle positions. More realistic V2X models are too topologically complex and too dynamically variant for conventional off-policy reinforcement learning (RL) algorithms, such as Q-learning and the deep Q-network (DQN) built on replay memories, to find stable solutions to the resulting optimization problems. In view of this situation, this paper first builds a precise and realistic autonomous driving testbed with a 3D engine. In the testbed, all vehicles move in autopilot mode with random velocities and directions, providing more realistic traffic flow, positions, and motion for all vehicles. Owing to the constraint on the Kullback-Leibler divergence between the updated and previous policies, on-policy RL algorithms such as proximal policy optimization (PPO) can find stable solutions to complex optimization problems. Inspired by this property, we extend the single-agent PPO algorithm to a multi-agent algorithm, MA-PPO (multi-action proximal policy optimization). Computer simulations on the new testbed show that MA-PPO gradually converges to more stable solutions and achieves steadier and more efficient data transmission than DQN.
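For reference, the stability property the abstract attributes to PPO comes from its constrained policy update; the following is a minimal sketch of the standard PPO formulation (Schulman et al., 2017), not of this paper's specific MA-PPO objective. The KL-penalized variant keeps the new policy $\pi_\theta$ close to the previous policy $\pi_{\theta_{\text{old}}}$,

\[
L^{\mathrm{KLPEN}}(\theta) = \hat{\mathbb{E}}_t\!\left[ \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t \;-\; \beta\, \mathrm{KL}\!\left[ \pi_{\theta_{\text{old}}}(\cdot \mid s_t) \,\big\|\, \pi_\theta(\cdot \mid s_t) \right] \right],
\]

while the more common clipped surrogate bounds the probability ratio $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\text{old}}}(a_t \mid s_t)$ directly:

\[
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[ \min\!\left( r_t(\theta)\, \hat{A}_t,\; \mathrm{clip}\!\left( r_t(\theta),\, 1-\epsilon,\, 1+\epsilon \right) \hat{A}_t \right) \right].
\]

Here $\hat{A}_t$ is an advantage estimate and $\beta$, $\epsilon$ are hyperparameters. Either variant limits how far each update can move the policy, which is the source of the convergence stability the abstract contrasts with replay-memory-based DQN.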
Reinforcement Learning-Based Cognitive Radio Transmission Scheduling in Vehicular Systems
2023-06-01
2626266 bytes
Conference paper
Electronic Resource
English