This paper studies the allocation of shared resources between vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) links in vehicle-to-everything (V2X) communications. In existing algorithms, dynamic vehicular environments and the quantization of continuous power are the bottlenecks that prevent an effective and timely resource allocation policy. In this paper, we develop two algorithms to address these difficulties. First, we propose a deep reinforcement learning (DRL)-based resource allocation algorithm to improve the performance of both V2I and V2V links. Specifically, the algorithm uses a deep Q-network (DQN) to solve the sub-band assignment problem and deep deterministic policy gradient (DDPG) to solve the continuous power allocation problem. Second, we propose a meta-based DRL algorithm to enhance the fast adaptability of the resource allocation policy in dynamic environments. Numerical results demonstrate that the proposed DRL-based algorithm significantly outperforms a DQN-based algorithm that quantizes continuous power. In addition, the proposed meta-based DRL algorithm achieves the required fast adaptation in a new environment with limited experience.
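
To make the hybrid action space concrete, the sketch below pairs a DQN head (discrete sub-band choice) with a DDPG actor (continuous transmit power), so power never has to be quantized. This is a minimal illustrative sketch, not the authors' implementation; the state dimension, number of sub-bands, power budget, and network shapes are all assumptions made for demonstration.

```python
# Hybrid discrete/continuous action selection: DQN for sub-bands, DDPG for power.
# All dimensions below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

STATE_DIM = 16      # assumed size of the local CSI/QoS observation
NUM_SUBBANDS = 4    # assumed number of orthogonal sub-bands
P_MAX = 23.0        # assumed V2V transmit power budget in dBm

class DQN(nn.Module):
    """Q-network over the discrete sub-band choices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SUBBANDS),
        )
    def forward(self, s):
        return self.net(s)  # one Q-value per sub-band

class DDPGActor(nn.Module):
    """Deterministic policy mapping (state, chosen sub-band) to a power level."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_SUBBANDS, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # squash to (0, 1)
        )
    def forward(self, s, subband_onehot):
        x = torch.cat([s, subband_onehot], dim=-1)
        return P_MAX * self.net(x)  # scale to the power budget

def act(state, dqn, actor, eps=0.05):
    """Greedy sub-band from the DQN, continuous power from the DDPG actor."""
    with torch.no_grad():
        sb = dqn(state).argmax(dim=-1)
        if torch.rand(()) < eps:  # epsilon-greedy exploration on the discrete part
            sb = torch.randint(NUM_SUBBANDS, sb.shape)
        onehot = nn.functional.one_hot(sb, NUM_SUBBANDS).float()
        power = actor(state, onehot)
    return sb, power

state = torch.randn(1, STATE_DIM)
subband, power = act(state, DQN(), DDPGActor())
print(f"sub-band {subband.item()}, power {power.item():.2f} dBm")
```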
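The abstract only states that a meta-based DRL algorithm enables fast adaptation with limited experience; it does not specify the meta-update. The sketch below uses a Reptile-style outer loop as one plausible realization, where `sample_env_loss` is a hypothetical callback that would return the DQN/DDPG training loss on transitions collected in one sampled vehicular environment.

```python
# Reptile-style meta-adaptation sketch (an assumption for illustration):
# adapt a copy of the meta-network in each sampled environment for a few
# gradient steps, then move the meta-parameters toward the adapted ones.
import copy
import torch

def reptile_meta_step(meta_net, sample_env_loss, num_envs=4,
                      inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    meta_params = [p.detach().clone() for p in meta_net.parameters()]
    deltas = [torch.zeros_like(p) for p in meta_params]
    for _ in range(num_envs):
        net = copy.deepcopy(meta_net)             # fresh copy for this environment
        opt = torch.optim.SGD(net.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # few-shot inner adaptation
            loss = sample_env_loss(net)           # hypothetical per-env DRL loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        for d, p_new, p0 in zip(deltas, net.parameters(), meta_params):
            d += (p_new.detach() - p0) / num_envs
    with torch.no_grad():
        for p, d in zip(meta_net.parameters(), deltas):
            p += meta_lr * d                      # Reptile outer update
```

At deployment, the meta-trained parameters would be fine-tuned with the same inner loop on the small amount of experience available in the new environment.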


    Title:
    Meta-Reinforcement Learning Based Resource Allocation for Dynamic V2X Communications


    Contributors:
    Yuan, Y (author) / Zheng, G (author) / Wong, KK (author) / Letaief, KB (author)

    Publication date:
    2021-07-26


    Remarks:
    IEEE Transactions on Vehicular Technology (2021) (In press).


    Type of media:
    Article (Journal)


    Type of material:
    Electronic Resource


    Language:
    English


    Classification:
    DDC: 629