This paper proposes a dynamic spectrum sharing scheme for an unmanned aerial vehicle (UAV)-assisted cognitive radio network. The UAV serves as a secondary base station that provides communication services to multiple secondary users (SUs) by adaptively exploiting the spatio-temporal spectrum opportunities of multiple device-to-device primary users (PUs), where each PU’s spectrum occupancy follows a two-state Markov process. We jointly optimize the UAV’s trajectory and user association to maximize the expected cumulative energy efficiency subject to the PUs’ interference constraints. We formulate this problem as a partially observable Markov decision process (POMDP) in which the UAV can observe only the spectrum occupancy status of adjacent PUs. Because the PUs’ spectrum occupancy statistics are unknown, we propose a model-free reinforcement learning algorithm, the partially observable double deep Q network (PO-DDQN), to obtain a near-optimal spectrum sharing policy. Simulation results show that the proposed algorithm outperforms the baseline policy gradient (PG) algorithm in both convergence speed and the UAV’s energy efficiency. Spectrum utilization efficiency improves further when the UAV has a wider observation radius or when the PUs’ spectrum occupancy exhibits stronger temporal correlation.
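
As a rough illustration of the system model described above (not the paper's code), the sketch below simulates each PU's spectrum occupancy as a two-state Markov chain and builds a partial observation that reveals only the PUs within the UAV's observation radius. The transition probabilities, the observation radius, and all function names are illustrative assumptions.

    import numpy as np

    # Illustrative sketch: two-state Markov PU occupancy (0 = idle, 1 = busy) and
    # a partial observation limited to PUs inside the UAV's observation radius.
    # All constants below are assumptions, not values from the paper.

    P_IDLE_TO_BUSY = 0.2   # assumed Pr(idle -> busy); smaller switching probabilities
    P_BUSY_TO_IDLE = 0.3   # correspond to stronger temporal correlation
    OBS_RADIUS = 100.0     # assumed observation radius in metres

    rng = np.random.default_rng(0)

    def step_occupancy(state):
        """Advance all PUs' two-state Markov occupancy by one time slot."""
        switch_from_idle = (rng.random(state.shape) < P_IDLE_TO_BUSY).astype(int)
        switch_from_busy = (rng.random(state.shape) < P_BUSY_TO_IDLE).astype(int)
        return np.where(state == 0, switch_from_idle, 1 - switch_from_busy)

    def partial_observation(state, pu_xy, uav_xy):
        """Occupancy of PUs within the observation radius; -1 marks unobserved PUs."""
        dist = np.linalg.norm(pu_xy - uav_xy, axis=1)
        obs = np.full_like(state, -1)
        visible = dist <= OBS_RADIUS
        obs[visible] = state[visible]
        return obs

    # Example: 5 PUs with random positions, UAV hovering at the origin.
    state = rng.integers(0, 2, size=5)
    pu_xy = rng.uniform(-200, 200, size=(5, 2))
    for _ in range(3):
        state = step_occupancy(state)
        print(partial_observation(state, pu_xy, np.zeros(2)))

In a POMDP formulation such as the paper's, this partial observation vector (rather than the full occupancy state) would be the input to the PO-DDQN agent.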


    Title: Deep Reinforcement Learning for UAV-Assisted Spectrum Sharing Under Partial Observability

    Contributors: Zhang, Sigen (author) / Wang, Zhe (author) / Gao, Guanyu (author) / Li, Jun (author) / Zhang, Jie (author) / Yin, Ziyan (author)

    Publication date: 2023-10-10

    Size: 5327418 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar items:

    Hierarchical Reinforcement Learning Under Mixed Observability
    Nguyen, Hai / Yang, Zhihan / Baisero, Andrea et al. | TIBKAT | 2023

    Hierarchical Reinforcement Learning Under Mixed Observability
    Nguyen, Hai / Yang, Zhihan / Baisero, Andrea et al. | Springer Verlag | 2022

    Fast Spectrum Sharing in Vehicular Networks: A Meta Reinforcement Learning Approach
    Huang, Kai / Luo, Zezhou / Liang, Le et al. | IEEE | 2022

    Optimization of ride-sharing with passenger transfer via deep reinforcement learning
    Wang, Dujuan / Wang, Qi / Yin, Yunqiang et al. | Elsevier | 2023