To reduce the cost of successive interference cancellation in non-orthogonal multiple access (NOMA), we propose a resource allocation scheme that combines orthogonal multiple access (OMA) and NOMA. The scheme uses deep learning to select the appropriate access mode for the current communication environment, and it jointly allocates subcarriers and power to users with a deep Q network (DQN) and a multi-agent deep deterministic policy gradient (MADDPG) network. In addition, an adaptive mechanism that combines online and offline learning is introduced into the allocation scheme so that it adapts flexibly to the communication environment. Results show that the proposed scheme achieves better sum-rate performance. To cope better with environmental changes and make the resource allocation strategy more robust, we further propose a resource allocation algorithm that combines transfer learning with deep reinforcement learning. The algorithm effectively improves the model convergence speed when the communication environment changes, and it allows the subcarrier allocation network and the power allocation network to be transferred simultaneously or separately, depending on the environment.
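A minimal sketch of the transfer idea described above, assuming PyTorch and illustrative problem sizes: a DQN-style network for subcarrier selection and per-user actor networks for power control are defined, and pretrained weights are copied into fresh networks when the environment changes, either together or separately. The class names (SubcarrierDQN, PowerActor), the dimensions, and the transfer_weights helper are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_USERS, N_SUBCARRIERS = 4, 8            # assumed problem size, not from the paper
STATE_DIM = N_USERS * N_SUBCARRIERS      # assumed channel-state dimensionality


class SubcarrierDQN(nn.Module):
    """Q-network mapping the observed channel state to Q-values over subcarrier choices."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class PowerActor(nn.Module):
    """Per-user actor (MADDPG-style) producing a normalised transmit power in [0, 1]."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def transfer_weights(source: nn.Module, target: nn.Module) -> None:
    """Warm-start a network for the new environment with parameters trained in the old one."""
    target.load_state_dict(source.state_dict())


if __name__ == "__main__":
    # Networks pretrained in the source environment (random stand-ins here).
    src_dqn = SubcarrierDQN(STATE_DIM, N_SUBCARRIERS)
    src_actors = [PowerActor(STATE_DIM) for _ in range(N_USERS)]

    # Fresh networks for the changed environment; transfer either network or both,
    # mirroring the option to migrate subcarrier and power networks separately.
    new_dqn = SubcarrierDQN(STATE_DIM, N_SUBCARRIERS)
    transfer_weights(src_dqn, new_dqn)

    new_actors = [PowerActor(STATE_DIM) for _ in range(N_USERS)]
    for src, dst in zip(src_actors, new_actors):
        transfer_weights(src, dst)

    # Fine-tuning in the new environment would continue from these warm-started weights
    # instead of from random initialisation.
    state = torch.rand(1, STATE_DIM)
    print(new_dqn(state).shape, new_actors[0](state).shape)
```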


    Title:

    Hybrid Multiple Access Resource Allocation based on Multi-agent Deep Transfer Reinforcement Learning


    Contributors:
    Zhang, Yijian (author) / Wang, Xiaoming (author) / Li, Dapeng (author) / Xu, Youyun (author)


    Publication date:

    2022-06-01


    Format / Extent:

    712996 bytes





    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English