Space-assisted vehicular networks (SAVN) provide seamless coverage and on-demand data processing services for user vehicles (UVs). However, the ultra-reliable and low-latency communication (URLLC) demands imposed by emerging vehicular applications are difficult to satisfy in SAVN with existing computation offloading techniques. Traditional deep reinforcement learning algorithms are also ill-suited to highly dynamic SAVN because they underutilize environment observations. This paper presents an AsynchronouS federaTed deep Q-learning (DQN)-basEd and URLLC-aware cOmputatIon offloaDing algorithm (ASTEROID) that maximizes throughput under long-term URLLC constraints. Specifically, we first establish an extreme value theory-based URLLC constraint model. Second, task offloading and computation resource allocation are decoupled by employing Lyapunov optimization. Finally, an asynchronous federated DQN-based (AF-DQN) algorithm is presented to solve the UV-side task offloading problem, while the server-side computation resource allocation is handled by a queue backlog-aware algorithm. Simulation results verify that ASTEROID achieves superior throughput and URLLC performance.
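The abstract outlines the AF-DQN workflow: each UV trains a local DQN for its own offloading decisions, and a server merges the vehicle models asynchronously as they arrive. Since the paper's actual algorithm is not reproduced in this record, the following is only a minimal toy sketch of that pattern; the state features, reward, linear Q-approximation, and staleness-weighted aggregation rule are illustrative assumptions, not the ASTEROID specification.

```python
# Illustrative sketch only: a toy asynchronous federated DQN loop for
# local-vs-offload decisions. Environment, reward, and feature layout are
# hypothetical placeholders, not the paper's AF-DQN design.
import numpy as np

N_FEATURES = 4        # e.g. queue backlog, channel gain, task size, deadline (assumed)
N_ACTIONS = 2         # 0 = compute locally, 1 = offload (assumed)
LR, GAMMA, EPS = 0.01, 0.95, 0.1

def q_values(w, s):
    """Linear Q-approximation: one weight row per action."""
    return w @ s

def local_dqn_step(w, s, rng):
    """One epsilon-greedy decision and semi-gradient TD(0) update on a vehicle."""
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q_values(w, s)))
    # Toy reward/transition: offloading pays off when the (assumed) backlog feature is high.
    r = s[0] if a == 1 else 1.0 - s[0]
    s_next = rng.random(N_FEATURES)
    td_err = r + GAMMA * np.max(q_values(w, s_next)) - q_values(w, s)[a]
    w[a] += LR * td_err * s
    return w

def asynchronous_aggregate(w_global, w_local, staleness, beta=0.5):
    """Server-side asynchronous merge: staler local models receive less weight."""
    alpha = beta / (1.0 + staleness)
    return (1.0 - alpha) * w_global + alpha * w_local

rng = np.random.default_rng(0)
w_global = np.zeros((N_ACTIONS, N_FEATURES))
vehicles = [w_global.copy() for _ in range(3)]

for rnd in range(100):
    uv = rnd % len(vehicles)                      # vehicles report back at different times
    for _ in range(5):                            # a few local training steps per upload
        vehicles[uv] = local_dqn_step(vehicles[uv], rng.random(N_FEATURES), rng)
    w_global = asynchronous_aggregate(w_global, vehicles[uv], staleness=rnd % 3)
    vehicles[uv] = w_global.copy()                # vehicle syncs to the merged model

print("learned Q-weights:\n", w_global)
```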




    Title:

    Asynchronous Federated Deep Reinforcement Learning-Based URLLC-Aware Computation Offloading in Space-Assisted Vehicular Networks


    Contributors:
    Pan, Chao (Author) / Wang, Zhao (Author) / Liao, Haijun (Author) / Zhou, Zhenyu (Author) / Wang, Xiaoyan (Author) / Tariq, Muhammad (Author) / Al-Otaibi, Sattam (Author)


    Publication date:

    2023-07-01


    Format / Extent:

    2719626 bytes




    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English