As research on and applications of large language models (LLMs) become increasingly sophisticated, resource-limited mobile terminals struggle to run large-model inference tasks efficiently. Traditional deep reinforcement learning (DRL) based approaches have been used to offload LLM inference tasks to servers, but existing solutions suffer from data inefficiency, insensitivity to latency requirements, and poor adaptability to varying task loads. In this paper, we address the joint LLM inference task offloading and resource allocation problem in cloud-edge networks with an active inference algorithm that uses rewardless guidance: expected future free energy drives both the offloading decisions and the resource allocation. Experimental results show that the proposed method outperforms mainstream DRL algorithms, uses data more efficiently, and adapts better to changing task load scenarios.
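
For illustration only: the minimal sketch below shows how an expected-free-energy criterion could rank candidate offloading actions in the spirit of active inference. The action set, latency outcome bins, probability values, and function names are hypothetical placeholders and are not taken from the paper.

    # Illustrative expected-free-energy (EFE) based offloading decision.
    # NOT the authors' algorithm: generative model, preferences, and
    # action set below are hypothetical.
    import numpy as np

    def expected_free_energy(q_outcome, p_preferred, state_entropy):
        """EFE of one action: risk (KL divergence between the predicted and
        preferred outcome distributions) plus ambiguity (expected state
        uncertainty)."""
        eps = 1e-12
        risk = np.sum(q_outcome * (np.log(q_outcome + eps) - np.log(p_preferred + eps)))
        return risk + state_entropy

    # Hypothetical outcome bins: {low, medium, high} end-to-end latency.
    # Preferences encode the latency requirement (low latency strongly preferred).
    p_preferred = np.array([0.90, 0.09, 0.01])

    actions = {
        # action: (predicted latency distribution, predicted state entropy)
        "run_locally":      (np.array([0.10, 0.30, 0.60]), 0.20),
        "offload_to_edge":  (np.array([0.60, 0.30, 0.10]), 0.35),
        "offload_to_cloud": (np.array([0.40, 0.40, 0.20]), 0.50),
    }

    # Active inference selects the action minimizing expected free energy.
    G = {a: expected_free_energy(q, p_preferred, h) for a, (q, h) in actions.items()}
    best_action = min(G, key=G.get)
    print(G, "->", best_action)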


    Title:

    Large Language Models (LLMs) Inference Offloading and Resource Allocation in Cloud-Edge Networks: An Active Inference Approach


    Contributors:
    Fang, Jingcheng (author) / He, Ying (author) / Yu, F. Richard (author) / Li, Jianqiang (author) / Leung, Victor C. (author)


    Publication date:

    2023-10-10


    Format / Extent:

    2445044 bytes





    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Joint Offloading and Resource Allocation for Scalable Vehicular Edge Computing

    Wu, Wei / Wang, Qie / Wu, Xuanli et al. | IEEE | 2020


    Delay Optimized Computation Offloading and Resource Allocation for Mobile Edge Computing

    Long, Long / Liu, Zichen / Zhou, Yiqing et al. | IEEE | 2019



    Resource Allocation and Offloading Strategy for UAV-Assisted LEO Satellite Edge Computing

    Hongxia Zhang / Shiyu Xi / Hongzhao Jiang et al. | DOAJ | 2023

    Free access