As research on and applications of large language models (LLMs) become increasingly sophisticated, resource-limited mobile terminals find it difficult to run large-model inference tasks efficiently. Traditional deep reinforcement learning (DRL) based approaches have been used to offload LLM inference tasks to servers, but existing solutions suffer from poor data efficiency, insensitivity to latency requirements, and a lack of adaptability to task load variations. In this paper, we propose an active inference algorithm with rewardless guidance that uses expected future free energy to make offloading decisions and allocate resources for the LLM inference task offloading and resource allocation problem in cloud-edge network systems. Experimental results show that the proposed method outperforms mainstream DRL approaches, uses data more efficiently, and adapts better to changing task load scenarios.
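A minimal illustrative sketch of the decision rule the abstract describes follows: candidate offloading actions (run locally, offload to an edge server, offload to the cloud) are scored by expected free energy and the minimizer is chosen, with a preference distribution over outcomes standing in for a reward signal. This is not the authors' implementation; the outcome model, the EFE decomposition, and all names below are assumptions for illustration only.

    import numpy as np

    def expected_free_energy(pred_outcome_dist, preferred_outcome_dist, ambiguity):
        # Assumed decomposition: EFE ~ risk (KL divergence between the predicted
        # and preferred outcome distributions, e.g. over latency bins) plus
        # ambiguity (expected uncertainty of observations given states).
        eps = 1e-12
        risk = np.sum(pred_outcome_dist *
                      np.log((pred_outcome_dist + eps) / (preferred_outcome_dist + eps)))
        return risk + ambiguity

    def select_offloading_action(actions, predict_outcome, preferred_outcome_dist):
        # Score every candidate action (e.g. local, edge server k, cloud) using
        # the generative model's predicted outcome, then pick the EFE minimizer.
        # No reward is used; preferences over outcomes guide the choice.
        scores = []
        for a in actions:
            outcome_dist, ambiguity = predict_outcome(a)  # assumed model rollout
            scores.append(expected_free_energy(outcome_dist,
                                               preferred_outcome_dist,
                                               ambiguity))
        return actions[int(np.argmin(scores))]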


    Title: Large Language Models (LLMs) Inference Offloading and Resource Allocation in Cloud-Edge Networks: An Active Inference Approach

    Contributors:

    Publication date: 2023-10-10

    Size: 2445044 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English




    Joint Offloading and Resource Allocation for Scalable Vehicular Edge Computing

    Wu, Wei / Wang, Qie / Wu, Xuanli et al. | IEEE | 2020



    Delay Optimized Computation Offloading and Resource Allocation for Mobile Edge Computing

    Long, Long / Liu, Zichen / Zhou, Yiqing et al. | IEEE | 2019


    Resource Allocation and Offloading Strategy for UAV-Assisted LEO Satellite Edge Computing

    Zhang, Hongxia / Xi, Shiyu / Jiang, Hongzhao et al. | DOAJ | 2023
