Intelligent Connected Vehicles (ICVs) impose stringent requirements on real-time computational services, but limited onboard resources and the high latency of remote cloud servers constrain traditional solutions. Unmanned Aerial Vehicle (UAV)-assisted Mobile Edge Computing (MEC), which deploys computing and storage resources at the network edge, offers a promising alternative. In UAV-assisted vehicular networks, a critical challenge is to jointly optimize content and service caching, computation offloading, and UAV trajectories so as to maximize overall system performance. To this end, we introduce system efficiency as a unified optimization objective that jointly captures cache hit rate, task computation latency, system energy consumption, and resource allocation fairness. The resulting problem couples discrete decisions (caching and offloading) with continuous variables (UAV trajectories) and is highly dynamic and non-convex, which makes it difficult for traditional optimization methods; at the same time, existing multi-agent deep reinforcement learning (MADRL) methods often suffer from training instability and poor convergence in such dynamic, non-stationary environments. To address these challenges, this paper proposes a MADRL-based joint optimization approach: we model the problem as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) and adopt the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm, which follows the Centralized Training Decentralized Execution (CTDE) paradigm, to strike a judicious balance among cache hit rate, task delay, energy consumption, and fairness. Simulation results demonstrate that, compared with representative baseline methods, the proposed MAPPO approach achieves significantly higher cumulative rewards and a cache hit rate of approximately 82%.
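The abstract does not give the exact form of the system efficiency metric. As an illustration only, and not the authors' definition, such a unified objective is commonly constructed as a weighted combination of normalized terms, with Jain's index capturing allocation fairness:

    \eta = w_1\,h \;-\; w_2\,\frac{T}{T_{\max}} \;-\; w_3\,\frac{E}{E_{\max}} \;+\; w_4\,\frac{\bigl(\sum_{i=1}^{N} r_i\bigr)^2}{N \sum_{i=1}^{N} r_i^2}

where h is the cache hit rate, T and E are task latency and energy consumption normalized by hypothetical budgets T_max and E_max, the final term is Jain's fairness index over the resources r_i allocated to N vehicles, and the weights w_1 through w_4 are assumed tuning parameters; the paper's actual formulation may differ.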
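Likewise, the CTDE paradigm behind MAPPO can be sketched in a few lines: each agent's actor maps only its local observation to an action (decentralized execution), while one centralized critic evaluates the joint observation during training. The three-agent setup, observation and action dimensions, and network sizes below are illustrative assumptions, not details taken from the paper.

    # Minimal CTDE sketch in the spirit of MAPPO (illustrative assumptions:
    # 3 UAV agents, 16-dim local observations, 5 discrete actions).
    import torch
    import torch.nn as nn

    N_AGENTS, OBS_DIM, ACT_DIM = 3, 16, 5

    class Actor(nn.Module):
        """Decentralized policy: maps one agent's local observation to action logits."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                     nn.Linear(64, ACT_DIM))
        def forward(self, obs):
            return torch.distributions.Categorical(logits=self.net(obs))

    class CentralCritic(nn.Module):
        """Centralized value function: conditioned on all agents' observations."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(N_AGENTS * OBS_DIM, 64), nn.Tanh(),
                                     nn.Linear(64, 1))
        def forward(self, joint_obs):
            return self.net(joint_obs)

    actors = [Actor() for _ in range(N_AGENTS)]
    critic = CentralCritic()

    obs = torch.randn(N_AGENTS, OBS_DIM)                    # one local observation per agent
    actions = [a(o).sample() for a, o in zip(actors, obs)]  # decentralized execution
    value = critic(obs.flatten())                           # centralized training signal
    print(actions, value.item())

At execution time only the actors would be deployed on the UAVs; the centralized critic exists solely to stabilize training, which is what makes this paradigm suited to the non-stationary multi-agent setting described above.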





    Title: Joint Caching and Computation in UAV-Assisted Vehicle Networks via Multi-Agent Deep Reinforcement Learning


    Contributors: Yuhua Wu (author) / Yuchao Huang (author) / Ziyou Wang (author) / Changming Xu (author)


    Publication date: 2025




    Type of media: Article (Journal)


    Type of material: Electronic Resource


    Language: Unknown




    Similar items:

    Cooperative edge caching via multi agent reinforcement learning in fog radio access networks

    Chang, Q. (Qi) / Jiang, Y. (Yanxiang) / Zheng, F.-C. (Fu-Chun) et al. | BASE | 2022

    Free access


    Multi-Agent Reinforcement Learning for Cooperative Coded Caching via Homotopy Optimization

    Xiongwei Wu / Jun Li / Ming Xiao et al. | BASE | 2021

    Free access

    UAV-Assisted Relay Communication: A Multi-Agent Deep Reinforcement Learning Approach

    Huang, Longqian / Sun, Hongguang / Gao, Yinjie et al. | IEEE | 2024


    Scaling Collaborative Space Networks with Deep Multi-Agent Reinforcement Learning

    Ma, Ricky / Hernandez, Gabe / Hernandez, Carrie | IEEE | 2023