In autonomous driving and vehicular networks, effective offloading and processing of computational tasks are vital for sustaining high performance in multi-lane environments. However, the dynamic nature of these environments, compounded by the diversity and interdependencies of tasks, makes it difficult to devise effective offloading strategies. This paper introduces a reinforcement learning-based task offloading strategy designed for multi-lane vehicular environments with interdependent tasks. The strategy is grounded in a Semi-Markov Decision Process (SMDP) model that captures task dependencies, vehicle dynamics, and the heterogeneity of computing resources. An innovative aspect of this work is the incorporation of Meta Reinforcement Learning (Meta-RL) into the task offloading framework: Meta-RL allows the system to generalize across vehicular environments and to adapt rapidly to new traffic conditions and task requirements. By learning over multiple tasks, the Meta-RL-based Actor-Critic algorithm optimizes the offloading policy to maximize task success rates while minimizing latency and energy consumption. Extensive simulations demonstrate that the proposed Meta-RL strategy outperforms traditional approaches, offering improved robustness, scalability, and efficiency across diverse vehicular conditions. This research thus provides a novel task offloading method for vehicular networks that significantly enhances computational efficiency and system stability.
MetaRL-Based Task Offloading Strategy in a Dependency-Aware Multi-Lane Environment
2024-09-27
Conference paper
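As a concrete illustration of the kind of algorithm the abstract describes, the sketch below pairs a small actor-critic network with a Reptile-style meta-update over randomly drawn traffic scenarios. Everything in it is an assumption made for illustration: the toy offloading environment and its latency/energy model, the three-action space (local compute, offload to RSU, offload to neighbor vehicle), the network sizes, and the choice of Reptile as the meta-learner. The paper's actual SMDP formulation and Meta-RL Actor-Critic algorithm may differ in all of these details.

# Illustrative sketch only: a Reptile-style meta-update wrapped around a
# one-step actor-critic for a toy vehicular offloading environment.
# Environment dynamics, reward weights, and network sizes are assumptions,
# not the paper's exact SMDP formulation.
import copy
import numpy as np
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_ACTIONS = 3  # 0: compute locally, 1: offload to RSU, 2: offload to neighbor

class ToyOffloadEnv:
    """Hypothetical stand-in for one traffic scenario: each scenario fixes
    the edge-server load and channel quality, then emits random task states."""
    def __init__(self, edge_load, channel_gain):
        self.edge_load, self.channel_gain = edge_load, channel_gain

    def reset(self):
        # state: [task size, deadline, vehicle speed, edge load]
        self.s = np.random.rand(4).astype(np.float32)
        self.s[3] = self.edge_load
        return self.s

    def step(self, a):
        size, deadline, speed, load = self.s
        # assumed latency model per action; faster vehicles degrade V2V links
        latency = [size * 2.0,
                   size / self.channel_gain + load,
                   size / (self.channel_gain * max(0.2, 1.0 - speed))][a]
        energy = [size * 1.5, size * 0.4, size * 0.6][a]
        # reward: deadline hit/miss bonus minus an assumed energy weight
        return (1.0 if latency <= deadline else -1.0) - 0.5 * energy

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4, 32), nn.Tanh())
        self.pi, self.v = nn.Linear(32, N_ACTIONS), nn.Linear(32, 1)

    def forward(self, s):
        h = self.body(s)
        return Categorical(logits=self.pi(h)), self.v(h)

def inner_adapt(model, env, steps=20, lr=1e-2):
    """Plain one-step actor-critic updates inside a single traffic scenario."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        s = torch.from_numpy(env.reset())
        dist, value = model(s)
        a = dist.sample()
        r = torch.tensor(env.step(a.item()), dtype=torch.float32)
        advantage = r - value.squeeze()
        # policy-gradient term uses the detached advantage; squared term trains the critic
        loss = -dist.log_prob(a) * advantage.detach() + advantage.pow(2)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

meta = ActorCritic()
for it in range(100):  # Reptile-style outer loop over sampled scenarios
    env = ToyOffloadEnv(edge_load=np.random.rand(),
                        channel_gain=0.5 + np.random.rand())
    adapted = inner_adapt(copy.deepcopy(meta), env)
    with torch.no_grad():  # move meta-weights a fraction toward adapted weights
        for p_meta, p_task in zip(meta.parameters(), adapted.parameters()):
            p_meta += 0.1 * (p_task - p_meta)

Reptile is used here only because its outer loop reduces to a plain weight interpolation; a MAML-style second-order meta-gradient would slot into the same structure at the cost of a more involved outer update.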