Fog computing reduces network latency by moving computational resources close to where data is generated. Vehicular fog computing (VFC) is an emerging computing paradigm in which fog nodes deployed on moving vehicles, i.e., vehicular fog nodes (VFNs), complement stationary fog nodes (e.g., those co-located with cellular base stations) to satisfy the spatio-temporally varying demand for computing resources in a cost-efficient manner. On-demand VFC (ODVFC) supports the dynamic routing of VFNs to the places where demand emerges. Unlike previous work on capacity planning and vehicle routing that relies on compute-intensive optimization methods such as integer linear programming (ILP), this paper explores the feasibility of applying reinforcement learning to dynamic capacity planning in a time-efficient manner. Specifically, we propose applying a multi-agent reinforcement learning (MARL) method, namely actor-critic, to learn the VFN routing policies. This approach allows distributed VFNs to cooperatively maximize the techno-economic performance of ODVFC. For evaluation, we built an open-source VFC simulation platform that integrates vehicular traffic simulation with 5G NR V2X and a MARL environment. Compared with decentralized learning (i.e., each VFN independently learns its routing policy), centralized learning (i.e., a single global agent routes all VFNs), and ILP, our proposal achieves 8.3% higher revenue and serves 13.2% more tasks than decentralized learning, and its execution time is 40.6% and 83% lower than that of centralized learning and ILP, respectively, at the cost of only 14% lower revenue than both. It also scales to real-life scenarios with large numbers of users and VFNs.
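The record contains no code; the following is a minimal, self-contained sketch of how independent actor-critic agents could learn VFN routing in a toy setting. Everything concrete here (the number of VFNs and demand zones, the Poisson demand model, the revenue-splitting reward, and the learning rates) is an illustrative assumption, not taken from the paper, whose actual MARL design may differ substantially.

```python
# Hedged sketch (not the paper's implementation): one actor-critic agent per
# vehicular fog node (VFN), each choosing which demand zone to drive to.
import numpy as np

rng = np.random.default_rng(0)
N_VFNS, N_ZONES, EPISODES = 4, 5, 2000        # illustrative sizes
ALPHA_PI, ALPHA_V = 0.05, 0.1                 # illustrative learning rates

# Per-agent actor (softmax preferences over zones) and critic
# (a scalar baseline, since this toy problem is stateless).
prefs = np.zeros((N_VFNS, N_ZONES))
baselines = np.zeros(N_VFNS)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for ep in range(EPISODES):
    demand = rng.poisson(lam=[1.0, 2.0, 3.0, 4.0, 5.0])   # tasks per zone
    actions = np.array([rng.choice(N_ZONES, p=softmax(prefs[i]))
                        for i in range(N_VFNS)])
    # Reward: a zone's demand (revenue) is split among the VFNs that drove
    # there, so joint revenue is maximized only if agents spread out.
    rewards = np.array([demand[a] / np.count_nonzero(actions == a)
                        for a in actions])
    # One-step advantage actor-critic update, performed independently per agent.
    for i in range(N_VFNS):
        advantage = rewards[i] - baselines[i]
        baselines[i] += ALPHA_V * advantage              # critic update
        grad_log_pi = -softmax(prefs[i])
        grad_log_pi[actions[i]] += 1.0                   # d log pi(a) / d prefs
        prefs[i] += ALPHA_PI * advantage * grad_log_pi   # actor update

print("preferred zone per VFN:", [int(np.argmax(p)) for p in prefs])
```

Splitting each zone's demand among the VFNs that chose it provides a simple cooperative signal, loosely mirroring the cooperative routing objective described in the abstract; the paper's revenue model and state representation are richer than this bandit-style toy.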
Multi-Agent Reinforcement Learning-Based Capacity Planning for On-Demand Vehicular Fog Computing
IEEE Transactions on Intelligent Vehicles; 10(2); 1043-1057
2025-02-01
3727580 bytes
Article (Journal)
Electronic Resource
English
Multi-Agent Deep Reinforcement Learning in Vehicular OCC
IEEE | 2022
Cooperative perception in vehicular networks using multi-agent reinforcement learning
BASE | 2021