Optimal Base Station Sleep Control via Multi-Agent Reinforcement Learning with Data-Driven Radio Environment Map Calibration

Machine learning technologies have become an invaluable means of operating mobile networks efficiently. In particular, multi-agent reinforcement learning (MARL) enables the cooperative learning of policies that put base stations (BSs) spread over a wide area to sleep for energy saving in network operation. However, if agents' policies are trained in an inaccurate simulation environment, their performance may be compromised when they are deployed in the real environment. In this paper, we propose a practical learning approach that obtains policies for BS sleep control via MARL with data-driven radio environment map (REM) calibration. In this approach, we first train REM calibration models as residual models using actual received power data collected in advance. By using these data-driven calibration models in conjunction with the statistical pathloss models provided by 3GPP, our network simulator accurately replicates the signal strengths from BSs as observed in the real environment. We then train policies for BS sleep control on the calibrated network simulator using an extended constrained policy optimization algorithm for MARL that explicitly handles constraint satisfaction on costs in addition to reward maximization. Numerical experiments demonstrate that BS sleep control with the policies derived from our learning approach both reduces the power consumption of BSs and guarantees quality of service (QoS) in network operation, without performance degradation upon deployment.
2024-06-24
Conference paper
English
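As an illustration of the calibration step described in the abstract, the following is a minimal sketch of a residual REM calibration model: a statistical 3GPP-style pathloss prediction is corrected by a regressor trained on measured received power. The simplified UMa LOS formula, the gradient-boosted regressor, the single distance feature, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def pathloss_3gpp_uma_los(d_m, fc_ghz):
    # Simplified 3GPP TR 38.901 UMa LOS pathloss [dB]; d_m in meters, fc in GHz.
    return 28.0 + 22.0 * np.log10(d_m) + 20.0 * np.log10(fc_ghz)

def predicted_rx_power(tx_power_dbm, d_m, fc_ghz):
    # Received power [dBm] predicted by the statistical model alone.
    return tx_power_dbm - pathloss_3gpp_uma_los(d_m, fc_ghz)

rng = np.random.default_rng(0)
d = rng.uniform(50.0, 500.0, size=(2000, 1))             # BS-UE distances [m]
fc_ghz, tx_dbm = 3.5, 43.0                               # carrier freq, Tx power
model_pred = predicted_rx_power(tx_dbm, d[:, 0], fc_ghz)

# Synthetic stand-in for the received power data "collected in advance";
# real data would come from drive tests or UE measurement reports.
measured = model_pred + 3.0 * np.sin(d[:, 0] / 80.0) + rng.normal(0.0, 1.0, 2000)

# Residual model: learn the gap between measurements and the statistical model.
residual_model = GradientBoostingRegressor().fit(d, measured - model_pred)

def calibrated_rx_power(d_m):
    # Calibrated REM value = statistical prediction + learned residual.
    d_m = np.atleast_2d(d_m).reshape(-1, 1)
    return predicted_rx_power(tx_dbm, d_m[:, 0], fc_ghz) + residual_model.predict(d_m)
```

With such a calibration in place, the network simulator could query `calibrated_rx_power` rather than the raw statistical model when computing the signal strength each user observes from a BS, which is the mechanism by which the simulator is aligned with the real environment before policy training.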
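The training step relies on an extended constrained policy optimization algorithm for MARL. In generic constrained-RL notation (illustrative notation, not necessarily the paper's), the kind of objective such an algorithm targets can be written as:

```latex
% Constrained objective: maximize expected discounted reward while keeping
% each expected discounted cost within its budget d_i (notation illustrative).
\max_{\pi} \; J_R(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^t \, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
J_{C_i}(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^t \, c_i(s_t, a_t)\right] \le d_i,
\qquad i = 1, \dots, m
```

Here the reward $r$ would reflect the energy saved by sleeping BSs and each cost $c_i$ a QoS-violation penalty with budget $d_i$, matching the abstract's requirement of reward maximization under explicit cost constraints.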