Federated learning (FL) is an edge learning framework that has recently received significant attention. However, communication cost has become a major challenge for FL as the number of edge devices grows and training models become more complex. Moreover, data samples across edge devices are usually not independent and identically distributed (non-IID), which poses additional challenges to the convergence and model accuracy of FL. To address these issues, we propose DipFL, a novel personalized FL framework based on deep coded over-the-air computation. In this framework, we design a deep AirComp aggregation (DACA) module for n-to-1 information aggregation. We also design a joint source-channel coding (JSCC) module based on the variational auto-encoder (VAE) model, which not only encodes the transmitted data but also reduces the bias of local samples by introducing regularization terms. In addition, we propose a personalized mix module that makes local models more personalized by mixing the global model with the local models. Simulation results confirm that the proposed DipFL framework significantly reduces the amount of transmitted data while improving FL performance, especially in low signal-to-noise ratio regimes.
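To make the three components of the abstract concrete, the sketch below gives a minimal, non-authoritative illustration: an idealized signal-level view of over-the-air aggregation (simultaneous analog transmissions are superposed by the channel and corrupted by receiver noise), the standard VAE KL regularizer, and a simple convex mix of global and local parameters. All names here (`aircomp_aggregate`, `vae_regularizer`, `personalized_mix`, `alpha`) are hypothetical; the paper's DACA and JSCC modules are learned neural networks, not the plain operations shown.

```python
import numpy as np

def aircomp_aggregate(local_updates, snr_db=10.0, rng=None):
    """Idealized over-the-air aggregation: devices transmit simultaneously,
    so the channel itself sums their analog signals; the server receives
    the superposition plus additive noise and recovers a noisy average."""
    if rng is None:
        rng = np.random.default_rng()
    superposed = np.sum(local_updates, axis=0)           # channel superposition
    signal_power = np.mean(superposed ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))   # noise level set by SNR
    noise = rng.normal(0.0, np.sqrt(noise_power), superposed.shape)
    return (superposed + noise) / len(local_updates)

def vae_regularizer(mu, logvar):
    """Standard VAE KL term, KL(N(mu, sigma^2) || N(0, 1)), averaged over
    dimensions; one plausible form of the abstract's regularization terms."""
    return -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))

def personalized_mix(local_params, global_params, alpha=0.5):
    """Convex combination of a device's local model and the global model;
    alpha = 1 keeps the purely local (fully personalized) model."""
    return alpha * local_params + (1 - alpha) * global_params

# Usage: 8 devices, 1000-dimensional updates, aggregated at 5 dB SNR.
rng = np.random.default_rng(0)
updates = np.stack([0.01 * rng.standard_normal(1000) for _ in range(8)])
global_update = aircomp_aggregate(updates, snr_db=5.0, rng=rng)
personalized = personalized_mix(updates[0], global_update, alpha=0.3)
```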
Deep Learning Based Coded Over-the-Air Computation for Personalized Federated Learning
2023-10-10
1,245,815 bytes
Conference paper
Electronic Resource
English