Federated learning (FL) is a promising paradigm that enables edge devices to collaboratively train a neural network while preserving data privacy. This paper considers a previously unexamined scenario: over-the-air FL systems with model heterogeneity, where clients with differing computing capacities adopt local models comprising subsets of the global parameters, and over-the-air computation is employed to accelerate parameter aggregation. We investigate three key design considerations, namely subnet creation, client selection, and client composition, and assess their impact both on the signal distortion introduced by over-the-air computation and on model learning accuracy. Our results provide insights into how each design aspect affects the FL system's performance and convergence.
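As a rough illustration of the setting described above (not the paper's own algorithm), the Python sketch below simulates over-the-air aggregation of heterogeneous subnet updates: each client keeps a masked subset of the global parameters, the masked updates superpose on the channel, and additive noise distorts the aggregate. All names, the keep-ratio masking scheme, and the AWGN noise model are illustrative assumptions.

```python
# Hypothetical sketch: heterogeneous clients hold subnets (subsets of the
# global parameter vector) and their updates are aggregated via simulated
# over-the-air computation, where transmitted signals superpose on the
# channel and additive noise distorts the sum. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

GLOBAL_DIM = 100   # size of the global parameter vector (assumed)
NOISE_STD = 0.05   # AWGN standard deviation on the channel (assumed)

def make_subnet_mask(keep_ratio: float) -> np.ndarray:
    """Subnet creation: a client keeps a random subset of global parameters."""
    return rng.random(GLOBAL_DIM) < keep_ratio

def over_the_air_aggregate(updates, masks):
    """Analog superposition: masked updates add up on the channel, plus noise."""
    superposed = np.zeros(GLOBAL_DIM)
    coverage = np.zeros(GLOBAL_DIM)   # how many clients carry each parameter
    for u, m in zip(updates, masks):
        superposed += np.where(m, u, 0.0)
        coverage += m
    superposed += rng.normal(0.0, NOISE_STD, GLOBAL_DIM)  # channel distortion
    # Average each parameter only over the clients that actually trained it.
    return np.divide(superposed, np.maximum(coverage, 1))

# Client composition: a mix of high- and low-capacity clients (keep ratios).
keep_ratios = [1.0, 0.5, 0.5, 0.25]
masks = [make_subnet_mask(r) for r in keep_ratios]
updates = [rng.normal(0.0, 1.0, GLOBAL_DIM) for _ in keep_ratios]

new_global_delta = over_the_air_aggregate(updates, masks)
print(new_global_delta[:5])
```

Varying the keep ratios and the noise level in this toy model mirrors the paper's three design knobs: the masks realize subnet creation, which clients appear in the lists realizes client selection, and the mix of keep ratios realizes client composition.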
Over-the-Air Federated Learning with Model Heterogeneity: A Comparative Study
2024-10-07
5925293 bytes
Conference paper
Electronic Resource
English