Federated Learning (FL) has emerged as a promising decentralized machine learning (ML) paradigm in which distributed clients collaboratively train models without sharing their private data. However, the heterogeneous properties of the clients, combined with the high dimensionality of ML models, considerably slow down the wall-clock convergence time. To address these challenges, we propose FedHC, a framework that jointly optimizes the clients' resource allocations and uplink compression levels to minimize the overall latency while respecting the energy budget and convergence guarantees. To solve the formulated optimization problem, we first derive the number of global training rounds required to achieve the target accuracy. We then propose an iterative algorithm that, at each step, derives the optimal CPU frequencies, bandwidth allocations, and compression levels. Our numerical results show that, compared to the benchmarks, our approach reduces training time by up to 4x and remains robust to non-IID data.
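To illustrate the workflow sketched in the abstract, the following minimal Python example performs a block-coordinate search in the same spirit: it computes a toy estimate of the required number of global rounds for a target accuracy, then iteratively selects a CPU frequency, bandwidth, and compression level that minimize per-round latency under a per-client energy budget. All function names (required_rounds, per_round_latency, round_energy, optimize_client), the latency and energy models, and every constant are illustrative assumptions; the paper's actual convergence bound, system model, and multi-client bandwidth coupling are not reproduced here.

import math

def required_rounds(target_accuracy: float, base_rounds: int = 100) -> int:
    # Toy stand-in for a derived bound on global rounds; the logarithmic scaling
    # is purely illustrative, not the paper's convergence analysis.
    return int(base_rounds * max(1.0, -math.log10(1.0 - target_accuracy)))

def per_round_latency(cpu_freq, bandwidth, compression, cycles, model_bits):
    compute_t = cycles / cpu_freq                    # local training time
    comm_t = compression * model_bits / bandwidth    # compressed uplink time
    return compute_t + comm_t

def round_energy(cpu_freq, bandwidth, compression, cycles, model_bits,
                 kappa=1e-28, tx_power=0.5):
    compute_e = kappa * cycles * cpu_freq ** 2       # simple dynamic CPU power model
    comm_e = tx_power * compression * model_bits / bandwidth
    return compute_e + comm_e

def coordinate_step(grid, latency_of, energy_of, energy_budget, current):
    # Pick the grid value minimizing latency under the energy budget;
    # keep the current value if no candidate is feasible.
    best_val, best_t = current, float("inf")
    for x in grid:
        if energy_of(x) <= energy_budget:
            t = latency_of(x)
            if t < best_t:
                best_val, best_t = x, t
    return best_val

def optimize_client(cycles, model_bits, energy_budget,
                    cpu_grid, bw_grid, comp_grid, n_iters=10):
    # Conservative (low-energy) starting point, then alternate over the three variables.
    f, b, c = min(cpu_grid), max(bw_grid), min(comp_grid)
    for _ in range(n_iters):
        f = coordinate_step(cpu_grid,
                            lambda x: per_round_latency(x, b, c, cycles, model_bits),
                            lambda x: round_energy(x, b, c, cycles, model_bits),
                            energy_budget, f)
        b = coordinate_step(bw_grid,
                            lambda x: per_round_latency(f, x, c, cycles, model_bits),
                            lambda x: round_energy(f, x, c, cycles, model_bits),
                            energy_budget, b)
        c = coordinate_step(comp_grid,
                            lambda x: per_round_latency(f, b, x, cycles, model_bits),
                            lambda x: round_energy(f, b, x, cycles, model_bits),
                            energy_budget, c)
    return per_round_latency(f, b, c, cycles, model_bits), f, b, c

if __name__ == "__main__":
    rounds = required_rounds(target_accuracy=0.9)
    latency, f, b, c = optimize_client(
        cycles=1e9, model_bits=32e6, energy_budget=2.0,   # hypothetical client values
        cpu_grid=[0.5e9, 1e9, 2e9],                       # Hz
        bw_grid=[1e6, 5e6, 10e6],                         # bit/s
        comp_grid=[0.1, 0.25, 0.5, 1.0],                  # fraction of bits kept
    )
    print(f"rounds={rounds}, per-round latency={latency:.2f} s, "
          f"cpu={f/1e9:.1f} GHz, bandwidth={b/1e6:.0f} Mbps, compression={c}")

In this sketch the per-round latency and energy are decoupled across clients; in practice, bandwidth is a shared resource allocated jointly across clients, which is part of what the paper's formulation addresses.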
Latency Minimization in Heterogeneous Federated Learning through Joint Compression and Resource Allocation
2024-10-07
795227 bytes
Conference paper
Electronic Resource
English