Distributed estimation over multitask networks, where the target parameter vectors (tasks) of different nodes may differ, has received much attention recently. In this article, we consider practical application scenarios in which the tasks share some similarities, so that intertask cooperation can improve the estimation performance of the nodes. In most existing multitask learning studies, local estimates are transmitted directly between neighboring nodes, and an adaptive combination strategy is then used to achieve intertask cooperation. However, when the target parameter vectors contain sensitive information, direct transmission of local estimates may cause serious privacy breaches. To tackle this problem, we propose a privacy-preserving distributed multitask learning algorithm for collaborative estimation over networks. The algorithm is implemented through a secure multiparty computation protocol that combines multiplicative/additive masking with additively homomorphic encryption. While allowing each node to cooperate adaptively with its neighbors, the protocol preserves the privacy of the local estimates. In addition, we present a thorough privacy analysis of the proposed algorithm. Simulation results show that the algorithm effectively protects each node's task against leakage without sacrificing estimation performance.
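The abstract names two cryptographic building blocks: multiplicative/additive masking and additively homomorphic encryption. The following is a minimal Python sketch of how such primitives behave, using a toy Paillier cryptosystem and an additive mask. The key size, fixed-point scaling, helper names (encrypt, decrypt, fx), and the example values (w_l, w_j, a_lk) are illustrative assumptions; the sketch does not reproduce the authors' actual protocol.

import math
import random

# Toy Paillier keypair; real deployments need large random primes.
P, Q = 2147483647, 2305843009213693951      # Mersenne primes (toy key size)
N = P * Q
N2 = N * N
G = N + 1                                   # standard generator choice
LAM = math.lcm(P - 1, Q - 1)                # private key (Python >= 3.9)
MU = pow(LAM, -1, N)                        # lambda^{-1} mod N

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^N mod N^2, with r random (coprime to N w.h.p.)."""
    r = random.randrange(2, N)
    return pow(G, m % N, N2) * pow(r, N, N2) % N2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod N^2) * mu mod N, where L(u) = (u - 1) // N."""
    m = (pow(c, LAM, N2) - 1) // N * MU % N
    return m if m <= N // 2 else m - N      # map back to a signed value

S = 10**6                                   # fixed-point scale for real values
fx = lambda x: round(x * S)

# Additive homomorphism: Enc(m1) * Enc(m2) decrypts to m1 + m2.
w_l, w_j = 0.7321, -0.4518                  # two neighbors' local estimates
c_sum = encrypt(fx(w_l)) * encrypt(fx(w_j)) % N2
assert abs(decrypt(c_sum) / S - (w_l + w_j)) < 1e-9

# Scalar homomorphism: Enc(m)^a decrypts to a * m (e.g., a combination weight).
a_lk = 0.25
c_weighted = pow(encrypt(fx(w_l)), fx(a_lk), N2)
assert abs(decrypt(c_weighted) / S**2 - a_lk * w_l) < 1e-9

# Additive masking: a random mask hides the estimate until it is removed.
mask = random.uniform(-1.0, 1.0)
blinded = w_l + mask                        # value visible to an eavesdropper
assert abs((blinded - mask) - w_l) < 1e-12  # mask holder recovers w_l exactly

print("homomorphic sum, homomorphic scaling, and masking all verified")

In a protocol of this kind, the homomorphic operations allow a node to weight and aggregate neighbors' estimates without seeing them in the clear, while masks blind whatever intermediate plaintexts must be exchanged; how the paper's protocol combines these pieces is detailed in the article itself.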
Privacy-Preserving Distributed Estimation Over Multitask Networks
IEEE Transactions on Aerospace and Electronic Systems, vol. 58, no. 3, pp. 1953-1965
2022-06-01
Article (Journal)
English