Document Type: Research Paper
Authors
1 Department of Mathematical Sciences, Sharif University of Technology, Tehran, Iran
2 Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran
Abstract
Recent advances in federated learning and IoT-driven edge analytics underscore the need for optimization techniques that are both scalable and privacy-preserving[1][2]. In this work, we introduce ADMM-DP, a variant of the Alternating Direction Method of Multipliers that integrates differential privacy (DP) guarantees into a fully decentralized, multi-agent learning architecture[3]. ADMM-DP combines an augmented Lagrangian formulation with adaptive inexact local updates and calibrated Gaussian noise injected into each exchanged message, ensuring rigorous (ε,δ)-DP without sacrificing convergence[4][5]. Theoretically, we establish convergence rates and privacy-utility bounds under realistic heterogeneous (non-IID) data conditions. Building on the DP-ADMM literature, we prove that ADMM-DP converges to a stationary solution with an explicit utility-privacy tradeoff[6]; furthermore, for strongly convex losses the method attains linear convergence rates comparable to those of non-private ADMM[7]. Privacy loss is tracked across iterations with the moments accountant, yielding end-to-end (ε,δ)-DP guarantees tighter than those obtained from standard advanced composition[8].
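For concreteness, one communication round of the mechanism sketched in the abstract can be written as follows. This is a schematic rendering in our own notation (agents i = 1, …, n with local losses f_i, consensus variable z, dual variables λ_i, and penalty parameter ρ); the precise update rules, inexactness criteria, and noise calibration are those specified in the paper body.

\begin{align*}
x_i^{k+1} &\approx \arg\min_{x}\; f_i(x) + \big\langle \lambda_i^{k},\, x - z^{k} \big\rangle + \tfrac{\rho}{2}\,\lVert x - z^{k}\rVert^{2} && \text{(adaptive inexact local update)}\\
\tilde{x}_i^{k+1} &= x_i^{k+1} + \xi_i^{k}, \qquad \xi_i^{k} \sim \mathcal{N}\!\big(0,\, \sigma_k^{2} I\big) && \text{(Gaussian noise on the exchanged message)}\\
z^{k+1} &= \frac{1}{n}\sum_{i=1}^{n}\Big(\tilde{x}_i^{k+1} + \tfrac{1}{\rho}\,\lambda_i^{k}\Big), \qquad
\lambda_i^{k+1} = \lambda_i^{k} + \rho\,\big(\tilde{x}_i^{k+1} - z^{k+1}\big)
\end{align*}

Here σ_k is calibrated to the sensitivity of the local update x_i^{k+1} so that each round satisfies a per-round (ε_k, δ_k)-DP guarantee, and the cumulative privacy loss over all rounds is composed via the moments accountant.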
Keywords