Revisiting EXTRA for Smooth Distributed Optimization

02/24/2020
by Huan Li, et al.

EXTRA is a popular method for decentralized distributed optimization and has broad applications. This paper revisits EXTRA. First, we give a sharp complexity analysis of EXTRA, establishing improved $O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ communication and computation complexities for $\mu$-strongly convex and $L$-smooth problems, where $\sigma_2(W)$ is the second largest singular value of the weight matrix $W$. When strong convexity is absent, we prove $O\left(\left(\frac{L}{\epsilon}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{1-\sigma_2(W)}\right)$ complexities. We then use the Catalyst framework to accelerate EXTRA, obtaining $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log\frac{L}{\mu(1-\sigma_2(W))}\log\frac{1}{\epsilon}\right)$ communication and computation complexities for strongly convex and smooth problems, and $O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ complexities for non-strongly convex ones. The communication complexities of the accelerated EXTRA exceed the lower complexity bounds only by factors of $\log\frac{L}{\mu(1-\sigma_2(W))}$ and $\log\frac{1}{\epsilon(1-\sigma_2(W))}$ for strongly convex and non-strongly convex problems, respectively.
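For context, the EXTRA iteration being analyzed can be written with the mixing matrix $\tilde{W}=(I+W)/2$. Below is a minimal NumPy sketch of that recursion, assuming a symmetric doubly stochastic weight matrix `W` and a gradient oracle `grad(X)` that returns the stacked local gradients; the function name and stacked-iterate layout are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def extra(grad, X0, W, alpha, iters):
    """Sketch of the EXTRA recursion.

    X is an (n, d) array whose i-th row is agent i's local iterate;
    grad(X) returns the (n, d) array of local gradients grad f_i(x_i).
    """
    n = W.shape[0]
    W_tilde = (np.eye(n) + W) / 2                 # \tilde{W} = (I + W) / 2
    X_prev = X0
    X = W @ X0 - alpha * grad(X0)                 # first EXTRA step
    for _ in range(iters - 1):
        # x^{k+2} = (I + W) x^{k+1} - \tilde{W} x^k
        #           - alpha * (grad^{k+1} - grad^k)
        X_next = (X + W @ X) - W_tilde @ X_prev \
                 - alpha * (grad(X) - grad(X_prev))
        X_prev, X = X, X_next
    return X
```

The Catalyst acceleration discussed in the paper wraps a base method such as this in an inner-outer proximal-point loop rather than modifying the recursion itself; that wrapper is not shown here.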
