A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

08/11/2016
by Tianbao Yang, et al.

This paper focuses on convex constrained optimization problems in which the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection onto the constrained domain and linear optimization under the inequality constraint are time-consuming, rendering both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition on the constraint function. We first present a general theory of optimization with only one projection. Its application to smooth optimization with only one projection yields O(1/ϵ) iteration complexity, which improves over the O(1/ϵ^2) iteration complexity established previously for non-smooth optimization and can be further reduced under strong convexity. Then we introduce a local error bound condition and develop faster algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve an iteration complexity of O(1/ϵ^{2(1-θ)}) for non-smooth optimization and O(1/ϵ^{1-θ}) for smooth optimization, where θ∈(0,1], appearing in the local error bound condition, characterizes the local growth rate of the objective function around the optimal solutions. Novel applications to the constrained ℓ_1 minimization problem and a positive semi-definite constrained distance metric learning problem demonstrate that the proposed algorithms achieve significant speed-ups compared with previous algorithms.
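For context, the setting and the local error bound condition described above can be written out as follows. This is only an illustrative sketch in standard notation, not the paper's exact statement: g denotes the convex constraint function, X_* the optimal solution set, f_* the optimal objective value, and c, ϵ_0 positive constants introduced here for illustration.

\[
  \min_{x \in \mathbb{R}^d} \; f(x) \quad \text{s.t.} \quad g(x) \le 0,
\]
\[
  \operatorname{dist}\big(x, \mathcal{X}_*\big) \;\le\; c\,\big(f(x) - f_*\big)^{\theta}
  \quad \text{for all feasible } x \text{ with } f(x) - f_* \le \epsilon_0, \qquad \theta \in (0,1].
\]

Under such a condition, a larger θ corresponds to faster local growth of the objective around the optimal set, which is consistent with the reported complexities: the exponents 2(1-θ) and 1-θ shrink as θ increases, and the polynomial dependence on 1/ϵ vanishes at θ = 1.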
