Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond

06/27/2019
by Oliver Hinder, et al.

In this paper, we provide near-optimal accelerated first-order methods for minimizing a broad class of smooth nonconvex functions that are strictly unimodal on all lines through a minimizer. This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant γ ∈ (0, 1], where γ = 1 encompasses the classes of smooth convex and star-convex functions, and smaller values of γ indicate that the function can be "more nonconvex." We develop a variant of accelerated gradient descent that computes an ϵ-approximate minimizer of a smooth γ-quasar-convex function with at most O(γ^-1 ϵ^-1/2 log(γ^-1 ϵ^-1)) total function and gradient evaluations. We also derive a lower bound of Ω(γ^-1 ϵ^-1/2) on the number of gradient evaluations required by any deterministic first-order method in the worst case, showing that, up to a logarithmic factor, no deterministic first-order algorithm can improve upon ours.
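For readers unfamiliar with the term, a common formulation of γ-quasar-convexity (a paraphrase for context, not a quote from this abstract) is: a differentiable function f with minimizer x^* is γ-quasar-convex, for γ ∈ (0, 1], if for all x,

    f(x^*) ≥ f(x) + (1/γ) ∇f(x)^T (x^* − x).

Taking γ = 1 recovers star-convexity (and is implied by ordinary convexity), while smaller γ weakens the inequality and permits greater nonconvexity, matching the parameterization described above.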
