Continuous-time Models for Stochastic Optimization Algorithms

10/05/2018
by Antonio Orvieto, et al.

We propose a new continuous-time formulation for first-order stochastic optimization algorithms such as mini-batch gradient descent and variance-reduced techniques. We exploit this continuous-time model, together with a simple Lyapunov analysis and tools from stochastic calculus, to derive convergence bounds for various types of non-convex functions. We contrast these bounds with their known discrete-time equivalents and also derive new ones. Our model also covers SVRG, for which we derive a linear convergence rate for the class of weakly quasi-convex and quadratically growing functions.
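For readers unfamiliar with this line of work, the continuous-time formulation referred to above is typically a stochastic differential equation (SDE) that interpolates the discrete iterates of the algorithm. The following is a minimal LaTeX sketch; the specific noise scaling sqrt(eta) * Sigma^{1/2} is a common modeling convention in this literature, not a detail taken from the abstract:

    % Mini-batch SGD in discrete time, with step size \eta and stochastic
    % gradient g satisfying \mathbb{E}[g(x_k)] = \nabla f(x_k):
    %   x_{k+1} = x_k - \eta \, g(x_k).
    % A standard continuous-time surrogate is the SDE
    \[
      \mathrm{d}X(t) = -\nabla f\bigl(X(t)\bigr)\,\mathrm{d}t
        + \sqrt{\eta}\,\Sigma^{1/2}\bigl(X(t)\bigr)\,\mathrm{d}W(t),
    \]
    % where W(t) is standard Brownian motion and \Sigma(x) is the covariance
    % of the gradient noise; setting \Sigma \equiv 0 recovers the deterministic
    % gradient flow \dot{X}(t) = -\nabla f\bigl(X(t)\bigr).

To see the discrete-continuous correspondence concretely, one Euler-Maruyama step of this SDE with step size h = eta reproduces an SGD-like update. A hypothetical Python sketch (function and variable names are ours, purely for illustration):

    import numpy as np

    def euler_maruyama_step(x, grad_f, noise_cov_sqrt, eta, rng):
        """One Euler-Maruyama step of dX = -grad f(X) dt + sqrt(eta) Sigma^{1/2}(X) dW,
        taken with step size h = eta, so the noise term mimics the gradient-noise
        contribution of a single mini-batch SGD update."""
        dW = np.sqrt(eta) * rng.standard_normal(x.shape)  # Brownian increment over h = eta
        return x - eta * grad_f(x) + np.sqrt(eta) * (noise_cov_sqrt(x) @ dW)

    # Toy usage on f(x) = ||x||^2 / 2 with constant isotropic noise (our choice):
    rng = np.random.default_rng(0)
    x = np.ones(2)
    for _ in range(5000):
        x = euler_maruyama_step(x, lambda v: v, lambda v: 0.1 * np.eye(v.size),
                                eta=0.01, rng=rng)

With zero noise the step reduces to plain gradient descent, which is the forward-Euler discretization of gradient flow; this is the sense in which the SDE serves as a continuous-time model of the stochastic algorithm.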
