A Stochastic Path-Integrated Differential EstimatoR Expectation Maximization Algorithm

11/30/2020
by   Gersende Fort, et al.

The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations. This paper introduces a novel EM algorithm, called SPIDER-EM, for inference from a training set of size n, n ≫ 1. At the core of the algorithm is an estimator of the full conditional expectation in the E-step, adapted from the stochastic path-integrated differential estimator (SPIDER) technique. We derive finite-time complexity bounds for smooth non-convex likelihoods: for convergence to an ϵ-approximate stationary point, the complexity scales as K_Opt(n, ϵ) = O(ϵ^-1) and K_CE(n, ϵ) = n + √n O(ϵ^-1), where K_Opt(n, ϵ) is the number of M-steps and K_CE(n, ϵ) is the number of per-sample conditional-expectation evaluations. This improves on state-of-the-art algorithms. Numerical results support our findings.
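To illustrate the idea behind the SPIDER-adapted E-step, the following is a minimal Python sketch, not the authors' implementation: it assumes an exponential-family model in which the E-step reduces to averaging per-sample conditional expectations of sufficient statistics, periodically refreshes that average with a full pass over the n samples, and between refreshes updates it with mini-batch differences evaluated at the current and previous parameters. The helpers cond_expect_stat and m_step are hypothetical placeholders for the model-specific per-sample E-step and M-step, and the step-size/stochastic-approximation details of the full algorithm are omitted.

import numpy as np

def cond_expect_stat(i, theta, data):
    # Per-sample conditional expectation of the sufficient statistic,
    # s_bar_i(theta) = E[S(z_i) | y_i; theta]  (model specific, placeholder).
    raise NotImplementedError

def m_step(S):
    # M-step map theta = T(S) maximizing the complete-data likelihood
    # given the averaged sufficient statistic S (model specific, placeholder).
    raise NotImplementedError

def spider_em(data, theta0, n_epochs, inner_steps, batch_size, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    theta = theta0
    for _ in range(n_epochs):
        # Full pass over the n samples: refresh the path-integrated estimator.
        S = np.mean([cond_expect_stat(i, theta, data) for i in range(n)], axis=0)
        theta_prev = theta
        theta = m_step(S)
        for _ in range(inner_steps):
            batch = rng.choice(n, size=batch_size, replace=False)
            # SPIDER correction: mini-batch mean of the difference of per-sample
            # conditional expectations at the new vs. previous parameter.
            diff = np.mean(
                [cond_expect_stat(i, theta, data) - cond_expect_stat(i, theta_prev, data)
                 for i in batch],
                axis=0,
            )
            S = S + diff
            theta_prev = theta
            theta = m_step(S)  # M-step on the variance-reduced statistic
    return theta

Because each inner iteration touches only a mini-batch while the full sum is recomputed only once per outer loop, the per-sample E-step cost behaves like the stated n + √n O(ϵ^-1) bound when the epoch length and batch size are tuned as in the paper.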
