Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in O(√(n)) iterations

08/28/2019
by   Yihong Wu, et al.

We analyze the classical EM algorithm for parameter estimation in symmetric two-component Gaussian mixtures in d dimensions. We show that, even in the absence of any separation between the components, provided that the sample size satisfies n = Ω(d log^3 d), the randomly initialized EM algorithm converges, in at most O(√(n)) iterations with high probability, to an estimate that is at most O((d log^3 n / n)^{1/4}) in Euclidean distance from the true parameter, within logarithmic factors of the minimax rate (d/n)^{1/4}. Both the nonparametric statistical rate and the sublinear convergence rate are direct consequences of the zero Fisher information in the worst case. Refined pointwise guarantees beyond the worst-case analysis, as well as convergence to the MLE, are also shown under mild conditions. This improves on the previous result of Balakrishnan et al. <cit.>, which requires strong conditions on both the separation of the components and the quality of the initialization, and on that of Daskalakis et al. <cit.>, which requires sample splitting and restarting the EM iteration.
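For concreteness, here is a minimal sketch of the algorithm analyzed above: randomly initialized EM for the symmetric mixture 0.5·N(θ, I_d) + 0.5·N(−θ, I_d) with known identity covariance, where the E- and M-steps collapse into the well-known fixed-point update θ ← (1/n) Σ_i tanh(⟨x_i, θ⟩) x_i. The function name, the random-initialization scale, and the default ⌈√n⌉ iteration budget (chosen to mirror the paper's O(√n) iteration bound) are illustrative choices, not taken from the paper.

```python
import numpy as np

def em_symmetric_gmm(X, n_iters=None, seed=0):
    """Randomly initialized EM for the mixture 0.5*N(theta, I) + 0.5*N(-theta, I).

    X: (n, d) array of samples. Returns the EM estimate of theta
    (identifiable only up to sign).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # Random initialization; the 1/sqrt(d) scale is an illustrative choice.
    theta = rng.standard_normal(d) / np.sqrt(d)
    if n_iters is None:
        # Iteration budget mirroring the O(sqrt(n)) bound in the abstract.
        n_iters = int(np.ceil(np.sqrt(n)))
    for _ in range(n_iters):
        # E-step: posterior mean of the +/-1 label given x is tanh(<x, theta>).
        w = np.tanh(X @ theta)
        # M-step: weighted average of the samples.
        theta = (X * w[:, None]).mean(axis=0)
    return theta
```

Note that θ is identifiable only up to sign, so any error guarantee is naturally stated as min(‖θ̂ − θ*‖, ‖θ̂ + θ*‖).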
