Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models

06/06/2022
by Yihan Zhang, et al.

We consider a high-dimensional mean estimation problem over a binary hidden Markov model, which illuminates the interplay between memory in the data, sample size, dimension, and signal strength in statistical inference. In this model, an estimator observes n samples of a d-dimensional parameter vector θ_*∈ℝ^d, multiplied by a random sign S_i (1≤ i≤ n) and corrupted by isotropic standard Gaussian noise. The sequence of signs {S_i}_i∈[n]∈{-1,1}^n is drawn from a stationary homogeneous Markov chain with flip probability δ∈[0,1/2]. As δ varies, this model smoothly interpolates between two well-studied models: the Gaussian Location Model (δ=0) and the Gaussian Mixture Model (δ=1/2). Assuming that the estimator knows δ, we establish a nearly minimax optimal (up to logarithmic factors) estimation error rate as a function of θ_*, δ, d, and n. We then provide an upper bound for the problem of estimating δ, assuming (possibly inaccurate) knowledge of θ_*. The bound is proved to be tight when θ_* is an accurately known constant. These results are then combined into an algorithm that estimates θ_* when δ is unknown a priori, and we state theoretical guarantees on its error.
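The observation model is simple enough to simulate directly. The sketch below (Python; the function name sample_markov_gaussian_mixture and all variable names are illustrative and not taken from the paper) draws n samples X_i = S_i θ_* + Z_i, where the sign sequence {S_i} follows a stationary ±1 Markov chain with flip probability δ and Z_i is isotropic standard Gaussian noise.

```python
import numpy as np

def sample_markov_gaussian_mixture(theta_star, n, delta, rng=None):
    """Draw n observations X_i = S_i * theta_star + Z_i, where {S_i} is a
    stationary +/-1 Markov chain with flip probability delta and
    Z_i ~ N(0, I_d).  Illustrative simulation of the observation model."""
    rng = np.random.default_rng(rng)
    d = theta_star.shape[0]
    signs = np.empty(n)
    # The stationary distribution of the chain is uniform over {-1, +1}.
    signs[0] = rng.choice([-1.0, 1.0])
    flips = rng.random(n - 1) < delta          # flip the sign w.p. delta
    for i in range(1, n):
        signs[i] = -signs[i - 1] if flips[i - 1] else signs[i - 1]
    noise = rng.standard_normal((n, d))        # isotropic standard Gaussian
    X = signs[:, None] * theta_star[None, :] + noise
    return X, signs

# Example with a unit-norm parameter in dimension d = 50.
theta_star = np.ones(50) / np.sqrt(50)
X, S = sample_markov_gaussian_mixture(theta_star, n=1000, delta=0.1, rng=0)
```

Setting delta=0 keeps all signs equal, recovering the Gaussian Location Model (up to a global sign), while delta=1/2 makes the signs i.i.d. uniform, recovering the symmetric two-component Gaussian Mixture Model.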
