Regularising Generalised Linear Mixed Models with an autoregressive random effect

08/09/2019
by Jocelyn Chauvet, et al.

We address regularised versions of the Expectation-Maximisation (EM) algorithm for Generalised Linear Mixed Models (GLMM) in the context of panel data (measurements taken on several individuals at different time-points). A random response y is modelled by a GLMM, using a set X of explanatory variables and two random effects. The first introduces the dependence within each individual on which data are repeatedly collected, while the second embodies the serially correlated time-specific effect shared by all the individuals. The variables in X are assumed to be numerous and redundant, so that the regression demands regularisation. In this context, we first propose an L2-penalised EM algorithm, and then a supervised component-based regularised EM algorithm as an alternative.
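As a rough sketch of the model structure described above (the notation is illustrative and not drawn from the paper): for individual i at time t, a link function g relates the conditional mean of the response to a linear predictor combining the fixed effects of X, an individual-level random effect, and the serially correlated time-specific effect, here assumed to follow an AR(1) process,

    g\big(\mathbb{E}[y_{it} \mid \xi_i, u_t]\big) = x_{it}^{\top}\beta + \xi_i + u_t,
    \qquad \xi_i \sim \mathcal{N}(0,\sigma_\xi^2),
    \qquad u_t = \rho\, u_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0,\sigma_\varepsilon^2).

Under this reading, the L2-penalised variant would replace each M-step update of the fixed effects by a ridge-type maximisation of the form

    \beta^{(h+1)} = \arg\max_{\beta} \; Q\big(\beta \mid \beta^{(h)}\big) - \lambda \lVert \beta \rVert_2^2,

where Q denotes the expected complete-data log-likelihood and \lambda > 0 the regularisation parameter; the penalty stabilises the estimation when the columns of X are many and highly redundant.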
