Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond

09/06/2019
by Jianjun Yuan, et al.

Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. For exp-concave and strongly convex objectives, the algorithms achieve a dynamic regret of max{O(log T), O(√(TV))}, where V is a bound on the path length of the comparison sequence. In particular, we show how classic recursive least-squares with a forgetting factor achieves this dynamic regret bound, and how the forgetting factor can be tuned to obtain a trade-off between static and dynamic regret. In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions. Our gradient descent rule recovers the dynamic regret bounds described above. For smooth problems, we can also obtain a static regret of O(T^(1-β)) and a dynamic regret of O(T^β V^*), where β ∈ (0,1) and V^* is the path length of the sequence of minimizers. By varying β, we obtain a trade-off between static and dynamic regret.
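For concreteness, here is a minimal Python sketch of classic recursive least-squares with a forgetting factor, the algorithm the paper analyzes as an online Newton method. This is the standard textbook form of the update; the class name, the regularizer delta, and the demo values are our own illustrative choices, not details taken from the paper.

```python
import numpy as np

class ForgettingFactorRLS:
    """Classic recursive least-squares with forgetting factor lam.

    lam = 1 recovers ordinary RLS; lam < 1 geometrically discounts old
    observations, which is the heuristic for non-stationary streams that
    the paper analyzes. delta initializes the inverse Gram matrix and is
    an illustrative regularization choice, not taken from the paper.
    """

    def __init__(self, dim, lam=0.98, delta=1.0):
        self.lam = lam
        self.theta = np.zeros(dim)      # current parameter estimate
        self.P = np.eye(dim) / delta    # inverse of the discounted Gram matrix

    def update(self, x, y):
        """Observe features x and target y; return the prediction error."""
        err = y - self.theta @ x
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        self.theta = self.theta + k * err   # Newton-style correction
        # Sherman-Morrison update of the discounted inverse Gram matrix
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# Demo on a slowly drifting linear model.
rng = np.random.default_rng(0)
rls = ForgettingFactorRLS(dim=3, lam=0.95)
theta_true = np.array([1.0, -2.0, 0.5])
for t in range(1000):
    theta_true += 0.01 * rng.standard_normal(3)     # non-stationary target
    x = rng.standard_normal(3)
    y = theta_true @ x + 0.1 * rng.standard_normal()
    rls.update(x, y)
```

A smaller lam forgets faster and tracks drift more aggressively, which is the knob the paper's analysis ties to the trade-off between static and dynamic regret.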

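The abstract does not spell out the step size rule of the second contribution, so the sketch below should be read only as a hypothetical illustration of how a single exponent β ∈ (0,1) can interpolate between the two regret regimes: online gradient descent with a polynomially decaying step size eta_t = c / t^β. The schedule, the constant c, and the function names are assumptions for illustration, not the paper's actual rule.

```python
import numpy as np

def ogd_poly_stepsize(grad, x0, T, beta=0.5, c=1.0):
    """Online gradient descent with step size eta_t = c / t**beta.

    Intuition for the trade-off (illustrative only, NOT the paper's rule):
    a small beta keeps the step size large, so the iterate can track a
    moving minimizer (dynamic regret on the order of T**beta * V^*), while
    a large beta shrinks steps quickly and favors settling near a fixed
    comparator (static regret on the order of T**(1 - beta)).

    grad(x, t) should return the gradient of the round-t loss at x;
    the regret statements above are for strongly convex losses.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t in range(1, T + 1):
        eta = c / t**beta
        x = x - eta * grad(x, t)
        iterates.append(x.copy())
    return iterates

# Example: quadratic (strongly convex) losses with a drifting minimizer.
centers = np.cumsum(0.01 * np.ones(100))    # minimizers drift linearly
example_grad = lambda x, t: 2.0 * (x - centers[t - 1])
traj = ogd_poly_stepsize(example_grad, x0=np.zeros(1), T=100, beta=0.3)
```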