Universal Online Learning with Gradual Variations: A Multi-layer Online Ensemble Approach

07/17/2023 · by Yu-Hu Yan, et al.
In this paper, we propose an online convex optimization method with two different levels of adaptivity. On a higher level, our method is agnostic to the specific type and curvature of the loss functions, while at a lower level, it can exploit the niceness of the environments and attain problem-dependent guarantees. Specifically, we obtain 𝒪(ln V_T), 𝒪(d ln V_T) and 𝒪̂(√(V_T)) regret bounds for strongly convex, exp-concave and convex loss functions, respectively, where d is the dimension, V_T denotes the problem-dependent gradient variation, and the 𝒪̂(·)-notation omits logarithmic factors on V_T. Our result has broad implications and applications. It not only safeguards the worst-case guarantees but also directly implies small-loss bounds in analysis. Moreover, it draws deep connections with adversarial/stochastic convex optimization and game theory, further validating its practical potential. Our method is based on a multi-layer online ensemble that incorporates novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability. Remarkably, despite its multi-layer structure, our algorithm requires only one gradient query per round, making it favorable when gradient evaluation is time-consuming. This is facilitated by a novel regret decomposition equipped with customized surrogate losses.
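To make the ensemble idea and the "one gradient query per round" mechanism concrete, below is a minimal, hypothetical two-layer sketch (the paper's actual algorithm is multi-layer and additionally uses optimism and cascaded correction terms, which are omitted here). Base learners run online gradient descent with different step sizes, a meta-learner combines them with exponential weights, and the single gradient evaluated at the combined decision is reused to form linearized surrogate losses for all base learners. All function and variable names are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Project onto an L2 ball of the given radius (assumed feasible domain)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def run_ensemble(grad_fn, T, d, step_sizes, meta_lr=0.5):
    """Simplified two-layer online ensemble (illustrative sketch only).

    grad_fn(t, x) should return the gradient of the round-t loss at x.
    """
    xs = [np.zeros(d) for _ in step_sizes]                # base-learner iterates
    weights = np.ones(len(step_sizes)) / len(step_sizes)  # meta-learner weights
    decisions = []
    for t in range(T):
        # Combined decision is the weighted average of base-learner iterates.
        x_t = sum(w * x for w, x in zip(weights, xs))
        decisions.append(x_t)
        # Single gradient query per round, taken at the combined decision.
        g_t = grad_fn(t, x_t)
        # Linearized surrogate loss of base learner i: <g_t, x_i>.
        surrogate = np.array([g_t @ x_i for x_i in xs])
        # Meta update: exponential weights on the surrogate losses.
        weights = weights * np.exp(-meta_lr * (surrogate - surrogate.min()))
        weights /= weights.sum()
        # Base updates: online gradient descent on the shared surrogate gradient.
        xs = [project_ball(x_i - eta * g_t) for x_i, eta in zip(xs, step_sizes)]
    return decisions
```

The design point illustrated here is that every layer is driven by the same surrogate gradient g_t, so the number of gradient evaluations does not grow with the number of base learners; the paper's regret decomposition with customized surrogate losses is what justifies this reuse in their analysis.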
