First- and Second-Order Bounds for Adversarial Linear Contextual Bandits

05/01/2023
by Julia Olkhovskaya, et al.

We consider the adversarial linear contextual bandit setting, which allows the loss functions associated with each of the K arms to change over time without restriction. Assuming the d-dimensional contexts are drawn from a fixed known distribution, the worst-case expected regret over the course of T rounds is known to scale as Õ(√(Kd T)). Under the additional assumption that the density of the contexts is log-concave, we obtain a second-order bound of order Õ(K√(d V_T)) in terms of the cumulative second moment of the learner's losses V_T, and a closely related first-order bound of order Õ(K√(d L_T^*)) in terms of the cumulative loss of the best policy L_T^*. Since V_T or L_T^* may be significantly smaller than T, these bounds improve over the worst-case regret whenever the environment is relatively benign. Our results are obtained using a truncated version of the continuous exponential weights algorithm over the probability simplex, which we analyse by exploiting a novel connection to the linear bandit setting without contexts.
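For concreteness, under a hedged reading of the abstract (the notation ℓ_t, x_t, a_t, and π is assumed here for illustration and is not reproduced from the paper), the two quantities appearing in the refined bounds can be written roughly as

    V_T = \sum_{t=1}^{T} \mathbb{E}\!\left[\ell_t(x_t, a_t)^2\right],
    \qquad
    L_T^{*} = \min_{\pi} \sum_{t=1}^{T} \mathbb{E}\!\left[\ell_t\bigl(x_t, \pi(x_t)\bigr)\right],

where ℓ_t is the round-t loss function, x_t the observed context, a_t the arm played, and the minimum is over policies mapping contexts to arms. Assuming losses bounded in [0, 1], both quantities are at most of order T, and the stated bounds Õ(K√(d V_T)) and Õ(K√(d L_T^*)) tighten precisely when the learner's realized losses, or the best policy's cumulative loss, remain small.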
