Empirical Likelihood for Contextual Bandits

Nikos Karampatziakis, John Langford, Paul Mineiro

We propose an estimator and confidence interval for computing the value of a policy from off-policy data in the contextual bandit setting. To this end we apply empirical likelihood techniques to formulate our estimator and confidence interval as simple convex optimization problems. Using the lower bound of our confidence interval, we then propose an off-policy policy optimization algorithm that searches for policies with large reward lower bound. We empirically find that both our estimator and confidence interval improve over previous proposals in finite sample regimes. Finally, the policy optimization algorithm we propose outperforms a strong baseline system for learning from off-policy data.
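To make the empirical likelihood machinery concrete, the following is a minimal sketch of the textbook empirical likelihood confidence lower bound for the mean of i.i.d. samples, not the paper's contextual-bandit estimator. The function names (`el_log_ratio`, `el_lower_bound`) and the simple bisection/scan solvers are illustrative assumptions; the profile-likelihood statistic `-2 log R(mu)` is compared against the 95% chi-squared quantile.

```python
import numpy as np

def el_log_ratio(x, mu):
    """Minus twice the empirical-likelihood log ratio for candidate mean mu.

    Solves for the Lagrange multiplier lam in
        sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0
    by bisection, then returns 2 * sum_i log(1 + lam * (x_i - mu)).
    """
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    # feasible lam must keep every 1 + lam * d_i strictly positive
    lo = -1.0 / d.max() + 1e-12
    hi = -1.0 / d.min() - 1e-12
    g = lambda lam: np.sum(d / (1.0 + lam * d))  # decreasing in lam
    for _ in range(200):  # bisection on the monotone function g
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_lower_bound(x, level=3.841):
    """Scan mu downward from the sample mean until -2 log R(mu)
    crosses level (3.841 is the chi^2_1 95% quantile)."""
    mu = x.mean()
    step = x.std() / 100.0
    while el_log_ratio(x, mu - step) < level:
        mu -= step
    return mu
```

In the off-policy setting described in the abstract, the samples would be importance-weighted rewards and the optimization would be carried out over the reweighted empirical distribution, but the convex structure of the profile-likelihood problem is the same.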