Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees
Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta
We study the problem of representation learning in stochastic contextual linear bandits. While the primary concern in this domain is usually to find realizable representations (i.e., those that allow predicting the reward function exactly at any context-action pair), it has recently been shown that representations with certain spectral properties (called HLS) may be more effective for the exploration-exploitation task, enabling LinUCB to achieve constant (i.e., horizon-independent) regret. In this paper, we propose BanditSRL, a representation learning algorithm that combines a novel constrained optimization problem, used to learn a realizable representation with good spectral properties, with a generalized likelihood ratio test that exploits the recovered representation and avoids excessive exploration. We prove that BanditSRL can be paired with any no-regret algorithm and achieves constant regret whenever an HLS representation is available. Furthermore, BanditSRL can easily be combined with deep neural networks, and we show how regularizing towards HLS representations is beneficial on standard benchmarks.
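For context, the LinUCB algorithm referenced above selects, at each round, the action maximizing an optimistic (upper-confidence-bound) reward estimate under a linear model. Below is a minimal, hedged sketch of generic LinUCB with a fixed feature map; the synthetic environment, the random features standing in for a learned representation phi(x, a), and all parameter values (dimension, horizon, bonus scale alpha, regularizer lam) are illustrative assumptions, not the paper's actual setup or the BanditSRL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_actions, horizon, alpha, lam = 4, 5, 200, 1.0, 1.0

# Hypothetical linear reward model: r = phi(x, a)^T theta* + noise.
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

A = lam * np.eye(d)   # regularized design matrix  lam*I + sum phi phi^T
b = np.zeros(d)       # running sum of phi * observed reward

for t in range(horizon):
    # Fresh random features per context-action pair (a stand-in for a
    # learned representation phi(x, a); not the paper's construction).
    phis = rng.normal(size=(n_actions, d))
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # Optimistic score: estimated reward + elliptical exploration bonus.
    bonus = np.sqrt(np.einsum("ad,dk,ak->a", phis, A_inv, phis))
    a = int(np.argmax(phis @ theta_hat + alpha * bonus))
    reward = phis[a] @ theta_star + 0.1 * rng.normal()
    A += np.outer(phis[a], phis[a])
    b += phis[a] * reward

theta_hat = np.linalg.solve(A, b)  # final least-squares estimate
```

The paper's observation is that when the representation phi has the HLS spectral property, the exploration bonus above shrinks fast enough that the cumulative regret stops growing with the horizon.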