Vector Quantized Models for Planning
Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aaron Van Den Oord, Oriol Vinyals
Recent developments in the field of model-based RL have proven successful in a range of environments, especially ones where planning is essential. However, such successes have been limited to deterministic, fully-observed environments. We present a new approach that handles stochastic and partially-observable environments. Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment. We use a stochastic variant of Monte Carlo tree search to plan over both the agent's actions and the discrete latent variables representing the environment's response. Our approach significantly outperforms an offline version of MuZero on a stochastic interpretation of chess where the opponent is considered part of the environment. We also show that our approach scales to DeepMind Lab, a first-person 3D environment with large visual observations and partial observability.
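The discrete autoencoder mentioned above rests on a vector-quantization step: a continuous embedding of the environment's response is snapped to its nearest entry in a learned codebook, yielding a discrete code the planner can branch over. The sketch below is a minimal, hypothetical illustration of that step (the function name `vq_nearest` and the toy codebook are assumptions, not the paper's implementation):

```python
import numpy as np

def vq_nearest(z, codebook):
    """Quantize embeddings z (N, D) to the nearest codebook entries (K, D).

    Returns the discrete code indices and the quantized embeddings.
    This is an illustrative sketch of vector quantization, not the
    authors' actual model.
    """
    # Squared Euclidean distance from every embedding to every code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

# Toy example: two codes in 2-D. In the planning setting described in
# the abstract, such a discrete index would stand in for one possible
# stochastic response of the environment to the agent's action.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
idx, z_q = vq_nearest(z, codebook)
# idx → array([0, 1]); each row of z_q is the matching codebook entry.
```

A tree search can then expand, at each environment step, one child per discrete code rather than attempting to enumerate raw observations.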


