Monte-Carlo Tree Search as Regularized Policy Optimization

Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis Antonoglou, Remi Munos

The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to groundbreaking results in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on handcrafted heuristics that are only partially understood. In this paper, we show that AlphaZero's search heuristic, along with other common ones, can be interpreted as an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
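To give a concrete sense of the "exact solution" the abstract refers to, the sketch below solves a regularized policy optimization problem of the form argmax_y q^T y − λ KL(π_θ, y) over the probability simplex, where q is a vector of action values and π_θ is the prior policy. For this objective the optimum takes the form π̄(a) ∝ λ π_θ(a) / (α − q(a)) for a scalar normalizer α, which can be found by bisection. This is a minimal NumPy illustration under those assumptions; the function name, bracketing bounds, and numerical details are ours, not the paper's pseudocode.

```python
import numpy as np

def solve_regularized_policy(q, prior, lam, iters=60):
    """Solve max_y <q, y> - lam * KL(prior || y) over the simplex.

    The stationarity condition gives y(a) = lam * prior(a) / (alpha - q(a)),
    where alpha is the unique scalar making y sum to one. We bracket alpha
    and refine it by bisection.
    """
    q = np.asarray(q, dtype=np.float64)
    prior = np.asarray(prior, dtype=np.float64)
    a_star = np.argmax(q)
    # At alpha = q_max + lam * prior(a_star), the a_star term alone equals 1,
    # so the total is >= 1; at alpha = q_max + lam, the total is <= 1.
    lo = q[a_star] + lam * prior[a_star]
    hi = q[a_star] + lam
    for _ in range(iters):
        alpha = 0.5 * (lo + hi)
        total = np.sum(lam * prior / (alpha - q))
        if total > 1.0:
            lo = alpha  # mass too large: increase alpha
        else:
            hi = alpha  # mass too small: decrease alpha
    alpha = 0.5 * (lo + hi)
    pi_bar = lam * prior / (alpha - q)
    return pi_bar / pi_bar.sum()  # renormalize residual bisection error

# Example: values and prior over three actions.
pi = solve_regularized_policy(q=[0.4, 0.1, 0.2],
                              prior=[0.5, 0.3, 0.2],
                              lam=0.5)
print(pi)  # concentrates on the high-value action, regularized toward prior
```

Note that as λ grows, the returned distribution approaches the prior π_θ, while as λ shrinks it concentrates on the greedy action; the abstract's claim is that MCTS visit-count heuristics implicitly approximate a solution of this kind.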