The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation

Mark Rowland, Yunhao Tang, Clare Lyle, Remi Munos, Marc G Bellemare, Will Dabney

We study the problem of temporal-difference-based policy evaluation in reinforcement learning. In particular, we analyse the use of a distributional reinforcement learning algorithm, quantile temporal-difference learning (QTD), for this task. We reach the surprising conclusion that even if a practitioner has no interest in the return distribution beyond the mean, QTD (which learns predictions of the full distribution of returns) may offer performance superior to approaches such as classical TD learning, which predict only the mean return; this advantage holds even in the tabular setting.
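To make the comparison concrete, the following is a minimal sketch of the two tabular update rules being contrasted: classical TD(0), which updates a single mean-return estimate per state, and QTD, which updates m quantile estimates per state at levels tau_i = (2i-1)/(2m). The function names, step sizes, and array layout here are illustrative, not taken from the paper's own code.

```python
import numpy as np

def td_update(V, x, r, x_next, gamma=0.99, alpha=0.1):
    """Classical TD(0): move the mean-return estimate V[x]
    toward the bootstrapped target r + gamma * V[x_next]."""
    V[x] += alpha * (r + gamma * V[x_next] - V[x])

def qtd_update(theta, x, r, x_next, gamma=0.99, alpha=0.1):
    """Tabular QTD: theta[x] holds m quantile estimates of the return
    from state x. Each quantile moves up or down by a bounded amount
    depending on the fraction of bootstrapped targets that fall below
    it (a sign-based rather than magnitude-based update)."""
    m = theta.shape[1]
    taus = (2 * np.arange(m) + 1) / (2 * m)  # quantile midpoint levels
    targets = r + gamma * theta[x_next]      # m bootstrapped targets
    for i in range(m):
        # fraction of targets below the i-th quantile estimate
        below = np.mean(targets < theta[x, i])
        theta[x, i] += alpha * (taus[i] - below)

# A mean-return estimate is recovered from QTD by averaging the
# quantile estimates: V_hat(x) = theta[x].mean()
```

Note the qualitative difference: the TD increment scales with the (possibly high-variance) TD error, whereas each QTD increment is bounded by the step size, which is one route to the statistical behaviour the paper studies.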