Counterfactual Credit Assignment in Model-Free Reinforcement Learning

Thomas Mesnard, Theophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Thomas S Stepleton, Nicolas Heess, Arthur Guez, Eric Moulines, Marcus Hutter, Lars Buesing, Remi Munos

Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
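To make the abstract's central idea concrete, below is a minimal, hypothetical sketch (in PyTorch, with invented module and function names such as `HindsightBaseline` and `cca_policy_gradient_loss`) of a policy gradient update whose baseline is conditioned on a learned summary of the future trajectory. It is an illustration of the general technique described above, not the paper's actual architecture, and it omits the independence constraint that keeps the hindsight summary uninformative about the agent's actions.

```python
import torch
import torch.nn as nn


class HindsightBaseline(nn.Module):
    """Future-conditional baseline V(s_t, phi_t), where phi_t is a learned
    summary of the future trajectory (hypothetical architecture)."""

    def __init__(self, state_dim, hindsight_dim, hidden=64):
        super().__init__()
        # A backward GRU summarises the future of the trajectory into phi_t.
        self.backward_rnn = nn.GRU(state_dim, hindsight_dim)
        self.value_head = nn.Sequential(
            nn.Linear(state_dim + hindsight_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states):
        # states: [T, B, state_dim]. Running the GRU over the time-reversed
        # sequence makes phi_t a function of s_t, s_{t+1}, ..., s_T only.
        reversed_summary, _ = self.backward_rnn(states.flip(0))
        phi = reversed_summary.flip(0)
        values = self.value_head(torch.cat([states, phi], dim=-1)).squeeze(-1)
        return values, phi


def cca_style_policy_gradient_loss(log_probs, returns, baseline_values):
    """Score-function loss with a future-conditional baseline subtracted.

    In the paper, the baseline must carry no information about the agent's
    actions for the estimator to remain unbiased; that constraint is not
    enforced in this sketch.
    """
    advantages = (returns - baseline_values).detach()
    return -(advantages * log_probs).mean()
```

A usage pattern under these assumptions would be: roll out a trajectory, compute `baseline_values` from the full state sequence with `HindsightBaseline`, and plug the per-step log-probabilities and returns into `cca_style_policy_gradient_loss` for the policy update, while training the value head by regression onto the returns.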