Data-efficient Hindsight Off-policy Option Learning

Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Siegel, Nicolas Heess, Martin Riedmiller

We introduce Hindsight Off-policy Options (HO2), a data-efficient option learning algorithm. Given any trajectory, HO2 infers likely option choices and backpropagates through the dynamic programming inference procedure to robustly train all policy components off-policy and end-to-end. The approach outperforms existing option learning methods on common benchmarks. To better understand the option framework and disentangle the benefits of temporal and action abstraction, we evaluate ablations with flat policies and with mixture policies under comparable optimization. The results highlight the importance of both types of abstraction, as well as of off-policy training and trust-region constraints, particularly in challenging, simulated 3D robot manipulation tasks from raw pixel inputs. Finally, we adapt the inference step to investigate the effect of increased temporal abstraction, both when training with pre-trained options and when training from scratch.
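The dynamic-programming inference over option choices that the abstract refers to is structurally similar to the forward algorithm for hidden Markov models: per-step option likelihoods are combined with option-transition probabilities, and because every operation is differentiable, gradients can flow back through the recursion. The following is a minimal illustrative sketch, not the paper's implementation; the function name, shapes, and probability arrays are assumptions for the example.

```python
import numpy as np

def forward_option_messages(log_lik, log_trans, log_init):
    """Forward (dynamic-programming) pass over latent option choices.

    log_lik:   (T, K) log-likelihood of the observed action under option k at step t
    log_trans: (K, K) log-probability of switching from option i to option j
    log_init:  (K,)   log-probability of the initial option

    Returns alpha, shape (T, K): log joint probability of the trajectory
    prefix up to step t and option k being active at step t.
    """
    T, K = log_lik.shape
    alpha = np.empty((T, K))
    alpha[0] = log_init + log_lik[0]
    for t in range(1, T):
        # Sum over the previous option in log space (log-sum-exp) for stability.
        prev = alpha[t - 1][:, None] + log_trans  # (K, K): prev option -> current
        alpha[t] = np.logaddexp.reduce(prev, axis=0) + log_lik[t]
    return alpha
```

Log-sum-exp over the final messages yields the total trajectory log-likelihood, marginalized over all option sequences; in an autodiff framework the same recursion would let gradients reach both the option policies (via `log_lik`) and the high-level controller (via `log_trans`).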