Human-Timescale Adaptation in an Open-Ended Task Space

Jakob Bauer, Kate Baumli, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Satinder Singh, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, Lei M Zhang

Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.