Hyperparameter Selection for Imitation Learning
Léonard Hussenot, Marcin Andrychowicz, Damien Vincent, Robert Dadashi, Anton Raichuk, Sabela Ramos, Nikola Momchev, Sertan Girgin, Raphael Marinier, Lukasz Stafiniak, Manu Orsini, Olivier Bachem, Matthieu Geist, Olivier Pietquin
We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature on imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, were this reward function available, it could be used directly for policy training and imitation would not be necessary. To tackle this mostly ignored problem, we propose a number of possible proxies to the external reward. We evaluate them in an extensive empirical study (more than 10,000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good enough HPs through a proxy to the reward function.
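To make the setup concrete, the sketch below shows one way a reward-free proxy could drive HP selection. The specific proxy (action agreement with held-out expert demonstrations) and the helpers `action_matching_proxy`, `select_hyperparameters`, and `train_agent` are illustrative assumptions, not the paper's exact procedure, which evaluates several candidate proxies.

```python
# Minimal sketch of reward-free HP selection via a proxy metric.
# The proxy used here (action agreement on held-out demonstrations)
# is an illustrative assumption, not the paper's definitive choice.
import numpy as np

def action_matching_proxy(policy, heldout_demos):
    """Negative mean squared error between policy and expert actions.

    `policy` is a callable mapping an observation to an action;
    `heldout_demos` is a list of (observation, expert_action) pairs
    not used during training. Higher is better.
    """
    errors = [np.mean((policy(obs) - act) ** 2) for obs, act in heldout_demos]
    return -float(np.mean(errors))

def select_hyperparameters(hp_candidates, train_agent, heldout_demos):
    """Train one agent per HP configuration, keep the best proxy score."""
    scored = []
    for hps in hp_candidates:
        policy = train_agent(hps)  # e.g. an adversarial IL or BC training run
        scored.append((action_matching_proxy(policy, heldout_demos), hps))
    return max(scored, key=lambda x: x[0])[1]  # HPs with the highest proxy score
```

In this sketch the expert reward is never queried: candidate configurations are ranked only by how closely the resulting policies reproduce expert actions on demonstrations held out from training.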


