OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via Distribution Matching

Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota

Inverse Reinforcement Learning (IRL) is attractive in scenarios where reward engineering is tedious. However, prior IRL algorithms rely on on-policy transitions, requiring intensive sampling from the current policy to achieve stable and optimal performance. This limits IRL applications in the real world, where environment interactions can be prohibitively expensive. To tackle this problem, we present OPIRL (Off-Policy Inverse Reinforcement Learning)...