Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions

Alejandro Escontrela, Xue Bin Peng, Wenhao Yu, Tingnan Zhang, Atil Iscen, Ken Goldberg, Pieter Abbeel

Training a high-dimensional simulated agent with an under-specified reward function often leads the agent to learn physically infeasible strategies that are ineffective when deployed in the real world. To mitigate these unnatural behaviors, reinforcement learning practitioners often utilize complex reward functions that encourage physically plausible behaviors. However, a tedious, labor-intensive t...
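As background, the adversarial motion prior approach replaces hand-engineered style terms with a reward derived from a discriminator trained to distinguish the agent's state transitions from reference motion data. A minimal sketch of the least-squares style reward used in AMP (Peng et al., 2021) is shown below; the function name and scalar interface are illustrative, not taken from this paper's code.

```python
def amp_style_reward(d: float) -> float:
    """Least-squares style reward from AMP (Peng et al., 2021):
    r = max(0, 1 - 0.25 * (d - 1)^2), where d is the discriminator's
    score for a state transition. Scores near 1 (transition resembles
    the reference motion data) yield rewards near 1; scores far from 1
    are clipped to a reward of 0."""
    return max(0.0, 1.0 - 0.25 * (d - 1.0) ** 2)
```

In practice this style reward is combined with a simple task reward (e.g., tracking a target velocity), so the practitioner no longer needs to hand-tune many shaping terms.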