Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions
Alejandro Escontrela, Xue Bin Peng, Wenhao Yu, Tingnan Zhang, Atil Iscen, Ken Goldberg, Pieter Abbeel
Training a high-dimensional simulated agent with an under-specified reward function often leads the agent to learn physically infeasible strategies that are ineffective when deployed in the real world. To mitigate these unnatural behaviors, reinforcement learning practitioners often utilize complex reward functions that encourage physically plausible behaviors. However, a tedious, labor-intensive t...
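To make the idea concrete, here is a minimal sketch of how an adversarial motion prior can stand in for a hand-tuned reward. It assumes the AMP-style formulation in which a discriminator scores transitions as expert-like, and its output is converted into a bounded style reward that is mixed with a simple task reward; the function names and the weights `w_task`/`w_style` are illustrative, not from the paper.

```python
def style_reward(d_logit: float) -> float:
    # AMP-style reward derived from a discriminator score d(s, s'):
    # r_style = max(0, 1 - 0.25 * (d - 1)^2), bounded in [0, 1] and
    # maximized when the transition is scored as expert-like (d = 1).
    return max(0.0, 1.0 - 0.25 * (d_logit - 1.0) ** 2)

def total_reward(task_r: float, d_logit: float,
                 w_task: float = 0.5, w_style: float = 0.5) -> float:
    # The policy trains on a weighted sum of a simple task reward
    # (e.g. velocity tracking) and the learned style reward, in place
    # of a complex multi-term, hand-engineered reward function.
    return w_task * task_r + w_style * style_reward(d_logit)
```

The key point is that the style term is *learned* from reference motion data rather than specified by hand, so the task reward can stay simple.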