Q-value Path Decomposition for Deep Multiagent Reinforcement Learning
Yaodong Yang, Jianye Hao, Guangyong Chen, Hongyao Tang, Yingfeng Chen, Yujing Hu, Changjie Fan, Zhongyu Wei
Recently, deep multiagent reinforcement learning (MARL) has become a highly active research area, as many real-world problems can be inherently viewed as multiagent systems. A particularly interesting and widely applicable class of problems is the partially observable cooperative multiagent setting, in which a team of agents learns to coordinate their behaviors conditioned on their private observations and a commonly shared global reward signal. One natural solution is to resort to the centralized training and decentralized execution paradigm, and during centralized training a key challenge is multiagent credit assignment: how to allocate the global rewards to individual agents' policies for better coordination toward maximizing the system-level benefit. In this paper, we propose a new method called Q-value Path Decomposition (QPD) to decompose the system's global Q-values into individual agents' Q-values. Unlike previous works, which restrict the representational relation between the individual Q-values and the global one, we bring the integrated gradients attribution technique into deep MARL to directly decompose global Q-values along trajectory paths and assign credits to agents. We evaluate QPD on the challenging StarCraft II micromanagement tasks and show that QPD achieves state-of-the-art performance in both homogeneous and heterogeneous multiagent scenarios compared with existing cooperative MARL algorithms.
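For orientation, the sketch below illustrates the standard integrated gradients attribution technique (Sundararajan et al., 2017) that the abstract refers to, approximating the path integral with a Riemann sum. It is a minimal, hypothetical toy, not the QPD implementation: the quadratic Q-function, the zero baseline, and the names `grad_fn`, `integrated_gradients` are illustrative assumptions; how QPD applies the attribution along trajectory paths is described in the paper itself.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate integrated gradients of a scalar function along the
    straight-line path from `baseline` to `x` (Riemann-sum approximation).
    `grad_fn(z)` must return the gradient of the scalar output w.r.t. z."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    # Completeness: the attributions approximately sum to F(x) - F(baseline).
    return (x - baseline) * grads.mean(axis=0)

# Toy usage (assumed, not from the paper): attribute a quadratic
# "global Q-value" Q(z) = z1^2 + z2^2 to two agents' features.
grad_q = lambda z: 2.0 * z            # gradient of the toy Q-function
x = np.array([1.0, 3.0])              # joint features at one time step
baseline = np.zeros_like(x)           # reference point
print(integrated_gradients(grad_q, x, baseline))  # ~[1.0, 9.0], sums to Q(x)
```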