Towards Multi-Grained Explainability for Graph Neural Networks

Xiang Wang, Yingxin Wu, An Zhang, Xiangnan He, Tat-Seng Chua

When a graph neural network (GNN) makes a prediction, one raises a question about explainability: "Which fraction of the input graph is most influential to the model's decision?" Producing an answer requires understanding the model's inner workings in general and emphasizing the insights into the decision for the instance at hand. Nonetheless, most current approaches focus on only one aspect: (1) local explainability, which explains each instance independently and thus hardly exhibits class-wise patterns; or (2) global explainability, which systematizes the globally important patterns but may be trivial in the local context. This dichotomy greatly limits the flexibility and effectiveness of explainers. A performant paradigm towards multi-grained explainability has until now been lacking and is thus a focus of our work. In this work, we exploit the pre-training and fine-tuning idea to develop our explainer and generate multi-grained explanations. Specifically, the pre-training phase accounts for the contrastivity among different classes, so as to highlight the class-wise characteristics from a global view; afterwards, the fine-tuning phase adapts the explanations to the local context. Experiments on both synthetic and real-world datasets show the superiority of our explainer over the leading baselines, in terms of AUC on explaining graph classification. Our code and datasets are available at https://github.com/Wuyxin/ReFine.
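To make the two-phase idea concrete, the following is a minimal toy sketch (not the authors' ReFine implementation, which trains an edge-mask explainer on a GNN): a "pre-training" step derives class-level edge scores by contrasting how often each edge appears within a class versus in the other classes, and a "fine-tuning" step adapts those global scores to a single instance. All function names and the `boost` parameter are hypothetical illustrations.

```python
# Toy sketch of global-to-local explanation scoring (hypothetical, not ReFine).
from collections import defaultdict

def pretrain_class_scores(graphs):
    """Phase 1 (global): contrast edge frequencies across classes.
    Each graph is (edge_list, class_label). An edge's score for class c
    is its within-class frequency minus its mean frequency elsewhere,
    a crude stand-in for the contrastive pre-training objective."""
    freq = defaultdict(lambda: defaultdict(float))
    totals = defaultdict(int)
    for edges, label in graphs:
        totals[label] += 1
        for e in set(edges):
            freq[label][e] += 1.0
    classes = list(totals)
    scores = {}
    for c in classes:
        others = [o for o in classes if o != c]
        all_edges = set(freq[c]) | {e for o in others for e in freq[o]}
        scores[c] = {}
        for e in all_edges:
            own = freq[c].get(e, 0.0) / totals[c]
            rest = sum(freq[o].get(e, 0.0) / totals[o] for o in others)
            rest /= max(len(others), 1)
            scores[c][e] = own - rest
    return scores

def finetune_on_instance(class_scores, edges, boost=0.5):
    """Phase 2 (local): restrict the global class scores to the edges
    actually present in one instance and nudge them, adapting the
    explanation to the local context."""
    return {e: class_scores.get(e, 0.0) + boost for e in set(edges)}
```

For example, if every class-0 graph contains the edge `("a", "b")` and no class-1 graph does, `pretrain_class_scores` assigns that edge a high class-0 score, and `finetune_on_instance` then ranks it among the instance's own edges; the real method replaces these frequency counts with learned, differentiable edge masks.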