Graph Convolutional Networks (GCN)

Tags: #machine-learning #graph #GNN

Equation

$$H^{(l+1)}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)})\\ \tilde{A}=A+I_{N}\\ \tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij} \\ H^{(0)}=X \\ \mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}}\sum^{F}_{f=1} Y_{lf} \ln Z_{lf}$$

Latex Code

    H^{(l+1)}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}) \\
    \tilde{A}=A+I_{N} \\
    \tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij} \\
    H^{(0)}=X \\
    \mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}}\sum^{F}_{f=1} Y_{lf} \ln Z_{lf}
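
To make the normalization concrete, the renormalized adjacency \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} can be computed in a few lines of NumPy. This is a minimal sketch on a toy 3-node path graph; the function name normalize_adjacency is an illustrative choice, not from the paper or any library.

    import numpy as np

    def normalize_adjacency(A):
        # A~ = A + I_N: add self-connections to the adjacency matrix
        A_tilde = A + np.eye(A.shape[0])
        # D~_ii = sum_j A~_ij, raised to the power -1/2 as a diagonal matrix
        d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
        # symmetric renormalization: D~^{-1/2} A~ D~^{-1/2}
        return d_inv_sqrt @ A_tilde @ d_inv_sqrt

    # Toy 3-node path graph 0 - 1 - 2
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    print(normalize_adjacency(A))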

Explanation

In this formulation, W^{(l)} denotes the layer-specific trainable weight matrix and sigma is a nonlinear activation function such as ReLU. H^{(0)} is the input feature matrix X (H^{(0)} = X) with dimension N * D, where N is the number of nodes and D is the number of input features per node, and H^{(l)} is the hidden representation of the graph at the l-th layer. The propagated matrix is the symmetrically normalized adjacency with added self-connections, as defined by the second and third equations above. The model is trained for semi-supervised node classification: the loss L is the cross-entropy over the set of labeled node indices Y_L, where Z is the softmax output of the final layer and F is the number of classes. You can find more details in the ICLR 2017 paper Semi-Supervised Classification with Graph Convolutional Networks by Kipf and Welling.
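
To illustrate the full propagation rule and the loss together, below is a minimal NumPy sketch of a two-layer GCN forward pass, Z = softmax(A_hat ReLU(A_hat X W^{(0)}) W^{(1)}), with the cross-entropy taken over labeled nodes only. This is a dense, didactic sketch, not the authors' implementation; the function names (gcn_forward, semi_supervised_loss), the hidden width, and the random toy data are illustrative assumptions.

    import numpy as np

    def normalize_adjacency(A):
        # Same renormalization trick as the sketch above: D~^{-1/2} (A + I_N) D~^{-1/2}
        A_tilde = A + np.eye(A.shape[0])
        d_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
        return d_inv_sqrt @ A_tilde @ d_inv_sqrt

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(x):
        # Row-wise, numerically stable softmax
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def gcn_forward(A_hat, X, W0, W1):
        # H^{(1)} = sigma(A_hat H^{(0)} W^{(0)}), with H^{(0)} = X
        H1 = relu(A_hat @ X @ W0)
        # Z = softmax(A_hat H^{(1)} W^{(1)})
        return softmax(A_hat @ H1 @ W1)

    def semi_supervised_loss(Z, Y, labeled):
        # L = -sum_{l in Y_L} sum_f Y_lf ln Z_lf, summed over labeled nodes only
        return -np.sum(Y[labeled] * np.log(Z[labeled] + 1e-12))

    # Toy run: N=3 nodes on a path graph, D=4 input features, F=2 classes
    rng = np.random.default_rng(0)
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    X = rng.normal(size=(3, 4))
    W0 = 0.1 * rng.normal(size=(4, 8))   # hidden width 8 is arbitrary
    W1 = 0.1 * rng.normal(size=(8, 2))
    Z = gcn_forward(normalize_adjacency(A), X, W0, W1)
    Y = np.array([[1., 0.], [0., 1.], [1., 0.]])
    print(semi_supervised_loss(Z, Y, labeled=[0, 2]))  # loss over the two labeled nodes

In practice the normalized adjacency is precomputed once (typically as a sparse matrix) and the weights W^{(0)}, W^{(1)} are learned by gradient descent on this loss; the sketch shows only a single forward evaluation.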
