We introduce Graph Information Bottleneck (GIB), an information-theoretic principle for learning robust representations of graphs. Built upon this principle, we propose two GNN models, GIB-Cat and GIB-Bern, that achieve state-of-the-art robustness against adversarial attacks on the structure or node features of graph data.
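In information-bottleneck notation, the principle trades off informativeness about the target Y against compression of the input graph data D = (A, X), with a coefficient β controlling the trade-off. A sketch of the objective (symbols as commonly written for GIB):

```latex
\min_{\,\mathbb{P}(Z \mid \mathcal{D}) \,\in\, \Omega} \;
\mathrm{GIB}_{\beta}(\mathcal{D}, Y; Z)
\;\triangleq\;
\bigl[\, -I(Y; Z) \;+\; \beta\, I(\mathcal{D}; Z) \,\bigr]
```

Minimizing the first term encourages Z to predict Y well; minimizing the second limits how much of the (possibly attacked) input graph leaks into Z.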
In each layer, the algorithm iteratively refines the representations of the adjacency matrix (A) and the node features (X) under a local-dependence assumption. To model the graph structure, we instantiate the GIB principle with two neighbor-sampling algorithms: GIB-Cat, which samples neighbors from a categorical distribution, and GIB-Bern, which samples them from a Bernoulli distribution.
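To make the two sampling schemes concrete, the sketch below draws differentiable (relaxed) neighbor masks from per-edge scores using logistic and Gumbel noise. This is a minimal illustration, not the paper's implementation: the function names, the `logits` input, and the temperature `tau` are our own for exposition.

```python
import numpy as np

def sample_neighbors_bern(logits, tau=1.0, rng=None):
    """GIB-Bern-style sketch: relaxed Bernoulli (binary concrete) mask.

    logits: per-edge scores for one node's candidate neighbors
    (hypothetical name). Returns a soft keep/drop mask in (0, 1)
    that is differentiable with respect to the logits.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log1p(-u)
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits) + logistic_noise) / tau))

def sample_neighbors_cat(logits, k, tau=1.0, rng=None):
    """GIB-Cat-style sketch: k Gumbel-softmax draws over the neighbor set,
    combined by an elementwise max into one soft neighbor mask."""
    rng = np.random.default_rng(rng)
    masks = []
    for _ in range(k):
        g = -np.log(-np.log(rng.uniform(1e-8, 1 - 1e-8,
                                        size=np.shape(logits))))
        y = np.exp((np.asarray(logits) + g) / tau)
        masks.append(y / y.sum())  # relaxed one-hot over neighbors
    return np.maximum.reduce(masks)
```

Lowering `tau` pushes both relaxations toward hard 0/1 samples, at the cost of noisier gradients.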
We further derive variational bounds for the mutual-information maximization and minimization terms, which make both models tractable to optimize.
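For intuition, bounds of this kind typically take the following standard variational form, with q denoting variational distributions (a generic sketch, not the paper's exact instantiation):

```latex
I(Y; Z) \;\ge\; \mathbb{E}_{p(Y, Z)}\bigl[\log q(Y \mid Z)\bigr] + H(Y),
\qquad
I(\mathcal{D}; Z) \;\le\; \mathbb{E}_{p(\mathcal{D})}\Bigl[\mathrm{KL}\bigl(p(Z \mid \mathcal{D}) \,\big\|\, q(Z)\bigr)\Bigr].
```

The lower bound turns the maximization term into a prediction loss, and the upper bound turns the minimization term into a KL regularizer.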
Experimental results show that GIB is more robust than previous defense models against attacks on both graph structure and node features. We show results for both attack types on Cora here; more results can be found in our paper.
The following people contributed to GIB:
Tailin Wu*
Hongyu Ren*
Pan Li
Jure Leskovec
Graph Information Bottleneck. T. Wu*, H. Ren*, P. Li, J. Leskovec. Neural Information Processing Systems (NeurIPS), 2020.