Strategies for Pre-training Graph Neural Networks

We develop a strategy for pre-training Graph Neural Networks (GNNs) and systematically study its effectiveness across multiple datasets, GNN architectures, and diverse downstream tasks.

Motivation

Many domains in machine learning have datasets with a large number of related but different tasks. These domains are challenging because task-specific labels are often scarce and test examples can be distributionally different from the examples seen during training. An effective way to address these challenges is to pre-train a model on related tasks where data is abundant, and then fine-tune it on the downstream task of interest. While pre-training has proven effective in many language and vision domains, how to effectively pre-train on graph datasets remains an open question.

Method

We develop a strategy for pre-training Graph Neural Networks (GNNs). Crucial to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs, so that the GNN can learn useful local and global representations simultaneously.
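
The sketch below is a minimal illustration of this idea, not our released implementation: it combines a node-level objective (masking node attributes and reconstructing them, standing in for the node-level pre-training tasks in the paper) with a graph-level objective (supervised multi-task prediction on pooled node embeddings). The ToyGNN encoder, the dense-adjacency message passing, the MSE reconstruction loss, and all sizes and hyperparameters are illustrative assumptions rather than components of our codebase.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGNN(nn.Module):
    """Two rounds of mean-neighbor message passing over a dense adjacency."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # Row-normalize the adjacency (self-loops included) and propagate.
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1(adj @ x))
        return F.relu(self.lin2(adj @ h))  # per-node embeddings

in_dim, hid_dim, num_graph_tasks = 16, 64, 5
encoder = ToyGNN(in_dim, hid_dim)
node_head = nn.Linear(hid_dim, in_dim)            # reconstructs masked node attributes
graph_head = nn.Linear(hid_dim, num_graph_tasks)  # predicts multi-task graph-level labels
params = list(encoder.parameters()) + list(node_head.parameters()) + list(graph_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# One toy graph: 10 nodes with random features, random symmetric edges,
# and 5 binary graph-level labels standing in for supervised pre-training tasks.
x = torch.randn(10, in_dim)
adj = (torch.rand(10, 10) > 0.7).float()
adj = ((adj + adj.t()) > 0).float() + torch.eye(10)
graph_y = torch.randint(0, 2, (1, num_graph_tasks)).float()

for step in range(100):
    # Node-level objective: mask some node attributes and reconstruct them.
    mask = torch.rand(10) < 0.3
    x_masked = x.clone()
    x_masked[mask] = 0.0
    h = encoder(x_masked, adj)
    node_loss = F.mse_loss(node_head(h[mask]), x[mask]) if mask.any() else h.sum() * 0.0

    # Graph-level objective: mean-pool node embeddings and predict graph labels.
    graph_emb = encoder(x, adj).mean(dim=0, keepdim=True)
    graph_loss = F.binary_cross_entropy_with_logits(graph_head(graph_emb), graph_y)

    opt.zero_grad()
    (node_loss + graph_loss).backward()
    opt.step()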

We systematically study different pre-training strategies on multiple datasets and find that when ad-hoc strategies are applied, pre-trained GNNs often exhibit negative transfer and perform worse than non-pre-trained GNNs on many downstream tasks. In contrast, our proposed strategy avoids negative transfer across downstream tasks, leading to gains of up to 9.4% in absolute ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
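
For concreteness, the sketch below illustrates the kind of downstream comparison behind these numbers: fine-tune the same architecture once from random initialization and once from pre-trained weights, then score both runs with ROC-AUC on held-out data. It is a simplified stand-in, with a small MLP on synthetic features in place of a GNN on molecular graphs, and the pre-trained state is faked here only to exercise the code path; none of the names come from our released code.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

def build_model(feat_dim=32, hid_dim=64):
    # Stand-in for a GNN encoder followed by a downstream prediction head.
    return nn.Sequential(nn.Linear(feat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

def fine_tune_and_score(train, test, pretrained_state=None, steps=200):
    x_tr, y_tr = train
    x_te, y_te = test
    model = build_model()
    if pretrained_state is not None:
        # Initialize the encoder layers from pre-trained weights; the head stays random.
        model.load_state_dict(pretrained_state, strict=False)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(x_tr).squeeze(-1), y_tr)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        scores = torch.sigmoid(model(x_te)).squeeze(-1)
    return roc_auc_score(y_te.numpy(), scores.numpy())

def toy_split(n, feat_dim=32):
    # Synthetic downstream task: binary labels derived from random features.
    x = torch.randn(n, feat_dim)
    return x, (x[:, 0] > 0).float()

train, test = toy_split(512), toy_split(256)
auc_scratch = fine_tune_and_score(train, test)  # no pre-training
# In practice the pre-trained state is saved after the pre-training stage;
# here it is faked with fresh encoder weights purely for illustration.
fake_state = {k: v for k, v in build_model().state_dict().items() if k.startswith("0.")}
auc_pretrained = fine_tune_and_score(train, test, pretrained_state=fake_state)
print(f"ROC-AUC from scratch: {auc_scratch:.3f}   with pre-trained init: {auc_pretrained:.3f}")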

Please refer to our paper for detailed explanations and more results.

Code

Python code to reproduce our experiments is available on GitHub.

Datasets

The datasets used are included in the code repository.

Contributors

The following people contributed to this work:
Weihua Hu*
Bowen Liu*
Joseph Gomes
Marinka Zitnik
Percy Liang
Vijay Pande
Jure Leskovec

References

Strategies for Pre-training Graph Neural Networks. W. Hu*, B. Liu*, J. Gomes, M. Zitnik, P. Liang, V. Pande, J. Leskovec. International Conference on Learning Representations (ICLR), 2020.