QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

QA-GNN is an end-to-end question answering model that jointly reasons over knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) via graph neural networks. It achieves strong QA performance compared to existing LM-only and LM+KG models.

Motivation

An integral aspect of question answering is accessing relevant knowledge and being able to reason over it. Knowledge can be encoded implicitly in large language models (LMs) pre-trained on unstructured text (e.g. BERT), or explicitly in structured knowledge graphs (KGs), such as Freebase and ConceptNet, where entities are represented as nodes and relations between them as edges. Pre-trained LMs have a broad coverage of knowledge, but they do not perform well on structured reasoning (e.g. handling negation). On the other hand, KGs are suited to structured reasoning and enable explainable predictions, but may lack coverage and be noisy. How to reason effectively with both sources of knowledge remains an important open problem.

Method

Combining LMs and KGs for reasoning presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.


We develop QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph-based message passing.
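To make the two steps concrete, here is a minimal, self-contained sketch (not the authors' implementation): `lm_score` is a hypothetical stand-in for an LM-based relevance scorer, here stubbed with word overlap, and `message_passing` mutually updates node vectors over the joint graph with a relevance-weighted neighbor average in place of a learned GNN layer.

```python
def lm_score(qa_context: str, node_text: str) -> float:
    """Stub for step (i): an LM would score how relevant a KG node is
    to the QA context. Here: fraction of node words found in the context."""
    ctx = set(qa_context.lower().split())
    node = set(node_text.lower().split())
    return len(ctx & node) / max(len(node), 1)

def relevance_scores(qa_context, kg_nodes):
    """Score every retrieved KG node against the QA context."""
    return {n: lm_score(qa_context, text) for n, text in kg_nodes.items()}

def message_passing(features, edges, scores, steps=2):
    """Step (ii) sketch: on the joint graph (QA-context node + KG nodes),
    each node mixes its vector with a relevance-weighted average of its
    neighbors' vectors, repeated for a few rounds."""
    nbrs = {n: [] for n in features}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    for _ in range(steps):
        new = {}
        for n, vec in features.items():
            msgs = [[scores[m] * x for x in features[m]] for m in nbrs[n]]
            if msgs:
                agg = [sum(col) / len(msgs) for col in zip(*msgs)]
                new[n] = [(a + b) / 2 for a, b in zip(vec, agg)]
            else:
                new[n] = vec
        features = new
    return features

# Hypothetical usage: a question, two candidate KG nodes, and a joint
# graph where the QA-context node is connected to every KG node.
qa = "where would you keep a pet bird"
nodes = {"ctx": qa, "cage": "bird cage", "ocean": "deep ocean water"}
scores = relevance_scores(qa, nodes)
features = {"ctx": [1.0, 0.0], "cage": [0.0, 1.0], "ocean": [0.0, 0.0]}
edges = [("ctx", "cage"), ("ctx", "ocean")]
updated = message_passing(features, edges, scores)
```

In the actual model the scorer is a pre-trained LM over the concatenated QA context and node text, and the update is a learned attention-based GNN layer; the sketch only shows how relevance scores and joint message passing fit together.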


QA-GNN achieves improved performance over existing LM and LM+KG models on the CommonsenseQA and OpenBookQA benchmarks using the ConceptNet knowledge graph.

Please refer to our paper for detailed explanations and more results.

Code

Our code is available on GitHub.

Datasets

The datasets used are included in the code repository.

Contributors

The following people contributed to QA-GNN:
Michihiro Yasunaga
Hongyu Ren
Antoine Bosselut
Percy Liang
Jure Leskovec

References

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering. M. Yasunaga, H. Ren, A. Bosselut, P. Liang, J. Leskovec. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.