August 20, 2019 13:42


Title: How powerful are graph neural networks, and what can they reason about?

Abstract: We study the theoretical and algorithmic aspects of Graph Neural Networks (GNNs) – an effective framework for learning with graphs. In part (i), we characterize the representational power of GNNs and build a maximally powerful GNN with Graph Isomorphism Network
(GIN) and Jumping Knowledge Network (JK-Net). In part (ii), we study the generalization of GNNs, with a focus on abstract reasoning tasks.
Our theory is based on an algorithmic alignment framework and draws connections with the theory of over-parameterized neural networks. We show that GNNs align well with dynamic programming algorithms and can therefore solve a broad range of reasoning problems.
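As a rough illustration of the GIN update rule referenced in the abstract: each node sums its neighbors' features, adds its own (scaled by 1 + ε), and passes the result through an MLP. The sketch below is a minimal numpy version, not the authors' implementation; the toy graph, MLP sizes, and weights are made up for demonstration.

```python
import numpy as np

def gin_layer(H, A, W1, b1, W2, b2, eps=0.0):
    """One GIN layer: h_v' = MLP((1 + eps) * h_v + sum of neighbor features).
    H: (n, d) node features; A: (n, n) adjacency matrix.
    The MLP here is two linear layers with a ReLU in between (illustrative choice)."""
    agg = (1.0 + eps) * H + A @ H           # sum-aggregate over neighbors
    hidden = np.maximum(agg @ W1 + b1, 0.0)  # ReLU
    return hidden @ W2 + b2

# Toy 3-node path graph: 0 - 1 - 2, with one-hot node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 2)), np.zeros(2)
out = gin_layer(H, A, W1, b1, W2, b2)
print(out.shape)  # (3, 2): one 2-dimensional embedding per node
```

Sum aggregation (rather than mean or max) is what lets GIN distinguish the multisets of neighbor features that the Weisfeiler-Lehman test distinguishes, which is the sense in which it is "maximally powerful" among message-passing GNNs.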

This talk is based on the following papers:

What Can Neural Networks Reason About?

How Powerful are Graph Neural Networks?

Representation Learning on Graphs with Jumping Knowledge Networks

Bio: Keyulu Xu is a Ph.D. student at Massachusetts Institute of Technology (MIT) in the EECS department, where he works with Professor Stefanie Jegelka. He is a member of the Computer Science and AI Lab
(CSAIL) and the machine learning group. Keyulu has been a visiting researcher with Professor Ken-ichi Kawarabayashi at the National Institute of Informatics (NII) since 2016. Keyulu is also a research fellow with Hudson River Trading (HRT) AI Labs. His research interests span the theory and practice of algorithmic machine learning.

More Information

Date: August 28, 2019 (Wed) 14:00 - 16:00


Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo