August 20, 2019 13:42

Abstract

Title: How powerful are graph neural networks, and what can they reason about?

Abstract: We study theoretical and algorithmic aspects of Graph Neural Networks (GNNs), an effective framework for learning with graphs. In part (i), we characterize the representational power of GNNs and build a maximally powerful GNN with the Graph Isomorphism Network (GIN) and the Jumping Knowledge Network (JK-Net). In part (ii), we study the generalization of GNNs, with a focus on abstract reasoning tasks.
Our theory is based on an algorithmic alignment framework and draws connections to the theory of over-parameterized neural networks. We show that GNNs align well with dynamic programming algorithms and can therefore solve a broad range of reasoning problems.
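To make the GIN construction concrete, below is a minimal sketch (not the speaker's code) of a single GIN layer as defined in "How Powerful are Graph Neural Networks?": each layer computes h_v = MLP((1 + eps) * h_v + sum of neighbor features). The function and parameter names (gin_layer, W1, b1, W2, b2) are illustrative assumptions, and the MLP is a simple two-layer ReLU network.

import numpy as np

def gin_layer(H, A, W1, b1, W2, b2, eps=0.0):
    """One GIN layer on node features H (n x d) with adjacency matrix A (n x n).

    W1, b1, W2, b2 parameterize a two-layer ReLU MLP; eps weights a node's own
    feature relative to the summed neighbor features (sum aggregation is what
    makes the update injective on feature multisets).
    """
    agg = (1.0 + eps) * H + A @ H          # sum over neighbors plus self term
    hidden = np.maximum(agg @ W1 + b1, 0)  # ReLU
    return hidden @ W2 + b2

# Toy usage: a 3-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
print(gin_layer(H, A, W1, b1, W2, b2).shape)  # (3, 8)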

This talk is based on the following papers:

What Can Neural Networks Reason About? https://arxiv.org/abs/1905.13211

How Powerful are Graph Neural Networks? https://arxiv.org/abs/1810.00826

Representation Learning on Graphs with Jumping Knowledge Networks https://arxiv.org/abs/1806.03536

Bio: Keyulu Xu is a Ph.D. student at the Massachusetts Institute of Technology (MIT) in the EECS department, where he works with Professor Stefanie Jegelka. He is a member of the Computer Science and AI Lab (CSAIL) and the machine learning group. Keyulu has been a visiting researcher with Professor Ken-ichi Kawarabayashi at the National Institute of Informatics (NII) since 2016. He is also a research fellow with Hudson River Trading (HRT) AI Labs. His research interests span the theory and practice of algorithmic machine learning.

More Information

Date August 28, 2019 (Wed) 14:00 - 16:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/96570

Venue

Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo