June 18, 2018 21:51

Abstract

Speaker:
Alex Lamb (Université de Montréal, Canada)

Title:
Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer

Abstract:
Deep networks often perform well on the data manifold on which they are trained, yet give incorrect (and often highly confident) answers when evaluated on points off of the training distribution. This is exemplified by the adversarial examples phenomenon, but it can also be seen in terms of model generalization and domain shift. We propose Manifold Mixup, which encourages the network to produce more reasonable and less confident predictions at points with combinations of attributes not seen in the training set. This is accomplished by training on convex combinations of the hidden-state representations of data samples. Using this method, we demonstrate improved semi-supervised learning, learning with limited labeled data, and robustness to novel transformations of the data not seen during training. Manifold Mixup requires no (significant) additional computation. We also discover intriguing properties related to adversarial examples and generative adversarial networks. Analytical experiments on both real and synthetic data directly support our hypothesis for why Manifold Mixup improves results.
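The core operation the abstract describes can be sketched concretely: sample a mixing coefficient lam from a Beta distribution, pick a layer at random, and form convex combinations of paired samples' hidden states (and of their labels) at that layer. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation; the architecture, the Beta parameter alpha, and all names are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ManifoldMixupMLP(nn.Module):
    """Small classifier that can mix hidden states at a random layer."""

    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
            nn.Linear(hidden, n_classes),
        ])

    def forward(self, x, y_onehot=None, alpha=2.0):
        if y_onehot is None:
            # Plain forward pass at evaluation time.
            h = x
            for layer in self.layers:
                h = layer(h)
            return h, None
        # k = 0 mixes in input space (ordinary Mixup); k > 0 mixes
        # the hidden representation fed into layer k.
        k = int(torch.randint(len(self.layers), (1,)))
        lam = float(torch.distributions.Beta(alpha, alpha).sample())
        perm = torch.randperm(x.size(0))
        h = x
        for i, layer in enumerate(self.layers):
            if i == k:
                # Convex combination of hidden states of paired samples.
                h = lam * h + (1.0 - lam) * h[perm]
            h = layer(h)
        # Labels are mixed with the same coefficient.
        y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
        return h, y_mixed


# Usage sketch: soft-label cross-entropy on the mixed targets.
model = ManifoldMixupMLP()
x = torch.randn(32, 784)
y = F.one_hot(torch.randint(10, (32,)), num_classes=10).float()
logits, y_mixed = model(x, y_onehot=y)
loss = -(y_mixed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

Note that choosing k = 0 recovers ordinary input-space Mixup, which makes clear in what sense Manifold Mixup generalizes Mixup to the hidden layers of the network.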

More Information

Date: July 20, 2018 (Fri) 14:00 - 15:30
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/76200