Speaker: Professor Aapo Hyvarinen
Title: Advances in Unsupervised Deep Learning
Unsupervised learning, in particular learning deep nonlinear representations, is one of the most difficult problems in machine learning. Here, we consider two different frameworks. First, we consider estimating latent quantities in a generative model. This provides a principled framework and has been used successfully in the linear case, e.g. as independent component analysis (ICA) or sparse coding. However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using an auxiliary variable, which might be defined by temporal structure, thus unifying the earlier methods of TCL and PCL. Second, we consider the general problem of estimating the energy function (or density function) of data. Score matching is a promising approach which sidesteps computation of the partition function given by the energy; in other words, it enables estimation of unnormalized models.
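As a concrete illustration of the linear case mentioned above, a minimal sketch of ICA on a synthetic linear mixture; the use of scikit-learn's FastICA, the source distributions, and the mixing matrix are all illustrative choices, not part of the talk:

```python
# Linear ICA sketch: recover independent sources from a linear mixture.
# Identifiability in the linear case requires non-Gaussian sources,
# and components are recovered only up to permutation and scaling.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
# Two independent non-Gaussian sources (Laplace and uniform, illustrative).
s = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # unknown mixing matrix
x = s @ A.T                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)             # estimated sources
```

In the nonlinear case, replacing the matrix `A` with an arbitrary invertible nonlinearity makes this recovery unidentifiable without extra structure, which is exactly the gap the auxiliary-variable approach addresses.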
However, its application to deep networks has seemed impossible because of computational difficulties in evaluating the objective function. Here, we show how score matching can be used in deep neural networks based on the denoising score matching framework introduced by Vincent. Furthermore, we show how to combine neural networks with reproducing kernel methods to estimate conditional densities. Thus, we have two different frameworks for unsupervised deep learning, with quite different goals; what they have in common is that both are probabilistically principled.
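The denoising score matching idea can be sketched in one dimension: perturb the data with Gaussian noise and regress a score model onto the rescaled noise direction. Here the score model is linear, s(x) = a*x + b, so the fit has a closed-form least-squares solution; the Gaussian data distribution and all parameter values are illustrative assumptions, not the talk's deep-network setting:

```python
# Denoising score matching (Vincent, 2011) sketch with a linear score model.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, sigma_n = 2.0, 1.0, 0.1
x = rng.normal(mu, sigma, size=100_000)      # clean data (illustrative)
eps = rng.normal(size=x.shape)
x_tilde = x + sigma_n * eps                  # noise-perturbed data

# DSM regression target: grad_x log q(x_tilde | x)
#   = (x - x_tilde) / sigma_n**2 = -eps / sigma_n
target = -eps / sigma_n

# Fit s(x) = a*x + b by least squares on the noisy samples.
X = np.column_stack([x_tilde, np.ones_like(x_tilde)])
(a, b), *_ = np.linalg.lstsq(X, target, rcond=None)

# The minimizer is the score of the *noisy* density N(mu, sigma^2 + sigma_n^2):
#   a -> -1/(sigma^2 + sigma_n^2),  b -> mu/(sigma^2 + sigma_n^2)
```

The point of the construction is that the objective never touches the partition function: only a regression against the known noise is needed, which is what makes the approach tractable for deep networks.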
Date: June 18, 2018 (Mon) 11:00 - 12:00