A few applications of deep generative models
Deep generative models are powerful tools for learning arbitrary distributions and have achieved remarkable success in tasks such as realistic data synthesis and unsupervised feature learning.
This talk introduces a few applications of generative models in physics and computer vision. In the first application, we propose a general framework for performing Monte Carlo simulation, in which the Boltzmann distribution of a physical system is approximated by a generative model such as an autoregressive model or a normalizing flow.
Because these models give access to the exact normalized sampling probability, our method provides asymptotically unbiased estimates of physical observables that involve the partition function.
Notably, the free energy and entropy can be estimated directly, together with reliable confidence intervals for the estimators.
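As a rough illustration of this idea, the sketch below estimates the free energy of a small 1D Ising chain with a factorized Bernoulli distribution standing in for a trained autoregressive model or normalizing flow (both of which, like the factorized model, provide an exactly normalized sampling probability). The system size, coupling, and temperature are hypothetical parameters chosen for illustration; the estimator is the standard variational bound F̂ = (1/β) E_q[log q(s) + βE(s)], with a confidence interval from the sample standard error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (hypothetical parameters): 1D Ising chain of N spins,
# open boundaries, E(s) = -J * sum_i s_i s_{i+1}.
N, J, beta = 8, 1.0, 0.5

def energy(s):
    return -J * np.sum(s[:, :-1] * s[:, 1:], axis=1)

# Factorized Bernoulli variational distribution q(s) = prod_i q_i(s_i),
# a stand-in for an autoregressive model: it is exactly normalized,
# so log q(s) is available in closed form for every sample.
p = np.full(N, 0.5)  # probability of spin +1 at each site

def sample_and_logq(m):
    u = rng.random((m, N))
    s = np.where(u < p, 1.0, -1.0)
    logq = np.sum(np.where(s > 0, np.log(p), np.log(1.0 - p)), axis=1)
    return s, logq

# Variational free-energy estimator: F_hat = (1/beta) * E_q[log q(s) + beta*E(s)].
# By the Gibbs inequality this is an upper bound on the true free energy.
s, logq = sample_and_logq(10_000)
f_samples = (logq + beta * energy(s)) / beta
F_hat = f_samples.mean()
stderr = f_samples.std(ddof=1) / np.sqrt(len(f_samples))
print(f"F ~ {F_hat:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
```

For this chain the exact free energy is -(1/β) ln[2 (2 cosh βJ)^(N-1)] ≈ -12.77, so the estimate from the untrained factorized model sits above it, as the bound requires; training the model tightens the gap.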
In the second and third applications, we use Langevin dynamics for adversarial defense and for improving image-translation performance, where the gradient of the log probability is estimated by denoising autoencoders. Both applications rely on "cooling down" test samples, with or without adversarial perturbations, making imperceptible changes to the inputs that nonetheless affect performance significantly.
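The procedure above can be sketched in a toy setting. For Gaussian data the Bayes-optimal denoiser has a closed form, which stands in below for a trained denoising autoencoder r(x); the score is then recovered via the denoising identity ∇ log p(x) ≈ (r(x) − x)/σ², and Langevin updates drift a far-off sample toward high-density regions. The distribution, noise level, and step sizes are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data distribution (hypothetical): 2D Gaussian N(mu, s0^2 I).
# For this distribution the Bayes-optimal denoiser under Gaussian noise of
# scale sigma is known in closed form; it plays the role of a trained
# denoising autoencoder r(x).
mu = np.array([2.0, -1.0])
s0, sigma = 1.0, 0.3

def denoise(x):
    # Posterior mean E[clean | noisy = x] for Gaussian data + Gaussian noise.
    return (s0**2 * x + sigma**2 * mu) / (s0**2 + sigma**2)

def score(x):
    # Denoising identity: grad log p(x) is approximated by (r(x) - x) / sigma^2.
    return (denoise(x) - x) / sigma**2

def langevin(x, steps=2000, eps=0.05):
    # "Cool down" a test sample: small gradient steps toward higher
    # probability, plus injected noise (unadjusted Langevin dynamics).
    for _ in range(steps):
        x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(2)
    return x

x0 = np.array([8.0, 6.0])  # far-off start, e.g. an adversarially perturbed input
xT = langevin(x0)          # drifts toward the data mean mu
print(xT)
```

In the talk's setting the denoiser is a learned autoencoder and x is an image, but the update rule is the same: the sample is nudged back onto the data manifold by a few imperceptibly small steps.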
Date: February 18, 2020 (Tue) 16:00 - 17:00