Abstract
This talk will be held in a hybrid format, both in person at the AIP Open Space of RIKEN AIP (Nihonbashi office) and online via Zoom. AIP Open Space: *available only to AIP researchers.
DATE & TIME
Dec. 27, 2024: 11:00 am – 12:00 pm (JST)
TITLE
Sampling is Reinforcement Learning and Generative Modeling is Imitation Learning
SPEAKER
Sangwoong Yoon (AI Research Fellow at Korea Institute for Advanced Study (KIAS))
ABSTRACT
In this talk, I will discuss how sampling and generative modeling, two well-established tasks in statistical machine learning, can be reinterpreted through the lens of sequential decision-making. Building on this connection, I will present novel algorithms for sampling and generative modeling, inspired by techniques from reinforcement learning. The first part of the talk will introduce Value Gradient Samplers (VGS), which treat sampling as a maximum entropy reinforcement learning problem. Similar to Langevin Monte Carlo, VGS generates samples through multiple drift steps, where the optimal drift direction is determined by a learned value function. In exchange for the cost of training the value function, VGS can generate samples in significantly fewer steps than MCMC methods at test time. In the second part of the talk, I will present Diffusion by Maximum Entropy Inverse Reinforcement Learning (DxMI), a novel approach to training diffusion models with a reduced number of time steps. DxMI reformulates the training of a diffusion model as an instance of inverse reinforcement learning, where the reward is represented by an energy-based model (EBM), resulting in a joint training framework for the diffusion model and the EBM. DxMI can train a diffusion model that generates high-quality samples in just four steps, while also serving as an MCMC-free training method for EBMs. Notably, DxMI was selected for an oral presentation at NeurIPS 2024.
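To give a rough sense of the value-guided sampling idea described above, the minimal sketch below shows a generic sampler that, like Langevin Monte Carlo, produces samples via a few drift-plus-noise steps, with the drift taken from the gradient of a learned value network. This is an illustrative assumption, not the speaker's actual VGS implementation; the network `value_net`, the step sizes, and the step count are hypothetical placeholders.

```python
# Illustrative sketch only: a value-guided sampler in the spirit of the talk,
# not the speaker's algorithm. All hyperparameters are placeholders.
import torch

def value_guided_sample(value_net, n_samples=64, dim=2, n_steps=4,
                        step_size=0.1, noise_scale=0.1):
    """Generate samples by following the gradient of a learned value function."""
    x = torch.randn(n_samples, dim)                # start from a simple prior
    for _ in range(n_steps):                       # far fewer steps than typical MCMC
        x = x.detach().requires_grad_(True)
        # Drift direction: gradient of the learned value function at x.
        drift = torch.autograd.grad(value_net(x).sum(), x)[0]
        # Langevin-style update: drift step plus Gaussian noise.
        x = x + step_size * drift + noise_scale * torch.randn_like(x)
    return x.detach()

# Usage with a toy network standing in for the learned value function.
value_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 1))
samples = value_guided_sample(value_net)
```

The design point this sketch highlights is the trade-off stated in the abstract: the value network must be trained in advance, but at test time only a handful of drift steps are needed rather than a long MCMC chain.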
BIOGRAPHY
Sangwoong Yoon (https://swyoon.github.io/) is an AI Research Fellow at the Korea Institute for Advanced Study (KIAS) and an incoming postdoc in Prof. Ilija Bogunovic’s Lab at University College London. His research centers on understanding probabilistic learning principles and building their practical applications. He is deeply engaged with generative models, especially energy-based and diffusion models, exploring their potential in areas such as out-of-distribution detection, reinforcement learning, robotics, and decision making under uncertainty. He obtained his PhD in Mechanical Engineering from the Robotics Laboratory at Seoul National University, under the supervision of Prof. Frank C. Park. Before his PhD, he earned a master’s degree in Neuroscience from the same university, advised by Prof. Byoung-Tak Zhang.
More Information
Date | December 27, 2024 (Fri) 11:00 - 12:00 (JST)
URL | https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/180777