Approximate Bayesian Inference Team at RIKEN AIP (https://aip.riken.jp/labs/generic_tech/approx_bayes_infer/)
Speaker 1: Emtiyaz Khan (45 mins)
Title: Bayesian principles for Learning-Machines
Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design machines that do the same? In this talk, I will present Bayesian principles to bridge such gaps between humans and machines. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule derived from Bayesian principles. The rule reveals a dual perspective that yields new mechanisms for knowledge transfer in learning machines. In the end, I will summarize the research done by the group in the last 4 years. Overall, my hope is to convince the audience that Bayesian principles are indispensable for an AI that learns as efficiently as we do.
Speaker 2: Dharmesh Tailor (25 mins)
Title: Memorable Experiences of Learning-Machines
Humans and other animals have a natural ability to identify useful past experiences. How can machines do the same? We present “memorable experiences” to identify a machine’s relevant past experiences and understand its current knowledge. The approach is based on a new notion of duality which is an extension of similar ideas used in kernel methods. We demonstrate the application of memorable examples as a tool to understand knowledge learned by several types of machine-learning models.
Speaker 3: Pierre Alquier (35 mins)
Title: Meta-Strategy for Hyperparameter Tuning with Guarantees
Online gradient methods, like the online gradient algorithm (OGA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario, and we propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound. It makes it possible to learn the initialization and the step size in OGA with guarantees. We provide a regret analysis of the strategy in the case of convex losses. It suggests that, when there are parameters θ1,…,θT that solve tasks 1,…,T well, respectively, and that are close enough to each other, our strategy indeed improves on learning each task in isolation. In the context of approximate Bayesian inference, our method can be interpreted as learning the mean and variance of a Gaussian prior. This opens new perspectives on more general methods to learn priors.
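For readers unfamiliar with OGA, a minimal sketch may help show which parameters the abstract refers to: the algorithm takes one gradient step per round, and its behavior is governed by the initialization theta0 and the step size eta, exactly the quantities the proposed meta-strategy learns from past tasks. The quadratic losses below are purely illustrative, not from the talk.

```python
import numpy as np

def oga(grad_fns, theta0, eta):
    """Online Gradient Algorithm: one gradient step per round.

    grad_fns : sequence of gradient functions, one per round t
    theta0   : initialization (a tunable parameter of OGA)
    eta      : step size (the other tunable parameter)
    Returns the list of iterates theta_1, ..., theta_T (plus theta_0).
    """
    theta = np.asarray(theta0, dtype=float)
    iterates = [theta.copy()]
    for grad in grad_fns:
        # Standard online gradient update: theta <- theta - eta * grad_t(theta)
        theta = theta - eta * grad(theta)
        iterates.append(theta.copy())
    return iterates

# Hypothetical example: quadratic losses l_t(theta) = 0.5 * (theta - c_t)^2,
# whose gradients are theta - c_t. Nearby minimizers c_t mimic the "close
# enough to each other" regime discussed in the abstract.
centers = [1.0, 1.2, 0.9]
grads = [lambda th, c=c: th - c for c in centers]
path = oga(grads, theta0=0.0, eta=0.5)
```

A meta-strategy in the abstract's sense would tune `theta0` and `eta` across many such task sequences rather than within a single run.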
All participants are required to agree with the AIP Open Seminar Series Code of Conduct.
Please see the URL below.
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.
Date: March 10, 2021 (Wed) 15:00 - 17:00