Abstract
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series that features young scientists presenting their research and discoveries related to Trustworthy Machine Learning.
A timetable for the TrustML YSS online seminars from May to December 2023 is available.
For more information please see the following site.
TrustML YSS
This seminar series is funded by a RIKEN-AIP subsidy and by JST ACT-X Grant Number JPMJAX21AF, Japan.
【The 77th Seminar】
Date, Time, and Venue:
December 5, 2023: 11:00 am to 12:00 noon (JST)
Venue: Online
Language: English
Title: Post-Episodic Reinforcement Learning Inference
Speaker: Prof. Ruohan Zhan (Hong Kong University of Science and Technology)
Abstract: We consider estimation and inference with data collected from episodic reinforcement learning (RL) algorithms, i.e., adaptive experimentation algorithms that at each period (also known as an episode) interact multiple times in a sequential manner with a single treated unit. Our goal is to evaluate counterfactual adaptive policies after data collection and to estimate structural parameters, such as dynamic treatment effects, which can be used for credit assignment (e.g., what was the effect of the first-period action on the final outcome?). Such parameters of interest can be framed as solutions to moment equations, rather than as minimizers of a population loss function, leading to Z-estimation approaches in the case of static data. However, such estimators fail to be asymptotically normal in the case of adaptive data collection. We propose a re-weighted Z-estimation approach with carefully designed adaptive weights to stabilize the episode-varying estimation variance, which results from the nonstationary policy that typical episodic RL algorithms invoke. We identify proper weighting schemes that restore the consistency and asymptotic normality of the re-weighted Z-estimators for target parameters, which allows for hypothesis testing and for constructing uniform confidence regions for target parameters of interest. Primary applications include dynamic treatment effect estimation and dynamic off-policy evaluation. This is joint work with Vasilis Syrgkanis from Stanford University.
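The re-weighted moment-equation idea can be illustrated with a deliberately simple scalar sketch. Everything here (the one-dimensional moment condition, the known episode-varying noise scale, and inverse-standard-deviation weights) is a hypothetical toy setup, not the talk's actual estimator, whose multi-period structure and weighting schemes are considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adaptive experiment: T episodes whose outcome variance drifts over
# time because the data-collection policy is nonstationary (here it is
# assumed to shrink as the algorithm exploits more).
T = 2000
theta_true = 1.5
sigma = 2.0 / np.sqrt(np.arange(1, T + 1))  # episode-varying noise scale
y = theta_true + sigma * rng.standard_normal(T)

# Plain Z-estimator: solve sum_t (y_t - theta) = 0, i.e., the sample mean.
theta_plain = y.mean()

# Re-weighted Z-estimator (sketch): weight each episode's moment by the
# inverse of its standard deviation so every term contributes comparable
# variance; the weighted moment equation sum_t w_t * (y_t - theta) = 0
# solves in closed form. In practice sigma_t would be estimated from
# information observable before episode t.
w = 1.0 / sigma
theta_reweighted = np.sum(w * y) / np.sum(w)
```

In this stylized case both estimators are consistent, but the variance-stabilized weights make the re-weighted estimator's fluctuations comparable across episodes, which is the property the talk's weighting schemes exploit to recover asymptotic normality under adaptive data collection.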
Biography: Ruohan Zhan is an assistant professor in the Department of Industrial Engineering and Decision Analytics at the Hong Kong University of Science and Technology. She earned her PhD from Stanford University. Specializing in causal inference, statistics, and machine learning, Ruohan develops new methods to solve problems arising in online marketplaces, particularly challenges related to causal effect identification, economic analysis, experimentation, and operations. Her research has been published in top-tier journals including Management Science and the Proceedings of the National Academy of Sciences, as well as renowned machine learning conferences including NeurIPS, ICLR, WWW, and KDD.
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.
More Information
Date: December 5, 2023 (Tue) 11:00 - 12:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/166705