April 18, 2023 18:03

Abstract

Title: Robust Reinforcement Learning using Offline Data (paper link)
Abstract:
The goal of robust reinforcement learning (RL) is to learn a policy that is robust against uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible models lying in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, in optimization over the models, and in unbiased estimation. We propose a systematic approach that overcomes these challenges, resulting in our RFQI algorithm.
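For context, here is a minimal sketch of the standard robust MDP formulation the abstract refers to; the notation (uncertainty set \mathcal{P}, discount factor \gamma, value function V^{\pi}_{P}) is assumed for illustration and is not taken from the paper itself:

% Max-min objective: maximize the worst-case value over an
% uncertainty set \mathcal{P} of transition models (notation assumed).
\max_{\pi} \min_{P \in \mathcal{P}} V^{\pi}_{P},
\qquad
V^{\pi}_{P} = \mathbb{E}_{P}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \,\Big|\, \pi\Big]

% Robust Bellman operator: the inner minimization over models in \mathcal{P}
% is what makes the offline setting harder than its non-robust counterpart.
(TQ)(s, a) = r(s, a) + \gamma \min_{P \in \mathcal{P}} \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\big[\max_{a'} Q(s', a')\big]

In the non-robust case the inner minimization disappears and the operator reduces to the usual Bellman optimality operator, which is what standard fitted Q-iteration approximates from offline data.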

Research Bio for Kishan Panaganti:
I am a final-year PhD student in Electrical and Computer Engineering at Texas A&M University, advised by Prof. Dileep Kalathil. My research focuses on developing reinforcement learning algorithms with theoretical guarantees in the robust regime. My current work studies robust reinforcement learning, which is concerned with learning a policy that is robust against uncertainty in model parameters. This type of uncertainty is common in real-world reinforcement learning applications due to factors such as simulator modeling errors, changes in system dynamics over time, and adversarial disturbances.

More Information

Date: May 10, 2023 (Wed) 10:00 - 11:30
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/155601