Abstract
Date and Time: April 18, 2025, 10:00 – 11:00 (JST)
Venue: Online
Title: Universal Online Learning with Gradient Variations
Speaker: Yu-Hu Yan (Nanjing University)
Abstract: In this talk, I will introduce our recent works that enhance online convex optimization methods with two levels of adaptivity. At the higher level, our methods are agnostic to the curvature of the online functions; at the lower level, they adapt to the difficulty of the specific problem instance, enabling problem-dependent guarantees. Specifically, I will present our two recent works on this topic, published at NeurIPS 2023 and 2024. The two works are closely related yet differ in both their results and the underlying techniques. Our methods not only retain robust worst-case guarantees but also yield problem-dependent small-loss bounds. Furthermore, our results extend to adversarial and stochastic convex optimization as well as two-player zero-sum games, demonstrating both the significance of the research and the effectiveness of the proposed methods.
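For readers unfamiliar with the quantity in the title, here is a minimal sketch of the gradient-variation measure that typically drives such problem-dependent bounds; the notation is illustrative rather than taken verbatim from the papers. For online functions $f_1, \dots, f_T$ over a convex feasible set $\mathcal{X}$, the gradient variation is commonly defined as

$$V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \left\| \nabla f_t(x) - \nabla f_{t-1}(x) \right\|^2,$$

and a universal gradient-variation method seeks regret of order $\mathcal{O}(\sqrt{V_T})$ for convex, $\mathcal{O}(d \log V_T)$ for exp-concave, and $\mathcal{O}(\log V_T)$ for strongly convex functions simultaneously, without knowing the curvature class in advance.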
Bio: Yu-Hu Yan (https://www.lamda.nju.edu.cn/yanyh/) is a Ph.D. student in the LAMDA Group at the School of Artificial Intelligence, Nanjing University, under the supervision of Prof. Zhi-Hua Zhou and Assistant Prof. Peng Zhao. He earned his bachelor’s degree from Nanjing University in 2020. His research interests span online learning, optimization, online games and control, and large language model (LLM) alignment. His work has been published in top conferences and journals, including JMLR, NeurIPS, ICML, and AAAI.
More Information
Date: April 18, 2025 (Fri), 10:00 – 11:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/183679