
Workshop on Online Learning and Optimization 2025

Overview

This workshop aims to present the latest theoretical and applied advances in online learning and optimization, and to promote interdisciplinary collaboration and discussion toward next-generation research. Leading researchers from Japan and abroad have been invited, with the aim of encouraging active exchange and fostering opportunities for future collaborative research.

Program (Tentative)

09:30 – 10:00 Registration / Opening

10:00 – 11:00 Keynote Talk
Nicolò Cesa-Bianchi (University of Milan / Politecnico di Milano, Italy)
Trades, Tariffs, and Regret: Online Learning in Digital Markets

11:00 – 11:20  Coffee Break

11:20 – 11:50 Kohei Hatano (Kyushu University / RIKEN, Japan)
Online Optimization over RIS Networks via Mixed Integer Programming

11:50 – 12:20 Kyoungseok Jang (Chung-Ang University, Korea)
Exploring Exploration Strategies in Reinforcement Learning

12:20 – 14:00 Lunch Break (on your own)

14:00 – 15:00 Keynote Talk
Nishant Mehta (University of Victoria, Canada)
Elicitation Meets Online Learning: Games of Prediction with Advice from Self-Interested Experts

15:00 – 15:15  Coffee Break

15:15 – 15:45 Junya Honda (Kyoto University / RIKEN, Japan)
Recent Advances in Follow-the-Perturbed-Leader for Bandit Problems

15:45 – 16:15 Yuko Kuroki (CENTAI Institute S.p.A., Italy)
Online Minimization of Polarization and Disagreement via Low-Rank Matrix Bandits

16:15 – 16:30  Coffee Break

16:30 – 17:00 Daiki Suehiro (Kyushu University / RIKEN AIP, Japan)
Online Combinatorial Optimization for Sequential Data Sampling in Neural Networks

17:00 – 17:30 Kaito Fujii (NII, Japan)
Bayes Correlated Equilibria and No-Regret Dynamics

17:30 – 19:30 Closing Remarks / Reception
Informal Discussion and Networking

Invited Speakers

  • Nicolò Cesa-Bianchi (University of Milan / Politecnico di Milano, Italy)
    • Title: Trades, Tariffs, and Regret: Online Learning in Digital Markets
    • Abstract: Online learning explores algorithms that acquire knowledge sequentially, through repeated interactions with an unknown environment. The general goal is to understand how fast an agent can learn based on the information received from the environment. Digital markets, with their complex ecosystems of algorithmic agents, offer a rich landscape of sequential decision-making problems, characterized by diverse decision spaces, utility functions, and feedback mechanisms. This talk will demonstrate how tackling challenges within digital markets has not only advanced our understanding of machine learning capabilities but also revealed novel insights into algorithmic efficiency and decision-making under uncertainty.
  • Nishant Mehta (University of Victoria, Canada)
    • Title: Elicitation Meets Online Learning: Games of Prediction with Advice from Self-Interested Experts
    • Abstract: The classical game of prediction with expert advice involves two players: Decision Maker, who forecasts outcomes based on expert advice, and an adversarial Nature that selects both the experts’ forecasts and the outcomes themselves. The experts’ forecasts are taken at face value: benchmarks such as external regret and swap regret are based on the performance of these forecasts. Yet real-world experts may hold beliefs about the outcomes they forecast. If not properly incentivized, self-interested experts can fail to report their beliefs truthfully, compromising benchmarks based on those beliefs. A series of recent works has developed online learning algorithms that succeed in the face of such self-interested experts, drawing on past results in online learning while also yielding new results and new understanding for the field. This talk will begin with a tour of fundamental mechanisms for eliciting experts’ beliefs. It will then cover recent progress in games of prediction with advice from self-interested experts, highlighting many open problems along the way. (A short illustrative sketch of the classical expert-advice setting is given below.)
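As background for the expert-advice setting referenced in the abstract above, the following is a minimal illustrative sketch in Python of the classical exponential-weights (Hedge) forecaster, whose benchmark is the best single expert in hindsight (external regret). It is not taken from the talks; the function name, parameters, and toy data are assumptions made purely for illustration.

    import numpy as np

    def hedge(expert_losses, eta):
        """Exponential-weights (Hedge) forecaster for prediction with expert advice.

        expert_losses: array of shape (T, K); loss in [0, 1] of each of K experts per round.
        eta: learning rate; a standard choice is sqrt(2 * ln(K) / T).
        Returns the forecaster's cumulative expected loss and the best expert's loss.
        """
        T, K = expert_losses.shape
        weights = np.ones(K)
        learner_loss = 0.0
        for t in range(T):
            probs = weights / weights.sum()               # play a distribution over experts
            learner_loss += probs @ expert_losses[t]      # expected loss this round
            weights *= np.exp(-eta * expert_losses[t])    # downweight experts that did poorly
        best_expert_loss = expert_losses.sum(axis=0).min()
        return learner_loss, best_expert_loss             # regret = learner_loss - best_expert_loss

    # Toy usage: 1000 rounds, 2 experts, losses drawn uniformly from [0, 1]
    rng = np.random.default_rng(0)
    losses = rng.uniform(0.0, 1.0, size=(1000, 2))
    total, best = hedge(losses, eta=np.sqrt(2 * np.log(2) / 1000))
    print(f"regret = {total - best:.2f}")

With the learning rate above, the forecaster’s regret grows only as O(sqrt(T log K)), i.e. sublinear in the number of rounds; the talks at this workshop consider richer settings (self-interested experts, bandit feedback, market interactions) built on this basic framework.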

In-Person Registration Form

https://forms.gle/pcUWcdFLsLzf86xo6
Registration Deadline: November 3, 2025

More Information

Date: November 10, 2025 (Mon), 09:30 – 19:30
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/191839
