This is an online seminar. Registration is required.
1st Talk (45mins):
Han Bao (The University of Tokyo)
Learning Theory Bridges Loss Functions
In statistical machine learning, we often train with surrogate losses that differ from our target losses, i.e., the evaluation criteria, because the target losses are computationally intractable. A famous example is the cross-entropy loss: it differs from the misclassification rate, yet it is widely used in classification tasks. To fill this gap, calibration analysis has been developed over the last decade to study surrogate losses for several target tasks, such as binary classification and multi-class classification.
In this talk, the speaker introduces the basics of calibration analysis and recent advances in robust machine learning, with applications to class-imbalanced classification [Bao+ AISTATS2020] and adversarially robust classification [Bao+ COLT2020].
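As a minimal sketch of the surrogate-loss gap described in the abstract (not the speaker's method), the snippet below contrasts the 0-1 loss, i.e., the misclassification rate used for evaluation, with the logistic (cross-entropy) surrogate used for training. The function names and example margins are illustrative assumptions.

```python
import numpy as np

def zero_one_loss(margin):
    # Target loss: 1 if the prediction disagrees with the label
    # (margin y * f(x) <= 0), else 0. Non-convex and non-differentiable.
    return (margin <= 0).astype(float)

def logistic_loss(margin):
    # Convex, differentiable surrogate: log(1 + exp(-margin)).
    return np.log1p(np.exp(-margin))

# Margins y * f(x) for four hypothetical examples: two misclassified, two correct.
margins = np.array([-2.0, -0.5, 0.5, 2.0])

print(zero_one_loss(margins).mean())   # target criterion: misclassification rate 0.5
print(logistic_loss(margins).mean())   # surrogate value minimized during training
```

The two averages generally differ, which is exactly why calibration analysis asks when minimizing the surrogate also (asymptotically) minimizes the target loss.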
2nd Talk (45mins):
Yuko Kuroki (The University of Tokyo)
Combinatorial Online Learning with Limited Feedback
Combinatorial optimization is one of the fundamental research fields in computer science and has a wide range of applications; many classical problems, including the shortest path, maximum weighted matching, and the maximum weight independent set in a matroid, have been extensively studied. Most existing optimization models require exact parameters as inputs, which are, however, unknown or uncertain in many applications such as recommender systems, crowdsourcing, and online advertising. One approach to such uncertain scenarios is online learning, exemplified by the classical multi-armed bandit problem, which lies at the intersection of statistics and machine learning. Despite recent advances in multi-armed bandits, it is not yet fully understood how to handle the combinatorial optimization setting, since we face an exponential blow-up in the combinatorial action space.
In this talk, I will introduce several of our recent works, especially on best-arm identification. Our aim is to develop polynomial-time bandit algorithms that find the best combinatorial solution from limited feedback only, whereas most existing work makes strong assumptions about the feedback model. I will illustrate recent studies at a high level, including our work [Kuroki+, Neural Computation20], [Kuroki+, ICML20], and [Chen, Du, & Kuroki, arXiv20], and discuss current limitations and several open problems for future research in this line of work.
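To make the best-arm identification setting concrete, here is a toy baseline, not the algorithms from the cited papers: pull every arm the same number of times and return the empirically best one. The arm means, noise model, and budget are illustrative assumptions; the adaptive, combinatorial algorithms in the talk are far more sample-efficient.

```python
import random

def best_arm_uniform(means, pulls_per_arm=2000, seed=0):
    # Toy best-arm identification by uniform exploration:
    # sample each arm (Gaussian reward with unit variance around its mean)
    # pulls_per_arm times, then return the index of the best empirical mean.
    rng = random.Random(seed)
    estimates = []
    for mu in means:
        samples = [rng.gauss(mu, 1.0) for _ in range(pulls_per_arm)]
        estimates.append(sum(samples) / pulls_per_arm)
    return max(range(len(means)), key=lambda i: estimates[i])

# Three hypothetical arms; arm 2 has the highest true mean.
print(best_arm_uniform([0.1, 0.5, 0.9]))
```

In the combinatorial setting an "arm" is a feasible solution (e.g., a path or a matching), so enumerating arms like this is exponentially expensive, which is the difficulty the talk addresses.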
Date: September 10, 2020 (Thu) 10:00 - 11:30