Abstract
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series featuring young scientists giving talks on their discoveries relating to Trustworthy Machine Learning.
Below is the timetable for the TrustML YSS online seminars from September to October 2022.
For more information, please see the following site.
TrustML YSS
This network is funded by RIKEN-AIP’s subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.
【The 31st Seminar】
Date and Time: September 6th, 9:00 am – 11:00 am (JST)
Venue: Zoom webinar
Language: English
9:00am – 10:00am
Speaker: Sheng Liu (New York University)
Title: Understanding Probability Estimation and Noisy Label Learning: From the Early Learning Perspective
Short Abstract
Recently, over-parameterized deep networks, or large models with increasingly many more parameters than training samples, have come to dominate performance in modern machine learning. However, it is well known that over-parameterized networks tend to overfit and fail to generalize when trained on finite data. In probability estimation, a network is trained on observed outcomes of an event to estimate that event's probabilities; the network can end up memorizing the observed outcomes completely, so that the estimated probabilities collapse to 0 or 1. Similarly, when learning with noisy labels, the network memorizes the wrong labels, resulting in suboptimal decision rules. Yet before overfitting, networks learn useful information, a phenomenon known as early learning. Estimating probabilities reliably and remaining robust to noisy labels during training are of crucial importance for providing trustworthy predictions in many real-world applications with inherent uncertainty and poor label quality. In this talk, we will discuss the early learning phenomenon in probability estimation and noisy label learning, and how it can be utilized to prevent overfitting.
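To make the early-learning phenomenon concrete, here is a minimal, self-contained sketch (illustrative only, not code from the talk) in which an over-parameterized network is trained on labels with 30% uniform flips. All settings (data, network width, learning rate) are assumptions for demonstration; the point is that agreement with the clean labels typically peaks early, then degrades as the network memorizes the noisy labels.

# Minimal sketch of early learning under label noise (illustrative assumptions).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary classification data with 30% of labels flipped.
n = 500
X = torch.randn(n, 2)
y_true = (X[:, 0] + X[:, 1] > 0).long()
flip = torch.rand(n) < 0.3
y_noisy = torch.where(flip, 1 - y_true, y_true)

# Over-parameterized MLP relative to the 500 training samples.
model = nn.Sequential(nn.Linear(2, 512), nn.ReLU(), nn.Linear(512, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2000):
    opt.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y_noisy)  # trained against the noisy labels
    loss.backward()
    opt.step()
    if epoch % 200 == 0:
        pred = logits.argmax(dim=1)
        acc_noisy = (pred == y_noisy).float().mean()  # fit to noisy labels
        acc_true = (pred == y_true).float().mean()    # agreement with clean labels
        print(f"epoch {epoch}: noisy-label acc {acc_noisy:.2f}, true-label acc {acc_true:.2f}")

# Typical behavior: true-label accuracy rises first (early learning), then
# falls as noisy-label accuracy approaches 1.0 (memorization / overfitting).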
10:00am – 11:00am
Speaker: Sharon Y. Li (University of Wisconsin Madison)
Title: Challenges and Opportunities in Out-of-distribution Detection
Short Abstract
The real world is open and full of unknowns, presenting significant challenges for machine learning (ML) systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model sees a test-time input that differs from its training data and therefore should not be predicted by the model. As ML is deployed in more safety-critical domains, the ability to handle out-of-distribution data is central to building open-world learning systems. In this talk, I will discuss challenges, research progress, and future opportunities in detecting OOD samples for safe and reliable predictions in an open world.
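As background for the talk, the sketch below shows two widely used post-hoc OOD scores computed from a classifier's logits: the maximum softmax probability (MSP) baseline and the energy score. This is a generic illustration, not necessarily the specific methods covered in the talk; the logits and threshold here are placeholders.

# Two standard post-hoc OOD scores from classifier logits (illustrative).
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability: higher means more in-distribution.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Negative free energy over logits: higher means more in-distribution.
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Usage: flag an input as OOD if its score falls below a threshold chosen
# on held-out in-distribution data (e.g., at 95% true positive rate).
logits = torch.randn(4, 10)  # placeholder logits from any trained classifier
threshold = 0.5              # placeholder threshold
is_ood = msp_score(logits) < threshold
print(is_ood)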
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en
RIKEN AIP expects adherence to this code throughout the event. We ask for the cooperation of all participants to help ensure a safe environment for everybody.
More Information
Date: September 6, 2022 (Tue) 09:00 – 11:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/142566