Speaker: Prof. Tongliang Liu (University of Sydney, Australia)
Estimating the Transition Matrix for Label-Noise Learning
Label noise is ubiquitous in the era of big data. Without properly modelling the noise, deep learning algorithms can easily overfit it and thus fail to generalize. The label noise transition matrix, which encodes the probabilities that clean labels flip into noisy labels, plays a central role in building statistically consistent classifiers. In this talk, we will discuss how to estimate the transition matrix. Specifically, an anchor point assumption is introduced to build an unbiased estimator. However, the assumption may not hold in practice. When there are no anchor points, the transition matrix can be poorly estimated, and the consistent classifiers built on it may degenerate significantly. We then discuss how to remedy this problem. Finally, we will envision potential directions for estimating the transition matrix, e.g., in the instance-dependent setting.
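As a rough illustration of the anchor-point idea (a minimal sketch, not the speaker's implementation): if x is an anchor point for class i, i.e. P(Y = i | x) = 1, then the i-th row of the transition matrix T equals the noisy-label posterior P(Ỹ | x). In practice, anchor points are often approximated by the examples with the highest estimated noisy posterior for each class:

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    """Estimate the label-noise transition matrix from anchor points.

    noisy_posteriors: (n_samples, n_classes) array of estimated
    noisy-label posteriors P(noisy label | x), e.g. from a model
    trained on the noisy data.
    """
    n_classes = noisy_posteriors.shape[1]
    T = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        # Approximate the anchor point for class i as the example
        # with the largest estimated posterior for noisy class i.
        anchor = np.argmax(noisy_posteriors[:, i])
        # Its noisy posterior is taken as the i-th row of T.
        T[i] = noisy_posteriors[anchor]
    return T
```

When true anchor points exist and the posteriors are well estimated, each row of the returned matrix is an unbiased estimate of the corresponding row of T; the abstract's point is precisely that this guarantee breaks down when no anchor points are present.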
Speaker: Prof. Bo Han (Hong Kong Baptist University, Hong Kong)
Trustworthy Representation Learning: A Synergistic Tale of Labels, Examples and Beyond
Trustworthy representation learning (TRL) is an emerging and critical topic in modern machine learning, since most real-world data are imperfect and corrupted, as in online transactions, healthcare, cyber-security, and robotics. Intuitively, a trustworthy learning system should behave more like a human, learning useful knowledge even from imperfect data. In this talk, I will therefore introduce TRL from three human-inspired views: reliability, robustness, and imitation. Specifically, reliability considers uncertain cases, namely deep learning with noisy labels; robustness addresses adversarial conditions, namely training with adversarial examples; and imitation focuses on non-expert scenarios, namely imitation learning with diverse demonstrations.
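The "training with adversarial examples" theme can be sketched with a minimal FGSM-style adversarial training loop. This is a generic NumPy illustration on logistic regression, not the speaker's method, and the function names are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_adversarial_training(X, y, eps=0.1, lr=0.1, epochs=100):
    """Train a logistic-regression classifier on FGSM-perturbed inputs.

    Each epoch crafts adversarial examples by moving every input a
    step of size eps along the sign of the input gradient of the loss,
    then takes a gradient step on that adversarial batch.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Input gradient of the cross-entropy loss: (p - y) * w.
        grad_x = (p - y)[:, None] * w[None, :]
        # FGSM perturbation: fixed-size step along the gradient sign.
        X_adv = X + eps * np.sign(grad_x)
        # Parameter update computed on the adversarial examples.
        p_adv = sigmoid(X_adv @ w + b)
        err = p_adv - y
        w -= lr * X_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b
```

The design choice here is the standard min-max view of adversarial training: the inner step (FGSM) approximately maximizes the loss over a small perturbation of each input, and the outer step minimizes the loss under that worst case.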
|Date||August 13, 2020 (Thu) 13:30 - 15:30|