April 21, 2023 13:24
TrustML Young Scientist Seminar #68 20230420 Talks by Yue Gao / Ming Yin

Description

The 68th Seminar
Date and Time: April 20th, 10:00 am – 12:00 pm (JST)

10:00 am – 11:00 am (JST)
Speaker 1: Yue Gao (University of Wisconsin–Madison)
Title 1: The Vulnerabilities of Preprocessing in Adversarial Machine Learning

11:00 am – 12:00 pm (JST)
Speaker 2: Ming Yin (UC Santa Barbara)
Title 2: Towards Sample-Optimal Offline Reinforcement Learning

Venue: Zoom webinar
Language: English

Speaker 1: Yue Gao (University of Wisconsin–Madison)
Title 1: The Vulnerabilities of Preprocessing in Adversarial Machine Learning
Short Abstract 1:
Machine learning (ML) systems depend on preprocessing steps to manage diverse inputs in real-world scenarios. However, the vulnerabilities of standard and even defensive preprocessing steps are often overlooked in adversarial ML research. In this talk, we will first discuss the interplay between the vulnerabilities of standard image scaling algorithms and downstream models in a black-box setting, emphasizing how this interaction compromises robust defenses designed for individual components. After that, we will explore the limitations of preprocessing defenses aimed at providing white-box adversarial robustness. Despite increasing efforts to enhance defenses through more complex transformations, these defenses may be fundamentally flawed, necessitating a renewed understanding of their effectiveness. By addressing these vulnerabilities, we aim to offer guidance and insights for future research on preprocessing steps in real-world ML systems.

Bio 1:
Yue Gao is a Ph.D. candidate in the Computer Science Department at the University of Wisconsin–Madison, advised by Prof. Kassem Fawaz. His research interests lie broadly in machine learning security and system security. His current work focuses on the adversarial robustness of real-world machine learning systems.

Speaker 2: Ming Yin (UC Santa Barbara)
Title 2: Towards Sample-Optimal Offline Reinforcement Learning
Short Abstract 2:
Reinforcement learning has become the go-to solution for many sequential decision-making problems. In many high-stakes real-world problems, however, online exploration is prohibited, and offline reinforcement learning becomes the central framework when interaction with the environment is not feasible. In such cases, data is often scarce, and sample complexity is a major concern. In this talk, I will present the primary challenges of offline RL and highlight my recent efforts to address them. I will show how various techniques can enhance sample efficiency and how they can adapt to the complexity of individual problems. Additionally, we will examine the relationship between these methodologies and practical applications and outline potential avenues for future work.
References:
https://arxiv.org/pdf/2110.08695.pdf
https://arxiv.org/pdf/2203.05804.pdf

Bio 2:
Ming Yin recently received his Ph.D. from the Department of Statistics and Applied Probability at UCSB, and he is currently a computer science Ph.D. candidate at UCSB. His research spans a wide range of machine learning topics, including reinforcement learning, robust ML, and Bayesian statistics. Ming serves on the program committees of several conferences including NeurIPS, ICML, AISTATS, UAI, and AAAI, and he is an Area Chair for NeurIPS 2023. He has also spent time at Princeton University and Amazon AWS AI.