Summary
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series featuring young scientists giving talks on their discoveries related to Trustworthy Machine Learning.
This is the timetable for the TrustML YSS online seminars from March to April 2023.
For more information, please see the following site.
TrustML YSS
This network is funded by RIKEN-AIP’s subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.
【The 68th Seminar】
Date and Time: April 20th, 10:00 am – 12:00 pm (JST)
10:00 am – 11:00 am (JST)
Speaker 1: Yue Gao (University of Wisconsin at Madison)
Title 1: The Vulnerabilities of Preprocessing in Adversarial Machine Learning
11:00 am – 12:00 pm (JST)
Speaker 2: Ming Yin (UC Santa Barbara)
Title 2: Towards Sample-Optimal Offline Reinforcement Learning
Venue: Zoom webinar
Language: English
Speaker 1: Yue Gao (University of Wisconsin at Madison)
Title 1: The Vulnerabilities of Preprocessing in Adversarial Machine Learning
Short Abstract 1
Machine learning (ML) systems depend on preprocessing steps to manage diverse inputs in real-world scenarios. However, the vulnerabilities of standard and even defensive preprocessing steps are often overlooked in adversarial ML research. In this talk, we will first discuss the interplay between the vulnerabilities of standard image scaling algorithms and downstream models in a black-box setting, emphasizing how this interaction compromises robust defenses designed for individual components. After that, we will explore the limitations of preprocessing defenses aimed at providing white-box adversarial robustness. Despite increasing efforts to enhance defenses through more complex transformations, these defenses may be fundamentally flawed, necessitating a renewed understanding of their effectiveness. By addressing these vulnerabilities, we aim to offer guidance and insights for future research on preprocessing steps in real-world ML systems.
Bio 1:
Yue Gao is a Ph.D. candidate in the Computer Science Department at the University of Wisconsin – Madison, advised by Prof. Kassem Fawaz. His research interests broadly lie in machine learning security and system security. His current work focuses on the adversarial robustness of real-world machine learning systems.
Speaker 2: Ming Yin (UC Santa Barbara)
Title 2: Towards Sample-Optimal Offline Reinforcement Learning
Short Abstract 2
Reinforcement learning has become the go-to solution for many sequential decision-making problems. In many high-stakes real-world problems, however, online exploration is prohibited, and offline reinforcement learning is the central framework for applications where online interaction is not feasible. In such cases, data is often scarce, and sample complexity is a major concern. In this talk, I will present the primary challenges of offline RL and highlight my recent efforts to address them. I will show how various techniques can enhance sample efficiency and how they can adapt to the complexity of individual problems. Additionally, we will examine the relationship between these methodologies and practical applications and outline potential avenues for future work. References: https://arxiv.org/pdf/2110.08695.pdf, https://arxiv.org/pdf/2203.05804.pdf
Bio 2:
Ming Yin recently received his Ph.D. from the Department of Statistics and Applied Probability at UCSB, and he is currently a computer science Ph.D. candidate at UCSB. His research spans a wide range of machine learning topics, including reinforcement learning, robust ML, and Bayesian statistics. Ming serves on the program committees of several conferences including NeurIPS, ICML, AISTATS, UAI, and AAAI, and he is an Area Chair for NeurIPS 2023. He has also spent time at Princeton University and Amazon AWS AI.
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.
Details
Date and Time | 2023/04/20 (Thu) 10:00 – 12:00 |
URL | https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/155540 |