March 31, 2023 14:51
TrustML Young Scientist Seminar #64 20230329 Talk by Xuanli He (University College London)

Description

The 64th Seminar
Date and Time: March 29th, 6:00 pm – 7:00 pm (JST)

Speaker: Xuanli He (University College London)
Title: Imitation Attacks and Defenses
Short Abstract:
Thanks to breakthroughs in deep learning, commercial APIs have gained wide adoption. However, these APIs face a serious security concern: malicious users can bypass paid subscriptions via an imitation attack. This talk will first introduce the imitation attack. I will then show that the threat posed by imitation attacks has been underestimated: beyond the violation of intellectual property (IP), imitation models can facilitate adversarial attacks on black-box APIs and cause privacy leakage. Finally, I will present two novel watermarking methods for protecting the IP of text generation APIs against imitation attacks, a problem that has been underexplored in the literature.

Bio:
Xuanli He is a Research Fellow at University College London. He received his Ph.D. from Monash University (Australia). His recent research lies at the intersection of deep learning and natural language processing, with an emphasis on the robustness and security of NLP models, including privacy leakage and protection, backdoor attacks and defenses, and imitation attacks and defenses. He has published more than 20 papers in top-tier machine learning and natural language processing conferences (e.g., NeurIPS).