June 25, 2024 18:19

Abstract

The rapid advancement of deep learning has revolutionized numerous domains, from image recognition to natural language processing. However, the widespread deployment of deep learning systems has also raised critical concerns about their trustworthiness. This talk examines the multifaceted challenges and solutions related to enhancing the security[1], robustness[2], and privacy[3] of deep learning models. It covers studies presented in the following works:
[1]: "Ring-A-Bell! How Reliable Are Concept Removal Methods for Diffusion Models?", ICLR 2024
[2]: "Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations", NeurIPS 2021
[3]: "Exploring the Benefits of Visual Prompting in Differential Privacy", ICCV 2023

More Information

Date: June 27, 2024 (Thu) 15:00–16:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/175037

last updated on December 9, 2024 13:36