Abstract
The rapid advancement of deep learning has revolutionized numerous domains, from image recognition to natural language processing. However, the widespread deployment of deep learning systems has also highlighted critical concerns regarding their trustworthiness. This talk delves into the multifaceted challenges and solutions related to enhancing the security [1], robustness [2], and privacy [3] of deep learning models. The talk draws on studies presented in the following works.
[1] "Ring-A-Bell! How Reliable Are Concept Removal Methods for Diffusion Models?", ICLR 2024
[2] "Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations", NeurIPS 2021
[3] "Exploring the Benefits of Visual Prompting in Differential Privacy", ICCV 2023
Details
Date | 2024/06/27 (Thu) 15:00 - 16:00 |
URL | https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/175037 |