Abstract
Date and Time: November 10th, 9:00 a.m. – 10:30 a.m. (JST)
Language: English
Title: Towards Interpretable Deep Learning
Speaker: Prof. Lily Weng, UCSD, https://lilywenglab.github.io/
Abstract: Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields in recent decades. Despite their empirical success, however, they are notoriously black-box models whose decision processes are difficult to understand. This lack of interpretability is a critical issue that may seriously hinder the deployment of DNNs in high-stakes applications, which require interpretability to trust predictions, understand potential failures, mitigate harms, and eliminate biases in the model.
In this talk, I’ll share some exciting results from my lab on advancing explainable AI and interpretable machine learning. Specifically, I will show how we can bring interpretability into deep learning by leveraging recent advances in multi-modal models. I’ll present two recent works from our group on demystifying neural networks and on interpretability-guided neural network design, which are important first steps toward enabling Trustworthy AI and Trustworthy Machine Learning.
Bio: Lily Weng is an Assistant Professor in the Halıcıoğlu Data Science Institute at UC San Diego. She received her PhD in Electrical Engineering and Computer Sciences (EECS) from MIT in August 2020, and her Bachelor’s and Master’s degrees, both in Electrical Engineering, from National Taiwan University. Prior to UCSD, she spent a year at the MIT-IBM Watson AI Lab and completed several research internships at Google DeepMind, IBM Research, and Mitsubishi Electric Research Laboratories. Her research interests are in machine learning and deep learning, with a primary focus on trustworthy AI. Her vision is to make next-generation AI systems and deep learning algorithms more robust, reliable, explainable, trustworthy, and safe.
More Information
Date: November 10, 2023 (Fri) 09:00 – 10:30
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/164810