September 27, 2024 16:36

Abstract

This talk will be held in a hybrid format, both in person at the AIP Open Space of RIKEN AIP (Nihombashi office) and online via Zoom. (The AIP Open Space is available to AIP researchers only.)

DATE & TIME
Oct 29, 2024: 11:00 a.m. – 12:00 p.m. (JST)

TITLE
Analyzing the Inner Workings of Foundation Models: Towards Insights, Transparency, and AI Safety

SPEAKER
Dr. Oliver Eberle (TU Berlin)

ABSTRACT
Foundation models (FMs) represent a significant advancement in machine learning by separating the computationally intensive task of data representation from the numerous possible downstream applications. While this has led to a rapid uptake of FMs in the sciences, industry, and society at large, their inner workings remain only partially understood, posing a risk to their widespread adoption. To ensure the broad applicability of these models, it is crucial that they maintain a certain level of transparency and trustworthiness. In this talk, I will present recent contributions to the field of Explainable AI within the context of FMs. A layer-wise decomposition of these complex models enables a detailed analysis of their prediction strategies, providing a starting point for enhancing the understanding of FMs and for ensuring compliance with critical aspects of AI safety.
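To give a sense of what a layer-wise decomposition looks like in practice, the sketch below implements an epsilon-rule relevance propagation (in the spirit of Layer-wise Relevance Propagation) for a toy ReLU network. This is a minimal illustrative example, not the speaker's actual method or code; the network, weights, and shapes are hypothetical.

```python
import numpy as np

def forward(x, layers):
    """Run a toy ReLU network and keep each layer's activations."""
    activations = [x]
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)  # ReLU layer
        activations.append(x)
    return activations

def lrp_epsilon(activations, layers, eps=1e-6):
    """Redistribute the output score backwards, layer by layer,
    so that each input feature receives a share of the relevance."""
    R = activations[-1].copy()                   # start from the output relevance
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = W @ a + b + eps                      # stabilized pre-activations
        s = R / z                                # relevance per unit of activation
        R = a * (W.T @ s)                        # propagate to the layer below
    return R                                     # relevance on the input features

# Illustrative weights only (assumption: a random 3-4-2 network).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
x = rng.normal(size=3)
acts = forward(x, layers)
print(lrp_epsilon(acts, layers))                 # per-feature relevance scores
```

The epsilon term stabilizes the division when pre-activations are near zero; up to that stabilization, the relevance scores at each layer approximately redistribute the output score being explained.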

BIOGRAPHY
Oliver Eberle is a postdoctoral researcher in the Machine Learning Group at Technische Universität Berlin and a visiting fellow at UCLA’s Institute for Pure and Applied Mathematics in Fall 2024. He received a joint M.Sc. in Computational Neuroscience from TU Berlin and HU Berlin in 2017 and a Ph.D. in Machine Learning from TU Berlin in 2022.
His research focuses on Explainable Artificial Intelligence (XAI) and deep learning, with applications across scientific domains including the humanities, cognitive science, and biomedicine. With colleagues, he has developed XAI methods for complex model structures, focusing on transformer architectures and on higher-order explanations for similarity models and graph neural networks.

More Information

Date: October 29, 2024 (Tue), 11:00 – 12:00 (JST)
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/177274

last updated on November 13, 2024 10:07