Oliver Eberle - Interpretability for Deep Learning: Theory, Applications and Scientific Insights

Presenter: Oliver Eberle, Technische Universität Berlin
October 17, 2024
Abstract
Recorded 17 October 2024. Oliver Eberle of Technische Universität Berlin presents "Interpretability for Deep Learning: Theory, Applications and Scientific Insights" at IPAM's Theory and Practice of Deep Learning Workshop. Abstract: Deep learning models represent a significant breakthrough in ML by enabling complex data representations and high task performance across domains. However, their complex decision strategies remain opaque, necessitating approaches that improve our understanding of these models. The field of Explainable AI develops methods to ensure transparency, safety, and trustworthiness in their deployment, while also facilitating the discovery of novel scientific insights. In this talk, I will focus on methods for gaining a deeper understanding of the inner workings of DL models, revealing higher-order interactions and undesired model behavior. Finally, these tools can be applied to scientific insight discovery, where I will present our work on the early modern history of science, human alignment with language models, and histopathology. Learn more online at: https://www.ipam.ucla.edu/programs/workshops/workshop-ii-theory-and-practice-of-deep-learning/?tab=overview