March 15, 2023 16:28
TrustML Young Scientist Seminar #59: Talks by Zahra Atashgahi and Adam Kortylewski

Description

The 59th Seminar
Date and Time: March 10, 2023, 5:00 pm – 7:00 pm (JST)
**5:00 pm – 6:00 pm (JST)**
Speaker 1: Zahra Atashgahi (University of Twente)
Title 1: Learning Efficiently from Data using Sparse Neural Networks

**6:00 pm – 7:00 pm (JST)**
Speaker 2: Adam Kortylewski (Max Planck Institute for Informatics)
Title 2: Robust Vision through Analysis-by-Synthesis with 3D-aware Networks
Venue: Zoom webinar
Language: English

Speaker 1: Zahra Atashgahi (University of Twente)
Title 1: Learning Efficiently from Data using Sparse Neural Networks
Short Abstract 1:
Sparse neural networks (SNNs) address the high computational complexity of deep neural networks by using sparse connectivity among their layers while aiming to match the predictive performance of their dense counterparts. Pruning dense neural networks is among the most widely used methods to obtain SNNs. Because the training cost of such methods can be unaffordable for a low-resource device, training SNNs sparsely from scratch, known as "sparse training" in the literature, has recently gained attention. In this talk, I will provide a brief introduction to SNNs and recent advances in the field of sparse training. Then, I will present how SNNs can be utilized to perform different tasks efficiently, with a focus on feature selection.
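The prune-and-regrow cycle at the heart of many sparse-training methods (for example, Sparse Evolutionary Training) can be sketched in a few lines. The layer sizes, density, and drop fraction `zeta` below are illustrative assumptions, not values from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single hypothetical fully connected layer (sizes are illustrative only).
n_in, n_out = 20, 10
density = 0.2  # fraction of connections that are active

# Initialize a sparse connectivity mask and masked weights.
mask = rng.random((n_in, n_out)) < density
weights = rng.normal(0.0, 0.1, size=(n_in, n_out)) * mask

def prune_and_regrow(weights, mask, zeta=0.3, rng=rng):
    """One SET-style topology update: drop the smallest-magnitude
    active weights and regrow the same number at random inactive positions."""
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    # Indices of the n_drop active weights with smallest magnitude.
    magnitudes = np.abs(weights.ravel()[active])
    drop = active[np.argsort(magnitudes)[:n_drop]]
    new_mask = mask.copy().ravel()
    new_mask[drop] = False
    # Regrow at randomly chosen currently inactive positions.
    inactive = np.flatnonzero(~new_mask)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    new_mask[grow] = True
    new_mask = new_mask.reshape(mask.shape)
    # Regrown connections start from zero; surviving weights are kept.
    return weights * new_mask, new_mask
```

Repeating this update between training epochs lets the sparse topology adapt to the data while the number of active connections, and hence the compute and memory cost, stays fixed throughout training.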
Bio 1:
Zahra is a PhD candidate at the University of Twente, The Netherlands. She completed her bachelor's and master's degrees in computer science at Amirkabir University of Technology, Iran, in 2017 and 2019, respectively. Currently, Zahra is a visiting PhD student at the University of Cambridge. Her PhD research focuses on deep learning and, particularly, sparse neural networks. She seeks to design environmentally friendly AI systems through the development of computationally efficient deep learning models. During her PhD, she has published in top-tier machine learning conferences and journals, including NeurIPS, ICLR, MLJ, and TMLR.

Speaker 2: Adam Kortylewski (Max Planck Institute for Informatics)
Title 2: Robust Vision through Analysis-by-Synthesis with 3D-aware Networks
Short Abstract 2:
Deep learning sparked a tremendous increase in the performance of computer vision systems over the past decade. However, Deep Neural Networks (DNNs) are still far from reaching human-level performance at visual recognition tasks. The most important limitation of DNNs is that they fail to give reliable predictions in unseen or adverse viewing conditions that would not fool a human observer, such as when objects are partially occluded, seen in an unusual pose or context, or in bad weather. This lack of robustness in DNNs is generally acknowledged, but the problem largely remains unsolved. In this talk, I will give an overview of the principles underlying my work on building robust deep neural networks for computer vision. My working hypothesis is that vision systems need a causal 3D understanding of images, obtained by following an analysis-by-synthesis approach. I will discuss a new type of neural network architecture that implements such an approach, and I will show that these generative neural network models are vastly superior to traditional models in terms of robustness and learning efficiency, and because they can solve many vision tasks at once. Finally, I will give a brief outlook on my current projects and future research directions.
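As a toy illustration of the analysis-by-synthesis idea (not the speaker's actual models), one can recover a scene parameter by searching for the rendering that best explains an observation. The 1-D Gaussian-blob "renderer" below is a hypothetical stand-in for a real 3D-aware generative model:

```python
import numpy as np

def render(mu, width=2.0, size=64):
    """Toy renderer: synthesizes a 1-D 'image' of a blob at position mu."""
    x = np.arange(size)
    return np.exp(-((x - mu) ** 2) / (2 * width ** 2))

# Observed image: a blob at an unknown position, corrupted by noise.
rng = np.random.default_rng(1)
true_mu = 40.0
observed = render(true_mu) + 0.05 * rng.standard_normal(64)

# Analysis-by-synthesis: find the parameter whose rendering best
# explains the observation (here, brute-force search over candidates).
candidates = np.linspace(0, 63, 256)
errors = [np.sum((render(mu) - observed) ** 2) for mu in candidates]
best_mu = candidates[int(np.argmin(errors))]
```

Because the inference loop compares full renderings against the observation rather than matching local patterns, explanations that account for occlusion or unusual poses can in principle be scored explicitly, which is one intuition behind the robustness of generative approaches.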
Bio 2:
Adam Kortylewski is a research group leader at the University of Freiburg and the Max Planck Institute for Informatics, where he leads the Generative Computer Vision Lab. Before that, he was a postdoc at Johns Hopkins University with Alan Yuille for three years. He obtained his PhD from the University of Basel with Thomas Vetter. His research studies computer vision from a generative perspective. His core working hypothesis is that computer vision systems need to develop a causal 3D understanding of images, following an analysis-by-synthesis approach, in order to become a truly foundational component of AI systems.