December 8, 2022 15:51
TrustML Young Scientist Seminar #43 (December 7, 2022)

Description

The 43rd Seminar
Date and Time: Dec. 7th, 2022, 4:00 pm – 6:00 pm (JST)
Venue: Zoom webinar
Language: English

4:00 pm – 5:00 pm
Speaker: Shiqi Yang (Autonomous University of Barcelona)
Title: Model Adaptation under Domain and Category Shift
Abstract:
In recent years, a large number of works have emerged in the domain adaptation community, aiming to address the domain shift between training and test data. However, plenty of challenging problems remain open for further investigation. For example, 1) requiring source data during adaptation is infeasible in some privacy-sensitive applications (e.g., surveillance or medical applications), and 2) unseen categories may appear in the test data in real-world scenarios. In this talk, I will first introduce a method that addresses domain adaptation without source data from the perspective of unsupervised clustering, and show how several domain adaptation methods can be related through the view of discriminability and diversity. Then, we propose to deploy an attention-based regularization to avoid forgetting on the source domain after model adaptation. Finally, I will present an elegantly simple method to address domain and category shift simultaneously during model adaptation.
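
As a rough illustration of the discriminability-and-diversity view mentioned in the abstract (a generic sketch, not the speaker's method; model, batch, and optimizer are assumed placeholders), a common source-free adaptation objective encourages confident predictions per target sample while keeping class usage diverse across the batch:

# Generic source-free adaptation step (illustrative only, not the speaker's method).
# Assumes a source-pretrained classifier `model` and an unlabeled target `batch`.
import torch
import torch.nn.functional as F

def adaptation_step(model, batch, optimizer):
    logits = model(batch)                   # [B, C] predictions on unlabeled target data
    probs = F.softmax(logits, dim=1)

    # Discriminability: each target sample should receive a confident (low-entropy) prediction.
    ent_per_sample = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

    # Diversity: the batch-averaged prediction should not collapse onto a single class.
    mean_probs = probs.mean(dim=0)
    ent_marginal = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()

    # Minimize per-sample entropy, maximize marginal entropy.
    loss = ent_per_sample - ent_marginal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()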

5:00 pm – 6:00 pm
Speaker: Oğuzhan Fatih Kar (EPFL)
Title: 3D Common Corruptions and Data Augmentation
Abstract:
Computer vision models deployed in the real world will encounter naturally occurring distribution shifts from their training data. These shifts range from lower-level distortions, such as motion blur and illumination changes, to semantic ones, like object occlusion. Each of them represents a possible failure mode of a model and has been frequently shown to result in profoundly unreliable predictions. Thus, understanding model failures under these shifts and developing better robustness mechanisms are critical before deploying these models in the real world. Our work presents a set of image transformations that can be used both as corruptions to evaluate the robustness of models and as data augmentation mechanisms for training neural networks. The primary distinction of the proposed transformations is that, unlike existing approaches such as Common Corruptions, the geometry and semantics of the scene are incorporated into the transformations, leading to corruptions that are more likely to occur in the real world. In this talk, I will discuss several properties of these transformations: they are ‘efficient’ (can be computed on the fly), ‘extendable’ (can be applied to most image datasets), expose vulnerabilities of existing models, and can effectively make models more robust when employed as ‘3D data augmentation’ mechanisms.
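
As a rough illustration of corruption-style transformations used as training-time augmentation (a generic sketch, not the 3D Common Corruptions implementation; add_gaussian_noise is a hypothetical stand-in, whereas the real 3D-aware corruptions also use scene information such as depth):

# Illustrative only: wrapping corruption functions as random data augmentation.
import random
import torch

class CorruptionAugment:
    def __init__(self, corruption_fns, p=0.5):
        self.corruption_fns = corruption_fns  # list of callables: image tensor -> image tensor
        self.p = p                            # probability of applying a corruption

    def __call__(self, image):
        if random.random() < self.p:
            fn = random.choice(self.corruption_fns)
            image = fn(image)
        return image

# Hypothetical stand-in corruption on a float image tensor in [0, 1].
def add_gaussian_noise(img, sigma=0.05):
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)

augment = CorruptionAugment([add_gaussian_noise], p=0.5)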