TrustML Young Scientist Seminar #52 20230201  Talk by Alexander Sax (UC Berkeley)

Description

The 52nd Seminar
Date and Time: February 1st, 9:00 am – 10:00 am (JST)
Venue: Zoom webinar
Language: English

Speaker: Alexander (Sasha) Sax (UC Berkeley)
Title: Robust Learning via Cross-Task Consistency

Short Abstract:
Most neural networks (even those trained end-to-end) are later integrated into a larger system that makes multiple predictions. For example, self-driving cars use several networks to predict about 40 different quantities: lane locations/topology, pedestrian locations + pose, vehicle locations + intention, ground traversability, and others. The training objective usually measures accuracy for each quantity independently, without ensuring that the global predictions are coherent or usable for the final downstream use case. Cross-Task Consistency (XTC) is a technique for learning global consistency constraints that can be used as regularization losses during training. XTC can be applied even when the constraints are only approximate, ill-posed, or unknown, and when constraints are known analytically (e.g. normals-from-depth), XTC works as well or better in practice. Finally, the talk will discuss experiments showing that the degree of constraint violation can be used as a form of anomaly detection.
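To make the idea concrete, below is a minimal sketch (in PyTorch) of a cross-task consistency regularizer built around the analytic normals-from-depth mapping mentioned in the abstract. All function names, shapes, and loss weights here are hypothetical illustrations, not the speaker's actual implementation; XTC itself learns such constraints rather than requiring them in closed form.

import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    """Approximate surface normals from a depth map via finite differences.

    depth: (B, 1, H, W) tensor. Returns (B, 3, H, W) unit normals.
    """
    dzdx = depth[:, :, :, 1:] - depth[:, :, :, :-1]  # horizontal gradient
    dzdy = depth[:, :, 1:, :] - depth[:, :, :-1, :]  # vertical gradient
    # Pad back to the original spatial size.
    dzdx = F.pad(dzdx, (0, 1, 0, 0))
    dzdy = F.pad(dzdy, (0, 0, 0, 1))
    ones = torch.ones_like(depth)
    normals = torch.cat([-dzdx, -dzdy, ones], dim=1)
    return F.normalize(normals, dim=1)

def xtc_loss(pred_depth, pred_normals, gt_depth, gt_normals, lam=0.1):
    """Per-task supervised losses plus a cross-task consistency penalty."""
    task_loss = F.l1_loss(pred_depth, gt_depth) + F.l1_loss(pred_normals, gt_normals)
    # Consistency term: normals derived from the predicted depth should
    # agree with the directly predicted normals.
    consistency = F.l1_loss(normals_from_depth(pred_depth), pred_normals)
    return task_loss + lam * consistency

The consistency term acts purely as a regularizer: it needs no extra labels, and (per the abstract) its magnitude at test time can also serve as an anomaly signal.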

Bio:
Sasha Sax is a final-year PhD student at Berkeley, advised by Jitendra Malik (Berkeley) and Amir Zamir (EPFL). His work is in representation learning for Embodied AI, in particular on developing intermediate representations that support sample-efficient downstream learning and lifelong calibration + adaptation to novel situations. His work has received a CVPR Best Paper Award, a CVPR Best Paper Honorable Mention, and an Nvidia Pioneering Research Award, and has placed first in the CVPR Embodied AI Navigation Challenge.