January 26, 2023 13:45

Abstract

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks and presenting discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from Jan. to Feb. 2023.

For more information, please see the following site.
TrustML YSS

This network is funded by RIKEN-AIP’s subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 52nd Seminar】


Date and Time: February 1st, 2023, 9:00 am – 11:00 am (JST)

Venue: Zoom webinar

Language: English

Schedule
9:00 am – 10:00 am Speaker 1: Alexander (Sasha) Sax (UC Berkeley)
Title: Robust Learning via Cross-Task Consistency

10:00 am – 11:00 am Speaker 2: Dan Hendrycks (UC Berkeley)
Title: ML Safety

Abstract

Speaker 1: Alexander (Sasha) Sax (UC Berkeley)
Title: Robust Learning via Cross-Task Consistency
Short Abstract
Most neural networks (even those trained end-to-end) are later integrated into a larger system that makes multiple predictions. For example, self-driving cars use several networks to predict about 40 different quantities: lane locations/topology, pedestrian locations + pose, vehicle locations + intention, ground traversability, and others. The training objective usually measures accuracy for each quantity independently, without ensuring that the global predictions are coherent or usable for the final downstream use case. Cross-Task Consistency (XTC) is a technique for learning global consistency constraints that can be used as regularization losses during training. XTC can be applied when the constraints are only approximate, ill-posed, or unknown. Even when constraints are known analytically (e.g. normals-from-depth), XTC works as well or better in practice. Finally, I will discuss experiments showing that the degree of constraint violation can be used as a form of anomaly detection.
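As a rough illustration of the idea (a minimal sketch, not the speaker's implementation), the snippet below adds a cross-task consistency term to two per-task losses. The toy networks f_x_to_y1, f_x_to_y2, the cross-task mapping f_y1_to_y2, and the weight lam are hypothetical placeholders for the full image-to-image networks used in the actual work.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy predictors standing in for full image-to-image networks.
f_x_to_y1 = nn.Linear(32, 8)    # input -> task 1 (e.g. depth)
f_x_to_y2 = nn.Linear(32, 8)    # input -> task 2 (e.g. normals)
f_y1_to_y2 = nn.Linear(8, 8)    # cross-task mapping: task 1 -> task 2

def xtc_loss(x, y1_gt, y2_gt, lam=0.1):
    # Direct per-task losses, each measured independently.
    y1_pred = f_x_to_y1(x)
    y2_pred = f_x_to_y2(x)
    direct = F.l1_loss(y1_pred, y1_gt) + F.l1_loss(y2_pred, y2_gt)
    # Consistency term: the task-2 prediction reached via task 1 should
    # agree with task 2's target; its magnitude can also serve as an
    # anomaly score at test time.
    consistency = F.l1_loss(f_y1_to_y2(y1_pred), y2_gt)
    return direct + lam * consistency

# Example usage on random data.
x, y1_gt, y2_gt = torch.randn(4, 32), torch.randn(4, 8), torch.randn(4, 8)
xtc_loss(x, y1_gt, y2_gt).backward()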

Bio:
Sasha Sax is a final-year PhD student at Berkeley advised by Jitendra Malik (Berkeley) and Amir Zamir (EPFL). His work is in representation learning for Embodied AI, in particular on developing intermediate representations that support sample-efficient downstream learning and lifelong calibration and adaptation to novel situations. His work has received a CVPR Best Paper Award, a CVPR Best Paper Honorable Mention, and an Nvidia Pioneering Research Award, and has placed first in the CVPR Embodied AI Navigation Challenge.

Speaker 2: Dan Hendrycks (UC Berkeley)
Title: ML Safety
Short Abstract
Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), reducing inherent model hazards (“Alignment”), and reducing systemic hazards (“Systemic Safety”). Throughout, we clarify each problem’s motivation and provide concrete research directions.
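As a toy illustration of the Monitoring theme (identifying hazards), the sketch below scores inputs with the maximum softmax probability baseline for out-of-distribution detection, a technique the speaker's bio below mentions; the classifier, inputs, and threshold are hypothetical placeholders, not the speaker's setup.

import torch
import torch.nn.functional as F

def msp_score(logits):
    # Maximum softmax probability: higher means "looks in-distribution".
    return F.softmax(logits, dim=-1).max(dim=-1).values

classifier = torch.nn.Linear(16, 10)     # stand-in for a trained model
x_in = torch.randn(8, 16)                # in-distribution-like inputs
x_out = 5.0 * torch.randn(8, 16)         # shifted inputs standing in for OOD data

threshold = 0.5                          # in practice chosen on held-out validation data
flagged = msp_score(classifier(x_out)) < threshold   # True where the model is uncertain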

Bio:
I recently received my PhD from UC Berkeley, where I was advised by Dawn Song and Jacob Steinhardt. I am now the director of the Center for AI Safety. I am interested in ML Safety. In 2018 I received my BS from UChicago. My research is supported by the NSF GRFP and the Open Philanthropy AI Fellowship. I contributed the GELU activation function (the most-used activation in state-of-the-art models including BERT, GPT, and Vision Transformers), the out-of-distribution detection baseline, and distribution shift benchmarks.


All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


More Information

Date February 1, 2023 (Wed) 09:00 - 11:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/150517
