March 30, 2023 12:29

Abstract

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from March to April 2023.

For more information, please see the following site.
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST ACT-X Grant Number JPMJAX21AF, Japan.


【The 66th Seminar】


Date and Time: April 4th, 10:00 am – 11:00 am (JST)

Speaker: Muhammad Ahmed Shah (Carnegie Mellon University)
Title: Biologically Inspired Foveation Filter Improves Robustness to Adversarial Attacks

Short Abstract
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks — subtle, perceptually indistinguishable perturbations of inputs that change the response of the model. In the context of vision, we hypothesize that a factor contributing to the robustness of human visual perception is our constant exposure to low-fidelity visual stimuli in our peripheral vision. To investigate this hypothesis, we develop R-Blur, an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation based on the distance from a given fixation point. We show that DNNs trained on images transformed by R-Blur are substantially more robust to adversarial attacks, as well as to other, non-adversarial corruptions, than DNNs trained on the original images, with the former achieving up to 69% higher accuracy on perturbed data. We further show that the robustness induced by R-Blur is certifiable.
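For readers who want a concrete picture of the kind of transform described above, here is a minimal, illustrative Python (NumPy/SciPy) sketch of a foveation-style filter. It is not the speaker's R-Blur implementation: the five-level blur quantization, the choice of maximum sigma, and the linear desaturation ramp are all assumptions made for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixation, max_sigma=8.0, n_levels=5):
    """Blur and desaturate an (H, W, 3) float image in [0, 1] more strongly
    with increasing distance from the (row, col) fixation point."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Eccentricity: normalized distance of each pixel from the fixation point.
    dist = np.hypot(ys - fixation[0], xs - fixation[1])
    dist = dist / dist.max()

    # Precompute blurred copies at a few eccentricity levels
    # (sigma grows from 0 at the fovea to max_sigma at the periphery;
    # the channel axis is left unblurred).
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    levels = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]

    # Assign each pixel the blur level matching its eccentricity.
    idx = np.minimum((dist * n_levels).astype(int), n_levels - 1)
    out = np.empty_like(image)
    for i, level in enumerate(levels):
        out[idx == i] = level[idx == i]

    # Reduce color saturation with eccentricity by blending toward grayscale.
    gray = out.mean(axis=2, keepdims=True)
    out = (1.0 - dist[..., None]) * out + dist[..., None] * gray
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))   # stand-in for a real image
    foveated = foveate(img, fixation=(32, 32))
    print(foveated.shape)           # (64, 64, 3)
```

Quantizing the eccentricity into a handful of levels keeps the cost to a few full-image Gaussian filters rather than a per-pixel variable blur; the abstract's claim is that training DNNs on images transformed this way improves robustness to both adversarial and non-adversarial corruptions.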

Bio:
Muhammad Ahmed Shah is a 2nd-year PhD student at Carnegie Mellon University, advised by Dr. Bhiksha Raj. His current research focuses on developing biologically inspired methods for making deep neural networks more robust to input corruptions, particularly adversarial attacks. In the past, he has worked on a variety of research topics, including machine learning privacy, neural model compression, and information retrieval. His work has been published in several conferences, including ICASSP, Interspeech, and ICPR.


All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP expects adherence to this code throughout the event and asks for the cooperation of all participants in ensuring a safe environment for everybody.


More Information

Date: April 4, 2023 (Tue) 10:00 – 11:00 (JST)
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/154428

Last updated on December 9, 2024 13:41