Abstract
This is an online seminar. Registration is required.
Instructions for attending the online seminar will be sent to registered participants.
Date: 2020-04-27 10:00-12:00
Schedule:
– 10:00-11:00 Jiang Dayou, Analysis of Trustworthy Artificial Intelligence System and Security Framework
– 11:00-12:00 Jiayang Liu, Detection based Defense against Adversarial Examples from the Steganalysis Point of View
=====================================================
Speaker #1: Jiang Dayou
Time: 10:00-11:00
Title: Analysis of Trustworthy Artificial Intelligence System and Security Framework
Abstract: In recent years, driven by advances in algorithms, computing power, and data, artificial intelligence (AI) has entered a new stage of accelerated development and has become an accelerator of economic and social development. As AI becomes deeply integrated into related industries and people’s social lives, the trust crisis it raises has also attracted widespread attention. This presentation will review the current development of AI, sort out and summarize the key issues of trustworthy AI and its assessment list, focus on the security and privacy issues of AI, analyze the main threats, risks, and technical countermeasures, and present a security standards framework.
Speaker #2: Jiayang Liu
Affiliation: University of Science and Technology of China
Time: 11:00-12:00
Title: Detection based Defense against Adversarial Examples from the Steganalysis Point of View
Abstract: Deep Neural Networks (DNNs) have recently led to significant improvements in many fields. However, DNNs are vulnerable to adversarial examples: samples with imperceptible perturbations that dramatically mislead the DNNs. Many defense methods have been proposed, such as obfuscating the gradients of the networks or detecting adversarial examples. However, it has been shown that these defense methods are either ineffective or cannot resist secondary adversarial attacks. In this paper, we point out that steganalysis can be applied to adversarial example detection, and we propose a method to enhance steganalysis features by estimating the probability of modifications caused by adversarial attacks. Experimental results show that the proposed method can accurately detect adversarial examples. Moreover, secondary adversarial attacks are hard to perform directly against our method, because it is based not on a neural network but on high-dimensional, hand-crafted features.
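
The abstract rests on the observation that adversarial perturbations, like steganographic embedding, disturb the statistics of neighboring-pixel residuals. As a rough illustration of that general idea only (not the speaker's actual features, data, or code), the following Python sketch computes simplified SPAM-style residual histograms on toy synthetic images and trains an off-the-shelf classifier to separate clean from perturbed images; the toy images, the +/-2 perturbation, and the simplified feature set are all assumptions made for this sketch:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def spam_like_features(img, T=3):
        """Histograms of truncated first-order pixel residuals (horizontal
        and vertical), a simplified stand-in for the much richer SPAM/SRM
        feature sets used in steganalysis."""
        img = img.astype(np.int32)
        feats = []
        for diff in (np.diff(img, axis=1), np.diff(img, axis=0)):
            d = np.clip(diff, -T, T).ravel()
            hist = np.bincount(d + T, minlength=2 * T + 1).astype(float)
            feats.append(hist / hist.sum())
        return np.concatenate(feats)

    rng = np.random.default_rng(0)
    n, h, w = 300, 32, 32
    # Toy "clean" images: flat patches with mild texture, so that
    # neighboring-pixel residuals concentrate near zero (a crude
    # stand-in for the smoothness of natural images).
    levels = rng.integers(30, 226, size=(n, 1, 1))
    clean = np.clip(levels + rng.integers(-1, 2, size=(n, h, w)), 0, 255)
    # Toy "adversarial" images: the same patches plus a +/-2 additive
    # perturbation standing in for adversarial noise.
    adv = np.clip(clean + rng.integers(-2, 3, size=clean.shape), 0, 255)

    X = np.array([spam_like_features(x) for x in np.concatenate([clean, adv])])
    y = np.array([0] * n + [1] * n)

    # Any off-the-shelf classifier works on these hand-crafted features;
    # logistic regression keeps the sketch dependency-light. Because the
    # detector is not itself a differentiable network, gradient-based
    # secondary attacks cannot target it directly.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(f"toy detection accuracy: {clf.score(Xte, yte):.2f}")

On real data one would use full steganalysis feature sets and genuine adversarial examples (e.g., from FGSM or C&W attacks); the toy setup above only demonstrates the detection pipeline.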
More Information
Date: April 27, 2020 (Mon) 10:00-12:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/106136