June 2, 2021 09:43

Abstract

AI Security and Privacy Team (https://aip.riken.jp/labs/ai_soc/ai_sec_privacy/) at RIKEN AIP

Speaker 1: Jun Sakuma (15 min)
Title: Overview of the AI security and privacy team
Abstract: The main goal of our team is to establish trustworthy AI by guaranteeing security, protecting privacy, and achieving fairness by means of statistical theory, cryptographic theory, system security, and formal methods. This talk gives an overview of the team and recent issues in AI security and privacy.

Speaker 2: Kazuto Fukuchi (25 min)
Title: Equalized Impact for Fair Regression
Abstract: We deal with the algorithmic bias problem in regression tasks, in which individuals suffer an adverse or advantageous impact due to a social bias injected by machine learning models. This study sheds light on the fact that the impact of a regression model can take a continuous value reflecting the significance of the impact. For this continuous impact, we introduce a novel fairness definition, equalized impact, which requires that the significance of the adverse and advantageous impacts caused by the model be equal across groups. We develop a method to train fair regression models using the equalized impact as a penalty term and show that the expected squared estimation error of the unfairness score can be bounded above by $O(1/n)$, where $n$ is the sample size. Experimental results show that the proposed method successfully achieves fairness in the sense of the significance of impact.
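
As a rough illustration of the penalty-based training described in the abstract, the following minimal Python sketch fits a linear regressor by gradient descent on a squared loss plus a fairness penalty. The concrete impact score used here (the gap in mean signed residuals between two groups) and all hyperparameters are illustrative assumptions, not the formulation from the talk.

import numpy as np

# Toy data: features X, targets y, binary group labels g.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * g + rng.normal(scale=0.1, size=n)

lam, lr = 1.0, 0.05   # penalty weight and step size (assumed hyperparameters)
w = np.zeros(d)

def impact_gap(residuals, g):
    # Stand-in "impact" score: between-group gap in mean signed residual.
    return residuals[g == 0].mean() - residuals[g == 1].mean()

for _ in range(500):
    r = y - X @ w                 # signed residuals (per-individual impact)
    u = impact_gap(r, g)
    grad = -2 * X.T @ r / n       # gradient of the mean squared error
    # Gradient of the penalty lam * u**2; since r = y - X @ w,
    # du/dw = mean(X[g==1], axis=0) - mean(X[g==0], axis=0).
    du_dw = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
    grad += lam * 2 * u * du_dw
    w -= lr * grad

print("weights:", w, "impact gap:", impact_gap(y - X @ w, g))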

Speaker 3: Tatsuya Mori (25 min)
Title: What a security researcher thought about the threat model and feasibility of adversarial examples
Abstract: Research on adversarial examples, which aims to generate inputs that lead machine learning algorithms to misclassify, was triggered by the pioneering papers of Christian Szegedy and Ian Goodfellow published in 2013-2014 and has been actively studied ever since. In this talk, I will discuss the following questions from the perspective of a security researcher: (1) What kinds of threats can adversarial examples pose in the real world? (2) How feasible are the attacks? (3) Are adversarial examples effective against systems that consist of various data-processing modules in addition to machine learning algorithms? I will introduce several concrete applications of adversarial attacks, such as ECG diagnosis, machine translation, and voice assistants, and use them as examples for further discussion.
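
For background, the canonical one-step attack from the Goodfellow paper cited above is the fast gradient sign method (FGSM). The sketch below applies it to a toy logistic classifier; the model, input, and perturbation budget are illustrative assumptions, not examples from the talk.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1       # a fixed "trained" linear model
x = rng.normal(size=5)               # a benign input
y = 1.0                              # its true label

p = sigmoid(w @ x + b)
grad_x = (p - y) * w                 # gradient of the logistic loss w.r.t. x

eps = 0.1                            # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)    # one-step FGSM perturbation

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))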

Speaker 4: Yohei Akimoto (25 min)
Title: Minimax optimization with approximate minimization solvers and its application to automatic berthing control
Abstract: Computational simulations often include error and uncertainty. Optimizing on such a simulation yields solutions that are not guaranteed to perform as well in the real environment as in the simulation, because of the gap between the two. This is one of the factors that impair the reliability of simulation-based optimization. The problem of finding a solution robust to such error and uncertainty is often formulated as a minimax optimization. In this talk, we introduce a minimax optimization approach that uses approximate minimization solvers and does not require the gradient of the objective function. We also present an application to the automatic berthing control of a ship.
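
A minimal sketch of the derivative-free minimax idea: the outer loop minimizes the worst-case objective by random search, while the inner maximization over the uncertainty set is solved only approximately by sampling. The objective f and both solvers are illustrative stand-ins, not the method presented in the talk.

import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    # Toy robust objective: performance of design x under uncertainty u.
    return (x - 1.0) ** 2 + x * u

def approx_max_u(x, n_samples=50, lo=-0.5, hi=0.5):
    # Approximate inner maximization over u in [lo, hi] by sampling.
    us = rng.uniform(lo, hi, size=n_samples)
    return max(f(x, u) for u in us)

x, step = 0.0, 0.5
best = approx_max_u(x)
for _ in range(200):
    cand = x + rng.normal(scale=step)   # random-search outer step
    val = approx_max_u(cand)
    if val < best:                      # accept if the worst case improves
        x, best = cand, val
    else:
        step *= 0.99                    # slowly shrink the search radius

print("robust x:", x, "worst-case value:", best)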

Speaker 5: Hiroshi Unno (25 min)
Title: Applications of Machine Learning in Formal Methods
Abstract: Formal methods are mathematically rigorous techniques for specifying, verifying, and synthesizing software and hardware systems. Formal methods have traditionally been based on automated deduction, such as theorem proving, model checking, and constraint satisfaction. Recent advances in cloud, machine learning, and cyber-physical systems, however, have posed new challenges regarding complex, large, and black-box systems, which have driven the integration of traditional approaches with data-driven inductive reasoning using machine learning. This talk presents an ongoing effort toward building an automated tool for verifying and synthesizing such systems against safety, liveness, security, and privacy properties.
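
To give a flavor of combining inductive learning with deductive checking, here is a toy counterexample-guided (CEGIS-style) loop that learns a loop invariant of the form x <= c for a small program. The setup is a textbook illustration, not the speaker's tool.

# Learn an invariant x <= c for: x = 0; while x < 10: x = x + 1

def verifier(c):
    # Deductive check of the candidate invariant x <= c by brute force:
    # return a reachable state that violates it, or None on success.
    x = 0
    while x < 10:
        x += 1
        if x > c:
            return x       # counterexample to the candidate invariant
    return None

def learner(counterexamples):
    # Inductive step: the tightest x <= c consistent with all examples.
    return max(counterexamples, default=0)

examples = []
c = learner(examples)
while (cex := verifier(c)) is not None:
    examples.append(cex)   # refine the hypothesis with the counterexample
    c = learner(examples)

print("learned invariant: x <=", c)   # x <= 10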


All participants are required to agree with the AIP Open Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP expects adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


More Information

Date: June 23, 2021 (Wed) 15:00 - 17:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/115896
