January 11, 2023 10:47

Abstract

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from Jan. to Feb. 2023.

For more information, please see the following site:
TrustML YSS

This network is funded by RIKEN-AIP’s subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 49th Seminar】


Date and Time: January 18th 4:00 pm – 7:00 pm (JST)

Venue: Zoom webinar

Language: English

Schedule
4:00 pm – 5:00 pm Speaker 1: Salah Ghamizi (University of Luxembourg)
Title: Knowledge Augmentation: Towards multi-objective robust machine learning for critical systems

5:00 pm – 6:00 pm Speaker 2: Eleonora Giunchiglia (University of Oxford)
Title: Multi-Label Classification Neural Networks with Hard Logical Constraints

6:00 pm – 7:00 pm Speaker 3: Maxime Cordy (University of Luxembourg)
Title: Adversarial machine learning in the real world: assessing and improving model robustness in domain-constrained data space

Abstract

Speaker 1: Salah Ghamizi (University of Luxembourg)
Title: Knowledge Augmentation: Towards multi-objective robust machine learning for critical systems
Short Abstract
With the heavy reliance on information technologies in every aspect of our daily lives, Machine Learning (ML) models have become a cornerstone of these technologies’ rapid growth and pervasiveness. This is especially true of the most critical and fundamental technologies that handle our economic systems, transportation, health, and even privacy. However, while these systems are becoming more effective, their complexity inherently decreases our ability to understand, test, and assess their dependability and trustworthiness. The problem becomes even more challenging in a multi-objective setting, when the ML model is required to learn multiple tasks together, behave under constrained inputs, or fulfill contradictory concurrent objectives. In this talk, we introduce the concept of “Knowledge Augmentation”, a novel set of approaches to improve the robustness and performance of ML in the real world. The talk will cover three real use cases: fraud detection, pandemic forecasting, and medical diagnosis.

Bio:
Salah Ghamizi recently completed his PhD thesis at the University of Luxembourg and is currently a post-doctoral researcher at the Interdisciplinary Centre for Security, Reliability and Trust (SnT). His research focuses on practical solutions for achieving robust machine learning, using self-supervised learning and multi-task learning. His work lies at the intersection of software security and machine learning, with recent publications presented at venues such as ICCV, IJCAI, AAAI, KDD (Best Paper Award), and S&P.

Speaker 2: Eleonora Giunchiglia (University of Oxford)
Title: Multi-Label Classification Neural Networks with Hard Logical Constraints
Short Abstract
Multi-label classification (MC) is a standard machine learning problem in which a data point can be associated with multiple classes. A more challenging scenario is posed by hierarchical multi-label classification (HMC) problems, in which every prediction must satisfy a given set of hard constraints expressing subclass relationships between classes. In this talk, we propose C-HMCNN(h), a novel approach for solving HMC problems which, given a network h for the underlying MC problem, exploits the hierarchy information to produce predictions coherent with the constraints and to improve performance.
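The coherence idea described above can be sketched in a few lines. The hierarchy, class names, and max-based repair rule below are illustrative assumptions for this announcement, not the talk’s exact method: raw multi-label scores are post-processed so that every superclass scores at least as high as each of its subclasses, so any thresholded prediction respects the hierarchy.

```python
import numpy as np

# Hypothetical class hierarchy: index -> list of direct subclasses.
# Class 0 ("animal") has subclasses 1 ("dog") and 2 ("cat").
HIERARCHY = {0: [1, 2], 1: [], 2: []}

def descendants(c, hierarchy):
    """All classes reachable below c (its subclasses, recursively)."""
    out = []
    for child in hierarchy[c]:
        out.append(child)
        out.extend(descendants(child, hierarchy))
    return out

def coherent_predictions(h_out, hierarchy):
    """Lift each superclass score to the max over its subclasses, so
    thresholded predictions always satisfy the hierarchy constraints."""
    fixed = h_out.copy()
    for c in hierarchy:
        ds = descendants(c, hierarchy)
        if ds:
            fixed[c] = max(h_out[c], max(h_out[d] for d in ds))
    return fixed

raw = np.array([0.3, 0.9, 0.1])  # "dog" scored high, but "animal" was not
print(coherent_predictions(raw, HIERARCHY))  # -> [0.9 0.9 0.1]
```

With a threshold of 0.5, the raw scores would predict “dog” but not “animal”, violating the subclass constraint; the repaired scores predict both.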

Bio:
Eleonora Giunchiglia recently completed her PhD thesis at the University of Oxford and is currently a post-doctoral researcher at TU Wien. Her area of research is neural-symbolic AI, with a focus on using neural-symbolic techniques to make deep learning models safer. Her work has been published in various top-tier conferences and journals and has won her multiple awards at international venues.

Speaker 3: Maxime Cordy (University of Luxembourg)
Title: Adversarial machine learning in the real world: assessing and improving model robustness in domain-constrained data space
Short Abstract
Adversarial attacks are considered one of the most critical security threats to Machine Learning (ML). To enable the secure deployment of ML models in the real world, it is essential to properly assess their robustness to adversarial attacks and to develop means of making models more robust. Traditional adversarial attacks were mostly designed for image recognition and assume that every image pixel can be modified independently across its full range of values. In many domains, however, these attacks fail to consider that only specific perturbations can occur in practice, due to the hard domain constraints that delimit the set of valid inputs. Because of this, they almost always produce examples that are not feasible (i.e., could not exist in the real world). As a result, research has developed real-world adversarial attacks that either manipulate real objects through a series of problem-space transformations (problem-space attacks) or generate feature perturbations that satisfy predefined domain constraints (constrained feature-space attacks). In this talk, we will review the scientific literature on these attacks and report on our experience in applying them to real-world cases.
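A minimal illustration of the constrained feature-space setting described above, using a hypothetical tabular input with made-up feature ranges and one made-up relational constraint (none of this comes from the talk): after a raw attack step leaves the valid domain, the perturbed vector is repaired so it could actually occur in practice.

```python
import numpy as np

# Hypothetical valid ranges for three features of a tabular input.
LOWER = np.array([0.0, 0.0, 0.0])
UPPER = np.array([1.0, 1.0, 2.0])

def project_to_domain(x):
    """Map a perturbed feature vector back into the set of valid inputs:
    clip each feature to its range, then repair the (assumed) hard
    relational constraint feature2 = feature0 + feature1."""
    x = np.clip(x, LOWER, UPPER)
    x[2] = x[0] + x[1]  # restore the relational constraint
    return x

perturbed = np.array([1.4, -0.2, 0.5])  # raw attack step left the domain
print(project_to_domain(perturbed))     # -> [1. 0. 1.]
```

In a constrained attack, a repair step like this would typically be applied after every gradient update, so the search only ever explores feasible examples.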

Bio:
Maxime Cordy is a Research Scientist at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, working in Artificial Intelligence (AI) and Software Engineering (SE), with a focus on security and quality assurance for machine learning, software verification and testing, and the engineering of data-intensive systems. He has published 80+ peer-reviewed papers in these areas. He is one of the four permanent scientists of SnT’s SerVal group (SEcurity, Reasoning and VALidation). His research is inspired by and applied at several industry partners, mostly from the financial technology and smart energy sectors. He is deeply engaged in bringing the results and technologies produced by research to society, through the founding of a spin-off company and the leadership of public-private partnership projects at SnT. He has served as a program committee member and reviewer for top-tier AI and SE conferences, including IJCAI, ECCV, NeurIPS, ESEC/FSE, PLDI, ISSTA, and CAiSE. He is a distinguished reviewer board member of TOSEM and a regular reviewer for other top-tier SE journals.


All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


More Information

Date January 18, 2023 (Wed) 16:00 - 19:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/149647


last updated on December 9, 2024 13:41