EPFL CIS – RIKEN AIP Joint Seminar #1 (October 6, 2021)

Description

We are pleased to announce that the EPFL CIS – RIKEN AIP Joint ONLINE Seminar series will start on October 6th.

EPFL is located in Switzerland and is one of the most vibrant and cosmopolitan science and technology institutions. EPFL has both a Swiss and international vocation and focuses on three missions: teaching, research and innovation.

The Center for Intelligent Systems (CIS) at EPFL, a joint initiative of the schools ENAC, IC, SB, STI and SV, seeks to advance research and practice in the strategic field of intelligent systems.

RIKEN is Japan’s largest comprehensive research institution renowned for high-quality research in a diverse range of scientific disciplines. https://aip.riken.jp/?lang=en

The RIKEN Center for Advanced Intelligence Project (AIP) houses more than 40 research teams, whose work ranges from the fundamentals of machine learning and optimization, through applications in medicine, materials, and disaster resilience, to the analysis of the ethics and social impact of artificial intelligence.


【The first Seminar】


Date and Time: October 6th, 5:00pm – 6:00pm (JST)
10:00am – 11:00am (CEST)
Venue: Zoom webinar

Language: English

Speaker: Prof. Volkan Cevher, EPFL CIS

Title: Optimization challenges in adversarial machine learning

Abstract:
Thanks to neural networks (NNs), faster computation, and massive datasets, machine learning (ML) is under increasing pressure to provide automated solutions to ever harder real-world tasks, beyond human performance and with ever faster response times, because of the potentially huge technological and societal benefits. Unsurprisingly, the NN learning formulations present a fundamental challenge to the back-end learning algorithms despite their scalability, in particular due to the existence of traps in the non-convex optimization landscape, such as saddle points, that can prevent algorithms from obtaining “good” solutions.
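A standard textbook illustration of such a trap (not an example from the talk itself): the function

    f(x, y) = x^2 - y^2

has zero gradient at the origin, so a gradient method initialized there makes no progress, yet the origin is a saddle point rather than a minimum, since f decreases along the y direction.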

In this talk, we describe our recent research demonstrating that this non-convex optimization dogma is false: scalable stochastic optimization algorithms can avoid traps and rapidly obtain locally optimal solutions. Coupled with progress in representation learning, such as over-parameterized neural networks, such local solutions can be globally optimal.
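As a minimal sketch of this idea in Python (an illustrative toy problem with hypothetical helper names, not the algorithms analyzed in the talk): on f(x, y) = x^2 - y^2 + y^4/4, which has a strict saddle at the origin and two global minima at (0, ±√2), plain gradient descent started at the origin never moves, while the same iteration with small random perturbations of the gradient escapes the saddle and reaches a minimum.

```python
import numpy as np

# Toy objective f(x, y) = x^2 - y^2 + y^4 / 4:
# strict saddle at (0, 0), global minima at (0, +sqrt(2)) and (0, -sqrt(2)).
# Illustrative sketch only, not the algorithms analyzed in the talk.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y + y ** 3])

def gradient_descent(p0, lr=0.05, steps=500, noise=0.0, seed=0):
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        g = grad(p)
        if noise > 0.0:
            g = g + noise * rng.standard_normal(2)  # small isotropic perturbation
        p = p - lr * g
    return p

print(gradient_descent([0.0, 0.0]))              # plain GD: stuck at the saddle, prints [0. 0.]
print(gradient_descent([0.0, 0.0], noise=1e-3))  # perturbed GD: escapes toward (0, ±sqrt(2))
```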

Unfortunately, this talk will also demonstrate that the central min-max optimization problems in ML, such as generative adversarial networks (GANs), robust reinforcement learning (RL), and distributionally robust ML, contain spurious attractors that do not include any stationary points of the original learning formulation. Indeed, we will describe how algorithms are subject to a grander challenge, including unavoidable convergence failures, which could explain the stagnation in their progress despite the impressive earlier demonstrations.
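A minimal sketch of how such failures already appear in the simplest possible setting (an illustrative toy game with hypothetical helper names, not the problems studied in the talk): for the bilinear game min_x max_y x·y, the only stationary point is (0, 0), yet simultaneous gradient descent-ascent spirals away from it for any positive step size.

```python
# Bilinear min-max game: minimize over x, maximize over y the objective f(x, y) = x * y.
# Its unique stationary point is (0, 0), but simultaneous gradient descent-ascent
# moves away from it: every step scales the distance to the origin by sqrt(1 + lr^2).
# Illustrative sketch only, not the problems studied in the talk.
def gda(x, y, lr=0.1, steps=1000):
    for _ in range(steps):
        gx, gy = y, x                       # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy     # descent in x, ascent in y
    return x, y

print(gda(0.5, 0.5))  # the iterates diverge instead of converging to (0, 0)
```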

Bio:
Volkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara, Turkey, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta, GA, in 2005. He was a Research Scientist with the University of Maryland, College Park, from 2006 to 2007, and with Rice University in Houston, TX, from 2008 to 2009. Currently, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne (EPFL) and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include signal processing theory, machine learning, convex optimization, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research Award on Machine Learning in 2018, the IEEE Signal Processing Society Best Paper Award in 2016, a Best Paper Award at CAMSAP in 2015, a Best Paper Award at SPARS in 2009, an ERC Consolidator Grant in 2016, and an ERC Starting Grant in 2011.