
Abstract

This event is held online and in person at the RIKEN AIP Open Space in Nihombashi.

15:00 – 16:30
Speaker 1
Martin Mundt (TU Darmstadt)

Title
Self-Expanding Neural Networks

Abstract
Neural networks have become the workhorse of deep learning. A large part of their popularity may be attributed to their composition of stackable layers and a common training algorithm, both of which make them easy to program and apply. However, choosing an architecture that matches a given task's complexity is challenging. It typically entails repeated trial and error, following a set of historical rules of thumb, or conducting expensive neural architecture search. The most popular approach to date is therefore to over-parametrize neural networks, i.e. make them excessively large, and then regularize their learning. In this talk, I will present self-expanding neural networks (SENN), a radically different approach to this unsustainable trend. SENNs start with one unit in one computational layer and then self-reflect, using natural gradients, to decide: i) when to add components to the architecture, ii) what component to add at any given point in time, and iii) where to place it in the existing hierarchy of computational instructions. SENNs thus adapt directly to what is observed without perturbing the already learned function, making them ideal contenders for lifelong machine learning.
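
The expansion idea sketched in the abstract can be illustrated with a toy example. The following minimal Python sketch is an illustration only, not the actual SENN algorithm: it grows a single hidden layer one unit at a time when a simple loss-plateau heuristic fires, standing in for SENN's natural-gradient criterion, and zero-initializes each new unit's output weight so the already learned function is preserved. All names and thresholds below are hypothetical.

# Hypothetical sketch of a self-expanding single-hidden-layer network.
# The expansion trigger is a loss-plateau heuristic, NOT SENN's
# natural-gradient criterion; thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2):
    H = np.tanh(X @ W1)            # hidden activations
    return H @ W2, H               # predictions, activations

def train(X, y, width=1, max_width=8, lr=0.05, steps=2000):
    d = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(d, width))
    W2 = rng.normal(scale=0.5, size=(width, 1))
    prev_loss = np.inf
    for t in range(steps):
        yhat, H = forward(X, W1, W2)
        err = yhat - y
        loss = float(np.mean(err ** 2))
        # Plain backprop for the tiny two-layer net.
        gW2 = H.T @ err / len(X)
        gH = err @ W2.T * (1 - H ** 2)
        gW1 = X.T @ gH / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
        # Expansion: add one hidden unit when the loss has plateaued.
        if t % 200 == 199 and W1.shape[1] < max_width:
            if prev_loss - loss < 1e-3:   # illustrative threshold
                W1 = np.hstack([W1, rng.normal(scale=0.5, size=(d, 1))])
                # Zero-init output weight: the learned function is unchanged.
                W2 = np.vstack([W2, np.zeros((1, 1))])
            prev_loss = loss
    return W1, W2, loss

# Toy regression problem.
X = rng.normal(size=(256, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]
W1, W2, final_loss = train(X, y)
print(f"final width: {W1.shape[1]}, final loss: {final_loss:.4f}")

The zero-initialized output weight is what lets the network grow without disturbing what it has already learned; the new unit only starts contributing once gradient updates move its output weight away from zero.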

16:30 – 18:00
Speaker 2
Krikamol Muandet (CISPA Helmholtz Center for Information Security)

Title
(Im)possibility of Collective Intelligence

Abstract
Democratization of AI involves training and deploying machine learning models across heterogeneous and potentially massive environments. While a diversity of data can bring about new possibilities to advance AI systems, it simultaneously restricts the extent to which information can be shared across environments due to pressing concerns such as privacy, security, and equity. Inspired by social choice theory, I will first present a choice-theoretic perspective of machine learning as a tool to analyze learning algorithms. To understand the fundamental limits, I will then provide a minimum requirement, in terms of intuitive and reasonable axioms, under which empirical risk minimization (ERM) that learns from a single environment is the only rational learning algorithm in heterogeneous environments. This impossibility result implies that Collective Intelligence (CI), the ability of algorithms to successfully learn across heterogeneous environments, cannot be achieved without sacrificing at least one of these essential properties. Lastly, I will discuss the implications of this result for critical areas of machine learning such as out-of-distribution generalization, federated learning, algorithmic fairness, and multi-modal learning.
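
For concreteness, a minimal formulation of the single-environment ERM baseline mentioned above, in assumed notation (loss \(\ell\), hypothesis class \(\mathcal{F}\), per-environment data distributions \(P_e\)), is

\[
\hat{f}_{\mathrm{ERM}} \in \arg\min_{f \in \mathcal{F}} \ \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr),
\qquad (x_i, y_i) \sim P_e \ \text{for a single environment } e,
\]

whereas collective intelligence would require aggregating the per-environment risks \(R_e(f) = \mathbb{E}_{(x,y)\sim P_e}[\ell(f(x), y)]\) across all environments. The impossibility result above says that any learner satisfying the stated axioms must effectively reduce to ERM on a single environment.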

Zoom Link
Join Zoom Meeting
https://riken-jp.zoom.us/j/98088651692?pwd=a28zVnc4NThvVmFrNjl3Tnd5ZGEzZz09

Meeting ID: 980 8865 1692
Passcode: PR15tm9bHR

Host
Thomas Möllenhoff (Approximate Bayesian Inference Team)

More Information

Date September 4, 2023 (Mon) 15:00 - 18:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/162850
