We will hold a joint workshop with Bar-Ilan University as follows:
Date and Time:
July 19, 2023, 15:00 – 18:00 (JST)
Venue: Online and Open Space at the RIKEN AIP Nihonbashi office*
*The Open Space is available to AIP researchers only.
15:05-15:30 Masashi Sugiyama (RIKEN-AIP)
Title: Importance-Weighting Approach to Distribution Shift Adaptation
Abstract: For reliable machine learning, overcoming distribution shift is one of the most important challenges. In this talk, I will first give an overview of the classical importance weighting approach to distribution shift adaptation, which consists of an importance estimation step and an importance-weighted training step. Then, I will present a more recent approach that simultaneously estimates the importance weight and trains a predictor. Finally, I will discuss a more challenging scenario of continuous distribution shifts, where the data distributions change continuously over time.
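The two-step recipe described in the abstract can be sketched on a toy covariate-shift problem. Everything below (the Gaussian densities, the misspecified linear model, the numbers) is an illustrative assumption, not material from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariate shift (all numbers are illustrative): the model y = a*x is
# misspecified, since the true relation is y = x^2, so the fitted slope
# depends on which input distribution the fit is weighted toward.
n = 5000
x_tr = rng.normal(0.0, 1.0, n)              # training inputs ~ N(0, 1)
y_tr = x_tr ** 2 + rng.normal(0.0, 0.1, n)  # test inputs would be ~ N(1, 1)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Step 1, importance estimation: here both densities are known, so the
# ratio w(x) = p_test(x) / p_train(x) is available in closed form.
w = gaussian_pdf(x_tr, 1.0, 1.0) / gaussian_pdf(x_tr, 0.0, 1.0)

# Step 2, importance-weighted training: weighted least squares for y ≈ a*x.
a_weighted = np.sum(w * x_tr * y_tr) / np.sum(w * x_tr ** 2)
a_unweighted = np.sum(x_tr * y_tr) / np.sum(x_tr ** 2)
# The weighted fit tracks the test distribution (slope near 2 there),
# while the unweighted fit stays near 0.
```

In practice the densities are unknown, and the importance estimation step learns w(x) directly from training and test samples (e.g., by density-ratio estimation).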
15:30-15:55 Gal Chechik (Bar-Ilan University)
Title: Personalizing text-to-image visual generation
Abstract: Text-to-image generative models can create fantastic content, but they are still limited in various ways. First, we often wish to generate images based on our own items and concepts rather than images of generic concepts from the public domain. I will describe a series of papers that address that challenge. These include textual inversion – where we teach new personalized visual concepts to the model that can be combined with known concepts, Perfusion – where we further lightly edit a diffusion model, and an encoder approach that accelerates the process. Second, it is still hard to generate images of relatively rare concepts or non-rigid structured concepts like hands. I will describe an approach to smartly select an initialization point that can greatly improve generation of such concepts. I’ll discuss future directions and trends.
15:55-16:20 Gang Niu (RIKEN-AIP)
Title: Label-noise Learning Beyond Class-conditional Noise
Abstract: Label noise may exist in many real-world applications where budgets for labeling raw data are limited. However, the famous class-conditional noise (CCN) model, which assumes the label corruption process is instance-independent and only class-dependent, is not enough to express/model real-world label noise, and thus we need to go beyond it. This talk will introduce our recent advances in robust learning against label noise when the noise is significantly harder than CCN. Specifically, two general noise models, instance-dependent noise (IDN) and mutually contaminated distributions (MCD), as well as the corresponding learning methods will be covered in the talk. The learning methods for handling IDN and MCD show that label-noise learning beyond CCN is at least possible, and hopefully there will be new methods making it more and more practical.
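As background, the class-conditional noise model that the talk goes beyond can be written as a single label-transition matrix shared by all instances. The matrix and predictions below are made-up numbers for illustration:

```python
import numpy as np

# Class-conditional noise (CCN): label corruption depends only on the class,
# via a transition matrix T with T[i, j] = P(noisy label j | true label i).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Clean class posteriors predicted by some model for two instances.
p_clean = np.array([[0.7, 0.3],
                    [0.1, 0.9]])

# Forward correction: map the clean posteriors through T so the training loss
# is computed consistently against the noisy labels.
# P(noisy = j | x) = sum_i P(true = i | x) * T[i, j]
p_noisy = p_clean @ T
```

Instance-dependent noise breaks exactly this assumption: T becomes a function of the input x rather than one fixed matrix, which is part of what makes the harder settings in the talk challenging.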
16:40-17:05 Ethan Fetaya (Bar-Ilan University)
Title: Multi-task learning – a game-theoretic perspective
Abstract: Multi-task learning allows us to reduce runtime by training one model to solve several tasks simultaneously; however, this leads to a challenging optimization problem. One line of multi-task optimization algorithms works by altering how the different task gradients are combined. We explain how gradient combination can be viewed as a cooperative bargaining problem, and using a game-theoretic approach we develop a multi-task algorithm based on the Nash bargaining solution, named Nash-MTL. We will describe this new approach to multi-task learning and how it reaches state-of-the-art results on a variety of benchmarks.
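The Nash bargaining solution for combining task gradients yields weights α satisfying (G Gᵀ)α = 1/α elementwise, where the rows of G are the per-task gradients; one consequence is invariance to per-task gradient scale. The damped fixed-point solver and the two-task numbers below are an illustrative assumption, not the optimization procedure from the paper:

```python
import numpy as np

def nash_mtl_direction(G, iters=200):
    """Approximate the Nash bargaining update direction for task gradients
    G (tasks x params) via a damped fixed-point iteration on the condition
    (G G^T) alpha = 1 / alpha (elementwise). Illustrative solver only."""
    GG = G @ G.T                      # Gram matrix of the task gradients
    alpha = np.ones(G.shape[0])
    for _ in range(iters):
        target = 1.0 / (GG @ alpha)
        # Damped multiplicative update; clamp keeps the sqrt well-defined.
        alpha = np.sqrt(alpha * np.maximum(target, 1e-12))
    return G.T @ alpha                # combined update direction

# Two tasks carrying the same directional information at very different scales.
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 100.0])
d = nash_mtl_direction(np.stack([g1, g2]))
# Scale invariance: both tasks contribute equally to the combined direction.
```

A plain gradient average would let the large-magnitude task dominate; the bargaining weights instead scale as 1/‖gᵢ‖ in this orthogonal case.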
17:05-17:30 Shuo Chen (RIKEN-AIP)
Title: Robust Contrastive Learning and Its Applications
Abstract: Self-supervised contrastive learning has recently become a popular and powerful unsupervised learning approach. In this talk, we will first review the classical contrastive learning methods in terms of self-supervisory information and feature encoder structure. After that, we will present our recent work to solve the issues caused by inaccurate self-supervisory information and high-dimensional features. Finally, we will discuss some novel and challenging applications of contrastive learning in real-world tasks, e.g., zero-shot recognition, multi-view classification, and cross-modal learning.
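As background for the classical methods being reviewed, a minimal InfoNCE-style contrastive loss (standard prior work, not the new contributions in the talk) can be written as:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for paired embeddings z1[i] <-> z2[i]: each pair is the
    positive, all other rows in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # all-pairs cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

loss_aligned = info_nce(np.eye(4), np.eye(4))  # near 0: each row matches its pair
```

The "inaccurate self-supervisory information" issue mentioned in the abstract arises here because the off-diagonal rows are only *assumed* to be negatives, which can be wrong when two instances share a semantic class.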
17:30-17:55 Haggai Maron (NVIDIA)
Title: Equivariant Architectures for Learning in Deep Weight Spaces
Abstract: Designing machine learning architectures for processing neural networks in their raw weight matrix form is an emerging field of research. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). In this presentation, I will describe a novel network architecture for learning in deep weight spaces that takes into account the unique structure of these spaces. Specifically, I will show how to design layers that respect symmetries of neural networks and demonstrate how these layers can be implemented using three basic operations: pooling, broadcasting, and fully connected layers. I will also demonstrate the effectiveness and advantages of our architecture over natural baselines in various learning tasks.
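The three primitives named in the abstract (pooling, broadcasting, and fully connected layers) already suffice to build a simple permutation-equivariant layer. The DeepSets-style layer below is a minimal analogue of this construction, not the weight-space architecture from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(X, W1, W2):
    """Permutation-equivariant layer from three primitives: a fully connected
    map applied per element (W1), pooling over the set axis, and broadcasting
    the pooled summary back to every element (W2)."""
    pooled = X.mean(axis=0, keepdims=True)                 # pooling
    return X @ W1 + np.broadcast_to(pooled @ W2, X.shape)  # broadcast + add

X = rng.normal(size=(5, 3))
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 3))

# Equivariance check: permuting the input rows permutes the output rows.
perm = rng.permutation(5)
out_perm_first = equivariant_layer(X[perm], W1, W2)
out_perm_after = equivariant_layer(X, W1, W2)[perm]
```

The weight-space layers in the talk follow the same pattern but respect the richer neuron-permutation symmetries of a whole network rather than a single set.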