Abstract

Part I: Continuous Optimization Team Presentation

10:00–10:30 Akiko Takeda
Title: Introduction to the Continuous Optimization Team
Abstract: Our team, which was established in September 2016, is about to enter its 10th year this September. We will review some of our past research results and discuss future prospects.

10:30–11:00 Pierre-Louis Poirion
Title: Random Subspace Newton and Quasi-Newton Algorithms
Abstract: We present a randomized subspace regularized Newton method for non-convex optimization. We will study global and local convergence properties of the method and prove that it works particularly well in the low-rank setting. We will also present a randomized quasi-Newton method.
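A minimal sketch of the flavor of such a method, assuming a toy separable test function and naive parameter choices (the sketch-matrix scaling, the regularizer, and the backtracking rule below are illustrative, not the ones analyzed in the talk):

    # Sketch of a randomized subspace regularized Newton step: the
    # gradient and Hessian are compressed into a random s-dimensional
    # subspace, a small regularized Newton system is solved there, and
    # the step is lifted back to R^n.
    import numpy as np

    def f(x):                          # simple nonconvex test function
        return 0.25 * np.sum(x**4) - np.sum(x**2)

    def grad(x):
        return x**3 - 2 * x

    def hess(x):
        return np.diag(3 * x**2 - 2)

    def subspace_newton_step(x, s, rng):
        n = x.size
        P = rng.standard_normal((s, n)) / np.sqrt(s)    # random sketch
        g, H = grad(x), hess(x)
        Hs = P @ H @ P.T                                # s x s subspace Hessian
        # Regularize past any negative curvature seen in the subspace.
        mu = max(0.0, -np.linalg.eigvalsh(Hs)[0]) + np.linalg.norm(P @ g)
        d = np.linalg.solve(Hs + mu * np.eye(s), -(P @ g))
        t, x_new = 1.0, x + P.T @ d
        while f(x_new) > f(x) and t > 1e-8:             # naive backtracking
            t *= 0.5
            x_new = x + t * (P.T @ d)
        return x_new

    x = 0.5 * np.ones(100)
    for k in range(100):
        x = subspace_newton_step(x, s=5, rng=np.random.default_rng(k))
    print(f(x))                        # decreases toward a local minimum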

11:00–11:30 Jan Harold Alcantara
Title: Recent Developments in Splitting Algorithms for Nonconvex Optimization and Nonmonotone Inclusion
Abstract: Modern optimization methods increasingly leverage splitting algorithms to exploit problem structure and enable more efficient computation. In this talk, we review recent developments in the analysis of such methods within nonsmooth and nonconvex optimization, with a focus on structured nonconvex problems. We present global subsequential convergence guarantees under specific assumptions. We then extend this perspective to a broader class of problems—namely, multi-operator nonmonotone inclusion problems. In particular, we show how the Douglas–Rachford algorithm can be generalized to this multi-operator setting, and we establish conditions under which convergence can still be rigorously ensured.
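For context, here is a minimal sketch of the classical two-operator Douglas–Rachford iteration for minimizing f + g (the multi-operator, nonmonotone generalization discussed in the talk is not shown); f and g below are chosen only because both have closed-form proximal maps:

    # Classical two-operator Douglas-Rachford iteration for
    # min_x f(x) + g(x), with f(x) = 0.5*||x - a||^2 and g = lam*||.||_1.
    import numpy as np

    def prox_f(z, gamma, a):             # prox of gamma*0.5*||. - a||^2
        return (z + gamma * a) / (1.0 + gamma)

    def prox_g(z, gamma, lam):           # soft-thresholding, prox of gamma*lam*||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

    a = np.array([3.0, -0.2, 1.5, 0.05])
    lam, gamma = 0.5, 1.0
    z = np.zeros_like(a)
    for _ in range(200):
        x = prox_f(z, gamma, a)                   # first resolvent step
        y = prox_g(2 * x - z, gamma, lam)         # reflected second step
        z = z + (y - x)                           # update governing sequence
    print(x)       # converges to soft(a, lam) = [2.5, 0., 1., 0.]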

11:30–11:45 Coffee Break

11:45–12:30 Christophe Roux
Title: Implicit Riemannian Optimism with Applications to Min-Max Problems
Abstract: Many optimization problems, such as eigenvalue problems, principal component analysis, and low-rank matrix completion, can be interpreted as optimization problems over Riemannian manifolds, which allows for exploiting the geometric structure of the problems. While Riemannian optimization has been studied extensively in the offline setting, the online setting is not well understood. A major challenge in prior works was handling the in-manifold constraints that arise in the online setting. We leverage implicit methods to address this problem and improve over existing results, removing strong assumptions and matching the best known regret bounds in the Euclidean setting. Building on this, we develop algorithms for g-convex, g-concave smooth min-max problems on Hadamard manifolds. Notably, for the first time, one of the methods nearly matches the gradient oracle complexity of the lower bound known for Euclidean problems.
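As a point of reference (this only illustrates the Riemannian viewpoint named in the abstract, not the implicit optimistic methods of the talk), the eigenvalue problem becomes unconstrained optimization on the unit sphere, with gradients projected onto the tangent space and a retraction in place of a Euclidean step:

    # The leading-eigenvector problem as Riemannian gradient ascent on
    # the unit sphere: maximize x^T A x subject to ||x|| = 1.
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((10, 10))
    A = M + M.T                              # symmetric test matrix
    x = rng.standard_normal(10)
    x /= np.linalg.norm(x)

    eta = 0.05                               # naive fixed step size
    for _ in range(1000):
        egrad = 2 * A @ x                    # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x      # project onto tangent space at x
        x = x + eta * rgrad                  # step in the tangent direction
        x /= np.linalg.norm(x)               # retraction back to the sphere
    print(x @ A @ x, np.linalg.eigvalsh(A)[-1])   # should nearly agree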

13:30–14:30 Andreas Themelis
Title: It’s All in the Envelope! A Smoother Approach to Splitting Algorithms
Abstract: Splitting algorithms, such as the proximal gradient method, ADMM, and Douglas-Rachford splitting, are fundamental tools for solving structured optimization problems by decomposing them into simpler, more manageable subproblems. Because of their simplicity and modularity, a lot of research has been devoted to understanding and possibly improving their convergence behavior, especially in nonconvex settings.
This talk offers a walkthrough of the use of “proximal envelopes” as a unifying framework for analyzing splitting methods. Much like the Moreau envelope gives a smooth interpretation of the proximal point method for convex problems, these envelope functions allow us to view various splitting algorithms, even in the absence of convexity, through the lens of a “nonsmooth” gradient descent applied to a more regular surrogate. This perspective not only aids in theoretical analysis and convergence guarantees, but also paves the way to “acceleration” techniques that preserve the structure and simplicity of the original methods.
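A minimal numeric illustration of the identity underlying this viewpoint, for f(x) = |x|, whose Moreau envelope is the Huber function: a gradient step of size gamma on the (smooth) envelope coincides with the proximal point step on f.

    # Numeric check of the envelope identity: a proximal point step on
    # f equals a gradient step (of size gamma) on its Moreau envelope.
    import numpy as np

    gamma = 0.5

    def prox(x):                     # prox of gamma*|.|: soft-thresholding
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    def envelope(x):                 # Moreau envelope, evaluated via the prox
        u = prox(x)
        return np.abs(u) + (u - x) ** 2 / (2 * gamma)

    def envelope_grad(x):            # standard identity: (x - prox(x)) / gamma
        return (x - prox(x)) / gamma

    x = np.linspace(-2.0, 2.0, 9)
    # Gradient step on the envelope reproduces the prox step on f.
    print(np.allclose(x - gamma * envelope_grad(x), prox(x)))          # True
    # The identity matches a finite-difference gradient of the envelope.
    h = 1e-6
    num = (envelope(x + h) - envelope(x - h)) / (2 * h)
    print(np.allclose(num, envelope_grad(x), atol=1e-5))               # True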

14:50–15:50 Benjamin Poignard
Title: Sparse Factor Models of High Dimension
Abstract: We consider the estimation of a high-dimensional factor model in which the factor loading matrix is assumed sparse. The estimation problem is reformulated as a penalized M-estimation criterion, while the restrictions identifying the factor loading matrix accommodate a wide range of sparsity patterns. We prove the sparsistency property of the penalized estimator when the number of parameters is diverging, that is, the consistency of the estimator and the recovery of the true zero entries. These theoretical results are illustrated by finite-sample simulation experiments, and the relevance of the proposed method is assessed on real data.
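A toy sketch of one possible l1-penalized loading estimation, assuming a simple alternating least-squares/proximal scheme; the actual penalized criterion, identification restrictions, and asymptotic regime studied in the talk are not reproduced here:

    # Toy sketch: l1-penalized estimation of a sparse loading matrix,
    # alternating a least-squares factor update with a proximal
    # (soft-thresholding) loading update.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, r, lam = 500, 20, 2, 0.05
    Lambda_true = np.zeros((p, r))
    Lambda_true[:10, 0] = 1.0            # sparse loadings: two disjoint blocks
    Lambda_true[10:, 1] = 1.0
    F = rng.standard_normal((n, r))
    X = F @ Lambda_true.T + 0.5 * rng.standard_normal((n, p))

    Lambda = 0.1 * rng.standard_normal((p, r))
    for _ in range(100):
        # factor update: least squares given the current loadings
        F_hat = X @ Lambda @ np.linalg.pinv(Lambda.T @ Lambda)
        # loading update: one proximal gradient step on ||X - F L^T||^2 / (2n)
        G = (X - F_hat @ Lambda.T).T @ F_hat / n
        step = 1.0 / np.linalg.norm(F_hat.T @ F_hat / n, 2)
        Z = Lambda + step * G
        Lambda = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)
    # approximately recovers the block-sparse pattern (up to column
    # order and sign), with exact zeros on the true-zero entries
    print(np.round(Lambda, 2))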

15:50–16:10 Coffee Break

Part II: Functional Analytic Learning Team Presentation

16:10–17:10 Minh Ha Quang
Title: An Optimal Transport and Information Geometric Framework for Positive Operators, Infinite-Dimensional Gaussian Measures, and Gaussian Processes
Abstract: The Wasserstein and Fisher-Rao distances are two central quantities arising from the fields of Optimal Transport and Information Geometry, respectively, with many applications in machine learning and statistics. On the set of zero-mean Gaussian densities on Euclidean space, both admit closed-form formulas. In this talk, we present their generalization to the infinite-dimensional setting of Gaussian measures on Hilbert space and Gaussian processes. In general, the exact Fisher-Rao metric formulation does not generalize to the set of all Gaussian measures on an infinite-dimensional Hilbert space. Instead, we show that on the set of all Gaussian measures equivalent to a fixed one, all finite-dimensional formulas admit a direct generalization. By employing regularization, we then obtain a formulation that is valid for all Gaussian measures on Hilbert space. The Wasserstein distance, on the other hand, is valid for all Gaussian measures on Hilbert space; nevertheless, we show that by employing entropic regularization, many favorable theoretical properties, including convergence and differentiability, can be obtained. In the setting of Gaussian processes, via reproducing kernel Hilbert space (RKHS) methodology, we obtain consistent finite-dimensional approximations of the infinite-dimensional quantities that can be practically employed.
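For reference, the closed-form formula alluded to for zero-mean Gaussians: in finite dimension, W_2^2(N(0,A), N(0,B)) = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}), which can be checked numerically:

    # Closed-form finite-dimensional 2-Wasserstein (Bures) distance
    # between zero-mean Gaussians N(0, A) and N(0, B); the talk concerns
    # its infinite-dimensional generalization.
    import numpy as np
    from scipy.linalg import sqrtm

    def bures_wasserstein2(A, B):
        As = sqrtm(A)
        cross = sqrtm(As @ B @ As)
        return float(np.real(np.trace(A) + np.trace(B) - 2 * np.trace(cross)))

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + np.eye(4)              # random SPD covariance
    B = 2.0 * A                          # scaled copy, for a sanity check
    # For B = c*A the formula reduces to (1 + c - 2*sqrt(c)) * tr(A).
    print(bures_wasserstein2(A, B), (3.0 - 2.0 * np.sqrt(2.0)) * np.trace(A))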

17:10–17:40 Le Thanh Tam
Title: Optimal Transport on Tree Systems and Applications
Abstract: Optimal transport (OT) provides a powerful toolkit for comparing measures. However, OT has a high computational complexity, namely super-cubic in the number of input supports. Several variants of Sliced Wasserstein (SW) have been developed in the literature to overcome this challenge. These approaches exploit the closed-form expression of univariate OT by projecting input measures onto one-dimensional lines. However, projecting measures onto low-dimensional spaces can lead to a loss of topological information. To mitigate this issue, we propose to replace one-dimensional lines with a more advanced structure, called tree systems. This structure is metrizable by a tree metric, which yields a closed-form expression for OT on tree systems. We develop an extensive theoretical analysis to formally define tree systems, introduce the concept of splitting maps, propose novel variants of the Radon transform for tree systems, and verify their injectivity. Empirically, we illustrate that the proposed approaches perform favorably compared to SW and its variants on applications with dynamic-support measures, such as generative models and diffusion models.
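For comparison, a minimal sketch of the sliced Wasserstein baseline mentioned in the abstract, using the closed-form (sorting-based) one-dimensional OT; the tree-system variants proposed in the talk are not shown:

    # Sliced 2-Wasserstein distance between two equal-size point clouds:
    # project onto random directions and match sorted 1D projections.
    import numpy as np

    def sliced_wasserstein2(X, Y, n_proj=200, rng=np.random.default_rng(0)):
        d = X.shape[1]
        total = 0.0
        for _ in range(n_proj):
            theta = rng.standard_normal(d)
            theta /= np.linalg.norm(theta)       # random unit direction
            x_proj = np.sort(X @ theta)          # 1D OT = match sorted samples
            y_proj = np.sort(Y @ theta)
            total += np.mean((x_proj - y_proj) ** 2)
        return np.sqrt(total / n_proj)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 3))
    Y = rng.standard_normal((500, 3)) + np.array([2.0, 0.0, 0.0])
    print(sliced_wasserstein2(X, Y))   # roughly ||shift|| / sqrt(3) ~ 1.15 here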

More Information

Date May 21, 2025 (Wed) 10:00–17:40
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/184175
