Continuous Optimization Team (Team Leader: Akiko Takeda)

Description

Continuous Optimization Team (https://www.riken.jp/en/research/labs/aip/generic_tech/continuous_optimize/index.html) at RIKEN AIP

Speaker 1 (15:00-15:30): Akiko Takeda
Title: Difference-of-Convex Approach for Nonconvex Nonsmooth Optimization Problems
Abstract: There are various applications formulated as nonconvex nonsmooth optimization problems in signal processing, machine learning, and operations research. In this talk, we consider a class of nonconvex nonsmooth optimization problems whose objective is the sum of a smooth function and a finite number of possibly nonsmooth functions (whose proximal mappings are easy to compute). Solving these problems can nevertheless be challenging because the nonsmooth functions are coupled: the proximal mapping of their sum can be hard to compute, so standard first-order methods cannot be applied efficiently. We propose a successive difference-of-convex approximation method for solving this class of problems. This talk is based on a paper published in Mathematical Programming, 176 (2019), joint work with T. Liu and T.K. Pong.
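
As a rough illustration of the setting (not the method proposed in the paper), the sketch below shows a standard proximal gradient step when a single nonsmooth term has an easy proximal mapping, using the l1 norm and soft-thresholding as the classical example; the names prox_l1 and proximal_gradient are hypothetical. When several such nonsmooth terms are coupled, the proximal mapping of their sum is generally not available in closed form, which is the difficulty the successive difference-of-convex approximation is designed to circumvent.

import numpy as np

def prox_l1(v, t):
    # Proximal mapping of t*||.||_1 (soft-thresholding): a typical example
    # of a nonsmooth term whose proximal mapping is cheap to evaluate.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=200):
    # Plain proximal gradient: x <- prox_{step*g}(x - step*grad_f(x)).
    # It applies when one nonsmooth term g has a cheap prox; with several
    # coupled nonsmooth terms the combined prox is typically intractable.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x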

Speaker 2 (15:30-16:00): Tianxiang Liu
Title: A successive difference-of-convex approximation method for nonconvex nonsmooth optimization with applications to low-rank problems
Abstract: In this talk, we consider a class of nonconvex nonsmooth optimization problems whose objective is the sum of a smooth function and several nonsmooth functions, some of which are composed with linear maps. Making use of the simple observation that the Moreau envelope has a DC (difference-of-convex) decomposition, we propose a successive difference-of-convex approximation method (SDCAM). We prove the convergence of SDCAM under suitable assumptions and discuss how it can be applied to many contemporary applications, including a Hankel-structured problem arising from system identification. This is joint work with Akiko Takeda, Ting Kei Pong, and Ivan Markovsky.
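
The DC decomposition referred to above follows from a standard completion-of-the-square identity (stated here for orientation only, with the usual caveat that the envelope must be well defined, e.g., g prox-bounded and \lambda > 0 small enough):

e_\lambda g(x) = \min_y \Big\{ g(y) + \tfrac{1}{2\lambda}\|x - y\|^2 \Big\}
              = \tfrac{1}{2\lambda}\|x\|^2 - \sup_y \Big\{ \tfrac{1}{\lambda}\langle x, y\rangle - g(y) - \tfrac{1}{2\lambda}\|y\|^2 \Big\}.

The supremum is taken over functions affine in x and is therefore convex, so the Moreau envelope is a difference of two convex functions even when g itself is nonconvex, which is the observation SDCAM exploits.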

Speaker 3 (16:00-16:30): Michael Metel
Title: Stochastic Proximal Gradient Methods for Non-Smooth Non-Convex Sparse Optimization
Abstract: This presentation focuses on stochastic proximal gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer and convex constraints. This problem setting enables us to consider constrained sparse learning models. To the best of our knowledge, we present the first non-asymptotic convergence bounds for this class of problems. We present two simple stochastic proximal gradient algorithms for general stochastic and finite-sum optimization problems. Part of this work was presented at ICML 2019.
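
As a minimal sketch of the kind of iteration being analyzed (the names grad_batch and prox_reg are hypothetical, and this is not the specific pair of algorithms presented in the talk), a mini-batch stochastic proximal gradient method has the following shape.

import numpy as np

def stochastic_proximal_gradient(grad_batch, prox_reg, x0, n_samples,
                                 step, batch_size=32, n_iter=1000, seed=0):
    # Illustrative mini-batch stochastic proximal gradient iteration.
    # grad_batch(x, idx): stochastic gradient of the smooth loss on mini-batch idx.
    # prox_reg(v, t): proximal mapping of t times the (possibly non-convex)
    # regularizer plus the indicator of the convex constraint set, assumed
    # computable for the constrained sparse learning models of interest.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        idx = rng.choice(n_samples, size=batch_size, replace=False)
        x = prox_reg(x - step * grad_batch(x, idx), step)
    return x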

Speaker 4 (16:30-17:00): Takayuki Okuno
Title: On Lp-hyperparameter Learning via Bilevel Nonsmooth Optimization
Abstract: In recent years, the bilevel optimization strategy has become increasingly popular for selecting the best hyperparameter value of a sparse model. In this talk, we focus on the nonsmooth lp regularizer with 0<p<1 and develop a bilevel-optimization-based algorithm for computing the optimal lp regularization parameter. The proposed algorithm is simple and scalable, as our numerical comparison with Bayesian optimization indicates. Moreover, we present new optimality conditions for the relevant nonsmooth bilevel optimization problems and then give a convergence guarantee for the proposed algorithm. This talk is based on the technical paper arXiv:1806.01520v2. This is joint work with Akiko Takeda, Akihiro Kawana, and Motokazu Watanabe.
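
Schematically (and only schematically; the precise formulation is given in arXiv:1806.01520v2), the hyperparameter-selection problem is the nonsmooth bilevel program

\min_{\lambda \ge 0} \; L_{\mathrm{val}}\big(w(\lambda)\big)
\quad \text{s.t.} \quad
w(\lambda) \in \operatorname*{arg\,min}_{w} \; L_{\mathrm{train}}(w) + \lambda \sum_{j} |w_j|^p, \qquad 0 < p < 1,

where L_train and L_val denote the training and validation losses; the nonsmooth, nonconvex lp term in the lower-level problem is what necessitates the new optimality conditions and convergence analysis discussed in the talk.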
