2017/10/10 20:22

Abstract

Speaker: Le Thi Hoai An (University of Lorraine)
http://www.lita.univ-lorraine.fr/~lethi/index.php/en/

Title: DC (Difference of Convex functions) Programming and DCA (DC Algorithms) for Machine Learning: DCL (DC Learning)

Abstract:
DC programming and DCA, which constitute the backbone of nonconvex programming and global optimization, were introduced in 1985 by Pham Dinh Tao and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994; they have now become classic and increasingly popular. Their original key idea relies on the DC structure of the objective and constraint functions of nonconvex programs, which is explored and exploited in a deep and suitable way. Key idea (philosophy/principle): extend convex analysis/programming to nonconvex analysis/programming broadly enough to cover most real-world nonconvex programs, but not so far that the powerful arsenal of convex analysis/programming can no longer be used.
It is worth noting that each DC function has infinitely many DC decompositions and hence infinitely many resulting DCAs. That is quite appealing to researchers and practitioners wishing to use DCA, because they have to find suitable formulations of DC programs and DC decompositions fitting the specific structure of their problems. All of these should be considered interesting and useful contributions to DC programming and DCA. The resulting DCA introduces the nice and elegant concept of approximating a nonconvex (DC) program by a sequence of convex ones: each iteration of DCA requires the solution of a convex program.
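To make this scheme concrete, below is a minimal DCA sketch in Python on a toy sparse-regression surrogate with the capped-l1 penalty; the problem, parameter names, and inner solver are illustrative assumptions of ours, not material from the talk. The objective f(x) = 0.5*||Ax - b||^2 + lam * sum_i min(|x_i|, theta) is written as g - h with g(x) = 0.5*||Ax - b||^2 + lam*||x||_1 and h(x) = lam * sum_i max(|x_i| - theta, 0), both convex; each DCA iteration linearizes h at the current point and solves the resulting convex (lasso-like) subproblem.

# Illustrative DCA sketch (assumed toy setup, not from the talk)
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def solve_convex_subproblem(A, b, lam, y, x0, n_iter=500):
    """ISTA for the convex subproblem: min_x 0.5*||Ax - b||^2 + lam*||x||_1 - <y, x>."""
    L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) - y        # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

def dca(A, b, lam=0.1, theta=0.5, max_iter=50, tol=1e-6):
    """Generic DCA loop: take a subgradient of h at x^k, then solve a convex program."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        # y^k is a subgradient of h(x) = lam * sum_i max(|x_i| - theta, 0) at x^k
        y = lam * np.sign(x) * (np.abs(x) > theta)
        x_new = solve_convex_subproblem(A, b, lam, y, x)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Toy usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = dca(A, b)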
Their popularity resides in their rich, deep and rigorous mathematical foundations; in the versatility, flexibility, robustness, inexpensiveness and efficiency of DCAs with respect to the (very few efficient) existing algorithms; in their adaptation to the specific structures of the problems addressed; and in their ability to solve real-world large-scale nonconvex programs.
Recent developments in nonconvex programming are mainly devoted to reformulation techniques and scalable algorithms in order to handle large-scale problems. They have focused on the following issues, as mentioned in the seminal work [1], in order to improve DCA's efficiency and scalability:
1: Finding reformulations and DC decompositions well adapted to DC programs
2: Exact penalty with/without error bounds in general DC programs, i.e., DC programs with DC constraints
3: Using Nesterov smoothing techniques applied to the generated convex subprograms for accelerating DCA, like FISTA (see the sketch after this list)
4: Using proximal techniques for regularization, decomposition, and improving DCA's convergence speed
5: Using a line search on the DC objective function in DCA (the standard DCA step size being equal to 1)
6: Estimating DCA’s convergence rate for special classes of DC programs
7: DCA's global convergence for special DC programs, for example DC programs with subanalytic data, using the Łojasiewicz inequality for nonsmooth subanalytic functions
8: Strategies for choosing initial points that take into account the specific structures of DC programs
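As an illustration of item 3, under the same toy setup sketched earlier (again our own assumption, not material from the talk), the plain ISTA inner loop can be replaced by a FISTA loop to accelerate the solution of each convex subproblem:

import numpy as np

def solve_convex_subproblem_fista(A, b, lam, y, x0, n_iter=200):
    """FISTA for the same convex subproblem: min_x 0.5*||Ax - b||^2 + lam*||x||_1 - <y, x>."""
    L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the smooth part
    x = x0.copy()
    z = x0.copy()                           # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b) - y        # gradient of the smooth part at z
        step = z - grad / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)  # soft-thresholding
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)                    # Nesterov extrapolation
        x, t = x_new, t_new
    return x

Swapping this function into the earlier dca() loop leaves the outer DCA scheme unchanged; only the inner convex solver is accelerated.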

The talk mainly focuses on applications of DC programming and DCA in learning. For those who are not familiar with these theoretical and algorithmic tools, we will first briefly summarize DC programming and DCA, which are well known and used by researchers and practitioners worldwide to model and solve nonconvex programs from different fields of applied science. Concerning DC Learning, we will present some representative problems and the use of DC programming and DCA to model and solve them.

[1] Pham Dinh, T., Le Thi, H.A.: Convex analysis approach to DC programming: Theory, algorithms and applications. Acta Mathematica Vietnamica 22(1), 289–355 (1997)

Details

Date and time: 2017/10/17 (Tue) 16:00 - 17:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/66148

Venue

Faculty of Engineering Building #6, University of Tokyo (marked by purple 78 on the following map: http://www.u-tokyo.ac.jp/content/400020145.pdf)