
Abstract

This is the second talk Prof. Schmidt will give.

Abstract:
There has been significant recent work on the theory and application of randomized coordinate-descent algorithms in machine learning and statistics, beginning with the work of Nesterov [2010] who showed that a random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell selection rule. This result suggests that we should never use the Gauss-Southwell rule, as it is typically much more expensive than random selection. However, the empirical behaviours of these algorithms contradict this theoretical result: in applications where the computational costs of the selection rules are comparable, the Gauss-Southwell selection rule tends to perform substantially better than random coordinate selection. We give a simple analysis of the Gauss-Southwell rule showing that – except in extreme cases – it is always faster than choosing random coordinates. Further, in this work we (i) show that exact coordinate optimization improves the convergence rate for certain sparse problems, (ii) propose a Gauss-Southwell-Lipschitz rule that gives an even faster convergence rate given knowledge of the Lipschitz constants of the partial derivatives, (iii) analyze the effect of approximate Gauss-Southwell rules, and (iv) analyze a proximal-gradient variant of the Gauss-Southwell rule.

Details

Date/Time: 2017/08/25 (Fri) 16:00 - 17:00
URL https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/63967