Abstract
In the high-dimensional regression setting, sparsity-enforcing penalties have proved useful for regularizing the data-fitting term. A recently introduced technique, screening rules, leverages the expected sparsity of the solutions by ignoring some variables during optimization, leading to solver speed-ups. When the procedure is guaranteed never to discard a feature wrongly, the rules are said to be "safe". We propose a unifying framework that can cope with generalized linear models regularized with standard sparsity-enforcing penalties such as the ℓ1 or ℓ1/ℓ2 norms. Our technique safely discards more variables than previously proposed safe rules, particularly for low regularization parameters. Our Gap Safe rules (so called because they rely on duality-gap computations) can cope with any iterative solver but are particularly well suited to block coordinate descent for many standard learning tasks: Lasso, Sparse-Group Lasso, multi-task Lasso, binary and multinomial logistic regression, etc. For all such tasks and on all tested datasets, we report significant speed-ups compared to previously proposed safe rules.
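To make the duality-gap idea concrete, here is a minimal sketch of a gap-based safe sphere test for the plain Lasso, (1/2)||y − Xβ||² + λ||β||₁. The function name and the choice of dual point (a rescaled residual) are illustrative assumptions, not the authors' exact implementation; the test discards feature j whenever |X_jᵀθ| + r·||X_j|| < 1, where r is a radius derived from the duality gap.

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """Illustrative Gap Safe sphere test for the Lasso (a sketch, not the
    reference implementation). Returns a boolean mask where True means the
    corresponding feature can be safely set to zero and removed."""
    residual = y - X @ beta
    # A dual-feasible point obtained by rescaling the residual so that
    # ||X^T theta||_inf <= 1 holds.
    scale = min(1.0, lam / np.max(np.abs(X.T @ residual)))
    theta = scale * residual / lam
    # Primal and dual objectives; their difference is the duality gap.
    primal = 0.5 * residual @ residual + lam * np.abs(beta).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # Safe-sphere radius shrinks to zero as the solver converges (gap -> 0),
    # so more and more features can be screened out along the way.
    radius = np.sqrt(2.0 * gap) / lam
    corr = np.abs(X.T @ theta)
    norms = np.linalg.norm(X, axis=0)
    return corr + radius * norms < 1.0
```

Because the radius depends only on the current duality gap, this test can be run periodically inside any iterative solver, which is the property that makes the rules dynamic rather than tied to one algorithm.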
More Information
Date: April 7, 2017 (Fri) 14:00 - 16:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/58743