Seminar by Prof. Reiichiro Kawai (The University of Sydney)
Title: Stochastic approximation in Adaptive Monte Carlo variance reduction
We discuss an application of stochastic approximation algorithms to a forward problem, rather than to the inverse problems of great interest to the machine learning community. The forward problem here is a general framework for Monte Carlo variance reduction in which the variance reduction parameters are updated adaptively by stochastic approximation. To address the extreme sensitivity of performance to the choice of learning rates, we focus mainly on the case of a finite computing budget and derive constant learning rates by minimizing an upper bound on the theoretical variance of the empirical mean, rather than by minimizing the objective function as in the existing stochastic gradient framework. Strong convexity of the objective function tightens the upper bound on the theoretical variance, yet the convexity parameter is not required for implementation in any way. We present numerical results to support the theoretical findings and to illustrate the effectiveness of the proposed algorithm, in particular the robustness of its performance to the choice of learning rates.
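To make the setting concrete, the following is a minimal sketch (not the speaker's algorithm) of adaptive Monte Carlo variance reduction: estimating E[exp(X)] for X ~ N(0,1) with a control variate g(X) = X of known mean zero, where the coefficient theta is updated by stochastic approximation with a constant learning rate while the same draws accumulate the empirical mean. The target, the control variate, and the learning rate value are illustrative assumptions.

```python
import math
import random


def adaptive_cv_mc(n=100_000, lr=0.01, seed=0):
    """Estimate E[exp(X)], X ~ N(0,1), with an adaptively tuned
    control variate g(X) = X (known mean 0).

    theta is updated by a constant-learning-rate stochastic gradient
    step on E[(f(X) - theta*X)^2], whose minimizer also minimizes the
    variance of the adjusted samples here since E[X] = 0.  Because the
    theta used at step k depends only on earlier draws, each adjusted
    sample remains unbiased for E[exp(X)].
    """
    rng = random.Random(seed)
    theta, total, total_sq = 0.0, 0.0, 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        fx = math.exp(x)
        y = fx - theta * x           # variance-reduced sample
        total += y
        total_sq += y * y
        # SGD step: gradient of (fx - theta*x)^2 in theta is -2*x*(fx - theta*x)
        theta += 2.0 * lr * x * (fx - theta * x)
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, var, theta
```

The true value is E[exp(X)] = exp(1/2) ≈ 1.6487, and the adjusted samples have noticeably smaller empirical variance than plain Monte Carlo (whose variance is e^2 - e ≈ 4.67), illustrating why a well-chosen constant learning rate matters over a finite budget.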
Date: January 15, 2018 (Mon), 15:00 - 16:00