A note on $L^1$-Convergence of the Empiric Minimizer for unbounded functions with fast growth

Abstract

For $V : \mathbb{R}^d \to \mathbb{R}$ coercive, we study the convergence rate, in $L^1$-distance, of the empiric minimizer (the minimum of $V$ estimated from a finite number $n$ of noisy samples) to the true minimum of $V$. We show that in general, for unbounded functions with fast growth, the convergence rate is bounded above by $a_n n^{-1/q}$, where $q$ is the dimension of the latent random variable and where $a_n = o(n^{\varepsilon})$ for every $\varepsilon > 0$. We then present applications to optimization problems arising in Machine Learning and in Monte Carlo simulation.
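The objects in the abstract can be illustrated numerically. The sketch below is a toy example with assumed choices not taken from the paper: a noisy objective $h(x, z) = (x - z)^2$ with latent $Z \sim \mathcal{N}(0, 1)$ (so $q = 1$), giving $V(x) = \mathbb{E}[h(x, Z)] = x^2 + 1$ with minimum $\min V = 1$; the empiric minimum is the minimum of the empirical objective $x \mapsto \frac{1}{n}\sum_i h(x, Z_i)$, found here by a simple grid search.

```python
import numpy as np

# Toy assumptions (not from the paper): h(x, z) = (x - z)**2, Z ~ N(0, 1),
# so V(x) = E[h(x, Z)] = x**2 + 1 with minimum min V = 1 at x = 0.
rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 401)  # coarse search grid for the minimizer

def empiric_minimum(n):
    """Minimum over the grid of the empirical objective x -> (1/n) sum_i h(x, Z_i)."""
    z = rng.standard_normal(n)
    emp = ((grid[:, None] - z[None, :]) ** 2).mean(axis=1)
    return emp.min()

def l1_error(n, reps=200):
    """Monte Carlo estimate of the L1-distance E|min_n - min V|."""
    return float(np.mean([abs(empiric_minimum(n) - 1.0) for _ in range(reps)]))

print(l1_error(10), l1_error(1000))  # the L1-error shrinks as n grows
```

The printed errors decrease with $n$; the paper's contribution is to quantify this decay rate as $a_n n^{-1/q}$ for unbounded, fast-growing $V$.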

Publication
In arXiv e-prints
Pierre Bras
PhD Student in Applied Mathematics

I am a PhD student under the direction of Gilles Pagès, interested in Machine Learning, Stochastic Optimization and Numerical Probability.
