For a coercive function V : R^d → R, we study the convergence rate, in L^1 distance, of the empirical minimum — the minimum of V evaluated with noise over a finite number n of samples — to the minimum of V. We show that, in general, for unbounded functions with fast growth, the convergence rate is bounded above by a_n n^{-1/q}, where q is the dimension of the latent random variable and a_n = o(n^ε) for every ε > 0. We then present applications to optimization problems arising in Machine Learning and in Monte Carlo simulation.
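As an illustration of the quantity studied, the following is a minimal simulation sketch (not from the paper): it estimates the L^1 distance between the empirical minimum over n samples and the true minimum for a toy coercive function V(x) = x^2 with a uniform latent variable. The choice of V, the sampling distribution, and the function names are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x):
    # Toy coercive function; min V = 0, attained at x = 0.
    return x ** 2

def empirical_minimum(n, noise_scale=0.0):
    # Draw n latent samples and take the minimum of the
    # (optionally noisy) evaluations of V.
    x = rng.uniform(-1.0, 1.0, size=n)
    vals = V(x) + noise_scale * rng.normal(size=n)
    return vals.min()

def l1_error(n, trials=2000):
    # Monte Carlo estimate of E|min_{i<=n} V(X_i) - min V|.
    return np.mean([abs(empirical_minimum(n)) for _ in range(trials)])

err_small = l1_error(50)
err_large = l1_error(5000)
```

With q = 1 here, the error shrinks as n grows, consistent with a polynomial rate in n; the bound in the abstract concerns the general unbounded, fast-growth case.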