LPSM PhD Students' Working Group

Abstract

I will show how minimization problems arising, for example, in machine learning can be solved using stochastic gradient descent. For non-convex optimization problems, I will introduce variants of this algorithm designed to improve the optimization procedure. Langevin algorithms consist in adding white noise to the gradient descent, with the hope of escaping local (but not global) minima. In their simulated annealing version, the noise is gradually decreased to zero so that the algorithm converges asymptotically, following the same heuristic as the original simulated annealing algorithm.
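To make the heuristic concrete, here is a minimal Python (NumPy) sketch of an annealed Langevin gradient descent. The objective, the constant step size `gamma`, and the logarithmic noise schedule `sigma_k` are illustrative assumptions, not the specific setting of the talk.

```python
import numpy as np

def annealed_langevin_descent(grad_f, x0, gamma=1e-2, a=1.0, n_steps=10_000, rng=None):
    """Gradient descent plus Gaussian white noise whose level sigma_k
    decreases slowly to zero (simulated-annealing-style schedule)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        sigma_k = a / np.sqrt(np.log(k + np.e))  # slowly vanishing noise level
        noise = rng.standard_normal(x.shape)
        # One Euler step: deterministic gradient move + scaled white noise
        x = x - gamma * grad_f(x) + sigma_k * np.sqrt(gamma) * noise
    return x

# Illustrative double-well objective f(x) = x^4 - 4x^2 + x, whose gradient is below;
# it has a spurious local minimum near x = 1.4 and a global minimum near x = -1.4.
grad = lambda x: 4 * x**3 - 8 * x + 1.0
x_min = annealed_langevin_descent(grad, x0=np.array([2.0]))
```

With the noise switched off (`a = 0`) this reduces to plain gradient descent, which, started from `x0 = 2.0`, would get stuck in the local minimum near x = 1.4; the added noise gives the iterates a chance to cross the barrier toward the global minimum.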

Date
May 23, 2022 5:00 PM
Location
LPSM
4 place Jussieu, 75005 Paris

Slides

Reference article

Pierre Bras
PhD Student in Applied Mathematics

I am a PhD student under the supervision of Gilles Pagès, interested in Machine Learning, Stochastic Optimization and Numerical Probability.