
Notice on the Series of Academic Seminars by Associate Professor Lingjiong Zhu (Florida State University)

Published: 2022-05-03    Source: School of Mathematics, South China University of Technology

Topic: Stochastic Algorithms in Optimization and Sampling

Speaker: Associate Professor Lingjiong Zhu

Venue: Zoom meeting, ID 7339556904

QQ group: 600788949 (reference materials and the latest updates on this seminar series will be posted in the group)

Schedule (all times are Beijing time; each seminar takes place on a Sunday, 8:30-9:55; an illustrative code sketch of the update rules from Seminars 1-3 appears after the schedule):

Seminar 1 (May 8): Introduction. Optimization (e.g. empirical risk minimization) and sampling (e.g. Bayesian learning) in machine learning. Stochastic gradient descent methods. Langevin algorithms. Stochastic modified equations. Heavy-tailed methods.

Seminar 2 (May 15): Stochastic gradient descent and Nesterov's accelerated stochastic gradient descent I. Convergence guarantees. Lyapunov function approach.

Seminar 3 (May 22): Stochastic gradient descent and Nesterov's accelerated stochastic gradient descent II. Trade-off between convergence speed and robustness. Convergence in Wasserstein distances.

Seminar 4 (May 29): Langevin Monte Carlo methods I. Sampling of log-concave distributions.

Seminar 5 (June 5): Langevin Monte Carlo methods II. Sampling and non-convex optimization. Non-reversibility yields acceleration.

Seminar 6 (June 12): Heavy-tailed Langevin-type methods. Metastability. Retargeting. Applications in machine learning.

Seminar 7 (June 19): The heavy-tail phenomenon in stochastic gradient descent.

Seminar 8 (June 26): Decentralized stochastic gradient methods and Langevin algorithms.
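For readers who would like a concrete picture before attending, the following is a minimal Python sketch of the two update rules at the heart of Seminars 1-3: plain stochastic gradient descent and Nesterov's accelerated variant. The quadratic objective, noise level, fixed step size, and fixed momentum below are placeholder choices for illustration, not settings prescribed by the speaker.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    """Noisy gradient of f(x) = ||x||^2 / 2, mimicking a mini-batch
    estimate: the true gradient x plus Gaussian noise (placeholder)."""
    return x + 0.1 * rng.standard_normal(x.shape)

def sgd(x0, lr=0.1, n_iters=200):
    """Plain SGD: x_{k+1} = x_k - lr * g(x_k)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - lr * stochastic_grad(x)
    return x

def nesterov_sgd(x0, lr=0.1, momentum=0.9, n_iters=200):
    """Nesterov's accelerated SGD: evaluate the stochastic gradient
    at the look-ahead point x_k + momentum * v_k, then take a
    momentum step."""
    x = x0.copy()
    v = np.zeros_like(x0)
    for _ in range(n_iters):
        g = stochastic_grad(x + momentum * v)
        v = momentum * v - lr * g
        x = x + v
    return x

x0 = rng.standard_normal(5)
print("||x|| after SGD:     ", np.linalg.norm(sgd(x0)))
print("||x|| after Nesterov:", np.linalg.norm(nesterov_sgd(x0)))
```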


Host: Professor Zhijian He (何志坚)

All faculty and students are welcome to attend!

School of Mathematics

May 3, 2022

Abstract: In this series of seminars, we will survey the popular stochastic algorithms used in the large-scale optimization and sampling problems that arise in machine learning applications. In particular, we will cover the stochastic gradient descent method and Nesterov's accelerated stochastic gradient descent for convex optimization, as well as Langevin-type Monte Carlo methods for both sampling and non-convex optimization. We will also cover heavy-tailed Langevin-type methods, investigate the heavy-tail phenomenon in machine learning, and discuss decentralized stochastic gradient and Langevin-type algorithms. If time allows, we will also discuss the penalty method for constrained sampling, its connection to decentralized sampling, and the connection between sampling and optimization.
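To make the sampling side of the abstract concrete, here is a minimal Python sketch of the unadjusted Langevin algorithm, a basic Langevin-type Monte Carlo method: a gradient step on the log-density plus injected Gaussian noise. The standard Gaussian target and the step size are placeholder choices for illustration and do not come from the seminar materials.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_pi(x):
    """Gradient of the log-density of the target distribution.
    Placeholder target: standard Gaussian, so grad log pi(x) = -x."""
    return -x

def unadjusted_langevin(x0, step=0.05, n_iters=5000):
    """Unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k + step * grad log pi(x_k) + sqrt(2 * step) * xi_k,
    with xi_k ~ N(0, I). This discretizes the Langevin diffusion,
    whose stationary distribution is the target pi."""
    x = x0.copy()
    samples = np.empty((n_iters,) + x0.shape)
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

samples = unadjusted_langevin(np.zeros(1))
print("empirical mean:", samples.mean())      # near 0 for the Gaussian target
print("empirical variance:", samples.var())   # near 1, up to discretization bias
```

Because the noise scale sqrt(2 * step) is tied to the step size, the same iteration with the noise term removed reduces to gradient descent on -log pi, which is one way to see the connection between sampling and optimization mentioned above.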

About the speaker: Lingjiong Zhu received his BA from the University of Cambridge in 2008 and his PhD from New York University in 2013. He worked at Morgan Stanley and the University of Minnesota before joining the faculty at Florida State University in 2015. His research interests include applied probability, data science, financial engineering, and operations research. His work has been published in many leading conferences and journals, including Annals of Applied Probability, Finance and Stochastics, ICML, INFORMS Journal on Computing, Journal of Machine Learning Research, NeurIPS, Production and Operations Management, SIAM Journal on Financial Mathematics, and Operations Research.