Title: Stochastic Algorithms in Optimization and Sampling
Speaker: Associate Professor Lingjiong Zhu
Venue: Zoom meeting 7339556904
Discussion QQ group: 600788949 (reference materials and the latest updates on this seminar series will be posted there)
Schedule:
| No. | Beijing time | Topic |
| --- | --- | --- |
| Seminar 1 | May 8 (Sun) 8:30-9:55 | Introduction. Optimization (e.g. empirical risk minimization) and sampling (e.g. Bayesian learning) in machine learning. Stochastic gradient descent methods. Langevin algorithms. Stochastic modified equations. Heavy-tailed methods. |
| Seminar 2 | May 15 (Sun) 8:30-9:55 | Stochastic gradient descent and Nesterov's accelerated stochastic gradient descent I. Convergence guarantees. Lyapunov function approach. |
| Seminar 3 | May 22 (Sun) 8:30-9:55 | Stochastic gradient descent and Nesterov's accelerated stochastic gradient descent II. Trade-off between convergence speed and robustness. Convergence in Wasserstein distances. |
| Seminar 4 | May 29 (Sun) 8:30-9:55 | Langevin Monte Carlo methods I. Sampling of log-concave distributions. |
| Seminar 5 | June 5 (Sun) 8:30-9:55 | Langevin Monte Carlo methods II. Sampling and non-convex optimization. Non-reversibility yields acceleration. |
| Seminar 6 | June 12 (Sun) 8:30-9:55 | Heavy-tailed Langevin-type methods. Metastability. Retargeting. Applications in machine learning. |
| Seminar 7 | June 19 (Sun) 8:30-9:55 | The heavy-tail phenomenon in stochastic gradient descent. |
| Seminar 8 | June 26 (Sun) 8:30-9:55 | Decentralized stochastic gradient methods and Langevin algorithms. |
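Seminars 2-3 contrast plain stochastic gradient descent with Nesterov's accelerated variant. As a minimal illustration (our sketch, not the speaker's material), both updates on the quadratic f(x) = x^2/2 with an assumed gradient-noise level `sigma`:

```python
import random

def noisy_grad(x, sigma=0.1):
    # Stochastic gradient of f(x) = x^2 / 2; sigma is an assumed noise level.
    return x + random.gauss(0.0, sigma)

def sgd(x0, lr=0.1, steps=200):
    # Plain SGD: x_{k+1} = x_k - lr * g(x_k).
    x = x0
    for _ in range(steps):
        x -= lr * noisy_grad(x)
    return x

def nesterov_sgd(x0, lr=0.1, momentum=0.9, steps=200):
    # Nesterov: evaluate the gradient at the look-ahead point x + momentum * v.
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * noisy_grad(x + momentum * v)
        x += v
    return x

random.seed(0)
print(sgd(5.0), nesterov_sgd(5.0))  # both settle near the minimizer x = 0
```

The trade-off mentioned in Seminar 3 is visible here: the momentum term speeds up the decay of the initial error, but it also amplifies the stationary fluctuation caused by the gradient noise.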
Host: Professor Zhijian He
All faculty and students are welcome to attend!
School of Mathematics
May 3, 2022
Abstract: In this series of seminars, we will survey the popular stochastic algorithms used in the large-scale optimization and sampling problems that arise in machine learning applications. In particular, we will cover the stochastic gradient descent method and Nesterov's accelerated stochastic gradient descent for convex optimization, as well as Langevin-type Monte Carlo methods for both sampling and non-convex optimization. We will also cover heavy-tailed Langevin-type methods, investigate the heavy-tail phenomenon in machine learning, and discuss decentralized stochastic gradient and Langevin-type algorithms. If time allows, we will also discuss the penalty method for constrained sampling, its connection to decentralized sampling, and the connection between sampling and optimization.
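As background for the Langevin Monte Carlo seminars, a minimal sketch (our illustration, not part of the announcement) of the unadjusted Langevin algorithm targeting the standard Gaussian, whose potential is U(x) = x^2/2:

```python
import math
import random

def ula(steps=20000, step_size=0.01, x0=0.0):
    """Unadjusted Langevin algorithm for a standard Gaussian target.

    Update: x_{k+1} = x_k - h * U'(x_k) + sqrt(2h) * xi_k,
    where U(x) = x^2 / 2, h is the step size, and xi_k ~ N(0, 1).
    """
    x = x0
    samples = []
    for _ in range(steps):
        x = x - step_size * x + math.sqrt(2 * step_size) * random.gauss(0.0, 1.0)
        samples.append(x)
    return samples

random.seed(1)
chain = ula()
tail = chain[5000:]  # discard burn-in
mean = sum(tail) / len(tail)
var = sum((t - mean) ** 2 for t in tail) / len(tail)
print(mean, var)  # roughly 0 and 1, the target's moments
```

Because the discretization is not Metropolis-corrected, the chain's stationary law carries an O(h) bias relative to the target, which is one reason step-size choices and Wasserstein-distance convergence bounds feature in the seminar series.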
Speaker bio: Lingjiong Zhu received his BA from the University of Cambridge in 2008 and his PhD from New York University in 2013. He worked at Morgan Stanley and the University of Minnesota before joining the faculty at Florida State University in 2015. His research interests include applied probability, data science, financial engineering, and operations research. His work has been published in many leading conferences and journals, including Annals of Applied Probability, Finance and Stochastics, ICML, INFORMS Journal on Computing, Journal of Machine Learning Research, NeurIPS, Production and Operations Management, SIAM Journal on Financial Mathematics, and Operations Research.