Title: Demystifying Neural Tangent Kernel
Speaker: Prof. Richard Xu (徐亦达)
Time: 10:00–11:00 a.m., Monday, November 29, 2021
Venue: Tencent Meeting, ID: 971 465 049
Host: Prof. Delu Zeng (曾德炉)
All faculty and students are welcome to attend!
School of Mathematics
November 24, 2021
Abstract: For the better part of the last decade, deep neural networks have achieved monumental empirical successes across many fields of machine learning. However, much of their inner workings remains a mystery. Recently, many researchers have tried to unravel this mystery by studying what happens when the width of a neural network tends to infinity. To this end, several important studies have addressed the expressivity and trainability of neural networks in this limit. In this talk, I will start by briefly describing what a Gaussian process is and why a neural network can be expressed as a Gaussian process. Then, through the lens of the so-called Neural Tangent Kernel, I will explain why training a neural network with gradient descent can be regarded as solving a linear ordinary differential equation. Finally, I will highlight some exciting research directions in this area.
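As a minimal sketch of the linear-ODE viewpoint mentioned in the abstract (the notation below is ours, not taken from the talk): let f_t(X) denote the network outputs on the training inputs X with targets y, let η be the learning rate, and let Θ be the Neural Tangent Kernel. In the infinite-width limit, Θ stays (approximately) constant during training, so gradient flow on the squared loss reduces, in LaTeX notation, to

% Output dynamics under gradient flow on L = (1/2) ||f_t(X) - y||^2;
% Theta is the NTK, constant in t in the infinite-width limit:
\frac{\mathrm{d} f_t(X)}{\mathrm{d} t} = -\eta \,\Theta\, \bigl(f_t(X) - y\bigr),
\qquad \Theta = \nabla_\theta f_0(X)\, \nabla_\theta f_0(X)^\top ,
% a linear ODE whose closed-form solution decays exponentially to the targets:
f_t(X) = y + e^{-\eta \Theta t}\, \bigl(f_0(X) - y\bigr).

This is the sense in which training an infinitely wide network by gradient descent can be analyzed as kernel regression with the NTK.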
Speaker Bio: Richard Yi Da Xu (徐亦达) is a Professor in the Department of Mathematics at Hong Kong Baptist University (HKBU). He previously worked at the University of Technology Sydney (UTS), Australia. His research fields are machine learning and artificial intelligence, and his recent interests include Bayesian nonparametrics and (machine) learning theory. Richard has published papers at many top international conferences, including AAAI, IJCAI, ECAI, ECCV, AISTATS, and ICDM, and in many top IEEE Transactions (TNNLS, TIP, TSP, TKDE, MC, and T-Cybernetics). Since 2009, he has created more than 2,000 slides of free online machine learning training materials at the doctoral level, as well as online machine learning videos.