Academic Talk by Kangjie Zhou (周康杰): Learning time-scales in two-layer neural networks

Posted: 2024-06-26

Time: June 28, 14:30–15:30

Venue: Lecture Hall 501, Science Building 4 (理四), Yanshan Campus

Title: Learning time-scales in two-layer neural networks

Speaker: Kangjie Zhou (周康杰), Stanford University, USA


Abstract: Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of the empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any progress alternate with intervals of rapid decrease. These successive phases of learning often take place on very different time scales. Finally, models learned in an early phase are typically "simpler" or "easier to learn", although in a way that is difficult to formalize.

Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this talk, we study the gradient flow dynamics of a wide two-layer neural network in high dimension, when the data are distributed according to a single-index model (i.e., the target function depends on a one-dimensional projection of the covariates). Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.
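
For readers unfamiliar with the setup, here is a minimal LaTeX sketch of the single-index model and the population gradient flow referenced in the abstract. The symbols ($\varphi$, $w_*$, $\sigma$, $a_j$, $w_j$, $m$) are illustrative notation, not taken from the talk itself:

% Single-index data model: the target depends on the covariates
% x \in \mathbb{R}^d only through a one-dimensional projection.
\[
  y \;=\; \varphi\big(\langle w_*, x \rangle\big), \qquad x \in \mathbb{R}^d .
\]
% A wide two-layer network with m neurons, trained by gradient flow
% on the population (squared) risk; \theta = (a_j, w_j)_{j \le m}
% collects all first- and second-layer parameters.
\[
  \hat f(x;\theta) \;=\; \frac{1}{m}\sum_{j=1}^{m} a_j\,\sigma\big(\langle w_j, x \rangle\big),
  \qquad
  \dot\theta(t) \;=\; -\,\nabla_\theta\, \mathbb{E}\Big[\big(y - \hat f(x;\theta)\big)^2\Big].
\]

When different groups of parameters evolve at very different speeds, a flow of this form can be viewed as a singularly perturbed dynamical system, which is the mechanism the abstract points to for the separated timescales and intermittent progress.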


Speaker bio: Kangjie Zhou (周康杰) received his Ph.D. from the Department of Statistics at Stanford University in May 2024 and will join the Department of Statistics at Columbia University as a postdoctoral researcher. He received his bachelor's degree from the School of Mathematical Sciences at Peking University in 2019. His research interests include theoretical statistics, machine learning, and optimization. In high school, he won a gold medal at the China Mathematical Olympiad (CMO), and as an undergraduate he won gold medals and first prizes in various major mathematics competitions.