Learning time-scales in two-layer neural networks
Speaker: Kangjie Zhou (周康杰), Columbia University
Time: 2024-07-16, 10:00-11:00
Venue: Room 819, New Public Health Building, Peking University Health Science Center
Abstract:
Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any progress alternate with intervals of rapid decrease. These successive phases of learning often take place on very different time scales. Finally, models learned in an early phase are typically “simpler” or “easier to learn”, although in a way that is difficult to formalize.
Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this talk, we study the gradient flow dynamics of a wide two-layer neural network in high dimension, when data are distributed according to a single-index model (i.e., the target function depends on a one-dimensional projection of the covariates). Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.
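To make the setting concrete, below is a minimal, purely illustrative sketch (not the speaker's construction or results): data are drawn from a single-index model y = φ(⟨w*, x⟩), and a wide two-layer ReLU network is trained by plain gradient descent with a small step size as a discrete proxy for gradient flow. All specific choices (dimension, width, link function φ, learning rate, number of steps) are assumptions made only for illustration; whether plateau-like behavior appears in the printed risk depends on these choices.

```python
# Illustrative sketch only: single-index data + wide two-layer ReLU network
# trained by full-batch gradient descent (a discrete proxy for gradient flow).
# All hyperparameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 50, 256, 1000            # input dimension, hidden width, sample size (assumed)

# Single-index target: y = phi(<w*, x>) with an assumed link function phi
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
phi = lambda z: z + np.maximum(z, 0.0)
X = rng.normal(size=(n, d))
y = phi(X @ w_star)

# Two-layer network f(x) = (1/m) * sum_j a_j * relu(<w_j, x>)
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m)
relu = lambda z: np.maximum(z, 0.0)

lr, steps = 0.5, 3000
for t in range(steps):
    H = X @ W.T                     # (n, m) pre-activations
    pred = relu(H) @ a / m
    resid = pred - y
    # Gradients of the empirical risk 0.5 * mean((pred - y)^2)
    grad_a = relu(H).T @ resid / (n * m)
    grad_W = ((resid[:, None] * (H > 0) * a[None, :]).T @ X) / (n * m)
    a -= lr * grad_a
    W -= lr * grad_W
    if t % 500 == 0:
        print(f"step {t:5d}  empirical risk {0.5 * np.mean(resid**2):.4f}")
```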
About the Speaker:
Kangjie Zhou received his Ph.D. from the Department of Statistics at Stanford University in May 2024 and will join the Department of Statistics at Columbia University as a postdoctoral researcher. He obtained his bachelor's degree from the School of Mathematical Sciences at Peking University in 2019. His research interests include theoretical statistics, machine learning, and optimization theory.

Your participation is warmly welcomed!

You are welcome to scan the QR code and follow the official WeChat account of the Peking University Center for Statistical Science for more seminar information!