Speaker: Erdong Wang (Peking University)
Time: 2025-01-09 16:00-17:00
Location: Tencent Meeting 568-7810-5726
Abstract:
Following the emergence of the Neural Tangent Kernel (NTK), feature learning theory has become a significant branch of deep learning theory. Unlike the NTK regime, this theory holds that neural networks can learn features or signals from the data during gradient descent. It typically assumes a specific data-generating model, such as a Gaussian mixture or sparse-coding model, and analyzes how a neural network, often a two-layer network, learns the signal and the noise. By reducing the dynamics of a complex network to “signal learning” and “noise memorization”, feature learning theory effectively describes the optimization behavior during training and the generalization performance after convergence. This approach has greatly enhanced the interpretability of deep learning, revealing the intrinsic interaction between the data and the network's training dynamics.
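As a rough illustration, the display below records a signal-plus-noise data model commonly assumed in this literature; this is only a sketch, and the specific model analyzed in the talk may differ.

% Illustrative signal-plus-noise data model (a common assumption in the feature
% learning literature; the exact setup discussed in the talk may differ):
\[
  y \sim \mathrm{Unif}\{-1,+1\}, \qquad
  x = \bigl(x^{(1)}, x^{(2)}\bigr), \qquad
  x^{(1)} = y\,\mu, \qquad
  x^{(2)} = \xi \sim \mathcal{N}\bigl(0, \sigma_p^{2} I_d\bigr),
\]
% A two-layer network trained by gradient descent either aligns its filters with the
% signal \mu ("signal learning") or fits the per-sample noise \xi ("noise memorization");
% which effect dominates determines generalization after convergence.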
In this talk, we will introduce some results on query lower bounds for log-concave sampling, based on a recent work by Chewi, Pont, Li, Lu, and Narayanan [2023]. We will also present a lower bound for sampling algorithms that simulate underdamped Langevin dynamics, based on a work by Cao, Lu, and Wang [2019]. We will then introduce the concept of "intrinsic freeness", which yields sharper bounds than the classical noncommutative Khintchine inequality, especially in cases where the latter is suboptimal, show how Gaussian interpolation can be used to prove these bounds, and finally illustrate the practical significance of the result through a number of examples.
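For orientation, the display below gives a standard form of underdamped (kinetic) Langevin dynamics for sampling from a density proportional to exp(-U(x)); the normalization is illustrative and may differ from the conventions used in the works cited above.

% Underdamped (kinetic) Langevin dynamics targeting exp(-U(x)) (standard textbook form;
% friction and temperature conventions vary, so this is illustrative only):
\[
  \mathrm{d}x_t = v_t\,\mathrm{d}t, \qquad
  \mathrm{d}v_t = -\gamma v_t\,\mathrm{d}t - \nabla U(x_t)\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}B_t,
\]
% where \gamma > 0 is the friction parameter and B_t is standard Brownian motion;
% sampling algorithms in this family simulate a time discretization of this SDE.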
About the Forum:
Each session of the forum invites a PhD student to give a relatively systematic and in-depth introduction to a frontier topic. Topics include, but are not limited to, machine learning, high-dimensional statistics, operations research and optimization, and theoretical computer science.
Your participation is warmly welcomed!

You are welcome to scan the QR code and follow the WeChat official account of the Center for Statistical Science, Peking University for more information about upcoming lectures!