Abstract:
In this talk I will present mathematical and numerical analysis, together with experiments, to understand a few basic computational issues in using neural networks as a particular form of nonlinear representation, and show how the network structure, activation function, and parameter initialization affect a network's approximation properties and the learning process. In particular, we propose a structured and balanced approximation using a multi-component and multi-layer neural network (MMNN) structure. Using sine as the activation function together with an initial scaling strategy, we show that scaled Fourier MMNNs (SFMMNNs) possess a distinctive adaptivity as a nonlinear approximation. Computational examples will be presented to verify our analysis and demonstrate the efficacy of our method. At the end, I will raise a few issues and challenges in using neural networks for scientific computing.
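As a rough illustration of the idea described above, a sine activation whose first-layer weights are scaled at initialization lets a shallow network represent oscillatory functions. The sketch below is a minimal toy in this spirit, not the speaker's SFMMNN method: the scale factor `omega`, the helper `init_sine_layer`, and the least-squares fit of only the outer weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_sine_layer(in_dim, width, omega=40.0):
    # Hypothetical scaling: multiplying the random weights by `omega`
    # widens the frequency band the first layer can represent.
    W = rng.uniform(-1.0, 1.0, size=(width, in_dim)) * omega
    b = rng.uniform(-np.pi, np.pi, size=width)
    return W, b

# Oscillatory target on [0, 1].
x = np.linspace(0.0, 1.0, 400)[:, None]
y = np.sin(10 * np.pi * x[:, 0])

W, b = init_sine_layer(1, 128, omega=40.0)
H = np.sin(x @ W.T + b)                      # hidden features, shape (400, 128)
a, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit outer (linear) weights only
err = np.linalg.norm(H @ a - y) / np.linalg.norm(y)
print(f"relative L2 fit error: {err:.2e}")
```

With the frequency band `|W| <= omega = 40` covering the target frequency `10π ≈ 31.4`, the linear fit over these sine features recovers the target accurately; with a small `omega` (e.g. 1) it does not, which is the kind of initialization effect the talk examines.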
About the Speaker:
Professor Hongkai Zhao is the Ruth F. DeVarney Distinguished Professor of Mathematics at Duke University. A leading expert in computational and applied mathematics, he works on numerical methods for PDEs, inverse problems, and scientific computing, with impactful applications in imaging, physics, and engineering. He is best known for developing the fast sweeping method for solving Eikonal and Hamilton–Jacobi equations. Professor Zhao is a Fellow of SIAM, a recipient of the Feng Kang Prize, and a winner of the 2024 Frontiers of Science Award. He previously held faculty positions at UC Irvine and Stanford University.
Your participation is warmly welcomed!

Welcome to scan the QR code and follow the WeChat official account of the Peking University Center for Statistical Science for more seminar information!