Speaker: Ganghua Wang (University of Minnesota)
Time: 2023-06-08 10:00-11:00
Venue: Room 1114, Sciences Building No. 1
Abstract:
Recently, deep network pruning has attracted significant attention as a way to enable rapid deployment of AI on small devices with computation and memory constraints. Many deep pruning algorithms have been proposed with impressive empirical success. However, the theoretical understanding of model compression is still limited. One problem is to understand whether a network is more compressible than another of the same structure. Another problem is to quantify how much one can prune a network with theoretically guaranteed accuracy degradation. This talk addresses these two fundamental problems by using the sparsity-sensitive ℓq norm (0 < q < 1) to characterize compressibility, and provides a relationship between the soft sparsity of the network weights and the degree of compression under a controlled accuracy degradation bound. Next, we propose the PQ Index (PQI) to measure the potential compressibility of deep neural networks and use it to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm. Our experiments demonstrate that the proposed adaptive pruning algorithm, with a proper choice of hyper-parameters, is superior to iterative pruning algorithms such as lottery ticket-based pruning methods in terms of both compression efficiency and robustness.
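As a rough illustration of how a norm-ratio sparsity measure of this kind can be computed, here is a minimal Python sketch. The specific definition used below, I_{p,q}(w) = 1 - d^{1/q - 1/p} ||w||_p / ||w||_q with 0 < p < q, and the choice p = 0.5, q = 1.0 are assumptions made for illustration and may differ from the exact formulation presented in the talk.

```python
import numpy as np

def pq_index(w, p=0.5, q=1.0):
    """Norm-ratio sparsity measure of a weight vector (illustrative sketch).

    Assumes a PQ-Index-style definition: 1 - d**(1/q - 1/p) * ||w||_p / ||w||_q,
    with 0 < p < q. Values near 0 indicate weights spread evenly across the
    network (little room to prune); values near 1 indicate the magnitude is
    concentrated in a few weights (highly compressible).
    """
    w = np.abs(np.ravel(np.asarray(w, dtype=float)))
    d = w.size
    norm_p = np.sum(w ** p) ** (1.0 / p)
    norm_q = np.sum(w ** q) ** (1.0 / q)
    return 1.0 - d ** (1.0 / q - 1.0 / p) * norm_p / norm_q

# A dense, uniform vector scores ~0; a nearly one-hot vector scores close to 1.
print(pq_index(np.ones(1000)))               # ~0.0
print(pq_index(np.r_[10.0, np.zeros(999)]))  # ~0.999
```

In a pruning loop, a measure like this could be recomputed after each pruning round to decide how aggressively to prune next, which is the general idea behind sparsity-informed adaptive pruning.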
About the Speaker:
Ganghua Wang received the B.S. degree from Peking University, Beijing, China, in 2019. Since 2019, he has been a Ph.D. student with the School of Statistics, University of Minnesota, Twin Cities, MN, USA. His research interests include the foundations of machine learning theory and trustworthy machine learning.
Your participation is warmly welcomed!
You are welcome to scan the QR code and follow the WeChat official account of the PKU Center for Statistical Science for more seminar information!