1. Ye, H., Luo, L., & Zhang, Z. (2020). Accelerated Proximal Subsampled Newton Method. IEEE Transactions on Neural Networks and Learning Systems, 32(10), 4374-4388.
2. Li, X., Wang, S., Chen, K., & Zhang, Z. (2021, July). Communication-Efficient Distributed SVD via Local Power Iterations. In International Conference on Machine Learning (pp. 6504-6514). PMLR.
3. Lin, D., Sun, R., & Zhang, Z. (2021). Faster Directional Convergence of Linear Neural Networks under Spherically Symmetric Data. Advances in Neural Information Processing Systems, 34, 4647-4660.
4. Lin, D., Ye, H., & Zhang, Z. (2021). Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence. Advances in Neural Information Processing Systems, 34, 6646-6657.
5. Ye, H., Luo, L., & Zhang, Z. (2020). Nesterov's Acceleration for Approximate Newton. Journal of Machine Learning Research, 21(142), 1-37.
6. Chen, C., Gu, M., Zhang, Z., Zhang, W., & Yu, Y. (2020, June). Efficient Spectrum-Revealing CUR Matrix Decomposition. In International Conference on Artificial Intelligence and Statistics (pp. 766-775). PMLR.
7. Li, X., Huang, K., Yang, W., Wang, S., & Zhang, Z. (2019). On the Convergence of FedAvg on Non-IID Data. arXiv preprint arXiv:1907.02189.
8. Xie, G., Luo, L., Lian, Y., & Zhang, Z. (2020, July). Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems. In International Conference on Machine Learning (pp. 10504-10513). PMLR.
9. Li, X., Wang, S., & Zhang, Z. (2020, April). Do Subsampled Newton Methods Work for High-Dimensional Data? In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 04, pp. 4723-4730).