Friday, March 06, 2020
1:00 PM
Wegmans 2506
Rupam Acharyya
University of Rochester
Fantastic Pruning Methods and How to Explain Them?
Deep learning architectures require a huge number of parameters, which causes computational inefficiency during inference and deployment. Drawing motivation from synaptic diversity in the brain, we propose a novel diversity-based edge pruning technique for neural networks using Determinantal Point Processes (DPPs). We adopt the teacher-student framework to derive generalization error bounds (expected error on unseen test data) for the pruned networks. Based on these error bounds, we theoretically show that (1) our DPP-based pruning method outperforms random pruning and (2) our DPP-based edge pruning method outperforms previous DPP-based node pruning methods. Finally, we evaluate our method on real datasets and observe that the pruned models, with far fewer parameters, achieve performance comparable to the unpruned models.
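
To make the idea of diversity-based edge pruning concrete, here is a minimal, self-contained sketch. It is not the speaker's implementation: the Gram-matrix kernel over edge contributions, the greedy MAP selection (a common stand-in for exact DPP sampling), and the per-unit pruning loop are all illustrative assumptions.

```python
import numpy as np

def greedy_dpp_select(L, k):
    """Greedily pick k items approximately maximizing det(L[S, S]).

    L : (n, n) PSD similarity kernel over candidate items.
    Greedy log-det maximization is a standard approximation to
    finding the mode of a k-DPP.
    """
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
    return selected

def prune_incoming_edges(w_i, X, keep_frac=0.5):
    """Keep a diverse subset of the edges feeding one output unit.

    w_i : (in_dim,) incoming weights of a single unit.
    X   : (batch, in_dim) activations of the previous layer.
    Edge j is featurized by its contribution w_i[j] * X[:, j];
    the DPP kernel is the Gram matrix of those contribution
    vectors (an assumed kernel choice for this sketch).
    """
    phi = X * w_i                       # column j = edge j's contribution
    L = phi.T @ phi                     # (in_dim, in_dim) similarity kernel
    L += 1e-6 * np.eye(L.shape[0])      # jitter for numerical stability
    k = max(1, int(keep_frac * len(w_i)))
    keep = greedy_dpp_select(L, k)
    mask = np.zeros(len(w_i), dtype=bool)
    mask[keep] = True
    return w_i * mask, mask             # zero out the non-selected edges

# Example: prune half the incoming edges of one unit in a 16-input layer
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
w = rng.normal(size=16)
w_pruned, mask = prune_incoming_edges(w, X, keep_frac=0.5)
print("kept edges:", np.flatnonzero(mask))
```

Because the kernel rewards subsets whose contribution vectors are far from collinear, the selection keeps edges carrying non-redundant signal, which is the intuition behind preferring a DPP over random pruning.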