Friday, December 07, 2018
3:00 PM
Goergen 108
Ph.D. Thesis Proposal
Haichuan Yang
University of Rochester
Optimization-based Resource-Constrained Deep Neural Network Compression

Deep Neural Networks (DNNs) are increasingly applied in AI tasks such as computer vision, speech recognition, and natural language processing. Although DNNs achieve superior accuracy, their inference requires orders of magnitude more computation and energy than conventional models. Consequently, standard DNNs are impractical in highly resource-constrained environments such as smartphones and wearable devices. Model compression prunes DNN parameters to reduce resource requirements while maintaining accuracy. Many existing works target proxy resource metrics such as the number of (nonzero) parameters or FLOPs (the number of floating point operations), which are easy to control and evaluate.
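As a concrete illustration of such proxy metrics, the following sketch computes the parameter and FLOP counts of a single convolutional layer (the layer shapes are hypothetical, and FLOPs are counted under the common convention of two operations per multiply-accumulate):

    # Proxy resource metrics for a 2-D convolutional layer.
    def conv2d_metrics(c_in, c_out, k, h_out, w_out):
        weights = c_out * c_in * k * k   # one k x k filter per (in, out) channel pair
        params = weights + c_out         # plus one bias per output channel
        macs = weights * h_out * w_out   # each weight fires once per output position
        flops = 2 * macs                 # multiply + add per MAC
        return params, flops

    # Example: a VGG-style 3x3 layer mapping 64 to 128 channels on a 56x56 output map.
    params, flops = conv2d_metrics(64, 128, 3, 56, 56)
    print(f"parameters: {params:,}  FLOPs: {flops:,}")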

In this proposal, we consider practical resource metrics such as energy and latency, which depend heavily on the specific platform on which the DNN is deployed. We propose to compress a given DNN model to meet a stringent energy budget. This leads to an optimization problem with a previously unconsidered constraint: the (estimated) energy consumption of the DNN must not exceed the given budget. The energy consumption therefore needs to be modeled explicitly, and we propose two estimation approaches for different settings. The first constructs a quantitative DNN energy model by counting the hardware operations needed for one inference (Chapter 2); it relies on domain knowledge of the specific platform (i.e., the hardware and DNN library). The second models the energy consumption with a regression function fitted to energy measurements collected on the given platform (Chapter 3); this model is portable across hardware platforms and requires no domain knowledge. For these two approaches, we apply Projected Stochastic Gradient Descent (PSGD) and the Alternating Direction Method of Multipliers (ADMM), respectively, to solve the constrained optimization problems. Our results show that the compressed models meet tight energy budgets with little accuracy loss.
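Schematically (the notation below is assumed for illustration, not taken verbatim from the thesis), the budgeted compression problem can be written as

    \min_{W} \; \mathcal{L}(W) \quad \text{s.t.} \quad E(W) \le E_{\mathrm{budget}},

where \mathcal{L} is the training loss, W the network weights, and E(\cdot) the estimated energy. PSGD interleaves stochastic gradient steps with projections onto the feasible set,

    W^{t+1} = \Pi_{\{W \,:\, E(W) \le E_{\mathrm{budget}}\}}\big(W^{t} - \eta_t \nabla \mathcal{L}(W^{t})\big),

while ADMM splits the constraint off into an auxiliary variable Z through an indicator function,

    \min_{W, Z} \; \mathcal{L}(W) + \mathbb{I}_{\{E(Z) \le E_{\mathrm{budget}}\}}(Z) \quad \text{s.t.} \quad W = Z,

and alternates updates of W, Z, and the dual variable.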

In Chapter 4, we propose new algorithms aimed at improving the effectiveness of DNN compression. Most existing compression methods use heuristic criteria to decide which parameters to remove. Although PSGD has stronger theoretical support, it does not bring significant improvement in our experiments. Recent sparse optimization methods reformulate the sparsity constraint as a continuous penalty term; we propose analogous reformulations of the DNN compression problem to improve the accuracy and stability of the compressed models.
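One representative reformulation from the sparse-optimization literature (a difference-of-convex penalty; the thesis may adopt a different form) replaces the hard sparsity constraint with an exact continuous penalty. Observing that

    \|W\|_0 \le k \;\iff\; \|W\|_1 - \sum_{i=1}^{k} |W|_{[i]} = 0,

where |W|_{[i]} denotes the i-th largest entry of W in magnitude, the constrained problem can be relaxed to

    \min_{W} \; \mathcal{L}(W) + \rho \Big( \|W\|_1 - \sum_{i=1}^{k} |W|_{[i]} \Big),

with \rho > 0 controlling how strictly the sparsity level k is enforced.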

Advisor: Prof. Ji Liu (Computer Science)

Committee: Prof. Daniel Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), Prof. Yuhao Zhu (Computer Science), Prof. Engin Ipek (Electrical and Computer Engineering)