Resource-Guaranteed Deep Learning

Deep Neural Networks (DNNs) are increasingly deployed in highly resource-constrained environments such as autonomous drones and wearable devices, which impose strict energy budgets and/or real-time requirements. While much recent work has studied empirical techniques to reduce the energy consumption and latency of DNNs, our research is the first to propose an end-to-end DNN training framework that provides quantitative resource guarantees. Our vision is that strong resource-consumption guarantees lead to more predictable system behavior with less variability, and thus pave the way for DNN deployment in mission-critical, resource-constrained environments.

The key idea is to formulate the DNN training process as a constrained optimization problem in which the resource budget imposes a previously unconsidered optimization constraint. The problem can then be solved accurately and efficiently using the rich theory of mathematical optimization, enabling rapid and automated software-hardware co-design. This constrained-optimization approach is feasible fundamentally because of the regular behavior of DNN algorithms and the underlying hardware architecture, which allows system metrics such as latency, energy, and model size to be modeled analytically or statistically. We explore the expressiveness and accuracy of such resource models, and investigate an optimization framework that leverages them to maximize DNN accuracy while satisfying system resource constraints.
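As a minimal sketch (the notation here is illustrative rather than drawn from a specific paper), let $\theta$ denote the network weights together with any architectural hyperparameters, $\mathcal{L}$ the task loss, and $R_1, \dots, R_k$ analytic or statistical models of resources such as latency, energy, and size, with budgets $b_1, \dots, b_k$. Training then becomes

$$
\min_{\theta} \; \mathcal{L}(\theta)
\quad \text{subject to} \quad
R_i(\theta) \le b_i, \qquad i = 1, \dots, k,
$$

which could be attacked, for example, by Lagrangian relaxation: alternate gradient descent on $\theta$ with gradient ascent on dual variables $\lambda_i \ge 0$ for the penalized objective $\mathcal{L}(\theta) + \sum_{i} \lambda_i \bigl( R_i(\theta) - b_i \bigr)$.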

Representative Publications