Robust, Explainable Deep Learning

While recent advances in AI have generated widespread excitement for deep learning–based intelligent services, pushing these services closer to reality requires addressing AI's robustness and safety: how do we architect deep learning–based systems so that they are robust against uncertainty and failure, and so that they perform as intended without causing harmful behavior? Addressing the safety issue will require close collaboration among different computing communities, and we believe computer architects must play a key role.

Safety in cognitive computing is inherently tied to the accuracy of deep learning. Most deep learning systems today focus on average-case accuracy. In mature application domains such as image classification, a well-trained model can reach an average accuracy of 99 percent. However, just as datacenter systems suffer from long-tail latency, deep learning systems suffer from tail accuracy: a small fraction of requests exhibit poor accuracy due to uncertainties inherent in deep learning models.
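To make the analogy concrete, the sketch below is a minimal, hypothetical Python example (not taken from our publications) showing how a model with high average accuracy can still have a long accuracy tail once results are broken down per class; the class counts and error rates are purely illustrative.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    """Accuracy computed separately for each class."""
    accs = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            accs.append((y_pred[mask] == c).mean())
    return np.array(accs)

# Hypothetical data: 100 classes, with a handful of "hard" classes
# whose predictions are wrong 40 percent of the time.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 100, size=50_000)
y_pred = y_true.copy()
hard = (y_true < 3)
flip = hard & (rng.random(y_true.size) < 0.4)
y_pred[flip] = (y_true[flip] + 1) % 100

accs = per_class_accuracy(y_true, y_pred, 100)
print(f"average per-class accuracy:  {accs.mean():.3f}")  # looks excellent (~0.99)
print(f"worst-class (tail) accuracy: {accs.min():.3f}")   # the hidden tail (~0.60)
```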

We work on improving the robustness and safety of deep learning in two ways. First, we believe that explainability and accountability are prerequisites for robustness and safety, so we build tools that allow us to better explain why deep learning models succeed or fail. Second, we shift the burden of ensuring safety and robustness from algorithm designers to system designers by building a safe whole out of less-safe parts.
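One way to illustrate the idea of a safe whole built from less-safe parts, used here purely as a sketch under the assumption of independently failing components rather than as our actual method, is redundancy with majority voting across replicated models:

```python
import numpy as np

rng = np.random.default_rng(1)

def unreliable_model(x, error_rate=0.1):
    """Stand-in for a 'less-safe part': returns the true binary label,
    but makes an independent mistake with probability error_rate."""
    wrong = rng.random(x.shape[0]) < error_rate
    return np.where(wrong, 1 - x, x)  # flip the label on error

def safe_ensemble(x, n_models=5):
    """A 'safer whole': majority vote over independently failing parts."""
    votes = np.stack([unreliable_model(x) for _ in range(n_models)])
    return (votes.mean(axis=0) > 0.5).astype(int)

labels = rng.integers(0, 2, size=100_000)
single_acc = (unreliable_model(labels) == labels).mean()
ensemble_acc = (safe_ensemble(labels) == labels).mean()
print(f"single model accuracy: {single_acc:.4f}")   # ~0.90
print(f"5-model vote accuracy: {ensemble_acc:.4f}")  # ~0.99 if errors are independent
```

The point of the sketch is architectural: reliability comes from how the parts are composed, not from making any single part perfect.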

Representative Publications

Adversarial Defense Through Network Profiling Based Path Extraction
[CVPR 2019] Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, Yuhao Zhu
Cognitive Computing Safety: The New Horizon for Reliability
[IEEE Micro 2017] Yuhao Zhu, Vijay Janapa Reddi