Robust and Safe Machine Learning

Deep Neural Networks (DNNs), and machine learning in general, are not robust and cannot yet be safely deployed in safety-critical systems. Perhaps the most infamous example is the fatal collision of a Tesla Model S with a tractor trailer, whose white side was mistaken for the brightly lit sky by Tesla's Autopilot system. Beyond mission-critical scenarios such as autonomous driving, the robustness issue also obstructs the deployment of DNNs in privacy- and security-sensitive domains such as biometric authentication and even crime prediction.

To accelerate the adoption of ML-enabled services, we must address ML's robustness and safety: how to architect ML systems that are robust against uncertainties and failures, and how to guarantee that they perform as intended without causing harmful behavior. Addressing the safety issue will require close collaboration among different computing communities, and we believe computer architects must play a key role.

We work on improving the robustness and safety of machine learning in two ways. First, we believe that explainability and accountability are prerequisites for robustness and safety. We propose new algorithmic frameworks that better explain why deep learning models fail and that detect adversarial samples efficiently at runtime, so that proper countermeasures can be taken. Second, we build new training frameworks that provide quantitative resource guarantees. Our vision is that strong resource-consumption guarantees lead to more predictable system behavior with less variability, and thus pave the way for deploying DNNs in mission-critical environments.
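To make the runtime-detection idea concrete, here is a minimal illustrative sketch (not the path-extraction algorithm from our CVPR 2019 paper): an input is flagged as suspicious when its hidden activations deviate strongly from the activation profile collected on benign data for the predicted class. The network, function names, and threshold below are hypothetical and chosen only for illustration.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier that also exposes its hidden activations."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        h = self.features(x)              # hidden activations used for profiling
        return self.classifier(h), h

@torch.no_grad()
def build_profiles(model, benign_x, benign_y, num_classes=10):
    """Mean hidden activation per class, computed on benign training data."""
    _, h = model(benign_x)
    return torch.stack([h[benign_y == c].mean(dim=0) for c in range(num_classes)])

@torch.no_grad()
def detect(model, x, profiles, threshold):
    """Return (predicted class, is_suspicious) for a single input."""
    logits, h = model(x.unsqueeze(0))
    pred = logits.argmax(dim=1).item()
    distance = torch.norm(h.squeeze(0) - profiles[pred]).item()
    # A large deviation from the class profile suggests a possible adversarial input.
    return pred, distance > threshold
```

In practice, the threshold would be calibrated on held-out benign data (for example, to meet a target false-positive rate), and richer per-layer statistics would replace the simple mean-activation profile used in this sketch.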

Recent Publications

Robust Deep Learning

Adversarial Defense Through Network Profiling Based Path Extraction
[CVPR 2019] Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, Yuhao Zhu
Cognitive Computing Safety: The New Horizon for Reliability
[IEEE Micro 2017] Yuhao Zhu, Vijay Janapa Reddi

Resource-Guaranteeing Deep Learning