Monday, March 08, 2021
12:00 PM
Zoom Meeting ID: 915 0126 7742
Tianyi Zhou
University of Washington
Learning Like a Human: How, Why, and When
Abstract:
Machine learning (ML) can surpass humans on certain complicated yet specific tasks. However, most ML methods treat samples/tasks equally during training, e.g., by taking a random batch per step and repeating many epochs of SGD over all the data. This may work well on well-processed data given sufficient computation, but it is extraordinarily suboptimal and inefficient from a human perspective: we would never teach children or students in such a way. By contrast, human learning is more strategic, selecting or generating the training content via experienced teachers, collaboration, curiosity- and diversity-driven exploration, tracking of memory and progress, sub-tasking, etc., strategies that remain underexplored in ML. The selection and scheduling of data/tasks is a type of intelligence as important as the optimization of the model. My recent work aims to bridge this gap. As we enter a new era of hybrid intelligence between humans and machines, it is important to make AI not only act like humans but also benefit from human learning strategies.

In this talk, I will present curriculum learning techniques we developed for improving supervised/semi-supervised/self-supervised learning, robust learning with noisy labels, reinforcement learning, ensemble learning, etc., in settings where the data are imperfect and a curriculum can therefore make a big difference.

Firstly, I will show how to translate human learning strategies into discrete-continuous optimization problems, which are challenging to solve in general but for which efficient and provable algorithms can be developed using submodular and convex/non-convex optimization. I will show that curiosity and diversity play vital roles in early learning.

Secondly, we build both empirical and theoretical connections between curriculum learning and the training dynamics of ML models. Empirically, deep neural networks are fast to memorize some data but also fast to forget others, so we can accurately identify the easily forgotten data from training dynamics observed at very early stages and focus future training mainly on them. Moreover, we find that the consistency of a model's output over time for an unlabeled sample is a reliable indicator of its pseudo-label's correctness in self-supervised learning, and a descriptor of the forgetting effects on historically learned data. These discoveries are consistent with human learning and lead to more efficient curricula for a rich class of ML problems. Theoretically, we aim to find a curriculum that optimizes the training dynamics in continuous time. Interestingly, the resulting curriculum matches our empirical understanding and naturally relates to the tangent/path kernel in recent deep learning theory.
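To make the forgetting-based curriculum idea concrete, here is a minimal PyTorch sketch (toy data and hypothetical details, not the speaker's actual implementation): it counts per-example "forgetting events" (a correct prediction that later becomes incorrect) during a few early epochs, then biases future sampling toward frequently forgotten examples.

    # Minimal sketch of a forgetting-driven curriculum; assumed details,
    # not the speaker's exact method. A "forgetting event" is a transition
    # from a correct prediction to an incorrect one for the same example.
    import torch
    from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

    # Toy data: 1000 examples, 20 features, 2 classes (illustration only).
    X = torch.randn(1000, 20)
    y = torch.randint(0, 2, (1000,))
    dataset = TensorDataset(X, y)

    model = torch.nn.Linear(20, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    prev_correct = torch.zeros(len(dataset), dtype=torch.bool)
    forget_events = torch.zeros(len(dataset))

    # Phase 1: observe training dynamics over a few early epochs,
    # iterating over example indices so per-example stats can be tracked.
    index_loader = DataLoader(range(len(dataset)), batch_size=64, shuffle=True)
    for epoch in range(5):
        for idx in index_loader:
            xb, yb = X[idx], y[idx]
            logits = model(xb)
            loss = loss_fn(logits, yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
            correct = logits.argmax(dim=1).eq(yb)
            # Count a forgetting event: previously correct, now wrong.
            forget_events[idx] += (prev_correct[idx] & ~correct).float()
            prev_correct[idx] = correct

    # Phase 2: curriculum -- sample easily forgotten examples more often
    # in subsequent training (the +1 keeps every example reachable).
    weights = 1.0 + forget_events
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset))
    curriculum_loader = DataLoader(dataset, batch_size=64, sampler=sampler)

Note that the curriculum signal here comes entirely from cheap bookkeeping of early training dynamics; no extra forward passes or held-out data are required, which is what makes early-stage dynamics attractive as a scheduling criterion.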


Bio:
Tianyi Zhou (https://tianyizhou.github.io) is a Ph.D. candidate in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Professor Jeff A. Bilmes. His research interests are in machine learning, optimization, and natural language processing. His recent research focuses on transferring human learning strategies, e.g., curriculum and sub-tasking, to machine learning in the wild, especially when the data are unlabeled, redundant, noisy, biased, or collected via interaction. These results improve supervised/semi-supervised/self-supervised learning, robust learning with noisy data, reinforcement learning, meta-learning, ensemble methods, etc. He has published ~50 papers at NeurIPS, ICML, ICLR, AISTATS, NAACL, COLING, KDD, AAAI, IJCAI, Machine Learning (Springer), IEEE TIP, IEEE TNNLS, IEEE TKDE, etc., with ~2000 citations. He is the recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE Computer Society Technical Committee on Scalable Computing (TCSC) Most Influential Paper Award.