Tuesday, May 04, 2021
11:30 AM
https://rochester.zoom.us/j/9241564885
Ph.D. Thesis Proposal
Samuel Lerman
University of Rochester
Relational Reasoning, Memory, And Beyond (In Deep Learning)
Parametric learning suffers from often slow and catastrophic updates, and from representations that vary inconsistently across domains. Many architectural inductive biases have mitigated this representational variance, including convolution, recurrence, and attention, effectively endowing neural networks with translation invariance, scale invariance, rotational invariance, temporal invariance, permutation invariance, and more. In this work, we will explore relational invariance, a key property of general intelligence, as well as relational reasoning in neural networks more broadly. Reasoning in general requires understanding not just the whole, but also its parts and their interactions. We highlight our past contributions to explaining and interpreting these interactions in a CNN's inferences on relational visual question answering, then tie this principle of relational reasoning to self-supervised learning and show how to promote relational invariance in contrastive learning, a common self-supervised learning method. The next portion of our proposal revolves around exploiting the resulting classifier as a similarity metric for episodic control, a kind of reinforcement learning that takes advantage of a non-parametric and expanding database of memories. To this end, we will attempt to mitigate the poor sample efficiency of parametric deep RL, improve few-shot learning as well as its close cousin, meta-learning, and temper that notorious deficit of deep learning, catastrophic forgetting. To ground this vision, we present the plan alongside promising results already obtained to this point.
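To make the episodic-control idea mentioned above concrete, the following is a minimal, hypothetical sketch: a non-parametric memory of (embedding, action, return) entries, queried with a similarity metric to estimate values without gradient updates. The cosine similarity here merely stands in for a learned metric such as one derived from a contrastive classifier; the class, names, and parameters are illustrative assumptions, not the proposal's actual implementation.

```python
import numpy as np


class EpisodicMemory:
    """Toy episodic-control memory: grows by appending, acts by nearest-neighbor value estimates."""

    def __init__(self, num_actions, k=5):
        self.num_actions = num_actions
        self.k = k                      # neighbors averaged per value estimate
        self.keys = []                  # stored state embeddings
        self.actions = []               # actions taken in those states
        self.returns = []               # observed discounted returns

    def write(self, embedding, action, ret):
        """Append one memory; the database expands with no parametric update."""
        self.keys.append(np.asarray(embedding, dtype=np.float64))
        self.actions.append(action)
        self.returns.append(float(ret))

    def _similarity(self, query):
        """Cosine similarity of the query to every stored key (placeholder for a learned metric)."""
        keys = np.stack(self.keys)
        q = query / (np.linalg.norm(query) + 1e-8)
        k = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
        return k @ q

    def value(self, query, action):
        """Estimate Q(query, action) as the mean return of the k most similar memories with that action."""
        idx = [i for i, a in enumerate(self.actions) if a == action]
        if not idx:
            return 0.0
        sims = self._similarity(np.asarray(query, dtype=np.float64))[idx]
        rets = np.asarray(self.returns)[idx]
        top = np.argsort(sims)[-self.k:]
        return float(rets[top].mean())

    def act(self, query):
        """Choose the action with the highest episodic value estimate."""
        return int(np.argmax([self.value(query, a) for a in range(self.num_actions)]))


# Usage: store synthetic memories, then query a new state embedding.
memory = EpisodicMemory(num_actions=2)
rng = np.random.default_rng(0)
for _ in range(20):
    emb = rng.normal(size=8)
    memory.write(emb, action=int(emb[0] > 0), ret=emb[0])
print(memory.act(rng.normal(size=8)))
```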

Advisor: Prof. Chenliang Xu (Computer Science)
Committee: Prof. Henry Kautz (Computer Science), Prof. Robbie Jacobs (Brain and Cognitive Sciences/Computer Science), and Prof. Charles Venuto (Neurology)