Wednesday, May 05, 2021
2:00 PM
https://rochester.zoom.us/s/3626732673
Ph.D. Thesis Proposal
Songyang Zhang
University of Rochester
Temporal Representation Learning in Video and Language Understanding
Temporal representation learning in video and language understanding is a fundamental problem in computer vision that is closely related to natural language processing, machine learning, and other research areas. It encompasses a set of tasks such as video classification, temporal action localization, and moment localization with natural language.

How to model and leverage temporal information remains challenging. Recent efforts in video and language understanding are mostly driven by the following pipeline: (1) pretraining a video encoder on short video clips for the action recognition problem, and (2) fixing the video encoder and modeling long-term temporal information for high-level understanding tasks. In this thesis, I will describe our solutions to temporal representation learning for both steps. The solutions are divided into two parts. The first part is video representation learning, which explores better video encoders for video and language understanding from different aspects, such as long-term video representation learning, 3D-aware video representation learning, and video-text pretraining. In the second part, we introduce our solutions for temporal representation learning in high-level understanding applications, such as moment localization with natural language, moment localization with temporal language, and video-aided unsupervised grammar induction.
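As a rough illustration of this two-stage pipeline, the sketch below freezes a pretrained clip encoder and trains only a temporal model on top of its clip-level features. The module names, feature dimensions, and the choice of a GRU are illustrative assumptions, not the architectures studied in the thesis.

import torch
import torch.nn as nn

class TwoStagePipeline(nn.Module):
    def __init__(self, clip_encoder: nn.Module, feat_dim=512, num_classes=20):
        super().__init__()
        # Stage 1: a video encoder pretrained for action recognition on
        # short clips; it is kept fixed in stage 2.
        self.clip_encoder = clip_encoder
        for p in self.clip_encoder.parameters():
            p.requires_grad = False
        # Stage 2: a lightweight model over clip features that captures
        # long-term temporal information (a GRU here, as one simple choice).
        self.temporal_model = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, num_clips, channels, frames, height, width)
        b, t = clips.shape[:2]
        with torch.no_grad():  # the encoder stays frozen
            feats = self.clip_encoder(clips.flatten(0, 1))  # (b*t, feat_dim)
        out, _ = self.temporal_model(feats.view(b, t, -1))
        return self.head(out.mean(dim=1))  # video-level prediction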

In this proposal, I present our preliminary work on the second part of the thesis. I primarily focus on the moment localization with natural language task, i.e., localizing the start and end times of the video segment described by a given sentence. Specifically, we propose a simple and efficient framework that can model context among different moments. We further improve this framework with a multi-scale version and demonstrate that it can be extended to the temporal action localization task. We also study moment localization with temporal language, i.e., localizing video timestamps with sentences that involve relations among multiple events, and propose a novel framework that learns the relationships among the events within the text. Finally, while conventional unsupervised grammar induction methods rely purely on text, we propose a novel framework that improves existing methods by leveraging multi-modal video representations. At the end of this proposal, we also outline some promising research directions for the first part.
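To make the idea of modeling context among moments concrete, here is a minimal sketch in which every candidate (start, end) pair is placed on a 2D map, fused with a sentence embedding, and scored; a small 2D convolution over the map lets overlapping candidates exchange context. The shapes, average pooling, and elementwise fusion are assumptions for illustration, not the exact framework proposed in the thesis.

import torch
import torch.nn as nn

class MomentScorer(nn.Module):
    """Score every candidate (start, end) moment against a sentence."""
    def __init__(self, feat_dim=512):
        super().__init__()
        # Convolution over the 2D moment map models context among
        # neighboring (overlapping) candidate moments.
        self.context = nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.score = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, clip_feats, sent_emb):
        # clip_feats: (batch, num_clips, dim); sent_emb: (batch, dim)
        b, t, d = clip_feats.shape
        pad = clip_feats.new_zeros(b, 1, d)
        cum = torch.cat([pad, clip_feats.cumsum(dim=1)], dim=1)  # (b, t+1, d)
        moment = clip_feats.new_zeros(b, t, t, d)
        for i in range(t):          # start clip index
            for j in range(i, t):   # end clip index
                # average-pool the clips spanned by candidate moment (i, j)
                moment[:, i, j] = (cum[:, j + 1] - cum[:, i]) / (j - i + 1)
        fused = moment * sent_emb[:, None, None, :]  # simple fusion choice
        x = fused.permute(0, 3, 1, 2)                # (b, d, t, t)
        return self.score(torch.relu(self.context(x))).squeeze(1)  # (b, t, t)

The highest-scoring cell with start <= end would give the predicted segment; cells below the diagonal are invalid and would be masked out in practice.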

Advisor: Prof. Jiebo Luo (Computer Science)
Committee: Prof. Chenliang Xu (Computer Science), Prof. Dan Gildea (Computer Science), and Dr. Linfeng Song (Tencent AI Lab)