Monday, April 29, 2019
9:30 AM
Wegmans Hall 2506
Ph.D. Thesis Defense
Linfeng Song
University of Rochester
Tackling Graphical NLP Problems with Graph Recurrent Networks

How to properly model graphs is a long-standing and important problem in natural language processing, where popular graph types include knowledge graphs, semantic graphs, and dependency graphs. Compared with other data structures, such as sequences and trees, graphs are generally more powerful in representing complex correlations among entities. For example, a knowledge graph stores real-world entities (such as “Barack Obama” and “U.S.”) and their relations (such as “live in” and “led by”). Properly encoding a knowledge graph benefits downstream applications, such as question answering and knowledge discovery. Modeling graphs is also very challenging, probably because graphs usually contain massive numbers of relations as well as cycles. For instance, a tree with n nodes has n − 1 edges (relations), while a complete graph with n nodes can have O(n²) edges (relations).

Recent years have witnessed the success of deep learning, especially RNN-based models, on many NLP problems, including machine translation (Cho et al., 2014) and question answering (Shen et al., 2017). In addition, RNNs and their variants have been extensively studied on several graph problems, showing preliminary success. Despite these successes, RNN-based models suffer from several major drawbacks. First, they can only consume sequential data, so input graphs must be linearized, which loses important structural information. In particular, nodes that are close in the original graph can end up far apart after linearization, making it very hard for RNNs to model their relation. Second, the serialized results are usually long, so RNNs take a long time to encode them.

In this thesis, we propose a novel graph neural network, named graph recurrent network (GRN). GRN maintains a hidden state for each graph node and relies on an iterative message passing framework to update these hidden states in parallel. Within each iteration, neighboring nodes exchange information with each other, so that each node absorbs increasingly global knowledge. Unlike RNNs, which require an absolute order (such as left-to-right order) for execution, GRN requires only relative neighboring information, making it very general and flexible across a variety of data structures.
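To make the message passing concrete, here is a minimal sketch in PyTorch. The class name GRNLayer, the mean aggregation over neighbors, and the use of a GRU cell as the update function are illustrative assumptions, not the exact formulation defended in the thesis.

import torch
import torch.nn as nn

class GRNLayer(nn.Module):
    """One iteration of graph recurrent message passing (illustrative sketch)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Assumption: a GRU cell serves as the node-state update function.
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, hidden_dim) current hidden state of every node
        # adj: (num_nodes, num_nodes) adjacency matrix, 1.0 where an edge exists
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        messages = (adj @ h) / deg  # mean of neighboring states
        # All nodes are updated in parallel from their aggregated neighborhoods.
        return self.cell(messages, h)

# Running several iterations lets information propagate beyond immediate
# neighbors, so each node gradually absorbs more global knowledge.
num_nodes, hidden_dim, steps = 5, 16, 4
h = torch.randn(num_nodes, hidden_dim)
adj = torch.eye(num_nodes).roll(1, dims=0) + torch.eye(num_nodes).roll(-1, dims=0)  # a 5-node cycle
layer = GRNLayer(hidden_dim)
for _ in range(steps):
    h = layer(h, adj)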

We study our GRN model on four very different tasks, including machine reading comprehension, relation extraction, and machine translation. Some tasks (such as machine translation) require generating sequences, while others require only a single decision (classification). Some take undirected graphs without edge labels, while others take directed graphs with edge labels. To accommodate these important differences, we gradually enhance our GRN model, for example by incorporating edge labels and by adding an RNN decoder. Carefully designed experiments show the effectiveness of GRN on all these tasks.
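As one illustration of such an enhancement, the following sketch adds edge labels to the messages for directed graphs. The name LabeledGRNLayer and the additive combination of label embeddings with source states are hypothetical choices for illustration; the thesis model may combine them differently.

import torch
import torch.nn as nn

class LabeledGRNLayer(nn.Module):
    """Hypothetical GRN iteration for directed graphs with labeled edges."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # Assumption: each edge label gets a learned embedding that is
        # added to the source node's state when forming a message.
        self.label_emb = nn.Embedding(num_labels, hidden_dim)
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, edges) -> torch.Tensor:
        # h: (num_nodes, hidden_dim); edges: (source, target, label) triples.
        messages = torch.zeros_like(h)
        counts = torch.zeros(h.size(0), 1)
        for src, dst, label in edges:
            lab = self.label_emb(torch.tensor(label))
            messages[dst] = messages[dst] + h[src] + lab
            counts[dst] += 1
        # Average incoming messages, then update every node state in parallel.
        return self.cell(messages / counts.clamp(min=1), h)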

Reception to follow at 12:30pm in Wegmans Hall 2506

Advisor: Prof. Daniel Gildea (Computer Science)

Committee: Prof. Jiebo Luo (Computer Science), Prof. Lenhart Schubert (Computer Science), Prof. Yue Zhang (Westlake University)