Friday, August 23, 2019
9:00 AM
Wegmans Hall 1005
Ph.D. Thesis Defense
Xiangru Lian
University of Rochester
Large Scale Optimization for Deep Learning

In the big data era, deep learning is employed to solve a wide range of problems, from traditional classification to reinforcement learning. Training and tuning a deep neural network often takes weeks or even months, so efficiency has become a key bottleneck of deep learning. Parallel optimization has emerged as an essential technology for such computationally intensive problems, and designing efficient parallel systems with convergent algorithms is increasingly important.

In this dissertation we investigate how to improve optimization for deep learning along the following directions:

* Asynchronous parallelism for reducing the synchronization overhead in parallel computation.

* Decentralized parallelism, which makes parallel algorithms more practical and robust to network topology, latency, and bandwidth constraints.

* Lossy compression of communication with error compensation, which reduces communication cost without sacrificing model quality (a minimal sketch of the error-compensation idea follows this list).

* Compositional optimization, where the objective function is a composition of expectations of loss functions; batch normalization can be formulated as a kind of compositional optimization (a sketch of such an objective also appears after this list).
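
To make the error-compensation idea above concrete, here is a minimal sketch in Python. The top-k compressor, class name, and parameters are illustrative assumptions, not the exact algorithms from the dissertation; the point is only that the compression error from each round is stored and added back before the next compression, so it is eventually communicated rather than lost.

```python
import numpy as np

# Hypothetical sketch of error-compensated (error-feedback) gradient compression.
# The compressor and names below are illustrative, not the dissertation's algorithm.

def top_k_compress(vec, k):
    """Keep the k largest-magnitude entries of vec; zero out the rest (lossy)."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

class ErrorCompensatedWorker:
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)  # compression error carried over from earlier rounds
        self.k = k

    def compress_gradient(self, grad):
        # Add back the error left over from the previous round before compressing.
        corrected = grad + self.residual
        compressed = top_k_compress(corrected, self.k)
        # Remember what was dropped so it can be compensated in later rounds.
        self.residual = corrected - compressed
        return compressed  # only this sparse vector is communicated
```

The compositional objective mentioned in the last point can be written, under assumed notation, as a nested expectation. The difficulty is that the inner expectation sits inside the outer loss, so a single sample does not yield an unbiased stochastic gradient of the overall objective:

```latex
% Generic two-level compositional objective (notation assumed for illustration):
\[
  \min_{x}\; f(x) \;=\; \mathbb{E}_{i}\!\left[ F_i\!\big( \mathbb{E}_{j}\left[ G_j(x) \right] \big) \right]
\]
% The inner expectation is nested inside the outer loss F_i, so sampling a single
% (i, j) pair does not give an unbiased estimate of the gradient of f.
```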

We provide convergence analyses for all of the proposed algorithms and characterize when they should, and should not, be used.

Advisor: Prof. Ji Liu (Computer Science)

Committee: Prof. Daniel Gildea (Computer Science), Prof. Daniel Stefankovic (Computer Science), Prof. Gonzalo Mateos (ECE)

Reception to follow at 11:00 AM in the Wegmans Hall third-floor atrium.