Wednesday, April 01, 2020
2:00 PM
Zoom Meeting ID: 241-989-609
Ph.D. Thesis Proposal
Hanlin Tang
University of Rochester
Communication-Efficient Distributed Learning
With the increasing scale of models and datasets, large-scale parallel distributed training is necessary for many deep learning tasks. The bottleneck for the scalability of such parallel training systems is communication overhead, and this problem becomes more severe as the number of compute nodes increases. It is therefore important to design communication-efficient algorithms for parallel training. We tackle this problem from two aspects: i) compressed training, where the shared information is compressed to reduce the bandwidth workload; and ii) decentralized training, where we reduce the number of communication rounds to lessen the influence of network latency. In this proposal we introduce some state-of-the-art techniques and some of our preliminary work on both aspects. Our future work will focus on further improving the performance of these algorithms.

Advisor: Prof. Ji Liu

Committee: Prof. Daniel Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), and Prof. Gonzalo Mateos (Electrical and Computer Engineering)
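
As a rough illustration of the compressed-training idea only (not the proposal's specific method), the sketch below shows top-k gradient sparsification, one common way the "shared information" can be compressed before workers exchange it; the function names, the fixed compression ratio, and the use of PyTorch tensors are assumptions made for this example.

```python
# Hypothetical sketch of top-k gradient sparsification for compressed training.
# In a real distributed run, workers would exchange the (values, indices) pairs
# instead of the full dense gradient; communication itself is omitted here.
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude entries of a flattened gradient."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)   # positions of the largest entries
    values = flat[indices]                   # signed values at those positions
    return values, indices, flat.numel()

def topk_decompress(values, indices, numel):
    """Rebuild a dense (sparse-approximate) gradient from (value, index) pairs."""
    flat = torch.zeros(numel, device=values.device, dtype=values.dtype)
    flat[indices] = values
    return flat

# Usage: compress a gradient, then reconstruct the sparse approximation.
g = torch.randn(10_000)
vals, idx, n = topk_compress(g, ratio=0.01)
g_hat = topk_decompress(vals, idx, n)  # what a receiving worker would see
```

With a 1% ratio, each worker sends roughly one hundredth of the gradient entries (plus their indices), which is the bandwidth saving the compressed-training direction aims for.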