
Events

March 30, 2020, 12:00 PM
Anhong Guo: Human-AI Systems for Visual Information Access

[Monday, March 30, 2020 at 12:00 PM in Zoom Meeting ID: 578-447-182] Abstract:
In my work, I create hybrid human- and AI-powered intelligent interactive systems to provide access to visual information in the real world. By combining the advantages of humans and AI, these systems can be nearly as robust and flexible as humans, and nearly as quick and low-cost as automated AI, enabling us to solve problems that are currently impossible with either alone.

I developed and deployed human-AI systems for two application domains: accessibility and environmental sensing. To make physical interfaces accessible to blind people, I developed systems that interpret static and dynamic interfaces, enabling blind people to access them independently through audio feedback or tactile overlays. For environmental sensing, I developed and deployed a camera sensing system that collects human labels to bootstrap automatic processes for answering real-world visual questions, allowing end users to put AI into action in their everyday lives.

AI systems often require a huge amount of up-front training data to get started, but targeted human intelligence can bootstrap them with relatively little data. Although humans may be slower initially, quickly transitioning to automated approaches provides a good balance, making human-AI systems scalable and rapidly deployable.
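
As a rough illustration of this bootstrapping pattern (a minimal sketch, not code from the talk; the class and callbacks below are hypothetical), a hybrid system can route low-confidence queries to human labelers and fold the collected labels back into the model, so that more queries take the automated path over time:

    class HybridAnswerer:
        """Hypothetical sketch: answer with the model when it is confident,
        fall back to humans otherwise, and reuse human labels as training data."""

        def __init__(self, model, ask_human, confidence_threshold=0.9):
            self.model = model              # classifier returning (label, confidence)
            self.ask_human = ask_human      # callback that obtains a human label
            self.threshold = confidence_threshold
            self.collected = []             # (image, label) pairs from human answers

        def answer(self, image):
            label, confidence = self.model.predict(image)
            if confidence >= self.threshold:
                return label                # fast, low-cost automated path
            # Low confidence: ask a human, and keep the label for retraining.
            human_label = self.ask_human(image)
            self.collected.append((image, human_label))
            return human_label

        def retrain(self):
            # Periodically fold human labels back into the model so that
            # more future queries can take the automated path.
            if self.collected:
                self.model.fit(self.collected)
                self.collected.clear()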

Bio:
Anhong Guo is a Ph.D. candidate in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, advised by Dr. Jeffrey Bigham. He is also a Snap Inc. Research Fellow and a Swartz Innovation Fellow for Entrepreneurship. He has published in many top academic conferences on interface technologies, wearable computing, accessibility, and computer vision, including two best-paper nominations. Before CMU, he received his Master's in HCI from Georgia Tech. He has also worked in the Ability and Intelligent User Experiences groups at Microsoft Research, the HCI group at Snap Research, the Accessibility Engineering team at Google, and the Mobile Innovation Center of SAP America. See more at: https://guoanhong.com


April 1, 2020, 02:00 PM
Hanlin Tang: Communication-Efficient Distributed Learning

[Wednesday, April 01, 2020 at 2:00 PM in Zoom Meeting ID: 241-989-609] With the increasing scale of models and datasets, large-scale parallel distributed training is necessary for many deep learning tasks. The bottleneck for the scalability of such a parallel training system is communication overhead, and the problem becomes more severe as the number of compute nodes increases. It is therefore important to design communication-efficient algorithms for parallel training. We tackle this problem from two aspects: i) compressed training, where the shared information is compressed to reduce the bandwidth load, and ii) decentralized training, where we reduce the number of communication rounds to lessen the influence of network latency. In this proposal we introduce state-of-the-art techniques and some of our preliminary work on both aspects. Our future work will focus on further improving the performance of these algorithms.

Advisor: Prof. Ji Liu

Committee: Prof. Daniel Gildea (Computer Science), Prof. Chenliang Xu (Computer Science), and Prof. Gonzalo Mateos (Electrical and Computer Engineering)
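
To make the first aspect concrete, here is a minimal sketch (assumed for illustration, not from the proposal) of top-k gradient sparsification, one common compression scheme: each worker sends only the k largest-magnitude entries of its gradient, shrinking the message exchanged per communication round.

    import numpy as np

    def topk_compress(grad, k):
        # Keep only the k largest-magnitude entries of the gradient.
        # The (indices, values) pair is the sparse message a worker sends
        # instead of the full dense gradient.
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return idx, flat[idx]

    def topk_decompress(idx, values, shape):
        # Rebuild a dense gradient that is zero outside the top-k entries.
        flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
        flat[idx] = values
        return flat.reshape(shape)

    # Example: transmit only 1% of a 1,000-entry gradient.
    grad = np.random.randn(1000)
    idx, vals = topk_compress(grad, k=10)
    recovered = topk_decompress(idx, vals, grad.shape)

A matching sketch of the second aspect, decentralized training, would replace the global all-reduce with averaging over a sparse neighbor graph; both directions aim to cut the communication cost that limits scaling.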


April 10, 2020, 01:00 PM
Gene Kim: TBD

[Friday, April 10, 2020 at 1:00 PM in Wegmans 2506] TBD