
News & Events




March 30, 2018, 12:00 PM
Sang Won Lee: Improving user involvement through live collaborative creation

[Friday, March 30, 2018 at 12:00 PM in 1400 Wegmans Hall] Creating an artifact, such as writing a book, developing software, or performing a piece of music, is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. My research focuses on creating interactive systems that support live creation and collaboration, in which the process of creating an artifact is visible in real time to end users and invites them to collaborate with others. These systems help preserve our natural expressivity, support real-time communication, and facilitate participation in the creative process. Through these interactive systems, non-expert participants can collaborate to create such artifacts as GUI prototypes, software, and musical performances. For example, one of the systems that I developed enables large-scale audience participation at a public concert, where audience members collaboratively perform a piece of music using their smartphones. My thesis work has explored three topics linked to live creation and collaboration: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for lowering the barrier to entry for live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts. Enabling collaborative, expressive, and live interactions in computational systems will invite the broader population to take part in creative practices.

Bio: Sang Won Lee is a Ph.D. candidate in Computer Science and Engineering at the University of Michigan. His work lies at the intersection of human-computer interaction and computer music. His research aims to bring the collaborative, live nature of music making to computational systems by developing interactive systems that facilitate real-time collaboration on creative tasks. His work explores how to computationally mediate musical collaboration and enable novel musical expression. More broadly, he has applied his findings from interactive music to applications in a variety of fields, including crowdsourcing, design, writing, and programming. These systems help people collaboratively create artifacts and experience liveness while collaborating with others. He holds a diploma in Industrial Engineering from Seoul National University and an M.S. in Music Technology from Georgia Tech. He has been an active author at top-tier computer music conferences, such as New Interfaces for Musical Expression (NIME), as well as broader human-computer interaction venues, like ACM UIST and ACM CHI. In addition to academic research publications, he has presented his research in the form of musical performances at peer-reviewed venues, including NIME, Art-CHI, and ICMC. He won the International Computer Music Association Music Award in 2016 for his composition Live Writing: Gloomy Streets.

April 2, 2018, 12:00 PM
Ting-Hao (Kenneth) Huang: A Crowd-Powered Conversational Assistant That Automates Itself Over Time

[Monday, April 02, 2018 at 12:00 PM in 1400 Wegmans Hall] Interaction in rich natural language enables people to exchange thoughts efficiently and come to a shared understanding quickly. Modern personal intelligent assistants such as Apple’s Siri and Amazon’s Echo all utilize conversational interfaces as their primary communication channels, and they illustrate a future in which getting help from a computer is as easy as asking a friend. However, modern conversational assistants are still limited in domain, expressiveness, and robustness. We take an alternative approach that blends real-time human computation with artificial intelligence to reliably engage in conversations. Instead of bootstrapping automation from the bottom up with only automatic components, we start with our crowd-powered conversational assistant, Chorus, and create a framework that enables Chorus to automate itself over time. Over time, the automated systems will take over more responsibility in Chorus, not only helping us to deploy robust conversational assistants before we know how to automate everything, but also allowing us to drive down costs and gradually reduce reliance on the crowd.
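
For readers less familiar with this style of hybrid system, the sketch below illustrates the general idea in Python: automated responders propose candidate replies alongside the crowd, their acceptance rates are tracked, and work gradually shifts toward automation as they prove reliable. The class, threshold, and bookkeeping are illustrative assumptions, not the actual Chorus/Evorus design.

```python
# A minimal sketch of the "automate itself over time" idea (not the actual
# Chorus/Evorus implementation). The crowd answers until an automated
# responder has earned enough trust; votes on automated candidates update
# that trust, so responsibility shifts toward automation over time.

class HybridAssistant:
    def __init__(self, automated_responders, crowd_responder, trust_threshold=0.7):
        self.automated = automated_responders      # name -> function(message) -> reply
        self.crowd = crowd_responder               # function(message) -> reply
        self.trust_threshold = trust_threshold
        # Weak prior so new responders start below the trust threshold.
        self.stats = {name: {"accepted": 1, "proposed": 2} for name in automated_responders}

    def acceptance_rate(self, name):
        s = self.stats[name]
        return s["accepted"] / s["proposed"]

    def respond(self, message):
        """Return (reply, source): automated if trusted enough, otherwise from the crowd."""
        best = max(self.automated, key=self.acceptance_rate)
        if self.acceptance_rate(best) >= self.trust_threshold:
            return self.automated[best](message), best
        return self.crowd(message), "crowd"

    def record_vote(self, name, accepted):
        """Crowd votes on an automated candidate adjust its acceptance rate."""
        self.stats[name]["proposed"] += 1
        self.stats[name]["accepted"] += int(accepted)


# Example: the crowd answers at first, while upvotes on the automated
# candidate raise its acceptance rate until it takes over.
bot = HybridAssistant(
    automated_responders={"retrieval": lambda m: "Here is what I found about: " + m},
    crowd_responder=lambda m: "(reply written by a crowd worker)",
)
print(bot.respond("Where can I get lunch near campus?"))   # crowd reply at first
for _ in range(5):
    bot.record_vote("retrieval", accepted=True)
print(bot.respond("Where can I get lunch near campus?"))   # automated reply once trusted
```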

Bio: Ting-Hao (Kenneth) Huang is a Yahoo!/Oath Fellow and Ph.D. candidate in the Language Technologies Institute at Carnegie Mellon University (CMU). His research focuses on crowdsourcing and conversational agents, under the broader umbrella of human-in-the-loop architectures. As part of his PhD work with Prof. Jeffrey P. Bigham at CMU, Kenneth deployed Chorus, the first chatbot powered by the real-time crowd and artificial intelligence. In 2018, he won an Honourable Mention Award at CHI’18 for creating Evorus, a framework that automates Chorus over time. Kenneth is also known for developing the Visual Storytelling Dataset (VIST) as a summer intern at Microsoft Research in 2015. Prior to his PhD, Kenneth worked on natural language processing during his studies at CMU (M.S. in Computer Science) and National Taiwan University (M.S. and B.S. in Computer Science, B.A. in Chinese Literature).

April 9, 2018, 12:00 PM
Bimal Viswanath: Security in an AI-driven world

[Monday, April 09, 2018 at 12:00 PM in 1400 Wegmans Hall] AI based on deep neural networks (DNNs) has transformed computing as we know it. As AI tools become commoditized and we increasingly rely on online services and devices powered by AI, it is important to understand the security risks. In this talk, I will present two research directions on this topic.

First, I will describe how AI can be used for attacks: to manipulate the information we consume online. In limited application contexts, DNNs have reached a point where they can produce text sufficiently clear and correct to be effectively indistinguishable from text produced by humans. I will show that AI programs based on Recurrent Neural Networks (RNNs) are capable of generating deceptive yet realistic-looking reviews targeting e-commerce sites, and I will also discuss defensive measures. Second, I will focus on attacks on AI systems. Transfer learning is viewed as the next big step in accelerating the adoption of AI systems. In this scheme, a small number of highly tuned centralized models are shared with the general community, and individual users further customize the model for a given application with additional training. I will present practical misclassification attacks against DNN models derived using transfer learning services available today. As part of my future plans, I will discuss the need to rethink data-driven security when considering an AI-powered adversary.
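
As background on the transfer-learning scheme described above, the sketch below shows the typical workflow in Python (assuming a recent PyTorch/torchvision): a widely shared pre-trained model is downloaded, its feature layers are frozen, and only a small task-specific head is retrained on the user's data. The model choice, class count, and training step are illustrative assumptions; the attack itself is not shown.

```python
# A minimal sketch of transfer learning as described above (not the attack).
# A centralized, highly tuned model is shared with the community; an end user
# freezes its feature layers and retrains only a small task-specific head.
# The misclassification attacks exploit the fact that the frozen layers are
# publicly known to the adversary.
import torch
import torch.nn as nn
from torchvision import models

# 1. Download the shared, highly tuned model (here: an ImageNet ResNet-18).
base = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the shared feature extractor.
for param in base.parameters():
    param.requires_grad = False

# 3. Replace the final layer with a head for the user's own task (say, 10 classes).
base.fc = nn.Linear(base.fc.in_features, 10)   # new layer trains from scratch

# 4. Only the new head is updated during the user's "additional training".
optimizer = torch.optim.Adam(base.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One customization step on the user's own data (hypothetical batch)."""
    optimizer.zero_grad()
    loss = criterion(base(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```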

Bio: Bimal Viswanath is a Postdoctoral Scholar at the University of California, Santa Barbara, and is currently visiting the University of Chicago. Prior to that, he was a Researcher at Nokia Bell Labs, Germany, for a year. He received his PhD (2016) from the Max Planck Institute for Software Systems, Germany, and his M.S. (2008) from the Indian Institute of Technology Madras, India. He is primarily interested in security and privacy, and his recent work explores the risks posed by deep learning in different application scenarios.

April 18, 2018, 12:00 PM
Yupeng Zhang: Security and Privacy of Outsourced Data and Computations

[Wednesday, April 18, 2018 at 12:00 PM in Goergen 109] Users outsource their data and computation to cloud-service providers such as Amazon EC2, Google Cloud, and Microsoft Azure, which are potentially untrusted or may be compromised. Meanwhile, companies collect data from users in order to run machine-learning algorithms on that data and develop products and services. Despite the great benefits of these techniques, they currently require users to give up control of their data and to trade off integrity and privacy for utility.

I will discuss several cryptographic techniques I have developed to address these issues. I will first talk about techniques for verifiable storage and computation that can be used to ensure the correctness of computations done in the cloud and services offered by cloud providers. I will then discuss privacy-preserving machine learning, which allows companies to execute machine-learning algorithms without learning users’ data. I will conclude with some thoughts on future applications of these new protocols to other domains.
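
As background, the sketch below illustrates one standard building block for verifiable outsourced storage, a Merkle tree, in Python: the client keeps only a short root digest, the untrusted server stores the data, and every retrieved block is checked against an authentication path. This is a textbook construction shown for illustration, not the specific protocols from the talk.

```python
# A minimal sketch of verifiable outsourced storage with a Merkle tree.
# The client stores only the root hash; the server stores the blocks and,
# for each retrieval, returns the block plus the sibling hashes on the path
# to the root, which the client recomputes and compares.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return all levels of the Merkle tree, leaves first (server side)."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if the level is odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect (sibling hash, node-is-right-child) pairs from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))
        index //= 2
    return path

def verify(root, block, path):
    """Client-side check: recompute the root from the block and its path."""
    node = h(block)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# The client keeps only `root`; any tampered block fails verification.
blocks = [b"block-%d" % i for i in range(8)]
levels = build_tree(blocks)                     # held by the server
root = levels[-1][0]                            # held by the client
assert verify(root, blocks[5], prove(levels, 5))
assert not verify(root, b"tampered", prove(levels, 5))
```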

Bio: Yupeng Zhang is a PhD student at the University of Maryland, working with Professors Charalampos Papamanthou and Jonathan Katz. His research focuses on applied cryptography, and his work on verifiable computation, privacy-preserving machine learning, and searchable encryption has been published at top security conferences. He is a recipient of a Google PhD Fellowship and the Outstanding Graduate Assistant Award at the University of Maryland.