February 26, 2018, 12:00 PM
Zhen Bai: Augmenting Social Reality for Good

[Monday, February 26, 2018 at 12:00 PM in 1400 Wegmans Hall] The profound transformation of the employment landscape demands advanced socio-emotional skills for effective collaboration and communication in cross-disciplinary and culturally diverse environments. People’s ability to cope with social situations and exert influence on others is critically linked with their ability to understand and affect the meanings that others associate with their surroundings. This association is “meaning making”: the transformation of reality “in the raw” into socially constructed reality, which fundamentally shapes how individuals act toward objects, people, and situations. It remains challenging, however, to help people navigate their social reality, because that reality is situated in the immediate surroundings, constantly changes through social interaction, and is accessible only through communication.

In this talk, I will describe my research exploring the design space of Augmented Social Reality, which elevates people’s abilities, motivations, and experiences through technology-enhanced social cognition and social interaction situated in the immediate physical and social environment. I will focus on two projects. The first uses the “looking glass” metaphor of Augmented Reality to help develop “theory of mind” in preschool children with and without autism. The second, “Sensing Curiosity in Play and Responding”, uses theory- and data-driven approaches to elaborate fine-grained social accounts of curiosity, and to design social scaffolding, delivered through a collaborative intelligent peer, that supports curiosity in small-group STEAM learning. Through these projects, I will reflect on the interdisciplinary opportunities and future directions of designing an accessible and supportive social reality for lifelong learning, social wellbeing, and quality of life in a diverse and ever-changing world.

Bio: Zhen Bai is a post-doctoral fellow at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University. She received her Ph.D. from the Graphics & Interaction Group at the Computer Laboratory, University of Cambridge in 2015. Her background combines augmented and tangible user interfaces, human-robot interaction, educational technologies, computer-supported collaborative learning, and design for diversity. Her research draws on multiple disciplines, including developmental psychology, social science, learning science, machine learning, and natural language processing, to design interactive and intelligent interfaces that support lifelong learning and quality of life by eliminating cognitive, socio-emotional, and cultural barriers among people with diverse abilities and backgrounds.


February 28, 2018, 12:00 PM
Michelle Ichinco: Supporting novices in learning programming on-the-fly using examples

[Wednesday, February 28, 2018 at 12:00 PM in Goergen 109] Many people, including children, begin learning programming independently in open-ended contexts. This prevents them from receiving feedback that would introduce them to new skills. In this talk, I will present a system called the Example Guru, which suggests new skills to novice programmers using example code. Both in lab studies and in the wild, novices chose to access suggestions significantly more often than common forms of support, such as documentation or tutorials, and accessing suggestions often led to increased use of the new code. I will also discuss my approach for semi-automatically generating suggestions and examples. This approach generated a set of suggestions similar to an expert hand-authored set, as well as an additional set of original suggestions. This type of support for independent novice programmers has the potential to significantly help the large population of non-expert programmers learning on-the-fly as they work toward their own goals.
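
The actual system analyzes novice programs in Looking Glass; as a rough illustration of the idea only, here is a minimal, hypothetical Python sketch in which hand-written rules fire when a program uses a "trigger" construct but not the related concept a suggestion would teach. The rules and the substring matching are placeholders of this write-up, not the Example Guru's implementation.

    # Hypothetical suggestion rules: each pairs a construct the novice
    # already uses (trigger) with a related concept they have not tried
    # (target), plus a short example snippet to show them.
    RULES = [
        {"trigger": "for ", "target": "enumerate(",
         "example": "for i, item in enumerate(items):\n    print(i, item)"},
        {"trigger": "print(", "target": 'f"',
         "example": 'name = "Ada"\nprint(f"hello, {name}")'},
    ]

    def suggest(source: str) -> list:
        """Return example snippets for concepts absent from the novice's code.

        Naive substring matching stands in for real program analysis.
        """
        return [r["example"] for r in RULES
                if r["trigger"] in source and r["target"] not in source]

    novice_code = 'for x in items:\n    print(x)'
    for snippet in suggest(novice_code):
        print("Have you tried this?\n" + snippet)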

Bio: Michelle is a Ph.D. candidate in Computer Science at Washington University in St. Louis, advised by Caitlin Kelleher. Her research is in human-computer interaction, focusing on improving scalable support for independent learning. She developed her work in the context of Looking Glass (http://lookingglass.wustl.edu/), a freely available programming tool for creating 3D animated stories. Through her research, she has both increased understanding of non-expert programmers and designed and built systems to support them. Michelle received the Spencer T. and Ann W. Olin Fellowship for Women in Graduate Study, an NSF Graduate Fellowship Honorable Mention, and the 2017 best paper award at the IEEE Symposium on Visual Languages and Human-Centric Computing.


March 5, 2018, 12:00 PM
Mark Bun: Finding Structure in the Landscape of Differential Privacy

[Monday, March 05, 2018 at 12:00 PM in 1400 Wegmans Hall] Differential privacy offers a mathematical framework for balancing two goals: obtaining useful information about sensitive data, and protecting individual-level privacy. Discovering the limitations of differential privacy yields insights into which analyses are incompatible with privacy and why. These insights, in turn, aid the quest to discover optimal privacy-preserving algorithms. In this talk, I will give examples of how both follow from new understandings of the structure of differential privacy.
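
For orientation, the standard formal guarantee (stated here for reference; the abstract does not spell it out) is that a randomized algorithm M is (\varepsilon, \delta)-differentially private if, for every pair of datasets D and D' differing in one individual's record and every set of outcomes S,

\[
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta,
\]

so that no single record can change the distribution of outcomes by much.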

I will first describe negative results for private data analysis via a connection to cryptographic objects called fingerprinting codes. These results show that an (asymptotically) optimal way to solve natural high-dimensional tasks is to decompose them into many simpler tasks. In the second part of the talk, I will discuss concentrated differential privacy, a framework which enables more accurate analyses by precisely capturing how simpler tasks compose.
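
To make the composition point concrete, here are two standard statements from the concentrated-differential-privacy literature, supplied for reference rather than quoted from the talk. Under basic composition, k mechanisms that are each (\varepsilon, \delta)-DP jointly satisfy (k\varepsilon, k\delta)-DP. Zero-concentrated differential privacy (zCDP) instead tracks a single parameter that adds up exactly under composition,

\[
M_i \text{ is } \rho_i\text{-zCDP for } i = 1, \dots, k \;\Longrightarrow\; (M_1, \dots, M_k) \text{ is } \Big(\textstyle\sum_{i=1}^{k} \rho_i\Big)\text{-zCDP},
\]

and \rho-zCDP implies (\rho + 2\sqrt{\rho \ln(1/\delta)}, \delta)-DP for every \delta > 0, which typically yields tighter end-to-end accuracy guarantees.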

Bio: Mark Bun is a postdoctoral researcher in the Computer Science Department at Princeton University. He is broadly interested in theoretical computer science, and his research focuses on understanding foundational problems in data privacy through the lens of computational complexity theory. He completed his Ph.D. at Harvard in 2016, where he was advised by Salil Vadhan and supported by an NDSEG Research Fellowship.


March 9, 2018, 12:00 PM
Xi He: Moving with Provable Privacy Guarantees

[Friday, March 09, 2018 at 12:00 PM in 1400 Wegmans Hall] Companies such as Google and Lyft collect substantial amounts of location data about their users to provide useful services. The release of these datasets for general use could enable numerous innovative applications and research. However, such data contains sensitive information about the users, and simple cloaking-based techniques have been shown to be ineffective at ensuring users’ privacy. These privacy concerns have motivated many leading technology companies and researchers to develop algorithms that collect and analyze location data with formal, provable privacy guarantees. I will present a unified framework that can (a) provide a better understanding of the many existing provable privacy guarantees for location data, and (b) allow flexible trade-offs between privacy, accuracy, and performance, based on the application’s requirements. I will also describe exciting new research on provable privacy guarantees for advanced settings involving complex queries or datasets and emerging data-driven applications, and conclude with directions for future privacy research in big-data management and analysis.
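
The abstract does not name a specific mechanism, but one well-known provable guarantee for location data is geo-indistinguishability, achieved by the planar Laplace mechanism of Andres et al. Below is a minimal sketch; the function name and the choice of mechanism are this write-up's assumptions, not the speaker's framework.

    import numpy as np
    from scipy.special import lambertw

    def planar_laplace(x, y, eps, rng=None):
        """Perturb a planar location (in meters) with the planar Laplace
        mechanism, giving eps*d-geo-indistinguishability for Euclidean d."""
        if rng is None:
            rng = np.random.default_rng()
        theta = rng.uniform(0.0, 2.0 * np.pi)   # direction is uniform
        p = rng.uniform(0.0, 1.0)               # radius via the inverse CDF,
        # which uses the k = -1 branch of the Lambert W function
        r = -(1.0 / eps) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)
        return x + r * np.cos(theta), y + r * np.sin(theta)

    # eps = 0.01 per meter: reported points typically land within a few
    # hundred meters of the true location (mean displacement 2/eps).
    print(planar_laplace(0.0, 0.0, eps=0.01))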

Bio: Xi He is a Ph.D. student in the Computer Science Department at Duke University. Her research interests lie in privacy-preserving data analysis and security. She received a double degree in Applied Mathematics and Computer Science from the National University of Singapore. Xi has been working with Prof. Machanavajjhala on privacy since 2012. She has published in SIGMOD, VLDB, and CCS, and has given tutorials on privacy at VLDB 2016 and SIGMOD 2017. She received the best demo award at VLDB 2016 for work on differential privacy, and was awarded a 2017 Google Ph.D. Fellowship in Privacy and Security.


March 12, 2018, 12:00 PM
Derry Wijaya: Building knowledge bases for natural language understanding

[Monday, March 12, 2018 at 12:00 PM in 1400 Wegmans Hall] One way to formulate natural language understanding is as the task of mapping natural language text to its meaning representation: entities and relations anchored to the world. Knowledge bases (KBs) can facilitate natural language understanding by mapping words to their meaning representations, for example nouns to entities and verbs to relations. State-of-the-art projects such as NELL, Freebase, and YAGO have been successful at constructing such knowledge bases, which contain beliefs about real-world entities and relations, by leveraging the redundancy of millions of documents to detect language patterns. The accumulated knowledge has been used to improve the ability of intelligent systems to make inferences. In multilingual and multimodal settings, knowledge bases present a virtuous learning opportunity: more beliefs, with higher confidence, can be extracted by processing data in more languages or modalities; in turn, since entities and their relations in the KBs exist in the world regardless of the language or modality used to express them, KBs can act as an interlingua for relating corpora in different languages and modalities through KB entities and relations. This is especially useful for low-resource languages, where there are few, if any, aligned bilingual texts to support natural language processing (NLP) tasks such as machine translation or cross-lingual disambiguation. In this talk, I will elaborate on this virtuous circle, starting with building knowledge bases that map verbs to real-world relations, followed by results on using knowledge bases to translate words from monolingual-only corpora.
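
As a toy illustration of the interlingua idea (the data and scoring below are invented for this sketch, not drawn from the talk): if verbs in two monolingual corpora have been linked to the KB relations they express, candidate translations can be ranked by how similar their relation profiles are, with no bilingual text required.

    # Toy co-occurrence profiles: counts of KB relations each verb was
    # observed expressing in its own monolingual corpus (invented data).
    en_verbs = {"acquired": {"kb:acquisition": 9, "kb:founderOf": 1},
                "founded":  {"kb:founderOf": 7}}
    id_verbs = {"mengakuisisi": {"kb:acquisition": 5},
                "mendirikan":   {"kb:founderOf": 6, "kb:acquisition": 1}}

    def overlap(a, b):
        """Weighted Jaccard similarity between two relation profiles."""
        keys = set(a) | set(b)
        inter = sum(min(a.get(k, 0), b.get(k, 0)) for k in keys)
        union = sum(max(a.get(k, 0), b.get(k, 0)) for k in keys)
        return inter / union if union else 0.0

    # The KB relations act as the interlingua between the two languages.
    for en, profile in en_verbs.items():
        best = max(id_verbs, key=lambda w: overlap(profile, id_verbs[w]))
        print(en, "->", best)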

Bio: Derry Wijaya is a postdoctoral researcher at the University of Pennsylvania. Her research interests include machine learning, natural language processing, and data mining. She works with Professor Chris Callison-Burch on using machine learning to build computer systems that intelligently process and understand human languages, particularly in low-resource and multilingual settings. She received her Ph.D. from Carnegie Mellon University, working with Professor Tom Mitchell on the Never Ending Language Learning (NELL) project, and her M.Sc. and Bachelor of Computing from the National University of Singapore.


April 9, 2018, 12:00 PM
Bimal Viswanath: Security in an AI-driven world

[Monday, April 09, 2018 at 12:00 PM in 1400 Wegmans Hall] AI based on deep neural networks (DNNs) has transformed computing as we know it. As AI tools become commoditized and we increasingly rely on online services and devices powered by AI, it is important to understand the security risks. In this talk, I will present two research directions on this topic.

First, I will describe how AI can be used for attacks, to manipulate the information we consume online. In limited application contexts, DNNs have reached a point where they can produce text sufficiently clear and correct to be effectively indistinguishable from text produced by humans. I will show that AI programs based on Recurrent Neural Networks (RNNs) are capable of generating deceptive yet realistic-looking reviews targeting e-commerce sites, and I will also discuss defensive measures. Second, I will focus on attacks on AI systems. Transfer learning is viewed as the next big step in accelerating the adoption of AI systems. In this scheme, a small number of highly tuned centralized models are shared with the general community, and individual users further customize a model for a given application with additional training. I will present practical misclassification attacks against DNN models derived using transfer-learning services available today. As part of future plans, I will discuss the need to rethink data-driven security when considering an AI-powered adversary.
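
To make the transfer-learning scheme concrete, here is a minimal Keras sketch of the pattern the abstract describes: a shared pretrained model is frozen and only a small task-specific head is trained. The choice of MobileNetV2 and all hyperparameters are illustrative assumptions, not the models studied in the talk; the relevant point is that the frozen internal layers are public knowledge, which is what a misclassification attack can exploit.

    import tensorflow as tf

    # Shared, centrally trained "teacher" model; its internal layers
    # are frozen and identical across every user who downloads it.
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False,
        input_shape=(160, 160, 3), pooling="avg")
    base.trainable = False

    # Each user adds and trains only a small task-specific head.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(user_images, user_labels, epochs=5)  # user's own data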

Bio: Bimal Viswanath is a postdoctoral scholar at the University of California, Santa Barbara, and is currently visiting the University of Chicago. Prior to that, he was a researcher at Nokia Bell Labs, Germany for a year. He received his Ph.D. (2016) from the Max Planck Institute for Software Systems, Germany, and his M.S. (2008) from the Indian Institute of Technology Madras, India. He is primarily interested in security and privacy, and his recent work explores the risks posed by deep learning in different application scenarios.