
Events

September 22, 2017, 02:30 PM
Parker Riley: Adversarial Training for Unsupervised Bilingual Lexicon Induction

[Friday, September 22, 2017 at 2:30 PM in Wegmans Hall 3201] http://nlp.csai.tsinghua.edu.cn/~ly/papers/acl2017_zm.pdf


October 2, 2017, 12:00 PM
Alanson Sample: Hacking the RF Phy: Wireless power transfer, RF sensing, battery-free communication and beyond

[Monday, October 02, 2017 at 12:00 PM in 1400 Wegmans Hall] Harnessing the power of electromagnetic waves has changed how we live, work, and play. While the semiconductor industry has focused on enabling faster, cheaper, and lower-power wireless devices, there is an opportunity to use this underlying technology to re-examine the RF physical layer and explore novel ways of using electromagnetic waves for sensing, communication, and energy transfer. This talk presents an overview of ongoing projects in the Wireless Systems Group at Disney Research, which focuses on creating new interactive experiences through innovations in sensing, circuit design, and signal processing. These projects include signal processing techniques that turn battery-free, long-range RFID tags into minimalistic sensors capable of detecting human-object interaction and connecting everyday objects to the Internet of Things; a smartwatch prototype that measures the EMI noise emitted by common electronic devices to enable real-time, touch-based device classification and identification; ultra-wideband ambient backscatter techniques that use the radio waves present throughout our environment as a medium for communication; and a new form of wireless power transfer based on quasi-static cavity resonance, which enables room-wide wireless power delivery to mobile devices located nearly anywhere within the room.

Alanson Sample is an Associate Lab Director & Principal Research Scientist at Disney Research in Pittsburgh, where he leads the Wireless Systems group. His research focuses on enabling new guest experiences and new sensing and computing devices by applying novel approaches to electromagnetics, RF and analog circuits, and embedded systems. Prior to joining Disney, he worked at Intel Labs in Hillsboro on energy harvesting for wearable and Internet of Things applications. He also held a postdoctoral research position in the Department of Computer Science and Engineering at the University of Washington, where he developed methods of wirelessly powering implanted heart pumps known as left ventricular assist devices (LVADs). Alanson received his Ph.D. in Electrical Engineering from the University of Washington in 2011. Throughout his graduate studies, he worked full-time at Intel Research Seattle, where he published several articles and patents on the use of magnetically coupled resonance for wireless power delivery, as well as on RFID and ambient RF energy harvesting. Alanson was one of the key contributors to the Wireless Identification and Sensing Platform, which was open-sourced in 2009 as part of Intel's WISP Challenge. His research interests lie broadly in the areas of wireless communication, RF and analog circuit design, embedded systems, and novel sensors.


October 6, 2017, 11:00 AM
Goergen Institute for Data Science Seminar Series: Satinder Singh: Deep Reinforcement Learning for Sequential Decision Making Tasks with Natural Language Interaction

[Friday, October 06, 2017 at 11:00 AM in 1400 Wegmans Hall] ABSTRACT: The success of Deep Learning (DL) on visual perception has led to rapid progress on Reinforcement Learning (RL) tasks with visual inputs. More recently, Deep Learning has shown promise on certain kinds of supervised natural language problems, and this progress too is beginning to help on RL tasks with natural language inputs. In this talk, I will describe two projects in this direction from my group. The first (URL 1 below) involves learning to query, reason, and answer questions on simple forms of ambiguous text, designed to focus on a specific problem that occurs in dialog systems. The second (URL 2 below) involves zero-shot generalization to unseen instructions in a 3D maze navigation task, for which we develop a hierarchical deep RL architecture.

1. http://web.eecs.umich.edu/~baveja/Papers/GuoICLR2017.pdf

2. http://web.eecs.umich.edu/~baveja/Papers/task-generalization.pdf

BIO: Dr. Satinder Singh is a Professor of Computer Science & Engineering at the University of Michigan where he also served as the Director of the Artificial Intelligence Laboratory from 2006 to 2017. He is also a co-founder and Chief Scientist of Cogitai, Inc. Dr. Singh’s research interests focus on the field of Reinforcement Learning, i.e., on building algorithms, theory, and architectures for software agents that can learn how to act in uncertain, complex, and dynamic environments. Specific interests include building models of dynamical systems from time-series data, learning good interventions in human-machine interaction, dealing with partial observability and hidden state in sequential decision-making, dealing with the challenge of exploration-exploitation and delayed feedback, explaining animal and human decision making using computational models, and optimal querying in semi-autonomous agents based on value of information. He is interested in applications in healthcare, robotics, and game playing. He is a Fellow of the Association for the Advancement of Artificial Intelligence, was Program Co-Chair of AAAI 2016, has received an outstanding faculty award from his department, and has published over 150 papers in his field. This speaker is sponsored by the National Science Foundation Research Traineeship Data-Enabled Science and Engineering (NSF NRT-DESE) award for Graduate Training in Data-Enabled Research into Human Behavior and its Cognitive and Neural Mechanisms.


October 13, 2017, 10:00 AM
Yuncheng Li: Weakly Supervised Learning From Noisy Data: from Practice to Theory

[Friday, October 13, 2017 at 10:00 AM in Wegmans Hall 3201]

This is the era of the user: examples include personalized reranking in web search, digital advertisement targeting, and a wide variety of recommendation systems. By engaging with these online platforms, users generate huge amounts of data. AI is thriving on these data, and many opportunities arise. Nevertheless, big challenges remain, such as label scarcity, cross-domain transfer, multi-modality, and label ambiguity, before we can fully take advantage of these weakly supervised data. We develop models and algorithms to learn from various aspects of user interactions with images and videos.

To demonstrate the usefulness of learning from weakly supervised web data, we develop applications that link YouTube videos with Wikipedia entities, describe videos in natural language, estimate air quality from photos, and discover and curate fashion outfits.

Building on these practical successes, we further develop models and theory specifically designed to handle the intrinsic challenges of learning from noisy image and video data. In particular, we design a pairwise ranking loss function and a label decision model to learn convolutional neural networks from multi-label images, and we show their effectiveness both theoretically and empirically. Furthermore, we develop a framework based on knowledge distillation to learn from noisy labels, and we show its effectiveness theoretically and empirically on a large-scale image dataset with real-world label noise.
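
To make the pairwise ranking idea concrete, here is a minimal sketch of such a loss for multi-label image classification. It is written in PyTorch as an assumption of convenience; the function name, tensor shapes, and the smooth log-loss surrogate are illustrative choices rather than the speaker's actual implementation, and the label decision model that selects which labels to output is omitted.

import torch

def pairwise_ranking_loss(scores, labels):
    # scores: (batch, num_labels) raw outputs from a CNN
    # labels: (batch, num_labels) binary multi-label targets
    losses = []
    for s, y in zip(scores, labels):
        pos = s[y > 0.5]   # scores of labels present in the image
        neg = s[y <= 0.5]  # scores of absent labels
        if pos.numel() == 0 or neg.numel() == 0:
            continue       # no ranking constraint for this example
        # every absent label should score below every present label;
        # log(1 + exp(neg - pos)) is a smooth surrogate for that ranking error
        diff = neg.unsqueeze(1) - pos.unsqueeze(0)   # shape (|neg|, |pos|)
        losses.append(torch.log1p(torch.exp(diff)).mean())
    return torch.stack(losses).mean()

# toy usage with random scores standing in for CNN outputs
scores = torch.randn(4, 10, requires_grad=True)
labels = torch.zeros(4, 10)
labels[:, :3] = 1.0          # pretend the first three labels are present
loss = pairwise_ranking_loss(scores, labels)
loss.backward()

In practice the scores would come from the final layer of a convolutional network, and a separate label decision model would be trained on top of them to decide how many of the top-ranked labels to report for each image.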

Reception to follow at 5:30 PM in Wegmans Hall 3201