News & Events

Events


February 23, 2017, 03:30 PM
Leon Bergen: Pragmatic Reasoning and Contextual Interpretation

[Thursday, February 23, 2017 at 3:30 PM in Meliora 203]

Goergen Institute for Data Science talk

Abstract: Natural language provides people with a remarkably diverse set of strategies for communicating more than they literally say. For example, in a sentence such as "Juliet is the sun," the speaker is communicating much more than just Juliet's position in the solar system. My research aims to understand the types of knowledge and reasoning that support these pragmatic communication strategies. Towards this end, I introduce a novel modeling technique, pragmatic variable lifting, which can incorporate different types of commonsense knowledge, such as reasoning about beliefs and desires, into models of pragmatics. In this talk, I will consider case studies of several pragmatic phenomena: hyperbole, metaphor, and the interpretation of prosodic stress. Using pragmatic variable lifting, we can explain both qualitative aspects of these phenomena and quantitative judgments in experimental tasks. The results demonstrate the power of pragmatic reasoning and the potential for computational models of pragmatics to illuminate linguistic phenomena.
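
As background for readers unfamiliar with probabilistic pragmatics, the sketch below shows a generic rational-speech-acts (RSA) style listener/speaker recursion for a toy price scenario. The scenario, prior, and rationality parameter are illustrative assumptions; this is not the pragmatic variable lifting model introduced in the talk.

```python
# A minimal sketch of an RSA-style pragmatic listener. The states, utterances,
# prior, and alpha below are invented for illustration only.
import math

states = [30, 50, 1000]                     # e.g., possible prices of an item
utterances = {                              # literal semantics: states each utterance is true of
    "expensive": {50, 1000},
    "cheap": {30, 50},
    "costs a thousand": {1000},
}
prior = {30: 0.45, 50: 0.45, 1000: 0.10}    # hypothetical prior over states
alpha = 1.0                                 # speaker rationality parameter

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z > 0 else d

def literal_listener(u):
    """L0: condition the prior on the literal truth of the utterance."""
    return normalize({s: prior[s] * (1.0 if s in utterances[u] else 0.0) for s in states})

def speaker(s):
    """S1: choose utterances in proportion to how well L0 recovers the intended state."""
    scores = {}
    for u in utterances:
        l0 = literal_listener(u)
        scores[u] = math.exp(alpha * math.log(l0[s])) if l0.get(s, 0) > 0 else 0.0
    return normalize(scores)

def pragmatic_listener(u):
    """L1: reason about which state a rational speaker would have had in mind."""
    return normalize({s: prior[s] * speaker(s).get(u, 0.0) for s in states})

if __name__ == "__main__":
    print(pragmatic_listener("expensive"))
```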

Bio: Leon Bergen is a postdoctoral researcher in the Computation & Cognition Lab at Stanford University. He received his Ph.D. from the Department of Brain and Cognitive Sciences at MIT in 2015, and his B.A. in Mathematics and Philosophy from Swarthmore College in 2009. His research focuses on computational modeling of language understanding, particularly in semantics and pragmatics. He received NDSEG and NSF Graduate Research Fellowships, and a best paper award for Computational Modeling of Language from the Cognitive Science Society.


February 27, 2017, 12:00 PM
Dr. Oleg Komogortsev: Eye movements, their prediction and use in biometrics and health assessment

[Monday, February 27, 2017 at 12:00 PM in Computer Studies Building, Room 209]

Abstract: The talk will discuss eye movements, their characteristics, and a mathematical model of the Oculomotor Plant (the eye globe and its muscles) that allows eye movements to be predicted and that can be employed, together with other metrics extracted from the eye movement signal, for person recognition and health assessment. General challenges of eye movement prediction will be discussed. The talk will also present how to conduct identity recognition and health assessment of a person via eye movements.
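
As a loose illustration of what metrics extracted from an eye movement signal can look like in practice (not the Oculomotor Plant model itself), here is a sketch of a simple velocity-threshold classifier that separates fixations from saccades. The sampling rate and the 30 deg/s threshold are assumptions for the sketch.

```python
# Illustrative only: a velocity-threshold (I-VT style) split of an eye movement
# signal into fixation and saccade samples, plus coarse per-recording metrics.
import numpy as np

def classify_samples(x_deg, y_deg, sample_rate_hz=250.0, saccade_threshold_dps=30.0):
    """Return a boolean array: True where the sample is part of a saccade."""
    dt = 1.0 / sample_rate_hz
    vx = np.gradient(np.asarray(x_deg, dtype=float), dt)
    vy = np.gradient(np.asarray(y_deg, dtype=float), dt)
    speed = np.hypot(vx, vy)                          # angular speed in deg/s
    return speed > saccade_threshold_dps

def summary_metrics(is_saccade, sample_rate_hz=250.0):
    """Coarse metrics of the kind a biometric pipeline might aggregate per recording."""
    is_saccade = np.asarray(is_saccade)
    n = len(is_saccade)
    return {
        "saccade_fraction": float(is_saccade.mean()) if n else 0.0,
        "num_saccades": int(np.sum(np.diff(is_saccade.astype(int)) == 1)),
        "duration_s": n / sample_rate_hz,
    }

if __name__ == "__main__":
    t = np.linspace(0, 1, 250)
    x = np.where(t < 0.5, 0.0, 10.0) + 0.1 * np.random.randn(250)   # a 10-degree gaze jump
    y = np.zeros_like(x)
    print(summary_metrics(classify_samples(x, y)))
```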

Bio: Dr. Komogortsev is an Associate Professor at Texas State University. He received his B.S. in Applied Mathematics from Volgograd State University, Russia, and his M.S. and Ph.D. degrees in Computer Science from Kent State University, Ohio. Dr. Komogortsev conducts research in eye tracking with a focus on cyber security (biometrics), human-computer interaction, usability, bioengineering, and health assessment. This work has thus far yielded more than 80 peer-reviewed publications and several patents. Dr. Komogortsev's research has been covered by national media including NBC News, Discovery, Yahoo, LiveScience, and others. He is a recipient of a Google Virtual Reality Research Award and a Google Faculty Research Award. He has also won a National Science Foundation CAREER Award and the Presidential Early Career Award for Scientists and Engineers (PECASE) on the topic of cybersecurity, with an emphasis on eye movement-driven biometrics and health assessment. In addition, his research is supported by the National Institute of Standards and Technology, Sigma Xi, the Scientific Research Society, and various industrial sources.


March 2, 2017, 03:30 PM
Aaron White: Factoring word knowledge

[Thursday, March 02, 2017 at 3:30 PM in Meliora 203]

Goergen Institute for Data Science Speaker

Abstract: Knowing a language's words is a fundamental component of knowing that language. A major part of knowing a word is knowing (i) what it means and (ii) how it distributes. A long-standing question in the linguistics literature is how these two kinds of knowledge are related. In this talk, I discuss two lines of work focused on how the meaning-distribution relationship is instantiated in the domain of verbs—in particular, how verbs' semantic arguments are mapped to their syntactic arguments. Taking a cue from recent work in computational semantics, I show how this question can be approached as a problem of multiview factorization under theoretically motivated constraints.

In the first part of the talk, I focus on the problem of determining the mapping from semantic arguments to nominals, developing a generalization of Dowty's seminal prototype-theoretic approach to semantic roles. In the second part of the talk, I turn to the problem of determining the mapping from semantic arguments to clauses, developing a model for jointly inducing (a) verbs' semantic types and (b) probabilistic rules of projection from those semantic types to syntactic types. I conclude with prospects for a unified computational model of syntactic and semantic argument-taking.
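
For readers outside computational semantics, the sketch below illustrates the general shape of such a factorization: a hypothetical verb-by-syntactic-frame matrix is factored into low-rank verb and frame factors under a non-negativity constraint. The toy data, rank, and method (plain NMF) are assumptions for illustration, not the models developed in the talk.

```python
# Illustration of casting the meaning-distribution relationship as a matrix
# factorization problem. The acceptability ratings below are invented.
import numpy as np

rng = np.random.default_rng(0)

verbs = ["think", "want", "know", "tell"]
frames = ["__ that S", "__ NP to VP", "__ NP", "__ whether S"]

# Hypothetical verb-by-frame acceptability ratings on a 0-1 scale.
X = np.array([
    [1.0, 0.1, 0.2, 0.3],
    [0.1, 1.0, 0.9, 0.0],
    [1.0, 0.2, 0.6, 1.0],
    [0.9, 0.8, 0.9, 0.8],
])

def nmf(X, rank=2, iters=500):
    """Multiplicative-update non-negative matrix factorization: X ~= W @ H."""
    n, m = X.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X)
print("verb factors (one row per verb):\n", np.round(W, 2))
print("reconstruction error:", round(float(np.linalg.norm(X - W @ H)), 3))
```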

Bio: Aaron Steven White is a postdoctoral researcher in the Science of Learning Institute at Johns Hopkins University, with affiliations in the Department of Cognitive Science and the Center for Language and Speech Processing. He received his Ph.D. from the Department of Linguistics at the University of Maryland in 2015 and his B.A. in Linguistics from the University of California, Santa Cruz in 2009. His research aims to uncover the relationship between natural language syntax and semantics using large-scale behavioral and corpus data in conjunction with computational models informed by linguistic theory.

Host: Jeff Runner, jeffrey.runner@rochester.edu


March 3, 2017, 10:15 AM
Nadya Peek: Making Machines that Make: Infrastructure for End-to-End Machine Design

[Friday, March 03, 2017 at 10:15 AM in Goergen Hall, Room 101]

Abstract: Digital fabrication machines such as 3D printers or laser cutters have captured the imaginations of makers worldwide. But current practices don't quite deliver on the promises of a digital fabrication revolution: not everyone is making almost everything yet. I argue we need to move from rapid prototyping to rapid prototyping of rapid prototyping machines, a move that transcends the discussion of additive versus subtractive manufacturing.

Existing digital fabrication tools enable repeatability and precision by using codes to describe machine actions. But the infrastructure used for digital fabrication machines is difficult to extend, modify, and customize. It is very difficult for the end-user to incorporate more forms of control into the workflow. Machine building today is largely the same as it was 50 years ago, despite decades of progress in other fields such as computer science or network engineering.

In this talk, I will present modular hardware, software, and networking protocols I have developed to enable the quick construction of application-specific machines. Machine infrastructure consists of the universal components from which novice machine builders can construct rapid automation.

Bio: Nadya Peek develops infrastructures for fabrication and advanced manufacturing as a postdoctoral researcher in MIT's Center for Bits and Atoms. Her PhD dissertation at the Center for Bits and Atoms is titled "Making Machines that Make: Object-Oriented Hardware Meets Object-Oriented Software". She is an Assembler in the Harvard Berkman Klein Center's Assembly on Cybersecurity and plays in the band Construction.


March 3, 2017, 02:00 PM
Hao Luo: Optimizing Parallel Programs Using Composable Locality Models

[Friday, March 03, 2017 at 2:00 PM in 703 Computer Studies Building]

Abstract: On modern processors, the on-chip cache memory is structured as a hierarchy in order to bridge the rapidly growing disparity between processor peak speed and off-chip memory speed. This design makes a program's performance highly correlated with its memory access pattern and with where the accessed data are positioned within the hierarchy. Locality analysis studies this correlation and optimizes programs accordingly.

However, existing research in locality analysis is rather limited when dealing with contemporary parallel workloads. The performance of these workloads can be significantly influenced by how their threads interactively access data, and the state of the art in locality analysis is neither sufficient nor efficient in modeling such interaction. Therefore, in this dissertation, I will present a set of locality models for analyzing modern parallel workloads. The new models give quantitative insight into how the threads share data. They have a common property, composability, which makes predicting cache miss ratios extremely efficient, especially across a large number of thread and data placements. I will also show how these models enable new optimizations that significantly improve the performance of GPU applications and of parallel workloads on NUMA systems.
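
For readers new to locality analysis, the sketch below shows the classic single-threaded baseline that such models generalize: computing LRU reuse distances from an address trace and deriving a miss-ratio curve for fully associative caches. It is a textbook illustration, not the composable multi-threaded models presented in this defense.

```python
# Baseline locality analysis: LRU reuse distances for a single address trace,
# and the resulting miss-ratio curve for fully associative caches of each size.
from collections import Counter

def reuse_distances(trace):
    """For each access, the number of distinct addresses since its last use (inf on first use)."""
    stack = []                      # most recent at the end; O(n*m), fine for a sketch
    dists = []
    for addr in trace:
        if addr in stack:
            i = stack.index(addr)
            dists.append(len(stack) - 1 - i)
            stack.pop(i)
        else:
            dists.append(float("inf"))
        stack.append(addr)
    return dists

def miss_ratio_curve(trace, max_size=8):
    """Miss ratio of a fully associative LRU cache, for sizes 1..max_size."""
    hist = Counter(reuse_distances(trace))
    n = len(trace)
    curve = {}
    for c in range(1, max_size + 1):
        hits = sum(count for d, count in hist.items() if d != float("inf") and d < c)
        curve[c] = 1.0 - hits / n
    return curve

if __name__ == "__main__":
    trace = ["a", "b", "c", "a", "b", "d", "a", "c"]
    print(miss_ratio_curve(trace, max_size=4))
```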

Reception to follow in the CS Graduate Student Lounge at 5:00 pm.


March 6, 2017, 12:00 PM
Sreepathi Pai: A Sufficiently-Smart Compiler for Graph Algorithms on GPUs

[Monday, March 06, 2017 at 12:00 PM in Computer Studies Building, Room 209]

Abstract: Graphs containing millions of vertices and billions of edges are now commonplace in fields like social network analysis, computational biology, information security, and recommendation systems. Writing programs to efficiently process these graphs is not easy. First, as CPU speeds have stagnated, programmers must turn to accelerators such as Graphics Processing Units (GPUs) to obtain reasonable performance on increasingly larger graphs. Second, since graphs are irregular data structures, traditional compiler techniques like auto-parallelization do not work. Thus, programmers must manually implement complicated parallel algorithms using low-level accelerator-specific programming languages while hoping that the code they are writing will be fast enough. Ideally, we would like programmers to write graph algorithms in a high-level language while a "sufficiently-smart" compiler performs the hard work of producing high-performance code.

In this talk I present IrGL, an explicitly-parallel notation for graph algorithms, and the IrGL compiler, which uses three key throughput optimizations to produce highly optimized code for GPUs. These optimizations reduce the cost of fine-grained synchronization, eliminate serialization bottlenecks in nested parallel loops, and overcome CPU--GPU communication bottlenecks in iterative algorithms. Most handwritten implementations omit these optimizations because of their complexity; by automating them, the IrGL compiler makes it significantly easier to write high-performance graph algorithms for GPUs. Evaluated on eight core graph algorithms, IrGL programs are up to 6x faster (median 1.4x) than corresponding expert-written code.
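
To make the targets of these optimizations concrete, here is a plain Python sketch (not IrGL or GPU code) of the worklist-style BFS pattern common to such graph algorithms; the comments mark where nested parallelism, fine-grained synchronization, and per-round host/device communication arise in a GPU implementation.

```python
# Background illustration only: sequential worklist-style BFS.
def bfs_levels(adj, source):
    """adj: dict mapping node -> list of neighbor nodes. Returns (node -> BFS level, rounds)."""
    level = {source: 0}
    worklist = [source]
    rounds = 0
    while worklist:                      # on a GPU, this emptiness check is a host/device round trip
        next_worklist = []
        for u in worklist:               # outer parallel loop over the worklist
            for v in adj.get(u, []):     # inner loop over edges: irregular, variable length
                if v not in level:       # on a GPU, this update needs fine-grained synchronization
                    level[v] = level[u] + 1
                    next_worklist.append(v)
        worklist = next_worklist
        rounds += 1
    return level, rounds

if __name__ == "__main__":
    adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(bfs_levels(adj, "a"))
```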

The IrGL compiler demonstrates that domain-specific compilers can significantly improve the experience and lower the complexity of writing high-performance programs on current heterogeneous architectures even for complex problem domains like graph algorithms.

Bio: Sreepathi Pai is currently a Postdoctoral Fellow at the University of Texas at Austin. He received his PhD from the Indian Institute of Science (IISc), Bangalore in 2015. His research interests are in compilers, computer architecture and programming systems for heterogeneous systems that contain accelerators like GPUs. At the University of Texas, he co-wrote the LonestarGPU 2.0 benchmark suite, and developed the IrGL compiler to generate high-performance graph analytics applications for GPUs. His PhD research described the first full coherence scheme for minimally-redundant automatic memory transfers between the CPU and GPU, and proposed improvements to concurrent execution capabilities in GPUs.


March 10, 2017, 10:15 AM
Alexis Hiniker: TBA

[Friday, March 10, 2017 at 10:15 AM in Goergen Hall, Room 101]


March 20, 2017, 12:00 PM
Yufei Ding: High-Level Program Optimizations for Data Analytics

[Monday, March 20, 2017 at 12:00 PM in Computer Studies Building, Room 209]

Abstract: Many modern applications, especially data analytics applications, spend a large number of cycles on unnecessary computations. To find the document most similar to a query document, for instance, these applications typically examine hundreds of thousands of other documents in the dataset that are not the most similar ones. Such redundant computations are hidden among the applications' useful instructions and elude traditional compiler-based code optimizations. My work harnesses these hidden but significant optimization opportunities by raising the level of program optimization from implementations to algorithms, and from instructions to formulas.
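
As one illustration of eliminating computation at the algorithm level rather than the instruction level (a generic technique shown for flavor, not necessarily the specific optimizations developed in this work), the sketch below skips candidate documents whose triangle-inequality lower bound already rules them out of a nearest-neighbor search.

```python
# Skip candidates whose triangle-inequality lower bound cannot beat the current
# best distance, avoiding the full distance computation for most of the dataset.
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def nearest_with_pruning(query, candidates, landmark):
    """Return (index of nearest candidate, number of full distance computations)."""
    d_q_l = euclidean(query, landmark)
    d_c_l = [euclidean(c, landmark) for c in candidates]   # reusable across many queries
    best_i, best_d, full_computes = -1, float("inf"), 0
    for i, c in enumerate(candidates):
        lower_bound = abs(d_q_l - d_c_l[i])    # |d(q,l) - d(c,l)| <= d(q,c)
        if lower_bound >= best_d:
            continue                            # cannot beat the current best: skip it
        full_computes += 1
        d = euclidean(query, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, full_computes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    docs = rng.random((1000, 50))
    q = rng.random(50)
    landmark = docs.mean(axis=0)
    idx, computed = nearest_with_pruning(q, docs, landmark)
    print(f"nearest index {idx}, full distances computed: {computed} of {len(docs)}")
```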

Bio: Yufei Ding is a Ph.D. candidate in the Computer Science Department at North Carolina State University. She received her B.S. and M.S. in Physics from the University of Science and Technology of China and the College of William and Mary, respectively. In 2012, she started her Ph.D. study in Computer Science. Her research interest resides at the intersection of compiler technology and (big) data analytics, with a focus on enabling high-level program optimizations for data analytics and other data-intensive applications. Yufei has been actively publishing in major venues in both the computer systems and data analytics areas, such as ASPLOS, PLDI, VLDB, ICDE, and ICML. She was the recipient of the NCSU Computer Science Outstanding Research Award in 2016.


March 24, 2017, 10:15 AM
Somayeh Sardashti: TBA

[Friday, March 24, 2017 at 10:15 AM in Goergen Hall, Room 101]


March 27, 2017, 12:00 PM
Ang Chen: Secure Diagnostics and Forensics with Network Provenance

[Monday, March 27, 2017 at 12:00 PM in Computer Studies Building, Room 209]

Abstract: Distributed systems are behind many important services that we use every day, such as online banking, social media, and video conferencing. However, in a large-scale distributed system, many things can go wrong: routers can be misconfigured, programs can be buggy, and computers can be compromised by an attacker. To investigate these problems, system administrators need to play the role of 'part-time detectives'. Their tasks would be much easier if there were a way for them to ask the system to explain certain events, such as 'Why was this particular route chosen?'.

My work leverages data provenance - a concept from the database community - to enable distributed systems to offer such explanations. At a high level, provenance tracks causality between network states and events, and produces a detailed, structured explanation of any event of interest. Such information can be a helpful starting point when investigating a variety of problems, ranging from benign misconfigurations to malicious attacks.
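
As a toy sketch of the underlying idea (with invented event names, and glossing over the distributed, secure, and query-language aspects of the actual systems), one can record, for each derived event, the events it was directly derived from, and then answer "why" questions by walking that derivation graph:

```python
# Toy provenance store: record derivations, then explain an event by printing
# its full derivation tree. Event names below are invented for illustration.
class ProvenanceStore:
    def __init__(self):
        self.parents = {}          # event -> list of events it was derived from

    def record(self, event, derived_from=()):
        self.parents.setdefault(event, []).extend(derived_from)

    def explain(self, event, depth=0, seen=None):
        """Print the derivation tree rooted at an event."""
        seen = set() if seen is None else seen
        print("  " * depth + event)
        if event in seen:
            return
        seen.add(event)
        for cause in self.parents.get(event, []):
            self.explain(cause, depth + 1, seen)

if __name__ == "__main__":
    prov = ProvenanceStore()
    prov.record("config: prefer link A on router r1")
    prov.record("advertisement: route to 10.0.0.0/8 via link A")
    prov.record("route chosen: 10.0.0.0/8 -> link A",
                derived_from=["config: prefer link A on router r1",
                              "advertisement: route to 10.0.0.0/8 via link A"])
    prov.explain("route chosen: 10.0.0.0/8 -> link A")
```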

In this talk, I will present one technique in detail that can accurately pinpoint the root causes of problems by comparing the provenance of 'correct' and 'incorrect' events. I will then give an overview of my other work on network provenance, including an extension of provenance to repair network programs, a generalization of provenance to avoid collateral damage during repair, and an application of secure provenance to the Internet's data plane.

Bio: Ang Chen is a fifth-year Ph.D. student in the Department of Computer and Information Science at the University of Pennsylvania, advised by Professor Andreas Haeberlen. His research interests are distributed systems, networking, and security. Besides network provenance, he has also worked on systems and network security, including projects on detecting covert timing channels, mitigating attacks in cyber-physical systems, and defending against DDoS attacks.