Hajim School of Engineering and Applied Sciences
Department of Computer Science

Systems Research

Computer systems research at URCS spans a range of topics, including program analysis and compiler technology; parallel, distributed, and mobile computing; cluster-based server technology; low-power hardware and software; processor and memory architecture; concurrency and synchronization; programming environments; and programming language design.

The department's core faculty in Systems consists of John Criswell, Chen Ding, Sandhya Dwarkadas, Engin Ipek, Michael Scott, and Kai Shen. In addition, Wendi Heinzelman and Michael Huang of Electrical and Computer Engineering have joint appointments in CS and are active in department research.

Faculty

John Criswell

John Criswell's research interests focus on computer security and automatic compiler transformations that can be used to enforce security policies on commodity software.  He joined the department in 2014 after completing his PhD at the University of Illinois at Urbana-Champaign.

Chen Ding

Chen Ding is the recipient of a DOE Early Career Principal Investigator award and an NSF CAREER award. He joined URCS in July, 2000, after receiving his Ph.D. at Rice University. His research is in compiler technology, specifically in using compilers to generate programs that make better use of the caches on modern machines. For large scientific applications drawn from major benchmark suites, Chen's tools have been able to reduce the demand for memory bandwidth by as much as 80%.

Sandhya Dwarkadas

Sandhya Dwarkadas is the recipient of an NSF Postdoctoral Fellowship and an NSF CAREER award, and has done work in the areas of software distributed shared memory, parallel architecture, and performance evaluation. She co-led the Cashmere and InterWeave distributed sharing projects, and leads the ARCH (Architecture, Run-time, and Compiler integration for High-performance computing) project.

Engin Ipek

Engin Ipek holds a special joint position between Computer Science and Electrical & Computer Engineering. He is the recipient of an NSF CAREER award, an ASPLOS best paper award, an IEEE MICRO Top Picks award, and a Communications of the ACM Research Highlights designation. Prior to joining Rochester, he was a researcher in the Computer Architecture Group at Microsoft Research (2007-2009). His research interests are in the broad area of computer architecture, with an emphasis on multicore processors, hardware-software interaction, and power-efficient computing.

Michael Scott

Michael Scott is an ACM Fellow, an IEEE Fellow, and the recipient of an IBM Faculty Development Award, the University’s Goergen Teaching Award, several best paper awards, and the 2006 Dijkstra Prize in Distributed Computing. He is widely known for his work on parallel operating systems and synchronization algorithms. He co-led the Cashmere and InterWeave distributed sharing projects. His textbook on programming language design and implementation (Programming Language Pragmatics, Morgan Kaufmann, third edition, 2009) is a leading reference in the field, with adoptions at over 200 schools.

Kai Shen

Kai Shen is the recipient of an NSF CAREER award. He joined the department in September, 2002, after receiving his Ph.D. from the University of California at Santa Barbara. His thesis work focused on clustering, replication, and resource management for scalable network services; results from this work have been adopted by the Ask.com Internet search site. Other contributions include compiler and run-time support for threaded MPI execution and sparse Gaussian elimination. At Rochester he heads the Neptune project.

Wendi Heinzelman

Wendi Heinzelman is the recipient of an NSF CAREER award, an ONR Young Investigator award, and the University’s Curtis Teaching Award. She came to Rochester in January 2001, after receiving her Ph.D. from MIT. Her research involves algorithms and protocols for wireless sensor networks and wireless video delivery. She leads the Wireless Communications and Networking group.

Michael Huang

Michael Huang joined the faculty in September 2002, after receiving his Ph.D. from the University of Illinois at Urbana-Champaign. His interests lie in computer architecture, processor microarchitecture, energy-efficient system and processor architecture, and processing-in-memory.

Parallel systems research has been a central part of the department's focus since its founding in 1974. Early work on the Rochester Intelligent Gateway (RIG) project was the direct predecessor to the Accent and Mach projects at CMU, led by UR alum Rick Rashid (now Senior Vice President for Research at Microsoft). These in turn led to such commercially important operating systems as Compaq Tru64 and Apple Mac OS X. In the mid to late 1980s, the department's 128-node Butterfly Parallel Processor was the largest parallel computer anywhere in academia. It supported a wide variety of projects, including the Instant Replay debugging system, the Bridge parallel file system, the Elmwood and Psyche parallel operating systems, and the development of contention-free synchronization. Our work from this era is very heavily cited, and has influenced the research of many other groups. Our synchronization algorithms, for example, have been adopted by many commercial systems; several appear in the java.util.concurrent standard library. In the early 1990s we moved increasingly into architectural issues, with a heavy emphasis on detailed simulation. Our Mint simulation testbed has been exported to more than a hundred sites worldwide, and forms the basis for more recent simulators from at least three other groups.

As a group, our research spans a range of topics, including program analysis and compiler technology; parallel, distributed, and mobile computing; cluster-based server technology; low-power hardware and software; processor and memory architecture; and concurrency and synchronization. We also maintain active interests in a wide variety of related topics, including parallel architectures, programming environments, computational science, and programming language design.

One of the strengths of the group is the way in which each faculty member's research interests tie into the others', allowing us to leverage infrastructure developed by the group as a whole. We also interact closely with the parallel user community, including researchers in Astrophysics, Chemistry, Biology, Laser Energetics, and, within our own department, Computer Vision, Robotics, Planning, and Data Mining. The multi-PI Computer Systems and Engineering Group spans the CS and ECE departments. Through the 1980s and 90s, we worked with colleagues in AI to secure an unprecedented four consecutive major 5-year awards from the National Science Foundation Research Infrastructure program. Prof. Dwarkadas's work on the TreadMarks and FASTLINK projects, in collaboration with colleagues at Rice University and at the National Institutes of Health, was instrumental in discovering the gene believed responsible for Parkinson's disease.

The systems group enjoys outstanding computational resources. Current large-scale machines include a 72-node (144-processor) Linux cluster, a 32-processor IBM p690 machine (acquired with the assistance of a $1.2M grant from IBM), 8- and 16-processor SunFire multiprocessors (supported by grants of over $750K from Sun Microsystems Labs), three 8-core, 32-thread Sun T1000 (Niagara) machines, a 16-core (128-thread) Niagara2 machine, and a variety of smaller x86 machines. The group has also enjoyed access to a variety of machines at the University's Center for Integrated Research Computing, the Laboratory for Laser Energetics, the Pittsburgh Supercomputing Center, and other sites.

Project Pages

BOP: Behavior-oriented Parallelization aka Parallel Programming by Hints

Behavior-oriented parallelization (BOP) provides a suggestion interface for a user to mark possible parallelism, along with run-time support to guarantee correctness and efficient execution whether the hints are correct or not.  It enables program parallelization based on partial information and is useful for incrementally parallelizing a program or streamlining it for common uses.
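
The sketch below is a hedged illustration of the hint idiom, not BOP's actual interface; the PPR_BEGIN/PPR_END markers are hypothetical names. Here they expand to nothing, so the program runs sequentially and remains correct regardless of the hints, which mirrors the BOP guarantee; a BOP-style runtime could instead execute the marked regions speculatively in parallel and fall back to the sequential order when speculation fails.

#include <stdio.h>

#define PPR_BEGIN(id)  /* hypothetical hint: the next block may run in parallel */
#define PPR_END(id)    /* hypothetical hint: end of the possibly parallel region */

static long work(int i) {              /* stand-in for an expensive, mostly independent task */
    long s = 0;
    for (int j = 0; j < 100000; j++)
        s += (long)i * j;
    return s;
}

int main(void) {
    long total[8];
    for (int i = 0; i < 8; i++) {
        PPR_BEGIN(i);                  /* suggestion only: correctness does not depend on it */
        total[i] = work(i);
        PPR_END(i);
    }
    for (int i = 0; i < 8; i++)
        printf("task %d -> %ld\n", i, total[i]);
    return 0;
}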

Compiler Research, including Distance- and Footprint-Based Locality Analysis and Optimization

Our compiler research addresses the twin concerns of correctness and performance, with a focus on recurrence in the use of data—determining whether a complex program has an inherent pattern of data reuse and, if so, to what degree that pattern can be modeled, measured, and modified (improved).
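
As a hedged, hand-written illustration of the kind of transformation this line of work targets (not output of the group's tools), the sketch below contrasts a column-order sweep of a row-major C array with the interchanged, unit-stride version that reuses each cache line:

#include <stdio.h>

#define N 1024
static double a[N][N];                 /* row-major: a[i][0..N-1] are contiguous */

static void sweep_column_order(void) { /* strided accesses: poor cache-line reuse */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] += 1.0;
}

static void sweep_row_order(void) {    /* interchanged loops: unit stride, good reuse */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] += 1.0;
}

int main(void) {
    sweep_column_order();
    sweep_row_order();
    printf("a[0][0] = %.1f\n", a[0][0]);   /* both sweeps perform the same updates */
    return 0;
}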

CoSyn: Communication and Synchronization Mechanisms for Emerging Multi-Core Processors

This project addresses the challenge of mainstream parallelism using a combined hardware-software approach. The key idea is to identify common time-critical operations, across a variety of applications and programming models, that might be accelerated or simplified by new architectural mechanisms, and then to design those mechanisms in as general a fashion as possible. Candidate mechanisms include the alert-on-update notification mechanism, programmable data isolation, adaptive cooperative caching, and fine-grain access control.

High-Performance Synchronization for Shared-Memory Parallel Programs

Synchronization serves to constrain the interleaving of actions performed by multiple threads of control (e.g., on a multicore processor), allowing only correct executions.  Over the years, this ongoing project has developed some of the most efficient and widely used algorithms for locking, concurrent data structures, and transactional memory.
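
The sketch below is a minimal illustration of one well-known result in this space, a queue-based spin lock in the style of the MCS lock, written with C11 atomics and POSIX threads; it is a simplified teaching version, not the project's production code. Each thread spins only on the flag in its own queue node, so waiting threads do not contend on a shared lock word.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct qnode {
    _Atomic(struct qnode *) next;      /* successor in the waiting queue */
    atomic_bool             locked;    /* true while this thread must wait */
} qnode_t;

typedef struct {
    _Atomic(qnode_t *) tail;           /* last node in the queue, NULL when free */
} qlock_t;

static void qlock_acquire(qlock_t *L, qnode_t *I) {
    atomic_store(&I->next, (qnode_t *)NULL);
    atomic_store(&I->locked, true);
    qnode_t *pred = atomic_exchange(&L->tail, I);      /* join the queue */
    if (pred != NULL) {
        atomic_store(&pred->next, I);                  /* link behind predecessor */
        while (atomic_load(&I->locked))                /* spin on our own flag only */
            ;
    }
}

static void qlock_release(qlock_t *L, qnode_t *I) {
    qnode_t *succ = atomic_load(&I->next);
    if (succ == NULL) {
        qnode_t *expected = I;
        if (atomic_compare_exchange_strong(&L->tail, &expected, (qnode_t *)NULL))
            return;                                    /* no one waiting: lock is free */
        while ((succ = atomic_load(&I->next)) == NULL)
            ;                                          /* successor is still linking in */
    }
    atomic_store(&succ->locked, false);                /* hand the lock to the successor */
}

static qlock_t lock = { NULL };
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        qnode_t me;                    /* each acquisition supplies its own node */
        qlock_acquire(&lock, &me);
        counter++;                     /* protected critical section */
        qlock_release(&lock, &me);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld (expect 400000)\n", counter);
    return 0;
}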

Operating System Support for I/O-Intensive Online Servers

Server systems provide computing or storage to a potentially large number of simultaneous clients.  Our research addresses the increasing emphasis on information (data) rather than mere computing, and specifically addresses system manageability and dependability.

Performance Modeling and Anomaly Management for Complex Systems

This project investigates profile-driven performance models for multi-component, data-intensive online services.  It explores a variety of techniques, and has, among other things, identified previously unknown I/O performance bugs in Linux.

Reuse Distance and Footprint Based Locality Analysis and Optimization

Reuse distance and program footprint are two basic metrics we use to study the twin concerns of memory system performance and correctness, with a focus on recurrence in the use of data—determining whether a complex program has an inherent pattern of data reuse and, if so, to what degree that pattern can be modeled, measured, and modified (improved).
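
The sketch below is a toy illustration of the reuse-distance metric, not the group's measurement infrastructure: a simple LRU stack simulation that reports, for each access in a small synthetic trace, the number of distinct addresses touched since the previous access to the same address.

#include <stdio.h>

#define TRACE_LEN 12

int main(void) {
    int trace[TRACE_LEN] = {1, 2, 3, 1, 2, 4, 1, 5, 2, 3, 4, 5};
    int stack[TRACE_LEN];              /* stack[0] is the most recently used address */
    int depth = 0;

    for (int i = 0; i < TRACE_LEN; i++) {
        int addr = trace[i], pos = -1;
        for (int j = 0; j < depth; j++)            /* find the previous use, if any */
            if (stack[j] == addr) { pos = j; break; }

        if (pos < 0) {
            printf("access %d: reuse distance = inf (first use)\n", addr);
            pos = depth++;                         /* first use grows the LRU stack */
        } else {
            /* reuse distance = distinct addresses touched since the last use */
            printf("access %d: reuse distance = %d\n", addr, pos);
        }
        for (int j = pos; j > 0; j--)              /* move addr to the top (MRU) */
            stack[j] = stack[j - 1];
        stack[0] = addr;
    }
    return 0;
}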

Rochester Memory Hardware Error Research Project

This project monitors computers in the field, in real time, and records memory errors as they occur. It reveals that soft (transient) errors are orders of magnitude less frequent than previously reported.  It combines the soft and hard (permanent) error rates to predict failure rates and patterns for systems as a whole.

The Rochester Software Transactional Memory (RSTM) system

Transactional memory (TM) allows programmers to specify operations that should execute atomically, without worrying about how that atomicity should be achieved.  Downloaded to thousands of sites worldwide, RSTM provides a diverse suite of efficient, mutually compatible TM run-time systems.
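
The sketch below illustrates the transactional programming idiom only; tm_begin/tm_end are hypothetical names backed by a deliberately degenerate runtime that serializes all transactions with a single global mutex, whereas a real system such as RSTM executes transactions speculatively and rolls back on conflict.

#include <pthread.h>
#include <stdio.h>

/* degenerate "TM runtime": one global lock makes every transaction atomic */
static pthread_mutex_t tm_global_lock = PTHREAD_MUTEX_INITIALIZER;
static void tm_begin(void) { pthread_mutex_lock(&tm_global_lock); }
static void tm_end(void)   { pthread_mutex_unlock(&tm_global_lock); }

static long balance_a = 100, balance_b = 0;

static void *transfer(void *arg) {
    (void)arg;
    tm_begin();                        /* the two updates appear atomic to other threads */
    balance_a -= 10;
    balance_b += 10;
    tm_end();
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, transfer, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("a=%ld b=%ld (sum stays 100)\n", balance_a, balance_b);
    return 0;
}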
