URCS Projects

Research Area: systems

  • Many existing programs have dynamic parallelism at the high level but are hard to parallelize because of uncertainty in their implementation and input.  Behavior-oriented parallelization (BOP) provides a suggestion interface with which a user can mark possible parallelism, together with run-time support that guarantees correctness and efficient execution whether or not the hints are correct.  BOP is based on frequent, input-dependent behavior rather than definite behavior.  It enables parallelization from partial information and is useful for incrementally parallelizing a program or streamlining it for common uses.  (A sketch of the hint style follows.)
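    A minimal sketch of the hint style, in C.  The marker functions below are hypothetical stand-ins for BOP's possibly-parallel-region interface, stubbed out so the example compiles; a real BOP run-time would speculatively execute marked regions in parallel and verify correctness before committing:

      #include <stdio.h>

      /* Hypothetical stand-ins for BOP's region markers (illustrative
         names, not the actual API).  Stubs keep the sketch compilable. */
      static void bop_ppr_begin(void) { /* begin possibly parallel region */ }
      static void bop_ppr_end(void)   { /* end region; speculative commit */ }

      static void process(int item) { printf("item %d\n", item); }

      int main(void) {
          int items[] = {1, 2, 3, 4};
          for (int i = 0; i < 4; i++) {
              bop_ppr_begin();   /* hint: iterations are LIKELY independent */
              process(items[i]); /* may conflict on some inputs */
              bop_ppr_end();     /* a wrong hint costs speed, not correctness */
          }
          return 0;
      }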

  • Our compiler research addresses the twin concerns of correctness and performance, with a focus on recurrence in the use of data—determining whether a complex program has an inherent pattern of data reuse and, if so, to what degree that pattern can be modeled, measured, and modified (improved).

  • This project addresses the challenge of mainstream parallelism using a combined hardware-software approach. The key idea is to identify common time-critical operations, across a variety of applications and programming models, that might be accelerated or simplified by new architectural mechanisms, and then to design those mechanisms in as general a fashion as possible. Candidate mechanisms include the alert-on-update notification mechanism, programmable data isolation, adaptive cooperative caching, and fine-grain access control.

  • Synchronization serves to constrain the interleaving of actions performed by multiple threads of control (e.g., on a multicore processor), allowing only correct executions.  Over the years, this ongoing project has developed some of the most efficient and widely used algorithms for locking, concurrent data structures, and transactional memory.
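    One widely used algorithm from this line of work is the MCS queue lock, in which waiting threads form an explicit queue and each spins only on a flag in its own node, avoiding global cache contention.  A minimal C11 sketch, simplified to sequentially consistent atomics:

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stddef.h>

      typedef struct mcs_node {
          _Atomic(struct mcs_node *) next;
          atomic_bool locked;
      } mcs_node;

      typedef _Atomic(mcs_node *) mcs_lock;     /* tail of the waiter queue */

      void mcs_acquire(mcs_lock *L, mcs_node *me) {
          atomic_store(&me->next, NULL);
          atomic_store(&me->locked, true);
          mcs_node *pred = atomic_exchange(L, me);  /* append self at tail */
          if (pred != NULL) {                       /* lock is held */
              atomic_store(&pred->next, me);        /* link behind predecessor */
              while (atomic_load(&me->locked))
                  ;                                 /* spin on LOCAL flag only */
          }
      }

      void mcs_release(mcs_lock *L, mcs_node *me) {
          mcs_node *succ = atomic_load(&me->next);
          if (succ == NULL) {                       /* no known successor */
              mcs_node *expect = me;
              if (atomic_compare_exchange_strong(L, &expect, NULL))
                  return;                           /* queue empty: lock freed */
              while ((succ = atomic_load(&me->next)) == NULL)
                  ;                                 /* successor mid-enqueue */
          }
          atomic_store(&succ->locked, false);       /* hand the lock over */
      }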

  • Server systems provide computing or storage to a potentially large number of simultaneous clients.  Our research responds to the increasing emphasis on information (data) rather than mere computation, with particular attention to system manageability and dependability.

  • This project investigates system-level techniques to better manage parallel and concurrent I/O for high-end computing.  Specific techniques include model-driven performance debugging, multi-level I/O tracing, 2-competitive I/O prefetching, and the exploitation of emerging solid-state storage technology.
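    The project's prefetching mechanisms live inside the kernel; purely as an illustration of the idea of informed prefetching, the standard posix_fadvise call lets an application disclose an upcoming sequential scan so the OS can read ahead of demand:

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Illustration only: user-level prefetch hints via posix_fadvise.
         The project's own techniques (e.g., 2-competitive aggressiveness
         control) are kernel-level and not shown here. */
      int main(int argc, char **argv) {
          if (argc < 2) return 1;
          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL); /* whole file */
          posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);   /* prefetch soon */

          char buf[1 << 16];
          while (read(fd, buf, sizeof buf) > 0)
              ;   /* reads should now hit prefetched pages */
          close(fd);
          return 0;
      }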

  • This project investigates profile-driven performance models for multi-component, data-intensive online services.  It explores a variety of techniques and has, among other things, identified previously unknown I/O performance bugs in Linux.

  • Reuse distance and program footprint are two basic metrics we use to study the twin concerns of memory system performance and correctness.  Reuse distance characterizes the locality of individual accesses, while footprint measures the volume of distinct data a program touches in a window of execution.  Together they help determine whether a complex program has an inherent pattern of data reuse and, if so, to what degree that pattern can be modeled, measured, and modified (improved).  (A worked example follows.)
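    Concretely, the reuse distance of an access is the number of distinct data elements touched since the previous access to the same datum (infinite on a first access).  A naive O(n^2) computation over a toy trace:

      #include <stdio.h>

      int main(void) {
          const char trace[] = "abcacbb";      /* toy access trace */
          int n = (int)sizeof trace - 1;
          for (int i = 0; i < n; i++) {
              int prev = -1;                   /* previous access to trace[i] */
              for (int j = i - 1; j >= 0; j--)
                  if (trace[j] == trace[i]) { prev = j; break; }
              if (prev < 0) { printf("%c: inf\n", trace[i]); continue; }
              int distinct = 0;                /* distinct data in between */
              for (int j = prev + 1; j < i; j++) {
                  int seen = 0;
                  for (int k = prev + 1; k < j; k++)
                      if (trace[k] == trace[j]) { seen = 1; break; }
                  if (!seen) distinct++;
              }
              printf("%c: %d\n", trace[i], distinct);
          }
          return 0;
      }

    On this trace the second access to 'a' has distance 2 (b and c intervene); a short distance predicts a cache hit under LRU.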

  • This project monitors computers in the field, in real time, and records memory errors as they occur.  It reveals that soft (transient) errors are orders of magnitude less frequent than previously reported.  It combines the soft and hard (permanent) error rates to predict failure rates and patterns for systems as a whole.

  • Transactional memory (TM) allows programmers to specify operations that should execute atomically, without worrying about how that atomicity should be achieved.  Downloaded to thousands of sites worldwide, RSTM provides a diverse suite of efficient, mutually compatible TM run-time systems.
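    RSTM itself is a C++ library; purely to illustrate the atomic-block programming model that TM supports, here is the same idea expressed with GCC's transactional-memory extension (compile with -fgnu-tm):

      /* Not RSTM's API; GCC's -fgnu-tm extension, shown only to
         illustrate the programming model.  The programmer states WHAT
         must be atomic; the TM run-time decides HOW. */
      static long balance_a = 100, balance_b = 0;

      void transfer(long amount) {
          __transaction_atomic {
              balance_a -= amount;   /* both updates commit together */
              balance_b += amount;   /* or not at all; no locks to manage */
          }
      }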