CSC 2/458

Parallel and Distributed Systems

Spring 2016

Possible semester projects

PROJECT PROPOSAL DUE (one page): March 22, 2016; in-class 2-minute introduction: March 24, 2016

You are free to work together in groups of 2 or 3 (with a commensurate increase in expectation) for your term project. The proposal must be in PDF format and turned in using the electronic TURN-IN script. In the proposal, you should describe the concrete goals for your project, any related materials from papers and web resources, and anything else you feel is relevant. Please provide sufficient detail in the project proposal to demonstrate your preparation for the project. Please make sure to specify all project participants in the proposal.

The following are listed in no particular order. Also note that this is in no way an exhaustive list; feel free to suggest a project of your own.

Application parallelization/parallel machine/parallel abstraction exploration:
Parallelize some interesting application using either traditional lock-based shared memory or message passing, or, more interestingly, contrast these approaches with the use of transactional memory or languages such as Cilk, TBB, OpenMP, UPC, X10, ... Alternatively, explore abstractions such as MapReduce, or those provided by Pregel or GraphLab. Analyze the behavior of different parallel systems, including the BlueGene/Q at the Center for Integrated Research Computing (CIRC).
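
By way of illustration, here is a minimal sketch of the directive-based style in C++ with OpenMP; the arrays and the reduction are placeholders, not a suggested application, and a real project would target something far more interesting.

    // Minimal OpenMP sketch: a parallel reduction over two arrays.
    // Compile with, e.g., g++ -fopenmp sketch.cpp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0);

        double sum = 0.0;
        // Each thread accumulates a private partial sum; OpenMP combines them.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i) {
            double d = a[i] - b[i];
            sum += d * d;
        }

        std::printf("sum = %f\n", sum);
        return 0;
    }
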
We have several research groups associated with the CIRC whose compute-intensive codes could benefit from efficient parallelization, both on clusters and on the BlueGene rack at the CIRC. As part of this project, you could work with one of these groups to develop an efficient parallel implementation of their code. Please send me a note if you are interested and I will forward you more information; I can also put you in touch with the corresponding sponsor. Here are some example projects:

Software Transactional Memory (sponsor: Michael Scott):
Over the past decade, URCS has been one of the leading sites for research in transactional memory (TM). We have multiple TM implementations available, both hardware and software. Possible projects include:
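
Whatever the specific project, it helps to have a feel for the programming model. The sketch below uses GCC's experimental -fgnu-tm extension purely as an illustration; the URCS hardware and software TM systems expose their own interfaces.

    // Transactional-memory sketch, using GCC's -fgnu-tm extension
    // (compile with g++ -fgnu-tm -pthread).  The two updates below appear
    // to other threads to happen atomically, with no explicit lock.
    #include <cstdio>
    #include <thread>
    #include <vector>

    static long from_acct = 1000, to_acct = 0;

    void transfer(long amount) {
        __transaction_atomic {
            from_acct -= amount;
            to_acct   += amount;
        }
    }

    int main() {
        std::vector<std::thread> ts;
        for (int i = 0; i < 4; ++i)
            ts.emplace_back([] { for (int j = 0; j < 1000; ++j) transfer(1); });
        for (auto &t : ts) t.join();
        std::printf("from=%ld to=%ld\n", from_acct, to_acct);  // total is preserved
    }
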

Nonblocking data structures (sponsor: Michael Scott):
These are thread-safe concurrent data structures that work without locks, and are immune to the performance hiccups caused by preemption. Possible projects include:
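
To make the idea concrete, here is a minimal sketch of one classic example, a Treiber-style lock-free stack built on compare-and-swap. It is illustrative only: it leaks popped nodes rather than solving the memory-reclamation and ABA problems that a serious implementation (and a serious project) would have to address.

    // Minimal sketch of a lock-free (Treiber) stack using C++11 atomics.
    #include <atomic>

    template <typename T>
    class TreiberStack {
        struct Node { T value; Node* next; };
        std::atomic<Node*> head{nullptr};
    public:
        void push(T v) {
            Node* n = new Node{v, head.load(std::memory_order_relaxed)};
            // Retry until no other thread has changed head in the meantime;
            // on failure the CAS refreshes n->next with the current head.
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) { }
        }
        bool pop(T& out) {
            Node* n = head.load(std::memory_order_acquire);
            while (n && !head.compare_exchange_weak(n, n->next,
                                                    std::memory_order_acquire,
                                                    std::memory_order_acquire)) { }
            if (!n) return false;
            out = n->value;   // node intentionally leaked (no safe reclamation here)
            return true;
        }
    };
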

Evaluate Sharing-Aware Mapping on Multicore Systems (sponsor: Sandhya Dwarkadas):
Multicore systems share resources such as on-chip interconnects, last-level caches, and off-chip bandwidth, and incur non-uniform latencies of access. Users of parallel applications written for these environments often have to understand the topology to get the best stand-alone performance. In the presence of multiprogramming, which dynamically changes the resources allocated to the parallel application, extracting the best performance available from the system is next to impossible. In a recent kernel-level implementation and corresponding publication, we have developed a Sharing-Aware Mapper, which identifies and reacts to both sharing and resource contention. Your task will be to develop both microbenchmarks and real applications to stress test the system.
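
One possible microbenchmark along these lines is sketched below: two threads increment counters that either share a cache line or sit on separate lines. Pinning the threads to cores that do or do not share a last-level cache (e.g., with taskset or pthread_setaffinity_np) is what would actually exercise the mapper; the code itself only generates the sharing pattern.

    // Contention microbenchmark sketch: same cache line vs. separate lines.
    #include <chrono>
    #include <cstdio>
    #include <thread>

    struct alignas(64) PaddedCounter { volatile long v = 0; };

    static PaddedCounter separate[2];     // one cache line each
    static volatile long shared_line[2];  // very likely on the same line

    template <typename F>
    double time_two_threads(F f) {
        auto t0 = std::chrono::steady_clock::now();
        std::thread a(f, 0), b(f, 1);
        a.join(); b.join();
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        const long iters = 50000000;
        double t_shared = time_two_threads([&](int id) {
            for (long i = 0; i < iters; ++i) shared_line[id]++;
        });
        double t_separate = time_two_threads([&](int id) {
            for (long i = 0; i < iters; ++i) separate[id].v++;
        });
        std::printf("same line: %.2fs   separate lines: %.2fs\n",
                    t_shared, t_separate);
    }
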

Large-scale graph-based computations (sponsor: Sandhya Dwarkadas):
An unprecedented increase in the volumes of data that can be generated and stored has resulted in the need for data analysis tools that can handle this volume. Systems such as MapReduce, GraphLab, and Grappa have been developed to support analysis on both structured and unstructured data in parallel. In this project, you could experiment with the pros and cons of some of these frameworks when developing classic machine learning algorithms to handle data from different domains. One example project would consist of implementing algorithms to determine the k-nearest neighbors of a query data point.
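
For flavor, the shared-memory core of the k-nearest-neighbor example might look like the sketch below (brute-force distances followed by a partial sort); the interesting part of the project is mapping this compute-then-combine pattern onto one of the frameworks above and measuring how well each handles it at scale.

    // Brute-force k-NN core, parallelized with OpenMP (g++ -fopenmp).
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Neighbor { double dist; int index; };

    std::vector<Neighbor> knn(const std::vector<std::vector<double>>& data,
                              const std::vector<double>& query, int k) {
        std::vector<Neighbor> all(data.size());
        // Distance computations are independent, so they parallelize trivially.
        #pragma omp parallel for
        for (int i = 0; i < (int)data.size(); ++i) {
            double d = 0.0;
            for (std::size_t j = 0; j < query.size(); ++j) {
                double diff = data[i][j] - query[j];
                d += diff * diff;
            }
            all[i] = {std::sqrt(d), i};
        }
        // Keep only the k closest points.
        std::size_t kk = std::min<std::size_t>(k, all.size());
        std::partial_sort(all.begin(), all.begin() + kk, all.end(),
                          [](const Neighbor& a, const Neighbor& b) {
                              return a.dist < b.dist;
                          });
        all.resize(kk);
        return all;
    }
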

Deterministic Parallel Ruby (DPR) (sponsor: Michael Scott):
For the thesis he defended last year (and presented at PLDI'14), graduate student Li Lu developed a parallel dialect of Ruby (based primarily on parallel execution of independent code blocks) and an innovative race detector (determinism checker) named TARDIS, which verifies independence.

Thread wakeup (sponsor: Michael Scott):
Fast "remote" operations are increasingly common on modern machines. Examples of such operations include remote procedure call (RPC) and requests to user-accessible devices and accelerators (network, hard drive, GPU, or compression/encryption/transcoding engine). The costs associated with invoking remote operations are poorly understood, yet essential for good performance.

Software coherence (sponsor: Sandhya Dwarkadas and/or Michael L. Scott):
Around the turn of the century, Michael Scott and I led a pair of projects known as Cashmere and InterWeave. These explored techniques to implement shared-memory programming models on distributed hardware.

Suggestion-based Parallelization Language (sponsor: Chen Ding):
Below are some bullet points provided by Prof. Chen Ding. If you are interested, please contact him for more details.

Shared cache locality management (sponsor: Chen Ding):
This project involves shared cache locality modeling and improvement: in particular, characterizing the shared cache locality in sequential and parallel programs and examining possible solutions.
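
A first step in such a characterization is often a footprint sweep. The sketch below times a fixed number of pseudo-random accesses while the working-set size grows; run alone and then alongside a co-running program, the shift in the inflection points gives a rough picture of how much of the shared cache each program effectively receives. This is a generic illustration, not a prescribed methodology; contact the sponsor for the intended approach.

    // Footprint-sweep sketch: access time vs. working-set size.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const long accesses = 1L << 24;
        for (long kb = 16; kb <= 32 * 1024; kb *= 2) {
            std::vector<long> a(kb * 1024 / sizeof(long), 1);
            long idx = 0, sum = 0;
            auto t0 = std::chrono::steady_clock::now();
            for (long i = 0; i < accesses; ++i) {
                sum += a[idx];
                // Pseudo-random walk so hardware prefetching does not hide misses.
                idx = (idx * 1103515245L + 12345) % (long)a.size();
            }
            double s = std::chrono::duration<double>(
                std::chrono::steady_clock::now() - t0).count();
            std::printf("%6ld KB: %.1f ns/access (sum=%ld)\n",
                        kb, 1e9 * s / accesses, sum);
        }
    }
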

Parallel sorting:
Andrea and Remzi Arpaci-Dusseau, now at UW-Madison, hold the record for the performance of a parallel external sort. Read up on their work, re-implement it, experiment with it, and, if possible, improve it.
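
The full external sort involves careful staging of disk I/O and data distribution across nodes; in memory, though, the basic pattern is the familiar sort-chunks-then-merge shown in the sketch below (a generic illustration, not the Arpaci-Dusseaus' algorithm).

    // Sketch: sort disjoint chunks in parallel threads, then merge.
    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    void parallel_sort(std::vector<int>& v, unsigned nthreads) {
        std::vector<std::thread> workers;
        std::size_t chunk = (v.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t lo = std::min(v.size(), t * chunk);
            std::size_t hi = std::min(v.size(), lo + chunk);
            workers.emplace_back([&v, lo, hi] {
                std::sort(v.begin() + lo, v.begin() + hi);
            });
        }
        for (auto& w : workers) w.join();
        // Merge sorted chunks pairwise (sequentially, for simplicity).
        for (std::size_t width = chunk; width < v.size(); width *= 2) {
            for (std::size_t lo = 0; lo + width < v.size(); lo += 2 * width) {
                std::size_t mid = lo + width;
                std::size_t hi = std::min(v.size(), lo + 2 * width);
                std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
            }
        }
    }

    int main() {
        std::vector<int> v(1 << 20);
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] = (int)((v.size() - i) * 2654435761u);   // arbitrary test data
        unsigned nt = std::max(1u, std::thread::hardware_concurrency());
        parallel_sort(v, nt);
        std::printf("sorted: %s\n", std::is_sorted(v.begin(), v.end()) ? "yes" : "no");
    }
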

Parallel architecture simulation and evaluation (sponsor: Sandhya Dwarkadas):
Implement a simulation model of your own or an existing shared memory multiprocessor design (e.g., TimeStamp Snooping) and evaluate its performance on a set of available benchmarks. Possible simulation tools include SimpleScalar (extended to handle multithreaded applications), Simics (a full-system simulator), and gem5.

Memory Hierarchy Design for Multi-Core Processors:
As part of our research, we are exploring novel communication and synchronization mechanisms, as well as examining ways in which on-chip state resources can be partitioned/shared so as to improve both single- and multi-threaded performance. We have working simulation designs for various communication mechanisms and cache designs. As part of this project, you could use the simulator to examine/evaluate the proposed cache designs, suggest improvements of your own, or find and experiment with new ways to utilize the proposed communication mechanisms. For more information, take a look at the CoSyn project.

Last Change: 26 April 2016 / Sandhya Dwarkadas