CSC 2/458

Parallel and Distributed Systems

Spring 2013

Possible semester projects

The following are listed in no particular order. Also note that this is in no way an exhaustive list; feel free to suggest a project of your own.

Application parallelization/parallel machine/parallel abstraction exploration:
Parallelize some interesting application using traditional lock-based shared memory or message passing; or, more interestingly, contrast these approaches with the use of transactional memory or of languages and libraries such as Cilk, TBB, OpenMP, UPC, X10, ... Alternatively, explore abstractions such as MapReduce, or those provided by Pregel or GraphLab. Analyze the behavior of different parallel systems, including the BlueGene/Q at the Center for Integrated Research Computing (CIRC). (A minimal sketch of one of these styles appears below.)
Application possibilities include
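To make the contrast among models concrete, here is a minimal sketch of a dot product parallelized with OpenMP; a single directive replaces the explicit thread creation, work partitioning, and locking (or per-thread partial sums) that a pthreads version would need. The array size and compile line are illustrative assumptions.

    /* Minimal OpenMP sketch: parallel dot product.
       Compile with: gcc -std=gnu99 -fopenmp dot.c -o dot */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const int n = 1 << 20;
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

        double sum = 0.0;
        /* One directive handles thread creation, loop partitioning,
           and the reduction that would otherwise need a lock. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];

        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        free(a); free(b);
        return 0;
    }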

Software Transactional Memory (sponsor: Michael Scott):
Over the past 8 years, URCS has been one of the leading sites for research in transactional memory (TM). Possible projects in this area include:
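For flavor, here is a minimal sketch of what TM code looks like at the language level, using the experimental TM extension in GCC 4.7+ (compile with -fgnu-tm); the counter and deposit function are illustrative assumptions, not part of any specific project.

    /* Sketch: an atomic update with GCC's experimental TM support.
       Compile with: gcc -fgnu-tm tm.c -o tm */
    #include <stdio.h>

    static long counter;

    void deposit(long amount) {
        /* The compiler and its TM runtime make this block appear
           atomic with respect to other transactions -- no explicit
           lock is named anywhere in the program. */
        __transaction_atomic {
            counter += amount;
        }
    }

    int main(void) {
        deposit(42);
        printf("counter = %ld\n", counter);
        return 0;
    }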

Nonblocking data structures (sponsor: Michael Scott):
These are thread-safe concurrent data structures that work without locks, and are immune to the performance hiccups caused by preemption. Possible projects include
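As a concrete starting point, here is a sketch of the push operation of the Treiber stack, the canonical nonblocking data structure, written with GCC's compare-and-swap builtin. The node layout is an illustrative assumption, and a full implementation must also handle pop, the ABA problem, and memory reclamation -- exactly the issues these projects dig into.

    #include <stdlib.h>

    struct node { int value; struct node *next; };
    static struct node *top;    /* shared top-of-stack pointer */

    /* Lock-free push: retry until the CAS installs our node. */
    void push(int v) {
        struct node *n = malloc(sizeof *n);
        n->value = v;
        do {
            n->next = top;      /* snapshot the current top */
        } while (!__sync_bool_compare_and_swap(&top, n->next, n));
    }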

Manual speculative execution in high performance concurrent data structures (sponsor: Michael Scott):
Building on his 400 project from last year, Lingxiang Xiang has developed high-performance implementations of several concurrent data structures, using a methodology that moves as much work as possible outside of critical sections, with fast re-verification. Extend this work to other data structures, such as search trees and graphs. This has the potential to be as influential as the dual-data-structure work -- definitely publishable.
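To illustrate the general pattern (a hedged sketch only, not Lingxiang's actual methodology), here is a sorted-list insert that does its traversal speculatively outside the lock and performs a cheap re-verification inside it:

    #include <pthread.h>
    #include <stdlib.h>

    struct node { int key; struct node *next; };
    static struct node head;    /* dummy header node */
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    void insert(int key) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        for (;;) {
            /* Speculative phase: find the predecessor, no lock held. */
            struct node *prev = &head;
            while (prev->next && prev->next->key < key)
                prev = prev->next;
            /* Critical section: re-verify cheaply, then commit. If
               deletions were allowed, validation would also have to
               confirm that prev itself is still in the list. */
            pthread_mutex_lock(&list_lock);
            if (prev->next == NULL || prev->next->key >= key) {
                n->next = prev->next;
                prev->next = n;
                pthread_mutex_unlock(&list_lock);
                return;
            }
            pthread_mutex_unlock(&list_lock);   /* lost a race: retry */
        }
    }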

Software coherence (sponsor: Sandhya Dwarkadas and/or Michael L. Scott):
A decade ago, Sandhya Dwarkadas and Michael Scott led a pair of projects known as Cashmere and InterWeave, which explored techniques to implement shared-memory programming models on distributed hardware.

Suggestion-based Parallelization Language (sponsor: Chen Ding):
Prof. Chen Ding has outlined this project in more detail; if you are interested, please contact him.

Shared cache locality management (sponsor: Chen Ding):
Model and improve shared-cache locality: in particular, characterize the shared-cache locality of sequential and parallel programs and examine possible solutions.
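One classic candidate solution such a study might evaluate is loop tiling (blocking), sketched below for matrix multiply; the matrix and tile sizes are illustrative assumptions that a locality model would relate to the capacity of the shared cache.

    #define N 512
    #define B 32    /* tile size: tune to the cache under study */

    /* Tiled matrix multiply: each BxB tile is reused while resident,
       cutting capacity misses relative to the untiled loop nest. */
    void matmul_tiled(double C[N][N], double A[N][N], double Bm[N][N]) {
        for (int ii = 0; ii < N; ii += B)
            for (int kk = 0; kk < N; kk += B)
                for (int jj = 0; jj < N; jj += B)
                    for (int i = ii; i < ii + B; i++)
                        for (int k = kk; k < kk + B; k++)
                            for (int j = jj; j < jj + B; j++)
                                C[i][j] += A[i][k] * Bm[k][j];
    }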

Parallel Sparse LU Factorization Using Message Passing (sponsor: Kai Shen):
In assignments #1 and #2 of this course, you implemented parallel Gaussian elimination with partial pivoting (using pthreads and MPI); LU factorization is the core step of Gaussian elimination. In this project you will work on sparse LU factorization, in which most elements of the matrix are zeros. Sparse matrix computations arise in many scientific and engineering problems, including the computation behind Google's PageRank algorithm. Parallel sparse LU factorization with partial pivoting is challenging because it requires fine-grain synchronization and a large communication volume between computing nodes. Creating a new solver would probably require more than a semester's effort; instead, you are asked to experiment with and analyze existing solvers. Two MPI-based solvers are suggested: pick one and study its behavior on different parallel platforms (e.g., a PC cluster and a shared-memory multiprocessor), or compare across the two. Pay attention to both performance and numerical stability. Analysis of good and bad behaviors, and insight into possible improvements, are important parts of the project outcome.
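For reference, the dense kernel that sparse solvers generalize (and that the assignments parallelized) looks like this in sequential form. This is a baseline sketch, not code from either solver; a sparse solver performs the same elimination while skipping and tracking zeros, which is where the fine-grain synchronization arises.

    #include <math.h>

    /* In-place dense LU factorization with partial pivoting of an
       n x n row-major matrix a; perm records the row interchanges. */
    void lu_factor(double *a, int *perm, int n) {
        for (int i = 0; i < n; i++) perm[i] = i;
        for (int k = 0; k < n; k++) {
            int p = k;                        /* choose pivot row */
            for (int i = k + 1; i < n; i++)
                if (fabs(a[i*n + k]) > fabs(a[p*n + k])) p = i;
            if (p != k) {                     /* swap rows k and p */
                for (int j = 0; j < n; j++) {
                    double t = a[k*n + j];
                    a[k*n + j] = a[p*n + j];
                    a[p*n + j] = t;
                }
                int t = perm[k]; perm[k] = perm[p]; perm[p] = t;
            }
            for (int i = k + 1; i < n; i++) { /* eliminate below pivot */
                a[i*n + k] /= a[k*n + k];
                for (int j = k + 1; j < n; j++)
                    a[i*n + j] -= a[i*n + k] * a[k*n + j];
            }
        }
    }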

Parallel sorting:
Andrea and Remzi Arpaci-Dusseau, now at UW-Madison, hold the record for the performance of a parallel external sort. Read up on their work, re-implement it, experiment with it, and if possible improve it.
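The heart of such sorts is a one-pass, partition-then-sort structure: each node buckets its records by key range so that node i ends up holding the i-th range, then sorts its bucket locally. The sketch below shows that structure in-memory on one machine; the uniform 32-bit keys, preallocated buckets, and elided communication and disk I/O are all simplifying assumptions.

    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        unsigned x = *(const unsigned *)a, y = *(const unsigned *)b;
        return (x > y) - (x < y);
    }

    /* Route each key to one of nprocs buckets by key range, then sort
       each bucket; the concatenated buckets are globally sorted.
       Buckets must be preallocated and count[] zeroed by the caller. */
    void partition_sort(const unsigned *keys, size_t n, int nprocs,
                        unsigned **bucket, size_t *count) {
        for (size_t i = 0; i < n; i++) {
            int dest = (int)(((unsigned long long)keys[i] * nprocs) >> 32);
            bucket[dest][count[dest]++] = keys[i];  /* "send" to node dest */
        }
        for (int p = 0; p < nprocs; p++)
            qsort(bucket[p], count[p], sizeof(unsigned), cmp);
    }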

Parallel architecture simulation and evaluation:
Implement a simulation model of your own design or of an existing shared-memory multiprocessor design (e.g., Timestamp Snooping) and evaluate its performance on a set of available benchmarks. Possible simulation tools include SimpleScalar, extended to handle multithreaded applications, and Simics, a full-system simulator.
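At the core of any such model is a per-cache-line coherence state machine. The toy fragment below shows MESI transitions as one cache snoops another cache's bus requests; the event names and encodings are illustrative assumptions, not any particular simulator's API.

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;
    typedef enum { BUS_READ, BUS_READ_X } bus_event_t;

    /* New state of this cache's copy after snooping another
       processor's request for the same line. */
    mesi_t snoop(mesi_t state, bus_event_t ev, int *must_writeback) {
        *must_writeback = (state == MODIFIED);  /* flush dirty data */
        switch (ev) {
        case BUS_READ:      /* another cache reads: downgrade */
            return (state == INVALID) ? INVALID : SHARED;
        case BUS_READ_X:    /* another cache wants ownership */
            return INVALID;
        }
        return state;
    }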

Memory Hierarchy Design for Multi-Core Processors:
As part of our research, we are exploring novel communication and synchronization mechanisms, as well as examining ways in which on-chip state resources can be partitioned or shared so as to improve both single- and multi-threaded performance. We have working simulation designs for various communication mechanisms and cache designs. As part of this project, you could use the simulator to examine and evaluate the proposed cache designs, suggest improvements of your own, or find and experiment with new ways to utilize the proposed communication mechanisms. For more information, take a look at the CoSyn project.

Resource-Aware Scheduling for Multi-Threaded Processors:
Multi-threaded processors (simultaneous multithreaded or multi-core) have a unique opportunity to share resources at a fine grain. However, contention due to conflicting resource requirements can reduce performance, especially in the memory hierarchy. As part of this project, you could identify a mechanism by which thread conflicts can be detected, and implement a scheme that uses that information at either the operating-system or application level.
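For the detection half, one plausible (but by no means the only) mechanism is hardware performance counters. The sketch below uses Linux's perf_event_open (available since kernel 2.6.31) to count a thread's cache misses; how a scheduler would compare such counts across co-running threads, and what thresholds it would use, are left entirely open.

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <string.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof attr;
        attr.config = PERF_COUNT_HW_CACHE_MISSES;
        attr.disabled = 1;

        /* Count cache misses for the calling thread on any CPU. */
        int fd = (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... the workload under measurement runs here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long misses;
        read(fd, &misses, sizeof misses);
        printf("cache misses: %lld\n", misses);
        close(fd);
        return 0;
    }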

Last Change: 05 May 2013 / Sandhya Dwarkadas