Project proposal due by April 18 (Friday).
Project demo on May 2 (Friday), May 7 (Wednesday), or May 8 (Thursday).
For the course project, you can choose from the topics listed below.
You may also propose a new topic that particularly interests you.
If needed, the instructor can help you define a specific scope.
Any topic would be appropriate as long as it is substantially related to
parallel and distributed systems.
You can work on your project either individually or in a team of two.
A team of more than two people is possible, but you must obtain the
instructor's approval before forming a three (or more)-person team.
Project management and turn-in:
You need to submit a project proposal on or before April 18 (Friday) using
the electronic turn-in facility. The proposal must be in PDF format.
In the proposal, you should describe the concrete goals of your project, any
related materials from papers and web resources, and anything else you feel
is relevant. Please provide sufficient detail in the proposal to demonstrate
your preparation for the project.
Please make sure to specify all project participants in the proposal.
You need to set up a 30-minute demo on either May 2 (Friday), May 7 (Wednesday), or May 8 (Thursday).
We will make an announcement when you can start signing up for a demo slot.
In the demo you need to show us how your project works (or partially works) with
your own test cases.
(Prepare your test cases before the demo!)
We may ask you to run some additional test cases and ask some questions about
your design & implementation.
By the time of your demo, you should turn in a project report. The report
should clearly describe your overall design, implementation details, and
analysis of your results. In addition to turning in the report electronically,
please also bring a printed copy to the demo.
List of possible project topics:
Again, you do not have to choose from the following list. You can propose a new
topic as long as the topic is substantially related to parallel computing or
distributed systems. If you are unsure about the appropriateness of your
proposed project topic, please contact the instructor.
Some of the following project topics have sponsor professors other than the
course instructor. If you choose such a topic, please contact the sponsor
professor to establish a working relationship for your course project.
Parallelizing Your Application
You may parallelize an application that you already have experience with
(assuming that a desired parallel solution to the application doesn't yet exist).
You may use any of the parallelization schemes you learned in this course
(threads, MPI, or MapReduce/Hadoop). Your grade will be based on the
appropriateness of your parallel design, the difficulty of the
parallelization, the resulting performance, and the depth of your
performance analysis.
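For a flavor of the thread-based scheme, here is a minimal sketch (not from
any assignment; all names and sizes are illustrative) that partitions a dot
product across C++ threads:

    // Minimal sketch: partitioning a dot product across threads.
    // Sizes and thread count are illustrative, not from any assignment.
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const int nthreads = 4;
        std::vector<double> a(1 << 20, 1.0), b(1 << 20, 2.0);
        std::vector<double> partial(nthreads, 0.0);  // a real version would
        std::vector<std::thread> workers;            // pad these slots to
                                                     // avoid false sharing
        for (int t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                size_t chunk = a.size() / nthreads;
                size_t lo = t * chunk;
                size_t hi = (t == nthreads - 1) ? a.size() : lo + chunk;
                for (size_t i = lo; i < hi; ++i)  // disjoint ranges, so no
                    partial[t] += a[i] * b[i];    // synchronization needed
            });
        }
        for (auto &w : workers) w.join();
        std::cout << std::accumulate(partial.begin(), partial.end(), 0.0)
                  << "\n";
    }

A real project would also report speedup against a tuned sequential baseline,
not just against the parallel code run on one thread.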
Implementing A Distributed System Protocol
In the class we have discussed several distributed system protocols,
including totally ordered broadcasts, distributed snapshots, and Paxos
distributed consensus. You will implement a distributed system protocol
and devise your own test cases to examine the usefulness, correctness, and
performance of your implementation. Note that we will not provide test
cases or test scenarios. It is your responsibility to devise them, and the
quality of your test cases will be an important basis for determining your
grade.
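To make this option concrete, below is a minimal sketch of the delivery rule
behind totally ordered broadcast, assuming Lamport timestamps with sender-id
tie-breaking. It simulates only the ordering logic in a single process; the
actual message passing, timestamp maintenance, and stability detection are
exactly what your implementation would add.

    // Sketch of the delivery rule for totally ordered broadcast:
    // deliver messages in (Lamport timestamp, sender id) order.
    #include <iostream>
    #include <queue>
    #include <string>
    #include <tuple>
    #include <vector>

    struct Msg {
        int ts;           // Lamport timestamp at the sender
        int sender;       // ties broken by sender id for a total order
        std::string body;
    };

    struct Later {        // comparator: smallest (ts, sender) delivered first
        bool operator()(const Msg &a, const Msg &b) const {
            return std::tie(a.ts, a.sender) > std::tie(b.ts, b.sender);
        }
    };

    int main() {
        std::priority_queue<Msg, std::vector<Msg>, Later> pending;
        pending.push({3, 1, "c"});  // messages may arrive out of order...
        pending.push({1, 2, "a"});
        pending.push({1, 1, "b"});  // ts ties with "a"; sender id decides
        // ...but every replica delivers them in the same order: b, a, c.
        while (!pending.empty()) {
            std::cout << pending.top().body
                      << " (ts=" << pending.top().ts << ")\n";
            pending.pop();
        }
    }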
Resource Management for Multicore / Hyperthreaded Processors
Multicore / hyperthreaded processors share hardware resources at
fine-grained levels. However, contention from conflicting resource
requirements (especially in the cache / memory hierarchy) can result in
reduced performance. You might already have had such experience in your
earlier programming assignments. In this project, you will attempt to
identify one or more mechanisms by which conflicts between applications
running on sibling hyperthreads / cores may be detected. You can
also implement a scheme to utilize the information to improve performance.
A large body of research has been devoted to this topic, so it may be
helpful to read some prior work. You can start with these Rochester papers
(including one from USENIX ATC'2010), as well as their references.
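As one starting point for detection, here is a minimal contention probe,
sketched with illustrative sizes: it times the same cache-friendly loop while
a co-running thread either idles or streams through a large array, evicting
shared-cache lines. A real experiment would also pin the two threads to
sibling hyperthreads (e.g., with pthread_setaffinity_np) and could read
hardware performance counters instead of wall-clock time.

    // Sketch: measure slowdown of a cache-resident loop under contention.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <vector>

    volatile long sink;  // defeat dead-code elimination

    long timed_walk(const std::vector<long> &v) {
        auto t0 = std::chrono::steady_clock::now();
        long s = 0;
        for (int r = 0; r < 200; ++r)
            for (long x : v) s += x;
        sink = s;
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0)
            .count();
    }

    int main() {
        std::vector<long> small(1 << 15, 1);  // ~256 KB: fits in cache
        std::vector<long> big(1 << 24, 1);    // ~128 MB: streams the LLC
        std::atomic<bool> stop{false};

        std::cout << "alone: " << timed_walk(small) << " ms\n";

        std::thread noisy([&] {               // the antagonist thread
            long s = 0;
            while (!stop)
                for (long x : big) s += x;
            sink = s;
        });
        std::cout << "contended: " << timed_walk(small) << " ms\n";
        stop = true;
        noisy.join();
    }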
Parallel Sparse LU Factorization Using Message Passing
In Assignments #2 and #4 of this course, you implemented parallel Gaussian
elimination with partial pivoting (using pthreads and MPI). LU factorization
is the core step of Gaussian elimination. In this project, you will work on
sparse LU factorization, in which most elements of the matrix are zero.
Sparse matrix computations have applications in many
scientific and engineering problems. Parallel sparse LU factorization with partial pivoting
is challenging because it requires fine-grain synchronization and large
communication volume between computing nodes. Creating a new solver would
probably require more than a semester's effort. Instead, you are asked to
experiment with and analyze some existing solvers. Two MPI-based solvers are:
In your project, you can pick one of the solvers and study its behavior
on different parallel platforms (e.g., a PC cluster and a shared-memory multiprocessor).
You may also compare across the two solvers.
Please pay attention to both performance and numerical stability.
Analysis on good/bad behaviors and insights into any possible improvement
are important parts of the project outcome.
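For reference, below is a minimal in-place sketch of the sequential algorithm
the solvers parallelize: LU factorization with partial pivoting. Note how the
pivot search in each step is a global dependence over an entire column, and
in the sparse case the update pattern itself changes with the pivot choice,
which is part of what makes parallelization hard.

    // Sequential dense LU factorization with partial pivoting, in place.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    void lu_pivot(std::vector<std::vector<double>> &A, std::vector<int> &perm) {
        int n = A.size();
        for (int k = 0; k < n; ++k) {
            int p = k;                              // pivot search: column k
            for (int i = k + 1; i < n; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
            std::swap(A[k], A[p]);                  // row interchange
            std::swap(perm[k], perm[p]);
            for (int i = k + 1; i < n; ++i) {       // eliminate below pivot
                A[i][k] /= A[k][k];                 // multiplier, stored in L
                for (int j = k + 1; j < n; ++j)
                    A[i][j] -= A[i][k] * A[k][j];   // trailing update
            }
        }
    }

    int main() {
        std::vector<std::vector<double>> A = {{2, 1, 1}, {4, 3, 3}, {8, 7, 9}};
        std::vector<int> perm = {0, 1, 2};
        lu_pivot(A, perm);
        for (auto &row : A) {                       // L (below) and U (above)
            for (double x : row) std::printf("%6.2f ", x);
            std::printf("\n");
        }
    }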
Software Coherence (sponsor: Sandhya Dwarkadas and/or Michael Scott)
A decade ago, Profs. Dwarkadas and Scott led a pair of projects known as Cashmere
and InterWeave. These explored techniques to implement shared-memory programming
models on distributed hardware.
There has been a recent resurgence of interest in this area.
As people design chips with ever larger numbers of cores, they are
beginning to question whether the hardware should guarantee
coherence across all cores.
Possible projects include:
- Build a software shared-memory system in the style of Cashmere that
exploits hardware coherence within subdomains of a multiprocessor, and
fast but noncoherent access to memory across subdomains.
- Build a coherent memory system for a collection of devices with globally
accessible but scratchpad-style local memory (i.e., memory that is directly
addressable but has no hardware support for coherence). In essence, you
could propose and evaluate an explicit cache-management strategy tailored
to the latency and bandwidth constraints of multicores (see the sketch
after this list).
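As one illustration, here is a minimal single-process sketch of
release-consistency-style software caching in the spirit of page-diffing
software DSM systems: acquire fetches a fresh copy and records a twin;
release writes back only the bytes that differ. All names are hypothetical,
and a real system would handle multiple writers, remote communication, and
synchronization.

    // Sketch of acquire/release software caching with twin-based diffing.
    #include <array>
    #include <cstdio>

    constexpr int PAGE = 64;
    using Page = std::array<char, PAGE>;

    struct CachedPage {
        Page home{};   // stands in for remote master memory
        Page local{};  // this node's working copy
        Page twin{};   // snapshot taken at acquire time

        void acquire() {      // fetch fresh data; keep a twin for diffing
            local = home;
            twin = local;
        }
        void release() {      // write back only modified bytes (the "diff")
            for (int i = 0; i < PAGE; ++i)
                if (local[i] != twin[i]) home[i] = local[i];
        }
    };

    int main() {
        CachedPage p;
        p.home[3] = 'x';      // some prior writer's data
        p.acquire();
        p.local[7] = 'y';     // local update inside the critical section
        p.release();          // merges without clobbering unrelated bytes
        std::printf("%c %c\n", p.home[3], p.home[7]);  // prints: x y
    }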
Software Transactional Memory (sponsor: Michael Scott)
Over the past 10 years, URCS has been one of the leading sites for research
in transactional memory (TM). Possible projects in this area include:
- The most widely used TM applications are the STAMP suite from
Stanford. These were hastily written, leading to performance anomalies
in certain testing scenarios. Moreover, they make hand-inserted calls to
library routines -- a tedious and unsafe programming approach (a sketch
contrasting the two styles appears after this list). Two years ago,
postdoc Patrick Marlier began porting the STAMP apps to the C++ TM draft
API, a process that involved a lot of code clean-up. The project would be
to finish this port, collect performance statistics, and release the code
for use by groups around the world.
- Implement or port additional benchmarks (many possibilities, large and
small).
- One of the key selling points of TM is _composability_ -- the
ability to nest small transactions inside bigger ones, in any order, without
fear of implementation-induced deadlock. Toward that end, we need
transactional versions of standard libraries like the C++ STL. For many of
the data structures in these libraries, good transactional performance will
be nontrivial: naive "transactionalization" will lead to lots of
conflict-induced aborts. Pick a subset and explore techniques for achieving
good transactional performance.
- Runtime systems. The locally developed RSTM suite is the most
comprehensive available set of software TM implementations. It needs a
few key enhancements. Try any or all of the following:
- Efficient condition synchronization
- True nested transactions
- "Boosted" implementations of common concurrent objects -- reductions in particular.
Nonblocking Data Structures (sponsor: Michael Scott)
These are thread-safe concurrent data structures that work without locks and
are immune to the performance hiccups caused by preemption (a minimal example
of the style appears after this list). Possible projects include:
- Dual data structures: A few years back, grad student Bill Scherer (now at
HP) gained considerable fame (including a "research highlights" publication
in CACM) by rewriting classic queues, stacks, synchronous queues, and
exchangers as lock-free "dual" data structures, in which an operation that
has to wait for a precondition leaves an explicit reservation in the data
structure. This past year, Joe Izraelevitz did similar work on fast,
fetch-and-increment--based queues. What else can be built in this style? In
particular, you might consider priority queues or other search structures.
- Hybrid transactional / non-blocking data structures: One of the key
advantages of nonblocking data structures is increased concurrency. A major
disadvantage is complexity. Can we get (most of) the concurrency without
(most of) the complexity by using transactions for _sub-parts_ of each
operation? Consider, for example, a B-tree or 2-3 tree: can we capture
rebalancing as a _series_ of transactions, each of which transforms the tree
from a consistent but sub-optimal state to a "better" consistent state?
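For a sense of the programming style involved, here is the classic Treiber
stack as a minimal sketch: every update is a compare-and-swap retry loop, so
no thread ever blocks. Safe memory reclamation is deliberately omitted (nodes
leak, which incidentally sidesteps the ABA problem here); reclamation is
itself one of the hard problems these projects confront.

    // Minimal lock-free (Treiber) stack using CAS retry loops.
    #include <atomic>
    #include <iostream>

    template <typename T>
    class TreiberStack {
        struct Node { T val; Node *next; };
        std::atomic<Node *> head{nullptr};
    public:
        void push(T v) {
            Node *n = new Node{v, head.load()};
            while (!head.compare_exchange_weak(n->next, n))
                ;  // on failure, n->next is refreshed with the current head
        }
        bool pop(T &out) {
            Node *h = head.load();
            while (h && !head.compare_exchange_weak(h, h->next))
                ;  // h refreshed on failure
            if (!h) return false;
            out = h->val;  // h leaks: safe reclamation needs hazard pointers
            return true;
        }
    };

    int main() {
        TreiberStack<int> s;
        s.push(1); s.push(2);
        int x;
        while (s.pop(x)) std::cout << x << "\n";  // 2 then 1
    }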
Manual Speculative Execution in High Performance Concurrent Data Structures (sponsor: Michael Scott)
For his thesis work, Lingxiang Xiang is developing automatic and semi-automatic
techniques that move speculative work out of either transactions or lock-based
critical sections, to reduce critical-path length or transaction abort rates.
He has demonstrated his approach on a variety of data structures, but many more
are possible. Ideally, we would like to develop a full library of hand-written
"partitioned" concurrent objects, which can be composed automatically into
larger transactions that precompute much of their speculative work. This
work would very likely become part of a future publication.
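The core idea can be shown in a few lines. In the hypothetical sketch below,
an expensive key computation is hoisted out of a lock-based critical section;
Xiang's actual techniques are more general (handling speculative reads of
shared state and mis-speculation), but the flavor is the same.

    // Moving work off the critical path of a lock-based critical section.
    #include <iostream>
    #include <list>
    #include <mutex>

    std::list<int> shared_list;  // kept sorted
    std::mutex m;

    // Naive: the expensive computation runs under the lock, lengthening
    // the critical path for every other thread.
    void insert_naive(int raw) {
        std::lock_guard<std::mutex> g(m);
        int key = raw * raw;                     // stand-in for costly work
        auto it = shared_list.begin();
        while (it != shared_list.end() && *it < key) ++it;
        shared_list.insert(it, key);
    }

    // Partitioned: the costly part runs before the lock is taken, shrinking
    // the critical section to the search-and-splice alone.
    void insert_partitioned(int raw) {
        int key = raw * raw;                     // touches no shared state
        std::lock_guard<std::mutex> g(m);
        auto it = shared_list.begin();
        while (it != shared_list.end() && *it < key) ++it;
        shared_list.insert(it, key);
    }

    int main() {
        insert_naive(3);
        insert_partitioned(2);
        for (int v : shared_list) std::cout << v << " ";  // 4 9
        std::cout << "\n";
    }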
Concurrent Race Detection (sponsor: Michael Scott)
As part of his thesis work, graduate student Li Lu has developed an innovative
determinism checker for parallel Ruby programs (to be presented at PLDI'14).
We think there is an opportunity to speed up this checker by moving work onto
the "sister" hyperthreads of program threads on SMT machines. This would
entail some interesting work inside both the JRuby virtual machine and the
Jikes JVM, to "pin" computations to specific hardware threads.
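For reference, pinning on Linux looks roughly like the sketch below, which
uses pthread_setaffinity_np. The assumption that logical CPUs 0 and 1 are
sibling hyperthreads is illustrative and must be checked against the
machine's topology (e.g.,
/sys/devices/system/cpu/cpu0/topology/thread_siblings_list).

    // Sketch: pin threads to specific logical CPUs on Linux.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    void pin_self(int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
            std::perror("pthread_setaffinity_np");
    }

    void *checker(void *) {
        pin_self(1);  // run the checker on the program thread's sibling
        std::puts("checker pinned to cpu 1");
        return nullptr;
    }

    int main() {
        pin_self(0);  // program thread on cpu 0
        pthread_t t;
        pthread_create(&t, nullptr, checker, nullptr);
        pthread_join(t, nullptr);
    }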
Benchmark Memory Behavior Characterization (sponsor: Chen Ding)
The student will read a paper on the higher-order theory of locality and use
the locally developed tool (available via GitHub) to evaluate a parallel
benchmark suite for its memory performance.
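For a flavor of the measurements involved, here is a minimal sketch that
computes a reuse-time histogram -- the gap between consecutive accesses to
the same address -- from a hard-coded trace. Reuse-time distributions are one
of the basic inputs to footprint-based locality theory; the actual tool does
far more than this.

    // Reuse-time histogram of a (hard-coded) memory-access trace.
    #include <iostream>
    #include <map>
    #include <unordered_map>
    #include <vector>

    int main() {
        std::vector<int> trace = {1, 2, 3, 1, 2, 1, 4, 3};
        std::unordered_map<int, size_t> last;  // address -> last-access index
        std::map<size_t, int> hist;            // reuse time -> count

        for (size_t i = 0; i < trace.size(); ++i) {
            auto it = last.find(trace[i]);
            if (it != last.end()) ++hist[i - it->second];
            last[trace[i]] = i;
        }
        for (auto &[rt, n] : hist)
            std::cout << "reuse time " << rt << ": " << n << "\n";
    }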
Safe Parallel Ruby (sponsor: Chen Ding)
Safe parallel Ruby lets a programmer write parallel scripts that utilize
multicore processors. In this project, a student will first read a technical
report to understand the implementation, and then use safe parallel Ruby to
write a parallel program. The prototype is not yet robust. For successful
completion, the student should know the basics of Ruby extensions and have a
good background in programming and debugging in both Ruby and C. See the
faculty sponsor to obtain a copy of the technical report.