CSC 2/458
Parallel and Distributed Systems
Spring 2008.
Only partly updated from 2006; some information on
this page is still stale.
Possible semester projects
The following are listed in no particular order, but I've highlighted in
red the ones I'd particularly like to see someone
undertake.
Also note that this is in no way an exhaustive list; feel free to
suggest a project of your own.
- Software Transactional Memory
- As most of you know, we’re working hard on STM. Possible
projects include
- Download the Lee
Router application from the
University of Manchester, port it to RSTM, and study its
performance.
- Do the same for the STAMP “Vacation”
benchmark or the MiniSAT
benchmark.
- Implement or port additional benchmarks (many
possibilities, large and small)
- Add efficient support for condition synchronization (see the
sketch after this list)
- Implement transaction synchronizers, as
proposed by Luchangco and Marathe [SCOOL’05]
- Implement true nested transactions (Andrew Sviekauskas has made
a big start on this; he may want to finish it :-)
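For the condition-synchronization item above, here is a minimal sketch
of the retry-style semantics an STM might provide (in the spirit of
Harris et al.'s retry). This is not RSTM's API; the names are
invented, and a global lock and condition variable stand in for real
transactional machinery:

    // Toy illustration of retry-style condition synchronization. A real
    // STM would re-execute a transaction when its read set changes; here
    // a global lock and condition variable stand in for that machinery.
    // All names are hypothetical, not RSTM's.
    #include <condition_variable>
    #include <deque>
    #include <mutex>

    std::mutex stm_lock;                  // stand-in for transactional isolation
    std::condition_variable stm_changed;  // stand-in for read-set notification

    // Run txn "atomically"; if it asks to retry (returns false), block
    // until shared state may have changed, then re-execute from the top.
    template <typename Txn>
    void stm_atomic(Txn txn) {
        std::unique_lock<std::mutex> lk(stm_lock);
        while (!txn())
            stm_changed.wait(lk);
        stm_changed.notify_all();         // commit: wake waiting transactions
    }

    std::deque<int> q;                    // shared data guarded by the "STM"

    void enqueue(int v) {
        stm_atomic([&] { q.push_back(v); return true; });
    }

    int dequeue() {                       // waits (retries) until non-empty
        int out;
        stm_atomic([&] {
            if (q.empty()) return false;  // precondition fails: retry
            out = q.front();
            q.pop_front();
            return true;
        });
        return out;
    }

An efficient implementation would wake a transaction only when
something in its read set actually changes, rather than broadcasting
on every commit; that is precisely the hard part of the project.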
- Dual data structures
- Bill Scherer gained considerable fame a couple of years back by
rewriting classic queues, stacks, synchronous queues, and exchangers
as lock-free dual data structures, in which an operation that
has to wait for a precondition leaves an explicit reservation in the
data structure. What else can be built in this style? In
particular, you might consider priority queues or various search
structures. This is definitely publishable.
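To see the shape of the reservation protocol, here is a lock-based toy
dual queue; Bill's structures are nonblocking, so this shows the idea
but none of the hard parts. A dequeue on an empty queue installs a
reservation that a later enqueue fulfills:

    // Toy dual queue: a dequeuer that finds the queue empty leaves an
    // explicit reservation; a later enqueue fulfills the oldest one.
    // Scherer's versions are lock-free; this sketch is lock-based.
    #include <deque>
    #include <future>
    #include <mutex>

    class DualQueue {
        std::mutex m;
        std::deque<int> data;                       // available values
        std::deque<std::promise<int>> reservations; // waiting dequeuers
    public:
        void enqueue(int v) {
            std::unique_lock<std::mutex> lk(m);
            if (!reservations.empty()) {            // fulfill oldest reservation
                std::promise<int> p = std::move(reservations.front());
                reservations.pop_front();
                lk.unlock();
                p.set_value(v);                     // hand value to the waiter
            } else {
                data.push_back(v);
            }
        }
        int dequeue() {
            std::future<int> f;
            {
                std::unique_lock<std::mutex> lk(m);
                if (!data.empty()) {
                    int v = data.front();
                    data.pop_front();
                    return v;
                }
                reservations.emplace_back();        // leave a reservation
                f = reservations.back().get_future();
            }
            return f.get();                         // wait for a fulfilling enqueue
        }
    };

In a dual priority queue, even the question of how reservations
themselves should be ordered (by the waiter's priority? FIFO?) is
open; making the answer lock-free is where the publishable work lies.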
- InterWeave
- InterWeave is a system that allows distributed processes to
share global variables. It keeps locally cached copies of these
variables up-to-date efficiently, converting automatically among
multiple languages and machine types. It was a very successful
project, and has a high-quality implementation. We’d like to
distribute it to the world. Rich Sarkis, a former URCS undergrad
now on staff in Physics and Astronomy, has offered to take point on
polishing and packaging InterWeave for the Open Source
community. Work with him to make this happen. This
would involve nontrivial work with demo applications, performance
tuning, and configuration and installation tools. It’s
an opportunity to have your name permanently associated with a
widely-used code base.
- Parallel I/O workload characterization
(from Kai Shen)
-
Data-intensive parallel applications demand efficient I/O to achieve
high performance. Many parallel applications use the MPI-I/O
interface to access storage devices managed by parallel file
systems. Some parallel file systems (like PVFS) manage a
cluster of storage hosts (each with attached disks) on commodity
storage and network hardware. In this project, you will
experiment with several parallel applications on such a platform;
collect I/O workload characteristics on individual storage hosts and
across the whole storage cluster; and analyze these characteristics to
speculate on possible operating system and distributed system
enhancements.
You should conduct a careful literature survey of previous
work on parallel I/O workload characterization. Note that
earlier results may have been produced on different parallel I/O
platforms.
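If you haven't used MPI-I/O before, a microbenchmark along the
following lines (the file name and sizes are arbitrary) is a handy way
to sanity-check your tracing setup before moving on to real
applications. Each rank writes a disjoint block of one shared file:

    // Minimal MPI-I/O kernel: every rank writes its own block of a
    // shared file. Useful only for checking instrumentation; the real
    // measurements should come from actual applications.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int N = 1 << 20;                    // ints per rank (arbitrary)
        std::vector<int> buf(N, rank);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_Offset off = static_cast<MPI_Offset>(rank) * N * sizeof(int);
        MPI_File_write_at(fh, off, buf.data(), N, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }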
- Adaptive parallelization of large applications
(from Chen Ding)
-
Recently Xipeng Shen and Ruke Huang have built a prototype for
parallelizing large C/C++/Fortran programs. A programmer
marks possible parallel regions (PPR) in a program.
The compiler and run-time support will start the program in parallel
but will fall back to sequential execution if a dependence violation
is detected. The scheme enables an unconventional style of
parallel programming. The programmer specifies parallelism
but there is no need to debug a parallel program—the (possible)
parallel execution always returns the same result as the sequential
execution. Xipeng has developed profiling and run-time tools
that identify causes for run-time conflicts, so the programmer
can keep refining the program until parallelization succeeds in
common cases, which is often the best one can do presently for
large applications. Using the system, the compiler group has
successfully parallelized open-source programs of up to tens of
thousands of lines of code. The current system is based on GCC and supports
two-way parallelism on x86 and IBM machines. If you are
interested in using the system (to parallelize a program of your own
or one in popular use), improving it (to support more than 2-way
parallelism or distributed environments), or formalizing it (to work
out the relations among dependence, serializability/atomicity, and phase
and locality patterns), talk to Prof. Ding. He’d be
glad to give you a demo!
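To give a flavor of the programming model, here is a sketch with
hypothetical BeginPPR/EndPPR markers (the prototype's actual
annotations may differ). With the markers defined as no-ops the
program simply runs sequentially, and that sequential behavior is
exactly what any speculative execution must reproduce:

    // PPR marking, illustrated. BeginPPR/EndPPR are hypothetical
    // stand-ins for the prototype's markers; as no-ops they leave the
    // program sequential, which is the result speculation must match.
    #include <cstdio>

    static inline void BeginPPR(int id) { (void)id; }  // no speculation here
    static inline void EndPPR(int id)   { (void)id; }

    long work(long i) { return i * i; }   // placeholder computation

    int main() {
        long sum[4] = {0, 0, 0, 0};
        for (int b = 0; b < 4; b++) {
            BeginPPR(b);                  // blocks rarely conflict, so each
            for (long i = 0; i < 1000; i++)   // one is possibly parallel
                sum[b] += work(b * 1000 + i);
            EndPPR(b);
        }
        std::printf("%ld %ld %ld %ld\n", sum[0], sum[1], sum[2], sum[3]);
        return 0;
    }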
- Parallel sorting
- Andrea and Remzi Arpaci-Dusseau, now at UW-Madison, hold the
record for the performance of a parallel external sort. Read up on
their work, re-implement it, experiment with it, and if possible
improve it.
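As a warm-up you might start from an in-memory skeleton like the one
below, which sorts two halves in parallel and merges; the
record-setting sorts are external, streaming data through disks and
the network, and that is where the real work lies:

    // In-memory warm-up: sort the two halves concurrently, then merge.
    // A serious re-implementation would be an external sort with
    // overlapped I/O, partitioning, and communication.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    void parallel_sort(std::vector<int>& v) {
        auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
        std::thread t([&v, mid] { std::sort(v.begin(), mid); });  // half one
        std::sort(mid, v.end());                                  // half two
        t.join();
        std::inplace_merge(v.begin(), mid, v.end());              // combine
    }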
- Sparse linear solver
- In past years, students in this class parallelized Gaussian
elimination. This algorithm isn’t very efficient when the
coefficient matrix is sparse (mostly zeros). Create a better
version. (Kai Shen built a very good version of this as a graduate
student, and could probably provide some pointers. He notes:
“Sparse matrix processing is the computational kernel in
applications ranging from scientific simulations to Google’s
PageRank calculation. There is
a little bit of numerical analysis involved in this project, but
your main efforts will center on fine-grain
synchronization, work stealing/balancing, and cache-efficient
algorithms.”)
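The kernel at the heart of such a solver is sparse matrix-vector
multiply over a compressed representation. A straightforward threaded
version over compressed sparse row (CSR) storage looks like the sketch
below; the project lies in improving on its naive row partitioning and
memory behavior:

    // CSR sparse matrix-vector product y = A*x, rows split cyclically
    // across threads. A serious version adds work balancing for rows of
    // unequal length and cache-conscious data layouts.
    #include <cstddef>
    #include <thread>
    #include <vector>

    struct CSR {
        std::vector<std::size_t> rowptr;  // row i's nonzeros: [rowptr[i], rowptr[i+1])
        std::vector<std::size_t> col;     // column index of each nonzero
        std::vector<double>      val;     // value of each nonzero
    };

    void spmv(const CSR& A, const std::vector<double>& x,
              std::vector<double>& y, unsigned nthreads) {
        const std::size_t nrows = A.rowptr.size() - 1;  // y must have nrows slots
        std::vector<std::thread> ts;
        for (unsigned t = 0; t < nthreads; t++)
            ts.emplace_back([&, t] {
                for (std::size_t i = t; i < nrows; i += nthreads) {
                    double s = 0.0;
                    for (std::size_t k = A.rowptr[i]; k < A.rowptr[i + 1]; k++)
                        s += A.val[k] * x[A.col[k]];
                    y[i] = s;
                }
            });
        for (auto& th : ts) th.join();
    }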
- Scalable try-locks for in-core databases
-
Bill Scherer and I have developed a family of scalable queue-based
locks in which a process can “time out” and stop waiting
for the lock.
We conjecture that these locks would be particularly useful for
in-core user-level database systems.
Engineer them into MySQL and measure the performance of the
resulting code. This is almost certainly publishable.
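The interface in question is roughly the one below, shown here with a
trivial test-and-set spin lock. The hard part, and the point of the
queue-based versions, is providing the same interface with FIFO
ordering and local-only spinning, where a timed-out waiter has to
unlink its queue node without corrupting its neighbors' links:

    // The try-lock interface: acquire the lock or give up at a deadline.
    // This test-and-set version is trivially correct but unscalable;
    // the queue-based locks provide the same interface scalably.
    #include <atomic>
    #include <chrono>

    class TASTryLock {
        std::atomic_flag held = ATOMIC_FLAG_INIT;
    public:
        bool try_lock_for(std::chrono::microseconds patience) {
            auto deadline = std::chrono::steady_clock::now() + patience;
            while (held.test_and_set(std::memory_order_acquire)) {
                if (std::chrono::steady_clock::now() > deadline)
                    return false;        // timed out: caller backs off
            }
            return true;                 // lock acquired
        }
        void unlock() { held.clear(std::memory_order_release); }
    };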
- Parallelize some interesting application
- Compare shared memory and message passing
-
Create a shared memory version of some existing MPI application, and
perform a detailed performance comparison.
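As a tiny illustration of what the translation involves, a global sum
that an MPI code expresses with MPI_Reduce becomes, in the
shared-memory version, per-thread partial sums combined after a join
(a made-up fragment, not drawn from any particular application):

    // Shared-memory analogue of an MPI_Reduce-based sum: each thread
    // owns one slot of `partial', so no locks are needed; the join
    // plays the role of MPI_Reduce's implicit synchronization.
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const unsigned P = 4;                 // "ranks" become threads
        std::vector<long> partial(P, 0);
        std::vector<std::thread> ts;
        for (unsigned r = 0; r < P; r++)
            ts.emplace_back([&, r] {
                for (long i = r; i < 1000000; i += P)
                    partial[r] += i;          // thread-private slot
            });
        for (auto& t : ts) t.join();          // "reduce" happens below
        long sum = 0;
        for (long p : partial) sum += p;
        std::printf("sum = %ld\n", sum);
        return 0;
    }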
Last Change: 12 January 2011