Course project
Project proposal due in class on April 6, Wednesday.
Project demo on April 27-28, Wednesday-Thursday.
For the course project, you can choose from the topics listed below.
You may also propose a brand-new topic that particularly interests you,
or work on a project arranged privately with another professor.
If needed, the instructor can help you define a specific scope.
Any idea can be considered as long as the topic is substantially related to
parallel and distributed systems.
I expect you to work on the project individually.
In rare cases, you may work in a team of two if the project scope justifies it.
You need to obtain approval from the instructor before forming a two-person team.
Project management and turn-in:
You need to submit a project proposal in class on April 6, Wednesday. In the proposal,
you should describe the concrete goals of your project, any related materials
(papers and web resources) you have read, and anything else you feel is relevant.
Please provide sufficient detail in the project proposal to demonstrate your
preparation for the project.
You need to set up a 30-minute demo with the instructor.
Please go to the demo schedule page
to sign up for an available slot. In the demo you need to show us
how your project works (or partially works) with your own test cases.
(Prepare the test cases before the demo!)
We may ask you to run some additional test cases and answer some questions about
your design and implementation.
By the time of your demo, you should turn in a project report. The project report
should clearly describe your overall design, implementation details, and analysis
of your results. In addition to turning in the project report electronically,
please also bring a printed copy to the demo.
The rest of this page describes some possible project topics.
Each topic has a sponsor. After choosing a topic, please contact the sponsor
to establish a working relationship for your course project.
-
Suggestion-based Parallelization Language (sponsor: Chen Ding)
Below are some bullet points provided by Prof. Chen Ding. If you are interested,
please contact him for more details.
- Use BOP system to parallelize one or more programs.
- Evaluate the current implementation using current benchmarks.
- Apply or extend FastTrack, a suggestion-based optimization tool.
- Install and study BOP-related parallel programming on Intel's 48-core Single-chip Cloud Computer.
-
Software Distributed Shared Memory (sponsor: Sandhya Dwarkadas)
Implementing shared memory in software is seeing some renewed interest as
designers begin to cope with the scaling issues associated with increasing
numbers of cores on a chip. Our group has done seminal work in the area
of software-implemented shared memory systems. As part of this project,
you could propose and evaluate an explicit cache management strategy that
is tailored to the latency and bandwidth constraints in multicores.
-
Parallelizing High Performance Computing Applications (sponsor: Sandhya Dwarkadas)
We have several research groups associated with the Center for Research
Computing with compute-intensive codes that could benefit from efficient
parallelization, both on clusters and on the BlueGene rack at the CRC.
As part of this project, you could work with one of these groups to develop
an efficient parallel implementation of their code (e.g., gene expression
analysis with the UR biostatistics department, and computational models of
cell migration used in cancer studies by the radiation oncology department).
-
Memory Hierarchy Design for Multi-Core Processors (sponsor: Sandhya Dwarkadas)
As part of our research, we are exploring novel communication and
synchronization mechanisms, as well as examining ways in which on-chip
state resources can be partitioned/shared so as to improve both single- and
multi-threaded performance. We have working simulation designs for various
communication mechanisms and cache designs. As part of this project, you
could use the simulator to examine/evaluate the proposed cache designs,
suggest improvements of your own, or find and experiment with new ways to
utilize the proposed communication mechanisms. For more information, take
a look at the CoSyn project.
-
Parallel Sparse LU Factorization Using Message Passing (sponsor: Kai Shen)
In assignments #1/#2 of this course, you implemented parallel Gaussian
Elimination with partial pivoting (using pthreads and MPI). LU factorization
is the core step of Gaussian Elimination. In this project, you will work
on sparse LU factorization, meaning that most elements in the
matrix are zero. Sparse matrix computations have applications in many
scientific and engineering problems, including the computation for Google's
PageRank algorithm. Parallel sparse LU factorization with partial pivoting
is challenging because it requires fine-grain synchronization and large
communication volume between computing nodes. Creating a new solver would
probably require more than a semester's effort; instead, you are asked to
experiment with and analyze existing solvers. Two MPI-based solvers are
suggested for this purpose. In your project, you can pick one of the solvers
and study its behavior on different parallel platforms (e.g., a PC cluster
and a shared-memory multiprocessor). You may also compare the two solvers.
Please pay attention to both performance and numerical stability.
Analysis of good and bad behaviors, and insights into possible improvements,
are important parts of the project outcome.
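For reference, below is a minimal sequential sketch of dense LU factorization
with partial pivoting, the kernel that these solvers parallelize. The sparse
solvers perform the same elimination on compressed, mostly-zero storage and
must also manage fill-in; the MPI versions distribute rows or columns across
processes, which is where the fine-grain synchronization and communication
come from.

    // Minimal sequential LU factorization with partial pivoting on a dense
    // n x n matrix, for reference only. Sparse solvers do the same
    // elimination on compressed storage and must handle fill-in (zeros
    // that become nonzero during elimination).
    #include <cmath>
    #include <utility>
    #include <vector>

    // Factor A in place: the unit lower triangle holds L, the upper
    // triangle holds U; perm records the row swaps from partial pivoting.
    void lu_factor(std::vector<std::vector<double>>& A, std::vector<int>& perm) {
        const int n = A.size();
        perm.resize(n);
        for (int i = 0; i < n; ++i) perm[i] = i;
        for (int k = 0; k < n; ++k) {
            // Partial pivoting: pick the row with the largest |A[i][k]|.
            int p = k;
            for (int i = k + 1; i < n; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
            std::swap(A[k], A[p]);
            std::swap(perm[k], perm[p]);
            if (A[k][k] == 0.0) continue;  // singular column; nothing to eliminate
            // Eliminate below the pivot; the multipliers form column k of L.
            for (int i = k + 1; i < n; ++i) {
                const double m = A[i][k] / A[k][k];
                A[i][k] = m;
                for (int j = k + 1; j < n; ++j)
                    A[i][j] -= m * A[k][j];
            }
        }
    }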
-
Server System Behavior Characterization and Analysis (sponsor: Kai Shen)
In assignment #3 of this course, you performed analysis on some server
request traces. This project allows you to extend that assignment to
investigate advanced topics on server system behavior characterization and
analysis. You can try more sophisticated learning approaches to identify
interesting behaviors and characteristics. You can use timing analysis to
examine the behaviors of requests running at the same time. You can extend
the trace analysis to a distributed system, such as one using MapReduce.
It may even be possible to perform power-related studies.
The sponsor can help produce additional traces for analysis.
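As one concrete starting point for the timing analysis mentioned above, the
small sketch below computes how many requests run at the same time. It
assumes a hypothetical trace format with one (start, end) timestamp pair per
request; real traces would first need parsing into this form.

    // Sweep over sorted start/end events to find the peak number of
    // requests that are in flight simultaneously.
    #include <algorithm>
    #include <utility>
    #include <vector>

    struct Request { double start, end; };

    int max_concurrency(const std::vector<Request>& trace) {
        std::vector<std::pair<double, int>> events;  // (time, +1 start / -1 end)
        for (const auto& r : trace) {
            events.push_back({r.start, +1});
            events.push_back({r.end,   -1});
        }
        // At equal times, -1 sorts before +1, so a request ending exactly
        // when another starts does not count as an overlap.
        std::sort(events.begin(), events.end());
        int cur = 0, best = 0;
        for (const auto& e : events) {
            cur += e.second;
            best = std::max(best, cur);
        }
        return best;
    }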
-
Resource-Aware Scheduling for Multicore Processors (sponsor: Kai Shen)
Multicore processors make it possible to share resources at a fine
grain. However, contention from conflicting resource requirements can result
in reduced performance, especially in the memory hierarchy. As part of this
project, you could attempt to identify a mechanism by which conflicts between
applications running on sibling cores may be detected. You can also implement
a scheme to utilize the information at either the operating system or
application level. You may want to check three papers
(EuroSys'2009,
USENIX ATC'2009,
and USENIX ATC'2010)
on relevant work done at Rochester.
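One plausible detection mechanism (an assumption here, not necessarily the
approach taken in the papers above) is to read hardware cache-miss counters
through Linux's perf_event_open interface; a scheduler could compare such
miss counts across sibling cores to infer contention. A minimal sketch:

    // Count last-level-cache read misses for the current process using
    // Linux's perf_event_open (requires a kernel with perf support).
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main() {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HW_CACHE;
        attr.config = PERF_COUNT_HW_CACHE_LL |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
        attr.disabled = 1;

        // pid = 0, cpu = -1: measure this process on any CPU.
        int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        // ... run the workload to be characterized here ...
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t misses = 0;
        read(fd, &misses, sizeof(misses));
        printf("LL cache read misses: %llu\n", (unsigned long long)misses);
        close(fd);
        return 0;
    }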
-
Software Transactional Memory (sponsor: Michael Scott)
Over the past six years, URCS has become one of the leading sites for
research in transactional memory (TM). Possible projects in this area
include:
- Application work
- Researchers at the University of Manchester have developed a
transactional version of Lee's PC-board trace-routing algorithm,
but it has some structural problems (transactions are large, and
conflict with each other too much). Develop a new approach that
addresses these problems. Ideally, extend it into a methodology
for detecting and addressing large-scale transaction conflicts.
- The most widely used TM applications are the STAMP suite from
Stanford. These were hastily written, leading to performance
anomalies in certain testing scenarios. Moreover, they make
hand-inserted calls to library routines -- a tedious and unsafe
programming approach. Port the STAMP apps to the C++ TM
standard API (see the sketch after this list), clean up the
code, collect baseline performance statistics, and release the code.
- Implement or port additional benchmarks (many possibilities,
large and small).
- Add efficient support for condition synchronization and/or
nested transactions.
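To make the STAMP porting task concrete, the sketch below contrasts the two
styles: hand-inserted library calls (shown as illustrative macros, not the
exact STAMP names) versus a compiler-instrumented atomic block in the style
of the draft C++ TM specification, compiled here with GCC's -fgnu-tm
extension. The account type is a made-up example.

    struct Account { long balance; };

    // STAMP style: every shared read/write goes through the STM library
    // by hand, which is tedious and easy to get wrong.
    //   TM_BEGIN();
    //   long b = TM_READ(from->balance);
    //   TM_WRITE(from->balance, b - amt);
    //   TM_WRITE(to->balance, TM_READ(to->balance) + amt);
    //   TM_END();

    // C++ TM style: the compiler instruments everything in the block.
    void transfer(Account* from, Account* to, long amt) {
        __transaction_atomic {
            from->balance -= amt;
            to->balance   += amt;
        }
    }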
-
Dual data structures (sponsor: Michael Scott)
A few years back, grad student Bill Scherer (now a soft-money
researcher at Rice Univ.) gained considerable fame (including a
"research highlights" publication in CACM) by rewriting classic
queues, stacks, synchronous queues, and exchangers as lock-free
"dual" data structures, in which an operation that has to wait for
a precondition leaves an explicit reservation in the data structure.
What else can be built in this style? In particular, you might
consider priority queues or various search structures.
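To make the reservation idea concrete, here is an illustrative sketch of a
dual queue. It uses a lock and condition variable for brevity, whereas the
published dual structures are lock-free; the point is the contract: a
dequeue on an empty queue leaves an explicit, FIFO-ordered reservation that
a later enqueue fulfills.

    #include <condition_variable>
    #include <deque>
    #include <mutex>

    template <typename T>
    class DualQueue {
        struct Reservation { bool filled = false; T value{}; };
        std::mutex m;
        std::condition_variable cv;
        std::deque<T> data;              // waiting values...
        std::deque<Reservation*> resv;   // ...or waiting consumers, never both

    public:
        void enqueue(T v) {
            std::unique_lock<std::mutex> lk(m);
            if (!resv.empty()) {         // fulfill the oldest reservation
                Reservation* r = resv.front();
                resv.pop_front();
                r->value = std::move(v);
                r->filled = true;
                cv.notify_all();
            } else {
                data.push_back(std::move(v));
            }
        }

        T dequeue() {
            std::unique_lock<std::mutex> lk(m);
            if (!data.empty()) {         // fast path: a value is waiting
                T v = std::move(data.front());
                data.pop_front();
                return v;
            }
            Reservation r;               // slow path: leave a reservation
            resv.push_back(&r);
            cv.wait(lk, [&] { return r.filled; });
            return std::move(r.value);
        }
    };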
-
Manual speculation in high-performance concurrent data structures (sponsor: Michael Scott)
Building on his 400 project last fall, Lingxiang Xiang has developed
high-performance implementations of several concurrent data
structures, using a methodology that moves as much work as possible
outside of critical sections, with fast re-verification. Extend
this work to other data structures, like search trees and graphs.
This has the potential to be as influential as the dual-data-structure
work -- definitely publishable.
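As a rough sketch of the methodology (a hypothetical sorted-list insert with
a single global lock, not Lingxiang's actual code): the traversal runs
speculatively outside the critical section, and the lock is held only for a
quick re-verification and the update itself. The sketch deliberately ignores
memory-model subtleties that a production implementation must handle.

    #include <climits>
    #include <mutex>

    struct Node {
        int key;
        Node* next;
    };

    class SortedList {
        Node head{INT_MIN, nullptr};  // sentinel
        std::mutex m;

        // Under the lock: is pred still reachable with pred->next == curr?
        bool validate(Node* pred, Node* curr) {
            for (Node* n = &head; n != nullptr && n->key <= pred->key; n = n->next)
                if (n == pred) return pred->next == curr;
            return false;
        }

    public:
        bool insert(int key) {
            while (true) {
                // Speculative phase: traverse with no lock held.
                Node* pred = &head;
                Node* curr = head.next;
                while (curr != nullptr && curr->key < key) {
                    pred = curr;
                    curr = curr->next;
                }
                std::lock_guard<std::mutex> lk(m);
                if (!validate(pred, curr)) continue;   // raced; retry traversal
                if (curr != nullptr && curr->key == key) return false;  // duplicate
                pred->next = new Node{key, curr};
                return true;
            }
        }
    };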
-
InterWeave (sponsor: Michael Scott)
InterWeave is a system that allows distributed processes to share
global variables. It keeps locally cached copies of these variables
up-to-date efficiently, converting automatically among multiple
languages and machine types. It was a very successful project, and
had a high-quality implementation that we never got around to
polishing up for open-source distribution. There is reason to think
there would be widespread interest in such a distribution. This
would involve nontrivial application development, performance
tuning, and configuration and installation work.