Course project (assigned April 9)
Project proposal due in class on Monday, April 16.
Project due on Monday, April 30.
For the course project, you may choose from the topics listed below, propose a new topic that particularly interests you, or work on a project arranged privately with another professor. If needed, the instructor can help you define a specific scope. Any idea can be considered as long as the topic is substantially related to parallel and distributed systems.
You may work individually, or in a team if the project scope justifies it. Both individual and team projects require the instructor's approval.
Grading:
- 10-minute pre-proposal meeting: 10 points.
- Written proposal: 20 points.
- 2-minute in-class oral report: 5 points.
- Project: 50 points, evaluated on difficulty, relevance, design, and implementation.
- Final demo: 15 points.
Project management:
First, read the instructions on this page, including the timeline and the list of suggested project problems/topics.
Schedule a 10-minute pre-proposal meeting this Friday morning at the following link. The purpose of the meeting is to describe your plan in three minutes, answer questions, and receive feedback (be ready to take notes). You receive 10 points for the pre-proposal meeting; extra points are awarded for exceptional clarity and for engaging the listener's interest.
https://docs.google.com/spreadsheet/ccc?key=0AtiJcruj7FQAdDlWQ3d0TDc2Tm1...
Following the pre-proposal meeting, submit a one-page project proposal on paper in class on Monday. In the proposal, describe the aims and scope of your project, the work items, and the materials (papers and web resources) you will use. Provide sufficient detail on how you will demonstrate the result at the end.
Give a 2-minute oral report at the start of one of the classes in the two weeks following the proposal. Briefly tell the class about your project and your progress.
Submit the project and a report by the due time. Create a subdirectory under [repos]/assignments/final_project/ to store the project files. The final report should extend your proposal with the actual design, implementation details, and a listing and analysis of results. Bring a printed copy of the report to hand in at the demo.
You need to set up an 8-minute demo. Use the same URL above to sign up (the time and place will be provided later). In the demo, show how (well) your project works with your own test cases. Be ready to handle tests that the audience may suggest, in addition to answering questions about your system.
Project topics:
You may choose a topic covered in the course to learn more about it. For example, write a test to show that a machine is or is not sequentially consistent, or implement the Paxos protocol. A sketch of such a consistency test appears below.
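For the first example, here is a minimal sketch of the classic "store buffering" litmus test, written with pthreads and volatile variables to restrain the compiler. Under sequential consistency, the outcome r1 == 0 and r2 == 0 is impossible; on most modern machines (e.g., x86 with store buffers) it shows up within a few thousand trials. The trial count and flag-based handshake are illustrative choices, not requirements.

    /* sb_test.c: "store buffering" litmus test.
       Compile: gcc -pthread sb_test.c
       Under sequential consistency, r1 == 0 && r2 == 0 cannot happen. */
    #include <pthread.h>
    #include <stdio.h>

    #define TRIALS 1000000

    volatile int x, y, r1, r2;
    volatile int go, done0, done1;

    void *thread0(void *arg) {
        for (int i = 0; i < TRIALS; i++) {
            while (go != i + 1) ;           /* spin until trial i starts */
            x = 1;
            r1 = y;
            done0 = i + 1;
        }
        return NULL;
    }

    void *thread1(void *arg) {
        for (int i = 0; i < TRIALS; i++) {
            while (go != i + 1) ;
            y = 1;
            r2 = x;
            done1 = i + 1;
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        long violations = 0;
        pthread_create(&a, NULL, thread0, NULL);
        pthread_create(&b, NULL, thread1, NULL);
        for (int i = 0; i < TRIALS; i++) {
            x = y = 0;                      /* reset, then release both threads */
            go = i + 1;
            while (done0 != i + 1 || done1 != i + 1) ;   /* wait for both */
            if (r1 == 0 && r2 == 0)
                violations++;
        }
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("SC violations: %ld out of %d trials\n", violations, TRIALS);
        return 0;
    }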
You may work with an experimental system. The following is a list of possible topics. Contact the sponsoring faculty member if the sponsor is not your instructor (and do so before the pre-proposal meeting). Some of the items may be dated; confirm with the sponsor.
- Suggestion-based Parallelization Language (sponsor: Chen Ding)
  - Use the BOP system, either for C or Ruby, to parallelize one or more programs.
  - Evaluate the current implementation using current benchmarks.
  - Apply or extend FastTrack, a suggestion-based optimization tool.
- Software Distributed Shared Memory (sponsor: Sandhya Dwarkadas)
Implementing shared memory in software is seeing renewed interest as designers cope with the scaling issues associated with increasing numbers of cores on a chip. Our group has done seminal work in the area of software-implemented shared memory systems. As part of this project, you could propose and evaluate an explicit cache management strategy that is tailored to the latency and bandwidth constraints of multicores.
- Parallelizing High Performance Computing Applications (sponsor: Sandhya Dwarkadas)
We have several research groups associated with the Center for Research Computing with compute-intensive codes that could benefit from efficient parallelization, both on clusters and on the BlueGene rack at the CRC. As part of this project, you could work with one of these groups to develop an efficient parallel implementation of their code (e.g., gene expression analysis with the UR biostatistics department, and computational models of cell migration used in cancer studies by the radiation oncology department).
- Memory Hierarchy Design for Multi-Core Processors (sponsor: Sandhya Dwarkadas)
As part of our research, we are exploring novel communication and synchronization mechanisms, as well as examining ways in which on-chip state resources can be partitioned/shared so as to improve both single and multi-threaded performance. We have working simulation designs for various communication mechanisms and cache designs. As part of this project, you could use the simulator to examine/evaluate the proposed cache designs, suggest improvements of your own, or find and experiment with new ways to utilize the proposed communication mechanisms. For more information, take a look at the CoSyn project.
- Parallel Sparse LU Factorization Using Message Passing (sponsor: Kai Shen)
In assignments #1 and #2 of this course, you implemented parallel Gaussian elimination with partial pivoting (using pthreads and MPI). LU factorization is the core step of Gaussian elimination. In this project, you will work on sparse LU factorization, meaning that most elements in the matrix are zeros. Sparse matrix computations have applications in many scientific and engineering problems, including the computation behind Google's PageRank algorithm. Parallel sparse LU factorization with partial pivoting is challenging because it requires fine-grained synchronization and a large communication volume between computing nodes. Creating a new solver would probably require more than a semester's effort; instead, you are asked to experiment with and analyze some existing solvers. Two MPI-based solvers are:
In your project, you can pick one of the solvers and study its behavior on different parallel platforms (e.g., a PC cluster and a shared-memory multiprocessor). You may also compare the two solvers. Pay attention to both performance and numerical stability. Analysis of good and bad behaviors, and insight into any possible improvement, are important parts of the project outcome. A sketch of the underlying factorization kernel follows.
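As a reminder of the computation being parallelized, here is a minimal sequential sketch of dense LU factorization with partial pivoting (row-major storage, factored in place; these layout choices are illustrative). The solvers above distribute and reorder the sparse analogue of exactly this loop nest.

    /* lu.c: minimal sequential LU factorization with partial pivoting.
       A is an n-by-n row-major matrix, factored in place so that the
       multipliers (L) overwrite the lower triangle; piv records swaps. */
    #include <math.h>

    int lu_factor(double *A, int n, int *piv) {
        for (int k = 0; k < n; k++) {
            int p = k;                      /* find largest pivot in column k */
            for (int i = k + 1; i < n; i++)
                if (fabs(A[i * n + k]) > fabs(A[p * n + k]))
                    p = i;
            piv[k] = p;
            if (A[p * n + k] == 0.0)
                return -1;                  /* matrix is singular */
            if (p != k)                     /* swap rows k and p */
                for (int j = 0; j < n; j++) {
                    double t = A[k * n + j];
                    A[k * n + j] = A[p * n + j];
                    A[p * n + j] = t;
                }
            for (int i = k + 1; i < n; i++) {   /* eliminate below the pivot */
                double m = A[i * n + k] / A[k * n + k];
                A[i * n + k] = m;               /* store the multiplier */
                for (int j = k + 1; j < n; j++)
                    A[i * n + j] -= m * A[k * n + j];
            }
        }
        return 0;
    }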
- Server System Behavior Characterization and Analysis (sponsor: Kai Shen)
In assignment #3 of this course, you performed analysis of server request traces. This project lets you extend that assignment to investigate advanced topics in server system behavior characterization and analysis. You can try more sophisticated learning approaches to identify interesting behaviors and characteristics. You can use timing analysis to examine the behavior of requests running at the same time. You can extend the trace analysis to a distributed system, such as one using MapReduce. It may even be possible to perform power-related studies. The sponsor can help produce additional traces for analysis.
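As a starting point, this sketch computes simple latency statistics from a trace on standard input. The two-column format (arrival time and latency per request, in seconds) is a placeholder assumption; the traces from assignment #3 may be laid out differently.

    /* trace_stats.c: sketch of simple server-trace analysis.
       Assumed (hypothetical) input format, one request per line:
       "<arrival_time_sec> <latency_sec>" */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        double arrival, latency, sum = 0;
        double *lat = NULL;
        size_t n = 0, cap = 0;
        while (scanf("%lf %lf", &arrival, &latency) == 2) {
            if (n == cap) {                 /* grow the latency array */
                cap = cap ? cap * 2 : 1024;
                lat = realloc(lat, cap * sizeof *lat);
            }
            lat[n++] = latency;
            sum += latency;
        }
        if (n == 0) return 1;
        qsort(lat, n, sizeof *lat, cmp);
        printf("requests: %zu\n", n);
        printf("mean latency: %f s\n", sum / n);
        printf("99th percentile: %f s\n", lat[(size_t)(0.99 * (n - 1))]);
        free(lat);
        return 0;
    }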
- Resource-Aware Scheduling for Multicore Processors (sponsor: Kai Shen)
Multicore processors have a unique opportunity to share resources at a fine grain. However, contention from conflicting resource requirements can result in reduced performance, especially in the memory hierarchy. As part of this project, you could attempt to identify a mechanism by which conflicts between applications running on sibling cores may be detected. You could also implement a scheme to utilize the information at either the operating system or application level. You may want to check three papers (EuroSys'2009, USENIX ATC'2009, and USENIX ATC'2010) on relevant work done at Rochester.
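One possible building block for the detection half, on Linux, is hardware performance counters read through the perf_event_open(2) system call. The sketch below counts cache misses around an illustrative in-process workload; using such counts to infer contention between sibling cores is the project's design question, and the event choice here is only an assumption.

    /* perf_misses.c: sketch of reading a hardware cache-miss counter via
       perf_event_open(2), a possible building block for detecting
       memory-hierarchy contention.  Compile: gcc perf_misses.c */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof attr;
        attr.config = PERF_COUNT_HW_CACHE_MISSES;
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* Open a counter for this process, on any CPU. */
        int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        /* Illustrative workload: touch one byte per cache line. */
        size_t n = 64 * 1024 * 1024;
        char *buf = malloc(n);
        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        for (size_t i = 0; i < n; i += 64)
            buf[i] = (char)i;
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t misses;
        read(fd, &misses, sizeof misses);
        printf("cache misses: %llu\n", (unsigned long long)misses);
        free(buf);
        close(fd);
        return 0;
    }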
- Software Transactional Memory (sponsor: Michael Scott)
Over the past six years, URCS has become one of the leading sites for research in transactional memory (TM). Possible projects in this area include:
  - Researchers at the University of Manchester have developed a transactional version of Lee's PC-board trace-routing algorithm, but it has some structural problems (transactions are large and conflict with each other too much). Develop a new approach that addresses this problem; ideally, extend it into a methodology to detect and address large-scale transaction conflicts.
  - Implement or port additional benchmarks (many possibilities, large and small).
  - Add efficient support for condition synchronization and/or nested transactions (a toy model of condition synchronization follows this list).
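For readers unfamiliar with condition synchronization in TM, here is a toy model of the "retry" idiom, with a single global lock standing in for a transaction and a broadcast on every commit. Real STMs track read sets so that only affected transactions are re-run; everything here is a simplification for illustration.

    /* retry.c: toy model of TM condition synchronization ("retry").
       A global lock stands in for a transaction; RETRY blocks until
       some other "transaction" commits, then re-checks the condition. */
    #include <pthread.h>

    static pthread_mutex_t tm_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  tm_commit = PTHREAD_COND_INITIALIZER;

    #define ATOMIC_BEGIN() pthread_mutex_lock(&tm_lock)
    #define ATOMIC_END()   do { pthread_cond_broadcast(&tm_commit); \
                                pthread_mutex_unlock(&tm_lock); } while (0)
    #define RETRY()        pthread_cond_wait(&tm_commit, &tm_lock)

    /* Example: a counter with a blocking decrement. */
    static int count = 0;

    void increment(void) {
        ATOMIC_BEGIN();
        count++;
        ATOMIC_END();
    }

    void blocking_decrement(void) {
        ATOMIC_BEGIN();
        while (count == 0)
            RETRY();        /* wait for a commit that makes count > 0 */
        count--;
        ATOMIC_END();
    }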
- Dual data structures (sponsor: Michael Scott)
A few years back, grad student Bill Scherer (now a soft-money researcher at Rice Univ.) gained considerable fame (including a "research highlights" publication in CACM) by rewriting classic queues, stacks, synchronous queues, and exchangers as lock-free "dual" data structures, in which an operation that has to wait for a precondition leaves an explicit reservation in the data structure. What else can be built in this style? In particular, you might consider priority queues or various search structures.
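To make the reservation idea concrete, here is a deliberately simplified dual queue: a dequeue that finds the queue empty enqueues a reservation, and a later enqueue fulfills the oldest reservation directly instead of adding data. This sketch uses a single lock and condition variables for clarity; Scherer's structures achieve these semantics lock-free, which is the hard part of any project in this style.

    /* dualq.c: toy illustration of a "dual" queue.  The queue holds
       either all data nodes or all reservation nodes, never both. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct node {
        int value;
        int is_reservation;     /* 1 = a waiting dequeuer's placeholder */
        int fulfilled;
        pthread_cond_t filled;
        struct node *next;
    } node;

    typedef struct {
        node *head, *tail;
        pthread_mutex_t lock;
    } dualqueue;

    void dq_init(dualqueue *q) {
        q->head = q->tail = NULL;
        pthread_mutex_init(&q->lock, NULL);
    }

    static node *new_node(int v, int r) {
        node *n = calloc(1, sizeof *n);
        n->value = v;
        n->is_reservation = r;
        pthread_cond_init(&n->filled, NULL);
        return n;
    }

    static void push_tail(dualqueue *q, node *n) {
        if (q->tail) q->tail->next = n; else q->head = n;
        q->tail = n;
    }

    static node *pop_head(dualqueue *q) {
        node *n = q->head;
        q->head = n->next;
        if (!q->head) q->tail = NULL;
        return n;
    }

    void dq_enqueue(dualqueue *q, int v) {
        pthread_mutex_lock(&q->lock);
        if (q->head && q->head->is_reservation) {
            node *r = pop_head(q);      /* fulfill the oldest reservation */
            r->value = v;
            r->fulfilled = 1;
            pthread_cond_signal(&r->filled);
        } else {
            push_tail(q, new_node(v, 0));
        }
        pthread_mutex_unlock(&q->lock);
    }

    int dq_dequeue(dualqueue *q) {
        pthread_mutex_lock(&q->lock);
        if (q->head && !q->head->is_reservation) {
            node *n = pop_head(q);      /* data available: take it */
            int v = n->value;
            pthread_mutex_unlock(&q->lock);
            pthread_cond_destroy(&n->filled);
            free(n);
            return v;
        }
        /* Empty (or all reservations): leave a reservation and wait. */
        node *r = new_node(0, 1);
        push_tail(q, r);
        while (!r->fulfilled)
            pthread_cond_wait(&r->filled, &q->lock);
        int v = r->value;               /* enqueuer already unlinked r */
        pthread_mutex_unlock(&q->lock);
        pthread_cond_destroy(&r->filled);
        free(r);
        return v;
    }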
- Parallel garbage collection for Ruby (sponsor: Michael Scott)
Grad student Kyle Liddell is currently engaged in an ambitious parallelization of the Ruby Virtual Machine. One of the bottlenecks in his parallel implementation is the Ruby garbage collector, which is neither concurrent (runs only when the rest of the VM is stalled) nor (for the most part) internally parallel. Making it concurrent would be a lot of work. Making it parallel would be a good-size project for 2/458.
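For a sense of what "internally parallel" means here, this sketch parallelizes the mark phase: the root set is divided among threads, and an atomic test-and-set on each object's mark bit prevents two threads from tracing the same object. The object layout, the static division of roots, and the absence of work stealing or an explicit mark stack are all simplifications.

    /* mark.c: sketch of a parallel mark phase.  Objects are assumed to
       be created with their mark flag clear (ATOMIC_FLAG_INIT). */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct obj {
        atomic_flag marked;
        size_t nchildren;
        struct obj **children;
    } obj;

    static void mark(obj *o) {
        if (o == NULL || atomic_flag_test_and_set(&o->marked))
            return;                 /* already claimed by some thread */
        for (size_t i = 0; i < o->nchildren; i++)
            mark(o->children[i]);   /* recursion stands in for a mark stack */
    }

    typedef struct { obj **roots; size_t n; } root_slice;

    /* Each worker thread marks from its own slice of the root set. */
    static void *mark_worker(void *arg) {
        root_slice *s = arg;
        for (size_t i = 0; i < s->n; i++)
            mark(s->roots[i]);
        return NULL;
    }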
- Manual speculation in high-performance concurrent data structures (sponsor: Michael Scott)
Building on his 400 project last fall, Lingxiang Xiang has developed high-performance implementations of several concurrent data structures, using a methodology that moves as much work as possible outside of critical sections, with fast re-verification. Extend this work to other data structures, like search trees and graphs.
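The flavor of the technique, sketched for a sorted, insert-only linked list: the traversal to find the insertion point runs outside the critical section, and the lock-protected part only re-verifies the position and splices. The names and structure are illustrative, not Xiang's actual code, and a production version would need atomics (and care with deletion) for the unlocked traversal.

    /* opt_insert.c: sketch of moving work out of a critical section.
       Assumes an insert-only list; the unlocked search may be stale,
       so the critical section re-verifies before splicing. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct node { int key; struct node *next; } node;

    typedef struct {
        node head;                  /* sentinel; its key is never read */
        pthread_mutex_t lock;
    } list;

    void list_init(list *l) {
        l->head.next = NULL;
        pthread_mutex_init(&l->lock, NULL);
    }

    /* Unlocked search: last node whose key is < k.  May be stale. */
    static node *locate(list *l, int k) {
        node *prev = &l->head;
        for (node *cur = prev->next; cur && cur->key < k; cur = cur->next)
            prev = cur;
        return prev;
    }

    void insert(list *l, int k) {
        node *n = malloc(sizeof *n);
        n->key = k;
        for (;;) {
            node *prev = locate(l, k);          /* speculative, no lock */
            pthread_mutex_lock(&l->lock);
            /* Re-verify: is prev still the correct predecessor? */
            if ((prev == &l->head || prev->key < k) &&
                (prev->next == NULL || prev->next->key >= k)) {
                n->next = prev->next;           /* splice under the lock */
                prev->next = n;
                pthread_mutex_unlock(&l->lock);
                return;
            }
            pthread_mutex_unlock(&l->lock);     /* raced; retry the search */
        }
    }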
- InterWeave (sponsor: Michael Scott)
InterWeave is a system that allows distributed processes to share global variables. It efficiently keeps locally cached copies of these variables up to date, converting automatically among multiple languages and machine types. It was a very successful project with a high-quality implementation that we never got around to polishing for open-source distribution; there is reason to think such a distribution would attract widespread interest. Preparing it would involve nontrivial application development, performance tuning, and configuration and installation work.
- Distributed Hash Table (sponsor: Daniel Gildea)
The goal is to store and query a hash table too large to fit in the memory of one machine. Data should be allocated transparently across machines as necessary. The user interface should be as close to the standard Python "dict" as possible. A sketch of the key placement decision appears below the sample dataset.
sample dataset:
/p/nl-archives/Web-1T-5-gram-v1/web_5gram_1/data/3gms
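The core placement decision, sketched in C for consistency with the other examples (the project itself would wrap this in a Python dict-like interface): hash the key and map it to one of the participating machines. The host names are placeholders, and a real design might prefer consistent hashing so that adding a machine relocates only about 1/n of the keys.

    /* shard.c: sketch of the placement decision at the heart of a DHT:
       hash the key, then map it to one of the participating machines. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *machines[] = { "node0", "node1", "node2", "node3" };
    #define NMACHINES (sizeof machines / sizeof machines[0])

    /* FNV-1a: a simple, well-distributed string hash. */
    static uint64_t fnv1a(const char *s) {
        uint64_t h = 1469598103934665603ULL;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    static const char *owner(const char *key) {
        return machines[fnv1a(key) % NMACHINES];
    }

    int main(void) {
        const char *keys[] = { "the quick brown", "fox jumps over", "lazy dog ngram" };
        for (int i = 0; i < 3; i++)
            printf("%-18s -> %s\n", keys[i], owner(keys[i]));
        return 0;
    }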
- Program the Crowd (sponsor: Jeffrey Bigham)
The crowd is an intelligent but unreliable source of computation, and it is massively parallelizable. For example, a program can distribute the responsibility of a task among hundreds of people in such a way that reliable output is achieved within a defined time frame. References can be found here:
http://web.mit.edu/newsoffice/2010/programming-crowds-1027.html
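One standard way to get reliable output from unreliable workers, sketched below, is to assign the same task redundantly and take a majority vote over the answers. The answer strings and worker count are illustrative; a real system would also decide when to recruit more workers.

    /* vote.c: sketch of redundant task assignment with majority voting,
       a common way to get reliable output from unreliable workers. */
    #include <stdio.h>
    #include <string.h>

    /* Return the answer given by a strict majority, or NULL if none. */
    const char *majority(const char *answers[], int n) {
        for (int i = 0; i < n; i++) {
            int votes = 0;
            for (int j = 0; j < n; j++)
                if (strcmp(answers[i], answers[j]) == 0)
                    votes++;
            if (2 * votes > n)
                return answers[i];
        }
        return NULL;    /* no consensus: reassign to more workers */
    }

    int main(void) {
        const char *answers[] = { "cat", "cat", "dog", "cat", "cow" };
        const char *result = majority(answers, 5);
        printf("consensus: %s\n", result ? result : "(none)");
        return 0;
    }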