Pre-assignment due in class, Monday, January 31.
Full assignment due by 11:59pm, Tuesday, February 8.
Assignment overview:
The main goal of this assignment is to gain experience developing
a parallel program with pthreads. Additionally, you will study how
parallel performance is affected by your parallelization strategy (including
task decomposition, assignment, and synchronization management) and by the
multiprocessor platform.
Development and testing platforms:
There are several groups of multiprocessor machines you may use for this
assignment. Please check the Experiment machines
page of the course web site for information and instructions.
Requirements for the pre-assignment:
As preparation, your first task is to run a sample program (Successive
Over Relaxation) that we provide to you. The sequential version of the
program is available at
/u/cs458/apps/sor/seq/ and the pthreads parallel version is
available at /u/cs458/apps/sor/pthreads/. Please choose a
multiprocessor machine and generate performance results for the
sequential version of the program and for the pthreads version at a range of
processor counts (including at least 1, 2, and 4). Besides running the
program, your second task is to understand how tasks are assigned to
processors in the parallel program.
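To make the idea of task assignment concrete, below is a minimal sketch of one common scheme: each thread sweeps a contiguous block of grid rows. This is only an illustration with made-up names and sizes; the provided SOR code may partition work differently, so check its source to see what it actually does.

#include <pthread.h>
#include <stdio.h>

#define N        1024      /* grid dimension (illustrative) */
#define NTHREADS 4         /* number of worker threads (illustrative) */

static double grid[N][N];

/* One sweep over the interior of the grid; each thread owns a
 * contiguous block of rows. */
static void *sor_worker(void *arg)
{
    long id = (long)arg;
    int rows_per_thread = (N - 2) / NTHREADS;
    int first = 1 + id * rows_per_thread;
    int last  = (id == NTHREADS - 1) ? N - 2 : first + rows_per_thread - 1;

    for (int i = first; i <= last; i++)
        for (int j = 1; j < N - 1; j++)
            grid[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                 grid[i][j-1] + grid[i][j+1]);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sor_worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    printf("sweep done\n");
    return 0;
}

Your pre-assignment report should describe and illustrate the scheme the provided code actually uses, which may or may not match this sketch.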
Turn-in for the pre-assignment:
You should hand in a hard-copy report in class on Monday, January 31.
The report should include a speedup graph of your measured performance
results. Please indicate the specific machine you used in your measurement.
Your report should also include a description of how tasks are assigned to
processors in the parallel version of the program. You are required to
provide a graphic illustration of this assignment.
When you later submit the main assignment electronically, you should also include a copy of your pre-assignment report.
Requirements for the main assignment:
You are asked to develop a parallel Gaussian Elimination (with partial
pivoting) program using pthreads. In the context of solving a system of
linear equations, Gaussian Elimination is a classical method for reducing
the equation matrix into an equivalent upper-triangular matrix. The full
solution requires an additional back-solver, which has a lower computational
complexity and thus typically requires much less time than the Gaussian
Elimination step. In this assignment, you only need to parallelize the
Gaussian Elimination step.
A sequential version of Gaussian Elimination is provided to you.
It is available at /u/cs458/apps/gauss/seq/. The program
takes one parameter, the input matrix. Currently six matrices
(five real-world sparse matrices and one artificially generated dense
matrix) are available for your testing. Please read
/u/cs458/apps/gauss/seq/README for details.
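For reference, here is a sketch of the core elimination loop with partial pivoting: at each step k, the row with the largest remaining entry in column k is chosen as the pivot, swapped into place, and then used to eliminate column k from the rows below. This only illustrates the algorithm; the provided sequential code may organize its data structures and loops differently.

#include <math.h>
#include <stdio.h>

/* Reduce the n x n system (A, b) to upper-triangular form in place. */
void gauss_eliminate(int n, double A[n][n], double b[n])
{
    for (int k = 0; k < n; k++) {
        /* Partial pivoting: pick the row with the largest |A[i][k]|. */
        int pivot = k;
        for (int i = k + 1; i < n; i++)
            if (fabs(A[i][k]) > fabs(A[pivot][k]))
                pivot = i;
        if (pivot != k) {                      /* swap rows k and pivot */
            for (int j = k; j < n; j++) {
                double t = A[k][j]; A[k][j] = A[pivot][j]; A[pivot][j] = t;
            }
            double t = b[k]; b[k] = b[pivot]; b[pivot] = t;
        }
        /* Eliminate column k from every row below the pivot row. */
        for (int i = k + 1; i < n; i++) {
            double factor = A[i][k] / A[k][k];
            for (int j = k; j < n; j++)
                A[i][j] -= factor * A[k][j];
            b[i] -= factor * b[k];
        }
    }
}

int main(void)
{
    /* Small example: the solution of this system is x = (2, 3, -1). */
    double A[3][3] = { {2, 1, -1}, {-3, -1, 2}, {-2, 1, 2} };
    double b[3]    = { 8, -11, -3 };
    gauss_eliminate(3, A, b);
    for (int i = 0; i < 3; i++)
        printf("%8.3f %8.3f %8.3f | %8.3f\n", A[i][0], A[i][1], A[i][2], b[i]);
    return 0;
}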
After developing and testing your program, you should measure the performance/speedup of your program with the input matrices on at least two different multiprocessor machines (for up to at least four processors), and analyze the performance results. In addition, I expect that you will have tried multiple ways of realizing parallel Gaussian Elimination (in terms of task decomposition, assignment, or synchronization management). You need to provide a comparison of at least two alternative approaches (pick the comparison from which you learned the most) and analyze their performance.
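As a starting point only, below is a sketch of one possible strategy, assuming a dense N x N matrix stored in a shared array: at each pivot step one thread performs the pivot search and row swap, and the row updates are then distributed to threads in a cyclic (interleaved) fashion, with barriers separating the phases. A natural alternative to compare against is a contiguous block assignment of rows. All names and sizes here are illustrative; this is not necessarily the decomposition you should use, and sparse inputs may favor other choices.

#include <pthread.h>
#include <math.h>

#define N        512          /* matrix size (illustrative) */
#define NTHREADS 4            /* number of threads (illustrative) */

static double A[N][N], b[N];
static pthread_barrier_t bar;

static void *ge_worker(void *arg)
{
    long id = (long)arg;

    for (int k = 0; k < N; k++) {
        if (id == 0) {                     /* thread 0 handles pivoting */
            int pivot = k;
            for (int i = k + 1; i < N; i++)
                if (fabs(A[i][k]) > fabs(A[pivot][k]))
                    pivot = i;
            if (pivot != k) {
                for (int j = k; j < N; j++) {
                    double t = A[k][j]; A[k][j] = A[pivot][j]; A[pivot][j] = t;
                }
                double t = b[k]; b[k] = b[pivot]; b[pivot] = t;
            }
        }
        pthread_barrier_wait(&bar);        /* pivot row is now in place */

        /* Cyclic assignment of the remaining rows to threads. */
        for (int i = k + 1 + id; i < N; i += NTHREADS) {
            double factor = A[i][k] / A[k][k];
            for (int j = k; j < N; j++)
                A[i][j] -= factor * A[k][j];
            b[i] -= factor * b[k];
        }
        pthread_barrier_wait(&bar);        /* all updates for step k done */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    pthread_barrier_init(&bar, NULL, NTHREADS);
    /* ... read the input matrix into A and b here ... */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, ge_worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    pthread_barrier_destroy(&bar);
    return 0;
}

The two barriers per pivot step are the main synchronization cost of this scheme; part of your comparison could examine how that cost, and the load balance of cyclic versus block row assignment, changes with the number of processors.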
We only care about the timing of the Gaussian Elimination step of your
program. In your parallel program, please use a barrier (see the
sor code) to properly synchronize all processors at the
beginning and end of Gaussian Elimination for timing purposes.
Code for using the high resolution timer on Intel processors is
available at /u/kshen/www/csc573-spring2003/hrtimer/.
To acquire accurate timing, you'll need to make sure that the machine
has enough processors for your experiments.
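As an illustration of the timing pattern, the sketch below brackets the Gaussian Elimination phase with barriers so that the clock is read only when every thread is ready, and again only after every thread has finished. It uses gettimeofday() purely to keep the example portable; the high-resolution timer code mentioned above has its own interface, which is not reproduced here, and the barrier bar is assumed to be the one from the previous sketch.

#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

extern pthread_barrier_t bar;   /* initialized elsewhere, e.g. as in the sketch above */

static struct timeval t_start, t_end;

/* Called by every worker thread around its share of the GE work. */
void timed_ge_phase(long id)
{
    pthread_barrier_wait(&bar);            /* all threads are ready */
    if (id == 0)
        gettimeofday(&t_start, NULL);
    pthread_barrier_wait(&bar);            /* start time has been recorded */

    /* ... this thread's Gaussian Elimination work goes here ... */

    pthread_barrier_wait(&bar);            /* every thread has finished GE */
    if (id == 0) {
        gettimeofday(&t_end, NULL);
        double secs = (t_end.tv_sec - t_start.tv_sec)
                    + (t_end.tv_usec - t_start.tv_usec) / 1e6;
        printf("Gaussian Elimination took %.3f seconds\n", secs);
    }
}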
Turn-in for the main assignment:
You must electronically turn in a written report. Name your report report.pdf.
As you know, there is no TA for the class. I will rely exclusively
on the report to assign your grade. You will get ZERO if you finish all your programming
but do not turn in the written report. Your report must be in PDF and easy to read.
The report should explain your parallelization strategy---specifically, how task decomposition, assignment, and synchronization are managed. The report should then provide performance/speedup plots for all test cases---six input matrices on at least two different multiprocessor machines (for up to at least four processors). Do not forget to report the comparison of at least two alternative approaches described above; for the comparison, you only need to provide results on one multiprocessor machine. You should also provide appropriate analysis of all these results. Do not forget to include the pre-assignment report in the final report.
You should also electronically turn in the source code (Makefile included) of your program. Make sure that your program takes two parameters --- the first is the input matrix and the second is the number of processors for the parallel run. For example, one can run
./gauss /u/kshen/matrices/original/jpwh_991.dat 4
to test the performance of processing jpwh_991 on four processors.
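A minimal sketch of that command-line interface is shown below; the load_matrix() call is a hypothetical placeholder, since the actual file format is described in the provided README.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <matrix-file> <nprocs>\n", argv[0]);
        return 1;
    }
    const char *matrix_file = argv[1];   /* first parameter: input matrix */
    int nprocs = atoi(argv[2]);          /* second parameter: processor count */
    if (nprocs < 1) {
        fprintf(stderr, "number of processors must be at least 1\n");
        return 1;
    }
    /* load_matrix(matrix_file);  hypothetical: read the matrix, then
       create nprocs threads and run the parallel elimination. */
    printf("input: %s, processors: %d\n", matrix_file, nprocs);
    return 0;
}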
Grading:
Late turn-in policy:
Late turn-ins will be accepted for up to three days, with a 10% penalty for each late day.
No turn-ins more than three days late will be accepted.