Assignment #2 - Parallel Programming in MPI

Assignment due by 11:59pm, Tuesday, March 8.

Assignment overview:
This is a follow-up to assignment #1, but instead of programming with shared-memory threads as you did there, you will develop a distributed-memory parallel program in MPI. Additionally, you will study how performance is affected by your parallelization strategy and the parallel platform.

You will likely need online MPI references for this assignment. The MPI Forum contains some references you may find useful.

Development and testing platforms:
You will be using the same set of machines as you did in assignment #1. Appropriate MPI versions have been installed for these machines. Please check the Experiment machines and MPI instructions page of the course web site for information and instructions on using MPI.

For load management (and particularly load balance), it is usually preferable to run your program on a set of equivalent processors. Since the processors in the three sets of experiment machines are quite different, we recommend that each run include machines from only one set. For instance, in one run you can use 8 processors from node17/node19/node20/node21; in another run, 4 processors from node33/node34; in yet another run, 8 processors from node4x2a. In the last case, you may find it odd that all your MPI processes run on a single shared-memory multiprocessor. Yes, this is allowed!

Part I-- Run/Measure a Provided MPI Program:
As preparation, your first task is to run a sample program (Successive Over Relaxation) that we provide to you. The MPI parallel version is available at /u/cs458/apps/sor/mpi/. Please choose a parallel platform and generate performance results for the MPI version over a range of processor counts (at least including 1, 2, 4, 8, and 16). You should also provide a brief analysis of the results---e.g., how good is the parallel speedup and why?
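
If you choose to instrument a run yourself, the usual MPI idiom is to synchronize all processes, read MPI_Wtime on each side of the measured region, and report the slowest rank, since the slowest rank determines wall-clock time. Below is a minimal sketch of that idiom; the program skeleton is ours, not taken from the provided SOR code.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);        /* start everyone together */
        double t0 = MPI_Wtime();

        /* ... the computation being measured goes here ... */

        double elapsed = MPI_Wtime() - t0;
        double slowest;                     /* wall time = slowest rank */
        MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("elapsed: %.3f seconds\n", slowest);

        MPI_Finalize();
        return 0;
    }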

Part II-- Develop/Measure a New MPI Program:
You are asked to develop a parallel Gaussian Elimination (with partial pivoting) program using MPI. A sequential version of Gaussian Elimination is provided to you at /u/cs458/apps/gauss/seq/. More information about Gaussian Elimination, the provided sequential program, and timing your parallel runs can be found in the description for assignment #1.
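
To make the moving parts concrete, here is a minimal sketch of one possible structure---not the required solution, and not the only reasonable design. It assumes a row-cyclic distribution (global row i lives on rank i % size, at local index i / size) and a broadcast of the pivot row at each elimination step; reading the actual input matrices, back-substitution, and singularity checks are all omitted, and every name here is ours.

    #include <mpi.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Each stored row has n+1 entries: the matrix row followed by b. */
    static void eliminate(double *local, int n, int rank, int size)
    {
        double *pivot_row = malloc((n + 1) * sizeof(double));
        for (int k = 0; k < n; k++) {
            /* Partial pivoting: each rank scans its own rows i >= k for
             * the largest |A[i][k]|; MAXLOC then picks the global winner. */
            struct { double val; int idx; } lmax = { -1.0, -1 }, gmax;
            for (int i = k; i < n; i++) {
                if (i % size != rank) continue;
                double v = fabs(local[(i / size) * (n + 1) + k]);
                if (v > lmax.val) { lmax.val = v; lmax.idx = i; }
            }
            MPI_Allreduce(&lmax, &gmax, 1, MPI_DOUBLE_INT, MPI_MAXLOC,
                          MPI_COMM_WORLD);
            int p = gmax.idx, owner_k = k % size, owner_p = p % size;

            /* Swap rows k and p; their owners exchange if they differ. */
            if (p != k) {
                if (owner_k == owner_p) {
                    if (rank == owner_k) {
                        double *rk = local + (k / size) * (n + 1);
                        double *rp = local + (p / size) * (n + 1);
                        for (int j = 0; j <= n; j++) {
                            double t = rk[j]; rk[j] = rp[j]; rp[j] = t;
                        }
                    }
                } else if (rank == owner_k || rank == owner_p) {
                    int mine  = (rank == owner_k) ? k : p;
                    int other = (rank == owner_k) ? owner_p : owner_k;
                    MPI_Sendrecv_replace(local + (mine / size) * (n + 1),
                                         n + 1, MPI_DOUBLE, other, 0,
                                         other, 0, MPI_COMM_WORLD,
                                         MPI_STATUS_IGNORE);
                }
            }

            /* The pivot row (now global row k) goes to everyone. */
            if (rank == owner_k)
                memcpy(pivot_row, local + (k / size) * (n + 1),
                       (n + 1) * sizeof(double));
            MPI_Bcast(pivot_row, n + 1, MPI_DOUBLE, owner_k, MPI_COMM_WORLD);

            /* Each rank eliminates column k from its own rows below k. */
            for (int i = k + 1; i < n; i++) {
                if (i % size != rank) continue;
                double *row = local + (i / size) * (n + 1);
                double m = row[k] / pivot_row[k];
                for (int j = k; j <= n; j++)
                    row[j] -= m * pivot_row[j];
            }
        }
        free(pivot_row);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size, n = 512;         /* demo size; real runs load a matrix */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int nlocal = (n - rank + size - 1) / size;   /* rows this rank owns */
        double *local = malloc((size_t)nlocal * (n + 1) * sizeof(double));
        srand(rank + 1);                 /* random demo system */
        for (int r = 0; r < nlocal * (n + 1); r++)
            local[r] = 1.0 + (double)rand() / RAND_MAX;

        eliminate(local, n, rank, size);
        if (rank == 0)
            printf("upper-triangular form computed for n = %d\n", n);

        free(local);
        MPI_Finalize();
        return 0;
    }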

After developing and testing your program, you should measure its performance/speedup with the input matrices on at least two different parallel platforms (up to at least 8 processors on node4x2a and up to at least 16 processors on the other two platforms). You should analyze the performance results. In addition, I expect you to try multiple ways of realizing parallel Gaussian Elimination (in terms of data partitioning, communication management, or anything else). You need to provide a comparison of at least two alternative approaches (pick the comparison you learned the most from) and analyze their performance.
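
As one example of an alternative worth comparing: block and cyclic row distributions differ only in which rank owns a given row, yet they behave quite differently for load balance as the active submatrix shrinks during elimination. A sketch of the two owner functions (hypothetical helper names):

    /* Hypothetical helpers: which rank owns global row i under each scheme. */

    /* Contiguous row blocks: simple indexing, but ranks owning the top
     * rows fall idle as elimination moves down the matrix. */
    int owner_block(int i, int n, int nprocs)
    {
        int rows_per_rank = (n + nprocs - 1) / nprocs;  /* ceiling division */
        return i / rows_per_rank;
    }

    /* Round-robin (cyclic) rows: every rank keeps roughly equal work
     * through the whole elimination, at the cost of strided ownership. */
    int owner_cyclic(int i, int nprocs)
    {
        return i % nprocs;
    }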

As mentioned earlier, you can run MPI programs on a single shared-memory multiprocessor. In such a case, please compare the parallel performance of your MPI implementation with the pthreads implementation you developed in assignment #1. You should analyze the result of this comparison---e.g., which version provides better parallel speedup and why?

Turn-in:
As in the first assignment, your grade will be based on your written report. Name your report report.pdf. Below are the detailed turn-in requirements.

For Part I (Successive Over Relaxation), you should electronically turn in a report. Please indicate the specific machines you used in your measurement. The report should include a speedup graph of your measured performance results. It should also include an analysis of the results as specified above.

For Part II (Gaussian Elimination), you should electronically turn in the source code (Makefile included) of your program. Your program should take just one command-line parameter: the input matrix. The number of MPI processes will be specified as a standard mpirun parameter. You should also electronically turn in a report. The report should explain your parallelization strategy---specifically, how data partitioning and communication are managed. The report should then provide performance/speedup plots for all test cases---six input matrices on at least two different platforms. Do not forget to report the comparison results specified above; for the comparisons, you only need to provide results on one parallel platform. You should also provide appropriate analysis of all these results.

Grading:

Late turn-in policy:
Late turn-ins will be accepted for up to five days, with a 5% penalty for each late day. No turn-ins more than five days late will be accepted.