Assignment #4 - Parallel Programming in MPI

Assignment due by 11:59pm, Friday, March 14.

You'll earn a 5% bonus if you make your last (and therefore final) turn-in by 11:59pm, Wednesday, March 12. You'll earn a 10% bonus if you make your last turn-in by 11:59pm, Monday, March 10.

Assignment overview:

In this assignment, you will develop a distributed-memory parallel program in MPI. Additionally, you will study how its performance is affected by your parallelization strategy and by the parallel platform.

You will likely need online MPI references for this assignment. The MPI forum contains some references you may find useful.

Development and testing platforms:

You will be using the set of machines described on the Experiment machines page of the course web site. Appropriate MPI versions have been installed on these machines. Please check the MPI instructions pages for information and instructions on using MPI.

For load management (and load balance in particular), you will usually prefer to run your program on a set of equivalent processors. Since the processors in different groups of experiment machines are quite different, we recommend that each run include machines from only one group. For instance, in one run you can use 8 processors from node17/node18/node20/node21; in another run, you can use 16 processors from node2x12x1a. In the latter case, you may find it odd that all your MPI processes run on a single shared-memory multiprocessor. Yes, this is allowed!
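For example, one way to pin a run to a single machine group is a hostfile. The sketch below assumes an Open MPI-style mpirun and two processor slots per machine; the file name is hypothetical, and the exact flags and slot counts for our installed MPI versions may differ, so check the MPI instructions pages:

    # group1.hosts -- hypothetical file name; one line per machine
    node17 slots=2
    node18 slots=2
    node20 slots=2
    node21 slots=2

    mpirun --hostfile group1.hosts -np 8 ./sor <input>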

Part I-- Run/Measure a Provided MPI Program:

As preparation, your first task is to run a sample program (Successive Over-Relaxation, or SOR) that we provide to you. The MPI parallel version is available at /u/cs458/apps/sor/mpi/. Please choose a parallel platform and generate performance results for the MPI version over a range of processor counts (including at least 1, 2, 4, 8, and 16). You should also provide a brief analysis of the results---e.g., how good is the parallel speedup, and why?
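When measuring, remember that speedup on p processors is T(1)/T(p), where T(1) is the one-processor run time. One common way to obtain comparable times (a sketch, not code taken from the provided program) is to bracket the computation with a barrier and MPI_Wtime, then report the slowest rank's elapsed time:

    /* Sketch: timing the parallel computation.  compute() is a
       hypothetical placeholder for the real work (e.g., the SOR
       iteration loop). */
    #include <mpi.h>
    #include <stdio.h>

    void timed_run(void)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);          /* start ranks together */
        double start = MPI_Wtime();

        /* compute();  -- the actual work being measured */

        double local = MPI_Wtime() - start;
        double elapsed;
        /* the slowest rank determines the observed run time */
        MPI_Reduce(&local, &elapsed, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("elapsed time: %f seconds\n", elapsed);
    }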

Part II-- Develop/Measure a New MPI Program:

You are asked to develop a parallel Gaussian Elimination (with partial pivoting) program using MPI. A sequential version of Gaussian Elimination is provided to you at /u/cs458/apps/gauss/seq/. More information about Gaussian Elimination and the provided sequential program can be found in the description for assignment #2.
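For reference, the core of sequential Gaussian elimination with partial pivoting looks roughly like the following sketch (the provided sequential code may differ in data layout and details, and back substitution is omitted here):

    #include <math.h>

    /* Sketch: Gaussian elimination with partial pivoting on an n x n
       matrix a with right-hand side b.  The provided sequential
       program may organize this differently. */
    void gauss_seq(int n, double a[n][n], double b[n])
    {
        for (int k = 0; k < n; k++) {
            /* partial pivoting: pick the row with the largest |a[i][k]| */
            int pivot = k;
            for (int i = k + 1; i < n; i++)
                if (fabs(a[i][k]) > fabs(a[pivot][k]))
                    pivot = i;
            if (pivot != k) {             /* swap rows k and pivot */
                for (int j = 0; j < n; j++) {
                    double t = a[k][j]; a[k][j] = a[pivot][j]; a[pivot][j] = t;
                }
                double t = b[k]; b[k] = b[pivot]; b[pivot] = t;
            }
            /* eliminate column k from every row below the pivot row */
            for (int i = k + 1; i < n; i++) {
                double m = a[i][k] / a[k][k];
                for (int j = k; j < n; j++)
                    a[i][j] -= m * a[k][j];
                b[i] -= m * b[k];
            }
        }
    }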

After developing and testing your program, you should measure its performance/speedup with the input matrices on at least two different parallel platforms, using at least 16 processors. You should analyze the performance results. In addition, I expect you to try multiple ways of realizing parallel Gaussian Elimination (in terms of data partitioning, communication management, or anything else). You need to provide a comparison of at least two alternative approaches (pick the comparison from which you learned the most) and analyze their performance.
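To make the design space concrete, here is a sketch of one possible approach (not the required solution): under a row-cyclic distribution, rank r owns the global rows i with i % p == r. At each step, every rank finds its local pivot candidate, all ranks agree on the global pivot via MPI_Allreduce with MPI_MAXLOC, the pivot row's owner broadcasts that row, and every rank then eliminates from the rows it owns. The local pivot search, the (logical) row swap, and the elimination loops are elided as comments:

    #include <mpi.h>

    /* Sketch: the per-step communication of a row-cyclic parallel
       Gaussian elimination.  pivot_row is a caller-supplied scratch
       buffer of length n; the elided steps are marked in comments. */
    void pivot_and_broadcast(int k, int n, double *pivot_row, MPI_Comm comm)
    {
        int rank, p;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &p);

        struct { double val; int row; } lmax = { 0.0, -1 }, gmax;

        /* 1. Scan the locally owned rows i >= k (i % p == rank) for the
              largest |a[i][k]|; record it in lmax.val / lmax.row. */

        /* 2. Agree on the global pivot (MAXLOC carries the row index). */
        MPI_Allreduce(&lmax, &gmax, 1, MPI_DOUBLE_INT, MPI_MAXLOC, comm);

        /* 3. The owner (gmax.row % p under the cyclic distribution)
              copies its pivot row into pivot_row, handles the row swap
              (e.g., via a permutation vector), then broadcasts it. */
        MPI_Bcast(pivot_row, n, MPI_DOUBLE, gmax.row % p, comm);

        /* 4. Every rank eliminates column k from its own rows below
              row k using pivot_row, as in the sequential inner loops. */
    }

A natural alternative is a contiguous block distribution, which simplifies indexing but loses load balance as the elimination front moves past the lower-numbered blocks; the cyclic distribution trades extra bookkeeping for better balance. Comparing such alternatives is exactly the kind of study asked for above.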

As mentioned earlier, you can run MPI programs on a single shared-memory multiprocessor. In such a case, please compare the parallel performance of your MPI implementation with that of the pthreads implementation you developed in assignment #2. You should analyze the result of this comparison---e.g., which version provides better parallel speedup, and why?

Turn-in:

Your grade will be assigned based on your written report. Name your report report.pdf. Below are detailed turn-in requirements.

For Part I (Successive Over Relaxation), your result should be provided in your submitted report. Please indicate the specific machines you used in your measurement. Your report should include a speedup graph of your measured performance results. It should also include an analysis of the results as specified above.

For Part II (Gaussian Elimination), your result should be provided in your submitted report. The report should explain your parallelization strategy---specifically, how data partitioning and communication are managed. The report should then provide performance/speedup plots for all test cases---six input matrices on at least two different platforms. Do not forget to report the comparison results specified above. For the comparisons, you only need to provide results on one parallel platform. You should also provide appropriate analysis of all these results. By "analysis", we mean that you should interpret the results and try to make sense of why you observe the performance you do, taking into account all the parameters you consider relevant (e.g., cache sizes, input sizes, data layout, locality, communication overheads, etc.).

You should also electronically turn in the source code (Makefile included) of your program. Your program should take just one command-line parameter: the input matrix file. The number of MPI processes will be specified as a standard mpirun parameter.
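For example, a 16-process run might look like the following (the executable and matrix file names here are hypothetical, and mpirun flags may vary across the installed MPI versions):

    mpirun -np 16 ./gauss matrix.dat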

Grading:

Late turn-in policy:

Late turn-ins will be accepted for up to three days, with a 10% penalty for each late day. No turn-in more than three days late will be accepted.