CSC 2/458:  Parallel and Distributed Systems

Spring 2017.

Assignment 3:  Parallelization

For this assignment you are to explore various shared-memory parallelizations of a Gauss-Jordan solver for a system of linear equations.  We are providing sequential code as a starting point.  As a command-line argument, this code expects the name of a file in which to find an NxN matrix.  The expected matrix format is documented in the source code.  Small and large examples can be found in files matrix_10.dat and matrix_2000.dat.  For ease of debugging, the provided code chooses a right-hand-side vector for the system of equations such that [1, 2, 3, ..., N] will constitute a correct solution. 
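Because the expected solution is known, a small check at the end of the program can catch parallelization bugs early.  A minimal sketch, assuming the computed solution is held in an array x of length N (the function name, array name, and 1e-6 tolerance are illustrative assumptions, not taken from the provided code):

    #include <math.h>
    #include <stdio.h>

    /* With the provided right-hand side, x[i] should be approximately i + 1. */
    static int solution_ok(const double *x, int n) {
        for (int i = 0; i < n; i++)
            if (fabs(x[i] - (double)(i + 1)) > 1e-6) {
                fprintf(stderr, "mismatch at index %d: %g\n", i, x[i]);
                return 0;
            }
        return 1;
    }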

Your task is to parallelize the sequential code using OpenMP and Cilk.  For the OpenMP solution, you should create variants that assign cyclic, blocked, and dynamically allocated tasks to worker threads.  As discussed in class, blocked allocation gives each thread a set of adjacent rows.  This strategy helps to maximize spatial locality for certain classes of problems, but may not help you much in Gaussian elimination, and it suffers from potential load imbalance.  The cyclic strategy gives thread i rows kt+i, for k = 0, 1, 2, ..., where t is the number of threads.  It may give you better load balance than blocked allocation, though it can hurt in problems in which threads need to access adjacent rows.  Dynamic allocation balances load automatically, but incurs higher bookkeeping overhead.  The Cilk version of your code will also employ a dynamic strategy, by virtue of the natural behavior of the language's run-time system. 
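One way to obtain the three OpenMP variants is to keep the per-pivot row-update loop as the parallel region and vary only the schedule clause.  The following is a sketch under assumed names (A as an N x (N+1) augmented matrix with the right-hand side in column N, p as the current pivot index); it is not the structure of the provided code, and the pivot selection is assumed to remain sequential:

    #include <omp.h>

    /* One elimination step for pivot p.  The schedule clause selects the
       work assignment:
         schedule(static)     -- blocked: each thread gets a contiguous chunk of rows
         schedule(static, 1)  -- cyclic:  thread i gets rows i, t+i, 2t+i, ...
         schedule(dynamic)    -- dynamic: rows handed out on demand            */
    static void eliminate_step(double **A, int N, int p) {
        #pragma omp parallel for schedule(static, 1)
        for (int r = 0; r < N; r++) {
            if (r != p) {
                double m = A[r][p] / A[p][p];
                for (int c = p; c <= N; c++)
                    A[r][c] -= m * A[p][c];
            }
        }
    }

The Cilk version can use the same loop body under cilk_for; the runtime's work stealing then provides the dynamic assignment mentioned above:

    #include <cilk/cilk.h>

    /* Same update, with rows balanced by the Cilk Plus runtime. */
    static void eliminate_step_cilk(double **A, int N, int p) {
        cilk_for (int r = 0; r < N; r++) {
            if (r != p) {
                double m = A[r][p] / A[p][p];
                for (int c = p; c <= N; c++)
                    A[r][c] -= m * A[p][c];
            }
        }
    }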

For an introduction to OpenMP, see the tutorial from Lawrence Livermore National Lab.  For an introduction to Cilk, see the tutorial at CilkPlus.org.  For instructions on compiling and running both systems on local machines, watch for an announcement on Blackboard.

You will notice that the sequential code implements smart pivoting, which improves numerical stability by choosing at each step to pivot on the row whose entry in the pivot column has the largest magnitude.  Students in CSC 458 are required to preserve this pivoting in their parallel code; students in CSC 258 may do so for extra credit, but are also permitted to drop it and always pivot using the next row in order (halting if that row has a zero in the pivot position).
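For reference, the pivot selection amounts to the following sketch (again with illustrative names matching the earlier snippet); in the parallel versions this search and the subsequent row swap still have to complete before the row updates for that pivot step begin:

    #include <math.h>

    /* Partial pivoting: among rows p..N-1, pick the row whose entry in
       column p has the largest magnitude, then swap it into position p.
       Assumes rows are separately allocated (double **), so swapping row
       pointers suffices; otherwise swap element by element. */
    static int choose_pivot(double **A, int N, int p) {
        int best = p;
        for (int r = p + 1; r < N; r++)
            if (fabs(A[r][p]) > fabs(A[best][p]))
                best = r;
        if (A[best][p] == 0.0)
            return -1;               /* no usable pivot; caller should halt */
        if (best != p) {
            double *tmp = A[p];
            A[p] = A[best];
            A[best] = tmp;
        }
        return best;
    }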

Your write-up (README.pdf file) should present speedup graphs for varying numbers of threads/workers for all four program variants.  (Feel free to use either the x86 or Power machines from the first two assignments; regardless of whether you’re in CSC 258 or CSC 458, you need only report results on a single architecture. [You are of course welcome to try both if you want.])  Your write-up should discuss and explain your results and what you learned.  What is the highest speedup you are able to attain? 
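If no other convention has been given in the course, a reasonable one (an assumption here) is to compute speedup on t threads/workers as the run time of the original sequential program divided by the run time of the parallel version, i.e., speedup(t) = T_sequential / T_parallel(t), plotted against t for each variant.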

As in past assignments, follow the turnin directions.  Questions can be posted to the discussion board.  We will be grading the assignment on a mix of completeness and correctness, programming style, and quality of write-up, with an emphasis on the last of these. 

As in the past two assignments, be sure to leave plenty of time for experiments and performance debugging.  If you wait too long, you won’t be able to get stand-alone time on the multicore machines. 

Due Date: Friday, March 10, 5:00 pm.


Last Change: 23 February 2017 / Michael Scott