CSC 2/458: Parallel and Distributed Systems

Spring 2019.

Assignment 4: Single-Source Shortest Paths

Your task in this assignment is to parallelize an existing sequential program that finds the shortest paths to all vertices from a distinguished source vertex in a graph.  This is the single-source shortest path (SSSP) problem.  You will solve the problem in both MPI and Chapel.  Documentation for MPI can be found at Open-MPI.  Documentation for Chapel can be found at https://chapel-lang.org/.  In contrast to previous assignments, you may choose to work on this one in pairs.  If you do, it will probably be easiest for one person to write the MPI version and the other the Chapel version, but you will find it helpful to collaborate on algorithmic details, and you will need to turn in a shared README.pdf file with combined performance results.  Students in CSC 258 and CSC 458 will do the same work on this assignment:  there is no extra assignment for graduate students. 

The best-known SSSP algorithm is due to Dijkstra, and runs in O((m + n) log n) time, where n is the number of vertices and m is the number of edges (0 ≤ m ≤ n(n-1)/2).  As you may recall from a data structures or algorithms class, Dijkstra’s algorithm maintains a priority queue of vertices, ordered by the length of their currently-best-known paths to the source.  Initially, the source itself has path length 0 and every other vertex has path length ∞.  At each step, we remove the vertex v at the head of the queue, whose path is now known to be as short as possible.  For each neighbor w of v, we then look to see whether reaching the source through v would be better than using w’s currently-best-known path; if so, we update w’s information and reposition it in the priority queue.  This repositioning operation is known as relaxing w.
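
For reference, here is a minimal sequential sketch in C++ (the language you will use for the MPI version).  The graph representation, the type names, and the lazy-deletion trick (pushing a fresh queue entry instead of repositioning the old one) are our own illustrative choices, not a description of the provided Chapel code.

    // Illustrative sequential Dijkstra, C++17.  Adjacency lists, non-negative weights.
    #include <cstdint>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    using Weight = std::uint64_t;
    struct Edge { int to; Weight w; };
    const Weight INF = std::numeric_limits<Weight>::max();

    std::vector<Weight> dijkstra(const std::vector<std::vector<Edge>>& adj, int source) {
        std::vector<Weight> dist(adj.size(), INF);
        using Item = std::pair<Weight, int>;              // (path length, vertex)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
        dist[source] = 0;
        pq.push({0, source});
        while (!pq.empty()) {
            auto [d, v] = pq.top();
            pq.pop();
            if (d > dist[v]) continue;                    // stale entry; v was improved already
            for (const Edge& e : adj[v]) {                // relax each neighbor of v
                if (dist[v] + e.w < dist[e.to]) {
                    dist[e.to] = dist[v] + e.w;
                    pq.push({dist[e.to], e.to});          // "reposition" by pushing a new entry
                }
            }
        }
        return dist;
    }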

Unfortunately, Dijkstra’s algorithm doesn’t parallelize very well.  It’s what is known as a label-setting algorithm: we select an edge only when we’re sure it constitutes part of the shortest-path tree, and we never change our mind.  The priority queue, which allows us to make correct decisions, serves as a central bottleneck.  It turns out that there are label-correcting algorithms with comparable sequential performance that are more amenable to parallelization.  These algorithms admit the possibility of de-selecting an edge when a better route is found; this ability to change means that threads can work in parallel on the basis of incomplete local knowledge, confident that any mistakes they make will be corrected later. 

The best-known label-correcting SSSP algorithm is Δ-stepping, due to Meyer and Sanders [J. Algorithms, 2003].  Instead of a strict priority queue, it maintains an array of buckets, where bucket i holds vertices whose currently-best-known paths have lengths i × Δ ≤ l < (i+1) × Δ.  Initially only the source is present in the array, in bucket 0.  At each step of the algorithm, we consider the first non-empty bucket B.  For each vertex v in B, we relax each neighbor w that is connected to v by an edge of weight ≤ Δ; that is, we check to see whether connecting through v would improve w’s path to the source, and if so, we move w to the appropriate new bucket, which might be B or one of its successors.  Because we might move w into B, we iterate until B is empty; then, for every vertex v that we removed from B, we relax all the neighbors connected to v by edges of weight > Δ.  Finally, we move on to the next non-empty bucket.  As it turns out, given a maximum edge weight W, we can be sure that all vertices currently in the array will live within W/Δ buckets of each other.  If we create a few more buckets than this, we can safely use the array in a circular fashion; we don’t actually need max-path-length/Δ of them. 
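
Below is a rough sequential sketch of the same idea in C++, reusing the Edge, Weight, and INF definitions from the Dijkstra sketch above.  It is our own simplification, not the provided code: it grows a flat vector of buckets instead of recycling a circular array, and it leaves stale entries behind rather than physically moving vertices between buckets.

    // Illustrative sequential Delta-stepping, C++17.  Assumes delta > 0.
    #include <cstddef>
    #include <vector>

    std::vector<Weight> delta_stepping(const std::vector<std::vector<Edge>>& adj,
                                       int source, Weight delta) {
        std::vector<Weight> dist(adj.size(), INF);
        std::vector<std::vector<int>> buckets(1);      // growable; a circular array also works

        auto relax = [&](int w, Weight d) {            // maybe improve w's path to length d
            if (d < dist[w]) {
                dist[w] = d;
                std::size_t b = d / delta;             // index of w's new bucket
                if (b >= buckets.size()) buckets.resize(b + 1);
                buckets[b].push_back(w);               // old copies of w become stale
            }
        };

        relax(source, 0);
        for (std::size_t i = 0; i < buckets.size(); ++i) {
            std::vector<int> settled;                  // vertices removed from bucket i
            while (!buckets[i].empty()) {              // re-insertions keep us in bucket i
                std::vector<int> frontier;
                frontier.swap(buckets[i]);
                for (int v : frontier) {
                    if (dist[v] / delta != i) continue;       // stale entry; skip it
                    settled.push_back(v);
                    for (const Edge& e : adj[v])              // light edges now
                        if (e.w <= delta) relax(e.to, dist[v] + e.w);
                }
            }
            for (int v : settled)                             // heavy edges once bucket i is empty
                for (const Edge& e : adj[v])
                    if (e.w > delta) relax(e.to, dist[v] + e.w);
        }
        return dist;
    }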

Over time, w’s preferred edge back to the source may be updated several times.  Crucially, however, iteration over the neighbors w of v can be done in parallel for all vertices v in B.  This is the source of potential parallelism.  Harvesting that parallelism isn’t trivial: threads must always work on the same bucket at the same point in time, and must synchronize their operations.  While there are many ways to parallelize Δ-stepping, the following is perhaps the easiest.  We statically partition the n vertices so that each of our t threads is responsible for approximately n/t of them.  Each thread then maintains its own, separate array of buckets.  We use a barrier to make sure that each thread works on its ith bucket during step i (or, perhaps, on each phase of step i).  This strategy not only exposes parallel work, it also lends itself nicely to a distributed implementation.  When thread j discovers an opportunity to relax vertex w, which belongs to thread k, it does not do so directly; rather, it sends a message asking k to do so.  (For efficiency, j may “batch” together requests being sent to the same destination.)  We encourage you to study the provided Chapel code carefully.
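
To make the message-passing half of this concrete, the sketch below shows one way (ours, not the provided code’s) for an MPI rank to batch relaxation requests by destination rank and exchange them once per phase.  The Request struct, the owner-computes convention it implies, the byte-wise packing, and the use of MPI_Alltoall / MPI_Alltoallv are all illustrative assumptions; one side benefit is that the collectives themselves act as the barrier that keeps all ranks working on the same bucket.

    // Illustrative MPI exchange of batched relaxation requests -- not the provided code.
    // Assumes homogeneous nodes (same struct layout everywhere), so requests can be
    // shipped as raw bytes.
    #include <mpi.h>
    #include <cstring>
    #include <vector>

    struct Request { int vertex; unsigned long long dist; };   // "please relax vertex to dist"

    // outgoing[k] holds the requests this rank has accumulated for rank k during the
    // current phase.  Returns the requests other ranks sent to us and clears the
    // outgoing batches.  The two collectives also synchronize the ranks.
    std::vector<Request> exchange_requests(std::vector<std::vector<Request>>& outgoing,
                                           int nprocs) {
        std::vector<int> sendcounts(nprocs), recvcounts(nprocs);
        for (int k = 0; k < nprocs; ++k)
            sendcounts[k] = static_cast<int>(outgoing[k].size() * sizeof(Request));

        // First, everyone learns how many bytes it will receive from everyone else.
        MPI_Alltoall(sendcounts.data(), 1, MPI_INT,
                     recvcounts.data(), 1, MPI_INT, MPI_COMM_WORLD);

        std::vector<int> sdispl(nprocs, 0), rdispl(nprocs, 0);
        for (int k = 1; k < nprocs; ++k) {
            sdispl[k] = sdispl[k-1] + sendcounts[k-1];
            rdispl[k] = rdispl[k-1] + recvcounts[k-1];
        }
        int total_send = sdispl[nprocs-1] + sendcounts[nprocs-1];
        int total_recv = rdispl[nprocs-1] + recvcounts[nprocs-1];
        std::vector<char> sendbuf(total_send), recvbuf(total_recv);
        for (int k = 0; k < nprocs; ++k)
            if (sendcounts[k] > 0)
                std::memcpy(sendbuf.data() + sdispl[k], outgoing[k].data(), sendcounts[k]);

        // Then exchange the batches themselves.
        MPI_Alltoallv(sendbuf.data(), sendcounts.data(), sdispl.data(), MPI_BYTE,
                      recvbuf.data(), recvcounts.data(), rdispl.data(), MPI_BYTE,
                      MPI_COMM_WORLD);

        std::vector<Request> incoming(total_recv / sizeof(Request));
        if (total_recv > 0)
            std::memcpy(incoming.data(), recvbuf.data(), total_recv);
        for (auto& batch : outgoing) batch.clear();
        return incoming;
    }

In each phase, a rank would drain its current bucket locally, queue requests for vertices it does not own, call something like exchange_requests, apply the incoming requests to its own distance array and buckets, and then agree with the other ranks (for example, via an MPI_Allreduce) on whether the current bucket is globally empty.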

Resources

You will be running this assignment both on the node2x18a (shared-memory) machines and on a cluster of (somewhat dated) servers on either the cs or csug network.  The cluster will be running batch-processing software that will allow you to queue jobs for execution, with guaranteed exclusive access. 

You should be able to obtain significant speedup on the shared-memory machines, especially for larger graphs.  For very large graphs, you may be able to obtain speedups on the clusters as well, though we have not yet confirmed this with an “instructor’s version” of the code.  You will probably find that there is a point of diminishing returns, at which the overhead of creating and managing additional threads, together with the increasing cost of communication relative to actual work, outweighs any further performance improvement. 

As a starting point, we are giving you sequential code in Chapel for both Dijkstra’s algorithm and Δ-stepping, as well as a graph-generator tool.  You should find it straightforward to hand-translate the Chapel code into C++ for use with MPI.  To avoid potential problems with programs stuck in infinite loops, each run on the clusters will be time-limited.  Further details will be posted to the discussion board shortly. 

You are welcome to do development on a personal machine, but it may not be worth your while to install local versions of MPI and Chapel.  In any event, you will need to test your code and collect performance results on the department machines.  We’ll be providing some graphs to get you started on testing.  Note that you’ll need to use some very large examples for performance tests:  sequential code on my wimpy laptop can solve a million-vertex graph in less than 6 seconds. 

Writeup

In addition to parallelizing the code and describing what you did, you must evaluate the success of your parallelization.  Plot speedup as a function of the number of threads, for values ranging from 1 to the number of hyperthreads on the shared-memory machine and, separately, on the cluster.  (As in previous assignments, you do not necessarily have to plot every possible thread count.  You might try powers of 2 and the halfway points in between: 1, 2, 3, 4, 6, 8, 12, 16, ....)  As the baseline of your speedup calculation, use the run time of the provided versions of Dijkstra’s algorithm or Δ-stepping, whichever is faster.  Discuss your results and the nature of any bottlenecks you encounter.  As in previous assignments, we will be grading the assignment on a mix of completeness and correctness, programming style, and quality of write-up.  To turn in your code, follow the turnin directions.  If you have questions, post to the discussion board.
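
In other words (our notation, not a requirement of the assignment): speedup(t) = T_seq / T_par(t), where T_seq is the run time of the faster provided sequential code on a given input graph and T_par(t) is the run time of your parallel code on the same graph with t threads.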

Extra credit suggestions

  1. The code we have given you uses a heuristic to set Δ.  Add a command-line argument to control the Δ choice, and experiment with its impact. 

  2. The parallelization strategy suggested above (partition vertices among threads, and send relaxation messages to owner threads) works well, with only modest amounts of synchronization, but it isn’t the only possible strategy.  Explore other possible parallelizations. 

  3. For execution on some fixed number of threads (even one!), plot run time as a function of the number of vertices.  Do you see any surprises?  Can you guess where the working set spills out of the various levels of cache? 

MAIN DUE DATE:

Sunday April 7, by end of day. 
Last Change: 05 April 2019 / Michael Scott