CSC 2/458: Parallel and Distributed Systems

Spring 2026

Assignment 3: Single-Source Shortest Paths

Your task in this assignment is to parallelize an existing sequential program that finds the shortest paths to all vertices from a distinguished source vertex in a graph.  This is the single-source shortest path (SSSP) problem.  You will solve the problem in C++ using MPI to communicate between processes.  458 students will also test and profile their code with processes running on separate machines, connected by an InfiniBand network.

The best-known SSSP algorithm is due to Dijkstra; with a binary-heap priority queue it runs in O((m + n) log n) time, where n is the number of vertices and m is the number of edges (0 ≤ m ≤ n(n−1)/2).  As you may recall from a data structures or algorithms class, Dijkstra’s algorithm maintains a priority queue of vertices, ordered by the length of their currently-best-known paths to the source.  Initially, the source itself has path length 0 and every other vertex has path length ∞.  At each step, we remove the vertex v at the head of the queue, whose path is now known to be as short as possible.  For each neighbor w of v, we then check whether reaching the source through v would be better than using w’s currently-best-known path; if so, we update w’s information and reposition it in the priority queue.  This repositioning operation is known as relaxing w.
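For reference, a compact version of the algorithm might look like the following.  This is a sketch of our own (C++17, assuming an adjacency-list graph type and using a std::priority_queue with lazy deletion in place of an updatable queue), not the provided starter code.

    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Hypothetical adjacency-list representation: adj[v] lists (neighbor, weight) pairs.
    using Graph = std::vector<std::vector<std::pair<int, double>>>;

    // Returns the length of the shortest path from src to every vertex.
    std::vector<double> dijkstra(const Graph& adj, int src) {
        const double INF = std::numeric_limits<double>::infinity();
        std::vector<double> dist(adj.size(), INF);
        // Min-heap of (distance, vertex); stale entries are skipped when popped,
        // which substitutes for repositioning a vertex within the queue.
        using Entry = std::pair<double, int>;
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
        dist[src] = 0.0;
        pq.push({0.0, src});
        while (!pq.empty()) {
            auto [d, v] = pq.top();
            pq.pop();
            if (d > dist[v]) continue;           // stale entry; v was already settled
            for (auto [w, weight] : adj[v]) {    // relax each neighbor of v
                if (dist[v] + weight < dist[w]) {
                    dist[w] = dist[v] + weight;
                    pq.push({dist[w], w});
                }
            }
        }
        return dist;
    }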

Unfortunately, Dijkstra’s algorithm doesn’t parallelize very well:  it is what is known as a label-setting algorithm, in which we select an edge only when we’re sure it constitutes part of the shortest-path tree, and we never change our mind.  The priority queue, which allows us to make correct decisions, serves as a central bottleneck.  It turns out that there are label-correcting algorithms with comparable sequential performance that are more amenable to parallelization.  These algorithms admit the possibility of de-selecting an edge when a better route is found; this ability to change means that threads can work in parallel on the basis of incomplete local knowledge, confident that any mistakes they make will be corrected later.

The best-known label-correcting SSSP algorithm is Δ-stepping, due to Meyer and Sanders [J. Algorithms, 2003].  Instead of a strict priority queue, it maintains an array of buckets, where bucket Bi holds vertices whose currently-best-known paths have lengths i × Δ ≤ l < (i+1) × Δ.  Initially only the source is present in the array, in bucket B0.  At each step of the algorithm, we consider the first non-empty bucket Bi.  For each vertex v in Bi, we relax each neighbor w that is connected to v by an edge e of weight ≤ Δ; that is, we check to see whether connecting through v will improve w’s path to the source.  If so, we place w in the appropriate bucket, which may be either Bi or Bi+1, depending on the weight of e and on how close v is to the end of its bucket range.  If w was already in some other bucket Bj, we remove it from Bj first, de-selecting the edge by which it had previously been connected.  Because we might move w into Bi, we iterate until Bi is empty.
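To make the bucket bookkeeping concrete, here is a minimal sketch of the relaxation step.  It is not the starter code’s interface: the container choices and the names (relax, bucket_of, delta) are ours, dist is assumed to start at infinity, and the bucket array is assumed to be used circularly, as described in the next paragraph.

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // Hypothetical data layout (the starter code organizes things differently):
    //   dist[v]      -- currently-best-known path length of v (infinity if unreached)
    //   buckets[k]   -- set of vertices whose tentative distance falls in bucket k
    //   bucket_of[v] -- index of the bucket currently holding v, or -1 if none

    void relax(int w, double new_dist,
               std::vector<double>& dist,
               std::vector<std::unordered_set<int>>& buckets,
               std::vector<int>& bucket_of,
               double delta) {
        if (new_dist >= dist[w]) return;     // no improvement: leave w where it is
        int new_b = static_cast<int>(static_cast<std::size_t>(new_dist / delta)
                                     % buckets.size());
        if (bucket_of[w] >= 0 && bucket_of[w] != new_b)
            buckets[bucket_of[w]].erase(w);  // de-select w's old edge: pull w from its old bucket
        buckets[new_b].insert(w);
        bucket_of[w] = new_b;
        dist[w] = new_dist;
    }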

At this point, we know that all vertices with paths to the root of length < (i+1) × Δ have been identified.  We now relax all vertices that were connected to v by edges of weight > Δ, for all vertices v that we removed from Bi above.  (These may end up in Bj for any j>i.)  Finally, we move to the next nonempty bucket and repeat.  As it turns out, given a maximum weight W for edges, we can be sure that all vertices currently in the array will live within W/Δ buckets of each other.  If we create a few more buckets than this, we can safely use the array in a circular fashion; we don’t actually need max-path-length/Δ of them. 
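Putting the two phases together, a sequential driver might look roughly like this.  Again, this is a hedged sketch under the same assumptions as the relax helper above (which it reuses, along with the adjacency-list Graph type from the Dijkstra sketch); the starter code is organized differently.

    // Sequential Δ-stepping: light edges (weight <= Δ) are relaxed repeatedly until
    // bucket i stops refilling; heavy edges (weight > Δ) are relaxed once afterward.
    // dist should be initialized to infinity and bucket_of to -1 before the call.
    void delta_stepping(const Graph& adj, int src, double delta,
                        std::vector<double>& dist,
                        std::vector<std::unordered_set<int>>& buckets,
                        std::vector<int>& bucket_of) {
        relax(src, 0.0, dist, buckets, bucket_of, delta);
        const std::size_t nb = buckets.size();   // a few more than W/Δ, used circularly
        auto any_nonempty = [&] {
            for (const auto& b : buckets) if (!b.empty()) return true;
            return false;
        };
        std::size_t i = 0;
        while (any_nonempty()) {
            if (buckets[i].empty()) { i = (i + 1) % nb; continue; }  // next non-empty bucket
            std::vector<int> removed;            // every vertex ever removed from bucket i
            while (!buckets[i].empty()) {        // light-edge phase: may refill bucket i
                std::vector<int> frontier(buckets[i].begin(), buckets[i].end());
                for (int v : frontier) { buckets[i].erase(v); bucket_of[v] = -1; }
                removed.insert(removed.end(), frontier.begin(), frontier.end());
                for (int v : frontier)
                    for (auto [w, wt] : adj[v])
                        if (wt <= delta)
                            relax(w, dist[v] + wt, dist, buckets, bucket_of, delta);
            }
            // Heavy-edge phase: bucket i is now final, so heavy edges are relaxed once.
            // (Duplicates in 'removed' are harmless; relax ignores non-improvements.)
            for (int v : removed)
                for (auto [w, wt] : adj[v])
                    if (wt > delta)
                        relax(w, dist[v] + wt, dist, buckets, bucket_of, delta);
            i = (i + 1) % nb;
        }
    }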

Over time, w’s preferred edge back to the source may be updated several times.  Crucially, however, iteration over the neighbors w of v can be done in parallel for all vertices v in Bi.  This is the source of potential parallelism.  Harvesting this parallelism isn’t trivial, however.  Threads must always work on the same bucket at the same point in time, and must synchronize their operations.  While there are many ways to parallelize Δ-stepping, the following is perhaps the easiest:  We statically partition the n vertices so that each of our t threads is responsible for approximately n/t of them.  Each thread then maintains its own, separate array of buckets.  We use a barrier to make sure that each thread works on its ith bucket during step i (and, if necessary, on each phase of step i).  This strategy not only exposes parallel work; it also lends itself nicely to a distributed implementation.  When thread p discovers an opportunity to relax vertex w, which belongs to thread q, it does not do so directly; rather, it sends a message asking q to do so.  (For efficiency, p may “batch” together requests being sent to the same destination.)
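To make the batching concrete, here is one way a rank might exchange its per-destination request batches each phase.  This is a hedged sketch, not a required design: the Request record, the exchange_requests helper, and the choice of MPI_Alltoallv are all ours, and the structs are shipped as raw bytes on the assumption of a homogeneous cluster.

    #include <mpi.h>
    #include <algorithm>
    #include <vector>

    // Hypothetical request record: "please consider new_dist as a path length for vertex w".
    struct Request { int w; double new_dist; };

    // Each rank fills outgoing[q] with the requests destined for rank q during the
    // current phase; the function returns every request addressed to this rank.
    std::vector<Request> exchange_requests(const std::vector<std::vector<Request>>& outgoing,
                                           MPI_Comm comm) {
        int nprocs;
        MPI_Comm_size(comm, &nprocs);

        // 1. Tell every rank how many bytes to expect from us.
        std::vector<int> send_counts(nprocs), recv_counts(nprocs);
        for (int q = 0; q < nprocs; ++q)
            send_counts[q] = static_cast<int>(outgoing[q].size() * sizeof(Request));
        MPI_Alltoall(send_counts.data(), 1, MPI_INT,
                     recv_counts.data(), 1, MPI_INT, comm);

        // 2. Flatten the outgoing batches and compute displacements.
        std::vector<int> sdispls(nprocs, 0), rdispls(nprocs, 0);
        for (int q = 1; q < nprocs; ++q) {
            sdispls[q] = sdispls[q - 1] + send_counts[q - 1];
            rdispls[q] = rdispls[q - 1] + recv_counts[q - 1];
        }
        int total_send = sdispls[nprocs - 1] + send_counts[nprocs - 1];
        int total_recv = rdispls[nprocs - 1] + recv_counts[nprocs - 1];
        std::vector<char> send_buf(total_send), recv_buf(total_recv);
        for (int q = 0; q < nprocs; ++q)
            if (send_counts[q] > 0)
                std::copy(reinterpret_cast<const char*>(outgoing[q].data()),
                          reinterpret_cast<const char*>(outgoing[q].data()) + send_counts[q],
                          send_buf.begin() + sdispls[q]);

        // 3. One collective call delivers every batch to its owner.
        MPI_Alltoallv(send_buf.data(), send_counts.data(), sdispls.data(), MPI_BYTE,
                      recv_buf.data(), recv_counts.data(), rdispls.data(), MPI_BYTE, comm);

        const Request* in = reinterpret_cast<const Request*>(recv_buf.data());
        return std::vector<Request>(in, in + total_recv / sizeof(Request));
    }

Each rank then applies the relaxations it receives to its own vertices.  An MPI_Allreduce (for example, a logical OR of “my bucket i is still non-empty,” or a MIN over each rank’s next non-empty bucket index) can play the role of the barrier that keeps all ranks working on the same bucket.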

As a starting point, we are providing you with sequential starter code for both Dijkstra's algorithm and Δ-stepping.  You will find that this code supports a variety of command-line arguments.  Feel free to add any additional arguments you find useful, but don’t change the ones that are already there (we want to be able to count on them for testing). 

When it begins execution, the starter code creates a random graph on which to test its algorithms.  For a given seed of the random number generator and a given geometry parameter, the graph is uniquely determined, and is guaranteed to have no self-loops and no two edges connecting the same pair of vertices.  (For convenience, the starter code identifies vertices using 2-D Cartesian coordinates.  The geometry parameter, which can be specified with a command-line argument, varies from 0 to 1.  At zero, every edge has a random weight; at one, edge weights are determined by Euclidean distance in the plane.  Either way, vertices are connected only to relatively nearby peers in the plane.)  If you run your code with the -w command-line argument, which causes it to dump a list of vertices and their distances from the source at the end of execution, you can check correctness easily: your Δ-stepping output should match the Dijkstra output exactly.

Resources

You’ll almost certainly want to access the official documentation for Open MPI, as well as instructions for accessing our local installation.  You may also find it helpful to consult an online tutorial, such as this one, from Lawrence Livermore National Lab.  NB: MPI does not have an official C++ API, but the C API can be called safely and efficiently from C++.
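If you haven’t used MPI from C++ before, the following toy program (our own example, not part of the starter code) shows the basic pattern:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);                   // start the MPI runtime

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     // this process's id, 0..nprocs-1
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   // total number of processes

        // Example collective: every rank contributes one value; all ranks get the sum.
        int local = rank, total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        std::printf("rank %d of %d: sum of ranks = %d\n", rank, nprocs, total);

        MPI_Finalize();                           // shut down cleanly
        return 0;
    }

A typical build-and-run sequence is mpic++ hello.cpp -o hello followed by mpirun -np 4 ./hello, though the exact steps for the department machines are covered in the installation instructions mentioned above.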

All students in the class will run this assignment on the node2x18a (shared-memory) machines (cs or csug network, as appropriate).  458 students will also run on a cluster of (somewhat dated) servers named ib-node01.cs.rochester.edu through ib-node06.cs.rochester.edu.  You are welcome to do development on a personal machine, but it may not be worth your while to install a local version of MPI.  In any event, you will need to test your code and collect performance results on the department machines. 

Beware of the potential to create orphan processes in this assignment.  Before ending a testing session, make sure nothing owned by you is still running on any of the machines you have been using. 

You should be able to obtain significant speedup on the shared-memory machines, especially for larger graphs.  For very large graphs, you may be able to obtain speedups on the InfiniBand cluster as well, though we have not yet confirmed this with an “instructor’s version” of the code.  Either way, you will probably find a point of diminishing returns, at which the overhead of creating and managing additional threads (and the increasing overhead of communication relative to actual work) outweighs any additional performance improvement.  Note that you’ll need to use some very large examples for performance tests:  sequential code on my M4 MacBook Air solves a million-vertex graph in about a sixth of a second.  If you suspect that there’s a bug in your code that is selecting incorrect paths, try testing on a very small graph and using the -e command-line switch to produce a log of selections and deselections.

Writeup

In addition to parallelizing the code and describing what you did, you must evaluate the success of your parallelization.  Plot speedup as a function of the number of threads, for values ranging from 1 to the number of hyperthreads on the shared-memory machine and, separately, on the cluster.  (As in previous assignments, you do not necessarily have to plot every possible thread count.  You might try powers of 2 and the halfway points in between: 1, 2, 3, 4, 6, 8, 12, 16, ....)  As the baseline of your speedup calculation, use the run time of the provided version of Dijkstra’s algorithm, not single-thread Δ-stepping.  (Single-thread Δ-stepping is almost certain to be slower than Dijkstra’s algorithm, so using it as the baseline would be unfair.)  Discuss your results and the nature of any bottlenecks you encountered.  As in previous assignments, we will be grading the assignment on a mix of completeness and correctness, programming style, and quality of write-up.  To turn in your code, follow the turnin directions.  If you have questions, post to the discussion board.

Extra credit suggestions

  1. The code we have given you uses a heuristic to set Δ.  Add a command-line argument to control the Δ choice, and experiment with its impact. 

  2. The parallelization strategy suggested above (partition vertices among threads, and send relaxation messages to owner threads) works well, with only modest amounts of synchronization, but it isn’t the only possible strategy.  Explore other possible parallelizations. 

  3. For execution on some fixed number of threads (even one!), plot run time as a function of the number of vertices.  Do you see any surprises?  Can you guess where the working set spills out of the various levels of cache? 

  4. The graphs generated by the starter code are fairly regular, with high diameter and roughly uniform vertex degree.  Learn about scale-free networks and small-world networks, and extend the graph-creation routines to experiment with these.

Due Date: Friday March 6, 5:00 pm. 

