Spring 2008.
For this assignment you are to write a parallel heuristic solver for the traveling salesman path (TSP) problem. Your solution should do a reasonable job of balancing (a) single-thread performance, (b) speedup, and (c) solution quality.
Your input will consist of (1) an integer n specifying the number of cities in the problem and (2) n subsequent lines, each containing the x and y coordinates of a city. You may assume that each coordinate is an integer randomly distributed in the range [0..10,000]. You should find a TSP solution on the complete graph of cities in 2-D Euclidean space.
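For example (a minimal sketch; the struct and function names below are ours, not part of the assignment), reading the input might look like this:

    #include <cstdio>
    #include <vector>

    struct City { int x, y; };

    // Reads the instance: first n, then n lines of "x y" coordinates.
    std::vector<City> read_cities(std::FILE *in)
    {
        int n = 0;
        std::fscanf(in, "%d", &n);
        std::vector<City> cities(n);
        for (int i = 0; i < n; ++i)
            std::fscanf(in, "%d %d", &cities[i].x, &cities[i].y);
        return cities;
    }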
Since finding an optimal TSP solution is NP-hard, you should apply some reasonable heuristic. As noted below, it is possible to find a path within a factor of 2 of the minimum length in polynomial time. You may or may not want to guarantee that bound for your solution; other approaches may lead to better speedup or better “typical” solutions.
This is a deliberately open-ended assignment. You are encouraged to experiment with multiple approaches to both path selection and parallelization; extra credit will be awarded for particularly thorough or creative approaches. Keep in mind that 258 and 458 are being graded on separate curves; graduate students should be particularly ambitious.
To facilitate debugging, we are providing a program to display solutions on the screen. It animates the addition and (optionally) deletion of edges from the display, and accepts run-time parameters to control the size and speed of that display. See the comment at the top of the Java source for more details.
Implement your solution using pthreads.
Working either alone or in a team of two, implement solutions using MPI and OpenMP. Run your MPI solutions on (and report results for) both sync/swym (the SunFire machines) and the Linux cluster. OpenMP runs on the shared-memory machines only; you are to report SunFire results. If you’re interested, you can also run 8-processor experiments on node4x2a (ask the TA for instructions).
If you work in teams, the natural strategy is for one partner to do MPI and the other OpenMP. Be sure to include a comparative discussion of the two systems in your README file. What was easy? What was hard?
Note that cluster nodes 33–72 are managed using the torque software package. You should not log into these machines directly. See the Cluster Manager Wiki for further details.
Implement a solution in C++ using the Rochester Software Transactional Memory system. To download the source code directly, without going through the registration system, CLICK HERE. RSTM compiles for Solaris/SPARC, Linux/x86, Windows/x86, and Mac OS X/x86. A version for PowerPC is in the works, but not available yet.
Try not to simply replace locks with transactions in your code. Rather, try to approach the problem with a fresh eye and try to invent the most “natural” solution you can. Discuss this process in your writeup: Did transactions make this problem any easier to solve? Why or why not? (Please try to distinguish between problems and limitations that are artifacts of working with an experimental, library-based system and those that may be inherent in the concept of transactions.)
You will find that RSTM supports a large number of “back-end” implementations. You should experiment with these and discuss their relative performance in your writeup. I particularly recommend compiling with the original RSTM back end (both eager and lazy), and with LLT (similar to TL2), RedoLock (similar to McRT), and CGL (base case for comparison). Keep in mind, however, that these back ends are not 100% semantically compatible (a point Mike Spear has noted).
As in assignment 1, follow the turnin directions. Contact the TA with any further questions. You can also post your question to the wiki page.
Be sure to include a README.pdf file that explains what you did and what you learned. It should present your performance results in graphical form. As in assignment 1, we will be grading the assignment on a roughly equal mixture of completeness and correctness, programming style, and quality of write-up.

The factor-of-two guarantee mentioned above comes from minimum spanning trees: build an MST of the cities and walk it in preorder, shortcutting repeated vertices; by the triangle inequality the resulting path is at most twice the length of the MST, which in turn is no longer than the optimal path. As you may recall, there are two widely used sequential MST algorithms. Kruskal’s algorithm maintains a set of union-find trees, each of which is originally a singleton node. It considers edges in order of increasing weight, adding each to the MST if it connects two previously unconnected components, and stopping when a single component remains. This algorithm takes O(m log n) time. Prim’s algorithm starts with an arbitrary node and a priority queue of its edges. It adds the shortest such edge to the MST and then considers the node at the other end of the edge. It adds to the priority queue all edges of that node that connect to nodes not already in the tree, and then repeats until all nodes have been connected. With a suitable implementation of the priority queue, this algorithm takes time O(m + n log n), which is better by a factor of log n, given that we’re considering a complete graph.
Both of these algorithms can be parallelized.
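To make this concrete, here is a sequential sketch (illustrative only; it reuses the City struct from the input sketch above, and every other name is ours) that builds the MST of the complete Euclidean graph with the dense, array-based form of Prim’s algorithm (O(n²), which matches O(m + n log n) when m is Θ(n²)) and then emits the preorder walk that yields the factor-of-two path:

    #include <cmath>
    #include <vector>

    static double dist(const City &a, const City &b)
    {
        double dx = a.x - b.x, dy = a.y - b.y;
        return std::sqrt(dx * dx + dy * dy);
    }

    // Dense Prim: O(n^2), appropriate for a complete graph.
    // Returns parent[i] = MST parent of city i (parent of the root is -1).
    std::vector<int> prim_mst(const std::vector<City> &c)
    {
        int n = (int)c.size();
        std::vector<int>    parent(n, -1);
        std::vector<double> best(n, 1e18);     // distance from each city to the tree
        std::vector<bool>   inTree(n, false);
        best[0] = 0.0;
        for (int iter = 0; iter < n; ++iter) {
            int u = -1;
            for (int v = 0; v < n; ++v)        // choose the closest non-tree city
                if (!inTree[v] && (u < 0 || best[v] < best[u])) u = v;
            inTree[u] = true;
            for (int v = 0; v < n; ++v)        // relax distances through u
                if (!inTree[v] && dist(c[u], c[v]) < best[v]) {
                    best[v]   = dist(c[u], c[v]);
                    parent[v] = u;
                }
        }
        return parent;
    }

    // Preorder walk of the MST.  Visiting the cities in this order gives a
    // path at most twice the optimal length, by the triangle inequality.
    void preorder(int u, const std::vector<std::vector<int> > &kids,
                  std::vector<int> &path)
    {
        path.push_back(u);
        for (size_t i = 0; i < kids[u].size(); ++i)
            preorder(kids[u][i], kids, path);
    }

    std::vector<int> two_approx_path(const std::vector<City> &c)
    {
        std::vector<int> parent = prim_mst(c);
        std::vector<std::vector<int> > kids(c.size());
        for (size_t v = 1; v < c.size(); ++v)
            kids[parent[v]].push_back((int)v);
        std::vector<int> path;
        preorder(0, kids, path);
        return path;                           // order in which to visit the cities
    }

Note that both inner loops of prim_mst scan all cities independently, so they parallelize naturally: each thread can handle a slice of the cities, with the partial minima combined afterward.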
Another alternative is to exploit the fact that nodes are randomly distributed in Euclidean space: allocate geometric regions to different threads, and then stitch the regions together. You might, for example, do this recursively, solving the upper-left triangle and the lower-right triangle, then connecting them at the upper-right and lower-left corners, or similarly with squares.
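As a very rough illustration of the region idea (names are ours, and the stitching step is deliberately naive), one might split the cities into vertical strips, solve each strip independently in parallel, and then join the strip paths end to end. This reuses the City struct and dist() helper from the sketches above:

    #include <algorithm>
    #include <vector>

    // Greedy nearest-neighbor path over one strip's cities (given by index).
    static std::vector<int> solve_strip(const std::vector<City> &c,
                                        std::vector<int> ids)
    {
        std::vector<int> path;
        if (ids.empty()) return path;
        path.push_back(ids.back());
        ids.pop_back();
        while (!ids.empty()) {
            size_t best = 0;
            for (size_t j = 1; j < ids.size(); ++j)
                if (dist(c[path.back()], c[ids[j]]) <
                    dist(c[path.back()], c[ids[best]]))
                    best = j;
            path.push_back(ids[best]);
            ids.erase(ids.begin() + best);
        }
        return path;
    }

    // Bucket cities into vertical strips by x coordinate, solve the strips in
    // parallel, and concatenate the resulting paths.  A real solution would
    // choose the join points carefully so the connecting edges stay short.
    std::vector<int> regional_path(const std::vector<City> &c, int strips)
    {
        std::vector<std::vector<int> > bucket(strips);
        for (size_t i = 0; i < c.size(); ++i)
            bucket[std::min(strips - 1, c[i].x * strips / 10001)].push_back((int)i);

        std::vector<std::vector<int> > piece(strips);
        #pragma omp parallel for
        for (int s = 0; s < strips; ++s)
            piece[s] = solve_strip(c, bucket[s]);

        std::vector<int> path;
        for (int s = 0; s < strips; ++s)
            path.insert(path.end(), piece[s].begin(), piece[s].end());
        return path;
    }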
Yet another alternative is to create an arbitrary path and then look, in parallel, for pairs of edges that can be “flipped” to yield a better solution. Note that this approach is complicated by the need to avoid breaking the path into two separate circuits: of the two possible ways to reconnect the flipped edges, one works; the other does not.
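A minimal sketch of one such flip, on a tour stored as an array of city indices (names are ours; City and dist() as above): for i < j, removing edges (t[i], t[i+1]) and (t[j], t[j+1]) and reconnecting as (t[i], t[j]) and (t[i+1], t[j+1]) keeps a single circuit and amounts to reversing the segment t[i+1..j]; the other reconnection is the one that splits the tour in two.

    #include <algorithm>
    #include <vector>

    // Attempt the 2-opt move on the edge pair (i, i+1) and (j, j+1), i < j.
    // The only legal reconnection reverses tour[i+1 .. j] in place; the other
    // pairing of endpoints would break the tour into two disjoint circuits.
    bool try_two_opt(std::vector<int> &tour, const std::vector<City> &c,
                     int i, int j)
    {
        int a = tour[i], b = tour[i + 1];
        int d = tour[j], e = tour[(j + 1) % tour.size()];
        double before = dist(c[a], c[b]) + dist(c[d], c[e]);
        double after  = dist(c[a], c[d]) + dist(c[b], c[e]);
        if (after >= before) return false;        // not an improvement
        std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
        return true;
    }

In a parallel implementation, two threads can apply flips concurrently as long as the segments they touch do not overlap; detecting and resolving such conflicts is exactly the kind of synchronization problem that locks (or, in the RSTM part, transactions) must handle.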
Compile with gcc -msupersparc -Wa,-xarch=v9 on sync. During debugging, use -g as well; for timing runs, use -O3 instead. Please follow the instructions specified in assignment 1 to set up the Makefile and compiler path.
To make your timing runs repeatable, you may want to bind each thread to a specific processor:
    processor_bind(P_LWPID, P_MYID, ProcessorIds[threadId], NULL);
where ProcessorIds is an array of processor numbers you’ve initialized (by hand) using information obtained from psrinfo -v (probably the numbers 0–7).
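A minimal sketch of the binding call in context (the headers are the standard Solaris ones; the specific processor numbers are an assumption, so check psrinfo -v on the machine you actually use):

    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>
    #include <cstdio>

    // Processor numbers as reported by psrinfo -v.  The values below are a
    // guess; fill them in by hand for the machine you run on.
    static processorid_t ProcessorIds[] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    // Call this near the top of each thread's start routine to pin the
    // calling LWP to one processor.
    void bind_self(int threadId)
    {
        if (processor_bind(P_LWPID, P_MYID, ProcessorIds[threadId], NULL) != 0)
            std::perror("processor_bind");
    }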
Run ps -Af before you log out to make sure you don’t leave any runaway processes behind.
You may also want to experiment with discovery. It has a working implementation of pthreads, but not of MPI or OpenMP. Note that it has a different processor architecture and a different (currently older) version of gcc.
