Your task in this assignment is to parallelize an existing implementation (in Java) of a simple “N-body” simulation. The code is suggestive of (but much more primitive than) the simulations used to study the dynamics of galaxies.
The code we are giving you opens a square display containing N yellow dots (stars, or maybe galaxies), and a series of buttons to control the simulation. When the simulation is running, the positions and velocities of the stars are repeatedly updated under the influence of gravitational attraction. Stars that escape the bounds of the display cause the view to “pan out” so you can continue to see everything. A statistics pane at the bottom tracks the number of update steps, elapsed simulation time, and the display scaling factor. By parallelizing the application (re-writing it to use more threads), you should be able to complete more steps in a given amount of time.
The program accepts a variety of start-up parameters, all specified as positive integers:
    -n stars         number of stars (default 500)
    -w pixels        number of pixels on each side of the display (default 800)
    -s seed          seed for the random number generator (default 0)
    -g gravity       scaling factor for gravitational attraction (default 100)
    -v velocity      scaling factor for initial velocity (default 100)
    -x displaystep   interval, in time steps, between screen updates (default 100)
    -l steplimit     number of time steps to simulate before stopping
                     (default 0 means “run indefinitely”)
Source code is in file
Nbody.java, which you can
view in, and save from, your browser.
Once you have created your own copy, you should compile it with
javac and run it in the
java virtual machine.
Experiment with various settings of gravity, velocity, and
displaystep to get a sense of how things work.
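For example (this assumes the main class is named Nbody, to match the
file name; the flag values are arbitrary choices):

    javac Nbody.java
    java Nbody -n 1000 -g 200 -v 50 -x 50
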
(You’ll probably notice that some seeds produce a configuration in
which certain stars “slingshot” around each other and take off
in opposite directions, causing the view to pan out dramatically.
The remaining stars, or all the stars in configurations without
slingshots, settle into a slowly expanding cloud [if they appear
stable, watch the scale indication at the bottom of the
display: it’s almost certainly increasing]. This behavior
very roughly illustrates an inflationary universe in which
initial velocities are high enough that gravity (which gets weaker as
galaxies fly apart) is not enough to reverse the expansion.)
You are to create two parallel versions of the code: one that uses
threads directly; the other that uses the Executor
mechanism (see the textbook for details).
The speedup you get will depend on your code, the machine on which
you run, and the load (number and behavior of concurrent users) on that
machine.
You are welcome to do development on your own computer, but
while Java implementations are very portable, you will need to
test your code and collect performance results on the csug machines.
You will be running this assignment on
node2x14a.csug.rochester.edu and
node2x18a.csug.rochester.edu.
Each of these machines has two processor chips. The smaller
machine has 14 cores per chip, the larger 18 cores per chip. Each
core has 2 hardware contexts (hyperthreads). This means
the machines can execute up to 56 or 72 threads in
parallel, respectively. You should find that your code runs faster with larger
numbers of threads, though various inefficiencies (thread creation
overhead, lack of available concurrency, and/or bus, memory, or ALU
contention) may prevent it from effectively using the whole machine.
Java graphics will be smoothest when running locally, but will also
work over ssh with X forwarding, and
you will need to work remotely, at least to some
degree, in order to collect timing results.
Note that node2x14a
and node2x18a are visible only inside the
csug firewall;
only cycle1, cycle2, and cycle3
are visible from outside.
If you start outside the firewall, you will first
need to ssh to one of the cycle machines and
then ssh from there to
node2x14a
or node2x18a.
To get the graphics to work, you must make your initial
connection from an XTerm window, and
specify the -X or -Y command-line
option. The -Y option assumes that both machines are
trusted and omits some checks; it will generally run faster.
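For example, starting from outside the firewall (the username is a
placeholder, and the fully-qualified cycle host name is an assumption
based on the node names above):

    ssh -Y your_netid@cycle1.csug.rochester.edu   # from your local XTerm
    ssh -Y node2x18a                              # then hop inside the firewall
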
Even with -Y, performance will be much more sluggish than on a local machine;
you’ll want to use a fairly large displaystep for remote
experiments.
As the due date approaches, we will reserve much of the time on
node2x18a for timing experiments, with a sign-up system that
allows you to obtain exclusive access to the machine
(node2x14a will remain available for development).
Note that you
will almost certainly not be able to get last-minute exclusive access, and
since results of timing experiments are required for full credit on the
assignment, you will need to plan to have your code ready for testing
ahead of the due date.
If you connect from a Mac and find that the graphics display
incorrectly under XQuartz, a known work-around is to disable the X
render extension by typing the following in a local terminal window:

    defaults write org.xquartz.X11 enable_render_extension 0

Then restart the XQuartz app. This will reduce the rendering speed
of X a bit, but should eliminate the problem.
The write-up requirements for this assignment are more extensive than they
have been for past assignments.
In addition to parallelizing the code and describing what you did, you
must evaluate the success of your parallelization. Using
node2x18a, for some well-chosen number of stars, the
default gravity and initial velocity, and a well-chosen
steplimit, create a graph
that plots execution time as a function of the number of threads, varying
that number from 1 to 64.
(You do not necessarily have to plot every possible thread
count—that would take a lot of experimentation time. Thread
counts of, say, 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, and 64 should
suffice.)
Also plot the speedup of your
code: the run time of the original (unmodified!) sequential version
divided by the run time of your parallel version. Ideally,
you’d see a speedup of k with k threads. How
close do you come? What bottleneck(s) keep you from doing
better? Do you see any differences between the thread and
Executor versions?
Hint: when collecting timing results, in addition to specifying a
large, concrete number of time steps (with the -l
steplimit command-line
option), you will also want to specify a single end-of-simulation-only
update of the display (-x displaystep
command-line option,
with displaystep = steplimit).
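For instance (the star and step counts here are arbitrary
illustrative choices):

    java Nbody -n 2000 -l 10000 -x 10000
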
As in previous assignments, you may work alone or in teams of two.
If you choose to work in pairs, one possible division of labor is for one
partner to write the thread-based version and the other to write the
Executor version.
If you do this, you’ll want to consult with one another frequently
to avoid duplication of effort.
Each time step of a naive (O(N²)) N-body simulation
is “embarrassingly parallel”: the update of each star
is independent of that of all the other stars.
A natural strategy, therefore, is to divide the stars among some
appropriate number of threads (probably one thread per core or
hyperthread, or some larger number of Executor tasks, with
one underlying thread per core or hyperthread, created using the
newFixedThreadPool factory) and then
let the threads run in parallel.
It is important, however, that threads complete time steps together: no
star should be updated in the next time step until all stars have been
updated in the current step.
In a thread-based solution, the Java CyclicBarrier provides
a natural way to coordinate.
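As a rough sketch (not a required structure: the class, the constants,
and the updateStar placeholder below are all hypothetical stand-ins for
the real simulation code), a thread-based version might coordinate
steps like this:

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    class BarrierSketch {
        // Hypothetical stand-ins for the real simulation state and update rule.
        static final int NUM_THREADS = 8, NUM_STARS = 500, STEP_LIMIT = 1000;
        static final double[] position = new double[NUM_STARS];

        static void updateStar(int i) { position[i] += 1.0; }  // placeholder physics

        public static void main(String[] args) throws InterruptedException {
            CyclicBarrier barrier = new CyclicBarrier(NUM_THREADS);
            Thread[] workers = new Thread[NUM_THREADS];
            for (int t = 0; t < NUM_THREADS; t++) {
                final int id = t;
                workers[t] = new Thread(() -> {
                    // Each worker owns a contiguous slice of the stars.
                    int lo = id * NUM_STARS / NUM_THREADS;
                    int hi = (id + 1) * NUM_STARS / NUM_THREADS;
                    try {
                        for (int step = 0; step < STEP_LIMIT; step++) {
                            for (int i = lo; i < hi; i++) {
                                updateStar(i);
                            }
                            barrier.await();  // no thread starts step+1 until all finish step
                        }
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                workers[t].start();
            }
            for (Thread w : workers) w.join();
        }
    }
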
In an Executor solution, you can refrain from creating new
tasks until all tasks of the previous step have been completed.
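Again as a hedged sketch (with the same hypothetical stand-ins as
above; NUM_TASKS is an arbitrary multiple of the thread count), the
Executor version might rely on invokeAll, which does not return until
every task in the collection has completed:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class ExecutorSketch {
        // Hypothetical stand-ins, as in the thread-based sketch above.
        static final int NUM_THREADS = 8, NUM_TASKS = 32;
        static final int NUM_STARS = 500, STEP_LIMIT = 1000;
        static final double[] position = new double[NUM_STARS];

        static void updateStar(int i) { position[i] += 1.0; }  // placeholder physics

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(NUM_THREADS);
            for (int step = 0; step < STEP_LIMIT; step++) {
                List<Callable<Void>> tasks = new ArrayList<>();
                for (int t = 0; t < NUM_TASKS; t++) {
                    final int lo = t * NUM_STARS / NUM_TASKS;
                    final int hi = (t + 1) * NUM_STARS / NUM_TASKS;
                    tasks.add(() -> {
                        for (int i = lo; i < hi; i++) updateStar(i);
                        return null;
                    });
                }
                // invokeAll blocks until all of this step's tasks have
                // completed, so no star is updated in step+1 early.
                pool.invokeAll(tasks);
            }
            pool.shutdown();
        }
    }
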
Be sure to follow all the rules on the Grading page. As with all assignments,
use the turn-in script:
~cs254/bin/TURN_IN. Put your write-up in a
README.pdf file
(README.txt or README.md will not suffice)
in the directory in
which you run the script. Be sure to describe any
features of your code that the TAs might not immediately notice.
Points will be awarded separately for the thread-based parallelization
and the Executor-based parallelization.
For each of these, half the points are for correctness, the other
half for speedup.
By 11:59pm on Sunday, Nov. 16, each student should complete the T4 trivia assignment found on Blackboard.
