CSC 2/458: Parallel and Distributed Systems

Scalable Synchronization Algorithms



Due Date: Midnight, Wednesday, February 26th, 2020.


The purpose of this assignment is to analyze the tradeoffs among different synchronization algorithms in terms of their latency, fairness, scalability, traffic, and storage requirements. The performance of these algorithms varies widely because of interactions among multiple factors, including the length of the critical section and the latency of communication. Your mission is to reason about the performance of these locks in terms of latency, scaling behavior, effects of preemption (i.e., saturated CPU resources), and fairness.
  1. Create a program that launches t threads and has them all work on a shared counter.  Try experiments where [1] the threads collectively increment the counter a total of i times (all threads contend for the counter, and execution terminates when the counter value reaches i), and [2] each thread increments the counter i times (for a total of t*i increments).

    Pseudocode for all of these locks (other than the first two) can be found in Chapter 4 of Shared-Memory Synchronization.  You will need to translate this pseudocode to C++.  Be sure to declare appropriate fields as atomic.  In addition, you should specify appropriate memory_order parameters on load, store, and read-modify-write operations.  For full credit, specify the minimum orderings necessary for correctness.

    Students in CSC 458 are additionally required to collect and analyze results on an IBM Power machine (CSC 258 students may do so for extra credit): node-ibm-822.csug. Using C++ will facilitate portability. You should develop and debug your code on an x86 machine first. Because of its more relaxed memory model, the IBM machine may expose bugs that are hidden on the x86.
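    As a point of departure, here is a minimal sketch (not a required structure) of experiment [2], in which each thread performs i increments of an atomic counter.  The relaxed ordering shown applies only to this bare-atomic version; the lock-based versions will need acquire/release ordering inside the lock code instead.

        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            const int t = 4;        // number of threads (parse from -t)
            const int i = 10000;    // increments per thread (parse from -i)
            std::atomic<long> counter{0};

            std::vector<std::thread> threads;
            for (int k = 0; k < t; ++k) {
                threads.emplace_back([&counter, i] {
                    for (int n = 0; n < i; ++n) {
                        // relaxed suffices here: no other data is published
                        // through the counter itself
                        counter.fetch_add(1, std::memory_order_relaxed);
                    }
                });
            }
            for (auto& th : threads) th.join();

            std::printf("final counter = %ld (expected %ld)\n",
                        counter.load(), (long) t * i);
            return 0;
        }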

  2. Change the loop so that instead of a single shared counter, a random location in an array of 1000 shared integers is incremented. Use a separate TAS lock for each location (a sketch follows below). Extra Credit: Compare this version to one that uses Intel's TSX - either speculative lock elision or transactional memory.
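    For this part, a test-and-set lock can be built directly on std::atomic<bool>.  The sketch below is only illustrative (the class and function names are not prescribed); it shows the acquire/release orderings a correct TAS lock needs and one way to guard each array slot with its own lock.

        #include <atomic>
        #include <random>

        class tas_lock {                       // illustrative name
            std::atomic<bool> held{false};
        public:
            void lock() {
                // acquire: the critical section cannot be reordered above this
                while (held.exchange(true, std::memory_order_acquire)) { /* spin */ }
            }
            void unlock() {
                // release: writes in the critical section become visible
                // to the next holder of the lock
                held.store(false, std::memory_order_release);
            }
        };

        constexpr int SLOTS = 1000;
        int counters[SLOTS];                   // ordinary ints, each guarded by a lock
        tas_lock locks[SLOTS];                 // one TAS lock per slot

        void worker(int iters, unsigned seed) {
            std::minstd_rand gen(seed);
            std::uniform_int_distribution<int> dist(0, SLOTS - 1);
            for (int n = 0; n < iters; ++n) {
                int j = dist(gen);             // pick a random slot
                locks[j].lock();
                ++counters[j];
                locks[j].unlock();
            }
        }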

To simplify testing of your code, please write your program to take the number of threads t and the number of iterations i as command-line arguments, specified with “-t t” and “-i i” (in either order).  If the arguments are not specified, use t = 4 and i = 10,000. 
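One reasonable way to handle these arguments is getopt, as sketched below; any equivalent parsing is fine.

    #include <unistd.h>   // getopt, optarg
    #include <cstdlib>

    int main(int argc, char *argv[]) {
        int t = 4;          // default thread count
        int i = 10000;      // default iteration count
        int opt;
        while ((opt = getopt(argc, argv, "t:i:")) != -1) {
            switch (opt) {
            case 't': t = std::atoi(optarg); break;
            case 'i': i = std::atoi(optarg); break;
            default:  break;   // ignore anything unexpected
            }
        }
        // ... launch t threads, each performing (or sharing) i increments ...
        return 0;
    }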

Try each option with varying numbers of threads, both greater and fewer than the number of processors in the machine; powers of two and values halfway in between (e.g., 1, 2, 4, 6, 8, 12, 16, 32, 48, 64, 96, 128) give sufficient coverage.  (To find out how many processors there are, examine /proc/cpuinfo on x86/Linux machines.)  Report final counter values, execution times, and, for the first experiment, the number of times each thread incremented the counter.  Try any other tests that occur to you.  Explain your results (in writing).

  • Consider pinning or binding your threads or processes to cores to determine the impact of placement across sockets and across cores within a socket. Use lscpu -e=cpu,core,socket to determine logical-to-physical mappings, and read the man pages for pthread_setaffinity_np (and related calls), sched_setaffinity, and taskset to understand how to "pin" your threads or process to a core (set its affinity): the pthreads calls work on individual pthreads, the sched calls on individual processes, and taskset is a command-line utility that can be applied to an existing process by pid or when a program is launched. A sketch using the pthreads interface follows.
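    As a sketch of the pthreads route (the round-robin placement below is only an example; choose placements deliberately based on the lscpu output):

        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE          // for cpu_set_t, CPU_SET, pthread_setaffinity_np
        #endif
        #include <pthread.h>
        #include <sched.h>
        #include <cstdio>
        #include <thread>
        #include <vector>

        void worker(int id) { /* your counter loop here */ }

        int main() {
            const int t = 4;
            const unsigned ncpus = std::thread::hardware_concurrency();
            std::vector<std::thread> threads;
            for (int k = 0; k < t; ++k) {
                threads.emplace_back(worker, k);

                cpu_set_t set;                 // pin thread k to logical CPU k mod ncpus
                CPU_ZERO(&set);
                CPU_SET(k % ncpus, &set);
                if (pthread_setaffinity_np(threads.back().native_handle(),
                                           sizeof(set), &set) != 0)
                    std::fprintf(stderr, "setaffinity failed for thread %d\n", k);
            }
            for (auto &th : threads) th.join();
            return 0;
        }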

    Be sure to include a README.pdf file that explains what you did and what you learned.  It should include your timing results and analysis of the various locks.  We will be grading the assignment on a roughly equal mixture of completeness and correctness; programming style; and quality of write-up. 

    Notes