CSC 2/458: Parallel and Distributed Systems                Mar. 2ff 2026
========================================
Scalable Synchronization

    an issue for all busy-wait synchronization
    will concentrate on locks, with briefer coverage of barriers

----------------------------------------
Centralized locks

    problems with test-and-set locks
        contention
        unfairness
    test-and-test-and-set limits contention to release time on
        cache-coherent machines
    bounded exponential backoff further reduces contention at release
        time
        doesn't _solve_ the problem, esp. on machines with larger
            numbers of fast cores

    class lock
        atomic<bool> f := false
        const int base := ...               // tuning parameters
        const int limit := ...
        const int multiplier := ...

    lock.acquire():
        int delay := base
        while f.TAS(∥)
            pause(delay)
            delay := min(delay × multiplier, limit)
        fence(R∥RW)

    lock.release():
        f.store(false, RW∥)

I'll skip the ordering constraints on future algorithms.  They serve
to ensure correct operation of BOTH the lock's internals AND the
protected critical section.

ticket locks

    fair, but still vulnerable to contention
    nice backoff option: the difference between my ticket and the
        posted number tells me the minimum wait time.

    class lock
        atomic<int> next_ticket := 0
        atomic<int> now_serving := 0
        const int base := ...               // tuning parameter

    lock.acquire():
        int my_ticket := next_ticket.FAI()  // returns old value;
                                            // arithmetic overflow is harmless
        loop
            int ns := now_serving.load()
            if ns == my_ticket
                break
            pause(base × (my_ticket - ns))
                // overflow in subtraction is harmless

    lock.release():
        int t := now_serving.load() + 1
        now_serving.store(t)

----------------------------------------
queue-based locks

    array-based (O(t) space per lock)
        Anderson -- FAA-based
        Graunke & Thakkar -- swap-based
    linked list-based (O(n+t) space for t threads and n locks)
        MCS -- swap- and (ideally) CAS-based
        CLH -- swap-based

MCS code

    type qnode = record
        atomic<qnode*> next
        atomic<bool> waiting

    class lock
        atomic<qnode*> tail := null

    lock.acquire(qnode* p):     // Initialization of waiting can be delayed
        p->next := null         // until the if statement below, but at
        p->waiting := true      // the cost of an extra W∥W fence.
        qnode* prev := tail.swap(p)
        if prev != null                     // queue was nonempty
            prev->next.store(p)
            while p->waiting.load();        // spin

    lock.release(qnode* p):
        qnode* succ := p->next.load()
        if succ == null                     // no known successor
            if tail.CAS(p, null)
                return
            repeat
                succ := p->next.load()
            until succ != null
        succ->waiting.store(false)

    // Parameter p points to a qnode record allocated (in an enclosing
    // scope) in shared memory locally accessible to the invoking
    // processor.
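For concreteness, here's the MCS lock in C11 -- my own sketch under
the interface above, not code from the paper.  The identifier names
(mcs_lock, mcs_acquire, ...) and the specific memory_order choices are
mine; a production version would also pad qnodes to cache-line size.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct mcs_node {
        struct mcs_node *_Atomic next;
        atomic_bool waiting;
    } mcs_node;

    typedef struct {
        mcs_node *_Atomic tail;             /* null when lock is free */
    } mcs_lock;

    void mcs_acquire(mcs_lock *L, mcs_node *p) {
        atomic_store_explicit(&p->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&p->waiting, true, memory_order_relaxed);
        mcs_node *prev =
            atomic_exchange_explicit(&L->tail, p, memory_order_acq_rel);
        if (prev != NULL) {                 /* queue was nonempty */
            atomic_store_explicit(&prev->next, p, memory_order_release);
            while (atomic_load_explicit(&p->waiting, memory_order_acquire))
                ;                           /* spin on my own node */
        }
    }

    void mcs_release(mcs_lock *L, mcs_node *p) {
        mcs_node *succ =
            atomic_load_explicit(&p->next, memory_order_acquire);
        if (succ == NULL) {                 /* no known successor */
            mcs_node *expected = p;
            if (atomic_compare_exchange_strong_explicit(
                    &L->tail, &expected, NULL,
                    memory_order_acq_rel, memory_order_acquire))
                return;                     /* queue truly empty */
            do {                            /* successor caught mid-enqueue */
                succ = atomic_load_explicit(&p->next, memory_order_acquire);
            } while (succ == NULL);
        }
        atomic_store_explicit(&succ->waiting, false, memory_order_release);
    }

The CAS in mcs_release is where "(ideally) CAS-based" comes in: it
lets release atomically confirm that no successor has begun to
enqueue.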
CLH code

    type qnode = record
        qnode* prev             // read and written only by owner thread
        atomic<bool> succ_must_wait

    class lock
        atomic<qnode*> tail := new qnode(null, false)
        ~lock():                // destructor
            delete tail

    lock.acquire(qnode* p):
        p->succ_must_wait.store(true)
        qnode* pred := p->prev := tail.swap(p)
        while pred->succ_must_wait.load();      // spin

    lock.release(qnode** pp):
        qnode* pred := (*pp)->prev
        (*pp)->succ_must_wait.store(false)
        *pp := pred                             // take pred's qnode

Hemlock [Dice & Kogan, 2021]

    Each thread has a single status word, shared across _all_ locks.

    type status = atomic<lock*>
    status ts[T] := { null ... }

    class lock
        atomic<status*> tail := null

    lock.acquire():
        status* pred := tail.swap(&ts[self])
        if pred != null
            while !pred->CAS(this, null);       // spin
            // while pred->load() != this;      // spin
            // pred->store(null)                // handshake

    lock.release():
        if !tail.CAS(&ts[self], null)
            ts[self].store(this)
            while ts[self].FAA(0) != null;      // spin
            // while ts[self].load() != null;
        // I'll be the next to write to ts[self], in my next release

    The authors call their counter-intuitive use of CAS and FAA
    "Coherence Traffic Reduction".  It forces a migratory pattern and
    avoids upgrade messages.

----------------------------------------
performance results

    see TOCS paper, Feb. 1991
    note, esp.,
    - Table II (p. 51): even w/ exp. backoff, remote spinning impacts
        other operations
    - difference between Fig. 20 and 21 reflects broadcast v. network
        interconnect
    - Figure 22 and Table III: don't build dance-hall machines!

Is HW synch. needed?

    prob. not for locks, though the constant factor can be recouped
        by, e.g., QOLB
        particularly valuable for small data structures: acquiring the
            lock can automatically grab data in the same cache line
    more important for barriers (e.g., on Crays, IBM BG series)
        asymptotic benefit

tradeoffs

    array-based locks require O(L*T) space for L locks and T threads;
        linked lists are O(L+T), which is optimal
    CLH requires cache coherence to avoid contention; MCS works on
        non-cache-coherent machines
    MCS has a potential spin in release
    CLH requires only swap; standard MCS requires CAS or LL/SC
        (see the sketch after this list)
    CLH requires a dummy node in an empty lock (can be modified to
        remove it, at the cost of an extra atomic op in release)
    CLH can be modified to work on NCC machines, with an extra level
        of indirection; MCS can be modified to eliminate the spin in
        release.  If you make these changes to both, you arrive at
        pretty much the same place (with dummy node and only swap).
    both MCS and CLH have non-standard interfaces; MCS can be modified
        (an IBM K42 innovation) to use the standard interface with no
        performance cost; CLH can also be modified, but requires a
        global array of T queue node pointers
    any fair lock dramatically exacerbates the preemption problem
        (below)
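To ground the comparison, here's a matching C11 sketch of the CLH
lock shown earlier, with the node-recycling interface from the
pseudocode: after release, the caller's node pointer refers to its
predecessor's (recycled) node.  As with the MCS sketch, names and
memory_order choices are mine.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct clh_node {
        struct clh_node *prev;      /* read and written only by owner */
        atomic_bool succ_must_wait;
    } clh_node;

    typedef struct {
        clh_node *_Atomic tail;     /* always points at some node */
    } clh_lock;

    void clh_init(clh_lock *L) {    /* install the dummy node */
        clh_node *dummy = malloc(sizeof *dummy);
        dummy->prev = NULL;
        atomic_store_explicit(&dummy->succ_must_wait, false,
                              memory_order_relaxed);
        atomic_store_explicit(&L->tail, dummy, memory_order_relaxed);
    }

    void clh_acquire(clh_lock *L, clh_node *p) {
        atomic_store_explicit(&p->succ_must_wait, true,
                              memory_order_relaxed);
        /* swap is the only atomic read-modify-write operation needed */
        clh_node *pred = p->prev =
            atomic_exchange_explicit(&L->tail, p, memory_order_acq_rel);
        while (atomic_load_explicit(&pred->succ_must_wait,
                                    memory_order_acquire))
            ;                       /* spin on predecessor's node */
    }

    void clh_release(clh_node **pp) {
        clh_node *pred = (*pp)->prev;
        atomic_store_explicit(&(*pp)->succ_must_wait, false,
                              memory_order_release);
        *pp = pred;                 /* take predecessor's node */
    }

Note that a thread spins on the node it got from its predecessor, not
on its own; that's why CLH needs cache coherence for local spinning,
as the tradeoff list above observes.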
K42 MCS:

    type qnode = record
        atomic<qnode*> tail
        atomic<qnode*> next

    // If threads are waiting for a held lock, next points to the
    // queue node of the first of them, and tail to the queue node of
    // the last.
    // A held lock with no waiters has value [&q, null].
    // A free lock with no waiters has value [null, null].

    const qnode* waiting := 1
    // In a real qnode, tail == null means the lock is free;
    // in the qnode that is a lock, tail is the real tail pointer.

    class lock
        qnode q := { null, null }

    lock.acquire():
        loop
            qnode* prev := q.tail.load()
            if prev == null                 // lock appears to be free
                if q.tail.CAS(null, &q)
                    break
            else
                qnode n := { waiting, null }
                if q.tail.CAS(prev, &n)     // we're in line
                    prev->next.store(&n)
                    while n.tail.load() == waiting;     // spin
                    // now we have the lock
                    qnode* succ := n.next.load()
                    if succ == null
                        q.next.store(null)
                        // try to make lock point at itself:
                        if !q.tail.CAS(&n, &q)
                            // somebody got into the timing window
                            repeat
                                succ := n.next.load()
                            until succ != null
                            q.next.store(succ)
                        break
                    else
                        q.next.store(succ)
                        break

    lock.release():
        qnode* succ := q.next.load()
        if succ == null
            if q.tail.CAS(&q, null)
                return
            repeat
                succ := q.next.load()
            until succ != null
        succ->tail.store(null)

Linux qspinlock:

    The Linux source code includes a generic MCS lock
    (https://elixir.bootlin.com/linux/v6.17.7/source/kernel/locking/mcs_spinlock.h).
    It also includes a derivative called the qspinlock, with a generic
    interface
    (https://elixir.bootlin.com/linux/v6.17.7/source/kernel/locking/qspinlock.c).
    Authors: Waiman Long and Peter Zijlstra.

    Where the K42 lock arranges for the head pointer in the lock to
    point at the first waiting qnode, so the lock holder can find it
    on release, threads in the Linux qspinlock queue wait on their own
    qnode until they're the first in line, at which point the lock
    holder tells them to start spinning on the lock itself.  As an
    optimization (driven by careful experiments counting cache
    misses), the first-arriving waiter doesn't use its queue node at
    all, but rather sets a separate "pending" bit in the lock.  Once
    the pending thread becomes the owner, the pending bit remains
    unused until the lock becomes quiescent again.  Subsequent
    (queued) waiters, once they're first in line, spin for both the
    locked and pending bits to clear.

    The code is rather inscrutable, but roughly equivalent to the
    following:

    type qnode = record
        atomic<qnode*> next
        atomic<bool> waiting

    class lock
        // packed into a single word; pieces separately accessible
        record whole
            record status
                atomic<bool> locked
                atomic<bool> pending
            atomic<qnode*> tail

    lock.acquire():
        loop
            lock w := whole.load()
            if w == <<false, false>, null>
                    and whole.CAS(w, <<true, false>, null>)
                return                      // fast path: lock was free
            if w == <<true, false>, null>
                    and whole.CAS(w, <<true, true>, null>)
                // I'm the first in line
                while locked.load();        // spin
                status.store(<true, false>)
                    // lock is mine; no one pending
                // if anyone has joined the queue behind me (or does
                // in the future), they'll know they're first in line
                return
            // else I need to join the queue
            qnode q := { null, true }
            qnode* pred := w.tail
            if w.status != <false, false>
                lock w2 := w
                w2.tail := &q
                if whole.CAS(w, w2)
                    if pred != null
                        // someone is in the explicit queue ahead of me
                        pred->next.store(&q)
                        while q.waiting.load();         // spin
                    // now I'm first in line
                    while status.load() != <false, false>;      // spin
                    locked.store(true)
                    if !tail.CAS(&q, null)
                        // someone is in line behind me
                        while q.next.load() == null;    // spin
                        q.next.load()->waiting.store(false)
                            // inform successor it's next in line
                    // my qnode is no longer needed
                    return
            // else try again

    lock.release():
        locked.store(false)

"K42" CLH
    NB: This has typos in early SMS2e copies.

    type qnode = record
        atomic<bool> succ_must_wait

    qnode* thread_qnode_ptrs[T] := { i in T : new qnode(false) }

    class lock
        atomic<qnode*> tail := new qnode(false)
        qnode* head                 // accessed only by lock owner
        ~lock():                    // destructor
            delete tail

    lock.acquire():
        qnode* p := thread_qnode_ptrs[self]
        p->succ_must_wait.store(true)
        qnode* pred := tail.swap(p)
        while pred->succ_must_wait.load();      // spin
        head := p
        thread_qnode_ptrs[self] := pred

    lock.release():
        head->succ_must_wait.store(false)

----------------------------------------
Cohort locks (Dice, Marathe, and Shavit [PPoPP 2012])

    Niagara machines: 8 cores, 8 threads each, 4 sockets = 256 contexts

    HBO lock: entirely probabilistic -- back off less when the lock
        holder is nearby; more if it is far away
    HCLH: every once in a while, merge node-local queue into global
        queue
        bounded unfairness
    Flat-Combining NUMA Locks: thread that merges into the global
        queue does more work on the local node, to build an MCS-style
        list

    Notion of "thread obliviousness" (required by the global lock):
        have to allow the lock to be acquired by thread i and released
        by some other thread j
    Notion of "cohort detection" (required by local locks): have to be
        able to tell whether any other local thread wants the lock

Asymmetric locking

    Peterson_lock PL
    general_lock GL

    acquire():
        if ~preferred_thread
            GL.acquire()
        PL.acquire()

    release():
        PL.release()
        if ~preferred_thread
            GL.release()

    ... but with fences.

    Can choose to make them _very_ asymmetric: modest benefit for the
    preferred thread; very high cost for non-preferred threads.
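Here's a sketch of the asymmetric composition in C11, assuming thread
0 is the (single) preferred thread.  Non-preferred threads serialize
on a general lock -- a pthreads mutex here -- and then compete as
Peterson "side 1"; the preferred thread uses only the Peterson lock.
The seq_cst defaults on the atomic operations supply the store-load
fence that Peterson's algorithm needs.  All names are mine.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <pthread.h>

    static atomic_bool interested[2];       /* Peterson lock PL */
    static atomic_int victim;
    static pthread_mutex_t GL = PTHREAD_MUTEX_INITIALIZER;

    static void peterson_acquire(int side) {
        atomic_store(&interested[side], true);
        atomic_store(&victim, side);        /* seq_cst: store-load fence */
        while (atomic_load(&interested[1 - side])
               && atomic_load(&victim) == side)
            ;                               /* spin */
    }

    static void peterson_release(int side) {
        atomic_store(&interested[side], false);
    }

    void asym_acquire(bool preferred) {     /* preferred only in thread 0 */
        if (!preferred)
            pthread_mutex_lock(&GL);        /* non-preferred serialize here */
        peterson_acquire(preferred ? 0 : 1);
    }

    void asym_release(bool preferred) {
        peterson_release(preferred ? 0 : 1);
        if (!preferred)
            pthread_mutex_unlock(&GL);
    }

The asymmetry is visible in the code: the preferred thread executes
only the two Peterson stores plus its spin loads, while every other
thread pays for the general lock as well.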
========================================
Barriers

log n time inescapable, though w/ HW support it may not seem so in
practice

sense-reversing centralized barrier

    class barrier
        atomic<int> count := 0
        const int n := |T|
        atomic<bool> sense := true
        bool local_sense[T] := { true ... }

    barrier.cycle():
        bool s := !local_sense[self]
        local_sense[self] := s          // each thread toggles its own sense
        if count.FAI() == n-1           // note release ordering
            count.store(0)
            sense.store(s)              // last thread toggles global sense
        else
            while sense.load() != s;    // spin

lots of log-time SW barriers

    combining tree [Tang & Yew]
        arrive at leaf
        FAI at each node; winner goes up

    dissemination barrier [Hensgen, Finkel, & Manber]
        notify node self + 1, wait for self - 1,
        notify self + 2, wait for self - 2,
        notify self + 4, wait for self - 4, ... mod |T|
        latency reduced by a factor of 2, but total traffic and space
            increased from Θ(n) to Θ(n log n)

    tournament barrier [Hensgen, Finkel, & Manber; Lubachevsky]
        like combining tree, but statically determined "winner" at
            each node
        needs fewer RMW instructions (expensive on some machines)

    static tree barrier [Mellor-Crummey & Scott]
        all nodes are arrival nodes -- not just leaves.
        wait for all children, then notify parent.
        can have arbitrary (and different) fan-in and fan-out
        trivially local spinning

All tree barriers (but not the dissemination barrier) can use a
broadcast wakeup flag on a CC machine.

in practice, use
    centralized when
        # of participants is modest
        # of participants isn't static
    dissemination when
        latency matters, bandwidth is plentiful, and you don't have
            cache coherence
        fuzziness isn't desired
    static tree OW

fuzzy barriers [Gupta]
    do work while waiting for others to arrive
    easy to do centralized
    possible to do with tree barriers [Scott & Mellor-Crummey]
        requires adaptation (borrowed from Gupta & Hill)
    but not with dissemination barrier

================================================================
preemption

    problem with all locks
    especially bad for fair locks

    solutions (or at least partial solutions)
        timeout (spin-then-yield, try-locks)
        spin-then-block
        don't preempt me (various variants)
            handshake when passing
            published time: allows one to _guess_ whether a peer is
                preempted
        don't preempt that other thread (wider kernel interface)
        yield processor to specific other thread (likewise)
        lock-free data structures

----------------
timeout

    trivial for TAS locks (see the sketch below)
    no obvious solution for ticket locks
    problematic for queue-based locks (how do I get out of line?)
    series of papers, culminating in HiPC 2005 -- addresses both
        timeout and preemption tolerance
        several variants, the most promising of which
        - marks deleted nodes (dynamically allocated), allowing
            successors to link them out of the queue
        - uses the _published time_ heuristic to detect preempted peers
    work from HP at PPoPP'17: HMCS-T
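To illustrate why timeout is trivial for TAS locks, here's a minimal
C11 try-lock sketch combining bounded exponential backoff,
spin-then-yield, and a deadline.  Constants and names are illustrative
only, not from any paper.

    #define _POSIX_C_SOURCE 199309L
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>
    #include <time.h>

    typedef struct {
        atomic_flag f;              /* initialize with ATOMIC_FLAG_INIT */
    } tas_lock;

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* returns true on acquisition, false on timeout */
    bool tas_try_acquire(tas_lock *L, double patience) {
        double deadline = now_sec() + patience;
        int backoff = 1;
        const int backoff_limit = 1 << 10;
        while (atomic_flag_test_and_set_explicit(&L->f,
                                                 memory_order_acquire)) {
            if (now_sec() >= deadline)
                return false;       /* timed out; nothing to clean up */
            if (backoff < backoff_limit) {
                for (volatile int i = 0; i < backoff; i++)
                    ;               /* back off in place */
                backoff *= 2;
            } else {
                sched_yield();      /* spin-then-yield */
            }
        }
        return true;
    }

    void tas_release(tas_lock *L) {
        atomic_flag_clear_explicit(&L->f, memory_order_release);
    }

Timing out is trivial precisely because a TAS waiter holds no queue
state: it can simply stop testing the flag and walk away -- exactly
what's hard in the queue-based locks above.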