CSC 2/458: Parallel and Distributed Systems
Feb. 6ff 2019

----------------
Scalable Synchronization

an issue for all busy-wait synchronization
will concentrate on locks, with briefer coverage of barriers

problems with test-and-set locks
    contention
    unfairness
test-and-test-and-set limits contention to release time
    on cache-coherent machines
bounded exponential backoff reduces contention
    doesn't solve the problem, esp. on machines with larger numbers of
    fast cores

ticket locks
    fair, but still vulnerable to contention
    nice backoff option

    class lock
        int next_ticket := 0
        int now_serving := 0
        const int base := ...               // tuning parameter

    lock.acquire():
        int my_ticket := FAI(&next_ticket)
            // returns old value; arithmetic overflow is harmless
        loop
            int ns := now_serving.load()
            if ns == my_ticket
                break
            pause(base * (my_ticket - ns))
                // overflow in subtraction is harmless
        fence(R||RW)

    lock.release():
        int t := now_serving + 1
        now_serving.store(t, RW||)

queue-based locks
    array-based (O(n) space per lock)
        Anderson -- FAA-based
        Graunke & Thakkar -- swap-based
    linked list-based
        MCS -- swap and (ideally) CAS-based
        CLH -- swap-based

MCS code

    type qnode = record
        qnode* next
        bool waiting

    class lock
        qnode* tail := null

    lock.acquire(qnode* p):
        p->next := null             // Initialization of waiting can be delayed
        p->waiting := true          // until the if statement below, but at
                                    // the cost of an extra W||W fence.
        qnode* prev := swap(&tail, p, W||)
        if prev != null             // queue was nonempty
            prev->next.store(p)
            while p->waiting.load();        // spin
        fence(||RW)

    lock.release(qnode* p):
        qnode* succ := p->next.load(WR||)
        if succ == null             // no known successor
            if CAS(&tail, p, null)
                return
            repeat
                succ := p->next.load()
            until succ != null
        succ->waiting.store(false)

    // parameter p points to a qnode record allocated (in an enclosing
    // scope) in shared memory locally-accessible to the invoking
    // processor

CLH code

    type qnode = record
        qnode* prev                 // read and written only by owner thread
        bool succ_must_wait

    class lock
        qnode dummy := { null, false }
            // ideally, dummy and tail should lie in separate cache lines
        qnode* tail := &dummy       // initialization

    lock.acquire(qnode* p):
        p->succ_must_wait := true
        qnode* pred := p->prev := swap(&tail, p, W||)
        while pred->succ_must_wait.load();      // spin
        fence(||RW)

    lock.release(qnode** pp):
        qnode* pred := (*pp)->prev
        (*pp)->succ_must_wait.store(false, RW||)
        *pp := pred                 // take pred's qnode
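
For concreteness, here is one way the ticket lock might be rendered in
C11 atomics.  This is a sketch, not part of the algorithm as given
above: the BASE_NS constant, the nanosleep-based pause, and the mapping
of the R||RW / RW|| annotations onto memory_order_acquire /
memory_order_release are all assumptions.

    #include <stdatomic.h>
    #include <time.h>       /* nanosleep (POSIX) */

    #define BASE_NS 100     /* tuning parameter -- assumed value */

    typedef struct {
        atomic_uint next_ticket;    /* initialize both fields to 0 */
        atomic_uint now_serving;
    } ticket_lock;

    static void tl_pause(unsigned n) {      /* proportional backoff */
        long ns = (long)n * BASE_NS;
        if (ns > 999999999L) ns = 999999999L;   /* nanosleep limit */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = ns };
        nanosleep(&ts, NULL);
    }

    void tl_acquire(ticket_lock *l) {
        unsigned my_ticket = atomic_fetch_add_explicit(
            &l->next_ticket, 1, memory_order_relaxed);
            /* returns old value; arithmetic overflow is harmless */
        for (;;) {
            unsigned ns = atomic_load_explicit(
                &l->now_serving, memory_order_acquire);  /* ~ fence(R||RW) */
            if (ns == my_ticket) break;
            tl_pause(my_ticket - ns);   /* overflow in subtraction is harmless */
        }
    }

    void tl_release(ticket_lock *l) {
        unsigned t = atomic_load_explicit(
            &l->now_serving, memory_order_relaxed) + 1;  /* holder's own value */
        atomic_store_explicit(&l->now_serving, t,
                              memory_order_release);     /* ~ store(RW||) */
    }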
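
A similar C11 sketch of the MCS lock, under the same assumed
translation of the fence annotations (acquire for the spins and loads,
release for the handoff stores, acq_rel for swap and CAS):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct mcs_qnode {
        struct mcs_qnode *_Atomic next;
        atomic_bool waiting;
    } mcs_qnode;

    typedef struct {
        mcs_qnode *_Atomic tail;    /* initialize to NULL */
    } mcs_lock;

    void mcs_acquire(mcs_lock *l, mcs_qnode *p) {
        atomic_store_explicit(&p->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&p->waiting, true, memory_order_relaxed);
        mcs_qnode *prev = atomic_exchange_explicit(
            &l->tail, p, memory_order_acq_rel);         /* swap */
        if (prev != NULL) {                 /* queue was nonempty */
            atomic_store_explicit(&prev->next, p, memory_order_release);
            while (atomic_load_explicit(&p->waiting, memory_order_acquire))
                ;                           /* spin on our own flag */
        }
    }

    void mcs_release(mcs_lock *l, mcs_qnode *p) {
        mcs_qnode *succ = atomic_load_explicit(&p->next,
                                               memory_order_acquire);
        if (succ == NULL) {                 /* no known successor */
            mcs_qnode *expected = p;
            if (atomic_compare_exchange_strong_explicit(
                    &l->tail, &expected, NULL,
                    memory_order_acq_rel, memory_order_acquire))
                return;                     /* queue truly empty */
            do {                            /* successor is in timing window */
                succ = atomic_load_explicit(&p->next, memory_order_acquire);
            } while (succ == NULL);
        }
        atomic_store_explicit(&succ->waiting, false, memory_order_release);
    }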
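
And a corresponding C11 sketch of CLH, with the same caveats.  Note the
non-standard interface carried over from the pseudocode: release takes
the qnode pointer by reference, because the releasing thread adopts its
predecessor's node.  The lock must be initialized with tail pointing at
a dummy node whose succ_must_wait is false.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct clh_qnode {
        struct clh_qnode *prev;     /* read and written only by owner thread */
        atomic_bool succ_must_wait;
    } clh_qnode;

    typedef struct {
        clh_qnode *_Atomic tail;    /* initialize to address of a dummy
                                       node with succ_must_wait == false */
    } clh_lock;

    void clh_acquire(clh_lock *l, clh_qnode *p) {
        atomic_store_explicit(&p->succ_must_wait, true,
                              memory_order_relaxed);
        clh_qnode *pred = p->prev =
            atomic_exchange_explicit(&l->tail, p, memory_order_acq_rel);
        while (atomic_load_explicit(&pred->succ_must_wait,
                                    memory_order_acquire))
            ;                       /* spin on predecessor's flag */
    }

    void clh_release(clh_qnode **pp) {
        clh_qnode *pred = (*pp)->prev;
        atomic_store_explicit(&(*pp)->succ_must_wait, false,
                              memory_order_release);
        *pp = pred;                 /* take pred's qnode */
    }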

performance results
    see TOCS paper, Feb. 1991
    note, esp.,
    - Table II (p. 51): even w/ exp. backoff, remote spinning impacts
      other operations
    - difference between Fig. 20 and 21 reflects broadcast v. network
      interconnect
    - Figure 22 and Table III: don't build dance-hall machines!

Is HW synch. needed?
    prob. not for locks, though the constant factor can be recouped
        in particular, for small data structures, acquiring the lock
        can automatically grab colocated data
    more important for barriers (e.g., on Crays, IBM BG series)
        asymptotic benefit

tradeoffs
    array-based locks require O(L*T) space; linked lists are O(L+T),
        which is optimal
    CLH requires cache coherence to avoid contention; MCS works on
        non-cache-coherent machines
    MCS has a potential spin in release
    CLH requires only swap; standard MCS requires CAS or LL/SC
    CLH requires a dummy node in an empty lock (can be modified to
        remove it, at the cost of an extra atomic op in release)
    CLH can be modified to work on NCC machines, with an extra level of
        indirection; MCS can be modified to eliminate the spin in
        release.  If you make these changes to both, you arrive at
        pretty much the same place (with dummy node and only swap).
    both MCS and CLH have non-standard interfaces; MCS can be modified
        (IBM K42 innovation) to use the standard interface with no
        performance cost; CLH can also be modified, but requires a
        global array of n queue nodes
    any fair lock dramatically exacerbates the preemption problem (below)

K42 MCS:

    type qnode = record
        qnode* tail
        qnode* next
        // If threads are waiting for a held lock, next points to the
        // queue node of the first of them, and tail to the queue node
        // of the last.
        // A held lock with no waiters has value [&q, null].
        // A free lock with no waiters has value [null, null].

    const qnode* waiting := 1   // sentinel, distinct from any real address
        // In a real qnode, tail == null means the lock has been granted
        // to that node's owner; in the qnode that is a lock, tail is
        // the real tail pointer.

    class lock
        qnode q := { null, null }

    lock.acquire():
        loop
            qnode* prev := q.tail.load()
            if prev == null
                // lock appears to be free
                if CAS(&q.tail, null, &q)
                    break
            else
                qnode n := { waiting, null }
                if CAS(&q.tail, prev, &n, W||)      // we're in line
                    prev->next.store(&n)
                    while n.tail.load() == waiting;     // spin
                    // now we have the lock
                    qnode* succ := n.next.load()
                    if succ == null
                        q.next.store(null)
                        // try to make lock point at itself:
                        if !CAS(&q.tail, &n, &q)
                            // somebody got into the timing window
                            repeat
                                succ := n.next.load()
                            until succ != null
                            q.next.store(succ)
                        break
                    else
                        q.next.store(succ)
                        break
        fence(||RW)

    lock.release():
        qnode* succ := q.next.load(RW||)
        if succ == null
            if CAS(&q.tail, &q, null)
                return
            repeat
                succ := q.next.load()
            until succ != null
        succ->tail.store(null)

"K42" CLH

    type qnode = record
        bool succ_must_wait

    qnode initial_thread_qnodes[T]
    qnode* thread_qnode_ptrs[T] := { i in T : &initial_thread_qnodes[i] }

    class lock
        qnode dummy := { false }
            // ideally, dummy should lie in a separate cache line from
            // tail and head
        qnode* tail := &dummy
        qnode* head

    lock.acquire():
        qnode* p := thread_qnode_ptrs[self]
        p->succ_must_wait := true
        qnode* pred := swap(&tail, p, W||)
        while pred->succ_must_wait.load();      // spin
        head.store(p)
        thread_qnode_ptrs[self] := pred
        fence(||RW)

    lock.release():
        head->succ_must_wait.store(false, RW||)

----------------------------------------
Cohort locks (Dice, Marathe, and Shavit)

Niagara machines: 8 cores, 8 threads each, 4 sockets = 256 contexts

HBO lock: entirely probabilistic -- back off less when the lock holder
    is nearby; more if it is far away

HCLH: every once in a while, merge the node-local queue into the
    global queue
    bounded unfairness

Flat-Combining NUMA Locks (Dice, Marathe, & Shavit)
    thread that merges into the global queue does more work on the
    local node, to build an MCS-style list

Notion of "thread obliviousness" (required by the global lock):
    have to allow the lock to be acquired by thread i and released by
    some other thread j
Notion of "cohort detection" (required by local locks):
    have to be able to tell whether any other local thread wants the lock

Asymmetric locking

    Peterson_lock PL
    general_lock GL

    acquire():
        if ~preferred_thread
            GL.acquire()
        PL.acquire()

    release():
        PL.release()
        if ~preferred_thread
            GL.release()

    but with fences.  Can choose to make them _very_ asymmetric:
    modest benefit for the preferred thread; very high cost for
    non-preferred threads.

========================================
Barriers

log n time is inescapable, though with HW support it may not seem so
in practice

sense-reversing centralized barrier

    class barrier
        int count := 0
        const int n := |T|
        bool sense := true
        bool local_sense[T] := { true ... }

    barrier.cycle():
        bool s := !local_sense[self]
        local_sense[self] := s      // each thread toggles its own sense
        if FAI(&count, RW||) == n-1         // note release ordering
            count.store(0)
            sense.store(s)          // last thread toggles global sense
        else
            while sense.load() != s;        // spin
        fence(||RW)
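
For concreteness, a C11 sketch of the same sense-reversing barrier,
again with an assumed translation of the fence annotations and an
assumed fixed participant count N_THREADS:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define N_THREADS 8     /* |T| -- assumed, fixed participant count */

    typedef struct {
        atomic_int  count;              /* initialize to 0 */
        atomic_bool sense;              /* initialize to true */
        bool local_sense[N_THREADS];    /* initialize all to true */
    } central_barrier;

    void barrier_cycle(central_barrier *b, int self) {
        bool s = !b->local_sense[self];
        b->local_sense[self] = s;   /* each thread toggles its own sense */
        if (atomic_fetch_add_explicit(&b->count, 1,
                memory_order_acq_rel) == N_THREADS - 1) {
            atomic_store_explicit(&b->count, 0, memory_order_relaxed);
            atomic_store_explicit(&b->sense, s,
                memory_order_release);  /* last thread toggles global sense */
        } else {
            while (atomic_load_explicit(&b->sense,
                       memory_order_acquire) != s)
                ;                       /* spin */
        }
    }

The release store to sense is ordered after the reset of count, so a
thread that observes the new sense (with an acquire load) is guaranteed
to see count already reset before it starts the next episode.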
lots of log-time SW barriers

    combining tree [Tang & Yew]
        arrive at leaf
        FAI at each node; winner goes up

    dissemination barrier [Hensgen, Finkel, & Manber]
        notify node self + 1, wait for self - 1,
        notify self + 2, wait for self - 2,
        notify self + 4, wait for self - 4, ...  mod |T|
        latency reduced by a factor of 2, but total traffic and space
        increased from Theta(n) to Theta(n log n)

    tournament barrier [Hensgen, Finkel, & Manber; Lubachevsky]
        like combining tree, but statically determined "winner" at
        each node
        needs fewer RMW instructions (expensive on some machines)

    static tree barrier [Mellor-Crummey & Scott]
        all nodes are arrival nodes -- not just leaves.
        wait for all children, then notify parent.
        can have arbitrary (and different) fan-in and fan-out
        trivially local spinning

all tree barriers (but not the dissemination barrier) can use a
broadcast wakeup flag on a CC machine

in practice, use
    centralized when
        # of participants is modest
        # of participants isn't static
    dissemination when
        latency matters, bandwidth is plentiful, and you don't have
            cache coherence
        fuzziness isn't desired
    static tree otherwise

fuzzy barriers [Gupta]
    do work while waiting for others to arrive
    easy to do centralized
    possible to do with tree barriers [Scott & Mellor-Crummey]
        requires adaptation (borrowed from Gupta & Hill)
    but not with the dissemination barrier

================================================================
preemption

a problem with all locks
especially bad for fair locks

solutions (or at least partial solutions)
    timeout (spin-then-yield, try-locks)
    spin-then-block
    don't preempt me (various variants)
    handshake when passing
    published time: allows one to _guess_ whether a peer is preempted
    don't preempt that other thread (wider kernel interface)
    yield processor to specific other thread (likewise)
    lock-free data structures

----------------
timeout

trivial for TAS locks (see the sketch below)
no obvious solution for ticket locks
problematic for queue-based locks (how do I get out of line?)

series of papers, culminating in HiPC 2005 -- addresses both timeout
and preemption tolerance
    several variants, the most promising of which
    - marks deleted nodes (dynamically allocated), allowing the
      successor to link them out of the queue
    - uses the _published time_ heuristic to detect preempted peers

work from HP at PPoPP'17: HMCS-T
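
As an illustration of the timeout idea in its simplest setting, here is
a hypothetical C11 spin-then-yield try-lock built on test-and-set.  The
spin threshold, the POSIX clock, and sched_yield are illustrative
choices, not part of any of the algorithms above.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>      /* sched_yield (POSIX) */
    #include <time.h>       /* clock_gettime (POSIX) */

    typedef struct {
        atomic_flag held;   /* initialize with ATOMIC_FLAG_INIT */
    } tas_lock;

    static long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long)ts.tv_sec * 1000000000L + ts.tv_nsec;
    }

    /* Spin for at most patience_ns; once an initial burst of pure
       spinning fails, yield the processor between attempts (in case
       the lock holder has been preempted).  Returns true iff acquired. */
    bool tas_timed_acquire(tas_lock *l, long patience_ns) {
        long start = now_ns();
        int spins = 0;
        while (atomic_flag_test_and_set_explicit(&l->held,
                                                 memory_order_acquire)) {
            if (now_ns() - start > patience_ns)
                return false;       /* timed out; caller can retry or block */
            if (++spins > 1000)     /* spin-then-yield threshold (assumed) */
                sched_yield();
        }
        return true;
    }

    void tas_release(tas_lock *l) {
        atomic_flag_clear_explicit(&l->held, memory_order_release);
    }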