CSC 2/458: Parallel and Distributed Systems
6 March 2019

Concurrency Theory

----------------------------------------
safety

want sequential semantics plus
    deadlock freedom
    condition sync for anything that isn't total
    clear semantics regarding atomicity & ordering

deadlock
    recall the Dining Philosophers

    necessary conditions:
        exclusive use
            go nonblocking, or (for readers) use RW locks,
            seqlocks, or RCU
        hold & wait
            prevent by requesting all resources at once
        irrevocability
            recover via some sort of back-out-and-retry mechanism
        circularity
            prevent with static acquisition order (sketched below)
            avoid via Banker's alg
                requires a feasible path to completion and known
                worst-case future needs (which we often don't have)
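    A minimal sketch of static acquisition order in C++ (the
    Account type and transfer routine are illustrative, not from
    the lecture): every thread that needs both locks takes them in
    the same global (here, address) order, so a cycle of waiting
    can never form.

        #include <mutex>

        struct Account {
            std::mutex lock;
            long balance = 0;
        };

        void transfer(Account& from, Account& to, long amount) {
            if (&from == &to) return;   // avoid self-deadlock
            // global acquisition order: lower address first
            Account* first  = (&from < &to) ? &from : &to;
            Account* second = (&from < &to) ? &to   : &from;
            std::lock_guard<std::mutex> g1(first->lock);
            std::lock_guard<std::mutex> g2(second->lock);
            from.balance -= amount;
            to.balance   += amount;
        }

    Concurrent transfer(a, b, ...) and transfer(b, a, ...) calls
    now contend but cannot deadlock.  (std::lock / std::scoped_lock
    achieve the same end dynamically.)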
----------------------------------------
atomicity

high-level sequential consistency

    object O is SC if all ops on O appear to happen in some global
    total order consistent with program order in each thread.

    problem: lack of composability of histories

    replicated integer example:

        // L is a lock; A has one slot per thread in the set T;
        // self is the calling thread's id
        void put(int v):            int get():
            L.acquire()                 return A[self]
            for i in T
                A[i] := v
            L.release()

    (This impl. might actually make sense on an NCC-NUMA machine
    with infrequent writes.)

    Puts are totally ordered by the lock.  Gets are totally ordered
    wrt the puts.  But gets can happen in the middle of puts, so
    operations on _separate_ replicated integers may be seen in
    different orders by different threads, even if the underlying
    memory is SC.

    SW write buffer: even if other threads all see my updates in
    the same order, SC is violated if I can see my own updates
    early -- the "bow tie" example.

linearizability

    object O is L if all ops on O appear to happen in some order
    consistent not only w/ program order in each thread but also
    with any other observable order.  In particular, they have to
    appear to happen instantaneously at some point between their
    call and return.

    "instantaneously" fixes the replicated integer example
        can't happen that A sees an op, tells B it did, and then B
        looks and doesn't see it
    "between call and return" fixes the SW write buffer example

    hand-over-hand locking serves as an example of nontrivial
    linearizability in a fine-grain locking algorithm

    typically we reason about a _linearization point_
        might be statically identifiable -- e.g., in the Treiber
        stack (sketched at the end of this subsection)
            everything before is harmless prep
            everything after is doable-by-anybody or
            arbitrarily-postponable cleanup
        might be known by the thread sometime after the fact
            "Ah, now that I see y == 0, I know that I linearized
            when I read x == 0 earlier"
        might require retrospective reasoning over a dynamic
        history
            this is ok if you can prove it's always possible

    Example where linearization points can't be statically
    identified: the single-producer, single-consumer queue of
    Fig. 3.3 in Herlihy & Shavit:

        // assume sequential consistency for simplicity
        int head = 0, tail = 0
        T items[length]
        exception full, empty

        void enqueue(T x)
            if (tail - head == length) throw full
            items[tail % length] = x
            tail++

        T dequeue()
            if (tail - head == 0) throw empty
            T x = items[head % length]
            head++
            return x

    Only the producer modifies tail; only the consumer modifies
    head.  Both update items _before_ doing so.  Wrap-around of
    integer precision is safe; subtractions happen on the ring of
    integers mod 2^wordsize.

    Invariants:
        0 <= tail - head <= length
        data (if any) that have been produced but not consumed
        occupy items[head % length] .. items[(tail-1) % length]

    To prove this algorithm is correct and nonblocking, we have to
    verify the invariants after each individual memory write.

    Where does this code linearize?  At the increments OR at the
    read of the second of tail & head, depending on whether the
    operation is successful (a successful op takes effect at its
    increment; an op that throws full or empty is decided at its
    read of the counter it doesn't own, since its own counter
    can't change underneath it).

    Another similar example: the Harris & Michael nonblocking
    linked-list set.

    Example requiring retrospective reasoning: the Izraelevitz &
    Scott generic dual container.
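    To make "statically identifiable" concrete, here is a minimal
    C++ sketch of a Treiber-style stack (illustrative code, not
    from the lecture; popped nodes are leaked, which sidesteps the
    reclamation and ABA issues a real implementation must face):

        #include <atomic>

        template<typename T>
        class TreiberStack {
            struct Node { T value; Node* next; };
            std::atomic<Node*> top{nullptr};
        public:
            void push(T v) {
                // prep: n is private, so this is harmless
                Node* n = new Node{v, top.load()};
                // a failed CAS writes the current top into n->next,
                // so the loop retries with a fresh snapshot
                while (!top.compare_exchange_weak(n->next, n)) { }
                // the successful CAS is the linearization point:
                // the push appears to take effect right there
            }
            bool pop(T& out) {
                Node* n = top.load();
                while (n != nullptr &&
                       !top.compare_exchange_weak(n, n->next)) { }
                if (n == nullptr) return false;  // linearizes at a
                                                 // load that saw null
                // the successful CAS above is the linearization
                // point of a nonempty pop; the rest is cleanup
                // that could be arbitrarily postponed
                out = n->value;
                return true;                     // n is leaked
            }
        };

    Everything before the successful CAS touches only private data;
    everything after it could be postponed or done by anyone --
    which is exactly why the linearization point can be named
    statically.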
serializability

    While it's nice to be able to safely compose histories of
    independent objects, sometimes we want to compose _operations_
    into larger atomic operations.  Linearizability doesn't let us
    do this.  Serializability does, but at the cost of not being
    able to safely compose histories anymore: you have to bring
    everything you care about into "one managed system".  Databases
    typically do this.

    Transactions (composite operations) are said to serialize if
    they appear to happen in some global total order that respects
    program order in each thread.  Note that this is allowed to be
    inconsistent with other observable orders:

        time -->
        [--T1--]        [--T2--]

        ok if T2 serializes before T1

    If we don't like that, we can insist on (and pay for) _strict_
    serializability, which requires "real time" order (transactions
    appear to happen between start_txn and end_txn), much as
    linearizability does.

    A global lock can clearly be used to achieve (strict)
    serializability, but w/out any concurrency.  Other strategies
    are possible.  In general, they have to be prepared to back out
    and retry to recover from deadlock, because the basic design
    goals imply exclusive use, hold & wait, and possible
    circularity.

    One popular strategy is _two-phase locking_ (2PL):
        every object has a lock
        acquire all the locks you need before releasing any of them
        if you get stuck (detect circularity, or simply wait too
        long and lose hope), back out and retry (that's
        _speculation_)
    2PL suffices for strict serializability.  Some other impls. of
    serializability don't.

    Difference between high-level SC and serializability: SC
    applies to a static set of operations on individual objects;
    serializability creates composite operations (transactions).
    SC can often be implemented w/out a global lock _or_
    speculation.  Imagine, for example, an impl. that remembers old
    values of objects and lets a "confirm non-zero" operation
    "complete in the past" so long as that's consistent with
    program order in the calling thread.

quiescent consistency (not in the book)

    Operations appear to happen in some total order;
    _nonoverlapping_ operations appear to occur in real-time order.
    Applies to individual objects -- no composite operations.
    Operations not separated by quiescence may not occur in program
    order.  E.g., I enqueue x and then y; your dequeue operation
    overlaps both enqueues, and you come out with y.

                                     QC   SC   L    S    SS
    equiv. to a seq. order           +    +    +    +    +
    respects program order           -    +    +    +    +
    consistent w/ real time          q    -    +    -    +
    op can touch multiple objects    -    -    -    +    +
    local: histories compose         +    -    +    -    -

    So SS dominates the other 4 in the sense that any history that
    is SS is also QC, SC, L, and S.  Similarly, L dominates QC and
    SC, and S > SC.  But the stronger properties can get in the way
    of composability -- note the "local" row above.

    Note that when we use sequential consistency at the level of
    the hardware memory model, it does, effectively, respect real
    time, because it's being applied to the whole system -- there's
    nothing "external" we can use to "see" reorderings.  But if
    you're a computer architect building memory, making that memory
    SC system-wide is a challenge precisely because of
    non-composability!

    Note also that strict serializability and linearizability are
    equivalent if we consider all the system's data to be a single
    object.

========================================
liveness

multiple levels of nonblocking guarantees
    wait-free
        very strong; generally too expensive -- requires helping
    lock-free
        can be very fast in ad hoc cases
    obstruction-free
        moves progress out of band; can be quite simple

All three levels are deadlock-free.  Lock-free algorithms are also
livelock-free.  Wait-free algorithms are also starvation-free.

Leader election (consensus) with CAS is wait-free (see the sketch
at the end of these notes).  The M&S queue is lock-free but not
wait-free.  The SPSC queue above is wait-free.  There's a natural
obstruction-free deque (that isn't lock-free).

Anything can, in principle, be made wait-free, but the construction
is messy.  Intuition:
    - shared /announce/ array of high-level op descriptors, indexed
      by thread
    - per-object /responses/ array of result info, also indexed by
      thread
    - before I perform an op on obj. X, I scan the two arrays and
      _help_ any op that hasn't completed yet
    - performing an op involves
        - indirection to the root of every object
        - copying the whole thing -- or at least its "spine"
        - checking to make sure the copy is consistent
        - creating a new version
        - installing it with CAS
    - lots of messy race conditions.  Also ABA.

Helping isn't always necessary, though: witness the SPSC queue.
Also the increment-only counter given in the book (inc your own
slot; return the sum of a scan).

Important work in recent years (Petrank et al., etc.) has developed
techniques to move helping off the common code path -- don't add
yourself to the announce array unless you fear you're starving;
only check and help once in a while.  The resulting algorithms tend
to be pretty fast, though space remains linear in the number of
threads in the system.

----------------------------------------
fairness

LOTS of possible definitions

_weak fairness_: any thread waiting for a condition that is
continuously true eventually takes another step.

_strong fairness_: any thread waiting for a condition that is true
infinitely often eventually takes another step.  Often impractical:
how do we know that the scheduler doesn't pathologically let me
look only when the condition is false?
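As promised above, a sketch of leader election (consensus) with CAS
in C++ (names are illustrative, not from the lecture):

    #include <atomic>

    std::atomic<int> leader{-1};   // -1 means "no leader yet"

    // Each thread calls elect(my_id); all callers return the same
    // winner.  The single CAS either succeeds or fails -- there is
    // no retry loop and no waiting on other threads, so each call
    // finishes in a bounded number of its own steps: wait-free.
    int elect(int my_id) {
        int expected = -1;
        leader.compare_exchange_strong(expected, my_id);
        return leader.load();      // the agreed-upon winner
    }

Contrast this with the M&S queue, where a failed CAS forces a retry
whose count depends on other threads' behavior: lock-free, but not
wait-free.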