CSC 2/458: Parallel and Distributed Systems
Feb. 27 2019

List of project suggestions will be updated this week.
Project proposals will be due shortly after spring break.

========================================
reader-writer locks -- Chapter 6
    like mutex locks, come in several variants
    PPoPP'91 paper explores
            TAS and MCS analogues
        crossed with
            reader-pref, writer-pref, and fair

Centralized case
    reader preference:
        track number of active or interested readers,
            existence of active writer
        readers wait until no writer is active
        writers wait until no writer is active and there are no active
            or interested readers
        (see the counter-based sketch after this list)
    writer preference:
        track number of active readers, number of interested writers,
            existence of active writer
        readers wait until no writer is interested or active
        writers wait until no reader or writer is active
    fair:
        track numbers of arrived and completed readers and writers
            (ticket-lock like)
        readers wait until all previous writers have finished
        writers wait until all previous readers and writers have finished
    phase-fair [Brandenburg & Anderson]:
        alternate between readers and writers as long as there are some
            of each waiting.
        let all the waiting readers in when a writer finishes
            (or a writer if there are no waiting readers)
        let a reader join a current reader group
            iff there are no waiting writers
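
For concreteness, here is a minimal C++ sketch of the centralized
reader-preference case (the counter-based sketch mentioned above).  The
layout is my own, not the PPoPP'91 code: one atomic word whose low bits
count active or interested readers, with a high bit marking an active
writer.

#include <atomic>

class rp_rwlock {
    static const int WRITER = 1 << 30;  // high bit: a writer is active
    std::atomic<int> state{0};          // low bits: active/interested readers
public:
    void reader_acquire() {
        while (true) {
            int s = state.fetch_add(1, std::memory_order_acquire);
            if (!(s & WRITER)) return;  // no active writer: we're in
            // a writer is active: retract interest and spin
            state.fetch_sub(1, std::memory_order_relaxed);
            while (state.load(std::memory_order_relaxed) & WRITER) { }
        }
    }
    void reader_release() {
        state.fetch_sub(1, std::memory_order_release);
    }
    void writer_acquire() {
        // wait until no writer is active and no readers are active
        // or interested (i.e., the whole word is zero)
        int expected = 0;
        while (!state.compare_exchange_weak(expected, WRITER,
                                            std::memory_order_acquire))
            expected = 0;               // CAS failure clobbers expected
    }
    void writer_release() {
        state.fetch_and(~WRITER, std::memory_order_release);
    }
};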

Reader- and writer-preference locks can use exponential backoff for
writers (sketch below).  Fair locks can use ticket-style proportional
backoff based on the number of waiting writers.
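
A hypothetical helper for the exponential case might look like this in
C++ (the bounds are arbitrary; in the fair case the sleep would instead
be proportional to the number of waiting writers ahead of you):

#include <algorithm>
#include <chrono>
#include <thread>

void exp_backoff(int &delay_us) {       // call between lock attempts
    std::this_thread::sleep_for(std::chrono::microseconds(delay_us));
    delay_us = std::min(2 * delay_us, 1 << 14);  // double, with a cap
}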

Queued reader-writer locks are messier.  The book presents code for the
fair case, which is the simplest.  (Code by Krieger et al., inspired by
an earlier version by Mellor-Crummey et al., with a bug fix by Dice et al.)

----------------------------------------
Note that RW locks turn readers into writers, in some sense: that is,
they force readers to write the lock.  If critical sections are large,
this may not be a big deal.  If they're small, cache line bounces for
the lock itself limit throughput.  While in principle we'd like to
allow an arbitrary number of readers to operate concurrently, w/ RW
locks we really can't.

One way around this is a _distributed_ RW lock [Hsieh & Weihl].
Each reader has its own lock, which is usually local.
The writer has to acquire a special writer lock AND THEN every one of
the reader locks.
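
A sketch of the idea in C++, with my own structure and names (Hsieh &
Weihl's actual interface may differ):

#include <atomic>
#include <mutex>

const int MAX_THREADS = 64;             // fixed thread count (assumption)

struct alignas(64) reader_lock {        // padded: each flag on its own line
    std::atomic<bool> held{false};
};

class dist_rwlock {
    reader_lock r[MAX_THREADS];
    std::mutex writer_mutex;            // serializes writers
public:
    void reader_acquire(int self) {     // touches only this thread's flag,
        bool expected = false;          // which normally stays cached locally
        while (!r[self].held.compare_exchange_weak(
                   expected, true, std::memory_order_acquire))
            expected = false;           // spin: a writer holds our lock
    }
    void reader_release(int self) {
        r[self].held.store(false, std::memory_order_release);
    }
    void writer_acquire() {
        writer_mutex.lock();            // the special writer lock, AND THEN
        for (int i = 0; i < MAX_THREADS; i++) {  // every reader lock
            bool expected = false;
            while (!r[i].held.compare_exchange_weak(
                       expected, true, std::memory_order_acquire))
                expected = false;
        }
    }
    void writer_release() {
        for (int i = 0; i < MAX_THREADS; i++)
            r[i].held.store(false, std::memory_order_release);
        writer_mutex.unlock();
    }
};

In the common (read-mostly) case a reader's CAS hits its own cached
line; the cost of a write is O(number of threads).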

An alternative is a sequence lock
    provide mutual exclusion among writers, but not between readers
        and writers.  Instead, allow a reader to discover, after the
        fact, that its execution may not have been valid, and needs to
        be retried.
    avoid modification of the lock by readers
    allow a reader to "upgrade" to writer status
    canonical use case is multi-word timer information in the kernel
    paper at PPoPP'19 uses them for B-trees

class seqlock
    int n := 0      // sequence number; even means no writer is active

int seqlock.reader_start():
    int seq
    repeat  // spin until even
        seq := n.load()
    until seq = 0 mod 2
    fence(R|R)      // CS reads must not float above the read of n
    return seq

bool seqlock.reader_validate(int seq):
    return (n.load(R|) = seq)   // CS reads ordered before re-reading n

bool seqlock.become_writer(int seq):    // upgrade
    if CAS(&n, seq, seq+1, R|)          // needs to be write-atomic
        fence(R|W)
        return true
    return false

seqlock.writer_acquire():
    int seq
    repeat  // spin
        seq := n.load()
    until seq = 0 mod 2 and CAS(&n, seq, seq+1)
        // needs to be write-atomic
    fence(R|RW)

seqlock.writer_release():
    int seq := n.load()
    n.store(seq+1, RW|)     // n even again; prior CS accesses ordered first

// use case:
    repeat
        int s := SL.reader_start()
        // critical section
    until SL.reader_validate(s)

** critical section must be idempotent, and must validate before doing
   anything "dangerous"

** seqlocks introduce a ton of data races, which have to be manually
   labeled to be safe in C/C++.  One can often use
   memory_order_relaxed, but this requires great care.
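
Here is one defensible C++11 labeling of the pseudocode above, for a
hypothetical two-word payload (it follows the usual idiom of making the
payload itself relaxed-atomic; a sketch, not the only correct labeling):

#include <atomic>
#include <cstdint>

class seqlock {
    std::atomic<unsigned> n{0};             // even => no writer active
    std::atomic<uint64_t> lo{0}, hi{0};     // the protected words
public:
    void read(uint64_t &a, uint64_t &b) {
        unsigned seq;
        do {
            do {                            // spin until even
                seq = n.load(std::memory_order_acquire);
            } while (seq % 2 != 0);
            a = lo.load(std::memory_order_relaxed);    // labeled races
            b = hi.load(std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_acquire);
        } while (n.load(std::memory_order_relaxed) != seq);  // validate
    }
    void write(uint64_t a, uint64_t b) {    // assumes writers already
        unsigned seq = n.load(std::memory_order_relaxed);    // serialized
        n.store(seq + 1, std::memory_order_relaxed);  // odd: writer active
        std::atomic_thread_fence(std::memory_order_release);
        lo.store(a, std::memory_order_relaxed);
        hi.store(b, std::memory_order_relaxed);
        n.store(seq + 2, std::memory_order_release);  // even again
    }
};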

========================================
RCU -- McKenney article
    intended for overwhelmingly read-intensive data
        avoid any overhead at all on the reader path
        canonical use case is network routing tables in the kernel
    typically characterized by
        no shared updates by readers
        (but R|R -- or at least consume -- fences on each access, and a
            W|R fence at the beginning in many implementations)
        single-pointer updates (sketched after this list)
        unidirectional data traversal
        delayed, "epoch"-based reclamation
    epoch implementation
        several kernel-specific options
        in user space, most straightforward approach has a global
            epoch counter C and an array S[T] of thread-specific
            counters (see the sketch after this list)
        each reader copies C into S[self], with W|R fencing, at the
            beginning of its operation.  (The fence is needed in case the
            previous value was 0, indicating quiescence.)
        each writer increments C at the end of its operation and
            then waits until each entry of S is zero or equal to or
            larger than the new C before reclaiming anything.
        Alternatively, it can pass responsibility for reclamation to a
            helper thread that waits.
        The W|R fence in the reader can be avoided if the writer
            (presumed to be rare) uses inter-thread signals to kick
            the readers, who do the fence in their signal handler
        Epoch overflow can be tolerated if we wait _two_ grace periods
        Nested critical sections introduce additional complications
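
For concreteness, here are C++ sketches of both pieces promised above.
First the single-pointer-update read path, with hypothetical
routing-table names (this shows the shape of the pattern, not
McKenney's kernel API):

#include <atomic>

struct route { int dest; int hop; };
std::atomic<route*> table{nullptr};     // the single updated pointer

int lookup(int dest) {                  // reader path: no shared updates
    // acquire stands in for consume / the R|R fence on each access
    route* r = table.load(std::memory_order_acquire);
    return (r != nullptr && r->dest == dest) ? r->hop : -1;
}

route* publish(route* fresh) {          // writers serialized elsewhere
    // the old version can't be freed yet -- readers may still hold it;
    // defer reclamation for a grace period (next sketch)
    return table.exchange(fresh, std::memory_order_release);
}

And the user-space epoch scheme, assuming a fixed thread count T (all
names are mine):

#include <atomic>
#include <cstdint>

const int T = 64;                       // fixed thread count (assumption)
std::atomic<uint64_t> C{1};             // global epoch counter
struct alignas(64) slot { std::atomic<uint64_t> e{0}; };
slot S[T];                              // S[t] == 0: thread t is quiescent

void reader_enter(int self) {
    S[self].e.store(C.load(std::memory_order_relaxed),
                    std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);  // the W|R fence
}
void reader_exit(int self) {
    S[self].e.store(0, std::memory_order_release);  // quiescent again
}
void writer_wait_grace_period() {       // call after unlinking a node
    uint64_t newC = C.fetch_add(1, std::memory_order_seq_cst) + 1;
    for (int t = 0; t < T; t++)
        for (;;) {                      // wait: zero, or entered at/after newC
            uint64_t e = S[t].e.load(std::memory_order_acquire);
            if (e == 0 || e >= newC) break;
        }
    // now safe to reclaim anything unlinked before the increment
}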

Everybody remember how reference counts work for GC?  McKenney notes
that they also work in parallel code, so long as you use atomic
increments and decrements.  But they aren't cheap, and they require
extra discipline: if I want to reclaim something, I have to be sure not
only that its counter is zero but that nobody is about to increment the
count.  One way to make sure of that is to do the equivalent of
hand-over-hand locking (or, in this case, reference count
incrementing).  If structures are acyclic, I can, say, increment the
count in node A, read A->next (say it's B), increment the count in B,
then verify that A->next still points at B (decrementing the count in B
if not), then decrement the count in A.  This still requires a
TYPE-PRESERVING ALLOCATOR to make sure the reference-count field is
always a reference-count field, even if the space is reused.
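
A sketch of that hand-over-hand traversal step in C++ (node layout and
names are mine):

#include <atomic>

struct node {
    std::atomic<int> refs;              // must stay a refcount even if the
    std::atomic<node*> next;            // node is recycled (type-preserving
    // ... payload ...                  // allocator)
};

// Move a counted reference from A to A->next, verifying the link.
// Caller holds a count on A; returns nullptr at end of list.
node* advance(node* A) {
    while (true) {
        node* B = A->next.load(std::memory_order_acquire);
        if (B == nullptr) { A->refs.fetch_sub(1); return nullptr; }
        B->refs.fetch_add(1);           // claim B ...
        if (A->next.load(std::memory_order_acquire) == B) {
            A->refs.fetch_sub(1);       // ... then release A
            return B;
        }
        B->refs.fetch_sub(1);           // link changed underneath us; retry
    }
}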

----------------------------------------
Hazard pointers
(referred to by McKenney; due to Maged Michael and, concurrently and
independently, but under a different name, Herlihy, Luchangco, and
Moir).  McKenney describes them as "reference counts done inside out."
    Each thread maintains a globally visible list of the pointers it has
        cached in registers.  This requires a technique similar to the
        one described above for reference counts: I have a HP to A; I
        read A->next == B, create a HP for B, then verify that A->next
        still points at B (and drop the B HP if not).  (See the sketch
        after this list.)
    A thread that removes a node from a structure waits, before actually
        deleting (freeing) the node, until no hazard pointer points at
        that node.
    Big advantage over reference counts is that in the absence of
        writers the HP arrays are completely thread-local, and stay
        cached.  Note, however, that you generally need a W|R fence
        after setting one.
    Widely used technique.  When used as an alternative to RCU, has much
        lower latency for writers, but higher latency for readers.
        Which is better depends on relative frequency of readers & writers.
        Also, hazard pointers are suitable for cases that don't have a
        clean distinction between readers and writers.
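
A sketch of the read protocol and the remover's scan in C++ (array
layout and names are mine; real implementations batch retired nodes and
scan lazily):

#include <atomic>

const int T = 64;                       // fixed thread count (assumption)
struct alignas(64) hp_set {             // padded; stays in the owner's cache
    std::atomic<void*> p[2] = {nullptr, nullptr};  // two HPs per thread
};
hp_set HP[T];

struct node { std::atomic<node*> next; /* payload */ };

// Publish a HP for A->next in slot idx, verifying the link as above.
// Assumes the caller's other slot already protects A.
node* protect_next(int self, int idx, node* A) {
    while (true) {
        node* B = A->next.load(std::memory_order_acquire);
        HP[self].p[idx].store(B, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);  // the W|R fence
        if (A->next.load(std::memory_order_acquire) == B)
            return B;                   // B stays live while the HP is set
        // link changed; the stale HP is harmless -- retry
    }
}

// A remover scans all HPs before actually freeing a retired node x:
bool safe_to_free(node* x) {
    for (int t = 0; t < T; t++)
        for (int i = 0; i < 2; i++)
            if (HP[t].p[i].load(std::memory_order_acquire) == x)
                return false;           // someone may still dereference x
    return true;
}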

Advantages of RCU when readers are overwhelmingly more common than writers:
    little or no overhead in readers
        no per-access metadata updates
        no fences on TSO machines; R|R fences (cheap) on relaxed machines
        perhaps a little per-operation OH, to maintain epoch counters

Disadvantages:
    significant OH in writers
        mutex or other form of atomicity wrt other writers
        W|R fence on each update
        long delay (in self or helper) before reclamation
    in the "simple" case, requires tree-structured data
    easy to mess up if you try to be tricky
    not linearizable
        more about what this means later
        for now, note that the lack of read-side fences in
            (kernel-level) RCU means that a reader can see (e.g.,
            prefetch) stale data.

========================================
Teaser: nonblocking synchronization
advantages
    tolerate
        preemption
        page faults
        thread failure?
    avoid
        deadlock
        priority inversion
    potentially higher concurrency than with coarse-grain locks

challenges
    data structure-specific algorithms are hard to come up with --
        every one is a publishable result
    general-purpose algorithms have historically been very slow,
        but this is changing! -- TM, esp. w/ HW support
