CSC 2/458, 8 April 2019
Time in parallel & distributed systems

Final projects due the Friday after classes end.  Will schedule in-person
presentations during exam week.

Assume a collection of processes communicating via shared memory or, more
generally, via messages (over UDP, TCP, RPC, RDMA, etc.).

aside: network protocols
    UDP and TCP are, of course, the standard unreliable and reliable
    protocols, respectively, built on IP, the fundamental Internet Protocol.
    RPC -- remote procedure call -- is often provided on top of either UDP
    or TCP
        no one standard implementation; lots of alternatives out there
            Google's internal systems, for example, use one called Stubby
        interface reminiscent of local procedure calls
            IDL for data declarations (parameter types)
            RPC "stub compiler" generates caller and callee stubs
            caller stub "marshals" parameters and sends and receives messages
            on the server side, a server thread calls into the RPC library
            and, in a loop, repeatedly receives a message and calls the
            appropriate callee stub, which in turn "unmarshals" parameters,
            calls the appropriate local procedure, and sends back the results
            note that parameters must generally be passed by value or
            value/result -- deep copy
    RDMA is remote direct memory access
        provided by modern system-area networks, incl. InfiniBand

why is time useful?
    ordering -- often determines semantics
        did you insert X before or after I looked to see whether X was
        present?
        which files need rebuilding in distributed make?
            (when clients share a filesystem)
    (approximate) connection to / consistency with external events
        helps if it matches wall-clock time

references:
    Lamport: "Time, Clocks, and the Ordering of Events in a Distributed
        System," CACM 1978
    Tanenbaum & van Steen text: 1st ed. 5.1-5.2; 2nd ed. 6.1-6.2
        (main difference is the description of NTP and GPS)
    [BTW, NIST stands for National Institute of Standards and Technology,
        not ___ Standard Time]

key concepts:

(1) There is no such thing as absolute time -- even in physics.  All we can
    really observe is local ordering and causality; in particular, a signal
    (message, light beam, radio wave) is always received _after_ it is sent.
    Because the universe is distributed, happens-before is a PARTIAL ORDER.

(2) Sometimes it's _very_ useful for everybody to agree on some TOTAL ORDER
    consistent with the observed partial order.  Motivating example: state
    replication for availability and fault tolerance.  All replicas need to
    be the same, so everybody has to agree on the order of updates.

(3) Sometimes it's important to capture the partial order precisely -- NOT
    to approximate it with a total order.  Three motivating examples:
    (a) _causal_ multicast
    (b) data race detection [Pozniansky & Schuster, PPoPP 2003;
        Flanagan & Freund, PLDI 2009]
    (c) software distributed shared memory (TreadMarks protocol --
        [Keleher et al., ISCA 1992; Amza et al., IEEE Computer 1996])

clock synchronization
    why useful
        allows events to be ordered
            e.g., which data are more recent?
        simplifies algorithms (e.g., heartbeats) based on timeouts
            e.g., for at-most-once RPC
    how to do it
        time has to move forward; address discrepancies by speeding up or
        slowing down the clock
        Network Time Protocol (NTP)
            based on work of Cristian [1989]
            basic idea (sketched below)
                send a clock synch request to a network time server at
                time t1
                get a response at time t2 indicating the server's time is ts
                guess that the offset is ts - t1 - (t2-t1)/2, i.e., the
                server's time minus the midpoint of the round trip
                add or remove a little bit from each subsequent clock tick
                to bring the clocks into sync w/o ever running time backward
            refinements
                send requests both ways, multiple times; average, and solve
                for the possibility of different A-B and B-A propagation
                times
                classify nodes in terms of reliability
                    lower quality clock syncs itself to higher quality clock
                    equal quality clocks split the difference
            worldwide accuracy under 50ms
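A minimal Python sketch of the basic idea.  The function names and the
amortization constant are mine, not part of NTP; the real protocol adds the
refinements above plus statistical filtering.

    import time

    def estimate_offset(get_server_time):
        # get_server_time stands in for the round trip to the time server
        t1 = time.time()            # local time when the request goes out
        ts = get_server_time()      # server's reading of its own clock
        t2 = time.time()            # local time when the response arrives
        # Assume symmetric propagation, so ts was (roughly) sampled at the
        # midpoint of [t1, t2]; the offset is ts minus that midpoint.
        return ts - t1 - (t2 - t1) / 2.0

    def adjusted_tick(nominal_tick, offset, ticks_to_amortize=10000):
        # Never step the clock backward: spread the correction over many
        # future ticks by slightly lengthening or shortening each one.
        return nominal_tick + offset / ticks_to_amortize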
        Berkeley algorithm (no authority):
            server periodically polls everybody (a la Cristian), computes
            an average (maybe discarding outliers), and tells everybody how
            much to speed up or slow down
        fully decentralized:
            everybody periodically broadcasts, waits a little while,
            computes an average (discarding outliers) over the replies it
            gets, and speeds up or slows down accordingly
        GPS
            each satellite broadcasts its time and position
            receivers with 3 satellites in view triangulate, using skew to
            determine distance; with 4 satellites you can also calculate
            the precise local time
            commercial devices have positional accuracy in the 1-5m range
            and time accuracy under 50ns

(mere) ordering may be cheaper than real time (don't have to _know_ real
time), or may be more fine-grained than drift makes possible

Lamport's virtual clock protocol (sketched below, after the multicast
discussion)
    goal: assign a "time" value to everything that represents a total order
    consistent with happens-before
    keep a local step counter; increment it at every "event," including
    every message send
    include the counter in every message
    on receipt, set the new local time to one greater than the max of the
    current local time and the time contained in the message
    if you don't want equal times, include the pid as low-order bits

Totally ordered multicast
    e.g., to maintain distributed replicas
    trivial if all messages go through a single "rebroadcast" server
        (Orca does this)
        but requires a complicated leader election mechanism if the central
        server fails
    can be done with Lamport clocks (sketched below); requires
        - messages from any given A to any given B arrive in order
        - senders "send" messages to themselves
        - receivers multicast acknowledgments to everybody
            these can be delayed a bit to piggyback on each other
        receiver delivers a message to the app when it reaches the head of
        the incoming queue and has been acknowledged by everyone
        no notion of leader, but need a failure detector to tell when to
        stop waiting for acknowledgments from a failed process
    NB: there exist better ways of doing this -- fewer messages, more fault
    tolerant.  See the work of Birman et al.: Isis, Horus.
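A minimal sketch of the Lamport clock rules above.  The class and method
names are mine.

    class LamportClock:
        def __init__(self, pid):
            self.pid = pid
            self.count = 0

        def tick(self):
            # local event, including a message send
            self.count += 1
            return self.timestamp()

        def on_receive(self, msg_count):
            # one greater than the max of local and message times
            self.count = max(self.count, msg_count) + 1

        def timestamp(self):
            # (count, pid): the pid serves as the low-order tiebreaker, so
            # comparing tuples lexicographically gives a total order
            return (self.count, self.pid)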
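And a sketch of the delivery rule for totally ordered multicast, under the
assumptions listed above (FIFO channels, senders multicast to themselves,
everybody acknowledges).  It uses the LamportClock timestamps from the
previous sketch; the data-structure names are mine.

    import heapq

    class TotalOrderQueue:
        # Incoming queue for one receiver.  Timestamps are the unique
        # (count, pid) pairs above, so the heap order is a total order.
        def __init__(self, all_pids):
            self.all_pids = set(all_pids)
            self.heap = []       # (timestamp, message) pairs
            self.acks = {}       # timestamp -> set of acknowledging pids

        def on_multicast(self, ts, msg):
            heapq.heappush(self.heap, (ts, msg))
            self.acks.setdefault(ts, set())

        def on_ack(self, ts, pid):
            self.acks.setdefault(ts, set()).add(pid)

        def deliverable(self):
            # deliver from the head of the queue anything acknowledged by
            # everyone; stop at the first unacknowledged message
            ready = []
            while self.heap and self.acks[self.heap[0][0]] >= self.all_pids:
                ts, msg = heapq.heappop(self.heap)
                ready.append(msg)
            return ready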
Vector timestamps
    allow you to tell when things are ordered and when they're NOT
    admit a cheaper implementation of ordered multicast
    increment the local clock on every "event"
        what these are is app-specific, but at the least includes sending a
        message
    max them element-wise on receipt of a message
    (see the sketches after the following example)

Example 1 of the use of vector timestamps: causally ordered multicast
    Relax totally ordered multicast to (merely) avoid temporal loops,
    without forcing the same order of receipt for unrelated messages.
    Increment only on send; vector-max on receive (w/out incrementing).
    Delay delivery of a message from process i until
    (1) the i-th element of the received TS is one greater than the i-th
        element of the local TS (this is the next message expected from i);
        and
    (2) for all other indices k, the k-th element of the received TS is <=
        the k-th element of the local TS (we've seen everything process i
        had seen when it sent the message)
    This is all that is required for consistency in a distributed
    application that is properly synchronized in some other way -- i.e.,
    one that uses point-to-point messages (e.g., for locks, as discussed
    below) to ensure that non-commutative operations never have
    incomparable timestamps.
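A minimal vector clock sketch.  All names are mine.

    class VectorClock:
        # One counter per process: increment your own slot on a local
        # event; take the element-wise max on receipt.
        def __init__(self, pid, nprocs):
            self.pid = pid
            self.v = [0] * nprocs

        def increment(self):
            # local event -- at the least, a message send
            self.v[self.pid] += 1
            return list(self.v)      # timestamp to put in the message

        def merge(self, other):
            # element-wise max on receipt
            self.v = [max(a, b) for a, b in zip(self.v, other)]

    def happens_before(a, b):
        # a < b iff a <= b element-wise and they differ somewhere; if
        # neither a < b nor b < a, the events are concurrent
        return all(x <= y for x, y in zip(a, b)) and a != b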
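And the delivery test for causally ordered multicast, transcribed directly
from conditions (1) and (2) above (the function name is mine):

    def can_deliver(received_ts, local_ts, i):
        # condition (1): this is the next message expected from process i
        if received_ts[i] != local_ts[i] + 1:
            return False
        # condition (2): we've seen everything i had seen when it sent
        for k in range(len(local_ts)):
            if k != i and received_ts[k] > local_ts[k]:
                return False
        return True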
========================================

Recall that vector timestamps say A < B if AND ONLY IF A is causally
ordered before B -- no spurious orderings.  Sometimes this matters a lot.

aside: locks in distributed systems
    standard, near-universal algorithm reminiscent of the MCS lock (but
    predating it)
    each lock has a well-known manager
    to acquire the lock, you send a message to the manager
        if the lock is free, the manager sends you back a "granted" message
        otherwise it sends you back an "expect to hear from X" message
    while waiting for the lock, or in your critical section, you may get a
    message from the manager saying "when done, give the lock to Y"
    when done with the lock, send a "granted" message to Y, if you know who
    Y is; otherwise send a "release" message to the manager
    if "when done, give to Y" and "release to manager" cross in the mail,
    the parties need to agree on who wins (prob. better for X to give
    directly to Y; the manager is more likely to be a bottleneck)

Example 2 of the use of vector timestamps: race detection in shared memory
    hot topic; lots of implementations out there
    one canonical approach, which actually evolved out of the S-DSM
    community (below), uses vector clocks to track happens-before (see the
    sketch at the end of these notes):
        tag each sync object with a vector clock
        tag each ordinary shared object with a write time and id, and a
        vector of read times
        on each write of a sync object, increment your own component of the
        local vector and max that vector into the object's vector (just as
        you would if sending a message)
        on each read of a sync object, max its vector into the local vector
        and add 1 to your own component (just as you would if receiving a
        message)
        on each ordinary write, update the write time and id
        on each ordinary read, update your own element of the read-time
        vector
        on each ordinary access, read or write, complain if there's a
        conflicting access that isn't ordered before you:
        - you're a read and the write time isn't in your past
        - you're a write and the write time OR one of the read times isn't
          in your past

Example 3 of the use of vector timestamps: lazy release consistency
    introduced for the TreadMarks S-DSM system
    [[ see S-DSM notes file ]]

----------------------------------------

replica management (T & vS chap. 6)

    within a server cluster
        front end can route requests to back ends
        back ends may partition the data based on content, or each back end
        may have all the data and be capable of handling any request

    within a distributed service
        server tracks traffic; notices when a bunch of clients are making
        requests of a certain kind and there is an alternate server closer
        to all of them that could service those requests -- reconfigures so
        it does so; clients have to be able to accept direction to send
        future requests to the alternate location

    within the Internet, transparently
        proxy caches
            all traffic sent through a proxy (e.g., the UR gateway to the
            rest of the world)
            proxy caches data, so it can reply to local clients without
            actually going out on the Internet
            great for Kazaa, etc.
            can often accept a very weak coherence mechanism, e.g.,
            date-based expiration
        edge (ISP) caches
            Akamai
            may use an out-of-band update mechanism, e.g., satellite
            broadcast

coherence
    update v. invalidate
        HW typically uses invalidate; distributed systems are more evenly
        divided
        update wins if data are usually read, rarely written
            you get uniformly low latency of access
        invalidate wins if data are frequently modified
            you don't have to update copies that aren't being read
        popular compromise: lease -- a promise from the server that it will
        push updates to you for a certain period of time, during which you
        don't have to check whether what you have is current
        automatic, adaptive push v. pull is also possible
    primary copy v. replicated-write protocols
        Is there one master and (potentially) some backups, or multiple
        co-equal repositories?  Ties into fault tolerance (see below).
        a migrating single copy is the original idea behind S-DSM
        with multiple copies, have to worry about ordering (need to see
        writes in the same order at all replicas)
            can use some variant of vector timestamps; an example system is
            described in section 6.6.2
            alternatively, can have one location "sequence" the writes (not
            much different from primary copy)
        quorum-based voting is sort of a compromise: read from R copies and
        write to W copies, ensuring that R+W > N and W > N/2, so any read
        quorum intersects any write quorum and any two write quorums
        intersect (e.g., with N = 5 replicas, R = W = 3 works)
    propagation of data v. propagation of operations ("active replication")
        the latter may be much more efficient, but raises a transitivity
        problem: if a replicated operation makes a nested call, how do we
        make sure it is called exactly once?
            the end-to-end argument
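Addendum: a minimal sketch of the vector-clock race checks from Example 2
above.  All names are mine; t.vc is the thread's vector, maintained by the
sync-object send/receive rules in the notes.  Production detectors (e.g.,
FastTrack [Flanagan & Freund, PLDI 2009]) optimize away most of the full
vector comparisons.

    def report_race():
        raise RuntimeError("data race: conflicting unordered accesses")

    class ThreadState:
        def __init__(self, tid, nthreads):
            self.tid = tid
            self.vc = [0] * nthreads     # this thread's vector clock

    class SharedObject:
        def __init__(self, nthreads):
            self.write_time = 0              # writer's clock component
            self.writer = None               # id of the last writer
            self.read_times = [0] * nthreads # each reader's own component

    def check_read(t, obj):
        # the last write must be in the reader's past
        if obj.writer is not None and t.vc[obj.writer] < obj.write_time:
            report_race()
        obj.read_times[t.tid] = t.vc[t.tid]

    def check_write(t, obj):
        # the last write AND all reads must be in the writer's past
        if obj.writer is not None and t.vc[obj.writer] < obj.write_time:
            report_race()
        for k, rt in enumerate(obj.read_times):
            if t.vc[k] < rt:
                report_race()
        obj.write_time, obj.writer = t.vc[t.tid], t.tid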