CSC 2/458, 31 March 2008

Distribution

Assume a distributed collection of processes communicating via messages --
typically UDP, TCP, or RPC.  The goal is to construct some application that
has to be distributed by virtue of the location of physical devices, data
repositories, or people.  This is different from the typical
machine-room-level scientific distributed computing.  GRID computing
combines the two.

RPC
  typically built on top of UDP or TCP; provides an interface reminiscent
    of local procedure calls
  IDL for data declarations (parameter types)
  RPC "stub compiler" generates caller and callee stubs
  Caller stub "marshals" parameters and sends and receives messages
  On the server side, a server thread calls into the RPC library and, in a
    loop, repeatedly receives a message and calls the appropriate callee
    stub, which in turn "unmarshals" parameters, calls the appropriate
    local procedure, and sends back the results.
  Note that parameters must generally be passed by value or value/result
    (caller and callee share no address space, so reference parameters
    can't be passed directly)
    deep copy issues

Topics
  time
    clock synchronization
    ordering
  locking
    not just for shared data; for anything that requires mutual exclusion
  replication
    not just for local copies; also for (remote) servers
    availability
    fault tolerance
  coherence/consistency
    for anything that requires distributed consensus -- getting all sites
      to agree on something
      who the servers are
      state of server data
      order in which events occurred
      etc.
  the end-to-end argument

time

  clock synchronization
    why useful
      allows events to be ordered
        e.g., which data are more recent?
      simplifies algorithms (e.g., heartbeats) based on timeout
        e.g., for at-most-once RPC
    how to do it
      time has to move forward; address discrepancies by speeding up or
        slowing down the clock
      Cristian's alg (assume the server is accurate):
        send a clock synch. request at time t1
        get a response at time t2 indicating the server's time is ts
        guess that the local clock is off by ts - t1 - (t2-t1)/2
          (i.e., ts - (t1+t2)/2, assuming the reply took half the round trip)
      Berkeley alg (no authority):
        server periodically polls everybody (a la Cristian), computes an
        average (maybe discarding outliers), and tells everybody how much
        to speed up or slow down
      fully decentralized:
        everybody periodically broadcasts, waits a little while, computes
        an average (discarding outliers) of the replies it gets, and
        speeds up or slows down accordingly

  (mere) ordering
    may be cheaper than real time (don't have to _know_ real time)
    or may be more fine-grained than possible drift
    Lamport's virtual clock protocol
      goal: assign a "time" value to everything that represents a total
        order consistent with happens-before
      keep a local step counter
      increment it every time you send or receive a message
      include it in every message
      set it to one more than the included value, if necessary, when
        receiving
      if you don't want equal times, include the pid as low-order bits
      (a small sketch appears below, after the multicast discussion)
    Totally ordered multicast
      e.g., for "active replication" of replicas (more on this below)
      trivial if all messages go through a single "rebroadcast" server
        (Orca does this)
      can be done with Lamport clocks
        requires that messages from any given A to any given B arrive
          in order
        requires senders to "send" each message to themselves
        requires receivers to multicast acknowledgments to everybody
        receiver delivers a message to the app when it reaches the head
          of the incoming queue and has been acknowledged by everyone
      better ways of doing this exist -- fewer messages, more fault
        tolerant; see the work of Birman et al.: Isis, Horus
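    A minimal Python sketch of the Lamport clock rules above; the process
    class and the direct method call standing in for message delivery are
    made up for illustration (a real implementation would sit on UDP, TCP,
    or RPC):

        class LamportProcess:
            def __init__(self, pid):
                self.pid = pid        # breaks ties between equal counters
                self.counter = 0      # local step counter

            def timestamp(self):
                # total order: counter is the high-order part, pid the low-order part
                return (self.counter, self.pid)

            def send(self, payload, dest):
                self.counter += 1                    # tick on every send
                dest.receive(payload, self.counter)  # counter rides in the message

            def receive(self, payload, sender_counter):
                # jump past the sender's value if necessary, then tick for the receive
                self.counter = max(self.counter, sender_counter) + 1

        p, q = LamportProcess(0), LamportProcess(1)
        p.send("hello", q)
        print(p.timestamp(), q.timestamp())   # (1, 0) (2, 1)

    Equal counters at different processes are ordered only by pid, so the
    result is a total order consistent with happens-before, not with real
    time.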
    Vector timestamps
      allow you to tell when things are ordered and when they're NOT
      admit a cheaper implementation of ordered multicast
      needed for ordering of writes to distributed replicas (more below)
      also good for S-DSM

locks
  standard near-universal algorithm reminiscent of the MCS lock (but
      predating it)
    each lock has a well-known manager
    to acquire the lock you send a message to the manager
      if the lock is free the manager sends you back a 'granted' message
      otherwise it sends you back an 'expect to hear from X' message
    While waiting for the lock, or in your critical section, you may get a
      message from the manager saying 'when done, give the lock to Y'
    When done with the lock, send a 'granted' message to Y, if you know
      who Y is; otherwise send a 'release' message to the manager
    If 'when done, give to Y' and 'release to manager' cross in the mail,
      need to agree on who wins (prob. better for X to give to Y; the
      manager is more likely to be a bottleneck)

replica management (T & vS chap. 6)
  within a server cluster
    front end can route requests to back ends
    back ends may partition the data based on content, or each back end
      may have all the data and be capable of handling any request
  within a distributed service
    server tracks traffic; notices when a bunch of clients are making
      requests of a certain kind, and there is an alternate server closer
      to all of them that could service those requests -- reconfigure so
      it does so.  Clients have to be able to accept direction to send
      future requests to the alternate location.
  within the Internet, transparently
    proxy caches
      all traffic sent through a proxy (e.g., the UR gateway to the rest
        of the world)
      proxy caches data, so it can reply to local clients without actually
        going out on the Internet.  Great for Kazaa, etc.
      Can often accept a very weak coherence mechanism, e.g., date-based
        expiration
    edge (ISP) caches
    Akamai
      may use an out-of-band update mechanism, e.g., via satellite
        broadcast

coherence
  update v. invalidate
    HW typically uses invalidate
    distributed systems are more evenly divided
    update wins if data are usually read, rarely written
      you get uniformly low latency of access
    invalidate wins if data are frequently modified
      you don't have to update copies that aren't being read
    popular compromise: lease -- a promise from a server that it will push
      updates to you for a certain period of time, during which you don't
      have to check to make sure what you have is current
    automatic adaptive push v. pull also possible
  | update via epidemic algorithms
  |   push v. pull (the latter does a better job of reaching everybody
  |   [no danger of accidentally missing someone]; the former is needed
  |   at least at first to get things rolling, and can leverage
  |   underlying multicast support)
  |   concept of gossip: when you tell somebody something they already
  |   know, give up on future propagation with probability p (e.g., 1/4).
  |   Nice for bootstrapping a pull-based epidemic.
  |   lots of recent interest in peer-to-peer (P2P) systems, which often
  |   push index information, though often not actual data
  primary copy v. replicated-write protocols
    Is there one master and (potentially) some backups, or multiple
      co-equal repositories?
    Ties into fault tolerance (see below).
    migrating single copy is the original idea behind S-DSM
    With multiple copies, have to worry about ordering (need to see writes
      in the same order at all replicas).  Can use some variant of vector
      timestamps (small sketch below).  Alternatively, can have one
      location "sequence" the writes (not much different from primary
      copy).
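    A minimal Python sketch of the vector-timestamp rules, assuming one
    slot per process indexed by pid 0..n-1 (the helper names are made up
    for illustration):

        def vt_new(n):
            return [0] * n

        def vt_on_send(vt, my_pid):
            vt[my_pid] += 1        # tick own slot, then attach a copy
            return list(vt)        # to the outgoing message

        def vt_on_receive(vt, msg_vt, my_pid):
            for i in range(len(vt)):           # element-wise max with the
                vt[i] = max(vt[i], msg_vt[i])  # timestamp in the message
            vt[my_pid] += 1                    # then tick own slot

        def vt_before(a, b):
            # a happened-before b iff a <= b element-wise and a != b
            return all(x <= y for x, y in zip(a, b)) and a != b

        # two writes at different replicas, with no communication between:
        a, b = [2, 0, 0], [0, 1, 0]
        print(vt_before(a, b), vt_before(b, a))   # False False

    When neither timestamp precedes the other, the writes are concurrent --
    exactly the information a single Lamport counter cannot give you, and
    the case in which replicas need a tie-breaking rule (or a sequencer,
    as above).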
    An example system with vector timestamps is described in section 6.6.2.
  quorum-based voting
    sort of a compromise: read from R copies; write to W copies; ensure
      that R+W > N and W > N/2
      (a small sketch of this check appears at the end of these notes)
  propagation of data v. propagation of operations ("active replication")
    The latter may be much more efficient, but raises a transitivity
      problem: if a replicated operation makes a nested call, how do we
      make sure it is called exactly once?

the end-to-end argument

---------------------
Future topics (later in semester?):
  Software Distributed Shared Memory (S-DSM) and automated caching
  Fault tolerance
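For reference, a one-function Python sketch of the quorum-voting constraint
discussed above (the function name is made up for illustration):

    # Any read quorum of size R must overlap any write quorum of size W
    # (R + W > N), and any two write quorums must overlap (W > N/2) so
    # that all replicas see writes in a single order.
    def valid_quorum(n, r, w):
        return r + w > n and 2 * w > n

    print(valid_quorum(5, 2, 4))   # True:  cheap reads (R=2), expensive writes (W=4)
    print(valid_quorum(5, 5, 1))   # False: W=1 lets two writes land on disjoint replicas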