17 April 2019

We have four class periods after today.  What shall we do with them?  Possible topics include:
    distributed transactions
    transactional memory
    persistence (major focus of current work in my group)
    directory-based cache coherence
    more cool concurrent data structures
        skip lists
        dual LCRQs
        Cache-Tries: Concurrent Lock-Free Hash Tries with Constant-Time Operations [PPoPP 2018]
        Practical Concurrent Traversals in Search Trees [PPoPP 2018]
        Bonsai Trees [ASPLOS 2012]
    wish list?

========================================
Paxos & Raft -- provably correct approaches to consensus (with an appropriate failure model); amenable to industrial-strength implementation.

Recall that the FLP theorem says we can't _solve_ distributed consensus -- can't simultaneously get
    termination
    agreement
    integrity
Practical systems get around this by fudging on termination: they admit the possibility of indefinite deliberation, but take practical steps to drive the probability very low.

Straightforward solution in the absence of process failures: 2-phase commit (not to be confused with 2-phase locking).  The initiator process asks everybody else if they're ready to commit, and waits for responses.  Note that in the asynchronous model messages are reliable, so if nobody fails, the wait eventually terminates.  If anybody says no, the initiator sends everybody a "never mind" message; if everybody says yes, it sends everybody a "go ahead" message.  NB: the initiator doesn't have to be distinguished; anybody can take on the role.

2PC is brittle in the face of process failures -- even fail-stop.  Suppose the initiator dies.  We might work around this somewhat by replacing the initiator: if I suspect the initiator of failing, I ask everybody "hey, how did you vote?," collect the responses, and send "go ahead" or "never mind."  Some of those messages may be redundant, but we can ignore them if we remember recent traffic.  You're still hosed, though, if a participant then fails, because the new initiator can't distinguish between the case where the original initiator told the failed participant to commit, and it did (but nobody else knew), and the case where the failed participant voted no.  We care because if the failed participant did commit, it may have told the outside world, and we can't take that back.

We get somewhat better reliability with 3-phase commit:
    The initiator asks everybody if they're ready to commit, and waits for responses.
    If anybody says no, it sends everybody a "never mind" message; if everybody says yes, it sends everybody a "prepare to commit" message and waits for responses.
    Once everybody is ready to commit, the initiator sends a final "commit" message (actually, with at most f failures, f+1 "ready" responses suffice).
The point here is that once the initiator has received f+1 "ready to commit" messages, it is guaranteed that any back-up initiator will discover that the vote went up, not down.  On receipt of a "prepare to commit" message, a replica can start doing work, but it can't do anything undoable until it gets the final "commit" message, which assures it that the state of the system is certain to be clear to everyone, regardless of (bounded) failures.  A back-up initiator can ask everybody "did you see 'prepare to commit'?"  If any crashed node actually committed, everyone still alive will say "yes"; if anyone still alive says "no", then no crashed node could have committed.
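To make the message rounds concrete, here is a minimal Python sketch of the initiator's side of 3PC (drop the middle round and you have 2PC).  The 'broadcast' and 'collect' helpers, and the dictionaries of responses they return, are assumptions standing in for a reliable messaging layer; failure detection, timeouts, and back-up initiators are omitted.

    # Initiator's side of 3-phase commit; failure handling omitted.
    # 'broadcast' and 'collect' are assumed messaging helpers over
    # reliable channels; 'f' is the maximum number of failures tolerated.

    def three_phase_commit(participants, f, broadcast, collect):
        # Round 1: is everybody ready to commit?
        broadcast(participants, "ready to commit?")
        votes = collect(participants)            # {participant: True/False}
        if not all(votes.values()):
            broadcast(participants, "never mind")
            return "abort"

        # Round 2 (the extra round vs. 2PC): tell everyone to prepare,
        # then wait for f+1 acknowledgments, so any back-up initiator is
        # guaranteed to discover that the vote went up, not down.
        broadcast(participants, "prepare to commit")
        collect(participants, need=f + 1)

        # Round 3: the final, irrevocable go-ahead.
        broadcast(participants, "commit")
        return "commit"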
3PC is still brittle in the face of network partition: if all the "yes"es end up on one side and all the "no"s on the other, the two halves can make different decisions and be unmergeable after the network gets back together.

It is also brittle if an initiator can fail and then recover (from persistent storage, or just from network overload [in which case it may not even know it has "failed"]): its messages can end up being confused with those of a newly volunteering initiator.  For example, if all "ready to commit" messages have been sent but only some received, and the initiator goes temporarily dead, then when it wakes up it may conclude that the processes it didn't hear from are dead, and send "never mind" messages to everybody else.  Meanwhile, a new initiator may query everybody, get positive responses, and send "go ahead" messages to everybody.  These conflicting messages might be seen in different orders by different participants.

Paxos addresses both the network partition problem and the fail-and-recover (arbitrary delay) problem.

----------------------------------------
Paxos  [Lamport, late '80s]

Commercially and theoretically very important: used in all the big Cloud systems.  Has a history of being very difficult to understand and to implement correctly.  The paper appeared nearly a decade later, delayed by controversy over its whimsical (bizarre?) presentation.

Tolerates network partition/failure and up to (but not including) n/2 "fail-recover" faults.  Does not guarantee termination in general (nor does it tolerate Byzantine faults), but it never makes a bad decision, and it terminates fine when messages from live processes get through in reasonable time.  In fact, it has provably minimal latency (for a single decision) in the best case.  It has to be extended to make a series of decisions.

Outline of the protocol:

"Proposer" processes are analogous to the initiator above.  Only one is allowed at a time.  If you have two, they may need to arbitrate somehow and get down to one; until they do, there is no guarantee of termination.

"Acceptor" processes are analogous to the participants above.  An acceptor can reject a proposal or "accept" it; in the latter case, it promises never to accept any "lower priority" proposal (more on this in a minute).

"Learner" processes are interested in getting news of decisions.  They may or may not be the same as the acceptors.  These capture the notion of communicating commit to the "outside world."  I'll ignore learners in the rest of these notes.

Four phases in normal operation:
    (1) the proposer sends 'prepare' messages to all acceptors
    (2) acceptors send 'agree' messages back
    (3) the proposer sends 'commit' messages to all acceptors
    (4) acceptors send 'accept' (confirmation) messages back; a majority of accepts means "yes"
The proposer knows we have consensus when it gets accepts from a majority of the acceptors; it can then spread the word.

This is a lot like 2PC, except that
    (1) we do majority voting rather than trying to get unanimity among the non-failed processes
    (2) we give every proposal a number so we can arbitrate among -- and, effectively, merge -- competing concurrent proposals, whether from proposers who just happened to start up around the same time or from an original and one or more would-be recover-ers.

Remember that this is a protocol to achieve consensus ONCE.  It can be extended to be re-run for future decisions, but that's not what's being presented here.
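As a rough illustration of the four phases, here is a sketch of the proposer's side for a single decision.  The 'broadcast' and 'collect_replies' helpers, and the 'accepted_no'/'accepted_value' fields on the agree replies, are assumptions; a real proposer also needs timeouts and a retry path with a higher seq_no.

    # Proposer's side of single-decision Paxos (phases 1 and 3 of the
    # outline above); the helpers and reply fields are hypothetical.

    def propose(seq_no, my_value, acceptors, broadcast, collect_replies):
        majority = len(acceptors) // 2 + 1

        # Phases 1-2: prepare / agree.  Note: no value yet, just a number.
        broadcast(acceptors, ("prepare", seq_no))
        agrees = collect_replies("agree", seq_no, need=majority)
        if agrees is None:                   # rejected or timed out
            return None                      # caller retries with a higher seq_no

        # Adopt the value of the highest-numbered previously accepted
        # proposal reported by any acceptor; otherwise we are free to
        # propose our own value.
        prior = max((a for a in agrees if a.accepted_no is not None),
                    key=lambda a: a.accepted_no, default=None)
        value = prior.accepted_value if prior is not None else my_value

        # Phases 3-4: commit / accept.  Consensus once a majority confirm.
        broadcast(acceptors, ("commit", seq_no, value))
        accepts = collect_replies("accept", seq_no, need=majority)
        return value if accepts is not None else None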
The actual protocol is quite concise (understanding it is the hard part):

Any process can decide to become a proposer, in which case it
- picks a sequence number n and sends a 'prepare' message containing n to the acceptors (or at least a majority of them)
- waits for a majority of the acceptors to reply.  If a majority agree, the proposer picks the value from the highest-numbered reply (or chooses its own, if none supplied a value) and sends 'commit' messages to the acceptors.  Otherwise, it starts over.

An acceptor, for its part,
- in response to a 'prepare' message, compares its number to the highest-numbered proposal to which it has previously agreed.  If the new one is higher, the acceptor sends an 'agree' message with the value and number of the highest previously agreed-to proposal.  If the new one is lower, it sends a 'reject' message.
- in response to a 'commit' message, accepts if the seq_no and value match the highest-numbered previously-agreed-to proposal; otherwise it rejects.

Discussion

Note that a process can always "outbid" its peers by picking a big seq_no -- seq_nos don't have to be consecutive.  Unless concurrent proposers back off or negotiate, this can lead to livelock.  Implementations typically include the PID as the low-order bits of chosen seq_nos to ensure global uniqueness.

Acceptance is based not on unanimity but on majority voting (a majority of the total number of processes, including any that failed).  This is a _quorum_ scheme: any two accepting majorities have to have somebody in common, so we can be sure we don't have inconsistency.  (This is also how we tolerate partition -- we can't have a majority in both fragments of the network, so at most one fragment can make decisions prior to reunification.)  So the proposer can send 'commit' messages after getting "agree" from a majority of the acceptors (it can even choose to send 'prepare' messages to only a majority of the acceptors).

The mechanism whereby an acceptor sends back the value from its highest-numbered already-agreed-to proposal -- and the proposer adopts that value -- avoids problems in which a proposal appears to fail for lack of a majority; another, conflicting, proposal succeeds; and then the original gets a (delayed) majority due to fail-recover.  This way the two proposals are guaranteed to be compatible rather than conflicting.  Note that a proposer doesn't even include a value in its original proposal -- just a seq_no.  It picks a value (or, if no one suggests one, conses up its own) after hearing back from a majority of acceptors.

Majority voting allows us to tolerate failure of up to (but not including) half the acceptors: for f failures we need 2f+1 acceptors.

If a proposer fails, another process can assume that role.  If more than one does, or if a thought-to-be-dead proposer recovers, the one with the higher number will win (assuming they don't keep ratcheting up, which would presumably happen only in the face of continued delays that convince processes that their peers are dead).  Of course, the _possibility_ of ratcheting up is the reason Paxos doesn't disprove FLP.

Note that if we have more than f failures we do NOT get bogus agreement -- we simply don't agree.

We can extend Paxos to accommodate Byzantine failures by adding yet more acceptors, to out-vote the ones that might be lying.  Not covered here.

Note that a process that crashes in a way that causes it to forget the proposals to which it has previously agreed -- and then recovers -- has essentially become Byzantine.  So stable storage is important.
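Here is a corresponding sketch of an acceptor, following the rules above.  The message formats are assumptions, and the sketch separates the highest seq_no promised from the highest proposal actually accepted (whose number and value are what the 'agree' message reports); in a real system this state must live in stable storage, for exactly the reason just noted.

    # Acceptor state and responses; message formats are assumptions.
    # promised_no / accepted_no / accepted_value would need to be kept
    # in stable storage to survive crash-and-recover.

    class Acceptor:
        def __init__(self):
            self.promised_no = -1        # highest seq_no agreed to so far
            self.accepted_no = None      # seq_no of highest proposal accepted
            self.accepted_value = None   # its value, if any

        def on_prepare(self, seq_no):
            if seq_no > self.promised_no:
                self.promised_no = seq_no
                # Report the highest previously accepted proposal (if any)
                # so the proposer can adopt its value.
                return ("agree", seq_no, self.accepted_no, self.accepted_value)
            return ("reject", seq_no)

        def on_commit(self, seq_no, value):
            # Accept only if this is still the highest-numbered proposal
            # we have agreed to (the conservative rule stated above).
            if seq_no == self.promised_no:
                self.accepted_no, self.accepted_value = seq_no, value
                return ("accept", seq_no)
            return ("reject", seq_no)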
----------------------------------------
Raft

Designed to be an alternative to (repeated) Paxos ("multi-Paxos"): same guarantees, comparable performance, but easier to understand (and presumably maintain), and easier to integrate into "real" systems.  [Subjectively, I find the description of Raft more complicated than (the "Paxos Made Simple" description of) Paxos -- but that's because it includes all the practical stuff that Paxos leaves out.]

Objections to Paxos:
    "hard to understand"
        in particular, the explanation is based on making a single decision; the extension to multi-Paxos adds considerable complexity (not covered here)
    "hard to use in practice"
        no standard version of multi-Paxos
        designed to be symmetric, whereas real systems tend to have leaders

Heavy emphasis on separation of concerns, so components of the system can be designed, implemented, and understood/proven largely independently.  Leader election and group membership management, in particular, are largely independent of the core consensus mechanism.

Raft is all about maintaining a replicated log of consensus decisions.  Log entries are initiated only by the leader, which also tells the other servers when entries are committed (and can be acted upon).  The protocol guarantees that committed entries are (eventually) the same (and in the same order) everywhere.

Every server (process) is a leader, a follower, or a candidate at any given time.  Normally there is exactly one leader and everybody else is a follower.  A candidate is somebody trying to take over from a presumed-to-have-failed leader.

Everything is RPC-based.  Only two of these (!):
    RequestVote -- for leader election
    AppendEntries -- for consensus and heartbeat

The period of time beginning with the election of a new leader is called a "term."  Leader election is straightforward: if I think I might need to step up, I send a RequestVote RPC to everybody.  If a majority say 'yes' I send a null AppendEntries to everybody to assert my authority.  If I get a RequestVote with a higher term I defer to it; if I get one with an equal or lower term I refuse it.  If the process takes too long I increment my term and try again.

A log entry is considered committed when it has been appended to a majority of the logs within a single term.  Every AppendEntries RPC includes the leader's understanding of the current term and the latest committed index.  When any server learns that an entry has been committed, it applies it to its state machine (i.e., actually takes action on the consensus).

A follower accepts an AppendEntries RPC iff it already has all the entries that precede the new ones.  This guarantees no holes.  It _is_ possible for noncommitted entries to appear in log A and be missing in log B, even when later entries are committed in B (e.g., if A is a follower but was formerly a leader, and crashed and recovered before some entries were committed).  In this case A will reject a new AppendEntries request.  The new leader (B) will then decrement its notion of what A knows about and send a new AppendEntries request with more history in it, repeating if there are further rejects.  Eventually all the uncommitted stuff will get purged and A will agree with the (current) leader (B).  Note that a server never deletes or overwrites entries in its own log while serving as leader.

One more wrinkle: we have to worry about the case where a follower misses some committed entries, then becomes leader and might (naively) expect its followers to delete/overwrite entries that were committed.
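A sketch of the follower-side consistency check may make this concrete.  Argument names mirror the AppendEntries fields in the Raft paper (prevLogIndex, prevLogTerm, leaderCommit), and a log entry is modeled as a (term, command) pair; term comparison, heartbeats, persistence, and RPC reordering are ignored.

    # Follower-side sketch of the AppendEntries consistency check that
    # gives the "no holes" guarantee.  A log entry is a (term, command)
    # pair; prev_index == -1 means "start of log".

    def handle_append_entries(log, prev_index, prev_term, entries, leader_commit):
        # Reject unless our log already contains the entry the leader
        # says immediately precedes the new ones.  On rejection the
        # leader retries with an earlier prev_index, i.e., with more
        # history in the request.
        if prev_index >= 0 and (prev_index >= len(log)
                                or log[prev_index][0] != prev_term):
            return False, None

        # Replace any (necessarily uncommitted) conflicting suffix with
        # the leader's entries -- this is how stale entries get purged.
        log[prev_index + 1:] = entries

        # Everything the leader has committed, up to the end of our log,
        # can now be applied to the local state machine.
        return True, min(leader_commit, len(log) - 1)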
We avoid this by refusing to elect a leader that doesn't have all the committed entries.  To ensure this, the RequestVote RPC describes the candidate's log; a peer refuses the request if its own log has a later-term entry, or a later entry within the same most recent term.  (That later entry might not be committed, but better safe than sorry.)

Note that commitment is based on _current-term voting_.  There's an example in the paper where a command ends up in the logs of a majority of servers (put there in different terms, but always carrying the term number in which it was first proposed) and yet still is not committed.

Correctness arguments are based on the Leader Completeness Property: if a log entry is committed in term t, then that entry will be present in the log of the leader of any term > t.

Group membership changes:

We have to avoid the case where there could be disjoint majorities for the old (say, 2 of 3) and new (say, 3 of 5) configurations.  We do this by going through a "joint" phase that requires majorities in both the old and the new configurations.  Entering and leaving the joint phase is a state-machine command that goes into the log in the usual way.  Votes are taken based on the most recent configuration entry in the leader's log, even if it hasn't been committed yet.

If the leader is leaving the configuration, it steps down after committing the new config.  Between the introduction and the commitment of that config, the leader excludes itself from voting.

To avoid service gaps when expanding a cluster, new servers first run in a getting-up-to-speed mode, in which they acquire log entries but don't vote.

To avoid confusion (RequestVote RPCs from followers that have been removed from the config), RequestVote requests are ignored during the minimum-timeout interval after hearing from a current leader (i.e., when the server thinks there's still an active leader).
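For concreteness, here is a sketch of the voter-side "at least as up-to-date" check implied by the election rule above.  The function and variable names are mine; the other RequestVote checks (term numbers, one vote per term, the minimum-timeout rule above) are omitted.

    # Should we even consider voting for this candidate?  Refuse if our
    # own log ends in a later term, or in a later entry within the same
    # last term.  A log entry is a (term, command) pair.

    def candidate_log_ok(my_log, cand_last_index, cand_last_term):
        if not my_log:
            return True                           # nothing the candidate could be missing
        my_last_index = len(my_log) - 1
        my_last_term = my_log[my_last_index][0]
        if cand_last_term != my_last_term:
            return cand_last_term > my_last_term  # later term wins
        return cand_last_index >= my_last_index   # same term: longer log wins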