CSC 2/458, 15 April 2019

Consensus: all processes propose a value; want all to agree on one of them.
More specifically, require:
    termination: each correct process decides
    agreement: all correct processes decide on the same value
    integrity/validity: if everybody proposes the same value, that's what
        they decide on (a variant says the agreed-on value is always one
        proposed by somebody)

Lamport clocks can be used for consensus in a fully reliable system.  But
who has one of those?  In general, we want to achieve consensus even in the
face of failures.  That's hard.  The key result, due to Fischer, Lynch, and
Paterson: we can't achieve consensus among non-failed processes on
something as trivial as "0 or 1" using reliable asynchronous messages if
even one process can fail-stop.  More complicated proposals, unreliable
messages, or Byzantine behavior would only make things worse.  There are
several ways to get around this in practice, by making lack of agreement
extremely unlikely.

Byzantine generals problem [Lamport 1982]: a variant of consensus, with
fail-by-acting-weird.  One process proposes a yes/no value; the others try
to decide whether to agree.  Require:
    termination: each correct process decides
    agreement: all correct processes decide on the same value
    integrity/validity: if the leader is correct, everybody decides on the
        leader's value

The interactive consistency problem is also described in CDKB (processes
have to agree on a separate value for each of them) -- not as important.
All three variants reduce to one another.

Consensus in a synchronous system (time-limited rounds: if you don't hear
from process i within time t, you can safely assume it has crashed):
    in every round, broadcast to everybody all the values you didn't send
        in any previous round
    after f+1 rounds, where f is the max # of failures, everybody is
        guaranteed to have heard about all values
    apply the same decision function everywhere
(A simulation sketch of this algorithm appears at the end of this section.)
Proof outline: How could I see a value you didn't?  I must have heard from
some process p that crashed before sending to you.  But that in turn means
that p saw a value you didn't, so some other process q must have crashed
after sending to p and before sending to you.  This forces a crash in
every round, but we have a limit of f crashes and f+1 rounds.

Byzantine generals in a synchronous system:
    Can solve with 3f+1 processes, but not with only 3f.  (This result
    assumes unsigned messages: I can lie about what I heard from p.)
    The book runs through the 3-process case: the commander sends values
    to p and q, which then exchange what they heard.  That's all the
    information there is; no point in communicating any more.  But now p
    can't distinguish between the cases where
        (a) the commander is bad, and sent different values to p and q
        (b) q is bad, and lied about what the commander sent
    So p has to decide based on what the commander sent.  Symmetrically,
    q has to decide based on what the commander sent.  But then if the
    commander is bad, p and q will choose different values.
    The 4-process case works: 2 rounds, running basically the consensus
    algorithm above.  First, the commander sends a value to everybody.
    Then everybody shares what they heard with everybody else.  With 2
    guaranteed-good peers and at most 1 bad peer, majority voting yields
    consensus (on the value sent by the commander, if the commander is
    good).  (A sketch appears below, after the flooding code.)
    Message complexity is high: N^{f+1} with unsigned messages; O(N^2)
    with signed messages.
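To make the flooding algorithm concrete, here is a minimal simulation
sketch in Python.  This is not from the lecture itself: the crash model
(a process dies mid-broadcast, reaching only some recipients) and the
min() decision function are illustrative assumptions.

    def flooding_consensus(proposals, f, crashes=None):
        """Simulate synchronous flooding consensus.
        proposals: dict process -> proposed value
        f:         max number of crash failures (so we run f+1 rounds)
        crashes:   dict process -> (round, recipients reached before dying)
        """
        crashes = crashes or {}
        procs = set(proposals)
        known = {p: {proposals[p]} for p in procs}   # values p has heard of
        sent  = {p: set() for p in procs}            # values p already broadcast
        alive = set(procs)

        for rnd in range(1, f + 2):                  # rounds 1 .. f+1
            inbox = {p: set() for p in procs}
            for p in sorted(alive):
                new = known[p] - sent[p]             # send only what's new
                if p in crashes and crashes[p][0] == rnd:
                    recipients = crashes[p][1]       # partial broadcast, then crash
                    alive.discard(p)
                else:
                    recipients = procs - {p}
                for q in recipients:
                    inbox[q] |= new
                sent[p] |= new
            for p in alive:
                known[p] |= inbox[p]

        # after f+1 rounds all survivors hold the same set of values, so
        # the same deterministic decision function yields agreement
        return {p: min(known[p]) for p in sorted(alive)}

    # Example: 4 processes, f = 1; 'a' crashes in round 1 after reaching
    # only 'b'.  'b' relays a's value in round 2, so the survivors agree.
    print(flooding_consensus({'a': 0, 'b': 1, 'c': 1, 'd': 1}, f=1,
                             crashes={'a': (1, {'b'})}))
    # -> {'b': 0, 'c': 0, 'd': 0}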
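And a similarly hypothetical sketch of the 4-process Byzantine case.  The
adversary model here (a faulty lieutenant flips whatever it relays) is just
one possible bad behavior, chosen to keep the example short; at most one
process -- commander or lieutenant -- is assumed faulty.

    from collections import Counter

    def byzantine_4(commander_sends, liars=frozenset()):
        """commander_sends: dict lieutenant -> value the commander sent it
                            (a faulty commander may send different values)
           liars:           faulty lieutenants, which relay flipped values"""
        lieutenants = list(commander_sends)
        decisions = {}
        for p in lieutenants:
            if p in liars:
                continue                   # only correct processes must agree
            votes = []
            for q in lieutenants:
                v = commander_sends[q]     # what q claims the commander said
                if q in liars:
                    v = 1 - v              # q lies about what it heard
                votes.append(v)
            decisions[p] = Counter(votes).most_common(1)[0][0]  # majority of 3
        return decisions

    # Correct commander, one lying lieutenant: the two correct lieutenants
    # still decide the commander's value (integrity).
    print(byzantine_4({'p': 1, 'q': 1, 'r': 1}, liars={'r'}))
    # -> {'p': 1, 'q': 1}

    # Faulty commander sending mixed values, correct lieutenants: all see
    # the same three reports, so they agree (on some value).
    print(byzantine_4({'p': 0, 'q': 1, 'r': 1}))
    # -> {'p': 1, 'q': 1, 'r': 1}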
Asynchronous messages -- the FLP result
(Presentation here based in part on
http://the-paper-trail.org/blog/a-brief-tour-of-flp-impossibility/ )

What Fischer, Lynch, and Paterson proved is slightly stronger than what is
stated above: we can't ensure that even ONE non-faulty process terminates
(in such a way that all the ones that do terminate agree).

We assume that messages are reliable (they always get through if sent),
but can take arbitrarily long to arrive, and can arrive in any order.  We
model the system state as a _configuration_ that spans all processes and
all in-flight messages.  The system moves from one configuration to
another in a _step_, wherein some process sends a message or receives a
message and updates its internal state.

Suppose at most one process is faulty and all messages (to non-faulty
processes) are eventually received.  Can we achieve consensus?  No.
Three lemmas:

(1) Commutativity of schedules.  If you're in configuration C and two
    messages are receivable, one sent to p and one to q (p != q),
    receiving them in either order (with no other steps mixed in) takes
    you to the same state.
    Proof outline: Straightforward.  Every process is deterministic,
    based purely on local state and the content of the incoming message.
    A global configuration is just the union of the local states and the
    pool of unreceived messages.

(2) Execution isn't determined solely by initial conditions, but (also)
    by the order of message receipt.
    Proof outline: Suppose the contrary: everything is predetermined.
    Consider all the possible combinations of proposed values (chosen
    from '0' and '1').  With n processes, there are 2^n possible initial
    configurations.  Order these in a list such that each differs from
    its neighbors in only one initial value (this is possible via Gray
    coding; see the snippet below).  Given that all-zeros has to decide 0
    and all-ones has to decide 1, there has to be a pair of neighbors
    that decide different values.  ** But now suppose the process whose
    initial value differs between these neighbors fails right away!  The
    remaining processes then start in identical states, so a
    predetermined execution must decide the same value in both
    configurations -- a contradiction.
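The Gray-code step in Lemma (2) is easy to make concrete.  This small
snippet (purely illustrative, not part of the proof) lists all 2^n initial
configurations so that adjacent entries differ in exactly one process's
proposed value; somewhere along the list the predetermined decision must
flip from 0 to 1, and it flips between two configurations that differ only
in the one process the adversary then crashes.

    def gray_configs(n):
        """Yield all n-bit initial configurations in binary-reflected
        Gray-code order: consecutive entries differ in exactly one bit."""
        for i in range(2 ** n):
            g = i ^ (i >> 1)               # standard Gray-code formula
            yield tuple((g >> j) & 1 for j in range(n))

    print(list(gray_configs(3)))
    # [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    #  (0, 1, 1), (1, 1, 1), (1, 0, 1), (0, 0, 1)]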
(3) Pumping (the bulk of the proof).  Call a configuration _bivalent_ if
    it could still go either way.  Suppose you start from a bivalent
    initial state C, and suppose that message e might be received in that
    state.  Now consider chains of configurations starting from C, with
    the last step being the receipt of e.  Let D be the set of final
    configurations of such chains.  Then D contains at least one bivalent
    state.  [Intuition: if we can delay e arbitrarily long, we can
    guarantee that one of the states in which it's finally received is
    bivalent.]
    Proof outline: Suppose the contrary: no bivalent states in D.  Let S
    be the set of configurations reachable from C without receiving e.
    (Every state in D is reached by receiving e in some state in S.)  We
    can see there must be a 0-valent state in D: clearly there is a
    0-valent state E0 reachable from C, since C itself is bivalent.  If
    E0 is reached without receiving e (E0 in S), consider F0, which is
    where you get by receiving e in E0.  Since it's downstream of a
    0-valent state, F0 has to be 0-valent.  If E0 is reached WITH e, let
    F0 be the state reached immediately after receiving e, on the way to
    E0.  Again, since E0 is 0-valent, F0 has to be too (since we're
    assuming no bivalent states in D).  Symmetrically, there must be a
    1-valent state in D.

    Now: claim there has to be a pair of states C0 and C1 in S such that
    C0 and C1 are _neighbors_ (you get to C1 by receiving a single
    message e' in C0), receiving e in C0 takes you to a 0-valent state D0
    in D, and receiving e in C1 takes you to a 1-valent state D1 in D.
    To see this, assume the contrary.  Then on every chain from the
    initial configuration C, receiving e in any state takes you to
    uniformly 0-valent or uniformly 1-valent states.  But the root C is
    on all chains, and it's supposed to be bivalent!

    Next: note that in D1 we have received e', but in D0 we haven't.  Two
    cases:
    (a) e and e' were sent to different processes.  Suppose we receive e'
        in D0.  Then by Lemma (1), receiving e' in D0 takes you to D1.
        But D0 is 0-valent and D1 is 1-valent: contradiction.
    (b) e and e' were sent to the same process p.  Consider a finite
        deciding run starting in C0 in which p takes no steps.  This has
        to ** exist, because p might fail.  Say this run ends in
        configuration A.  Take this same sequence of message receipts and
        run it from D0 and from D1.  (This has to make sense, because D0
        and D1 differ from C0 only in the state of p.)  The state E0
        reached from D0 must be 0-valent; the state E1 reached from D1
        must be 1-valent.  But by commutativity, you get to E0 from A by
        receiving e, and to E1 from A by receiving e' and then e.  This
        implies that A is bivalent, contradicting our assumption that it
        was the end of a deciding run.

Back to the main theorem: Lemma (2) says there's a bivalent starting
state.  Lemma (3) says we can receive a nonzero number of messages from
that state and end up in another bivalent state.  We can repeat this
inductively and end up with an arbitrarily long non-deciding chain.  Note
that we used the possibility of a failure twice (**) in the proof.

Getting around the problem:
    recover crashed processes (e.g., using checkpointing)
    assume a perfect failure (fail-stop) detector, presumably based on
        timeouts.  This may end up deciding a process has failed when it
        hasn't.  It also tends to require a really long timeout, which
        slows everything down.
    "suspect" processes that are slow, but let them back in if they show
        up.  This makes algorithms more complicated.  (A tiny sketch of
        such a detector appears below.)
    randomize -- this basically thwarts any deliberate "pumping" and
        makes it possible to drive the probability of failure (that is,
        of running on forever without deciding) arbitrarily low.  (A
        rough sketch appears at the end of these notes.)
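A tiny illustration of the "suspect, but let them back in" idea.  All the
names here are hypothetical; a real detector would piggyback on heartbeat
or protocol messages.

    import time

    class SuspectList:
        """Timeout-based failure detector: suspect any process not heard
        from within `timeout` seconds; un-suspect it if a message from it
        shows up later.  Wrong suspicions are possible -- which is exactly
        why this only approximates a perfect fail-stop detector."""

        def __init__(self, timeout, processes):
            self.timeout = timeout
            now = time.monotonic()
            self.last_heard = {p: now for p in processes}

        def heard_from(self, p):
            self.last_heard[p] = time.monotonic()   # revokes suspicion of p

        def suspects(self):
            now = time.monotonic()
            return {p for p, t in self.last_heard.items()
                    if now - t > self.timeout}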
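And a rough sketch of the randomization idea, in the style of Ben-Or's
1983 protocol.  This compresses Ben-Or's two message exchanges per phase
into one, so it shows only the shape of the coin-flip step; the function
name and the decide threshold are illustrative assumptions, not the exact
protocol.

    import random

    def randomized_phase(my_value, reports, n, f):
        """One phase at one process.  `reports` holds the values received
        from n - f processes this phase (we can't wait for more, since f
        of them may be dead).  Returns (value for next phase, decided?)."""
        for v in (0, 1):
            if reports.count(v) > n // 2:        # strict majority for v
                # any process that sees a majority sees this same v (two
                # majorities of the n per-process values must intersect),
                # so adopting v can't destroy an emerging agreement;
                # overwhelming support lets us decide outright
                return v, reports.count(v) >= n - f
        # no majority: flip a coin.  An adversarial scheduler can no
        # longer steer every process away from agreement forever, so with
        # probability 1 some phase lands everybody on the same value.
        return random.choice((0, 1)), False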