=========================================
Notes for CSC 2/466, Sept. 13, 2017

Introduction to synchronization

I understand Sree talked about pthreads and/or C++ threads (which are
often though not always implemented using pthreads under the hood).

Example from the C++ manual:

    // thread example
    #include <iostream>     // std::cout
    #include <thread>       // std::thread

    void foo() {
        // do stuff...
    }

    void bar(int x) {
        // do stuff...
    }

    int main() {
        std::thread first(foo);       // spawn new thread that calls foo()
        std::thread second(bar, 0);   // spawn new thread that calls bar(0)

        std::cout << "main, foo and bar now execute concurrently...\n";

        // synchronize threads:
        first.join();     // pauses until first finishes
        second.join();    // pauses until second finishes

        std::cout << "foo and bar completed.\n";
        return 0;
    }

Outputs:

    main, foo and bar now execute concurrently...
    foo and bar completed.

----------------------------------------
What can foo and bar do?  How about

    // x == 0
    foo:        bar:
        x++         x++

Now what is x?

This is a _data race_ -- two _conflicting operations_ whose ordering is
not forced by the program.  This is forbidden but generally uncaught in
C++, and can lead to arbitrarily bad behavior.  Two ways to get around
it:

(1) Force the ordering (what I'll talk about first today).  This is
    _synchronization_.  It generally uses special library primitives
    (e.g., locks).

(2) Label x as a special atomic_int (atomic<int>) variable that
    supports racy access; then think _very_ carefully about the
    possible resulting behaviors.  This is how you build
    synchronization primitives; it's a topic for CSC 2/458.  May get to
    it today; we'll see.

One way to avoid the above problem in C++:

    std::mutex m;

    foo: {                  bar: {
        m.lock();               m.lock();
        x++;                    x++;
        m.unlock();             m.unlock();
    }                       }

or

    foo: {                                  bar: {
        std::lock_guard<std::mutex> g(m);       std::lock_guard<std::mutex> g(m);
        x++;                                    x++;
    }                                       }

----------------------------------------
Related but different problem:

    foo *p;

    t1:                 t2:
        p = new foo();      // use p;

This is a data race on p.  I could say

    std::mutex m;

    t1: {                                   t2: {
        std::lock_guard<std::mutex> g(m);       std::lock_guard<std::mutex> g(m);
        p = new foo();                          // use p;
    }                                       }

But that's not enough.  How do I make sure that t1 happens _first_?
One way to handle it in C++:

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    t1: {                                   t2: {
        std::unique_lock<std::mutex> g(m);      std::unique_lock<std::mutex> g(m);
        p = new foo();                          while (!ready) cv.wait(g);
        ready = true;                           // use p;
        cv.notify_one();                    }
    }

Here unique_lock is like lock_guard, but with some extra member
functions.  In particular, it supports lock() and unlock() methods,
which condition_variable::wait() uses internally to release the lock
while waiting (so some other thread can acquire the lock and execute
notify_one).  There is also a notify_all().

----------------------------------------
The first example above is _atomicity_.  The second is _condition
synchronization_.  Almost all synchronization idioms can be seen as one
or the other of these.  Atomicity means I have some block of code that
needs to happen all at once from the perspective of all other threads.
Condition synchronization means I mustn't proceed until some other
thread(s) have done something I depend on.

_Mutual exclusion_ is the simplest implementation of atomicity.  That's
what locks give you: only one thread in the _critical section_ at a
time.  There are other implementations of atomicity, notably
_transactional memory_, which has been proposed for C++'20 but isn't in
the language yet (though gcc includes a preliminary implementation):

    atomic_noexcept {   // don't have to name a lock
        ...
        // body is atomic, but may actually execute at
        // the same time as other atomic blocks, as long as
        // the compiler and/or run-time system can prove there
        // are no conflicts.  Often done with _speculation_ --
        // try it and back out if it doesn't work.
    }
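In gcc's preliminary implementation the block is spelled differently
(__transaction_atomic, enabled with -fgnu-tm).  A minimal sketch of the
counter example in that dialect -- treat the details as approximate,
since the syntax is still experimental:

    // compile with: g++ -fgnu-tm
    int x = 0;

    void increment() {
        __transaction_atomic {   // gcc's spelling of an atomic block
            x++;                 // conflicting blocks are detected and retried
        }
    }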
----------------------------------------
spinning v. yielding

The primitives above _yield_ when necessary -- they give the core to
another thread when the current one can't proceed.  This is expensive
if you expect to be able to proceed really soon -- sooner than you
could switch to a different thread and switch back.

An alternative is to _spin_ (_busy-wait_): just keep checking over and
over until you discover you can proceed.  This works ONLY if the thread
that needs to get out of your way (or finish what you need) is running
on a different core.  It requires use of the atomic template mentioned
above, so the compiler generates safe code (more on this if we get time
to talk about _memory models_).  Finally, it's needed under the hood to
implement yield-based synchronization.

In actuality, the usual implementation of std::mutex and related
primitives uses very clever code that spins for a little while in hope
of succeeding soon, then yields if unsuccessful.

----------------------------------------
coarse v. fine-grain locking

Consider a concurrent hash table, as might be used as an index for
pages cached inside a web server like Apache.  We could put a lock on
the whole table, so bad things don't happen if we try to, say, look
something up at exactly the moment another thread is removing it.  But
then performance sucks.  Better to put a lock on each bucket.  Then
lots of operations can happen concurrently.  The downside: if you want
to resize the table you have to grab _all_ the locks.

In general, fine-grain locking can be really tricky.  Imagine replacing
the hash table with an AVL or red-black tree.  We may need a
non-trivial set of locks to protect an (atomic) rotation.  If t1 grabs
lock l1 and t2 grabs l2, and then t1 tries to grab l2 while t2 tries to
grab l1, we get _deadlock_.  One way to avoid this is to always grab
locks in the same order (no circular waiting), but that isn't easy
here: lookups grab locks while going down the tree (releasing them
behind, so we don't serialize on the root); rotations grab locks while
going back up the tree.

----------------------------------------
Note that lock_guard holds a lock for the duration of a scope.  If you
want multiple locks, their scopes have to nest.  That's a good
discipline, but not always what you want.  Linked lists, for example,
often use "hand-over-hand" locking -- grab A, grab B, release A,
grab C, release B, ...

----------------------------------------
reader-writer locks

Consider the hash table again.  I don't want to let a reader and a
deleter access the same bucket at the same time.  But how about two
readers?  It's usually safe for them to work on the same data at the
same time.  "Reader-writer" locks allow this, though they're typically
a little more expensive than mutex locks, so you don't want to use them
unless most threads are readers.

New in C++'17: std::shared_mutex, which supports both ordinary

    lock()            // exclusive -- writer

and

    lock_shared()     // non-exclusive -- reader
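A minimal sketch of how this might look for the hash table (the table,
mutex, and function names here are made up for illustration):

    #include <shared_mutex>
    #include <string>
    #include <unordered_map>

    std::unordered_map<std::string, int> table;   // hypothetical shared table
    std::shared_mutex table_mutex;                // guards it

    int lookup(const std::string &key) {
        std::shared_lock<std::shared_mutex> g(table_mutex);   // lock_shared()
        return table.at(key);    // many readers may hold the lock at once
    }

    void update(const std::string &key, int val) {
        std::unique_lock<std::shared_mutex> g(table_mutex);   // lock()
        table[key] = val;        // a writer excludes readers and other writers
    }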
----------------------------------------
implementing spin locks

Modern spin locks use atomic read-modify-write (fetch-and-phi)
instructions, notably

    test_and_set

and

    compare_and_swap   or   load-linked / store-conditional

The simple test_and_set lock:

    type lock = Boolean := false

    procedure acquire(L : ^lock)
        repeat until test_and_set(L) = false

    procedure release(L : ^lock)
        L^ := false

Problems:
    not fair (possible starvation)
    LOTS of contention for memory and interconnect bandwidth

The latter problem can be *partially* cured, on a cache-coherent
machine, by spinning with reads instead of TASes:

    procedure acquire(L : ^lock)    // "test-and-test_and_set" lock
        while test_and_set(L) = true
            repeat until L = false

This is known as a test-and-test_and_set lock.  It's still not very
good, but better.  There are lots of other alternatives, some of which
I helped invent :-)  Take CSC 2/458 to learn more.

BTW, fetch-and-phi operations are useful not only for locking, but for
NONBLOCKING data structures -- clever algorithms that avoid race
conditions without ever locking anything.  If a thread is preempted (at
any time), other threads can continue to make progress.

Perhaps the simplest nonblocking object is a counter.  Some machines
support this with an atomic fetch-and-add (FAA) instruction.  This is
most useful if it returns the _previous_ value as a side effect (lets
you get a sequence number, for example).  The C++ library supports this
with

    int atomic_int::fetch_add(int n)

This will be implemented under the hood with FAA if the HW supports it,
OW with compare_and_exchange (aka CAS):

    int atomic_int::fetch_add(int n) {
        int old = this->load();
        int desired;
        do {
            desired = old + n;
        } while (!compare_exchange_weak(old, desired));
            // on failure, compare_exchange_weak updates old
        return old;
    }

Compare_and_exchange updates a location and returns true if it
currently contains an expected value; OW it loads the current value
into its first arg (which is passed by reference) and returns false.
The "weak" version may fail spuriously; the strong version fails only
if the location really doesn't contain the expected value.  The weak
version is cheaper on some machines, and is all we need in the code
above.

There also exist good (and often _very_ tricky) nonblocking algorithms
for lists, queues, search trees, skip lists, mark-and-sweep garbage
collection, and other things.  New nonblocking algorithms still tend to
be publishable results.  For most programs, you can stick with locks.

----------------------------------------
Memory model: what's wrong with data races; why we need the atomic
template

Example: initialization.

    // ready == false

    t1:                       t2:
        p = new foo(args)         while (!ready) {}
        ready = true              // use *p

What might go wrong?

(1) Stores happen in the background: the update to ready might be seen
    before the update to p.

(2) The hardware is allowed to reorder instructions if they don't
    affect single-core behavior: it might actually update ready first
    at run time!

(3) The _compiler_ is allowed to reorder instructions if they don't
    affect single-core behavior: it might actually update ready first
    _in the code_.

(This is not an exhaustive list!)

We can prevent these counter-intuitive orderings, but doing so is
expensive -- it slows down the processor and memory system.  So we only
want to do it when we have to: hence the atomic template.  By default,
operations on _atomic_ variables are totally ordered wrt each other;
there are extensions that let you specify more relaxed orderings when
they're all you need.
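Back in the initialization example, declaring ready with the atomic
template is enough to rule out all three problems; a sketch in the same
style as above:

    std::atomic<bool> ready(false);   // instead of plain bool

    t1:                       t2:
        p = new foo(args)         while (!ready) {}   // atomic load
        ready = true              // use *p -- t1's writes now visible

The default ordering for atomic operations is sequentially consistent;
load() and store() also accept explicit memory_order arguments for the
relaxed orderings just mentioned.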
THIS IS AN OPPORTUNITY TO SHOOT YOURSELF IN THE FOOT FOR VERY LITTLE
GAIN.  DON'T DO IT!

----------------------------------------
There is a LOT of other cool stuff in the C++ concurrency libraries,
including

    barriers
    futures (see the sketch below)
    trylocks

And tons of cool issues in concurrency: take CSC 2/458!
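For a taste of futures: std::async launches work that may run in
another thread, and the future it returns delivers the result when you
ask for it.  A minimal sketch (the work function is made up):

    #include <future>
    #include <iostream>

    int expensive_computation() {    // hypothetical work function
        return 42;
    }

    int main() {
        // start the computation, possibly in another thread
        std::future<int> f = std::async(std::launch::async,
                                        expensive_computation);
        // ... do other work concurrently ...
        std::cout << f.get() << "\n";   // blocks until the result is ready
        return 0;
    }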