Lecture notes for CSC 252, Tues. Apr. 22, 2014ff

Reading
    Skim chapter 12; read 12.3-12.6 carefully

DO ON-LINE COURSE EVALUATIONS

==============================

Concurrency
    processes and threads
    kernel-level and user-level threads
    blocking kernel calls
    shared memory v. message passing

    In Linux
        processes share memory via mmap or shmem library packages
        threads get shared memory by default
            (actually difficult to get transparent separate copies)
        processes send messages over sockets, both within and across machines

    Take networks course (257) to learn (much) more about sockets.
    Take operating systems (256) to learn more about threads.
    Take languages (254) to learn about linguistic support.
    Take parallel & distributed systems (258) to learn more about
        algorithmic issues for all of the above.

------------------

Two main uses for threads/processes
    conceptual structuring for event-based programs ("mere concurrency")
        e.g. web browser, GUI-based app
    parallelism
        e.g. web server

==================

create processes with fork() and execve() -- Chapter 8
    have to create shared memory explicitly with mmap or shmem

create (Posix) threads with pthread_create
    global memory is automatically shared

    #include <pthread.h>

    int pthread_create(pthread_t *thread, pthread_attr_t *attr,
                       void * (*start_routine)(void *), void *arg);

    Returns 0 on success, various non-zero values on error.
    Stores id of new thread at location specified by first arg.
    Runs start_routine, passing arg.

    pthread_attr_t has fields to control various things:
        whether thread is "joinable" or "detached"
        scheduling policy (normal, RR real-time, FIFO real-time)
        scheduling priority
        whether these last two are new or inherited from parent (ignored)
        system or process-level scheduling "scope" (Linux supports only the former)
        where to put the stack
    Some non-standard extensions (notably Solaris LWP) support
        additional parameters.

    if thread is joinable, join up with
        pthread_join(pthread_t thread, void **thread_return)
    thread's return value (of type void *) is stored in specified location
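Putting the pieces together, here's a minimal create/join sketch
(illustrative only: NTHREADS and the worker function are invented for
this example; compile with gcc -pthread):

    /* Minimal pthread_create / pthread_join sketch.
       NTHREADS and worker() are made up for illustration. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS 4

    void *worker(void *arg) {
        long id = (long) arg;            /* argument passed by pthread_create */
        printf("hello from thread %ld\n", id);
        return (void *) (id * id);       /* value retrieved by pthread_join */
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < NTHREADS; i++) {
            if (pthread_create(&tid[i], NULL, worker, (void *) i) != 0) {
                fprintf(stderr, "pthread_create failed\n");
                exit(1);
            }
        }
        for (long i = 0; i < NTHREADS; i++) {
            void *rval;
            pthread_join(tid[i], &rval); /* wait for thread i; collect return value */
            printf("thread %ld returned %ld\n", i, (long) rval);
        }
        return 0;
    }

The NULL second argument accepts the default attributes described above
(joinable, normal scheduling policy, default stack).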
========================================

synchronization (chapter 12)

The fundamental thing that makes parallel programming challenging is
the huge number of ways that executions of different threads can
interleave.  Some interleavings are correct; some aren't.
SYNCHRONIZATION is the act of precluding bad interleavings.  Most
common forms: ATOMICITY, CONDITION SYNCHRONIZATION.

[NB: We do NOT in general want to over-synchronize.  That eliminates
parallelism, which we generally want to encourage for performance.  In
a nutshell, we want to eliminate "bad" interleavings -- the ones that
cause the program to give incorrect results -- while simultaneously
eliminating as few "good" interleavings as possible.]

Atomicity ensures that I can move some collection of data from one
consistent state to another consistent state without any other thread
seeing the (probably inconsistent) intermediate states -- and without
me having to worry about seeing anything they might be doing to it.

Example: concurrent increments of a shared counter

    T1: r1 = ct         T2: r1 = ct
        r1++                r1++
        ct = r1             ct = r1

    (If these interleave, both threads can read the same value of ct,
    and one of the increments is lost.)

Condition synchronization ensures that some necessary precondition (to
be made true by somebody else) holds before I proceed.

Example:
    consumer: wait until queue is nonempty and then atomically remove something
        // with some sort of assurance that nobody else will
        // empty it again between the wait and the remove

We can also classify synchronization mechanisms as busy-wait
(spinning) v. scheduler-based.  I'll focus on the latter; the former
is a bad idea on a uniprocessor.  Also note that synchronization
between event handlers and the main program requires masking on the
part of the main program, since the handler can neither spin nor be
descheduled.

[NB: You might be tempted to think of mutual exclusion as a form of
condition synchronization (the condition being that nobody else is in
the critical section), but it isn't.  The distinction is basically
existential v. universal quantification.  Mutual exclusion requires a
variant of multi-process consensus.]

----------------------------------------

Atomicity

Can be achieved in various ways.  Simplest is MUTUAL EXCLUSION: make
sure only one thread at a time is executing a dangerous ("critical")
section of code.  Mutual exclusion is most commonly implemented with
LOCKS.  A lock is an object with two operations: acquire and release.
In a correctly written program they're used in pairs that bracket
critical sections.

    L.acquire()
    // critical section -- e.g., ct++
    L.release()

Posix threads (the pthreads package) provide two implementations of
locks, the more general of which are called SEMAPHORES.  They're an
old idea (introduced by Dijkstra in 1965), but pretty clearly a good
one, because they're still very widely used.

    #include <semaphore.h>

    int sem_init(sem_t *sem, int pshared, unsigned int value);
    int sem_wait(sem_t *s);     // acquire
    int sem_post(sem_t *s);     // release
        return 0 if OK; -1 on error

The second arg to sem_init is non-zero for semaphores shared among
threads from different processes, which must still share memory.  The
third argument is the _initial value_ of the semaphore, which is
always 1 when the semaphore is going to be used as a lock.

The sem_wait operation waits until the value of the semaphore is
positive and then (atomically with the wait) decrements it.  The
sem_post operation increments the semaphore and (atomically with the
increment) wakes up one waiting thread, if there is one.  (To
implement these operations I need some lower-level mechanism for
atomicity, which I'm not discussing here.)

[NB: The other pthread locking mechanism is called pthread_mutex.
Read the man pages if curious.  There's also a second (System V,
non-Posix) semaphore interface provided by most Unix variants,
including Linux.  It doesn't use shared memory at all, but it has a
really clunky interface and poorer performance, so it isn't much used
by modern programs.]

With semaphores, our code looks like

    sem_wait(&my_sem);
    // critical section -- e.g., ct++
    sem_post(&my_sem);

    // I should, of course, be checking return values, e.g. by wrapping
    // all these calls in my VERIFY macro.
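Putting the counter example and the semaphore calls together, a hedged
sketch (NTHREADS and NITERS are invented for illustration; error
checking is omitted for brevity, contrary to my own advice above):

    /* Shared counter protected by a semaphore used as a lock. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   100000

    long ct = 0;                    /* shared counter */
    sem_t my_sem;                   /* binary semaphore acting as a lock */

    void *incrementer(void *arg) {
        for (int i = 0; i < NITERS; i++) {
            sem_wait(&my_sem);      /* acquire */
            ct++;                   /* critical section */
            sem_post(&my_sem);      /* release */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        sem_init(&my_sem, 0, 1);    /* not process-shared; initial value 1 */
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, incrementer, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        printf("ct = %ld (expected %d)\n", ct, NTHREADS * NITERS);
        return 0;
    }

Without the sem_wait/sem_post pair, the final count is usually less
than NTHREADS * NITERS because of lost updates.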
----------------------------------------

Condition synchronization

Now suppose instead of a counter I want to share a bounded queue among
a set of producers and consumers.  (Imagine the queue of incoming
requests to a parallel web server.)  Naively, I could write

    // producer
    int done = 0;
    while (!done) {
        sem_wait(&my_sem);
        if (!Q.full()) {
            Q.insert(my_data);
            done = 1;
        }
        sem_post(&my_sem);
    }

    // consumer is symmetric

But if the queue _is_ full, this may busy-wait for a _long_ time.
It's certainly intolerable on a uniprocessor.  What we want is to go
to sleep when we can't proceed, and give the core to somebody else.

We can do this with GENERAL SEMAPHORES -- ones that we initialize to
values other than 1.  Each semaphore represents a "resource."  When
we're doing mutual exclusion, it represents access to the critical
section.  In our bounded queue, we can have one semaphore that
represents full slots and another that represents empty slots:

    sem_t full_slots;   sem_init(&full_slots, 0, 0);
    sem_t empty_slots;  sem_init(&empty_slots, 0, Q.size());
    sem_t mutex;        sem_init(&mutex, 0, 1);

    // producer
    sem_wait(&empty_slots);
    sem_wait(&mutex);
    Q.insert(my_data);      // I know there's room!
    sem_post(&mutex);
    sem_post(&full_slots);

    // consumer
    sem_wait(&full_slots);
    sem_wait(&mutex);
    my_data = Q.remove();   // I know there's data available!
    sem_post(&mutex);
    sem_post(&empty_slots);

[NB: pthreads also provide "condition variables", with operations
wait() and signal().  Condition variables are like memory-less
semaphores: they don't have a counter inside, so wait() _always_ waits
until somebody does a signal().  A signal() with no one waiting is a
no-op.  Semaphores can be used to implement mutex and cond, and vice
versa.  Normal idiom:

    pthread_mutex_lock(L, ...)
    while (! condition) {
        pthread_cond_wait(..., L, ...)
    }
    pthread_mutex_unlock(L, ...)
]

Lots of other synchronization mechanisms exist, e.g. monitors, Java
synchronized methods, conditional critical regions, transactional
memory, etc.  Many of these work best with language support.

----------------------------------------

Barriers

    common in data-parallel programs; separate algorithm phases

Used for parallel iterative algorithms, which are common in scientific
code.  Not supported directly by pthreads, but easy to implement given
semaphores -- or mutex and cond (see the sketch after this section).
Also supported directly by OpenMP.  In pseudocode:

    in all threads, in parallel, do:
        many times, do:
            read state of neighbors at end of previous iteration
            update my state for this iteration
            write that state where neighbors can see it
            my_barrier.wait()
                // ensures that ALL threads have reached this point in the
                // code before ANY of them proceeds to the next iteration
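For instance, a reusable barrier built from a mutex and a condition
variable might look like this (a sketch only; the barrier_t type and
its operations are invented here):

    /* Reusable barrier from pthread_mutex + pthread_cond (sketch). */
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  all_here;
        int n;          /* number of participating threads */
        int count;      /* how many have arrived in this phase */
        int phase;      /* generation number; lets the barrier be reused */
    } barrier_t;

    void barrier_init(barrier_t *b, int n) {
        pthread_mutex_init(&b->lock, NULL);
        pthread_cond_init(&b->all_here, NULL);
        b->n = n;  b->count = 0;  b->phase = 0;
    }

    void barrier_wait(barrier_t *b) {
        pthread_mutex_lock(&b->lock);
        int my_phase = b->phase;
        if (++b->count == b->n) {           /* last arrival: start next phase */
            b->count = 0;
            b->phase++;
            pthread_cond_broadcast(&b->all_here);
        } else {
            while (b->phase == my_phase)    /* the while-loop idiom from above */
                pthread_cond_wait(&b->all_here, &b->lock);
        }
        pthread_mutex_unlock(&b->lock);
    }

Each thread calls barrier_wait() once per iteration; no thread returns
from it until all n threads of the current phase have arrived.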
----------------------------------------

How a scheduler works (take 254 and 258 for more)
    ready list
    synchronization lists
    preemption

========================================

Multithreaded and Multicore machines

The cache coherence problem
    Update v. invalidate

cache coherence definition
    writes to same location will be seen in same order by everyone
    or, put another way, when writers quiesce, everybody will agree on
        the value of any given location
Cf. cache consistency
    writes to _different_ locations will be seen in same order by everyone

snooping MSI protocol

    state               see     do      goto
    ---------------------------------------------------------
    invalid             PrRd    BusRd   shared
                        PrWr    BusRdX  modified

    shared              PrRd    --      --
                        PrWr    BusRdX  modified
                        BusRd   flush?? --
                        BusRdX  flush?? invalid

    modified (dirty)    PrRd    --      --
                        PrWr    --      --
                        BusRd   flush   shared
                        BusRdX  flush   invalid

MESI protocol

    state               see     do          goto
    ---------------------------------------------------------
    invalid             PrRd    BusRd(S)    shared
                                BusRd(NS)   exclusive
                        PrWr    BusRdX      modified

    shared              PrRd    --          --
                        PrWr    BusRdX      modified
                        BusRd   flush??     --
                        BusRdX  flush??     invalid

    exclusive (clean)   PrRd    --          --
                        PrWr    --          modified
                        BusRd   flush?      shared
                        BusRdX  flush?      invalid

    modified (dirty)    PrRd    --          --
                        PrWr    --          --
                        BusRd   flush       shared
                        BusRdX  flush       invalid

[MESI is a variant on the original 1983 "write-once" protocol of
Goodman, in which the states are known as dirty, reserved, valid, and
invalid (DRVI), respectively.  The difference lies in the R/E state.
Both are known to be cached only locally, but R has been written once,
while E hasn't been written at all.  The purpose of R is to avoid the
need for write-back when evicting, since memory is up-to-date.  The
purpose of E is to avoid the need for a bus transaction on the first
write, which is critical for single-thread performance.  Goodman
received the Eckert-Mauchly award in 2013, largely in recognition of
having invented coherence protocols.]

flush?:  provide data only if machine supports cache-to-cache transfers
flush??: provide data only if machine supports cache-to-cache transfers
         and it's this processor's responsibility to respond (can be
         determined via bus arbitration or a separate owner state [MOESI])

note existence of directory-based coherence protocols

processor locality

building synchronization primitives and nonblocking data structures
    RMW (atomic update) instructions
        tas
        cas
        lock prefix on x86, for instructions that aren't intrinsically atomic
        HLE/TSX
    (a test-and-set spin lock sketch appears at the end of these notes)

sequential consistency, and why most HW doesn't provide it
    fences and globally atomic loads & stores

memory models
    why reads might bypass writes
    why that's bad
        bow tie:  x == y == 0

            T1          T2
            x := 1      y := 1
            i := y      j := x

            can end with i == j == 0

    why writes might reorder
    why that's bad
        IRIW:  x == y == 0

            T1          T2          T3          T4
            x := 1      i := x      b := y      y := 1
                        j := y      a := x

            can end with i == b == 1; a == j == 0

    note that compiler can also reorder loads and stores
        optimizations that are safe for sequential code aren't safe in
        parallel code

    SC for DRF
    what about non-DRF?
        Java
        C++

    (a C11 sketch of the "bow tie" appears at the end of these notes)

========================================

TM

GPUs
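Sketch of a test-and-set spin lock, as promised in the RMW section
above (illustrative only; built on C11 <stdatomic.h>, with invented
names spin_lock/spin_unlock; recall that spinning only makes sense on
a multiprocessor):

    /* Test-and-set spin lock using C11 atomics (sketch). */
    #include <stdatomic.h>

    typedef struct {
        atomic_flag held;       /* clear = free, set = held */
    } spinlock_t;

    void spin_init(spinlock_t *l) {
        atomic_flag_clear(&l->held);    /* or initialize with ATOMIC_FLAG_INIT */
    }

    void spin_lock(spinlock_t *l) {
        /* atomic_flag_test_and_set is an atomic RMW: it sets the flag and
           returns its previous value.  Keep trying until we saw "free". */
        while (atomic_flag_test_and_set(&l->held))
            ;   /* spin */
    }

    void spin_unlock(spinlock_t *l) {
        atomic_flag_clear(&l->held);
    }

A cas-based lock looks similar but uses atomic_compare_exchange; real
spin locks usually add backoff and a test-before-test-and-set loop to
reduce coherence traffic.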
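And the "bow tie" litmus test written out in C11 atomics (a sketch,
not from the original notes; the trial harness is invented, and you
may need many trials on a multicore machine to actually observe the
outcome):

    /* "Bow tie" (store buffering) litmus test.  With relaxed ordering,
       i == j == 0 can occur; seq_cst ordering forbids it. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int x, y;
    int i, j;

    void *t1(void *arg) {
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        i = atomic_load_explicit(&y, memory_order_relaxed);
        return NULL;
    }

    void *t2(void *arg) {
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        j = atomic_load_explicit(&x, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        for (int trial = 0; trial < 100000; trial++) {
            atomic_store(&x, 0);  atomic_store(&y, 0);
            pthread_t a, b;
            pthread_create(&a, NULL, t1, NULL);
            pthread_create(&b, NULL, t2, NULL);
            pthread_join(a, NULL);  pthread_join(b, NULL);
            if (i == 0 && j == 0)
                printf("trial %d: i == j == 0 (not sequentially consistent)\n", trial);
        }
        return 0;
    }

Replacing memory_order_relaxed with memory_order_seq_cst, or inserting
atomic_thread_fence(memory_order_seq_cst) between each thread's store
and load, restores the SC outcome: the hardware's store buffer can no
longer let the load bypass the preceding store.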