2/458: Parallel and Distributed Systems
27 February 2008

Reminders: project proposals due Monday 3 Mar.

Intro to OpenMP

Fork-join model
    Implicit barrier at the end of every parallel section.
    Parallel sections can nest, but not all implementations really fork.
    A typical implementation doesn't really fork and join; rather, threads
    are created at the beginning of execution, and threads other than the
    master skip the sequential sections.

Unlike MPI, OpenMP requires significant compiler support.
    Available in gcc 4.2; see the course wiki.
    NB: compile with -fopenmp

Study the table under "Clauses/Directives Summary" in the LLNL tutorial.
(Minimal usage sketches for the main directives appear under EXAMPLES below.)

#pragma omp parallel
{
    // every thread executes this block; use omp_get_thread_num()
    // to drive the choice of what to do in each thread
}
clause options
    if (expr)           // whether to parallelize this block at all
    num_threads (expr)  // create this many.  If unspecified, use value set by
                        // omp_set_num_threads() or the OMP_NUM_THREADS env. var,
                        // or the default (# of processors)
    private (list)      // thread-private vars
    shared (list)       // vars shared by threads
    default (shared | none)
                        // should unnamed vars be shared or unavailable?
                        // some implementations also let you specify private.
                        // absent specifications, the default is: globals are
                        // shared; loop indices and subroutine locals are private
    copyin (list)       // for threadprivate vars: initializes to value in master
    firstprivate (list) // like private, but initialized
    reduction (operator: list)
                        // the reduced value of the specified vars is combined
                        // into the shared copy at the end; the vars behave as
                        // private within the parallel block, and must be
                        // shared in the enclosing block

#pragma omp threadprivate (list)
    // These global variables persist across parallel regions, retaining
    // their value within a given thread, but only if dynamic threads are
    // turned off.

#pragma omp for
for (...) {
    // Must be nested inside an already-parallel region.
    // Threads divvy up the iterations of the loop.
    // The loop has to have a stylized header.
}
clause options
    private, shared, firstprivate, reduction, as above
    lastprivate         // the original variable gets the value from the
                        // (sequentially) last iteration of the loop
    schedule
        static, chunksize   // banded
        dynamic, chunksize  // from a worklist
        guided, chunksize   // chunks of decreasing size; chunksize is the minimum
        runtime             // as specified by the OMP_SCHEDULE env. var
    ordered             // forces (the ordered parts of) iterations to occur in order
    nowait              // omits the implicit barrier at the end of the loop

use of ordered:
    #pragma omp for ordered
    for (...) {
        ...
        #pragma omp ordered
        {
            // this _part_ of the loop body happens "in order"
        }
        ...
    }

#pragma omp sections
{
    #pragma omp section
    {
        // one thread's work (not necessarily thread 0)
    }
    #pragma omp section
    {
        // another thread's work (not necessarily thread 1)
    }
    ...
    #pragma omp section
    {
        // last thread's work
    }
}
clause options
    private, firstprivate, lastprivate, reduction, nowait, as above

#pragma omp single
{
    // when nested in a parallel block, will be executed by only one thread
}
clause options
    private, firstprivate, nowait, as above

#pragma omp parallel for
    // like #pragma omp parallel + #pragma omp for,
    // with the clauses working in the obvious way

#pragma omp parallel sections
    // similarly

-------------------
SYNCHRONIZATION

#pragma omp master
{
    // executed only by the master thread
}

#pragma omp critical [name]
{
    // Executed by only one thread at a time.
    // Blocks with different names do not exclude one another.
}

#pragma omp barrier
    // All threads in the team synch up.
    // NB: all threads in a team must see the same sequence of barriers
    // and work-sharing (for, sections, single) directives.

#pragma omp atomic
    // Mini critical section; applies to the single simple update
    // statement that follows.

#pragma omp flush (list)
    // Make sure these variables are consistent across threads.
    // Sort of like a dynamic 'volatile'.
    // Happens automatically for all variables at the beginning and end of
    // parallel, critical, and ordered regions; at the end of for,
    // sections, and single.
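-------------------
EXAMPLES

A few minimal sketches of the directives above (assuming gcc 4.2+,
compiled with -fopenmp).  First, a basic parallel region; every thread
runs the block, and omp_get_thread_num() distinguishes them:

    #include <stdio.h>
    #include <omp.h>

    int main() {
        #pragma omp parallel num_threads(4)
        {
            // block-local vars are private to each thread
            int me = omp_get_thread_num();
            if (me == 0)
                printf("master: team has %d threads\n", omp_get_num_threads());
            printf("hello from thread %d\n", me);
        }   // implicit barrier here
        return 0;
    }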
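A sketch of #pragma omp parallel for with a reduction and an explicit
schedule; the array and its contents are made up for illustration:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main() {
        static double a[N];
        double sum = 0.0;               // shared in the enclosing block
        for (int i = 0; i < N; i++) a[i] = 1.0;

        // i is the loop index, so it is private automatically; sum behaves
        // as private within the loop, and the reduced value is combined
        // into the shared copy at the implicit barrier.
        #pragma omp parallel for schedule(static) reduction(+: sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.1f\n", sum);    // expect 1000000.0
        return 0;
    }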
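A sketch of sections followed by single; task_a and task_b are
hypothetical stand-ins for two independent pieces of work:

    #include <stdio.h>
    #include <omp.h>

    void task_a(void) { printf("task A\n"); }   // hypothetical work
    void task_b(void) { printf("task B\n"); }   // hypothetical work

    int main() {
        #pragma omp parallel
        {
            #pragma omp sections
            {
                #pragma omp section
                { task_a(); }          // one thread's work
                #pragma omp section
                { task_b(); }          // another thread's work
            }                          // implicit barrier

            #pragma omp single
            {
                // exactly one thread (not necessarily the master) runs this
                printf("both tasks done\n");
            }                          // implicit barrier
        }
        return 0;
    }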
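A sketch tying the synchronization directives together; the shared
counters are illustrative:

    #include <stdio.h>
    #include <omp.h>

    int main() {
        int hits = 0;                  // shared
        int max_seen = 0;              // shared

        #pragma omp parallel
        {
            int local = omp_get_thread_num() + 1;  // private; stand-in for real work

            #pragma omp atomic
            hits += local;             // single simple update: atomic suffices

            #pragma omp critical (maxcheck)
            {
                // multi-step read-modify-write of a shared var needs critical
                if (local > max_seen)
                    max_seen = local;
            }

            #pragma omp barrier        // everyone synchs up (implies a flush)

            #pragma omp master
            {
                printf("hits = %d, max = %d\n", hits, max_seen);
            }
        }
        return 0;
    }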
-------------------
functions

omp_set_dynamic (expr)      // true or false
omp_set_num_threads (expr)
int omp_get_num_threads()
int omp_get_thread_num()    // who am I?

also various locking routines (omp_init_lock, omp_set_lock, ...)
portable timing routines (omp_get_wtime(), omp_get_wtick())

===================
HPCA/PPoPP/TRANSACT trip report