CSC 2/458
4 March 2019, if time permits

MapReduce

Dean and Ghemawat, OSDI 2004

map and reduce
    common higher-order functions in functional programming

    In Scheme:

    (map * '(2 4 6) '(3 5 7))  ==>  (6 20 42)

    (define reduce
      (lambda (f L)
        (cond
          ((null? L) '())
          ((null? (cdr L)) (car L))
          (else (reduce f (cons (f (car L) (cadr L)) (cddr L)))))))
    ;; The built-in version of reduce in Scheme is called fold.
    ;; It takes an extra identity element to bootstrap the process
    ;; and avoid ever returning '()

    (reduce + '(1 2 3 4 5)) => 15
    (reduce * '(1 2 3 4 5)) => 120

    (define total (lambda (l) (reduce + l)))
    (total '(1 2 3 4 5)) => 15

    (define total-all (lambda (l) (map total l)))
    (total-all '((1 2 3 4 5) (2 4 6 8 10) (3 6 9 12 15))) => (15 30 45)

General paradigm for many programs:
    (reduce f (map g L))
Generally assume that f is commutative.
Especially efficient implementations are available when f is also
    associative -- can use a tree.

----------------------------------------
Google MapReduce

Data structured as key-value pairs, generally represented as text.

Built on top of a preexisting distributed file system
    single global hierarchical name space
    all files available from everywhere
    replication for reliability and availability
    faster when the data happens to be local (try to maximize this)

M map jobs and R reduce jobs.  Generally M >> n (for good load
balancing) and 1 <= R <= O(n), where n is the number of worker nodes.

Input split into M chunks by a user-provided splitter function.

Map job buffers output pairs in memory.  Periodically flushes to R
local files.  Tells the Master the names of those files; the Master
passes the names on to the R reduce jobs, which read from the
corresponding files remotely.

Allow an optional "combiner" function (typically the same as Reduce)
to merge pairs with matching keys just before the Map job writes them
to intermediate files.

Reduce job uses a built-in sort to order its gathered input.
    The sort example in the paper leverages this and really does
    nothing more (!)  Note: that sort is not strictly
    comparison-based: it does binning based on knowledge of the key
    distribution.

"Re-execution as the primary mechanism for fault tolerance"
    assume the master doesn't die (rerun the whole job if it does)
    re-run any worker that doesn't respond to pings quickly enough
    rely on atomicity of file system rename() to arbitrate when we
        guess wrong and duplicate a job that's really still active
    notice when jobs fail deterministically, and skip them

Run extra "backup" jobs at the end to reduce tail latency
    leverages the fault tolerance mechanism

----------------------------------------
Apache Hadoop

Open-source alternative to Google's (internal only) system.
Very similar in most respects.

Built on the Apache YARN system for cluster node mgmt (rather than
Google's Borg).

Java is the default language for map and reduce (and combine)
functions, though others can be used as well, via executables & pipes
or via a C++ wrapper API.  (Google is C++-centric.)

Files kept in HDFS (Hadoop Distributed File System).

Suggest 10-100 map instances per cluster node.  Boot-up takes tens of
seconds, so a map task should take at least a minute to be worthwhile.

Suggested number of reduce tasks: 0.95 or 1.75 x (#nodes * #reduce
containers per node).  The latter option provides the "backup" jobs of
Google MapReduce.

Lots and lots of configuration parameters and optional components.

----------------------------------------
What is MapReduce *not* good for?

Rise of graph-processing and machine-learning packages.
Future lectures?
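
For concreteness, here is a minimal, sequential C++ sketch of the
word-count computation that the paper uses as its running example.  It
simulates the three phases -- map, group-by-key "shuffle", and reduce
-- inside a single process.  The names map_fn and reduce_fn are
illustrative only, not the Google or Hadoop API, and the std::map
stands in for the distributed shuffle and per-reducer sort described
above.

    // In-process simulation of the MapReduce dataflow for word count.
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <utility>
    #include <vector>

    // Map: for each input "document", emit one (word, 1) pair per word.
    std::vector<std::pair<std::string, int>> map_fn(const std::string& doc) {
        std::vector<std::pair<std::string, int>> pairs;
        std::istringstream in(doc);
        std::string word;
        while (in >> word) pairs.emplace_back(word, 1);
        return pairs;
    }

    // Reduce: sum all of the counts associated with a single key.
    int reduce_fn(const std::string& /*key*/, const std::vector<int>& counts) {
        int total = 0;
        for (int c : counts) total += c;
        return total;
    }

    int main() {
        std::vector<std::string> inputs = {
            "the quick brown fox", "the lazy dog", "the fox"
        };

        // "Shuffle": group intermediate values by key (the real system
        // does this with a sort at each reduce worker).
        std::map<std::string, std::vector<int>> groups;
        for (const auto& doc : inputs)
            for (const auto& kv : map_fn(doc))
                groups[kv.first].push_back(kv.second);

        // Reduce phase: one call per distinct key.
        for (const auto& g : groups)
            std::cout << g.first << " " << reduce_fn(g.first, g.second) << "\n";
    }

A combiner would apply reduce_fn to each mapper's local output before
the shuffle; because addition is commutative and associative, the
final counts are unchanged.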
========================================
2019 A4
SSSP

Recall Dijkstra's algorithm:
    - maintain a priority queue of vertices sorted by their currently
      known distance from the source.  Initially all distances are
      infinity except those of the source's neighbors, which are the
      corresponding edge weights.
    - at each step, take the root v of the PQ (which must indeed be
      the closest remaining node to the source), nail down its path,
      and "relax" its neighbors.  For each neighbor, if going through
      v is better than the currently known path from the source,
      update that neighbor in the PQ.
    - time O(m log n)
This is a "label setting" algorithm.  It never changes its mind, and
it doesn't parallelize easily.

Delta stepping is a "label correcting" algorithm that works well
sequentially AND parallelizes nicely.  Instead of a PQ, use an array
of buckets, each of which holds vertices whose best known distance
from the source lies in a range of width delta.  Start w/ the source
in bucket 0 and the rest not in the array.

At each step, consider the first non-empty bucket B.  For each vertex
v in B, we relax each neighbor w that is connected to v by an edge of
weight <= delta -- that is, we check to see whether connecting through
v will improve w's path to the source, and if so, we move w to the
appropriate new bucket, which might be B or one of its successors.
Because we might move w into B, we iterate until B is empty; then we
relax all the neighbors connected by edges of weight > delta, for all
vertices v that we have removed from B.  These will all move to
buckets > B.  Finally, we move on to the next nonempty bucket.

As it turns out, given a maximum weight W for edges, we can be sure
that all vertices currently in the array will lie within W/delta
buckets of each other.  If we create a few more buckets than this, we
can safely use the array in a circular fashion; we don't actually need
max-path-length/delta of them.

Over time, w's preferred edge back to the source may be updated
several times.  Crucially, however, iteration over the neighbors w of
v can be done in parallel for all vertices v in B.  This is the source
of potential parallelism.

Harvesting this parallelism isn't trivial, however.  Threads must
always work on the same bucket at the same point in time, and must
synchronize their access to all shared data structures.  While there
are many ways to parallelize delta stepping, the following is perhaps
the easiest:

We statically partition the n vertices so that each of our t threads
is responsible for approximately n/t of them.  Each thread then
maintains its own, separate array of buckets.  We use a barrier to
make sure that each thread works on its ith bucket during step i
(perhaps each part of step i).  We also create 2 * (t choose 2)
message queues that allow threads to pass work to one another.  When
thread j discovers an opportunity to relax vertex w, which belongs to
thread k, it does not do so directly; rather, it enqueues a message
asking k to do so -- i.e., to move w to a new bucket, where it will be
considered in due course.  For efficiency, j may "batch" together
requests being sent to the same destination.  Batching is particularly
important on a cluster with expensive messages.

This algorithm adapts nicely to MPI.  You'll also solve it in Chapel.
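
Below is a minimal sequential C++ sketch of delta stepping, assuming
non-negative edge weights and a caller-chosen delta.  It illustrates
only the bucket structure and the light/heavy edge split described
above: the bucket array is not circular, stale bucket entries are
filtered lazily, and the names (delta_step, Edge) are made up for this
sketch rather than part of any required interface.  A parallel version
would add the per-thread buckets, barrier, and message queues just
described.

    #include <iostream>
    #include <limits>
    #include <vector>

    struct Edge { int to; double w; };
    using Graph = std::vector<std::vector<Edge>>;   // adjacency lists

    const double INF = std::numeric_limits<double>::infinity();

    std::vector<double> delta_step(const Graph& g, int src, double delta) {
        std::vector<double> dist(g.size(), INF);
        std::vector<std::vector<int>> buckets(1);

        // Improve v's distance to d and (re)insert it into bucket d/delta.
        auto relax = [&](int v, double d) {
            if (d < dist[v]) {
                dist[v] = d;
                size_t b = static_cast<size_t>(d / delta);
                if (b >= buckets.size()) buckets.resize(b + 1);
                buckets[b].push_back(v);   // any older copies become stale
            }
        };
        relax(src, 0.0);

        for (size_t i = 0; i < buckets.size(); ++i) {
            std::vector<int> removed;         // everything taken from bucket i
            while (!buckets[i].empty()) {     // light edges may refill bucket i
                std::vector<int> frontier;
                frontier.swap(buckets[i]);
                for (int v : frontier) {
                    if (static_cast<size_t>(dist[v] / delta) != i)
                        continue;             // stale copy; v was re-bucketed
                    removed.push_back(v);
                    for (const Edge& e : g[v])   // light edges only
                        if (e.w <= delta) relax(e.to, dist[v] + e.w);
                }
            }
            // Now relax the heavy edges of everything removed from bucket i.
            for (int v : removed)
                for (const Edge& e : g[v])
                    if (e.w > delta) relax(e.to, dist[v] + e.w);
        }
        return dist;
    }

    int main() {
        // Tiny example: 0 -> 1 (1.0), 0 -> 2 (4.0), 1 -> 2 (1.5)
        Graph g(3);
        g[0] = {{1, 1.0}, {2, 4.0}};
        g[1] = {{2, 1.5}};
        std::vector<double> d = delta_step(g, 0, 2.0);
        for (size_t v = 0; v < d.size(); ++v)
            std::cout << "dist(" << v << ") = " << d[v] << "\n";
    }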