Parallel and Distributed Systems
Jan. 29 - Feb. 10, 2003

Parallelization

Reading assignments
    for Mon. 3 Feb:  Lauer & Needham; Welsh et al. (SOSP 2001)
    for Mon. 10 Feb: Crowl et al.

==========================================================

Lauer & Needham: "On the Duality of Operating System Structures."
Proc. of the 2nd Intl. Symp. on Operating Systems, IRIA, Oct. 1978.
Reprinted in OSR 13:2 (Apr. 1979), pp. 3-19.

Message-oriented system (OS/360, Minix)
  v. procedure-oriented system (Windows, all commercial Unix variants)

Example: write to file
    user space
      envelope
        file system
          memory management system (allocate a buffer)
        file system
          disk system (add to queue, schedule completion interrupt)
      envelope
        scheduler
      envelope
    user space

What are goals that might drive the choice of system structure?
What might constitute tradeoffs?

- conceptual clarity
    Most people seem to find POS clearer, though Tanenbaum would
    disagree, I think.  Explicit coding of the request FSM is a pain.
- compartmentalization -- error containment
    MOS is pretty clearly better here -- no shared data; no process
    interactions other than message passing.
- latency
    POS is usually better here -- that's why most OSes use it
    (despite the arguments of L&N to the contrary).
- throughput for a server (a different domain than an OS!)
    MOS is probably better here: better cache and TLB locality.
- scalability
    Naively, POS seems better here (one thread per request), but the
    SEDA work would seem to argue otherwise.
- others?

Observation: address spaces (one per process in a message-oriented
system) should usually be thought of in terms of language referencing
environment, rather than HW protection domain -- we don't necessarily
change page tables when changing processes.

The note at the top of p. 6 is key: "...processes [in a
message-oriented system] tend to be associated with system resources,
and the needs of applications which the system exists to serve are
encoded into data to be passed around in messages."

Other examples of message-oriented systems: Minix and Amoeba.  In
those systems one of the heavyweight processes represents *all*
user-level processes.

Likewise note the passage in the middle of p. 8: "...system resources
[in a procedure-oriented system] tend to be encoded in common or
global data structures and the applications are associated with
processes whose needs are encoded in calls to system-provided
procedures which access this data."

Most modern OSes are procedure-oriented, including Linux, every
commercial Unix I know of (including Mac OS X), and Windows
NT/2000/XP.

Note that shared memory is still very useful in a message-oriented
system: it makes message passing fast (messages are little header
blocks containing pointers to big data structures in shared memory).

What L&N call "message channels" are sometimes called "output ports".
What they call "message ports" are sometimes called "input ports".
It is common, though not universal, for the connections among output
ports and input ports to be many-to-one.

L&N's performance arguments (p. 14) are "hand-wavy".

In practice, people tend to build procedure-oriented kernels and
message-oriented servers.  Conventional wisdom holds that the
alternatives are not as efficient.  Why is this?

Part of the answer, perhaps, is that event-driven servers don't
really follow the L&N message-passing model.  Rather than one process
per logical resource or system component, they typically have one
process per processor, which multiplexes management of multiple
resources/components.  In effect, the processes of the
message-passing model are laid on top of each other, leading to one
big process with a very large (heterogeneous, often hard to
understand) switch statement.
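
A minimal sketch of such a loop (the event types, handlers, and
canned event trace are invented for illustration; a real server would
pull events from select()/poll() and the network):

    /* Sketch of a single-process, event-driven server loop: one
     * thread of control multiplexing all of the system's logical
     * components. */

    #include <stdio.h>

    enum event_type { EV_ACCEPT, EV_READ_DONE, EV_CACHE_HIT, EV_WRITE_DONE };

    struct event {
        enum event_type type;
        int conn_id;               /* which request this event belongs to */
    };

    /* One handler per logical component.  In the L&N message-passing
     * model, each of these would instead be a separate process with
     * its own input port. */
    static void handle_accept(struct event *e)     { printf("accept  %d\n", e->conn_id); }
    static void handle_read_done(struct event *e)  { printf("read    %d\n", e->conn_id); }
    static void handle_cache_hit(struct event *e)  { printf("cache   %d\n", e->conn_id); }
    static void handle_write_done(struct event *e) { printf("respond %d\n", e->conn_id); }

    /* Stand-in for the event source: deliver a fixed trace, then
     * report that nothing more is pending. */
    static int next_event(struct event *e) {
        static const struct event trace[] = {
            { EV_ACCEPT, 1 }, { EV_READ_DONE, 1 },
            { EV_CACHE_HIT, 1 }, { EV_WRITE_DONE, 1 },
        };
        static unsigned i = 0;
        if (i >= sizeof trace / sizeof trace[0]) return 0;
        *e = trace[i++];
        return 1;
    }

    int main(void) {
        struct event e;
        /* The "one big switch": all components share one thread of
         * control, so moving a request between components costs a
         * function call rather than a context switch. */
        while (next_event(&e)) {
            switch (e.type) {
            case EV_ACCEPT:     handle_accept(&e);     break;
            case EV_READ_DONE:  handle_read_done(&e);  break;
            case EV_CACHE_HIT:  handle_cache_hit(&e);  break;
            case EV_WRITE_DONE: handle_write_done(&e); break;
            }
        }
        return 0;
    }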
As a result, the event-driven model avoids most of the context
switches of the message-passing model, for much improved performance.

Q: could we build an event-driven kernel?  Probably too complex;
servers are hairy enough.

Welsh et al.: the message-based model as a compromise between the
shared-memory model and the event-driven model.  Draw picture.

As the authors note, the event-driven model (and indeed the
message-passing model as well) depends critically on not blocking
during sub-tasks.  It's not very tolerant of page faults, for
example.

Welsh et al. cite L&N at the top of p. 4.

========================================

SEDA (Welsh et al., SOSP'01)

Staged event-driven architecture

Assumptions:
- You want more active requests than you can accommodate with
  (kernel-supported) processes.
- Requests pass through several stages; the time consumed by each is
  workload-dependent.
- Offered workload is _very_ dynamic.

Conclusion: if you want to pipeline effectively, you need to adjust
the degree of parallelism in each stage and otherwise explicitly
adapt to changes in load.

"Load conditioning" via
    thread pool sizing
    event batching
    adaptive load shedding

Two main example applications: the Haboob web server; a Gnutella
packet router.

Implemented for multiprocessors.

---------------------

Capriccio (von Behren et al., SOSP'03)

The two key technical innovations: linked stacks, and resource-aware
scheduling via a dynamically-discovered blocking graph.  Also a
thorough (but not new) use of asynchronous I/O.

Implemented for uniprocessors only.  Counts on run-until-block
semantics for "free" synchronization.  Non-composable.  Not clear to
what extent they depend on the lack of context switches.  Perhaps
only for the "lock" implementation -- RMW w/out the expense of CAS.

<< Everybody understand the synchronous I/O problem? >>

Linked stack management is easy if you are willing to create a new
frame for every call.  The trick in Capriccio is to minimize the
number of allocation operations.  Can actually improve performance
(on a uniprocessor) by improving cache behavior via re-use of stack
chunks in different threads.

Cf. Lynx, which had cooperative scheduling and used the main stack
whenever the compiler could prove that the called routine would not
yield.

Note that transactions could combine concurrency with run-until-block
semantics.  Not clear whether this is a good idea.

Resource-aware scheduling: when resources are plentiful,
preferentially schedule threads that are about to consume them; when
scarce, schedule threads that are about to release them.  Resources =
{heap space, CPU, file descriptors}.  (See the sketch below.)
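
A minimal sketch of what that selection policy might look like.  The
resource set matches the paper's, but the predicted deltas, the
scarcity threshold, and the scoring rule are invented; Capriccio
derives its per-thread predictions from the blocking graph:

    /* Sketch of resource-aware thread selection. */

    #include <stdio.h>

    enum { R_HEAP, R_CPU, R_FDS, NRES };

    struct thread {
        const char *name;
        double next_use[NRES];  /* predicted delta at the thread's current
                                   blocking-graph node: >0 consumes the
                                   resource, <0 releases it */
    };

    /* Fraction of each resource currently in use (heap nearly full). */
    static double utilization[NRES] = { 0.95, 0.40, 0.30 };

    #define SCARCE 0.90         /* hypothetical per-resource threshold */

    /* Higher score = more attractive to run next.  When a resource
     * is scarce, threads about to release it score up and threads
     * about to consume it score down; when plentiful, consumers are
     * mildly preferred. */
    static double score(const struct thread *t) {
        double s = 0.0;
        for (int r = 0; r < NRES; r++) {
            if (utilization[r] > SCARCE)
                s -= t->next_use[r];
            else
                s += 0.1 * t->next_use[r];
        }
        return s;
    }

    int main(void) {
        struct thread runnable[] = {
            { "alloc-heavy", { +0.10, +0.02,  0.00 } },  /* about to malloc */
            { "finishing",   { -0.20, +0.01, -0.01 } },  /* about to free/close */
        };
        int n = sizeof runnable / sizeof runnable[0], best = 0;
        for (int i = 1; i < n; i++)
            if (score(&runnable[i]) > score(&runnable[best]))
                best = i;
        printf("run next: %s\n", runnable[best].name);  /* "finishing" here */
        return 0;
    }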
==========================================================

Crowl et al.

Combinatorial search paper; the 7 machines.

The basic algorithm:
    Node (i, j) in the tree postulates that i maps to j; all nodes of
    S numbered smaller than i have had mappings postulated in
    ancestor nodes of the tree.

    distance filter -- deals with *paths*
        if we guess that i maps to j, we conclude that k does NOT map
        to l if the distance from i to k is less than the distance
        from j to l
    connectivity filter -- deals with *edges*
        if we guess that i maps to j, we conclude that k does NOT map
        to l if there is an edge from i to k but not from j to l

The argument: the "right" parallelization depends on
    the machine
        granularity of parallelism, communication overhead
    the problem
        whether you want one solution or all solutions
    the input data
        in this problem: whether the search space is dense enough to
        justify focusing the search, or whether speculation is better
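
A minimal sketch of the sequential search with both filters,
embedding a small graph S into a target graph T (the example graphs,
sizes, and representation are mine, not the paper's):

    #include <stdio.h>

    #define NS 3              /* vertices in S */
    #define NT 4              /* vertices in T */
    #define INF 1000          /* "unreachable" distance */

    static const int adjS[NS*NS] = {   /* S: the path 0-1-2 */
        0,1,0,
        1,0,1,
        0,1,0 };
    static const int adjT[NT*NT] = {   /* T: the 4-cycle 0-1-2-3 */
        0,1,0,1,
        1,0,1,0,
        0,1,0,1,
        1,0,1,0 };

    static int distS[NS*NS], distT[NT*NT];
    static int map[NS];                /* map[i] = vertex of T that i maps to */
    static int used[NT];

    /* All-pairs shortest paths (Floyd-Warshall; the graphs are tiny). */
    static void all_pairs(int n, const int *adj, int *dist) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                dist[i*n+j] = (i == j) ? 0 : (adj[i*n+j] ? 1 : INF);
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i*n+k] + dist[k*n+j] < dist[i*n+j])
                        dist[i*n+j] = dist[i*n+k] + dist[k*n+j];
    }

    /* Node (i, j) of the search tree: postulate that i maps to j,
     * after checking both filters against every mapping already
     * postulated in ancestor nodes. */
    static void search(int i) {
        if (i == NS) {                 /* complete mapping: report it */
            printf("solution:");
            for (int k = 0; k < NS; k++) printf(" %d->%d", k, map[k]);
            printf("\n");
            return;
        }
        for (int j = 0; j < NT; j++) {
            if (used[j]) continue;
            int ok = 1;
            for (int k = 0; k < i && ok; k++) {
                int l = map[k];
                if (adjS[i*NS+k] && !adjT[j*NT+l])
                    ok = 0;            /* connectivity filter: edges */
                else if (distS[i*NS+k] < distT[j*NT+l])
                    ok = 0;            /* distance filter: paths */
            }
            if (!ok) continue;
            map[i] = j; used[j] = 1;
            search(i + 1);             /* the natural split point for the
                                          various parallelizations */
            used[j] = 0;
        }
    }

    int main(void) {
        all_pairs(NS, adjS, distS);
        all_pairs(NT, adjT, distT);
        search(0);                     /* finds all solutions */
        return 0;
    }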