Lecture notes for CSC 252, Tues. Apr. 8, 2014ff

Announcements
    A6 due next Monday evening
    Read chapter 9

==============================

Address translation and virtual memory

Three main goals
    1  give every application the illusion that it has the memory to
       itself; that is, that it runs from 0 to max_addr
    2  give applications the illusion that there's more memory than
       there really is
    3  protect applications from each other

extra benefits: the ability to differentiate between what I have the
right to access (in principle) and what I have mapped at present allows
me to arrange for page faults on certain kinds of accesses.  This is
really handy for
    copy-on-write sharing, e.g., after fork()
    automatic extension of the stack
    debuggers
    concurrent garbage collection
    software-emulated shared memory on clusters
    persistent storage
    in-core database transactions
    checkpointing
    process migration

Goal 2 above uses DRAM as a cache for disk.  In comparison to using
SRAM as a cache for DRAM, VM has an *enormous* miss penalty: ~100,000X
(100ns v. 10ms), rather than ~100X (1ns v. 100ns).  On the plus side,
this means we can easily afford to do miss handling in SW.  On the
minus side, it means we have to have an incredibly high hit rate to
keep the overhead tolerable.  Toward that end we implement
    full associativity
    very sophisticated eviction policies (take CSC 256)
    write-back

Physical vs Virtual Addresses

Physical addresses refer to physical (hardware) memory.  This memory
typically begins with address 0 and runs from less than a megabyte on
very small embedded devices to as much as a terabyte on the largest
supercomputers.  Desktops and laptops typically have a few GB these
days -- usually less than 1GB on a cell phone.

Virtual addresses refer to locations in a process's view of memory,
which need not correspond directly to physical memory.  The size of a
virtual address space is limited by the size of addresses (n bits can
specify an address space of size 2^n).  The most recent machines have
an architectural limit of 2^64, but implementations tend to actually
support something less than that (say 2^40 == 1TB).

With hardware support, virtual and physical addresses can be
independent.  It is common to have virtual address spaces much larger
than the physical address space (so we can run very large programs in a
comparatively small amount of memory), and even possible to have
physical address spaces that are larger than a virtual address space
(e.g. on the PDP-11 [1970s] or Cray T3D [1990s]).

[An aside: in most current implementations of C, an int (and, on 32-bit
systems, a long) is 32 bits, while a "long long" is 64 bits.  A pointer
is either 32 bits or 64 bits, depending on the underlying OS and HW.
On 32-bit systems, the compiler generates multi-instruction sequences
to manipulate long longs.]

------------------------------

paging          the most common VM technique
segmentation    the principal alternative
lots of hybrid possibilities
take 256 to learn more; we stick to paging here

We divide the virtual address space of a process into smaller,
equal-size chunks called pages, and divide the physical address space
of the machine into equal-size chunks called frames, of the same size
as the pages.  We then map from pages to frames in such a way that the
nice linear virtual address space can be composed of physical frames
that are scattered all over the place.

Address translation is performed by the memory management unit (MMU),
which is built into the CPU.  A virtual address is divided into a
page/offset pair.
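
As a concrete illustration of that last point, here is a minimal sketch
of the page/offset split, assuming 32-bit virtual addresses and a 4KB
page size (sizes used later in these notes); the function names are my
own, not any particular system's.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u        /* assumed 4KB pages        */
    #define OFFSET_BITS 12           /* log2(PAGE_SIZE)          */

    /* virtual page number: the high-order bits of the address */
    static uint32_t page_number(uint32_t va) {
        return va >> OFFSET_BITS;
    }

    /* offset within the page: the low-order bits */
    static uint32_t page_offset(uint32_t va) {
        return va & (PAGE_SIZE - 1);
    }

    int main(void) {
        uint32_t va = 0x00403a24;
        printf("va 0x%08x -> page 0x%05x, offset 0x%03x\n",
               va, page_number(va), page_offset(va));
        /* translation replaces the page number with a frame number
           and leaves the offset unchanged */
        return 0;
    }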
The association between pages and frames is described by a PAGE TABLE.
To turn a virtual address into a physical address we simply replace the
page number with the corresponding frame number.

The page table is a dictionary abstraction that maps page numbers to
information about the page, including the physical frame number and
protection attributes (readable and/or writable in kernel and/or user
mode).  We can use any of the various data structure implementations of
a dictionary: characteristic array, hash table, tree, list.

    The PDP-11 used a simple array (implemented in hardware), indexed
    by page number.  The VAX had three simple arrays: one for the
    kernel, one for user text and data, one for user stack.  Arrays
    don't work anymore, because the typical modern address space isn't
    contiguous, and you can't afford the space for all the missing
    entries.

    The SPARC (older models), x86, and MC680x0 use(d) multi-way search
    trees.

    The PowerPC, SPARC (middle-aged models), and Itanium use hash
    tables (called "inverted page tables").

    The Mach OS kernel originally used a linked-list organization in
    software (since replaced by a search tree or hash table).

Generally we need a separate page table for every process.  A dedicated
HW register holds a pointer to it; swap the pointer on a context
switch.

------------------------------

Contents of a page table entry

    page number (for associative lookup, if we're using a hash table or
        list [falls out of location in a characteristic array or
        multi-level tree])
    frame number (translation [may fall out of location in an inverted
        table])
    permissions (read, write, maybe execute, for both kernel and user)
    valid bit
    use and dirty bits

Exactly how many bits this adds up to depends on how big the various
things above are chosen to be.  A minimal case for a characteristic
array (a sketch of one possible packing appears at the end of this
section):

     1  valid bit
     2  use and dirty bits
     4  writable, readable in user mode, kernel mode
    20  frame number (assuming a 32-bit virtual address and a 4KB page
        size)
    --
    27 bits ~= 3.5 bytes

With a page table of 2^20 entries, that's 3.5MB.  You can't afford to
put that on the chip (not anymore -- the PDP-11 did it, with 16-bit
addresses).  You probably don't even want to put it in main memory for
100+ simultaneous processes.  With 64-bit virtual addresses it gets
totally out of hand.  (With 4-byte entries, 8KB = 2^13 B pages, and a
40-bit address space, that's 2^(40-13+2) B = 0.5GB of page table, per
process!!)

Two basic possibilities:
    find a way to store the information more compactly
    page the page tables

The latter doesn't completely solve the problem, and is complicated.  I
don't plan to talk about it (though several systems do it).  We'll
stick to the former here: store the page table more compactly.
Possible approaches:

(1) store information only for the portion of the address space that a
    process actually uses.  Trees are good for this.  If a lot of the
    address space *is* used, however, you're still in trouble, and even
    if it's sparse, a 64-bit address space needs at least 5-level
    tables.

(2) store information in per-page form only for the portion of the
    address space that is actually resident in memory.  In other words,
    store information on a per-frame basis, and find a way to summarize
    the rest of the address space.  Trees are ok for this.  Hash tables
    are even better.
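
To make the minimal characteristic-array entry above concrete, here is
a sketch of one way to pack those 27 bits into a 32-bit word.  The
field layout and macro names are my own invention, not any particular
machine's format.

    #include <stdint.h>

    typedef uint32_t pte_t;

    /* bit layout (low to high), matching the minimal case above:
         bit  0       valid
         bits 1-2     use (reference) and dirty
         bits 3-6     readable/writable in user/kernel mode
         bits 7-11    unused
         bits 12-31   20-bit frame number                           */
    #define PTE_VALID      (1u << 0)
    #define PTE_USED       (1u << 1)
    #define PTE_DIRTY      (1u << 2)
    #define PTE_UR         (1u << 3)   /* user read     */
    #define PTE_UW         (1u << 4)   /* user write    */
    #define PTE_KR         (1u << 5)   /* kernel read   */
    #define PTE_KW         (1u << 6)   /* kernel write  */
    #define PTE_FRAME(pte) ((pte) >> 12)

    /* build an entry for a resident, user-readable page */
    static pte_t make_pte(uint32_t frame) {
        return (frame << 12) | PTE_VALID | PTE_UR | PTE_KR | PTE_KW;
    }

    /* translate, given the entry for the page (no fault handling) */
    static uint32_t translate(pte_t pte, uint32_t offset) {
        return (PTE_FRAME(pte) << 12) | offset;   /* frame# . offset */
    }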
------------------------------

Tree-structured page tables (older SPARC, x86 [both Intel and AMD])

Two or three levels deep for 32-bit; 5-6 levels deep for 64-bit.  Kept
in main memory.  A special register (visible only in kernel mode) holds
the root pointer.  The contents of the root pointer register and all
internal tree pointers are physical addresses.

All pages that are resident in memory must be represented by the tree.
Big parts of the tree can be left out if
    the corresponding pages are not resident and that part of the
        address space is missing;
    that part of the tree is paged out to disk; or
    the kernel has chosen to represent that part of the address space
        with some other (presumably more compact) data structure

Whenever there is a TLB (page table cache) miss and the needed part of
the tree is missing (or is present but marked invalid or protected
against the requested access), the translation hardware generates an
exception and the OS has to handle it.  If the needed part of the tree
is missing, the OS pages it in from disk or reconstructs it from some
more compact form, then (assuming permissions say the access should be
allowed, so this was a demand page fault) brings in the page from disk
and restarts the pipeline at the instruction that caused the exception.
The TLB will miss again, but now, because the tree is in place, the
translation hardware will be able to fill it.

The more levels the tree has, the more opportunity you have to leave
out chunks of the tree, but the more memory accesses it takes to
satisfy a TLB miss.

Plausible example for a 3-level table with 32-bit virtual addresses
(field widths in bits):

         6       6       7            13
     +-------+-------+-------+-----------------+
     | idx 1 | idx 2 | idx 3 |     offset      |    virtual address
     +-------+-------+-------+-----------------+

The root register points to a 64-entry top-level table; idx 1 selects
an entry there, which points to a 64-entry second-level table; idx 2
selects an entry there, which points to a 128-entry third-level table;
idx 3 selects the leaf entry, which holds the frame #, permissions,
etc.  The 13-bit offset is then appended to the frame number.

------------------------------

TLBs

The page table can be stored in main memory, but looking things up in
such a table on every memory access would slow all loads and stores
(including instruction fetches!) by a factor of two or more, which is
generally unacceptable.  Almost all modern machines keep the most
active translations in an associative set of registers known as the
translation lookaside buffer (TLB) (sometimes called an address
translation cache (ATC)).  So long as a program has reasonable
locality, most translations will be a "TLB hit", using a page table
entry already in the TLB.

The TLB must either be re-loaded (or at least purged) on a context
switch, or else its entries must be tagged with the id of the address
space for which they are valid (tags are common but not universal on
modern machines).

Now what happens on a TLB miss?  We have to look things up in the full
page table.  In some machines hardware does the lookup (and hence
defines the format of the table).  In other machines (e.g. MIPS, Alpha,
and SPARC), HW traps to the OS on a TLB miss, and reloading (and the
format of the tables) is entirely up to the OS.

The dirty bit has to be in the TLB, or else we need a workaround.  It's
nice if you don't have to write TLB entries back to the page table;
MIPS accomplishes this by initially setting every TLB entry read-only,
so you get a fault on the first write.

Small TLBs (32-96 entries) tend to be fully associative.  Larger TLBs
(up to maybe 4K entries) tend to have limited associativity.

The kernel must generally augment hardware-defined page tables, if any,
with additional data structures that describe things like which
processes are sharing the frame.
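
Here is a minimal sketch of what a software TLB-reload handler's table
walk might look like for the 3-level example above (6/6/7-bit indices,
13-bit offset).  It pretends the tables can be reached directly through
pointers, as kernel code effectively can; the structure and names are
illustrative, not any real OS's code.

    #include <stdint.h>

    #define OFF_BITS 13            /* 8KB pages in this example     */
    #define L3_BITS   7
    #define L2_BITS   6
    #define L1_BITS   6

    typedef struct pte {
        unsigned valid : 1;        /* entry (or subtree) present?   */
        unsigned frame : 19;       /* frame # of page or next table
                                      (19 + 13 = 32-bit phys addrs) */
        /* permission, use, dirty bits omitted */
    } pte_t;

    /* frame_to_addr() stands in for the kernel's ability to reach a
       frame given its number. */
    extern void *frame_to_addr(unsigned frame);

    /* returns 1 and fills *frame on success; 0 means page fault */
    int walk(pte_t *root, uint32_t va, unsigned *frame)
    {
        unsigned i1 = (va >> (OFF_BITS + L3_BITS + L2_BITS))
                      & ((1u << L1_BITS) - 1);
        unsigned i2 = (va >> (OFF_BITS + L3_BITS))
                      & ((1u << L2_BITS) - 1);
        unsigned i3 = (va >> OFF_BITS) & ((1u << L3_BITS) - 1);

        pte_t e1 = root[i1];
        if (!e1.valid) return 0;               /* subtree missing    */
        pte_t *l2 = frame_to_addr(e1.frame);

        pte_t e2 = l2[i2];
        if (!e2.valid) return 0;
        pte_t *l3 = frame_to_addr(e2.frame);

        pte_t e3 = l3[i3];
        if (!e3.valid) return 0;               /* page not resident  */

        *frame = e3.frame;                     /* load into the TLB  */
        return 1;
    }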
------------------------------

Efficiently representing information for non-resident pages

Here's the outline of one possible approach.  It's a (*very*)
simplified version of what goes on in the Mach kernel, which underlies
several versions of Unix, including MacOS X.

The virtual address space for a given process is represented by a
linked list called the "address map".  Each entry in the list
represents a portion of the address space that is
    virtually contiguous
    uniformly protected
    backed up by the same file on disk (swap file, a.out file, etc.)
The entry contains (among other things) a pointer to a data structure
describing the file (which may be pointed at by lots of entries in lots
of address maps) and an indication of the offsets within the file at
which the pages of this range of addresses begin and end.  The typical
process has 5-10 list entries in its address map (code, initialized
data, uninitialized data, stack, shared libraries, memory-mapped
files).

On a page fault the kernel knows which process was running.  It walks
down the process's list until it finds the entry (if any) describing
the address range in which the fault occurred.  From there it can find
the file and offset at which the page can be found on disk.

But just because we took a page fault doesn't mean the page isn't in
memory.  If we have SW TLB reload there may be lots of pages in memory
that aren't currently in the TLB.  Even with HW page tables we may
fault on a page that's really in memory (for copy-on-write or other
obscure reasons that you'll learn about in 256).  So the kernel also
maintains a hash table of all memory-resident pages.  Once it finds the
appropriate entry in the address map list, the kernel uses the source
of the data (e.g. name of swap file or a.out file) and the offset of
the page within that source as the key, and does a lookup in the hash
table.  (A sketch of these structures appears just before the Core i7
case study below.)

The physical hardware page tables, if any, are used solely as a "cache"
of the tables maintained by the kernel.  This provides a very nice
degree of hardware independence for the kernel.  Mach pioneered
machine-independent VM.

------------------------------

Interaction of cache and virtual memory

Our pipeline drawings for the "Y86" assumed that we could access memory
for I-fetch or load or store in a single cycle.  This is true if we
have a fairly slow clock and we hit in both the TLB and the L1 cache.
More accurately, in those two places in the pipeline diagram, we're
really doing the following:

    divide address into page number and offset
    look page number up in TLB and return frame number
    if not found
        access page table (in HW or SW)
        load TLB and return frame number
    concatenate frame number and offset to get physical address
    look up in L1 cache
    if not found
        look up in L2 cache
        if not found
            look up in L3 cache
            if not found
                retrieve from memory
                load L3 cache
            load L2 cache
        load L1 cache

Note that even stores generally require access to memory on a miss, to
load the rest of the cache block in which the store occurs
(write-allocate).  If we encounter any of the miss cases we have to
stall the pipeline, or accommodate out-of-order execution.

To allow the double-hit case to fit in a single cycle, many modern
machines have a virtually indexed (but physically tagged) L1 cache, so
they can do address translation and L1 lookup in parallel.
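
Before moving on to the case study, here is a minimal C sketch of the
Mach-style "address map" and resident-page hash table described above.
All names are invented and the real kernel structures are far more
elaborate; 4KB pages are assumed.

    #include <stdint.h>
    #include <stddef.h>

    /* one virtually contiguous, uniformly protected region,
       backed by one file (a.out, swap file, ...) */
    typedef struct area {
        uintptr_t     start, end;      /* [start, end) in virtual space */
        int           prot;            /* protection bits               */
        struct file  *backing;         /* shared descriptor of the file */
        size_t        file_offset;     /* where this region begins      */
        struct area  *next;            /* the "address map" list        */
    } area_t;

    /* resident-page table: key = (backing file, offset), value = frame # */
    extern int resident_lookup(struct file *f, size_t offset,
                               unsigned *frame);

    /* page-fault handling, grossly simplified */
    int handle_fault(area_t *map, uintptr_t va, unsigned *frame)
    {
        for (area_t *a = map; a != NULL; a = a->next) {
            if (va >= a->start && va < a->end) {
                /* round down to a page boundary (4KB assumed) */
                size_t off = a->file_offset
                             + ((va - a->start) & ~(uintptr_t)0xfff);
                if (resident_lookup(a->backing, off, frame))
                    return 0;   /* already in memory: fix tables/TLB   */
                /* else: read page at (a->backing, off) from disk,
                   then retry the faulting instruction */
                return 1;
            }
        }
        return -1;              /* no area covers va: segmentation fault */
    }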
==============================

Core i7/Linux case study

Not really 64-bit: VA is 48 bits (256TB); PA is 52 bits (4PB).
Processor chip has 2-6 cores.

Caches (recall from chap. 6): blocks (at all levels) are 64B
    L1-I (per core)         8-way,  32KB
    L1-D (per core)         8-way,  32KB
    L2 (unified, per core)  8-way,  256KB
    L3 (shared)             16-way, 8-12 MB

Address translation
    DTLB           64 entries,  4-way
    ITLB           128 entries, 4-way
    L2 (unified)   512 entries, 4-way

Internal "QuickPath" point-to-point interconnect among cores and the
memory controller.

------------------------------

Address translation

The 8086 did not have address translation.  The 80286 added a
relatively simple form of translation known as segmentation.  The 80386
added paging.  The segmentation portion of the HW is ignored by Linux.
Take 256 to learn more.

A virtual address is 48 bits: 36 bits of page number and 12 bits of
page offset.  Because the TLBs are (only) 4-way set associative, the
low 5 (for the ITLB) or 4 (for the DTLB) bits of the page number are
used to index into the appropriate set, and the remaining 31 or 32 bits
are used as a tag, matched against the 4 entries in the set.

The HW does TLB reload from 4-level tree-structured page tables.  Each
level of the table has 2^9 = 512 entries.  The root of the table is
process-specific, and is found by following a pointer in a special
register called CR3 (control register 3).  That pointer and all the
pointers in the tree are (the upper 40 bits of) physical addresses
(each table is assumed to be page [12-bit] aligned).

Page table entries contain
    40-bit frame number, or address of the next level of table
    permission bits (R/W/X, K/U)
    whether the next level of the tree is present
    cache policy (write-through/write-back/uncached)
    global mark (don't evict on context switch -- used to map the
        kernel)
    superpage bit (top level only)
    reference bit
    dirty bit (bottom level only)
The 40-bit frame number at the bottom level is concatenated with the
12-bit page offset to give the 52-bit physical address.

The TLB does not contain process ids, so non-global pages have to be
flushed on a context switch.

The page size and L1 cache size and associativity are chosen so that
TLB lookup and L1 cache lookup can happen simultaneously, even though
the caches are not virtually indexed:
    The page offset is 12 bits (4KB page size).
    A cache line is 64 bytes, so the low 6 bits of the address are the
        byte offset within the line; the next 6 bits select the cache
        set.
    The L1 cache has 2^6 = 64 sets (2^15 B capacity / 2^3 ways / 2^6 B
        block size), so the set index bits lie entirely within the page
        offset portion of the virtual address, NOT the page or frame
        number portion.
    The cache can therefore start its lookup immediately; it returns
        the 8 candidate lines, including their physical tags.  These
        are compared to the physical frame number (if any) returned by
        the TLB, for quick selection of the matching line (if any).
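
A small sketch, assuming the geometry just described (48-bit VA,
4-level tables with 9-bit indices, 4KB pages, 64B lines, 64 L1 sets),
of how the various fields fall out of an address.  The macro names are
mine, not Intel's.

    #include <stdint.h>

    #define OFF_BITS   12      /* 4KB pages                  */
    #define IDX_BITS    9      /* 512 entries per table      */
    #define LINE_BITS   6      /* 64B cache lines            */
    #define SET_BITS    6      /* 64 sets in L1 (32KB/8/64B) */

    #define VPO(va)  ((va) & 0xfffu)                        /* page offset  */
    #define VPN(va)  (((va) >> OFF_BITS) & 0xfffffffffull)  /* 36-bit VPN   */

    /* index into the table at a given level of the tree:
       level 1 is the table CR3 points to, level 4 is the leaf table */
    #define PT_IDX(va, level) \
        (((va) >> (OFF_BITS + (4 - (level)) * IDX_BITS)) & 0x1ffu)

    /* L1 set index: bits 6..11 -- entirely inside the 12-bit page
       offset, so the cache can be indexed before translation finishes */
    #define L1_SET(addr)  (((addr) >> LINE_BITS) & ((1u << SET_BITS) - 1))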
------------------------------

Linux virtual memory

    ZZZZ +--------------------------+
          process-specific data structures
          (different for every process)
         +--------------------------+
          mapped one-one with (part of) physical memory
          (I/O space in particular)
          (same for every process)
         +--------------------------+
          kernel code and data
          (same for every process)
    YYYY +==========================+
          user stack
         +--------------------------+
             |
             v

             ^
             |
         +--------------------------+
          shared libraries and memory-mapped files
         +--------------------------+
             ^
             |        brk
         +--------------------------+
          heap
         +--------------------------+
          uninitialized data
         +--------------------------+
          initialized data
         +--------------------------+
          code
    XXXX +--------------------------+
          forbidden
    0000 +--------------------------+

Addresses XXXX, YYYY, and ZZZZ depend on whether we're running 32- or
64-bit.

The kernel maintains address space data structures for each process
similar to those described for the Mach kernel above.  In particular,
there is a linked list of address space "areas", one for each virtually
contiguous address range with the same protections.  To safeguard
against very long lists, Linux superimposes a binary search tree, for
which the list entries are the leaves.

Every "area" is backed by (a portion of) a file on disk.  There are
several subcases:

- Code: Area is read-only, backed by the code portion of an a.out
  (executable) file.  Pages are fetched from disk on demand.

- Read-write, backed by a real file.  This case covers memory-mapped
  file access, an alternative to read/write file semantics.

- Zero-fill "file".  Used for .bss (uninitialized data) and for stack
  and heap.  Initially created by zeroing out some available frame.
  When a dirty zero-fill page is chosen for eviction, it goes into the
  "swap space", a special "anonymous" (unnamed) file.

- Data: Backed by the data portion of an a.out (executable) file.
  Paged out to swap space when dirty and evicted.

Sharing of frames between processes can happen in several ways:

- Regular code: Logically private, but read-only, and thus physically
  sharable as a space-saving optimization.  Benefits from the fact that
  code starts at the same address in all processes.

- "Shared" libraries: Logically private, but physically shared, like
  regular code, except that they may be at different virtual addresses
  in the address spaces of different processes.  Must be compiled
  "position independent", with private companion pages for private data
  and indirection tables for links into regular code or other "shared"
  libraries.

- True shared data: created via mmap(), the alternative file access
  mechanism mentioned above.  (BTW, mmap can also create shared
  zero-fill regions.  Mmap is a Berkeley-ism.  There's an alternative
  set of shared memory facilities from the System V side of the Unix
  family tree; the calls all have names starting with "shm".  Linux
  uses mmap().  Solaris supports both styles.)

- Copy-on-write sharing: Logically private, and read/write, but
  physically shared and mapped as read-only.  On a page fault, create a
  private copy and patch up the page table.  Copy-on-write is great for
  message passing (intra-machine sockets).  It's also great for fork()
  (avoids the need to really create all those copied pages, which
  execve typically throws away immediately afterward).

void *mmap(void *start, size_t length, int prot, int flags,
           int fd, off_t offset);
    start is a *requested* starting virtual address
        typically specified as NULL, which lets the kernel choose
    fd identifies an open file whose contents should be mapped
    offset identifies the offset within the file at which the desired
        data lies
    prot can be any or-ing together of
        PROT_EXEC  PROT_READ  PROT_WRITE  PROT_NONE
    flags can be
        MAP_ANON     fd is ignored (conventionally -1); do zero-fill
                     and page to swap space
        MAP_PRIVATE  copy on write
        MAP_SHARED   truly shared
    returns pointer to allocated space, or MAP_FAILED ((void *) -1) on
        error

int munmap(void *start, size_t length);
    returns 0 if ok, or -1 on error
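
A short usage sketch of the two common cases: mapping an existing file,
and an anonymous zero-fill region.  The file name is arbitrary (any
readable, non-empty file works), error handling is abbreviated, and on
Linux the anonymous flag may be spelled MAP_ANONYMOUS rather than
MAP_ANON.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* 1. map a file copy-on-write and read its first byte */
        int fd = open("/etc/hostname", O_RDONLY);
        struct stat st;
        fstat(fd, &st);
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        printf("first byte: %c\n", p[0]);
        munmap(p, st.st_size);
        close(fd);

        /* 2. anonymous, zero-filled, copy-on-write region (like the heap) */
        size_t len = 1 << 20;                  /* 1MB */
        char *z = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        if (z == MAP_FAILED) { perror("mmap"); return 1; }
        z[0] = 42;                             /* pages fault in on demand */
        munmap(z, len);
        return 0;
    }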
------------------------------

Inverted page tables (not in the text)

Used in the PowerPC and various other machines.

Basic idea: the page table represents resident pages only -- basically
an entry per _frame_ instead of per page, and a single global table,
rather than one per process.  Conceptually, if it were fully
associative, we'd ask all the slots, simultaneously, "does anybody have
virtual page X for process Y?"

We don't have associativity in memory, so we use hashing instead.  A
naive approach would have one slot (bucket) per frame, and the index of
the bucket (after resolving any collisions) would be, implicitly, the
frame #.  That's why it's called an "inverted" table: one could (if one
wanted) map directly from frame number (index) to page number (tag).
Unfortunately, since we want most frames to be in use, the naive
implementation would imply a really full table, which doesn't perform
well.

A common fix is to augment the frame table with a separate hash table.
The hash table is longer than the frame table, but each entry contains
only a frame number.  Each entry in the frame table contains the tag
(process id and page number), permission bits, and use and dirty bits.
On a tag mismatch we rehash in the hash table and try again, until we
miss entirely (no frame # present), at which point we generate a page
fault exception.  As usual, the OS has to have some other way (likely
"segment" based) to find the locations on disk of non-resident pages.

[One possible problem: this scheme doesn't allow a process to have two
 virtual addresses for the same page.  If the OS insists on that (as
 occasionally they do), the alternatives will keep faulting each other
 out.]

==============================

Dynamic storage management (malloc/free, etc.)

The heap, in Linux, runs from the top of bss to the per-process brk
pointer, maintained by the kernel.

Explicit reclamation v. garbage collection
    explicit
        generally faster
        subject to dangling reference and storage leak bugs
    automatic
        avoids bugs, makes code simpler and (usually) more robust
        slower
        imperfect: useless != inaccessible

Standard routines:

    #include <stdlib.h>

    void *malloc(size_t size);
        returns pointer, or NULL on error (and sets errno)
    void free(void *ptr);
        no return value; fails silently
        behavior undefined if ptr was not previously returned by
        malloc, calloc, or realloc
    void *calloc(size_t nmemb, size_t size);
        does zero-fill (malloc doesn't)
        handles arrays explicitly (have to multiply yourself w/ malloc)
    void *realloc(void *ptr, size_t size);
        changes size (usually by moving; note that this breaks existing
        ptrs)

    #include <unistd.h>

    void *sbrk(intptr_t incr);
        returns old brk, or (void *) -1 on error

Goals:
    functionality
        in-order response
        aligned blocks
        space self-sufficiency (all but a constant amount of space
            contained in the heap itself)
    correctness
        no storage leaks
        no overlapped blocks
        no premature reclamation
        no movement or modification (by the library) of allocated
            blocks
    speed
        can't generally make malloc constant time in the worst case,
        but it's easy to be linear in the number of current free
        (disjoint) blocks, and possible to come close to constant time
        in practice with segregated free lists, if you're willing to
        burn some space (see below).  free is easily constant time.
    space efficiency
        low bookkeeping overhead
        low fragmentation
        the authors define "peak utilization" at the end of k requests
        as

            U_k = (max_{i<=k} P_i) / H_k

        where P_i is the total net asked-for space after i requests and
        H_k is the total space obtained from the OS
    long-term stability
        no climb in time overhead, space overhead, or fragmentation
        (can't avoid a worst-case climb in external fragmentation
        unless you have compaction)
        resilience to varying request sequences

Internal and external fragmentation
    neither includes bookkeeping overhead, though that figures into
        utilization
    the latter is hard to quantify
        how do I know whether this is bad?  depends on future requests
        rule of thumb: prefer a small number of large blocks to a large
        number of small blocks
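
A brief usage sketch of the standard routines listed above,
illustrating the points in the comments (calloc zero-fills, realloc may
move the block); nothing here is specific to any one allocator.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(10 * sizeof(int));    /* contents are garbage  */
        int *b = calloc(10, sizeof(int));     /* contents are all zero */
        if (!a || !b) { perror("alloc"); exit(1); }

        a[0] = 1;
        int *old = a;                         /* kept only to show the hazard */
        a = realloc(a, 1000 * sizeof(int));   /* may move the block ...       */
        if (!a) { perror("realloc"); exit(1); }
        /* ... in which case 'old' is now dangling; use only 'a' from here */
        (void) old;

        free(a);
        free(b);
        /* calling free(b) again here would be undefined behavior */
        return 0;
    }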
Simplest possible (?) allocator
    high-water mark, no free
    great throughput, terrible utilization (in the face of frees)

Issues in a more complex allocator
    - keeping track of free space
    - where to carve out a new block
    - what to do with left-over space
    - how to re-combine adjacent free blocks

Slightly fancier allocator (32-bit machine); each block consists of
    one-word header
        high 29 bits give the size of the block, in doublewords
            convention: size includes bookkeeping overhead and
            alignment; adding the size to the address gives the address
            of the corresponding location in the next block
        2 bits unused (zero)
        1 bit to indicate whether the block is free/used
    payload
    padding, if any

    The free list is "implicit" -- embedded in the list of *all*
    blocks, found by using the size fields and the used/free bits (a
    sketch of such headers appears at the end of this section).  Need a
    "used" sentinel block at the end of the heap (also at the
    beginning, for coalescing).  Skipping over used blocks makes
    finding an appropriate unused block slower.

    alignment
        may be needed for blocks that begin with a double.
        may also help with external fragmentation; more later
        note that if the payload is to be 8-byte aligned, a one-word
        header is *not*.  So any block that is a multiple of 8 bytes
        long has (at least) 4 bytes of external frag. between it and
        the next block.  (The book assumes all requests are rounded up
        to a multiple of 8 before adding the header, which strikes me
        as silly.)

    how to choose an appropriate free block
        first fit
        next fit
        best fit
        None of these is strictly better: one can devise request
        sequences that fail with one but not the others.  First fit is
        generally a little faster, sometimes a little better on
        external fragmentation, but often a little worse on overall
        utilization.  Next fit avoids much of the cost of skipping over
        initial used blocks.

Most allocators are fancier; they don't do any of these.

    Split blocks that are too big.  Have a minimum size worth keeping.
    Pad for alignment and to respect the minimum size.

    Coalesce adjacent free blocks.  Can do this at free time, at
    allocate time (when you notice adjacent free blocks during the scan
    for an appropriate block), or more occasionally (e.g. when a search
    for a free block fails).  The advantage of waiting is reduced
    overhead for repeated free/re-allocate of a same-size block.

    Boundary tags to find the free block to the left.  Basically
    duplicate the one-word header in a footer at the end of the block,
    adjacent to the head of the next block.  Note: with immediate
    coalescing you never have to recurse: the neighbor of a free
    neighbor is never free.
        problem: space overhead of footers (an issue only if you don't
            already 8-byte align everything)
        solution: don't exactly duplicate the header.  Rather, keep the
            free/used bit of the footer in the header of the *next*
            block (where we had two extra bits), and keep the size in
            the footer only if the block is free.

    Use sbrk or mmap to get additional space from the OS if needed.

Implementation challenge: allocators require you to work outside the
type system.  *Lots* of casts.  Very little help from semantic checks
in the compiler (in fact, you have to fight them).

Improvements
    delayed coalescing (described above)
    explicit free list
        avoids the overhead of scanning past allocated blocks
        Can insert freed blocks at the end of the list (either end;
        stack order is generally better for external fragmentation and
        cache performance), or in address order.  Some studies suggest
        that address-order first fit has less external fragmentation
        than LIFO-order first fit, but address-order free list
        insertion is not constant time -- linear with a list, log with
        a tree.
        Note: an explicit free list increases the minimum block size,
        since free blocks must hold pointers for a doubly-linked list
        or tree (otherwise you couldn't coalesce in constant time).
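
Here is a minimal sketch of header manipulation in the spirit of the
one-word-header scheme above (and of the macros in the book).  Sizes
are kept in bytes rather than doublewords here, and the names are my
own, not the book's exact ones.

    #include <stdint.h>

    typedef uint32_t word_t;        /* one-word (4-byte) header/footer */

    /* A block looks like:  [header][payload...][padding][opt. footer]
       The header stores the full block size (a multiple of 8, so its
       low 3 bits are free) or'd with an allocated bit in bit 0.      */
    #define PACK(size, alloc)  ((word_t)((size) | (alloc)))
    #define GET_SIZE(hdr)      ((hdr) & ~(word_t)0x7)
    #define GET_ALLOC(hdr)     ((hdr) & 0x1)

    /* given a pointer to a block's header, find the next block's
       header (works because the size counts header, payload, padding,
       and footer)                                                    */
    static word_t *next_block(word_t *hdr) {
        return (word_t *)((char *)hdr + GET_SIZE(*hdr));
    }

    /* first-fit search over the implicit list; the heap ends with a
       size-0 "used" sentinel header                                  */
    static word_t *find_fit(word_t *heap_start, uint32_t need) {
        for (word_t *b = heap_start; GET_SIZE(*b) > 0; b = next_block(b))
            if (!GET_ALLOC(*b) && GET_SIZE(*b) >= need)
                return b;
        return 0;                   /* no free block is big enough */
    }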
Segregated free lists: one for each common size

    simplest approach: "segregated lists"
        no movement of space among sizes
        no splitting
        no coalescing
        no free/used bit
        no header or footer!  (just a next ptr in each free block)
        problem: fragmentation
            live with the internal fragmentation
            address the external fragmentation by moving an
            entirely-empty page to a pool of free pages, available to
            the allocator for any size of block

    slightly fancier: "segregated fits"
        blocks on each list are of approx. the same size
        split a block that would have "too much" internal
            fragmentation; put the left-over part on the appropriate
            free list
        coalesce adjacent free blocks; put the result on the
            appropriate free list
        this is what GNU malloc does

    buddy system: a special case of segregated fits where sizes are all
        powers of two.  Coalesce only with your buddy -- never with a
        same-size neighbor in the other direction.  Given size and
        address, finding the address of the buddy is trivial; no
        footers needed.

    Fibonacci buddy system: a special case of segregated fits where
        sizes are Fibonacci numbers.  Slightly more complex than the
        power-of-two buddy system, but less internal fragmentation,
        because F(n)/F(n-1) -> (1 + sqrt(5))/2 ~= 1.62 (the golden
        ratio), rather than 2 for the buddy system.

common programming trick: if you use a ton of blocks all of the same
size, you may want to maintain your own stack-allocated free list.

------------------------------

Garbage collection

Still a very active area of research.  A very difficult problem.

Goals
    correctness:
        never collect anything usable
        find everything unusable
    don't spend too much time doing it
    avoid the "stop the world" phenomenon
    multiple threads (as in Java) impose additional challenges
        thread safety
        concurrency

Note that "unusable" is defined as "unreachable", not "will never be
used again".  The collector can't evaluate the latter, so if your
program creates a huge structure, keeps a pointer to it, but never uses
it again, you lose.

Reference counting
    circularity problem
Mark-and-sweep collection
not in the book:
    Stop-and-copy collection
    Generational collection
    Conservative collection for non-type-safe languages
Take 254 to learn more.

------------------------------

C Pointer Pitfalls (study section 9.11 in the book!)

Dangling references

Storage leaks

Using something as a pointer when it isn't:
    scanf("%d", my_var)    /* should be &my_var */

Assuming that malloc-ed memory is initialized to zeros:
    char **A = (char **) malloc(10 * sizeof(char *));
        /* should use calloc, or zero it myself */
    ...
    if (A[3] && (!strcmp(A[3], "foo"))) ...

Using gets, which may cause buffer overflow and overwrite a nearby
pointer.

Asking for the wrong size:
    char **A = (char **) malloc(10 * sizeof(char));
        /* should be sizeof(char *) */
    ...
    if (... A[9] ...)

Off-by-one errors:
    char *A[10];
    ...
    for (int i = 0; i <= 10; i++) {    /* should be <, not <= */
        puts(A[i]);
    }

Precedence errors:
    int *p;
    ...
    *p--;    /* should almost certainly be (*p)-- */

Forgetting the scaling of pointer arithmetic:
    int *p, *q;
    ...
    int *copy = (int *) malloc(q-p);   /* should be (q-p) * sizeof(int) */
    for (int *t = copy; p < q; )
        *t++ = *p++;

Allowing a pointer to a local to escape:
    int *foo() {
        int v;
        ...
        return &v;
    }
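
For the record, two common ways to fix that last pitfall (sketches;
the function names are mine): allocate the object on the heap and make
the caller responsible for freeing it, or have the caller supply the
storage.

    #include <stdlib.h>

    /* option 1: heap-allocate; caller must free() the result */
    int *make_int(int initial) {
        int *p = malloc(sizeof(int));
        if (p) *p = initial;
        return p;               /* outlives this call */
    }

    /* option 2: caller provides the storage */
    void init_int(int *out, int initial) {
        *out = initial;
    }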