Lecture notes for CSC 2/456, 2-9-2000 ff

[4-7-95: next year, add some coverage of how to manage swap space.]

Midterm exam will be Wednesday March 1, in class
I'll spend Monday February 28th on review IF YOU BRING QUESTIONS
Spring break is March 6 and 8

------------------------------

Virtual Memory means distinguishing between virtual and physical
addresses, and translating the former into the latter.  It is possible
to provide much more virtual memory than there exists physical memory.
The usual mechanism to do this is called demand paging.

Demand paging

    Motivation

        As mentioned several times, we only need a subset of the
        program at any one time: the memory pages or segments
        containing the currently executing instructions and the data
        they access.  As additional program text and data are needed,
        we can bring them into memory.  This approach is called
        "demand paging".

    Hardware support

        If we access a page that is not in memory, we need an
        indication (from the hardware) that we must bring the page
        into memory.  The "valid" bit in a page table entry tells the
        hardware whether or not a page is in memory.  If it is not,
        the hardware traps to the OS, which can then load the page.
        This trap is called a "page fault".

        Handling a page fault:
            hardware traps to OS with "invalid address fault"
            OS saves registers and other context information
            OS determines address is valid, but page is not in memory
            free page frame (physical memory) is allocated
            a read of the page from its disk copy into the page frame
                is initiated
                < go run something else while the disk responds >
            page table entry is initialized and valid bit set
            restore registers and other context information
            restart offending instruction (assumes hardware support)

        The hardware must support restartable instructions.

        If one instruction in 4 accesses data memory and we sustain,
        say, 400M instructions/second, that's 100M data accesses per
        second.  Add to that 400M instruction accesses, for a total of
        500M memory accesses/s.  If disk reads average 5ms, we could
        burn a second on only 200 page faults.  That means that if
        just 1 of every 2.5M accesses page faults, we spend half our
        time paging!  Page faults have to be *really* rare.

        Some more numbers, just to get your thoughts in the right
        ballpark: with a 4K byte page size and 64MB of memory on your
        machine, that's 16,384 pages.  The typical TLB has about 64
        entries, each of which maps only one page (some TLBs let one
        entry map more than one page), so TLB misses tend to be a
        couple of decimal orders of magnitude more common than page
        faults.

    Thrashing: a degenerate scenario in which we spend more time
    paging than executing.  We want to implement paging (and possibly
    swapping) policies that avoid thrashing.

    NB: Linux and some other systems erroneously use the term
    "swapping" when they really mean "paging".  Solaris sometimes
    talks about "swap space" on disk, but at least that's where you'd
    swap to if you needed to (in addition to paging to it).

Working Set Model

    Principle of Locality

        Pages are not accessed randomly.  At each instant of execution
        a program tends to use only a small set of pages.  As the
        pages in the set change, the program is said to move from one
        phase to another.  The principle of locality states that most
        references will be to the current small set of pages in use.
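        As a concrete illustration -- this example is mine, not part
        of the original notes, and assumes 4KB pages and 4-byte ints --
        consider two ways of summing a large two-dimensional array:

        /* Each 1024-int row spans one 4KB page (assumed page size).
         * The row-major walk brings a page in once and then uses it
         * 1024 times; the column-major walk touches a different page
         * on essentially every reference. */

        #include <stdio.h>

        #define ROWS 1024
        #define COLS 1024          /* 1024 * 4 bytes = 4KB per row */

        static int a[ROWS][COLS];

        long sum_row_major(void)   /* good locality */
        {
            long s = 0;
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    s += a[i][j];
            return s;
        }

        long sum_col_major(void)   /* poor locality */
        {
            long s = 0;
            for (int j = 0; j < COLS; j++)
                for (int i = 0; i < ROWS; i++)
                    s += a[i][j];
            return s;
        }

        int main(void)
        {
            printf("%ld %ld\n", sum_row_major(), sum_col_major());
            return 0;
        }

        If the machine is short of memory, the second loop needs all
        4MB of the array resident at once to avoid re-faulting, while
        the first is happy with a page or two at a time.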
        Examples
            instructions are fetched sequentially (except at
                branches), usually from the same page
            array processing usually proceeds sequentially through
                the array
            functions repeatedly access variables in the top stack
                frame

        Ramification

            If we have locality, we are unlikely to suffer page
            faults continually.  If a page consists of 1000
            instructions in a self-contained loop, we will fault at
            most once to fetch all 1000 instructions.

    Working Set Definition

        The set of pages currently needed by a process is its working
        set.  WS(k) for a process P is the set of pages needed to
        satisfy P's last k page references.  WS(t) is the set of pages
        needed to satisfy a process's page references over the last t
        units of time.  Either definition can be used to capture the
        notion of locality.

    Working Set Policy

        Restrict the number of processes on the ready queue so that
        physical memory can accommodate the working sets of all ready
        processes.  Monitor the working sets of ready processes and,
        when necessary, reduce multiprogramming (i.e. swap) to avoid
        thrashing.

        Note: exact computation of the working set of each process is
        difficult, but it can be estimated by using the reference bits
        maintained by the hardware to implement an aging algorithm for
        pages.

        When loading a process for execution, pre-load certain pages.
        This prevents a process from having to "fault into" its
        working set.  The guess may be rough at program start-up, but
        can be quite accurate on swap-in.

Page Replacement Algorithms

    What if there are no free physical page frames with which to
    service a page fault?  Unless we swap, we need to choose a victim
    page, write it to disk if necessary, and reclaim its frame.  This
    requires a

    Replacement Strategy -- the policy used to select a victim.  (We
    assume the victim is not in active use.)

    A replacement strategy can be global (ANY page can be bumped,
    including pages belonging to another process) or local (a process
    may bump only its own pages).  Global strategies may improve
    overall system performance, but local strategies are needed to
    guarantee a particular level of performance for a given process.

    We can attempt to minimize the page fault rate by
        (1) choosing pages to swap in and out cleverly, and
        (2) reducing multiprogramming when necessary.

    Belady's Min Algorithm:

        Replace the page that will not be used for the longest period
        of time in the future.  Although optimal, it requires future
        knowledge that is usually unavailable.  Useful primarily for
        evaluating other algorithms.

    FIFO:

        Replace the oldest page in memory.  Although easy to
        understand and implement, it doesn't recognize that the time a
        page has spent in memory is independent of how often the page
        is used.  Also, in some uncommon but important cases, the
        number of page faults can actually increase when the number of
        page frames is increased (a phenomenon known as Belady's
        anomaly).

        Example -- consider the reference string (string of references
        to pages) 1,2,3,4,1,2,5,1,2,3,4,5.  With 3 frames we get 9
        faults.  With 4 frames we get 10 faults!  (A simulation of
        this example appears below, after LRU.)  Note that this is the
        shortest such example.  In general, N physical page frames
        will incur one fewer fault than N+K physical page frames,
        where N > K+1, on the reference string
            1,..,N+K, 1,..,N-1, N+K+1, 1,..,N+K+1

    Least-recently-used (LRU):

        Replace the page that has not been used for the longest period
        of time.  LRU is an example of a "stack algorithm", in which
        the set of pages in memory with N frames is always a subset of
        the set of pages in memory with N+1 frames.  Stack algorithms
        cannot suffer from Belady's anomaly.
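        To make the anomaly concrete, here is a small simulator -- my
        own sketch, not part of the original notes -- that replays the
        12-reference string above under both FIFO and LRU:

        /* Counts page faults for FIFO and LRU on the reference string
         * 1,2,3,4,1,2,5,1,2,3,4,5 with a given number of frames.
         * FIFO: 9 faults with 3 frames, 10 with 4 (Belady's anomaly).
         * LRU:  10 faults with 3 frames, 8 with 4 (no anomaly).  C99. */

        #include <stdio.h>

        #define NREFS     12
        #define MAXFRAMES 8

        static const int refs[NREFS] = {1,2,3,4,1,2,5,1,2,3,4,5};

        /* lru != 0 selects LRU; otherwise FIFO.  Returns fault count. */
        static int simulate(int nframes, int lru)
        {
            int frame[MAXFRAMES] = {0}; /* page in each frame, 0 = empty   */
            int stamp[MAXFRAMES];       /* load time (FIFO), last use (LRU) */
            int faults = 0;

            for (int t = 0; t < NREFS; t++) {
                int page = refs[t], hit = -1, victim = 0;

                for (int f = 0; f < nframes; f++) {
                    if (frame[f] == page) hit = f;            /* resident   */
                    if (frame[f] == 0) { victim = f; break; } /* free frame */
                    if (stamp[f] < stamp[victim]) victim = f; /* oldest     */
                }
                if (hit >= 0) {
                    if (lru) stamp[hit] = t;  /* LRU: refresh on every use */
                } else {
                    faults++;
                    frame[victim] = page;
                    stamp[victim] = t;        /* FIFO: stamp only on load  */
                }
            }
            return faults;
        }

        int main(void)
        {
            printf("FIFO: %d faults w/ 3 frames, %d w/ 4\n",
                   simulate(3, 0), simulate(4, 0));
            printf("LRU:  %d faults w/ 3 frames, %d w/ 4\n",
                   simulate(3, 1), simulate(4, 1));
            return 0;
        }

        Adding a frame makes FIFO worse on this string, but (as for
        any stack algorithm) it never hurts LRU.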
    Few machines have the hardware support necessary to know the
    "time" of the last reference to each page.  However, they do
    often provide a "reference bit" (also known as the "used" bit),
    which tells us whether the page has been referenced at all.  The
    reference bit allows us to approximate LRU.  Although we can't
    tell in what order pages were referenced, we can tell which pages
    were referenced and which were not.  Therefore, we can determine
    the least-recently-used (i.e. unused) pages within a particular
    time interval, giving us:

    Not-used-recently (NUR):

        Name used for a class of policies (with lots of possible
        wrinkles) that use the reference bits to approximate LRU.  The
        hardware turns reference bits on.  The OS turns them off
        periodically.  A page may get booted (become a "victim") when
        its reference bit hasn't come on since the OS turned it off.

        We keep frames linked in a FIFO list.  When we need a victim
        we scan the list.  If a frame's reference bit is on, we turn
        it off.  If it is off, we have a victim.  We can think of
        simple NUR as an improvement to FIFO that gives active pages a
        "second chance".  (A sketch in C appears below, after the
        two-handed clock.)

        When does scanning happen?  A naive system might not choose
        any victims until it runs out of frames, but this would slow
        the response to faults quite a bit.  Better approaches keep a
        free frame pool, which fluctuates up and down in size.  We
        look for victims when the clock handler or page fault handler
        notices that the number of free frames has dropped below, say,
        1/4 of memory.  [If this seems like a large fraction, remember
        that dirty pages have to be written out to disk.  If we have a
        sudden need for pages (running a new program, or moving to a
        new phase of an algorithm), we don't want to have to wait to
        get them all out to disk.  The pageout algorithm puts a bunch
        of victims on a queue, then runs/blocks on the disk
        queue/runs/blocks ... until all the dirty ones are out on
        disk.]

        We can control the amount of fluctuation in the free frame
        pool by varying how many victims we choose when we scan.  If
        we scan all of memory and bump every page whose reference bit
        is 0, we may see a LOT of fluctuation.  To get less, we either
            - scan only part of memory (this is the CLOCK algorithm), or
            - scan all of memory, but use more than just the reference
              bit, so we can be pickier

        One way to be pickier is to use both the reference and dirty
        bits.  MacOS does this.  The (ref, dirty) pairs (0,0), (0,1),
        (1,0), and (1,1), in that order, give you the best to worst
        choices to replace.  When it needs to page, MacOS picks frames
        from the lowest non-empty class.

        A fancier way to be pickier is to shift each frame's reference
        bit into the upper end of a software-maintained timestamp.
        Then when we need victim(s) we choose the ones with the
        smallest stamps.

        The problem with simply being pickier is that while you're
        choosing fewer victims (for stability) you're still doing work
        proportional to the total number of frames in the system.

        Clock fixes this.  It scans, picking up where it left off last
        time, until it finds enough victims to bring the pool above,
        say, 1/3 of memory, then quits.

        Note that if the interval between scans is too big, we will
        find most reference bits set, and we'll end up doing more work
        to find a given number of victims.  If the interval between
        scans is relatively small, we'll have lots of frames to choose
        from, and may not make great choices.

        We can tune the victim rate by using a TWO-HANDED CLOCK
        algorithm, introduced by Berkeley Unix.  The two hands are
        separated by some constant number of frames.  The hand in the
        lead clears reference bits; the trailing hand inspects them.
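        Here is the promised sketch in C of the clock-style scan.  It
        is mine, not from the original notes, and the structure and
        helper names (queue_for_pageout, add_to_free_pool) are
        hypothetical:

        /* Clock-style victim selection: frames sit on a circular
         * list; the hand clears reference bits that are set and
         * claims frames whose bits are already clear.  (The
         * two-handed variant adds a leading hand that only clears
         * bits; this hand then inspects them from a fixed distance
         * behind.) */

        struct page;                     /* opaque */

        struct frame {
            int           ref;           /* "used" bit, set by hardware   */
            int           dirty;         /* modified since it was loaded  */
            struct page  *page;          /* page currently in this frame  */
            struct frame *next;          /* circular list of all frames   */
        };

        /* hypothetical helpers */
        void queue_for_pageout(struct frame *f); /* write out, then free */
        void add_to_free_pool(struct frame *f);  /* reclaim immediately  */

        static struct frame *hand;       /* where the last scan left off */

        /* Claim 'want' victims; called when the free pool gets small. */
        void choose_victims(int want)
        {
            while (want > 0) {
                if (hand->ref) {
                    hand->ref = 0;              /* second chance */
                } else {
                    if (hand->dirty)
                        queue_for_pageout(hand);
                    else
                        add_to_free_pool(hand);
                    want--;
                }
                hand = hand->next;
            }
        }

        A real pageout daemon would also skip wired pages and pages
        in active use; this shows only the basic scan.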
        [Note that the trailing hand had better not overtake the
        starting position of the leading hand during a single
        invocation of the algorithm, or we revert to FIFO.  The
        intuition is that with one hand and a really big memory you'll
        always find most of the bits on.  The additional hand
        effectively shrinks the memory: outside the interval between
        the two hands, the bits are mostly on.]

    WSClock: a hybrid that uses NUR and a working set policy.

        Maintain multi-bit reference "times" in software (as described
        above) by scanning the whole frame list periodically.  When
        you need victims, run clock, but don't pick a victim that
        appears to be in the current process's working set -- i.e. one
        that is in the current process's address space and has a
        non-zero reference "time".

    Useful wrinkles:

        (1) Keep track of what is in the free frame pool, so if a page
            in it is needed again you can re-use it without going to
            disk.
        (2) Write dirty pages to disk "in the background", with a
            temporarily reduced pool, so the faulting process can
            restart as soon as possible.
        (3) Write dirty pages to disk "in the background"
            periodically, so they will already be clean when you need
            to get rid of them.

    Suppose your hardware isn't helpful?

        If you can't restart instructions, you can't demand page -- or
        else you have to resort to tricks like a shadow processor.
        PDP-11s and 68000s suffer from this problem.

        If you don't have a dirty bit, you have to assume every page
        is dirty.  Ouch.  I don't know of a machine that does that to
        you.

        If you don't have a reference bit, you can run clock on the
        dirty bits, but this amounts to FIFO for read-only pages.  The
        VAX 780 has this weakness.  VMS uses FIFO for all pages, and
        uses a large free pool to recover from its mistakes.  BSD
        Unix's "two-handed clock" on the VAX simulates reference bits
        in software: when it resets them it unmaps the page; it sets
        them in response to subsequent page faults.  Performance
        studies indicate that this works surprisingly well.
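        As a rough sketch of that trick (mine, not actual BSD code;
        the types and field names are hypothetical), "clearing" the
        simulated reference bit also invalidates the mapping, and the
        fault handler sets the bit and re-validates the mapping
        without doing any disk I/O:

        struct pte {                 /* simplified page table entry */
            unsigned valid : 1;      /* checked by hardware on each access */
            unsigned frame : 20;     /* physical frame number */
        };

        struct soft_bits {
            unsigned ref      : 1;   /* simulated reference bit */
            unsigned resident : 1;   /* page really is in memory */
        };

        /* Called by the clock hand instead of "clear the ref bit". */
        void clear_soft_ref(struct pte *pte, struct soft_bits *soft)
        {
            soft->ref  = 0;
            pte->valid = 0;          /* force a trap on the next reference */
        }

        /* Called early in the page fault handler. */
        int reclaim_fault(struct pte *pte, struct soft_bits *soft)
        {
            if (soft->resident) {    /* "soft" fault: page is still here */
                soft->ref  = 1;      /* record the reference */
                pte->valid = 1;      /* restore the mapping and restart  */
                return 1;
            }
            return 0;                /* genuine fault: go do real paging */
        }

        The extra traps are cheap compared to disk I/O, which is
        presumably why the performance studies mentioned above find
        the scheme works well.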