Chapter 10: Virtual Memory. Topics: Background, Demand Paging, Process Creation, Page Replacement, Allocation of Frames, Thrashing, Operating System Examples
Slide 1
Background
Only part of the program needs to be in memory for execution (error routines, rarely used portions of arrays and lists, etc.). The logical address space can therefore be much larger than the physical address space. Virtual memory also allows address spaces to be shared by several processes and allows for more efficient process creation.
Slide 2
Slide 3
Demand Paging
Bring a page into memory from secondary storage only when it is needed
Benefits: less I/O needed, less memory needed, faster response, more users.
Valid-Invalid Bit
With each page-table entry a valid-invalid bit is associated (1 = in memory, 0 = not in memory). Initially the valid-invalid bit is set to 0 on all entries.
Example of a page table snapshot:

Frame #   valid-invalid bit
   .              1
   .              1
   .              1
   .              1
                  0
                  0
                  0

page table
Slide 5
Slide 6
Page Fault
On the very first reference to a page, the reference traps to the OS: a page fault. The OS looks at another table (kept in the PCB) to decide: an invalid reference means abort; a valid page that is just not in memory must be brought in:
Get an empty frame. Read the page into the frame. Reset the tables, setting the validation bit = 1. Restart the instruction that was interrupted.
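The page-fault steps above can be sketched in Java. This is a toy model, not OS code: the class and field names (`DemandPager`, `validBit`, `freeFrames`) are invented for illustration, the disk read is omitted, and a free frame is assumed to be available.

```java
import java.util.*;

class DemandPager {
    static final int INVALID = 0, VALID = 1;
    int[] validBit;            // one valid-invalid bit per page, all 0 initially
    int[] frameOf;             // frame number for each in-memory page
    Deque<Integer> freeFrames; // pool of empty frames
    int pageFaults = 0;

    DemandPager(int pages, int frames) {
        validBit = new int[pages];
        frameOf = new int[pages];
        freeFrames = new ArrayDeque<>();
        for (int f = 0; f < frames; f++) freeFrames.add(f);
    }

    int access(int page) {
        if (validBit[page] == INVALID) {      // first reference traps: page fault
            pageFaults++;
            int frame = freeFrames.remove();  // 1. get an empty frame
            // 2. read the page from disk into the frame (omitted in this sketch)
            frameOf[page] = frame;            // 3. reset tables,
            validBit[page] = VALID;           //    validation bit = 1
        }                                     // 4. restart the instruction
        return frameOf[page];
    }

    public static void main(String[] args) {
        DemandPager p = new DemandPager(8, 4);
        p.access(3); p.access(3); p.access(5);
        System.out.println(p.pageFaults);     // 2: first touches fault, the repeat hits
    }
}
```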
Silberschatz / OS Concepts / 6e Chapter 10 Virtual Memory Slide 7
Slide 8
Paging Issues
Never bring a page into memory until it is required. This can lead to unacceptable system performance, particularly when a page fault results in the access of several pages (one for instructions and several for data). Analysis has shown that programs tend to have locality of reference, which results in reasonable performance.
Slide 9
Page replacement: find some page in memory that is not really in use and swap it out.
Algorithm performance: we want an algorithm that results in the minimum number of page faults.
Slide 10
Memory access time = 1 μsec. Swap page time = 10 msec = 10,000 μsec. 50% of the time the page being replaced has been modified and must itself be swapped out, so the average page-fault service time is 1.5 × 10,000 = 15,000 μsec.
EAT = (1 − p) × 1 + p × 15,000 ≈ 1 + 15,000p (in μsec)
Slide 11
EAT is directly proportional to the page-fault rate. If p = .001 (1 page fault per 1,000 accesses), EAT ≈ 16 μsec, 16 times the memory access time. To keep EAT within 10% of the memory access time, p must be below about .0000067, roughly 1 fault per 150,000 accesses.
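The EAT formula can be checked numerically. A minimal sketch, using the slide's assumed numbers (1 μsec memory access, 15,000 μsec average page-fault service time); the class and method names are invented:

```java
class Eat {
    // EAT = (1 - p) * memoryAccess + p * faultService, all in the same time unit
    static double eat(double p, double mat, double service) {
        return (1 - p) * mat + p * service;
    }

    public static void main(String[] args) {
        // (1 - 0.001) * 1 + 0.001 * 15000 = 15.999 usec
        System.out.println(eat(0.001, 1.0, 15_000.0));
    }
}
```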
Slide 12
Page Replacement
Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
A process may have 10 pages but only 5 in use; demand paging brings in only those 5. This lets us bring more programs into memory, increasing the degree of multiprogramming by over-allocating memory: with 40 frames we can run 8 processes using 5 pages each instead of 4 processes using 10 pages each.
What happens when a process suddenly needs all 10 pages and all memory is in use?
Terminate the process? No: it is running normally.
Swap out a process? This will be considered later.
Page replacement.
Slide 13
Slide 14
1. Find the location of the desired page on disk.
2. Find a free frame: if there is a free frame, use it; if not, use a page-replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame.
4. Update the page and frame tables.
5. Restart the process.
Use the modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk. Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory.
Slide 15
Page Replacement
Slide 16
We want the lowest page-fault rate. Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string. In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Slide 17
Slide 18
Slide 19
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

3 frames (3 pages can be in memory at a time per process):
frame 1:  1  4  5
frame 2:  2  1  3
frame 3:  3  2  4      9 page faults

4 frames:
frame 1:  1  5  4
frame 2:  2  1  5
frame 3:  3  2
frame 4:  4  3         10 page faults

FIFO replacement suffers from Belady's anomaly: more frames can produce more page faults, the opposite of what one would expect.
Slide 20
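A minimal FIFO simulator (helper names invented; resident pages are kept in a queue with the oldest at the head) reproduces the counts above, including Belady's anomaly on this reference string:

```java
import java.util.*;

class FifoSim {
    static int faults(int[] refs, int frames) {
        Deque<Integer> memory = new ArrayDeque<>(); // oldest page at the head
        int faults = 0;
        for (int page : refs) {
            if (!memory.contains(page)) {           // page fault
                faults++;
                if (memory.size() == frames)
                    memory.removeFirst();           // evict the oldest page
                memory.addLast(page);
            }
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println(faults(refs, 3)); // 9
        System.out.println(faults(refs, 4)); // 10: Belady's anomaly
    }
}
```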
Slide 21
Optimal Algorithm
Replace the page that will not be used for the longest period of time.
4-frame example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
frame 1:  1  4
frame 2:  2
frame 3:  3
frame 4:  4  5         6 page faults

How do you know which page will not be used? This requires future knowledge, so the optimal algorithm is used as a yardstick to evaluate how well other algorithms perform.
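Though unimplementable online, OPT is easy to simulate offline. A sketch (names invented): the victim is the resident page whose next use lies farthest in the future, or that is never used again.

```java
class OptSim {
    static int faults(int[] refs, int frames) {
        java.util.List<Integer> memory = new java.util.ArrayList<>();
        int faults = 0;
        for (int i = 0; i < refs.length; i++) {
            int page = refs[i];
            if (memory.contains(page)) continue;      // hit
            faults++;
            if (memory.size() < frames) { memory.add(page); continue; }
            // choose the victim whose next use is farthest away (or never)
            int victim = 0, farthest = -1;
            for (int m = 0; m < memory.size(); m++) {
                int next = refs.length;               // "never used again"
                for (int j = i + 1; j < refs.length; j++)
                    if (refs[j] == memory.get(m)) { next = j; break; }
                if (next > farthest) { farthest = next; victim = m; }
            }
            memory.set(victim, page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println(faults(refs, 4)); // 6, matching the slide
    }
}
```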
Slide 22
Slide 23
Least Recently Used (LRU) Algorithm
The optimal algorithm uses the time when a page is to be used next; FIFO uses the time when a page was brought into memory. LRU uses the recent past as an approximation of the near future: replace the page that has not been used for the longest period of time.
Slide 24
Counter implementation: every page-table entry has a time-of-use counter; every time the page is referenced through this entry, the clock is copied into the counter. When a page needs to be replaced, examine the counters to find the entry with the smallest value and replace that page.
Slide 25
Slide 26
Stack implementation
Keep a stack of page numbers in a doubly linked form. When a page is referenced, move it to the top of the stack; since the page may be in the middle of the list, the double links (with head and tail pointers) are needed, and at most 6 pointers must be changed.
Optimal and LRU replacement do not suffer from Belady's anomaly. The class of page-replacement algorithms called stack algorithms never exhibits Belady's anomaly: it can be shown that the set of pages in memory with n frames is always a subset of the set of pages that would be in memory with n + 1 frames.
LRU requires special hardware to update the clock or stack at each memory reference; otherwise the overhead would slow processing too much.
Slide 27
Slide 28
Reference bit
With each page associate a bit, initially 0. When the page is referenced (read or write), the bit is set to 1. Replace a page whose bit is 0, if one exists. We do not know the order of use, however.
Additional-Reference-Bit Algorithm
Keep an 8-bit byte for each page. At regular intervals (e.g. every 100 msec) a timer interrupts and the OS shifts each page's reference bit into the high-order bit of its byte, shifting the other bits one position right and discarding the low-order bit. The shift register then contains the history of page use for the last 8 time intervals. The page with the lowest numeric value is the one replaced; if there are ties, use FIFO among them or swap out all of them.
Slide 29
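The shift-register update is a one-line bit operation; here is a small sketch (class and method names invented, 4 intervals shown instead of 8 for brevity):

```java
class Aging {
    // Shift the history right and insert the current reference bit at the top.
    static int shift(int register, int refBit) {
        return ((register >>> 1) | (refBit << 7)) & 0xFF;
    }

    public static void main(String[] args) {
        int a = 0, b = 0;
        int[] refsA = {1, 1, 1, 1};  // page A referenced in every interval
        int[] refsB = {1, 0, 0, 0};  // page B referenced only in the first
        for (int t = 0; t < 4; t++) {
            a = shift(a, refsA[t]);
            b = shift(b, refsB[t]);
        }
        System.out.println(a);       // 240 = 0b11110000
        System.out.println(b);       // 16  = 0b00010000: lower value, replaced first
    }
}
```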
Slide 30
Slide 31
Enhanced second-chance algorithm: consider the (reference bit, modify bit) ordered pair, giving four classes:
(0,0) neither recently used nor modified: excellent candidate for replacement.
(0,1) not recently used but modified: not as good as (0,0), since the page will have to be written out before being replaced.
(1,0) recently used but clean: probably will be used again soon.
(1,1) recently used and modified: probably will be used again, and if replaced will need to be written out.
Use the same scheme as the clock algorithm but examine the ordered pair instead of just the reference bit; replace the first page encountered in the lowest nonempty class.
Slide 32
Counting Algorithms
Keep a counter of the number of references that have been made to each page.
Least Frequently Used (LFU): replace the page with the smallest count.
Problem: a page used heavily in the initial phase of a process will have a large count and remain in memory even though it may not be used again. Solution: age pages as they remain in memory by periodically shifting the counts 1 bit to the right.
Most Frequently Used (MFU): based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Neither LFU nor MFU is commonly used: the implementation is expensive and neither approximates OPT well.
Slide 33
Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another.
A process is then no longer in control of its own page-fault rate. Global replacement usually results in greater system throughput and is therefore more commonly used.
Local replacement: each process selects only from its own set of allocated frames.
This does not take advantage of less-used pages in other processes, which could otherwise improve overall system throughput.
Slide 34
Thrashing
If a process does not have enough frames for its pages, the page-fault rate is very high. This leads to:
low CPU utilization;
the operating system thinks it needs to increase the degree of multiprogramming;
another process is added to the system, making things worse.
Thrashing: a process is busy swapping pages in and out rather than executing.
Slide 35
Thrashing
Locality model
A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap. Locality is defined by the program structure and its data structures. As a process executes, it migrates from one locality to another (e.g. when it calls a subroutine).
Why does thrashing occur? The size of the current locality exceeds the number of frames allocated.
Slide 36
Slide 37
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions.
WSSi (working set size of process Pi) = total number of pages referenced in the most recent Δ references (varies over time); an approximation of the program's locality.
If Δ is too small, it will not encompass the entire locality; if Δ is too large, it will encompass several localities; if Δ = ∞, it will encompass the entire program.
D = Σ WSSi ≡ total demand for frames. If D > m (the number of available frames), thrashing occurs.
Policy: if D > m, suspend one of the processes, writing its frames to disk and reallocating those frames to other processes. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible.
Slide 38
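The working set for a given Δ can be computed directly from a reference string; a sketch with a made-up reference string and window (names invented):

```java
import java.util.*;

class WorkingSet {
    // Working set at time t: distinct pages in the most recent delta references.
    static Set<Integer> ws(int[] refs, int t, int delta) {
        Set<Integer> set = new HashSet<>();
        for (int i = Math.max(0, t - delta + 1); i <= t; i++)
            set.add(refs[i]);
        return set;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 1, 2, 3, 2, 3, 4, 4, 4};
        // Delta = 5: the working set drifts as the locality changes
        System.out.println(ws(refs, 4, 5)); // pages {1, 2, 3}, WSS = 3
        System.out.println(ws(refs, 9, 5)); // pages {2, 3, 4}, WSS = 3
    }
}
```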
Working-set model
Slide 39
Keeping track of the working set: it is a moving target, so approximate it with an interval timer plus a reference bit. Example: Δ = 10,000 references.
The timer interrupts every 5,000 time units, and we keep 2 bits in memory for each page. Whenever the timer interrupts, copy each page's reference bit into its in-memory bits and reset all reference bits to 0. If one of the in-memory bits = 1, the page is considered in the working set.
This is not entirely accurate: we cannot tell where within an interval of 5,000 a page was last referenced. Improvement: use 10 bits and interrupt every 1,000 time units, at the cost of more timer-interrupt handling.
Slide 40
Page-fault frequency scheme: establish an acceptable page-fault rate. If the actual rate is too low, the process loses a frame; if the actual rate is too high, the process gains a frame.
Slide 41
Other Considerations
Prepaging: prevent the high level of initial paging by bringing into memory at one time all the pages that will be needed.

Page size selection:
Fragmentation: a bigger page means more wasted space.
Table size: for the same address space, 4,096 pages of 1 MB or 512 pages of 8 MB.
I/O overhead: amount to transfer, transfer rate, latency time; these considerations conflict.
Locality: a smaller page size lets each page match program locality more accurately (better resolution), but a larger page generates fewer page faults.
Slide 42
TLB Reach - The amount of memory accessible from the TLB. TLB Reach = (TLB Size) X (Page Size) Ideally, the working set of each process is stored in the TLB. Otherwise there is a high degree of page faults.
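TLB reach is a simple product; a minimal sketch with assumed sizes (a 64-entry TLB is illustrative, not a value from the slides):

```java
class TlbReach {
    // TLB reach = (number of TLB entries) x (page size), in bytes
    static long reach(long entries, long pageSizeBytes) {
        return entries * pageSizeBytes;
    }

    public static void main(String[] args) {
        System.out.println(reach(64, 4 * 1024));        // 262144 bytes = 256 KB
        System.out.println(reach(64, 8L * 1024 * 1024)); // larger pages widen reach
    }
}
```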
Slide 43
Increase the page size: this may lead to an increase in fragmentation, as not all applications require a large page size. Provide multiple page sizes: this allows applications that require larger page sizes to use them without an increase in fragmentation, but the OS must then manage the TLB in software at some cost to performance.
Slide 44
Program structure
int[][] A = new int[1024][1024];
Each row is stored in one page.

Program 1:
for (int j = 0; j < A.length; j++)
    for (int i = 0; i < A.length; i++)
        A[i][j] = 0;
→ 1024 × 1024 page faults (column-major access touches a different page on every reference)

Program 2:
for (int i = 0; i < A.length; i++)
    for (int j = 0; j < A.length; j++)
        A[i][j] = 0;
→ 1024 page faults (row-major access matches the storage layout)
Slide 45