
Operating Systems

Segments

Programs need to share data on a controlled basis. Examples: all processes should use
the same copy of the compiler. Two processes may wish to share part of their data.

Program needs to treat different pieces of its memory differently. Examples: process
should be able to execute its code, but not its data. Process should be able to write its
data, but not its code. Process should share part of memory with other processes, but
not all of memory. Some memory may need to be exported read-only, other memory
exported read/write.

Mechanism to support treating different pieces of address space separately: segments.


Program's memory is structured as a set of segments. Each segment is a variable-sized
chunk of memory. An address is a segment, offset pair. Each segment has protection
bits that specify which kind of accesses can be performed. Typically will have
something like read, write and execute bits.

Where are segments stored in physical memory? One alternative: each segment is
stored contiguously in physical memory. Each segment has a base and a bound. So,
each process has a segment table giving the base, bounds and protection bits for each
segment.

How does program generate an address containing a segment identifier? There are
several ways:
o Top bits of address specify segment, low bits specify offset.
o Instruction implicitly specifies segment, e.g. code vs. data vs. stack.
o Current data segment stored in a register.
o Store several segment ids in registers; instruction specifies which one.
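The first scheme above (top bits select the segment) can be sketched as follows; the 4/28 bit split is purely illustrative, not any particular architecture:

```python
# Sketch: split a 32-bit virtual address into (segment, offset),
# assuming the top 4 bits select one of 16 segments (illustrative sizes).
SEG_BITS = 4
OFFSET_BITS = 32 - SEG_BITS

def split_address(addr: int) -> tuple[int, int]:
    segment = addr >> OFFSET_BITS               # top 4 bits
    offset = addr & ((1 << OFFSET_BITS) - 1)    # low 28 bits
    return segment, offset

seg, off = split_address(0x2000_0010)
print(seg, hex(off))  # 2 0x10
```

Note the trade-off this fixes in hardware: the split between segment bits and offset bits caps both the number of segments and the maximum segment size.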

What does address translation mechanism look like now?


o Find base and bound for segment id.
o Add base to offset.
o Check that offset < bound.
o Check that access permissions match actual access.
o Reference the generated physical address.
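The steps above can be sketched in Python; the table layout and permission strings are illustrative assumptions, not a specific hardware format:

```python
# Sketch of segment-based address translation: look up base/bound/perms,
# check the bound and the access type, then form the physical address.
class SegFault(Exception):
    pass

# segment table: segment id -> (base, bound, permission string)
seg_table = {0: (0x1000, 0x400, "rx"),   # code: read/execute
             1: (0x8000, 0x200, "rw")}   # data: read/write

def translate(seg_id: int, offset: int, access: str) -> int:
    base, bound, perms = seg_table[seg_id]
    if offset >= bound:                  # bounds check
        raise SegFault("offset out of bounds")
    if access not in perms:              # protection check
        raise SegFault(f"{access} access not permitted")
    return base + offset                 # physical address

print(hex(translate(1, 0x10, "w")))  # 0x8010
```

Writing to the code segment (`translate(0, 0x10, "w")`) or exceeding a bound raises the fault, which is the hardware trap the OS would field.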
How can this be fast enough? There are several parts to the strategy:


o Segment table cache stored in fast memory. Typically fully associative.


o Full segment table stored in physical memory. If the segment id misses in the
cache, get from physical memory and reload the cache.

OS may need to reference data from any process. How is this done? One way: OS
runs with address translation turned off. Reserve certain parts of physical memory to
hold OS data structures (buffers, PCB's, etc.).

How do user and OS communicate? Via shared memory. But, OS must manually
apply translation any time the user program gives it a pointer to data. Example: the
Exec(file) system call in Nachos.

What must OS do to manage segments?


o Keep copy of segment table in PCB.
o When create process, allocate space for segments, fill in base and bounds
registers.
o When switch contexts, switch segment information state in hardware.
Examples: may invalidate segment id cache.

What about memory management? Segments come in variable sized chunks, so must
allocate physical memory in variable sized chunks. Can use a variety of heuristics:
first fit, best fit, etc. All suffer from fragmentation (external fragmentation).
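The two heuristics can be compared with a small sketch over a free list of holes; the `(start, length)` representation is an assumption for illustration (a real allocator would also split and coalesce holes):

```python
# First fit: take the first hole big enough.
# Best fit: take the smallest hole big enough (leaves the tightest leftover).
def first_fit(holes, size):
    for start, length in holes:
        if length >= size:
            return start
    return None  # no hole fits: must compact or swap out

def best_fit(holes, size):
    candidates = [(length, start) for start, length in holes if length >= size]
    return min(candidates)[1] if candidates else None

holes = [(0, 100), (300, 50), (600, 400)]
print(first_fit(holes, 50))  # 0   (first hole large enough)
print(best_fit(holes, 50))   # 300 (tightest fitting hole)
```

Neither heuristic avoids external fragmentation; they only differ in how the leftover holes are shaped.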

What to do when must allocate a segment and it doesn't fit given segments that are
already resident? Have several options:
o Can compact segments. Copy segments to contiguous physical memory
locations so that small holes are collected into one big hole. Notice that this
changes the physical memory locations of segment's data. What must OS do to
implement new translation?
o Can push segments out to disk. But, must provide a mechanism to detect a
reference to the swapped segment. When the reference happens, will then
reload from disk.

What happens when must enlarge a segment? (This can happen if user needs to
dynamically allocate more memory). If lucky, there is a hole above the segment and
can just increment bound for that segment. If not, maybe can move to a larger hole
where the new size fits. If not, may have to compact or swap out segments.

Protection: How does one process ensure that no other process can access its
memory? Make sure OS never creates a segment table entry that points to same
physical memory.


Sharing: How do processes share memory? Typically at segment level. Segment
tables of the two processes point to the same physical memory.

What about protection for a shared segment? What if one process only wants other
processes to read segment? Typically have access bits in segment table and OS can
make segment read only in one process.

Naming: Processes must name segments created and manipulated by other processes.
Typically have a name space for segments; processes export segments for other
processes to use under given names. In Multics, had a tree structured segment name
space and segments were persistent across process invocations.

Efficiency: Segmentation is efficient. The segment table lookup imposes little time
overhead, and segment tables tend to be small, imposing little memory overhead.

Granularity: allows processes to specify which memory they share with other
processes. But the requirement that a whole segment be either resident or not limits
the flexibility of OS memory allocation.

Advantages of segmentation:
o Can share data in a controlled way with appropriate protection mechanisms.
o Can move segments independently.
o Can put segments on disk independently.
o Is a nice abstraction for sharing data. In fact, the abstraction is often preserved as
a software concept in systems that use other hardware mechanisms to share
data.

Problems with segmentation:


o Fragmentation and complicated memory management.
o Whole segment must be resident or not. Allocation granularity may be too
large for efficient memory utilization. Example: have a big segment but only
access a small part of it for a long time. Waste memory used to hold the rest.
o Potentially bad address space utilization if have fixed size segment id field in
addresses. If have few segments, waste bits in segment field. If have small
segments, waste bits in offset field.
o Must be sure to make offset field large enough. See 8086.

Page Replacement Algorithms



Operating systems that use paging to manage virtual memory rely on page replacement
algorithms. A page replacement algorithm decides which main-memory page to page out to
disk (swap out) in order to free memory for another page.
If all the frames in main memory are full, then to place a new page in main memory we
need to replace an existing page. There are different page replacement algorithms for this
purpose. The page being replaced is referred to as the victim page.
1) FIFO page replacement
2) Optimal page replacement
3) LRU page replacement
4) MRU page replacement
5) Counting-based page replacement: MFU, LFU

1. FIFO Page Replacement


In FIFO (First In First Out) page replacement, when a page must be replaced from
main memory, the oldest page is chosen for replacement. The oldest page is the one that
was brought into memory before all other pages currently in memory. It is implemented
using a queue (FIFO). It is very simple to implement, but its performance is not always good.
Belady's Anomaly: In FIFO page replacement, the general assumption is that if we
increase the number of frames in main memory, the number of page faults
will decrease. When the number of page faults instead increases with the number of frames,
the situation is called Belady's anomaly.
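A short FIFO simulation makes the anomaly concrete. The reference string below is the classic one that exhibits it: going from 3 frames to 4 frames increases the fault count from 9 to 10.

```python
from collections import deque

# FIFO page replacement: on a miss with full frames, evict the page
# that has been resident the longest (front of the arrival queue).
def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- Belady's anomaly
```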
2. Optimal Page Replacement
In optimal page replacement, the idea is to replace the page in memory that is not going to
be used in the near future. It is practically difficult to implement, since it requires prior
knowledge of which pages will be referenced in the near future. If we could correctly
predict the pages that will be referenced in the near future, we could implement this
algorithm efficiently.
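Because it needs the full future reference string, optimal replacement is usable only offline, as a benchmark. A sketch (evict the resident page whose next use lies farthest in the future; pages never used again are evicted first):

```python
# Optimal (MIN) page replacement: requires the whole reference string,
# so it can only be simulated after the fact, as a lower bound on faults.
def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # evict the page whose next use is farthest away;
            # pages with no future use sort past everything else
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # 7
```

On the same reference string used for FIFO above, optimal incurs only 7 faults with 3 frames, versus FIFO's 9.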
3. LRU (Least Recently Used) Page Replacement
In least recently used page replacement, the idea is to replace the page in memory that was
referenced (used) earliest among all pages currently in memory, based on the
page reference pattern. The assumption is that since it has not been used (referenced) for the
longest period of time, it is unlikely to be referenced in the near future.
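One common way to sketch LRU is a recency queue; here an `OrderedDict` plays that role, with the least recently used page at the front (an illustrative simulation, not how hardware tracks recency):

```python
from collections import OrderedDict

# LRU page replacement: on a hit, move the page to the back (most recent);
# on a miss with full frames, evict the front (least recently used).
def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10
```

Unlike FIFO, LRU (a "stack algorithm") cannot exhibit Belady's anomaly: adding frames never increases its fault count.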
4. MRU (Most Recently Used) Page Replacement


In most recently used page replacement, the idea is to replace the page in memory that was
used (referenced) most recently. The assumption here is that since the page was just used, it
might not be used again in the near future.
5. Counting based page replacement: MFU, LFU
MFU (Most Frequently Used) Page Replacement: In MFU page replacement, the idea is to
replace the page in main memory that has been used (referenced) the greatest number of times.
Among the pages in memory, count the number of references to each page in the recent past;
the page to be replaced (the victim page) is the one with the highest count.
LFU (Least Frequently Used) Page Replacement: In LFU page replacement, the idea is to
replace the page in main memory that has been used (referenced) the fewest number of times.
Among the pages in memory, count the number of references to each page in the recent past;
the page to be replaced (the victim page) is the one with the lowest count.
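Both counting policies can share one sketch, since they differ only in whether the victim has the lowest or highest reference count. Ties here are broken arbitrarily by `min`/`max`, and the counts are never aged; both are simplifying assumptions (real systems typically decay the counters):

```python
from collections import Counter

# Counting-based replacement: keep a per-page reference count and evict
# by lowest count (LFU) or highest count (MFU) among resident pages.
def counting_faults(refs, num_frames, policy="lfu"):
    frames, counts, faults = set(), Counter(), 0
    for page in refs:
        counts[page] += 1                   # count every reference
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            pick = min if policy == "lfu" else max
            victim = pick(frames, key=lambda p: counts[p])
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(counting_faults(refs, 3, "lfu"))
print(counting_faults(refs, 3, "mfu"))
```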
