
Introduction

An OS acts as an interface between user programs and computer hardware
Purpose: to provide an environment in which we can execute programs
It is the one program running at all times on the computer, called the kernel
An OS is also called a control program, monitor, executive, or supervisor

A set of program modules that acts as an intermediary between the user of the computer and the computer hardware
Denotes the program modules that govern the control of equipment resources such as processors, main storage, I/O devices, and files
A computer system is divided into four components:
Hardware
Operating system
Application program
Users
Importance of OS
To control computer operation
To allocate resources
To improve efficiency
To decrease cost
Basic concepts and terminology
Computer hardware terminology
Programming terminology
Operating system terminology
Computer hardware terminology

Instructions and data are stored in core memory (main memory)
Connected to main memory are processors that interpret the instructions and perform the operations
CPU: a processor that manipulates data and performs arithmetic operations upon it
Control unit: the hardware's required control circuitry
The OS decides the accessibility to resources
Programming terminology
Software: a collection of programs or data used to perform certain tasks
Program: a sequence of instructions, placed into memory and interpreted by a processor
Programs are stored in secondary storage when not in use
Prepackaged program routines perform predefined tasks, e.g. square roots, sorting
Operating system terminology
User
Job
Process
Job step
Address space
Pure code
Multiprogramming
Privileged instructions
Protection hardware
Interrupt hardware
User: anybody who desires work to be done by a computer
Job: the collection of activities needed to do the work required
Job steps: units of work into which a job is divided; they must be done sequentially
Process or task: a computation that may be done concurrently with other computations
Address space: the collection of programs and data accessed in a process forms its address space
A user submits a job to the operating system
The job is divided into several job steps
Once the OS accepts the user's job, it creates several processes
Refer to diagram 1.2, which depicts two sample address spaces: an I/O process and a CPU process
The actual code for the file-system portion is the same in both
Since the code is shared, locks must be set to prevent race conditions
Pure code (reentrant code): code that does not modify itself
The OS maps the address spaces of processes into physical memory
This is done by special hardware (a paged system) or by software (a swapping system)
Multiprogramming: a system that has several processes in states of execution at the same time
State of execution: the computation has started but has not completed or terminated, i.e. the processor is not necessarily working on the process right now
Privileged instructions: instructions that are not available to ordinary users; there are two states of execution in the computer:
Problem state (user state, slave state)
Supervisor state (executive state, master state): can execute privileged instructions
Protection hardware: controls access to parts of memory, e.g. to prohibit users from altering the OS program
Interrupt hardware: coordinates the operations going on simultaneously

Interrupt: a mechanism by which a processor is forced to take note of an event
An OS as resource manager
Views form a framework for the study of an OS
3 views:
Process view: the sequencing and management of resources
Hierarchical view: demonstrates interrelationships and primitive functions
Extended machine view
These models are extensions of the work of Dijkstra, Donovan, Hansen, Saltzer, and Madnick
Primary view
OS: a collection of programs (algorithms) designed to manage the system resources:
memory, processors, devices, and information (programs and data)
For each resource the OS should:
Keep track of the status of the resource
Decide which process gets it
Allocate it
Reclaim it
Viewing OS as resource manager
Each manager should do
1. Keep track of resources
2. Enforce policy that determines who gets what, when and how much
3. Allocate resource
4. Reclaim the resources

Memory management functions
1. Keep track of the resource (memory): what parts are in use and by whom? What parts are free?
2. If multiprogramming, decide which process gets memory, when, and how much
3. Allocate the resource (memory) when a process requests it and the policy of 2 above allows it
4. Reclaim the resource when the process no longer needs it or has been terminated
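The four functions above can be sketched with a toy block allocator. All names here are invented for illustration (fixed-size blocks, all-or-nothing policy); this is not any real OS interface:

```python
# Hypothetical sketch of the four memory-management functions over a
# pool of fixed-size blocks.

class MemoryManager:
    def __init__(self, num_blocks):
        # Keep track: which block is used by whom (None = free)
        self.owner = [None] * num_blocks

    def allocate(self, pid, count):
        """Allocate `count` free blocks to process `pid` if policy allows."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < count:          # assumed policy: all-or-nothing
            return None
        for i in free[:count]:
            self.owner[i] = pid
        return free[:count]

    def reclaim(self, pid):
        """Reclaim every block held by a finished or terminated process."""
        self.owner = [None if o == pid else o for o in self.owner]

mm = MemoryManager(8)
blocks = mm.allocate("job1", 3)   # job1 gets 3 blocks
mm.reclaim("job1")                # all 3 returned to free status
```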

Processor management functions
1. Keep track of the resource (the processor and its status): done by the traffic controller program
2. Decide who will use the processor: done by the job scheduler, which chooses a job from the submitted jobs. If multiprogramming, decide which process gets the processor, when, and for how long: done by the processor scheduler
3. Allocate the resource (processor) to a process by setting the hardware registers: done by the dispatcher
4. Reclaim the resource (processor) when the process finishes, terminates, or exceeds its allowed amount of usage
Device management functions
1. Keep track of the resources (devices, channels, control units): the I/O traffic controller
2. Decide what an efficient way to allocate the resource (device) is; if it is to be shared, decide who gets it and for how long: I/O scheduling
3. Allocate the resource and initiate the I/O operation
4. Reclaim the resource: I/O usually terminates automatically
5. Devices that cannot be shared are dedicated to a process (e.g. card readers)
6. Shared devices complicate allocation; one answer is virtual devices
7. Example: data punched on a punch card is transferred to disk; at a later time a routine copies the information
8. Virtualizing the card reader and card punch is done by SPOOLing routines
9. The virtual-device approach gives:
1. More flexibility in allocating dedicated devices
2. Flexibility in job scheduling
3. A better match between device speed and request speed
Information management functions
1. Keep track of the resource (information): its location, use, and status; this is called the file system
2. Decide who gets it, enforce protection requirements, and provide accessing routines
3. Allocate the resource (information), e.g. open a file
4. Deallocate the resource, e.g. close a file
An OS: process viewpoint
Life of a process
Represented by 3 states and the transitions between them:
Run: the process is assigned a processor and its programs are being executed
Wait: the process is waiting for some event (e.g. an I/O operation to be completed)
Ready: the process is ready to run, but there are more processes than processors, so it must wait its turn on a processor
Simple states of a process
(Diagram: the wait-to-ready transition is labeled "I/O has been completed".)
Assumption: all processes already exist in the system and will continue running forever
But a complete and realistic life cycle will not be so simple
It needs 3 more states to be complete and realistic:
Submit
Hold
Complete
Submit: a user submits a job and the system must respond to the user's request
Hold: the user's job has been converted to internal machine-readable form,
but no resources have been allocated
The job has been spooled onto the disk
Resources must be allocated to move it to the ready state
Complete: the process has completed its computation and all its assigned resources may be reclaimed
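The six-state life cycle can be sketched as a tiny transition table. The state names follow the text; the dictionary encoding and the quantum-expiry transition (running back to ready, implied by the later round-robin discussion) are illustrative assumptions:

```python
# Sketch of the six process states and the legal transitions between them.

ALLOWED = {
    "submit":   {"hold"},              # job read in and spooled to disk
    "hold":     {"ready"},             # resources allocated
    "ready":    {"running"},           # processor assigned
    "running":  {"ready", "blocked", "complete"},
    "blocked":  {"ready"},             # I/O has been completed
    "complete": set(),                 # terminal: resources reclaimed
}

def transition(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "submit"
for nxt in ["hold", "ready", "running", "blocked", "ready", "running", "complete"]:
    s = transition(s, nxt)             # one full legal life cycle
```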
Model of process states
States and transitions in a process life cycle
Circles -> states, clouds -> transitions
Explained with an example: processing a deck of cards placed into a card reader

1. The user submits a job by placing the deck into a card reader -> submit state
2. A job consists of many decks of programs preceded by job control cards
3. Job control cards pass information to the OS about what resources will be needed
4. A SPOOLing routine reads the job and places it onto disk -> hold state
5. The SPOOLing routine calls information management to find storage space
6. A job-scheduling routine scans all SPOOLed files and picks a job to be admitted
7. In doing this, it calls memory management for sufficient memory and device management for the requested devices
8. The job scheduler then calls the traffic controller to create the associated process information, and memory management to allocate main storage
9. The job is then loaded into memory and the process is ready to run -> ready state
10. When a processor is free, the process scheduler scans the list of ready-to-run processes, chooses one, and assigns the processor -> running state
11. If the running process requests access to information, information management calls device management to initiate the reading of the file
12. Device management initiates the I/O operation and
13. then calls process management to indicate that the process is waiting for the completion of the I/O operation -> wait state
14. When the I/O is completed, the hardware sends a signal to the traffic controller in process management,
15. which puts the process back in the ready state
16. If the process completes its computation, it moves to the completed state and
17. its allocated resources are freed


Operating system hierarchical and extended machine views
Extended machine view
User programs require thousands of instructions to interact with system resources such as memory and I/O
These instructions are provided by the operating system
A user requests them by using special supervisor call instructions
A supervisor call is similar to a subroutine call, but transfers control to the operating system rather than to the user's subroutine
Statements 2 and 3 (in the example) are legal but do not correspond to instructions of the bare machine
Basic hardware instructions + additional instructions => an instruction set called the extended machine
Refer to diagram 1.7
The kernel of the operating system runs on the bare machine
User programs run on the extended machine
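The extended-machine idea can be modeled as a toy. All names below (`svc`, `_os_read`, the file contents) are invented for illustration: user code never touches the "bare machine" device directly, but issues a supervisor call that transfers control to an OS routine:

```python
# Toy model of the extended machine: basic instructions plus OS-provided
# "additional instructions" reached via a supervisor call.

_disk = {"greeting.txt": "hello"}      # stand-in for a device the OS controls

def _os_read(filename):                # OS routine running on the bare machine
    return _disk[filename]

_SVC_TABLE = {"read": _os_read}        # the additional instructions

def svc(name, *args):
    """Supervisor call: like a subroutine call, but the target is the OS."""
    return _SVC_TABLE[name](*args)

data = svc("read", "greeting.txt")     # user program on the extended machine
```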

Hierarchical machine concept
An OS as one big program is unmanageable
The extended machine approach can be applied in two ways:
Key functions needed by many system modules can be separated into an inner extended machine
Certain modules can be separated out and run on the extended machine in the same way as user processes
Inner and outer extended machines generalize into levels of extended machines
Layers of processes form a generalized operating system structure
Kernel: all modules that reside in the extended machine, as opposed to those that operate as process layers
There is no rule for the number of layers or for which modules go into which layers
Hierarchical operating system structure
All processes use the kernel and share all the system resources
Some processes are parents or controllers, denoted by wavy lines indicating a separate process layer
A given level is allowed to call upon the services of lower levels but not of higher levels
Example:
if interprocess message management calls a service of memory management,
the memory module must be at a lower level,
and memory management must not itself call upon process message management
In the lowest level are functions used by all resource managers for keeping track of resource allocation
This requires synchronization of resource allocation, done by primitive operators

P operator: the resource is seized
V operator: the resource is released
These operators act upon a software object called a semaphore: a switch (binary semaphore) or an integer value (counting semaphore)
When a resource is requested, the P operator tests the semaphore; if it is off, it turns it on
If it is already on, a note is made that the requesting process has been placed in the WAIT state for the resource
When a V operator issued by another process releases the resource, the semaphore is turned OFF and a process waiting for that resource is awakened
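The P/V bookkeeping above can be sketched as follows. This is an illustration of the logic only, in a single-threaded model with invented process names; it is not a safe concurrency primitive:

```python
# Minimal semaphore sketch following the P (seize) / V (release) description;
# the waiting list stands in for processes in the WAIT state.

class Semaphore:
    def __init__(self, value=1):
        self.value = value          # 1/0 acts as the binary-semaphore switch
        self.waiting = []           # processes placed in the WAIT state

    def P(self, process):
        """Request the resource: seize it, or note the process as waiting."""
        if self.value > 0:
            self.value -= 1
            return True             # resource seized
        self.waiting.append(process)
        return False                # process must wait

    def V(self):
        """Release the resource; awaken one waiting process if any."""
        if self.waiting:
            return self.waiting.pop(0)   # awakened process re-seizes it
        self.value += 1
        return None

tape = Semaphore(1)
tape.P("A")          # process A seizes the tape drive
tape.P("D")          # process D must wait
woken = tape.V()     # A releases; D is awakened
```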
Examples of function in various levels

Level 1: processor management lower level

P, V synchronization primitives

process scheduling

Level 2: memory management

Allocate memory

Release memory

Level 3: processor management upper level

Create/ destroy process

Send/receive messages between processes

Stop process

Start process

Level 4 :Device management

Keep track of the status of all I/O devices

Schedule I/O

Initiate I/O

Level 5: information management

create / destroy file

Open/close file

Read/ write file

Unit 1: Memory management


Introduction

Concerned with the management of primary memory (core memory)
Processors access it directly for instructions and data


Four functions
Keep track of the status of each location of memory: either allocated or unallocated
Determine the allocation policy for memory: decide how much, when, and where memory should be allocated; if shared, determine which process gets it
Allocation technique: select the specific location and update the allocation information
Deallocation technique and policy: a process may explicitly release memory, or the OS may unilaterally reclaim it; the allocation information must be updated
Memory management techniques

1. Single contiguous memory management


2. Partitioned memory management
3. Relocatable partitioned memory management
4. Paged memory management
5. Demand-paged memory management
6. Segmented memory management
7. Segmented and demand-paged memory management
8. Other memory management

Analysis of each technique
An overview of the approach and concepts employed
A description of any special hardware
A description of the software algorithm
Advantages and disadvantages

Single contiguous allocation
The simplest memory management scheme
No special hardware
Associated with small stand-alone computers
e.g. IBM OS/360 Primary Control Program
No multiprogramming
A one-to-one correspondence exists between user, job, job step, and process
Memory is allocated to one job at a time
Conceptually divided into 3 contiguous regions:
A portion of memory permanently allocated to the OS
All of the remainder of memory available to the single process
The job uses only a portion of that memory; the rest is an unused region
Example: if 256K bytes are allocated but only 32K is used by the process, the remainder is unused
It cannot be returned

With respect to the four functions:
1. Keep track of memory: all of it is allocated to one job
2. Determining policy: the job gets all the memory
3. Allocation: all of it is allocated to the job
4. Deallocation: when the job is done, memory is returned to free status
Hardware support
No special hardware is needed
Only a protection mechanism: to ensure that the user's program does not accidentally tamper with the OS
Consists of a bounds register and a supervisor mode of the CPU
In user mode, on each memory reference the hardware checks that no access is made to the protected area
If such an access is made, an interrupt occurs and control is transferred to the OS
In supervisor mode the CPU can access the protected area and execute privileged instructions
The mode is changed when control is transferred to the OS
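The bounds-register check can be sketched as follows. The 32K OS region matches the example above, but the constant and function names are assumptions, and a Python exception stands in for the hardware interrupt:

```python
# Sketch of the protection mechanism: user-mode references below the bound
# are trapped; supervisor mode may access the protected area.

OS_BOUND = 32 * 1024          # assumed: first 32K holds the OS

def check_access(address, supervisor_mode):
    if supervisor_mode:
        return True            # supervisor state may touch protected memory
    if address < OS_BOUND:
        # the "interrupt": control would transfer to the OS here
        raise MemoryError(f"user-mode access to {address} trapped")
    return True

check_access(40_000, supervisor_mode=False)   # legal user reference
check_access(100, supervisor_mode=True)       # OS touching its own area
```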


Software support
A flowchart depicts single contiguous allocation
The algorithm is called when the job scheduler of processor management schedules a job to run
It is called only when no other job is using the memory
Refer to diagram 3.2


Advantages: simplicity; no special hardware and little OS software required
UNIT ONE OVER

UNIT 3: Job and processor scheduling


Processor management
Concerned with the management of the physical processor
The assignment of processors to processes
Modules (from unit 1):
Job scheduler: creates processes (non-multiprogrammed)
Processor scheduler: decides which process receives a processor (in multiprogramming)
Traffic controller: keeps track of the status of processes
The job scheduler can be viewed as a macro scheduler, choosing which jobs will run
The process scheduler can be viewed as a micro scheduler, assigning processors to the processes associated with scheduled jobs
The user subdivides the job into job steps: sub-tasks to be processed
The system creates processes to do the computation of the job steps
Job scheduling is concerned with the management of jobs
Processor scheduling is concerned with the management of processors
There is no distinction between process and job scheduling in a non-multiprogramming system
One-to-one correspondence: in non-multiprogramming, one job creates one process and is assigned a processor
For multiprogramming systems, the job scheduler creates processes for the jobs
The process scheduler decides which process will be assigned a processor, at what time, and for how long
1. State model
Submit state
Hold state
Ready state
Running state
Blocked state / wait state
Completed state
Keep track of the status of all processes
Provide a mechanism for changing process states
Coordinate interprocess synchronization and communication

Job scheduler
Keeps track of the status of all jobs: it must note which jobs are trying to get service (hold state) and the status of all jobs being serviced (ready, running, or blocked)
Chooses which jobs enter the system, based on priority and resources requested
Allocates the necessary resources for the scheduled job by use of memory, device, and processor management
Deallocates the resources when the job is done
Process scheduling
Once a job moves from hold to ready, it creates one or more processes
Functions:
1. Keep track of the status of each process (running, ready, blocked): the traffic controller
2. Decide which process gets a processor and for how long: the processor scheduler
3. Allocation of a processor to a process: the traffic controller
4. Deallocation of the processor when the running process exceeds its current quantum or must wait for an I/O completion: the traffic controller
Job and process synchronization
A mechanism to prevent race conditions
Example: requesting a printer while it is printing
P and V operators
Semaphores
Deadlock: a situation where two processes are each waiting for a resource that the other holds and will not give up
Structure of processor management
Processor management operates at two levels: assigning processors to jobs and assigning processors to processes
At the job level, processor management is concerned with which job will run first
It is not concerned with multiprogramming
It assumes that once a job is scheduled, it will run
Job scheduling can run concurrently with other user programs
Creating and destroying processes and sending messages between processes are common to all address spaces
Refer to diagram 4.2
So these sit in the upper level of processor management
In a multiprogramming environment, process scheduling and synchronization are called by all modules
So they sit at the center of the kernel (the lower level)
Job scheduling
Focus on job scheduling policies and their implications
Deals with systems with and without multiprogramming
Time-sharing system: the Compatible Time-Sharing System (CTSS), priority based
Batch systems: OS/VS2, arrival and priority based
Job scheduling
Functions
Policies
Job scheduling in non-multiprogrammed environment
Job scheduling using FIFO
Job scheduling using shortest job first
Job scheduling using future knowledge
Measure of scheduling performance
Job scheduling in multiprogrammed environment
Job scheduling with multiprogramming but no I/O overlap
Job scheduling with multiprogramming and I/O overlap
Job scheduling with memory requirements and no I/O overlap
Job scheduling with memory and tape constraints and no I/O overlap
Job scheduling summary
Functions
The job scheduler is the overall supervisor
It assigns system resources to certain jobs
Keeps track of jobs
Invokes policies to decide which job gets resources
Allocates resources
Deallocates resources

The mechanism for keeping track of jobs is to have a separate JCB (Job Control Block) for each job
The JCB is created when a job is in the hold state
It contains entries regarding the job's status, its position in the job queue, its priority, and …
Policies
The job scheduler must choose jobs from the HOLD state and put them into the READY-to-run state
The choice may be arbitrary, or e.g. shortest job first
Key concepts
Policy issues
Typical considerations
Policies: key concepts
More jobs wish to run than can be run
Most resources are finite
Many resources cannot be shared or easily reassigned to another process
Policies: policy issues
Running as many jobs as possible favors only short jobs
Keeping the processor busy
Fairness to all jobs
Policies: typical considerations in determining a job scheduling policy
Availability of special resources
With preference
Without preference
Cost: higher rates for faster service
System commitments: processor time and memory; the more you want, the longer you wait
System balancing: mixing I/O-intensive and CPU-intensive jobs
Guaranteed service
Completing the job by or at a specific time
Once the job scheduler has selected a collection of jobs to run,
the process scheduler schedules them dynamically
The job and process schedulers may interact:
the process scheduler may postpone a process and send it back to macro-level job scheduling

Job scheduling in a non-multiprogrammed environment
No multiprogramming
Once a job has been assigned a processor, it does not release the processor until it is finished
One CPU process is created for each job
The policy goal is to reduce average turnaround time
Job scheduling using FIFO

(Sample job table: estimated run times are taken from the job control cards; other information, such as arrival time, is set by the operating system.)
Job 1 arrived at 10 am and runs for 2 hrs
Using First In First Out, the jobs run as depicted
Turnaround time = finish time − arrival time
Average turnaround = sum of turnaround times divided by the total number of jobs
(Table: see plain sheet 31)
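The table itself is on a separate sheet, so the jobs below are an assumed sample consistent with the text (job 1 arrives at 10.0 and runs 2 hrs; jobs 2 and 3 arrive while it runs). A short sketch computes FIFO turnaround times; times are in hours:

```python
# FIFO scheduling sketch: run jobs to completion in arrival order and
# compute turnaround time = finish time - arrival time.

jobs = [("job1", 10.0, 2.0), ("job2", 10.1, 0.5), ("job3", 10.25, 0.1)]

def fifo_schedule(jobs):
    """jobs: (name, arrival, run_time) tuples; returns {name: turnaround}."""
    clock = 0.0
    turnaround = {}
    for name, arrival, run_time in sorted(jobs, key=lambda j: j[1]):
        clock = max(clock, arrival) + run_time      # finish time of this job
        turnaround[name] = clock - arrival          # finish - arrival
    return turnaround

t = fifo_schedule(jobs)
average = sum(t.values()) / len(t)    # average turnaround for this sample
```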
Job scheduling using shortest job first
To reduce the average turnaround time,
the scheduling algorithm runs the HOLD job with the shortest run time first
When job 1 arrives, it is run
While job 1 is running, jobs 2 and 3 arrive
Job 3 is chosen next, being shorter than job 2 (by run time)
(Table: see plain sheet 33)
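Using the same assumed sample jobs (job 1 at 10.0 for 2 hrs, job 2 at 10.1 for 0.5 hr, job 3 at 10.25 for 0.1 hr — invented but consistent with the text), a shortest-job-first sketch: whenever the processor frees up, pick the arrived job with the shortest run time:

```python
# Shortest-job-first sketch (non-preemptive): at each completion, the
# shortest job that has already arrived runs next.

jobs = [("job1", 10.0, 2.0), ("job2", 10.1, 0.5), ("job3", 10.25, 0.1)]

def sjf_schedule(jobs):
    """jobs: (name, arrival, run_time) tuples; returns {name: turnaround}."""
    pending = sorted(jobs, key=lambda j: j[1])       # by arrival time
    clock, turnaround = 0.0, {}
    while pending:
        # jobs already arrived; if none, take the next arrival
        arrived = [j for j in pending if j[1] <= clock] or [pending[0]]
        job = min(arrived, key=lambda j: j[2])       # shortest run time
        pending.remove(job)
        name, arrival, run_time = job
        clock = max(clock, arrival) + run_time
        turnaround[name] = clock - arrival
    return turnaround

t = sjf_schedule(jobs)
average = sum(t.values()) / len(t)   # lower than FIFO for this sample
```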
Job scheduling using future knowledge
To improve the average turnaround time using future knowledge:
if at 10 am we know that two short jobs will arrive, then we do not run job 1 yet
This reduces average turnaround time but wastes 0.25 hr of CPU time
(Table: see sheet 35)
Problems with these job scheduling algorithms
Future knowledge is rare
Run times are usually estimated only approximately
Other resources must also be considered, such as memory requirements and I/O devices

Turnaround time: the amount of time to execute a particular process/job

Measures of scheduling performance
Turnaround time is one measure of performance
Another measure is weighted turnaround time (W):
W = T / R
T -> turnaround time
R -> run time
Job scheduling in a multiprogrammed environment
The job scheduler selects the jobs to run
A simple process scheduling algorithm: round robin
Round robin: each job is assigned the processor for a small quantum of time
If n jobs are running simultaneously, they each get an equal share of run time
(Context switch: see sheets 39, 40, 41)
The tables above represent FIFO scheduling with no multiprogramming
Refer to diagram 4.9a for a graphic representation
If multiprogramming is used with two jobs, the CPU headway (the amount of CPU time spent on a job) equals half of the clock time elapsed
Refer to diagrams 4.10a, 4.10b, 4.10c
Job 1 arrives at 10.0 and runs for 0.3 hrs
After job 1 has run for 0.2 hrs, job 2 arrives, so the processor is time-sliced between jobs 1 and 2
Even though job 1 has only 0.1 hrs of execution left, it completes only at 10.4
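The worked example can be reproduced with a round-robin sketch. Job 1 (arrives 10.0, needs 0.3 hr) and job 2 (arrives 10.2; its 0.3 hr run time is an assumption) share the CPU; the tiny time step approximates a very small quantum and is purely illustrative:

```python
# Round-robin sketch: each of the n ready jobs makes CPU headway at
# 1/n of elapsed clock time.

def round_robin_finish_times(jobs, step=0.001):
    """jobs: (name, arrival, run_time) tuples; returns {name: finish_time}."""
    remaining = {name: run for name, _, run in jobs}
    finish = {}
    clock = min(arrival for _, arrival, _ in jobs)
    while remaining:
        ready = [n for n, a, _ in jobs if a <= clock and n in remaining]
        if not ready:
            clock += step
            continue
        share = step / len(ready)          # equal share of this time step
        for name in ready:
            remaining[name] -= share
            if remaining[name] <= 1e-9:    # job has received all its run time
                finish[name] = clock + step
                del remaining[name]
        clock += step
    return finish

f = round_robin_finish_times([("job1", 10.0, 0.3), ("job2", 10.2, 0.3)])
# job1 runs alone from 10.0 to 10.2, then shares the CPU: its remaining
# 0.1 hr of headway takes 0.2 hrs of clock time, so it finishes near 10.4
```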
Unit 3: Process scheduling
Process scheduling
Concerned with transitions 3 and 4
(hold -> ready, ready -> running)
The job scheduler selects a collection of jobs to run (submit -> hold)
The process scheduler (also called the dispatcher or low-level scheduler) assigns processes to processors
A process may use the processor for only e.g. 100 ms at a time
The job and process schedulers may interact:
in a time-sharing system, the process scheduler may roll a process out so that it undergoes job scheduling again
Single processor interlock
(Diagram: processes A and D each need the tape drive.)
This is a single-processor multiprogramming system
Processes A and D need a tape drive
There is only one tape drive
Process A requests it first,
so the tape is assigned to process A
When process D requests it, D must be blocked until process A releases it
Processes are blocked, but not the processor
Functions of the process scheduler
Keep track of the state of each process
Decide which process gets a processor, when, and for how long
Allocate processors to processes
Deallocate processors from processes
Process Control Block
Process identification
Current state
Priority
Copy of active registers
Pointers to lists of other processes in the same state
Etc.

Keeping track of the status of processes is done by the traffic controller
using the PCB (Process Control Block),
a database associated with each process
There is a separate PCB for each process
All PCBs with the same state (ready, blocked) are linked together
The resulting lists are called the ready list and the blocked list
The traffic controller is called whenever the status of a resource changes
If a process requests a device that is already in use, the process is linked onto the blocked list
When the device is released, the traffic controller checks whether any process is waiting for that device
If so, that process is placed in the ready state
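The PCB and per-state lists can be sketched as follows. Class and field names are illustrative, and plain Python lists stand in for the linked lists of PCBs:

```python
# Sketch of the traffic controller's bookkeeping: one PCB per process,
# plus per-state lists that PCBs move between as their state changes.

class PCB:
    def __init__(self, pid, priority=0):
        self.pid = pid
        self.state = "ready"
        self.priority = priority
        self.registers = {}          # copy of active registers when preempted

class TrafficController:
    def __init__(self):
        self.lists = {"ready": [], "running": [], "blocked": []}

    def admit(self, pcb):
        self.lists["ready"].append(pcb)

    def set_state(self, pcb, new_state):
        """Called whenever a resource status change moves a process."""
        self.lists[pcb.state].remove(pcb)
        pcb.state = new_state
        self.lists[new_state].append(pcb)

tc = TrafficController()
p = PCB("P1")
tc.admit(p)
tc.set_state(p, "running")
tc.set_state(p, "blocked")    # e.g. requested a device already in use
tc.set_state(p, "ready")      # device released; back on the ready list
```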
Policies
