
CPU SCHEDULING

CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The part of the operating system that makes this choice between processes is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Categories of scheduling algorithms

(i) Non-preemptive Scheduling: in this scheme, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to a waiting state.

(ii) Preemptive Scheduling: in this scheme, a running process can be suspended before it finishes, normally when a clock interrupt occurs, so that the CPU can be reallocated to another process.

Scheduling Criteria:

Different CPU scheduling algorithms have different properties and may favour one class of processes over another. In order to design a good algorithm, it is necessary to know what criteria a good scheduling algorithm should satisfy. The criteria commonly used include the following:

Fairness: giving each process a fair share of the CPU.
CPU utilization: keep the CPU as busy as possible. CPU utilization may range from 0 to 100 percent; in a real system it should range from about 40% to 90%.
Throughput: maximize the number of jobs processed per unit time.
Turnaround time: the interval from the time of submission of a process to the time of completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O. We should minimize this interval.
Response time: minimize the response time for interactive users.
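Where helpful, these quantities can be computed directly. Below is a minimal Python sketch of how turnaround time and waiting time relate for a single CPU-only process; the arrival, burst and completion times are hypothetical.

    # Minimal sketch: turnaround and waiting time for one process.
    # The arrival, burst and completion times below are hypothetical.
    arrival_time = 0       # time the process was submitted
    burst_time = 24        # total CPU time the process needs
    completion_time = 30   # time the process finished

    turnaround_time = completion_time - arrival_time   # 30
    # For a CPU-only process (no I/O), the rest of the turnaround
    # is time spent waiting in the ready queue.
    waiting_time = turnaround_time - burst_time        # 6

    print(turnaround_time, waiting_time)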

Scheduling algorithms:
There are many scheduling algorithms, some of which are applicable to batch systems while others apply to interactive systems.

Scheduling in batch systems:

(a) First Come, First Served (FCFS):

By far the simplest CPU scheduling algorithm is the First Come, First Served algorithm. With this scheme the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When the CPU is free, it is allocated to the process at the head of the queue.
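To make the behaviour concrete, here is a minimal FCFS sketch in Python; the process names and burst times are hypothetical, and all processes are assumed to arrive at time 0.

    # FCFS: serve processes strictly in arrival (FIFO) order.
    from collections import deque

    ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, CPU burst)

    clock = 0
    while ready_queue:
        name, burst = ready_queue.popleft()   # process at the head of the queue
        print(f"{name} waits {clock}, runs from {clock} to {clock + burst}")
        clock += burst                        # keeps the CPU until it finishes

Note how the long first burst (P1) makes the short processes P2 and P3 wait; this is the well-known convoy effect of FCFS.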

(b) Shortest Job First (SJF): A different approach to CPU scheduling is the shortest-job-first algorithm. This algorithm associates with each process the length of that process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst (running time). If two processes have next CPU bursts of the same length, FCFS is used to break the tie.
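A minimal SJF sketch in Python, assuming all processes are already in the ready queue and that their next CPU bursts (hypothetical values below) are known in advance:

    # SJF: pick the ready process with the smallest next CPU burst;
    # FCFS (arrival order) breaks ties because Python's sort is stable.
    ready = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]  # (name, next CPU burst)

    clock = 0
    for name, burst in sorted(ready, key=lambda p: p[1]):
        print(f"{name} starts at {clock}, finishes at {clock + burst}")
        clock += burst

In practice the length of the next CPU burst is not known in advance and must be estimated, for example from the lengths of previous bursts.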

(c) Three-Level Scheduling:

Batch systems allow scheduling at three different levels. As jobs arrive at the system they are initially placed in an input queue stored on disk. The admission scheduler decides which jobs to admit to the system. If there are more processes than the available memory can hold, the memory scheduler decides which processes are kept in memory and which ones are kept on disk. The third level is actually picking one of the ready processes in memory to run; this is done by the CPU scheduler.
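The structure can be sketched as three cooperating schedulers; the names, queue limit and policies below are illustrative only.

    # Structural sketch of three-level scheduling in a batch system.
    from collections import deque

    input_queue = deque()    # jobs on disk, waiting to be admitted
    in_memory = []           # admitted processes currently resident in memory
    MEMORY_SLOTS = 4         # hypothetical limit on resident processes

    def admission_scheduler(job):
        """Level 1: decide which arriving jobs to admit to the system."""
        input_queue.append(job)

    def memory_scheduler():
        """Level 2: decide which admitted jobs are kept in memory."""
        while input_queue and len(in_memory) < MEMORY_SLOTS:
            in_memory.append(input_queue.popleft())

    def cpu_scheduler():
        """Level 3: pick one ready process in memory to run (here simply FCFS)."""
        return in_memory[0] if in_memory else None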

Scheduling in interactive systems:

(d) Round Robin Scheduling (RR): The round-robin (RR) scheduling algorithm is designed especially for timesharing systems. It is one of the oldest, simplest and fairest algorithms. It is similar to FCFS, but preemption is added to switch between processes. A small unit of time called a time quantum or time slice is defined (generally 10 to 100 milliseconds). The ready queue is treated as a circular queue, and the CPU scheduler goes around it, allocating the CPU to each process for a time interval of up to one quantum. New processes are added to the tail of the queue. When a process uses up its quantum, it is preempted and placed at the end of the ready queue.
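A minimal round-robin simulation in Python; the quantum and burst times are hypothetical.

    # Round robin: each process runs for at most one quantum, then is
    # preempted and moved to the tail of the ready queue.
    from collections import deque

    QUANTUM = 4
    ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, remaining time)

    clock = 0
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(QUANTUM, remaining)
        clock += run
        remaining -= run
        if remaining > 0:
            ready_queue.append((name, remaining))   # quantum used up: back to the tail
        else:
            print(f"{name} completes at time {clock}")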

Exercise: What is the tradeoff between setting a short quantum and a long one? Answer: A short quantum leads to frequent, expensive context switches, while a long quantum leads to poor response time for short, highly interactive processes.

(e) Priority Scheduling: A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Priorities are generally drawn from some fixed range of numbers, such as 0 to 7. Some systems use low numbers to represent low priority; others use low numbers for high priority. The major problem with priority scheduling algorithms is indefinite blocking, or starvation: priority scheduling can leave some low-priority processes waiting indefinitely for the CPU. A solution to the problem of indefinite blocking is called aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. Another solution is to have the scheduler decrease the priority of the running process at each clock interrupt.
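A minimal sketch of priority scheduling with aging in Python; here a lower number means a higher priority (the opposite convention also exists), and the priority values are hypothetical.

    # Priority scheduling with aging (lower number = higher priority).
    processes = [{"name": "P1", "priority": 3},
                 {"name": "P2", "priority": 1},
                 {"name": "P3", "priority": 4}]

    def pick_next(procs):
        """Allocate the CPU to the highest-priority process (FCFS breaks ties,
        since min() returns the first of several equal-priority entries)."""
        return min(procs, key=lambda p: p["priority"])

    def age(procs, running):
        """Aging: gradually raise the priority of every waiting process."""
        for p in procs:
            if p is not running and p["priority"] > 0:
                p["priority"] -= 1

    running = pick_next(processes)
    age(processes, running)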

(f) Multilevel Queue Scheduling:

This operates by classifying processes into different groups or queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size or process type. Each queue implements its own scheduling algorithm internally. In addition, there must be scheduling between the queues, which is commonly implemented as fixed-priority preemptive scheduling: no process in a lower-priority queue can run until all higher-priority queues are empty.
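A minimal sketch of a multilevel queue with fixed-priority scheduling between the queues; the queue classes and processes are hypothetical.

    # Multilevel queue: each process is permanently assigned to one queue,
    # and a lower-priority queue runs only when all higher ones are empty.
    from collections import deque

    queues = {
        0: deque(["interactive_1", "interactive_2"]),  # highest priority
        1: deque(["batch_1"]),                         # lowest priority
    }

    def pick_next():
        """Scan the queues from highest to lowest priority."""
        for level in sorted(queues):
            if queues[level]:
                return level, queues[level][0]
        return None

Each queue could additionally apply its own internal algorithm, for example RR for the interactive queue and FCFS for the batch queue.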

(g) Multilevel Feedback Queue Scheduling:

This allows processes to move between the queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. If a process in a high-priority queue does not finish within its quantum, it is moved down to the tail of the next lower-priority queue.
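A minimal multilevel feedback queue sketch in Python with two levels; the quanta and burst times are hypothetical.

    # Multilevel feedback queue: a process that does not finish within the
    # quantum of its current queue is demoted to the tail of the next
    # lower-priority queue.
    from collections import deque

    quanta = [4, 8]                              # quantum per level (level 0 is highest)
    levels = [deque([("P1", 10), ("P2", 3)]),    # (name, remaining CPU time)
              deque()]

    clock = 0
    while any(levels):
        level = next(i for i, q in enumerate(levels) if q)  # highest non-empty queue
        name, remaining = levels[level].popleft()
        run = min(quanta[level], remaining)
        clock += run
        remaining -= run
        if remaining > 0:                        # used its full quantum: demote
            levels[min(level + 1, len(levels) - 1)].append((name, remaining))
        else:
            print(f"{name} finishes at time {clock}")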
