
LECTURE 6: CPU SCHEDULING

Types
1. Nonpreemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

2. Preemptive scheduling: the CPU can be taken away from a running process. Preemption can result in race conditions when data are shared among several processes, and it also affects the design of the operating-system kernel.

Criteria
1. CPU utilization. We want to keep the CPU as busy as possible.

2. One measure of work is the number of processes that are completed per time unit,
called throughput

3. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.

4. Waiting time is the sum of the periods spent waiting in the ready queue.

5. Response time is the time from the submission of a request until the first response is produced.

➔ It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure.

Scheduling Algorithms
First-Come, First-Served Scheduling
● the process that requests the CPU first is allocated the CPU first
● The code for FCFS scheduling is simple to write and understand.
● On the negative side, the average waiting time under the FCFS policy is often quite long (all the other processes wait for one big process to get off the CPU).
● FCFS scheduling algorithm is nonpreemptive.
Shortest-Job-First Scheduling
● shortest-next-CPU-burst algorithm, because scheduling depends on the length of the
next CPU burst of a process, rather than its total length.
● Average waiting time decreases.
● SJF scheduling is used frequently in long-term scheduling. It cannot be implemented exactly at the level of short-term CPU scheduling (there is no way to know the length of the next CPU burst).
○ The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
● The SJF algorithm can be either preemptive or nonpreemptive.
Priority Scheduling
● A priority is associated with each process, and the CPU is allocated to the process with
the highest priority.
○ Equal-priority processes are scheduled in FCFS order.
○ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse
of the (predicted) next CPU burst.
● Priority scheduling can be either preemptive or nonpreemptive
○ A preemptive priority scheduling algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the currently running
process.
○ A nonpreemptive priority scheduling algorithm will simply put the new process at
the head of the ready queue.
● A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
○ Solution: aging, which involves gradually increasing the priority of processes that wait in the system for a long time.
Round-Robin Scheduling
● A time quantum/time slice is generally from 10 to 100 milliseconds in length
● The CPU scheduler goes around the ready queue (a circular queue), allocating the CPU to each process for a time interval of up to 1 time quantum.
● The time slice should be much larger than the context-switch time:
○ Time slice too large → degenerates to FCFS
○ Time slice too small → excessive context-switching overhead
Multilevel Queue Scheduling
● processes are easily classified into different groups
● partitions the ready queue into several separate queues
Multilevel Feedback Queue Scheduling
● Allows a process to move between queues.
● Parameters
a. The number of queues
b. The scheduling algorithm for each queue
c. The method used to determine when to upgrade a process to a higher priority
queue
d. The method used to determine when to demote a process to a lower priority
queue
e. The method used to determine which queue a process will enter when that
process needs service

LECTURE 7: INTERPROCESS COMMUNICATION


Message Passing API
● Uses kernel services to communicate with and synchronize processes
● System call to the kernel: “I want to use your buffer so I can send a message efficiently”
○ Socket programming uses a port number (port) to identify which buffer is used
○ Involves switching between user and kernel mode many times
● communication takes place by means of messages exchanged between the cooperating
processes
● useful for exchanging smaller amounts of data
● easier to implement in a distributed system
● POSIX STANDARD for system call
● Negative side: overhead
● Send: system call (user→kernel) + copy (the kernel buffers the message, then returns to user mode)
Receive: system call + copy
One exchange uses 4x user/kernel mode switches
4x data copies
● Needs an interface to reach a process, via a port
○ Pipes
○ Sockets allow communication with processes on the local machine or on other machines; much more complexity compared to pipes and message queues
○ Message Queues if you want to send messages in a certain format (to refine
message passing compared to pipes)

Shared Memory
● Synchronization must be done explicitly by the processes themselves
○ You have to be careful with compilers and IDEs
■ Some compilers may reorder the lines of your code in the name of optimization
● Moving shared memory from kernel to user
○ No user/kernel switching which means performance boost
○ After setting it up, kernel doesn’t intervene much at all
■ Sadly, since there will be no help from the kernel to know whether the shared memory has something stored or not, coordination/scheduling will be more complex
■ The API is not fully standardized; both the System V and POSIX interfaces are used
○ Allows multiple processes to see a shared region
● a region of memory that is shared by cooperating processes is established. Processes
can then exchange information by reading and writing data to the shared region.
● The OS creates the shared memory channel. Once shared memory is established, no assistance from the kernel is required
● System calls are used in set-up only
● Data copying is reduced
● Shared memory suffers from cache coherency issues
● Requires Custom Data Structures and Synchronization Constructs
○ May deal with race conditions
MESSAGE PASSING VS SHARED MEMORY
Message Passing
If you don’t care about overhead or performance
If synchronization is difficult or impractical to do yourself (leave it to the kernel)
SHARED MEMORY
GUI, games
Processing large files
When you need a performance boost

SYSV SHARED MEMORY


Memory is set up as segments; you request a size in bytes and the kernel allocates in page-sized units (typically 4096 bytes)
Each segment has a name (a numeric key, often produced by a hash function)
If you know the name, you can use the segment even if not in the same code

Create
shmget(key, size, flags)
key - a number that must be unique (you can use ftok to generate a unique numeric name)
Attach so you can use the shared memory; afterwards you can read and write anytime without kernel involvement
shmat(shmid, addr, flags)
Detach - the shared memory region is still there
shmdt(addr)
Destroy - modify the state of the shared memory (set it such that it is not a shared memory segment anymore)
shmctl(shmid, IPC_RMID, buf)

POSIX SHARED MEMORY

Memory is allocated in multiples of 4096 bytes (the page size)


mmap is used for:
Shared memory
Processing and handling files in a more efficient manner (memory-mapped files make reads and writes faster, since RAM-to-disk traffic is handled through the mapping)
Allocating memory (malloc uses it for large allocations)
brk moves the program break so that the program has extra heap space and avoids a segmentation fault (a more limited mechanism than mmap)
ipcs
Used to inspect existing System V IPC objects and check that all clean-up steps were done (e.g., it will show segments left behind if you forgot to remove them)

LECTURE 8: SYNCHRONIZATION

A situation like this, where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which the
access takes place, is called a race condition.
➔ we require that the processes be synchronized in some way.

entry section
requests permission to enter its critical section
critical section
in which the process may be changing common variables
→ no two processes are executing in their critical sections at the same time
remainder section
the remaining code

Requirements
1. Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. Only processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting. There is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.

Preemptive Kernel VS Nonpreemptive Kernel


nonpreemptive kernel
● essentially free from race conditions on kernel data structures, as only one process is
active in the kernel at a time.
preemptive kernel
● may be more responsive (since a running process can be preempted, one long process will not make the other processes wait a long time to be executed)
● more suitable for real-time programming, as it will allow a real-time process to preempt a
process currently running in the kernel.
● BUT must be carefully designed to ensure that shared kernel data are free from race
conditions
SOFTWARE-BASED SOLUTION
Peterson’s Solution

The variable turn indicates whose turn it is to enter its critical section.

The flag array is used to indicate if a process is ready to enter its critical section.

HARDWARE-BASED SOLUTION

special hardware instructions that allow us either to test and modify the content of a word or to
swap the contents of two words atomically—that is, as one uninterruptible unit
