Types of Scheduling
1. Nonpreemptive scheduling: once the CPU has been allocated to a process, the process
keeps the CPU until it releases it, either by terminating or by switching to the waiting
state.
2. Preemptive scheduling: the CPU can be taken away from a running process. Preemption
can result in race conditions when data are shared among several processes, and it also
affects the design of the operating-system kernel.
Criteria
1. CPU utilization. We want to keep the CPU as busy as possible.
2. Throughput. One measure of work is the number of processes that are completed per
time unit.
3. Turnaround time. The sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.
4. Waiting time. The sum of the periods spent waiting in the ready queue.
5. Response time. The time from the submission of a request until the first response is
produced.
Scheduling Algorithms
First-Come, First-Served Scheduling
● the process that requests the CPU first is allocated the CPU first
● The code for FCFS scheduling is simple to write and understand.
● On the negative side, the average waiting time under the FCFS policy is often quite long
(all the other processes wait for one long process to get off the CPU).
● The FCFS scheduling algorithm is nonpreemptive.
Shortest-Job-First Scheduling
● Also called the shortest-next-CPU-burst algorithm, because scheduling depends on the
length of the next CPU burst of a process rather than its total length.
● It minimizes the average waiting time for a given set of processes.
● SJF scheduling is used frequently in long-term scheduling, but it cannot be implemented
exactly at the level of short-term CPU scheduling (there is no way to know the length of
the next CPU burst in advance).
○ The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
● The SJF algorithm can be either preemptive or nonpreemptive.
Priority Scheduling
● A priority is associated with each process, and the CPU is allocated to the process with
the highest priority.
○ Equal-priority processes are scheduled in FCFS order.
○ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse
of the (predicted) next CPU burst.
● Priority scheduling can be either preemptive or nonpreemptive.
○ A preemptive priority scheduling algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the currently running
process.
○ A nonpreemptive priority scheduling algorithm will simply put the new process at
the head of the ready queue.
● A major problem with priority scheduling algorithms is indefinite blocking, or starvation:
a low-priority process may wait forever while higher-priority processes keep arriving.
○ The solution is aging: gradually increasing the priority of processes that wait
in the system for a long time.
Round-Robin Scheduling
● A time quantum (or time slice) is generally from 10 to 100 milliseconds in length.
● The CPU scheduler goes around the ready queue (a circular queue), allocating the CPU to
each process for a time interval of up to 1 time quantum.
● The time slice should be much larger than the context-switch time:
○ If the time slice is very large, RR degenerates to FCFS.
○ If the time slice is very small, too much time is spent on context switches.
Multilevel Queue Scheduling
● Used when processes are easily classified into different groups.
● Partitions the ready queue into several separate queues.
Multilevel Feedback Queue Scheduling
● Allows a process to move between queues.
● Parameters
a. The number of queues
b. The scheduling algorithm for each queue
c. The method used to determine when to upgrade a process to a higher priority
queue
d. The method used to determine when to demote a process to a lower priority
queue
e. The method used to determine which queue a process will enter when that
process needs service
Shared Memory
● Synchronization must be handled by the processes themselves (explicit synchronization).
○ You have to be careful with compilers and IDEs:
■ A compiler (or IDE toolchain) may reorder the execution of lines of
your code in the name of optimization.
● Moving shared memory from kernel to user space:
○ No user/kernel switching, which means a performance boost.
○ After setting it up, the kernel doesn't intervene much at all.
■ Sadly, since there is no help from the kernel to know whether the shared
memory has something stored in it or not, scheduling becomes more complex.
■ The API is not standardized; System V and POSIX are both used.
○ Allows multiple processes to see a shared region.
● A region of memory that is shared by cooperating processes is established. Processes
can then exchange information by reading and writing data to the shared region.
● The OS creates the shared memory channel. Once shared memory is established, no
assistance from the kernel is required.
● System calls are used in set-up only.
● Data copying is reduced.
● Shared memory suffers from cache coherency issues.
● It requires custom data structures and synchronization constructs.
○ You may have to deal with race conditions.
MESSAGE PASSING VS SHARED MEMORY
Message Passing
● If you don't care about overhead or performance.
● If synchronization is difficult or nearly impossible to do yourself (leave it to the kernel).
SHARED MEMORY
● GUIs, games.
● Processing large files.
● When you need a performance boost.
Create
shmget(key, size, flags)
key - a number, must be unique (you can use ftok to generate a unique numeric
key); the call returns the segment id (shmid)
Attach - so you can use the shared memory; after this you can read and write any time
without kernel intervention
shmat(shmid, addr, flags)
Detach - the shared memory region itself is still there
shmdt(addr) - note: takes the address returned by shmat, not the shmid
DESTROY - change the state of the segment (mark it so that it is not a shared memory
segment anymore)
shmctl(shmid, cmd, buf)
LECTURE 8: SYNCHRONIZATION
A situation like this, where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which the
access takes place, is called a race condition.
➔ We require that the processes be synchronized in some way.
entry section
The process requests permission to enter its critical section.
critical section
The section in which the process may be changing common variables.
→ No two processes execute in their critical sections
at the same time.
remainder section
The remaining code.
Requirements
1. Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. Only processes that are not executing in their remainder sections can
participate in deciding which will enter its critical section next, and this decision
cannot be postponed indefinitely.
3. Bounded waiting. There is a bound on the number of times other processes may enter
their critical sections after a process has requested entry and before that request is
granted, so a process cannot keep reentering while others starve.
HARDWARE-BASED SOLUTION
Special hardware instructions allow us either to test and modify the content of a word or to
swap the contents of two words atomically, that is, as one uninterruptible unit.