
Operating Systems

Chapter 2: Processes and Threads

Instructor: Yusuf Altunel, IKU Department of Computer Engineering, (212) 498 42 10, y.altunel@iku.edu.tr

Content
2.1 Processes
2.2 Threads
2.3 Interprocess communication
2.4 Classical IPC problems
2.5 Scheduling

The Process Model

- Software is organized into a number of sequential processes
- Conceptually, each process has its own CPU; in reality the CPU switches back and forth from process to process
- Each process performs its own computation
- A timing mechanism is needed to prevent some processes from keeping the CPU busy for too long, since the time needed by each process is not uniform

Processes
The Process Model

(a) Multiprogramming of four programs
(b) Conceptual model of 4 independent, sequential processes
(c) Only one program active at any instant

Process Creation
Principal events that result in process creation:
1. System initialization
2. A user request to create a new process
3. Initiation of a batch job

Process Termination
Conditions that terminate processes:
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)

Process Hierarchies
- A parent creates a child process; child processes can create their own processes
- This forms a hierarchy; UNIX calls it a "process group"
- Windows has no process hierarchy concept: all processes are treated equally

Process States

- Running: actually using the CPU
- Ready: runnable, but temporarily stopped to let another process run (no CPU is currently available to it)
- Blocked: unable to run until some external event happens, even if the CPU is available

Scheduler

- The lowest layer of a process-structured OS
- Handles interrupts and scheduling
- Starts and stops processes when necessary

Process Implementation
Process Table
- Maintained by the OS to implement processes
- Each entry is reserved for one process
- Contains information about: process state, program counter, stack pointer, memory allocation, status of its open files, accounting and scheduling information, etc.
- The exact fields vary from system to system

Process Table Entry


Process management:
registers, program counter, program status word, stack pointer, process state, priority, scheduling parameters, process ID, parent process, process group, signals, time when process started, CPU time used, children's CPU time, time of next alarm

Memory management:
pointer to text segment, pointer to data segment

File management:
root directory, working directory, file descriptors, user ID, group ID

Fields of a process table entry
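
Below is a hypothetical C sketch of such a process table entry, using the field names listed above; the types, array sizes, and grouping are illustrative assumptions, not the layout of any particular operating system.

/* Hypothetical process table entry (PCB) mirroring the fields above. */
#include <sys/types.h>
#include <time.h>

enum proc_state { PROC_RUNNING, PROC_READY, PROC_BLOCKED };

struct proc_table_entry {
    /* Process management */
    unsigned long   registers[16];      /* saved general-purpose registers */
    unsigned long   program_counter;
    unsigned long   psw;                /* program status word */
    unsigned long   stack_pointer;
    enum proc_state state;
    int             priority;
    int             sched_params;       /* placeholder for scheduler data */
    pid_t           pid;
    pid_t           parent_pid;
    pid_t           process_group;
    unsigned long   pending_signals;    /* bitmap of pending signals */
    time_t          start_time;         /* time when process started */
    clock_t         cpu_time_used;
    clock_t         children_cpu_time;
    time_t          next_alarm;

    /* Memory management */
    void           *text_segment;       /* pointer to text segment */
    void           *data_segment;       /* pointer to data segment */

    /* File management */
    int             root_dir;           /* handle of root directory */
    int             working_dir;        /* handle of working directory */
    int             open_files[32];     /* file descriptors */
    uid_t           uid;
    gid_t           gid;
};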

Interrupt Handling

Skeleton of what the lowest level of the OS does when an interrupt occurs

Threads
Comparing Thread vs. Process

(a) Three processes, each with one thread
(b) One process with three threads

Processes vs. Threads

- Per-process items are shared by all threads in a process
- Per-thread items are private to each thread

Thread Usage: Word Processor

A word processor with three threads



Strategy to Implement Threads

Managing threads in user space:
- The OS is not aware of the threads
- A user-level run-time system (thread library) creates and manages the threads
- When a thread is about to block, it chooses and starts its successor before stopping

Managing threads in kernel space:
- The operating system is aware of the existence of multiple threads per process
- When a thread blocks, the operating system chooses the next one to run, either from the same process or from a different one
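
As a concrete illustration of kernel-managed threads, here is a minimal POSIX threads (pthreads) sketch; the worker function and thread count are made up for the example. Each pthread created this way is visible to the kernel, so if one blocks, the others can still be scheduled.

/* Minimal kernel-level threads sketch using POSIX threads (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;                    /* thread terminates; the kernel can schedule another */
}

int main(void) {
    pthread_t tid[3];
    int ids[3] = {0, 1, 2};

    /* The kernel knows about each thread created here, so blocking one
     * thread (e.g., on I/O) does not block the whole process. */
    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);

    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
    return 0;
}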

Implementing Threads in User Space

A user-level threads package



Implementing Threads in the Kernel

A threads package managed by the kernel



Pros and Cons

User space:
- Pros: switching between threads is much faster (no kernel involvement)
- Cons: when a thread blocks (e.g., waiting for I/O or for a page fault to be handled), the kernel blocks the entire process

Kernel space:
- Pros: when a thread blocks, the kernel can run another thread of the same process
- Cons: switching between threads is slower

Interprocess Communication

- Processes need to communicate with other processes
- Example: a UNIX command line that concatenates three files (process 1) and selects all lines containing the word "tree" (process 2):

  cat chapter1 chapter2 chapter3 | grep tree

- The first process concatenates the three files and sends the result to the second process
- The second process receives the concatenated output and finds the lines containing the word "tree"
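
A hedged sketch of how a shell might wire this pipeline together with pipe(), fork(), and exec(); the file names match the example above, and error handling is omitted for brevity.

/* Sketch: connect "cat chapter1 chapter2 chapter3 | grep tree" with a pipe. */
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* first child: producer (cat) */
        dup2(fd[1], STDOUT_FILENO);  /* its stdout goes into the pipe */
        close(fd[0]); close(fd[1]);
        execlp("cat", "cat", "chapter1", "chapter2", "chapter3", (char *)NULL);
        _exit(1);
    }
    if (fork() == 0) {               /* second child: consumer (grep) */
        dup2(fd[0], STDIN_FILENO);   /* its stdin comes from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("grep", "grep", "tree", (char *)NULL);
        _exit(1);
    }
    close(fd[0]); close(fd[1]);      /* parent keeps no pipe ends open */
    wait(NULL); wait(NULL);
    return 0;
}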

Race Conditions

- Processes often share common storage, in main memory or in a shared file
- Race condition: two or more processes are ready to read or write shared data, and the final result depends on who runs precisely when
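
The following small C program (assuming POSIX threads) makes the race concrete: two threads increment a shared counter without any protection, and the final value usually comes out lower than expected because the increments interleave.

/* Race condition demo: unsynchronized updates to shared data. */
#include <pthread.h>
#include <stdio.h>

#define N_ITER 1000000
static long counter = 0;            /* shared data, no protection */

static void *increment(void *arg) {
    for (int i = 0; i < N_ITER; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("expected %d, got %ld\n", 2 * N_ITER, counter);
    return 0;
}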

Mutual Exclusion

- Critical region: the part of a program where the shared area is accessed
- Shared area: a shared file, shared memory, a variable, etc.
- To avoid race conditions, no two processes may ever be in the critical region at the same time
- Mutual exclusion: the shared area must be protected against access by more than one process at the same time

Figure: processes A and B writing into a shared area of numbered slots, with pointers in = 4 and out = 7

Mutual Exclusion Conditions

Conditions for mutual exclusion:
1. No two processes may be simultaneously in their critical regions
2. No assumptions may be made about speeds or numbers of CPUs
3. No process running outside its critical region may block another process
4. No process must wait forever to enter its critical region

Critical Regions

Mutual exclusion using critical regions



Solving the Race Conditions

Busy waiting:
- Disabling interrupts
- Lock variables
- Strict alternation
- Peterson's solution
- The TSL (Test and Set Lock) instruction

Sleep and wakeup:
- Semaphores
- Mutexes

Busy Waiting (Disabling Interrupts)

- Disable all interrupts just after entering the critical region and re-enable them just before leaving it
- Problem: this gives user programs the power to turn system interrupts off; if a process forgets to turn them back on, the system is finished
- On a multiprocessor system, disabling interrupts affects only one processor; the other processors can still access the shared area

Busy Waiting (Lock Variables)

- Use a lock variable: a process sets it to 1 when it enters the critical region; if the lock is already 1, the process just waits until it becomes 0
- When the process exits the critical region, it resets the lock to 0
- Problem: the lock variable itself is shared, so access to the lock variable creates another race condition

Busy Waiting (Strict Alternation)

Algorithm:
- A shared turn variable records whose turn it is to enter the critical region
- A process that wants to enter keeps checking the variable until it holds its own number
- When a process exits the critical region, it hands the turn to the other process

Not good:
- Similar problems to the previous solutions: it keeps the CPU busy
- If one process is much slower outside its critical region, the other process can be kept out of its critical region even though nobody is inside it
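
A minimal sketch of strict alternation for two processes (0 and 1), assuming the turn variable lives in memory shared by both; enter_region and leave_region are illustrative names.

/* Strict alternation: the two processes take turns entering the CR. */
volatile int turn = 0;              /* whose turn is it to enter the CR? */

void enter_region(int process)      /* process is 0 or 1 */
{
    while (turn != process)
        ;                           /* busy wait until it is our turn */
}

void leave_region(int process)
{
    turn = 1 - process;             /* hand the turn to the other process */
}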

Busy Waiting (Peterson's Solution)

- Keep an array showing, for each process, whether it is interested in entering the critical region
- Keep a variable (turn) showing whose turn it is

Algorithm:
- A process entering the critical region sets its array element (process 0 sets element 0, process 1 sets element 1, etc.) and sets turn to its own process number
- On leaving the critical region, it resets its array element
- A second process that tries to enter busy waits until the first process resets its array element
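
The standard two-process form of Peterson's solution looks like this, with interested as the array and turn as the shared variable described above; on modern hardware a real implementation would also need memory barriers, which are omitted here.

/* Peterson's solution for two processes. */
#define FALSE 0
#define TRUE  1
#define N     2                     /* number of processes */

volatile int turn;                  /* whose turn is it? */
volatile int interested[N];         /* all values initially FALSE (0) */

void enter_region(int process)      /* process is 0 or 1 */
{
    int other = 1 - process;        /* number of the other process */
    interested[process] = TRUE;     /* show that we want to enter */
    turn = process;                 /* set the turn */
    while (turn == process && interested[other] == TRUE)
        ;                           /* busy wait until it is safe to enter */
}

void leave_region(int process)
{
    interested[process] = FALSE;    /* we have left the critical region */
}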

Busy Waiting (TSL Instruction)

TSL (Test and Set Lock) instruction: TSL RX,LOCK
- An indivisible pair of operations, with help from the hardware:
  - reads the lock into register RX
  - stores a nonzero value at the lock

Algorithm:
- Use a shared lock variable (flag)
- When the flag is 0, a process may set it to 1 using the TSL instruction and enter its critical region; after finishing, it resets the flag to 0
- When the flag is 1, any process that wants to enter the critical region must wait
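
Since portable C has no TSL instruction, the sketch below uses a C11 atomic_flag as a stand-in: atomic_flag_test_and_set performs the same indivisible read-and-set step that TSL performs in hardware.

/* TSL-style spin lock built on a C11 atomic flag. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = free, set = taken */

void enter_region(void)
{
    /* Reads the old value and writes "set" in one indivisible step,
     * just as TSL RX,LOCK does in hardware. */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait until the lock is free */
}

void leave_region(void)
{
    atomic_flag_clear(&lock);                 /* reset the flag */
}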

Busy Waiting: Disadvantages

- Continuously checking the status wastes CPU time
- Can have unexpected results, e.g., the priority inversion problem:
  - A process L with lower priority enters its critical region
  - A process H with higher priority preempts L and starts executing
  - At some point H needs to enter the critical region, but L is already inside
  - H keeps the CPU, continuously checking; L, with its lower priority, never gets a chance to run and leave the critical region

Sleep and Wakeup

- SLEEP: a system call that causes the calling process to be blocked
- WAKEUP: a system call that causes a blocked process to be awakened and resume execution
- If a WAKEUP signal is lost (sent when the target process is not asleep yet), the process may later go to sleep and never wake up

Semaphores

- A semaphore counts the number of wakeups saved for future use; it solves the lost-wakeup problem
- A semaphore can have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups are pending
- Checking the value, changing it, and possibly going to sleep are performed as a single atomic action
- Two atomic operations:
  - down: if the semaphore value is greater than 0, decrement it; if the value is 0, the process is put to sleep
  - up: increment the value of the semaphore; if one or more processes were sleeping on that semaphore, one of them is chosen by the system (e.g., at random) and allowed to complete its down. After an up on a semaphore with processes sleeping on it, the semaphore is still 0, but there is one fewer process sleeping on it
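
With POSIX semaphores (assumed available on the host system), down corresponds to sem_wait and up to sem_post; the short sketch below just shows that mapping.

/* down/up mapped onto POSIX semaphore calls. */
#include <semaphore.h>

sem_t s;

void example(void)
{
    sem_init(&s, 0, 1);   /* shared between threads, initial value 1 */

    sem_wait(&s);         /* down: decrement, or sleep if the value is 0 */
    /* ... work inside the critical region ... */
    sem_post(&s);         /* up: increment and wake one sleeper, if any */

    sem_destroy(&s);
}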

Solving Race Conditions with Semaphores

The producer-consumer solution uses three semaphores:
- full: counts the number of slots that are full; initially 0
- empty: counts the number of slots that are empty; initially equal to the number of slots in the shared area
- mutex: makes sure the producer and consumer do not access the buffer at the same time; initially 1

Each process does a down on mutex just before entering its critical region and an up just after leaving it; full and empty are used for synchronization.
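
A sketch of this producer-consumer arrangement using POSIX semaphores and pthreads; the buffer size, item values, and endless loops are illustrative assumptions.

/* Producer-consumer with the three semaphores described above. */
#include <semaphore.h>
#include <pthread.h>

#define N 100                       /* number of slots in the buffer */

static int buffer[N];
static int in = 0, out = 0;

static sem_t mutex;                 /* controls access to the buffer, init 1 */
static sem_t empty;                 /* counts empty slots, init N */
static sem_t full;                  /* counts full slots, init 0 */

static void *producer(void *arg) {
    for (int item = 0; ; item++) {
        sem_wait(&empty);           /* down(empty): wait for a free slot */
        sem_wait(&mutex);           /* down(mutex): enter critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);           /* up(mutex): leave critical region */
        sem_post(&full);            /* up(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full);            /* down(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);           /* up(empty): one more empty slot */
        (void)item;                 /* consume the item here */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full,  0, 0);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);          /* never returns in this endless sketch */
    pthread_join(c, NULL);
    return 0;
}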

Mutexes

- A mutex is a variable that can be in one of two states: unlocked (0) or locked (1)
- Only 1 bit is required to represent it, but in practice an integer is often used
- When a thread (or process) needs access to a critical region, it calls mutex_lock
- If the mutex is currently unlocked (the critical region is available), the call succeeds and the calling thread is free to enter the critical region
- If the mutex is already locked, the calling thread is blocked until the thread in the critical region finishes and calls mutex_unlock
- If multiple threads are blocked on the mutex, one of them is chosen at random and allowed to acquire the lock
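
A minimal sketch with the POSIX mutex API, which follows the same lock/unlock pattern: pthread_mutex_lock blocks the caller if the mutex is already held, and pthread_mutex_unlock lets one blocked waiter proceed. The shared counter is just an example of protected data.

/* Protecting a critical region with a POSIX mutex. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

void *worker(void *arg)
{
    pthread_mutex_lock(&m);         /* blocks if another thread holds the lock */
    shared_counter++;               /* critical region */
    pthread_mutex_unlock(&m);       /* wakes one blocked waiter, if any */
    return NULL;
}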

Process Scheduling

- The operating system (OS) must decide what to run when more than one process is runnable
- Scheduler: the part of the OS that decides which process runs
- Scheduling algorithm: the algorithm used to decide which process runs next
- A good algorithm should meet these criteria:
  - Fairness: each process gets its fair share of the CPU
  - Efficiency: keep the CPU as busy as possible
  - Response time: minimize the response time for interactive users
  - Turnaround: minimize the time batch users must wait for output
  - Throughput: maximize the number of jobs processed per hour

Scheduling

Bursts of CPU usage alternate with periods of I/O wait:
(a) a CPU-bound process
(b) an I/O-bound process

Scheduling Algorithms

- First-Come First-Served
- Shortest Job First
- Shortest Remaining Time Next
- Three-level scheduling
- Round Robin Scheduling
- Priority Scheduling
- Multiple Queues
- Shortest Process Next
- Guaranteed Scheduling
- Lottery Scheduling
- Fair-Share Scheduling

First-Come First-Served

- The simplest algorithm: processes use the CPU in the order they request it
- There is a single queue of ready processes
- When the first job enters the system, it is started immediately and allowed to run as long as it wants
- As new jobs come in, they are put onto the end of the queue
- When the running process blocks, the first process on the queue runs next
- When a blocked process becomes ready, it is placed at the end of the queue

Shortest Job First

- The process execution times must be known in advance
- Especially useful in batch processing systems
- Run the process that will take the shortest time first, then continue with the next shortest

An example of shortest job first scheduling

Shortest Remaining Time Next

- The scheduler chooses the process whose remaining run time is shortest
- The remaining times must be known
- When a new job arrives, its total time is compared to the current process's remaining time
- If the new job needs less time, the current process is suspended and the new job is started
- This scheme allows new short jobs to get good service

Three-Level Scheduling

First level: the admission scheduler
- Jobs arriving at the system are initially placed in an input queue stored on the disk
- Decides which jobs to admit to the system
- A typical policy might be to look for a mix of compute-bound and I/O-bound jobs; alternatively, short jobs could be admitted quickly

Second level: the memory scheduler
- Determines which processes are kept in memory and which on the disk

Third level: the CPU scheduler
- Picks one of the ready processes in main memory to run next; any suitable algorithm can be used here

Three-Level Scheduling

The three levels of scheduling

Round Robin Scheduling

- One of the oldest, simplest, fairest, and most widely used algorithms
- Each process is assigned a time interval, called its quantum
- When the quantum ends, the running process is preempted and the next process is switched in

Figure: the ready list, showing the current process and the next process
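
A toy simulation of the round-robin bookkeeping (the quantum length, process names, and remaining times are made up): each pass over the ready list gives every unfinished process at most one quantum.

/* Toy round-robin simulation. */
#include <stdio.h>

#define QUANTUM 4                   /* quantum length, in ticks */

struct proc { const char *name; int remaining; };

int main(void) {
    struct proc rq[] = { {"A", 9}, {"B", 5}, {"C", 3} };
    int n = 3, alive = 3;

    while (alive > 0) {
        for (int i = 0; i < n; i++) {
            if (rq[i].remaining <= 0) continue;         /* already finished */
            int run = rq[i].remaining < QUANTUM ? rq[i].remaining : QUANTUM;
            rq[i].remaining -= run;                     /* use (part of) a quantum */
            printf("%s runs %d ticks, %d left\n", rq[i].name, run, rq[i].remaining);
            if (rq[i].remaining == 0) alive--;          /* process terminates */
        }
    }
    return 0;
}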

Priority Scheduling

- Each process is assigned a priority; the process with the highest priority runs first
- To prevent high-priority processes from running indefinitely, the scheduler may decrease the running process's priority at each clock tick
- It is often convenient to group processes into priority classes and apply priority scheduling between classes and round robin within each class

Multiple Queues

- To reduce the number of swaps, create priority queues and assign:
  - 1 quantum to processes in the 1st priority queue
  - 2 quanta to processes in the 2nd priority queue
  - 4 quanta to processes in the 3rd priority queue
  - 8 quanta to processes in the 4th priority queue, and so on
- Whenever a process uses up all the quanta allocated to it, it is moved to the next priority queue

Guaranteed Scheduling

- If there are n processes, each should receive about 1/n of the CPU power
- Keep track of how much CPU time each process has had since its creation
- Then compute the amount of CPU time each process is entitled to
- Compute the ratio of actual CPU time consumed to CPU time entitled:
  - a ratio of 0.5 means a process has had only half of what it should have had
  - a ratio of 2.0 means a process has had twice as much as it was entitled to
- Run the process with the lowest ratio until its ratio has moved above that of its closest competitor

Lottery Scheduling

- Give processes lottery tickets for CPU time
- To schedule, a lottery ticket is chosen at random and the process holding that ticket runs
- Example: with 50 lotteries per second, each winner gets about 20 msec of CPU time as its quantum
- Lottery scheduling can be used to solve otherwise difficult scheduling problems
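
A small sketch of the ticket draw (the process names and ticket counts are made up): rand() picks a winning ticket, and the process holding it gets the next quantum.

/* Lottery scheduling: draw a ticket, run its holder. */
#include <stdio.h>
#include <stdlib.h>

struct proc { const char *name; int tickets; };

int pick_winner(const struct proc *p, int n, int total)
{
    int draw = rand() % total;              /* choose a ticket at random */
    for (int i = 0; i < n; i++) {
        draw -= p[i].tickets;
        if (draw < 0)
            return i;                       /* this process holds the ticket */
    }
    return n - 1;                           /* not reached */
}

int main(void) {
    struct proc procs[] = { {"A", 10}, {"B", 20}, {"C", 20} };
    int total = 50;                         /* total tickets outstanding */
    for (int slot = 0; slot < 5; slot++)    /* hold a few lotteries */
        printf("schedule %s\n", procs[pick_winner(procs, 3, total)].name);
    return 0;
}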

Fair-Share Scheduling

- Takes into account which user owns each process
- Example: with round robin and equal priorities, if user 1 has 9 processes and user 2 has 1 process, user 1 gets 90% of the CPU and user 2 only 10%
- Instead, allocate each user a fraction of the CPU time and pick processes so as to equalize the CPU time used by the users
- If two users are using the system, each gets 50% of the CPU, no matter how many processes they have in existence

Scheduling Threads

- When several processes each have multiple threads, two levels of parallelism are present: processes and threads
- Scheduling in such systems differs depending on whether user-level threads, kernel-level threads, or both are supported

User-Level Threads

Possible scheduling of user-level threads, with a 50-msec process quantum and threads that run 5 msec per CPU burst

Kernel-Level Threads

Possible scheduling of kernel-level threads, with a 50-msec process quantum and threads that run 5 msec per CPU burst

End of Chapter 2
Processes and Threads

