Andrea Molina
April 2017
Contents
1 Concurrency 3
1.1 Microkernel Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Uniprogramming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Multiprogramming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 The Basic Problem of Concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Properties of this simple multiprogramming technique . . . . . . . . . . . . . . . . . . . . . . 4
1.6 SMT/Hyperthreading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 How to protect threads from one another? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8 How do we multiplex processes? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.9 CPU Switch From Process to Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.10 Diagram of Process State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.10.1 Process Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.10.2 What does it take to create a process? . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.10.3 Multiple Processes Collaborate on a Task . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.11 Modern Lightweight Process with Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.12 Single and Multithreaded Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.13 Examples of multithreaded programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.14 Thread State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.15 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.16 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Thread Dispatching 8
2.1 Recall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Thread State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Ready Queue And Various I/O Device Queues . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 ThreadFork(): Create a New Thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5 How do we initialize TCB and Stack? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6 What does ThreadRoot() look like? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.7 What does ThreadFinish() do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.8 Additional Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.9 Dispatch Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.10 Running a thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.10.1 Internal Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.10.2 External Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.11 Choosing a Thread to Run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.12 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Cooperating Threads 12
3.1 ThreadJoin() system call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Use of Join for Traditional Procedure Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Multiprocessing vs Multiprogramming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4 Synchronization 13
7 Thread Scheduling 17
7.1 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.2 First-Come, First-Served (FCFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.3 Round Robin Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.4 Shortest Job First (SJF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.5 Shortest Remaining Time First (SRTF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.6 Multiprocessor Thread Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1 Concurrency
1.2 Uniprogramming
One thread at a time.
MS-DOS, early Macintosh, batch processing.
1.3 Multiprogramming
More than one thread at a time.
Multics, UNIX/Linux, OS/2, Windows NT/2000/XP, Mac OS X.
1.5 Properties of this simple multiprogramming technique
All virtual CPUs share the same non-CPU resources:
I/O devices.
Memory.
Consequence of sharing:
Each thread can access the data of every other thread (good for sharing, bad for protection).
Threads can share instructions (good for sharing, bad for protection).
1.6 SMT/Hyperthreading
Hardware technique.
Exploits natural properties of superscalar processors to provide the illusion of multiple processors.
Higher utilization of processor resources.
Can schedule each thread as if it were a separate CPU.
However, not a linear speedup.
If you have a real multiprocessor, you should schedule onto separate physical processors first.
1. Address space
The set of accessible addresses + the state associated with them.
2. Process
Operating system abstraction representing what is needed to run a single program.
Often called a heavyweight process.
There is no concurrency in a heavyweight process.
Formally: a single, sequential stream of execution in its own address space.
1.10 Diagram of Process State
1.10.1 Process Scheduling
PCBs move from queue to queue as they change state.
Decisions about the order in which to remove threads from queues are scheduling decisions.
Many algorithms possible.
Figure 2: Single and Multithreaded Processes
1.13 Examples of multithreaded programs
Most modern OS kernels
Internally concurrent because they must deal with concurrent requests from multiple users.
But no protection is needed within the kernel.
Database Servers
Access to shared data by many concurrent users.
Also background utility processing must be done.
Network Servers
Concurrent requests from network.
Parallel Programming (More than one physical CPU)
1.14 Thread State
State shared by all threads in a process/address space:
Contents of memory (global variables, heap).
I/O state (file system, network connections, etc).
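The split between shared and private state can be sketched in Python (a stand-in for the notes' pseudocode; names are illustrative): the global list is memory visible to every thread in the process, while each thread's local variable lives on its own stack.

```python
import threading

shared = []              # global/heap state: visible to every thread in the process

def worker(tid):
    local = tid * 10     # stack state: private to this thread's activation
    shared.append(local) # all threads write into the same shared list

def run_demo(n=4):
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(shared)
```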
1.15 Classification
1.16 Summary
Processes have two parts
Threads (Concurrency).
Address Spaces (Protection).
Concurrency accomplished by multiplexing CPU Time
Loading a new thread (PC, registers).
Such context switching may be voluntary (yield(), I/O operations) or involuntary (timer or other interrupts).
Protection accomplished by restricting access.
2 Thread Dispatching
2.1 Recall
Process
Operating system abstraction to represent what is needed to run a single, multithreaded program.
2 parts
Multiple Threads
Each thread is a single, sequential stream of execution.
Protected Resources
Main memory state (contents of address space).
I/O state (e.g., file descriptors).
Threads encapsulate concurrency
Active component of a process.
In Nachos:
Thread is a class that includes the TCB.
The OS keeps track of TCBs in protected memory.
2.3 Ready Queue And Various I/O Device Queues
When a thread is not running, its TCB is in some scheduler queue.
Separate queue for each device/signal/condition.
Each queue can have a different scheduler policy.
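A minimal Python sketch of this arrangement (the class and method names are illustrative, not from any real kernel): one FIFO ready queue, plus one wait queue per device or condition, with blocked TCBs moved back to the ready queue when their device signals.

```python
from collections import deque

class Scheduler:
    def __init__(self):
        self.ready = deque()       # TCBs eligible to run
        self.wait_queues = {}      # one queue per device/signal/condition

    def make_ready(self, tcb):
        self.ready.append(tcb)

    def block_on(self, device, tcb):
        # thread sleeps on the queue for this device
        self.wait_queues.setdefault(device, deque()).append(tcb)

    def wake_all(self, device):
        # device signalled: move every waiter back to the ready queue
        q = self.wait_queues.pop(device, deque())
        while q:
            self.ready.append(q.popleft())

    def pick_next(self):
        # FIFO policy here; each queue could use a different policy
        return self.ready.popleft() if self.ready else None
```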
Figure 4: ThreadRoot()
2.7 What does ThreadFinish() do?
Calls run_new_thread() to run another thread:
ThreadFinish() and exit() are essentially the same procedure entered at user level.
Figure 6: Loop
2.10 Running a thread
2.10.1 Internal Events
Blocking on I/O
The act of requesting I/O implicitly yields the CPU.
Waiting on a signal from another thread
Thread asks to wait and thus yields the CPU.
Thread executes a yield()
Thread volunteers to give up CPU.
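Voluntary yielding can be illustrated with Python generators standing in for threads: each yield gives up the CPU, and a round-robin dispatch loop picks the next runnable task. This is a sketch of the idea, not how a real kernel dispatches.

```python
from collections import deque

def task(steps):
    # a "thread" that voluntarily yields the CPU `steps` times, then finishes
    for _ in range(steps):
        yield

def cooperative_run(named_tasks):
    # round-robin dispatch loop: run each task until it yields, then requeue it
    ready = deque(named_tasks)
    trace = []
    while ready:
        name, gen = ready.popleft()
        try:
            next(gen)                   # run until the task yields...
            trace.append(name)
            ready.append((name, gen))   # ...then put it back on the ready queue
        except StopIteration:
            pass                        # task finished; do not requeue
    return trace
```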
2.12 Summary
The state of a thread is contained in the TCB
Registers, PC, stack pointer
States: New, Ready, Running, Waiting, or Terminated
Multithreading provides simple illusion of multiple CPUs
Switch registers and stack to dispatch new thread
Provide mechanism to ensure dispatcher regains control
Switch routine
Can be very expensive if many registers
Must be very carefully constructed!
Many scheduling options
Decision of which thread to run is complex enough to fill a complete lecture.
3 Cooperating Threads
3.1 ThreadJoin() system call
One thread can wait for another to finish with the ThreadJoin(tid) call
Calling thread will be taken off run queue and placed on waiting queue for thread tid.
Where is a logical place to store this wait queue?
On a queue inside the TCB.
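The same pattern using Python's threading module as a stand-in for ThreadFork()/ThreadJoin(): the caller blocks on join() until the child has finished, so reading the child's result afterwards is safe.

```python
import threading

def fork_and_join():
    result = {}

    def child():
        result["value"] = 42          # work done by the forked thread

    t = threading.Thread(target=child)
    t.start()                         # ThreadFork() analogue
    t.join()                          # ThreadJoin(): caller sleeps until child finishes
    return result["value"]            # safe: join guarantees the child has run
```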
3.4 Summary
Interrupts:
Hardware mechanism for returning control to operating system.
Used for important/high-priority events.
Can force dispatcher to schedule a different thread (preemptive multithreading).
New Threads Created with ThreadFork()
4 Synchronization
Concurrent threads are a very useful abstraction.
Allow transparent overlapping of computation and I/O.
Allow use of parallel processing when available.
Concurrent threads introduce problems when accessing shared data
Lock before entering critical section and before accessing shared data.
Unlock when leaving, after accessing shared data.
Wait if locked
Important idea: all synchronization involves waiting.
Should sleep if waiting for a long time.
Atomic Load/Store.
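The lock/unlock discipline above, sketched with Python's threading.Lock (the function and variable names are illustrative): every access to the shared counter happens inside the critical section, so no increments are lost.

```python
import threading

def locked_increments(nthreads=4, iters=1000):
    total = {"n": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(iters):
            with lock:            # lock before entering the critical section...
                total["n"] += 1   # ...access shared data...
            # ...and the lock is released on exit from the with-block

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total["n"]
```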
5.3 Read-Modify-Write
Figure 7: Read-Modify-Write
Simple explanation:
If the lock is free, test&set reads 0 and sets value = 1, so the lock is now busy. It returns 0, so the while loop exits.
If the lock is busy, test&set reads 1 and sets value = 1 (no change). It returns 1, so the while loop continues.
When we set value = 0, someone else can get the lock.
5.5 Busy-Waiting
Thread consumes cycles while waiting.
5.6 Busy-Waiting for Lock
Positives for this solution
Machine can receive interrupts
User code can use this lock.
Works on a multiprocessor.
Negatives
This is very inefficient because the busy-waiting thread will consume cycles waiting.
Waiting thread may take cycles away from thread holding lock (no one wins!).
Priority Inversion
For semaphores and monitors, waiting thread may wait for an arbitrary length of time!
5.9 Semaphores
Semaphores are a kind of generalized lock.
Definition:
A semaphore has a non-negative integer value and supports the following two operations:
P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1.
P() stands for proberen (to test).
V(): an atomic operation that increments the semaphore by 1, waking up a waiting P.
V() stands for verhogen (to increment).
Semaphores are like integers
The only operations allowed are P and V; you can't read or write the value, except to set it initially.
Operations must be atomic.
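A sketch of P() and V() built from Python's threading.Condition, following the definition above (the class layout is illustrative; production code would use the library's own Semaphore):

```python
import threading

class Semaphore:
    """Non-negative counter with atomic P() and V(), per the definition above."""
    def __init__(self, value=0):
        self.value = value
        self.cond = threading.Condition()

    def P(self):
        with self.cond:
            while self.value == 0:   # wait for the semaphore to become positive
                self.cond.wait()
            self.value -= 1          # then decrement it by 1

    def V(self):
        with self.cond:
            self.value += 1          # increment by 1...
            self.cond.notify()       # ...waking up one waiting P()
```

With an initial value of 1 this behaves as a mutual-exclusion (binary) semaphore; with an initial value of 0 it enforces a scheduling constraint, as the test below shows.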
Two Uses of Semaphores
Mutual Exclusion (initial value = 1)
Also called Binary Semaphore.
Can be used for mutual exclusion.
Scheduling Constraints (initial value = 0)
Locks are fine for mutual exclusion.
A monitor is a lock and zero or more condition variables for managing concurrent access to shared data.
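A classic instance of this pattern is a bounded buffer, sketched here in Python with one lock and two condition variables (names are illustrative): producers sleep while the buffer is full, consumers sleep while it is empty.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor style: one lock plus condition variables guard the shared queue."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.items) == self.capacity:
                self.not_full.wait()       # sleep until there is room
            self.items.append(item)
            self.not_empty.notify()        # wake one waiting get()

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()      # sleep until something arrives
            item = self.items.popleft()
            self.not_full.notify()         # wake one waiting put()
            return item
```

Note the while (not if) around each wait(): the condition must be rechecked after waking, since another thread may have consumed it first.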
6.2.2 Four conditions for deadlocks
1. Mutual exclusion
Only one thread at a time can use a resource.
7 Thread Scheduling
7.1 Scheduling
Deciding which threads are given access to resources from moment to moment.
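For non-preemptive policies such as FCFS and SJF from the contents above, average waiting time can be computed directly; a Python sketch, using the classic illustrative burst times 24, 3, 3 (not data from these notes):

```python
def avg_waiting_time(bursts):
    # jobs run back-to-back in the given order;
    # each job waits for the total CPU time of the jobs before it
    wait = elapsed = 0
    for burst in bursts:
        wait += elapsed
        elapsed += burst
    return wait / len(bursts)

def fcfs(bursts):
    return avg_waiting_time(bursts)          # first-come, first-served: arrival order

def sjf(bursts):
    return avg_waiting_time(sorted(bursts))  # shortest job first: shortest burst first
```

Running the long job first makes both short jobs wait behind it, which is why SJF minimizes average waiting time here.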
7.6 Multiprocessor Thread Scheduling
Load sharing
Processes are not assigned to a particular processor.
Gang scheduling
A set of related threads is scheduled to run on a set of processors at the same time.
Dedicated processor assignment
Threads are assigned to a specific processor.
Dynamic scheduling
The number of threads can be altered during the course of execution.
Figure 12: Speak
Figure 13: Listen
Figure 15: Condition 2 Wake