
CS Division EECS Department

University of Central Florida


CGS 3763 Operating System Concepts
Spring 2013 dcm
Homework 4 (100 points)
Due Wednesday, March 13 2013 - 5 PM
Homework should be written in Word or LaTeX and emailed to the TA
Edward Aymerich, Email: edward.aymerich@knights.ucf.edu
Subject of the Email: First.Last student name - HW4
Problem 4.1 (10pts)
What are two differences between user-level and kernel-level threads?
1. Kernel-level threads are supported and managed directly by the
kernel, whereas user-level threads are managed without kernel support.
2. The creation of user-level threads is usually faster than the creation of
kernel-level threads.
Problem 4.2 (5pts)
Describe the actions taken by a kernel to context-switch between kernel-level threads.
The kernel saves the state of the current thread, and then restores the
state of the new thread being scheduled.
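For illustration only (not part of the required answer): the same save/restore idea can be demonstrated in user space with the POSIX ucontext API. A real kernel performs the save and restore of registers, stack pointer, and program counter in architecture-specific assembly; the names and stack size below are just for this sketch.

/* Sketch of a context switch using POSIX ucontext (Linux/glibc). The kernel
 * does the analogous save/restore of CPU state when switching kernel-level
 * threads. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void thread_func(void)
{
    printf("running the other thread with its restored context\n");
    /* Save this thread's state and restore the saved state of main. */
    swapcontext(&thread_ctx, &main_ctx);
}

int main(void)
{
    static char stack[64 * 1024];        /* the new context needs its own stack */

    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link = &main_ctx;
    makecontext(&thread_ctx, thread_func, 0);

    printf("saving the current state, restoring the other thread's state\n");
    swapcontext(&main_ctx, &thread_ctx);  /* save self, load thread_ctx */
    printf("back in the original context\n");
    return 0;
}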
Problem 4.3 (10pts)
What resources are used when a thread is created? How do they differ from those used
when a process is created?
A new context (CPU registers) and stack must be created for the new
thread.
When a new process is created, a full PCB must be created, which contains
much more information (allocated memory, list of open files, accounting
information, etc.), and memory must also be allocated for the code and
data of the new process.
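A minimal sketch of the difference on a POSIX system (assuming pthreads and fork; compile with -pthread): pthread_create() only needs a new stack and register context inside the existing address space, while fork() makes the kernel build a new PCB and a copy of the parent's address space.

/* Thread creation vs. process creation on a POSIX system. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void *thread_body(void *arg)
{
    printf("thread: shares the parent's address space\n");
    return NULL;
}

int main(void)
{
    /* Thread creation: new stack + register context only. */
    pthread_t tid;
    if (pthread_create(&tid, NULL, thread_body, NULL) != 0) {
        perror("pthread_create");
        exit(EXIT_FAILURE);
    }
    pthread_join(tid, NULL);

    /* Process creation: the kernel builds a whole new PCB and duplicates
     * the parent's memory image (typically copy-on-write). */
    pid_t pid = fork();
    if (pid == 0) {
        printf("child: has its own copy of the address space\n");
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return 0;
}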

Problem 4.4 (5pts)


Which of the following program components are shared across threads in a multithreaded
process?
1. Register values
2. Heap memory
3. Global variables
4. Stack memory
Heap memory and Global variables are shared across threads.
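A short pthreads sketch illustrating this (the function and variable names are made up for the example; compile with -pthread): both threads update the same global variable and the same heap allocation, while each keeps its own stack variable and its own register set.

/* Globals and heap are shared between threads; stacks are per-thread. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;        /* data segment: shared */
int *heap_counter;             /* heap allocation: shared */

static void *worker(void *arg)
{
    int local = 0;             /* stack variable: private to this thread */
    for (int i = 0; i < 1000; i++) {
        local++;
        /* Both threads touch the same memory; a mutex would normally be
         * needed, omitted to keep the sketch short. */
        global_counter++;
        (*heap_counter)++;
    }
    printf("local (own stack) = %d\n", local);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    heap_counter = calloc(1, sizeof *heap_counter);

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Without synchronization the totals may fall below 2000, which itself
     * shows that both threads wrote to the same shared memory. */
    printf("global = %d, heap = %d\n", global_counter, *heap_counter);
    free(heap_counter);
    return 0;
}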
Problem 4.5 (15pts)
Consider a multiprocessor system and a multithreaded program written using the many-to-many
threading model. Let N, the number of user-level threads, be larger than P, the number of
processors, and let K be the number of kernel threads allocated to the program. Discuss the
performance implications of the following scenarios:
1. K < P
2. K = P
3. P < K < N
1. K<P
The process will not use all the processors in the system; its work is
done by at most K processors, so the process takes longer to finish.
2. K=P
The process will utilize all the processors in the system, therefore the
work will finish as soon as possible. But if a thread is blocked by a
system call, then a processor is not utilized while the thread waits,
lowering the system usage.
3. P < K < N
All the processors in the system will be used by the process. The OS will
have to schedule the K threads over the P processors, with a little
overhead for the context switching. However, if a thread running in some
processor gets blocked by a system call, the scheduler can schedule
another ready thread in that processor, maximizing system utilization.
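As a side note, POSIX exposes a hint for the desired number of kernel threads K in a many-to-many implementation, pthread_setconcurrency(). The sketch below is only illustrative; on one-to-one implementations such as current Linux the call succeeds but has no scheduling effect.

/* Hinting the desired level of kernel-thread concurrency (compile with
 * -pthread). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* P: number of online processors (glibc/BSD extension to sysconf). */
    long p = sysconf(_SC_NPROCESSORS_ONLN);

    /* Ask for more kernel threads than processors (the P < K <= N case),
     * so one thread blocking in a system call need not idle a processor. */
    int rc = pthread_setconcurrency((int)p + 1);
    if (rc != 0)
        fprintf(stderr, "pthread_setconcurrency failed: %d\n", rc);

    printf("P = %ld processors, requested concurrency K = %ld\n", p, p + 1);
    return 0;
}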

Problem 4.6 (15pts)


Define the concepts of CPU-bound and I/O-bound process. Why is it important for a
scheduler to distinguish the two?
A CPU-bound process is a process that spends most of its time executing
instructions.
An I/O-bound process is one that spends most of its time waiting for
I/O operations to complete.
It is important for a scheduler to distinguish the two because this allows
the scheduler to keep a balanced system. If the scheduler only executes
processes that are I/O-bound, then the CPU is underutilized. On the
contrary, if the scheduler only executes processes that are CPU-bound
then a high CPU utilization is achieved, but other system resources (such
as the hard drive or network interface) may be underused. A good
balance of CPU-bound and I/O-bound processes keeps the overall system
as busy as possible.
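An illustrative sketch of the two kinds of work (the file path is hypothetical): the first function spends essentially all of its time executing instructions, while the second spends most of its time blocked waiting on reads, so a scheduler that can tell them apart can overlap the two.

/* CPU-bound vs. I/O-bound work. */
#include <stdio.h>

/* CPU-bound: pure computation, no system calls. */
static double cpu_bound(long iterations)
{
    double x = 0.0;
    for (long i = 1; i <= iterations; i++)
        x += 1.0 / (double)i;
    return x;
}

/* I/O-bound: nearly all of its time is spent blocked waiting for the device. */
static long io_bound(const char *path)
{
    char buf[4096];
    long total = 0;
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)   /* blocks on each read */
        total += (long)n;
    fclose(f);
    return total;
}

int main(void)
{
    printf("cpu-bound result: %f\n", cpu_bound(50 * 1000 * 1000));
    printf("bytes read: %ld\n", io_bound("/tmp/large-file.bin"));  /* hypothetical path */
    return 0;
}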
Problem 4.7 (15pts)
Discuss how the following pairs of scheduling criteria conflict in certain settings.
1. CPU utilization and response time.
A system can have a better response time using a scheduling
policy like Round Robin, but the more frequent context switching
could lower CPU utilization.
2. Average turnaround time and maximum waiting time.
The average turnaround time can be minimized using a Shortest
Job First scheduling policy, but this could increase the maximum
waiting time, because longer jobs are always pushed behind
shorter jobs.
3. I/O device utilization and CPU utilization.
To achieve maximum CPU utilization the scheduler must choose
long running CPU-bound processes, but doing so will lower I/O
device utilization. To maximize I/O device utilization, the scheduler
must choose I/O-bound processes, which spend most of their time
waiting for the devices to reply, so the CPU utilization is lowered.

Problem 4.8 (6+6+6+2=20pts)

a.
Gantt charts (each entry shows the interval during which the process runs):

FCFS:
P1 [0-10], P2 [10-11], P3 [11-13], P4 [13-14], P5 [14-19]

SJF:
P2 [0-1], P4 [1-2], P3 [2-4], P5 [4-9], P1 [9-19]

Non-preemptive Priority:
P2 [0-1], P5 [1-6], P1 [6-16], P3 [16-18], P4 [18-19]

RR (quantum = 1):
P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
(one time unit per slot, from t = 0 to t = 19)
b.
Turnaround time for each process:

         FCFS   SJF   Priority   RR
P1        10    19       16      19
P2        11     1        1       2
P3        13     4       18       7
P4        14     2       19       4
P5        19     9        6      14

c.
Waiting time for each process:

          FCFS   SJF   Priority   RR
P1          0     9        6       9
P2         10     0        0       1
P3         11     2       16       5
P4         13     1       18       3
P5         14     4        1       9
Sum        48    16       41      27
Average   9.6   3.2      8.2     5.4

d.
SJF results in the minimum average waiting time (3.2).
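The schedules and averages above can be checked with a small simulation. The sketch below (a C program written for this write-up, not part of the assignment) hard-codes the burst times and the dispatch orders from part a, computes each waiting time, and prints the per-algorithm averages.

/* Verification sketch for Problem 4.8. All five processes arrive at time 0;
 * burst times are P1=10, P2=1, P3=2, P4=1, P5=5; the priority order used in
 * part a is P2, P5, P1, P3, P4 (highest first). */
#include <stdio.h>

#define N 5

static const int burst[N] = {10, 1, 2, 1, 5};   /* P1 .. P5 */

/* Non-preemptive schedule: run the processes back to back in the given
 * dispatch order; since everything arrives at time 0, a process's waiting
 * time is simply the time at which it first gets the CPU. */
static void nonpreemptive(const int order[N], const char *name)
{
    int t = 0, wait[N], sum = 0;
    for (int i = 0; i < N; i++) {
        wait[order[i]] = t;
        t += burst[order[i]];
    }
    printf("%-9s", name);
    for (int i = 0; i < N; i++) { printf(" P%d=%2d", i + 1, wait[i]); sum += wait[i]; }
    printf("  avg = %.1f\n", (double)sum / N);
}

/* Round robin, quantum 1: waiting time = completion time - burst time.
 * Sweeping the processes in index order each round gives the same schedule
 * as a ready queue here, because every process arrives at time 0. */
static void round_robin(void)
{
    int left[N], wait[N], done = 0, t = 0, sum = 0;
    for (int i = 0; i < N; i++) left[i] = burst[i];
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (left[i] == 0) continue;
            left[i]--; t++;                       /* run one time unit */
            if (left[i] == 0) { wait[i] = t - burst[i]; done++; }
        }
    }
    printf("%-9s", "RR(q=1)");
    for (int i = 0; i < N; i++) { printf(" P%d=%2d", i + 1, wait[i]); sum += wait[i]; }
    printf("  avg = %.1f\n", (double)sum / N);
}

int main(void)
{
    const int fcfs[N] = {0, 1, 2, 3, 4};   /* arrival order: P1 P2 P3 P4 P5 */
    const int sjf[N]  = {1, 3, 2, 4, 0};   /* shortest burst first: P2 P4 P3 P5 P1 */
    const int prio[N] = {1, 4, 0, 2, 3};   /* highest priority first: P2 P5 P1 P3 P4 */

    nonpreemptive(fcfs, "FCFS");
    nonpreemptive(sjf, "SJF");
    nonpreemptive(prio, "Priority");
    round_robin();
    return 0;
}

Running it reproduces the table in part c: averages 9.6 (FCFS), 3.2 (SJF), 8.2 (Priority), and 5.4 (RR).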

Problem 4.9 (5pts)


Which one of the following scheduling algorithms could result in starvation?
1. FCFS
2. SJF
3. RR
4. Priority
SJF and Priority can both result in starvation. Under SJF, a long process
can starve if shorter processes keep arriving and are always scheduled
ahead of it. Similarly, under Priority scheduling, a low-priority process
can starve if higher-priority processes keep arriving and are always
scheduled first.
