--------------------------------------------------------------------------------

Definitions:
--------------------------------------------------------------------------------
Process - A program loaded into memory and executing is commonly
referred to as a process. A process has one or more
threads of execution.
--------------------------------------------------------------------------------
Chapter 1
--------------------------------------------------------------------------------
1.1: What are the two main purposes of an Operating system?
The two main purposes of an operating system are to provide an
environment in which a user can execute programs conveniently, and
to manage the computer's resources efficiently.
1.3: What is the main advantage of multiprogramming?
Multiprogramming increases CPU utilization by organizing jobs so that
the CPU always has one to execute.
1.6: Define the essential properties of the following systems.
a. Batch - jobs with similar needs are batched together and run
through the computer as a group, without user interaction.
b. Interactive - reduce response time so that users get a quick response
while interacting with the system.
c. Time Sharing - the CPU will execute multiple jobs by switching
among them, but the switches occur so frequently that the users
can interact with each program while it is running.
d. Real Time - used when rigid time requirements are placed on the
operation of a processor or the flow of data.
e. Network - this is simply a communication path between two or more
systems, used to share file systems or other resources across
the network.
f. Parallel - this system gathers multiple CPUs together to perform
computational work. This is just a way of saying multiprocessor
system.
g. Distributed - allow users to share resources across geographically
dispersed locations via a network.
h. Clustered - two or more individual systems that are coupled together
that make a resource or service redundant or highly-available. This
type of system also gathers multiple CPUs together to perform
computational work.
i. Handheld - these devices are usually PDAs or cell phones and have
constraints on CPU speed and power consumption.
1.7: We have stressed the need for operating systems to make efficient use of
computing hardware. When is it appropriate for operating systems to
forsake this principle and to "waste" resources? Why is such a system
not really wasteful?
It is perfectly normal for a hard real-time system to violate this
principle. Because events are time critical, any such event must be
taken care of at the expense of efficiency. This is not wasteful
because such behavior is the goal of a hard real-time system.
--------------------------------------------------------------------------------
Chapter 2
--------------------------------------------------------------------------------
2.2: How does the distinction between monitor mode and user mode function as
a rudimentary form of protection (security) system?
Monitor (privileged) mode is implemented in hardware as a bit that
indicates whether the CPU is executing operating-system code. In
user mode the bit is set to 1 and privileged operations cannot be
performed on certain hardware. By running the operating system in
monitor mode we can validate a user's request to access
hardware. That is, the user must request hardware access through the
operating system, since only the operating system can run in monitor
mode. Thus, in theory, a user cannot access hardware without the
operating system first validating the request.
2.5: Which of the following instructions should be privileged?
a. Set value of timer - privileged, otherwise a user program could
tamper with it and the OS would never regain control
b. Read the Clock - not privileged because user program can't do
anything harmful (unless crypto depends on it)
c. Clear Memory - privileged because user program shouldn't be able
to clear arbitrary memory
d. Turn off interrupts - privileged, reason is same as a
e. Switch from user to monitor mode - privileged, otherwise the kernel
is useless
2.6: Some computer systems do not provide a privileged mode of operation in
the hardware. Is it possible to construct a secure operating system for
these computers? Give arguments both that it is and that it is not.
(YES): because you can regulate all actions through the operating
system. That is, all requests for resources can be made to go through
the operating system (and usually are). Whatever you can do in
hardware you can do in software.
(NO): there is always a way to exploit a complex operating system, and
many have argued that there is no way to be 100% sure about the
security of a system against attackers. Indeed, our systems today have
hardware protection, and it would still be foolish to think that they
are secure.
Secure OS = the user program can't crash the OS. Secure OS != the OS
is never infected by a virus.
POSSIBLE: it is possible by building a virtual machine on top of the
machine and exporting a pseudo instruction set to the user program.
That is, we emulate a dual-mode CPU and check every pseudo-instruction.
Or we can require all programs to be written and compiled in a certain
programming language (e.g., Java or Mesa), and let the compiler and
loader do all the checking of privileged access.
IMPOSSIBLE: the trade-off of the above methods is a performance
penalty; checking every instruction can hurt performance dramatically.
*Note: a single-user OS still needs this protection to be secure,
because a program can still crash the OS without it (as in DOS); you
won't get more credit if you assume it can be done by limiting usage
to one user at a time.
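The pseudo-instruction-checking idea can be sketched in a few lines. This is a toy Python illustration with an invented instruction set (none of these names come from a real machine):

```python
# Toy sketch of software-enforced dual mode: a "virtual machine" inspects
# every pseudo-instruction before executing it. Instruction names invented.
PRIVILEGED = {"SET_TIMER", "CLEAR_MEMORY", "DISABLE_INTERRUPTS"}

def run(program, mode="user"):
    """Execute pseudo-instructions, rejecting privileged ones in user mode."""
    executed = []
    for instr in program:
        if instr in PRIVILEGED and mode != "monitor":
            raise PermissionError(instr + " attempted in user mode")
        executed.append(instr)  # a real VM would interpret the instruction here
    return executed
```

A user program may READ_CLOCK but not SET_TIMER; the per-instruction check is exactly the performance penalty the IMPOSSIBLE argument refers to.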
--------------------------------------------------------------------------------
Chapter 3
--------------------------------------------------------------------------------
3.1: What are the five major activities of an operating system with
regard to process management?
- creating and deleting both user and system processes
- suspending and resuming processes
- providing mechanisms for process synchronization
- providing mechanisms for process communication
- providing mechanisms for deadlock handling
3.2: What are the three major activities of an operating system with
regard to memory management?
- keeping track of which parts of memory are currently being used
and by whom
- deciding which processes are to be loaded into memory when memory
becomes available
- allocating and deallocating memory space as needed
3.3: What are the three major activities of an operating system with
regard to secondary-storage management?
- free space management
- storage allocation
- disk scheduling
3.4: What are the five major activities of an operating system with
regard to file management?
- creating and deleting files
- creating and deleting directories
- supporting primitives for manipulating files and directories
- mapping files onto secondary storage
- backing up files on stable (nonvolatile) storage media
3.6: List five services provided by an operating system. Explain how each
provides convenience to the users. In what cases would it be impossible
for user-level programs to provide these services? Explain.
1. program execution - the operating system schedules and runs programs
on behalf of the user. This service could not be
handled by the user because it requires access
to the hardware.
2. I/O operations - this makes it easy for users to access I/O streams,
so the user does not need to know the physical
layout of data in the machine. If no interface
were provided, users could not do this on their
own.
3. file-system manipulation - This means the user does not need to
worry about accessing and updating the
file system table. Such access is best
handled by the operating system because
of this complexity.
4. communications - in the case of memory mapping it is extremely
beneficial for the OS to handle access and control
of the shared memory regions. The user could not
set up such a shared mapping without the OS.
5. error detection - if an error occurs at one of the lower levels,
the user is notified so that they can take action
(if there is no memory left on the heap, for
instance). The user could not do this because such
errors are detected at the hardware and kernel
levels, which user programs cannot observe.
3.11: What is the main advantage to the layered approach to system design?
The main advantage to the layered approach is modularity.
3.12: What is the main advantage of the micro-kernel approach to system design?
Because the system is modular it is very easy to expand and extend the
OS. Security and reliability are also huge advantages since most
services are running as user, rather than kernel, processes.
--------------------------------------------------------------------------------
Chapter 4
--------------------------------------------------------------------------------
4.1: Palm OS provided no means of concurrent processing. Discuss three major
complications that concurrent processing adds to an operating system.
A method of time sharing must be implemented to allow each of
several processes to have access to the system. This method involves
the preemption of processes that do not voluntarily give up the CPU
(by using a system call, for instance) and the kernel being reentrant
(so more than one process may be executing kernel code concurrently).
Processes and system resources must be protected from each other.
Any given process must be limited in the
amount of memory it can use and the operations it can perform on
devices like disks. Care must be taken in the kernel to prevent
deadlocks between processes, so processes aren't waiting for each
other's allocated resources.
4.5: What are the benefits and detriments of each of the following?
Consider both the systems and the programmers' levels.
a. Symmetric and asymmetric communication
b. Automatic and explicit buffering
c. Send by copy and send by reference
d. Fixed-sized and variable-sized messages
a) Symmetric direct communication is a pain since both sides need the
name of the other process. This makes it hard to build a server.
b) Automatic makes programming easier but is a harder system to build.
c) Send by copy is better for network generalization and
synchronization issues. Send by reference is more efficient for big
data structures but harder to code because of the shared memory
implications.
d) Variable sized makes programming easier but is a harder system to
build.
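The trade-off in (c) can be demonstrated with a short Python sketch (the mailbox is just a list standing in for a message queue; this is an illustration, not any particular IPC API):

```python
import copy

def send_by_copy(mailbox, msg):
    # The receiver gets an independent snapshot of the message.
    mailbox.append(copy.deepcopy(msg))

def send_by_reference(mailbox, msg):
    # The receiver shares the sender's object, so later mutation is visible.
    mailbox.append(msg)

box_copy, box_ref = [], []
data = {"payload": [1, 2, 3]}
send_by_copy(box_copy, data)
send_by_reference(box_ref, data)
data["payload"].append(4)  # sender mutates the message after sending
```

After the mutation, box_copy[0] still holds [1, 2, 3] while box_ref[0] sees [1, 2, 3, 4] — the shared-memory implication mentioned above.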
--------------------------------------------------------------------------------
Chapter 5
--------------------------------------------------------------------------------
5.1: Provide two programming examples of multi-threading that improve
performance over a single-threaded solution.
Programs with high parallelism - multi-threaded kernels on
multiprocessors, parallel scientific computations, etc.
Programs that share many resources between different internal entities
- web browsers, web servers, database access, on-line multi-user games.
Programs that are easier to program/structure/debug in a multi-threaded
model - network servers, GUI systems.
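The performance benefit can be illustrated with a Python sketch in which four threads overlap blocking waits (sleep stands in for a blocking I/O request; note that in CPython the GIL means purely CPU-bound work would not speed up this way):

```python
import threading
import time

def blocking_task():
    time.sleep(0.2)  # stands in for a blocking I/O request

start = time.monotonic()
threads = [threading.Thread(target=blocking_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# Four overlapped 0.2 s waits complete in roughly 0.2 s rather than 0.8 s.
```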
5.2: Provide two programming examples of multi-threading that do not improve
performance over a single-threaded solution.
Programs that require sequential processing - shell programs, printer
drivers.
Simple programs - hello world, embedded programs running on simple
hardware/chips.
5.7: Assume an operating system maps user-level threads to the kernel using
the many-to-many model where the mapping is done through LWPs.
Furthermore, the system allows the developers to create real-time threads.
Is it necessary to bind a real-time thread to an LWP? Explain.
Yes. You shouldn't have a mix of general threads and a real-time
thread all bound to a single LWP: the general threads may make a
blocking system call, causing the LWP to wait and possibly miss a time
guarantee. Binding the real-time thread to its own LWP avoids this,
and the many-to-many model allows it since there can be many LWPs
associated with the process.
--------------------------------------------------------------------------------
Chapter 6
--------------------------------------------------------------------------------
6.2: Define the difference between preemptive and non-preemptive scheduling.
In non-preemptive scheduling, once a process has been allocated the
CPU, the process keeps the CPU until it releases the CPU either by
terminating or by switching to the waiting state.
In preemptive scheduling, the CPU is switched from the running process
to a process in the ready queue as a result of an interrupt or the end
of a time quantum.
6.3: Consider the following set of processes, with the length of the
CPU-burst time given in milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes are assumed to have arrived in the order P1, P2, P3,
P4, P5, all at time 0.
a. Draw four Gantt charts illustrating the execution of these
processes using FCFS, SJF, a non-preemptive priority (a
smaller priority number implies a higher priority), and RR
(quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the
scheduling algorithms in part a?
c. What is the waiting time of each process for each of the
scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal
average waiting time (over all processes)?
Answer:
a. The four Gantt charts are:
FCFS:     | P1 (0-10) | P2 (10-11) | P3 (11-13) | P4 (13-14) | P5 (14-19) |
SJF:      | P2 (0-1) | P4 (1-2) | P3 (2-4) | P5 (4-9) | P1 (9-19) |
Priority: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
RR (q=1): |P1|P2|P3|P4|P5|P1|P3|P5|P1|P5|P1|P5|P1|P5| P1 (14-19) |
(in the RR chart each slot is one time unit starting at t = 0;
P2 finishes at 2, P4 at 4, P3 at 7, P5 at 14, and P1 at 19)
b. Turnaround time
         FCFS   RR   SJF   Priority
    P1    10    19    19     16
    P2    11     2     1      1
    P3    13     7     4     18
    P4    14     4     2     19
    P5    19    14     9      6
c. Waiting time (turnaround time minus burst time)
         FCFS   RR   SJF   Priority
    P1     0     9     9      6
    P2    10     1     0      0
    P3    11     5     2     16
    P4    13     3     1     18
    P5    14     9     4      1
d. Shortest Job First (average waiting time 3.2 ms, versus 9.6 for
FCFS, 5.4 for RR, and 8.2 for priority scheduling).
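The tables above can be checked with a small scheduler sketch in Python (all processes arrive at time 0, as stated; this is a hand-rolled illustration, not a standard library):

```python
from collections import deque

def nonpreemptive(processes, key):
    """Run (name, burst, priority) tuples to completion in `key` order.
    All arrive at time 0. Returns {name: turnaround time}."""
    time, turnaround = 0, {}
    for name, burst, _ in sorted(processes, key=key):
        time += burst
        turnaround[name] = time
    return turnaround

def round_robin(processes, quantum=1):
    """Rotate through the ready queue, one quantum at a time."""
    queue = deque((name, burst) for name, burst, _ in processes)
    time, turnaround = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        time += run
        if left > run:
            queue.append((name, left - run))  # not finished: back of the queue
        else:
            turnaround[name] = time
    return turnaround

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3), ("P4", 1, 4), ("P5", 5, 2)]
fcfs = nonpreemptive(procs, key=lambda p: 0)     # stable sort keeps arrival order
sjf  = nonpreemptive(procs, key=lambda p: p[1])  # shortest burst first
prio = nonpreemptive(procs, key=lambda p: p[2])  # smaller number = higher priority
rr   = round_robin(procs)

bursts = {n: b for n, b, _ in procs}
avg_wait_sjf = sum(t - bursts[n] for n, t in sjf.items()) / len(procs)  # 3.2 ms
```

The turnaround dictionaries reproduce the table in part b, and subtracting each burst gives the waiting times in part c.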
6.4: Suppose that the following processes arrive for execution at the
times indicated. Each process will run the listed amount of
time. In answering the questions, use non-preemptive scheduling and
base all decisions on the information you have at the time the
decision must be made.
Process Arrival Time Burst Time
P1 0.0 8
P2 0.4 4
P3 1.0 1
a. What is the average turnaround time for these processes with
the FCFS scheduling algorithm?
b. What is the average turnaround time for these processes with
the SJF scheduling algorithm?
c. The SJF algorithm is supposed to improve performance, but
notice that we chose to run process P1 at time 0 because we
did not know that two shorter processes would arrive
soon. Compute what the average turnaround time will be if the
CPU is left idle for the first 1 unit and then SJF scheduling
is used. Remember that processes P1 and P2 are waiting during
this idle time, so their waiting time may increase. This
algorithm could be known as future-knowledge scheduling.
Answer:
a. 10.53
b. 9.53
c. 6.86
Remember that turnaround time is finishing time minus arrival time,
so you have to subtract the arrival times to compute the
turnaround times. FCFS gives 11 if you forget to subtract arrival
time.
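The three averages can be verified with a few lines of Python (the finish times are read off the schedules described in the problem):

```python
def avg_turnaround(finishes, arrivals):
    """Turnaround = finish time minus arrival time, averaged over processes."""
    return sum(finishes[n] - arrivals[n] for n in finishes) / len(finishes)

arrivals = {"P1": 0.0, "P2": 0.4, "P3": 1.0}
fcfs   = {"P1": 8, "P2": 12, "P3": 13}  # run in arrival order
sjf    = {"P1": 8, "P3": 9, "P2": 13}   # P1 is the only job at t=0, then shortest
future = {"P3": 2, "P2": 6, "P1": 14}   # idle until t=1, then pure SJF
```

These give 31.6/3 ≈ 10.53, 28.6/3 ≈ 9.53, and 20.6/3 ≈ 6.87 (the 6.86 above is the truncated value).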
6.10: Explain the differences in the degree to which the following
scheduling algorithms discriminate in favor of short processes:
a. FCFS
b. RR
c. Multilevel feedback queues
Answer:
a. FCFS--discriminates against short jobs since any short jobs
arriving after long jobs will have a longer waiting time.
b. RR--treats all jobs equally (giving them equal bursts of CPU
time) so short jobs will be able to leave the system faster
since they will finish first.
c. Multilevel feedback queues--work similarly to the RR
algorithm--they discriminate favorably toward short jobs.
--------------------------------------------------------------------------------
Chapter 7
--------------------------------------------------------------------------------
7.1: What is the meaning of the term busy waiting? What other kinds of
waiting are there in an operating system? Can busy waiting be
avoided altogether? Explain your answer.
Busy waiting is waiting without giving up the CPU. A preferred
type of waiting is to block on a wait queue of non-running processes.
The techniques in the book still involve some busy waiting in the
critical sections of the semaphore implementation. It might be
possible to avoid busy waiting by changing the critical-section code
of 7.2 and 7.3 so that it blocks processes.
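The two kinds of waiting can be contrasted with Python threads (an illustration only; in an OS the blocking wait would place the process on a kernel wait queue):

```python
import threading
import time

done = {"flag": False}
event = threading.Event()

def busy_waiter():
    while not done["flag"]:  # busy waiting: burns CPU re-checking the flag
        pass

def blocking_waiter():
    event.wait()  # blocks without using the CPU until event.set() is called

b = threading.Thread(target=busy_waiter)
s = threading.Thread(target=blocking_waiter)
b.start()
s.start()
time.sleep(0.1)   # let both threads wait for a moment
done["flag"] = True
event.set()
b.join()
s.join()
```

Both threads eventually proceed, but only the busy waiter consumed CPU the whole time it was waiting.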
7.7: The wait() statement in all Java program examples was part of a while
loop. Explain why you would always need to use a while statement when
using wait() and why you would never use an if statement.
You do this to guard against spurious wake-ups. That is, the caller of
wait() has no guarantee that, once it is awakened, the condition it
went to sleep for will actually hold. A while loop makes sure the
caller checks the condition again and decides whether it should go
back onto the wait queue. With just an if statement, the awakened
code would proceed even when the condition did not hold.
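The same discipline applies to any condition variable. A minimal Python sketch (threading.Condition standing in for the Java monitor in the examples):

```python
import threading

cond = threading.Condition()
items = []
results = []

def consumer():
    with cond:
        # while, not if: on wake-up, re-check the condition, since the
        # wake-up may be spurious or another thread may have taken the item.
        while not items:
            cond.wait()
        results.append(items.pop())

def producer():
    with cond:
        items.append("job")
        cond.notify()

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```

If the while were an if, a spurious wake-up could let the consumer pop from an empty list.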
