Short Questions:
1. What is ISR?
Ans:
An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in
an operating system or device driver whose execution is triggered by the reception of an interrupt.
The term Interrupt is usually reserved for hardware interrupts. They are program control interruptions
caused by external hardware events. Here, external means external to the CPU. Hardware interrupts
usually come from many different sources, such as the timer chip and peripheral devices (keyboard,
mouse, etc.).
A Trap can be identified as a transfer of control, which is initiated by the programmer. The term Trap is
used interchangeably with the term Exception (which is an automatically occurring software interrupt).
But some may argue that a trap is simply a special subroutine call. So they fall into the category of
software-invoked interrupts.
A computer cluster consists of a set of loosely or tightly connected computers that work together so
that, in many respects, they can be viewed as a single system. Unlike grid computers,
computer clusters have each node set to perform the same task, controlled and scheduled by software.
A process is an executing instance of a program. For example, when you double-click the Notepad icon on
your computer, a process is started that runs the Notepad program.
A process is sometimes referred to as an active entity, as it resides in primary memory and leaves the
memory if the system is rebooted.
A thread is the smallest executable unit of a process. For example, when you run a Notepad program, the
operating system creates a process and starts executing the main thread of that process.
A process can have multiple threads. Each thread has its own task and its own path of execution within
the process. For example, in a Notepad program, one thread may take user input while another thread
prints a document.
When a fork() system call is issued, a copy of all the pages corresponding to the parent process is
created and loaded into a separate memory location by the OS for the child process. But this is not needed
in certain cases. Consider the case when a child executes an "exec" system call or exits very soon after
the fork(). When the child is needed just to execute a command for the parent process, there is no need
for copying the parent process' pages, since exec replaces the address space of the process which
invoked it with the command to be executed.
In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs,
the parent process's pages are not copied for the child process. Instead, the pages are shared between
the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy
of that particular page alone is made for that process (parent or child) which performed the
modification. This process will then use the newly copied page rather than the shared one in all future
references. The other process (the one which did not modify the shared page) continues to use the
original copy of the page (which is now no longer shared). This technique is called copy-on-write since
the page is copied when some process writes to it.
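The page separation that copy-on-write preserves can be observed from user space. Below is a minimal sketch for a Unix-like system (os.fork is Unix-only); the variable name x is purely illustrative:

```python
import os

x = 3  # lives in pages shared between parent and child after fork()

pid = os.fork()          # Unix-only: clone the process; pages are shared, not copied
if pid == 0:
    x = 99               # the child's write triggers a private copy of the page
    os._exit(0)
else:
    os.waitpid(pid, 0)   # wait for the child to finish
    print(x)             # prints 3: the child's write never touched the parent's page
```

The same experiment run with threads instead of fork would print 99, since threads share one address space.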
Preemptive multitasking differs from non-preemptive multitasking in that the operating system can
take control of the processor without the task’s cooperation. (A task can also give it up voluntarily, as in
non-preemptive multitasking.) The process of a task having control taken from it is called preemption.
Windows NT uses preemptive multitasking for all processes except 16-bit Windows 3.1 programs. As a
result, a Windows NT application cannot take over the processor in the same way that a Windows 3.1
application can.
7. Difference between user-level threads and kernel-level threads.
- User-level threads are not recognized by the OS; kernel-level threads are recognized by the OS.
- If one user-level thread performs a blocking operation, the entire process is blocked; if one
kernel-level thread performs a blocking operation, another thread can continue execution.
Various criteria or characteristics that help in designing a good scheduling algorithm are:
1. CPU Utilization − A scheduling algorithm should be designed so that the CPU remains as busy as
possible. It should make efficient use of the CPU.
2. Throughput − Throughput is the amount of work completed in a unit of time; in other words, it is
the number of processes completed per unit of time. The scheduling algorithm must aim to
maximize the number of jobs processed per time unit.
3. Response time − Response time is the time taken to start responding to the request. A
scheduler must aim to minimize response time for interactive users.
4. Turnaround time − Turnaround time refers to the time between the moment of submission of a
job/ process and the time of its completion. Thus how long it takes to execute a process is also
an important factor.
5. Waiting time − It is the time a job waits for resource allocation when several jobs are
competing in multiprogramming system. The aim is to minimize the waiting time.
6. Fairness − A good scheduler should make sure that each process gets its fair share of the CPU.
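These criteria can be made concrete with a small worked example. The sketch below computes turnaround and waiting times for a first-come-first-served schedule with hypothetical burst times, all jobs arriving at time 0:

```python
bursts = [24, 3, 3]   # hypothetical CPU burst times, in ms, in arrival order

completion = []
t = 0
for b in bursts:      # FCFS: run each job to completion in arrival order
    t += b
    completion.append(t)

turnaround = completion[:]                                # arrival time is 0 for all jobs
waiting = [ta - b for ta, b in zip(turnaround, bursts)]   # waiting = turnaround - burst

print(turnaround)                   # [24, 27, 30]
print(sum(waiting) / len(waiting))  # average waiting time: 17.0
```

Running the short jobs first instead would cut the average waiting time sharply, which is exactly why a scheduler optimizes these metrics.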
For example, suppose that two processes are trying to increment the same variable. They both contain
the line
x := x + 1
in them. One way for each process to execute this statement is for it to read the variable, then add one
to the value, then write it back. Suppose the value of x was 3. If both processes read x at the same time,
they would get the same value 3. If they then both added 1 to it, they would both have the value 4. They
would then both write 4 back to x. The result is that both processes incremented x, but its value is only
4, instead of 5.
For these processes to execute properly, they must ensure that only one of them is executing the
statement at a time.
A set of statements that can have only one process executing it at a time is a critical section. Another
way of saying this is that processes need mutually exclusive access to the critical section.
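In practice, mutually exclusive access to a statement like x := x + 1 is obtained by wrapping the critical section in a lock. A minimal sketch using Python threads (the iteration and thread counts are arbitrary):

```python
import threading

x = 0
lock = threading.Lock()

def increment(n):
    global x
    for _ in range(n):
        with lock:        # critical section: only one thread runs x = x + 1 at a time
            x = x + 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(x)  # 200000: no increments are lost; without the lock, some could be
```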
It is a state where two or more operations are waiting for each other: say, a computing action 'A' is
waiting for action 'B' to complete, while action 'B' can only execute when 'A' is completed. Such a
situation is called a deadlock. In operating systems, a deadlock arises when resources required to
complete a computing task are held by another task that is itself waiting to execute. The system thus
goes into an indefinite wait, resulting in a deadlock.
Deadlock is a common issue in multiprocessor systems and in parallel and distributed computing setups.
Necessary conditions for deadlock. Mutual Exclusion: at least one resource is held in a non-sharable
mode; that is, only one process at a time can use the resource. If another process requests that resource,
the requesting process must be delayed until the resource has been released.
Dynamic loading means loading the library (or any other binary for that matter) into the memory
during load or run-time.
Dynamic loading can be imagined to be similar to plugins; that is, an executable can actually start
running before the dynamic loading happens (dynamic loading can be performed, for example, using the
LoadLibrary call in C or C++ on Windows).
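A quick way to see dynamic loading from a high-level language is Python's ctypes, which calls dlopen on POSIX systems and LoadLibrary on Windows under the hood. The sketch below assumes a standard C library can be found on the system; the "libc.so.6" fallback is an assumption that holds on common Linux setups:

```python
import ctypes
import ctypes.util

# Locate and load the C runtime at run time (dlopen / LoadLibrary, done by ctypes).
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Bind a symbol that was resolved only after the library was loaded, then call it.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
print(libc.abs(-7))  # 7
```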
The work of the MMU can be divided into three major categories:
Hardware memory management, which oversees and regulates the processor's use
of RAM (random access memory) and cache memory.
OS (operating system) memory management, which ensures the availability of adequate memory
resources for the objects and data structures of each running program at all times.
Application memory management, which allocates each individual program's required memory,
and then recycles freed-up memory space when the operation concludes.
Ans:
Malware, Computer virus, Rogue security software, Trojan horse, Malicious spyware, Computer
worm, Botnet, Spam, Phishing, Rootkit
The number of frames is equal to the size of memory divided by the page size, so an increase
in page size means a decrease in the number of available frames.
Having fewer frames increases the number of page faults because of the reduced freedom in
replacement choice.
Large pages also waste space through internal fragmentation.
On the other hand, a larger page size brings in more memory per fault, so the number of
faults may decrease if there is limited contention.
Larger pages also reduce the number of TLB misses.
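The frame-count relationship above is simple arithmetic; a sketch with hypothetical sizes:

```python
memory = 1 << 20   # 1 MiB of physical memory (hypothetical)

frames_for = {}
for page_size in (1 << 10, 1 << 12):              # compare 1 KiB and 4 KiB pages
    frames_for[page_size] = memory // page_size   # frames = memory / page size

print(frames_for)   # {1024: 1024, 4096: 256}: larger pages leave fewer frames
```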
Subjective.
Q3.
What is a Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps
track of which instruction to execute next, system registers which hold its current working variables,
and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, the data segment and open
files. When one thread alters a code segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. They represent a software approach to improving operating system
performance by reducing overhead; in other respects a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing network
servers and web server. They also provide a suitable foundation for parallel execution of applications
on shared memory multiprocessors. The following figure shows the working of a single-threaded and a
multithreaded process.
2. Process switching needs interaction with the operating system; thread switching does not need to
interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory
and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked;
while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer
resources.
6. In multiple processes, each process operates independently of the others; one thread can read,
write or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways −
Kernel Level Threads − threads managed by the operating system, acting on the kernel, the
operating system core.
The Kernel maintains context information for the process as a whole and for individual threads within
the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation,
scheduling and management in Kernel space. Kernel threads are generally slower to create and manage
than the user threads.
Advantages
The Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode switch
to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility;
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system call
need not block the entire process. There are three multithreading models.
Many to Many Model
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of
kernel-level threads. In this model, developers can create as many user threads as necessary, and the
corresponding Kernel threads can run in parallel on a multiprocessor machine. This model provides
good concurrency: when a thread performs a blocking system call, the kernel can schedule another
thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one Kernel-level thread. Thread management is
done in user space by the thread library. When a thread makes a blocking system call, the entire process
is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run
in parallel on multiprocessors.
If the operating system does not support kernel threads, user-level thread libraries are implemented in
this way, giving a many-to-one relationship.
One to One Model
The one-to-one model maps each user-level thread to a Kernel-level thread. The disadvantage of this
model is that creating a user thread requires creating the corresponding Kernel thread. OS/2,
Windows NT and Windows 2000 use the one-to-one relationship model.
b. What is the difference between symmetric and asymmetric multiprocessing?
There are two types of multiprocessing: Symmetric Multiprocessing and Asymmetric Multiprocessing.
A multiprocessing system has more than one processor, and the processors can execute multiple
processes simultaneously. In Symmetric Multiprocessing, the processors share the same memory. In
Asymmetric Multiprocessing, there is one master processor that controls the data structures of the
system. The primary difference is that in Symmetric Multiprocessing all the processors run operating
system tasks, whereas in Asymmetric Multiprocessing only the master processor runs operating
system tasks.
Comparison of Symmetric and Asymmetric Multiprocessing:
Basic: In SMP, each processor runs the tasks of the operating system; in AMP, only the master
processor runs operating system tasks.
Process: In SMP, a processor takes processes from a common ready queue, or each processor may have
a private ready queue; in AMP, the master processor assigns processes to the slave processors, or
they have some predefined processes.
Architecture: In SMP, all processors have the same architecture.
Communication: In SMP, all processors communicate with one another via shared memory; in AMP,
processors need not communicate, as they are controlled by the master processor.
Failure: In SMP, if a processor fails, the computing capacity of the system is reduced, and the design
is complex as all the processors need to be synchronized to maintain load balance; in AMP, if the
master processor fails, a slave is turned into the master, and the design is simpler as only the
master processor accesses the system data structures.
1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, or whatever.
2 Process privileges
3 Process ID
4 Pointer
5 Program Counter
6 CPU registers
Various CPU registers where the process state is stored for execution in the running state.
8 Memory management information
This includes the page table information, memory limits and segment table, depending on the
memory scheme used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.
10 IO status information
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
b). Describe the actions taken by a kernel to context switch between kernel-level
threads.
Ans. Context switching between kernel threads typically requires saving the value of the CPU registers from
the thread being switched out and restoring the CPU registers of the new thread being scheduled.
Q.5. (a)
What is the critical section problem? Explain the three requirements of a solution
to the critical section problem.
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section
at a given point of time.
2. Progress
If no process is in its critical section, and one or more processes want to execute their
critical sections, then one of them must be allowed to enter its critical
section.
3. Bounded Waiting
After a process makes a request to enter its critical section, there is a limit on
how many other processes can enter their critical sections before this process's
request is granted. Once the limit is reached, the system must grant the process
permission to enter its critical section.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments
of different sizes, one for each module that contains pieces that perform related functions. Each
segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-contiguous
memory, though every segment is loaded into a contiguous block of available memory.
Segmentation memory management works much like paging, but segments are of variable length,
whereas in paging the pages are of fixed size.
A program segment contains the program's main function, utility functions, data structures, and so on.
The operating system maintains a segment map table for every process and a list of free memory
blocks along with segment numbers, their size and corresponding memory locations in main memory.
For each segment, the table stores the starting address of the segment and the length of the segment.
A reference to a memory location includes a value that identifies a segment and an offset.
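Address translation under segmentation can be sketched directly from this description. The table contents below are made-up illustrative values, not from any real system:

```python
# Hypothetical segment map table: segment number -> (base address, limit).
segment_table = {
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 400),
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:               # the offset must fall within the segment's length
        raise MemoryError("segmentation violation: offset out of range")
    return base + offset              # physical address = segment base + offset

print(translate(2, 53))   # 4353
```

An offset at or beyond the segment's limit raises an error, which is the software analogue of a segmentation fault.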
2016.
Short Questions:
Ans. A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data.
Ans. Symmetric multiprocessing (SMP) is a computing architecture in which two or more processors
are attached to a single memory and operating system (OS) instance. SMP combines multiple processors
to complete a process with the help of a host OS, which manages processor allocation, execution and
management.
In symmetric (or "tightly coupled") multiprocessing, the processors share memory and the I/O bus or
data path. A single copy of the operating system is in charge of all the processors. SMP, also known as a
"shared everything" system, does not usually exceed 16 processors.
Ans. Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to
coordinate activities among different program processes that can run concurrently in an operating
system.
Advantages.
1. This allows a program to handle many user requests at the same time
2. Since even a single user request may result in multiple processes running in the operating
system on the user's behalf, the processes need to communicate with each other. The IPC
interfaces make this possible.
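One of the simplest IPC interfaces is a pipe. A minimal Unix-only sketch (os.pipe plus os.fork) in which a child process sends a message that the parent receives:

```python
import os

r, w = os.pipe()              # a one-way channel: a read end and a write end

pid = os.fork()               # Unix-only: create the child process
if pid == 0:
    os.close(r)                             # the child only writes
    os.write(w, b"hello from the child")
    os._exit(0)
else:
    os.close(w)                             # the parent only reads
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)
    print(msg.decode())       # hello from the child
```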
5. What is the difference between process contention scope and
system contention scope?
Ans. The System Contention Scope is one of two thread-scheduling schemes used in operating
systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto
a CPU, wherein all threads (as opposed to only user-level threads, as in the Process Contention
Scope scheme) in the system compete for the CPU. Operating systems that use only the one-to-
one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention
Scope.
Process Contention Scope is one of the two basic ways of scheduling threads. Both of them
being: process local scheduling (known as Process Contention Scope, or Unbound Threads—
the Many-to-Many model) and system global scheduling (known as System Contention
Scope, or Bound Threads—the One-to-One model). These scheduling classes are known as
the scheduling contention scope, and are defined only in POSIX. Process contention scope
scheduling means that all of the scheduling mechanism for the thread is local to the
process—the thread's library has full control over which thread will be scheduled on
an LWP. This also implies the use of either the Many- to-One or Many-to-Many model.
Ans. Throughput
The number of processes that complete their execution per time unit.
Turnaround time
The total time between submission of a process and its completion.
Waiting time
The total time a process spends waiting in the ready queue.
Response time
The amount of time from when a request was submitted until the first response is
produced, not the complete output (for a time-sharing environment).
Ans. To overcome the problem of underutilization of the CPU and main memory, multiprogramming
was introduced. Multiprogramming is the interleaved execution of multiple jobs by the same computer.
In multiprogramming system, when one program is waiting for I/O transfer; there is another program
ready to utilize the CPU. So it is possible for several jobs to share the time of the CPU. But it is important
to note that multiprogramming is not defined to be the execution of jobs at the same instance of time.
Rather it does mean that there are a number of jobs available to the CPU (placed in main memory) and a
portion of one is executed then a segment of another and so on.
8. What is dispatcher?
Ans. The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler (which selects from among the processes that are ready to execute).
The function involves :
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program.
Ans. Conceptually, a semaphore is a nonnegative integer count. Semaphores are typically used to
coordinate access to resources, with the semaphore count initialized to the number of free resources.
Threads then atomically increment the count when resources are added and atomically decrement the
count when resources are removed.
When the semaphore count becomes zero, indicating that no more resources are present, threads trying
to decrement the semaphore block until the count becomes greater than zero.
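That counting behaviour is easy to demonstrate. The sketch below initializes a semaphore to three free resources and checks that no more than three threads ever hold one at a time; the thread and resource counts are arbitrary:

```python
import threading

sem = threading.Semaphore(3)   # count initialized to the number of free resources

in_use = 0
peak = 0
guard = threading.Lock()       # protects the two counters above

def use_resource():
    global in_use, peak
    with sem:                  # atomically decrement the count; blocks while it is zero
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        with guard:
            in_use -= 1
                               # leaving the with-block increments the count again

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)   # never exceeds 3, however the ten threads interleave
```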
Ans. Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces.
It sometimes happens that these free blocks cannot be allocated to processes because of their
small size, so the memory blocks remain unused. This problem is known as fragmentation.
1 External fragmentation
2 Internal fragmentation
In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
Thread management is handled by the thread library in user space, which is efficient in nature.
Ans. The term dynamically linked means that the program and the particular library it references
are not combined together by the linker at link time. Instead, the linker places information into the
executable that tells the loader which shared object module the code is in and which runtime linker
should be used to find and bind the references. This means that the binding between the program and
the shared object is done at runtime — before the program starts, the appropriate shared objects are
found and bound.
16.What is thrashing?
Ans. With a computer, thrashing or disk thrashing describes a hard drive being overworked by
moving information between system memory and virtual memory excessively. Thrashing occurs
when the system does not have enough memory, the system swap file is not properly configured, too
much is running at the same time, or the system has low resources.
When thrashing occurs you will notice the hard drive constantly working and a decrease in system
performance. Thrashing is hard on the drive because of the amount of work it has to do, and if left
unfixed it can cause an early hard drive failure.
Ans:
Subjective Part.
Ans.
Dining Philosophers Problem
The dining philosophers problem is another classic synchronization problem which is used
to evaluate situations where there is a need of allocating multiple resources to multiple
processes.
Problem Statement:
Consider five philosophers sitting around a circular dining table. The dining table
has five chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat,
he uses two chopsticks: one from his left and one from his right. When a philosopher
wants to think, he puts both chopsticks down at their original places.
Solution:
From the problem statement, it is clear that a philosopher can think for an indefinite amount
of time. But when a philosopher starts eating, he has to stop at some point of time. The
philosopher is in an endless cycle of thinking and eating.
The solution uses an array of five semaphores, stick[5], one for each of the five chopsticks.
The code for each philosopher looks like:
while(TRUE) {
    wait(stick[i]);              /* pick up left chopstick  */
    wait(stick[(i+1) % 5]);      /* pick up right chopstick */
    /* eat */
    signal(stick[i]);            /* put down left chopstick  */
    signal(stick[(i+1) % 5]);    /* put down right chopstick */
    /* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left and picks
up that chopstick. Then he waits for the right chopstick to be available, and then picks it too.
After eating, he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one
chopstick, then a deadlock occurs, because each will be waiting forever for another
chopstick. The possible solutions are:
A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all the four philosophers pick
up four chopsticks, there will be one chopstick left on the table. So, one philosopher can
start eating and eventually, two chopsticks will be available. In this way, deadlocks can
be avoided.
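A third standard fix, closely related to these, is to impose a global order on the chopsticks: every philosopher picks up the lower-numbered chopstick first, so a circular wait can never form. A sketch using Python locks as chopsticks (the round count is arbitrary):

```python
import threading

N = 5
stick = [threading.Lock() for _ in range(N)]   # one lock per chopstick
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # Break the circular wait: always take the lower-numbered chopstick first.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with stick[first]:
            with stick[second]:
                meals[i] += 1   # eat
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)   # every philosopher finished all rounds: no deadlock occurred
```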
The earliest computer operating systems ran only one program at a time. All of the resources of the
system were available to this one program. Later, operating systems ran multiple programs at once,
interleaving them. Programs were required to specify in advance what resources they needed so that
they could avoid conflicts with other programs running at the same time. Eventually some operating
systems offered dynamic allocation of resources. Programs could request further allocations of
resources after they had begun running. This led to the problem of deadlock. The simplest example:
two programs each hold a resource that the other needs, and neither can proceed.
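That two-program scenario can be sketched with two threads and two locks. The events force the classic interleaving (each task holds one resource, then requests the other), and the timeouts let the program terminate instead of hanging; all names here are illustrative:

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()        # the two resources
r1_held, r2_held = threading.Event(), threading.Event()
barrier = threading.Barrier(2)    # keep both locks held until both attempts finish
outcome = {}

def task_a():
    r1.acquire(); r1_held.set()   # hold resource 1 ...
    r2_held.wait()                # ... and wait until B surely holds resource 2
    outcome["a"] = r2.acquire(timeout=0.2)   # request resource 2: circular wait
    barrier.wait()
    r1.release()

def task_b():
    r2.acquire(); r2_held.set()
    r1_held.wait()
    outcome["b"] = r1.acquire(timeout=0.2)
    barrier.wait()
    r2.release()

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start(); ta.join(); tb.join()

print(outcome)   # both acquisitions fail: neither request can ever be granted
```

Without the timeouts, both acquire calls would block forever, which is the deadlock itself.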
Learning to deal with deadlocks had a major impact on the development of operating systems and the
structure of databases. Data was structured and the order of requests was constrained in order to avoid
creating deadlocks.
Necessary Conditions for Deadlock
Mutual Exclusion.
Hold and Wait.
No preemption.
Circular wait.
Deadlock Prevention
We can prevent deadlock by eliminating any of the above four conditions.
Eliminate No Preemption
Preempt resources from a process when those resources are required by another, higher-priority process.
Directory Structure
There are many types of directory structure in Operating System. They are as follows :-
a) Since all files are in the same directory, they must have unique names.
b) If two users name their data file test, the unique-name rule is violated.
d) Even a single user may find it difficult to remember the names of all the files as the number of files
increases.
ii) When a user job starts or a user logs in, the system's Master File Directory (MFD) is searched. The
MFD is indexed by user name or account number.
iii) When user refers to a particular file, only his own UFD is searched.
Thus different users may have files with the same name. To name a particular file uniquely in a two-level
directory, we must give both the user name and the file name.
Its direct descendants are the User File Directories (UFDs). The descendants of the UFDs are the files
themselves.
A directory (or subdirectory) contains a set of files or subdirectories. All
directories have the same internal format.
iii) Each process has a current directory. The current directory should contain most of the files that
are of current interest to the process.
iv) If a file is not in the current directory, then the user usually must either specify a path
name or change the current directory.
Paths can be of two types :-
a) Absolute Path
Begins at root and follows a path down to the specified file.
b) Relative Path
Defines a path from current directory.
vii) Deletion: if a directory is empty, its entry in the directory that contains it can simply be deleted.
If it is not empty, one of two approaches can be taken :-
a) The user must delete all the files in the directory.
An acyclic graph is a graph with no cycles. It allows directories to share subdirectories and files. With
a shared file, only one actual file exists, so any changes made by one person are immediately visible
to the other.
i) To create a link.
Deletion of the link does not affect the original file; only the link is removed.