
Session 2013–2015.

Short Questions:

1. What is an ISR?

Ans:

An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt.

Interrupt Service Routine

For every interrupt, there must be an interrupt service routine (ISR), or interrupt handler. When an interrupt occurs, the microcontroller runs the corresponding interrupt service routine. For every interrupt, there is a fixed location in memory that holds the address of its ISR. The table of memory locations set aside to hold the addresses of ISRs is called the Interrupt Vector Table.
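As an illustration, here is a minimal C sketch of the vector-table idea: an array of function pointers indexed by interrupt number, with a dispatcher standing in for the hardware. All names (timer_isr, uart_isr, dispatch_interrupt) are illustrative, not from any particular microcontroller.

#include <stdio.h>

/* An ISR is just a function the hardware (or a dispatcher) jumps to. */
typedef void (*isr_t)(void);

/* Illustrative ISRs for two interrupt sources. */
static void timer_isr(void) { printf("timer tick handled\n"); }
static void uart_isr(void)  { printf("serial byte handled\n"); }

/* The interrupt vector table: a fixed array indexed by interrupt number,
 * each slot holding the address of that interrupt's service routine. */
static isr_t vector_table[] = { timer_isr, uart_isr };

/* What the hardware conceptually does when interrupt n arrives. */
static void dispatch_interrupt(int n) {
    vector_table[n]();   /* look up the ISR's address and call it */
}

int main(void) {
    dispatch_interrupt(0);   /* simulate a timer interrupt */
    dispatch_interrupt(1);   /* simulate a UART interrupt */
    return 0;
}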

  • 2. Difference between Trap and Interrupt.

The term Interrupt is usually reserved for hardware interrupts. They are program control interruptions caused by external hardware events. Here, external means external to the CPU. Hardware interrupts usually come from many different sources such as timer chip, peripheral devices (keyboards, mouse, etc.).

A Trap can be identified as a transfer of control initiated by the programmer. The term Trap is used interchangeably with the term Exception (an automatically occurring software interrupt), though some argue that a trap is simply a special subroutine call. Traps therefore fall into the category of software-invoked interrupts.

  • 3. What is clustered system?

A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

  • 4. What do you know about processes and programs?

Program is an executable file containing the set of instructions written to perform a specific job on your computer. For example, notepad.exe is an executable file containing the set of instructions which help us to edit and print the text files.

A program is sometimes referred to as a passive entity, as it resides on secondary memory.

Process is an executing instance of a program. For example, when you double click on a notepad icon on your computer, a process is started that will run the notepad program.

A process is sometimes referred to as an active entity, as it resides in primary memory and leaves the memory if the system is rebooted.

Thread is the smallest executable unit of a process. For example, when you run a notepad program, operating system creates a process and starts the execution of main thread of that process.

A process can have multiple threads. Each thread has its own task and its own path of execution within the process. For example, in a notepad program, one thread may take user input while another prints a document.

  • 5. How do the parent and child share the address space in fork()?

When a fork() system call is issued, a copy of all the pages corresponding to the parent process is created and loaded into a separate memory location by the OS for the child process. But this is not needed in certain cases. Consider the case when the child executes an "exec" system call or exits very soon after the fork(). When the child is needed just to execute a command for the parent process, there is no need for copying the parent process's pages, since exec replaces the address space of the process which invoked it with the command to be executed.

In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process's pages are not copied for the child process. Instead, the pages are shared between the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for that process (parent or child) which performed the modification. This process will then use the newly copied page rather than the shared one in all future references. The other process (the one which did not modify the shared page) continues to use the original copy of the page (which is now no longer shared). This technique is called copy-on-write since the page is copied when some process writes to it.
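A minimal POSIX sketch of this behavior, assuming a Unix-like system: after fork(), parent and child start from the same pages, but the child's write triggers a private copy, so the increment is invisible to the parent.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 3;                 /* the page holding x is shared copy-on-write after fork */
    pid_t pid = fork();

    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {
        x = x + 1;             /* first write: the kernel copies the page for the child */
        printf("child:  x = %d\n", x);   /* prints 4 */
        exit(0);
    }

    wait(NULL);                /* parent waits for the child to finish */
    printf("parent: x = %d\n", x);       /* still 3: the parent's copy is unchanged */
    return 0;
}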

6. Difference between preemptive and non-preemptive multitasking.

What Is Multitasking?

Multitasking is the ability of a computer to run more than one program, or task, at the same time. Multitasking contrasts with single-tasking, where one process must entirely finish before another can begin. MS-DOS is primarily a single-tasking environment, while Windows 3.1 and Windows NT are both multitasking environments.

Review: Preemptive and Non-Preemptive Multitasking

Within the category of multitasking, there are two major sub-categories: preemptive and non-preemptive (or cooperative). In non-preemptive multitasking, use of the processor is never taken from a task; rather, a task must voluntarily yield control of the processor before any other task can run. Windows 3.1 uses non-preemptive multitasking for Windows applications. Programs running under a non-preemptive operating system must be specially written to cooperate in multitasking by yielding control of the processor at frequent intervals. Programs that do not yield often enough cause the system to stay "locked" in that program until it does yield. An example of failed non-preemptive multitasking is the inability to do anything else while printing a document in Microsoft Word for Windows 2.0a; this happens because Word does not give up control of the processor often enough while printing. The worst case of a program not yielding is when a program crashes. Programs which crash in Windows 3.1 can crash the whole system simply because no other task can run until the crashed program yields.

Preemptive multitasking differs from non-preemptive multitasking in that the operating system can take control of the processor without the task's cooperation. (A task can also give it up voluntarily, as in non-preemptive multitasking.) The process of a task having control taken from it is called preemption. Windows NT uses preemptive multitasking for all processes except 16-bit Windows 3.1 programs. As a result, a Windows NT application cannot take over the processor in the way a Windows 3.1 application can.

  • 7. Difference between User level threads and kernel level threads.

USER LEVEL THREAD

KERNEL LEVEL THREAD

User thread are implemented by users.

kernel threads are implemented by OS.

OS doesn’t recognized user level threads.

Kernel threads are recognized by OS.

Implementation of User threads is easy.

Implementation of Kernel thread is complicated.

Context switch time is less.

Context switch time is more.

Context switch requires no hardware support.

Hardware support is needed.

If one user level thread perform blocking operation then entire process will be blocked.

If one kernel thread perform blocking operation then another thread can continue execution.

Example : Java thread, POSIX threads.

Example : Window Solaris.

8. What are multilevel feedback queues?

Multilevel feedback Queue scheduling

It is an enhancement of multilevel queue scheduling in which processes can move between the queues. In this approach, the ready queue is partitioned into multiple queues of different priorities, and the system assigns processes to queues based on their CPU burst characteristics. If a process consumes too much CPU time, it is moved to a lower-priority queue. The scheme favors I/O-bound jobs in order to get good input/output device utilization. A technique called aging promotes lower-priority processes to the next higher-priority queue after a suitable interval of time.

9. What is scheduling? What criteria affect the scheduler's performance?/what are the good scheduling criteria?

Scheduling can be defined as a set of policies and mechanisms which controls the order in which the work to be done is completed. The scheduling program which is a system software concerned with scheduling is called the scheduler and the algorithm it uses is called the scheduling algorithm.

Various criteria or characteristics that help in designing a good scheduling algorithm are:

  • 1. CPU Utilization − A scheduling algorithm should be designed to keep the CPU as busy as possible; it should make efficient use of the CPU.

  • 2. Throughput − Throughput is the amount of work completed in a unit of time; in other words, it is the number of jobs completed per unit of time. The scheduling algorithm should maximize the number of jobs processed per time unit.

  • 3. Response time − Response time is the time taken to start responding to a request. A scheduler must aim to minimize response time for interactive users.

  • 4. Turnaround time − Turnaround time refers to the time between the moment of submission of a job/process and the time of its completion. How long it takes to execute a process is thus also an important factor.

  • 5. Waiting time − Waiting time is the time a job waits for resource allocation when several jobs are competing in a multiprogramming system. The aim is to minimize waiting time.

  • 6. Fairness − A good scheduler should make sure that each process gets its fair share of the CPU.

  • 10. What is a critical section?

Whenever two processes/threads are reading and writing the same variables in a language with Shared State Concurrency, it is possible that one process will interfere with the other -- a Race Condition.

For example, suppose that both processes are trying to increment the same variable. They both have the line

x := x + 1

in them. One way for each process to execute this statement is for it to read the variable, then add one to the value, then write it back. Suppose the value of x was 3. If both processes read x at the same time, they would get the same value 3. If they then both added 1 to it, they would both have the value 4. They would then both write 4 back to x. The result is that both processes incremented x, but its value is only 4, instead of 5.

For these processes to execute properly, they must ensure that only one of them is executing the statement at a time.

A set of statements that can have only one process executing it at a time is a critical section. Another way of saying this is that processes need mutually exclusive access to the critical section.
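A hedged C sketch of enforcing this with a POSIX mutex: the read-modify-write on x becomes a critical section, so with the lock the final value is always 5, whereas without it the result could be 4 as described above. (Compile with -pthread; names mirror the example.)

#include <pthread.h>
#include <stdio.h>

static int x = 3;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);     /* enter the critical section */
    x = x + 1;                     /* read-modify-write is now mutually exclusive */
    pthread_mutex_unlock(&lock);   /* leave the critical section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %d\n", x);         /* always 5 with the lock; could be 4 without it */
    return 0;
}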

  • 11. What is a deadlock?

It is a state where two or more operations are waiting for each other: say a computing action 'A' is waiting for action 'B' to complete, while action 'B' can only execute when 'A' is completed. Such a situation is called a deadlock. In operating systems, a deadlock arises when computer resources required for completion of a computing task are held by another task that is itself waiting to execute. The system thus goes into an indefinite wait, resulting in a deadlock.

The deadlock in operating system seems to be a common issue in multiprocessor systems, parallel and distributed computing setups.

  • 12. What is hold and wait in OS?

Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.

Hold and wait is one of the necessary conditions for deadlock. Another is mutual exclusion: at least one resource is held in a non-sharable mode, that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

  • 13. What is Dynamic loading?

Dynamic loading means loading a library (or any other binary, for that matter) into memory during load time or run time. Dynamic loading can be imagined as similar to plugins: an exe can actually begin executing before the dynamic loading happens. (Dynamic loading can, for example, be performed using the LoadLibrary call in C or C++.)
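On POSIX systems the counterpart of LoadLibrary is dlopen()/dlsym(). A minimal sketch, assuming a Linux system where the math library is named libm.so.6 (link with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the math library at run time, not at link time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the address of cos() inside the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine) printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);   /* unload the library when no longer needed */
    return 0;
}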

14. What is memory management unit?

A memory management unit (MMU) is a computer hardware component that handles all memory and caching operations associated with the processor. In other words, the MMU is responsible for all aspects of memory management. It is usually integrated into the processor, although in some systems it occupies a separate IC (integrated circuit) chip.

The work of the MMU can be divided into three major categories:

Hardware memory management, which oversees and regulates the processor's use of RAM (random access memory) and cache memory.

OS (operating system) memory management, which ensures the availability of adequate memory resources for the objects and data structures of each running program at all times.

Application memory management, which allocates each individual program's required memory, and then recycles freed-up memory space when the operation concludes.

  • 15. List three system security threats.

Ans:

Malware, computer viruses, rogue security software, Trojan horses, malicious spyware, computer worms, botnets, spam, phishing, rootkits.

  • 16. What effects does increasing the page size have?

The number of frames equals the size of memory divided by the page size, so an increase in page size means a decrease in the number of available frames. Having fewer frames increases the number of page faults because of the lower freedom in replacement choice. Large pages also waste space through internal fragmentation.

On the other hand, a larger page size means more memory is brought in per fault, so the number of faults may decrease if there is limited contention. Larger pages also reduce the number of TLB misses.

Subjective.

Q3.

A: Explain the Multithreading models.

What is Thread?

A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.

A thread shares with its peer threads few information like code segment, data segment and open files. When one thread alters a code segment memory item, all other threads see that.

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism, and represent a software approach to improving operating system performance by reducing the overhead of full processes; in other respects, a thread behaves like a classical process.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.


Difference between Process and Thread

1. A process is heavyweight and resource intensive; a thread is lightweight, taking fewer resources than a process.

2. Process switching needs interaction with the operating system; thread switching does not.

3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.

4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.

5. Multiple processes without threads use more resources; multithreaded processes use fewer resources.

6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

Advantages of Thread

Threads minimize the context switching time.

Use of threads provides concurrency within a process.

Efficient communication.

It is more economical to create and context switch threads.

Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread

Threads are implemented in following two ways −

User Level Threads − User managed threads.

Kernel Level Threads − Threads managed by the operating system, acting on the kernel (the operating system core).

User Level Threads

In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing message and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.

Advantages  Thread switching does not require Kernel mode privileges.  User level thread can run

Advantages

Thread switching does not require Kernel mode privileges.

User level thread can run on any operating system.

Scheduling can be application specific in the user level thread.

User level threads are fast to create and manage.

Disadvantages

In a typical operating system, most system calls are blocking, so one blocking call stalls the whole process.

A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages

Kernel can simultaneously schedule multiple threads from the same process on multiple processors.

If one thread in a process is blocked, the Kernel can schedule another thread of the same process.

Kernel routines themselves can be multithreaded.

Disadvantages

Kernel threads are generally slower to create and manage than the user threads.

Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

Many to many relationship.

Many to one relationship.

One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Advantages  Kernel can simultaneously schedule multiple threads from the same process on multiple processes. 

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread library is implemented on an operating system whose kernel does not support threads, the many-to-one model is used.


One to One Model

There is a one-to-one relationship of each user-level thread to a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
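On a system whose thread library follows the one-to-one model (for example, Linux with NPTL), each pthread_create() call produces one kernel-schedulable thread. A minimal sketch (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    /* Under a one-to-one library, each of these runs as its own
     * kernel-level thread, so the threads can block or run in
     * parallel independently of each other. */
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}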


b. What is the difference between symmetric and asymmetric multiprocessing?


There are two types of multiprocessing: Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (AMP). A multiprocessing system has more than one processor, and the processors can execute multiple processes simultaneously. In symmetric multiprocessing, the processors share the same memory. In asymmetric multiprocessing, there is one master processor that controls the data structures of the system. The primary difference is that in symmetric multiprocessing all the processors run operating-system tasks, whereas in asymmetric multiprocessing only the master processor does.

Difference Between Symmetric and Asymmetric Multiprocessing


Basic: In SMP, each processor runs the tasks of the operating system. In AMP, only the master processor runs the tasks of the operating system.

Process: In SMP, processors take processes from a common ready queue, or each processor may have its own private ready queue. In AMP, the master processor assigns processes to the slave processors, or they have some predefined processes.

Architecture: In SMP, all processors have the same architecture. In AMP, processors may have the same or different architectures.

Communication: In SMP, all processors communicate with one another through shared memory. In AMP, processors need not communicate, as they are controlled by the master processor.

Failure: In SMP, if a processor fails, the computing capacity of the system is reduced. In AMP, if the master processor fails, a slave is turned into the master to continue execution; if a slave processor fails, its task is switched to other processors.

Ease: SMP is complex, as all the processors need to be synchronized to maintain load balance. AMP is simple, as the master processor accesses the data structure.

Q: 04: (a) Process Control Block (PCB)?

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed below −

1. Process State − The current state of the process, i.e., whether it is ready, running, or waiting.

2. Process Privileges − Required to allow or disallow access to system resources.

3. Process ID − Unique identification for each process in the operating system.

4. Pointer − A pointer to the parent process.

5. Program Counter − A pointer to the address of the next instruction to be executed for this process.

6. CPU Registers − The various CPU registers whose contents must be saved for the process to resume execution in the running state.

7. CPU Scheduling Information − Process priority and other scheduling information required to schedule the process.

8. Memory Management Information − Page tables, memory limits, and segment tables, depending on the memory system used by the operating system.

9. Accounting Information − The amount of CPU time used for process execution, time limits, execution ID, etc.

10. I/O Status Information − A list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
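A hedged sketch of how a PCB might be declared in C. The field names and sizes are purely illustrative; a real kernel structure (such as Linux's task_struct) holds far more fields.

/* Illustrative PCB layout, mirroring the table above. */
enum proc_state { READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                   /* unique process ID */
    enum proc_state state;     /* current process state */
    struct pcb *parent;        /* pointer to the parent process */
    unsigned long pc;          /* saved program counter */
    unsigned long regs[16];    /* saved CPU registers */
    int priority;              /* CPU-scheduling information */
    void *page_table;          /* memory-management information */
    long cpu_time_used;        /* accounting information */
    int open_files[16];        /* I/O status: descriptors of allocated devices */
};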

b). Describe the actions taken by a kernel to context switch between kernel-level threads.

Ans. Context switching between kernel threads typically requires saving the values of the CPU registers of the thread being switched out and restoring the CPU registers of the new thread being scheduled.

Q.5. (a)

What are critical section problems? Explain three requirements of critical section problems.

Critical Section Problem

A Critical Section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point of time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes.


Solution to Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions :

  • 1. Mutual Exclusion − Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.

  • 2. Progress − If no process is in its critical section, and one or more processes want to execute their critical sections, then one of them must be allowed to enter its critical section.

  • 3. Bounded Waiting − After a process makes a request to enter its critical section, there is a limit on how many other processes can enter their critical sections before this process's request is granted. Once the limit is reached, the system must grant the process permission to enter its critical section.

(b) What is Segmentation?

Segmentation

Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions. Each segment is actually a different logical address space of the program.

When a process is to be executed, its corresponding segments are loaded into non-contiguous memory, though every segment is loaded into a contiguous block of available memory.

Segmentation memory management works very similarly to paging, but here segments are of variable length, whereas in paging pages are of fixed size.

A program segment contains the program's main function, utility functions, data structures, and so on. The operating system maintains a segment map table for every process and a list of free memory blocks along with segment numbers, their size and corresponding memory locations in main memory. For each segment, the table stores the starting address of the segment and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.
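A minimal C sketch of the translation just described: a segment table of (base, limit) pairs, with each offset checked against the segment's limit before being added to its base. The table values are illustrative only.

#include <stdio.h>

/* One segment-table entry: where the segment starts and how long it is. */
struct segment { unsigned base, limit; };

/* Illustrative segment table for one process. */
static struct segment seg_table[] = {
    { 1400, 1000 },   /* segment 0: main program    */
    { 6300,  400 },   /* segment 1: utility code    */
    { 4300, 1100 },   /* segment 2: data structures */
};

/* Translate (segment, offset) into a physical address, trapping on overflow. */
static int translate(unsigned s, unsigned offset, unsigned *phys) {
    if (offset >= seg_table[s].limit)
        return -1;                        /* addressing error: trap to the OS */
    *phys = seg_table[s].base + offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(2, 53, &phys) == 0)
        printf("segment 2, offset 53 -> physical %u\n", phys);  /* 4353 */
    if (translate(1, 500, &phys) != 0)
        printf("segment 1, offset 500 -> trap (beyond limit)\n");
    return 0;
}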


2016.

Short Questions:

  • 1. What is a process? What information does the operating system generally need to keep about running processes in order to execute them?

Ans. A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data.

  • 2. What is symmetric multiprocessing?

Ans. Symmetric multiprocessing (SMP) is a computing architecture in which two or more processors are attached to a single memory and operating system (OS) instance. SMP combines multiple processors to complete a process with the help of a host OS, which manages processor allocation, execution and management.

In symmetric (or "tightly coupled") multiprocessing, the processors share memory and the I/O bus or data path. A single copy of the operating system is in charge of all the processors. SMP, also known as a "shared everything" system, does not usually exceed 16 processors.

  • 3. Which of the following scheduling algorithms are non-preemptive?

Ans.

First-Come, First-Served (FCFS) Scheduling − non-preemptive.

Shortest-Job-Next (SJN) Scheduling − non-preemptive.

Priority Scheduling − non-preemptive.

Round Robin (RR) Scheduling − preemptive.

  • 4. What are the advantages of interprocess communication?

Ans. Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system.

Advantages.

  • 1. This allows a program to handle many user requests at the same time.

  • 2. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible.

5. What is the difference between process contention scope and system contention scope?

Ans. System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU; all threads in the system (as opposed to only the user-level threads of one process, as in the Process Contention Scope scheme) compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope.

Process Contention Scope is one of the two basic ways of scheduling threads, the two being process-local scheduling (known as Process Contention Scope, or Unbound Threads; the Many-to-Many model) and system-global scheduling (known as System Contention Scope, or Bound Threads; the One-to-One model). These scheduling classes are known as the scheduling contention scope and are defined only in POSIX. Process contention scope scheduling means that all of the scheduling mechanism for the thread is local to the process; the thread library has full control over which thread will be scheduled on an LWP. This also implies the use of either the Many-to-One or Many-to-Many model.

  • 6. Define throughput and response time.

Ans.

Throughput − the number of processes that complete their execution per time unit.

Turnaround time − the amount of time to execute a particular process.

Waiting time − the amount of time a process has been waiting in the ready queue.

Response time − the amount of time from when a request was submitted until the first response is produced, not the final output (for a time-sharing environment).
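A small C sketch that computes waiting and turnaround time for three jobs served first-come, first-served; the burst times are assumptions chosen for illustration.

#include <stdio.h>

int main(void) {
    /* Assumed CPU burst times (ms) for three jobs served FCFS,
     * all arriving at time 0. */
    int burst[] = { 24, 3, 3 };
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = clock;            /* time spent in the ready queue  */
        int turnaround = clock + burst[i]; /* submission to completion       */
        printf("P%d: waiting=%2d  turnaround=%2d\n", i + 1, waiting, turnaround);
        clock += burst[i];
    }
    /* Throughput here is 3 jobs / 30 ms; response time under FCFS equals
     * waiting time, since a job first responds when it first runs. */
    return 0;
}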

  • 7. What is meant by multiprogramming?

Ans. Multiprogramming was introduced to overcome the problem of underutilization of the CPU and main memory. Multiprogramming is the interleaved execution of multiple jobs by the same computer.

In a multiprogramming system, when one program is waiting for an I/O transfer, another program is ready to utilize the CPU, so it is possible for several jobs to share the CPU's time. It is important to note that multiprogramming does not mean the execution of jobs at the same instant of time; rather, it means that a number of jobs are available to the CPU (placed in main memory), and a portion of one is executed, then a segment of another, and so on.

  • 8. What is a dispatcher?

Ans. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler (which selects from among the processes that are ready to execute). The function involves:

Switching context.
Switching to user mode.
Jumping to the proper location in the user program to restart that program.

  • 9. What is a spin lock?

Ans. Spin locks are kernel-defined, kernel-mode-only synchronization mechanisms, exported as an opaque type. A spin lock can be used to protect shared data or resources from simultaneous access by routines that can execute concurrently.

  • 10. What are counting semaphores?

Ans. Conceptually, a semaphore is a nonnegative integer count. Semaphores are typically used to coordinate access to resources, with the semaphore count initialized to the number of free resources. Threads then atomically increment the count when resources are added and atomically decrement the count when resources are removed.

When the semaphore count becomes zero, indicating that no more resources are present, threads trying to decrement the semaphore block until the count becomes greater than zero.
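A minimal POSIX sketch of a counting semaphore guarding a pool of three resources; at most three of the five threads hold a resource at once. The pool size and thread count are assumptions (compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_RESOURCES 3

static sem_t pool;   /* count starts at the number of free resources */

static void *worker(void *arg) {
    sem_wait(&pool);                 /* decrement: acquire a resource, block at 0 */
    printf("thread %ld using a resource\n", (long)arg);
    sem_post(&pool);                 /* increment: release the resource */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, NUM_RESOURCES);   /* initialize count to the free resources */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}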

  • 11. What are the different types of fragmentation?

Ans. Fragmentation

As processes are loaded and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to these memory blocks because of the blocks' small size, so the blocks remain unused. This problem is known as fragmentation.

Fragmentation is of two types −

1. External fragmentation − Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation − The memory block assigned to a process is bigger than requested. Some portion of the memory is left unused, as it cannot be used by another process.

12. What is a safe state?

A state is safe if the system can allocate all resources requested by all processes (up to their stated maximums) without entering a deadlock state. All safe states are deadlock-free, but not all unsafe states lead to deadlocks.

13. What are the advantages of the many-to-many model in threading?

Ans. The many-to-many model multiplexes many user-level threads onto an equal or smaller number of kernel threads. Developers can create as many user threads as necessary, the corresponding kernel threads can run in parallel on a multiprocessor, and when one thread performs a blocking system call the kernel can schedule another thread for execution.

14. What is dynamic linking?

Ans. The term dynamically linked means that the program and the particular library it references are not combined together by the linker at link time. Instead, the linker places information into the executable that tells the loader which shared object module the code is in and which runtime linker should be used to find and bind the references. The binding between the program and the shared object is thus done at runtime: before the program starts, the appropriate shared objects are found and bound.

15. What is thrashing?

Ans. Thrashing (or disk thrashing) describes a hard drive being overworked by moving information between system memory and virtual memory excessively. Thrashing occurs when the system does not have enough memory, the system swap file is not properly configured, too many programs are running at the same time, or the system has low resources. When thrashing occurs, you will notice the hard drive constantly working and a decrease in system performance. Thrashing is hard on the drive because of the amount of work it has to do, and if left unfixed it can cause an early hard drive failure.

16. What problems arise if the wait() primitive is not followed by signal(), and vice versa, i.e., if wait() is missing before signal()?

Ans: If wait() is not followed by signal(), the semaphore is never released, so processes that later wait on it block forever, leading to deadlock or starvation. If signal() is executed without a preceding wait(), the semaphore count is incremented without a resource ever having been acquired, so mutual exclusion is violated and several processes may enter their critical sections simultaneously.

Subjective Part.

Q.3. (a). Write down the solution of the dining philosophers problem using semaphores.

Ans.

Dining Philosophers Problem

The dining philosophers problem is another classic synchronization problem which is used to evaluate situations where there is a need of allocating multiple resources to multiple processes.

Problem Statement:

Consider five philosophers sitting around a circular dining table. The dining table has five chopsticks and a bowl of rice in the middle.


At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original places.

Solution:

From the problem statement, it is clear that a philosopher can think for an indefinite amount of time. But when a philosopher starts eating, he has to stop at some point of time. The philosopher is in an endless cycle of thinking and eating.

An array of five semaphores, stick[5], is used, one for each of the five chopsticks. The code for philosopher i looks like:

while (true) {
    wait(stick[i]);               /* pick up left chopstick  */
    wait(stick[(i+1) % 5]);       /* pick up right chopstick */
    /* eat */
    signal(stick[i]);             /* put down left chopstick  */
    signal(stick[(i+1) % 5]);     /* put down right chopstick */
    /* think */
}

When a philosopher wants to eat the rice, he will wait for the chopstick at his left and picks up that chopstick. Then he waits for the right chopstick to be available, and then picks it too. After eating, he puts both the chopsticks down.

But if all five philosophers are hungry simultaneously, and each of them pickup one chopstick, then a deadlock situation occurs because they will be waiting for another chopstick forever. The possible solutions for this are:

A philosopher must be allowed to pick up the chopsticks only if both the left and right chopsticks are available.

Allow only four philosophers to sit at the table. That way, if all four philosophers pick up four chopsticks, there will be one chopstick left on the table, so one philosopher can start eating, and eventually two chopsticks will become available. In this way, deadlocks can be avoided.

(b) Write down the solution of the producer-consumer problem using semaphores.
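The notes give no answer here, so the following is a hedged sketch of the classic bounded-buffer solution with POSIX semaphores: empty counts free slots, full counts filled slots, and a binary semaphore mutex protects the buffer itself. The buffer size and item count are assumptions (compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                     /* assumed buffer size */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty;             /* counts empty slots, starts at N   */
static sem_t full;              /* counts filled slots, starts at 0  */
static sem_t mutex;             /* binary semaphore guarding buffer  */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);               /* wait for an empty slot */
        sem_wait(&mutex);               /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);               /* leave critical section */
        sem_post(&full);                /* one more filled slot   */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);               /* one more empty slot    */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}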

Q. 4. What is deadlock? How can deadlock be prevented?

A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.

The earliest computer operating systems ran only one program at a time. All of the resources of the system were available to this one program. Later, operating systems ran multiple programs at once, interleaving them. Programs were required to specify in advance what resources they needed so that they could avoid conflicts with other programs running at the same time. Eventually some operating systems offered dynamic allocation of resources. Programs could request further allocations of resources after they had begun running. This led to the problem of the deadlock. Here is the simplest example:

Program 1 requests resource A and receives it.

Program 2 requests resource B and receives it.

Program 1 requests resource B and is queued up, pending the release of B.

Program 2 requests resource A and is queued up, pending the release of A.

Now neither program can proceed until the other program releases a resource. The operating system cannot know what action to take. At this point the only alternative is to abort (stop) one of the programs.

Learning to deal with deadlocks had a major impact on the development of operating systems and the structure of databases. Data was structured and the order of requests was constrained in order to avoid creating deadlocks.

Deadlock Prevention

Deadlock can be prevented by eliminating any of the four necessary conditions:

Mutual Exclusion.
Hold and Wait.
No Preemption.
Circular Wait.

Eliminate Mutual Exclusion

It is not possible to violate mutual exclusion, because some resources, such as the tape drive and printer, are inherently non-shareable.

Eliminate Hold and Wait

  • 1. Allocate all required resources to the process before the start of its execution. This eliminates the hold-and-wait condition but leads to low device utilization; for example, if a process requires a printer only at a later time and the printer is allocated before the start of execution, the printer remains blocked until the process has completed.

  • 2. The process makes a new request for resources only after releasing its current set of resources. This solution may lead to starvation.


Eliminate No Preemption

Preempt resources from a process when the resources are required by another, higher-priority process.

Eliminate Circular Wait

Each resource is assigned a numerical ordering, and a process can request resources only in increasing order of that numbering. For example, if process P1 has been allocated resource R5, then a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
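As a concrete illustration of circular-wait elimination, a hedged C sketch in which every thread acquires two mutexes, standing in for numbered resources, in the same increasing order, so no cycle of waiting can form (compile with -pthread).

#include <pthread.h>

/* Two resources protected by mutexes, numbered R1 < R2. */
static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

/* Every thread acquires in increasing order: R1 before R2.
 * A cycle in the wait-for graph is then impossible. */
static void *task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&R1);   /* lower-numbered resource first */
    pthread_mutex_lock(&R2);   /* then the higher-numbered one  */
    /* ... use both resources ... */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task, NULL);
    pthread_create(&b, NULL, task, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}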

Q.5: Explain the different directory structures.

Directory Structure

There are many types of directory structure in Operating System. They are as follows :-

1) Single Level Directory
2) Two Level Directory
3) Tree Structured Directory
4) Acyclic Graph Directory
5) General Graph Directory

1) Single Level Directory

In Single Level Directory all files are in the same directory.

Limitations of Single Level Directory

  • a) Since all files are in the same directory, they must have unique names.

  • b) If two users call their data file test, then the unique-name rule is violated.

  • c) File names are limited in length.

  • d) Even a single user may find it difficult to remember the names of all files as the number of files increases.

  • e) Keeping track of so many files is a daunting task.

2) Two Level Directory

  • i) Each user has its own User File Directory (UFD).

ii) When a user job starts or a user logs in, the system's Master File Directory (MFD) is searched. The MFD is indexed by user name or account number.

iii) When user refers to a particular file, only his own UFD is searched.

Thus different users may have files with the same name. To name a particular file uniquely in a two-level directory, we must give both the user name and the file name.

The root of a tree is Master File Directory (MFD).

Its direct descendants are the User File Directories (UFDs). The descendants of the UFDs are the files themselves.

The files are the leaves of the tree.

Limitations of Two Level Directory

The structure effectively isolates one user from another.

3) Tree Structured Directory

A directory (or subdirectory) contains a set of files or subdirectories. All directories have the same internal format.

  • i) One bit in each directory entry defines the entry as a file (0) or as a subdirectory (1).

ii) Special calls are used to create and delete directories.

iii) Each process has a current directory. The current directory should contain most of the files that are of current interest to the process.

iv) When a reference is made to a file, the current directory is searched.

  • v) The user can change his current directory whenever he desires.

vi) If a file is not in the current directory, then the user usually must either specify a path name or change the current directory. Paths can be of two types :-

  • a) Absolute Path

Begins at root and follows a path down to the specified file.

  • b) Relative Path

Defines a path from current directory.

vii) Deletion − If a directory is empty, its entry in the directory that contains it can simply be deleted. If it is not empty, one of two approaches can be taken :- a) The user must first delete all the files in the directory.

b) If any subdirectories exist, the same procedure is applied to them recursively. The UNIX rm command takes this approach. MS-DOS will not delete a directory unless it is empty.

4) Acyclic Graph Directory

An acyclic graph is a graph with no cycles. It allows directories to share subdirectories and files. With a shared file, only one actual file exists, so any changes made by one person are immediately visible to the other.

Implementation of Shared Files and Directories

i) To create a link:

. A link is effectively a pointer to another file or subdirectory.

. An alternative to linking is to duplicate all the information about the shared file in both sharing directories.

ii) Deleting a link:

. Deletion of the link does not affect the original file; only the link is removed.

. When the file itself is deleted, one option is to preserve the file until all references to it are deleted.