
Compiled By Kamal Kumar Pathak, SEMC

Question 1:State the Readers & Writers problem and write its semaphore based solution. Also describe the algorithm. Can the producer/consumer problem be considered a special case of the Reader/Writer problem, with a single Writer (the producer) and a single Reader (the consumer)?

Answer:- The readers/writers problem is one of the classic synchronization problems. Like the dining philosophers, it is often used to compare and contrast synchronization mechanisms, and it is also an eminently practical problem. A common paradigm in concurrent applications is the isolation of shared data, such as a variable, buffer or document, and the control of access to that data. This problem has two types of clients accessing the shared data. The first type, referred to as readers, only wants to read the shared data. The second type, referred to as writers, may want to modify the shared data. There is also a designated central data server or controller. It enforces exclusive write semantics: if a writer is active, then no other writer or reader can be active. The server can support clients that wish to both read and write.

The readers & writers problem is useful for modelling processes which are competing for a limited shared resource. Let us understand it with the help of a practical example. An airline reservation system consists of a large database with many processes that read and write the data. Reading information from the database will not cause a problem, since no data is changed. The problem lies in writing information to the database: if no constraints are put on access to the database, data may change at any moment. By the time a reading process displays the result of a request for information to the user, the actual data in the database may have changed. What if, for instance, a process reads the number of available seats on a flight, finds a value of one, and reports it to the customer? Before the customer has a chance to make the reservation, another process makes a reservation for another customer, changing the number of available seats to zero.

Solution using semaphores:- Semaphores can be used to restrict access to the database under certain conditions. In this example, semaphores are used to prevent any writing process from changing information in the database while other processes are reading from it.

Algorithm:-

semaphore mutex = 1;      // controls access to reader_count
semaphore db = 1;         // controls access to the database
int reader_count = 0;     // the number of reading processes accessing the data

void reader(void)
{
    while (TRUE) {                        // loop forever
        down(&mutex);                     // gain access to reader_count
        reader_count = reader_count + 1;  // increment reader_count
        if (reader_count == 1)
            down(&db);                    // if this is the first process to read the
                                          // database, a down on db is executed to
                                          // prevent access by any writing process
        up(&mutex);                       // allow other processes to access reader_count
        read_db();                        // read the database
        down(&mutex);                     // gain access to reader_count
        reader_count = reader_count - 1;  // decrement reader_count
        if (reader_count == 0)
            up(&db);                      // if there are no more processes reading from
                                          // the database, allow writing processes in
        up(&mutex);                       // allow other processes to access reader_count
        use_data();                       // use the data read from the database (non-critical)
    }
}

void writer(void)
{
    while (TRUE) {                        // loop forever
        create_data();                    // create data to enter into the database (non-critical)
        down(&db);                        // gain exclusive access to the database
        write_db();                       // write information to the database
        up(&db);                          // release exclusive access to the database
    }
}

The producer/consumer problem can be considered a special case of the Reader/Writer problem with a single writer (the producer) and a single reader (the consumer). If we look at the problem closely, we see that the Producer/Consumer problem is structurally similar to the Reader/Writer problem, and we can treat it as a special case of it.



The Producer/Consumer problem (bounded buffer) has two processes that share a common, fixed-size buffer. One of them, the producer, puts information into the buffer, and the other one, the consumer, takes it out. (It is also possible to generalize the problem to have m producers and n consumers, but we will only consider the case of one producer and one consumer, because this assumption simplifies the solution.) Trouble arises when the producer wants to put a new item in the buffer but it is already full. The solution is for the producer to go to sleep, to be awakened when the consumer has removed one or more items. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something in the buffer and wakes it up.

To keep track of the number of items in the buffer, we need a variable count. If the maximum number of items the buffer can hold is N, the producer's code will first test to see whether count is N. If it is, the producer will go to sleep; if it is not, the producer will add an item and increment count. The consumer's code is similar: first test count to see if it is 0. If it is, go to sleep; if it is nonzero, remove an item and decrement count. Each of the processes also tests to see if the other should be awakened and, if so, wakes it up.

Code of Producer/Consumer Problem:

#define N 100                         // number of slots in the buffer
int count = 0;                        // number of items in the buffer

void producer(void)
{
    int item;
    while (TRUE) {                    // repeat forever
        item = produce_item();        // generate the next item
        if (count == N)
            sleep();                  // if buffer is full, go to sleep
        insert_item(item);            // put item in buffer
        count = count + 1;            // increment count of items in buffer
        if (count == 1)
            wakeup(consumer);         // was buffer empty?
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {                    // repeat forever
        if (count == 0)
            sleep();                  // if buffer is empty, go to sleep
        item = remove_item();         // take item out of buffer
        count = count - 1;            // decrement count of items in buffer
        if (count == N - 1)
            wakeup(producer);         // was buffer full?
        consume_item(item);           // consume the item
    }
}
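As written, this sleep/wakeup solution has a well-known race: the test of count and the subsequent sleep() are not atomic, so a wakeup sent to a process that has not yet gone to sleep is lost, and both processes may sleep forever. A minimal semaphore-based sketch that avoids the lost-wakeup problem, assuming the same down/up primitives as the reader/writer solution above:

semaphore mutex = 1;              // guards the buffer itself
semaphore empty = N;              // counts empty buffer slots
semaphore full = 0;               // counts filled buffer slots

void producer(void)
{
    int item;
    while (TRUE) {
        item = produce_item();    // generate the next item
        down(&empty);             // wait for an empty slot
        down(&mutex);             // enter the critical region
        insert_item(item);        // put item in buffer
        up(&mutex);               // leave the critical region
        up(&full);                // one more filled slot
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        down(&full);              // wait for a filled slot
        down(&mutex);             // enter the critical region
        item = remove_item();     // take item out of buffer
        up(&mutex);               // leave the critical region
        up(&empty);               // one more empty slot
        consume_item(item);       // consume the item
    }
}

With one producer and one consumer, this structure directly supports the view of the problem as a one-writer, one-reader special case.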

********************

Question 2:



Describe different schemes for deadlock prevention, avoidance and detection, with the major advantages and disadvantages of each scheme.

Answer:- Deadlock:- A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. In other words, each member of a set of deadlocked processes is waiting for a resource that can be released only by another deadlocked process.

Deadlock prevention:- Deadlock can be prevented by denying any one of the following conditions, the approach used by Havender's algorithm.

Havender's Algorithm

Elimination of the Mutual Exclusion condition:- Mutual exclusion must hold for non-shareable resources; that is, several processes cannot simultaneously share a single resource. This condition is difficult to eliminate because some resources, such as the tape drive and printer, are inherently non-shareable. Shareable resources, like a read-only file, do not require mutually exclusive access and thus cannot be involved in a deadlock.

Elimination of the Hold and Wait condition:- There are two possibilities for eliminating this condition. The first is that a process be granted all the resources it needs at once, prior to execution. The second is to disallow a process from requesting resources whenever it already holds allocated resources. If the complete set of resources needed by a process is not currently available, then the process must wait until the complete set is available; while the process waits, however, it may not hold any resources. Thus the hold-and-wait condition is denied and deadlocks simply cannot occur. This strategy, however, leads to a serious waste of resources.

Elimination of the No-Preemption condition:- A system that allows processes to hold resources while requesting additional resources can deadlock: one process may hold a resource a second process needs in order to proceed, while the second process holds a resource needed by the first. Preemption requires that when a process holding some resources is denied a request for additional resources, it must release the resources it holds and, if necessary, request them again together with the additional ones. Implementing this strategy effectively denies the no-preemption condition.

Elimination of the Circular Wait condition:-



The circular wait condition can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request resources in that order (increasing or decreasing). This requires that each process requests resources in increasing order of enumeration; with this rule, the resource allocation graph cannot have a cycle.

Deadlock avoidance:- Deadlock avoidance anticipates a deadlock before it actually occurs. This approach employs an algorithm to assess the possibility that a deadlock could occur, and acts accordingly. The most famous deadlock avoidance algorithm, from Dijkstra [1965], is the Banker's algorithm, so named because the process is analogous to the way a banker decides whether a loan can safely be made. Even if the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. The Banker's algorithm avoids deadlock as follows:

1. Each process declares the maximum number of resources of each type that it may need.
2. Keep the system in a safe state, one in which resources can be allocated to each process in some order without deadlock.
3. Check for a safe state by finding a safe sequence <p1, p2, ..., pn> in which the resources that pi needs can be satisfied by the available resources plus those held by every pj with j < i.
4. The resource allocation graph algorithm uses claim edges to check for a safe state.

Deadlock Detection and Recovery

Deadlock detection:- Detection of deadlock is the most practical policy and, being both liberal and cost-efficient, is the one most operating systems deploy. To detect a deadlock, we proceed in a recursive manner, simulating the most favoured execution of each unblocked process:

1. An unblocked process may acquire all the resources it needs, execute, then release all of its resources and remain dormant thereafter.
2. The newly released resources may wake up some previously blocked processes.
3. Continue the above steps as long as possible. If any blocked processes remain, they are deadlocked.
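The safe-state test at the heart of both the Banker's algorithm and this detection procedure can be sketched as follows. This is a minimal, single-resource-type illustration; the names NPROC, available, max_need and allocation are assumptions, not from the original text:

#define NPROC 5

// Returns 1 if the state is safe (a safe sequence exists), 0 otherwise.
int is_safe(int available, int max_need[NPROC], int allocation[NPROC])
{
    int work = available;            // resources currently free
    int finish[NPROC] = {0};         // processes that have been able to run
    int progress = 1;

    while (progress) {
        progress = 0;
        for (int i = 0; i < NPROC; i++) {
            // Simulate the most favoured execution: pi can finish if its
            // remaining need fits within what is currently free.
            if (!finish[i] && max_need[i] - allocation[i] <= work) {
                work += allocation[i];   // pi runs, then releases everything
                finish[i] = 1;
                progress = 1;
            }
        }
    }
    for (int i = 0; i < NPROC; i++)
        if (!finish[i])
            return 0;                // blocked processes remain: unsafe / deadlocked
    return 1;
}

Used for avoidance, the check runs before granting a request (grant only if the resulting state is safe); used for detection, the same simulation over the current allocation reveals the deadlocked set.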

Recovery from deadlock

Recovery by process termination:-



In this approach we terminate deadlocked processes in a systematic way, taking into account their priorities. Consider the case where a process is in the middle of updating a data file and is terminated: the file may be left in an incorrect state by the unexpected termination of the updating process. Processes should therefore be terminated based on some criterion / policy. Some of the criteria may be as follows:

1. Priority of the process.
2. CPU time used and expected usage before completion.
3. Number and type of resources being used (can they be preempted easily?).
4. Number of resources needed for completion.
5. Number of processes that will need to be terminated.
6. Whether the processes are interactive or batch.

Recovery by checkpointing and rollback:- Some systems facilitate deadlock recovery by implementing checkpointing and rollback. Checkpointing is saving enough of the state of a process that the process can be restarted at the point in the computation where the checkpoint was taken. Auto-saving file edits is a form of checkpointing. Checkpointing costs depend on the underlying algorithm: very simple algorithms (like linear primality testing) can be checkpointed with a few words of data, while more complicated processes may have to save all of the process state and memory. If a deadlock is detected, one or more processes are restarted from a checkpoint; restarting a process from a checkpoint is called rollback. It is done with the expectation that the resource requests will not interleave again to produce deadlock. Deadlock recovery is generally used when deadlocks are rare and the cost of recovery (process termination or rollback) is low.

*************

Question 3:What is thrashing? How does it happen? What are the two mechanisms to prevent thrashing? Describe them. Answer:-



Thrashing:- Thrashing occurs when a system spends more time processing page faults than executing transactions. While page fault processing is necessary in order to realize the benefits of virtual memory, thrashing has a negative effect on the system. As the page fault rate increases, more transactions need processing from the paging device, so the queue at the paging device grows, resulting in increased service time for a page fault. While the transactions in the system are waiting for the paging device, CPU utilisation, system throughput and system response time decrease, resulting in below-optimal performance of the system. Thrashing becomes a greater threat as the degree of multiprogramming of the system increases.
[Fig:- Degree of multiprogramming vs CPU utilization: throughput improves as the degree of multiprogramming rises, reaches a maximum, then collapses when thrashing sets in.]

The figure shows that there is a degree of multiprogramming that is optimal for system performance. CPU utilization reaches a maximum and then declines swiftly as the degree of multiprogramming increases further and thrashing occurs in the overcommitted system. This indicates that controlling the load on the system is important to avoid thrashing. The selection of a replacement policy to implement virtual memory also plays an important part in eliminating the potential for thrashing. A policy based on local mode will tend to limit the effect of thrashing, while a replacement policy based on global mode is more likely to cause it: since all pages of memory are available to all transactions, a memory-intensive transaction may occupy a large portion of memory, making other transactions susceptible to page faults and resulting in a system that thrashes.

To prevent thrashing, there are two techniques:
1. Working-set model.
2. Page-fault rate.

1. Working-Set Model:-

Principle of Locality:- Pages are not accessed randomly. At each instant of execution a program tends to use only a small set of pages, and as the pages in the set change, the program is said to move from one phase to another. The principle of locality states that most references will be to the current small set of pages in use.

Working set definition:- The working set is based on the assumption of locality. The idea is to examine the most recent page references: if a page is in active use, it will be in the working set, and if it is no longer being used, it will drop out of the working set. The set of pages currently needed by a process is its working set.
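As a minimal sketch of this definition (the names trace, delta and in_ws are illustrative, and a recorded page-reference trace is assumed), the working set at time t with window size delta is simply the set of distinct pages referenced in the last delta references:

#define MAX_PAGES 64

// Computes the working set at time t over the last `delta` references of
// trace[]. Marks member pages in in_ws[] and returns the working-set size.
int working_set(int trace[], int t, int delta, int in_ws[MAX_PAGES])
{
    int size = 0;
    for (int p = 0; p < MAX_PAGES; p++)
        in_ws[p] = 0;
    int start = (t - delta + 1 > 0) ? t - delta + 1 : 0;
    for (int i = start; i <= t; i++)
        if (!in_ws[trace[i]]) {      // first reference to this page in the window
            in_ws[trace[i]] = 1;
            size++;
        }
    return size;
}

As the process moves from one locality phase to another, pages of the old phase fall out of the window, so the working set shrinks and then regrows around the new phase.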


Working Set Policy:- Restrict the number of processes on the ready queue so that physical memory can accommodate the working sets of all ready processes. Monitor the working sets of the ready processes and, when necessary, reduce the degree of multiprogramming (swap) to avoid thrashing. When loading a process for execution, pre-load certain pages; this prevents the process from having to fault its way into its working set. The pre-load set may be only a rough guess at start-up, but it can be quite accurate on swap-in.

2. Page Fault Rate:- The working-set model is successful, and knowledge of the working set can be useful for pre-paging, but it is a clumsy way to control thrashing. A page-fault-frequency (page fault rate) strategy takes a more direct approach. We establish upper and lower bounds on the desired page fault rate. If the actual page fault rate exceeds the upper limit, we allocate the process another frame; if the page fault rate falls below the lower limit, we remove a frame from the process. Thus we can directly measure and control the page fault rate to prevent thrashing.

[Fig:- Page fault frequency: page fault rate plotted against the number of frames, with an upper bound (increase the number of frames) and a lower bound (decrease the number of frames).]
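A minimal sketch of this feedback rule (UPPER and LOWER are assumed example bounds, not values from the text):

#define UPPER 0.10   // assumed upper bound on acceptable fault rate
#define LOWER 0.02   // assumed lower bound on acceptable fault rate

// Adjusts a process's frame allocation from its measured page fault rate.
int adjust_frames(double fault_rate, int frames)
{
    if (fault_rate > UPPER)
        return frames + 1;    // faulting too often: grant another frame
    if (fault_rate < LOWER && frames > 1)
        return frames - 1;    // ample headroom: reclaim a frame
    return frames;            // within bounds: leave the allocation alone
}

If no free frame is available when a process needs one, the system must instead reduce the degree of multiprogramming by swapping a process out.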

In short: establish an acceptable page fault rate; if a process's actual rate is too high, it gains frames, and if it is too low, it loses frames.

***************

Question 4:What are the factors important for the selection of a file organisation? Discuss three important file organisation mechanisms and their relative performance. Answer:-



The operating system abstracts (maps) from the physical properties of its storage devices to define a logical storage unit, i.e., the file. The operating system provides various system calls for file management, like creating and deleting files, read and write, truncate operations, etc. The files are mapped by the operating system onto physical devices. The operating system is responsible for the following activities with regard to the file system:

1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of system calls for file and directory manipulation.
4. The mapping of files onto disk.
5. The backup of files on stable storage media.

Directories:- A file directory is a group of files organised together. An entry within a directory refers to a file or another directory; hence a tree structure / hierarchy can be formed. Directories are used to group files belonging to different applications / users. Large-scale time-sharing systems and distributed systems store thousands of files and bulk data. File systems can be broken into partitions or volumes, which provide separate areas within one disk, each treated as a separate storage device, in which files and directories reside. The device directory records information like name, location, size and type for all the files on the partition. The root refers to the place from which the root directory begins, which points to the user directories. The root directory is distinct from subdirectories in that it is in a fixed position and of fixed size. So the directory is like a symbol table that converts file names into the corresponding directory entries.

File Organisation Mechanisms

The most common schemes for describing the logical directory structure are:

1. Single Level Directory:- All the files are inside the same directory, which is simple and easy to understand; but the limitation is that all files must have unique names. Also, even with a single user, as the number of files increases it becomes difficult to remember and track the names of all the files.

[Fig:- Single level directory: one directory containing all the files (Abc, Test, Xyz, Hello, ..., Data).]

2. Two Level Directory:- We can overcome the limitation of the single level directory by creating a separate directory for each user, called the user file directory (UFD). When the user logs in, the system's master file directory (MFD) is searched; the MFD is indexed by username / account and holds the UFD reference for that user.

[Fig:- Two level directory: the MFD lists User 1 ... User 4; each user's UFD holds that user's own files (Abc, Xyz, Hello, Data, Test, Hello1, Hello2, Abc1).]

Thus different users may have the same file names, but within each UFD the names must be unique. This resolves the name-collision problem to some extent, but this directory structure isolates one user from another, which is not desirable when the users need to share or co-operate on some task.

3. Tree Structured Directory:- The two level directory structure is like a 2-level tree. To generalise, we can extend the directory structure to a tree of arbitrary height, so the user can create his / her own directories and sub-directories and organise files within them. One bit in each directory entry defines the entry as a file (0) or as a sub-directory (1).



[Fig:- Tree structured directory: a root directory (Hello, Test, Programs) with nested sub-directories (Hello1, Hello2, Hello3, Test1, Test2, Oct, Hex, Dec, Trial1) containing files F1 ... F13.]

The tree has a root directory, and every file in it has a unique path name (the path from the root, through all subdirectories, to the specified file). The pathname prefixes the filename and helps to reach the required file, traversed from a base directory. Pathnames can be of two types, absolute pathnames or relative pathnames, depending on the base directory. An absolute pathname begins at the root and follows a path down to the particular file; it is a full pathname. A relative pathname defines the path from the current directory. For example, if we assume that the current directory is /Hello/Hello2, then the file F4.doc has the absolute pathname /Hello/Hello2/Test2/F4.doc and the relative pathname Test2/F4.doc. The pathname simplifies searching for a file in a tree structured directory hierarchy.
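A minimal sketch of resolving such a pathname in a tree structured directory (the dir_entry structure and resolve helper are illustrative assumptions):

#include <string.h>

struct dir_entry {
    char name[32];
    int is_dir;                  // the file (0) / sub-directory (1) bit
    struct dir_entry *children;  // sub-entries, if is_dir
    int n_children;
};

// Resolves a path like "Hello/Hello2/Test2/F4.doc", starting from the root
// directory for an absolute pathname or from the current directory for a
// relative one. Returns the entry, or NULL if any component is missing.
struct dir_entry *resolve(struct dir_entry *base, const char *path)
{
    char buf[256];
    strncpy(buf, path, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    struct dir_entry *cur = base;
    for (char *comp = strtok(buf, "/"); comp; comp = strtok(NULL, "/")) {
        struct dir_entry *next = NULL;
        for (int i = 0; i < cur->n_children; i++)
            if (strcmp(cur->children[i].name, comp) == 0)
                next = &cur->children[i];
        if (next == NULL)
            return NULL;         // component not found in this directory
        cur = next;
    }
    return cur;
}

******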

Question 5:How do you differentiate between pre-emptive and non pre-emptive scheduling? Briefly describe Round Robin and Shortest Process Next scheduling, with an example of each. Answer:-



Scheduling:- CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

Difference between pre-emptive and non pre-emptive scheduling:

Pre-emptive scheduling:- The term pre-emption means the operating system moves a process from running to ready without the process requesting it. An operating system implementing this algorithm switches to the processing of a new request before completing the processing of the current request. The pre-empted request is put back into the list of pending requests; its servicing will be resumed some time in the future, when it is scheduled again. Pre-emptive scheduling is more useful for high priority processes which require immediate response; for example, in a real time system the consequence of missing one interrupt could be dangerous. Round Robin scheduling, priority based (event driven) scheduling and SRTN are considered pre-emptive scheduling algorithms.

Non pre-emptive scheduling:- A scheduling discipline is non-preemptive if, once a process has been allotted the CPU, the CPU cannot be taken away from that process. A non-preemptive discipline always processes a scheduled request to its completion. In non-preemptive systems, short jobs are made to wait by longer jobs, but the treatment of all processes is fairer. First Come First Served (FCFS) and Shortest Job First (SJF) are considered non-preemptive scheduling algorithms. The decision whether to schedule pre-emptively or not depends on the environment and the type of application most likely to be supported by a given operating system.

Round Robin Scheduling:- Round Robin (RR) scheduling is a preemptive algorithm that allocates the CPU to the process that has been waiting the longest. It is one of the oldest, simplest and most widely used algorithms. The round robin scheduling algorithm is primarily used in timesharing and multi-user system environments where the primary requirement is to provide reasonably good response times and, in general, to share the system fairly among all users. Basically, the CPU time is divided into time slices: each process is allocated a small time slice called a quantum, and no process can run for more than one quantum while others are waiting in the ready queue. If a process needs more CPU time to complete after exhausting one quantum, it goes to the end of the ready queue to await the next allocation. To implement RR scheduling, a FIFO queue data structure is used to maintain the ready processes; a new process is added at the tail of the queue. The CPU scheduler picks the first process from the ready queue and


allocates it the processor for a specified time quantum. After that time, the CPU scheduler selects the next process from the ready queue.

Consider the following set of processes, with the processing time given in milliseconds:

Process    Processing time
P1         24
P2         03
P3         03

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is pre-empted after the first time quantum and the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received one time quantum, the CPU is returned to process P1 for an additional time quantum. The Gantt chart will be:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Turn around time = t(process completed) - t(process submitted)
Waiting time = turn around time - processing time

Process    Processing time    Turn around time    Waiting time
P1         24                 30 - 0 = 30         30 - 24 = 6
P2         03                  7 - 0 =  7          7 -  3 = 4
P3         03                 10 - 0 = 10         10 -  3 = 7

Average turn around time = (30 + 7 + 10) / 3 = 47 / 3 = 15.66
Average waiting time = (6 + 4 + 7) / 3 = 17 / 3 = 5.66
Throughput = 3 / 30 = 0.1

Processor utilisation = (30 / 30) * 100 = 100 %
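The RR bookkeeping above can be expressed with a FIFO ready queue and a fixed quantum. This illustrative simulation (remaining[] and QUANTUM are assumptions, not from the original text) reproduces the Gantt chart for P1, P2 and P3:

#include <stdio.h>

#define QUANTUM 4
#define NPROC   3

// Simulates round robin over the remaining processing times and prints
// the resulting Gantt sequence.
void round_robin(int remaining[NPROC])
{
    int queue[64], head = 0, tail = 0, clock = 0;
    for (int i = 0; i < NPROC; i++)
        queue[tail++] = i;             // all processes ready at time 0
    while (head < tail) {
        int p = queue[head++];         // pick the process waiting longest
        int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        printf("P%d: %d-%d\n", p + 1, clock, clock + run);
        clock += run;
        remaining[p] -= run;
        if (remaining[p] > 0)
            queue[tail++] = p;         // unfinished: back to the tail
    }
}

Calling it with remaining = {24, 3, 3} prints P1: 0-4, P2: 4-7, P3: 7-10, and then P1 for the five remaining quanta up to time 30, matching the chart.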



Shortest Process Next:-

Interactive processes generally follow the pattern of wait for command, execute command, wait for command, and so on. If we regard the execution of each command as a separate job, then we can minimize overall response time by running the shortest one first. The only problem is figuring out which of the currently runnable processes is the shortest one. One approach is to make estimates based on past behaviour and run the process with the shortest estimated running time. Suppose that the estimated time per command for some terminal is T0, and its next run is measured to be T1. We could update our estimate by taking a weighted sum of these two numbers, that is, aT0 + (1 - a)T1. Through the choice of a we can decide to have the estimation process forget old runs quickly, or remember them for a long time. With a = 1/2, we get the successive estimates:

T0,  T0/2 + T1/2,  T0/4 + T1/4 + T2/2,  T0/8 + T1/8 + T2/4 + T3/2

After three new runs, the weight of T0 in the new estimate has dropped to 1/8. The technique of estimating the next value in a series by taking the weighted average of the current measured value and the previous estimate is sometimes called aging. It is applicable to many situations where a prediction must be made based on previous values. Aging is especially easy to implement when a = 1/2: all that is needed is to add the new value to the current estimate and divide the sum by 2 (by shifting it right 1 bit).
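A minimal sketch of this aging update with a = 1/2 (the function name is illustrative):

// With a = 1/2, the new estimate is (previous estimate + measured run) / 2,
// which the right shift computes without a division.
unsigned aging_estimate(unsigned estimate, unsigned measured)
{
    return (estimate + measured) >> 1;
}

Starting from T0 and feeding in the measured runs T1, T2, T3 reproduces the series above, halving the weight of each older run at every step.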

*********

Question 6:i) What are the main differences between capability lists and access lists? Explain through examples.

Answer:- The security policy outlines several high level points: how the data is accessed, the amount of security required, and what the steps are when these requirements are not met. Conceptually, protection can be modelled as a matrix with domains as rows and objects as columns; storing this matrix in full is impractical because it is large and mostly empty. Two methods that are practical, however, are storing the matrix by rows or by columns, and then storing only the non-empty elements.


1. Access Control List:-

It consists of associating with each object an (ordered) list containing all the domains that may access the object, and how. This list is called the access control list or ACL.

Consider an example of three processes, each belonging to a different domain, A, B and C, and three files F1, F2 and F3. For simplicity we will assume that each domain corresponds to exactly one user, in this case users A, B and C. Often in the security literature the users are called subjects or principals, to contrast them with the things owned, the objects, such as files.

Each file has an ACL associated with it. File F1 has two entries in its ACL (separated by a semicolon). The first entry says that any process owned by user A may read and write the file; all other accesses by these users and all accesses by other users are forbidden. Note that the rights are granted by user, not by process: as far as the protection system goes, any process owned by user A can read and write file F1. It does not matter if there is one such process or 100 of them; it is the owner, not the process ID, that matters.

File F2 has entries in its ACL for A, B and C: all three can read the file, and in addition B can also write it. No other accesses are allowed. File F3 is apparently an executable program, since B and C can both read and execute it; B can also write it.

Many systems support the concept of a group of users. Groups have names and can be included in ACLs. In one variation on these semantics, each process has a user ID (UID) and group ID (GID), and an ACL entry contains entries of the form:

UID1, GID1: rights1; UID2, GID2: rights2; ...

Under these conditions, when a request is made to access an object, a check is made using the caller's UID and GID. If they are present in the ACL, the rights listed are available; if the (UID, GID) combination is not in the list, the access is not permitted.
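A minimal sketch of such an ACL check (the acl_entry layout and rights strings are illustrative assumptions, not from any particular system):

#include <string.h>

struct acl_entry {
    int uid, gid;            // who this entry applies to
    const char *rights;      // e.g. "rw", "r", "rwx"
};

// Returns 1 if (uid, gid) may perform op ('r', 'w' or 'x') on an object
// protected by acl[], 0 otherwise.
int acl_allows(struct acl_entry acl[], int n, int uid, int gid, char op)
{
    for (int i = 0; i < n; i++)
        if (acl[i].uid == uid && acl[i].gid == gid)
            return strchr(acl[i].rights, op) != NULL;
    return 0;                // combination not in the list: access denied
}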
2. Capability List:-

The other way of slicing up the matrix is by rows. When this method is used, associated with each process is a list of the objects that may be accessed, along with an indication of which operations are permitted on each; in other words, its domain.
[Fig:- When capabilities are used, each process has a capability list kept in kernel space. From the figure: process A holds F1: R, F2: R; process B holds F1: R, F2: RW, F3: RWX; process C holds F2: R, F3: RX.]

This list is called the capability list or C-list, and the individual items on it are called capabilities (Dennis and Van Horn, 1966; Fabry, 1974). A set of three processes and their capability lists is shown in the figure above. Each capability grants its owner certain rights on a certain object; here, the process owned by user A can read files F1 and F2. Usually a capability consists of a file (or, more generally, an object) identifier and a bitmap for the various rights. In a UNIX-like system, the file identifier would probably be the i-node number. Capability lists are themselves objects and may be pointed to from other capability lists, thus facilitating the sharing of sub-domains.
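For contrast with the ACL sketch above, a per-process capability list, with rights held as a bitmap as just described, might be checked like this (again an illustrative assumption):

#define RIGHT_R 1
#define RIGHT_W 2
#define RIGHT_X 4

struct capability {
    int object_id;           // e.g. the file's i-node number
    unsigned rights;         // bitmap of RIGHT_R / RIGHT_W / RIGHT_X
};

// Returns 1 if the process's C-list grants `right` on the object.
int cap_allows(struct capability clist[], int n, int object_id, unsigned right)
{
    for (int i = 0; i < n; i++)
        if (clist[i].object_id == object_id)
            return (clist[i].rights & right) != 0;
    return 0;                // no capability for this object
}

The key structural difference is visible in the parameters: the ACL check is stored with the object and searches for the subject, while the capability check is stored with the process and searches for the object.

***********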

Question 6:ii) What is the kernel of an operating system? What functions are normally performed by the kernel? Give several reasons why it is effective to design a microkernel.

Answer:- The kernel is a bridge between applications and the actual data processing done at the hardware level. The operating system, referred to in UNIX as the kernel, interacts directly with the hardware and provides services to the user programs. User programs interact with the kernel through a set of standard system calls. These system calls request services to be provided by the kernel. Such services include: accessing a file (open, close, read, write, link or execute a file); starting or


updating accounting records; changing ownership of a file or directory; changing to a new directory; creating, suspending or killing a process; enabling access to hardware devices; and setting limits on system resources.

In Windows 2000, the purpose of the kernel is to make the rest of the operating system hardware-independent. It accesses the hardware via the HAL (Hardware Abstraction Layer) and builds upon the extremely low level HAL services to build higher level abstractions. In addition to providing a higher level abstraction of the hardware and handling thread switches, the kernel also has another key function: providing low level support for control objects and dispatcher objects. On booting, Windows 2000 is loaded into memory as a collection of files; the most important of these is described below:

File name        Consists of
Ntoskrnl.exe     Kernel and executive

In a multi-user, multi-tasking operating system (UNIX), we can have many users logged into a system simultaneously, each running many programs. It is the kernel's job to keep each process and user separate and to regulate access to system hardware, including the CPU, memory, disk and other I/O devices.

Management of processes by the kernel:- For each new process created, the kernel sets up an address space in memory. This address space consists of the following logical segments:

Text : contains the program's instructions.
Data : contains initialized program variables.
Bss : contains uninitialized program variables.
Stack : a dynamically growable segment containing locally allocated variables and parameters passed to functions in the program.

Each process has two stacks, a user stack and a kernel stack, used when the process executes in user or kernel mode respectively.

Mode switching:-

Kernel mode:- Processes carrying out kernel instructions are said to be running in kernel mode. A user process can be in kernel mode while making a system call, while generating an exception / fault, or in case of an interrupt. Essentially, a mode switch occurs and control is transferred to the kernel when a user program makes a system call; the kernel then executes the instructions on the user's behalf.



While in kernel mode, a process has full privileges and may access the code and data of any process.

Memory organization by the kernel:- When the kernel is first loaded into memory at boot time, it sets aside a certain amount of RAM for itself as well as for all system and user processes. The main categories into which RAM is divided are:

Text : To hold the text segments of running processes.
Data : To hold the data segments of running processes.
Stack : To hold the stack segments of running processes.

Shared Memory : An area of memory available to running programs if they need it. Consider a common use of shared memory: assume we have a program which has been compiled using a shared library, and that five instances of this program are running simultaneously. At run time, the code of the shared library is made resident in the shared memory area. This way, only a single copy of the library needs to be in memory, resulting in increased efficiency and major cost savings.

Buffer Cache:All reads and writes to the file system are cached here first . Sometimes where a program that is writing to a file doesnt seem to work [ nothing is written to the file ] . A trend in modern operating systems is to take the idea of moving code up into higher layers even further and remove as much as possible from the kernel Client Process User mode Kernel mode

Client process

Process server

Terminal server Microkernel

File server

Memory server

Client obtain service by sending messages to server processes. mode, leaving a minimal level of kernel with the minimal characteristics of a kernel.


All the kernel does is handle the communication between clients and servers. By splitting the operating system up into parts, each of which only handles one facet of the system, such as file service, process service, terminal service, or memory service, each part becomes small and manageable. Furthermore, because all the servers run as user-mode processes and not in kernel mode, they do not have direct access to the hardware. As a consequence, if a bug in the file server is triggered, the file service may crash, but this will not usually bring the whole machine down. The kernel would not even inspect the bytes in the message to see if they were valid or meaningful; it would just blindly copy them into the disk device registers. (Obviously, some scheme for limiting such messages to authorized processes only must be used.) This split between mechanism and policy is an important concept; it occurs again and again in operating systems in various contexts.
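A minimal sketch of the client side of such message passing (the message layout and the send / receive primitives are illustrative assumptions, not a real kernel API):

struct message {
    int source;              // sending process
    int dest;                // receiving server process
    int opcode;              // requested operation, e.g. a file read
    char data[64];           // request parameters / reply payload
};

// Hypothetical kernel primitives: the microkernel only copies messages
// between address spaces; it never interprets their contents.
void send(int dest, struct message *m);
void receive(int source, struct message *m);

// A client obtains file service purely by messaging the file server.
void file_request(int file_server, int opcode)
{
    struct message m;
    m.dest = file_server;
    m.opcode = opcode;
    send(file_server, &m);    // kernel blindly copies the message across
    receive(file_server, &m); // block until the server's reply arrives
}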

**********

