Batch Systems

i. In early serial processing systems, much of the machine's time was wasted on scheduling and setup.
To improve utilization of CPU time, the concept of a batch operating system was developed.
The first batch operating system was developed in the mid-1950s by General Motors for use on
an IBM 701.
ii. The central idea behind the simple batch processing scheme was the use of a piece of software
known as the monitor.
iii. With this type of operating system, the user no longer has direct access to the machine.
Rather, the user submits the job on cards or tape to a computer operator, who batches jobs with
similar needs together sequentially and places the entire batch on an input device (typically a
card reader or magnetic tape drive) for use by the monitor.
iv. The monitor reads in jobs one at a time from the input device. As it is read in, the current job is
placed in the user program area, and control is passed to this job. When the job is completed, it
returns control to the monitor, which immediately reads in the next job. The results of each job
are sent to an output device, such as a printer, for delivery to the user.
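Item iv amounts to a simple control loop. The following Python sketch is purely illustrative (the job names and the load_into_user_area and run helpers are invented placeholders, not part of any real monitor); it shows the monitor reading one job at a time, passing control to it, and regaining control when the job ends.

# Illustrative resident-monitor loop: read a job, run it, take control back.
batch = ["job1.deck", "job2.deck", "job3.deck"]    # jobs queued on the input device

def load_into_user_area(job):
    print(f"monitor: reading {job} into the user program area")

def run(job):
    print(f"{job}: executing ... done")            # control returns when the job completes

for job in batch:                                  # the monitor controls the sequence of events
    load_into_user_area(job)
    run(job)                                       # control is passed to the job
    print("monitor: job finished, results sent to the output device")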
v. It is the monitor that controls the sequence of events. For this to be so, much of the monitor must
always be in main memory and available for execution (Figure below). That portion is referred
to as the resident monitor.
vi. A batch of jobs is queued up, and the monitor executes the jobs as rapidly as possible, with
no intervening idle time. Because jobs in the same batch need the same resources (such as the
same compiler) for their execution, the overhead of repeatedly loading and unloading different
resources in memory is reduced.
Figure: Memory layout for a resident monitor. The monitor portion of memory (interrupt
processing, device drivers, job sequencing, control language interpreter) lies below the
boundary; the user program area lies above it.

Figure: Schematic of a batch processing system. The batch monitor resides in the system working
area; the current job of the batch occupies the user working area, while the remaining jobs
(Job1, Job2, ..., Jobn) of the same batch wait on the input device.
vii. There have been two sacrifices:
• Some main memory is now given over to the monitor and
• some machine time is consumed by the monitor.
Both of these are forms of overhead. Despite this overhead, the simple batch system improves
utilization of the computer.
viii. The monitor improves job setup time as well. With each job, instructions are included in a
primitive form of job control language (JCL). This is a special type of programming language
used to provide instructions to the monitor.

Certain other hardware features are also desirable:


• Memory protection: While the user program is executing, it must not alter the memory area
containing the monitor. If such an attempt is made, the processor hardware should detect an error
and transfer control to the monitor. The monitor would then abort the job, print out an error
message, and load in the next job.
• Timer: A timer is used to prevent a single job from monopolizing the system. The timer is set at
the beginning of each job. If the timer expires, the user program is stopped, and control returns
to the monitor.
• Privileged instructions: Certain machine-level instructions are designated privileged and can be
executed only by the monitor. If the processor encounters such an instruction while executing a
user program, an error occurs, causing control to be transferred to the monitor. Among the
privileged instructions are I/O instructions, so that the monitor retains control of all I/O devices.
This prevents, for example, a user program from accidentally reading job control instructions
from the next job. If a user program wishes to perform I/O, it must request that the monitor
perform the operation for it. (A small sketch of the timer and privileged-instruction checks
appears after this list.)
• Interrupts: This feature gives the operating system more flexibility in relinquishing control to
and regaining control from user programs.
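The following Python sketch illustrates, in simplified form, how the timer and privileged-instruction checks force control back to the monitor. It is only an illustration: the instruction names, the 3-tick time slice, and the trap_to_monitor helper are invented for the example and do not correspond to any particular hardware.

# Illustrative check applied to each instruction before it is allowed to execute.
PRIVILEGED = {"IO_READ", "IO_WRITE", "SET_TIMER", "HALT"}   # monitor-only operations (invented)
TIME_SLICE = 3                                              # ticks allowed per job (invented)

def trap_to_monitor(reason):
    print(f"trap -> monitor: {reason}")

def run_user_job(instructions):
    ticks = 0
    for instr in instructions:
        if ticks == TIME_SLICE:
            trap_to_monitor("timer expired")                # job stopped, monitor regains control
            return
        if instr in PRIVILEGED:
            trap_to_monitor(f"privileged instruction {instr} in user mode")
            return
        print(f"user mode: executing {instr}")
        ticks += 1
    print("job completed normally")

run_user_job(["ADD", "MUL", "IO_READ", "SUB"])     # traps on the I/O instruction
run_user_job(["ADD", "ADD", "ADD", "ADD", "ADD"])  # traps when the timer expires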
A simple example is that of a user submitting a program written in the programming language
FORTRAN plus some data to be used by the program. Each FORTRAN instruction and each item of data
is on a separate punched card or a separate record on tape. In addition to FORTRAN and data
lines, the job includes job control instructions, which are denoted by a beginning '$'. The overall
format of the job looks like this:

$JOB
$FTN
...          (FORTRAN instructions)
$LOAD
$RUN
...          (data)
$END

To execute this job, the monitor reads the $FTN line and loads the appropriate language compiler
from its mass storage (usually tape). The compiler translates the user's program into object code,
which is stored in memory or mass storage. If it is stored in memory, the operation is referred to
as "compile, load, and go." If it is stored on tape, then the $LOAD instruction is required. This
instruction is read by the monitor, which regains control after the compile operation. The monitor
invokes the loader, which loads the object program into memory (in place of the compiler) and
transfers control to it. In this manner, a large segment of main memory can be shared among
different subsystems, although only one such subsystem could be executing at a time.

Multiprogrammed Batch Systems
i. In spite of the automatic job sequencing provided by a uniprogrammed simple batch operating
system, the processor is often idle because the I/O devices are slower than the processor. Such
systems cannot keep either the CPU or the I/O devices busy at all times. Multiprogramming
increases CPU utilization by organizing jobs so that the CPU always has one to execute.
Figure: (a) Uniprogramming system: each job's CPU bursts (Job1, Job2, Job3) are separated by idle
periods while its I/O completes. (b) Multiprogramming system: the I/O operations of one job
overlap with the CPU operations of another, so the CPU stays busy. (c) Memory layout for a
multiprogramming system: the operating system and several jobs (Job1 ... Job4) reside in memory
at the same time.

ii. Figure (a) illustrates this situation, where we have a single program, referred to as
uniprogramming. The processor spends a certain amount of time executing, until it reaches an
I/O instruction. It must then wait until that I/O instruction concludes before proceeding.
iii. This inefficiency is not necessary. If memory can hold the operating system (resident
monitor) and multiple user programs, then when one job needs to wait for I/O, the processor can
switch to another job, which is likely not waiting for I/O, as shown in figure (b). This approach
is known as multiprogramming, or multitasking. It is the central theme of modern operating
systems.
iv. The idea is as follows: The operating system keeps several jobs in memory simultaneously
(Figure (c)). This set of jobs is a subset of the jobs kept in the job pool, since the number of jobs
that can be kept simultaneously in memory is usually much smaller than the number of jobs that
can be in the job pool. The operating system picks and begins to execute one of the jobs in the
memory. Eventually, the job may have to wait for some task, such as an I/O operation, to
complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogramming
system, the operating system simply switches to, and executes, another job. When that job needs
to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting
and gets the CPU back. As long as at least one job needs to execute, the CPU is never idle.
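A tiny simulation can make this switching concrete. The sketch below is illustrative only (the job names, the alternating CPU and I/O burst lengths, and the FIFO choice of the next ready job are all invented); it shows the CPU being handed to another resident job whenever the running job blocks for I/O, while the devices complete that I/O in parallel.

from collections import deque

# Each resident job alternates CPU bursts and I/O waits (invented numbers).
jobs = {
    "Job1": deque([("cpu", 3), ("io", 2), ("cpu", 2)]),
    "Job2": deque([("cpu", 1), ("io", 4), ("cpu", 3)]),
    "Job3": deque([("cpu", 2), ("io", 3), ("cpu", 1)]),
}
ready = deque(jobs)      # jobs in memory that can use the CPU right now
blocked = set()          # jobs waiting for their I/O to finish
running = None
clock = 0

while ready or blocked or running:
    # I/O devices make progress on every tick, in parallel with the CPU.
    for name in list(blocked):
        kind, left = jobs[name][0]
        jobs[name][0] = (kind, left - 1)
        if left - 1 == 0:
            jobs[name].popleft()
            blocked.discard(name)
            if jobs[name]:
                ready.append(name)               # I/O finished: job is ready again

    if running is None and ready:
        running = ready.popleft()                # switch the CPU to a ready job
        print(f"t={clock}: dispatch {running}")

    if running is not None:
        kind, left = jobs[running][0]
        jobs[running][0] = (kind, left - 1)
        if left - 1 == 0:
            jobs[running].popleft()
            if jobs[running]:                    # next phase in this example is an I/O wait
                blocked.add(running)             # job blocks for I/O; the CPU is freed
            else:
                print(f"t={clock}: {running} finished")
            running = None
    else:
        print(f"t={clock}: CPU idle (every job is waiting for I/O)")
    clock += 1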

v. Multiprogrammed batch systems rely on certain computer hardware features:
• The most notable additional feature that is useful for multiprogramming is the hardware that
supports I/O interrupts and DMA (direct memory access). With interrupt-driven I/O or DMA,
the processor can issue an I/O command for one job and proceed with the execution of
another job while the I/O is carried out by the device controller. When the I/O operation is
complete, the processor is interrupted and control is passed to an interrupt-handling program
in the operating system. The operating system will then pass control to another job.
• To have several jobs ready to run, they must be kept in main memory, requiring some form
of memory management. In addition, if several jobs are ready to run, the processor must
decide which one to run, which requires some algorithm for scheduling.

REAL TIME SYSTEMS


i. A real-time system is a computer system in which correctness depends not only on the logical
result of the computation, but also on the time at which the results are produced. In a real-time
system the scheduler is the most important component.
ii. In a real-time system, some of the tasks that attempt to control or react to events that take place
in the outside world have a certain degree of urgency to them. Since these events occur in "real
time", such tasks must be accepted and processed within a deadline, where the deadline specifies
either a start time or a completion time.
iii. Real-time computing is of two types: hard and soft.
• A hard real-time system has the most stringent requirements, guaranteeing that critical tasks
are completed on time. These systems are safety-critical. Examples include weapons systems,
antilock brake systems, flight-management systems, and health-related embedded systems
such as pacemakers.
• A less restrictive type of real-time system is a soft real-time system, where a critical real-time
task gets priority over other tasks and retains that priority until it completes. Examples include
microwave ovens and networking devices such as switches and routers.
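One common way to meet such deadlines is to always run the ready task whose deadline is nearest, a policy known as earliest deadline first. The Python sketch below is only an illustration of that idea; the task names, deadlines, and processing times are invented.

import heapq

# Each entry is (deadline, task name, processing time); all values are invented.
tasks = [(50, "brake-sensor", 5), (20, "pacemaker-pulse", 2), (35, "display-update", 8)]

queue = list(tasks)
heapq.heapify(queue)            # ready tasks ordered by nearest deadline first

clock = 0
while queue:
    deadline, name, cost = heapq.heappop(queue)   # the most urgent task runs first
    clock += cost
    status = "met" if clock <= deadline else "MISSED"
    print(f"{name}: finished at t={clock}, deadline {deadline} -> {status}")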
Figure: a task sensitive to external events is processed by the CPU to produce a quick response.
iv. Characteristics of Real-Time Operating Systems


• Determinism - An operating system is deterministic to the extent that it performs operations
at fixed, predetermined times or within predetermined time intervals. Determinism is concerned
with how long an operating system delays before acknowledging an interrupt. It depends on the
speed with which the system can respond to interrupts and on whether the system has sufficient
capacity to handle all requests within the required time.
• Responsiveness - Responsiveness is concerned with how long an operating system takes to
service the interrupt after acknowledgment.
Aspects of responsiveness include the following:
➢ The amount of time required to initially handle the interrupt and begin execution of the
interrupt service routine (ISR).
➢ The amount of time required to perform the ISR.
➢ The effect of interrupt nesting, that is, the current interrupt being interrupted by another
one, which delays its service.

• User control - In a real-time system, the user should be able to distinguish between hard and
soft tasks and to specify fine-grained relative priorities within each class of tasks. A real-time system
may also allow the user to specify characteristics of paging or process swapping, what
processes must always be resident in main memory, what disk transfer algorithms are to be
used, what rights the processes in various priority bands have, and so on.
• Reliability - Reliability is more important for real-time systems than for non-real-time systems.
Loss or degradation of performance may have catastrophic consequences, ranging from
financial loss to major equipment damage and even loss of life.
• Fail-soft operation - Fail-soft operation refers to the ability of a system to fail in such a way
as to preserve as much capability and data as possible. Typically, the system notifies a user or
user process that it should attempt corrective action, and then continues operating, perhaps at a
reduced level of service. In the event a shutdown is necessary, an attempt is made to maintain
file and data consistency.
v. File management is usually found only in larger installations of real-time systems. The primary
objective of file management in real-time systems is usually speed of access, rather than efficient
utilization of secondary storage.

TIME SHARING SYSTEMS


i. With the use of multiprogramming, batch processing can be quite efficient. However, for many
jobs, such as transaction processing, it is desirable to provide a mode in which the user interacts
directly with the computer.
ii. Today, the requirement for interactive computing can be met by using a dedicated
microcomputer. That option was not available in the 1960s, when most computers were big and
costly. Instead, time sharing was developed.
iii. A time-sharing system is a popular representative of multiuser, multi-access systems. One
primary objective of a time-sharing operating system is to reduce terminal response time.
iv. In a time-sharing system, multiple users simultaneously access the system through terminals.
The users submit their jobs to the operating system installed on the time-sharing system, and the
operating system hands each user's job to the CPU in turn.
v. For this, the OS circulates CPU time slots among the terminals. When a terminal has a job to
be executed and its CPU time slot arrives, the job is processed. If the job is not completed within
the slot, the CPU is switched to the next terminal, leaving the current job where it is. The
uncompleted job is processed further when its time slot comes around again after circulating
among all the terminals.
Figure: the CPU circulates among Process1 ... Processn, one process for each of User 1 ... User n.
vi. Being interactive, each action or command in a time-shared system tends to be short. Thus, only
a little CPU time is needed for each user. As the system switches rapidly from one user to the
next, each user is given the impression that the entire computer system is dedicated to his use,
even though it is being shared among many users.
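The circulation of CPU time slots described in item v is round-robin scheduling. The Python sketch below is illustrative only (the user names, remaining work, and the two-tick slot length are invented): each job receives one time slot per turn, and an unfinished job rejoins the end of the queue to wait for its slot to come around again.

from collections import deque

TIME_SLOT = 2                                               # CPU ticks per turn (invented)
jobs = deque([("User1", 5), ("User2", 3), ("User3", 1)])    # (terminal, work remaining)

while jobs:
    user, left = jobs.popleft()
    used = min(TIME_SLOT, left)
    left -= used
    print(f"{user}: ran {used} tick(s), {left} remaining")
    if left:
        jobs.append((user, left))   # unfinished: wait for the slot to come around again
    else:
        print(f"{user}: job complete")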
vii. Time-sharing and multiprogramming require several jobs to be kept simultaneously in
memory. Since main memory is in general too small to accommodate all jobs, the jobs are
kept initially on disk in the job pool. This pool consists of all processes residing on disk
awaiting allocation of main memory.
viii. If several jobs are ready to be brought into memory, and if there is not enough room for all
of them, then the system must choose among them. Making this decision is referred to as job
scheduling.
ix. When the operating system selects a job from the job pool, it loads that job into memory
for execution. Having several programs in memory at the same time requires some form of
memory management.
x. In addition, if several jobs are ready to run at the same time, the system must choose
among them. Making this decision is CPU scheduling.
xi. Finally, running multiple jobs concurrently requires that their ability to affect one another
be limited in all phases of the operating system, including process scheduling, disk storage,
and memory management.
xii. One of the first time-sharing operating systems to be developed was the Compatible Time-
Sharing System (CTSS), developed at the Massachusetts Institute of Technology (MIT) by a
group known as Project MAC (Machine-Aided Cognition, or Multiple-Access Computers). The
system was first developed for the IBM 709 in 1961 and later transferred to an IBM 7094.

Each user has at least one separate program in memory. A program loaded into memory and
executing is commonly referred to as a process. When a process executes, it typically executes for
only a short time before it either finishes or needs to perform I/O.
Thus, if there are n users actively requesting service at one time, each user will only see on the
average 1/n of the effective computer speed, not counting operating system overhead. However,
given the relatively slow human reaction time, the response time on a properly designed system
should be comparable to that on a dedicated computer.
An interactive (or hands-on) computer system provides direct communication between the user
and the system. The user gives instructions to the operating system or to a program directly, using
a keyboard or a mouse, and waits for immediate results. Accordingly, the response time should be
short, typically within 1 second or so.

DISTRIBUTED SYSTEMS
i. A distributed system is a collection of physically separate, possibly heterogeneous, loosely
coupled processors interconnected by a communication network.
ii. The processors in a distributed system may vary in size and function. They may include small
microprocessors, workstations, minicomputers, and large general-purpose computer systems.
These processors are referred to by a number of names, such as sites, nodes, computers,
machines, or hosts, depending on the context in which they are mentioned.
iii. In this system, the processors do not share memory, devices, or even a clock. Each processor
has its own local memory. The processors communicate with each other through high-speed buses
or telephone lines.
There are four main reasons for building distributed systems, as explained below:
iv. Resource Sharing – If a number of different sites are connected to one another, then a user at
one site may be able to use the resources available at another.
Resource sharing supports sharing files at remote sites, processing information in a distributed
database, using specialized remote hardware devices (such as high-speed array processors or
printers), and performing other operations.

v. Computation Speedup – In a distributed system, a particular computation can be partitioned
into subcomputations that are distributed among the various sites so that they run concurrently,
providing computation speedup.
Also, if one site is overloaded with jobs, some of them can be moved to other, lightly loaded
sites. This job movement is called load sharing.
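The idea of partitioning a computation into subcomputations can be sketched as below. This is only an illustration: the work (summing a range of numbers) and the chunk size are invented, and local worker processes stand in for the separate sites of a real distributed system.

from concurrent.futures import ProcessPoolExecutor

def subcomputation(chunk):
    """One site's share of the work: here, summing part of a range."""
    return sum(chunk)

if __name__ == "__main__":
    data = range(1, 1_000_001)
    # Split the computation into four subcomputations (each could go to a different site).
    chunks = [range(i, min(i + 250_000, 1_000_001)) for i in range(1, 1_000_001, 250_000)]

    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(subcomputation, chunks))

    print(sum(partials) == sum(data))   # True: same result, computed concurrently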
vi. Reliability – If one site fails in a distributed system, the remaining sites can continue
operating, giving the system better reliability. If a site responsible for a critical function fails,
the whole system may halt; this problem can be eliminated with enough redundancy (in both
data and hardware).
vii. Communication –When several sites are connected to one another by a communication
network, the users at different sites have the opportunity to exchange information. At a low level,
messages are passed between systems. High level communication functions include file transfer,
login, mail, and remote procedure calls (RPCs).
The advantage of a distributed system is that these functions can be carried out over great
distances. For example, two people at geographically distant sites can collaborate on a project.
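At the lowest level, communication is just message passing over the network. The Python sketch below is illustrative only (the loopback address and port are invented, and both "sites" run in one process here): site B listens, and site A connects and sends it a message.

import socket, threading

HOST, PORT = "127.0.0.1", 50007          # invented address for this example

# Site B: create a listening socket first, then accept the connection in the background.
server = socket.socket()
server.bind((HOST, PORT))
server.listen(1)

def site_b():
    conn, _ = server.accept()
    with conn:
        print("site B received:", conn.recv(1024).decode())
    server.close()

t = threading.Thread(target=site_b)
t.start()

# Site A: connect to site B and pass a message over the network.
with socket.socket() as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello from site A")

t.join()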
viii. A network, in the simplest terms, is a communication path between two or more systems.
Distributed systems depend on networking for their functionality.
ix. Networks vary by the protocols used, the distances between nodes, and the transport media.
TCP/IP is the most common network protocol. Most operating systems support TCP/IP,
including the Windows and UNIX operating systems.
x. Local Area Networks (LANs), Wide Area Networks (WANs), and Metropolitan Area Networks
(MANs) are the network types over which distributed systems are commonly built; they differ
mainly in the distances they span.

CLUSTERED SYSTEMS
i. We can define a cluster as a group of interconnected, whole computers working together as a
unified computing resource that can create the illusion of being one machine. The term whole
computer means a system that can run on its own apart from the cluster.
ii. The generally accepted definition is that clustered computers share storage and are closely linked
via a local area network (LAN) or a faster interconnect such as InfiniBand.
iii. Four benefits of clustering are as follows:
• Absolute scalability: It is possible to create large clusters that far surpass the power of even
the largest standalone machines. A cluster can have dozens or even hundreds of machines,
each of which is a multiprocessor.
• Incremental scalability: It is possible to add new systems to the cluster in small increments
for expanding it as needs grow, without having to go through a major upgrade in which an
existing small system is replaced with a larger system.
• High availability:
➢ Because each node in a cluster is a standalone computer, the failure of one node does
not mean loss of service.
➢ A layer of cluster software runs on the cluster nodes. Each node can monitor one or
more of the others (over the LAN). If the monitored machine fails, the monitoring
machine can take ownership of its storage, and restart the applications that were running
on the failed machine.
• Superior price/performance: By using commodity building blocks, it is possible to put
together a cluster with equal or greater computing power than a single large machine, at
much lower cost.
iv. Clustering can be structured asymmetrically or symmetrically.
• Asymmetric Clustering: In asymmetric clustering, one machine is in hot standby mode
while the other is running the applications. The hot standby host (machine) does nothing but
monitor the active server; if that server fails, the hot standby host becomes the active server
(a small sketch of this monitor-and-failover pattern follows this list).
• Symmetric Clustering: In symmetric mode, two or more hosts are running applications,
and they are monitoring each other. This mode is obviously more efficient, as it uses all of
the available hardware.
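The monitor-and-failover behaviour of a hot standby node can be sketched as below. This is only an illustration: the heartbeat_ok probe, the probe interval, and the point at which the active server "fails" are all invented for the example.

import time

def heartbeat_ok():
    """Stand-in for probing the active server over the LAN (invented)."""
    heartbeat_ok.calls += 1
    return heartbeat_ok.calls < 4          # pretend the active server dies on the 4th probe
heartbeat_ok.calls = 0

role = "hot standby"
while role == "hot standby":
    if heartbeat_ok():
        print("standby: active server is alive, keep monitoring")
    else:
        print("standby: no heartbeat -- taking over its storage and restarting its applications")
        role = "active"
    time.sleep(0.1)                        # probe interval (invented)

print("this node is now the", role, "server")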
v. Clusters in which the nodes share storage are called parallel clusters. Shared storage generally
uses RAID technology. To provide access to the shared data, distributed file systems must provide
access control and locking to ensure that no conflicting operations occur. This type of
service is commonly known as a distributed lock manager (DLM).
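A real distributed lock manager is far more involved, but the rule it enforces can be sketched as follows. The resource and node names and the acquire helper are invented for illustration: any number of nodes may hold a shared (SH) lock on the same data, while an exclusive (EX) lock conflicts with every other lock.

# Minimal illustration of the compatibility rule a lock manager enforces on shared data.
locks = {}   # resource -> list of (node, mode) currently granted

def acquire(resource, node, mode):
    holders = locks.setdefault(resource, [])
    conflict = any(mode == "EX" or held == "EX" for _, held in holders)
    if conflict:
        print(f"{node}: must wait for {mode} lock on {resource}")
        return False
    holders.append((node, mode))
    print(f"{node}: granted {mode} lock on {resource}")
    return True

acquire("file-A", "node1", "SH")   # granted: shared read access
acquire("file-A", "node2", "SH")   # granted: readers do not conflict
acquire("file-A", "node3", "EX")   # must wait: exclusive access conflicts with the readers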
vi. New improvements such as storage-area networks (SANs) make it possible to connect dozens of
machines in a cluster even though they are separated by miles. SANs allow many systems to
attach to a pool of storage.

Three methods of clustering can be identified: separate servers, shared nothing, and shared disk.
In one approach to clustering, each computer is a separate server with its own disks, and there
are no disks shared between systems. This arrangement provides high performance as well as high
availability. In this case, some type of management or scheduling software is needed to assign
incoming client requests to servers so that the load is balanced and high utilization is achieved. It is
desirable to have a failover capability, which means that if a computer fails while executing an
application, another computer in the cluster can pick up and complete the application. For this to
happen, data must constantly be copied among systems so that each system has access to the
current data of the other systems. The overhead of this data exchange ensures high availability at
the cost of a performance penalty.
To reduce the communications overhead, most clusters now consist of servers connected to
common disks. One variation on this approach is simply called shared nothing. In this approach,
the common disks are partitioned into volumes, and each volume is owned by a single computer. If
that computer fails, the cluster must be reconfigured so that some other computer has ownership of
the volumes of the failed computer.
It is also possible to have multiple computers share the same disks at the same time (called the
shared disk approach), so that each computer has access to all of the volumes on all of the disks.
This approach requires the use of some type of locking facility to ensure that data can only be
accessed by one computer at a time.
