
MCA Amity First Semester

Digital Electronics & Computer Organisation


Section A
Q2. What is virtual memory? How is address mapping done
in cache memory? Elaborate your answer with examples.
ans-
VIRTUAL MEMORY
Virtual (or logical) memory is a concept that, when implemented
by a computer and its operating system, allows programmers to
use a very large range of memory or storage addresses for stored
data. The computing system maps the programmer's virtual
addresses to real hardware storage addresses, so the programmer
is usually freed from having to be concerned about the availability
of data storage. In addition to managing this mapping of virtual
addresses to real addresses, a computer implementing virtual
memory also manages swapping between active storage (RAM)
and the hard disk or other high-volume storage devices. Data is
read in units called "pages", with sizes ranging from about a
kilobyte (1,024 bytes) up to several megabytes. This reduces the
number of physical storage accesses required and speeds up
overall system performance.
ADDRESS MAPPING IN CACHE MEMORY
The correspondence between main memory blocks and cache
blocks is specified by a mapping function. There are three
standard mapping functions, namely:
1. Direct mapping
2. Associative mapping
3. Block set associative mapping
1. Direct mapping technique
This is the simplest mapping technique. Here, for a cache of 128
blocks, block K of the main memory maps onto block K modulo
128 of the cache. Since more than one main memory block is
mapped onto a given cache block position, contention may arise
for that position even when the cache is not full. This is resolved
by allowing the new block to overwrite the currently resident
block. A main memory address can be divided into three fields,
TAG, BLOCK and WORD, as shown in the figure.
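As an illustrative sketch, the field extraction can be written out in Python. The parameters (16-bit address, 128 cache blocks, 16 words per block) are assumed here for the sake of the example, not taken from the figure:

```python
# Assumed parameters: 16-bit address, 128 cache blocks, 16 words/block.
WORD_BITS = 4    # 16 words per block
BLOCK_BITS = 7   # 128 cache blocks
TAG_BITS = 5     # 16 - 7 - 4

def split_address(addr):
    """Split a main memory address into (TAG, BLOCK, WORD) fields."""
    word = addr & ((1 << WORD_BITS) - 1)
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

# Main memory block K maps onto cache block K modulo 128:
k = 300
print(k % 128)                            # cache block 44
print(split_address(0b1011001011001010))  # (22, 44, 10)
```

The BLOCK field directly selects the cache block, which is why no search is needed but contention can occur.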

2. Associative mapping technique


This is a much more flexible mapping technique: any main
memory block can be loaded into any cache block position.
Associative mapping is illustrated in the figure below. In this
case 12 tag bits are required to identify a main memory block
when it is resident in the cache, and the tag of an address must
be compared against the tags of all cache blocks.

3. Block set associative mapping


This is a combination of the two techniques discussed above. Here
the blocks of the cache are grouped into sets, and the mapping
allows a block of main memory to reside in any block of a
particular set. Set-associative mapping is illustrated in the
figure.
Consider, for example, a cache with two blocks per set. The 6-bit
set field of the address determines which set of the cache might
contain the addressed block. The tag field of the address must be
associatively compared to the tags of the two blocks of the set to
check if the desired block is present.
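A minimal sketch of this two-way lookup, with the 64 sets implied by the 6-bit set field; the block size and data values are illustrative assumptions:

```python
# 2-way set-associative lookup sketch: 64 sets (6-bit set field),
# 16 words per block (4-bit word field) assumed for illustration.
SET_BITS, WORD_BITS = 6, 4

def lookup(cache, addr):
    """Return the cached data on a hit, or None on a miss."""
    set_index = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    # Compare the tag against both blocks of the selected set.
    for stored_tag, data in cache.get(set_index, []):
        if stored_tag == tag:
            return data          # hit
    return None                  # miss

# Set 5 currently holds two blocks with tags 9 and 3:
cache = {5: [(9, "block A"), (3, "block B")]}
addr = (9 << 10) | (5 << 4) | 2   # tag=9, set=5, word=2
print(lookup(cache, addr))        # block A
```

Only the two tags of the addressed set are compared, not the whole cache, which is the compromise between direct and fully associative mapping.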
Q5- Write short notes on any three of the following.
a. Microprocessor
b. Modes of data transfer.
c. I/O processor
d. Associative memory
e. Software and Hardware interrupt
ans-
a) MODES OF DATA TRANSFER.
Three techniques are possible for I/O operations. They
are:

Programmed I/O

Interrupt driven

Direct Memory Access (DMA)

Programmed I/O - data are exchanged between the CPU and the
I/O module. The CPU executes a program that gives it direct
control of the I/O operation, including sensing device status,
sending a read or write command, and transferring data. When
the CPU issues a command to the I/O module, it must wait until
the I/O operation is complete. If the CPU is faster than the I/O
module, there is wastage of CPU time.
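The busy-waiting this implies can be sketched with a simulated device; the Device class below is a stand-in for a real status/data register pair, not an actual I/O interface:

```python
# Programmed I/O sketch: the CPU polls a (simulated) status flag
# before every byte transfer, wasting cycles while the device is slow.
class Device:
    def __init__(self, data):
        self._data = list(data)
        self.polls = 0
    def ready(self):
        self.polls += 1
        return self.polls % 3 == 0   # pretend the device needs 3 polls
    def read(self):
        return self._data.pop(0)

def programmed_read(dev, n):
    out = []
    for _ in range(n):
        while not dev.ready():       # CPU time wasted polling here
            pass
        out.append(dev.read())
    return out

print(programmed_read(Device(b"hi"), 2))   # [104, 105]
```

Every iteration of the inner `while` loop is CPU time the processor could have spent on other instructions, which is exactly what interrupt-driven I/O avoids.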

Interrupt-driven I/O - the CPU issues a command to the I/O module
and does not wait until the I/O operation is complete, but instead
continues to execute other instructions. When the I/O module has
completed its work, it interrupts the CPU.

DMA - when a large quantity of data is to be transferred, a DMA
module can be utilised. In both programmed and interrupt-driven
I/O, the CPU is busy executing input/output instructions. DMA,
however, permits information to be transferred into and out of
memory rapidly with no intervention of the CPU.
b) I/O PROCESSOR
A system may incorporate one or more external processors and
assign them the duty of direct communication with all I/O devices.
An Input-Output Processor (IOP) is a processor with the ability of
direct memory access which communicates with I/O devices. In
this configuration, the computer system may be segregated into a
memory unit and a number of processors comprising one or more
IOPs and the CPU. Each IOP takes charge of input and output
tasks, thereby relieving the CPU of the work involved in I/O
transfers. The IOP is similar to a CPU except that it is designed to
handle the details of I/O processing. Unlike a DMA controller,
which must be set up entirely by the CPU, the IOP can fetch and
execute its own instructions. These IOP instructions are
specifically designed to ease I/O transfers. Additionally, the IOP
can perform other processing tasks, such as arithmetic and logic
operations, code translation and branching.

c) HARDWARE AND SOFTWARE INTERRUPTS

Hardware interrupts are issued by hardware devices such as
disks, network cards, keyboards, clocks, etc. Each device or set of
devices has its own IRQ (Interrupt ReQuest) line. Based on the
IRQ, the CPU dispatches the request to the appropriate hardware
driver. (Hardware drivers are usually subroutines within the
kernel rather than separate processes.) The driver which handles
the interrupt is run on the CPU, which is interrupted from what it
was doing to handle the interrupt, so nothing additional is
required to get the CPU's attention.

Software interrupts are requests for I/O (input or output). They
call kernel routines which schedule the I/O to occur. For some
devices the I/O is done immediately, but disk I/O is usually
queued and done at a later time. Depending on the I/O being
done, the process may be suspended until the I/O completes,
causing the kernel scheduler to select another process to run.
The software interrupt only talks to the kernel; it is the kernel's
responsibility to schedule any other processes which need to run.
A software interrupt does not directly interrupt the CPU: only
currently running code can generate one, and it is a request for
the kernel to do something (usually I/O) for the running process.
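As a concrete sketch: in Python, `os.write` ends up issuing the `write()` system call, which on most platforms enters the kernel via a software interrupt or trap instruction. The example only shows the user-level request; the trap itself happens inside the C library and kernel:

```python
import os

# os.write asks the kernel to perform I/O on file descriptor 1 (stdout).
# The kernel may complete the write immediately or queue it; the return
# value is the number of bytes the kernel accepted.
n = os.write(1, b"hello\n")
print(n)   # 6
```

From the program's point of view this is just a function call; the "interrupt" is the mechanism by which control transfers into the kernel to service the request.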

Q8.Compare RISC & CISC architecture.


ans-
RISC vs. CISC
Given the differences between RISC and CISC, which is better?
There is no single right answer: each has some features that are
better than the other's. RISC processors have fewer and simpler
instructions than CISC processors.
As a result, their control units are less complex and easier to
design. This allows them to run at higher clock frequencies than
CISC processors and reduces the amount of space needed on
the processor chip, so designers can use the extra space for
additional registers and other components.
Simpler control units can also lead to reduced development cost.
With a simpler design, it is easier to incorporate parallelism into
the control unit of a RISC CPU.
With fewer instructions in their instruction sets, the compilers for
RISC processors are less complex than those for CISC
processors.
As a general guideline, CISC processors were originally designed
for assembly language programming, whereas RISC processors
are geared toward compiled, high-level-language programs.
However, the same compiled high-level program will require more
instructions for a RISC CPU than for a CISC CPU.
The CISC methodology offers some advantages as well. Although
CISC processors are more complex, this complexity does not
necessarily increase development costs. Current CISC
processors are often the most recent addition to an entire family
of processors, such as the Intel x86 family. As such, they may
incorporate portions of the designs of their predecessors.
CISC processors also provide backward compatibility with other
processors in their families. If they are pin-compatible, it may be
possible simply to replace a previous-generation processor with
the newest model without changing the rest of the computer's
design. This same backward compatibility, whether pin-compatible
or not, allows the CISC CPU to run the same software as used by
the predecessors in its family. For instance, a program that runs
successfully on a Pentium III should also run on later members of
the family. This can translate into significant savings for the user
and can determine the success or failure of a microprocessor.
CISC designs now generally incorporate instruction pipelines,
which have improved performance dramatically in RISC
processors. As technology allows more devices to be incorporated
into a single microprocessor chip, CISC designs are adding more
registers, again to achieve the performance improvements these
features give RISC processors. Newer processor families, such as
the PowerPC microprocessors, draw some features from the RISC
methodology and others from CISC, making them a hybrid of
RISC and CISC.

Case Study
Q1- Give the organization of Micro programmed control unit
and explain its operation. Explain the role of address
sequencer in detail. If you convert your control unit to
hardwired unit, what are the changes you will observe?
ans
MICRO-PROGRAMMED CONTROL UNIT
A micro-programmed control unit is implemented using a
programming approach. Microprogramming is the technique of
generating control signals using programs. These programs are
called micro-programs, and a micro-program consists of micro-
instructions. The operations performed on the data stored inside
the registers are called micro-operations. A sequence of micro-
operations is carried out by executing a micro-program
consisting of micro-instructions. The micro-program is stored in
the control memory (a read-only memory, ROM) of the control
unit. Execution of a micro-instruction is responsible for generating
a set of control signals. A micro-instruction consists of:
One or more micro-operations to be executed.
The address of the next micro-instruction to be executed.
ARCHITECTURE OF MICRO-PROGRAMMED CONTROL UNIT
OPERATION OF MICRO PROGRAMMED CONTROL UNIT
The two basic tasks performed by a micro-programmed control
unit are as follows:
1) Microinstruction sequencing: get the next microinstruction from
the control memory.
2) Microinstruction execution: generate the control signals needed
to execute the microinstruction.
The operations are performed in the following order:
1.To execute an instruction, the sequencing logic unit issues a
READ command to the control memory.
2.The word whose address is specified in the control address
register is read into the control buffer register.
3.The content of the control buffer register generates control
signals and next-address information for the sequencing logic
unit.
4.The sequencing logic unit loads a new address into the control
address register based on the next-address information from the
control buffer register and the ALU flags.
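The four steps above can be sketched as a toy Python loop; the control-memory contents and signal names below are illustrative, not from a real machine:

```python
# Toy model of the sequencing loop: each control-memory word holds
# (control_signals, next_address), as in a single-format microinstruction.
control_memory = {
    0: ("FETCH",  1),
    1: ("DECODE", 2),
    2: ("EXEC",   0),   # wrap back to the fetch microroutine
}

def run(steps):
    car = 0                          # control address register (CAR)
    trace = []
    for _ in range(steps):
        cbr = control_memory[car]    # steps 1-2: READ word at CAR into CBR
        trace.append(cbr[0])         # step 3: CBR content drives the signals
        car = cbr[1]                 # step 4: load next address into CAR
    return trace

print(run(4))   # ['FETCH', 'DECODE', 'EXEC', 'FETCH']
```

A real sequencing logic unit would also consult the ALU flags when choosing the next address, which the list below describes.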
Role of Address Sequencer
The next microinstruction address is determined by the address
sequencer (micro-sequencer) in one of five ways:

Next sequential address: in the absence of other instructions, the
control unit's control address register is incremented by 1.

Opcode mapping: at the beginning of each instruction cycle, the
next microinstruction address is determined by the opcode.

Subroutine facility: the next address is taken from the micro-
subroutine register. Just as high-level and assembly language
programs may have subroutines which can be invoked from
different locations within the program, microcode may also use
micro-subroutines.

Interrupt testing: certain microinstructions specify a test for
interrupts. If an interrupt has occurred, this determines the next
microinstruction address.

Branch: conditional and unconditional branch microinstructions
are used.
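The five sources can be summarised as a selector function; the mode names and arguments here are hypothetical labels for illustration, not signals from an actual sequencer:

```python
# Hypothetical next-address selector covering the five cases above.
def next_address(car, mode, *, opcode_map=None, opcode=None,
                 subroutine_reg=None, interrupt=False,
                 interrupt_vector=None, branch_target=None):
    if mode == "sequential":
        return car + 1                    # CAR incremented by 1
    if mode == "opcode":
        return opcode_map[opcode]         # opcode -> start of microroutine
    if mode == "subroutine":
        return subroutine_reg             # address saved in the register
    if mode == "interrupt":
        return interrupt_vector if interrupt else car + 1
    if mode == "branch":
        return branch_target              # conditional test done by caller

print(next_address(10, "sequential"))                                  # 11
print(next_address(10, "opcode", opcode_map={0x3A: 40}, opcode=0x3A))  # 40
```

In hardware this selection is a multiplexer in front of the control address register, steered by bits of the current microinstruction.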
The changes we will observe if we move from a micro-programmed
control unit to a hardwired control unit are as follows:

Control signals are generated by fixed combinational logic instead
of being fetched from a control memory, so they are produced
faster.

The control memory, control address register and micro-sequencer
are no longer needed.

The design becomes harder to modify: changing or extending the
instruction set requires redesigning the logic rather than rewriting
microcode.

The control unit itself becomes more complex to design for large
instruction sets.

Q2- Explain in detail the block diagram of the timing and control
unit.
ans- The control unit (CU) is a component of a computer's central
processing unit (CPU) that directs the operation of the processor.
It tells the computer's memory, arithmetic/logic unit and input and
output devices how to respond to a program's instructions. It
directs the operation of the other units by providing timing and
control signals, and most computer resources are managed by the
CU. It directs the flow of data between the CPU and the other
devices, and it generates the timing and control signals which are
necessary for the execution of instructions. It provides the status,
control and timing signals required for the operation of memory
and I/O devices, and it controls the entire operation of the
microprocessor and the peripherals connected to it. Thus the
control unit of the CPU acts as the brain of the computer.

The Control Unit has three main jobs:

1. It controls and monitors the hardware attached to the system
to make sure that the commands given to it by the
application software are carried out. For example, if you send
something to print, the control unit checks that the
instructions are sent to the printer correctly.
2. It controls the input and output of data so that the signals go
to the right place at the right time.
3. It controls the flow of data within the CPU, i.e. the
fetch-execute cycle.
FETCH CYCLE - The instruction whose address is held in the
PC (program counter) is fetched from memory and loaded
into the IR (instruction register). The PC is then incremented to
point to the next instruction, and the processor switches over to
the execution cycle.
INSTRUCTION EXECUTION CYCLE

The instruction is loaded into the Instruction Register (IR).

The processor interprets the instruction and performs the
required actions.

The CPU executes a sequence of instructions; the execution of an
instruction is organized as an instruction cycle, performed as a
succession of several steps.

Each step is executed as a set of several micro-operations.

The task performed by any micro-operation falls into one of
the following categories:

Transfer data from one register to another.

Transfer data from a register to an external interface
(system bus).

Transfer data from an external interface to a register.

Perform an arithmetic or logic operation, using registers for
input and output.
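A minimal fetch-execute sketch tying these steps together; the toy instruction set (LOAD/ADD/HALT on a single accumulator) is assumed purely for illustration:

```python
# Toy fetch-execute cycle: memory holds (opcode, operand) pairs and
# the register set is a plain dict with PC, IR and an accumulator.
memory = [("LOAD", 7), ("ADD", 3), ("HALT", 0)]
regs = {"PC": 0, "IR": None, "ACC": 0}

def step():
    regs["IR"] = memory[regs["PC"]]   # fetch:  IR <- M[PC]
    regs["PC"] += 1                   #         PC <- PC + 1
    op, arg = regs["IR"]              # decode
    if op == "LOAD":                  # execute
        regs["ACC"] = arg
    elif op == "ADD":
        regs["ACC"] += arg
    return op

while step() != "HALT":
    pass
print(regs["ACC"])   # 10
```

Each assignment in `step` corresponds to one of the register-transfer micro-operations listed above.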

Timing of the control unit:

The MFC (Memory Function Completed) signal is generated by the
main memory, whose operation is independent of the CPU clock.
Hence MFC is an asynchronous signal that may arrive at any time
relative to the CPU clock. It can be synchronized with the CPU
clock with the help of a D flip-flop. When the WMFC (wait for
MFC) signal is high, the RUN signal is low. The RUN signal is
gated with the master clock pulse (MCLK) through an AND gate:
when RUN is low, the CLK signal remains low and the control step
counter does not progress.
When the MFC signal is received, the RUN signal becomes high,
the CLK signal follows the MCLK signal again, and the control
step counter progresses. Therefore, in the next control step, the
WMFC signal goes low and the control unit operates normally until
the next memory access signal is generated.
Fig: Timing of control signals during instruction fetch
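The gating described above can be captured as a small truth-table sketch (the function and argument names are illustrative, standing in for the actual gate network):

```python
# CLK = MCLK AND RUN, where RUN drops only while the control unit
# is waiting (WMFC asserted) and memory has not yet answered (no MFC).
def clk(mclk, wmfc, mfc):
    run = (not wmfc) or mfc
    return mclk and run

print(clk(True, wmfc=False, mfc=False))   # True:  normal stepping
print(clk(True, wmfc=True,  mfc=False))   # False: step counter frozen
print(clk(True, wmfc=True,  mfc=True))    # True:  MFC arrived, resume
```

The middle case is the stretched clock period during which the control step counter is held while the CPU waits for main memory.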
