Homework 4

CAP 208

INTRODUCTION TO COMPUTER ORGANISATION AND ARCHITECTURE

SUBMITTED TO-

Anjlee Mam

SUBMITTED BY-

SACHIN RAJ

B34

D3901
Part-A

Q1. Explain interrupt-initiated I/O mode of data transfer between I/O and processor.

Ans-

Interrupt-initiated I/O mode of data transfer between I/O and processor

When the I/O device is ready to send data, it interrupts the CPU, so no CPU time is wasted on polling. In other words, once the I/O device becomes ready, it sends an interrupt request to the CPU.

 Polling takes valuable CPU time.
 Open communication only when some data has to be passed -> Interrupt.
 The I/O interface, instead of the CPU, monitors the I/O device.
 When the interface determines that the I/O device is ready for data transfer, it generates an interrupt request to the CPU.
 Upon detecting an interrupt, the CPU momentarily stops the task it is doing, branches to the service routine to process the data transfer, and then returns to the task it was performing (a minimal code sketch of this scheme follows the list).
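
A minimal C sketch of interrupt-initiated input, assuming a made-up device whose data register is mapped at 0x40000000 and a platform that lets us install the routine below as the device's interrupt handler (the device address, buffer size and function name are illustrative assumptions, not a real API):

    #include <stdint.h>

    #define DEV_DATA_REG ((volatile uint8_t *)0x40000000u)  /* hypothetical device data register */

    static volatile uint8_t rx_buffer[64];
    static volatile uint8_t rx_head;

    /* Interrupt service routine: entered only when the interface has
     * signalled that a byte is ready, so the CPU never busy-waits. */
    void device_rx_isr(void)
    {
        rx_buffer[rx_head] = *DEV_DATA_REG;   /* service the data transfer   */
        rx_head = (rx_head + 1) % 64;         /* advance the circular buffer */
        /* returning from the ISR resumes the interrupted task */
    }

The CPU runs its normal program; only when the interrupt request arrives does it branch to this routine, copy the byte, and return to where it left off.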


Q2. Write the difference between isolated I/O and memory
mapped I/O.

Ans-

Isolated I/O

 Separate I/O read/write control lines in addition to the memory read/write control lines
 Separate (isolated) memory and I/O address spaces
 Distinct input and output instructions

Memory-mapped I/O

 A single set of read/write control lines (no distinction between memory and I/O transfer)
 Memory and I/O addresses share a common address space, which reduces the memory address range available
 No specific input or output instructions
 The same memory-reference instructions can be used for I/O transfers
 Considerable flexibility in handling I/O operations

Difference

In memory-mapped I/O, a chunk of the CPU's address space is reserved for accessing I/O devices. In isolated (I/O-mapped) I/O, I/O devices are handled distinctly by the CPU and hence occupy a separate set of addresses, predetermined by the CPU for I/O.
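
The contrast is visible in code. The C sketch below is only an illustration (the device address and the UART example are assumptions, not from the original): a memory-mapped write is an ordinary store through a pointer, while an isolated-I/O write uses a dedicated instruction (x86 OUT) into the separate I/O address space:

    #include <stdint.h>

    /* Memory-mapped I/O: the device register occupies part of the normal
     * address space, so a plain store reaches it (0x40000000 is made up). */
    #define UART_DATA_REG ((volatile uint8_t *)0x40000000u)

    static void mmio_send(uint8_t byte)
    {
        *UART_DATA_REG = byte;               /* ordinary memory write = I/O write */
    }

    /* Isolated (port-mapped) I/O: a separate I/O address space reached only
     * through dedicated instructions; here the x86 OUT instruction. */
    static void pio_send(uint8_t byte, uint16_t port)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(byte), "Nd"(port));
    }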
Q3. Explain by making proper diagrams how data is transferred via
DMA?

Ans-

When the peripheral device sends a DMA request, the DMA controller activates the BR (bus request) line, informing the CPU to relinquish the buses. The CPU responds with its BG (bus grant) line, informing the DMA controller that its buses are disabled. The DMA controller then puts the current value of its address register on the address bus and initiates the RD or WR signal. The RD and WR lines in the DMA controller are bidirectional: the direction of transfer depends on the status of the BG line. When BG = 0, RD and WR are input lines allowing the CPU to communicate with the internal DMA registers. When BG = 1, RD and WR are output lines from the DMA controller to the random-access memory, specifying the read or write operation for the data.

DMA I/O Operation

Input

[1] Input Device <- R (read control signal)
[2] Buffer (DMA controller) <- input byte; the bytes are assembled into a word until the word is full
[3] M <- memory address, W (write control signal)
[4] Address Reg <- Address Reg + 1; WC (word counter) <- WC - 1
[5] If WC = 0, then interrupt to acknowledge done; else go to [1]

Output

[1] M <- M Address, R
    M Address Reg <- M Address Reg + 1; WC <- WC - 1
[2] Disassemble the word
[3] Buffer <- one byte; Output Device <- W, for all disassembled bytes
[4] If WC = 0, then interrupt to acknowledge done; else go to [1]
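
The input sequence can be mimicked in software. The toy C model below (the array size, the 16-bit word width and the helper names are assumptions for the sketch, not real hardware behaviour) packs device bytes into words, stores them at an incrementing address, and counts the block down with the word counter:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t memory[64];             /* pretend RAM for the sketch     */

    static uint8_t read_device_byte(void)   /* stand-in for the input device  */
    {
        static uint8_t next = 0;
        return next++;
    }

    static void dma_input(uint16_t addr_reg, uint16_t wc)
    {
        while (wc != 0) {
            uint16_t word = 0;
            for (int i = 0; i < 2; i++)                   /* [1]-[2] assemble bytes into a word */
                word |= (uint16_t)read_device_byte() << (8 * i);
            memory[addr_reg] = word;                      /* [3] write the word to memory       */
            addr_reg = addr_reg + 1;                      /* [4] Address Reg <- Address Reg + 1 */
            wc = wc - 1;                                  /* [4] WC <- WC - 1                   */
        }
        printf("block done: interrupt the CPU\n");        /* [5] WC = 0 -> interrupt            */
    }

    int main(void)
    {
        dma_input(0, 8);    /* transfer an 8-word block starting at address 0 */
        return 0;
    }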
Part-B

Q4. Why does DMA have priority over CPU when both request a
memory transfer?

Ans-

The CPU can wait to fetch instructions and data from memory without any harm other than a loss of time. DMA, by contrast, usually transfers data from a device that cannot be stopped: information continues to flow, so data may be lost if the transfer is delayed.

During DMA transfer, the CPU is idle and has no control of the
memory buses. A DMA controller takes over the buses to manage
the transfer directly between the I/O device and memory.

Another reason:

DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each piece of data from the source to the destination, making itself unavailable for other tasks. This situation is aggravated because access to I/O devices over a peripheral bus is generally slower than access to normal system RAM. With DMA, the CPU is freed from this overhead and can do useful work during the data transfer (though the CPU's bus will be partly blocked by the DMA). In the same way, a DMA engine in an embedded processor allows its processing element to issue a data transfer and carry on with its own tasks while the transfer is being performed.
Q5. Explain in detail the memory organization, the memory
hierarchy and RAM ROM chips.

Ans-

Memory Organisation

A computer's memory is a complicated system. The major components are storage memory, system memory, virtual memory and cache memory. There are several hardware components that are part of the memory structure.

Hard Drive

Storage memory is located in the computer's main hard disk drive. The hard drive also contains a segment reserved for virtual memory. When the system's RAM is overused, the computer uses a portion of the hard drive to simulate RAM.

RAM

Random access memory is the main system memory. RAM consists of cards or modules installed directly into the motherboard.

CPU

The central processing unit, or processor, is the computer's main organizing mechanism. It is sometimes called the system's brain. The CPU directs the movement of data in memory, and it also contains cache memory to speed up processing.

MMU

The memory management unit is the component of the CPU responsible for moving data from the page files to RAM.
Memory hierarchy

The memory hierarchy organises storage by speed, cost and capacity: CPU registers and cache memory at the top (fastest and smallest), main memory (RAM) in the middle, and auxiliary storage such as magnetic disks at the bottom (slowest and largest). Each level acts as a faster buffer for the larger, slower level beneath it.

RAM Chip-

A RAM chip provides read/write storage and is volatile, so its contents are lost when power is removed. A typical chip has address inputs, bidirectional data lines, a chip-select input and a read/write control input.

ROM Chip-

A ROM chip is non-volatile, read-only storage: its contents are fixed when the chip is programmed and survive power-off, which is why ROM holds boot programs and fixed tables. A typical chip has address inputs, data outputs and chip-select inputs, but no write control.
Q6. Discuss Associative memory, Cache memory and Virtual memory.
Ans-

Associative memory

 Accessed by the content of the data rather than by an address
 Also called Content Addressable Memory (CAM)

Hardware Organisation

 Compare each word in the CAM in parallel with the content of A (the argument register)
 If CAM word[i] = A, then M(i) = 1
 Read sequentially, accessing the CAM words for which M(i) = 1
 K (the key register) provides a mask for choosing a particular field or key in the argument A: only those bits of the argument that have 1's in the corresponding positions of K are compared (a software sketch of this search follows the list)
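
A small C sketch of this search (the word width, array size and function name are invented for illustration; real CAM hardware compares all words simultaneously, while the loop here only models the result):

    #include <stdint.h>

    #define CAM_WORDS 8

    /* Compare the argument register A with every stored word, but only in
     * the bit positions where the key register K holds 1s; set the match
     * bit M(i) for every word that agrees. */
    static uint8_t cam_search(const uint16_t cam[CAM_WORDS], uint16_t A, uint16_t K)
    {
        uint8_t M = 0;
        for (int i = 0; i < CAM_WORDS; i++) {
            if ((cam[i] & K) == (A & K))        /* masked comparison    */
                M |= (uint8_t)(1u << i);        /* M(i) = 1 on a match  */
        }
        return M;                               /* words with M(i) = 1 are then read sequentially */
    }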

Cache memory
Locality of Reference

 The references to memory at any given time interval tend to be confined within a few localized areas
 Each such area contains a set of information, and the membership changes gradually as time goes by

Temporal Locality

Information that will be used in the near future is likely to be in use already (e.g. reuse of information in loops).

Spatial Locality

If a word is accessed, adjacent (nearby) words are likely to be accessed soon (e.g. related data items such as arrays are usually stored together, and instructions are executed sequentially).

Cache

The property of locality of reference is what makes cache memory systems work. A cache is a fast, small-capacity memory that should hold the information most likely to be accessed.
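
The C example below illustrates why this matters (the array size and function names are only for the example): the row-major loop walks consecutive addresses, so each cache line brought in from memory is fully used, while the column-major loop strides across rows and keeps missing in the cache even though it performs the same additions.

    #include <stddef.h>

    #define N 1024

    /* Good spatial locality: C stores a[i][j] row by row, so this loop
     * touches consecutive addresses. */
    static long sum_row_major(int a[N][N])
    {
        long s = 0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];            /* consecutive addresses, cache friendly */
        return s;
    }

    /* Poor spatial locality: each access is N*sizeof(int) bytes away from
     * the previous one, so most accesses miss in the cache. */
    static long sum_col_major(int a[N][N])
    {
        long s = 0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];            /* large stride between accesses */
        return s;
    }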


Virtual memory

An imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.

The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory, therefore, would not be able to fit in main memory all at once. Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.
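
The conversion of a virtual address into a real address can be sketched in a few lines of C. The single-level page table, the 4 KiB page size and the valid bit below are assumptions made for the illustration, not a description of any particular machine:

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                    /* 4096-byte pages                 */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  256

    typedef struct {
        bool     valid;                      /* is the page in main memory?     */
        uint32_t frame;                      /* physical frame number if valid  */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Returns true and fills *phys when the page is resident; false models a
     * page fault, after which the OS would load the page from disk and retry. */
    static bool translate(uint32_t virt, uint32_t *phys)
    {
        uint32_t vpn    = virt >> PAGE_SHIFT;        /* virtual page number */
        uint32_t offset = virt & (PAGE_SIZE - 1);    /* offset within page  */

        if (vpn >= NUM_PAGES || !page_table[vpn].valid)
            return false;                            /* page fault          */

        *phys = (page_table[vpn].frame << PAGE_SHIFT) | offset;
        return true;
    }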
