
Affiliated to Sikkim Manipal University

INFOMAX COLLEGE OF INFORMATION TECHNOLOGY &
MANAGEMENT
Fulbari Marga, Pokhara-11



Assignment on
COMPUTER ORGANIZATION
Submitted by:
Annanda Shrestha (1308016924)

Submitted to:
Department of IT,
Infomax College, Pokhara
6/15/2014

Q.N.1. a.) Find the 10's and 9's complement of 348.

Ans: In computers, the complements of numbers are used to simplify subtraction and other logical operations. There are two types of complement for each base-r system: the r's complement and the (r-1)'s complement.
The 9's complement of the given number is obtained by subtracting each digit from 9:
  999
- 348
-----
  651
So the 9's complement of 348 is 999 - 348 = 651.
The 10's complement is the 9's complement plus one: 651 + 1 = 652.
Therefore the 10's complement of 348 is 652 and the 9's complement is 651.

b.) Find the 10's and 9's complement of 134795.

The 9's complement of the given number is obtained by subtracting each digit from 9:
  999999
- 134795
--------
  865204
So the 9's complement of 134795 is 999999 - 134795 = 865204.
The 10's complement is the 9's complement plus one: 865204 + 1 = 865205.
Therefore the 10's complement of 134795 is 865205 and the 9's complement is 865204.
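The two computations above can be sketched in a few lines (the helper names are ours):

```python
# Sketch of (r-1)'s and r's complements for r = 10.
# 'digits' fixes the width, since complements depend on the number of digits.

def nines_complement(n, digits):
    """9's complement: subtract the number from a string of 9s."""
    return (10 ** digits - 1) - n

def tens_complement(n, digits):
    """10's complement: the 9's complement plus one."""
    return nines_complement(n, digits) + 1

print(nines_complement(348, 3))      # 651
print(tens_complement(348, 3))       # 652
print(nines_complement(134795, 6))   # 865204
print(tens_complement(134795, 6))    # 865205
```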

Q.N.2. Draw and explain the Von Neumann architecture. What are its drawbacks?
Ans: The Von Neumann architecture, also known as the Princeton architecture, describes a design for an electronic digital computer with these subdivisions: a processing unit consisting of an arithmetic and logic unit and processor registers; a control unit containing an instruction register and a program counter; a memory to store both data and instructions; external mass storage; and input and output mechanisms. The term Von Neumann Architecture is also known as the Von Neumann Model. It derives from a 1945 description of a computer architecture by the mathematician and physicist John Von Neumann and others. A von Neumann machine is a stored-program computer in which a single memory, reached over a single set of address and data buses, serves both for reading and writing data and for fetching instructions.

[Figure: Von Neumann architecture, showing Main Memory, I/O Equipment, the Arithmetic and Logic Unit, and the Program Control Unit.]

A stored-program digital computer keeps its programmed instructions, as well as its data, in read-write random-access memory. Stored-program computers were an advancement over program-controlled machines, which used patch leads to route data and control signals between the various functional units. The vast majority of modern computers use the same memory for both data and program instructions, as von Neumann proposed.
The von Neumann design is based on three key ideas:
1. Data and instructions are stored in a single memory.
2. The contents of this memory are addressable by location, without regard to what is stored there.
3. Execution proceeds sequentially from one instruction to the next unless the order is explicitly modified.
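These three principles can be illustrated with a toy stored-program machine (the three-instruction set and the memory contents here are invented for illustration):

```python
# Toy von Neumann machine: instructions and data share one memory,
# and a program counter steps through it sequentially.
# The 3-instruction ISA is invented for illustration only.

memory = [
    ("LOAD", 4),    # acc <- memory[4]
    ("ADD", 5),     # acc <- acc + memory[5]
    ("HALT", 0),
    0,              # unused word
    2,              # data at address 4
    6,              # data at address 5
]

pc, acc = 0, 0
while True:
    opcode, operand = memory[pc]   # fetch from the same memory as the data
    pc += 1                        # sequential execution unless modified
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "HALT":
        break

print(acc)  # 8
```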
The drawbacks of the Von Neumann architecture:
The main limitation of the von Neumann architecture is known as the "von Neumann bottleneck", a limitation on throughput caused by the standard personal computer architecture. The term is named for John von Neumann, who developed the theory behind the architecture of modern computers. Earlier computers were fed programs and data for processing while they were running. Von Neumann came up with the idea behind the stored-program computer, our standard model, which is also known as the
von Neumann architecture.

[Figure: I/O model, showing the CPU registers PC, MAR, MBR, I/O AR, I/O BR, IR and the ALU, with memory holding a sequence of instructions followed by data.
PC = Program counter; IR = Instruction register; MAR = Memory address register; MBR = Memory buffer register; I/O AR = I/O address register; I/O BR = I/O buffer register.]

In the von Neumann architecture, programs and data are held in memory; the processor and memory are separate, and data moves between the two. In that
configuration, latency is unavoidable. Furthermore, in recent years, processor speeds have
increased significantly. Memory improvements, on the other hand, have mostly been in density (the ability to store more data in less space) rather than in transfer rates. As speeds
have increased, the processor has spent an increasing amount of time idle, waiting for data to
be fetched from memory. No matter how fast a given processor can work, in effect it is
limited to the rate of transfer allowed by the bottleneck. Often, a faster processor just means
that it will spend more time idle. The von Neumann bottleneck has often been considered a
problem that can only be overcome through significant changes to computer or processor
architectures. Approaches to overcoming the von Neumann bottleneck include:
Caching -- the storage of frequently used data in a special area (usually RAM), so that it is
more readily accessible than if it were stored in main memory.
Prefetching -- moving some data into cache before it is requested to speed access in the
event of a request.
Multithreading -- managing multiple requests simultaneously in separate threads.
New types of RAM (random access memory) -- for example, DDR SDRAM, which
activates output on both the rising and falling edge of the system clock rather than on just the
rising edge, to potentially double output.
RAMBUS -- a memory subsystem consisting of the RAM, the RAM controller, and the bus
(path) connecting RAM to the microprocessor and devices in the computer that use it.
Processing in memory (PIM), which integrates a processor and memory in a single
microchip.
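As a small illustration of the first of these approaches, a least-recently-used cache can be sketched as follows (sizes and contents are invented for illustration):

```python
# Toy illustration of caching: frequently used addresses are served from
# a small fast store instead of "main memory", cutting trips across the
# von Neumann bottleneck. Purely illustrative.
from collections import OrderedDict

main_memory = {addr: addr * 2 for addr in range(100)}  # fake contents
cache = OrderedDict()
CACHE_SIZE = 4
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1
        cache.move_to_end(addr)          # mark as most recently used
        return cache[addr]
    misses += 1
    value = main_memory[addr]            # slow path: go to main memory
    cache[addr] = value
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)        # evict the least recently used
    return value

for addr in [1, 2, 1, 2, 1, 2, 50]:      # repeated accesses hit the cache
    read(addr)
print(hits, misses)  # 4 3
```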

Q.N.3. Describe the types of buses and types of control lines.
Ans: In computer architecture, a bus is a communication system that transfers data between
components inside a computer or between computers. A system bus consists of a data bus, a
memory address bus and a control bus.
In computer architecture a bus consists of two or more wires. There is usually a bus that connects the CPU to memory and to disk and I/O devices. Real computer systems usually have several buses, even though the simple computer we have modeled has only one bus, where we consider the data bus, address bus, and control bus as parts of one large bus. They are briefly discussed below:
Data Bus:
Sometimes also called the memory bus, it handles the transfer of all data and instructions between the functional areas of the computer. It can only transmit in one direction at a time and provides a means for moving data between the different modules of a system. It is used to transfer data between memory and the I/O section during input/output operations. The data bus usually consists of 8, 16 or 32 separate lines, and the number of lines determines its width.
Address Bus:
It consists of all the signals necessary to define any of the possible memory address locations within the computer. An address is defined as a label, symbol or other set of characters used to designate a location or register where information is stored. An address must be transmitted to memory over the address bus before data or instructions can be written into or read from memory by the CPU or I/O sections. The width of the address bus is equal to the number of bits in the memory address register.
Cache Bus:
A cache bus (also called a backside bus) is a dedicated bus that the processor uses to communicate with cache memory, so that cache traffic does not compete with ordinary memory and I/O traffic.
Control Bus:
A control bus is a computer bus used by the CPU for communicating with other devices within the computer. While the address bus carries information about which device the CPU is communicating with, and the data bus carries the actual data being processed, the control bus carries commands from the CPU and returns status signals from the devices. The processor has to send READ and WRITE commands to memory, each of which requires a single wire, and a START command is necessary for the input/output unit. All these signals are carried by the control bus. The CPU uses it to direct and monitor the actions of the other functional areas of the computer, and to transmit the variety of individual signals (read, write, interrupt, acknowledge) necessary to control and coordinate the operations of the computer. The individual signals transmitted over the control bus and their functions are covered in the appropriate functional area descriptions.
A control line carries one of these individual signals. The control lines commonly used in a computer system are as follows:
Memory Write:
Causes data on the data bus to be written into the addressed location.
Memory Read:
Causes data from the addressed location to be placed on the data bus.
Input/Output Write:
Causes data on the data bus to be output to the addressed input/output port.
Input/Output Read:
Causes data from the addressed input/output port to be placed on the data bus.
Transfer ACK:
Indicates that data has been accepted from, or placed on, the bus.
Bus Request:
Indicates that a module needs to gain control of the bus.
Bus Grant:
Indicates that a requesting module has been granted control of the bus.
Interrupt Request:
Indicates that an interrupt is pending.
Interrupt ACK:
Acknowledges that the pending interrupt has been recognized.
Clock:
Used to synchronize operations.
Reset:
Initializes all modules.
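A toy model can make the roles of the address, data, and control lines concrete (the signal names here are illustrative, not from any real bus standard):

```python
# Toy model of a system bus: the address bus selects a location, the
# data bus carries the value, and the control line says which way the
# transfer goes. Signal names are illustrative.
MEM_READ, MEM_WRITE = "MEM_READ", "MEM_WRITE"
memory = [0] * 16

def bus_cycle(control, address, data=None):
    if control == MEM_WRITE:
        memory[address] = data       # data bus -> addressed location
        return None
    if control == MEM_READ:
        return memory[address]       # addressed location -> data bus
    raise ValueError("unknown control signal")

bus_cycle(MEM_WRITE, 7, 42)          # Memory Write line asserted
print(bus_cycle(MEM_READ, 7))        # Memory Read line asserted: prints 42
```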

Q.N.4. Explain Booth's multiplication algorithm, with its advantages and disadvantages.
Ans: A power algorithm for signed-Number multiplication is the Booth algorithm. The Booth used
desk calculator that were faster at shifting than adding and created the algorithm to increase
their speed. Booth's algorithm is of interest in the study of computer architecture. Booths
multiplication algorithm is a multiplication algorithm that multiplies two signed binary
numbers in twos complement notation. This was invented by Andrew Donald Booth in 1950
in London. We need twice as many bits in our product as we have in our original two
+
operands. The leftmost bit of our operands is a SIGN bit and cannot be used as part of the
value. Firstly we must decide which operand will be the multiplier and which will be the
multiplicand. Booth algorithm examines adjacent pairs of bits of the n-bit multiplier y in
signed two's complement representation, including an implicit bit below significant bit Y
-1
are
considered where these two bits are equal the product accumulator P is left unchanged where
Yi
-1
=1 the multiplicand times 21 is added to Pj and where yi=1 and yi-1=0 the multiplication
can times 21 is subtract from P. The final value of P is the signed of product. The algorithm is
often described as covering strings of 1's in the multiplier to high order +1 and lower order -1
at the end of the string runs through the MSB there is no high order +1 and net effect is no
high order. Booth's multiplication algorithm generates a 2n-bit product and treats both positive
and negative numbers uniformly.Considering a positive binary number containing a run of
ones eg. the 8-bit value: 00011110. Multiplying by such a value implies four consecutive
additions of shifted multiplicands.
          00010100      (multiplicand, 20)
        x 00011110      (multiplier, 30)
          00000000
         00010100       first
        00010100        second
       00010100         third
      00010100          fourth
     00000000
    00000000
   00000000
  0000001001011000      (product, 600)
Now 00011110 can be expressed as a difference of two powers of two:
00011110 = 00100000 - 00000010 (that is, 30 = 32 - 2).
This means that the same multiplication can be obtained using only two additions:
1. +2^5 x the multiplicand 00010100
2. -2^1 x the multiplicand 00010100
Suppose we want to multiply 2 by 6 in decimal, i.e. 0010 by 0110 in binary:
       0010       (multiplicand, 2)
     x 0110       (multiplier, 6)
       0000       partial product for bit 0
      0010        partial product for bit 1
     0010         partial product for bit 2
    0000          partial product for bit 3
    0001100       (product, 12)
Booth observed that an ALU that could add or subtract could get the same result in more than one way. For example, 6 = -2 + 8 in decimal, or 0110 = -0010 + 1000 in binary.
The advantages of Booth's multiplication algorithm are:
- It handles both positive and negative multipliers uniformly.
- It achieves efficiency in the number of additions required when the multiplier has large blocks of ones.
- It is efficient when there are long runs of ones in the multiplier.
The disadvantages of Booth's algorithm are:
- The average speed of the algorithm is about the same as that of the normal multiplication algorithm.
- In the worst case it operates at a slower speed than the normal multiplication algorithm.
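A minimal sketch of Booth's recoding rules, using the common three-register (A, S, P) formulation; the function name and operand widths are our own choices:

```python
def booth_multiply(m, r, bits=8):
    """Booth's algorithm for two 'bits'-wide two's-complement operands.

    Uses the A/S/P register layout of width 2*bits + 1: A holds the
    multiplicand in the high half, S holds its negation, and P holds
    the multiplier plus the implicit y_-1 bit in its lowest position.
    """
    mask = (1 << (2 * bits + 1)) - 1
    A = (m & ((1 << bits) - 1)) << (bits + 1)       # multiplicand, high half
    S = ((-m) & ((1 << bits) - 1)) << (bits + 1)    # -multiplicand, high half
    P = (r & ((1 << bits) - 1)) << 1                # multiplier, y_-1 = 0
    for _ in range(bits):
        pair = P & 0b11                 # the bit pair (y_i, y_i-1)
        if pair == 0b01:                # end of a run of ones: add A
            P = (P + A) & mask
        elif pair == 0b10:              # start of a run of ones: add S
            P = (P + S) & mask
        # arithmetic right shift by one position, preserving the sign bit
        sign = P & (1 << (2 * bits))
        P = (P >> 1) | sign
    P >>= 1                             # drop the implicit y_-1 bit
    if P & (1 << (2 * bits - 1)):       # reinterpret as a signed value
        P -= 1 << (2 * bits)
    return P

print(booth_multiply(2, 6, bits=4))     # 12
print(booth_multiply(20, 30, bits=8))   # 600
print(booth_multiply(3, -4, bits=4))    # -12
```

Note how the two non-trivial bit pairs correspond exactly to the high-order +1 and low-order -1 of the recoding described above.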

Q.N.5. Explain the concept of memory interleaving with a diagram.
Ans: Memory interleaving is a method to increase the speed of high-end microprocessors, and it is applicable to hard disks too. For example, with separate memory banks for odd and even addresses, the next byte of memory can be accessed while the current byte is being refreshed. Memory is used for the storage and retrieval of data and instructions. A typical computer system is equipped with a hierarchy of memory subsystems, some internal to the system and some external. Internal memory systems are accessible to the CPU directly, and external memory systems are accessible through an I/O module. The technique of memory interleaving divides the memory system into a number of modules and arranges them so that successive words in the address space are placed in different modules. In computing, memory interleaving is a design made to compensate for the relatively slow speed of Dynamic Random Access Memory (DRAM). Interleaved memory is more flexible than wide-access memory in that it can handle multiple independent accesses at once. It can be of two types:

2-way interleaving (using 2 complete address buses)
4-way interleaving (using 4 complete address buses)


The above figure illustrates interleaved memory where main memory is divided into 4 different modules. When an instruction FETCH is issued by the processor, a memory access circuit creates four consecutive addresses and places them in four MARs. A memory read command reads all four modules simultaneously and retrieves four instructions, which are sent to the processor. Thus each FETCH retrieves 4 consecutive instructions. One may combine interleaving and cache to reduce the speed mismatch between the cache memory and the main memory. To illustrate this, consider the time required for transferring a block of data from the main memory to the cache when a read miss occurs. Assume that a cache with 8-word blocks is used. When a cache miss occurs, the block that contains the desired word must be copied from the main memory into the cache. Now assume that the main memory is divided into 4 interleaved modules. When the starting address of the block arrives at the memory, all four modules start accessing the required data using the higher-order bits of the address. After 8 cycles each module has one word in its MDR. These words are transferred to the cache one word at a time during the next 4 clock cycles, and during this time the next word in each module is accessed; it then takes another 4 cycles to transfer these words to the cache. Therefore the total time required to load the block from the interleaved memory is 1 + 8 + 4 + 4 = 17 cycles. Thus interleaving reduces the block transfer time by more than a factor of 2.
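The mapping of addresses to modules described above can be sketched as follows (a low-order interleaving scheme; the names are illustrative):

```python
# Low-order interleaving across 4 modules: consecutive word addresses
# land in successive modules, so one fetch can read 4 words in parallel.
NUM_MODULES = 4

def module_and_offset(address):
    # low-order bits pick the module; high-order bits pick the word in it
    return address % NUM_MODULES, address // NUM_MODULES

for addr in range(8):
    mod, off = module_and_offset(addr)
    print(f"address {addr} -> module {mod}, word {off}")

# Block-transfer timing from the text: 1 cycle for the address, 8 cycles
# until each module holds a word, then 4 + 4 cycles to move the 8 words.
total_cycles = 1 + 8 + 4 + 4
print(total_cycles)  # 17
```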

Q.N.6. Explain interrupt driven I/O. Describe the design issues in implementing
interrupt driven I/O.
Ans: An interrupt-based scheme for dealing with I/O is the interrupt-driven method. Here the CPU works on its given tasks continuously. When an input becomes available, such as when someone types a key on the keyboard, the CPU is interrupted from its work to take care of the input data. Most input and output devices are much slower than the CPU, so much slower that it would be a terrible waste of the CPU to make it wait for them, and the driver writer should therefore implement buffering. Data buffers help to detach data transmission and reception from the write and read system calls, and overall system performance benefits. A good buffering mechanism leads to interrupt-driven I/O. The problem with programmed I/O is that the processor has to wait a long time for the I/O module of concern to be ready for either reception or transmission of data. The processor, while waiting, must repeatedly interrogate the status of the I/O module. This type of operation, where the CPU constantly tests a port to see if data is available or if the port is capable of accepting data, is called polled I/O, and it is inherently inefficient. The solution is to provide an interrupt mechanism. In this approach the processor issues an I/O command to a module and then goes on to do some other useful work. Interrupts enable the transfer of control from one program to another to be initiated by an event that is external to the computer. Execution of the interrupted program resumes after completion of the interrupt service routine. The concept of interrupts is useful in operating systems and in many control applications where the processing of certain routines has to be accurately timed relative to external events; the latter type of application is generally referred to as real-time processing. The I/O module will interrupt the processor to request service when it is ready to exchange data, the processor executes the transfer, and it then resumes its former processing.
This can be better understood with an example of inputting a block of data. A flow chart using this technique for input of a block of data is as shown in the figure below:
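The same input of a block of data can be sketched in code (a background thread stands in for the device raising interrupts; all names are illustrative):

```python
# Sketch of interrupt-driven input: the CPU does other work until the
# "device" signals that a byte is ready, instead of polling in a loop.
# A background thread plays the device; names are illustrative.
import queue
import threading
import time

data_ready = queue.Queue()          # stands in for the interrupt + buffer

def device(block):
    for byte in block:
        time.sleep(0.01)            # slow device
        data_ready.put(byte)        # "raise interrupt": data is available
    data_ready.put(None)            # end of block

threading.Thread(target=device, args=(b"hello",), daemon=True).start()

received = bytearray()
while True:
    byte = data_ready.get()         # CPU blocks here instead of busy-waiting
    if byte is None:
        break
    received.append(byte)           # interrupt service routine: read the byte

print(received.decode())  # hello
```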
