
KONERU LAKSHMAIAH COLLEGE OF ENGINEERING
(AUTONOMOUS)
Approved by AICTE, accredited by the National Board of Accreditation, and ISO 9001-2000 certified
Green Fields, Vaddeswaram, Guntur Dist., A.P., India - 522 502.
Affiliated to Acharya Nagarjuna University

STRATEGY OF PARALLEL
COMPUTING

PRESENTED BY

NAME: K. NAGA TEJA
KLCE
PH NO: 9030686681
Email: naga.tejarocks@gmail.com

NAME: K. IMMANUEL
KLCE
PH NO: 9642672002
Email: immanuel.kota@gmail.com
ABSTRACT:

This paper covers the very basics of parallel computing and is intended for someone who is just becoming acquainted with the subject. It begins with a brief overview, including concepts and terminology associated with parallel computing. The topics of parallel memory architectures and programming models are then explored. These topics are followed by a discussion of a number of issues related to designing parallel programs.

1. INTRODUCTION

Traditionally, software has been written for serial computation:

• To be run on a single computer having a single Central Processing Unit (CPU);
• A problem is broken into a discrete series of instructions;
• Instructions are executed one after another;
• Only one instruction may execute at any moment in time.

DEF: The simultaneous use of more than one processor or computer to solve a problem or task is referred to as “PARALLEL COMPUTING”.

(Figure: serial/scalar computing and parallel computing.)

The compute resources can include:

• A single computer with multiple processors;
• An arbitrary number of computers connected by a network;
• A combination of both.

Parallel computing can also bring cost savings: parallel clusters can be built from cheap, commodity components.
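(Not part of the original paper: a minimal sketch of the difference between serial and parallel execution, assuming a C compiler with OpenMP support, e.g. "gcc -fopenmp sum.c". The file name and array size are illustrative only.)

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Serial computation: a single CPU executes the instructions
       one after another. */
    double serial_sum = 0.0;
    for (int i = 0; i < N; i++)
        serial_sum += a[i];

    /* Parallel computation: the same problem is broken into parts
       that several cores work on simultaneously. */
    double parallel_sum = 0.0;
    #pragma omp parallel for reduction(+:parallel_sum)
    for (int i = 0; i < N; i++)
        parallel_sum += a[i];

    printf("serial = %.0f, parallel = %.0f, max threads = %d\n",
           serial_sum, parallel_sum, omp_get_max_threads());
    return 0;
}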

There are many types of computers available today, from single-processor or 'scalar' computers to machines with vector processors to massively parallel computers with thousands of microprocessors. Each platform has its own unique characteristics. Understanding the differences is important to understanding how best to program each. However, the real trick is to try to write programs that will run reasonably well on a wide range of computers.

TYPES OF PARALLEL COMPUTERS:

1. SHARED MEMORY ARCHITECTURE
2. DISTRIBUTED MEMORY ARCHITECTURE
3. CLUSTER COMPUTERS
4. METACOMPUTING

1. SHARED MEMORY COMPUTER MODEL:

• Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as a global address space.
• Multiple processors can operate independently but share the same memory resources.
• Changes in a memory location effected by one processor are visible to all other processors.
• Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

Advantages:


• Global address space provides a user-friendly programming perspective to memory.
• Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs.

Disadvantages:

• The primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs can geometrically increase traffic on the shared memory-CPU path and, for cache coherent systems, geometrically increase traffic associated with cache/memory management.
• The programmer is responsible for synchronization constructs that ensure "correct" access of global memory.
• Expense: it becomes increasingly difficult and expensive to design and produce shared memory machines with ever-increasing numbers of processors.
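(Illustrative sketch, not from the original paper: the same shared-memory ideas expressed with OpenMP on a single multi-core machine. All threads see one global array, and a synchronization construct is needed for the shared counter, which is exactly the programmer responsibility noted above.)

#include <stdio.h>
#include <omp.h>

#define N 100000

int shared_data[N];   /* one global address space, visible to every thread */
long updates = 0;     /* shared counter that must be protected             */

int main(void) {
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();

        /* Each thread writes into its part of the shared array; the
           changes are immediately visible to all other threads.      */
        #pragma omp for
        for (int i = 0; i < N; i++)
            shared_data[i] = tid;

        /* Without this critical section, concurrent increments could be
           lost: correct access to global memory is the programmer's job. */
        #pragma omp critical
        updates += 1;
    }
    printf("shared array filled by %ld cooperating thread(s)\n", updates);
    return 0;
}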
2. DISTRIBUTED MEMORY MODEL:

Like shared memory systems, distributed memory systems vary widely but share a common characteristic: distributed memory systems require a communication network to connect inter-processor memory.

• Processors have their own local memory. Memory addresses in one processor do not map to another processor, so there is no concept of global address space across all processors.
• Because each processor has its own local memory, it operates independently. Changes it makes to its local memory have no effect on the memory of other processors. Hence, the concept of cache coherency does not apply.

Advantages:

• Memory is scalable with the number of processors. Increase the number of processors and the size of memory increases proportionately.
• Each processor can rapidly access its own memory without interference and without the overhead incurred with trying to maintain cache coherency.
• Cost effectiveness: can use commodity, off-the-shelf processors and networking.

Disadvantages:

• The programmer is responsible for many of the details associated with data communication between processors.
• It may be difficult to map existing data structures, based on global memory, to this memory organization.
• Non-uniform memory access (NUMA) times.
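(Illustrative sketch, not from the original paper: the distributed memory model expressed with MPI message passing. It assumes an MPI installation; compile with mpicc and run with "mpirun -np 2 ./a.out". Each process keeps its own local variable, and data only moves when the programmer sends it explicitly over the network.)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process has its own private copy of 'value'; there is no
       global address space, so changing it here is invisible to the
       other ranks.                                                   */
    value = (rank == 0) ? 42 : -1;

    if (rank == 0) {
        /* The programmer moves data explicitly between local memories. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}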
3. CLUSTER COMPUTERS:

Distributed memory computers can also be built from scratch using mass-produced PCs and workstations. These cluster computers are referred to by many names, from a poor man's supercomputer to COWs (clusters of workstations) and NOWs (networks of workstations).

They are much cheaper than traditional MPP systems, and often use the same processors, but are more difficult to use since the network capabilities are currently much lower. Cluster computers are also usually much smaller, most often involving fewer than 100 computers. This is in part because the networking and software infrastructure for cluster computing is less mature, making it difficult to make use of very large systems at this time.

4. METACOMPUTING:

Metacomputing is a similar idea, but with loftier goals. Supercomputers that may be geographically separated can be combined to run the same program. However, the goal in metacomputing is usually to provide very high bandwidths between the supercomputers so that these connections do not produce a bottleneck for the communications. Scheduling exclusive time on many supercomputers at the same time can also pose a problem. This is still an area of active research.

REAL-TIME APPLICATION FOR PARALLEL COMPUTING:

An IFS TL2047L149 forecast model takes about 5000 seconds of wall time for a 10-day forecast using 128 nodes of an IBM Power6 cluster.

How long would this model take using a fast PC with sufficient memory (e.g. a dual-core Dell)?

ANS: About 1 year. This PC would also need ~2000 GBytes of memory.
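(Back-of-the-envelope reading of the numbers above, not a calculation given in the source: the parallel run consumes roughly 5000 s × 128 nodes = 640,000 node-seconds. One year is about 31.5 million seconds, so the quoted answer of "about 1 year" implies that each Power6 node is treated as delivering on the order of 50 times the throughput of the dual-core PC, which is plausible given the many cores per node.)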

CONCLUSION:

1. Parallel computers
   - have ever-increasing processors, memory, and performance, but
   - need more space (new computer halls = $)
   - need more power (MWs = $)
2. Parallel computers require/produce a lot of data (I/O)
   - require parallel file systems (GPFS, Lustre) + archive store
3. Applications need to scale to increasing numbers of processors; problem areas are
   - load imbalance, serial sections, global communications

REFERENCES:

• A Library of Parallel Algorithms, www-2.cs.cmu.edu/~scandal/nesl/algorithms.html
• Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel
• Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis