Scalability

In telecommunications and software engineering, scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner or to be readily enlarged.[1] For example, it can
refer to the capability of a system to increase total throughput under an increased load
when resources (typically hardware) are added. An analogous meaning is implied when
the word is used in a commercial context, where scalability of a company implies that
the underlying business model offers the potential for economic growth within the
company.

Scalability, as a property of systems, is generally difficult to define,[2] and in any particular case it is necessary to define the specific requirements for scalability on those dimensions which are deemed important. It is a highly significant issue in electronic systems, databases, routers, and networking. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set or a large number of participating nodes in the case of a distributed system). If the design fails when the quantity increases, then it does not scale.

The concept of scalability applies to technology as well as business settings. The base concept is consistent: the ability of a business or technology to accept increased volume without impacting the contribution margin (revenue minus variable costs). For example, a given piece of equipment may have a capacity of 1 to 1,000 users; beyond 1,000 users, additional equipment is needed or performance will decline (variable costs will increase and reduce the contribution margin).
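
As a rough, purely hypothetical illustration of this point, the Python sketch below models contribution margin for a service whose equipment serves up to 1,000 users per unit; all of the prices and costs are invented for the example and are not taken from the text.

    import math

    REVENUE_PER_USER = 10.0       # hypothetical monthly revenue per user
    VARIABLE_COST_PER_USER = 2.0  # hypothetical per-user variable cost
    EQUIPMENT_CAPACITY = 1000     # users one piece of equipment can serve
    EQUIPMENT_COST = 3000.0       # hypothetical monthly cost per piece of equipment

    def contribution_margin(users: int) -> float:
        # Contribution margin = revenue - variable costs; equipment added to
        # keep up with growth is treated here as part of the variable costs.
        equipment_units = math.ceil(users / EQUIPMENT_CAPACITY)
        revenue = users * REVENUE_PER_USER
        variable_costs = users * VARIABLE_COST_PER_USER + equipment_units * EQUIPMENT_COST
        return revenue - variable_costs

    for n in (500, 1000, 1001, 5000):
        print(n, contribution_margin(n))

Running it shows the margin dropping at 1,001 users, the point at which a second piece of equipment is needed.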

Measures

Scalability can be measured in various dimensions, such as:

• Load scalability: The ability of a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads. Alternatively, the ease with which a system or component can be modified, added, or removed to accommodate changing load.
• Geographic scalability: The ability to maintain performance, usefulness, or usability regardless of expansion from concentration in a local area to a more distributed geographic pattern.
• Administrative scalability: The ability for an increasing number of organizations to easily share a single distributed system.
• Functional scalability: The ability to enhance the system by adding new functionality at minimal effort.

Examples

• A scalable online transaction processing system or database management system is one that can be upgraded to process more transactions by adding new processors, devices and storage, and which can be upgraded easily and transparently without shutting it down.
• A routing protocol is considered scalable with respect to network size if the size of the necessary routing table on each node grows as O(log N), where N is the number of nodes in the network.
• The distributed nature of the Domain Name System allows it to work efficiently even when all hosts on the worldwide Internet are served, so it is said to "scale well".
• Some early peer-to-peer (P2P) implementations of Gnutella had scaling issues. Each node query flooded its requests to all peers. The demand on each peer would increase in proportion to the total number of peers, quickly overrunning the peers' limited capacity. Other P2P systems like BitTorrent scale well because demand on each peer is independent of the total number of peers. There is no centralized bottleneck, so the system may expand indefinitely without the addition of supporting resources (other than the peers themselves). The sketch after this list illustrates the contrast.
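
A minimal sketch of the scaling contrast described in the last example, assuming an invented per-peer query rate: under flooding, per-peer load grows with the number of peers, while in the second design it stays constant. The constants are illustrative, not measurements of Gnutella or BitTorrent.

    # Back-of-the-envelope model of per-peer load as a P2P network grows.
    # The constants are assumptions made for illustration only.

    QUERIES_PER_PEER = 5   # queries each peer issues per minute (assumed)
    WORK_PER_QUERY = 1.0   # cost for a peer to handle one forwarded query (assumed)

    def flooding_load_per_peer(num_peers: int) -> float:
        # Every query is flooded to all peers, so each peer handles work
        # proportional to the total number of peers.
        total_queries = num_peers * QUERIES_PER_PEER
        return total_queries * WORK_PER_QUERY

    def independent_load_per_peer(num_peers: int) -> float:
        # Demand on each peer does not depend on how many peers exist.
        return QUERIES_PER_PEER * WORK_PER_QUERY

    for n in (100, 10_000, 1_000_000):
        print(n, flooding_load_per_peer(n), independent_load_per_peer(n))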

Scale vertically vs. horizontally

Methods of adding more resources for a particular application fall into two broad
categories:[3]

Scale vertically (scale up)


To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer. Such vertical scaling of existing systems also enables them to use virtualization technology more effectively, as it provides more resources for the hosted set of operating system and application modules to share. Taking advantage of such resources can also be called "scaling up", for example by expanding the number of Apache daemon processes currently running.

Scale horizontally (scale out)


To scale horizontally (or scale out) means to add more nodes to a system, such as
adding a new computer to a distributed software application. An example might be
scaling out from one web server system to three.

As computer prices drop and performance continues to increase, low-cost "commodity" systems can be used for high-performance computing applications such as seismic analysis and biotechnology workloads that in the past could only be handled by supercomputers. Hundreds of small computers may be configured in a cluster to obtain aggregate computing power which often exceeds that of single traditional RISC-processor-based scientific computers. This model has been further fueled by the availability of high-performance interconnects such as Myrinet and InfiniBand technologies. It has also led to demand for features such as remote maintenance and batch processing management previously not available for "commodity" systems.

The scale-out model has created an increased demand for shared data storage with
very high I/O performance, especially where processing of large amounts of data is
required, such as in seismic analysis. This has fueled the development of new storage technologies such as object storage devices.

Tradeoffs
There are tradeoffs between the two models. Larger numbers of computers mean increased management complexity, as well as a more complex programming model and issues such as throughput and latency between nodes; also, some applications do not lend themselves to a distributed computing model. In the past, the price differential between the two models favored "scale out" computing for those applications that fit its paradigm, but recent advances in virtualization technology have blurred that advantage, since deploying a new virtual system over a hypervisor (where possible) is almost always less expensive than actually buying and installing a real one.

Database scalability

A number of different approaches enable databases to grow to very large size while
supporting an ever-increasing rate of transactions per second. Not to be discounted, of
course, is the rapid pace of hardware advances in both the speed and capacity of mass
storage devices, as well as similar advances in CPU and networking speed. Beyond
that, a variety of architectures are employed in the implementation of very large-scale
databases.

One technique supported by most of the major DBMS products is the partitioning of
large tables, based on ranges of values in a key field. In this manner, the database can
be scaled out across a cluster of separate database servers. Also, with the advent of
64-bit microprocessors, multi-core CPUs, and large SMP multiprocessors, DBMS
vendors have been at the forefront of supporting multi-threaded implementations that
substantially scale up transaction processing capacity.
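
A minimal sketch of range partitioning on a key field, assuming hypothetical key ranges and shard names; a real DBMS handles this routing internally, but the idea is the same: each row or query is sent to the server whose range contains the key.

    from bisect import bisect_right

    # Upper bounds (exclusive) of each partition's key range, and the server
    # that owns it; keys at or above the last bound go to the final server.
    # Ranges and server names below are hypothetical.
    PARTITION_BOUNDS = [1_000_000, 2_000_000, 3_000_000]
    PARTITION_SERVERS = ["db-shard-1", "db-shard-2", "db-shard-3", "db-shard-4"]

    def server_for_key(key: int) -> str:
        # Pick the shard whose key range contains the given key.
        return PARTITION_SERVERS[bisect_right(PARTITION_BOUNDS, key)]

    print(server_for_key(42))          # -> db-shard-1
    print(server_for_key(2_500_000))   # -> db-shard-3
    print(server_for_key(9_999_999))   # -> db-shard-4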

Network-attached storage (NAS) and storage area networks (SANs) coupled with fast local area networks and Fibre Channel technology enable still larger, more loosely coupled configurations of databases and distributed computing power. The widely supported X/Open XA standard employs a global transaction monitor to coordinate distributed transactions among semi-autonomous XA-compliant database
resources. Oracle RAC uses a different model to achieve scalability, based on a
"shared-everything" architecture that relies upon high-speed connections between
servers.

While DBMS vendors debate the relative merits of their favored designs, some companies and researchers question the inherent limitations of relational database management systems. GigaSpaces, for example, contends that an entirely different model of distributed data access and transaction processing, named space-based architecture, is required to achieve the highest performance and scalability.[4] On the other hand, Base One makes the case for extreme scalability without departing from mainstream database technology.[5] In either case, there appears to be no end in sight to the limits of database scalability.

Design for scalability

It is often advised to focus system design on hardware scalability rather than on capacity. It is typically cheaper to add a new node to a system in order to achieve improved performance than to partake in performance tuning to improve the capacity that each node can handle. But this approach can have diminishing returns (as discussed in performance engineering). For example: suppose 70% of a program can be parallelized and run on four CPUs instead of one. If α is the fraction of a calculation that is sequential, and 1 − α is the fraction that can be parallelized, then the maximum speedup that can be achieved by using P processors is given according to Amdahl's law: 1 / (α + (1 − α)/P). Substituting the values for this example (α = 0.3), we get 1 / (0.3 + 0.7/4) ≈ 2.105. If we double the compute power to 8 processors, we get 1 / (0.3 + 0.7/8) ≈ 2.581. Doubling the processing power has only improved the speedup by roughly one-fifth. If the whole problem were parallelizable, we would, of course, expect the speedup to double as well. Therefore, throwing in more hardware is not necessarily the optimal approach.
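
The short Python check below reproduces the arithmetic of the example above, with α = 0.3 taken from the 70% parallelizable fraction in the text.

    # Amdahl's law: maximum speedup on P processors when a fraction alpha of
    # the work is inherently sequential.

    def amdahl_speedup(alpha: float, processors: int) -> float:
        return 1.0 / (alpha + (1.0 - alpha) / processors)

    alpha = 0.3  # 70% of the program is parallelizable, as in the example
    print(amdahl_speedup(alpha, 4))   # about 2.105
    print(amdahl_speedup(alpha, 8))   # about 2.581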

Weak vs. strong scaling

In the context of high performance computing there are two common notions of scalability. The first is strong scaling, which is defined as how the solution time varies with the number of processors for a fixed total problem size.[6] The second is weak scaling, which is defined as how the solution time varies with the number of processors for a fixed problem size per processor.
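
A small sketch, with invented timings, of how the two notions are typically quantified: strong-scaling efficiency compares the measured speedup on P processors against the ideal factor of P for a fixed problem, while weak-scaling efficiency compares run times as the problem grows with the processor count. The specific numbers are assumptions for illustration.

    def strong_scaling_efficiency(t1: float, tp: float, processors: int) -> float:
        # Fixed total problem size: ideal strong scaling divides the run
        # time by the processor count, giving an efficiency of 1.0.
        return t1 / (processors * tp)

    def weak_scaling_efficiency(t1: float, tp: float) -> float:
        # Fixed problem size per processor: ideal weak scaling keeps the run
        # time constant as processors and total problem size grow together.
        return t1 / tp

    # Hypothetical measurements: run time in seconds on 1 and 16 processors.
    print(strong_scaling_efficiency(t1=120.0, tp=9.0, processors=16))  # ~0.83
    print(weak_scaling_efficiency(t1=120.0, tp=135.0))                 # ~0.89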
