
Report on Architecture, Storage Networks & Security

Challenges in Cloud Computing

Contents

Introduction to Cloud Computing

Basic Evolution of Cloud Computing

Architecture of Cloud Computing

Service Grid Architecture

Storage Architecture in Clouds

Security Challenges

Conclusion
What is Cloud Computing?

Cloud computing is a term used to describe both a platform and a type of
application. A cloud computing platform dynamically provisions, configures,
reconfigures, and deprovisions servers as needed. Servers in the cloud can be
physical machines or virtual machines. Advanced clouds typically include other
computing resources such as storage area networks (SANs), network equipment,
firewalls, and other security devices.
Cloud computing also describes applications that are extended to be
accessible through the Internet. These cloud applications use large data centers
and powerful servers that host web applications and web services. Anyone with a
suitable Internet connection and a standard browser can access a cloud
application.

A cloud is a pool of virtualized computer resources. A cloud can:

- Host a variety of different workloads, including batch-style back-end jobs
  and interactive, user-facing applications.
- Allow workloads to be deployed and scaled out quickly through the rapid
  provisioning of virtual machines or physical machines.
- Support redundant, self-recovering, highly scalable programming models
  that allow workloads to recover from many unavoidable hardware/software
  failures.
- Monitor resource use in real time to enable rebalancing of allocations
  when needed (a small sketch of this follows the list).
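As a toy illustration of the last point, the Python sketch below checks per-node CPU utilization and flags nodes that are candidates for rebalancing. The node names, the 80% threshold, and the check_rebalance helper are illustrative assumptions, not part of any particular cloud platform.

    # Toy illustration of real-time monitoring driving rebalancing decisions.
    # Node names and the 80% CPU threshold are made-up examples.

    CPU_THRESHOLD = 0.80  # rebalance when a node exceeds 80% CPU use

    def check_rebalance(node_cpu_usage):
        """Return the nodes whose CPU utilization exceeds the threshold."""
        return [node for node, usage in node_cpu_usage.items()
                if usage > CPU_THRESHOLD]

    if __name__ == "__main__":
        usage_sample = {"node-1": 0.95, "node-2": 0.40, "node-3": 0.82}
        overloaded = check_rebalance(usage_sample)
        print("Candidates for workload rebalancing:", overloaded)
        # A real cloud controller would migrate or reprovision VMs here.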

Basic Evolution of Cloud Computing in the History of Computers:-


Our lives today would be different, and probably more difficult, without the
benefits of modern computers. Computerization has permeated nearly every
facet of our personal and professional lives. Computer evolution has been both
rapid and fascinating. The first step along the evolutionary path of computers
occurred in 1930, when binary arithmetic was developed and became the
foundation of computer processing technology, terminology, and programming
languages. Calculating devices date back to at least as early as 1642, when a
device that could mechanically add numbers was invented; adding devices
evolved from the abacus, and that early machine was a significant milestone in
the history of computers. In 1939, Atanasoff and Berry built an electronic
computer capable of operating digitally, with computations performed using
vacuum-tube technology.
In 1941, the introduction of Konrad Zuse's Z3 at the German Laboratory for
Aviation in Berlin was one of the most significant events in the evolution of
computers, because this machine supported both floating-point and binary
arithmetic. Because it was a Turing-complete device, it is considered to be the
first fully operational computer.
The amazing growth of the Internet throughout the 1990s caused a vast
reduction in the number of free IP addresses available under IPv4, which was
never designed to scale to global levels. Increasing the available address space
required longer IP addresses and correspondingly larger packet headers, which
caused problems for existing hardware and software. Solving those problems
required the design, development, and implementation of a new architecture and
new hardware to support it, as well as changes to all of the TCP/IP routing
software.
After examining a number of proposals, the Internet Engineering Task Force
(IETF) settled on IPv6, which was released in January 1995 as RFC 1752. IPv6 is
sometimes called the Next Generation Internet Protocol (IPng) or TCP/IP v6.
In the fall of 1990, Berners-Lee developed the first web browser, featuring
an integrated editor that could create hypertext documents. He installed the
application on his and Cailliau's computers, and they both began communicating
via the world's first web server, at info.cern.ch, on December 25, 1990.
First web browser, created by Berners-Lee in 1990.
Berners-Lee enhanced the server and browser by adding support for the
FTP protocol. This made a wide range of existing FTP directories and Usenet
newsgroups instantly accessible via a web page displayed in his browser. With
these browsers and the growing Internet, a new era of cloud computing began to
take shape.

CLOUD COMPUTING ARCHITECTURE:-


A simple cloud computing architecture can be described as follows.
The cloud computing architecture of a cloud solution is the structure of the
system, which comprises on-premise and cloud resources, services, middleware,
and software components, their geo-location, the externally visible properties of
those components, and the relationships between them. The term also refers to
documentation of a system's cloud computing architecture. Documenting the
architecture facilitates communication between stakeholders, records early
decisions about high-level design, and allows reuse of design components and
patterns between projects.

Cloud-resident entities such as data centers have taken the concepts of grid
computing and bundled them into service offerings that appeal to other entities
that do not want the burden of infrastructure but do want the capabilities hosted
from those data centers. One of the best known of the new cloud service
providers is Amazon's S3 (Simple Storage Service) third-party storage solution.
Amazon S3 is storage for the Internet. According to the Amazon S3 website, it
provides a simple web services interface that can be used to store and retrieve
any amount of data, at any time, from anywhere on the web. It gives any
developer access to the same highly scalable, reliable, fast, inexpensive data
storage infrastructure that Amazon uses to run its own global network of web
sites. The service aims to maximize benefits of scale and to pass those benefits
on to developers.
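As a hedged illustration of that "simple web services interface", the snippet below stores and retrieves an object using the boto3 SDK for Python. The SDK, the bucket name, and the object key are assumptions added for this example; they are not mentioned in the report.

    # Minimal sketch of storing and retrieving data in Amazon S3 with boto3.
    # The bucket name and object key are placeholders.
    import boto3

    s3 = boto3.client("s3")  # credentials come from the environment/AWS config

    # Store any amount of data under a key...
    s3.put_object(Bucket="example-report-bucket",
                  Key="reports/cloud-notes.txt",
                  Body=b"Data stored from anywhere on the web.")

    # ...and retrieve it again, from anywhere with access to the bucket.
    response = s3.get_object(Bucket="example-report-bucket",
                             Key="reports/cloud-notes.txt")
    print(response["Body"].read().decode("utf-8"))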
This architecture illustrates the high-level structure of the cloud
computing platform. It comprises a data center, IBM Tivoli Provisioning
Manager, IBM Tivoli Monitoring, IBM WebSphere Application Server, IBM DB2,
and virtualization components. The architecture diagram focuses on the core
back end of the cloud computing platform; it does not address the user interface.
Tivoli Provisioning Manager automates imaging, deployment, installation,
and configuration of the Microsoft Windows and Linux operating systems, along
with the installation and configuration of any software stack that the user
requests. Tivoli Provisioning Manager uses WebSphere Application Server to
communicate the provisioning status and availability of resources in the data
center, to schedule the provisioning and deprovisioning of resources, and to
reserve resources for future use.
As a result of the provisioning, virtual machines are created using the Xen
hypervisor, or physical machines are created using Network Installation Manager,
Remote Deployment Manager, or Cluster Systems Manager, depending upon the
operating system and platform.
IBM Tivoli Monitoring Server monitors the health (CPU, disk, and memory)
of the servers provisioned by Tivoli Provisioning Manager. DB2 is the database
server that Tivoli Provisioning Manager uses to store the resource data.
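The report does not show Tivoli Provisioning Manager's actual interfaces. Purely as an illustration of the kind of request such a provisioning workflow schedules (an OS image, a software stack, and a reservation window), here is a hypothetical Python data model; every field name is invented for the example.

    # Hypothetical sketch of a provisioning request record; not a real
    # Tivoli Provisioning Manager API, just an illustration of the inputs
    # described above. Requires Python 3.10+ for the type syntax.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ProvisioningRequest:
        os_image: str                      # e.g. a Windows or Linux image
        software_stack: list[str]          # packages requested by the user
        virtual: bool = True               # Xen VM if True, physical host if False
        reserve_from: datetime | None = None
        reserve_until: datetime | None = None

    request = ProvisioningRequest(
        os_image="linux-base",
        software_stack=["WebSphere Application Server", "DB2"],
        reserve_from=datetime(2025, 1, 1, 9, 0),
        reserve_until=datetime(2025, 1, 8, 9, 0),
    )
    print(request)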
To describe the architecture briefly, it is easiest to work from an example.
Here I give the example of Microsoft Azure, which is Microsoft's cloud
computing platform.

Microsoft Azure (Cloud Computing Platform)


Some benefits of cloud computing are:

- Cloud computing approaches will spread because of lower TCO and higher
  flexibility (business and technical).
- Cloud computing will massively change the future IT business, in that many
  standard IT services will be offered by big IT providers.
- Cloud computing platforms commoditize native Internet-scale application
  development and operation.
- Cloud computing architecture aspects will be integrated into cloud platforms
  as frameworks, processes, templates, and guidance to lower the business,
  legal, and technical burden for application developers.

Service Grid Architecture, Storage Architecture in Cloud & Security Challenges:-

Service Grid Architecture:


Before the term cloud, the term service grid was sometimes used to describe
a managed distributed computing platform that can be used for business as well
as scientific applications. Said slightly differently, a service grid is a
manageable ecosystem of specific services deployed by service businesses or
utility companies. Service grids have been likened to a power or utility grid:
always on, highly reliable, a platform for making managed services available to
some user constituency. When the term came into use in the IT domain, the word
service was implied to mean web service, and a service grid was viewed as an
infrastructure platform on which an ecology of services could be composed,
deployed, and managed.
The phrase service grid implies structure. While grid elements (servers
together with the functionality they host) within a service grid may be
heterogeneous vis-à-vis their construction and implementation, their presence
within a service grid implies manageability as part of the grid as a whole. This
implies that a capability exists to manage grid elements using policy that is
external to the implementations of services in a service grid (at a minimum in
conjunction with policy that might be embedded in legacy service
implementations). Services in a grid also become candidates for reuse through
service composition; services outside of a grid are candidates for composition as
well, but the service grid can only manage services within its scope of control.
Of course, service grids as defined above are autonomic, can be recursively
structured, and can collaborate in their management of composite services
provisioned across different grids.
Service grid deployment architecture
Smart infrastructure architecture based on SOA principles promotes the
creation of services that enable automation of physical and virtual
infrastructures. Automation makes the environment reliable and manageable.
Interactions: Service-oriented interactions contain resource requirements and
business constraints that govern the run-time platform, enabling infrastructure
management, business prioritisation, and so on.
Consolidation: A pool of available physical and virtual resources is created and
mapped to services based on the spanning interaction profiles (a small sketch of
this mapping follows below).
Resources: Storage, operating systems, middleware, etc. are managed and
monitored in an automated fashion to facilitate the creation of resource pools.
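The following hypothetical Python sketch illustrates the consolidation step described above: interaction profiles carry resource requirements, and a pool of physical and virtual resources is matched against them. All names and the deliberately simplistic matching rule (first resource with enough capacity) are assumptions for illustration only.

    # Hypothetical sketch: map service interaction profiles onto a consolidated
    # pool of physical and virtual resources.
    pool = [
        {"name": "vm-small", "cpus": 2, "virtual": True},
        {"name": "host-big", "cpus": 16, "virtual": False},
    ]

    interactions = [
        {"service": "billing", "min_cpus": 4},
        {"service": "reporting", "min_cpus": 2},
    ]

    def consolidate(interactions, pool):
        """Assign each interaction profile to the first resource that satisfies it."""
        assignments = {}
        available = list(pool)
        for profile in interactions:
            for resource in available:
                if resource["cpus"] >= profile["min_cpus"]:
                    assignments[profile["service"]] = resource["name"]
                    available.remove(resource)
                    break
        return assignments

    print(consolidate(interactions, pool))
    # {'billing': 'host-big', 'reporting': 'vm-small'}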
Clouds and service grids both have containers. In clouds, a container is used
to mean a virtualized image containing technology and application stacks. The
container might hold other kinds of containers (e.g., a J2EE/Java EE application
container), but the cloud container is impermeable, which means that the cloud
does not directly manage the container's contents, and the contents do not
participate in cloud or container management. In a service grid, the container is
the means by which the grid provides underlying infrastructural services,
including security, persistence, business transaction or interaction life-cycle
management, and policy management.
In a service grid, it is possible for contents in a container to participate in
grid management as a function of infrastructure management policies
harmonized with business policies such as service-level agreements. It is also
possible for policy external to the container's contents to shape how the
container's functionality executes. So a service grid container's wall is permeable
with respect to policy, which is a critical distinction between clouds and service
grids.

Storage Architecture in Clouds:


The storage architecture of the cloud includes the capabilities of the Google
file system along with the benefits of a storage area network (SAN). Either
technique can be used by itself, or both can be used together as needed.
Computing without data is as rare as data without computing. The
combination of data and computer power is important. Computer power often is
measured in the cycle speed of a processor. Computer speed also needs to
account for the number of processors. The number of processors within an SMP
and the number within a cluster may both be important.
When looking at disk storage, the amount of space is often the primary
measure. The number of gigabytes or terabytes of data needed is important, but
access rates are often more important.
Being able to read only sixty megabytes per second may limit your
processing capability below your compute capability. Individual disks have
limits on the rate at which they can process data. A single computer may have
multiple disks, or, with a SAN file system, be able to access data over the
network. So data placement can be an important factor in achieving high data
access rates. Spreading the data over multiple computer nodes may be desired,
or having all the data reside on a single node may be required for optimal
performance.
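To make the access-rate point concrete, the short calculation below estimates how long a full scan of one terabyte takes at 60 MB/s on a single disk versus spread evenly over ten nodes. The data size and node count are arbitrary illustrative values, not figures from the report.

    # Back-of-the-envelope scan times at a 60 MB/s per-disk read rate.
    data_bytes = 1_000_000_000_000          # 1 TB of data (illustrative)
    rate_bytes_per_s = 60_000_000           # 60 MB/s per disk

    single_node_hours = data_bytes / rate_bytes_per_s / 3600
    ten_node_minutes = data_bytes / (10 * rate_bytes_per_s) / 60

    print(f"Single node: {single_node_hours:.1f} hours")            # ~4.6 hours
    print(f"Ten nodes in parallel: {ten_node_minutes:.0f} minutes")  # ~28 minutes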
The Google file system can be used in the cloud environment. When used,
it uses the disks inside the machines, along with the network, to provide a
shared file system that is redundant. This can increase the total data processing
speed when the data and processing power are spread out efficiently.
The Google file system is part of a storage architecture, but it is not
considered a SAN architecture. A SAN architecture relies on an adapter other
than an Ethernet adapter in the computer nodes, and has a network, similar to an
Ethernet network, that can host various SAN devices.

Basic SAN Network Architecture


Typically a single machine has both computer power and disks, and the ratio
of disk capability to computer capability is fairly static. With the Google file
system, a single node's computer power can be applied to very large data by
accessing the data through the network and staging it on the local disk.
Alternatively, if the problem lends itself to distribution, then many computer
nodes can be used, allowing their disks to be involved as well. With a SAN we can
fundamentally alter the ratio between computer power and disk capability.
A single SAN client can be connected to, and access at high speed, an
enormous amount of data. When more computer power is needed, more machines
can be added. When more I/O capability is needed, more SAN devices can be
added. Either capability can be scaled independently of the other.
Fast write is a capability available on many SAN devices. Normal disk writes
do not complete until the data has been written to disk, which involves spinning
the disk and potentially moving the heads. With fast write, the write completes
when the data reaches memory in the SAN device, long before it is written to
disk. Certain applications will achieve significant performance boosts through
fast write if the SAN implements it.
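The toy class below models the difference just described: a fast write acknowledges as soon as the data reaches the device's memory buffer, and the slower flush to disk happens later. It is a conceptual sketch only, not how any particular SAN controller is implemented.

    # Conceptual model of fast write: acknowledge on buffering, persist later.
    class FastWriteDevice:
        def __init__(self):
            self.buffer = []      # battery-backed cache in a real SAN device
            self.disk = []        # the slow, persistent medium

        def write(self, block):
            """Return (acknowledge) as soon as the block is buffered in memory."""
            self.buffer.append(block)
            return "ack"

        def flush(self):
            """Later, the device destages buffered blocks to disk."""
            self.disk.extend(self.buffer)
            self.buffer.clear()

    device = FastWriteDevice()
    print(device.write(b"payload"))  # 'ack' long before the data is on disk
    device.flush()                   # physical write completes in the background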
Flash copy is an instantaneous copy capability available on some SAN
devices. Actually copying the data may take time, but the SAN device can
complete the physical copying after the logical copy. Being able to make copies
is essential to any storage architecture; copies are often used for purposes such
as backup, or to allow parallel processing without contention. With flash copy
capabilities from the SAN, the performance of copies can be greatly improved.
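Flash copy behaves much like a copy-on-write snapshot: the logical copy completes instantly, and blocks are copied physically afterwards. The sketch below is a simplified illustration of that idea and does not reflect any vendor's actual implementation.

    # Simplified copy-on-write model of a flash copy.
    class Volume:
        def __init__(self, blocks):
            self.blocks = dict(blocks)   # block number -> data

    class FlashCopy:
        def __init__(self, source):
            self.source = source
            self.copied = {}             # physically copied blocks only

        def read(self, block_no):
            # Read from the copy if the block was already copied, else fall
            # through to the source: the logical copy was instantaneous.
            return self.copied.get(block_no, self.source.blocks[block_no])

        def copy_block(self, block_no):
            # Physical copying happens after the logical copy, block by block.
            self.copied[block_no] = self.source.blocks[block_no]

    src = Volume({0: b"boot", 1: b"data"})
    snap = FlashCopy(src)        # "instant" logical copy
    print(snap.read(1))          # b'data', served from the source volume
    snap.copy_block(1)           # background physical copy of one block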
Shared file systems are not part of the SAN architecture, but they can be
implemented on top of a SAN. Some recovery techniques, such as HACMP, rely
on SAN technology to enable failover. While the Google file system provides
similar capabilities, it is not currently integrated into most failover techniques.

Security Challenges:
Although virtualization and cloud computing can help companies
accomplish more by breaking the physical bonds between an IT infrastructure and
its users, heightened security threats must be overcome in order to benefit fully
from this new computing paradigm. This is particularly true for the SaaS provider.
Some security concerns are worth more discussion. For example, in the cloud you
lose some control over assets, so your security model must be reassessed.
Enterprise security is only as good as the least reliable partner, department, or
vendor. Can you trust your data to your service provider? In the following
paragraphs, we discuss some issues you should consider before answering that
question.
With the cloud model, you lose control over physical security. In a public
cloud, you are sharing computing resources with other companies. In a shared
pool outside the enterprise, you don't have any knowledge or control of where
the resources run. Exposing your data in an environment shared with other
companies could give the government reasonable cause to seize your assets
because another company has violated the law. Simply sharing the environment
in the cloud may put your data at risk of seizure.
Storage services provided by one cloud vendor may be incompatible with
another vendor's services should you decide to move from one to the other.
Vendors are known for creating what the hosting world calls sticky services:
services that an end user may have difficulty transporting from one cloud vendor
to another (e.g., Amazon's Simple Storage Service [S3] is incompatible with
IBM's Blue Cloud, or Google, or Dell). If information is encrypted while passing
through the cloud, who controls the encryption/decryption keys? Is it the
customer or the cloud vendor?
Most customers probably want their data encrypted both ways across the
Internet using SSL (Secure Sockets Layer). They also most likely want their data
encrypted while it is at rest in the cloud vendor's storage pool. Be sure that you,
the customer, control the encryption/decryption keys, just as if the data were
still resident on your own servers. Data integrity means ensuring that data is
identically maintained during any operation (such as transfer, storage, or
retrieval). Put simply, data integrity is assurance that the data is consistent and
correct. Ensuring the integrity of the data really means that it changes only in
response to authorized transactions. This sounds good, but you must remember
that a common standard to ensure data integrity does not yet exist.
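One way to keep the encryption/decryption keys in the customer's hands is to encrypt data on the client before it ever reaches the provider's storage pool. The sketch below uses the Python cryptography package's Fernet recipe purely as an example; the report does not prescribe any particular library or scheme, so treat this as an illustration of the principle.

    # Client-side encryption sketch: the customer generates and keeps the key,
    # so the cloud vendor only ever stores ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # kept by the customer, never uploaded
    cipher = Fernet(key)

    plaintext = b"customer record to store in the cloud"
    ciphertext = cipher.encrypt(plaintext)   # upload this to the storage pool

    # Later, only the key holder can recover the data.
    assert cipher.decrypt(ciphertext) == plaintext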
Using SaaS offerings in the cloud means that there is much less need for
software development. For example, using a web-based customer relationship
management (CRM) offering eliminates the need to write code and customize a
vendor's application. If you plan to use internally developed code in the cloud, it
is even more important to have a formal secure software development life cycle
(SDLC). The immature use of mashup technology (combinations of web services),
which is fundamental to cloud applications, is inevitably going to cause unwitting
security vulnerabilities in those applications. Your development tool of choice
should have a security model embedded in it to guide developers during the
development phase and restrict users only to their authorized data when the
system is deployed into production.
As more and more mission-critical processes are moved to the cloud, SaaS
suppliers will have to provide log data in a real-time, straightforward manner,
probably for their administrators as well as their customers' personnel. Someone
has to be responsible for monitoring for security and compliance, and unless the
application and data are under the control of end users, they will not be able to
do so. Will customers trust the cloud provider enough to push their mission-critical
applications out to the cloud? Since the SaaS provider's logs are internal and not
necessarily accessible externally or by clients or investigators, monitoring is
difficult. Since access to logs is required for Payment Card Industry Data Security
Standard (PCI DSS) compliance and may be requested by auditors and regulators,
security managers need to make sure they negotiate access to the provider's logs
as part of any service agreement.
Cloud applications undergo constant feature additions, and users must keep
up to date with application improvements to be sure they are protected. The
speed at which applications change in the cloud will affect both the SDLC and
security. For example, Microsoft's SDLC assumes that mission-critical software
will have a three- to five-year period in which it will not change substantially,
but the cloud may require a change in the application every few weeks. Even
worse, a secure SDLC will not be able to provide a security cycle that keeps up
with changes that occur so quickly. This means that users must constantly
upgrade, because an older version may not function or protect the data.

Conclusion:
The future of cloud computing is bright. The big names in computing are
throwing significant resources at it. Dell sees a huge market for cloud computing
in the future, upwards of $1 billion a year within a few years. HP, Intel, and
others are investing in it as well, and it looks like cloud computing might be the
next big thing after UMPCs.
Cloud computing is the next big wave in computing. It has many benefits,
such as better hardware management, since all the computers are the same and
run the same hardware. It also provides for better and easier management of
data security, since all the data is located on a central server, so administrators
can control who does and doesn't have access to the files.
There are some downsides to cloud computing as well. Peripherals such as
printers or scanners might have issues dealing with the fact that there is no hard
drive attached to the physical, local machine. If a user works on machines that
aren't their own and that require access to particular drivers or programs, it is
still a struggle to make those applications available to the user.
If you're looking to implement this, you have two options. You can host it
all within your own network, or you can use a device from a company that
provides the server storage, such as the Cherry Pal. I hope you have learned a lot
about cloud computing and the bright future it has in the coming years.

Thank You
