CHAPTER 1
INTRODUCTION
A user bought a license for each application from a software vendor and obtained the right
to install the application on one computer system.
3. With the development of local area networks (LANs) and greater networking capabilities, the
client-server model emerged, in which server computers with enhanced capabilities and large
storage devices could host application services and data for a large workgroup.
supports an enhanced version of Eucalyptus that uses the KVM hypervisor. This allows any user
to deploy a cloud that exposes the same API that AWS provides.
UEC effectively demonstrates that the hypervisor layer does not need to be the same for portability
to be achieved, and that open source provides viable tools for building a cloud. This white
paper provides an understanding of the UEC internal architecture and of the possibilities it offers
in terms of security, networking and scalability.
1.3.2: Introduction to Cloud Computing Architecture (SUN)
Cloud computing promises to increase the velocity with which applications are deployed, increase
innovation, and lower costs, all while increasing business agility. Sun takes an inclusive view of
cloud computing, supporting every facet of it: from the server, storage, network, and
virtualization technology that drives cloud computing environments to the software that runs in
virtual appliances, which can be used to assemble applications in minimal time. This white paper
discusses how cloud computing transforms the way we design, build, and deliver applications, and
the architectural considerations that enterprises must make when adopting and using cloud
computing technology.
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed
internally or by a third party and hosted internally or externally. Undertaking a private cloud
project requires a significant degree of engagement to virtualize the business environment, and
requires the organization to reevaluate decisions about existing resources. When done right, it can
improve business, but every step in the project raises security issues that must be addressed to
prevent serious vulnerabilities.
Private clouds have attracted criticism because users "still have to buy, build, and manage them"
and thus do not benefit from less hands-on management, essentially "[lacking] the economic model
that makes cloud computing such an intriguing concept".
A comparison of the two models:

                 Public cloud      Private cloud
Initial cost     Typically zero    Typically high
Running cost     Unpredictable     Unpredictable
Customization    Impossible        Possible
Privacy                            Yes
Single sign-on   Impossible        Possible
Scaling up
Public cloud
A cloud is called a "public cloud" when the services are rendered over a network that is open for
public use. Technically there may be little or no difference between public and private cloud
architecture; however, security considerations may be substantially different for services
(applications, storage, and other resources) that are made available by a service provider for a
public audience and when communication is effected over a non-trusted network. Generally, public
cloud service providers like Amazon AWS, Microsoft and Google own and operate the
infrastructure and offer access only via the Internet (direct connectivity is not offered).
Community cloud
Community cloud shares infrastructure between several organizations from a specific community
with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by
a third-party and hosted internally or externally. The costs are spread over fewer users than a public
cloud (but more than a private cloud), so only some of the cost savings potential of cloud
computing are realized.
Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
unique entities but are bound together, offering the benefits of multiple deployment models. Such
composition expands deployment options for cloud services, allowing IT organizations to use
public cloud computing resources to meet temporary needs.[78] This capability enables hybrid
clouds to employ cloud bursting for scaling across clouds.
Cloud bursting is an application deployment model in which an application runs in a private cloud
or data center and "bursts" to a public cloud when the demand for computing capacity increases. A
primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for
extra compute resources when they are needed.
Cloud bursting enables data centers to create an in-house IT infrastructure that supports average
workloads, and use cloud resources from public or private clouds, during spikes in processing
demands. By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain
degrees of fault tolerance combined with locally immediate usability without dependency on
internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site
(remote) server-based cloud infrastructure.
While a hybrid cloud may not offer the full security and certainty of purely in-house applications,
it combines the flexibility of in-house applications with the fault tolerance and scalability of
cloud-based services.
Distributed cloud
Cloud computing can also be provided by a distributed set of machines that are running at different
locations, while still connected to a single network or hub service. Examples of this include
distributed computing platforms such as BOINC and Folding@Home.
1.7: Architecture
over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence
in the use of tight or loose coupling as applied to mechanisms such as these and others.
The Intercloud
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet
"network of networks" on which it is based.
1.8.1 Issues
Threats and opportunities of the cloud
Nevertheless, cloud computing continues to gain momentum: 56% of major European technology
decision-makers estimate that the cloud is a priority in 2013 and 2014, and cloud budgets may
reach 30% of the overall IT budget.
According to the survey-based TechInsights Report 2013: Cloud Succeeds, cloud
implementations generally meet or exceed expectations across major service models such as
Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
Several deterrents to the widespread adoption of cloud computing remain. Among them are
reliability, availability of services and data, security, complexity, costs, regulations and legal issues,
performance, migration, reversion, the lack of standards, limited customization and issues of
privacy. The cloud offers many strong points: infrastructure flexibility, faster deployment of
applications and data, cost control, adaptation of cloud resources to real needs, improved
productivity, etc. The early-2010s cloud market is dominated by software and services delivered in
SaaS mode and by IaaS (infrastructure), especially the private cloud. PaaS and the public cloud lag
further behind.
Privacy
Privacy advocates have criticized the cloud model for giving hosting companies greater ease to
control, and thus to monitor at will, communication between host company and end user, and to
access user data (with or without permission). Instances such as the secret NSA program working
with AT&T and Verizon, which recorded over 10 million telephone calls between American
citizens, cause uncertainty among privacy advocates, as do the greater powers such programs give
telecommunication companies to monitor user activity. A cloud service provider (CSP) can
complicate data privacy because of the extent of virtualization (virtual machines) and cloud
storage used to implement cloud services. Depending on CSP operations, customer or tenant data
may not remain on the same system, or in the same data center, or even within the same
provider's cloud; this can
lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor)
to "harmonise" the legal environment, providers such as Amazon still cater to major markets
(typically the United States and the European Union) by deploying local infrastructure and
allowing customers to select "availability zones." Cloud computing poses privacy concerns because
the service provider can access the data that is on the cloud at any time. It could accidentally or
deliberately alter or even delete information.
Compliance
To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data
Protection Directive in the EU and the credit card industry's PCI DSS, users may have to
adopt community or hybrid deployment modes that are typically more expensive and may offer
restricted benefits. This is how Google is able to "manage and meet additional government policy
requirements beyond FISMA" and Rackspace Cloud or QubeSpace are able to claim PCI
compliance.
Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that
the hand-picked set of goals and standards determined by the auditor and the auditee is often not
disclosed and can vary widely. Providers typically make this information available on request,
under a non-disclosure agreement.
Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the
EU regulations on export of personal data.
U.S. Federal Agencies have been directed by the Office of Management and Budget to use a
process called FedRAMP (Federal Risk and Authorization Management Program) to assess and
authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to
federal agency Chief Information Officers on December 8, 2011 defining how federal agencies
should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security
controls specifically selected to provide protection in cloud environments. A subset has been
defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The
FedRAMP program has also established a Joint Accreditation Board (JAB) consisting of Chief
Information Officers from DoD, DHS and GSA. The JAB is responsible for establishing
accreditation standards for the third-party organizations that perform the assessments of cloud
solutions. The JAB also reviews authorization packages, and may grant provisional authorization
(to operate). The federal agency consuming the service retains final responsibility and the final
authority to operate.
A multitude of laws and regulations have forced specific compliance requirements onto many
companies that collect, generate or store data. These policies may dictate a wide array of data
storage policies, such as how long information must be retained, the process used for deleting data,
and even certain recovery plans. Below are some examples of compliance laws or regulations.
In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a
contingency plan that includes data backups, data recovery, and data access during emergencies.
The privacy laws of Switzerland demand that private data, including emails, be physically
stored in Switzerland.
In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a business
contingency plan that includes policies for data storage.
In a virtualized cloud computing environment, customers may never know exactly where their data
is stored. In fact, data may be stored across multiple data centers in an effort to improve reliability,
increase performance, and provide redundancies. This geographic dispersion may make it more
difficult to ascertain legal jurisdiction if disputes arise.
CHAPTER 2
Mr. Likhesh Kolhe
PROBLEM DEFINITION
2.1: Problem Statement
Setting up a private cloud to provide IaaS (providing an operating system to the user)
and SaaS (web application) services.
2.2: SCOPE
Cloud computing is a major change, driven by the underlying commoditization of IT. The open
source model is expected to dominate cloud computing in the future, which will address the major
adoption concerns of users. Even a small private cloud built on our college intranet
gives us an idea of the dominance of cloud computing in the near future.
The scope of the cloud built in our project is:
Scalable
- Allocate more capacity only when you need it.
- Allocate more instances only when you need them.
- Dynamic instance creation and termination upon receiving a request.
CHAPTER 3
PROJECT IMPLEMENTATION
The system starts a Node Controller if the load exceeds a specified threshold (assumed here
to be 80%) and shuts down a running Node Controller if the load stays below the threshold
for a predetermined period of time.
HTML - used for developing the interface of the word processor
MySQL - used for creation and management of databases
PHP - used for uploading and downloading of user documents
Eucalyptus
SSH Server
Operating System
Shell Script
CHAPTER 4
FEATURES OF CLOUD
Features:
Cloud computing exhibits the following key characteristics:
1. Agility: improves with users' ability to re-provision technological infrastructure resources.
2. Cost: claimed to be reduced; in a public cloud delivery model, capital expenditure is
converted to operational expenditure.
3. Device and location independence.
4. Scalability.
5. Performance monitoring.
6. Easier maintenance of cloud applications.
Types of Cloud Services
PaaS: Platform as a Service. Delivers a computing platform and solution stack as a service.
SaaS: Software as a Service.
IaaS: Infrastructure as a Service. Delivers computer infrastructure, typically a platform
virtualization environment, as a service, along with raw block storage and networking.
HaaS: Hardware as a Service. It is a procurement model similar to licensing. Generally
speaking, a managed service provider remotely monitors and administers hardware on a client's
site on a subscription basis.
A cloud's application programming interfaces (APIs) let software interact with cloud services
much as a traditional user interface (e.g., a computer desktop) facilitates interaction between
humans and computers. Cloud computing systems typically use Representational State Transfer
(REST)-based APIs.
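Eucalyptus follows this pattern with an EC2-compatible query API. The sketch below only shows the shape of such a REST request; the endpoint address matches the front end used later in this report, and the authentication parameters a real call requires (access key id, signature, timestamp) are omitted, so this is illustrative rather than a working call.

```shell
#!/bin/sh
# Shape of an EC2-style REST query against a Eucalyptus front end.
# Port 8773 with path /services/Eucalyptus is the conventional endpoint.
ENDPOINT="http://192.168.1.2:8773/services/Eucalyptus"
ACTION="DescribeInstances"
REQUEST="$ENDPOINT/?Action=$ACTION&Version=2009-04-04"
echo "$REQUEST"
```

Each API operation is selected with the Action query parameter, which is what makes the interface easy to drive from scripts and browser plugins alike.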
Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model
converts capital expenditure to operational expenditure. This purportedly lowers barriers to
entry, as infrastructure is typically provided by a third party and does not need to be purchased
for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is
fine-grained, with usage-based options, and fewer IT skills are required for in-house
implementation. The e-FISCAL project's state-of-the-art repository contains several articles
looking into cost aspects in more detail, most of them concluding that cost savings depend on the
type of activities supported and the type of infrastructure available in-house.
Device and location independence enables users to access systems using a web browser
regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is
off-site (typically provided by a third party) and accessed via the Internet, users can connect from
anywhere.
Virtualization technology allows sharing of servers and storage devices and increased
utilization. Applications can be easily migrated from one physical server to another.
Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing
for:
- centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
- peak-load capacity increases (users need not engineer for the highest possible load levels)
- utilisation and efficiency improvements for systems that are often only 10-20% utilised.
Reliability improves with the use of multiple redundant sites, which makes well-designed cloud
computing suitable for business continuity and disaster recovery. Scalability and elasticity are
provided via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service
basis in near real-time, without users having to engineer for peak loads. Performance is
monitored, and consistent, loosely coupled architectures are constructed using web services as
the system interface.
Security can improve due to centralization of data, increased security-focused resources, etc.,
but concerns can persist about loss of control over certain sensitive data, and the lack of
security for stored kernels. Security is often as good as or better than other traditional systems,
in part because providers are able to devote resources to solving security issues that many
customers cannot afford to tackle. However, the complexity of security is greatly increased
when data is distributed over a wider area or over a greater number of devices, as well as in
multi-tenant systems shared by unrelated users. In addition, user access to security audit
logs may be difficult or impossible. Private cloud installations are in part motivated by users'
desire to retain control over the infrastructure and avoid losing control of information security.
Maintenance of cloud computing applications is easier, because they do not need to be installed
on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies
"five essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such
as server time and network storage, as needed automatically without requiring human
interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand. ...
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service
(e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and consumer
of the utilized service.
On-demand self-service
On-demand self-service allows users to obtain, configure and deploy cloud services themselves
using cloud service catalogues, without requiring the assistance of IT. This feature is listed by
the National Institute of Standards and Technology (NIST) as a characteristic of cloud
computing. The self-service requirement of cloud computing prompts infrastructure vendors to
create cloud computing templates, which are obtained from cloud service catalogues.
Manufacturers of such templates or blueprints include BMC Software (BMC), with Service
Blueprints as part of its cloud management platform; Hewlett-Packard (HP), which calls its
templates HP Cloud Maps; RightScale; and Red Hat, which names its templates CloudForms.
The templates contain predefined configurations used by consumers to set up cloud services.
The templates or blueprints provide the technical information necessary to build ready-to-use
clouds. Each template includes specific configuration details for different cloud infrastructures,
with information about servers for specific tasks such as hosting applications, databases,
websites and so on. The templates also include predefined Web service, the operating system,
the database, security configurations and load balancing.
Cloud computing consumers use cloud templates to move applications between clouds through
a self-service portal. The predefined blueprints define all that an application requires to run in
different environments. For example, a template could define how the same application could
be deployed on cloud platforms based on Amazon Web Services, VMware or Red Hat. The user
organization benefits from cloud templates because the technical aspects of cloud
configurations reside in the templates, letting users deploy cloud services at the push of a
button. Developers can use cloud templates to create a catalog of cloud services.
CHAPTER 5
ADVANTAGES OF CLOUD
COMPUTING SYSTEM
CHAPTER 6
CLOUD COMPUTING
IMPLEMENTATION
6.2: Setting up Eucalyptus Cloud on KVM
In any Eucalyptus cloud installation there are two top-level components: the Cloud
Controller (CLC) and Walrus. These two components manage the various clusters, where a cluster
is a set of physical machines that host the virtual instances. In each cluster there are components
that interact with the top-level components: the Cluster Controller (CC) and the Storage
Controller (SC). CC and SC are cluster-level components. Each cluster is composed of various
nodes, or physical
machines. Each node runs a Node Controller (NC) that controls the hypervisor for
managing the virtual instances. For this setup, we have implemented a single-cluster installation,
where all the components except the NC are co-located on one machine. Per the Eucalyptus
documentation, this co-located system is called the front end. So, in a gist, we have one physical
machine which hosts the CLC, Walrus, CC and SC, and five other machines, each hosting an NC.
The Node Controller uses KVM as the hypervisor. The NC service runs on the host kernel
in the KVM setup.
Hardware:
We used one admin machine (Intel Core 2 Duo processor, 1.8 GHz, 1 GB RAM, 160 GB
HDD) and five Node Controllers, each with an Intel Core 2 Duo processor at 1.8 GHz (VT
enabled), 2 GB RAM and a 160 GB HDD.
6.3: Eucalyptus Front End
The Eucalyptus front end hosts the Cloud Controller, Storage Controller and Cluster
Controller services. It exposes AWS-compatible WS (Web Services) interfaces.
gateway 192.168.1.1
dns-nameservers 59.185.0.23
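These two lines are the tail of a static network configuration in /etc/network/interfaces. A complete stanza consistent with the addresses used elsewhere in this setup might look like the following; the interface name, netmask, and the choice of 192.168.1.2 (the front end) are assumptions here:

```
auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 59.185.0.23
```

A static address matters because the other components are registered against this IP.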
6.5: Registering the Node Controller on the Front End
Now we will configure the Cloud Controller and allow it to register the new node that we
have set up. For this, we log in to the Eucalyptus Cloud Controller box and use the euca_conf
application to register the new node.
Use the following command. The IP address mentioned in the command refers to the machine on
which the Eucalyptus Node Controller service is running.
Now we need to obtain the credentials from the command line of the Cloud Controller. The
Eucalyptus Cloud Controller will attempt to register the new node, and we can check for a
successful registration with the following command, which produces console output on our setup.
If the command works properly, we can be sure that Eucalyptus is working fine, and we can
proceed to run a new instance on the cloud.
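Based on standard euca_conf usage, the node-registration step can be sketched as follows. euca_conf is mocked here so the sequence reads as one runnable unit; on the front end the real binary lives under $EUCALYPTUS/usr/sbin/ and is run with sudo, and the node address 192.168.1.3 is an assumption for illustration.

```shell
#!/bin/sh
# Sketch of registering an NC from the CLC. The euca_conf function below is a
# mock that prints each invocation; replace it with the real binary
# (sudo $EUCALYPTUS/usr/sbin/euca_conf) on a live front end.
euca_conf() { echo "euca_conf $*"; }

euca_conf --register-nodes "192.168.1.3"   # register the node with the CLC
euca_conf --list-nodes                     # verify that the node registered
```

The --register-nodes flag follows the same pattern as the component-registration commands listed next.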
30. Register the front-end components and the nodes:
sudo $EUCALYPTUS/usr/sbin/euca_conf --register-walrus 192.168.1.2
sudo $EUCALYPTUS/usr/sbin/euca_conf --register-cluster homecloud 192.168.1.2
sudo $EUCALYPTUS/usr/sbin/euca_conf --register-sc homecloud 192.168.1.2
6.6: Adding VM Images
Adding a VM image to the Eucalyptus cloud requires us to:
a) Download a VM image
b) Add the root disk image and the kernel/ramdisk pair to Walrus (the storage service)
c) Register the image with Eucalyptus
First, we downloaded an image from http://uec-images.ubuntu.com/releases/, in this
case http://uec-images.ubuntu.com/releases/9.10/rc/ubuntu-9.10-rc-uec-i386.tar.gz
We will now bundle the kernel, initrd and the OS image:
1) Unpack the downloaded image from the tarball
2) Bundle the kernel
3) Upload the kernel bundle
4) Register the kernel bundle with Eucalyptus
5) Bundle the ramdisk
6) Upload the ramdisk bundle
7) Register the ramdisk bundle with Eucalyptus
8) Bundle the image (this step takes a little time, depending on the size of the image)
9) Upload the image bundle
10) Register the image with Eucalyptus
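Steps 1 to 10 map onto the euca2ools bundling commands. In the sketch below the tools are wrapped in a mock `run` that prints each command, so the whole sequence can be shown as one unit; the file names follow the usual contents of the Ubuntu 9.10 UEC tarball and the bucket names are illustrative assumptions, so adjust them to the actual unpacked files. The eki-/eri- placeholders stand for the ids printed by the earlier register steps.

```shell
#!/bin/sh
# Sketch of steps 1-10 using euca2ools. 'run' just prints each command; on the
# front end (with credentials sourced) you would execute them directly.
run() { echo "$*"; }

run tar zxf ubuntu-9.10-rc-uec-i386.tar.gz                                 # 1) unpack
run euca-bundle-image -i karmic-uec-i386-vmlinuz-virtual --kernel true     # 2) bundle kernel
run euca-upload-bundle -b kernel-bkt -m /tmp/karmic-uec-i386-vmlinuz-virtual.manifest.xml  # 3)
run euca-register kernel-bkt/karmic-uec-i386-vmlinuz-virtual.manifest.xml  # 4) prints eki-...
run euca-bundle-image -i karmic-uec-i386-initrd-virtual --ramdisk true     # 5) bundle ramdisk
run euca-upload-bundle -b ramdisk-bkt -m /tmp/karmic-uec-i386-initrd-virtual.manifest.xml  # 6)
run euca-register ramdisk-bkt/karmic-uec-i386-initrd-virtual.manifest.xml  # 7) prints eri-...
run euca-bundle-image -i karmic-uec-i386.img --kernel eki-XXXXXXXX --ramdisk eri-XXXXXXXX  # 8)
run euca-upload-bundle -b image-bkt -m /tmp/karmic-uec-i386.img.manifest.xml               # 9)
run euca-register image-bkt/karmic-uec-i386.img.manifest.xml               # 10) prints emi-...
```

The emi- id printed by the final step is what identifies the image when launching instances.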
32. Download credentials
a) Go to https://192.168.1.2:8443/ and log in with username "admin" and password "admin".
b) Enter a new password and the administrator's e-mail address.
c) Click on the Credentials tab, then click the Download Credentials button.
d) Save the file to the desktop.
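Once downloaded, the credentials archive is unpacked and its eucarc file is sourced so that command-line tools can authenticate. The archive name euca2-admin-x509.zip is the Eucalyptus default and an assumption here; in the sketch a one-line eucarc is written in place of unzipping, so the sourcing step itself is demonstrable.

```shell
#!/bin/sh
# Sketch: unpack the credentials and source eucarc. In a real setup:
#   unzip ~/Desktop/euca2-admin-x509.zip -d ~/.euca
# Here a minimal eucarc is written instead, to show what sourcing provides.
CRED_DIR="$HOME/.euca"
mkdir -p "$CRED_DIR"
printf 'export EC2_URL=http://192.168.1.2:8773/services/Eucalyptus\n' > "$CRED_DIR/eucarc"
. "$CRED_DIR/eucarc"
echo "$EC2_URL"   # the endpoint that euca2ools (and HybridFox) will talk to
```

The same eucarc also exports the access and secret keys used by HybridFox in the next section.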
6.7: Configure Eucalyptus Tools
One of the popular tools to manage both Amazon and Eucalyptus EC2 instances is HybridFox.
It is a Mozilla Firefox plugin and integrates well with the Eucalyptus cloud. It allows the user
to manage EC2 instances and EBS volumes: the user can create, stop and start instances, attach
EBS volumes and even take EBS snapshots. We have covered HybridFox briefly through a series
of screenshots.
HybridFox can be downloaded from http://code.google.com/p/hybridfox/downloads/list
To install, just drag and drop the HybridFox.xpi file into the Mozilla Firefox browser. Once
installed, the plugin can be accessed from the Tools menu of the browser.
We will now cover the configuration of HybridFox. The user must click on the Regions
button as shown below, and enter the Region Name and EndPoint URL. The EndPoint URL must
point to the IP address of the Cloud Controller; the Region Name can be anything the user likes.
Once added, we must select the added region from the Regions drop-down.
Now we need the credentials that HybridFox requires to make secure Web Service calls to the
Cloud Controller. To retrieve the credentials, we must log in to the Eucalyptus Admin Tool as
shown in the next screenshot.
We will use the Query ID and Secret Key available from the Credentials tab of the Admin
Console.
The user must now get back to HybridFox and click on the Credentials button.
The Query ID obtained above becomes the AWS Access Key, and the Secret Key becomes
the AWS Secret Access Key. The account name can be anything the user wants.
Once the credentials are added, the user must select them from the drop-down and hit
refresh in the browser. This allows HybridFox to access the Cloud Controller with this
configuration.
The screenshot below shows the list of images available. The user can select any EMI
and launch an instance of it. The subsequent screenshot shows the running instance.
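The same launch can also be performed from the command line with euca2ools. As in the earlier sketches, the commands are wrapped in a mock `run` so the sequence can be shown as one unit; the EMI id, instance id and keypair name are illustrative assumptions.

```shell
#!/bin/sh
# CLI equivalent of launching an instance from HybridFox. 'run' prints each
# command; execute them directly on a machine with sourced credentials.
run() { echo "$*"; }

run euca-add-keypair mykey                                # create a keypair for SSH access
run euca-run-instances emi-XXXXXXXX -k mykey -t m1.small  # launch an instance of an EMI
run euca-describe-instances                               # watch it go pending -> running
run euca-terminate-instances i-XXXXXXXX                   # shut the instance down when done
```

Terminating unused instances matters on a small cluster like ours, since each running instance holds node capacity.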
The following tasks can be undertaken to enhance the cloud's capabilities and performance:
Automatically turning Node Controllers on/off: since we can switch off idle nodes from the
Cloud Controller, we have to sense the load on the nodes, compare it with the set threshold, and
turn them off. When a fresh load arises and there are no free nodes to handle it, we have to
automatically switch the idle nodes back on from the Cloud Controller.
Dynamic instance creation and termination: if the number of requests for a particular service
exceeds the threshold the nodes can serve, a new instance of that service is launched
dynamically; if no users are using the service, that instance is terminated dynamically.
More applications can be provided on the cloud, for example image processing services, video
compression, and document format converters.
Develop a word processor application: develop a word processor application in PHP that stores
user documents in a database and retrieves them for display.
CHAPTER 7
CONCLUSION
Cloud computing is the next big wave in computing. It has many benefits, such as better
hardware management, since all the computers run the same hardware. It also provides for better
and easier management of data security: since all the data is located on a central server,
administrators can control who does and doesn't have access to the files.
There are some downsides to cloud computing as well. Peripherals such as printers or scanners
might have issues with the fact that there is no hard drive attached to the physical, local machine.
And if a user works on machines that aren't their own and that require access to particular drivers
or programs, it is still a struggle to make such an application available to that user.
For those looking to implement this, there are two options: host it all within your own network, or
use a device from a company that provides the server storage, such as the CherryPal.
REFERENCES
WHITE PAPERS:
http://www.eucalyptus.com/pdf/whitepapers/Cloud_Builder_Guide.pdf
http://www.eucalyptus.com/pdf/whitepapers/IT_Climatology.pdf
http://www.eucalyptus.com/pdf/whitepapers/Five_Steps_Cloud.pdf
http://www.eucalyptus.com/pdf/whitepapers/Eucalyptus_Overview.pdf
WEBSITES:
http://www.ubuntu.com
http://www.zoho.com
http://www.cacoo.com
http://aws.amazon.com
http://ubuntuforums.org/forumdisplay.php
http://www.ubuntuforums.com
http://www.eucalyptus.com/downloads/whitepapers
http://www.eucalyptus.com/downloads/repo/eucalyptus/2.0.3/natty/universe