ENGINEERING COLLEGES
2016 – 17 Odd Semester

IMPORTANT QUESTIONS & ANSWERS

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SUBJECT CODE: CS6703

SUBJECT NAME: GRID AND CLOUD COMPUTING

REGULATION: 2013          SEMESTER AND YEAR: VII & IV


INDEX

S.NO  CONTENTS

A  Aim & Objective
B  Syllabus
C  Detailed Lesson Plan

UNIT I
1  PART – A
2  PART – B
3  Technologies for Network Based Systems
4  Virtual Machine
5  Service Oriented Architecture (SOA)
6  Grid Architecture
7  Grid Standards
8  GPU and Elements of Grids
9  Data Center in Cloud Computing

UNIT II
10  PART – A
11  PART – B
12  OGSA
13  Services Provided by OGSA
14  OGSI
15  Data Intensive Grid Service Model
16  Grid Service Handle, Grid Service Migration, OGSI Security Model

UNIT III
17  PART A
18  PART B
19  Cloud Reference Model
20  Cloud Deployment Models
21  Services Provided by Cloud
22  Implementation Levels of Virtualization
23  Virtualization Structures / Tools and Mechanisms
24  Virtualization of CPU, Memory and I/O Devices
25  Virtualization for Data Center Automation
26  Pros and Cons of Cloud Computing

UNIT IV
27  PART A
28  PART B
29  Configuring and Testing of Globus Toolkit
30  Components of Globus Toolkit 4
31  Map and Reduce Functions in the Hadoop Framework Using a Java Program
32  HDFS Concepts
33  Hadoop File System and Command Line Interface
34  HDFS Java Interface
35  Data Flow Methods in HDFS

UNIT V
36  PART A
37  PART B
38  Grid Security Infrastructure
39  Network, Host and Application Level Security in the Cloud
40  Aspects of Data Security and Provider Data Security
41  Identity and Access Management Architecture
42  IAM Practices in the Cloud
43  SaaS, PaaS, IaaS Availability in the Cloud
44  Industrial Connectivity and Latest Developments
45  University Question Paper


AIM AND OBJECTIVE

The student should be made to:

 Understand how Grid computing helps in solving large-scale scientific problems.
 Gain knowledge on the concept of virtualization that is fundamental to cloud computing.
 Learn how to program the grid and the cloud.
 Understand the security issues in the grid and cloud environments.


ANNA UNIVERSITY, CHENNAI-25


SYLLABUS COPY
REGULATION 2013
CS6703 GRID AND CLOUD COMPUTING                L T P C
                                               3 0 0 3
UNIT I INTRODUCTION 9
Evolution of Distributed computing: Scalable computing over the Internet –
Technologies for network based systems – clusters of cooperative computers - Grid
computing Infrastructures – cloud computing - service oriented architecture –
Introduction to Grid Architecture and standards – Elements of Grid – Overview of
Grid Architecture.
UNIT II GRID SERVICES 9
Introduction to Open Grid Services Architecture (OGSA) – Motivation –
Functionality Requirements – Practical & Detailed view of OGSA/OGSI – Data
intensive grid service models – OGSA services.
UNIT III VIRTUALIZATION 9
Cloud deployment models: public, private, hybrid, community – Categories of cloud
computing: Everything as a service: Infrastructure, platform, software - Pros and
Cons of cloud computing – Implementation levels of virtualization – virtualization
structure – virtualization of CPU, Memory and I/O devices – virtual clusters and
Resource Management – Virtualization for data center automation.
UNIT IV PROGRAMMING MODEL 9
Open source grid middleware packages – Globus Toolkit (GT4) Architecture ,
Configuration – Usage of Globus – Main components and Programming model -
Introduction to Hadoop Framework - Mapreduce, Input splitting, map and reduce
functions, specifying input and output parameters, configuring and running a job –
Design of Hadoop file system, HDFS concepts, command line and java interface,
dataflow of File read & File write.
UNIT V SECURITY 9
Trust models for Grid security environment – Authentication and Authorization
methods – Grid security infrastructure – Cloud Infrastructure security: network, host
and application level – aspects of data security, provider data and its security, Identity
and access management architecture, IAM practices in the cloud, SaaS, PaaS, IaaS
availability in the cloud, Key privacy issues in the cloud.
TOTAL: 45 PERIODS
TEXT BOOK:
1. Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, "Distributed and Cloud
Computing: Clusters, Grids, Clouds and the Future of Internet", First Edition,
Morgan Kaufmann Publishers, an Imprint of Elsevier, 2012.

REFERENCES:
1. Jason Venner, "Pro Hadoop: Build Scalable, Distributed Applications in the
Cloud", Apress, 2009.
2. Tom White, "Hadoop: The Definitive Guide", First Edition, O'Reilly, 2009.
3. Bart Jacob (Editor), "Introduction to Grid Computing", IBM Redbooks,
Vervante, 2005.


4. Ian Foster, Carl Kesselman, "The Grid: Blueprint for a New Computing
Infrastructure", 2nd Edition, Morgan Kaufmann.
5. Frederic Magoules and Jie Pan, "Introduction to Grid Computing", CRC Press,
2009.
6. Daniel Minoli, "A Networking Approach to Grid Computing", John Wiley
Publication, 2005.
7. Barry Wilkinson, "Grid Computing: Techniques and Applications", Chapman
and Hall/CRC, Taylor and Francis Group, 2010.


DETAILED LESSON PLAN


TEXT BOOK:
1. Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, "Distributed and Cloud
Computing: Clusters, Grids, Clouds and the Future of Internet", First Edition,
Morgan Kaufmann Publishers, an Imprint of Elsevier, 2012.

REFERENCES:
1. Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, "Distributed and Cloud
Computing: From Parallel Processing to the Internet of Things", First Edition,
Morgan Kaufmann Publishers, an Imprint of Elsevier, 2012.
2. Jason Venner, "Pro Hadoop: Build Scalable, Distributed Applications in the
Cloud", Apress, 2009.
3. Tom White, "Hadoop: The Definitive Guide", First Edition, O'Reilly, 2009.
4. Bart Jacob (Editor), "Introduction to Grid Computing", IBM Redbooks,
Vervante, 2005.
5. Ian Foster, Carl Kesselman, "The Grid: Blueprint for a New Computing
Infrastructure", 2nd Edition, Morgan Kaufmann.
6. Frederic Magoules and Jie Pan, "Introduction to Grid Computing", CRC Press,
2009.
7. Daniel Minoli, "A Networking Approach to Grid Computing", John Wiley
Publication, 2005.
8. Barry Wilkinson, "Grid Computing: Techniques and Applications", Chapman
and Hall/CRC, Taylor and Francis Group, 2010.
9. Joshy Joseph, Craig Fellenstein, "Grid Computing", Pearson Education, 2009.
10. Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, "Mastering Cloud
Computing: Foundations and Applications Programming", Morgan Kaufmann
Publishers, an Imprint of Elsevier.
11. Tim Mather, Subra Kumaraswamy, Shahed Latif, "Cloud Security and Privacy:
An Enterprise Perspective on Risks and Compliance", O'Reilly Publications.

S.No  Topic/Portions to be Covered  (Hours Required, Cumulative Hours, Book Referred)

UNIT – I INTRODUCTION
1. Evolution of Distributed Computing: Scalable computing over the Internet (1 hr, cum. 1, R1)
2. Technologies for network based systems (1 hr, cum. 2, T1)
3. Technologies for network based systems (continued) (1 hr, cum. 3, T1)
4. Clusters of cooperative computers (1 hr, cum. 4, T1)
5. Grid computing infrastructures (1 hr, cum. 5, T1)
6. Cloud computing (1 hr, cum. 6, T1)
7. Service oriented architecture (1 hr, cum. 7, T1)
8. Introduction to Grid Architecture and standards (1 hr, cum. 8, T1)
9. Elements of Grid and architecture overview (1 hr, cum. 9, T1)

UNIT – II GRID SERVICES
10. Introduction to Open Grid Services Architecture (OGSA): motivation, functional requirements (1 hr, cum. 10, R9)
11. OGSA architecture (1 hr, cum. 11, R9)
12. Practical & detailed view of OGSA/OGSI (2 hrs, cum. 13, R9)
13. OGSI grouping (1 hr, cum. 14, R1)
14. Data intensive grid service models (1 hr, cum. 15, R9)
15. OGSA services: naming and change management recommendation (1 hr, cum. 16, R9)
16. OGSA services: life cycle management (1 hr, cum. 17, R9)
17. OGSA services: interfaces (1 hr, cum. 18, R9)
18. Programming model (1 hr, cum. 19, R9)

UNIT – III VIRTUALIZATION
19. Cloud deployment models: public, private, hybrid, community (1 hr, cum. 20, R10)
20. Categories of cloud computing (1 hr, cum. 21, R10)
21. Everything as a service: infrastructure, platform, software (1 hr, cum. 22, R10)
22. Pros and cons of cloud computing (1 hr, cum. 23, R10)
23. Implementation levels of virtualization (1 hr, cum. 24, R1)
24. Virtualization structure (1 hr, cum. 25, R1)
25. Virtualization of CPU, memory and I/O devices (1 hr, cum. 26, R1)
26. Virtual clusters and resource management (1 hr, cum. 27, R1)
27. Virtualization for data center automation (1 hr, cum. 28, R1)

UNIT – IV PROGRAMMING MODEL
28. Open source grid middleware packages (1 hr, cum. 29, R1)
29. Globus Toolkit (GT4) architecture and usage (1 hr, cum. 30, R1)
30. GT4 main components and configuration (1 hr, cum. 31, R4)
31. GT4 programming model (1 hr, cum. 32, R4)
32. Introduction to Hadoop framework: MapReduce, input splitting (1 hr, cum. 33, R2)
33. Map and reduce functions, specifying input and output parameters, configuring and running a job (1 hr, cum. 34, R2)
34. Design of Hadoop file system, HDFS concepts (1 hr, cum. 35, R2)
35. Command line and Java interface, dataflow of file read and file write (1 hr, cum. 36, R2)

UNIT – V SECURITY
36. Trust models for Grid security environment (1 hr, cum. 37, R1)
37. Authentication and authorization methods (1 hr, cum. 38, R1)
38. Grid security infrastructure (1 hr, cum. 39, R1)
39. Cloud infrastructure security: network, host and application level (1 hr, cum. 40, R11)
40. Aspects of data security, provider data and its security (1 hr, cum. 41, R11)
41. Identity and access management architecture (1 hr, cum. 42, R11)
42. IAM practices in the cloud; SaaS, PaaS, IaaS availability in the cloud (1 hr, cum. 43, R11)
43. IAM practices in the cloud; SaaS, PaaS, IaaS availability in the cloud (continued) (1 hr, cum. 44, R11)
44. Key privacy issues in the cloud (1 hr, cum. 45, R11)

Total: 45 periods.


UNIT – I

INTRODUCTION

Evolution of Distributed computing: Scalable computing over the Internet – Technologies for network based systems – Clusters of cooperative computers – Grid computing infrastructures – Cloud computing – Service oriented architecture – Introduction to Grid Architecture and standards – Elements of Grid – Overview of Grid Architecture.

PART – A

1. What is a cluster? List out its benefits.

A cluster is often a collection of homogeneous compute nodes that are physically connected in close range to one another.
Benefits of clusters:
 Scalable performance
 Programmability
 Efficient message passing
 High system availability
 Seamless fault tolerance
 Cluster wide job management
 Dynamic load balancing

2. What is cloud computing? Mention the characteristics of cloud computing.

Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the data centers that provide those services.
The characteristics of cloud computing are:
 On-demand usage
 Ubiquitous access
 Multitenancy


 Elasticity
 Measured usage
 Resiliency

3. Differentiate centralized and parallel computing.

Centralized Computing: a computing paradigm by which all computer resources are centralized in one physical system. All resources (processors, memory, and storage) are fully shared and tightly coupled within one integrated OS.

Parallel Computing: all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.

4. What is distributed computing?

Distributed computing is a field of computer science/engineering that studies distributed systems. A distributed system consists of multiple autonomous computers,
each having its own private memory, communicating through a computer network.
Information exchange in a distributed system is accomplished through message
passing.

5. What is meant by ubiquitous computing and Internet of Things?

Ubiquitous computing refers to computing with pervasive devices at any place and
time using wired or wireless communication.
The Internet of Things (IoT) is a networked connection of everyday objects including
computers, sensors, humans, etc. The IoT is supported by Internet clouds to achieve
ubiquitous computing with any object at any place and time.


6. What are SAN and NAS?

A storage area network (SAN) connects servers to network storage such as disk
arrays.
Network attached storage (NAS) connects client hosts directly to the disk arrays.

7. List out the benefits of virtual machines.

Virtual machines (VMs) offer novel solutions to underutilized resources, application


inflexibility, software manageability, and security concerns in existing physical
machines.

8. What are hypervisor, bare-metal hypervisor and VMM?

Hypervisor: In compute virtualization, a virtualization layer resides between the hardware and the virtual machines. This virtualization layer is also known as the hypervisor. It provides standardized hardware resources (CPU, memory, network, etc.) to all the virtual machines.
Virtual Machine Monitor (VMM): The VMM is responsible for actually executing commands on the CPUs and performing binary translations. It abstracts the hardware to appear as a physical machine with its own CPU, memory and I/O devices. When a VM starts running, control is transferred to the VMM, which subsequently begins executing instructions from the virtual machine (VM).
Bare-metal hypervisor: A bare-metal hypervisor is installed directly on the x86-based hardware and has direct access to the hardware resources. It is therefore more efficient than a hosted hypervisor.

9. Define SSI.
Greg Pfister has indicated that an ideal cluster should merge multiple system images
into a single-system image (SSI). An SSI is an illusion created by software or
hardware that presents a collection of resources as one integrated, powerful resource.


SSI makes the cluster appear like a single machine to the user. A cluster with multiple
system images is nothing but a collection of independent computers.

10. Define grid computing. What are all the types of grid systems?

Grid computing has attracted global technical communities with the evolution of
Business on-demand computing and Autonomic computing. Grid computing is the
process of coordinated resource sharing and problem solving in dynamic, multi-
institutional virtual organizations.
Types of Grid systems:
Computational and data grids – provide computing utility, data and information services through resource sharing and cooperation among participating organizations.
P2P grids – mainly for distributed computing and collaboration applications that have no fixed structure; P2P grids are unreliable, resources are contributed and controlled by users, and their use is limited to a few applications.

11. What is grid and cloud?

A grid is an environment that allows service oriented, flexible and seamless sharing of
heterogeneous network of resources for compute intensive and data intensive tasks
and provides faster throughput and scalability at lower costs. The distinct benefits of
using grids include performance with scalability, resource utilization, management
and reliability and virtualization.
Cloud - A cloud is a pool of virtualized computer resources. A cloud can host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications.

12. List out the benefits of grid computing.

Grid computing environment provides more computational capabilities and helps to


increase the efficiency and scalability of the infrastructure and resource sharing.

Grid computing provides a single interface for managing the heterogeneous resources.
It can create a more robust and resilient infrastructure through the use of
decentralization, fail-over and fault tolerance to make the infrastructure better suited
to respond to minor or major disasters.

13. What is SOA?

A service-oriented architecture is an information technology approach or strategy in


which applications make use of (perhaps more accurately, rely on) services available
in a network such as the World Wide Web. Implementing a service-oriented
architecture can involve developing applications that use services, making
applications available as services so that other applications can use those services, or
both.

14. Mention the reasons for the emergence of cloud computing.

Traditionally, a distributed computing system tends to be owned and operated by an


autonomous administrative domain (e.g., a research laboratory or company) for on-
premises computing needs. However, these traditional systems have encountered
several performance bottlenecks: constant system maintenance, poor utilization, and
increasing costs associated with hardware/software upgrades. Cloud computing as an
on-demand computing paradigm resolves or relieves us from these problems.

15. List out the design goals and requirement of HPC and HTC systems.

The design goals are throughput, efficiency, scalability and reliability


The design requirements are efficiency, dependability, adaptation in the programming model, and flexibility in application deployment.


PART – B

1. TECHNOLOGIES FOR NETWORK BASED SYSTEMS

 Discuss in detail about technologies for network based systems.

System Models for Distributed and Cloud Computing

 Distributed and cloud computing systems are built over a large number of
autonomous computer nodes. These node machines are interconnected by
SANs, LANs, or WANs in a hierarchical manner.
 With today’s networking technology, a few LAN switches can easily connect
hundreds of machines as a working cluster.
 A WAN can connect many local clusters to form a very large cluster of
clusters. In this sense, one can build a massive system with millions of
computers connected to edge networks. Massive systems are considered highly
scalable, and can reach web-scale connectivity, either physically or logically.
 Massive systems are classified into four groups: clusters, P2P networks,
computing grids, and Internet clouds over huge data centres. In terms of node
number, these four system classes may involve hundreds, thousands, or even
millions of computers as participating nodes. These machines work
collectively, cooperatively, or collaboratively at various levels.

Clusters of Cooperative Computers

A computing cluster consists of interconnected stand-alone computers which


work cooperatively as a single integrated computing resource. Clustered computer
systems have demonstrated impressive results in handling heavy workloads with large
data sets.


 Cluster Architecture

o The Figure 1.1 below shows the architecture of a typical server cluster


built around a low-latency, high bandwidth interconnection network.
o This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g.,
Ethernet).
o To build a larger cluster with more nodes, the interconnection network
can be built with multiple levels of Gigabit Ethernet, Myrinet, or
InfiniBand switches. Through hierarchical construction using a SAN,
LAN, or WAN, one can build scalable clusters with an increasing
number of nodes.
o The cluster is connected to the Internet via a virtual private network
(VPN) gateway. The gateway IP address locates the cluster.
o The system image of a computer is decided by the way the OS manages
the shared cluster resources.
o Most clusters have loosely coupled node computers.
o All resources of a server node are managed by their own OS. Thus, most
clusters have multiple system images as a result of having many
autonomous nodes under different OS control.

Figure 1.1 A cluster of servers interconnected by a high-bandwidth SAN or LAN


with shared I/O devices and disk arrays; the cluster acts as a single computer
attached to the Internet.


 Single-System Image:

o An SSI is an illusion created by software or hardware that presents a


collection of resources as one integrated, powerful resource.
o SSI makes the cluster appear like a single machine to the user.
o A cluster with multiple system images is nothing but a collection of
independent computers.

 Hardware, Software, and Middleware Support

o Clusters exploring massive parallelism are commonly known as MPPs.


Almost all HPC clusters in the Top 500 list are also MPPs.
o The building blocks are computer nodes (PCs, workstations, servers, or SMP), special communication software such as PVM or MPI, and a network interface card in each computer node. (A minimal message-passing sketch follows this list.)
o Most clusters run under the Linux OS.
o The computer nodes are interconnected by a high-bandwidth network
(such as Gigabit Ethernet, Myrinet, InfiniBand, etc.).
o Special cluster middleware supports are needed to create SSI or high
availability (HA). Both sequential and parallel applications can run on
the cluster, and special parallel environments are needed to facilitate use
of the cluster resources.
o Users may want all distributed memory to be shared by all servers by
forming distributed shared memory (DSM).
o Many SSI features are expensive or difficult to achieve at various cluster
operational levels. Instead of achieving SSI, many clusters are loosely
coupled machines.
o Using virtualization, one can build many virtual clusters dynamically,
upon user demand.
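The notes above name PVM and MPI as the special communication software of a cluster. The following is a small illustrative sketch, not from the textbook, of the blocking point-to-point send/receive primitive such libraries provide, written here with plain Java TCP sockets; all class and method names are our own, and real clusters would use MPI over a low-latency interconnect rather than TCP.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class MiniMessagePassing {
    // Blocking receive: wait for one message from any peer on the given port.
    static String receive(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket peer = server.accept();
             DataInputStream in = new DataInputStream(peer.getInputStream())) {
            return in.readUTF();
        }
    }

    // Blocking send: connect to a peer node and deliver one message.
    static void send(String host, int port, String msg) throws Exception {
        try (Socket peer = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(peer.getOutputStream())) {
            out.writeUTF(msg);
        }
    }

    public static void main(String[] args) throws Exception {
        // Both "nodes" run in one process here purely for demonstration.
        Thread node0 = new Thread(() -> {
            try {
                System.out.println("node0 received: " + receive(5000));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        node0.start();
        Thread.sleep(200);                       // give the receiver time to bind
        send("localhost", 5000, "partial result from node1");
        node0.join();
    }
}

Middleware such as MPI layers collective operations, process groups, and fault handling on top of exactly this kind of primitive.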


 Major Cluster Design Issues

o Unfortunately, a cluster-wide OS for complete resource sharing is not


available yet.
o Middleware or OS extensions were developed at the user space to
achieve SSI at selected functional levels. Without this middleware,
cluster nodes cannot work together effectively to achieve cooperative
computing. The software environments and applications must rely on
the middleware to achieve high performance.
o The cluster benefits come from scalable performance, efficient message passing, high system availability, seamless fault tolerance, and cluster-wide job management, as summarized in the table below.

Availability and Support: hardware and software support for sustained high availability (HA) in the cluster. Feasible implementations: failover, failback, checkpointing, rollback recovery, nonstop OS, etc.

Hardware Fault Tolerance: automated failure management to eliminate all single points of failure. Feasible implementations: component redundancy, hot swapping, RAID, multiple power supplies, etc.

Single System Image (SSI): achieving SSI at functional level with hardware and software support, middleware, or OS extensions. Feasible implementations: hardware mechanisms or middleware support to achieve DSM at coherent cache level.

Efficient Communications: to reduce message-passing system overhead and hide latencies. Feasible implementations: fast message passing, active messages, enhanced MPI library, etc.

Cluster-wide Job Management: using a global job management system with better scheduling and monitoring. Feasible implementations: application of single-job management systems such as LSF, Codine, etc.

Dynamic Load Balancing: balancing the workload of all processing nodes along with failure recovery. Feasible implementations: workload monitoring, process migration, job replication and gang scheduling, etc.

Scalability and Programmability: adding more servers to a cluster or adding more clusters to a grid as the workload or data set increases. Feasible implementations: use of scalable interconnect, performance monitoring, distributed execution environment, and better software tools.
Table 1.1 - Critical Cluster Design Issues and Feasible Implementations

Grid Computing Infrastructures

In the past 30 years, users have experienced a natural growth path from Internet to
web and grid computing services.
Grid computing is envisioned to allow close interaction among applications running
on distant computers simultaneously.

 Computational Grids:

o Like an electric utility power grid, a computing grid offers an


infrastructure that couples computers, software/middleware, special
instruments, and people and sensors together.
o The grid is often constructed across LAN, WAN, or Internet backbone
networks at a regional, national, or global scale.


o They can also be viewed as virtual platforms to support virtual


organizations. The computers used in a grid are primarily workstations,
servers, clusters, and supercomputers.
o Personal computers, laptops, and PDAs can be used as access devices to
a grid system.
o The Figure 1.2 below shows an example computational grid built over multiple resource sites owned by different organizations.
o The resource sites offer complementary computing resources, including
workstations, large servers, a mesh of processors, and Linux clusters to
satisfy a chain of computational needs.
o The grid is built across various IP broadband networks including LANs
and WANs already used by enterprises or organizations over the
Internet.
o The grid is presented to users as an integrated resource pool, as shown in the upper half of Figure 1.2 below.

Figure 1.2 - Computational grid or data grid providing computing utility, data,
and information services through resource sharing and cooperation among
participating organizations.

o At the server end, the grid is a network.


o At the client end, there are wired or wireless terminal devices. The grid integrates the computing, communication, contents, and transactions as rented services. Enterprises and consumers form the user base, which then defines the usage trends and service characteristics.

 Grid Families

o Grid technology demands new distributed computing models,


software/middleware support, network protocols, and hardware
infrastructures.
o New grid service providers (GSPs) and new grid applications have
emerged rapidly, similar to the growth of Internet and web services in
the past two decades.
o The table 1.2 shown below shows the classification of grid systems in
essentially two categories:
 computational or data grids and P2P grids.
 Computing or data grids are built primarily at the national level.

Grid Applications Reported. Computational and data grids: distributed supercomputing, National Grid initiatives, etc. P2P grids: open grid with P2P flexibility, all resources from client machines.

Representative Systems. Computational and data grids: TeraGrid built in US, ChinaGrid in China, and the e-Science grid built in UK. P2P grids: JXTA, FightAid@home, SETI@home.

Development Lessons Learned. Computational and data grids: restricted user groups, middleware bugs, protocols to acquire resources. P2P grids: unreliable user-contributed resources, limited to a few apps.

Table 1.2 - Two Grid Computing Infrastructures and Representative Systems


Peer-to-Peer Network Families

o The P2P architecture offers a distributed model of networked systems.


o First, a P2P network is client-oriented instead of server-oriented.
o P2P systems are introduced at the physical level and overlay networks at
the logical level.

 P2P Systems:

o In a P2P system, every node acts as both a client and a server, providing
part of the system resources.
o Peer machines are simply client computers connected to the Internet.
o All client machines act autonomously to join or leave the system freely.
o This implies that no master-slave relationship exists among the peers. No central coordination or central database is needed.
o In other words, no peer machine has a global view of the entire P2P
system. The system is self-organizing with distributed control.
o The figure 1.3 given below shows the architecture of a P2P network at
two abstraction levels. Initially, the peers are totally unrelated. Each
peer machine joins or leaves the P2P network voluntarily. Only the
participating peers form the physical network at any time. Unlike the
cluster or grid, a P2P network does not use a dedicated interconnection
network. The physical network is simply an ad hoc network formed at
various Internet domains randomly using the TCP/IP and NAI protocols.
Thus, the physical network varies in size and topology dynamically due to the free membership in the P2P network.


Figure 1.3 - The structure of a P2P system by mapping a physical IP network to


an overlay network built with virtual links.

 Overlay Networks

o Data items or files are distributed in the participating peers.


o Based on communication or file-sharing needs, the peer IDs form an overlay network at the logical level. This overlay is a virtual network formed by mapping each physical machine to its ID, logically, through a virtual mapping, as shown in the figure above.
o When a new peer joins the system, its peer ID is added as a node in the
overlay network.
o When an existing peer leaves the system, its peer ID is removed from
the overlay network automatically. Therefore, it is the P2P overlay
network that characterizes the logical connectivity among the peers.
o There are two types of overlay networks: unstructured and structured.


o An unstructured overlay network is characterized by a random graph.


There is no fixed route to send messages or files among the nodes.
Often, flooding is applied to send a query to all nodes in an unstructured
overlay, thus resulting in heavy network traffic and nondeterministic
search results.
o Structured overlay networks follow certain connectivity topology and rules for inserting and removing nodes (peer IDs) from the overlay graph. Routing mechanisms are developed to take advantage of the structured overlays. (A small sketch of such a rule follows below.)
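As an illustration of the "connectivity topology and rules" of a structured overlay, the toy Java sketch below (our own, not from the textbook) places peer IDs on a hash ring and routes each key to the first peer clockwise from the key's hash, in the style of Chord-like DHTs:

import java.util.SortedMap;
import java.util.TreeMap;

public class ToyOverlayRing {
    // Peer IDs placed on a logical ring; a key is served by the first peer
    // clockwise from the key's hash (the rule used by Chord-style DHTs).
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    private int position(String name) {
        return name.hashCode() & 0x7fffffff;     // non-negative ring position
    }

    public void join(String peer)  { ring.put(position(peer), peer); }
    public void leave(String peer) { ring.remove(position(peer)); }

    // Deterministic routing: no flooding, unlike an unstructured overlay.
    // Assumes at least one peer has joined.
    public String lookup(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(position(key));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ToyOverlayRing overlay = new ToyOverlayRing();
        overlay.join("peerA");
        overlay.join("peerB");
        overlay.join("peerC");
        System.out.println("file.dat held by " + overlay.lookup("file.dat"));
        overlay.leave("peerB");                  // peers may leave freely
        System.out.println("file.dat now held by " + overlay.lookup("file.dat"));
    }
}

Note how a join or leave only touches one slot on the ring, which is why structured overlays avoid the heavy traffic and nondeterministic results of flooding-based search.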

 P2P Application Families


Based on application, P2P networks are classified into four groups, as shown in Table 1.3 below.

Table 1.3 - Major Categories of P2P Network Families


 P2P Computing Challenges
o P2P computing faces three types of heterogeneity problems in hardware,
software, and network requirements.


o There are too many hardware models and architectures to select from;
incompatibility exists between software and the OS; and different
network connections and protocols make it too complex to apply in real
applications.
o System scalability is needed as the workload increases. System scaling is directly related to performance and bandwidth; P2P networks have these properties.
o Data location is also important: it affects the collective performance of the system.
o Data locality, network proximity, and interoperability are three design
objectives in distributed P2P applications.
o P2P performance is affected by routing efficiency and self-organization
by participating peers.
o Fault tolerance, failure management, and load balancing are other
important issues in using overlay networks. Lack of trust among peers
poses another problem.

Cloud Computing Over the Internet

o Computational science is changing to be data-intensive. Supercomputers must be balanced systems, not just CPU farms but also petascale I/O and networking arrays.
o In the future, working with large data sets will typically mean sending the
computations (programs) to the data, rather than copying the data to the
workstations. This reflects the trend in IT of moving computing and data from
desktops to large data centers, where there is on-demand provision of software,
hardware, and data as a service. This data explosion has promoted the idea of
cloud computing.
o Cloud computing has been defined differently by many users and designers.
o A cloud is a pool of virtualized computer resources. A cloud can host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications.


o A cloud allows workloads to be deployed and scaled out quickly through rapid
provisioning of virtual or physical machines.
o The cloud supports redundant, self-recovering, highly scalable programming
models that allow workloads to recover from many unavoidable hardware/
software failures.
o Finally, the cloud system should be able to monitor resource use in real time to
enable rebalancing of allocations when needed.

 Internet Clouds

o Cloud computing applies a virtualized platform with elastic resources on


demand by provisioning hardware, software, and data sets dynamically.
o The idea is to move desktop computing to a service-oriented platform
using server clusters and huge databases at data centers.
o Cloud computing leverages its low cost and simplicity to benefit both
users and providers. Machine virtualization has enabled such cost-
effectiveness.
o Cloud computing intends to satisfy many user applications
simultaneously.
o The cloud ecosystem must be designed to be secure, trustworthy, and
dependable. Some computer users think of the cloud as a centralized
resource pool. Others consider the cloud to be a server cluster which
practices distributed computing over all the servers used.


Figure 1.4 Virtualized resources from data centers to form an Internet cloud,
provisioned with hardware, software, storage, network, and services for paid
users to run their applications.
 The Cloud Landscape

o Traditionally, a distributed computing system tends to be owned and


operated by an autonomous administrative domain (e.g., a research
laboratory or company) for on-premises computing needs. However,
these traditional systems have encountered several performance
bottlenecks: constant system maintenance, poor utilization, and
increasing costs associated with hardware/software upgrades.
o Cloud computing as an on-demand computing paradigm resolves or
relieves us from these problems.
o The Figure 1.5 below depicts the cloud landscape and major cloud
players, based on three cloud service models.
o Services provided by cloud are:
Infrastructure as a Service (IaaS): This model puts together the infrastructure demanded by users, namely servers, storage, networks, and the data center fabric. The user can deploy and run multiple VMs running guest OSes for specific applications. The user does not manage or control the underlying cloud infrastructure, but can specify when to request and release the needed resources.


Figure 1.5 - Three cloud service models in a cloud landscape of major providers.
Platform as a Service (PaaS) This model enables the user to deploy user-built
applications onto a virtualized cloud platform. PaaS includes middleware, databases,
development tools, and some runtime support such as Web 2.0 and Java. The platform
includes both hardware and software integrated with specific programming interfaces.
The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET).
The user is freed from managing the cloud infrastructure.
Software as a Service (SaaS): This refers to browser-initiated application software serving thousands of paid cloud customers. The SaaS model applies to business processes, industry applications, customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications. On the customer side, there is no upfront investment in servers or software licensing. On the provider side, costs are rather low compared with conventional hosting of user applications.
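To summarize the three service models, here is an illustrative contrast written as client-side Java interfaces. All names are hypothetical (real providers expose their own APIs); the point is only that what the user manages shrinks from IaaS to PaaS to SaaS:

// Hypothetical client-side views of the three cloud service models.
interface IaasClient {                  // user manages guest OSes and applications
    String provisionVm(String guestOs, int vcpus, int memGb);  // returns a VM id
    void release(String vmId);          // request and release resources on demand
}

interface PaasClient {                  // user deploys code; the platform runs it
    String deployApp(byte[] packagedApp, String runtime);      // e.g., "java"
}

interface SaasClient {                  // user simply consumes the application
    String invoke(String operation, String payload);           // browser-initiated use
}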
Internet clouds offer four deployment modes: private, public, managed, and hybrid. These modes entail different security implications. The different SLAs imply that the security responsibility is shared among all the cloud providers, the cloud resource consumers, and the third-party cloud-enabled software providers.
Reasons for adopting the cloud: The following list highlights eight reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall
utilization
3. Separation of infrastructure maintenance duties from domain-specific
application development
4. Significant reduction in cloud computing cost, compared with traditional
computing paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies

2. VIRTUAL MACHINE

 Explain in detail about Virtual Machine.

Virtual Machines and Virtualization Middleware


 A conventional computer has a single OS image. This offers a rigid
architecture that tightly couples application software to a specific hardware
platform. Some software running well on one machine may not be executable
on another platform with a different instruction set under a fixed OS.
 Virtual machines (VMs) offer novel solutions to underutilized resources,
application inflexibility, software manageability, and security concerns in
existing physical machines.
 Today, to build large clusters, grids, and clouds, large amounts of computing, storage, and networking resources need to be accessed in a virtualized


manner and those resources have to be aggregated, and hopefully, offer a single
system image.
 In particular, a cloud of provisioned resources must rely on virtualization of
processors, memory, and I/O facilities dynamically. The Figure below
illustrates the architectures of three VM configurations.

Figure 1.6 - Three VM architectures in (b), (c), and (d), compared with the
traditional physical machine shown in (a).
Virtual Machines

 In the Figure 1.6 , the host machine is equipped with the physical hardware, as
shown at the bottom of the same figure. An example is an x-86 architecture
desktop running its installed Windows OS, as shown in part (a) of the Figure
1.6 . The VM can be provisioned for any hardware system. The VM is built
with virtual resources managed by a guest OS to run a specific application.
Between the VMs and the host platform, one needs to deploy a middleware
layer called a virtual machine monitor (VMM).
 Figure 1.6 (b) shows a native VM installed with the use of a VMM called a
hypervisor in privileged mode. For example, the hardware has x-86 architecture
running the Windows system. The guest OS could be a Linux system and the


hypervisor is the XEN system developed at Cambridge University. This


hypervisor approach is also called bare-metal VM, because the hypervisor
handles the bare hardware (CPU, memory, and I/O) directly.
 Another architecture is the host VM shown in Figure 1.6 (c). Here the VMM
runs in non privileged mode. The host OS need not be modified.
 The VM can also be implemented with a dual mode, as shown in Figure 1.6
(d). Part of the VMM runs at the user level and another part runs at the
supervisor level. In this case, the host OS may have to be modified to some
extent. Multiple VMs can be ported to a given hardware system to support the
virtualization process. The VM approach offers hardware independence of the
OS and applications. The user application running on its dedicated OS could be
bundled together as a virtual appliance that can be ported to any hardware
platform. The VM could run on an OS different from that of the host computer.

VM Primitive Operations

 The VMM provides the VM abstraction to the guest OS.


 With full virtualization, the VMM exports a VM abstraction identical to the
physical machine so that a standard OS such as Windows 2000 or Linux can
run just as it would on the physical hardware.
 Low-level VMM operations are indicated by Mendel Rosenblum and
illustrated in the Figure 1.7 below


Figure 1.7 - VM multiplexing, suspension, provision, and migration in a


distributed computing environment.

 First, the VMs can be multiplexed between hardware machines, as shown in


Figure 1.7(a).
 Second, a VM can be suspended and stored in stable storage, as shown in
Figure1.7 (b).
 Third, a suspended VM can be resumed or provisioned to a new hardware
platform, as shown in Figure 1.7 (c).
 Finally, a VM can be migrated from one hardware platform to another, as
shown in Figure1.7 (d).
 These VM operations enable a VM to be provisioned to any available hardware
platform. They also enable flexibility in porting distributed application
executions. Furthermore, the VM approach will significantly enhance the
utilization of server resources.


 Multiple server functions can be consolidated on the same hardware platform to achieve higher system efficiency. This will eliminate server sprawl via deployment of systems as VMs, which move transparently to the shared hardware. (A small state-machine sketch of these VM operations follows below.)
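The four primitive operations can be pictured as a small state machine. The Java sketch below is purely illustrative, with our own types rather than a real VMM API such as Xen's or VMware's; migration is modeled as a suspend on the source host followed by a resume on the target:

public class VmLifecycle {
    enum State { RUNNING, SUSPENDED }

    static final class Vm {
        final String name;
        State state = State.RUNNING;
        String host;                       // hardware platform currently used
        Vm(String name, String host) { this.name = name; this.host = host; }
    }

    // Suspend: capture the VM's state and park it in stable storage.
    static void suspend(Vm vm) {
        vm.state = State.SUSPENDED;
        vm.host = null;                    // no longer bound to any hardware
        System.out.println(vm.name + " suspended to storage");
    }

    // Resume/provision: bind the suspended image to any available platform.
    static void resume(Vm vm, String newHost) {
        vm.host = newHost;
        vm.state = State.RUNNING;
        System.out.println(vm.name + " resumed on " + newHost);
    }

    // Migration = suspend on the source plus resume on the target.
    static void migrate(Vm vm, String target) {
        suspend(vm);
        resume(vm, target);
    }

    public static void main(String[] args) {
        Vm vm = new Vm("app-vm", "host1");
        migrate(vm, "host2");              // the VM follows available hardware
    }
}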

Virtual Infrastructures

 Physical resources for compute, storage, and networking are mapped to the
needy applications embedded in various VMs at the top.
 Hardware and software are then separated.
 Virtual infrastructure is what connects resources to distributed applications. It
is a dynamic mapping of system resources to specific applications. The result is
decreased costs and increased efficiency and responsiveness.
 Virtualization for server consolidation and containment is a good example of
this.

3. SERVICE ORIENTED ARCHITECTURE (SOA)

 Explain in detail about Service Oriented Architecture (SOA).

Service Oriented Architecture (SOA)

 In grids/web services, Java, and CORBA, an entity is, respectively, a service, a


Java object, and a CORBA distributed object in a variety of languages.
 These architectures build on the traditional seven Open Systems
Interconnection (OSI) layers that provide the base networking abstractions. On
top of this there will be a base software environment, which would be .NET or
Apache Axis for web services, the Java Virtual Machine for Java, and a broker
network for CORBA.


 On top of this base environment one would build a higher level environment
reflecting the special features of the distributed computing environment. This
starts with entity interfaces and inter-entity communication, which rebuild the
top four OSI layers but at the entity and not the bit level.
 The Figure below shows the layered architecture for distributed entities used in
web services and grid systems.

Layered Architecture for Web Services and Grids

 The entity interfaces correspond to the Web Services Description Language


(WSDL), Java method, and CORBA interface definition language (IDL)
specifications in these example distributed systems.
 These interfaces are linked with customized, high-level communication
systems: SOAP, RMI, and IIOP in the three examples. These communication
systems support features including particular message patterns (such as Remote
Procedure Call or RPC), fault recovery, and specialized routing.
 Often, these communication systems are built on message-oriented middleware
(enterprise bus) infrastructure such as WebSphere MQ or Java Message Service
(JMS) which provide rich functionality and support virtualization of routing,
senders, and recipients.
 In the case of fault tolerance, the features in the Web Services Reliable
Messaging (WSRM) framework mimic the OSI layer capability (as in TCP
fault tolerance) modified to match the different abstractions (such as messages
versus packets, virtualized addressing) at the entity levels.
 Security is a critical capability that either uses or reimplements the capabilities
seen in concepts such as Internet Protocol Security (IPsec) and secure sockets
in the OSI layers.
 Entity communication is supported by higher level services for registries,
metadata.


 Here, one might get several models with, for example, JNDI (Jini and Java
Naming and Directory Interface) illustrating different approaches within the
Java distributed object model.

Figure 1.8 - Layered architecture for web services and the grids.

 The CORBA Trading Service, UDDI (Universal Description, Discovery, and


Integration), LDAP (Lightweight Directory Access Protocol), and ebXML
(Electronic Business using eXtensible Markup Language) are other examples
of discovery and information services.
 Management services include service state and lifetime support; examples include the CORBA Life Cycle and Persistent states, the different Enterprise JavaBeans models, Jini's lifetime model, and a suite of web services specifications. The above language or interface terms form a collection of entity-level capabilities, which can be implemented in either a centralized or a distributed fashion. The former can have performance advantages and offers a "shared memory" model allowing more convenient exchange of information.
 However, the distributed model has two critical advantages: namely, higher performance (from multiple CPUs when communication is unimportant) and a


cleaner separation of software functions with clear software reuse and


maintenance advantages.
 The distributed model is expected to gain popularity as the default approach to
software systems.
 In the earlier years, CORBA and Java approaches were used in distributed
systems rather than today’s SOAP, XML, or REST (Representational State
Transfer).
 Figure 1.8 above corresponds to two choices of service architecture: web services or REST systems. Both web services and REST systems have very distinct approaches to building reliable interoperable systems.
 In web services, one aims to fully specify all aspects of the service and its
environment. This specification is carried with communicated messages using
Simple Object Access Protocol (SOAP). The hosting environment then
becomes a universal distributed operating system with fully distributed
capability carried by SOAP messages. This approach has mixed success as it
has been hard to agree on key parts of the protocol and even harder to
efficiently implement the protocol by software such as Apache Axis.
 In the REST approach, one adopts simplicity as the universal principle and delegates most of the difficult problems to application (implementation-specific) software. In a web services language, REST has minimal information in the header, and the message body (that is opaque to generic message processing) carries all the needed information. REST architectures are clearly more appropriate for rapid technology environments. However, the ideas in web services are important and probably will be required in mature systems at a different level in the stack (as part of the application). REST can use XML schemas but not those that are part of SOAP; "XML over HTTP" is a popular design choice in this regard. (A short example follows at the end of this list.)
 Above the communication and management layers, we have the ability to
compose new entities or distributed programs by integrating several entities
together.


 In CORBA and Java, the distributed entities are linked with RPCs, and the simplest way to build composite applications is to view the entities as objects and use the traditional ways of linking them together. For Java, this could be as simple as writing a Java program with method calls replaced by Remote Method Invocation (RMI), as in the sketch at the end of this list, while CORBA supports a similar model with a syntax reflecting the C++ style of its entity (object) interfaces.
 Allowing the term "grid" to refer to a single service or to represent a collection of services, here sensors represent entities that output data (as messages), and grids and clouds represent collections of services that have multiple message-based inputs and outputs.
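Two small illustrative sketches follow; neither is from the textbook. First, a REST-style "XML over HTTP" call using Java's standard java.net.http client (Java 11+). The endpoint URL is a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal REST-style call: all information travels in the URI and body,
// with no SOAP envelope around the message.
public class RestCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.org/sensors/42/readings"))
                .header("Accept", "application/xml")   // "XML over HTTP"
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());           // opaque to generic middleware
    }
}

Second, a minimal Java RMI sketch of composing distributed entities by replacing method calls with remote invocations; the Compute interface and all names are our own illustration:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// A local-looking method call that actually crosses the network.
interface Compute extends Remote {
    double add(double a, double b) throws RemoteException;
}

public class ComputeServer implements Compute {
    public double add(double a, double b) { return a + b; }

    public static void main(String[] args) throws Exception {
        Compute stub = (Compute) UnicastRemoteObject.exportObject(new ComputeServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Compute", stub);                 // publish the entity

        // A client elsewhere would simply do:
        Compute remote = (Compute) LocateRegistry.getRegistry("localhost", 1099)
                                                 .lookup("Compute");
        System.out.println(remote.add(2, 3));             // looks like a plain call
    }
}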

The Evolution of SOA

 As shown in Figure 1.9 below, service-oriented architecture (SOA) has evolved


over the years. SOA applies to building grids, clouds, grids of clouds, clouds of
grids, clouds of clouds (also known as interclouds), and systems of systems in
general.
 A large number of sensors provide data-collection services, denoted in the
Figure 1.9 as SS (sensor service).
 A sensor can be a ZigBee device, a Bluetooth device, a WiFi access point, a
personal computer, a GPA, or a wireless phone, among other things. Raw data
is collected by sensor services.
 All the SS devices interact with large or small computers, many forms of grids,
databases, the compute cloud, the storage cloud, the filter cloud, the discovery
cloud, and so on.
 Filter services (fs in Figure 1.9) are used to eliminate unwanted raw data, in order to respond to specific requests from the web, the grid, or web services. A collection of filter services forms a filter cloud.
 SOA aims to search for, or sort out, the useful data from the massive amounts
of raw data items. Processing this data will generate useful information, and


subsequently, the knowledge for our daily use. In fact, wisdom or intelligence
is sorted out of large knowledge bases.
 Finally, intelligent decisions are made based on both biological and machine wisdom. Most distributed systems require a web interface or portal.
 For raw data collected by a large number of sensors to be transformed into useful information or knowledge, the data stream may go through a sequence of compute, storage, filter, and discovery clouds (a toy sketch of such a pipeline follows below). Finally, the inter-service messages converge at the portal, which is accessed by all users. Two example portals are OGFCE and HUBzero, which use both web service (portlet) and Web 2.0 (gadget) technologies.
 Many distributed programming models are also built on top of these basic constructs.
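The following toy Java sketch (our own illustration, using Java 16+ records) mimics that pipeline in miniature: raw readings from sensor services (SS) are passed through a filter service (fs) and distilled into the information shown at the portal:

import java.util.List;
import java.util.stream.Collectors;

public class SensorPipeline {
    record Reading(String sensor, double value) {}

    public static void main(String[] args) {
        List<Reading> raw = List.of(                 // raw data from SS devices
                new Reading("zigbee-1", 21.5),
                new Reading("wifi-ap-3", -1.0),      // an invalid reading
                new Reading("phone-7", 23.9));

        // fs: a filter service removing unwanted raw data.
        List<Reading> filtered = raw.stream()
                .filter(r -> r.value() >= 0)
                .collect(Collectors.toList());

        // Information distilled for the portal (here, a simple average).
        double info = filtered.stream()
                .mapToDouble(Reading::value)
                .average().orElse(Double.NaN);
        System.out.println("portal view: average reading = " + info);
    }
}

In a real deployment each stage would be a separately hosted service (a filter cloud, a discovery cloud, and so on) rather than a stream stage in one process.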
Grids versus Clouds

 The boundary between grids and clouds is getting blurred in recent years.
 For web services, workflow technologies are used to coordinate or orchestrate
services with certain specifications used to define critical business process
models such as two-phase transactions.
 The general approaches used in workflow are the BPEL Web Service standard,
Pegasus, Taverna, Kepler, Trident, and Swift.
 In all approaches, one is building a collection of services which together tackle
all or part of a distributed computing problem.
 In general, a grid system applies static resources, while a cloud emphasizes
elastic resources. For some researchers, the differences between grids and
clouds are limited only in dynamic resource allocation based on virtualization
and autonomic computing.
 One can build a grid out of multiple clouds. This type of grid can do a better job than a pure cloud, because it can explicitly support negotiated resource allocation. Thus one may end up building a system of systems: a cloud of clouds, a grid of clouds, or a cloud of grids (interclouds) as a basic SOA architecture.

Figure 1.9 - The evolution of SOA: grids of clouds and grids, where "SS" refers to a sensor service and "fs" to a filter or transforming service.


4. GRID ARCHITECTURE

 With a neat diagram explain in detail about grid architecture.

Grid Architecture

A new architecture model and technology was developed for the establishment,
management, and cross-organizational resource sharing within a virtual organization.
This new architecture, called grid architecture, identifies the basic components of a
grid system, defines the purpose and functions of such components and indicates how
each of these components interacts with one another. The main attention of the
architecture is on the interoperability among resource providers and users to establish
the sharing relationships. This interoperability means common protocols at each layer
of the architecture model, which leads to the definition of a grid protocol architecture
as shown in Figure 1.10 below. This protocol architecture defines common
mechanisms, interfaces, schema, and protocols at each layer, by which users and
resources can negotiate, establish, manage, and share resources.

Fabric Layer (Interface to Local Resources)

 The Fabric layer defines the resources that can be shared. This could include
computational resources, data storage, networks, catalogs, and other system
resources. These resources can be physical resources or logical resources by
nature. Typical examples of the logical resources found in a Grid Computing
environment are distributed file systems, computer clusters, distributed
computer pools, software applications, and advanced forms of networking
services.
 These logical resources are implemented by their own internal protocol (e.g.,
network file systems [NFS] for distributed file systems, and clusters using
logical file systems [LFS]).
 These resources then comprise their own network of physical resources.

 Although there are no specific requirements toward a particular resource that


relates to integrating itself as part of any grid system, it is recommended to
have two basic capabilities associated with the integration of resources.
 These basic capabilities should be considered as "best practices" toward Grid Computing disciplines. These best practices are as follows (a small interface sketch follows the list):
1. Provide an "inquiry" mechanism whereby it allows for the discovery against
its own resource capabilities, structure, and state of operations. These are
value-added features for resource discovery and monitoring.
2. Provide appropriate "resource management" capabilities to control the QoS
the grid solution promises, or has been contracted to deliver. This enables
the service provider to control a resource for optimal manageability, such as
(but not limited to) start and stop activations, problem resolution,
configuration management, load balancing, workflow, complex event
correlation, and scheduling.
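One way to visualize these two best practices is as a pair of interfaces that any shareable fabric resource would implement. The Java sketch below is hypothetical, not a real grid toolkit API:

import java.util.Map;

// Best practice 1: an "inquiry" mechanism for discovery and monitoring.
interface InquiryCapability {
    Map<String, String> describeCapabilities();   // e.g., {"cpus":"64", "os":"Linux"}
    String currentState();                        // state of operations, for monitoring
}

// Best practice 2: "resource management" for controlling the promised QoS.
interface ResourceManagementCapability {
    void start();                                 // start/stop activations
    void stop();
    void configure(Map<String, String> settings); // configuration management
    boolean reserve(int units, long deadlineMs);  // admit work only if QoS is deliverable
}

// A shareable fabric resource exposes both capabilities to the grid.
interface FabricResource extends InquiryCapability, ResourceManagementCapability {}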

Figure 1.10 - The layered grid service protocols and their relationship with the
Internet service protocols.


Connectivity Layer: (Manages Communications)

 The Connectivity layer defines the core communication and authentication protocols required for grid-specific networking services transactions.
 Communications protocols, which include aspects of networking transport,
routing, and naming, assist in the exchange of data between fabric layers of
respective resources.
 The authentication protocol builds on top of the networking communication
services in order to provide secure authentication and data exchange between
users and respective resources.
 The communication protocol can work with any of the networking layer
protocols that provide the transport, routing, and naming capabilities in
networking services solutions.
 The most commonly used Network layer protocol is the TCP/IP Internet
protocol stack; however, this concept and discussion is not limited to that
protocol.
 The authentication solution for virtual organization environments requires
significantly more complex characteristics.
 The following describes the characteristics for consideration:
Single sign-on: This allows the multiple entities in the grid fabric to be authenticated once; the user can then access any available resources in the grid Fabric layer without further user authentication intervention.
Delegation: This provides the ability to access a resource under the current user's permission set; the resource should be able to relay the same user credentials (or a subset of the credentials) to other resources respective to the chain of access.
Integration with local resource-specific security solutions: Each resource and hosting environment has specific security requirements and security solutions that match the local environment. These may include (for example) Kerberos security methods, Windows security methods, Linux security methods, and UNIX security methods. Therefore, in order to provide proper security in the


grid fabric model, all grid solutions must provide integration with the local
environment and respective resources specifically engaged by the security
solution mechanisms.
User-based trust relationships: In Grid Computing, establishing an absolute trust relationship between users and multiple service providers is very critical. This establishes an environment in which the providers do not need to interact with one another for a user to access the resources that each of them provides.
Data security: Data security is important in order to provide data integrity and confidentiality. The data passing through the Grid Computing solution, no matter what complications may exist, should be made secure using various cryptographic and data encryption mechanisms. These mechanisms are well established across all industries.

Resource Layer (Sharing of a Single Resource)

 The Resource layer utilizes the communication and security protocols defined
by the networking communications layer, to control the secure negotiation,
initiation, monitoring, metering, accounting, and payment involving the sharing
of operations across individual resources.
 The way this works is the Resource layer calls the Fabric layer functions in
order to access and control the multitude of local resources. This layer only
handles the individual resources and, hence, ignores the global state and atomic
actions across the other resource collection, which in the operational context is
the responsibility of the Collective layer.
 There are two primary classes of resource layer protocols. These protocols are
key to the operations and integrity of any single resource. These protocols are
as follows:
Information Protocols: These protocols are used to get information
about the structure and the operational state of a single resource, including


configuration, usage policies, service-level agreements, and the state of the resource. In most situations, this information is used to monitor the resource capabilities and availability constraints.
Management Protocols: The important functionalities provided by the
management protocols are:
o Negotiating access to a shared resource is paramount. These
negotiations can include the requirements on quality of service,
advanced reservation, scheduling, and other key operational factors.
o Performing operation(s) on the resource, such as process creation or data
access, is also a very important operational factor.
o Acting as the service/resource policy enforcement point for policy
validation between a user and resource is critical to the integrity of the
operations.
o Providing accounting and payment management functions on resource
sharing is mandatory.
o Monitoring the status of an operation, controlling the operation
including terminating the operation, and providing asynchronous
notifications on operation status, is extremely critical to the operational
state of integrity.
 It is recommended that these resource-level protocols should be minimal from
a functional overhead point of view and they should focus on the functionality
each provides from a utility aspect.
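
Again as an illustration only (the actual protocols are defined at the wire level, not in Java), the split between the two resource-layer protocol classes might look like this; every type and method name is invented:

// Hypothetical shape of the two resource-layer protocol classes.
public final class ResourceLayerSketch {

    record ResourceInfo(String configuration, String usagePolicy,
                        String slaTerms, String operationalState) {}

    record Reservation(String resourceId, long startMillis, long endMillis) {}

    // Information protocol: read-only view of one resource's structure and
    // operational state, used to monitor capabilities and availability.
    interface InformationProtocol {
        ResourceInfo query();
    }

    // Management protocol: negotiation, operation control, accounting, and
    // asynchronous status notification for the same single resource.
    interface ManagementProtocol {
        Reservation negotiateAccess(String qosRequirements);   // QoS, advance reservation
        String startOperation(String request);                 // e.g., process creation
        void terminateOperation(String operationId);
        void subscribeToStatus(String operationId,
                               java.util.function.Consumer<String> listener);
    }
}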
Collective Layer (Coordinating Multiple Resources)
 While the Resource layer manages an individual resource, the Collective layer
is responsible for all global resource management and interaction with a
collection of resources.
 This layer of protocol implements a wide variety of sharing behaviors
(protocols) utilizing a small number of Resource layer and Connectivity layer
protocols. Some key examples of the common, more visible collective services
in a Grid Computing system are as follows:


Discovery Services: This enables the virtual organization participants to discover the existence and/or properties of that specific available virtual organization's resources.
Co-allocation, Scheduling, and Brokering Services: These services
allow virtual organization participants to request the allocation of one or more
resources for a specific task, during a specific period of time, and to schedule
those tasks on the appropriate resources.
Monitoring and Diagnostic Services: These services afford the virtual organization resource-failure recovery capabilities, monitoring of the networking and device services, and diagnostic services that include common event logging and intrusion detection. Another important aspect of this topic relates to the partial failure of any portion of a Grid Computing environment: it is critical that any and all business impacts related to this partial failure are well known, immediately, as the failure begins to occur, all the way through its corrective healing stages.
Data Replication Services: These services support the management aspects of the virtual organization's storage resources in order to maximize data access performance with respect to response time, reliability, and costs.
Grid-Enabled Programming Systems: These systems allow familiar programming models to be utilized in Grid Computing environments, while sustaining various Grid Computing networking services. These networking services are integral to the environment in order to address resource discovery, resource allocation, problem resolution, event correlation, network provisioning, and other very critical operational concerns related to the grid networks.
Workload Management Systems and Collaborative Frameworks:
This provides multistep, asynchronous, multicomponent workflow
management. This is a complex topic across several dimensions, yet a
fundamental area of concern for enabling optimal performance and functional
integrity.


Software Discovery Services: This provides the mechanisms to discover and select the best software implementation(s) available in the grid environment, and those available to the platform, based on the problem being solved.
Community Authorization Servers: These servers control resource
access by enforcing community utilization policies and providing these
respective access capabilities by acting as policy enforcement agents.
Community Accounting and Payment Services: These services
provide resource utilization metrics, while at the same time generating payment
requirements for members of any community.
The capabilities and efficiencies of these Collective layer services are based on the underlying layers of the protocol stack. These collective networking services can range from general-purpose Grid Computing solutions to narrow, domain- and application-specific solutions.

Application Layer (User-Defined Grid Applications)

 These are user applications, which are constructed by utilizing the services defined at each lower layer. Such an application can directly access the resource, or can access the resource through the Collective Service interface APIs (Application Programming Interfaces).
 Each layer in the grid architecture provides a set of APIs and SDKs (software
developer kits) for the higher layers of integration. It is up to the application
developers whether they should use the collective services for general-purpose
discovery, and other high-level services across a set of resources, or if they
choose to start directly working with the exposed resources. These user-defined
grid applications are (in most cases) domain specific and provide specific
solutions.


5. GRID STANDARDS

 Write short notes on various grid standards.

OGSA

 The Global Grid Forum has published the Open Grid Service Architecture
(OGSA). To address the requirements of grid computing in an open and
standard way requires a framework for distributed systems that support
integration, virtualization, and management. Such a framework requires a core
set of interfaces, expected behaviors, resource models, and bindings.
 OGSA defines requirements for these core capabilities and thus provides
general reference architecture for grid computing environments. It identifies the
components and functions that are useful if not required for a grid environment.
 Though it does not go to the level of detail such as defining programmatic interfaces or other aspects that would guarantee interoperability between implementations, it can be used to identify the functions that should be included based on the requirements of the specific target environment.

OGSI

 As grid computing has evolved, it has become clear that a service-oriented architecture could provide many benefits in the implementation of a grid infrastructure.
 The Global Grid Forum extended the concepts defined in OGSA to define
specific interfaces to various services that would implement the functions
defined by OGSA.
 More specifically, the Open Grid Services Interface (OGSI) defines
mechanisms for creating, managing, and exchanging information among Grid
services.


 A Grid service is a Web service that conforms to a set of interfaces and behaviors that define how a client interacts with a Grid service. These
interfaces and behaviors, along with other OGSI mechanisms associated with
Grid service creation and discovery, provide the basis for a robust grid
environment.
 OGSI provides the Web Service Definition Language (WSDL) definitions for
these key interfaces.
 Globus Toolkit 3 included several of its core functions as Grid services
conforming to OGSI.

OGSA-DAI

The OGSA-DAI (data access and integration) project is concerned with constructing
middleware to assist with access and integration of data from separate data sources via
the grid.
 The project was conceived by the UK Database Task Force and is working
closely with the Global Grid Forum DAIS-WG and the Globus team.

GridFTP

 GridFTP is a secure and reliable data transfer protocol providing high performance, optimized for wide-area networks that have high bandwidth.
 It is based upon the Internet FTP protocol and includes extensions that make it
a desirable tool in a grid environment.
 The GridFTP protocol specification is a proposed recommendation document
in the Global Grid Forum (GFD-R-P.020).
 GridFTP uses basic Grid security on both control (command) and data
channels. Features include multiple data channels for parallel transfers, partial
file transfers, third-party transfers, and more.


 GridFTP can be used to move files (especially large files) across a network
efficiently and reliably. These files may include the executables required for an
application or data to be consumed or returned by an application.
 Higher level services, such as data replication services, could be built on top of
GridFTP.

WSRF

 WSRF is being promoted and developed through work from a variety of companies, including IBM, and has been submitted to OASIS technical committees.
 Basically, WSRF defines a set of specifications for defining the relationship
between Web services (that are normally stateless) and stateful resources.
 WSRF is a general term that encompasses several related proposed standards that cover:
o Resources
o Resource lifetime
o Resource properties
o Service groups (collections of resources)
o Faults
o Notifications
o Topics
 As the concept of Grid services evolves, the WSRF suite of evolving standards holds great promise for the merging of Web services standards with the stateful resource management requirements of grid computing.

Web Services Interoperability (WS-I)

 Web Services Interoperability (WS-I) standards, and proposed standards, can also be applied to and bring value to grid environments.


6. GPU AND ELEMENTS OF GRID

 Explain in detail about GPU and elements of grid.

GPU Computing to Exascale and Beyond

 A GPU is a graphics coprocessor or accelerator mounted on a computer’s graphics card or video card.
 A GPU offloads the CPU from tedious graphics tasks in video editing applications.
 The world’s first GPU, the GeForce 256, was marketed by NVIDIA in 1999.
These GPU chips can process a minimum of 10 million polygons per second,
and are used in nearly every computer on the market today.
 Some GPU features were also integrated into certain CPUs.
 Traditional CPUs are structured with only a few cores. For example, the Xeon
X5670 CPU has six cores. However, a modern GPU chip can be built with
hundreds of processing cores.
 Unlike CPUs, GPUs have a throughput architecture that exploits massive
parallelism by executing many concurrent threads slowly, instead of executing
a single long thread in a conventional microprocessor very quickly.
 Lately, parallel GPUs or GPU clusters have been garnering a lot of attention
against the use of CPUs with limited parallelism.
 General-purpose computing on GPUs, known as GPGPU, has appeared in the HPC field. NVIDIA’s CUDA model was designed for HPC using GPGPUs.

Working of GPU

 Early GPUs functioned as coprocessors attached to the CPU.


 Today, the NVIDIA GPU has been upgraded to 128 cores on a single chip.


 Furthermore, each core on a GPU can handle eight threads of instructions. This
translates to having up to 1,024 threads executed concurrently on a single GPU.
This is true massive parallelism, compared to only a few threads that can be
handled by a conventional CPU. The CPU is optimized for latency in caches,
while the GPU is optimized to deliver much higher throughput with explicit
management of on-chip memory.
 Modern GPUs are not restricted to accelerated graphics or video coding. They
are used in HPC systems to power supercomputers with massive parallelism at
multicore and multithreading levels.
 GPUs are designed to handle large numbers of floating-point operations in
parallel.
 In a way, the GPU offloads the CPU from all data-intensive calculations, not
just those that are related to video processing.
 Conventional GPUs are widely used in mobile phones, game consoles,
embedded systems, PCs, and servers. The NVIDIA CUDA Tesla or Fermi is
used in GPU clusters or in HPC systems for parallel processing of massive
floating-pointing data.

GPU Programming Model

 Figure 1.11 below shows the interaction between a CPU and GPU in
performing parallel execution of floating-point operations concurrently.
 The CPU is the conventional multicore processor with limited parallelism to
exploit.
 The GPU has a many-core architecture that has hundreds of simple processing
cores organized as multiprocessors.
 Each core can have one or more threads. Essentially, the CPU’s floating-point
kernel computation role is largely offloaded to the many-core GPU.
 The CPU instructs the GPU to perform massive data processing.


 The bandwidth must be matched between the on-board main memory and the
on-chip GPU memory. This process is carried out in NVIDIA’s CUDA
programming using the GeForce 8800 or Tesla and Fermi GPUs.

Figure 1.11 - The use of a GPU along with a CPU for massively parallel
execution in hundreds or thousands of processing cores
 In the future, thousand-core GPUs may appear in Exascale (Eflops, or 10^18 flops) systems. This reflects a trend toward building future MPPs with hybrid architectures of both types of processing chips.
 In a DARPA report published in September 2008, four challenges are identified
for exascale computing: (1) energy and power, (2) memory and storage, (3)
concurrency and locality, and (4) system resiliency.
Power Efficiency of the GPU
 Bill Dally of Stanford University considers power and massive parallelism as
the major benefits of GPUs over CPUs for the future.
 By extrapolating current technology and computer architecture, it was
estimated that 60 Gflops/watt per core is needed to run an exaflops system.
 Dally has estimated that the CPU chip consumes about 2 nJ/instruction, while the GPU chip requires 200 pJ/instruction, which is one-tenth that of the CPU.
 The CPU is optimized for latency in caches and memory, while the GPU is
optimized for throughput with explicit management of on-chip memory.


 Data movement dominates power consumption. One needs to optimize the storage hierarchy and tailor the memory to the applications.
 Self-aware OS and runtime support, along with locality-aware compilers and auto-tuners for GPU-based MPPs, have to be promoted. This implies that both power and software are the real challenges in future parallel and distributed computing systems.
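
As a back-of-the-envelope check using only the figures quoted above, and assuming for simplicity that a whole machine sustained 60 Gflops/watt, the per-instruction energy ratio and the implied power budget of an exaflops system work out as:

\[
\frac{2\ \text{nJ/instruction}}{200\ \text{pJ/instruction}} = 10, \qquad
P \approx \frac{10^{18}\ \text{flops}}{60 \times 10^{9}\ \text{flops/W}} \approx 1.7 \times 10^{7}\ \text{W} \approx 17\ \text{MW}.
\]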

Elements of Grid Computing

Grid computing combines elements such as distributed computing, high-performance computing and disposable computing, depending on the application of the technology and the scale of operation.
Grids can create a virtual supercomputer out of the existing servers, workstations and personal computers. Present-day grids encompass the following types:
 Computational grids, in which machines will set aside resources to "number crunch" data or provide coverage for other intensive workloads
 Scavenging grids, commonly used to find and harvest machine cycles from idle
servers and desktop computers for use in resource-intensive tasks (scavenging
is usually implemented in a way that is unobtrusive to the owner/user of the
processor)
 Data grids, which provide a unified interface for all data repositories in an
organization, and through which data can be queried, managed and secured.
 Market-oriented grids, which deal with price setting and negotiation, grid
economy management and utility driven scheduling and resource allocation.
The key components of grid computing include the following.
 Resource management: a grid must be aware of what resources are available
for different tasks
 Security management: the grid needs to take care that only authorized users can
access and use the available resources
 Data management: data must be transported, cleansed, parceled and processed


 Services management: users and applications must be able to query the grid in
an effective and efficient manner
More specifically, a grid computing environment can be viewed as a computing setup constituted by a number of logical hierarchical layers. These include grid fabric resources, grid security infrastructure, core grid middleware, user-level middleware and resource aggregators, grid programming environments and tools, and grid applications. The major constituents of a grid computing system can be identified from different perspectives as follows:
 functional view
 physical view
 service view
Basic constituents of a grid from a functional view are decided depending on the grid
design and its expected use. Some of the functional constituents of a grid are
 Security (in the form of grid security infrastructure)
 Resource Broker
 Scheduler
 Data Management
 Job and resource management
 Resources
A resource is an entity that is to be shared; this includes computers, storage, data and software. A resource need not be a physical entity. Normally, a grid portal acts as a user interaction mechanism, which is application specific and can take many forms.
A user-security functional block usually exists in the grid environment and is a
key requirement for grid computing.
In a grid environment, there is a need for mechanisms to provide
authentication, authorization, data confidentiality, data integrity and availability,
particularly from a user’s point of view.
In the case of inter-domain grids, there is also a requirement to support security
across organizational boundaries. This makes a centrally managed security system
impractical.

The grid security infrastructure (GSI) provides a "single sign-on", run-anywhere authentication service with support for local control over access rights and mapping from global to local identities.

7. DATA CENTER IN CLOUD COMPUTING

 Explain in detail about data center in cloud computing.

Data Center Virtualization for Cloud Computing

 Cloud architecture is built with commodity hardware and network devices. Almost all cloud platforms choose the popular x86 processors.
 Low-cost terabyte disks and Gigabit Ethernet are used to build data centers.
 Data center design emphasizes the performance/price ratio over speed performance alone. In other words, storage and energy efficiency are more important than sheer speed performance.

Data Center Growth and Cost Breakdown

 A large data center may be built with thousands of servers. Smaller data centers
are typically built with hundreds of servers. The cost to build and maintain data
center servers has increased over the years.

Low-Cost Design Philosophy

 High-end switches or routers may be too cost prohibitive for building data
centers. Thus, using high-bandwidth networks may not fit the economics of
cloud computing.
 Given a fixed budget, commodity switches and networks are more desirable in data centers. Similarly, using commodity x86 servers is preferred over expensive mainframes.

 The software layer handles network traffic balancing, fault tolerance, and
expandability.
 Currently, nearly all cloud computing data centers use Ethernet as their
fundamental network technology.

Convergence of Technologies
 Essentially, cloud computing is enabled by the convergence of
technologies in four areas:
i. hardware virtualization and multi-core chips,
ii. utility and grid computing,
iii. SOA, Web 2.0, and WS mashups, and
iv. autonomic computing and data center automation.
 Hardware virtualization and multicore chips enable the existence of
dynamic configurations in the cloud.
 Utility and grid computing technologies lay the necessary foundation for
computing clouds.
 Recent advances in SOA, Web 2.0, and mashups of platforms are pushing
the cloud another step forward.
 Finally, achievements in autonomic computing and automated data center
operations contribute to the rise of cloud computing.
 Science and society face a data deluge. Data comes from sensors, lab
experiments, simulations, individual archives, and the web in all scales and
formats. Preservation, movement, and access of massive data sets require
generic tools supporting high-performance, scalable file systems,
databases, algorithms, workflows, and visualization.
 With science becoming data-centric, a new paradigm of scientific
discovery is becoming based on data-intensive technologies.
 On January 11, 2007, the Computer Science and Telecommunication Board
(CSTB) recommended fostering tools for data capture, data creation, and
data analysis.
 A cycle of interaction exists among four technical areas.

 First, cloud technology is driven by a surge of interest in data deluge. Also, cloud computing impacts e-science greatly, which explores multicore and
parallel computing technologies. These two hot areas enable the buildup of
data deluge.
 To support data-intensive computing, one needs to address workflows,
databases, algorithms, and virtualization issues. By linking computer
science and technologies with scientists, a spectrum of e-science or e-
research applications in biology, chemistry, physics, the social sciences,
and the humanities has generated new insights from interdisciplinary
activities.
 Cloud computing is a transformative approach as it promises much more
than a data center model. It fundamentally changes how we interact with
information. The cloud provides services on demand at the infrastructure,
platform, or software level.
 At the platform level, MapReduce offers a new programming model that transparently handles data parallelism with natural fault tolerance capability (a word-count sketch follows this list). Iterative MapReduce extends MapReduce to support a broader range of data mining algorithms commonly used in scientific applications.
The cloud runs on an extremely large cluster of commodity computers.
Internal to each cluster node, multithreading is practiced with a large
number of cores in many-core GPU clusters.
 Data-intensive science, cloud computing, and multicore computing are converging and revolutionizing the next generation of computing in architectural design and programming challenges. They enable the pipeline: data becomes information and knowledge, and in turn becomes machine wisdom as desired in SOA.
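
As an illustration of that programming model, the canonical word count can be sketched against Hadoop's Java MapReduce API (a sketch assuming Hadoop 2.x class names; the job driver and configuration are omitted):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map phase: each input split is processed in parallel; emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    // Reduce phase: the framework groups values by key and transparently
    // re-executes failed tasks, which is the "natural fault tolerance" above.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }
}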


UNIT – II

GRID SERVICES

Introduction to Open Grid Services Architecture (OGSA) – Motivation – Functionality Requirements – Practical & Detailed view of OGSA/OGSI – Data intensive grid service models – OGSA services.

PART – A

1. Define OGSA.

The Global Grid Forum has published the Open Grid Service Architecture (OGSA).
To address the requirements of grid computing in an open and standard way requires a
framework for distributed systems that support integration, virtualization, and
management. Such a framework requires a core set of interfaces, expected behaviors,
resource models, and bindings. OGSA defines requirements for these core capabilities
and thus provides general reference architecture for grid computing environments. It
identifies the components and functions that are useful if not required for a grid
environment.

2. What is the motivation behind OGSA?

The grid infrastructure is mainly concerned with the creation, management and the
application of dynamic coordinated resources and services which are complex. The
introduction of OGSA is to support the creation, maintenance and application of
ensembles of services maintained by virtual organizations.

3. Mention the goals of OGSA.

 Identify the use cases that can drive the OGSA platform components.
 Identify and define the core OGSA platform components.
 Define hosting and platform specific bindings.


 Define resource models and resource profiles with interoperable solutions.

4. List out the functional requirements of OGSA.

 Discovery of resources
 Instantiating new service
 Service level management to meet user expectation
 Enabling metering and accounting to quantify resource usage into pricing units
 Monitoring resource usage and availability
 Managing service policies.
 Providing service grouping and aggregation to provide better indexing and
information.
 Managing end to end security
 Servicing life cycle and change management
 Failure provisioning management
 Workload management
 Load balancing to provide scalable system

5. Mention the basic services of OGSA.


 Common management model (CMM)
 Service domains
 Distributed data access and replication
 Policy
 Security
 Provisioning and resource management
 Accounting / metering
 Common distributed logging
 Monitoring
 Scheduling


6. What is OGSI?

OGSI is a grid software infrastructure standardization initiative based on emerging web services standards, intended to provide maximum interoperability among OGSA software components.

7. What is OGSI specification? Mention its dimensions.

The OGSI specification defines a component model using a web service as its core base technology, with WSDL as the service description mechanism and XML as the message format. There are two dimensions to the stateful nature of a web service:
i. A service is maintaining its state information
ii. The interaction pattern between the client and service can be stateful.
8. What are the software technologies behind OGSA?

i. Globus Toolkit – which is adopted as a grid technology solution for scientific and technical computing
ii. Web services (WS) – a popular standards-based framework for business and network applications.

9. What are the access models for organizing a data grid?

 Monadic model
 Hierarchical model
 Hybrid model
 Federation model

10. What are the grid service features that OGSI specification defines?

 Statefulness


 Stateful interactions
 The ability to create new instances
 Service lifetime management
 Notification of state changes and Grid service groups

11. Differentiate parallel and striped data transfers.

Parallel data transfer: Opens multiple data streams for passing subdivided segments of a file simultaneously. Although the speed of each stream is the same as in sequential streaming, the total time to move data in all streams can be significantly reduced compared to FTP transfer.

Striped data transfer: A data object is partitioned into a number of sections, and each section is placed in an individual site in a data grid. Striped data transfer can utilize the bandwidths of multiple sites more efficiently to speed up data transfer.

12. What is data replication?

Replication strategies determine when and where to create a replica of the data. The factors to consider include data demand, network conditions, and transfer cost. The strategies of replication can be classified into two method types: dynamic and static.

13. What are the two categories of grid applications?

Applications in the grid are normally grouped into two categories: computation-
intensive and data-intensive.


14. What is Globus?

The Globus project is a multi-institutional research effort to create a basic infrastructure and high-level services for a computational grid. A computational grid is
defined as hardware and software infrastructure that provides dependable, consistent,
pervasive, and inexpensive access to high-end computational capabilities. They have
now evolved into an infrastructure for resource sharing (hardware, software,
applications, and so on) among heterogeneous virtual organizations. These grids
enable high creativity by increasing the average and peak computational performance
available to important applications regardless of the spatial distribution of both
resources and users.

15. What is Grid Service Handle?


A GSH is a globally unique name that distinguishes a specific grid service instance
from all others. The status of a grid service instance could be that it exists now or that
it will exist in the future. These instances carry no protocol or instance-specific
addresses or supported protocol bindings. Instead, these information items are
encapsulated along with all other instance-specific information. In order to interact
with a specific service instance, a single abstraction is defined as a GSR.


Part – B

1. OGSA

 Explain in detail about OGSA.

OGSA

The Open Grid Services Architecture (OGSA)


 A set of technical specifications which define a common framework that will allow businesses to build grids both across the enterprise and with their business partners.
 OGSA will define the standards required for both open source and commercial
software for a broadly applicable and widely adopted global grid infrastructure.
 An enabling infrastructure for systems and applications that require the integration and management of services within distributed, heterogeneous, dynamic "virtual organizations"
 Defines the notion of a "Grid Service", which is a Web Service that conforms to a specific interface and behavior, as defined in various specifications developed by the Global Grid Forum (GGF)

OGSA Architecture and Goal

OGSA architecture is a layered architecture, as shown in Figure 2.1 below, with clear
separation of the functionalities at each layer. The purpose of the OGSA Platform is to
define standard approaches to, and mechanisms for, basic problems that are common
to a wide variety of Grid systems, such as communicating with other services,
establishing identity, negotiating authorization, service discovery, error notification,
and managing service collections.


Figure 2.1 - OGSA platform components

Goals of OGSA

 Identify the use cases that can drive the OGSA platform components
 Identify and define the core OGSA platform components
 Define hosting and platform-specific bindings
 Define resource models and resource profiles with interoperable solutions
 Facilitating distributed resource management across heterogeneous platforms
 Providing seamless quality of service delivery
 Building a common base for autonomic management solutions
 Providing common infrastructure building blocks to avoid "stovepipe solution
towers"
 Open and published interfaces and messages
 Industry-standard integration solutions including Web services
 Facilities to accomplish seamless integration with existing IT resources, where resources become on-demand services/resources
 Providing more knowledge-centric and semantic orientation of services


OGSA PLATFORM COMPONENTS: The job of the OGSA is to build on the grid
service specification (Open Grid Service Infrastructure, or OGSI) to define
architectures and standards for a set of "core grid services" that are essential
components to every grid.
 A set of core OSGA use cases are developed, which forms a representative
collection from different business models (e.g., business grids and science
grids) and are used for the collection of the OGSA functional requirements.
 The basic OGSA architectural organization can be classified into five layers:
o native platform services and transport mechanisms
o OGSA hosting environment
o OGSA transport and security
o OGSA infrastructure (OGSI)
o OGSA basic services (meta-OS and domain services)

Native Platform Services and Transport Mechanisms: The native platforms form the concrete resource-hosting environment. These platforms can host resources specific to operating systems or hardware components, and the native resource managers manage them. The transport mechanisms use existing networking services, transport protocols, and standards.

OGSA Hosting Environment: OGSA defines the semantics of a Grid service instance: how it is created, how it is named, how its lifetime is determined, how to
communicate with it, and so on. However, while OGSA is prescriptive on matters of
basic behavior, it does not place requirements on what a service does or how it
performs that service. In other words, OGSA does not address issues of
implementation programming model, programming language, implementation tools,
or execution environment. In practice, Grid services are instantiated within a specific
execution environment or hosting environment. A particular hosting environment
defines not only implementation programming model, programming language,
development tools, and debugging tools, but also how an implementation of a Grid
service meets its obligations with respect to Grid service semantics. Today’s e-science


Grid applications typically rely on native operating system processes as their hosting
environment, with for example creation of a new service instance involving the
creation of a new process. In such environments, a service itself may be implemented
in a variety of languages such as C, C++, Java, or Fortran.

Core Networking Services Transport and Security: An OGSA standard does not
define the specific networking services transport, nor the security mechanisms in the
specification. Instead, it assumes use of the platform-specific transport and security at
the runtime instance of operation. In other words, these properties are defined as
service binding properties, and they are dynamically bound to the native networking
services transport and security systems at runtime. These binding requirements are
flexible; however, the communities in collaboration with the hosting and platform
capabilities must work together to provide the necessary interoperability aspects.

OGSA Infrastructure:
The grid service specification developed within the OGSI working group has defined
the essential building block for distributed systems. This is defined in terms of Web
service specifications and description mechanisms (i.e., WSDL). This specification
provides a common set of behaviors and interfaces to discover a service, create service
instance, service lifecycle management, and subscribe to and deliver respective
notifications.

OGSA Basic Services:


Some of the most notable and interesting basic services are as follows:
 Common Management Model (CMM)
 Service domains
 Distributed data access and replication
 Policy
 Security
 Provisioning and resource management
 Accounting/metering
 Common distributed logging
 Monitoring
 Scheduling


2. SERVICES PROVIDED BY OGSA

 Explain in detail about the services provided by OGSA.

OGSA services fall into seven broad areas, defined in terms of capabilities
frequently required in a grid scenario. The Figure 2.2 below shows the OGSA
architecture. These services are summarized as follows:

i. Infrastructure Services refer to a set of common functionalities, such as naming, typically required by higher level services.
ii. Execution Management Services are concerned with issues such as starting and managing tasks, including placement, provisioning, and life-cycle management. Tasks may range from simple jobs to complex workflows or composite services.

iii. Data Management Services provide functionality to move data to where it is needed, maintain replicated copies, run queries and updates, and transform data into new formats. These services must handle issues such as data consistency, persistency, and integrity. An OGSA data service is a web service that implements one or more of the base data interfaces to enable access to, and management of, data resources in a distributed environment. The three base interfaces, Data Access, Data Factory, and Data Management, define basic operations for representing, accessing, creating, and managing data (an illustrative sketch of these three interfaces is given after this list).

iv. Resource Management Services provide management capabilities for grid resources: management of the resources themselves, management of the resources as grid components, and management of the OGSA infrastructure. For example, resources can be monitored, reserved, deployed, and configured as needed to meet application QoS requirements. It also requires an information model (semantics) and data model (representation) of the grid resources and services.


Figure 2.2 - OGSA Services


v. Security Services facilitate the enforcement of security-related policies within a
(virtual) organization, and supports safe resource sharing. Authentication,
authorization, and integrity assurance are essential functionalities provided by
these services.

vi. Information Services provide efficient production of, and access to, information about the grid and its constituent resources. The term "information" refers to dynamic data or events used for status monitoring; relatively static data used for discovery; and any data that is logged. Troubleshooting is just one of the possible uses for information provided by these services.


vii. Self-Management Services support service-level attainment for a set of services (or resources), with as much automation as possible, to reduce the costs and
complexity of managing the system. These services are essential in addressing the
increasing complexity of owning and operating an IT infrastructure.
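
The three base data interfaces named in item (iii) can be pictured, loosely, as Java interfaces. This is only an illustrative sketch; the operation names are invented and are not taken from the OGSA specification:

// Illustrative-only view of the three OGSA base data interfaces.
public final class OgsaDataServiceSketch {

    // Data Access: represent and access a data resource.
    interface DataAccess {
        byte[] read(String query);
        void write(String query, byte[] payload);
    }

    // Data Factory: create new (derived) data resources.
    interface DataFactory {
        String createDerivedResource(String transformExpression); // returns a handle
    }

    // Data Management: manage the data resource itself.
    interface DataManagement {
        void setConsistencyPolicy(String policy);  // consistency, persistency, integrity
        void destroy();
    }
}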

3. OGSI

 Explain in detail about OGSI.

The OGSI specification defines a component model using a Web service as its core base technology, with WSDL as the service description mechanism and XML as the message format. Web services in general deal with stateless services, and their client interaction is mostly stateless. On the other hand, grid services are long-running processes, maintaining the state of the resource being shared, and the clients are involved in a stateful interaction with the services. There are two dimensions to the stateful nature of a Web service:

i. A service maintains its state information. This is normally classified as application state, and in the case of a grid service it maps directly to the state of the resource.
ii. The interaction pattern between the client and the service can be stateful. There are numerous architecture styles and programming models for defining these stateful interactions, including BPEL4WS and REST (Fielding).

Concepts behind OGSI:

 Figure 2.3 below introduces a number of concepts surrounding OGSI and its relation to Web services. The following list describes points of interest related to this model. Grid services are layered on top of Web services.
 Grid services contain application state factors, and provide concepts for
exposing the state, which is referred to as the service data element.


 Both grid services and Web services communicate with its client by
exchanging XML messages.
 Grid services are described using GWSDL, which is an extension of WSDL. GWSDL provides interface inheritance and an open portType for exposing the service state information referred to as service data. This is similar to interface properties or attributes commonly found in other distributed description languages.
 The client programming model is the same for both grid service and Web
service. But grid services provide additional message exchange patterns such as
the handle resolution through OGSI port types.
 The transport bindings are selected by the runtime. Message encoding and
decoding is done for the specific binding and high-level transport protocol
(SOAP/HTTP).

Figure 2.3 - Typical Web service and grid service layers.


Terminologies used in OGSI:

Web service: A software component identified using a URI, whose public interfaces
and binding are described using XML. These services interact with its clients using
XML message exchanges.


Stateful Web service: A Web service that maintains some state information between
clients' interactions.

Grid service: This is a stateful Web service with a common set of public operations
and state behaviors exposed by the service. These services are created using the
OGSI-defined specification.

Grid service description: A mechanism to describe the public operations and


behaviors of a grid service. This is expressed using a combination of WSDL and
GWSDL language specifications. WSDL is a language specified by the W3C, whereas GWSDL is an extension mechanism for WSDL specified by the OGSI specification.

Grid service instance: An instance of a grid service created by the hosting container
and identified by a unique URI called grid service handle (GSH).

Grid service reference: A temporal binding description of a grid service endpoint.


This binding lists the interfaces, endpoint address, protocol for communication, and
message encoding rules. In addition, it may contain service policies and other
metadata information. Some examples of GSR are WSDL, IOR, and so forth.

Service data element: Publicly accessible state information of a service, included with the WSDL portType. These can be treated as interface attributes.

Technical Details of OGSI Specification: OGSI is based on Web services and it uses
WSDL as a mechanism to describe the public interfaces of the grid service. There are
two core requirements for describing Web services based on the OGSI:
 The ability to describe interface inheritance
 The ability to describe additional information elements (state
data/attributes/properties) with the interface definitions


Similar to most Web services, OGSI services use WSDL as a service description mechanism, but the current WSDL 1.1 specification lacks the above two capabilities in its definition of portType. The WSDL 1.2 working group has agreed to support these features through portType (now called "interface" in WSDL 1.2) inheritance and an open content model for portTypes. As an interim solution, OGSI developed a new schema for portType definition (extended from the normal WSDL 1.1 schema portType type) under a new namespace definition, GWSDL.

Significance of Transforming GWSDL to WSDL Definition: It is, however, a known fact that none of the WSDL 1.1 tools can handle these extensions. Most of them will fail on WSDL validation. The current WSDL 1.1 manipulation tools are used to create native language interfaces, stubs, and proxies from WSDL, and for the converse process of creating WSDL from services implemented in a native language. These functions have to be intelligent in order to handle these extensions. Basically, GWSDL extensions are to be transformed into WSDL 1.1 artifacts. This includes:
All the "extends" portTypes, and their operations, are brought down to a single most derived portType. This process is called "flattening" of the interface hierarchy to the most derived type. All the service data elements and GWSDL extensions are retained for the reverse transformation process. Figure 2.4 below shows a simple transformation process (i.e., portType flattening), where the GWSDL portType OperatingSystem extends the BaseManageableResource and GridService declarations. These declarations are subsequently flattened to a WSDL portType OperatingSystem, with all operations from its parents. It is worth noting that WSDL 1.1 tools can all work on this newly emerged portType definition.
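
Port-type inheritance and the "flattening" step have a close analogy in Java interface inheritance. The interface names below mirror the Figure 2.4 example, while the operations are invented for illustration:

// Before flattening: an interface hierarchy, as GWSDL allows for portTypes.
interface GridService            { String findServiceData(String name); }
interface BaseManageableResource extends GridService { void manage(); }
interface OperatingSystemPT      extends BaseManageableResource { void reboot(); }

// After flattening: one "most derived" declaration carrying every inherited
// operation, which is the only form a WSDL 1.1 tool can understand.
interface OperatingSystemFlattened {
    String findServiceData(String name);
    void manage();
    void reboot();
}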


Figure 2.4 - The GWSDL-to-WSDL transformation process.

Operator Overloading Support in OGSI Port Type

 Another important aspect of the OGSI is the naming convention adopted for the
portType operations, and the lack of support for operator overloading.
 In these situations, the OGSI follows the same conventions as described in the
suggested WSDL 1.2 specification.
 This now becomes rather complex across several different dimensions,
especially in the context of interface inheritance, and the process of
transformation to a single inheritance model as previously described.
 In these kinds of situations, the OGSI recommendations have to be adhered.
 The OGSI recommends that if two or more port type operation components
have the same value for their name and target namespace, then the component
model (i.e., the semantic and operation signature) for these operations must be
identical.
 Furthermore, if the port type operation components are equivalent, then they
can be considered as candidates to collapse into a single operation.


4. DATA INTENSIVE GRID SERVICE MODELS

 Discuss in detail about Data intensive grid service models.

Data-Intensive Grid Service Models

 Applications in the grid are normally grouped into two categories:
i. computation-intensive and
ii. data-intensive.
 For data-intensive applications, massive amounts of data have to be dealt with. For example, the data produced annually by the Large Hadron Collider may exceed several petabytes (10^15 bytes).
 The grid system must be specially designed to discover, transfer, and
manipulate these massive data sets.
 Transferring massive data sets is a time-consuming task.
 Efficient data management demands low-cost storage and high-speed data
movement

Data Replication and Unified Namespace

 Data replication, a data access method also known as caching, is often applied to enhance data efficiency in a grid environment.
 By replicating the same data blocks and scattering them in multiple regions of a
grid, users can access the same data with locality of references. Furthermore,
the replicas of the same data set can be a backup for one another.
 Some key data will not be lost in case of failures. However, data replication
may demand periodic consistency checks.
 The increase in storage requirements and network bandwidth may cause
additional problems.
 Replication strategies determine when and where to create a replica of the data.


 The factors to consider include data demand, network conditions, and transfer
cost.
 The strategies of replication can be classified into two method types: dynamic and static (a minimal sketch of the dynamic case follows this list).
 For the static method, the locations and number of replicas are determined in advance and will not be modified. Although replication operations require little overhead, static strategies cannot adapt to changes in demand, bandwidth, and storage availability.
 Dynamic strategies can adjust the locations and number of data replicas according to changes in conditions (e.g., user behavior). However, frequent data-moving operations can result in much more overhead than in static strategies.
 The replication strategy must be optimized with respect to the status of data
replicas.
 For static replication, optimization is required to determine the location and
number of data replicas.
 For dynamic replication, optimization may be determined based on whether the
data replica is being created, deleted, or moved.
 The most common replication strategies include preserving locality,
minimizing update costs, and maximizing profits.
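
A minimal Java sketch of the dynamic case, assuming invented thresholds and names: replicate a data set where observed demand is high, and reclaim replicas that have gone idle (a real policy would also weigh transfer cost and network conditions):

import java.util.Map;

public final class DynamicReplication {
    static final int CREATE_THRESHOLD = 100;  // requests per period (assumed value)
    static final int DELETE_THRESHOLD = 5;    // requests per period (assumed value)

    // demandBySite: observed request counts for one data set, per site.
    // replicaAtSite: whether a replica of that data set currently exists there.
    static void adjustReplicas(Map<String, Integer> demandBySite,
                               Map<String, Boolean> replicaAtSite) {
        for (var e : demandBySite.entrySet()) {
            String site = e.getKey();
            int demand = e.getValue();
            boolean present = replicaAtSite.getOrDefault(site, false);
            if (!present && demand > CREATE_THRESHOLD) {
                replicaAtSite.put(site, true);    // create a replica near the demand
            } else if (present && demand < DELETE_THRESHOLD) {
                replicaAtSite.put(site, false);   // reclaim storage from an idle replica
            }
        }
    }
}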

Grid Data Access Models


 Multiple participants may want to share the same data collection.
 To retrieve any piece of data, a grid with a unique global namespace is needed.
Similarly, unique file names should be present.
 To achieve these, inconsistencies have to be resolved among multiple data objects bearing the same name. Access restrictions may be imposed to avoid confusion. Also, data needs to be protected to avoid leakage and damage.
 Users who want to access data have to be authenticated first and then
authorized for access.


 In general, there are four access models for organizing a data grid, are shown in
Figure 2.5 below.

Figure 2.5 Four architectural models for building a data grid

Monadic model: This is a centralized data repository model, shown in Figure 2.5(a).
All the data is saved in a central data repository. When users want to access some data
they have to submit requests directly to the central repository. No data is replicated for
preserving data locality. This model is the simplest to implement for a small grid. For
a large grid, this model is not efficient in terms of performance and reliability. Data
replication is permitted in this model only when fault tolerance is demanded.

Hierarchical model: The hierarchical model, shown in Figure 2.5 (b), is suitable for
building a large data grid which has only one large data access directory. The data
may be transferred from the source to a second level center. Then some data in the
regional center is transferred to the third-level center. After being forwarded several
times, specific data objects are accessed directly by users. Generally speaking, a


higher-level data center has a wider coverage area. It provides higher bandwidth for
access than a lower-level data center. PKI security services are easier to implement in
this hierarchical data access model.

Federation model: This data access model shown in Figure 2.5 (c) is better suited for
designing a data grid with multiple sources of data supplies. Sometimes this model is
also known as a mesh model. The data sources are distributed to many different
locations. Although the data is shared, the data items are still owned and controlled by
their original owners. According to predefined access policies, only authenticated
users are authorized to request data from any data source. This mesh model may cost
the most when the number of grid institutions becomes very large.

Hybrid model: This data access model is shown in Figure 2.5 (d). The model
combines the best features of the hierarchical and mesh models. Traditional data
transfer technology, such as FTP, applies for networks with lower bandwidth.
Network links in a data grid often have fairly high bandwidth, and other data transfer
models are exploited by high-speed data transfer tools such as GridFTP developed
with the Globus library. The cost of the hybrid model can be traded off between the
two extreme models for hierarchical and mesh connected grids.

Parallel versus Striped Data Transfers


 Compared with traditional FTP data transfer, parallel data transfer opens
multiple data streams for passing subdivided segments of a file simultaneously.
Although the speed of each stream is the same as in sequential streaming, the
total time to move data in all streams can be significantly reduced compared to
FTP transfer.
 In striped data transfer, a data object is partitioned into a number of sections,
and each section is placed in an individual site in a data grid. When a user
requests this piece of data, a data stream is created for each site, and all the
sections of data objects are transferred simultaneously. Striped data transfer can
utilize the bandwidths of multiple sites more efficiently to speed up data transfer.
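The sketch below makes the parallel-transfer idea concrete (Python; it assumes a plain HTTP server that honors Range requests as a stand-in for GridFTP's own protocol, and the URL and file size are placeholders): segments of a file are fetched over several concurrent streams and reassembled:

import concurrent.futures
import urllib.request

def fetch_segment(url, start, end):
    # one stream: request a single byte range of the file
    req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_fetch(url, total_size, streams=4):
    chunk = total_size // streams
    ranges = [(i * chunk,
               total_size - 1 if i == streams - 1 else (i + 1) * chunk - 1)
              for i in range(streams)]
    buf = bytearray(total_size)
    with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
        for start, data in pool.map(lambda r: fetch_segment(url, r[0], r[1]), ranges):
            buf[start:start + len(data)] = data   # reassemble segments in place
    return bytes(buf)

In a striped transfer the ranges would instead point at different sites, one per stripe, but the reassembly step is the same.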

5. GRID SERVICE HANDLE, GRID SERVICE MIGRATION, OGSA SECURITY MODELS

 Explain in detail about grid service handle, grid service migration, OGSA
security models.

Grid service handle:


 A Grid Service Handle (GSH) is a globally unique name that distinguishes a
specific grid service instance from all others.
 The status of a grid service instance could be that it exists now or that it will
exist in the future.
 These instances carry no protocol or instance-specific addresses or supported
protocol bindings. Instead, these information items are encapsulated along with
all other instance-specific information.
 In order to interact with a specific service instance, a single abstraction is
defined: the Grid Service Reference (GSR).
 Unlike a GSH, which is time-invariant, the GSR for an instance can change
over the life- time of the service.
 The OGSA employs a "handle-resolution" mechanism for mapping from a
GSH to a GSR. The GSH must be globally defined for a particular instance.
 However, the GSH may not always refer to the same network address.
 A service instance may be implemented in its own way, as long as it obeys the
associated semantics.
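A minimal sketch of the handle-resolution idea (Python; the class and record layout are invented for illustration and are not the actual OGSI HandleResolver port type):

class HandleResolver:
    def __init__(self):
        self._table = {}              # GSH -> current GSR

    def register(self, gsh, gsr):
        self._table[gsh] = gsr        # called when an instance is created

    def update(self, gsh, new_gsr):
        self._table[gsh] = new_gsr    # the GSR may change over the lifetime

    def resolve(self, gsh):
        return self._table[gsh]       # clients resolve a GSH before binding

resolver = HandleResolver()
gsh = "hdl://example-grid/service/instance-42"   # time-invariant handle
resolver.register(gsh, {"address": "https://node1:8443/svc", "binding": "SOAP/HTTP"})
resolver.update(gsh, {"address": "https://node7:8443/svc", "binding": "SOAP/HTTP"})
print(resolver.resolve(gsh))

The GSH never changes, while the GSR it resolves to can move with the instance; that indirection is the point of the mechanism.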


Grid Service Migration

 This is a mechanism for creating new services and specifying assertions regarding the lifetime of a service.
 The OGSA model defines a standard interface, known as a factory, to implement
this reference.
 Any service that is created must address the former services as the reference for
later services.
 The factory interface's CreateService operation creates a requested grid service
with a specified interface and returns the GSH and initial GSR for the new service
instance. It should also register the new service instance with a handle resolution
service. Each dynamically created grid service instance is associated with a
specified lifetime.
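The following sketch illustrates the factory pattern just described (Python; the class and method names are invented and are not real OGSI APIs): the create operation returns a GSH plus an initial GSR, registers the handle, and attaches a lifetime:

import itertools
import time

class GridServiceFactory:
    _ids = itertools.count(1)

    def __init__(self):
        self.handle_table = {}   # stands in for a handle-resolution service

    def create_service(self, interface, lifetime_seconds):
        instance_id = next(self._ids)
        gsh = "hdl://example-grid/%s/%d" % (interface, instance_id)
        gsr = {"address": "https://node1:8443/%s/%d" % (interface, instance_id),
               "expires": time.time() + lifetime_seconds}   # specified lifetime
        self.handle_table[gsh] = gsr   # register with the handle resolver
        return gsh, gsr

factory = GridServiceFactory()
print(factory.create_service("data-transfer", lifetime_seconds=3600))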

OGSA Security Models

 The OGSA supports security enforcement at various levels, as shown in Figure 2.6 below.
 The grid works in a heterogeneous distributed environment, which is
essentially open to the general public.
 Users should be able to detect intrusions or stop viruses from spreading
by implementing secure conversations, single logon, access control, and
auditing for nonrepudiation.
 At the security policy and user levels, users should be able to apply a service or
endpoint policy, resource mapping rules, authorized access to critical
resources, and privacy protection.
 At the Public Key Infrastructure (PKI) service level, the OGSA demands
security binding with the security protocol stack and bridging of certificate
authorities (CAs), use of multiple trusted intermediaries, and so on. Trust
models and secure logging are often practiced in grid platforms.


Figure 2.6 - The OGSA security model implemented at various protection levels


UNIT III

VIRTUALIZATION

Cloud deployment models: public, private, hybrid, community – Categories of
cloud computing: Everything as a service: Infrastructure, platform, software -
Pros and Cons of cloud computing – Implementation levels of virtualization –
virtualization structure – virtualization of CPU, Memory and I/O devices –
virtual clusters and Resource Management – Virtualization for data center
automation.

Part – A

1. List out the advantages of cloud computing.

Advantages of Cloud Computing


 Cost Efficiency
 Convenience and continuous availability
 Backup and Recovery
 Cloud is environmentally friendly
 Resiliency and Redundancy
 Scalability and Performance
 Quick deployment and ease of integration
 Increased Storage Capacity
 Device Diversity and Location Independence
 Smaller learning curve

2. List out the disadvantages of cloud computing.

Disadvantages of Cloud Computing:


 Security and privacy in the Cloud
 Dependency and vendor lock-in

 Technical Difficulties and Downtime


 Limited control and flexibility
 Increased Vulnerability

3. Mention the various cloud deployment models.

 Public cloud
 Private cloud
 Hybrid cloud
 Community cloud
4. What are all the various services provided by cloud computing?

 IaaS
 PaaS
 SaaS

5. What is virtual execution environment?


Operating system virtualization inserts a virtualization layer inside an operating
system to partition a machine’s physical resources. It enables multiple isolated
VMs within a single operating system kernel. This kind of VM is often called a
virtual execution environment (VE), Virtual Private System (VPS), or simply
container.

6. Define VMM.
The hardware-level virtualization inserts a layer between real hardware and
traditional operating systems. This layer is commonly called the Virtual
Machine Monitor (VMM) and it manages the hardware resources of a
computing system. Each time a program accesses the hardware, the VMM
intercepts the access. In this sense, the VMM acts as a traditional OS.


7. What are the requirements for VMM?


 First, a VMM should provide an environment for programs which is
essentially identical to the original machine.
 Second, programs run in this environment should show, at worst, only
minor decreases in speed.
 Third, a VMM should be in complete control of the system resources.

8. What is library-level virtualization?


Library-level virtualization is also known as user-level Application Binary
Interface (ABI) or API emulation. This type of virtualization can create
execution environments for running alien programs on a platform rather than
creating a VM to run the entire operating system. API call interception and
remapping are the key functions performed.

9. What are all the various classes of VM architecture?


Depending on the position of the virtualization layer, there are several classes
of VM architectures, namely the hypervisor architecture, para-virtualization,
and host based virtualization.

10. List out the categories of hardware virtualization.


Depending on implementation technologies, hardware virtualization can be
classified into two categories: full virtualization and host-based virtualization.
Full virtualization: With full virtualization, noncritical instructions run on the
hardware directly while critical instructions are discovered and replaced with
traps into the VMM to be emulated by software. Both the hypervisor and VMM
approaches are considered full virtualization.
Host based virtualization: An alternative VM architecture is to install a
virtualization layer on top of the host OS. This host OS is still responsible for
managing the hardware. The guest OSes are installed and run on top of the
virtualization layer. Dedicated applications may run on the VMs. Certainly,
some other applications can also run with the host OS directly.


11. Define para virtualization.

Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs requiring substantial OS
modifications in user applications. Performance degradation is a critical
issue of a virtualized system. No one wants to use a VM if it is much slower
than using a physical machine. The virtualization layer can be inserted at
different positions in a machine software stack. However, para-virtualization
attempts to reduce the virtualization overhead, and thus improve performance
by modifying only the guest OS kernel.

12. Define kernel based virtual machine.

KVM (Kernel-based Virtual Machine) is a Linux kernel virtualization
infrastructure. KVM can support hardware-assisted virtualization and
paravirtualization by using the Intel VT-x or AMD-v and VirtIO framework,
respectively. The VirtIO framework includes a paravirtual Ethernet card, a disk
I/O controller, a balloon device for adjusting guest memory usage, and a VGA
graphics interface using VMware drivers.

13. List out the categories of critical instructions.

 Privileged instructions - execute in a privileged mode and will be trapped
if executed outside this mode.
 Control sensitive instructions - attempt to change the configuration of
resources used
 Behaviour sensitive instructions - have different behaviors depending on
the configuration of resources, including the load and store operations
over the virtual memory.


14. How is a VM provisioned to virtual clusters dynamically, and what are the design issues of virtual clusters?

VM provisioning to virtual clusters:

 The virtual cluster nodes can be either physical or virtual machines.
Multiple VMs running with different OSes can be deployed on the same
physical node.
 A VM runs with a guest OS, which is often different from the host OS
that manages the resources of the physical machine on which the VM is
implemented.
 The purpose of using VMs is to consolidate multiple functionalities on
the same server. This will greatly enhance server utilization and
application flexibility.
 VMs can be colonized (replicated) in multiple servers for the purpose of
promoting distributed parallelism, fault tolerance, and disaster recovery.
 The size (number of nodes) of a virtual cluster can grow or shrink
dynamically, similar to the way an overlay network varies in size in a
peer-to-peer (P2P) network.
 The failure of any physical nodes may disable some VMs installed on
the failing nodes. But the failure of VMs will not pull down the host
system.
Design issues of virtual clusters
 Live migration of VMs
 Memory and file migration
 Dynamic deployment of virtual clusters

15. What is data center automation?

Data center automation means that huge volumes of hardware, software, and
database resources in these data centers can be allocated dynamically to
millions of Internet users simultaneously, with guaranteed QoS and cost
effectiveness. This automation process is triggered by the growth of
virtualization products and cloud computing services.


PART B

1. CLOUD REFERENCE MODEL

 Explain about cloud Reference model with neat architecture.

The cloud Reference Model:

Cloud computing supports any IT service that can be consumed as a utility and
delivered through a network, most likely the Internet. Such characterization includes
quite different aspects: infrastructure, development platforms, application and
services.
It is possible to organize all the concrete realizations of cloud computing into a
layered view covering the entire stack from hardware appliances to software systems.
Cloud resources are harnessed to offer the "computing horsepower" required for
providing services. Often, this layer is implemented using a data center in which
hundreds or thousands of nodes are stacked together. Cloud infrastructure can be
heterogeneous in nature because a variety of resources, such as clusters and even
networked PCs, can be used to build it. Moreover, database systems and other storage
services can also be part of the infrastructure.
The physical infrastructure is managed by the core middleware, the objectives
of which are to provide an appropriate runtime environment for applications and to
best utilize resources. At the bottom of the stack, virtualization technologies are used
to guarantee runtime environment customization, application isolation, sandboxing,
and quality of service. Hardware virtualization is most commonly used at this level.
Hypervisors manage the pool of resources and expose the distributed
infrastructure as a collection of virtual machines. By using virtual machine technology
it is possible to finely partition the hardware resources such as CPU and memory and
to virtualize specific devices, thus meeting the requirements of users and applications.
This solution is generally paired with storage and network virtualization strategies,
which allow the infrastructure to be completely virtualized and controlled.


According to the specific service offered to end users, other virtualization
techniques can be used; for example, programming-level virtualization helps in
creating a portable runtime environment where applications can be run and controlled.
This scenario generally implies that applications hosted in the cloud be developed
with a specific technology or a programming language, such as Java, .NET, or Python.
In this case, the user does not have to build its system from bare metal. Infrastructure
management is the key function of core middleware, which supports capabilities such
as negotiation of the quality of service, admission control, execution management and
monitoring, accounting, and billing.

Figure 3.1 Cloud Computing Architecture

The combination of cloud hosting platforms and resources is generally
classified as an Infrastructure-as-a-Service (IaaS) solution. We can organize the
different examples of IaaS into two categories: some of them provide both the
management layer and the physical infrastructure; others provide only the
management layer (IaaS (M)).
In this second case, the management layer is often integrated with other IaaS
solutions that provide physical infrastructure and adds value to them. IaaS solutions
are suitable for designing the system infrastructure but provide limited services to
build applications. Such service is provided by cloud programming environments and
tools, which form a new layer for offering users a development platform for
applications.
The range of tools includes Web-based interfaces, command-line tools, and
frameworks for concurrent and distributed programming. In this scenario, users
develop their applications specifically for the cloud by using the API exposed at the
user-level middleware. For this reason, this approach is also known as
Platform-as-a-Service (PaaS) because the service offered to the user is a development
platform rather than an infrastructure.
PaaS solutions generally include the infrastructure as well, which is bundled as
part of the service provided to users. In the case of Pure PaaS, only the user-level
middleware is offered, and it has to be complemented with a virtual or physical
infrastructure. The top layer of the reference model depicted in Figure 3.1 contains
services delivered at the application level. These are mostly referred to as
Software-as-a-Service (SaaS).

In most cases these are Web-based applications that rely on the cloud to
provide service to end users. The horsepower of the cloud provided by IaaS and PaaS
solutions allows independent software vendors to deliver their application services
over the Internet. Other applications belonging to this layer are those that strongly
leverage the Internet for their core functionalities that rely on the cloud to sustain a
larger number of users; this is the case of gaming portals and, in general, social
networking websites.
SaaS implementations should feature such behavior automatically, whereas
PaaS and IaaS generally provide this functionality as a part of the API exposed to
users. The reference model also introduces the concept of Everything as a Service
(XaaS).
90

Visit & Downloaded from : www.LearnEngineering.in


Visit & Downloaded from : www.LearnEngineering.in

This is one of the most important elements of cloud computing: Cloud services
from different providers can be combined to provide a completely integrated solution
covering all the computing stack of a system. IaaS providers can offer the bare metal
in terms of virtual machines where PaaS solutions are deployed.
When there is no need for a PaaS layer, it is possible to directly customize the
virtual infrastructure with the software stack needed to run applications. This is the
case of virtual Web farms: a distributed system composed of Web servers, database
servers, and load balancers on top of which prepackaged software is installed to run
Web applications. This possibility has made cloud computing an interesting option for
reducing startups’ capital investment in IT, allowing them to quickly commercialize
their ideas and grow their infrastructure according to their revenues.
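As one concrete, hedged example of the IaaS layer of this stack being consumed programmatically, the snippet below uses the AWS boto3 SDK to start a single virtual machine (it assumes boto3 is installed and AWS credentials are configured; the image ID is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-00000000",    # placeholder image ID
    InstanceType="t2.micro",   # a small general-purpose instance type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])

A PaaS or SaaS layer would hide even this call, which is the essence of the layering described above.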

2. CLOUD DEPLOYMENT MODELS

 Explain about the various cloud deployment models in detail. (or) Explain about the types of cloud.

Public clouds: Public clouds constitute the first expression of cloud computing. They
are a realization of the canonical view of cloud computing in which the services
offered are made available to anyone, from anywhere, and at any time through the
Internet.
From a structural point of view they are a distributed system, most likely
composed of one or more data centers connected together, on top of which the specific
services offered by the cloud are implemented. Any customer can easily sign in with
the cloud provider, enter her credentials and billing details, and use the services
offered.
Public clouds were the first class of cloud that were implemented and offered.
They offer solutions for minimizing IT infrastructure costs and serve as a viable
option for handling peak loads on the local infrastructure. They have become an
interesting option for small enterprises, which are able to start their businesses without
large up-front investments by completely relying on public infrastructure for their IT
needs.
By renting the infrastructure or subscribing to application services, customers
were able to dynamically upsize or downsize their IT according to the demands of
their business. Currently, public clouds are used both to completely replace the IT
infrastructure of enterprises and to extend it when it is required.
A fundamental characteristic of public clouds is multitenancy. A public cloud
is meant to serve a multitude of users, not a single customer. Any customer requires a
virtual computing environment that is separated, and most likely isolated, from other
users.
This is a fundamental requirement to provide effective monitoring of user
activities and guarantee the desired performance and the other QoS attributes
negotiated with users. QoS management is a very important aspect of public clouds.
Hence, a significant portion of the software infrastructure is devoted to
monitoring the cloud resources, to bill them according to the contract made with the
user, and to keep a complete history of cloud usage for each customer. These features
are fundamental to public clouds because they help providers offer services to users
with full accountability.
A public cloud can offer any kind of service: infrastructure, platform, or
applications. For example, Amazon EC2 is a public cloud that provides infrastructure
as a service; Google AppEngine is a public cloud that provides an application
development platform as a service; and SalesForce.com is a public cloud that provides
software as a service. What makes public clouds peculiar is the way they are
consumed:
They are available to everyone and are generally architected to support a large
quantity of users. What characterizes them is their natural ability to scale on demand
and sustain peak loads.
Public clouds can be composed of geographically dispersed data centers to
share the load of users and better serve them according to their locations. For example,
Amazon Web Services has data centers installed in the United States, Europe,
Singapore, and Australia; they allow their customers to choose between three different
regions: us-west-1, us-east-1, or eu-west-1.
Such regions are priced differently and are further divided into availability
zones, which map to specific datacenters. According to the specific class of services
delivered by the cloud, a different software stack is installed to manage the
infrastructure: virtual machine managers, distributed middleware, or distributed
applications.

Private clouds: Public clouds are appealing and provide a viable option to cut IT
costs and reduce capital expenses, but they are not applicable in all scenarios. For
example, a very common critique of the use of cloud computing in its canonical
implementation is the loss of control. In the case of public clouds, the provider is
in control of the infrastructure and, eventually, of the customers’ core logic and
sensitive data. Even though there could be regulatory procedure in place that
guarantees fair management and respect of the customer’s privacy, this condition can
still be perceived as a threat or as an unacceptable risk that some organizations are not
willing to take.
In particular, institutions such as government and military agencies will not
consider public clouds as an option for processing or storing their sensitive data. The
risk of a breach in the security infrastructure of the provider could expose such
information to others; this could simply be considered unacceptable.
In other cases, the loss of control of where your virtual IT infrastructure resides
could open the way to other problematic situations. More precisely, the geographical
location of a data center generally determines the regulations that are applied to
management of digital information.
As a result, according to the specific location of data, some sensitive
information can be made accessible to government agencies or even considered
outside the law if processed with specific cryptographic techniques. For example, the
USA PATRIOT Act provides the U.S. government and other agencies with virtually
limitless powers to access information, including information belonging to any company
that stores information in U.S. territory.


Finally, existing enterprises that have large computing infrastructures or large
installed bases of software do not simply want to switch to public clouds; they want to
use the existing IT resources and optimize their revenue. All these aspects make the use of
a public computing infrastructure not always possible.
More specifically, having an infrastructure able to deliver IT services on
demand can still be a winning solution, even when implemented within the private
premises of an institution. This idea led to the diffusion of private clouds, which are
similar to public clouds, but their resource-provisioning model is limited within the
boundaries of an organization.
Private clouds are virtual distributed systems that rely on a private
infrastructure and provide internal users with dynamic provisioning of computing
resources. Instead of a pay-as-you-go model as in public clouds, there could be other
schemes in place, taking into account the usage of the cloud and proportionally billing
the different departments or sections of an enterprise.
Private clouds have the advantage of keeping the core business operations in-house
by relying on the existing IT infrastructure and reducing the burden of
maintaining it once the cloud has been set up. In this scenario, security concerns are
less critical, since sensitive information does not flow out of the private infrastructure.
Moreover, existing IT resources can be better utilized because the private cloud
can provide services to a different range of users. Another interesting opportunity that
comes with private clouds is the possibility of testing applications and systems at a
comparatively lower price rather than public clouds before deploying them on the
public virtual infrastructure.
A Forrester report on the benefits of delivering in-house cloud computing
solutions for enterprises highlighted some of the key advantages of using a private
cloud computing infrastructure:
 Customer information protection: Despite assurances by the public cloud
leaders about security, few provide satisfactory disclosure or have long enough
histories with their cloud offerings to provide warranties about the specific level of
security put in place on their systems. In-house security is easier to maintain and
rely on.


 Infrastructure ensuring SLAs: Quality of service implies specific operations
such as appropriate clustering and failover, data replication, system monitoring and
maintenance, and disaster recovery; other uptime services can be
commensurate to the application needs. Although public cloud vendors provide
some of these features, not all of them are available as needed.
 Compliance with standard procedures and operations: If organizations are
subject to third-party compliance standards, specific procedures have to be put in
place when deploying and executing applications. This might not be possible in the
case of the virtual public infrastructure.
All these aspects make the use of cloud-based infrastructures in private
premises an interesting option. From an architectural point of view, private clouds can
be implemented on more heterogeneous hardware: They generally rely on the existing
IT infrastructure already deployed on the private premises.
At the bottom layer of the software stack, virtual machine technologies such as
Xen, KVM, and VMware serve as the foundations of the cloud. Virtual machine
management technologies such as VMware vCloud, Eucalyptus, and OpenNebula
can be used to control the virtual infrastructure and provide an IaaS solution.
InterGrid provides added value on top of OpenNebula and Amazon EC2 by allowing
the reservation of virtual machine instances and managing multi-administrative
domain clouds. PaaS solutions can provide an additional layer and deliver a
high-level service for private clouds.
Among the options available for private deployment of clouds we can consider
DataSynapse, Zimory Pools, Elastra, and Aneka. DataSynapse is a global provider of
application virtualization software. By relying on the VMware virtualization
technology, DataSynapse provides a flexible environment for building private clouds on
top of data centers. Elastra Cloud Server is a platform for easily configuring and
deploying distributed application infrastructures on clouds. Zimory provides a
software infrastructure layer that automates the use of resource pools based on Xen,
KVM, and VMware virtualization technologies.


It allows creating an internal cloud composed of sparse private and public
resources and provides facilities for migrating applications within the existing
infrastructure. Aneka is a software development platform that can be used to deploy a
cloud infrastructure on top of heterogeneous hardware: data centers, clusters, and
desktop grids.

Figure 3.2 Private cloud hardware and software stack

It provides a pluggable service-oriented architecture that’s mainly devoted to
supporting the execution of distributed applications with different programming
models: bag of tasks, MapReduce, and others. Private clouds can provide in-house
solutions for cloud computing, but compared to public clouds they exhibit a more
limited capability to scale elastically on demand.
Hybrid clouds:
Public clouds are large software and hardware infrastructures that have a
capability that is huge enough to serve the needs of multiple users, but they suffer
from security threats and administrative pitfalls. Although the option of completely
relying on a public virtual infrastructure is appealing for companies that did not incur
IT capital costs and have just started considering their IT needs (i.e., start-ups), in
most cases the private cloud option prevails because of the existing IT infrastructure.

Private clouds are the perfect solution when it is necessary to keep the
processing of information within an enterprise’s premises or it is necessary to use the
existing hardware and software infrastructure. One of the major drawbacks of private
deployments is the inability to scale on demand and to efficiently address peak loads.

In this case, it is important to leverage capabilities of public clouds as needed.


Hence, a hybrid solution could be an interesting opportunity for taking advantage of
the best of the private and public worlds. This led to the development and diffusion of
hybrid clouds.

Figure 3.3 Hybrid/heterogeneous cloud.


Hybrid clouds allow enterprises to exploit existing IT infrastructures, maintain
sensitive information within the premises, and naturally grow and shrink by
provisioning external resources and releasing them when they’re no longer needed.
Security concerns are then only limited to the public portion of the cloud that can be
used to perform operations with less stringent constraints but that are still part of the
system workload.
It is a heterogeneous distributed system resulting from a private cloud that
integrates additional services or resources from one or more public clouds. For this
reason they are also called heterogeneous clouds. As depicted in the diagram, dynamic
provisioning is a fundamental component in this scenario.
Hybrid clouds address scalability issues by leveraging external resources for
exceeding capacity demand. These resources or services are temporarily leased for the
time required and then released. This practice is also known as cloud bursting.
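A minimal sketch of a cloud-bursting policy (Python; the threshold and interfaces are invented for illustration): work stays on the private cloud until utilization crosses a threshold, after which temporary public capacity is leased and later released:

class HybridScheduler:
    def __init__(self, private_capacity, burst_threshold=0.85):
        self.private_capacity = private_capacity   # nodes owned in-house
        self.burst_threshold = burst_threshold     # utilization trigger
        self.leased_public_nodes = 0

    def place(self, busy_nodes):
        utilization = busy_nodes / self.private_capacity
        if utilization < self.burst_threshold:
            return "private"               # normal case: stay in-house
        self.leased_public_nodes += 1      # burst: lease a public VM
        return "public"

    def release_idle(self):
        self.leased_public_nodes = 0       # hand leased resources back

scheduler = HybridScheduler(private_capacity=100)
print(scheduler.place(busy_nodes=50))   # 'private'
print(scheduler.place(busy_nodes=90))   # 'public' (bursting)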
Whereas the concept of hybrid cloud is general, it mostly applies to IT
infrastructure rather than software services. Service-oriented computing already
introduces the concept of integration of paid software services with existing
applications deployed in the private premises.
Infrastructure management software such as OpenNebula already exposes the
capability of integrating resources from public clouds such as Amazon EC2. In this
case the virtual machine obtained from the public infrastructure is managed as all the
other virtual machine instances maintained locally. What is missing is then an
advanced scheduling engine that’s able to differentiate these resources and provide
smart allocations by taking into account the budget available to extend the existing
infrastructure.
In the case of OpenNebula, advanced schedulers such as Haizea can be
integrated to provide cost-based scheduling. A different approach is taken by
InterGrid. This is essentially a distributed scheduling engine that manages the
allocation of virtual machines in a collection of peer networks.
Dynamic provisioning is most commonly implemented in PaaS solutions that
support hybrid clouds. As previously discussed, one of the fundamental components
of PaaS middleware is the mapping of distributed applications onto the cloud
infrastructure. In this scenario, the role of dynamic provisioning becomes fundamental
to ensuring the execution of applications under the QoS agreed on with the user.
For example, Aneka provides a provisioning service that leverages different
IaaS providers for scaling the existing cloud infrastructure. The provisioning service
cooperates with the scheduler, which is in charge of guaranteeing a specific QoS for
applications. In particular, each user application has a budget attached, and the
scheduler uses that budget to optimize the execution of the application by renting
virtual nodes if needed.
Community clouds: Community clouds are distributed systems created by
integrating the services of different clouds to address the specific needs of an industry,
a community, or a business sector. The National Institute of Standards and
Technology (NIST) characterizes community clouds as follows:
The infrastructure is shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements, policy, and
compliance considerations). It may be managed by the organizations or a third party
and may exist on premise or off premise.
The users of a specific community cloud fall into a well-identified community,
sharing the same concerns or needs; they can be government bodies, industries, or
even simple users, but all of them focus on the same issues for their interaction with
the cloud. This is a different scenario than public clouds, which serve a multitude of
users with different needs.
Community clouds are also different from private clouds, where the services
are generally delivered within the institution that owns the cloud. From an
architectural point of view, a community cloud is most likely implemented over
multiple administrative domains. This means that different organizations such as
government bodies, private enterprises, research organizations, and even public virtual
infrastructure providers contribute with their resources to build the cloud
infrastructure. Candidate sectors for community clouds are as follows:
 Media industry: In the media industry, companies are looking for low-cost,
agile, and simple solutions to improve the efficiency of content production. Most
media productions involve an extended ecosystem of partners. In particular, the
creation of digital content is the outcome of a collaborative process that includes
movement of large data, massive compute-intensive rendering tasks, and complex
workflow executions. Community clouds can provide a shared environment where
services can facilitate business-to-business collaboration and offer the horsepower in
terms of aggregate bandwidth, CPU, and storage required to efficiently support media
production.
 Healthcare industry: In the healthcare industry, there are different
scenarios in which community clouds could be of use. In particular, community
clouds can provide a global platform on which to share information and knowledge
without revealing sensitive data maintained within the private infrastructure. The
naturally hybrid deployment model of community clouds can easily support the
storing of patient-related data in a private cloud while using the shared infrastructure
for noncritical services and automating processes within hospitals.
 Energy and other core industries: In these sectors, community clouds
can bundle the comprehensive set of solutions that together vertically address
management, deployment, and orchestration of services and operations. Since these
industries involve different providers, vendors, and organizations, a community cloud
can provide the right type of infrastructure to create an open and fair market.
 Public sector: Legal and political restrictions in the public sector can
limit the adoption of public cloud offerings. Moreover, governmental processes
involve several institutions and agencies and are aimed at providing strategic solutions
at local, national, and international administrative levels. They involve business-to-
administration, citizen-to-administration, and possibly business-to-business processes.
Some examples include invoice approval, infrastructure planning, and public hearings.
A community cloud can constitute the optimal venue to provide a distributed
environment in which to create a communication platform for performing such
operations.
 Scientific research: Science clouds are an interesting example of
community clouds. In this case, the common interest driving different organizations
sharing a large distributed infrastructure is scientific computing.
The benefits of these community clouds are the following:
 Openness: By removing the dependency on cloud vendors, community
clouds are open systems in which fair competition between different solutions can
happen.


 Community: Being based on a collective that provides resources and
services, the infrastructure turns out to be more scalable because the system can grow
simply by expanding its user base.
 Graceful failures: Since there is no single provider or vendor in control
of the infrastructure, there is no single point of failure.
 Convenience and control: Within a community cloud there is no
conflict between convenience and control because the cloud is shared and owned by
the community, which makes all the decisions through a collective democratic
process.
 Environmental sustainability: The community cloud is supposed to
have a smaller carbon footprint because it harnesses underutilized resources.
Moreover, these clouds tend to be more organic by growing and shrinking in a
symbiotic relationship to support the demand of the community, which in turn sustains
it.

Figure 3.4 Community Cloud


3. SERVICES PROVIDED BY CLOUD COMPUTING

 Explain about the various services provided by cloud computing in detail. (or)
Describe about Everything as a Service in the cloud environment in detail.

In Cloud Computing there are three types of services available.


 Infrastructure- and Hardware-as-a-Service (IaaS/HaaS)
 Platform as a service
 Software as a service
i. Infrastructure- and Hardware-as-a-Service (IaaS/HaaS): Infrastructure-
and Hardware-as-a-Service (IaaS/HaaS) solutions are the most popular and developed
market segment of cloud computing. They deliver customizable infrastructure on
demand. The available options within the IaaS offering umbrella range from single
servers to entire infrastructures, including network devices, load balancers, and
database and Web servers.

Figure 3.5 – Service provided by Cloud computing


The main technology used to deliver and implement these solutions is hardware
virtualization: one or more virtual machines opportunely configured and
interconnected define the distributed system on top of which applications are
installed and deployed. Virtual machines also constitute the atomic components that
are deployed and priced according to the specific features of the virtual hardware:
memory, number of processors, and disk storage.
IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload
partitioning, application isolation, sandboxing, and hardware tuning. From the
perspective of the service provider, IaaS/HaaS allows better exploitation of the IT
infrastructure and provides a more secure environment for executing third-party
applications.
From the perspective of the customer it reduces the administration and
maintenance cost as well as the capital costs allocated to purchase hardware. At the
same time, users can take advantage of the full customization offered by virtualization
to deploy their infrastructure in the cloud; in most cases virtual machines come with
only the selected operating system installed. Access to these solutions is generally based on
Web 2.0 technologies: Web services, RESTful APIs, and mash-ups. These
technologies allow either applications or final users to access the services exposed by
the underlying infrastructure.
In particular, management of the virtual machines is the most important function
performed by this layer. A central role is played by the scheduler, which is in charge
of allocating the execution of virtual machine instances. The scheduler interacts with
the other components that perform a variety of tasks:
 The pricing and billing component takes care of the cost of executing
each virtual machine instance and maintains data that will be used to charge the user.
 The monitoring component tracks the execution of each virtual machine
instance and maintains data required for reporting and analyzing the performance of
the system.
 The reservation component stores the information of all the virtual
machine instances that have been executed or that will be executed in the future.

 If support for QoS-based execution is provided, a QoS/SLA management
component will maintain a repository of all the SLAs made with the users; together
with the monitoring component, this component is used to ensure that a given virtual
machine instance is executed with the desired quality of service.
 The VM repository component provides a catalog of virtual machine images
that users can use to create virtual instances. Some implementations also allow users
to upload their specific virtual machine images.
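Putting these components together, the following is a minimal sketch (Python; the component interfaces are invented for illustration, not taken from any real IaaS product) of how the scheduler might coordinate the VM repository, reservation, pricing/billing, and monitoring components when launching an instance:

class Scheduler:
    def __init__(self, vm_repository):
        self.vm_repository = vm_repository   # catalog of VM images
        self.reservations = []               # instances, past and future
        self.billing = {}                    # user -> number of billed launches
        self.monitored = []                  # instances being tracked

    def launch(self, user, image_name, instance_type):
        image = self.vm_repository[image_name]               # catalog lookup
        instance = {"user": user, "image": image, "type": instance_type}
        self.reservations.append(instance)                   # reservation component
        self.billing[user] = self.billing.get(user, 0) + 1   # pricing/billing component
        self.monitored.append(instance)                      # monitoring component
        return instance

scheduler = Scheduler(vm_repository={"ubuntu": "disk-image-0"})
print(scheduler.launch("alice", "ubuntu", "small"))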

Figure 3.5. Infrastructure as a service reference model


The bottom layer is composed of the physical infrastructure, on top of which
the management layer operates. The infrastructure can be of different types; the
specific infrastructure used depends on the specific use of the cloud. A service
provider will most likely use a massive data center containing hundreds or thousands
of nodes. A cloud infrastructure developed in house, in a small or medium-sized
enterprise or within a university department, will most likely rely on a cluster.
Users of IaaS (M) solutions need to provide credentials to access third-party IaaS
providers or must own a private infrastructure in which the management software is
installed. This is the case
with Enomaly, Elastra, Eucalyptus, OpenNebula, and specific IaaS (M) solutions from
VMware, IBM, and Microsoft.
Finally, the reference architecture applies to IaaS implementations that provide
computing resources, especially for the scheduling component. If storage is the main
service provided, it is still possible to distinguish these three layers. The role of
infrastructure management software is not to keep track and manage the execution of
virtual machines but to provide access to large infrastructures and implement storage
virtualization solutions on top of the physical layer.
ii. Platform as a service: Platform-as-a-Service (PaaS) solutions provide a
development and deployment platform for running applications in the cloud. They
constitute the middleware on top of which applications are built.
Application management is the core functionality of the middleware. PaaS
implementations provide applications with a runtime environment and do not expose
any service for managing the underlying infrastructure. They automate the process of
deploying applications to the infrastructure, configuring application components,
provisioning and configuring supporting technologies such as load balancers and
databases, and managing system change based on policies set by the user.
The specific development model decided for applications determines the
interface exposed to the user. Some implementations provide a completely Web-based
interface hosted in the cloud and offering a variety of services. It is possible to find
integrated developed environments based on 4GL and visual programming concepts,
or rapid prototyping environments where applications are built by assembling mash-
ups and user-defined components and successively customized.
Other implementations of the PaaS model provide a complete object model for
representing an application and provide a programming language-based approach.
This approach generally offers more flexibility and opportunities but incurs longer
development cycles. Developers generally have the full power of programming
languages such as Java, .NET, Python, or Ruby, with some restrictions to provide
better scalability and security.
In this case the traditional development environments can be used to design and
develop applications, which are then deployed on the cloud by using the APIs exposed
by the PaaS provider. Specific components can be offered together with the
development libraries for better exploiting the services offered by the PaaS
environment. Sometimes a local runtime environment that simulates the conditions of
the cloud is given to users for testing their applications before deployment. This
environment can be restricted in terms of features, and it is generally not optimized for
scaling.
It is possible to organize the various solutions into three wide categories: PaaS-
I, PaaS-II, and PaaS-III.
The first category identifies PaaS implementations that completely follow the
cloud computing style for application development and deployment. They offer an
integrated development environment hosted within the Web browser where
applications are designed, developed, composed, and deployed. This is the case of
Force.com and LongJump. Both deliver as platforms the combination of middleware
and infrastructure.
In the second class we can list all those solutions that are focused on providing
a scalable infrastructure for Web application, mostly websites. In this case, developers
generally use the providers’ APIs, which are built on top of industrial runtimes, to
develop applications.
Google AppEngine is the most popular product in this category. It provides a scalable
runtime based on the Java and Python programming languages, which have been
modified to provide a secure runtime environment and enriched with additional
APIs and components to support scalability.
AppScale, an open-source implementation of Google AppEngine, provides
interface-compatible middleware that has to be installed on a physical infrastructure.
Joyent Smart Platform takes a similar approach to Google AppEngine. A different
approach is taken by Heroku and Engine Yard, which provide scalability support for
Ruby- and Ruby on Rails-based websites.
The third category consists of all those solutions that provide a cloud
programming platform for any kind of application, not only Web applications. Among
these, the most popular is Microsoft Windows Azure, which provides a
comprehensive framework for building service-oriented cloud applications on top of
the .NET technology, hosted on Microsoft’s data centers.
Other solutions in the same category, such as Manjrasoft Aneka, Apprenda
SaaSGrid, Appistry Cloud IQ Platform, DataSynapse, and GigaSpaces DataGrid,
provide only middleware with different services. Table 3.2 shows a few options
available in the Platform-as-a-Service market segment.
 Runtime framework: This framework represents the "software stack" of
the PaaS model and the most intuitive aspect that comes to people’s minds
when they refer to PaaS solutions. The runtime framework executes
end-user code according to the policies set by the user and the provider.
 Abstraction: PaaS solutions are distinguished by the higher level of
abstraction that they provide. Whereas in the case of IaaS solutions the
focus is on delivering "raw" access to virtual or physical infrastructure, in
the case of PaaS the focus is on the applications the cloud must support.
This means that PaaS solutions offer a way to deploy and manage
applications on the cloud rather than a bunch of virtual machines on top of
which the IT infrastructure is built and configured.
 Automation: PaaS environments automate the process of deploying
applications to the infrastructure, scaling them by provisioning additional
resources when needed. This process is performed automatically,
according to the SLA made between the customer and the provider (a
minimal scaling sketch follows this list). This feature is normally not
native in IaaS solutions, which only provide ways to provision more resources.
 Cloud services: PaaS offerings provide developers and architects with
services and APIs, helping them to simplify the creation and delivery of
elastic and highly available cloud applications. These services are the key
differentiators among competing PaaS solutions and generally include
specific components for developing applications, advanced services for
application monitoring, management, and reporting.
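The scaling sketch referenced in the Automation item above (Python; the SLA target and thresholds are invented) shows the kind of policy a PaaS layer can apply on the user's behalf:

def desired_instances(current, avg_response_ms, sla_ms=200):
    if avg_response_ms > sla_ms:                        # SLA at risk: add capacity
        return current + 1
    if avg_response_ms < sla_ms * 0.5 and current > 1:  # well under target: shrink
        return current - 1
    return current

print(desired_instances(current=3, avg_response_ms=250))  # 4
print(desired_instances(current=3, avg_response_ms=80))   # 2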
Many PaaS offerings provide this scaling facility, which is naturally built into
the framework they leverage to provide a cloud computing solution. One of the major
concerns of leveraging PaaS solutions for implementing applications is vendor lock-in.
iii. Software as a service: Software-as-a-Service (SaaS) is a software delivery
model that provides access to applications through the Internet as a Web-based
service. It provides a means to free users from complex hardware and software
management by offloading such tasks to third parties, which build applications
accessible to multiple users through a Web browser.
The SaaS model is appealing for applications serving a wide range of users
that can be adapted to specific needs with little further customization. This
requirement characterizes SaaS as a "one-to-many" software delivery model, whereby
an application is shared across multiple users. This is the case with CRM and ERP
applications that constitute common needs for almost all enterprises, from small to
medium-sized and large businesses. Every enterprise will have the same requirements
for the basic features concerning CRM and ERP; different needs can be satisfied with
further customization. This scenario facilitates the development of software platforms
that provide a general set of features and support specialization and ease of integration
of new components.
Moreover, it constitutes the perfect candidate for hosted solutions, since the
applications delivered to the user are the same, and the applications themselves
provide users with the means to shape the applications according to user needs. As a
result, SaaS applications are naturally multitenant.
Multitenancy, which is a feature of SaaS compared to traditional packaged
software, allows providers to centralize and sustain the effort of managing large
hardware infrastructures, maintaining and upgrading applications transparently to the
users, and optimizing resources by sharing the costs among the large user base. On the
customer side, such costs constitute a minimal fraction of the usage fee paid for the
software.
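A minimal sketch of this "one-to-many" multitenant model (Python with the standard-library sqlite3 module; the schema is invented): a single shared application and database serve many customers, with every query scoped by a tenant ID:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (tenant_id TEXT, name TEXT)")
db.execute("INSERT INTO contacts VALUES ('acme', 'Ada'), ('globex', 'Grace')")

def contacts_for(tenant_id):
    # every query is scoped by the tenant ID, so customers stay isolated
    rows = db.execute("SELECT name FROM contacts WHERE tenant_id = ?", (tenant_id,))
    return [name for (name,) in rows]

print(contacts_for("acme"))   # ['Ada'] -- the other tenant's rows are invisible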
The analysis carried out by SIIA was mainly oriented to cover application
service providers (ASPs) and all their variations, which capture the concept of
software applications consumed as a service in a broader sense. ASPs already had
some of the core characteristics of SaaS:


 The product sold to the customer is application access.
 The application is centrally managed.
 The service delivered is one-to-many.
 The service delivered is an integrated solution delivered on the contract, which
means provided as promised.
Initially ASPs offered hosting solutions for packaged applications, which were
served to multiple customers. Successively, other options, such as Web-based
integration of third-party application services, started to gain interest, and a new range
of opportunities opened up to independent software vendors and service providers.
These opportunities eventually evolved into a more flexible model to deliver
applications as a service: the SaaS model. ASPs provided access to packaged software
solutions that addressed the needs of a variety of customers. Initially this approach
was affordable for service providers, but it later became inconvenient when the cost of
customizations and specializations increased.
Benefits:
 Software cost reduction and total cost of ownership (TCO) were paramount
 Service-level improvements
 Rapid implementation
 Standalone and configurable applications
 Rudimentary application and data integration
 Subscription and pay-as-you-go (PAYG) pricing
Software-as-a-Service applications can serve different needs. CRM, ERP, and
social networking applications are definitely the most popular ones. SalesForce.com is
probably the most successful and popular example of a CRM service.
It provides a wide range of services for applications: customer relationship and
human resource management, enterprise resource planning, and many other features.
SalesForce.com builds on top of the Force.com platform, which provides a fully
featured environment for building applications.
In particular, through AppExchange customers can publish, search, and
integrate new services and features into their existing applications. This makes
SalesForce.com applications completely extensible and customizable.

Another important class of popular SaaS applications comprises social
networking applications such as Facebook and professional networking sites such as
LinkedIn. Other than providing the basic features of networking, they allow
incorporating and extending their capabilities by integrating third-party applications.
Office automation applications are also an important representative for SaaS
applications: Google Documents and Zoho Office are examples of Web-based
applications that aim to address all user needs for documents, spreadsheets, and
presentation management. They offer a Web-based interface for creating, managing,
and modifying documents that can be easily shared among users and made accessible
from anywhere. It is important to note the role of SaaS solution enablers, which
provide an environment in which to integrate third-party services and share
information with others.
A quite successful example is Box.net, a SaaS application providing users
with a Web space and profile that can be enriched and extended with third-party
applications such as office automation, integration with CRM-based solutions, social
websites, and photo editing.

4. IMPLEMENTATION LEVELS OF VIRTUALIZATION

 Explain about the various implementation levels of virtualization.

Virtualization is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed in the same hardware machine.
The purpose of a VM is to enhance resource sharing by many users and
improve computer performance in terms of resource utilization and application
flexibility.
Hardware resources (CPU, memory, I/O devices, etc.) or software resources
(operating system and software libraries) can be virtualized in various functional
layers.
The idea is to separate the hardware from the software to yield better system
efficiency. For example, computer users gained access to much enlarged memory
space when the concept of virtual memory was introduced. Similarly, virtualization
techniques can be applied to enhance the use of compute engines, networks, and
storage.

Levels of Virtualization:

A traditional computer runs with a host operating system specially tailored for
its hardware architecture, as shown in Figure 3.6 (a). After virtualization, different
user applications managed by their own operating systems (guest OS) can run on the
same hardware, independent of the host OS.
This is often done by adding additional software, called a virtualization layer, as
shown in Figure 3.6 (b). This virtualization layer is known as a hypervisor or virtual
machine monitor (VMM). The VMs are shown in the upper boxes, where applications
run with their own guest OS over the virtualized CPU, memory, and I/O resources. The
main function of the software layer for virtualization is to virtualize the physical
hardware of a host machine into virtual resources to be used by the VMs, exclusively.
The virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system. Common virtualization
layers include the instruction set architecture(ISA) level, hardware level, operating
system level, library support level, and application level.

Fig 3.6 The architecture of a computer system before and after virtualization

Fig 3.7 Virtualization ranging from hardware to applications in five abstraction levels
Instruction Set Architecture Level: At the ISA level, virtualization is performed by
emulating a given ISA by the ISA of the host machine. For example, MIPS binary
code can run on an x86-based host machine with the help of ISA emulation. With this
approach, it is possible to run a large amount of legacy binary code written for various
processors on any given new hardware host machine. Instruction set emulation leads
to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter
program interprets the source instructions to target instructions one by one. One
source instruction may require tens or hundreds of native target instructions to
perform its function. Obviously, this process is relatively slow. For better
performance, dynamic binary translation is desired.

This approach translates basic blocks of dynamic source instructions to target
instructions. The basic blocks can also be extended to program traces or super blocks
to increase translation efficiency.
Instruction set emulation requires binary translation and optimization. A virtual
instruction set architecture (V-ISA) thus requires adding a processor-specific software
translation layer to the compiler.
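
The interpretation loop described above can be sketched in a few lines of Python. The toy ISA below (two registers, three opcodes) is purely hypothetical: each source instruction is decoded and carried out by several host-level operations, which illustrates why interpretation is slow and why dynamic binary translation is preferred for performance.

def interpret(program):
    regs = {"r0": 0, "r1": 0}               # toy register file
    pc = 0                                  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "li":                      # load immediate
            regs[args[0]] = args[1]
        elif op == "add":                   # add two registers
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "halt":
            break
        pc += 1
    return regs

# r0 = 2 + 3 on the emulated ISA
print(interpret([("li", "r1", 2), ("li", "r0", 3),
                 ("add", "r0", "r0", "r1"), ("halt",)]))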

Hardware Abstraction Level: Hardware-level virtualization is performed right on
top of the bare hardware. The idea is to virtualize a computer's resources, such as its
processors, memory, and I/O devices. The intention is to improve the hardware
utilization rate when multiple users share the hardware concurrently.

Operating System Level: This refers to an abstraction layer between the traditional
OS and user applications. OS-level virtualization creates isolated containers on a
single physical server, with OS instances that utilize the hardware and software in
data centers.
The containers behave like real servers. OS-level virtualization is commonly
used in creating virtual hosting environments to allocate hardware resources among a
large number of mutually distrusting users. It is also used, to a lesser extent, in
consolidating server hardware by moving services on separate hosts into containers or
VMs on one server.

Library Support Level: Most applications use APIs exported by user-level libraries
rather than lengthy system calls to the OS. Since most systems provide well-documented
APIs, such an interface becomes another candidate for virtualization.
Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of a system through API
hooks. The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts. Another example is vCUDA, which
allows applications executing within VMs to leverage GPU hardware acceleration.
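
The idea of an API hook can be illustrated with a small Python sketch. This is not how WINE or vCUDA are implemented internally; it merely shows interposition on a library interface: a real call (os.getcwd here) is wrapped so that it can be inspected, translated, or redirected before it reaches the underlying system.

import os

_real_getcwd = os.getcwd            # keep a reference to the real API

def hooked_getcwd():
    # The hook could translate paths, forward the call to another
    # system, or virtualize the result entirely.
    print("API hook: getcwd() intercepted")
    return _real_getcwd()

os.getcwd = hooked_getcwd           # interpose on the library interface
print(os.getcwd())                  # applications now go through the hook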

User-Application Level: Virtualization at the application level virtualizes an
application as a VM. On a traditional OS, an application often runs as a process.
Therefore, application-level virtualization is also known as process-level
virtualization. The most popular approach is to deploy high-level language (HLL)
VMs.
In this scenario, the virtualization layer sits as an application program on top of
the operating system, and the layer exports an abstraction of a VM that can run
programs written and compiled to a particular abstract machine definition. Any
program written in the HLL and compiled for this VM will be able to run on it.
The Microsoft .NET CLR and Java Virtual Machine (JVM) are two good
examples of this class of VM. Other forms of application-level virtualization are
known as application isolation, application sandboxing, or application streaming. The
process involves wrapping the application in a layer that is isolated from the host OS
and other applications. The result is an application that is much easier to distribute and
remove from user workstations.

5. VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS

 Explain about Virtualization Structures/Tools and Mechanisms in detail.

There are three typical classes of VM architecture. Before virtualization, the
operating system manages the hardware. After virtualization, a virtualization layer is
inserted between the hardware and the operating system.
In such a case, the virtualization layer is responsible for converting portions of
the real hardware into virtual hardware. Therefore, different operating systems such as
Linux and Windows can run on the same physical machine, simultaneously.
Depending on the position of the virtualization layer, there are several classes
of VM architectures, namely the hypervisor architecture, para-virtualization, and
host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine
Monitor); they both perform the same virtualization operations.

Hypervisor and Xen Architecture: The hypervisor supports hardware-level
virtualization on bare-metal devices like CPU, memory, disk, and network interfaces.
The hypervisor software sits directly between the physical hardware and its OS. This
virtualization layer is referred to as either the VMM or the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications.
Depending on its functionality, a hypervisor can assume a micro-kernel architecture,
like Microsoft Hyper-V, or a monolithic hypervisor architecture, like the VMware
ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions
(such as physical memory management and processor scheduling). The device drivers
and other changeable components are outside the hypervisor. A monolithic hypervisor
implements all the aforementioned functions, including those of the device drivers.
Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller
than that of a monolithic hypervisor. Essentially, a hypervisor must be able to convert
physical devices into virtual resources dedicated for the deployed VM to use.
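
The hypercall interface can be pictured with a toy dispatch table. The call numbers and handler names below are hypothetical (not the real Xen or Hyper-V ABI): a guest traps to the hypervisor with a call number and arguments, and the hypervisor dispatches to the matching handler, much as an OS kernel dispatches system calls.

HYPERCALLS = {}

def hypercall(number):              # register a handler under a call number
    def register(handler):
        HYPERCALLS[number] = handler
        return handler
    return register

@hypercall(1)
def set_timer(ns):                  # e.g., arm a virtual timer
    return f"timer armed for {ns} ns"

@hypercall(2)
def update_page_table(entry):       # e.g., a privileged MMU update
    return f"page-table entry {entry:#x} installed"

def guest_traps(number, *args):     # entry point for a trapped guest call
    return HYPERCALLS[number](*args)

print(guest_traps(1, 1000000))
print(guest_traps(2, 0xDEAD000))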

The Xen Architecture: The core components of a Xen system are the hypervisor,
kernel, and applications. The organization of the three components is important. Like
other virtualization systems, many guest OSes can run on top of the hypervisor.
However, not all guest OSes are created equal, and one in particular controls the
others.
The guest OS, which has control ability, is called Domain 0, and the others are
called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when
Xen boots without any file system drivers being available. Domain 0 is designed to
access hardware directly and manage devices. Therefore, one of the responsibilities of
Domain 0 is to allocate and map hardware resources for the guest domains (the
Domain U domains).

Binary Translation with Full Virtualization: Depending on implementation
technologies, hardware virtualization can be classified into two categories: full
virtualization and host-based virtualization. Full virtualization does not need to
modify the host OS; it relies on binary translation to trap and virtualize the
execution of certain sensitive, non-virtualizable instructions. The guest OSes and their
applications consist of noncritical and critical instructions. In a host-based system,
both a host OS and a guest OS are used, and a virtualization software layer is built
between the host OS and guest OS.

Full Virtualization: With full virtualization, noncritical instructions run on the
hardware directly, while critical instructions are discovered and replaced with traps
into the VMM to be emulated by software. Both the hypervisor and VMM approaches
are considered full virtualization.

Binary Translation of Guest OS Requests Using a VMM:
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans
the instruction stream and identifies the privileged, control- and behavior-sensitive
instructions. When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions.

Fig 3.8 Indirect execution of complex instructions via binary translation of
guest OS requests using the VMM, plus direct execution of simple
instructions on the same host.
The method used in this emulation is called binary translation.
Therefore, full virtualization combines binary translation and direct execution. The
guest OS is completely decoupled from the underlying hardware; consequently, the
guest OS is unaware that it is being virtualized. Binary translation employs a code
cache to store translated hot instructions to improve performance, but it increases the
cost of memory usage.
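
The code cache can be modeled as a lookup table keyed by the guest program counter. In the illustrative sketch below, the first execution of a basic block pays the translation cost and later executions reuse the cached translation, trading memory for speed as described above.

translation_cache = {}

def translate_block(guest_pc):
    # Stand-in for real translation of a basic block of guest code.
    return lambda: f"executing translated block at {guest_pc:#x}"

def run_block(guest_pc):
    block = translation_cache.get(guest_pc)
    if block is None:                        # miss: translate (slow path)
        block = translate_block(guest_pc)
        translation_cache[guest_pc] = block  # cache the hot translation
    return block()                           # hit: run the cached code

print(run_block(0x4000))   # first call translates, then executes
print(run_block(0x4000))   # second call reuses the cached translation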

Host-Based Virtualization: An alternative VM architecture is to install a
virtualization layer on top of the host OS. This host OS is still responsible for
managing the hardware. The guest OSes are installed and run on top of the
virtualization layer. Dedicated applications may run on the VMs; certainly, some
other applications can also run with the host OS directly.
This host-based architecture has some distinct advantages, as enumerated next.
First, the user can install this VM architecture without modifying the host OS. The
virtualizing software can rely on the host OS to provide device drivers and other low-
level services. This will simplify the VM design and ease its deployment. Second, the
host-based approach appeals to many host machine configurations.

Compared to the hypervisor/VMM architecture, the performance of the host-based
architecture may also be low. When an application requests hardware access, it
involves four layers of mapping, which downgrades performance significantly.

Para-Virtualization with Compiler Support: Para-virtualization needs to modify
the guest operating systems. A para-virtualized VM provides special APIs requiring
substantial OS modifications in user applications. Performance degradation is a
critical issue of a virtualized system; no one wants to use a VM if it is much slower
than a physical machine.
The virtualization layer can be inserted at different positions in a machine
software stack. Para-virtualization attempts to reduce the virtualization
overhead, and thus improve performance, by modifying only the guest OS kernel.
Figure 3.9 illustrates the concept of a para-virtualized VM architecture; the guest
operating systems are para-virtualized.
The traditional x86 processor offers four instruction execution rings: Rings 0,
1, 2, and 3. The lower the ring number, the higher the privilege of instruction being
executed. The OS is responsible for managing the hardware and the privileged
instructions to execute at Ring 0, while user-level applications run at Ring 3.

Fig 3.9 Para-virtualized VM architecture

Fig 3.10 The use of a para-virtualized guest OS assisted by an intelligent
compiler to replace nonvirtualizable OS instructions by hypercalls.

Para-Virtualization Architecture: When the x86 processor is virtualized, a
virtualization layer is inserted between the hardware and the OS. According to the
x86 ring definitions, the virtualization layer should also be installed at Ring 0.
Para-virtualization replaces nonvirtualizable instructions with hypercalls that
communicate directly with the hypervisor or VMM. However, once the guest OS
kernel is modified for virtualization, it can no longer run on the hardware directly.
Although para-virtualization reduces the overhead, it incurs other
problems. First, its compatibility and portability may be in doubt, because it must
support the unmodified OS as well. Second, the cost of maintaining para-virtualized
OSes is high, because they may require deep OS kernel modifications. Finally, the
performance advantage of para-virtualization varies greatly due to workload variations.

KVM (Kernel-Based VM): This is a Linux para-virtualization system, included as a
part of the Linux version 2.6.20 kernel. Memory management and scheduling
activities are carried out by the existing Linux kernel.

The KVM does the rest, which makes it simpler than the hypervisor that controls the
entire machine.
KVM is a hardware-assisted para-virtualization tool, which improves
performance and supports unmodified guest OSes such as Windows, Linux, Solaris,
and other UNIX variants. Unlike the full virtualization architecture which intercepts
and emulates privileged and sensitive instructions at runtime, para-virtualization
handles these instructions at compile time.
The guest OS kernel is modified to replace the privileged and sensitive
instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-
virtualization architecture. The guest OS running in a guest domain may run at Ring 1
instead of at Ring 0, which implies that the guest OS may not be able to execute some
privileged and sensitive instructions. The privileged instructions are implemented by
hypercalls to the hypervisor. After replacing the instructions with hypercalls, the
modified guest OS emulates the behavior of the original guest OS.

6. VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

 Explain in detail about Virtualization of CPU, Memory, and I/O Devices.


To support virtualization, processors such as the x86 employ a special running
mode and instructions, known as hardware-assisted virtualization. In this way, the
VMM and guest OS run in different modes and all sensitive instructions of the guest
OS and its applications are trapped in the VMM. To save processor states, mode
switching is completed by hardware. For the x86 architecture, Intel and AMD have
proprietary technologies for hardware-assisted virtualization.
Hardware Support for Virtualization: Modern operating systems and processors
permit multiple processes to run simultaneously. If there is no protection mechanism
in a processor, all instructions from different processes will access the hardware
directly and cause a system crash. Therefore, all processors have at least two modes,
user mode and supervisor mode, to ensure controlled access of critical hardware.

Instructions running in supervisor mode are called privileged instructions;
other instructions are unprivileged instructions. In a virtualized environment, it is
more difficult to make OSes and applications run correctly because there are more
layers in the machine stack.
CPU Virtualization: A VM is a duplicate of an existing computer system in which a
majority of the VM instructions are executed on the host processor in native mode.
Thus, unprivileged instructions of VMs run directly on the host machine for higher
efficiency. Other critical instructions should be handled carefully for correctness and
stability. The critical instructions are divided into three categories:
Privileged instructions - Privileged instructions execute in a privileged mode and
will be trapped if executed outside this mode.
Control sensitive instructions - Control-sensitive instructions attempt to change the
configuration of resources used.
Behavior-sensitive instructions - Behavior-sensitive instructions have different
behaviors depending on the configuration of resources, including the load and store
operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM's
privileged and unprivileged instructions in the CPU's user mode while the VMM runs
in supervisor mode. When the privileged instructions, including control- and behavior-
sensitive instructions, of a VM are executed, they are trapped in the VMM. In this
case, the VMM acts as a unified mediator for hardware access from different VMs to
guarantee the correctness and stability of the whole system.
However, not all CPU architectures are virtualizable. RISC CPU architectures
can be naturally virtualized because all control- and behavior-sensitive instructions are
privileged instructions.
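
The trap-and-emulate flow can be summarized in a short sketch. The instruction names below are made up for illustration: unprivileged instructions run directly, while sensitive ones trap into the VMM, which emulates them on the VM's behalf.

SENSITIVE = {"write_cr3", "out_port"}   # assumed sensitive instruction names

def vmm_emulate(instr):
    return f"VMM safely emulated '{instr}' on behalf of the VM"

def execute(instr):
    if instr in SENSITIVE:                  # trapped into the VMM
        return vmm_emulate(instr)
    return f"'{instr}' ran directly on the host CPU in user mode"

for instr in ["add", "write_cr3", "load"]:
    print(execute(instr))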
Hardware-Assisted CPU Virtualization:
This technique attempts to simplify virtualization because full virtualization or
para-virtualization is complicated. Intel and AMD add an additional mode, called
privilege mode level (some people call it Ring -1), to x86 processors. Therefore,
operating systems can still run at Ring 0 and the hypervisor can run at Ring -1.

All the privileged and sensitive instructions are trapped in the hypervisor
automatically. This technique removes the difficulty of implementing binary
translation of full virtualization. It also lets the operating system run in VMs without
modification.
Memory Virtualization :
Virtual memory virtualization is similar to the virtual memory support provided
by modern operating systems. In a traditional execution environment, the operating
system maintains mappings of virtual memory to machine memory using page tables,
which is a one-stage mapping from virtual memory to machine memory.
All modern x86 CPUs include a memory management unit (MMU) and a
translation lookaside buffer (TLB) to optimize virtual memory performance.
However, in a virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM and dynamically allocating it to the
physical memory of the VMs. That means a two-stage mapping process should be
maintained by the guest OS and the VMM, respectively: virtual memory to physical
memory and physical memory to machine memory. Furthermore, MMU virtualization
should be supported, which is transparent to the guest OS. The guest OS continues to
control the mapping of virtual addresses to the physical memory addresses of VMs.
But the guest OS cannot directly access the actual machine memory.
The VMM is responsible for mapping the guest physical memory to the actual
machine memory. Figure 3.11 shows the two-level memory mapping procedure.
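
The two-stage mapping can be illustrated with toy page tables (the page numbers below are arbitrary). The guest OS performs the first translation, from virtual to guest-physical pages, and the VMM performs the second, from guest-physical to machine pages:

guest_page_table = {0x10: 0x2}      # guest virtual page -> guest physical page
vmm_p2m_table    = {0x2: 0x7F}      # guest physical page -> machine page

def translate(virtual_page):
    guest_physical = guest_page_table[virtual_page]   # stage 1: guest OS
    machine_page   = vmm_p2m_table[guest_physical]    # stage 2: VMM
    return machine_page

print(hex(translate(0x10)))         # 0x7f, the actual machine page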

I/O Virtualization: I/O virtualization involves managing the routing of I/O requests
between virtual devices and the shared physical hardware. There are three ways to
implement I/O virtualization:
 Full device emulation
 Para virtualization
 Direct I/O

Figure 3.11 Two-level memory mapping procedure.

Full device emulation is the first approach for I/O virtualization. Generally, this
approach emulates well-known, real-world devices. All the functions of a device or
bus infrastructure, such as device enumeration, identification, interrupts, and DMA,
are replicated in software. This software is located in the VMM and acts as a virtual
device. The I/O access requests of the guest OS are trapped in the VMM, which
interacts with the I/O devices.
A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates. The
para-virtualization method of I/O virtualization is typically used in Xen. It is also
known as the split driver model, consisting of a frontend driver and a backend driver.
The frontend driver is running in Domain U and the backend driver is running
in Domain 0. They interact with each other via a block of shared memory. The
frontend driver manages the I/O requests of the guest OSes and the backend driver is
responsible for managing the real I/O devices and multiplexing the I/O data of
different VMs.
Although para I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
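
The split driver model can be sketched as a shared queue between the two domains. This is a toy model, not the real Xen ring protocol: the frontend in Domain U posts requests to a region standing in for shared memory, and the backend in Domain 0 drains them and multiplexes the I/O of different VMs onto the real device.

from collections import deque

shared_ring = deque()               # stands in for the shared-memory ring

def frontend_submit(vm, request):   # runs in the guest (Domain U)
    shared_ring.append((vm, request))

def backend_drain():                # runs in Domain 0, which owns the device
    while shared_ring:
        vm, request = shared_ring.popleft()
        print(f"backend: issuing '{request}' for {vm} on the real device")

frontend_submit("vm1", "read block 42")
frontend_submit("vm2", "write block 7")
backend_drain()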

Figure 3.12 Device emulation for I/O virtualization implemented inside the
middle layer that maps real I/O devices into the virtual devices for the guest
device driver to use.
Virtualization in Multi-Core Processors: Virtualizing a multi-core processor is
relatively more complicated than virtualizing a uni-core processor. Though multi-core
processors are claimed to have higher performance by integrating multiple processor
cores in a single chip, multi-core virtualization has raised some new challenges for
computer architects, compiler constructors, system designers, and application
programmers.
There are mainly two difficulties: Application programs must be parallelized to
use all cores fully, and software must explicitly assign tasks to the cores, which is a
very complex problem.

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

A physical cluster is a collection of servers (physical machines) interconnected
by a physical network such as a LAN. When a traditional VM is initialized, the
administrator needs to manually write configuration information or specify the
configuration sources.
When more VMs join a network, an inefficient configuration always causes
problems with overloading or underutilization. Amazon's Elastic Compute Cloud
(EC2) is a good example of a web service that provides elastic computing power in a
cloud. EC2 permits customers to create VMs and to manage user accounts over the
time of their use.
Most virtualization platforms, including XenServer and VMware ESX Server,
support a bridging mode which allows all domains to appear on the network as
individual hosts. By using this mode, VMs can communicate with one another freely
through the virtual network interface card and configure the network automatically.
Physical versus Virtual Clusters: Virtual clusters are built with VMs installed at
distributed servers from one or more physical clusters. The VMs in a virtual cluster
are interconnected logically by a virtual network across several physical networks.
Figure 3.13 illustrates the concepts of virtual clusters and physical clusters. Each
virtual cluster is formed with physical machines or VMs hosted by multiple physical
clusters, and the virtual cluster boundaries are shown as distinct boundaries.
Properties: The virtual cluster nodes can be either physical or virtual machines.
Multiple VMs running with different OSes can be deployed on the same physical node.
A VM runs with a guest OS, which is often different from the host OS that manages
the resources of the physical machine on which the VM is implemented.

Figure 3.13 A cloud platform with four virtual clusters over three physical
clusters shaded differently.
The purpose of using VMs is to consolidate multiple functionalities on the
same server. This will greatly enhance server utilization and application flexibility.
VMs can be colonized (replicated) in multiple servers for the purpose of promoting
distributed parallelism, fault tolerance, and disaster recovery.
The size (number of nodes) of a virtual cluster can grow or shrink dynamically,
similar to the way an overlay network varies in size in a peer-to-peer (P2P) network.
The failure of any physical nodes may disable some VMs installed on the failing
nodes. But the failure of VMs will not pull down the host system. Figure 3.14 shows
the concept of a virtual cluster based on application partitioning or customization.
As a large number of VM images might be present, the most important thing is
to determine how to store those images in the system efficiently. There are common
installations for most users or applications, such as operating systems or user-level
programming libraries. These software packages can be preinstalled as templates
(called template VMs). With these templates, users can build their own software
stacks.
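
Template-based provisioning can be pictured as copying a preinstalled image and layering user-specific components on top. The fields below are illustrative only, not any real cloud API:

import copy

template_vm = {"os": "Linux", "libs": ["libc"], "apps": []}

def provision(user, extra_apps):
    vm = copy.deepcopy(template_vm)     # new OS instance copied from template
    vm["owner"] = user
    vm["apps"].extend(extra_apps)       # install user-specific components
    return vm

print(provision("auser1", ["mpi", "hdf5"]))
print(provision("auser2", ["web-server"]))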

Figure 3.14 The concept of a virtual cluster based on application partitioning

New OS instances can be copied from the template VM. User-specific
components such as programming libraries and applications can be installed to those
instances. Three physical clusters are shown on the left side of Figure 3.13, and four
virtual clusters are created on the right, over the physical clusters. The physical
machines are also called host systems. In contrast, the VMs are guest systems. The
host and guest systems may run with different operating systems.
Each VM can be installed on a remote server or replicated on multiple servers
belonging to the same or different physical clusters. The boundary of a virtual cluster
can change as VM nodes are added, removed, or migrated dynamically over time.

7a. VIRTUALIZATION FOR DATA-CENTER AUTOMATION

 Explain about Virtualization for Data-Center Automation in detail.

Data centers have grown rapidly in recent years, and all major IT companies
are pouring their resources into building new data centers. In addition, Google,
Yahoo!, Amazon, Microsoft, HP, Apple, and IBM are all in the game. All these
companies have invested billions of dollars in data-center construction and
automation.
Data-center automation means that huge volumes of hardware, software, and
database resources in these data centers can be allocated dynamically to millions of
Internet users simultaneously, with guaranteed QoS and cost-effectiveness. This
automation process is triggered by the growth of virtualization products and cloud
computing services.
The latest virtualization developments highlight high availability (HA), backup
services, workload balancing, and further increases in client bases. IDC projected
growth in automation, service orientation, policy-based management, and variable-cost
models in the virtualization market.

Server Consolidation in Data Centers:

In data centers, a large number of heterogeneous workloads can run on servers
at various times. These heterogeneous workloads can be roughly divided into two
categories: chatty workloads and non-interactive workloads. Chatty workloads may
burst at some point and return to a silent state at some other point.
A web video service is an example of this: many people use it at
night and few people use it during the day. Non-interactive workloads do not require
people's efforts to make progress after they are submitted. High-performance
computing is a typical example. At various stages, the requirements for
resources of these workloads are dramatically different.
However, to guarantee that a workload will always be able to cope with all
demand levels, the workload is statically allocated enough resources so that peak
demand is satisfied. Figure 3.15 illustrates server virtualization in a data center. In this
case, the granularity of resource optimization is focused on the CPU, memory, and
network interfaces. Therefore, it is common that most servers in data centers are
underutilized.
A large amount of hardware, space, power, and management cost of these
servers is wasted. Server consolidation is an approach to improve the low utility ratio
of hardware resources by reducing the number of physical servers. Among several
server consolidation techniques such as centralized and physical consolidation,
virtualization-based server consolidation is the most powerful.
Data centers need to optimize their resource management. In general, the use of
VMs increases resource management complexity. This causes a challenge in terms of
how to improve resource utilization as well as guarantee QoS in data centers. In detail,
server virtualization has the following side effects:
 Consolidation enhances hardware utilization. Many underutilized
servers are consolidated into fewer servers to enhance resource utilization.
Consolidation also facilitates backup services and disaster recovery.

 This approach enables more agile provisioning and deployment of
resources. In a virtual environment, the images of the guest OSes and their
applications are readily cloned and reused.
 The total cost of ownership is reduced. In this sense, server
virtualization causes deferred purchases of new servers, a smaller data-center
footprint, lower maintenance costs, and lower power, cooling, and cabling
requirements.
 This approach improves availability and business continuity. The crash
of a guest OS has no effect on the host OS or any other guest OS. It becomes easier to
transfer a VM from one server to another, because virtual servers are unaware of the
underlying hardware.
To automate data center operations, one must consider resource scheduling,
architectural support, power management, automatic or autonomic resource
management, performance of analytical models, and so on. In virtualized data centers,
an efficient, on-demand, fine-grained scheduler is one of the key factors to improve
resource utilization. Scheduling and reallocations can be done in a wide range of
levels in a set of data centers. The levels match at least at the VM level, server level,
and data center level. Ideally, scheduling and resource reallocations should be done at
all levels.
However, due to the complexity of this, current techniques focus only on a
single level or, at most, two levels. Dynamic CPU allocation is based on VM
utilization and application-level QoS metrics. One method considers both CPU and
memory flowing as well as automatically adjusting resource overhead based on
varying workloads in hosted services.
Another scheme uses a two-level resource management system to handle the
complexity involved: a local controller at the VM level and a global controller at the
server level are designed, and they implement autonomic resource allocation via the
interaction of the local and global controllers. Multi-core processors and virtualization
are two cutting-edge techniques that can enhance each other.

Virtual Storage Management: The term "storage virtualization" was widely used
before the renaissance of system virtualization. Yet the term has a different meaning
in a system virtualization environment. Previously, storage virtualization was largely
used to describe the aggregation and repartitioning of disks at very coarse time scales
for use by physical machines.
In system virtualization, virtual storage includes the storage managed by
VMMs and guest OSes. Generally, the data stored in this environment can be
classified into two categories: VM images and application data. The VM images are
special to the virtual environment, while application data includes all other data which
is the same as the data in traditional OS environments.
The most important aspects of system virtualization are encapsulation and
isolation. In virtualization environments, a virtualization layer is inserted between the
hardware and traditional operating systems or a traditional operating system is
modified to support virtualization.
This procedure complicates storage operations. On the one hand, storage
management of the guest OS performs as though it is operating in a real hard disk
while the guest OSes cannot access the hard disk directly.
On the other hand, many guest OSes contend for the hard disk when many VMs are
running on a single physical machine. Since traditional storage management
techniques do not consider the features of storage in virtualization environments,
Parallax designs a novel architecture in which storage features that have traditionally
been implemented directly on high-end storage arrays and switches are relocated into
a federation of storage VMs.

Cloud OS for Virtualized Data Centers: Data centers must be virtualized to serve as
cloud providers. Table 3.6 summarizes four virtual infrastructure (VI) managers and
OSes. These VI managers and OSes are specially tailored for virtualizing data centers
which often own a large number of servers in clusters. Nimbus, Eucalyptus, and
OpenNebula are all open source software available to the general public. Only
vSphere 4 is a proprietary OS for cloud resource virtualization and management over
data centers.

Trust Management in Virtualized Data Centers: A VMM changes the computer
architecture. It provides a layer of software between the operating systems and system
hardware to create one or more VMs on a single physical platform. A VM entirely
encapsulates the state of the guest operating system running inside it. Encapsulated
machine state can be copied and shared over the network and removed like a normal
file, which poses a challenge to VM security.
In general, a VMM can provide secure isolation, and a VM accesses hardware
resources through the control of the VMM, so the VMM is the base of the security of
a virtual system. Normally, one VM is taken as a management VM and given
privileges such as creating, suspending, resuming, or deleting a VM.

VM-Based Intrusion Detection: Intrusions are unauthorized access to a certain
computer from local or network users, and intrusion detection is used to recognize
such unauthorized access. An intrusion detection system (IDS) is built on operating
systems, and is based on the characteristics of intrusion actions. A typical IDS can be
classified as a host-based IDS (HIDS) or a network-based IDS (NIDS), depending on
the data source.
Virtualization-based intrusion detection can isolate guest VMs on the same
hardware platform. Even if some VMs are invaded successfully, they never influence
other VMs, which is similar to the way in which a NIDS operates.
The VM-based IDS contains a policy engine and a policy module. The policy
framework can monitor events in different guest VMs through an operating system
interface library, and PTrace is used to trace the security policy of the monitored host.
It is difficult to predict and prevent all intrusions without delay; therefore, an analysis
of the intrusion action is extremely important after an intrusion occurs.

Figure 3.15 The architecture of Livewire for intrusion detection using a dedicated
VM.

7b. PROS AND CONS OF CLOUD COMPUTING

 Write short notes on the pros and cons of cloud computing. Or
 Explain in detail the advantages and disadvantages of cloud computing.

Advantages of Cloud Computing

i. Cost Efficiency: This is the biggest advantage of cloud computing, achieved by
eliminating the investment in stand-alone software or servers. By leveraging the
cloud's capabilities, companies can save on licensing fees and at the same time
eliminate overhead charges such as the cost of data storage, software updates,
management, etc.
The cloud is in general available at much cheaper rates than traditional
approaches and can significantly lower the overall IT expenses. At the same time,
convenient and scalable charging models have emerged (such as one-time-payment
and pay-as-you-go), making the cloud even more attractive.
If you want to get more technical and analytical, cloud computing delivers a
better cash flow by eliminating the capital expense (CAPEX) associated with
developing and maintaining the server infrastructure.
ii. Convenience and continuous availability: Public clouds offer services that are
available wherever the end user might be located. This approach enables easy access
to information and accommodates the needs of users in different time zones and
geographic locations. As a side benefit, collaboration booms since it is now easier
than ever to access, view and modify shared documents and files.
Moreover, service uptime is in most cases guaranteed, providing in that way
continuous availability of resources. The various cloud vendors typically use multiple
servers for maximum redundancy. In case of system failure, alternative instances are
automatically spawned on other machines.
iii. Backup and Recovery: The process of backing up and recovering data is
simplified, since the data now resides in the cloud and not on a physical device. The
various cloud providers offer reliable and flexible backup/recovery solutions. In some
cases, the cloud itself is used solely as a backup repository of the data located in local
computers.
iv. Cloud is environmentally friendly: The cloud is in general more efficient than
the typical IT infrastructure and takes fewer resources to compute, thus saving
energy. For example, when servers are not used, the infrastructure normally scales
down, freeing up resources and consuming less power. At any moment, only the
resources that are truly needed are consumed by the system.
v. Resiliency and Redundancy: A cloud deployment is usually built on a robust
architecture thus providing resiliency and redundancy to its users. The cloud offers
automatic failover between hardware platforms out of the box, while disaster recovery
services are also often included.
vi. Scalability and Performance: Scalability is a built-in feature for cloud
deployments. Cloud instances are deployed automatically only when needed and as a
result, you pay only for the applications and data storage you need. Hand in hand, also
comes elasticity, since clouds can be scaled to meet your changing IT system
demands.

Regarding performance, the systems utilize distributed architectures which
offer excellent speed of computation. Again, it is the provider's responsibility to
ensure that your services run on cutting-edge machinery. Instances can be added
instantly for improved performance, and customers have access to the total resources
of the cloud's core hardware via their dashboards.
vii. Quick deployment and ease of integration: A cloud system can be up and
running in a very short period, making quick deployment a key benefit. On the same
aspect, the introduction of a new user in the system happens instantaneously,
eliminating waiting periods.
Furthermore, software integration occurs automatically and organically in
cloud installations. A business is allowed to choose the services and applications that
best suit their preferences, while there is minimum effort in customizing and
integrating those applications.
viii. Increased Storage Capacity: The cloud can accommodate and store much more
data compared to a personal computer, and in a way offers almost unlimited storage
capacity. It eliminates worries about running out of storage space and at the same time
spares businesses the need to upgrade their computer hardware, further reducing the
overall IT cost.
ix. Device Diversity and Location Independence: Cloud computing services can be
accessed via a plethora of electronic devices that are able to access the
internet. These devices include not only traditional PCs, but also smart phones,
tablets, etc. With the cloud, the "Bring your own device" (BYOD) policy can be easily
adopted, permitting employees to bring personally owned mobile devices to their
workplace.
An end-user might decide not only which device to use, but also where to
access the service from. There is no limitation of place and medium. We can access
our applications and data anywhere in the world, making this method very attractive to
people. Cloud computing is in that way especially appealing to international
companies as it offers the flexibility for its employees to access company files
wherever they are.

x. Smaller learning curve: Cloud applications usually entail smaller learning curves,
since people are often already used to them. Users find it easier to adopt them and
come up to speed much faster. Prime examples are applications like GMail and
Google Docs.
Disadvantages of Cloud Computing: As made clear from the above, cloud
computing is a tool that offers enormous benefits to its adopters. However, being a
tool, it also comes with its set of problems and inefficiencies.
i. Security and privacy in the Cloud: Security is the biggest concern when it
comes to cloud computing. By leveraging a remote cloud based infrastructure, a
company essentially gives away private data and information, things that might be
sensitive and confidential. It is then up to the cloud service provider to manage,
protect, and retain the data; thus the provider's reliability is very critical. A company's
existence might be put in jeopardy, so all possible alternatives should be explored
before a decision is made. On the same note, even end users might feel uncomfortable
surrendering their data to a third party.
Similarly, privacy in the cloud is another huge issue. Companies and users have
to trust that their cloud service vendors will protect their data from unauthorized
users. The various stories of data loss and password leakage in the media do not
help to reassure some of the most concerned users.
ii. Dependency and vendor lock-in: One of the major disadvantages of cloud
computing is the implicit dependency on the provider. This is what the industry calls
"vendor lock-in", since it is difficult, and sometimes impossible, to migrate from a
provider once you have committed to one. If a user wishes to switch to some other
provider, it can be really painful and cumbersome to transfer huge amounts of data
from the old provider to the new one. This is another reason why you should carefully
and thoroughly contemplate all options when picking a vendor.
iii. Technical Difficulties and Downtime: Certainly, smaller businesses will enjoy
not having to deal with daily technical issues and will prefer handing those to an
established IT company; however, you should keep in mind that all systems might face
dysfunctions from time to time. Outages and downtime are possible even for the best
cloud service providers, as the past has shown.

Additionally, you should remember that the whole setup is dependent on
internet access, thus any network or connectivity problems will render the setup
useless. As a minor detail, also keep in mind that it might take several minutes for the
cloud to detect a server fault and launch a new instance from an image snapshot.
iv. Limited control and flexibility: Since the applications and services run on
remote, third party virtual environments, companies and users have limited control
over the function and execution of the hardware and software. Moreover, since remote
software is being used, it usually lacks the features of an application running locally.
v. Increased Vulnerability: Related to the security and privacy concerns mentioned
before, note that cloud-based solutions are exposed on the public internet and are thus
a more vulnerable target for malicious users and hackers. Nothing on the Internet is
completely secure, and even the biggest players suffer from serious attacks and
security breaches. Due to the interdependency of the system, if one of the machines
on which data is stored is compromised, personal information might leak to the world.

UNIT-4

PROGRAMMING MODEL

Open source grid middleware packages – Globus Toolkit (GT4) Architecture,
Configuration – Usage of Globus – Main components and Programming model –
Introduction to Hadoop Framework – Map reduce, Input splitting, map and
reduce functions, specifying input and output parameters, configuring and
running a job – Design of Hadoop file system, HDFS concepts, command line
and java interface, dataflow of File read & File write.

PART – A

1. What is Globus Toolkit 4? What is the motivation behind it?


The Globus Toolkit, started in 1995 with funding from DARPA, is an open
middleware library for the grid computing communities. These open source software
libraries support many operational grids and their applications on an international
basis. The toolkit addresses common problems and issues related to grid resource
discovery, management, communication, security, fault detection, and portability. The
software itself provides a variety of components and capabilities. The library includes
a rich set of service implementations.
Motivation – Globus Toolkit 4:
The Globus Toolkit was initially motivated by a desire to remove obstacles that
prevent seamless collaboration, and thus sharing of resources and services, in scientific
and engineering applications. The shared resources can be computers, storage, data,
services, networks, science instruments (e.g., sensors), and so on.

2. Mention the functional modules of GT4 library.

Service Functionality                  Module Name   Functional Description
Global Resource Allocation Manager     GRAM          Grid Resource Access and Management (HTTP-based)
Communication                          Nexus         Unicast and multicast communication
Grid Security Infrastructure           GSI           Authentication and related security services
Monitoring and Discovery Service       MDS           Distributed access to structure and state information
Health and Status                      HBM           Heartbeat monitoring of system components
Global Access of Secondary Storage     GASS          Grid access of data in remote secondary storage
Grid File Transfer                     GridFTP       Inter-node fast file transfer

3. List out the packages of GT4.

 Source package
 Binary package

4. Write down the steps for installation of binary packages.

 Obtain the Globus Toolkit 4 binary package from the Globus site.
 Extract the binary package as the Globus user
 Set environmental variables for the Globus location.
 Create and change the ownership of directory for user and group globus
 Configure and install Globus Toolkit 4

5. Write down the steps for installation of source packages.

 Obtain the Globus Toolkit 4 source package from the Globus site
 Extract the source package with the Globus user ID
 Set environmental variables for the Globus location.

138

Visit & Downloaded from : www.LearnEngineering.in


Visit & Downloaded from : www.LearnEngineering.in

 Create and change the ownership of the directory for user and group Globus
 Configure and install Globus Toolkit 4

6. What is Hadoop?

Hadoop is the Apache Software Foundation top-level project that holds the
various Hadoop subprojects that graduated from the Apache Incubator. The Hadoop
project provides and supports the development of open source software that supplies a
framework for the development of highly scalable distributed computing applications.
The Hadoop framework handles the processing details, leaving developers free to
focus on application logic.

7. What is MapReduce?

Hadoop supports the MapReduce model, which was introduced by Google as a
method of solving a class of petascale problems with large clusters of inexpensive
machines. The model is based on two distinct steps for an application:
i. Map: An initial ingestion and transformation step, in which individual input
records can be processed in parallel.
ii. Reduce: An aggregation or summarization step, in which all associated records
must be processed together by a single entity.
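
The model itself (though not the Hadoop Java API) can be simulated in a few lines of Python. The sketch below runs a word count: the map step emits (key, value) pairs from each record, a grouping step stands in for the shuffle, and the reduce step aggregates all values sharing a key.

from collections import defaultdict

def map_phase(record):              # one input record -> (word, 1) pairs
    return [(word, 1) for word in record.split()]

def reduce_phase(key, values):      # all values for one key -> a summary
    return key, sum(values)

records = ["grid and cloud", "cloud computing", "grid computing"]
groups = defaultdict(list)
for record in records:
    for key, value in map_phase(record):   # "shuffle": group by key
        groups[key].append(value)

print([reduce_phase(k, v) for k, v in groups.items()])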

8. Define Hadoop Distributed File System (HDFS).

HDFS is a file system designed for MapReduce jobs that read
input in large chunks, process it, and write potentially large chunks of output.
HDFS does not handle random access particularly well.

9. What is input splitting?

For the Hadoop framework to be able to distribute pieces of the job to multiple
machines, it needs to fragment the input into individual pieces, which can in turn be
provided as input to the individual distributed tasks. Each fragment of input is called
an input split. The default rules for how input splits are constructed from the actual
input files are a combination of configuration parameters and the capabilities of the
class that actually reads the input records.
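
As a rough illustration only (Hadoop's actual rules also depend on the InputFormat and configuration parameters), input splits can be thought of as (offset, length) fragments derived from the file size and a block-size setting:

def compute_splits(file_size, block_size):
    splits = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        splits.append((offset, length))   # each (offset, length) is one split
        offset += length
    return splits

print(compute_splits(file_size=300, block_size=128))
# [(0, 128), (128, 128), (256, 44)]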

10. What are all the various input formats specified in Hadoop framework?

 KeyValueTextInputFormat: Key/value pairs, one per line.
 TextInputFormat: The key is the line number, and the value is the line.
 NLineInputFormat: Similar to KeyValueTextInputFormat, but the splits are
based on N lines of input rather than Y bytes of input.
 MultiFileInputFormat: An abstract class that lets the user implement an
input format that aggregates multiple files into one split.
 SequenceFIleInputFormat: The input file is a Hadoop sequence file,
containing serialized key/value pairs.

11. Mention the information that has to be supplied by the user while configuring
the reduce phase.
To configure the reduce phase, the user must supply the framework with five
pieces of information:
 The number of reduce tasks; if zero, no reduce phase is run
 The class supplying the reduce method
 The input key and value types for the reduce task; by default, the same
as the reduce output
 The output key and value types for the reduce task
 The output file type for the reduce task output

12. Name the processes that provide HDFS services.

 NameNode handles management of the file system metadata, and provides
management and control services.
 DataNode provides block storage and retrieval services.

13. What is ZooKeeper and Hive?

 ZooKeeper is a highly available and reliable coordination system.
Distributed applications use ZooKeeper to store and mediate updates for critical
shared state.
 Hive is a data warehouse infrastructure built on Hadoop Core that provides
data summarization, ad hoc querying, and analysis of datasets.

14. What is Pig and Sqoop?

Pig: Pig is a data flow language and execution environment for exploring very
large datasets. Pig runs on HDFS and MapReduce clusters.
Sqoop: A tool for efficiently moving data between relational databases and
HDFS.

15. Define YARN.

A new MapReduce runtime, called MapReduce 2, is implemented on a new
system called YARN (Yet Another Resource Negotiator), which is a general resource
management system for running distributed applications. MapReduce 2 replaces the
"classic" runtime in previous releases.

PART – B

1. CONFIGURING AND TESTING OF GLOBUS TOOLKIT

 Write in detail about the configuration and testing of Globus Toolkit GT4 in a
Grid environment.

After the installation of the Globus Toolkit, each element of your grid environment
must be configured.
a. Configuring environmental variables
Before starting the configuration process, it is useful to set up the
GLOBUS_LOCATION environmental variables in either /etc/profile or
(userhome)/.bash_profile. To save time upon subsequent logins from different user
IDs, we specified GLOBUS_LOCATION in /etc/profile.
Also, Globus Toolkit provides shell scripts to set up these environmental variables.
They can be sourced as follows:
source $GLOBUS_LOCATION/etc/globus-user-env.sh (sh)
source $GLOBUS_LOCATION/etc/globus-user-env.csh (csh)
The Globus Toolkit also provides shell scripts for developers to set up Java
CLASSPATH environmental variables. They can be sourced as follows:
source $GLOBUS_LOCATION/etc/globus-devel-env.sh (sh)
source $GLOBUS_LOCATION/etc/globus-devel-env.csh (csh)
The globus-user-env.sh and globus-devel-env.sh scripts are sourced in /etc/profile,
so that all users can use the grid environment.
Example of /etc/profile
export GLOBUS_LOCATION=/usr/local/globus-4.0.0
source $GLOBUS_LOCATION/etc/globus-user-env.sh
source $GLOBUS_LOCATION/etc/globus-devel-env.sh
b. Security set up
Installation of CA packages
To install CA packages:
i. Log in to the CA host as a Globus user.
ii. Invoke the setup-simple-ca script, and answer the prompts as appropriate
See the following Example. This script initializes the files that are necessary for
SimpleCA.
Example Setting up SimpleCA
[globus@ca]$ $GLOBUS_LOCATION/setup/globus/setup-simple-ca
WARNING: GPT_LOCATION not set, assuming:
GPT_LOCATION=/usr/local/globus-4.0.0
The CA certificate has an expiration date. Keep in mind that once the CA
certificate has expired, all the certificates signed by that CA become invalid. A CA
should regenerate the CA certificate and start re-issuing ca-setup packages before the
actual CA certificate expires. This can be done by re-running this setup script. Enter
the number of DAYS the CA certificate should last before it expires. [default: 5 years
(1825 days)]: (type the number of days)1825

Setting up security in each grid node: After performing the steps above, a package
file has been created that needs to be used on other nodes, as described in this section.
In order to use certificates from this CA in other grid nodes, you need to copy and
install the CA setup package to each grid node.
i. Log in to a grid node as a Globus user and obtain a CA setup package from the
CA host. Then run the setup commands for configuration (see the following
Example).
Example Set up CA in each grid node
[globus@hosta]$ scp globus@ca:/home/globus/.globus/simpleCA/globus_simple_ca_(ca_hash)_setup-0.18.tar.gz .
[globus@hosta]$ $GLOBUS_LOCATION/sbin/gpt-build globus_simple_ca_(ca_hash)_setup-0.18.tar.gz gcc32dbg
[globus@hosta]$ $GLOBUS_LOCATION/sbin/gpt-postinstall
ii. As the root user, submit the commands in the Example to configure the CA
settings in each grid node. This script creates the /etc/grid-security directory, which
contains the configuration files for security.

Example Configure CA in each grid node
[root@hosta]# $GLOBUS_LOCATION/setup/globus_simple_ca_[ca_hash]_setup/setup-gsi -default

Obtain and sign a host certificate: In order to use some of the services provided by
Globus Toolkit 4, such as GridFTP, you need to have a CA-signed host certificate and
host key in the appropriate directory.
 As root user, request a host certificate with the command in the above
Example.
 Copy or send the /etc/grid-security/hostcert_request.pem file to the CA host.
 In the CA host, as a Globus user, sign the host certificate by using the grid-ca-
sign command.
 Copy the hostcert.pem back to the /etc/grid-security/ directory in the grid node.
Obtain and sign a user certificate
In order to use the grid environment, a grid user needs to have a CA-signed user certificate and user key in the user's directory. A sketch of this workflow follows the list.
• As a user (auser1 on hosta), request a user certificate.
• Copy or send the (userhome)/.globus/usercert_request.pem file to the CA host.
• On the CA host, as a Globus user, sign the user certificate by using the grid-ca-sign command.
• Copy the created usercert.pem to the (userhome)/.globus/ directory on the grid node.
• Test the user certificate by typing grid-proxy-init -debug -verify as the user. With this command, you can see the location of the user certificate and key, the CA's certificate directory, the distinguished name of the user, and the expiration time. After you successfully execute grid-proxy-init, you have been authenticated and are ready to use the grid environment.
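A minimal sketch of the user-certificate workflow, under the same assumptions (the user name auser1 and the scp transport are illustrative):

[auser1@hosta]$ grid-cert-request    # writes (userhome)/.globus/usercert_request.pem
[auser1@hosta]$ scp ~/.globus/usercert_request.pem globus@ca:
[globus@ca]$ grid-ca-sign -in usercert_request.pem -out usercert.pem
[auser1@hosta]$ scp globus@ca:usercert.pem ~/.globus/usercert.pem
[auser1@hosta]$ grid-proxy-init -debug -verify    # authenticate and verify the new credential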
c. Configuration of Java WS Core: The Java WS Core container is installed as part of the default Globus Toolkit 4 installation. There are a few things you need to configure before you start Java WS Core.
Setting up the Java WS Core environment: The Java WS Core container uses a copy of the host certificate and the host key. You need to copy those files and change their owner before you start the Java WS Core container.
As a root user, copy hostcert.pem and hostkey.pem to containercert.pem and
containerkey.pem in /etc/grid-security/. Then change the owner of the new files to
Globus (see the following Example).
Example Copying host certificate and key to container certificate and key
[root@hosta]# cp hostcert.pem containercert.pem
[root@hosta]# cp hostkey.pem containerkey.pem
[root@hosta]# chown globus.globus containercert.pem containerkey.pem
Verifying the installation and configuration of Java WS Core: To verify that Java WS Core has been installed successfully and that grid security has been implemented correctly, complete the following procedure:
• As a Globus user, run the following command to start the container: globus-start-container. If you do not use a secured container, then type the following command instead: globus-start-container -nosec.
• When the process is complete, a message indicates that the container is open for grid services.
Troubleshooting: The following are a few common errors that may occur and what you might do to correct them. The following message appears during the globus-start-container command:
Failed to start container: Failed to initialize 'ManagedJobFactoryService' service
[Caused by: [SEC] Service credentials not configured and was not able to obtain container credentials;]
This may be due to not having properly created container certificates. This error also appears when you do not have a grid-mapfile. Make sure you follow the steps in "Security set up". The following message appears during the globus-start-container command:

145

Visit & Downloaded from : www.LearnEngineering.in


Visit & Downloaded from : www.LearnEngineering.in

Failed to start container: Container failed to initialize [Caused by: Address already
in use]
This is because you have another container or program running. You may need to stop
the container or program in order to make this command work.
The following message appears during the counter-create command:
Error: nested exception is:
GSSException: Defective credential detected [Caused by: Proxy file (/tmp/x509up_u511) not found.]
This is because you have tried to access a secured container without an activated
proxy certificate. You need to run the grid-proxy-init command in order to make this
command work.
Configuration and testing of GridFTP: You need to configure GridFTP before RFT, because GridFTP is required by RFT. GridFTP is already installed during the default installation process. You only need to configure GridFTP as a service daemon so that you can transfer data between two hosts with GridFTP.
Setting up the GridFTP environment
In order to set up GridFTP, follow the procedure below (a sketch of the corresponding entries follows the list):
1. Assign the service name gsiftp to TCP port 2811 in /etc/services.
2. Create the /etc/xinetd.d/gsiftp file with the service entry.
3. Restart the xinetd daemon.
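A minimal sketch of the two entries, assuming the GLOBUS_LOCATION used earlier; the exact xinetd attribute names and server path may differ slightly between releases:

# /etc/services
gsiftp  2811/tcp

# /etc/xinetd.d/gsiftp
service gsiftp
{
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    env         = GLOBUS_LOCATION=/usr/local/globus-4.0.0
    env        += LD_LIBRARY_PATH=/usr/local/globus-4.0.0/lib
    server      = /usr/local/globus-4.0.0/sbin/globus-gridftp-server
    server_args = -i
    disable     = no
}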
Verifying the installation and configuration of GridFTP
To verify that GridFTP has been installed successfully, complete the following procedure (a sketch of steps 2 through 4 follows the list):
1. Log in to your grid node as a user who has grid user certificates.
2. Type the grid-proxy-init command to authenticate and create the proxy certificate.
3. Type a GridFTP client command to make sure your GridFTP is configured properly.
4. Try a third-party transfer with the globus-url-copy command.
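A minimal sketch of steps 2 through 4; the host names hosta and hostb and the file paths are hypothetical:

[auser1@hosta]$ grid-proxy-init
[auser1@hosta]$ globus-url-copy gsiftp://hosta/tmp/file1 file:///tmp/file1.copy      # step 3: simple client transfer
[auser1@hosta]$ globus-url-copy gsiftp://hosta/tmp/file1 gsiftp://hostb/tmp/file1    # step 4: third-party transfer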
2. COMPONENTS OF GLOBUS TOOLKIT 4 (GT4)

• Discuss about various components of Globus Toolkit GT4.

Globus Toolkit 4 is a collection of open-source components. Many of these are based on existing standards, while others are based on (and in some cases driving) evolving standards. Version 4 of the toolkit is the first version to support Web service based implementations of many of its components. Globus Toolkit 4 provides components in the following five categories:
• Common runtime components
• Security
• Data management
• Information services
• Execution management
Common runtime components: Globus Toolkit 4 includes common runtime components. These consist of libraries and tools needed by both the Web service and non-Web service implementations, and used by most of the other components.
Java WS Core: Java WS Core consists of APIs and tools that implement the WSRF and WS-Notification standards in Java. These components act as the base components for the various default services that Globus Toolkit 4 supplies. Java WS Core also provides the base libraries and tools for developing custom WSRF-based services.
Figure 4.1 Relation between Java WS core and GT 4 supplied services
C WS Core: C WS Core consists of APIs and tools that implement the WSRF and WS-Notification standards in C.
Python WS Core: Python WS Core consists of APIs and tools that implement the WSRF and WS-Notification standards in Python. This component is also known as pyGridWare, contributed by Lawrence Berkeley National Laboratory.
Security components: Because security is one of the most important issues in grid environments, Globus Toolkit 4 includes various types of security components.
WS authentication and authorization: Globus Toolkit 4 enables message-level security and transport-level security for the SOAP communication of Web services. It also provides an Authorization Framework for container-level authorization.
Pre-WS authentication and authorization: Pre-WS authentication and authorization consists of APIs and tools for authentication, authorization, and certificate management.
Community Authorization Service (CAS): CAS provides access control to virtual organizations. The CAS server grants fine-grained permissions on subsets of
resources to members of the community. CAS authorization is currently not available for Web services, but it is supported by the GridFTP server.
Delegation service: The Delegation service enables delegation of credentials between various services on one host. It allows a single delegated credential to be used by many services. The service also provides a credential renewal interface and can extend the validity period of credentials.
SimpleCA: SimpleCA is a simplified Certificate Authority. This package has fully functioning CA features for a PKI environment.
MyProxy: MyProxy is responsible for storing X.509 proxy credentials, protecting them with a pass phrase, and providing an interface for retrieving the proxy credential. MyProxy acts as a repository of credentials and is often used by Web portal applications.
GSI-OpenSSH: GSI-OpenSSH is a modified version of the OpenSSH client and server that adds support for GSI authentication. GSI-OpenSSH can be used to create a shell on a remote system to run shell scripts or to interactively issue shell commands, and it also permits the transfer of files between systems without prompting for a password and user ID. A valid proxy must nevertheless be created by using the grid-proxy-init command.
Data management components
GridFTP: The GridFTP facility provides secure and reliable data transfer
between grid hosts. Its protocol extends the well-known FTP standard to provide
additional features, including support for authentication through GSI. One of the
major features of GridFTP is that it enables third-party transfer. Third-party transfer is
suitable for an environment where there is a large file in remote storage and the client
wants to copy it to another remote server, as illustrated in Figure given below:
Figure 4.2 - GridFTP third-party transfer
Reliable File Transfer (RFT): Reliable File Transfer provides a Web service
interface for transfer and deletion of files. RFT receives requests via SOAP messages
over HTTP and utilizes GridFTP. RFT also uses a database to store the list of file
transfers and their states, and is capable of recovering a transfer request that was
interrupted.
Figure 4.3 – How RFT and GridFTP work
Replica Location Service (RLS) The Replica Location Service maintains and
provides access to information about the physical locations of replicated data. This
component can map multiple physical replicas to one single logical file, and enables
data redundancy in a grid environment.
OGSA-DAI: OGSA-DAI enables a general grid interface for accessing grid data sources, such as relational database management systems and XML repositories, through query languages like SQL, XPath, and XQuery. Currently, OGSA-DAI is a technical preview component; that is, the implementation is functional, but not necessarily complete, and its implementation and interfaces may change in the future.
Data Replication Service (DRS): The Data Replication Service provides a system for making replicas of files in the grid environment and registering them with RLS. DRS uses RFT and GridFTP to transfer the files, and it uses RLS to locate and register the replicas. Currently, DRS is a technical preview component.
Monitoring and Discovery Services: The Monitoring and Discovery Services (MDS) are mainly concerned with collecting, distributing, indexing, archiving, and otherwise processing information about the state of various resources, services, and system configurations. The information collected is used either to discover new services or resources, or to enable monitoring of system status. GT4 provides a WSRF- and WS-Notification-compliant version of MDS, also known as MDS4. The resource properties provided by a WSRF-compliant resource can be registered with MDS4 services for information collection purposes. GT4 WSRF-compliant services such as GRAM and RFT provide such properties; upon GT4 container startup, these services are registered with the MDS4 services. MDS4 consists of two higher-level services, an Index service and a Trigger service, which are based on the Aggregator Framework.
Index service: The Index service is the central component of the GT4 MDS implementation. Every instance of a GT4 container has a default Index service (DefaultIndexService) exposed as a WSRF service. The Index service interacts with data sources via standard WSRF resource property and subscription/notification interfaces (WS-ResourceProperties and WS-BaseNotification). A WSRF-based service can make information available as resource properties. An Index service can potentially collect information from many sources and publish it in one place. The various WSRF registrations with the Index service are maintained as Service Group Entries by the Index service. The contents of the Index service can be queried via XPath queries.
Trigger service: The MDS Trigger service collects information and compares that data against a set of conditions defined in a configuration file. When a condition is met, an action is executed. The condition is specified as an XPath expression that, for example, may compare the value of a property to a threshold and send an alert e-mail to an administrator by executing a script. The name and location of the script can be configured with the MDS Trigger service.
Aggregator Framework: The MDS Index service and the MDS Trigger service are specializations of a general Aggregator Framework. The Aggregator Framework is a software framework for building services that collect and aggregate data; such services are also known as aggregator services. An aggregator service collects information from one of three types of aggregator sources: a query source that uses WS-ResourceProperty mechanisms to collect data, a subscription source that uses a WS-Notification subscription/notification mechanism to collect data, or an execution source that executes an administrator-provided application to collect information in XML format. An aggregator source retrieves information from an external component called an information provider. In the case of a query or subscription source, the information provider is a WSRF-compliant service. For an execution source, the information provider is an executable program that obtains data via some application-specific mechanism.
Figure 4.4 - MDS4 Aggregator Framework
WebMDS: WebMDS is a Web-based interface to WSRF resource property information that can be used as a user-friendly front end to the Index service. WebMDS uses standard resource property requests to query resource property data and transforms the data for user-friendly display. Web site administrators can customize their own WebMDS deployments by using HTML form options and creating their own XSLT transformations.
Execution management: Globus Toolkit 4 provides various tools that enable execution management in a grid environment.
WS GRAM: WS GRAM is the grid service that provides remote execution and status management of jobs. When a job is submitted by a client, the request is sent to the remote host as a SOAP message and handled by the WS GRAM service located on the remote host. The WS GRAM service is capable of submitting those requests to local job schedulers such as Platform LSF or Altair PBS. The WS GRAM service returns status information about the job using WS-Notification. The WS GRAM service can collaborate with the RFT service for staging files required by jobs. In order to enable staging with RFT, valid credentials must be delegated to the RFT service by the Delegation service.
Figure 4.5 – Execution of staging job

Community Scheduler Framework 4 (CSF4): The Community Scheduler Framework 4 (CSF4) provides an intelligent, policy-based meta-scheduling facility for building grids in which multiple types of job schedulers are involved. It enables a single interface to different resource managers, such as Platform LSF and Altair PBS. Currently, CSF4 is a technical preview component.
Globus Teleoperations Control Protocol (GTCP): Globus Teleoperations Control Protocol is the WSRF version of the NEESgrid Teleoperations Control Protocol (NTCP). Currently, GTCP is a technical preview component.
Workspace Management Service (WMS): The Workspace Management Service enables a grid client to dynamically create, manage, and delete user accounts on a remote site. Currently, WMS is a technical preview component, and it only supports management of UNIX accounts.
3. MAP AND REDUCE FUNCTION IN HADOOP FRAMEWORK USING JAVA PROGRAM

• How will you define the Map and Reduce function in the Hadoop framework using a Java program?

The whole data flow is illustrated in the following Figure 4.6. At the bottom of the diagram is a Unix pipeline, which mimics the whole MapReduce flow.

Figure 4.6 Data flow
Java MapReduce: Having run through how the MapReduce program works, the next step is to express it in code. We need three things: a map function, a reduce function, and some code to run the job. The map function is represented by the Mapper class,
which declares an abstract map() method. The following example shows the
implementation of our map method.
Mapper for maximum temperature example
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}
The Mapper class is a generic type, with four formal type parameters that specify the input key, input value, output key, and output value types of the map function. For the present example, the input key is a long integer offset, the input value is a line of text, the output key is a year, and the output value is an air temperature (an integer). Rather than use built-in Java types, Hadoop provides its own set of basic types that are optimized for network serialization. These are found in the org.apache.hadoop.io package. Here we use LongWritable, which corresponds to a Java Long, Text (like Java String), and IntWritable (like Java Integer). The map() method is passed a key and a value. We convert the Text value containing the line of input into a Java String, then use its substring() method to extract the columns we are interested in. The map() method also provides an instance of Context to write the output to. In this case, we write the year as a Text object (since we are just using it as a key), and the temperature is wrapped in an IntWritable. We write an output record only if the temperature is present and the quality code indicates the temperature reading is OK. The reduce function is similarly defined using a Reducer, as illustrated in the following example.
Reducer for maximum temperature example
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}
Again, four formal type parameters are used to specify the input and output types, this
time for the reduce function. The input types of the reduce function must match the
output types of the map function: Text and IntWritable. And in this case, the output
types of the reduce function are Text and IntWritable, for a year and its maximum
temperature, which we find by iterating through the temperatures and comparing each
with a record of the highest found so far. The third piece of code runs the MapReduce
job (see the following Example).
Application to find the maximum temperature in the weather dataset
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperature <input path> <output path>");
            System.exit(-1);
        }
        Job job = new Job();
        job.setJarByClass(MaxTemperature.class);
        job.setJobName("Max temperature");
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MaxTemperatureMapper.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
A Job object forms the specification of the job. It gives you control over how the job
is run. When we run this job on a Hadoop cluster, we will package the code into a
JAR file (which Hadoop will distribute around the cluster). Rather than explicitly
specify the name of the JAR file, we can pass a class in the Job’s setJarByClass()
method, which Hadoop will use to locate the relevant JAR file by looking for the JAR
file containing this class.
Having constructed a Job object, we specify the input and output paths. An input path
is specified by calling the static addInputPath() method on FileInputFormat, and it can
be a single file, a directory (in which case, the input forms all the files in that
directory), or a file pattern. As the name suggests, addInputPath() can be called more
than once to use input from multiple paths.
The output path (of which there is only one) is specified by the static setOutputPath() method on FileOutputFormat. It specifies a directory where the output files from the
reducer functions are written. The directory shouldn’t exist before running the job, as
Hadoop will complain and not run the job. This precaution is to prevent data loss. The
input types are controlled via the input format, which we have not explicitly set since
we are using the default TextInputFormat.
After setting the classes that define the map and reduce functions, we are ready to run
the job. The waitForCompletion() method on Job submits the job and waits for it to
finish. The method’s boolean argument is a verbose flag, so in this case the job writes
information about its progress to the console. The return value of the
waitForCompletion() method is a boolean indicating success (true) or failure (false),
which we translate into the program’s exit code of 0 or 1.
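Putting the three classes together, a typical way to run the job is to package them into a JAR and submit it with the hadoop command; the JAR name and the input and output paths below are hypothetical:

% hadoop jar max-temperature.jar MaxTemperature input/ncdc/sample.txt output
% hadoop fs -cat output/part-r-00000    # each line holds a year and its maximum temperature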
4. HDFS CONCEPTS

• Write about the HDFS concepts in detail.
Blocks: A disk has a block size, which is the minimum amount of data that it can read
or write. File systems for a single disk build on this by dealing with data in blocks,
which are an integral multiple of the disk block size. File system blocks are typically a
few kilobytes in size, while disk blocks are normally 512 bytes. This is generally
transparent to the file system user who is simply reading or writing a file—of
whatever length. However, there are tools to perform file system maintenance, such as
df and fsck, that operate on the file system block level.
HDFS, too, has the concept of a block, but it is a much larger unit—64 MB by
default. Like in a file system for a single disk, files in HDFS are broken into block-
sized chunks, which are stored as independent units. Unlike a file system for a single
disk, a file in HDFS that is smaller than a single block does not occupy a full block's worth of underlying storage. When unqualified, the term "block" here refers to a block in HDFS.
Furthermore, blocks fit well with replication for providing fault tolerance and
availability. To insure against corrupted blocks and disk and machine failure, each
block is replicated to a small number of physically separate machines (typically three).
If a block becomes unavailable, a copy can be read from another location in a way that
is transparent to the client. A block that is no longer available due to corruption or
machine failure can be replicated from its alternative locations to other live machines
to bring the replication factor back to the normal level.
Similarly, some applications may choose to set a high replication factor for the blocks
in a popular file to spread the read load on the cluster. Like its disk filesystem cousin,
HDFS’s fsck command understands blocks. For example, running:
% hadoop fsck / -files -blocks
will list the blocks that make up each file in the file system.
Namenodes and Datanodes
An HDFS cluster has two types of node operating in a master-worker pattern: a namenode (the master) and a number of datanodes (workers).
The namenode manages the file system namespace. It maintains the file system
tree and the metadata for all the files and directories in the tree. This information is
stored persistently on the local disk in the form of two files: the namespace image and
the edit log.
The namenode also knows the datanodes on which all the blocks for a given file are located; however, it does not store block locations persistently, since this information is reconstructed from datanodes when the system starts.
A client accesses the file system on behalf of the user by communicating with
the namenode and datanodes. The client presents a POSIX-like file system interface,
so the user code does not need to know about the namenode and datanode to function.
Datanodes are the workhorses of the file system. They store and retrieve blocks
when they are told to (by clients or the namenode), and they report back to the
namenode periodically with lists of blocks that they are storing.
It is also possible to run a secondary namenode, which despite its name does
not act as a namenode. Its main role is to periodically merge the namespace image
with the edit log to prevent the edit log from becoming too large. The secondary
namenode usually runs on a separate physical machine, since it requires plenty of
CPU and as much memory as the namenode to perform the merge. It keeps a copy of
the merged namespace image, which can be used in the event of the namenode failing.
However, the state of the secondary namenode lags that of the primary, so in the event
of total failure of the primary, data loss is almost certain.
HDFS Federation: The namenode keeps a reference to every file and block in the file
system in memory, which means that on very large clusters with many files, memory
becomes the limiting factor for scaling. HDFS Federation, introduced in the 0.23
release series, allows a cluster to scale by adding namenodes, each of which manages
a portion of the filesystem namespace. For example, one namenode might manage all
the files rooted under /user, say, and a second namenode might handle files under
/share. To access a federated HDFS cluster, clients use client-side mount tables to
map file paths to namenodes. This is managed in configuration using the ViewFileSystem and viewfs:// URIs.
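As a hedged illustration of a client-side mount table, the core-site.xml of a client of a federated cluster might contain entries along the following lines; the property names follow the Hadoop viewfs convention, and the namenode host names are hypothetical:

<property>
  <name>fs.defaultFS</name>
  <value>viewfs:///</value>
</property>
<property>
  <name>fs.viewfs.mounttable.default.link./user</name>
  <value>hdfs://nn1.example.com/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.default.link./share</name>
  <value>hdfs://nn2.example.com/share</value>
</property>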
HDFS High-Availability: The combination of replicating namenode metadata on multiple file systems and using the secondary namenode to create checkpoints protects against data loss, but it does not provide high availability of the file system.
The namenode is still a single point of failure (SPOF), since if it did fail, all clients—
including MapReduce jobs—would be unable to read, write, or list files, because the
namenode is the sole repository of the metadata and the file-to-block mapping. In such
an event the whole Hadoop system would effectively be out of service until a new
namenode could be brought online. The 0.23 release series of Hadoop remedies this
situation by adding support for HDFS high-availability (HA). In this implementation
there is a pair of namenodes in an active standby configuration. In the event of the
failure of the active namenode, the standby takes over its duties to continue servicing
client requests without a significant interruption. A few architectural changes are
needed to allow this to happen:
• The namenodes must use highly available shared storage to share the edit log. (In the initial implementation of HA this will require an NFS filer, but in future releases more options will be provided, such as a BookKeeper-based system built on ZooKeeper.) When a standby namenode comes up, it reads up to the end of the shared edit log to synchronize its state with the active namenode, and then continues to read new entries as they are written by the active namenode.
• Datanodes must send block reports to both namenodes, since the block mappings are stored in a namenode's memory, and not on disk.
• Clients must be configured to handle namenode failover, using a mechanism that is transparent to users.
Failover and fencing: The transition from the active namenode to the standby is managed by a new entity in the system called the failover controller. Failover controllers are pluggable, but the first implementation uses ZooKeeper to ensure that only one namenode is active. Each namenode runs a lightweight failover controller process whose job it is to monitor its namenode for failures and trigger a failover should a namenode fail. Failover may also be initiated
manually by an administrator, for example in the case of routine maintenance. This is known as a graceful failover, since the failover controller arranges an orderly transition for both namenodes to switch roles. In the case of an ungraceful failover, however, it is impossible to be sure that the failed namenode has stopped running. For example, a slow network or a network partition can trigger a failover transition, even though the previously active namenode is still running and thinks it is still the active namenode. The HA implementation goes to great lengths to ensure that the previously active namenode is prevented from doing any damage and causing corruption, a method known as fencing. The system employs a range of fencing mechanisms, including killing the namenode's process, revoking its access to the shared storage directory (typically by using a vendor-specific NFS command), and disabling its network port via a remote management command.
5. HADOOP FILE SYSTEM AND COMMAND LINE INTERFACE

• Explain in detail about Hadoop File system and Command line Interface.
Hadoop has an abstract notion of file system, of which HDFS is just one implementation. The Java abstract class org.apache.hadoop.fs.FileSystem represents a file system in Hadoop, and there are several concrete implementations (the local file system, HDFS, FTP, and S3, among others). Hadoop provides many interfaces to its file systems, and it generally uses the URI scheme to pick the correct file system instance to communicate with. For example, the file system shell described later in this section operates with all Hadoop file systems.
To list the files in the root directory of the local file system, type:
% hadoop fs -ls file:///
Hadoop is written in Java, and all Hadoop file system interactions are mediated
through the Java API. The file system shell, for example, is a Java application that
uses the Java FileSystem class to provide file system operations. The other filesystem
interfaces are discussed briefly in this section. These interfaces are most commonly
used with HDFS, since the other file systems in Hadoop typically have existing tools
to access the underlying file system (FTP clients for FTP, S3 tools for S3, etc.), but
many of them will work with any Hadoop file system.
HTTP
There are two ways of accessing HDFS over HTTP: directly, where the HDFS
daemons serve HTTP requests to clients; and via a proxy (or proxies), which accesses
HDFS on the client’s behalf using the usual DistributedFileSystem API. The original
HDFS proxy (in src/contrib/hdfsproxy) was read-only, and could be accessed by
clients using the HSFTP FileSystem implementation (hsftp URIs).
The two ways are illustrated in the following Figure 4.7.
Figure 4.7 - Accessing HDFS over HTTP directly, and via a bank of HDFS
proxies
From release 0.23, there is a new proxy called HttpFS that has read and write
capabilities, and which exposes the same HTTP interface as WebHDFS, so clients can
access either using webhdfs URIs.
The HTTP REST API that WebHDFS exposes is formally defined in a specification,
so it is likely that over time clients in languages other than Java will be written that
use it directly.
C
Hadoop provides a C library called libhdfs that mirrors the Java FileSystem interface
(it was written as a C library for accessing HDFS, but despite its name it can be used
to access any Hadoop filesystem). It works using the Java Native Interface (JNI) to
call a Java filesystem client.
FUSE
Filesystem in Userspace (FUSE) allows filesystems that are implemented in user
space to be integrated as a Unix filesystem. Hadoop’s Fuse-DFS contrib module
allows any Hadoop filesystem (but typically HDFS) to be mounted as a standard
filesystem. You can then use Unix utilities (such as ls and cat) to interact with the
filesystem, as well as POSIX libraries to access the filesystem from any programming
language.
(II) THE COMMAND-LINE INTERFACE

This section looks at HDFS by interacting with it from the command line. There are many other interfaces to HDFS, but the command line is one of the simplest and, to many developers, the most familiar.

Basic File system Operations
The file system is ready to be used, and we can do all of the usual filesystem
operations such as reading files, creating directories, moving files, deleting data, and
listing directories. You can type hadoop fs -help to get detailed help on every
command. Start by copying a file from the local filesystem to HDFS:
% hadoop fs -copyFromLocal input/docs/quangle.txt hdfs://localhost/user/tom/quangle.txt
This command invokes Hadoop’s filesystem shell command fs, which supports a
number of subcommands—in this case, we are running -copyFromLocal. The local
file quangle.txt is copied to the file /user/tom/quangle.txt on the HDFS instance
running on localhost. In fact, we could have omitted the scheme and host of the URI
and picked up the default, hdfs://localhost, as specified in core-site.xml:
% hadoop fs -copyFromLocal input/docs/quangle.txt /user/tom/quangle.txt
We could also have used a relative path and copied the file to our home directory in
HDFS, which in this case is /user/tom:
% hadoop fs -copyFromLocal input/docs/quangle.txt quangle.txt
Let’s copy the file back to the local filesystem and check whether it’s the same:
% hadoop fs -copyToLocal quangle.txt quangle.copy.txt
% md5 input/docs/quangle.txt quangle.copy.txt
MD5 (input/docs/quangle.txt) = a16f231da6b05e2ba7a339320e7dacd9
MD5 (quangle.copy.txt) = a16f231da6b05e2ba7a339320e7dacd9
The MD5 digests are the same, showing that the file survived its trip to HDFS and is
back intact. Finally, let’s look at an HDFS file listing. We create a directory first just
to see how it is displayed in the listing:
% hadoop fs -mkdir books
% hadoop fs -ls .
Found 2 items
drwxr-xr-x - tom supergroup 0 2009-04-02 22:41 /user/tom/books
-rw-r--r-- 1 tom supergroup 118 2009-04-02 22:29 /user/tom/quangle.txt
The information returned is very similar to the Unix command ls -l, with a few minor
differences. The first column shows the file mode. The second column is the
replication factor of the file (something a traditional Unix filesystem does not have).
Remember we set the default replication factor in the site-wide configuration to be 1,
which is why we see the same value here. The entry in this column is empty for
directories since the concept of replication does not apply to them—directories are
treated as metadata and stored by the namenode, not the datanodes. The third and
fourth columns show the file owner and group. The fifth column is the size of the file
in bytes, or zero for directories. The sixth and seventh columns are the last modified
date and time. Finally, the eighth column is the absolute name of the file or directory.
6. HDFS JAVA INTERFACE

• Discuss about the HDFS Java Interface in detail.

THE JAVA INTERFACE

This section explores Hadoop's FileSystem class: the API for interacting with one of Hadoop's file systems. While we focus mainly on the HDFS implementation, DistributedFileSystem, in general you should strive to write your code against the FileSystem abstract class, to retain portability across file systems. This is very useful when testing your program, for example, since you can rapidly run tests using data stored on the local file system.
Reading Data from a Hadoop URL
One of the simplest ways to read a file from a Hadoop file system is by using a
java.net.URL object to open a stream to read the data from. The general idiom is:
InputStream in = null;
try {
    in = new URL("hdfs://host/path").openStream();
    // process in
} finally {
    IOUtils.closeStream(in);
}
There’s a little bit more work required to make Java recognize Hadoop’s hdfs URL
scheme. This is achieved by calling the setURLStreamHandlerFactory method on
URL with an instance of FsUrlStreamHandlerFactory. This method can only be called
once per JVM, so it is typically executed in a static block. This limitation means that if
some other part of your program—perhaps a third-party component outside your
control— sets a URLStreamHandlerFactory, you won’t be able to use this approach
for reading data from Hadoop. The next section discusses an alternative. The
following example shows a program for displaying files from Hadoop file systems on
standard output, like the Unix cat command.
// Imports needed by this example (not shown in the original listing):
import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat {

    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
Here's a sample run:
% hadoop URLCat hdfs://localhost/user/tom/quangle.txt
On the top of the Crumpetty Tree
The Quangle Wangle sat,
But his face you could not see,
On account of his Beaver Hat.

Reading Data Using the FileSystem API
A file in a Hadoop filesystem is represented by a Hadoop Path object (and not a java.io.File object, since its semantics are too closely tied to the local filesystem). You can think of a Path as a Hadoop filesystem URI, such as hdfs://localhost/user/tom/quangle.txt. FileSystem is a general filesystem API, so the first step is to retrieve an instance for the filesystem we want to use, HDFS in this case. There are several static factory methods for getting a FileSystem instance:
public static FileSystem get(Configuration conf) throws IOException
public static FileSystem get(URI uri, Configuration conf) throws IOException
public static FileSystem get(URI uri, Configuration conf, String user) throws IOException
A Configuration object encapsulates a client or server’s configuration, which is set
using configuration files read from the classpath, such as conf/core-site.xml. The first
method returns the default filesystem (as specified in the file conf/core-site.xml, or the
default local filesystem if not specified there). The second uses the given URI’s
scheme and authority to determine the filesystem to use, falling back to the default
filesystem if no scheme is specified in the given URI. The third retrieves the
filesystem as the given user. In some cases, you may want to retrieve a local filesystem instance, in which case you can use the convenience method getLocal():
public static LocalFileSystem getLocal(Configuration conf) throws IOException
With a FileSystem instance in hand, we invoke an open() method to get the input stream for a file:
public FSDataInputStream open(Path f) throws IOException
public abstract FSDataInputStream open(Path f, int bufferSize) throws IOException
Displaying files from a Hadoop filesystem on standard output by using the FileSystem directly:

// Imports needed by this example (not shown in the original listing):
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileSystemCat {

    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
The program runs as follows:
% hadoop FileSystemCat hdfs://localhost/user/tom/quangle.txt
On the top of the Crumpetty Tree
The Quangle Wangle sat,
But his face you could not see,
On account of his Beaver Hat.
FSDataInputStream
The open() method on FileSystem actually returns an FSDataInputStream rather than a standard java.io class. This class is a specialization of java.io.DataInputStream with support for random access, so you can read from any part of the stream:
package org.apache.hadoop.fs;
public class FSDataInputStream extends DataInputStream
        implements Seekable, PositionedReadable {
    // implementation elided
}
The Seekable interface permits seeking to a position in the file and provides a method to query the current offset from the start of the file (getPos()):
public interface Seekable {
    void seek(long pos) throws IOException;
    long getPos() throws IOException;
}
Calling seek() with a position that is greater than the length of the file will result in an IOException. Unlike the skip() method of java.io.InputStream, which positions the stream at a point later than the current position, seek() can move to an arbitrary, absolute position in the file. A short sketch using seek() follows.
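As a brief illustration of seek(), the following sketch (class name ours; imports as in the earlier examples, plus org.apache.hadoop.fs.FSDataInputStream) prints a file twice by rewinding the stream to the start after the first copy:

public class FileSystemDoubleCat {

    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        FSDataInputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
            in.seek(0); // go back to the start of the file
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}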
Writing Data
The FileSystem class has a number of methods for creating a file. The simplest is the
method that takes a Path object for the file to be created and returns an output stream
to write to:
public FSDataOutputStream create(Path f) throws IOException
The following example shows how to copy a local file to a Hadoop filesystem. We
illustrate progress by printing a period every time the progress() method is called by
Hadoop, which is after each 64 K packet of data is written to the datanode pipeline.
// Imports needed by this example (not shown in the original listing):
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithProgress {

    public static void main(String[] args) throws Exception {
        String localSrc = args[0];
        String dst = args[1];
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print("."); // one dot per 64 K packet written to the datanode pipeline
            }
        });
        IOUtils.copyBytes(in, out, 4096, true);
    }
}
Typical usage:
% hadoop FileCopyWithProgress input/docs/1400-8.txt hdfs://localhost/user/tom/1400-8.txt
Currently, none of the other Hadoop filesystems call progress() during writes.
FSDataOutputStream
The create() method on FileSystem returns an FSDataOutputStream, which, like FSDataInputStream, has a method for querying the current position in the file:
public class FSDataOutputStream extends DataOutputStream implements Syncable {
    public long getPos() throws IOException {
        // implementation elided
    }
    // implementation elided
}
Directories
FileSystem provides a method to create a directory:
public boolean mkdirs(Path f) throws IOException
This method creates all of the necessary parent directories if they don’t already exist,
just like the java.io.File’s mkdirs() method. It returns true if the directory (and all
parent directories) was (were) successfully created.
Often, you don’t need to explicitly create a directory, since writing a file, by calling
create(), will automatically create any parent directories.
Querying the Filesystem
File metadata: FileStatus
An important feature of any filesystem is the ability to navigate its directory structure
and retrieve information about the files and directories that it stores. The FileStatus
class encapsulates filesystem metadata for files and directories, including file length,
block size, replication, modification time, ownership, and permission information.
The method getFileStatus() on FileSystem provides a way of getting a FileStatus
object for a single file or directory.
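A short hedged sketch of reading a few FileStatus fields; the path is hypothetical and fs is a FileSystem instance obtained as in the earlier examples:

FileStatus stat = fs.getFileStatus(new Path("/user/tom/quangle.txt")); // hypothetical path
System.out.println(stat.getPath());            // fully qualified path
System.out.println(stat.getLen());             // file length in bytes
System.out.println(stat.getReplication());     // replication factor
System.out.println(stat.getBlockSize());       // block size in bytes
System.out.println(stat.getOwner() + " " + stat.getGroup());
System.out.println(stat.getPermission());      // e.g. rw-r--r--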
Listing files
Finding information on a single file or directory is useful, but you also often need to
be able to list the contents of a directory. That’s what FileSystem’s listStatus()
methods are for:
public FileStatus[] listStatus(Path f) throws IOException
public FileStatus[] listStatus(Path f, PathFilter filter) throws IOException
public FileStatus[] listStatus(Path[] files) throws IOException
public FileStatus[] listStatus(Path[] files, PathFilter filter) throws IOException

When the argument is a file, the simplest variant returns an array of FileStatus objects of length 1. When the argument is a directory, it returns zero or more FileStatus objects representing the files and directories contained in the directory. A short usage sketch follows.
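A hedged usage sketch: list a directory and convert the returned statuses to paths with the org.apache.hadoop.fs.FileUtil helper (the directory name is hypothetical):

FileStatus[] status = fs.listStatus(new Path("/user/tom")); // hypothetical directory
Path[] listedPaths = FileUtil.stat2Paths(status);
for (Path p : listedPaths) {
    System.out.println(p);
}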
File patterns
It is a common requirement to process sets of files in a single operation. For example, a MapReduce job for log processing might analyze a month's worth of files contained in a number of directories. Rather than having to enumerate each file and directory to specify the input, it is convenient to use wildcard characters to match multiple files with a single expression, an operation known as globbing. Hadoop provides two FileSystem methods for processing globs:
public FileStatus[] globStatus(Path pathPattern) throws IOException
public FileStatus[] globStatus(Path pathPattern, PathFilter filter) throws IOException
The globStatus() methods return an array of FileStatus objects whose paths match the supplied pattern, sorted by path. An optional PathFilter can be specified to restrict the matches further.
PathFilter
Glob patterns are not always powerful enough to describe a set of files you want to access. For example, it is not generally possible to exclude a particular file using a glob pattern. The listStatus() and globStatus() methods of FileSystem take an optional PathFilter, which allows programmatic control over matching:
package org.apache.hadoop.fs;
public interface PathFilter {
    boolean accept(Path path);
}
PathFilter is the equivalent of java.io.FileFilter for Path objects rather than File objects. A hedged example follows.
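As an illustration, a PathFilter that excludes paths matching a regular expression might look like the following sketch (class name ours):

public class RegexExcludePathFilter implements PathFilter {

    private final String regex;

    public RegexExcludePathFilter(String regex) {
        this.regex = regex;
    }

    public boolean accept(Path path) {
        return !path.toString().matches(regex); // keep only paths that do NOT match
    }
}

It could be combined with a glob, for example fs.globStatus(new Path("/2007/*/*"), new RegexExcludePathFilter("^.*/2007/12/31$")), to exclude a single day from a month's worth of input.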
Deleting Data
Use the delete() method on FileSystem to permanently remove files or directories:
public boolean delete(Path f, boolean recursive) throws IOException
If f is a file or an empty directory, the value of recursive is ignored. A nonempty directory is only deleted, along with its contents, if recursive is true (otherwise an IOException is thrown).
7. DATA FLOW METHODS IN HDFS

• Describe the Data flow methods in HDFS.

ANATOMY OF A FILE READ
To get an idea of how data flows between the client interacting with HDFS, the
namenode and the datanodes, consider the following Figure, which shows the main
sequence of events when reading a file.
Figure 4.8 – File Read
The client opens the file it wishes to read by calling open() on the File System object,
which for HDFS is an instance of DistributedFileSystem (step 1 in the Figure 4.8 ).
DistributedFileSystem calls the namenode, using RPC, to determine the locations of
the blocks for the first few blocks in the file (step 2). For each block, the namenode
returns the addresses of the datanodes that have a copy of that block. Furthermore, the
datanodes are sorted according to their proximity to the client. If the client is itself a datanode (in the case of a MapReduce task, for instance), then it will read from the local datanode if it hosts a copy of the block.
The DistributedFileSystem returns an FSDataInputStream (an input stream that
supports file seeks) to the client for it to read data from. FSDataInputStream in turn
wraps a DFSInputStream, which manages the datanodes and namenode I/O.
ANATOMY OF A FILE WRITE
The case we’re going to consider is the case of creating a new file, writing data to it,
then closing the file. See the following Figure. The client creates the file by calling
create() on DistributedFileSystem (step 1 in the following Figure).

Figure 4.9 – File Write
DistributedFileSystem makes an RPC call to the namenode to create a new file in the
file system’s namespace, with no blocks associated with it (step 2). The namenode
performs various checks to make sure the file doesn’t already exist, and that the client
has the right permissions to create the file. If these checks pass, the namenode makes a
record of the new file; otherwise, file creation fails and the client is thrown an
IOException. The DistributedFileSystem returns an FSDataOutputStream for the
client to start writing data to. Just as in the read case, FSDataOutputStream wraps a DFSOutputStream, which handles communication with the datanodes and namenode. DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, called the ack queue. A packet is removed from the ack queue only when it has been acknowledged by all the datanodes in the pipeline (step 5).
Coherency Model
A coherency model for a file system describes the data visibility of reads and writes for a file. HDFS trades off some POSIX requirements for performance, so some operations may behave differently than you expect them to.
After creating a file, it is visible in the file system namespace, as expected:
Path p = new Path("p");
fs.create(p);
assertThat(fs.exists(p), is(true));
However, any content written to the file is not guaranteed to be visible, even if the
stream is flushed. So the file appears to have a length of zero:
Path p = new Path("p");
OutputStream out = fs.create(p);
out.write("content".getBytes("UTF-8"));
out.flush();
assertThat(fs.getFileStatus(p).getLen(), is(0L));
Once more than a block’s worth of data has been written, the first block will be visible
to new readers. This is true of subsequent blocks, too: it is always the current block
being written that is not visible to other readers.
HDFS provides a method for forcing all buffers to be synchronized to the datanodes
via the sync() method on FSDataOutputStream. After a successful return from sync(),
HDFS guarantees that the data written up to that point in the file is persisted and
visible to all new readers:
Path p = new Path("p");
FSDataOutputStream out = fs.create(p);
out.write("content".getBytes("UTF-8"));
out.flush();
out.sync();
assertThat(fs.getFileStatus(p).getLen(), is(((long) "content".length())));
This behavior is similar to the fsync system call in POSIX that commits buffered data
for a file descriptor. For example, using the standard Java API to write a local file, we
are guaranteed to see the content after flushing the stream and synchronizing:
FileOutputStream out = new FileOutputStream(localFile);
out.write("content".getBytes("UTF-8"));
out.flush(); // flush to operating system
out.getFD().sync(); // sync to disk
assertThat(localFile.length(), is(((long) "content".length())));
Closing a file in HDFS performs an implicit sync(), too:
Path p = new Path("p");
OutputStream out = fs.create(p);
out.write("content".getBytes("UTF-8"));
out.close();
assertThat(fs.getFileStatus(p).getLen(), is(((long) "content".length())));
Consequences for application design
This coherency model has implications for the way you design applications. With no
calls to sync(), you should be prepared to lose up to a block of data in the event of
client or system failure. For many applications, this is unacceptable, so you should call
sync() at suitable points, such as after writing a certain number of records or number
of bytes. Though the sync() operation is designed to not unduly tax HDFS, it does
have some overhead, so there is a trade-off between data robustness and throughput.
What is an acceptable trade-off is application-dependent, and suitable values can be
selected after measuring your application’s performance with different sync()
frequencies.
UNIT – V
SECURITY

Trust models for Grid security environment – Authentication and Authorization methods – Grid security infrastructure – Cloud Infrastructure security: network, host and application level – aspects of data security, provider data and its security, Identity and access management architecture, IAM practices in the cloud, SaaS, PaaS, IaaS availability in the cloud, Key privacy issues in the cloud.

Part – A
1. List out the security issues that occur in grid environment.

Many potential security issues may occur in a grid environment if qualified security mechanisms are not in place. These issues include network sniffers, out-of-control access, faulty operation, malicious operation, integration of local security mechanisms, delegation, dynamic resources and services, and attack provenance.
2. What are all the authentication methods used in grid environment?

The major authentication methods in the grid include passwords, PKI, and Kerberos.
3. Define authorization. List out the types of authority.

Authorization: Authorization is a process to exercise access control over shared resources. Decisions can be made either at the access point of service or at a centralized place.
Types of authority:
The authority can be classified into three categories:
• Attribute authorities issue attribute assertions;
• policy authorities issue authorization policies;
• identity authorities issue certificates.
4. Mention the properties of grid security infrastructure.

The grid requires a security infrastructure with the following properties:
• easy to use;
• conforms with the VO's security needs while working well with site policies of each resource provider site; and
• provides appropriate authentication and encryption of all interactions.
5. What is GSI?

GSI is a portion of the Globus Toolkit and provides fundamental security services needed to support grids, including support for message protection, authentication and delegation, and authorization. GSI enables secure authentication and communication over an open network, and permits mutual authentication across and among distributed sites with single sign-on capability. No centrally managed security system is required, and the grid maintains the integrity of its members' local policies. GSI supports both message-level security, which supports the WS-Security standard and the WS-SecureConversation specification to provide message protection for SOAP messages, and transport-level security, which means authentication via TLS with support for X.509 proxy certificates.

6. What are the functions of GSI?


 message protection,
 authentication,
 delegation, and
 authorization.


7. What are the protection mechanisms provided by GSI through WS-Security and WS-Secure Conversation?

GSI allows three additional protection mechanisms.


 The first is integrity protection, by which a receiver can verify that
messages were not altered in transit from the sender.
 The second is encryption, by which messages can be protected to provide
confidentiality.
 The third is replay prevention, by which a receiver can verify that it has not received the same message previously.

8. What is IAM?

Identity and access management (IAM) is the security and business discipline
that "enables the right individuals to access the right resources at the right times
and for the right reasons."

9. Define data integrity.

Data integrity refers to maintaining and assuring the accuracy and consistency
of data over its entire life-cycle, and is a critical aspect to the design,
implementation and usage of any system which stores, processes, or
retrieves data.

10. List out the responsibilities and challenges in managing users in IaaS
services.

 User provisioning
 Privileged user management
 Customer key assignment (assigning IDs and keys)
 Developer user management


 End user management

11. List out the support provided by IAM for business.

The IAM processes to support the business can be broadly categorized as


follows:
 User management activities
 Authentication management
 Authorization management
 Access management
 Data management and provisioning
 Monitoring and Auditing

12. Define Intrusion Detection System.

An intrusion detection system is software or hardware designed to detect


unwanted attempts at accessing, manipulating, or disabling computer systems,
mainly through a network such as the Internet.
13. What is IDaaS?

Identity-as-a-Service (IDaaS) refers to the practice of delivering identity management as a service.

14. Define AAA.

Authentication, Authorization, and Accounting is a system used to control what


computer resources users have access to and to keep track of the activity of
users over a network.


15. What is federation?

Federation is the process of managing the trust relationships established beyond


the internal network boundaries or administrative domain boundaries among
distinct organizations. A federation is an association of organizations that come
together to exchange information about their users and resources to enable
collaborations and transactions (e.g., sharing user information with the
organizations’ benefits systems managed by a third-party provider). Federation
of identities to service providers will support SSO to cloud services.


Part – B

1. GRID SECURITY INFRASTRUCTURE

 Discuss the grid security infrastructure.

Grid Security Infrastructure (GSI)


Although the grid is increasingly deployed as a common approach to constructing dynamic, interdomain, distributed computing and data collaborations, "lack of security/trust between different services" is still an important challenge of the grid. The grid requires a security infrastructure with the following properties: easy to use; conforms with the VO's security needs while working well with site policies of each resource provider site; and provides appropriate authentication and encryption of all interactions. The GSI is an important step toward satisfying these requirements. As a well-known security solution in the grid environment, GSI is a portion of the Globus Toolkit and provides fundamental security services needed to support grids, including support for message protection, authentication and delegation, and authorization. GSI enables secure authentication and communication over an open network, and permits mutual authentication across and among distributed sites with single sign-on capability. No centrally managed security system is required, and the grid maintains the integrity of its members' local policies. GSI supports both message-level security, which supports the WS-Security standard and the WS-SecureConversation specification to provide message protection for SOAP messages, and transport-level security, which means authentication via TLS with support for X.509 proxy certificates.
GSI Functional Layers: GT4 provides distinct WS and pre-WS authentication and
authorization capabilities. Both build on the same base, namely the X.509 standard
and entity certificates and proxy certificates, which are used to identify persistent
entities such as users and servers and to support the temporary delegation of privileges
to other entities, respectively. As shown in Figure 5.1 , GSI may be thought of as


being composed of four distinct functions: message protection, authentication, delegation, and authorization.

Figure 5.1 GSI functional layers at the message and transport levels
TLS (transport-level security) or WS-Security and WS-Secure Conversation (message-level) are used as message protection mechanisms in combination with SOAP. X.509 End Entity Certificates or Username and Password are used as authentication credentials. X.509 Proxy Certificates and WS-Trust are used for delegation. An Authorization Framework allows for a variety of authorization schemes, including a "grid-mapfile" ACL, an ACL defined by a service, a custom authorization handler, and access to an authorization service via the SAML protocol. In addition, associated security tools provide for the storage of X.509 credentials (MyProxy and Delegation services), the mapping between GSI and other authentication mechanisms (e.g., KX509 and PKINIT for Kerberos, MyProxy for one-time passwords), and maintenance of information used for authorization (VOMS, GUMS, PERMIS).
Transport-Level Security: Transport-level security entails SOAP messages conveyed over a network connection protected by TLS. TLS provides for both integrity protection and privacy (via encryption). Transport-level security is normally used in conjunction with X.509 credentials for authentication, but can also be used without such credentials to provide message protection without authentication, often referred to as "anonymous transport-level security." In this mode of operation, authentication may be done by username and password in a SOAP message.


Message-Level Security: GSI also provides message-level security for message protection for SOAP messages by implementing the WS-Security standard and the WS-Secure Conversation specification. The WS-Security standard from OASIS defines a framework for applying security to individual SOAP messages; WS-Secure Conversation is a proposed standard from IBM and Microsoft that allows for an initial exchange of messages to establish a security context which can then be used to protect subsequent messages in a manner that requires less computational overhead (i.e., it allows the trade-off of initial overhead for setting up the session for lower overhead per message).
GSI conforms to this standard. GSI uses these mechanisms to provide security on a per-message basis, that is, to an individual message without any preexisting context between the sender and receiver (outside of sharing some set of trust roots). GSI, as described further in the subsequent section on authentication, allows for both X.509 public key credentials and the combination of username and password for authentication; however, differences still exist. With username/password, only the WS-Security standard can be used to allow for authentication; that is, a receiver can verify the identity of the communication initiator.
GSI allows three additional protection mechanisms. The first is integrity protection, by which a receiver can verify that messages were not altered in transit from the sender. The second is encryption, by which messages can be protected to provide confidentiality. The third is replay prevention, by which a receiver can verify that it has not received the same message previously. These protections differ between WS-Security and WS-Secure Conversation: the former applies the keys associated with the sender's and receiver's X.509 credentials, while the latter uses the X.509 credentials to establish a session key that is used to provide the message protection.
Authentication and Delegation GSI has traditionally supported authentication and
delegation through the use of X.509 certificates and public keys. As a new feature in
GT4, GSI also supports authentication through plain usernames and passwords as a
deployment option. We discuss both methods in this section. GSI uses X.509
certificates to identify persistent users and services. As a central concept in GSI
authentication, a certificate includes four primary pieces of information:


(1) a subject name, which identifies the person or object that the certificate represents;
(2) the public key belonging to the subject;
(3) the identity of a CA that has signed the certificate to certify that the public key and
the identity both belong to the subject; and
(4) the digital signature of the named CA. X.509 provides each entity with a unique
identifier (i.e., a distinguished name) and a method to assert that identifier to another
party through the use of an asymmetric key pair bound to the identifier by the
certificate.
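These four pieces of information can be inspected directly with the standard JDK certificate API; the following is an illustrative sketch (the file name user-cert.pem is hypothetical):

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class InspectCert {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("user-cert.pem")) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println("Subject (distinguished name): " + cert.getSubjectX500Principal());
            System.out.println("Public key algorithm: " + cert.getPublicKey().getAlgorithm());
            System.out.println("Issuing CA: " + cert.getIssuerX500Principal());
            System.out.println("Signature algorithm: " + cert.getSigAlgName());
        }
    }
}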
The X.509 certificates used by GSI are conformant to the relevant standards
and conventions. Grid deployments around the world have established their own CAs
based on third-party software to issue the X.509 certificate for use with GSI and the
Globus Toolkit. GSI also supports delegation and single sign-on through the use of
standard X.509 proxy certificates. Proxy certificates allow bearers of X.509 to
delegate their privileges temporarily to another entity. For the purposes of
authentication and authorization, GSI treats certificates and proxy certificates
equivalently. Authentication with X.509 credentials can be accomplished either via
TLS, in the case of transport-level security, or via signature as specified by WS-
Security, in the case of message-level security.
Trust Delegation To reduce or even avoid the number of times the user must
enter his passphrase when several grids are used or have agents (local or remote)
requesting services on behalf of a user, GSI provides a delegation capability and a
delegation service that provides an interface to allow clients to delegate (and renew)
X.509 proxy certificates to a service. The interface to this service is based on the WS-
Trust specification. A proxy consists of a new certificate and a private key. The key
pair that is used for the proxy, that is, the public key embedded in the certificate and
the private key, may either be regenerated for each proxy or be obtained by other
means. The new certificate contains the owner’s identity, modified slightly to indicate
that it is a proxy. The new certificate is signed by the owner, rather than a CA.


Figure 5.2 A sequence of trust delegations in which new certificates are signed by the owners rather than by the CA.
The certificate also includes a time notation after which the proxy should no longer be accepted by others. Proxies have limited lifetimes. Because the proxy isn't valid for very long, it doesn't have to stay quite as secure as the owner's private key, and thus it is possible to store the proxy's private key in a local storage system without being encrypted, as long as the permissions on the file prevent anyone else from looking at it easily. Once a proxy is created and stored, the user can use the proxy certificate and private key for mutual authentication without entering a password.
When proxies are used, the mutual authentication process differs slightly. The remote
party receives not only the proxy’s certificate (signed by the owner), but also the
owner’s certificate. During mutual authentication, the owner’s public key (obtained
from her certificate) is used to validate the signature on the proxy certificate. The
CA’s public key is then used to validate the signature on the owner’s certificate. This
establishes a chain of trust from the CA to the last proxy through the successive
owners of resources. The GSI uses WS-Security with textual usernames and
passwords. This mechanism supports more rudimentary web service applications.
When using usernames and passwords as opposed to X.509 credentials, the GSI
provides authentication, but no advanced security features such as delegation,
confidentiality, integrity, and replay prevention. However, one can use usernames and
passwords with anonymous transport-level security such as unauthenticated TLS to
ensure privacy.
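The validation order described above can be pictured with standard JDK calls. This is only a sketch of the signature checks, with the three certificates assumed to be loaded elsewhere; plain JDK path validation does not understand GSI proxy certificates, so GSI implements its own chain handling:

import java.security.cert.X509Certificate;

public class DelegationChainCheck {
    // proxyCert, ownerCert, and caCert are assumed to be loaded elsewhere.
    static void validate(X509Certificate proxyCert, X509Certificate ownerCert,
                         X509Certificate caCert) throws Exception {
        proxyCert.verify(ownerCert.getPublicKey()); // proxy is signed by the owner, not a CA
        ownerCert.verify(caCert.getPublicKey());    // owner's certificate is signed by the CA
        proxyCert.checkValidity();                  // proxies carry short lifetimes
        ownerCert.checkValidity();
    }
}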


2. NETWORK, HOST, AND APPLICATION LEVEL IN CLOUD SECURITY

 Explain the network, host, and application level in cloud security.

Network Level: At the network level of infrastructure security, it is important to distinguish between public clouds and private clouds. With private clouds, there are no new attacks, vulnerabilities, or changes in risk specific to this topology. Changing security requirements will require changes to your network topology. There are four significant risk factors:
 Ensuring the confidentiality and integrity.
 Ensuring proper access control.
 Ensuring the availability of internet-facing resources.
 Replacing the established model of network zones and tiers with
domains.
Ensuring data confidentiality and integrity: Users who use HTTP rather than HTTPS face an increased risk that their data could be altered in transit without their knowledge.
Ensuring proper access control:
This is the issue of reused (reassigned) IP addresses. Generally speaking, cloud providers do not sufficiently "age" IP addresses when they are no longer needed by one customer. Addresses are usually reassigned and reused by other customers as they become available. The issue of "non-aged" IP addresses and unauthorized network access to resources does not apply only to routable IP addresses.
Ensuring the availability of internet-facing resources:
Although prefix hijacking due to deliberate attacks is far less common than misconfiguration, it still occurs and can block access to data. Denial of service (DoS) and distributed denial of service (DDoS) attacks are examples of problems associated with this third risk factor.
Replacing the established model of network zones and tiers with domains:


The traditional model of network zones and tiers has been replaced in public cloud computing with "security groups" or "virtual data centers" that have logical separation between tiers but are less precise and afford less protection than the formerly established model.
For example, the security groups feature in AWS allows your virtual machines to access each other using a virtual firewall that has the ability to filter traffic based on IP address, packet types, and ports.
Infrastructure Security: The Host Level
Consider the context of cloud service delivery models (SaaS, PaaS, IaaS) and deployment models (public, private, and hybrid). The dynamic nature (elasticity) of cloud computing can bring new operational challenges from a security management perspective.
SaaS and PaaS Host Security:
CSPs do not share information related to their host platforms, host OS, and the processes that are in place to secure the hosts, since hackers can exploit that information when they are trying to intrude into the cloud service. Hence, in the context of SaaS or PaaS cloud services, host security is opaque to customers and the responsibility of securing the hosts is relegated to the CSP.
Virtualization is a key enabling technology that improves host hardware utilization, among other benefits, so it is common for CSPs to employ virtualization platforms, including Xen and VMware hypervisors, in their host computing platform architecture.
Both the PaaS and SaaS platforms abstract and hide the host OS from end users with a host abstraction layer. One key difference between PaaS and SaaS is the accessibility of the abstraction layer that hides the OS services the applications consume.
IaaS Host Security:
Unlike PaaS and SaaS, IaaS customers are primarily responsible for securing the hosts provisioned in the cloud. Given that almost all IaaS services available today employ virtualization at the host layer, host security in IaaS should be categorized as follows:


Fig 5.3: Generic network topology for private cloud computing

1. Virtualization software security:
Customers have neither visibility into nor access to the virtualization software that sits on top of the hardware. OS virtualization enables the sharing of hardware resources across multiple guest VMs without their interfering with each other.
2. Customer guest OS or virtual server security:
Guest VMs are hosted on, and isolated from each other by, hypervisor technology. Customers are therefore responsible for securing the guest VM and for its ongoing security management.
Some of the new host security threats in the public IaaS include:
 Stealing keys.
 Attacking unpatched, vulnerable services.
 Hijacking accounts.
 Attacking systems that are not properly secured.
 Deploying Trojans.
3. Securing virtual servers:
Here are some recommendations:
 Use a secure-by-default configuration.
 Protect the integrity of the hardened image from unauthorized access.
 Safeguard the private key.
 Do not allow password-based authentication for shell access.
 Enable system auditing and event logging.
Infrastructure Security: The Application Level
Application or software security should be a critical element of your security program. The application security spectrum ranges from standalone single-user applications to sophisticated multiuser e-commerce applications used by millions of users. End-to-end cloud security helps protect the confidentiality, integrity, and availability of the information processed by cloud services.
Application-Level Security Threats.
Hackers are constantly scanning web applications (accessible from the Internet) for
application vulnerabilities. Web applications are at risk of web application security
defects, ranging from insufficient validation to application logic errors. Web
applications deployed in a public cloud must be designed for an Internet threat model,


and security must be embedded into the Software Development Life Cycle (SDLC), as shown in Figure 5.4.

Figure 5.4: The SDLC


DoS and EDoS:
These attacks typically originate from compromised computer systems attached to the Internet. Application-level DoS attacks could manifest themselves as high-volume web page reloads, XML web service requests, or protocol-specific requests supported by a cloud service.
DoS attacks on pay-as-you-go cloud applications will result in a dramatic increase in your cloud utility bill: you'll see increased use of network bandwidth, CPU, and storage consumption. This type of attack is also being characterized as economic denial of sustainability (EDoS).
End User Security:
 Safe surfing.
 Use of security software: anti-malware, antivirus, personal firewalls.
SaaS Application Security:
SaaS providers are largely responsible for securing the applications and components
they offer to customers. Customers are usually responsible for operational security


functions, including user and access management as supported by the provider. Extra
attention needs to be paid to the authentication and access control features.

PaaS Application Security:


PaaS application security encompasses two software layers:
 Security of the PaaS platform itself (the runtime engine).
 Security of customer applications deployed on the PaaS platform.

IaaS Application Security:
Customer applications deployed on IaaS should be periodically tested for vulnerabilities, and most importantly, security should be embedded into the SDLC. Customers are solely responsible for keeping their applications and runtime platform patched to protect the system from malware and hackers scanning for vulnerabilities to gain unauthorized access to their data in the cloud.
Public Cloud Security Limitations:
There are limitations to the public cloud when it comes to support for custom security features. Security requirements such as an application firewall, SSL accelerator, cryptography, or rights management using a device that supports PKCS#12 are not supported in a public SaaS, PaaS, or IaaS cloud.

3. ASPECTS OF DATA SECURITY AND PROVIDER DATA AND ITS SECURITY

 Explain the various aspects of data security and discuss the provider data and
its security.

Data security becomes more important when using cloud computing at all "levels": infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Several aspects of data security are considered, including:
 Data-in-transit.
 Data-at-rest.
 Processing of data, including multitenancy.
 Data lineage.
 Data provenance.
 Data remanence.
Aspects of Data Security:
 With regard to data-in-transit, the primary risk is in not using a vetted encryption algorithm. It is also important to ensure that the protocol used provides confidentiality as well as integrity (e.g., FTP over SSL/TLS, or HTTPS). Encrypting data but transferring it over a non-secured protocol can provide confidentiality, but does not ensure the integrity of the data.
 Using encryption to protect data-at-rest might seem obvious; the reality is not
that simple. Encrypting data-at-rest is possible—and is strongly suggested.
Data-at-rest used by a cloud-based application is generally not encrypted,
because encryption would prevent indexing or searching of that data.
 For any application to process data, that data must be unencrypted. A homomorphic encryption scheme allows data to be processed without being decrypted; this is a huge advance in cryptography. Other cryptographic research efforts are under way to limit the amount of data that would need to be decrypted for processing in the cloud, such as predicate encryption. Whether or not the data put into the cloud is encrypted, it is useful, and might be required, to know exactly where and when the data was specifically located within the cloud.
 Following the path of data (mapping application data flows or data path visualization) is known as data lineage. Providing data lineage to auditors or management is time consuming, even when the environment is completely under an organization's control. Trying to provide accurate reporting on data lineage for a public cloud service is really not possible.
 Even if data lineage can be established in a public cloud, for some customers there is an even more challenging requirement and problem: providing data provenance, not just proving the integrity of the data. There is an important difference between the two terms:
1. Integrity of data: the data has not been changed in an unauthorized manner.
2. Provenance: not only does the data have integrity, but it is also computationally accurate; that is, the data was accurately calculated.
 A final aspect of data security is data remanence. Data remanence is the
residual representation of data that has been in some way nominally erased or
removed. This residue may be due to data being left intact by a nominal delete
operation, or through physical properties of the storage medium. Data
remanence may make inadvertent disclosure of sensitive information possible,
should the storage media be released into an uncontrolled environment.
The risk posed by data remanence in cloud services is that an organization's data can be inadvertently exposed to an unauthorized party. When using SaaS or PaaS, the risk is almost certainly unintentional or inadvertent exposure. However, that is not reassuring after an unauthorized disclosure, and potential customers should question what third-party tools or reviews are used to help validate the security of the provider's applications or platform.
Clearing is the process of eradicating the data on media before reusing the media in an environment that provides an acceptable level of protection for the data that was on the media before clearing.
Sanitization is the process of removing the data from media before reusing the media in an environment that does not provide an acceptable level of protection for the data that was on the media before sanitizing.
Data security mitigation: currently, the only viable option for mitigation is to ensure that any sensitive or regulated data is not put into a public cloud.
Provider Data and its Security
In addition to the security of their own data, customers should also be concerned about what data the provider collects and how the CSP protects that data. Specifically with regard to customer data: what metadata does the provider have about your data, how is it secured, and what access do you, the customer, have to that


metadata? As your volume of data with a particular provider increases, so does the value of that metadata.
Additionally, your provider collects and must protect a huge amount of security-related data. For example, at the network level your provider should be collecting, monitoring, and protecting firewall, intrusion prevention system (IPS), security incident and event management (SIEM), and router flow data. Providers should be collecting system log files, and at the application level SaaS providers should be collecting application log data, including authentication and authorization information.
Storage: Three information security concerns are associated with data stored in the cloud: confidentiality, integrity, and availability.
Confidentiality: Confidentiality of data stored in a public cloud raises two potential concerns. First, what access control exists to protect the data? Access control consists of both authentication (username + password) and authorization. The second potential concern: how is the data that is stored in the cloud actually protected? For all practical purposes, protection of data stored in the cloud involves the use of encryption.
If a CSP does encrypt a customer's data, the next consideration concerns what encryption algorithm it uses. Not all encryption algorithms are created equal. Cryptographically, many algorithms provide insufficient security. Symmetric encryption involves the use of a single secret key for both the encryption and decryption of data. Although the example in Figure 5.5 is related to email, the same concept (i.e., a single shared, secret key) is used in data storage encryption.

Figure 5.5. Symmetric Encryption


Although the example in Figure 5.6 is related to email, the same concept (i.e., a public key and a private key) is not used in data storage encryption.
The next consideration is what key is used. With symmetric encryption, a longer key length provides more protection. The key length should be a minimum of 112 bits for Triple DES (Data Encryption Standard) and 128 bits for AES (Advanced Encryption Standard). Another confidentiality consideration for encryption is key management: how are the encryption keys going to be managed, and by whom? Because key management is complex and difficult even for a single customer, it is even more complex and difficult to manage multiple customers' keys.

Figure 5.6. Asymmetric Encryption
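To make the key-length discussion concrete, here is a minimal sketch using the standard Java cryptography APIs with a 128-bit AES key, the minimum suggested above; key storage and management, the hard part discussed in the text, are deliberately omitted:

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SymmetricExample {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                       // 128-bit key: the minimum suggested for AES
        SecretKey key = kg.generateKey();   // the same secret key encrypts and decrypts

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);   // fresh nonce for each encryption

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("stored data".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plaintext = cipher.doFinal(ciphertext); // same key recovers the data
        System.out.println(new String(plaintext, "UTF-8"));
    }
}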

Integrity:
 Confidentiality does not imply integrity; data can be encrypted for confidentiality purposes, and yet you might not have a way to verify the integrity of that data. Encryption alone is sufficient for confidentiality, but integrity also requires the use of message authentication codes (MACs).
 The simplest way to use MACs on encrypted data is to use a block symmetric algorithm in cipher block chaining (CBC) mode, and to include a one-way hash function (see the sketch after this list).
 Another aspect of data integrity is important, especially with bulk storage using IaaS. What a customer really wants to do is to validate the integrity of its data while that data remains in the cloud, without having to download and re-upload it. This task is even more difficult given that the data set is probably dynamic and changing frequently; those frequent changes obviate the effectiveness of traditional integrity assurance techniques.
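The MAC idea mentioned in the list can be approximated with an HMAC; the sketch below (key and data values are illustrative) shows the tag computation that a verifier would repeat to detect tampering:

import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class MacExample {
    public static void main(String[] args) throws Exception {
        byte[] macKey = new byte[32];
        new SecureRandom().nextBytes(macKey);   // a separately managed secret key
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        byte[] tag = mac.doFinal("ciphertext bytes".getBytes("UTF-8"));
        // The verifier recomputes the tag over the received ciphertext with the
        // same key; any mismatch means the data was altered, which encryption
        // alone cannot detect.
        System.out.println("tag length = " + tag.length + " bytes");
    }
}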
Availability:
Assuming that a customer’s data has maintained its confidentiality and integrity,
you must also be concerned about the availability of your data. There are currently
three major threats in this regard:
 The first threat to availability is network-based attacks.
 The second threat to availability is the CSPs own availability.
 Finally, prospective cloud storage customers must be certain to ascertain just
what services their provider is actually offering.
Cloud storage does not mean the stored data is actually backed up. Some cloud storage providers back up customer data in addition to providing storage; however, many do not, or do so only as an additional service for an additional cost.

4. IDENTITY AND ACCESS MANAGEMENT ARCHITECTURE

 Explain in detail the identity and access management architecture.

Trust Boundaries and IAM


In a typical organization, the "trust boundary" is mostly static and is monitored and controlled by the IT department; access to the network, systems, and applications is secured via network security controls including virtual private networks (VPNs), intrusion detection systems (IDSs), intrusion prevention systems (IPSs), and multifactor authentication.
With the adoption of cloud services, the organization's trust boundary will become dynamic and will move beyond the control of IT. This loss of control continues to challenge the established trusted governance and control model and, if not managed properly, will impede cloud service adoption within an organization.


To compensate for the loss of network control and to strengthen risk assurance,
organizations will be forced to rely on other higher-level software controls, such as
application security and user access controls. These controls manifest as strong
authentication, authorization based on role or claims, trusted sources with accurate
attributes, identity federation, single sign-on (SSO), user activity monitoring, and
auditing. In particular, organizations need to pay attention to the identity federation
architecture and processes, as they can strengthen the controls and trust between
organizations and cloud service providers (CSPs).
IAM is a two-way street: CSPs need to support IAM standards and practices such as federation so that customers can take advantage of them and extend their practices to maintain compliance with internal policies and standards.
Need for IAM:
 Improve operational efficiency
 Regulatory compliance management
Some of the cloud use cases that require IAM support from the CSP include:
 IT administrators accessing the CSP management console to provision
resources and access for users using a corporate identity.
 Developers creating accounts for partner users in a PaaS platform.
 End users accessing storage service in the cloud and sharing files and
objects with users, within and outside a domain using access policy
management features.
 An application residing in a cloud service provider accessing storage
from another cloud service.
IAM Challenges
One critical challenge of IAM is managing access to both internally and externally hosted services. Another issue is the turnover of users within the organization; turnover varies by industry and function, for example around new product and service releases.
To address these challenges and risks, many companies have sought technology solutions to enable centralized and automated user access management. Many of these initiatives are entered into with high expectations, which is not surprising given that the problem is often large and complex.


IAM Definitions
Basic concepts and definitions of IAM functions for any service:
Authentication – the process of verifying the identity of a user or a system. Authentication usually connotes a more robust form of identification. In some use cases, such as service-to-service interaction, authentication involves verifying the network service.
Authorization – the process of determining the privileges the user or system is entitled to once the identity is established. Authorization usually follows the authentication step and is used to determine whether the user or service has the necessary privileges to perform certain operations.
Auditing – the process of review and examination of authentication and authorization records and activities to determine the adequacy of IAM system controls, to verify compliance with established security policies and procedures, to detect breaches in security services, and to recommend any changes that are indicated as countermeasures.

IAM Architecture and Practice


IAM is not a monolithic solution that can be easily deployed to gain capabilities immediately. It is as much an aspect of architecture (see Figure 5.7) as it is a collection of technology components, processes, and standard practices. Standard enterprise IAM architecture encompasses several layers of technology, services, and processes. At the core of the deployment architecture is a directory service (such as LDAP or Active Directory) that acts as a repository for the identity, credential, and user attributes of the organization's user pool. The directory interacts with IAM technology components such as authentication, user management, provisioning, and federation services that support the standard IAM practice and processes within the organization.
The IAM processes to support the business can be broadly categorized as follows:
User management: Activities for the effective governance and management of
identity life cycles


Authentication management: Activities for the effective governance and


management of the process for determining that an entity is who or what it claims to
be
Authorization management: Activities for the effective governance and
management of the process for determining entitlement rights that decide what
resources an entity is permitted to access in accordance with the organization’s
policies
Access management: Enforcement of policies for access control in response to a
request from an entity (user, services) wanting to access an IT resource within the
organization
Data management and provisioning: Propagation of identity and data for
authorization to IT resources via automated or manual processes
Monitoring and auditing: Monitoring, auditing, and reporting compliance by users
regarding access to resources within the organization based on the defined policies
IAM processes support the following operational activities:
Provisioning: Provisioning can be thought of as a combination of the duties of the
human resources and IT departments, where users are given access to data repositories
or systems, applications, and databases based on a unique user identity.
Deprovisioning works in the opposite manner, resulting in the deletion or deactivation
of an identity or of privileges assigned to the user identity.
Credential and attribute management: These processes are designed to manage the life cycle of credentials and user attributes—create, issue, manage, revoke—to minimize the business risk associated with identity impersonation and inappropriate account use. Credentials are usually bound to an individual and are verified during the authentication process.

Figure 5.7 Enterprise IAM functional architecture
The processes include provisioning of attributes, static (e.g., standard text
password) and dynamic (e.g., one-time password) credentials that comply with a
password standard (e.g., passwords resistant to dictionary attacks), handling password
expiration, encryption management of credentials during transit and at rest, and access
policies of user attributes (privacy and handling of attributes for various regulatory
reasons).

Entitlement management: Entitlements are also referred to as authorization policies.


The processes in this domain address the provisioning and deprovisioning of
privileges needed for the user to access resources including systems, applications, and
databases. Proper entitlement management ensures that users are assigned only the
required privileges.


Compliance management: This process implies that access rights and privileges are
monitored and tracked to ensure the security of an enterprise’s resources. The process
also helps auditors verify compliance to various internal access control policies, and
standards that include practices such as segregation of duties, access monitoring,
periodic auditing, and reporting. An example is a user certification process that allows
application owners to certify that only authorized users have the privileges necessary
to access business-sensitive information.
Identity federation management: Federation is the process of managing the trust
relationships established beyond the internal network boundaries or administrative
domain boundaries among distinct organizations. A federation is an association of
organizations that come together to exchange information about their users and
resources to enable collaborations and transactions.
Centralization of authentication (authN) and authorization (authZ): A central authentication and authorization infrastructure alleviates the need for application developers to build custom authentication and authorization features into their applications. Furthermore, it promotes a loose coupling architecture where applications become agnostic to the authentication methods and policies. This approach is also called an "externalization of authN and authZ" from applications.

Figure 5.8 Identity Life cycle


Getting ready for the cloud


As a first step, organizations planning for cloud services must plan for basic
user management functions such as user account provisioning and ongoing user
account management, including timely deprovisioning of users when they no longer
need access to the cloud service.
Organizations should start with an IAM strategy and architecture and invest in
foundational technology elements that support user management and federation. In
addition to providing a consistent user experience, federation can help to mitigate
risks to organizations since it supports the SSO user experience: users will not be
required to sign in multiple times, nor will they have to remember cloud-service-
specific user authentication information (e.g., one user ID/password pair per provider).
Architecting an identity federation model will help organizations gain
capabilities to support an identity provider (IdP), also known as an SSO provider
(using an existing directory service or cloud-based identity management service). In
that architecture, enterprise can share identities with trusted CSPs without sharing user
credentials or private user attributes.
Federation technology is typically built on a centralized identity management
architecture leveraging industry-standard identity management protocols, such as Security Assertion Markup Language (SAML), WS-Federation (WS-*), or Liberty Alliance. Of the three major protocol families associated with federation, SAML seems to be recognized as the de facto standard for enterprise-controlled federation.

IAM Standards and Specifications for Organisations


The following IAM standards and specifications will help organizations
implement effective and efficient user access management practices and processes in
the cloud. These sections are ordered by four major challenges in user and access
management faced by cloud users:
1. How can I avoid duplication of identity, attributes, and credentials and
provide a single sign-on user experience for my users? SAML.
2. How can I automatically provision user accounts with cloud services and automate the process of provisioning and deprovisioning? SPML.


3. How can I provision user accounts with appropriate privileges and manage
entitlements for my users? XACML.
4. How can I authorize cloud service X to access my data in cloud service Y
without disclosing credentials? OAuth.

Security Assertion Markup Language (SAML): SAML is the most mature, detailed, and widely adopted specification family for browser-based federated sign-on for cloud users. Once the user authenticates to the identity service, she can freely access provisioned cloud services that fall within the trusted domain, thereby sidestepping the cloud-specific sign-on process. Since SAML enables delegation (SSO), by using risk-based authentication policies customers can elect to employ strong authentication (multifactor authentication) for certain cloud services.
Strong authentication to cloud services is also advisable to protect user
credentials from man-in-the middle attacks—i.e., when computers or browsers fall
victim to trojans and botnet attacks. By supporting a SAML standard that enables a
delegated authentication model for cloud customers, the CSP can delegate the
authentication policies to the customer organization. In short, SAML helps CSPs to
become agnostic to customer authentication requirements.
Figure 5.9 illustrates an SSO into Google Apps from the browser, showing the following steps in the SSO process of a user who is federated to Google:
1. The user from your organization attempts to reach a hosted Google
application, such as Gmail, Start Pages, or another Google service.
2. Google generates a SAML authentication request. The SAML request is
encoded and embedded into the URL for your organization’s IdP supporting the SSO
service. The Relay State parameter containing the encoded URL of the Google
application that the user is trying to reach is also embedded in the SSO URL. This
Relay State parameter is meant to be an opaque identifier that is passed back without
any modification or inspection.


3. Google sends a redirect to the user’s browser. The redirect URL includes the
encoded SAML authentication request that should be submitted to your organization’s
IdP service.
4. Your IdP decodes the SAML request and extracts the URL for both Google’s
Assertion Consumer Service (ACS) and the user’s destination URL (the Relay State
parameter). Your IdP then authenticates the user. Your IdP could authenticate the user
by either asking for valid login credentials or checking for valid session cookies.
5. Your IdP generates a SAML response that contains the authenticated user’s
username. In accordance with the SAML 2.0 specification, this response is digitally
signed with the partner’s public and private DSA/RSA keys.
6. Your IdP encodes the SAML response and the Relay State parameter and
returns that information to the user’s browser. Your IdP provides a mechanism so that
the browser can forward that information to Google’s ACS.
7. Google’s ACS verifies the SAML response using your IdP’s public key. If
the response is successfully verified, ACS redirects the user to the destination URL.
8. The user has been redirected to the destination URL and is logged in to Google
Apps.
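Steps 2 and 3 rely on the SAML HTTP-Redirect binding (raw deflate, then Base64, then URL-encoding). The sketch below illustrates that encoding with standard JDK classes; the IdP URL, RelayState value, and request XML are placeholders:

import java.io.ByteArrayOutputStream;
import java.net.URLEncoder;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class SamlRedirect {
    public static void main(String[] args) throws Exception {
        String samlRequest = "<samlp:AuthnRequest .../>"; // placeholder request XML
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(
                 buf, new Deflater(Deflater.DEFAULT_COMPRESSION, true))) {
            def.write(samlRequest.getBytes("UTF-8")); // raw deflate, per the binding
        }
        String encoded = Base64.getEncoder().encodeToString(buf.toByteArray());
        String redirect = "https://idp.example.org/sso"
            + "?SAMLRequest=" + URLEncoder.encode(encoded, "UTF-8")
            + "&RelayState=" + URLEncoder.encode("https://mail.google.com/a/example.org", "UTF-8");
        System.out.println(redirect); // the URL the browser is redirected to in step 3
    }
}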

Service Provisioning Markup Language (SPML)


If SPML is supported, software-as-a-service (SaaS) providers can enable "just-in-time provisioning" to create accounts for new users in real time (as opposed to preregistering users). In that model, the CSP extracts attributes from the SAML token of a new user, creates an SPML message on the fly, and hands the request to a provisioning service, which in turn adds the user identity to the cloud user database.


Fig 5.9 SSO transaction steps using SAML


Figure 5.10 illustrates an SPML use case in which an HR system is requesting a provisioning system in the cloud with the SPML request. In the figure, the HR System of Record (requesting authority) is an SPML web services client interacting with the SPML provisioning service provider at the cloud service provider, which is responsible for provisioning user accounts on the cloud services (provisioning service target).
eXtensible Access Control Markup Language (XACML)
XACML is an XML-based access control language for policy management and access decisions. It provides an XML schema for a general policy language which is used to protect any kind of resource and make access decisions over these resources. The XACML standard not only gives the model of the policy language, but also proposes a processing environment model to manage the policies and to conclude the access decisions.
Most applications (web or otherwise) have a built-in authorization module that
grants or denies access to certain application functions or resources based on
entitlements assigned to the user. Hence, the goal of XACML is to provide a
standardized language, a method of access control, and policy enforcement across all
applications that implement a common authorization standard.


Figure 5.11 illustrates the interaction among various health care participants with unique roles (authorization privileges) accessing sensitive patient records stored in a health care application.

Figure 5.10 SPML use case


Figure 5.11 XACML use case


The Figure illustrates the following steps involved in the XACML process:
1. The health care application manages various hospital associates (the
physician, registered nurse, nurses’ aide, and health care supervisor) accessing various
elements of the patient record. This application relies on the policy enforcement point
(PEP) and forwards the request to the PEP.
2. The PEP is actually the interface of the application environment. It receives
the access requests and evaluates them with the help of the policy decision point
(PDP). It then permits or denies access to the resource (the health care record).
3. The PEP then sends the request to the PDP. The PDP is the main decision
point for access requests. It collects all the necessary information from available
information sources and concludes with a decision on what access to grant. The PDP
should be located in a trusted network with strong access control policies, e.g., in a
corporate trusted network protected by a corporate firewall.


4. After evaluation, the PDP sends the XACML response to the PEP.
5. The PEP fulfills the obligations by enforcing the PDP’s authorization
decision.
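The PEP/PDP split in these steps can be pictured with a toy Java model; the interfaces below are hypothetical simplifications for illustration, not a real XACML API:

interface PolicyDecisionPoint {
    boolean permits(String role, String resource, String action); // step 3: decide
}

class PolicyEnforcementPoint {
    private final PolicyDecisionPoint pdp;

    PolicyEnforcementPoint(PolicyDecisionPoint pdp) { this.pdp = pdp; }

    // Steps 1-2 and 4-5: receive the application's request, ask the PDP, enforce.
    boolean requestAccess(String role, String resource, String action) {
        return pdp.permits(role, resource, action);
    }

    public static void main(String[] args) {
        // A toy policy: only physicians may read the full patient record.
        PolicyDecisionPoint pdp = (role, resource, action) ->
            "physician".equals(role) && "patient-record".equals(resource);
        PolicyEnforcementPoint pep = new PolicyEnforcementPoint(pdp);
        System.out.println(pep.requestAccess("physician", "patient-record", "read"));   // true
        System.out.println(pep.requestAccess("nurses-aide", "patient-record", "read")); // false
    }
}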
Open Authentication (OAuth)
OAuth is an emerging authentication standard that allows consumers to share
their private resources (e.g., photos, videos, contact lists, bank accounts) stored on one
CSP with another CSP without having to disclose the authentication information (e.g.,
username and password). OAuth is an open protocol and it was created with the goal
of enabling authorization via a secure application programming interface (API)—a
simple and standard method for desktop, mobile, and web applications. For
application developers, OAuth is a method for publishing and interacting with
protected data.
Recently, Google released a hybrid version of an OpenID and OAuth protocol that combines the authorization and authentication flows in fewer steps to enhance usability. Google's GData API recently announced support for OAuth. (GData also supports SAML for browser SSO.) Figure 5.12 illustrates the sequence of interactions between a customer or partner web application, Google services, and the end user:
1. The customer web application contacts the Google Authorization service, asking for a request token for one or more Google services.
2. Google verifies that the web application is registered and responds with an
unauthorized request token.
3. The web application directs the end user to a Google authorization page,
referencing the request token.
4. On the Google authorization page, the user is prompted to log into his
account (for verification) and then either grant or deny limited access to his Google
service data by the web application.
5. The user decides whether to grant or deny access to the web application. If
the user denies access, he is directed to a Google page and not back to the web
application.


6. If the user grants access, the Authorization service redirects him back to a page designated with the web application that was registered with Google. The redirect includes the now-authorized request token.
7. The web application sends a request to the Google Authorization service to
exchange the authorized request token for an access token.
8. Google verifies the request and returns a valid access token.
9. The web application sends a request to the Google service in question. The
request is signed and includes the access token.
10. If the Google service recognizes the token, it supplies the requested data.

Figure 5.12 OAuth use case
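Step 9 amounts to attaching the access token to a signed HTTP request. The fragment below is a rough sketch with placeholder endpoint, token, and signature values; real deployments use an OAuth library to compute the signature:

import java.net.HttpURLConnection;
import java.net.URL;

public class OauthCall {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.google.com/calendar/feeds/default/private/full");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Placeholder header; a real client computes oauth_signature per the spec.
        conn.setRequestProperty("Authorization",
            "OAuth oauth_token=\"ACCESS_TOKEN\", oauth_signature=\"PLACEHOLDER\"");
        System.out.println("HTTP status: " + conn.getResponseCode()); // step 10: service replies
    }
}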


IAM Standards, Protocols, and Specifications for Consumers

The following protocols and specifications are oriented toward consumer cloud
services, and are not relevant from an enterprise cloud computing standpoint.


OpenID is an open, decentralized standard for user authentication and access


control, allowing users to log on to many services with the same digital identity—i.e.,
a single sign-on user experience with services supporting OpenID. OpenID is
primarily targeted for consumer services offered by Internet companies including
Google, eBay, Yahoo!, Microsoft, AOL, BBC, PayPal, and so on.
Information cards are another open standard for identity on the Web. The
Information Cards Protocol is designed for use in high-value scenarios, such as
banking, where phishing resistance and support for secure authentication mechanisms
such as smart cards are critical business requirements.
Open Authentication (OATH)
OATH is a collaborative effort of IT industry leaders aimed at providing an
architecture reference for universal, strong authentication across all users and all
devices over all networks. The goal of this initiative is to address the three major
authentication methods:
• Subscriber Identity Module (SIM)-based authentication (using a Global
System for Mobile Communications/General Packet Radio Service [GSM/GPRS]
SIM)
• Public Key Infrastructure (PKI)-based authentication (using an X.509v3
certificate)
• One-Time Password (OTP)-based authentication

Open Authentication API (OpenAuth)


Using this authentication method, an AIM- or AOL-registered user can log on
to a third-party website or application and access AOL services or new services built
on top of AOL services. According to AOL, the OpenAuth API provides the
following features:
• A secure method to sign in. User credentials are never exposed to the
websites or applications the user signs into.
• A secure method to control which sites are allowed to read private or
protected content.


• Automatic granting of permissions only if the user selects Allow Always


on the Consent page.
• A prompt for user consent when the website or application attempts to
read any private or protected content (e.g., separate consent requests to allow Buddy
List information, to send IMs, to read albums).
• Access to other non-AOL websites without the need to create a new user
account at each site that supports AOL OpenAuth APIs.

5. IAM PRACTICES IN THE CLOUD

 Describe the IAM practices in the cloud.

IAM Practices in the Cloud

When compared to the traditional applications deployment model within the


enterprise, IAM practices in the cloud are still evolving.
In the current state of IAM technology, standards support by CSPs (SaaS, PaaS, and
IaaS) is not consistent across providers. Although large providers such as Google,
Microsoft, and Salesforce.com seem to demonstrate basic IAM capabilities, our
assessment is that they still fall short of enterprise IAM requirements for managing
regulatory, privacy, and data protection requirements. The maturity model takes into
account the dynamic nature of IAM users, systems, and applications in the cloud and
addresses the four key components of the IAM automation process:
• User Management, New Users
• User Management, User Modifications
• Authentication Management
• Authorization Management
IAM practices and processes are applicable to cloud services; they need to be adjusted
to the cloud environment. Broadly speaking, user management functions in the cloud
can be categorized as follows:
• Cloud identity administration


• Federation or SSO
• Authorization management
• Compliance management

Cloud Identity Administration: Cloud identity administrative functions should
focus on life cycle management of user identities in the cloud—provisioning,
deprovisioning, identity federation, SSO, password or credentials management, profile
management, and administrative management. Organizations that are not capable of
supporting federation should explore cloud-based identity management services. This
new breed of service usually synchronizes an organization’s internal directories with
the provider’s own (usually multitenant) directory and acts as a proxy IdP for the
organization.

Federated Identity (SSO): Organizations planning to implement identity federation
that enables SSO for users can take one of the following two paths (architectures):
• Implement an enterprise IdP within an organization perimeter.
• Integrate with a trusted cloud-based identity management service provider.
Both architectures have pros and cons.

Enterprise identity provider: In this architecture, cloud services will delegate
authentication to an organization’s IdP. In this delegated authentication architecture,
the organization federates identities within a trusted circle of CSP domains. A circle of
trust can be created with all the domains that are authorized to delegate authentication
to the IdP. In this deployment architecture, where the organization will provide and
support an IdP, greater control can be exercised over user identities, attributes,
credentials, and policies for authenticating and authorizing users to a cloud service.
Figure 5.17 illustrates the IdP deployment architecture.

Figure 5.17 Identity provider deployment architecture


Here are the specific pros and cons of this approach:
Pros
Organizations can leverage the existing investment in their IAM infrastructure and
extend the practices to the cloud. For example, organizations that have implemented
SSO for applications within their data center exhibit the following benefits:
• They are consistent with internal policies, processes, and access management
frameworks.
• They have direct oversight of the service-level agreement (SLA) and security of the
IdP.
• They have an incremental investment in enhancing the existing identity architecture
to support federation.
Cons
Unless the existing infrastructure is enhanced to support federation, new inefficiencies
can result due to the addition of life cycle management for non-employees such as customers.
Most organizations will likely continue to manage employee and long-term contractor
identities using organically developed IAM infrastructures and practices. But they

seem to prefer to outsource the management of partner and consumer identities to a
trusted cloud-based identity-provider-as-a-service partner.
Identity management-as-a-service: In this architecture, cloud services can delegate
authentication to an identity management-as-a-service (IDaaS) provider. In this
model, organizations outsource the federated identity management technology and
user management processes to a third-party service provider.
When federating identities to the cloud, organizations may need to manage the
identity life cycle using their IAM system and processes. However, the organization
might benefit from an outsourced multiprotocol federation gateway (identity
federation service) if it has to interface with many different partners and cloud service
federation schemes.
In cases where credentialing is difficult and costly, an enterprise might also outsource
credential issuance (and background investigations) to a service provider, such as the
GSA Managed Service Organization (MSO) that issues personal identity verification
(PIV) cards and, optionally, the certificates on the cards.
In essence, this is a SaaS model for identity management, where the SaaS IdP stores
identities in a "trusted identity store" and acts as a proxy for the organization’s users
accessing cloud services, as illustrated in Figure 5.18.

Figure 5.18 Identity management-as-a-service (IDaaS)

The identity store in the cloud is kept in sync with the corporate directory through a
provider proprietary scheme (e.g., agents running on the customer’s premises
synchronizing a subset of an organization’s identity store to the identity store in the
cloud using SSL VPNs).
Once the IdP is established in the cloud, the organization should work with the CSP to
delegate authentication to the cloud identity service provider. The cloud IdP will
authenticate cloud users before they access any cloud services (typically via browser
SSO, which relies on standard HTTP redirection).
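
As a concrete illustration of those redirection techniques, the sketch below builds a SAML 2.0 HTTP-Redirect binding URL, one common way a service sends the browser to an IdP: the authentication request is DEFLATE-compressed, Base64-encoded, URL-encoded, and appended to the IdP’s SSO endpoint as the SAMLRequest parameter. The IdP URL and the request XML here are placeholders, not a real deployment; a production request would be a full, typically signed, AuthnRequest document.

import java.io.ByteArrayOutputStream;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

/** Sketch of the SAML 2.0 HTTP-Redirect binding used for browser SSO. */
public class SamlRedirect {

    static String buildRedirectUrl(String idpSsoUrl, String authnRequestXml) throws Exception {
        // The binding requires raw DEFLATE (no zlib header), then Base64, then URL encoding.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Deflater raw = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        try (DeflaterOutputStream out = new DeflaterOutputStream(buf, raw)) {
            out.write(authnRequestXml.getBytes(StandardCharsets.UTF_8));
        }
        String encoded = Base64.getEncoder().encodeToString(buf.toByteArray());
        return idpSsoUrl + "?SAMLRequest=" + URLEncoder.encode(encoded, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String request = "<samlp:AuthnRequest ID=\"_demo\" Version=\"2.0\"/>"; // placeholder only
        System.out.println(buildRedirectUrl("https://idp.example.com/sso", request));
    }
}

The user’s browser follows this URL to the IdP, authenticates there, and is redirected back to the cloud service carrying an assertion, so the service itself never sees the user’s credentials.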
Here are the specific pros and cons of this approach:
Pros
Delegating certain authentication use cases to the cloud identity management service
hides the complexity of integrating with various CSPs supporting different federation
standards. Another benefit is that there is little need for architectural changes to
support this model. Once identity synchronization between the organization directory
or trusted system of record and the identity service directory in the cloud is set up,
users can sign on to cloud services using corporate identity, credentials (both static
and dynamic), and authentication policies.
Cons
When you rely on a third party for an identity management service, you may have less
visibility into the service, including implementation and architecture details. Hence,
the availability and authentication performance of cloud applications hinges on the
identity management service provider’s SLA, performance management, and
availability. It is important to understand the identity management service provider’s
service levels, architecture, service redundancy, and performance guarantees.
Another drawback to this approach is that it may not be able to generate custom
reports to meet internal compliance requirements. In addition, identity attribute
management can also become complex when identity attributes are not properly
defined and associated with identities (e.g., definitions of attributes, both mandatory
and optional).

6. SaaS, PaaS, IaaS AVAILABILITY IN THE CLOUD

 Explain the SaaS, PaaS, IaaS availability in the cloud.

Availability Management

Cloud services are not immune to outages, and the severity and scope of impact
to the customer can vary based on the outage situation. Similar to any internal IT-
supported application, business impact due to a service outage will depend on the
criticality of the cloud application and its relationship to internal business processes.
In the case of business-critical applications where businesses rely on the continuous
availability of service, even a few minutes of service outage can have a serious impact
on your organization’s productivity, revenue, customer satisfaction, and service-level
compliance.

Factors Impacting Availability

The cloud service resiliency and availability depend on a few factors, including
the CSP’s data center architecture (load balancers, networks, systems), application
architecture, hosting location redundancy, diversity of Internet service providers
(ISPs), and data storage architecture. Following is a list of the major factors:
• SaaS and PaaS application architecture and redundancy.
• Cloud service data center, network, and systems architecture, including
geographically diverse and fault-tolerant designs.
• Reliability and redundancy of Internet connectivity used by the customer and
the CSP.
• Customer’s ability to respond quickly and fall back on internal applications
and other processes, including manual procedures.
• Customer’s visibility of the fault. In some downtime events, if the impact
affects only a small subset of users, it may be difficult to get a full picture of the
impact, which can make the situation harder to troubleshoot.

• Reliability of hardware and software components used in delivering the cloud service.
• Efficacy of the security and network infrastructure to withstand a distributed
denial of service (DDoS) attack on the cloud service.
• Efficacy of security controls and processes that reduce human error and
protect infrastructure from malicious internal and external threats, e.g., privileged
users abusing privileges.

SaaS Availability Management

By virtue of the service delivery and business model, SaaS service providers
are responsible for business continuity, application, and infrastructure security
management processes. This means the tasks your IT organization once handled will
now be handled by the CSP. Some mature organizations that are aligned with industry
standards, such as ITIL, will be faced with new challenges of governance of SaaS
services as they try to map internal service-level categories to a CSP.
For example, if a marketing application is considered critical and has a high
service-level requirement, how can the IT or business unit meet the internal marketing
department’s availability expectation based on the SaaS provider’s SLA? In some
cases, SaaS vendors may not offer SLAs and may simply address service terms via
terms and conditions. For example, Salesforce.com does not offer a standardized SLA
that describes and specifies performance criteria and service commitments. However,
another CRM SaaS provider, NetSuite, offers the following SLA clauses:
Uptime Goal—NetSuite commits to provide 99.5% uptime with respect to the
NetSuite application, excluding regularly scheduled maintenance times.
Scheduled and Unscheduled Maintenance—Regularly scheduled maintenance time
does not count as downtime. Maintenance time is regularly scheduled if it is
communicated at least two full business days in advance of the maintenance time.
Regularly scheduled maintenance time typically is communicated at least a week in
advance, scheduled to occur at night on the weekend, and takes less than 10–15 hours
each quarter.

NetSuite hereby provides notice that every Saturday night 10:00pm–10:20pm Pacific
Time is reserved for routine scheduled maintenance for use as needed.
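
For a sense of scale (our own arithmetic, not part of the NetSuite terms): a 99.5% uptime commitment permits up to 0.5% downtime, i.e., about 0.005 × 730 ≈ 3.65 hours per month, or roughly 43.8 hours per year, before the commitment is breached, and that is after excluding regularly scheduled maintenance.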
There is no such thing as a standard SLA among cloud service providers. Uptime
guarantees, service credits, and service exclusion clauses will vary from provider to
provider.

Customer Responsibility

Customers should understand the SLA and communication methods (e.g.,
email, RSS feed, website URL with outage information) to stay informed on service
outages. When possible, customers should use automated tools such as Nagios or
Siteuptime.com to verify the availability of the SaaS service; a minimal probe of this
kind is sketched below.
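
The following Java sketch shows such an availability probe; the status URL is a placeholder for whatever health endpoint your SaaS provider exposes.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/** Minimal SaaS availability probe (Nagios/Siteuptime style). */
public class SaasProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        // Placeholder URL -- substitute your provider's health/status endpoint.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://status.example-saas.com/health"))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
        long start = System.nanoTime();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        long millis = (System.nanoTime() - start) / 1_000_000;
        // Treat any non-2xx answer (or an exception before this point) as a failed check.
        boolean up = response.statusCode() / 100 == 2;
        System.out.printf("status=%d up=%b latency=%dms%n", response.statusCode(), up, millis);
    }
}

In practice such a probe runs on a schedule and raises an alert only after several consecutive failures, which is essentially what Nagios-style checks do.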
Studies of SLA effectiveness conclude that certain elements are necessary to make the
SLA an effective document; in particular, communication and clear expectations are
required from both the service provider and their customers to identify what is
important and realistic with respect to standards and expectations.
Customers of cloud services should note that a multitenant service delivery
model is usually designed with a "one size fits all" operating principle, which means
CSPs typically offer a standard SLA for all customers. Thus, CSPs may not be
amenable to providing custom SLAs if the standard SLA does not meet your service-
level requirements. However, if you are a medium or large enterprise with a sizable
budget, a custom SLA may still be feasible.
Since most SaaS providers use virtualization technologies to deliver a multitenant
service, customers should also understand how resource democratization occurs
within the CSP to best predict the likelihood of system availability and performance
during business fluctuations.
If the resources (network, CPU, memory, storage) are not allocated in a fair manner
across the tenants to perform the workload, it is conceivable that a highly demanding
tenant may starve other tenants, which can result in lower service levels or poor user
experience.

SaaS Health Monitoring

The following options are available to customers to stay informed on the health of
their service:
• Service health dashboard published by the CSP. Usually SaaS providers, such
as Salesforce.com, publish the current state of the service, current outages that may
impact customers, and upcoming scheduled maintenance services on their website
(e.g., http://trust.salesforce.com/trust/status/).
• The Cloud Computing Incidents Database (CCID). (This database is generally
community-supported, and may not reflect all CSPs and all incidents that have
occurred.)
• Customer mailing list that notifies customers of occurring and recently
occurred outages.
• Internal or third-party-based service monitoring tools that periodically check
SaaS provider health and alert customers when service becomes unavailable (e.g.,
Nagios monitoring tool).
• RSS feed hosted at the SaaS service provider.

PaaS Availability Management

In a typical PaaS service, customers (developers) build and deploy PaaS
applications on top of the CSP-supplied PaaS platform. The PaaS platform is typically
built on a CSP-owned and -managed network, servers, operating systems, storage
infrastructure, and application components (web services). The customer is
responsible for managing the availability of the customer-developed application and
third-party services, and the PaaS CSP is responsible for the PaaS platform and any
other services supplied by the CSP.
In cases where the PaaS platform enforces quotas on compute resources (CPU,
memory, network I/O), upon reaching the thresholds the application may not be able
to respond within the normal latency expectations and could eventually become

unavailable. For example, the Google App Engine has a quota system whereby each
App Engine resource is measured against one of two kinds of quotas: a billable quota
or a fixed quota.

Billable quotas are resource maximums set by you, the application’s administrator, to
prevent the cost of the application from exceeding your budget. Every application gets
an amount of each billable quota for free. You can increase billable quotas for your
application by enabling billing, setting a daily budget, and then allocating the budget
to the quotas. You will be charged only for the resources your app actually uses, and
only for the amount of resources used above the free quota thresholds.

Fixed quotas are resource maximums set by the App Engine to ensure the integrity of
the system. These resources describe the boundaries of the architecture, and all
applications are expected to run within the same limits. They ensure that another app
that is consuming too many resources will not affect the performance of your app.
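
For the application developer, the practical consequence of quotas is the need to fail soft once a threshold is reached. The following provider-neutral Java sketch illustrates the idea; QuotaExceededException is a hypothetical stand-in for whatever over-quota error a given platform actually raises.

import java.util.Optional;

/** Sketch of failing soft when a PaaS compute quota is exhausted. */
public class QuotaAwareHandler {

    /** Hypothetical stand-in for the platform's over-quota error. */
    static class QuotaExceededException extends RuntimeException {}

    /** Simulated expensive operation that may hit a billable or fixed quota. */
    static String fetchReport() {
        if (Math.random() < 0.1) throw new QuotaExceededException(); // simulate a quota hit
        return "report-data";
    }

    /** Serve the request, but degrade gracefully instead of crashing when over quota. */
    static Optional<String> handleRequest() {
        try {
            return Optional.of(fetchReport());
        } catch (QuotaExceededException e) {
            // An empty result lets the caller answer "503 Service Unavailable"
            // with a Retry-After header rather than failing hard.
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest().orElse("503 Service Unavailable; retry later"));
    }
}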
Customer Responsibility
Considering all of the variable parameters in availability management, the PaaS
application customer should carefully analyze the dependencies of the application on
the third-party web services (components) and outline a holistic management strategy
to manage and monitor all the dependencies.
The following considerations are for PaaS customers:
 PaaS platform service levels
Customers should carefully review the terms and conditions of the CSP’s
SLAs and understand the availability constraints.
 Third-party web services provider service levels
When your PaaS application depends on a third-party service, it is critical to
understand the SLA of that service. For example, your PaaS application may rely
on services such as Google Maps and use the Google Maps API to embed maps in
your own web pages with JavaScript.

 Network connectivity parameters for the network (Internet) connecting the PaaS
platform with third-party service providers. The parameters typically include
bandwidth and latency factors.
PaaS Health Monitoring
In general, PaaS applications are web-based applications hosted on the
PaaS CSP’s platform (e.g., your Java or Python application hosted on the Google App
Engine). Hence, most of the techniques and processes used for monitoring a SaaS
application also apply to PaaS applications. Given the composition of PaaS
applications, customers should monitor their application, as well as the third-party
web component services.
When CSPs support monitoring via application programming interfaces (APIs),
monitoring your application can involve a standard web services protocol, such as
Representational State Transfer (REST), Simple Object Access Protocol (SOAP), or
eXtensible Markup Language over HTTP (XML/HTTP), and in a few cases,
proprietary protocols; a sketch of such composite monitoring follows.
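
The Java sketch below shows a scheduled poller that probes both the PaaS-hosted application and a third-party web service it depends on, over plain HTTP. All endpoint URLs are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Polls a PaaS application and its third-party dependencies on a fixed schedule. */
public class PaasMonitor {
    private static final HttpClient CLIENT =
            HttpClient.newBuilder().connectTimeout(Duration.ofSeconds(5)).build();

    static void probe(String url) {
        try {
            HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                    .timeout(Duration.ofSeconds(10)).GET().build();
            int code = CLIENT.send(req, HttpResponse.BodyHandlers.discarding()).statusCode();
            System.out.println(url + " -> HTTP " + code);
        } catch (Exception e) {
            System.out.println(url + " -> UNREACHABLE (" + e.getMessage() + ")");
        }
    }

    public static void main(String[] args) {
        List<String> endpoints = List.of(
                "https://myapp.example-paas.com/healthz",  // the PaaS application itself
                "https://maps.example.com/api/status");    // a third-party web service it uses
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> endpoints.forEach(PaasMonitor::probe), 0, 60, TimeUnit.SECONDS);
    }
}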
The following options are available to customers to monitor the health of their service:
• Service health dashboard published by the CSP.
• CCID (this database is generally community-supported, and may not reflect
all CSPs and all incidents that have occurred)
• CSP customer mailing list that notifies customers of occurring and recently
occurred outages
• RSS feed for RSS readers with availability and outage information
• Internal or third-party-based service monitoring tools that periodically check
your PaaS application, as well as third-party web services that monitor your
application (e.g., Nagios monitoring tool)
IaaS Availability Management
Availability considerations for the IaaS delivery model should include both a
computing and storage (persistent and ephemeral) infrastructure in the cloud. IaaS
providers may also offer other services such as account management, a message queue
service, an identity and authentication service, a database service, a billing service,
and monitoring services. Hence, availability management should take into

consideration all the services that you depend on for your IT and business needs.
Customers are responsible for all aspects of availability management since they are
responsible for provisioning and managing the life cycle of virtual servers.
Managing your IaaS virtual infrastructure in the cloud depends on five factors:
• Availability of a CSP network, host, storage, and support application
infrastructure. This factor depends on the following:
— CSP data center architecture, including a geographically diverse and fault-tolerant
architecture.
— Reliability, diversity, and redundancy of Internet connectivity used by the
customer and the CSP.
— Reliability and redundancy architecture of the hardware and software
components used for delivering compute and storage services.
— Availability management process and procedures, including business
continuity processes established by the CSP.
— Web console or API service availability. The web console and API are
required to manage the life cycle of the virtual servers. When those services become
unavailable, customers are unable to provision, start, stop, and deprovision virtual
servers.
— SLA. Because this factor varies across CSPs, the SLA should be reviewed
and reconciled, including exclusion clauses.
• Availability of your virtual servers and the attached storage (persistent and
ephemeral) for compute services.
• Availability of virtual storage that your users and virtual server depend on for
storage service. This includes both synchronous and asynchronous storage access use
cases. Synchronous storage access use cases demand low data access latency and
continuous availability, whereas asynchronous use cases are more tolerant of latency
and availability gaps.
• Availability of your network connectivity to the Internet or virtual network
connectivity to IaaS services. In some cases, this can involve virtual private network
(VPN) connectivity between your internal private data center and the public IaaS
cloud (e.g., hybrid clouds).

IaaS Health Monitoring


The following options are available to IaaS customers for managing the health of their
service:
• Service health dashboard published by the CSP.
• CCID (this database is generally community-supported, and may not reflect
all CSPs and all incidents that have occurred).
• CSP customer mailing list that notifies customers of occurring and recently
occurred outages.
• Internal or third-party-based service monitoring tools (e.g., Nagios) that
periodically check the health of your IaaS virtual servers. For example, Amazon Web
Services (AWS) offers a cloud monitoring service called Amazon CloudWatch, which
provides customers with visibility into resource utilization, operational performance,
and overall demand patterns, including metrics such as CPU utilization, disk reads and
writes, and network traffic.
• Web console or API that publishes the current health status of your virtual
servers and network.

INDUSTRIAL CONNECTIVITY AND LATEST DEVELOPMENT

Grid and cloud computing play a vital role in industry in areas such as the following:
 Collaborative engineering on the cloud
 Real-time data publishing
 Intellectual expertise and optimization services
 Automated data analysis services

B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2016.

Seventh Semester

Computer Science and Engineering

CS 6703 — GRID AND CLOUD COMPUTING

(Common to Seventh Semester Information Technology)


(Regulations 2013)

Time : Three hours Maximum : 100 marks


Answer ALL questions.

PART A — (10 x 2 = 20 marks)

1. Bring out the differences between private cloud and public cloud.

2. Highlight the importance of the term 'cloud computing'.

3. List the requirements of resource sharing in a grid.

4. What are the security concerns associated with the grid?


5. Give the role of a VM.

6. Why do we need a hybrid cloud?

7. Name any four services offered in GT4.

8. What are the advantages of using Hadoop?

9. Mention the importance of Transport Level Security.

10. Discuss the application and use of identity and access management.

PART B — (5 x 16 = 80 marks)

11. (a) Illustrate the architecture of a virtual machine and brief its operations.

Or
(b) Write short notes on:
(i) cluster of cooperative computers.


(ii) service oriented architecture.
12. (a) With a neat sketch, discuss the OGSA framework.

Or
(b) Explain the data intensive grid service models with suitable diagrams.

13. (a) List the cloud deployment models and give a detailed note about each.

Or
(b) Give the importance of cloud computing and elaborate the different types of
services offered by it.

14. (a) Draw and explain the Globus toolkit architecture.

Or
(b) Give a detailed note on Hadoop framework.

15. (a) Explain trust models for grid security environment.

Or
(b) Write in detail about cloud security infrastructure.

B.E./ B.Tech. DEGREE EXAMINATION, APRIL/MAY 2017.

Seventh Semester

Computer Science and Engineering

CS 6703 — GRID AND CLOUD COMPUTING

(Common to Seventh Semester Information Technology)

(Regulations 2013)

Time : Three hours Maximum : 100 marks


Answer ALL questions.

PART A — (10 x 2 = 20 marks)


1. Tabulate the differences between high performance computing and high throughput
computing.
2. Give the basic operations of a VM.
3. What do you understand by the term 'data intensive'?
4. Define "OGSA".
5. Mention the characteristic features of the cloud.
6. Summarize the differences between PaaS and SaaS.
7. Write the significant use of GRAM.
8. Name the different modules in Hadoop framework.
9. What are the various challenges in building the trust environment?
10. Write a brief note on the security requirements of a grid.
PART B — (5 x 16 = 80 marks)
11. (a) Brief the interaction between the GPU and CPU in performing parallel
execution of operations. (16)
Or
(b) Illustrate with a neat sketch, the grid computing infrastructure. (16)
12. (a) Write a detailed note on OGSA security models. (16)
Or
(b) Explain how migrations of grid services are handled. (16)
13. (a) Discuss how virtualization is implemented in different layers. (16)
Or
(b) What do you mean by data centre automation using virtualization? (16)
14. (a) Discuss MAPREDUCE with suitable diagrams. (16)
Or
(b) Elaborate HDFS concepts with suitable illustrations. (16)
15. (a) Write a detailed note on identity and access management architecture. (16)
Or
(b) Explain grid security infrastructure. (16)
