Table of Contents

Chapter 11: Is the Cloud Right for You?

Foreword
Managed Service vs. Cloud Providers
How the Cloud Works
What Is the Public Cloud?
What Is a Private Cloud?
Hybrid Solutions
    Physical-Public Hybrids
    Physical-Private Hybrids
    Physical-Public-Private Hybrids
Virtual Private Clouds
Universal Considerations for Cloud Infrastructure
Recovery as a Service (RaaS)
Cloud for SMBs
Cloud for Large Organizations
Calculating Cost of Downtime
Determining RPO and RTO
Is Cloud Backup and Recovery Right for Your Organization?
How Cloud Backup and Recovery Works
Cloud Security Essentials
Cloud vs. Tape
The Costs of Tape Backup
Off-Site, Rapid Recovery
The Cloud DR Opportunity
Eco-Friendly Incentives for Cloud Computing
Conclusion
Chapter 11
businessContinuityToday.com
The term cloud computing is so generic (and sometimes misused) that it's nearly worthless in a practical discussion. Two aspects of cloud computing that are relevant to the discussion are Infrastructure as a Service (IaaS) and Software as a Service (SaaS). With IaaS, a service provider delivers raw resources, such as virtual machines, storage, and network bandwidth, as a service. With SaaS, a provider layers a specific software solution on top of those raw resources and delivers that. When both IaaS and SaaS are combined in an offering that is specifically designed to provide data protection and disaster recovery, it is referred to as Recovery as a Service, or RaaS, which is discussed later in this chapter.
Like the Internet, the cloud is a network of computers. This network behaves like a collective virtual computer, where the applications can run independently from individual computers or server configurations.
[Diagram: a user connecting to a virtual server in the cloud]
The cloud network is made up of front-end layers and back-end layers. The front-end layers are the ones users see and interact with when, for example, accessing Internet-based email (like Gmail). The back-end is made up of the hardware and software architecture that drives the front-end interface. Because the networked computers work together, applications can take advantage of their combined computing power. Cloud computing also creates tremendous flexibility: depending on demand, resources can be increased or reduced as necessary by reassigning specific hardware.
Hosting providers began renting dedicated servers on an ongoing basis to customers who did not wish to buy their own server resources to co-locate. This idea of renting dedicated servers could be seen as the great-grandfather of the cloud solution set, as information and applications owned by one entity were being run on computer resources owned by another. But this model was still too expensive and inflexible to work on a large scale: the sheer amount of datacenter space, and the costs associated with acquiring, maintaining, and refreshing that large a number of physical servers, made the business model difficult to sustain for all but a few large-scale hosting companies.

The advent of stable, commercially available virtualization solutions allowed the hosted server model to evolve from the rental of physical hardware into the rental of virtual computer resources. Each physical device could now be parceled out to many more customers, reducing the overhead on the co-location provider and allowing for much greater client density per square foot of datacenter space. The problem was that there were still limits to how many virtual machines could be logically managed by native tool sets. Virtual machines (VMs) still had to be created and destroyed manually by the co-location staff, and that kept the model from being flexible enough to expand rapidly.

A few years ago, key players in the space we now know as the public cloud began to roll out a new model of hosted virtualization that broke through those barriers. By writing complex Web-based front-end solutions to the back-end virtualization platforms, many companies were able to let clients create and remove virtual servers and other computer resources themselves, instead of waiting for a co-location employee to perform these actions manually.
Many of these providers went beyond simply offering virtual servers, creating the ability to instantiate storage resources, virtual application connection points, and other technologies that would have been impossible in the world of simple virtual server rental. This brings us to the public cloud as we understand it today: vendors provide access to virtual machines, storage, and computer-resource command and control systems, and organizations use those tools to create, manage, and reallocate resources as required for various projects. In this respect, the public cloud is the combination of those control systems alongside the resources themselves. The defining factor of the public cloud is that these resources and command/control systems are never owned by the organization that rents them; instead, they are owned and maintained by some third-party organization.

Public cloud systems may seem like a panacea for the problems of overcrowded IT facilities, but they do have some drawbacks. First, the data and all computer resources associated with it are housed within a datacenter controlled by some other entity. This could cause security issues for highly
sensitive data (see the section below on private cloud technology). Then there is the fact that, with the exception of completely new systems, data and computer resources do not currently reside in the cloud. You will need some way to transfer the systems and the data resources from their current home into cloud computer resources located at your cloud provider's facility. There are many solutions from various vendors that can help you achieve this goal, so this isn't an insurmountable obstacle, but it is one that must be taken into consideration as you plan your cloud strategy.
Resources can be returned to the overall private cloud pool when they are no longer required. Since the individual business units do not know where their physical servers are located, they no longer require long periods of architecture design to ensure they get the resources they need in a local datacenter. They simply use the front-end tools to request the required computing power, storage, and other resources, and the custom front-end/back-end solutions provision the best combination of virtual and physical resources in the best location for the purpose. This ability to control security while still allowing for true cloud resource allocation makes private clouds an attractive solution for large organizations that require a higher level of security and control than they would otherwise be able to obtain from the public cloud.

That is not to say that private clouds are without their own drawbacks, though. Moving cloud computer resources internally eliminates the native redundancy of most public cloud providers; the public cloud allows for a form of native disaster recovery (DR) just by ensuring that no single computer resource is housed in only one location. Private clouds cannot natively provide this type of redundancy, but they can be outfitted with third-party tools that provide it easily. It becomes a matter of finding and implementing the correct recovery solutions, something that isn't typically necessary for public cloud platforms.

The methodologies of public cloud architecture definitely require an economy of scale. They necessitate large numbers of physical servers to act as virtual hosts, large amounts of server-class storage space (typically in the form of SAN systems), and a great deal of power and cooling to maintain. Also required are appropriate licensing for the virtual infrastructure technologies and a dedicated staff to manage the systems that manage the end-users' solution sets.
When combined with the development costs of the customized command/control interfaces and billing systems, this type of solution becomes cost-prohibitive to all but larger enterprise organizations looking to produce a specific and secure cloud computing platform internally. So, while this model is in use today and does address many security concerns that exist within the public cloud, it is not a solution that is within reach of the average organization looking to leverage cloud solutions.
Hybrid Solutions
As you can see, there are pluses and minuses to both public and private cloud solution sets. In addition to the hurdles inherent to the technology at this time, there is the fact that many systems are on, and will remain on, physical servers. This makes those particular systems incapable of migration to either a public or private cloud, since both of those technology sets rely on virtualization at their core. In order to surmount these obstacles, and to provide some facility for physical servers within an organization, many businesses are looking toward a variety of hybrid solution sets that merge existing technologies with the cloud. They can also use hybrid platforms to merge the public cloud for most applications with a private cloud for high-security application environments, leveraging the best of both worlds.

Physical-Public Hybrids

The most common hybrid approach is to leverage the existing physical resources of the organization to host anything that is not readily suitable for the public cloud, and then contract with a vendor such as Amazon Web Services to provide cloud computing and/or storage resources for everything that can safely be migrated out of the local datacenter. An example would be a financial application with a Web-based front-end. The financial data is tightly controlled by internal and external regulatory compliance measures and therefore would probably not be easily migrated to a public cloud infrastructure. However, the Web-based front-end would not hold sensitive data and could therefore be migrated with much less effort onto a cloud computing or cloud application platform. The appropriate levels of Web-based security, firewalls, and VPN infrastructure could then be applied to ensure that only data that is cleared to leave the datacenter is permitted to travel between the secure facility and the Web systems.
The benefit of this type of hybridization is that the Web systems can be dynamically expanded to meet incoming user demand, while the secure systems can still be tightly controlled without redesigning them to exist within the cloud environment.

Physical-Private Hybrids

In some cases, even the security of a tightened private cloud environment is not suitable for the workloads currently residing in the traditional datacenter. In those cases, where the IT staff wants to gain more flexibility without redesigning site security, a private cloud infrastructure can be established to allow for cloud flexibility within the current datacenter environment.
Medical records are a good example of this type of solution set. Where large-scale health insurers or providers (major hospitals, etc.) require the flexibility of the cloud, they must still be aware of the impact of using external resources to house data bound by HIPAA and other regulations. In many cases, moving to a public cloud infrastructure would require a massive reconfiguration of security protocols and procedures, while establishing a private cloud would allow for flexible infrastructure without physically moving outside the current secure environment.

Physical-Public-Private Hybrids

This solution set is perhaps the most complex of the hybrids, and it's used only by the largest of organizations. The theory behind this technology is that there will be some servers that must remain physical, others that can be virtualized but cannot be placed on public cloud networks, and finally many servers that could easily be adapted to the public cloud. As an example, consider a multi-service insurance conglomerate. Many Web-based solutions are already exposed to public traffic and could therefore take advantage of the increased on-demand scalability of the public cloud computing and storage solutions on the market. In most organizations of this class, a large number of legacy solutions exists, many of which are bound to physical hardware configurations and cannot migrate to a public or private cloud infrastructure at all. Finally, newer solution sets that host highly sensitive data could be virtualized but cannot leave the security confines of the organization's datacenters. Combining all three forms of infrastructure (physical servers and public and private clouds) allows for maximum flexibility for all the various types of workloads and systems that make up the business.
Virtual Private Clouds

A virtual private cloud is simply a public cloud infrastructure that has been security-hardened to permit only recognized traffic streams.
For example, email servers hosted in a virtual private cloud might have strictly limited connectivity for email traffic (SMTP links, etc.) but otherwise speak only to servers within the corporate datacenter via an encrypted tunnel. The theory is very similar to establishing a VPN connection between two sites of the organization, except that here the public cloud is accessed within a specially walled-off section of the cloud computing and storage infrastructure, accessible only to the business unless otherwise specified.
Most public cloud providers do offer the ability to establish VPN connections to your cloud computer resources on their platform, allowing you to focus on moving the data and system information.
With the advent of cloud computing, instead of just crossing its fingers or paying for the hardware, software, space, and staff required for storage, an entire mid-sized corporation can rent enough cloud space to keep a real-time, continuously updated copy of its critical data off-site.
The first step to evaluating the quality of a data backup and recovery plan is to figure out the cost of downtime and evaluate the Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
Here's a simple way to estimate the average cost per hour of downtime:
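The chapter's worksheet does not survive in this excerpt, but a common approach is to sum lost revenue and lost employee productivity. The sketch below illustrates that calculation; the function name and all sample figures are illustrative assumptions, not values from this chapter.

```python
# Rough estimate of the average cost of one hour of downtime.
# Formula: lost revenue + lost employee productivity (both assumed inputs).

def downtime_cost_per_hour(revenue_per_hour, revenue_impact_pct,
                           employees_affected, avg_hourly_wage,
                           productivity_loss_pct):
    """Lost revenue plus lost productivity for one hour of outage."""
    lost_revenue = revenue_per_hour * revenue_impact_pct
    lost_productivity = (employees_affected * avg_hourly_wage
                         * productivity_loss_pct)
    return lost_revenue + lost_productivity

# Example: $10,000/hr revenue, 50% of which depends on the failed system,
# and 200 employees at $40/hr who lose 75% of their productivity.
cost = downtime_cost_per_hour(10_000, 0.50, 200, 40, 0.75)
print(f"${cost:,.0f} per hour")  # $11,000 per hour
```

Even conservative inputs usually make the hourly figure large enough to justify comparing it against the cost of a recovery solution.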
Cloud backup and recovery requires a combination of technologies: backup and recovery software plus a Cloud Service Provider (CSP).
The best backup and recovery software will provide at least four layers of protection.
[Diagram: Double-Take replication from an on-premises DC1 server (domain controller, Active Directory, DNS) and an Exchange server to a backup repository server, with recovery servers for DC1 and Exchange hosted on Amazon EC2 at $90/month + $0.20/GB]
Tape backup has inherent problems that can go quickly from inconvenient to disastrous.
businessContinuityToday.com
The best way to ensure a fast recovery is to have replacement equipment standing by at an off-site location with the necessary software and configuration to quickly transfer users and data. The best practice includes a remote data center with servers, storage, networking equipment, and Internet access. Restoring to this remote data center from backup tapes will likely take too long, assumes that the tapes were not affected by the original problem, and still leaves the risk of recovering only old data. Instead, replication software can be used to keep the backup systems constantly updated.

A four-hour RTO and RPO requires:

- Off-site hardware and infrastructure to run servers and applications
- Data updates to the DR site more often than every four hours, preferably in real time
- Continual updates of the application and OS configuration (without this, recovery may fail after a patch or an upgrade)
- A method to deal with any hardware differences between production and recovery environments
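The data-freshness requirement above can be expressed as a simple check: is the newest data at the DR site younger than the RPO window? This minimal sketch assumes generic logic, not any specific replication product's API.

```python
# Check whether the last replicated update to the DR site still
# satisfies a four-hour Recovery Point Objective (RPO).
from datetime import datetime, timedelta

RPO = timedelta(hours=4)

def rpo_met(last_replication: datetime, now: datetime,
            rpo: timedelta = RPO) -> bool:
    """True if the newest data at the DR site is within the RPO window."""
    return (now - last_replication) <= rpo

now = datetime(2024, 1, 1, 12, 0)
print(rpo_met(datetime(2024, 1, 1, 9, 30), now))  # True  (data 2.5 hours old)
print(rpo_met(datetime(2024, 1, 1, 7, 0), now))   # False (data 5 hours old)
```

Real-time replication keeps this gap close to zero, which is why it is preferred over scheduled updates.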
All the requirements in the list above can be met by currently available technology. If it is clear that local tape or disk-to-disk solutions do not provide adequate protection and a better solution is available, why isn't every server in the world protected? Usually the answer is cost. The cost of an off-site, rapid recovery solution comes in a variety of forms:

- Upfront cost
- Technical complexity (requires new IT specialists, or time and budget to train existing staff)
- Operational complexity (managing a new data center and twice as much equipment)
- Project management (complex, expensive projects require lots of planning and management)
- Risk (expensive, complicated projects sometimes fail)

Given all the cost, complexity, time, and risk involved in creating this capability, these projects are often delayed in favor of projects that produce immediate, obvious results, such as a Web server update or a desktop refresh. For some organizations, particularly larger organizations with large staffs and significant IT expertise, adding extra servers to an existing off-site location is relatively easy. But even in these large organizations, there are still servers that don't make the cut; they are not considered critical enough to justify the solution.

If a server is so unimportant that it won't be missed when it fails, perhaps the next question is "Why not just turn it off?" The point of this off-site, rapid recovery solution is to preserve as much normal operating capability as possible. Customers and business partners don't care that a pipe burst and flooded the data center; they want to know when a business can deliver. If a server is important to meeting a business requirement, it is worth protecting. The question to ask is not "Is this server worth the solution?" but rather "How do we make the solution practical for every server?"

Most of the cost and complexity of this solution comes not from the specialized tools for replication and recovery.
Instead, the pain comes, ironically, from the extra facilities and equipment, both of which will sit relatively idle most of the time. Specifically:

- Selecting, acquiring, and building out a second data center (or the high cost of renting one already configured)
- Selecting, acquiring, installing, and configuring the standby equipment
- Managing and maintaining the facility and equipment
- Integrating all the parts into a reliable solution
This creates a peak-versus-average problem: time and money are spent building a redundant data center that can meet the peak capacity of the IT department, but the average utilization of that capacity will be very low. You pay for peak but get the benefit of only a very low average utilization.

Easy and fast network access, and the introduction of electronic business practices across all industries, have resulted in reliance upon IT systems, and therefore put business operations at risk when IT failures occur. Tape backup was the preferred recovery solution of the 1970s computing era. Disk-to-disk and server-to-server replication is becoming more prevalent because it provides near-real-time copies of data for faster, easier recovery.
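The peak-versus-average point can be illustrated with some rough arithmetic. All figures below are assumptions for illustration, not data from this chapter.

```python
# Illustrative arithmetic for the peak-versus-average problem:
# a standby DR site must be sized for peak load but sits nearly idle.

peak_capacity = 100           # servers provisioned at the standby DR site
hours_per_year = 365 * 24     # 8,760 hours
dr_test_hours = 2 * 8         # two failover tests a year, 8 hours each
failover_hours = 4            # one real 4-hour outage in the year

busy_hours = dr_test_hours + failover_hours
avg_utilization = busy_hours / hours_per_year
print(f"Average utilization: {avg_utilization:.2%}")  # Average utilization: 0.23%
```

Under these assumptions the standby equipment is exercised for only 20 of 8,760 hours a year, which is why sharing that idle capacity is one of the cloud's core economic advantages.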
- Do you provide a mechanism to recover the data/servers without lots of downtime?
- Can I pay for only what I use, or do I need dedicated servers in the cloud?

Once you find a solution that answers those questions to your satisfaction, you can look to protect every server in the infrastructure. It should be so cost-effective that you can just sign up, set it, and forget it. Set a reminder to test failover every six months, and ensure you haven't added any new servers.
Cloud computing can reduce the number of servers in your data center, which may reduce your costs. Where you would typically host your own transactional Web server farms or commerce applications, you can use the cloud to provide those services instead. The cloud frees smaller companies from the burden of a data center, while larger corporations are using the cloud to host less-critical or lower-tiered applications to further reduce their datacenter footprint. Using the cloud can enhance backup and recovery capabilities and reduce the costs typically associated with tape.
Conclusion
The cloud concept still has a long way to go before we can be sure exactly what its definitions, roles, and limitations will be. There is a tremendous amount of promise in public, private, and hybrid cloud platforms, and much of this promise can be seen in the real-world implementations of cloud technology in the market today. Leveraging cloud platforms where they make the most sense is a matter of careful evaluation and proper migration, in much the same way as with most other technology in the corporate organization. The right partners, the right tools, and the right platforms can all work together today to build the data systems that will continue to serve you well into the future.