
Secondary storage is used to protect inactive data written from a primary storage array to a
nonvolatile tier of disk, flash or tape. Secondary storage is synonymous with the terms secondary
memory, auxiliary storage and external storage.

Secondary storage is a trade-off between high performance and economical long-term archiving:
because the data is accessed less frequently, it can be migrated to secondary storage devices with
lower performance and lower cost.

Companies are increasingly placing a second class of storage between primary storage and
archival storage as the foundation for a tiered storage environment.

Secondary storage vs. primary storage


Secondary storage commonly refers to nonvolatile storage devices, such as hard disk drives (HDDs)
and solid-state drives (SSDs), that protect data for disaster recovery or long-term retention.
Optical media, backup tapes and remote archives are common secondary storage technologies.

Secondary storage sits below a company's primary storage tier, and is not under the direct control
of a computer's central processing unit (CPU). Secondary storage devices do not interact directly
with an application.

The purpose of secondary storage is to provide a high-capacity tier, although the data stored is not
immediately accessible. For example, a backup server is capable of storing a vast amount of data,
but getting access to it requires dedicated backup software. Similarly, optical disks and backup
tapes must first be mounted before they can be read.

A backup storage device is a type of secondary storage. Organizations often install multiple
physical backup appliances in at least two locations to ensure data is redundant. The emergence of
the public cloud as a storage tier has allowed some companies to reduce, if not eliminate, the need
for such backup hardware.

Primary storage can refer to the local disks installed inside a server's chassis or to disks in an
external storage array. In computing, primary storage also typically refers to random access
memory (RAM) located near a computer's CPU. This placement reduces the time needed to move data
between storage and the CPU.

Because RAM is volatile, it holds active data sets as long as the computer is connected to a power
source. Secondary storage, by contrast, uses nonvolatile storage devices, such as HDDs and SSDs,
which retain their contents even without power. Nonvolatile storage media is also less expensive
than RAM on a cost-per-gigabyte basis.

Devices used in a secondary storage tier


Secondary storage backs up primary storage by copying data through replication or other data
protection and recovery methods, such as archives and snapshots.
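
As a simple illustration of that copy-based approach, the Python sketch below writes a timestamped,
snapshot-style copy of a primary data directory onto a secondary volume. The mount points are
hypothetical, and real backup tools are far more efficient than a full copy.

    # Illustrative sketch only: copy a primary data directory to a secondary
    # volume as a timestamped, snapshot-style backup. The paths are hypothetical,
    # and real backup tools copy only changed blocks rather than the full data set.
    import shutil
    from datetime import datetime
    from pathlib import Path

    PRIMARY = Path("/mnt/primary/data")          # assumed primary storage mount
    SECONDARY = Path("/mnt/secondary/backups")   # assumed secondary storage mount

    def snapshot_copy() -> Path:
        """Copy the primary data set into a new, timestamped backup directory."""
        stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
        target = SECONDARY / f"data-{stamp}"
        shutil.copytree(PRIMARY, target)
        return target

    if __name__ == "__main__":
        print(f"Backup written to {snapshot_copy()}")
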
External hard drives are portable devices that serve as either secondary computer storage or a
network drive. An external drive attaches to a computer via a standard USB port. Older removable
media, such as floppy disks and USB drives, are most often used by consumers to back up personal
computer storage; newer computer systems no longer support floppy disks.

Enterprises seldom deploy consumer-oriented portable devices as secondary storage due to concerns
about data security and inventory management. They use portable storage devices that integrate
enterprise-class data encryption at the device or cartridge level, which prevents unauthorized
users from gaining access to the data.

Other media used for enterprise secondary storage include disk-based systems or magnetic tape
libraries. Flash media, such as SSDs, can be paired with HDDs in a hybrid flash environment,
such as hyper-converged storage for secondary copy data.

Some all-flash arrays support replication to third-party disk systems for converged data protection
in a tiered storage environment. Due to its comparatively higher cost and lower write endurance,
all-flash storage is rarely used exclusively for secondary data.

In a business environment, an older network-attached storage (NAS) box, storage area network
(SAN) or tape library can potentially serve as secondary storage. More recently, object storage
devices have been used in secondary storage to lessen the demands on primary storage arrays.

Cloud as a secondary storage tier


The rise of the software as a service (SaaS) model expands cloud storage use cases to provide a
secondary or tertiary tier. This is especially true when cloud storage is used for backup and data
archiving.

Cloud-based archiving has emerged as a cost-effective tool to store aged data that rarely changes,
in comparison to primary storage in a server. A secondary storage system internally managed in a
data center is known as a private cloud.

By contrast, data packets shipped via broadband internet pipes to a third-party services provider
reside in a public cloud, such as Amazon Web Services or Microsoft Azure. Companies frequently
choose a hybrid cloud model that keeps some data locally and archives less active data sets in a
public cloud repository.
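
As one hedged example of how less active data can be tiered to a public cloud archive class, the
sketch below uses the AWS boto3 SDK to add a lifecycle rule that transitions objects to an archival
storage class; the bucket name, object prefix and 90-day cutoff are illustrative assumptions.

    # Hedged sketch: add a lifecycle rule that moves less active objects to an
    # archival storage class after 90 days. Bucket, prefix and cutoff are assumptions.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-secondary-archive",             # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-cold-data",
                    "Filter": {"Prefix": "inactive/"},  # only the less active data sets
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 90, "StorageClass": "GLACIER"}  # archive tier after 90 days
                    ],
                }
            ]
        },
    )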

Public cloud storage consumers access data stored on physical servers outside of their own data
center, connecting to it via the internet. This allows data to be accessed from any device, although
customers may incur charges above the monthly cloud subscription for ingress and egress, and for
running operations on the data.

For those reasons, plus lingering concerns about data security and availability, many enterprise
customers take a cautious approach when selecting the public cloud as a secondary target. The SaaS
model allows a company to scale its cloud-based consumption costs based on varying demand.

Why secondary storage is gaining prominence

Due to corporate data growth, storage managers are adopting more secondary storage to reclaim
capacity on primary storage arrays. The ability to maintain older data copies in an easily
accessible form satisfies business and regulatory compliance requirements.

The data in secondary storage is usually older than the data in primary storage, especially if
backups lack policy-driven automation. The term secondary storage sometimes refers to data that
is accessed less frequently than primary or production data.

Several factors have contributed to the growing importance of secondary storage. Rather than
simply parking data for the long term, companies are facing mounting pressure to derive greater
value from it.

The emergence of big data analytics has companies storing more (and larger) data sets. Amid
stepped-up legal requirements, companies are reluctant to delete older data.

Incidents of ransomware and extortionware are rising, fueled by the expanding number of attack
surfaces generated by internet of things devices.

Vendors that specialize in secondary storage


In recent years, storage vendors have shifted their attention to boosting software capabilities
that enable customers to tier secondary and tertiary storage to a cloud, to a backup product or
even to other vendors' storage.

Startups specializing in secondary storage include Cohesity Inc. and Rubrik Inc. Cohesity sells
software on a branded appliance to converge archiving, analytics and data protection on one
platform. Rubrik provides a hardware platform that converges backup, data reduction and version
management.

Customers should consider several requirements for speed and performance when choosing a
secondary storage system, such as data ingestion, restore times, archiving and snapshots. Other
key features revolve around metadata search and reporting capabilities.

WBEM and Bluefin: standards for SAN management

In the next year or so, SAN administrators will begin encountering "WBEM-compliant" or
"Bluefin-compliant" products. Essentially, the terms will mean the same thing: easier and more
effective SAN administration through universal information sharing among storage devices and
systems. You need to know about this new initiative so you're up to speed when you run into it.

WBEM is the acronym for Web-Based Enterprise Management, a set of management information
standards being developed by the Distributed Management Task Force (DMTF) (www.dmtf.org) to
standardize processes and information formats for managing distributed systems. As part of that
process, the WBEM initiative includes the CIM standard data model, which provides a common
format for collecting and managing data for storage devices, among other things.
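
To make that concrete, here is a minimal sketch using the open source pywbem library; the endpoint
address and credentials are placeholders, but CIM_StorageVolume is a standard CIM class, so any
WBEM-capable device should answer the same request in the same format.

    # Hedged sketch: query a WBEM/CIM-capable storage device with pywbem.
    # The endpoint and credentials below are placeholders, not a real array.
    import pywbem

    conn = pywbem.WBEMConnection(
        "https://array.example.com:5989",    # hypothetical WBEM endpoint
        ("admin", "password"),               # placeholder credentials
        default_namespace="root/cimv2",
    )

    # CIM_StorageVolume is defined by the CIM schema, so the request and the
    # shape of the response do not depend on which vendor built the device.
    for volume in conn.EnumerateInstances("CIM_StorageVolume"):
        name = volume.properties.get("ElementName")
        print(name.value if name else "<unnamed volume>")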

Recently, the Storage Networking Industry Association (SNIA) signed on to help produce a
storage-management specification code-named Bluefin, which, it hopes, will provide a single
standardized way to collect and present information for storage management.

The idea is that if storage devices, switches and other parts of a storage network present
information on themselves and their performance in a standardized fashion, it will be easier for
storage-management software to collect and analyze it. Currently storage devices are a tower of
Babel, with most companies collecting and presenting the information in their own way. That
means that the management-software companies have to write many different interfaces just to
cover the most popular products in the SAN market.

Although WBEM shares goals with the Simple Object Access Protocol (SOAP) backed by Microsoft and
the World Wide Web Consortium (W3C), it is more comprehensive. SOAP is essentially a standardized
way of passing messages among the components of a system; in that respect, it is closer to CIM,
which is only one part of WBEM. WBEM offers a complete framework for managing storage, including
a data model and standard definitions and services for devices.

Today, WBEM for storage is something more than pie in the sky and something less than shipping
products. At this year's Storage Networking World conference, SNIA members demonstrated storage
management using CIM in the interoperability lab. Currently, the focus of the Bluefin effort is to
create a transition plan for incorporating WBEM into SNIA's technical and marketing efforts. The
plan is due to be completed this month.

On the plus side, however, DMTF has been working on standards such as CIM for over a decade and
has made considerable progress on WBEM over the last several years. Also, the need for such a
standard is clearly enormous, and the major storage players, such as EMC and IBM, are clearly
feeling the pressure from their customers. As a result, the Bluefin standard is almost certain to
be fast-tracked, and reports in the trade press indicate we will see the first compliant products
within a year.

For SNIA's take on CIM, see the white paper "Managing Fabrics Using CIM -- The Evolution" on
the SNIA site.
