
Volume 13 | November 2009

Data Center Efficiency and Design

Green Power Protection spinning into a Data Center near you
Isolated-Parallel UPS Systems Efficiency and Reliability?
Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems

facility corner

electrical
4  Isolated-Parallel UPS Systems Efficiency and Reliability?
   By Frank Herbener & Andrew Dyke, Piller Group GmbH, Germany
   In today's data center world of ever increasing power demand, the scale of mission critical business dependent upon uninterruptible power grows ever more. More power means more energy and the battle to reduce running costs is increasingly fierce.

mechanical
8  Optimizing Air Cooling Using Dynamic Tracking
   By John Peterson, Mission Critical Facility Expert, HP
   Dynamic tracking should be considered as a viable method to optimize the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.

cabling
11 Prepare Now for the Next-Generation Data Center
   By Jaxon Lang, Vice President, Global Connectivity Solutions Americas, ADC
   Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave--one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.

Engineering and Design Spotlights

14 Green Power Protection spinning into a Data Center near You
   By Frank DeLattre, President, VYCON
   Flywheel energy storage systems are gaining strong traction in data centers, hospitals, industrial and other mission-critical operations where energy efficiency, costs, space and environmental impact are concerns. This green energy storage technology is solving sophisticated power problems that challenge computing operations every day.

18 Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems
   By Jim Davis, business unit manager, Eaton Power Quality and Control Operations
   While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution.

22 Data Center Efficiency: It's in the Design
   By Lex Coors, Vice President Data Center Technology and Engineering Group, Interxion
   Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part.

IT corner

25 Online backup or cloud recovery?
   By Ian Masters, UK Sales & Marketing Director, Double-Take Software
   There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an on-line account) to restore.

26 Five Best Practices for Mitigating Insider Breaches
   By Adam Bosnian, VP Marketing, Cyber-Ark Software
   Mismanagement of processes involving privileged access, privileged data, or privileged users poses serious risks to organizations. Such mismanagement is also increasing enterprises' vulnerability to internal threats that can be caused by simple human error or malicious deeds.

All rights reserved. No portion of DATA CENTER Journal may be reproduced without written permission from the Executive Editor. The management of DATA CENTER Journal is not responsible for opinions expressed by its writers or editors. We assume that all rights in communications sent to our editorial staff are unconditionally assigned for publication. All submissions are subject to unrestricted right to edit and/or to comment editorially.

AN EDM2R ENTERPRISES, INC. PUBLICATION | ALPHARETTA, GA 30022
PHONE: 678-762-9366 | FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM
DESIGN: NEATWORKS, INC | TEL: 678-392-2992 | WWW.NEATWORKSINC.COM



ITOps

28 Energy Measurement Methods for the Data Center
   By Info-Tech Research Group
   Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other bucket, and by tracking both, IT can understand this relationship.

education corner

30 Common Mistakes in Existing Data Centers & How to Correct Them
   By Christopher M. Johnston, PE and Vali Sorell, PE, Syska Hennessy Group, Inc.
   After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them.

Your turn

32 Technology and the economy
   By Ken Baudry
   An article from our Experts Blog

Vendor Index
Holis-Tech ................................. Inside Front
www.holistechconsulting.com
MovinCool ..................................... pg 1
www.movincool.com
PDU Cables .................................. pg 3
www.pducables.com
Piller ............................................... pg 7
www.piller.com
Server Tech ................................... pg 9
www.servertech.com
Snake Tray .................................... pg 10
www.snaketray.com
Binswanger ................................. pg 13
www.binswanger.com/arlington
Upsite ............................ pgs 19, 21, 23
www.upsite.com
Universal Electric ....................... pg 20
www.uecorp.com
Sealeze .......................................... pg 22
www.coolbalance.biz
AFCOM ........................................... pg 24
www.afcom.com
7x24 Exchange ............................ pg 27
www.7x24exchange.org
Info-Tech Research Group ....... pg 29
www.infotech.com/measureit
Data Aire ....................................... Back
www.dataaire.com

Calendar

November
November 15 - November 18, 2009
7x24 Exchange International 2009 Fall Conference
www.7x24exchange.org/fall09/index.htm

December
December 1 - December 10, 2009
Gartner 28th Annual Data Center Conference 2009
www.datacenterdynamics.com

December 2 - December 3, 2009
KyotoCooling Seminar: The Cooling Problem Solved
www.kyotocooling.com/KyotoCooling%20Seminars.html



facility corner
electrical
Isolated-Parallel UPS Systems
Efficiency and Reliability?
Frank Herbener & Andrew Dyke, Piller Group GmbH, Germany

In today's data centre world of ever increasing power demand, the scale of mission critical business dependent upon uninterruptible power grows ever more. More power means more energy and the battle to reduce running costs is increasingly fierce. Optimizing system efficiency without compromise in reliability seems like an impossible task. Or is it?

A parallel redundant scheme usually provides N+1 redundancy to boost reliability but suffers from single points of failure, including the output paralleling bus, and the scheme is limited to around 5 or 6MVA at low voltages. The whole system is not fault tolerant and is difficult to concurrently maintain.

A System + System approach can overcome the maintenance and fault tolerance issues but suffers from a very low operating point on the efficiency curve. Like the parallel redundant scheme, it too is limited in scale at low voltages.

An isolated or distributed redundant scheme can be employed to tackle all these problems, but such schemes introduce additional requirements such as essential load sharing management and static transfer switches for single corded loads.

The Isolated-Parallel (IP) rotary UPS system eliminates the fundamental drawbacks of conventional approaches to provide a highly reliable, fault tolerant, concurrently maintainable and, yes, highly efficient solution.

                               Parallel    System +    Isolated    Distributed   Isolated-
                               Redundant   System      Redundant   Redundant     Parallel
Fault tolerant                 No          Yes         Yes         Yes           Yes
Concurrently maintainable      No          Yes         Yes         Yes           Yes
Load management required       No          No          Yes         Yes           No
Typical UPS module
loading (max)                  85%         50%         100%*       85%           94%
Reliability order (1 = best)   5           1           4           3             2

* One module is always completely unloaded.

Table 1 Comparison of UPS scheme topologies.

IP System Configuration

The idea [1] of an IP system is to use a ring bus structure with individual UPS modules interconnected via 3-phase isolation chokes (IP chokes). Each IP choke is designed to limit fault currents to an acceptable level at the same time as allowing sufficient load sharing in the case of a module output failure. Load sharing communications are not required and the scale of low voltage systems can be greatly increased.

Figure 1 Isolated-Parallel System

Load sharing

In normal operation each critical load is directly supplied from the mains via its associated UPS. In the case that the UPSs are all equally loaded there is no power transferred through the IP chokes. Each unit independently regulates the voltage on its output bus.

In an unbalanced load condition each UPS still feeds its dedicated load, but the units with resistive loads greater than the average load of the system receive additional active power from the lower loaded UPSs via the IP bus (see Figure 2). It is the combination of the relative phase angles of the UPS output busses and the impedance of the IP choke that controls the power flow. The relative phase angles of the UPS must be naturally generated in correlation to the load level in order to provide the ability of natural load sharing among the UPS modules without the necessity of active load sharing controls.


Figure 2 Example of load sharing in an IP system consisting of 16 UPS Modules

The influence of the IP choke should also be considered: with all UPS modules having the same output voltage, the impedance of the IP choke inhibits the exchange of reactive current, so that reactive power control is also not necessary.

Looking at the mechanisms of natural load sharing in an IP system, it is obvious that a normal UPS bypass operation would significantly disturb the system. So, if the traditional bypass operation is not allowed in an IP system, what will happen in case of a sudden shutdown of a UPS module? To say "absolutely nothing" would be slightly exaggerated, but "almost nothing" is reality.

Figure 3 Example of redundant load supply in the case one UPS fails.

The associated load is still connected to the IP bus via the IP choke, which now works as a redundant power source. The load will automatically be supplied from the IP bus without interruption. In this mode, each of the remaining UPS modules equally feeds power into the IP bus (Figure 3). There is no switching activity necessary to maintain supply to the load.

An additional breaker between the load and the IP bus allows connection of the load directly to the IP bus, enabling the isolation of the faulty UPS under controlled conditions.

UPS Topology

The most suitable UPS topology to achieve the aforementioned load dependent phase angle in a natural way is a rotary or diesel rotary UPS with an internal coupling choke, as shown in Figure 4.

Figure 4 IP system using Piller UNIBLOCK T Rotary UPS with bi-directional energy store.
1 Utility bus
2 IP bus
3 IP bus (return)
4 Rotary UPS with flywheel energy store
5 Load bus
6 IP choke
7 Transfer breaker pair (bypass)
8 IP bus isolation breakers

Note that a UPS module without a bi-directional energy store (e.g. battery or induction coupling) can be used, but the system is likely to exhibit lower stability under transient conditions.

Fault isolation

There are two fault locations that must be evaluated: a) the IP bus itself and b) the load side of each UPS.

a) A fault on the IP bus is the most critical because it results in the highest local fault currents. The fault is fed in parallel by each UPS connected to the IP bus but limited by the sub-transient reactance of the UPS combined with the impedance of its IP choke. This means that the effect on the individual UPS outputs is minimized and the focal point remaining is the fault withstand of the IP ring itself.



b) A fault on the load side of a UPS is mostly fed by the associated UPS, limited by its sub-transient reactance only. A current from each of the non-affected UPSs is fed into the fault too, but because there are two IP chokes in series between the fault and each of the non-affected UPSs, this current contribution is very much smaller. As a result, the disturbance at the non-affected loads is very low. This, in combination with the high fault current capability of rotary UPSs, ensures fast clearing of the fault while effectively isolating the fault from the other loads.
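The limiting roles of the machine reactance and the IP chokes can be sketched numerically. The following is a minimal per-unit illustration; the reactance values are assumptions chosen for the example, not Piller design data.

```python
# Simplified per-unit estimate of fault current contributions in an IP ring.
# All numbers are invented for illustration; real values depend on the
# machines and chokes actually installed.

V = 1.0                 # per-unit system voltage
X_SUBTRANSIENT = 0.15   # per-unit sub-transient reactance of one UPS (assumed)
X_IP_CHOKE = 0.35       # per-unit reactance of one IP choke (assumed)
N_MODULES = 16          # modules feeding the ring

# a) Fault on the IP bus: each UPS feeds it through X" plus one IP choke.
i_per_module = V / (X_SUBTRANSIENT + X_IP_CHOKE)
i_ring_total = N_MODULES * i_per_module

# b) Fault downstream of one UPS: every other module sees two IP chokes
#    in series, so its contribution is much smaller.
i_remote_module = V / (X_SUBTRANSIENT + 2 * X_IP_CHOKE)

print(f"a) per-module contribution to an IP bus fault: {i_per_module:.1f} pu")
print(f"   total ring fault current: {i_ring_total:.1f} pu")
print(f"b) remote-module contribution to a load-side fault: {i_remote_module:.1f} pu")
```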

Figure 5 Example of a fault current distribution in case of a short circuit on the load side of UPS #2

Control

The regulation of voltage, power and frequency plus any synchronization is done by the controls inside each UPS module. The UPS also controls the UPS related breakers and is able to synchronize itself to different sources. Each system is controlled by a separate system control PLC, which operates the system related breakers and initializes synchronization processes if necessary. The system control PLC also remotely controls the UPS regarding all operations that are necessary for proper system integration. Redundant master control PLCs are used to control the IP system in total. Additional pilot wires interconnecting the system controls allow safe system operation in the improbable case that both master control PLCs fail.

Modes of operation

In case of a mains failure each UPS automatically disconnects from the mains and the load is initially supplied from the energy storage device of the UPS. From this moment on, the load sharing between the units is done by a droop function based on a power-frequency characteristic which is implemented in each UPS. No load sharing communication between the units is required. After the Diesel engines are started and engaged, the loads are automatically transferred from the UPS energy storage device to the Diesel engine so the energy storage can be recharged and is then available for further use.
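To picture how a power-frequency droop characteristic shares load without any communications, here is a minimal numeric sketch. The 1% droop slope and the module rating are assumed example values, not Piller parameters.

```python
# Minimal power-frequency droop illustration: each UPS lowers its output
# frequency slightly as its load rises, so units converge on a shared
# operating point without exchanging any load-sharing messages.
# Slope and rating are assumed example values.

F_NOMINAL = 50.0   # Hz
DROOP = 0.01       # 1% frequency droop at full load (assumed)

def droop_frequency(load_kw: float, rating_kw: float) -> float:
    """Output frequency of one unit at a given load."""
    return F_NOMINAL * (1.0 - DROOP * load_kw / rating_kw)

# One module class at three different instantaneous loads:
for load in (400.0, 900.0, 1400.0):
    print(f"load {load:6.0f} kW -> {droop_frequency(load, 1500.0):.3f} Hz")

# The more heavily loaded unit runs at a slightly lower frequency; the
# resulting phase-angle difference across the IP chokes transfers power
# from lightly loaded units to heavily loaded ones until loads equalize.
```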
To achieve proper load sharing also in Diesel operation, each Diesel engine is independently controlled by its UPS, whether the engine is mechanically coupled to the generator of the UPS (DRUPS) or an external Diesel generator (standby) is used. A special regulator structure inside the UPS, in combination with the bi-directional energy storage device, allows active frequency and phase stabilization while keeping the load supplied from the Diesel engine.

The retransfer of the system to utility is controlled by the master control. The UPS units are re-transferred one by one, thereby avoiding severe load steps on the utility. After the whole system is synchronized and the first UPS system is reconnected to utility, the load sharing of those UPS systems which are still in Diesel operation cannot be done by the regular droop function. To overcome this, Piller Group GmbH invented and patented Delta-Droop-Control (DD-Control). This allows proper load sharing under this condition without relying on load sharing communications. With the implementation of DD-Control in the UPS modules, all UPS systems can be reconnected to utility step by step until the whole IP system is in mains operation once more. This removes another problem in large scale systems: that of step-load re-transfer to utility after mains failure.

Maintainability

The IP bus system is probably the simplest (high reliability) system to concurrently maintain because the loads are independently fed by UPS sources and these sources can readily be removed from and returned to the system without load interruption. Not only that, but the ring bus can be maintained, as can the IP chokes, also without load interruption. All the other solutions with similar maintainability (System + System, Isolated and Distributed redundant) have far greater complexity of infrastructure, leading to more maintenance and increased risk during such operations.

Projects

The first IP system was realized in 2007 for a data center in Ashburn, VA. It consists of two IP systems, each equipped with 16 Piller UNIBLOCK UBT 1670kVA UPSs with flywheel energy storage (total installed capacity > 2 x 20MW at low voltage). Each of the UPSs is backed up by a separate Diesel generator of 2810kVA, which can be connected directly to the UPS load bus and which is able to supply both the critical and the essential loads. Since the success of this first installation, three more data centers have been commissioned, of which the first phase of one is complete (a further 20MW) as of today.

There are further projects planned at medium voltage, and a configuration combining the benefits of the IP system with the energy efficiency of natural gas engines is planned by consulting engineers.

Conclusion

In the form of an IP bus topology, a UPS scheme that combines high reliability with high efficiency is possible.

High reliability is obtained by virtue of the use of rotary UPSs (with MTBF values in the region of 3-5 times better than static technology), combined with the elimination of load sharing controls, no mode switching under failure conditions, load fault isolation and simplified maintenance.

High efficiency can be obtained with such a high reliability system because of the ability to simulate the System + System fault tolerance without the penalty of low operating efficiencies. A 20MW design load can run with modules that are 94% loaded and yet offer a reliability that is similar to the S+S scheme, which has a maximum module loading of just 50%. That can translate into a difference in UPS electrical efficiency of 3 or 4%. That means a potential waste in operating costs of $750,000 per year (ignoring additional cooling costs).
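That $750,000 figure is easy to sanity-check. The article does not state an electricity price, so the rate below is an assumption; with it, a 3.5% efficiency gap on a 20MW load lands in the quoted region.

```python
# Back-of-envelope check on the quoted operating-cost penalty.
# Only the load and the efficiency gap come from the article; the
# electricity price is an assumed value.
load_mw = 20.0
efficiency_gap = 0.035   # midpoint of the quoted 3-4% difference
price_per_kwh = 0.12     # assumed $/kWh

wasted_kw = load_mw * 1000 * efficiency_gap
annual_cost = wasted_kw * 8760 * price_per_kwh
print(f"{wasted_kw:.0f} kW wasted -> ${annual_cost:,.0f} per year")
# ~700 kW -> about $736,000 per year, in line with the article's $750,000.
```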
What's more, the solution is not only concurrently maintainable and fault tolerant with high reliability and high efficiency, but can also be realized at either low or medium voltages and can be implemented with DRUPS, separate standby diesel engines or even gas engines for super-efficient large scale facilities.

For complete information on the invention and history of IP systems, refer to the Piller Group GmbH paper by Frank Herbener entitled "Isolated-Parallel UPS Configuration" at www.piller.com.



What do the following
organizations all have
in common?
3M | ABB | ABN Amro | Abovenet | ADP | AEG | Airbus | Alcan | Alcatel | Aldi | Allianz | Alstom | Altair | AMD | Anz Bank | AOL | Areva | Astra Zeneca | AT & T | Australian Stock Exchange |
Australian Post | Aviva | Bahrain Financial Harbour | Banca d'Italia | Banco Bradesco | Banco Santander | Bank of America | Bank of England | Bank of Hawaii | Bank of Morocco | Bank of Scotland
| Bank Paribas | Barclays | BASF | Bayer | BBC (British Broadcasting Corporation) | BP (British Petroleum) | BICC | Black & Decker | BMW | Bosch | Bouygues Telecom | BA (British Airways) | BG
(British Gas) | BT (British Telecom) | British Civil Service | British Government | Bull Computer | CAA (Civil Aviation Authority) | Canal+ | Capital One | Channel 4 (USA) | Channel 4 (UK) | Chase |
Chevron | Chinese Army | Chinese Navy | Chrysler | Citigroup | Central Intelligence Agency (CIA) | Commerzbank | Conoco | Credit Lyonnais | Credit Mutuel | Credit Suisse | CSC | Daimler Benz |
Danish Intelligence Service | Danish Bank | Danish Radio | Dassault | De Beers | Degussa | Dell Computer | Deutsche Bank | Deutsche Bundesbank | Deutsche Post | Disney | Dow Jones | Dresdner
Bank | DuPont | Dutch Military | EADS | EADS Hamburg | EASYNET | EDF | EDS | Eli Lilly | ESAT Telecom | European Patent Office | European Central Bank | Experian | Federal Reserve Bank |
FedEX | First National Bank | First Tennesee Bank | Ford Motor | France Telecom | French Airforce | French Army | Friends Provident | Fujitsu | GCHQ (British Government Communications Head
Quarters) | Girobank | GlaxoSmithKline | GUS (General United Stores) | Heidelberger | Hewlett Packard | Hitachi | HSBC | Hynix | Hyundai | IBM | ING Bank | Intel | IRS | Iscor | J P Morgan | John
Deere | Knauf | Knorr | Kodak | Lafrage | Linde | Lindsey Oil | Lloyds of London | Lockheed | Los Alamos National Laboratories | Lottery Vienna | Lottery Copenhagen | LSE (London Stock Exchange)
| Marks & Spencer | MBNA | Mercedes Benz | Merrill Lynch | MOD (British Ministry of Defence) | Morgan Grenfell | Morgan Stanley | Motorola | NASA | NASDAQ | National Grid (British) | National
Semiconductor | Natwest Bank | Nestlé | Nokia | Nuclear Elektric (Germany) | NYSE (New York Stock Exchange) | NYSE Euronext | Pfizer | Philips | Phillip Morris | Porsche | Proctor & Gamble |
Putnam Investments | Qantas | QVC | Rank Xerox | Raytheon | RBS | Reuters | Rolls Royce | Royal Bank of Canada | Royal & Sun Alliance | RWE | Samsung | Scottish Widows | Sharp | Shell |
Siemens | Sky | Sony | Sony Ericsson | Sweden Television | TelecityGroup | Thyssen Krupp | T-Mobile | Union Bank of Switzerland | United Biscuit | United Health | Verizon | VISA | VW *

* The above is an extract of Piller installations and is by no means exhaustive.

They all rely on data centers protected by Piller.
When it comes to power protection, leading organizations don't take chances. Time after time the world's leading organizations select Piller power protection systems to safeguard their data centers. Why? Because there is no higher level of data center power protection available!
What's more, Piller offers the most cost effective and the greenest through-life investment available. So, if you are planning major data center investment and would like to know more about why the world's leading organisations trust their data center power protection to Piller, contact us today.
datacenterprotect@piller.com

Nothing protects quite like Piller


ROTARY UPS SYSTEMS | STATIC UPS SYSTEMS | STATIC TRANSFER SWITCHES | KINETIC ENERGY STORAGE | AIRCRAFT GROUND POWER SYSTEMS | FREQUENCY CONVERTERS | NAVAL POWER SUPPLIES | SYSTEM INTEGRATION

Piller Group GmbH
Abgunst 24, 37520 Osterode, Germany
T +49 (0) 5522 311 0
E datacenterprotect@piller.com
www.piller.com

A Langley Holdings Company
Piller Australia Pty. Ltd. | Piller France SAS | Piller Germany GmbH & Co. KG | Piller Italia S.r.l. | Piller Iberica S.L.U. | Piller Power Singapore Pte. Ltd. | Piller UK Limited | Piller USA Inc.
facility corner
mechanical
Optimizing Air Cooling Using
Dynamic Tracking
By John Peterson, Mission Critical Facility Expert, HP

Dynamic tracking should be considered as a viable method to optimize the effectiveness of cooling
resources in a data center. Companies using a dynamic tracking control system benefit from reduced
energy consumption and lower data center costs.

One of the most challenging tasks of running a data center is managing the heat load within it. This requires balancing a number of factors including equipment location adjacencies, power accessibility and available cooling. As high-density servers continue to grow in popularity along with in-row and in-rack solutions, the need for adequate cooling in the data center will continue to grow at a substantial rate. To meet the need for cooling using a typical under-floor air distribution system, a manager often adjusts perforated floor tiles and lets the nearest Computer Room Air Conditioner (CRAC) unit react as necessary to each new load. However, this may cause a sudden and unpredictable fluctuation in the air distribution system due to changes in static pressure and air rerouting to available outlets, which can have a ripple effect on multiple units. With new outlets available, air, like water, will seek the path with less resistance; the new outlets may starve existing areas of cooling, causing the existing CRAC units to cycle the air faster. This becomes a wasteful use of fan energy, let alone fluctuations of cooling load energy allocation.

Most managers understand that the air supply plenum needs to be a totally enclosed space to achieve pressurization for air distribution. Oversized or unsealed cutouts allow air to escape the plenum, reducing the static pressure and effectiveness of the air distribution system. Cables, conduits for power and piping can also clog up the air distribution path, so thoughtful consideration and organization should be an essential part of the data center operations plan. However, even the best laid plan can still end up with areas that are starved for cooling air.

In a typical layout, there are rows of computer equipment racks that draw cool air from the front and expel hot air at the rear. This requires an overall footprint larger than the rack itself (Figure 1).

Figure 1: Overall footprint needed per rack

When adding new data center equipment, data center managers need to manage unpredictable temperatures and identify a new perfect balance of how many perforated tiles to use and where to locate them. They involve maintenance personnel to adjust CRAC units, assist with tile layouts, and even possibly add or relocate the units as necessary. Due to the predetermined raised floor height, supply air temperature and humidity necessities, the volatile air distribution system becomes an inflexible piece of the overall puzzle, at the expense of energy and possibly performance due to inadequate cooling.

Meanwhile, the CRAC units are operating at variable rates to meet this load, but mostly they are operating at their maximum capacity instead of as-needed. Why? One reason is where the air temperature is measured. Each unit is operating on the return air temperature measured at the unit, and all units are sharing the same return air. This means that if the load is irregular in the racks, the units simply cool for the overall required capacity. Apply this across a data center, and the units are generally handling the cooling load without altering their flow based on changes happening in any localized area, which consequently allows that large variance of temperatures in the rows.

Temperature discrepancy is the main concern for most data center managers. They would like the air system not to be the limiting factor when adding new equipment to racks and prefer to remove the variable of fickle air cooling from the equation of equipment management. At the same time, almost behind the scenes, facility costs from cooling are increasing to match the new load, driving the need for more efficient use of existing resources.



How Do You Measure the Energy Efficiency of Your Data Center?

With Sentry Power Manager (SPM) and Sentry POPS (Per Outlet Power Sensing) CDUs!

Sentry POPS: Measure and monitor power information per outlet, device, application, or cabinet using the Web-based CDU interface.

Sentry Power Manager: Secure software solution to:
- Monitor, manage & control multiple CDUs
- Alarm management, reports & trending of power info
- ODBC-compliant database for integration into your Building Management or other systems
- kW & kW-h IT power billing and monitoring information per outlet, device, application, cabinet or DC location

Sentry Power Manager features: Enterprise Cabinet Power Management, Reports & Trends, Device Monitoring, Groups & Clusters, Kilowatt Readings for Billing, Auto-Discovery of Sentry CDUs, Alarms.

Sentry POPS Switched CDU with Device Monitoring: Rack Level Power Management, Outlet Power Monitoring (POPS), Input Power Monitoring, Environmental Monitoring, Outlet Groups, Alarms.

Solutions for the Data Center Equipment Cabinet

Server Technology, Inc. | 1040 Sandhill Drive, Reno, NV 89521 USA
tf +1.800.835.1515 | tel +1.775.284.2000 | fax +1.775.284.2065
www.servertech.com | www.servertechblog.com | sales@servertech.com
Sentry is a trademark of Server Technology, Inc.
Dynamic tracking systems can help transform the air distribution and energy use within a data center, and should be considered as a viable solution to handle variable and complex heat loads. The ability of dynamic tracking to reduce energy use and preserve data center flexibility makes it a promising driver of optimization.

A Gartner report shows that over 63% of respondents to a recent survey indicated that they rely on air systems to cool their data center over liquid cooling.[1] Of those same respondents nearly 45% shared that they are facing insufficient power which will need to be addressed in the near future.[2] As IT managers are able to correct their power constraints they are able to deploy a more demanding infrastructure and subsequently will require additional power and cooling.

Dynamic Tracking

Although the air flow in a data center is complex, an opportunity now exists to optimize the effectiveness of cooling resources and better manage the air system within the data center. There are ways to monitor air temperatures within each row of cooling, and even the temperature entering a specific rack at a particular height. From these temperatures, an intelligent system can react to meet the need for cooling air at that location, eliminating the work of juggling floor tiles and guessing at the air flow.

How is this done? To begin with, the temperature is measured differently. A number of racks are mounted with sensors that measure the supply air temperature at the front of the rack. This information is relayed to a central monitoring system that responds accordingly by adjusting the CRAC units. The units then function as a team and not independently, meeting specific needs as monitored in real time by the sensors. Since the temperature is tracked from the source and adjusts based on real time needs, this method of measurement and control is sometimes referred to as dynamic tracking.

In the initial setup of dynamic tracking, the intelligent control system tests and learns which areas of the data center each CRAC unit affects. Then, the units are tested together, and the control system modulates them to provide the most uniform distribution within the constraints of the layout and room architecture. This data allows the air system to gather intelligence on how to compensate for leaks and barriers in the plenum. From there, the system knows how the units interact, and can intelligently judge how to respond to changes within the data center. It is also able to rebalance when one of the units fails or is being serviced.

To prevent a large fluctuation, the temperatures are measured over an extended
period of time and temperature is adjusted
depending on the cooling needs of the space.
The CRAC units respond based on the his-
tory of how each unit has affected the specific
area. The overarching intelligence of the dy-
namic tracking control system gauges wheth-
er an increase in temperature is sustained or
a series of momentary heat spikes and adjusts
itself accordingly. This prevents units from
cycling out of control from variables such as
human error, short peak demands, and sud-
den changes in load.
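The "sustained rise versus momentary spike" behavior can be pictured as a smoothing filter in front of the CRAC setpoint decision. The sketch below uses an exponential moving average with a deadband; HP's actual control algorithm is not published in this article, so all thresholds, gains and names here are illustrative assumptions.

```python
# Conceptual sketch of spike-tolerant supply-air control: react to the
# smoothed trend of a rack-inlet sensor, not to instantaneous readings.
# All thresholds and gains are invented for illustration.

SETPOINT_F = 68.0   # target rack-inlet temperature (assumed)
DEADBAND_F = 1.5    # ignore deviations smaller than this
ALPHA = 0.1         # smoothing factor: small value = long memory

def crac_adjustment(readings_f, smoothed_f=SETPOINT_F):
    """Yield an airflow trim (-1.0..1.0) for each new sensor reading."""
    for t in readings_f:
        # Exponential moving average of the sensor stream.
        smoothed_f = ALPHA * t + (1 - ALPHA) * smoothed_f
        error = smoothed_f - SETPOINT_F
        if abs(error) < DEADBAND_F:
            yield 0.0   # a momentary spike never moves the average enough
        else:
            yield max(-1.0, min(1.0, error / 10.0))  # proportional trim, clamped

# A short heat spike barely moves the average; a sustained rise does.
spike = [68, 68, 75, 68, 68, 68]
sustained = [68, 70, 72, 74, 76, 78]
print(list(crac_adjustment(spike)))       # all zeros
print(list(crac_adjustment(sustained)))   # trim ramps up at the end
```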
Once installed, a dynamic tracking
system can show how the CRAC units have
operated in the past and how they are cur-
rently performing. Most of the time, the units
operate at less than peak conditions, which is
an opportunity to increase energy efficiency
and create significant savings. Also, if the
units can measure and meet the load more
closely, the cost savings carry directly over to
the mechanical cooling plant as well.
Dynamic tracking systems can help
transform the air distribution and energy
use within a data center, and should be
considered as a viable solution to handle
variable and complex heat loads. The ability of dynamic tracking to reduce energy use and preserve data center flexibility makes it a promising driver of optimization.

[1] "Power & Cooling Remain the Top Data Center Infrastructure Issues," Gartner Research, February 2009
[2] "Power & Cooling Remain the Top Data Center Infrastructure Issues," Gartner Research, February 2009



facility corner
cabling
Prepare Now for the
Next-Generation Data Center
by Jaxon Lang, Vice President, Global Connectivity Solutions Americas, ADC

Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave--one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.

The 2009 edition of the annual Cisco Visual Networking Index predicts that the overall volume of Internet Protocol (IP) traffic flowing across global networks will quintuple between 2008 and 2013, with a compound annual growth rate (CAGR) of 40 percent. During that same period, business IP traffic moving on the public Internet will grow by 31 percent, according to the Cisco study, while enterprise IP traffic remaining within the corporate WAN will grow by 36 percent.
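Those two headline numbers are mutually consistent, as a quick check shows:

```python
# Consistency check on the Cisco projection: 40% compound annual growth
# over the five years from 2008 to 2013 is roughly a fivefold increase.
growth = 1.40 ** 5
print(f"Traffic multiple after 5 years at 40% CAGR: {growth:.2f}x")  # ~5.38x
```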
Faced with this looming challenge, data center managers know they must prepare now to deploy the solutions necessary to accomplish three tasks: transmit this deluge of information, store it and help lower total cost of ownership (TCO). Specifically, within the next five to seven years, they will need:
- more bandwidth
- faster connections
- more and faster servers and
- more and faster storage

Today's data center operations account for up to half of total costs over the life cycle of a typical enterprise and retrofits make up another 25 percent. Managers want solutions that boost efficiencies immediately while also making future upgrades easier and more affordable.

Among the technologies that promise to provide these solutions are 40 and 100Gbps Ethernet (GbE); Fibre Channel over Ethernet (FCoE); and server virtualization. Because they directly affect the infrastructure, these technologies will require new approaches to cabling and connectors; higher fiber densities; higher bandwidth performance; and more reliable, flexible and scalable operations. Although managers want to deploy technologies that will satisfy their future requirements, they also want to determine to what extent they can leverage their existing infrastructures to meet those needs. As they do so, many are discovering there are strategies available today that can help them achieve both goals.

40GbE and 100GbE Are Coming

Although most data centers today run 10GbE between core devices, and some run 40GbE via aggregated 10GbE links, they inevitably will need even faster connections to support high-speed applications, new server technologies and greater aggregation. In response, the Institute of Electrical and Electronics Engineers (IEEE) is developing a standard for 40 and 100GbE data rates (IEEE 802.3ba).

Scheduled for ratification next year, the standard addresses multimode and singlemode optical-fiber cabling, as well as copper cabling over very short distances (10 meters, as of publication date). It is helpful to examine the proposed standard and then look at various strategies for evolving the data center accordingly. Currently, IEEE 802.3ba specifies the following:

Multimode Fiber

Running 40 GbE and 100 GbE will require:
1) multi-fiber push-on (MPO) connectors
2) laser-optimized 50/125 micrometer (µm) optical fiber and
3) an increase in the quantity of fiber: 40 GbE requires six times the number of fibers needed to run 10 GbE, and 100 GbE requires 12 times that amount.

MPO Connectors

A single MPO connector, factory-preterminated to multi-fiber cables purchased in predetermined lengths, terminates up to 12 or 24 fibers. 40-GbE transmission up to 100 meters will require parallel optics, with eight multimode fibers transmitting and receiving at 10 Gbps, using an MPO-style connector. Running 100 GbE will require 20 fibers, each transmitting and receiving at 10 Gbps, within a single 24-fiber MPO-style connector.

To achieve 10-GbE data rates for distances up to 300 meters, some managers have used MPO connectors to install laser-optimized multimode fiber cables, either ISO 11801 Optical Mode 3 (OM3 or 50/125 µm) or OM4 (50/125 µm) fiber cables. Thus they already have taken an important step to prepare for 40 and 100GbE transmission rates. Working with their vendors, they can retrofit their 12-fiber MPO connectors to support 40 GbE. It may even be possible to achieve 100GbE rates by creating a special patch cord that combines two of those 12-fiber MPO connectors. Although the proposed standard specifies 100 meters for 40 and 100GbE (a departure from 300 meters for 10GbE), the vast majority of data center links currently cover 55 meters or less.

Those who are not using MPO-style connectors today may have options other than forklift upgrades for achieving 40 and 100GbE data rates. Initially, most data center managers will only run 40 and 100GbE on a select few circuits--perhaps 10 percent or 20 percent. So, depending on when they will need more bandwidth, they can begin to deploy MPO-terminated, laser-optimized, multimode fiber cables and evolve gradually.

High-performance Cabling

Compliance with the proposed standard will require a minimum of OM3 laser-optimized 50 µm multimode fiber with reduced insertion loss (2.0dB link loss) and minimal delay skew. As noted earlier, managers who cap their investments in OM1 (62.5/125 µm)



and OM2 (standard 50/125 µm) cabling now and install high-performance cabling and components going forward can position the data center for eventual 40GbE and 100GbE requirements.

Much More Fiber

Running a 10GbE application requires two fibers today, but running a 40GbE application will require eight fibers, and a 100GbE application will require 20 fibers. Therefore, it is important to devise strategies today for managing the much higher fiber densities of tomorrow. Managers must determine not only how much physical space will be required but also how to manage and route large amounts of fiber in and above racks.
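These counts follow directly from the lane structure described above: each 10 Gbps lane needs one transmit and one receive fiber. A small sketch of that arithmetic, including what it implies for the 12- and 24-fiber MPO trunks (note that an MPO trunk may carry more fibers than the link actually lights; the helper name here is illustrative):

```python
# Fiber-count arithmetic for parallel-optics Ethernet, per the lane
# structure quoted in this article (simplified illustration).
import math

def fibers_needed(lanes: int) -> int:
    """One transmit plus one receive fiber per 10 Gbps lane."""
    return lanes * 2

for name, lanes in (("10GbE", 1), ("40GbE", 4), ("100GbE", 10)):
    f = fibers_needed(lanes)
    print(f"{name}: {lanes} lane(s) -> {f} active fibers "
          f"({math.ceil(f / 12)} x MPO-12 or {math.ceil(f / 24)} x MPO-24)")
# 40GbE lights 8 of an MPO-12's fibers; 100GbE lights 20 of an MPO-24's,
# which is why the article counts whole-trunk fiber as 6x and 12x of 10GbE.
```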
Singlemode Fiber

Running 40GbE over singlemode fiber will require two fibers transmitting at 10Gbps over four channels using coarse wavelength division multiplexing (CWDM) technology. Running 100GbE with singlemode fiber will require two fibers transmitting at 25Gbps over four channels using LAN wave division multiplexing (WDM).

Although using WDM to run 40GbE and 100GbE over singlemode fiber is ideal for long distances (up to 10 km) and extended reach (up to 40 km), it probably will not be the most cost-effective option for the data center's shorter (100-meter) distances. As the industry finalizes the standard and vendors introduce equipment, managers will have a window of time in which to evaluate the evolving cost differences among singlemode, multimode and copper cabling solutions for both 40GbE and 100GbE.

Typically, the elapsed time between the release of a standard and the point at which the price of associated electronics comes down to a cost-effective level is about five years. For example, the cost of the first 10GbE ports, which emerged right after the standard was adopted in 2002, was roughly $32,000; today, that same port costs about $2,000. If 40GbE and 100GbE ports follow that pattern, managers who already have adopted an MPO connectorization strategy will have until about 2015 to plan for and actually implement the upgrades necessary to access the faster technologies.
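The quoted price history implies a fairly steady decline rate, which is what the five-year rule of thumb rests on. The years and prices are the article's; the smooth-decline assumption is mine:

```python
# Implied annual price decline for 10GbE ports, 2002 -> 2009, assuming a
# smooth exponential decline between the two quoted prices.
start_price, end_price, years = 32_000, 2_000, 7
annual_decline = 1 - (end_price / start_price) ** (1 / years)
print(f"Implied decline: {annual_decline:.0%} per year")  # roughly 33%
```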
Managers who have not opted for MPO connectors but have invested in OM3 multimode fiber that satisfies the length requirements nevertheless may be able to devise a migration path. They could work with vendors to create a cord that combines 12 LC-type connectors into an MPO. However, they would have to test the site for length, insertion loss and delay skew to ensure compliance with the 802.3ba standard.

Copper

The proposed standard specifies the transmission of 40GbE and 100GbE over short distances of copper cabling, with 10Gbps speeds over each lane--four lanes for 40GbE and 10 lanes for 100GbE. Not intended for backbone and horizontal cabling, the use of copper probably will be limited to very short distances for equipment-to-equipment connections within or between racks.

Fibre Channel over Ethernet (FCoE) Boosts Storage

Because of Fibre Channel's reliability and low latency, most managers use it today for high-speed communications among their SAN servers and storage systems. Yet because they rely on Ethernet for client-to-server or server-to-server transmissions, they have been forced to invest in parallel networks and interfaces, which obviously increase costs and create management headaches.

In response, the industry has developed a new standard (ANSI FC-BB-5) which combines Fibre Channel and Ethernet data transmission onto a common network interface, basically by encapsulating Fibre Channel frames within Ethernet data packets. FCoE allows data centers to use the same cable for both types of transmission and delivers significant benefits, including better server utilization; fewer required ports; lower power consumption; easier cable management; and reduced costs.

To most cost effectively deploy FCoE, managers may opt to use top-of-rack switches, rather than traditional centralized switching, to provide access to existing Ethernet LANs and Fibre Channel SANs. Although the top-of-rack approach reduces the amount of cabling, it requires more flexible, manageable operations, simply because managers will have to reconfigure each rack. In addition, 40GbE and 100GbE require a higher-speed cabling medium.

As they try to devise workable, affordable strategies for deploying FCoE, managers must take into account several factors. First, they have some time to move to FCoE. Current FCoE deployment rates are less than 5 percent of storage ports sold. The emerging technologies of 40 GbE and 100 GbE certainly make FCoE more enticing.

FCoE can be a two-step approach. Initially, the current investment in Fibre Channel-based equipment (disk arrays, servers and switches) can continue to be utilized. As FCoE equipment becomes more cost effective and readily available, a wholesale change can be made at that time.

FCoE becomes possible due to the advent of Data Center Bridging (DCB), which enhances Ethernet to work in data center environments. By deploying the electronics that support FCoE, which overlays Fibre Channel on top of Ethernet, managers can eliminate the need for, and costs of, parallel infrastructures; reduce the overall amount and costs of required cabling; and reduce cooling and power-consumption levels. If they also begin to invest now in the OM3/OM4-compliant cabling for 40GbE and 100GbE, managers will position their data centers for a smooth upgrade to FCoE-based equipment.
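Conceptually, the encapsulation is just a Fibre Channel frame carried as the payload of an Ethernet frame tagged with the FCoE EtherType (0x8906). A toy sketch of that layering; field handling is deliberately simplified and this is not a spec-complete frame builder:

```python
# Toy illustration of FCoE layering: a Fibre Channel frame rides inside an
# ordinary Ethernet frame. Real FCoE adds version bits, reserved fields and
# SOF/EOF delimiters; this sketch only shows the basic idea.
import struct

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

fc_payload = b"\x00" * 36  # stand-in for a real Fibre Channel frame
frame = fcoe_frame(b"\x01" * 6, b"\x02" * 6, fc_payload)
print(f"{len(frame)} bytes on the wire, EtherType 0x{FCOE_ETHERTYPE:04X}")
```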
Server Virtualization Presents Its Own Issues

By running multiple virtual operating systems on one physical server, managers are tackling several challenges: accommodating the space constraints created by more equipment; reducing capital expenditures by buying fewer servers; improving server utilization; and reducing power and cooling consumption. Currently, virtualization consolidates applications on one physical server at a ratio of 4:1, but that could increase to 20:1. So many applications running on one server obviously require much greater availability and significantly more bandwidth.

Server virtualization means that downtime limits access to multiple applications. To provide the necessary redundancy, managers are deploying a second set of cables. The additional bandwidth needed to support increased data transmission to and from the servers will require additional services, which, in turn, will demand still more bandwidth. While virtualization theoretically reduces the number of servers and cabling volumes, the redundancy needed to support virtualization, in fact, means the data center needs more cabling.

The Drive to Reduce TCO

Although technologies such as FCoE and server virtualization are aimed at reducing TCO, the overall increase in data requirements and equipment is putting a tremendous strain on power, cooling and space requirements. As a result, every enterprise today tries to balance the need to deploy new technologies with the need to reduce TCO. To do so, data center operators are looking for solutions that can handle changing configuration requirements and reduce energy consumption--which inevitably will rise as more equipment comes online.

By devising migration strategies that protect existing investments and simultaneously prepare for the deployment of new, high-speed technologies, managers can enhance the capabilities, scalability and reliability of the data center. In the process, they can reduce TCO through more efficient operations and reduced power consumption.



Dallas/Fort Worth Metroplex
Fort Worth, 14 miles | Dallas, 22 miles

Spectacular 441,362 sq. ft. high-tech complex on 21 acres in Arlington, Texas
Building A: 375,000 sq. ft. | Building B: 51,400 sq. ft. | Building E: 9,130 sq. ft. | 71 acres

- 50 acres available for expansion
- Ex-semiconductor site; low risk, low power costs
- Significant power to site
- Ceiling heights and floor loadings well-suited to data use
- 5,860 tons of chiller capacity
- 4,930 KW emergency generator capacity
- 2.1 million gallon per day water capacity
- Plant systems include UPS, bulk gas, DI water system, compressed air plant, PCW plant and waste treatment plant
- Electric power: 40.8 megawatts, or 20.4 megawatts per feed
- Approximately 91,000 sq. ft. of high-quality office space
- Ideally located in the heart of the Dallas/Fort Worth Metroplex, minutes to I-20 and 25 minutes to DFW Airport

For complete details contact:
BINSWANGER
1200 Three Lincoln Centre, 5430 LBJ Freeway, Dallas, TX 75240
972-663-9494 | Fax: 972-663-9461 | E-mail: hdavis@binswanger.com
Worldwide coverage: www.binswanger.com/arlington



Green Power Protection spinning into a Data Center near You
By Frank DeLattre, President, VYCON



Keeping critical operations, especially computer networks and other vital process applications, up and running during power disturbances has been most commonly handled by uninterruptible power systems (UPSs) and stand-by generators. Whether depending on centralized or distributed power protection, batteries used with UPS systems have been the typical standard due primarily to their low cost. However, when one is looking to increase reliability and deploy green initiatives, toxic lead-acid batteries are not the best solution. Frequent battery maintenance, testing, cooling requirements, weight, toxic and hazardous chemicals and disposal issues are key concerns. Making matters worse, one dead cell in a battery string can render the entire battery bank useless: not good when you're depending on your power backup system to perform when you need it most. Every time the batteries are used (cycled), even for a split second, the more likely it is that they will fail the next time they are needed.

Today, data center and facility managers have many considerations to evaluate when it comes to increasing energy efficiencies and reducing one's carbon footprint. The challenge becomes how to implement green technologies without disrupting high nines of availability and achieve a low total cost of ownership (TCO). This challenge becomes even more crucial when looking at the power protection infrastructure.

Clean Backup Power

Flywheel energy storage systems are gaining strong traction in data centers, hospitals, industrial and other mission-critical operations where energy efficiency, costs, space and environmental impact are concerns. This green energy storage technology is solving sophisticated power problems that challenge computing operations every day. According to the Meta Group, the cost of downtime can average a million dollars per hour for a typical data center, so managers can't afford to take any risks. Flywheels used with three-phase double-conversion UPS systems provide reliable mission-critical protection against costly transients, harmonics, voltage sags, spikes and blackouts.

A flywheel system can replace lead-acid batteries used with UPSs and works like a dynamic battery that stores energy kinetically by spinning a mass around an axis. Electrical input spins the flywheel rotor up to speed, and a standby charge keeps it spinning 24/7 until called upon to release the stored energy (Fig. 1). The amount of energy available and its duration is proportional to its mass and the square of its revolution speed. Specific to flywheels, doubling mass doubles energy capacity, but doubling rotational speed quadruples energy capacity:

E = k M ω²

where k depends on the shape of the rotating mass, M is the mass of the flywheel and ω is its angular velocity.

Fig. 1 Flywheel Cutaway
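A small numeric illustration of that square-law relationship follows. It uses the standard kinetic-energy form E = ½Iω² with inertia I = cMr², where c plays the role of the article's shape factor; the rotor mass, radius and speed are invented round numbers, not VYCON specifications.

```python
# Kinetic energy stored in a flywheel: E = 0.5 * I * omega^2, with
# I = c * M * r^2 (c is the shape factor the article calls "k").
# Rotor values below are invented round numbers for illustration.
import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm, shape=0.5):
    """Stored energy in kWh; shape=0.5 corresponds to a solid disk."""
    inertia = shape * mass_kg * radius_m ** 2
    omega = rpm * 2 * math.pi / 60  # convert rpm to rad/s
    return 0.5 * inertia * omega ** 2 / 3.6e6

base = flywheel_energy_kwh(300, 0.25, 30_000)
print(f"baseline: {base:.2f} kWh")
print(f"double the mass:  {flywheel_energy_kwh(600, 0.25, 30_000) / base:.1f}x")
print(f"double the speed: {flywheel_energy_kwh(300, 0.25, 60_000) / base:.1f}x")
# Doubling mass doubles stored energy; doubling speed quadruples it.
```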


During a power event, the flywheel provides backup power seamlessly and instantaneously. What's nice is that it's not an either/or situation, as the flywheel can be used with or without batteries. When used with batteries, the flywheel is the first line of defense against damaging power glitches: the flywheel absorbs all the short-duration discharges, thereby reducing the number and frequency of discharges that would otherwise shorten the life of the battery. Since UPS batteries are the weakest link in the power continuity scheme, flywheels paralleled with batteries give data center and facility managers peace of mind that their batteries are safeguarded against premature aging and unexpected failures. When the flywheel is used just with the UPS and no batteries, the system will provide instant power to the connected load exactly as it would do with a battery string. However, if the power event lasts long enough to be considered a hard outage (rather than just a transient outage), the flywheel will gracefully hand off to the facility's engine-generator. It's important to know that according to the Electric Power Research Institute (EPRI), 80 percent of all utility power anomalies/disturbances last less than two seconds and 98 percent last less than ten seconds. In the real world, the flywheel energy storage system has plenty of time for the Automatic Transfer Switch (ATS) to determine if the outage is more than a transient and to start the generator and safely manage the hand-off.

Fig. 2 Power protection scheme with UPSs, batteries and flywheel

Shining Light on Real World Experience

SunGard Data Center

SunGard, one of the world's leading software and IT services companies, serving more than 25,000 customers in more than 70 countries, first tried out flywheels in their data centers three years ago to see how they would perform over a period of time.

"The driver for utilizing flywheels is to reduce the life-cycle cost and maintenance requirements when installing large banks of batteries. In addition, the space savings by using flywheel and less batteries means lower construction costs and allows the optimum space utilization," commented Karl Smith, Head of Critical Environments for SunGard Availability Services. Today, SunGard's legacy data centers still have batteries, but as it becomes necessary to replace the batteries, they plan to reduce the number of strings of batteries and complement them with a string of flywheels. For future data center builds, SunGard is planning to have a combination of short run time batteries in parallel with a bank of flywheels.

Beating the Clock

Many users are under a false sense of security by having 10 or 15 minutes of battery run time. They assume that if the generator does not start they will have a chance to correct the issue. It is true that batteries provide much longer ride-through time, but the most important ride-through time is in the first 30 seconds. "We don't need much more than this to have our stand-by generators come on line. In most cases, our generators are on-line and loads are switched over in 30 to 40 seconds. The flywheels are our first line of defense, but should we need a few extra minutes to get a redundant generator on-line, then the battery can be utilized," said Smith. Having the flywheels discharge first means the batteries are not discharged in normal operation, thus their life can be extended.

In various industry studies such as the IEEE Gold Book, genset start reliability for critical and non-critical applications was measured at 99.5%. For applications where the genset is tested regularly and maintained properly, reliability substantially increases. When the genset fails to start, 80% of the time it is because of failure of the battery being used to start the generator. Just monitoring or adding a redundant starting system can remove 80% of the non-start issues.

Fig. 3 Lifecycle costs of batteries vs. flywheels. Battery costs are based on a 4-year replacement cycle.


It Pays to be Green

The latest flywheel designs sold by world leaders in 3-phase UPS systems (Figure 4) take advantage of higher speeds and full magnetic levitation, packing more green energy storage into a much smaller footprint and removing any kind of bearing maintenance requirements. As shown in Figure 3, over a 20-year design lifespan, cost savings from a hazmat-free flywheel versus a 5-minute valve-regulated lead-acid (VRLA) battery bank are in the range of $100,000 to $200,000 per flywheel deployed.

These figures (Figure 3) are based on a typical installation of a 250kVA UPS using 10-year design life VRLA batteries housed in a cabinet. The yearly maintenance for the batteries is based on a recommended quarterly check on the battery health to have some predictability on their availability. Moreover, these figures don't include floor space or cooling cost savings that can be achieved by using the flywheel energy storage vs. batteries.
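The shape of the Figure 3 comparison can be reproduced with a simple cumulative-cost model. Every dollar figure below is a placeholder assumption, not VYCON or vendor pricing; only the 20-year horizon and the 4-year battery replacement cycle come from the article.

```python
# Sketch of the lifecycle-cost comparison behind Figure 3: batteries cost
# less up front but are replaced on a cycle and need quarterly checks.
# All dollar figures are placeholder assumptions.

YEARS = 20
battery  = {"capital": 40_000, "annual_maintenance": 4_000, "replace_every": 4}
flywheel = {"capital": 80_000, "annual_maintenance": 1_000, "replace_every": None}

def lifecycle_cost(item):
    cost = item["capital"] + YEARS * item["annual_maintenance"]
    if item["replace_every"]:
        # Replacements within the horizon, excluding a swap in the final year.
        cost += (YEARS // item["replace_every"] - 1) * item["capital"]
    return cost

b, f = lifecycle_cost(battery), lifecycle_cost(flywheel)
print(f"battery bank: ${b:,}  flywheel: ${f:,}  savings: ${b - f:,}")
# With these assumptions the savings land inside the article's
# $100,000-$200,000 per-flywheel range.
```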
based on budget, space available and environmental considerations. In
Batteries' Unpredictable Failures
While UPS systems have long used banks of lead-acid batteries to provide the energy storage needed to ride through a power event, batteries are, as stated earlier, notoriously unreliable. In fact, according to the Electric Power Research Institute (EPRI), "Batteries are the primary field failure problem with UPS systems." Predicting when one battery in a string of dozens will fail is next to impossible, even with regular testing and frequent individual battery replacements. The truth is that engineering personnel don't test them as often as they should, and may not have the testing and monitoring systems in place to do so properly. Because flywheel systems are electromechanical devices, they can constantly self-monitor and report, assuring the user that they are ready for use or advising of the need for service. This is nearly impossible to accomplish in a chemically based system. Every time a battery is used, it becomes less responsive to the next event. Batteries generate heat, and heat reduces battery life: if operated 10°F above their optimum setting of 75°F, the lifespan of lead-acid batteries is cut in half. If operated at colder temperatures, chemical reactions are slowed and performance is affected. Batteries can also release explosive gases that must be ventilated away.
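The halving rule just described is easy to turn into a quick derating check. The sketch below applies it to the 10-year design life batteries used in the cost figures above; the function and its inputs are illustrative, not a manufacturer's derating curve.

def expected_battery_life(rated_life_years, avg_temp_f):
    # Life is cut in half for every 10 F of operation above the 75 F optimum.
    # Below 75 F, performance (not life) suffers, so no credit is taken.
    excess = max(0.0, avg_temp_f - 75.0)
    return rated_life_years / (2 ** (excess / 10.0))

for temp_f in (75, 85, 95):
    print(f"{temp_f} F -> {expected_battery_life(10, temp_f):.1f} years")
# 75 F -> 10.0, 85 F -> 5.0, 95 F -> 2.5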
Battery reliability is always in question. Are they fully charged? Has a cell gone bad in the battery string? When was the last time they were tested? Some facility managers resist testing their batteries, as the battery test in itself depletes battery life. By contrast, flywheel systems provide reliable energy storage instantaneously, assuring a predictable transition to the standby genset.

Benefits of Flywheel Technology
From 40kVA to over a megawatt, flywheel systems are increasingly being used to assure the highest level of power quality and reliability for mission-critical applications. The flexibility of these systems allows a variety of configurations that can be custom-tailored to achieve the exact level of power protection required by the end user, based on budget, available space and environmental considerations. In any of these configurations, the user will ultimately benefit from the many unique benefits of flywheel-based systems:

n No cooling required
n High power density - small footprint
n Parallel capability for future expansion and redundancy
n Fast recharge (under 150 seconds)
n 99% efficiency for reduced operating cost
n No special facilities required
n Front access to the flywheel eliminates space issues and opens up installation site flexibility in support of future operational expansions and re-arrangements
n Low maintenance
n 20-year useful life
n Simple installation
n Quiet operation
n Wide temperature tolerance (-4°F to 104°F)

Flywheels today comply with the highest international standards for performance and safety, including those from UL and CE. Some units, like those from VYCON, incorporate a host of advanced features that make the systems easy to use, maintain and monitor: self-diagnostics, log files, adjustable voltage settings, an RS-232/485 interface, alarm status contacts, soft-start precharge from the DC bus and push-button shutdown. Available options include a DC disconnect, remote monitoring, Modbus and SNMP communications and real-time monitoring software.
Data center managers throughout the U.S. and around the world are evaluating technologies that will increase overall reliability while reducing costs. While the highest level of nines is the first requirement, being environmentally friendly is certainly an added bonus. By enhancing battery strings or eliminating them altogether with the use of flywheels, managers take one more step in greening their facilities and lowering TCO. n

Fig 4. VYCON's VDC Flywheel Energy Storage System paired with Eaton's three-phase double-conversion UPS.



Powering Tomorrows Data Center:
400V AC versus 600V AC Power Systems
By Jim Davis, business unit manager, Eaton Power Quality and Control Operations

A growing demand for network bandwidth and faster, fault-free data processing has driven an
exponential increase in data center energy consumption, a trend with no end in sight.

Industry reports show that data center energy costs as a percent of total revenue are at an all-time high, and data center electricity consumption accounts for almost 0.5 percent of the world's greenhouse gas emissions. As a result, data center managers are under pressure to maximize data center performance while reducing cost and minimizing environmental impact, making data center energy efficiency critical.
According to a 2007 Frost & Sullivan survey of 400 information technology (IT) and facilities managers responsible for large data centers, 78 percent of respondents indicated that they were likely to adopt more energy-efficient power equipment in the next five years, a solution that's often less costly and more quickly and easily implemented than data virtualization or cooling systems.
While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution. However, today's 480V AC power distribution systems, standard in most U.S. data centers and IT facilities, are not optimized for efficiency. Of the several alternative power distribution systems currently available, 400V AC and 600V AC systems are generally accepted as the most viable. While both have been proven reliable in the field, conform to current National Electrical Code (NEC) guidelines, and can be easily deployed into existing 480V AC infrastructure, there are important differences in efficiency and cost that must be carefully weighed.
This article offers a quantitative comparison of 400V AC and 600V AC power distribution configurations at varying load levels using readily available equipment, taking into account the technology advancements and installation and operating costs that drive total cost of ownership (TCO).

The traditional U.S. data center power system
In most U.S. data centers today, after power is received from the electrical grid and distributed within the facility, the UPS ensures a reliable and consistent level of power and provides seamless backup power protection. Isolation transformers step down the incoming voltage to the utilization voltage, and power distribution units (PDUs) feed the power to multiple branch circuits. The isolation transformer and PDU are normally combined in a single PDU component, many of which are required throughout the facility. Finally, the server or equipment internal power supply converts the utilization voltage to the specific voltage needed. Most IT equipment can operate at multiple voltages. Losses through the UPS, the isolation transformer/PDU and the server equipment produce an overall end-to-end efficiency of approximately 76 percent.
Data center efficiency is often evaluated using the efficiency ratings of the server and IT equipment alone. Despite recent advances in energy management and server technology, maximum efficiency can be achieved only by taking a holistic view of the power distribution system. Each component impacts the end-to-end cost and efficiency of the system. The entire system must be optimized in order for the data center to fully realize the efficiency gains offered by new server technologies.

Figure 1: End-to-end efficiency in the 400V AC power distribution system
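Because the stages sit in series, the end-to-end figure is simply the product of the individual stage efficiencies. The sketch below reproduces the approximate totals quoted in this article; the per-stage values are illustrative assumptions, not Eaton's measured data.

def end_to_end(*stages):
    # Multiply the efficiency of each series stage (UPS, transformer/PDU, PSU).
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Traditional 480V path: UPS -> isolation transformer/PDU -> server supply
print(f"480V AC: {end_to_end(0.92, 0.96, 0.86):.0%}")   # ~76%
# 400V path: auto-transformer -> UPS -> server supply (no PDU transformer)
print(f"400V AC: {end_to_end(0.99, 0.94, 0.86):.0%}")   # ~80%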



The 400V AC power system
The 400V AC power distribution model offers a number of advantages in terms of efficiency, reliability and cost, as compared to the 480V AC and 600V AC models. In a 400V system, the neutral is distributed throughout the building, eliminating the need for PDU isolation transformers and delivering 230V phase-neutral power directly to the load. This enables the system to perform more efficiently and reliably, and offers significantly lower overall cost by omitting multiple isolation transformers and branch circuit conductors.
Figure 1 shows that losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 80 percent.

The 600V AC power system
The 600V AC power system, while offering certain advantages over both the 480V AC and 400V AC systems, carries inherent inefficiencies that make it an impractical solution for most U.S. data centers. The 600V AC system offers a small equipment cost savings over the 480V AC and 400V AC systems, requiring less copper in the feeder wiring and carrying lower currents, which reduces energy cost.
In unique circumstances where larger data centers deploy multi-module parallel redundant UPS systems, 600V AC power equipment can support more modules on a single 4000A switchboard than a 400V AC system can, allowing data center managers to add a small amount of extra capacity at a nominal cost and with no increase in footprint.
With 600V AC power, however, the distribution system requires multiple isolation transformer-based PDUs to step down the incoming voltage to the 208/120V AC utilization voltage, adding significant cost and reducing overall efficiency. Some UPS vendors create a 600V AC UPS using isolation transformers in conjunction with a 480V AC UPS, reducing efficiency even further.
As shown in Figure 2, losses through the UPS, the isolation transformer/PDU, and the server equipment produce an overall end-to-end efficiency of approximately 76 percent, comparable to the efficiency of today's traditional 480V AC power distribution system.

Comparing total cost of ownership
TCO for the power distribution system is determined by adding capital expenditures (CAPEX), such as equipment purchase, installation and commissioning costs, and operational expenditures (OPEX), which include the cost of electricity to run both the UPS and the cooling equipment that removes heat resulting from the normal operation of the UPS.
The end-to-end efficiency of the 400V AC power distribution system is 80 percent versus 76 percent efficiency in the 600V AC system, with both systems running in conventional double conversion mode. The 400V AC system's higher efficiency drives significant OPEX savings over the 600V AC system, substantially lowering the data center's TCO both in the first year of service and over the 15-year typical service life of the power equipment.
To further reduce OPEX, many UPS manufacturers offer high-efficiency systems that use various hardware- and software-based technologies to deliver efficiency ratings between 96 and 99 percent, without sacrificing reliability.

Figure 2: End-to-end efficiency in the 600V AC power distribution system
The Energy Saver System is a new offering that enables select new and existing UPSs to deliver industry-leading 99 percent efficiency, even at low load levels, while still providing total protection for critical loads. With this technology, the UPS operates at extremely high efficiency unless utility power conditions force the UPS to work harder to maintain clean power to the load. The intelligent power core continuously monitors incoming power conditions and balances the need for efficiency with the need for premium protection, to match the conditions of the moment.
When high-efficiency UPS systems are deployed, losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 84 percent.

400V AC Powers Ahead
The 400V AC power distribution system's lower equipment cost and higher end-to-end efficiency deliver significant CAPEX, OPEX and TCO savings as compared to the 600V AC system. The 400V AC system running in conventional double conversion mode offers an average 10 percent first-year TCO savings and an average 5 percent TCO savings over its 15-year service life, as compared to the 600V AC system. When running the 400V AC UPS in high-efficiency mode, the first-year TCO savings increase to 16 percent, and the 15-year TCO savings increase to 17 percent, minimizing data center cost in terms of both CAPEX and OPEX.
In CAPEX investment alone, the 400V AC configuration offers an average 15 percent savings over the 600V AC configuration for all system sizes analyzed. The 400V AC system's lower CAPEX gives data center managers a more cost-effective solution for expanding data center capacity. The systems analyzed produced an average annual OPEX savings of 4 percent with the 400V AC system running in double conversion mode, and 17 percent when running in high-efficiency mode. OPEX savings rates are linear across all system sizes, indicating that savings will continue to increase in direct proportion to the size of the system.
Therefore, the 400V AC power distribution system offers the highest degree of electrical efficiency for modern data centers, significantly reducing capital and operational expenditures and total cost of ownership as compared to 600V AC power systems. Recent developments in UPS technology, including the introduction of transformerless UPSs and new energy management features, further enhance the 400V AC power distribution system for maximum efficiency.
This conclusion is supported by IT industry experts who theorize that 400V AC power distribution will become standard as U.S. data centers transition away from 480V AC to a more efficient and cost-effective solution over the next one to four years. n

About The Author:
Jim Davis is a business unit manager for Eaton's Power Quality and Control Operations Division. He can be reached at JimRDavis@eaton.com. For more information about the 400V UPS power scheme, visit www.eaton.com/400volt.

Chart 1: 15-year TCO (400V AC Energy Saver System vs. 600V AC double conversion mode)
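To make the TCO mechanics concrete, the sketch below prices the grid energy needed to push a constant IT load through each distribution chain and adds an assumed CAPEX. The CAPEX figures and the 1 MW load are hypothetical; the efficiencies are this article's end-to-end numbers, and the tariff is an assumption matching the 2007 US commercial average cited elsewhere in this issue. Note how the four-point efficiency gap alone yields roughly the 4-5 percent annual OPEX savings described above.

KWH_PRICE = 0.097        # $/kWh (assumed; 2007 US commercial average)
HOURS_PER_YEAR = 8760

def annual_opex(it_load_kw, end_to_end_eff):
    grid_kw = it_load_kw / end_to_end_eff   # input power needed to serve the load
    return grid_kw * HOURS_PER_YEAR * KWH_PRICE

def tco(capex, it_load_kw, eff, years=15):
    return capex + years * annual_opex(it_load_kw, eff)

# Hypothetical 1 MW IT load: 400V at 80% end-to-end vs. 600V at 76%.
print(f"400V 15-yr TCO: ${tco(850_000, 1000, 0.80):,.0f}")
print(f"600V 15-yr TCO: ${tco(1_000_000, 1000, 0.76):,.0f}")
saving = 1 - annual_opex(1000, 0.80) / annual_opex(1000, 0.76)
print(f"Annual OPEX saving: {saving:.0%}")   # ~5%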

Data Center Efficiency
It's in the Design
By Lex Coors, Vice President Data Center Technology and Engineering Group, Interxion
Most companies undergoing data center projects have the mindset of cutting costs rather than helping the environment; however, they may want to adjust their focus. With data center greenhouse emissions set to overtake those of the airline industry in the next five to ten years, quadrupling by 2020, it has never been more critical for organizations to optimize their data centers.

If the cost savings are half as great for data centers as they have been for the airline industry, we will need to fasten our seatbelts.
Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part. The first step for an existing data center to achieve high(er) efficiencies is to improve its Power Usage Effectiveness (PUEenergy) ratio. PUEenergy ratios can be used as a guide to define a data center's efficiency or green credentials and have become the de facto metric in the past year.
A data center that has a low PUEenergy of 1.5, implements lean design and has established measurement data with demonstrable year-on-year improvements can be classified as green or energy efficient. The dream green data center would have a PUEenergy of one, which means that every watt of power in the transformer is delivered directly to the IT equipment without any losses in the site infrastructure. Unfortunately, this is not physically possible, as some infrastructure services, such as cooling, always have energy losses (for the time being).
However, an inefficient data center is recognized as anything with a PUEenergy of greater than two. These are generally based on legacy equipment, not built in a modular way and/or not operated well. So, how does an organization go about optimizing data center efficiency and improving its PUEenergy?
For organizations to reduce their PUE, they need to have an active focus on the following three areas: external efficiency, internal efficiency and customer efficiency. They need to benchmark their PUE ratios against industry standards set by the likes of the Uptime Institute, the Green Grid and the European Code of Conduct.
Although PUEenergy has been adopted by the industry sector, institutions and government bodies alike as an agreed way to measure the energy overhead of a data center, it may distract us from the ultimate goal: A LOWER TOTAL DATA CENTER ENERGY USE AT A LOW PUEenergy.
If PUE in power or energy were the only benchmark indicator for governments to decide the relative energy efficiency of data centers, and in turn how best to apply a carbon tariff, then many data center owners might decide to switch on servers that were previously earmarked to meet peaks in demand. This in reality would mean lower PUEenergy ratios but a higher total energy usage, which defeats the original objective and may be a problem for all.
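The metric itself is a one-line calculation, and the trap described above is easy to demonstrate with numbers. The sketch below uses made-up kWh readings; note how adding idle-server load "improves" the ratio while total consumption rises.

def pue_energy(total_kwh, it_kwh):
    # PUEenergy = total facility energy / IT equipment energy, over a period.
    return total_kwh / it_kwh

print(f"Before: PUE {pue_energy(1_500_000, 1_000_000):.2f}")   # 1.50

# Switch on 200,000 kWh of previously idle servers, plus ~10% more cooling:
print(f"After:  PUE {pue_energy(1_720_000, 1_200_000):.2f}")   # 1.43
# The ratio looks better, but total energy use rose by 220,000 kWh.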

Bearing this in mind, taking some of the following steps to achieve a better total performance of the site in energy usage will help you achieve a more energy-efficient operation.

Step 1:
Measure the transformer (or other main source) energy usage and the IT energy usage, and calculate the PUEenergy.

Step 2:
Start harvesting the low-hanging fruit based on the Uptime Institute guidelines that have been set for many years and are available on their website.

Step 3:
Measure the transformer and the IT energy usage again and calculate your new PUEenergy. You may observe that while your total energy usage has decreased, your PUEenergy ratio has increased.

Step 4:
Start switching off unneeded infrastructure, while maintaining your redundancy levels.

Step 5:
Measure the transformer and the IT energy usage and calculate your PUEenergy. You may now observe that both your PUEenergy and your total energy usage have decreased.

It comes as no surprise that good design leads to lower capital expenditure (CAPEX) and better efficiency, but what is good design? A model that has proved successful both in terms of efficiency and green credentials is Modular Design. Modular Design was developed by Lex Coors, Vice President of Data Center Technology and Engineering Group, Interxion, and is unique since it allows for future data center expansion without interruption of services to customers.
Recent research by McKinsey and the Uptime Institute identified five key steps to achieving operational efficiency gains (a sketch of how these gains combine follows this article):
n Eliminate decommissioned servers, which will equal an overall gain of 10-25%
n Virtualize, which leads to gains of 25-30%
n Upgrade older equipment, leading to a 10-20% gain
n Reduce demand for new servers, which can also increase efficiency by 10-20%
n Introduce greener and more power-efficient servers and enable power-saving features; this also equates to a 10-20% gain

By following the above steps, an organization can look to achieve an overall efficiency gain of 65%, significantly improving its PUE ratio.
The third and final piece of the efficiency puzzle is customer focus. An efficient data center should have hands-on expert support in energy efficiency implementation efforts, as well as best practice customer installation checklists. Staff need to be able to advise on how to reduce temperatures and energy usage through things like innovative hot and cold aisle designs. They need to have the tools in place to measure and analyze efficiency, implement the latest efficiency ratings, develop and implement first-phase actions, and integrate figures and ratings with customers' CSR. Without such expertise in place, organizations will find it hard to reach their desired efficiency gains.
Green and efficient data centers are real and achievable, but emissions and the cost of energy are rising fast (although people now and then forget that these costs sometimes decrease temporarily), so we need to do more now. Organizations must work together, especially when it comes to measurement. Vendors should be providing standard meters on all equipment to measure energy usage versus productivity; if you don't know whether you're wasting energy, how can you change it?
But it's not just vendors who are responsible. Data center providers should provide leadership for industry standards and ratings that work, data center design and operational efficiency steps, and support for all customer IT efficiency improvements. What is apparent is that the whole industry, from the power suppliers to the rack makers, all need to work together to improve efficiencies and ensure that we are all at the forefront of efficient, green data center design. n

PUEenergy measures efficiency over time, using kWh.
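As promised above, here is one way to see how the five McKinsey/Uptime Institute gains could stack: each step saves a share of the energy left after the previous one, so the combined gain is one minus the product of what each step leaves behind. Taking the midpoint of each quoted range is my assumption, not the researchers' method, but it lands close to the 65% overall figure cited in the article.

midpoint_gains = [0.175, 0.275, 0.15, 0.15, 0.15]  # midpoints of the five ranges

remaining = 1.0
for gain in midpoint_gains:
    remaining *= 1 - gain   # each step acts on the energy still being consumed
print(f"Combined efficiency gain: {1 - remaining:.0%}")   # ~63%, near the cited 65%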


ITcorner
Online Backup or
Cloud Recovery?
By Ian Masters, UK Sales & Marketing Director,
Double-Take Software

Backing up files and data online has been around for quite a while, but it has
never really taken off in a big way for business customers. There is also a
new solution coming onto the market which uses the cloud for backup and
recovery of company data. While these two approaches to disaster recovery
appear to be similar, there are some significant differences as well.
So which one would be right for you?

Cloud recovery can be a nebulous term, so I would define it based on the solution having the following features:

1. The ability to recover workloads in the cloud
2. Effectively unlimited scalability with little or no up-front provisioning
3. Pay-per-use billing model
4. An infrastructure that is more secure and more reliable than the one you would build yourself
5. Complete protection - i.e. non-expert users should be able to recover everything they need, by default.

If a solution does not measure up to these five criteria, then it should be called an online backup product. This may be right for your business, but such products typically require more IT knowledge and are based on specific resources.
There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an on-line account) to restore. You might need a replacement server, new storage, and maybe even a new data centre, depending on what went wrong. Traditionally, you would either keep spare servers in a disaster recovery data centre, or suffer a period of downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud; you want the ability to actually start up applications and use them, no matter what went wrong in your own environment.
The next area where cloud recovery can provide a better level of protection is around provisioning. Even using online backup systems, organizations would have to use replacement servers in the event of an outage. The whole point of recovering to the cloud is that cloud providers already have plenty of servers and additional capacity on tap. If you need more space to cope with a recovery incident, then you can add this to your account. Under this model, your costs are much lower than building the DR solution yourself, because you get the benefit of duplicating your environment without the upfront capital cost.
Removing the up-front price and long-term commitment shifts the risk away from the customer, and onto the vendor. The vendor just has to keep the quality up to keep customers loyal, which requires great service and efficient handling of customer accounts. The cloud recovery provider takes on all the management effort and constant improvement of infrastructure that is required. A business without in-house staff that is familiar with business continuity planning may ultimately be much better off paying a monthly fee to someone who specializes in this area.
One area where cloud providers may be held to account is around security and reliability, but I think they hold the providers to the wrong standard. In the end, you have to compare the results that a cloud services provider can achieve, the service levels that they work to, and the cost comparison to doing it yourself. The point is that security and reliability are hard, but they are easier at scale. Companies like Amazon and Rackspace do infrastructure for a living, and do it at huge scale. Amazon's outages get reported in the news, but how does this compare to what an individual business can achieve?
The last area where cloud recovery can deliver better results is through usability and protecting everything that a business needs. While some businesses know exactly what files should be protected, most either don't have this degree of control, or have got users into the habit of following standard formats or saving documents into specific places. The issues that people normally get bitten by are with databases, configuration changes and weird applications that only a couple of people within the organization use. Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.
Cloud means so many different things to so many people that it sometimes seems not to mean anything at all. If you are going to depend on it to protect your data, it had better mean something specific. These five points may not cover every possible protection goal, but they set a good minimum standard. n



ITcorner
Five Best Practices for
Mitigating Insider Breaches
by Adam Bosnian, VP Marketing Cyber-Ark Software

Mismanagement of processes involving privileged access,


privileged data, or privileged users poses serious risks to
organizations. Such mismanagement is also increasing
enterprises' vulnerability to internal threats that can be caused
by simple human error or malicious deeds.

According to a recent Computing Technology Industry Association (CompTIA) survey (see http://www.comptia.org/pressroom/get_pr.aspx?prid=1410), although most respondents still consider viruses and malware the top security threat, more than half (53 percent) attributed their data breaches to human error, presenting another dimension to the rising concern about insider threats. It should serve as a wake-up call to many organizations that inadvertent or malicious insider activity can create a security risk.
For instance, take the recent data breach that impacted the Metro Nashville Public Schools. In this case, a contractor unintentionally placed the personal information of more than 18,000 students and 6,000 parents on an unsecured Web server that was searchable via the Internet. Although this act was largely chalked up to human error and has since been corrected, anyone accessing the information when it was freely available online could create a data breach that could cause significant harm to these students and parents.
Moreover, the Identity Theft Resource Center (ITRC) recently reported that insider theft incidents more than doubled between 2007 and 2008, accounting for more than 15 percent of data breaches. According to the report, human error breaches, as well as those related to data-in-motion and accidental exposure, accounted for 35 percent of all data breaches reported, even after factoring in that the number of breaches declined slightly during this period.
To significantly cut the risk of these insider breaches, enterprises must have appropriate systems and processes in place to avoid or reduce human errors caused by inadvertent data leakage, sharing of passwords, and other seemingly harmless actions.
One approach to address these challenges is digital vault technology, which is especially valuable for users with high levels of enterprise/network access as well as those handling sensitive information and/or business processes: users with privileged access (including third-party vendors or consultants and executive-level personnel) or access to the core applications running within an organization's critical infrastructure.
Instead of trying to protect every facet of an enterprise network, digital vault technology creates safe havens (distinct areas for storing, protecting, and sharing the most critical business information) and provides a detailed audit trail for all activity associated within these safe havens. This encourages more secure employee behavior and significantly reduces the risk of human error.
Here are some best practices for organizations serious about preventing internal breaches, be they accidental or malicious, of any processes that involve privileged access, privileged data, or privileged users.

1. Establish a Safe Harbor
By establishing a safe harbor or vault for highly sensitive data (such as administrator account passwords, HR files, or intellectual property), you build security directly into the business process, independent of the existing network infrastructure. This will protect the data from the security threats of hackers and accidental misuse by employees.
A digital vault is set up as a dedicated, hardened server that provides a single data access channel with only one way in and one way out. It is protected with multiple layers of integrated security including a firewall, VPN, authentication, access control, and full encryption. By separating the server interfaces from the storage engine, many of the security risks associated with widespread connectivity are removed.

2. Automate Privileged Identities and Activities
Ensure that administrative and application identities and passwords are changed regularly, highly guarded from unauthorized use, and closely monitored, including full activity capture and recording. Monitor and report actual adherence to the defined policies. This is a critical component in safeguarding organizations and helps to simplify audit and compliance requirements, as companies are able to answer questions associated with who has access and what is being accessed.
As listed among the Consensus Audit Guidelines' 20 critical security controls, the automated and continuous control of administrative privileges is essential to protecting against future breaches. [Editor's note: the guidelines are available at http://www.sans.org/cag/.]



3. Identify All Your Privileged Accounts
The best way to start managing privileged accounts is to create a checklist of operating systems, databases, appliances, routers, servers, directories, and applications throughout the enterprise. Each target system typically has between one and five privileged accounts. Add them up and determine which area poses the greatest risk. With this data in hand, organizations can easily create a plan to secure, manage, automatically change, and log all privileged passwords.

4. Secure Embedded Application Accounts
Up to 80 percent of system breaches are caused by internal users, including privileged administrators and power users, who accidentally or deliberately damage IT systems or release confidential data assets, according to a recent Cyber-Ark survey.
Many times, the accounts leveraged by these users are the application identities embedded within scripts, configuration files, or an application. The identities are used to log into a target database or system and are often overlooked within a traditional security review. Even if located, the account identities are difficult to monitor and log because they appear to a monitoring system as if the application (not the person using the account) is logging in.
These privileged application identities are being increasingly scrutinized by internal and external auditors, especially during PCI- and SOX-driven audits, and are becoming one of the key reasons that many organizations fail compliance audits. Therefore, organizations must have effective control of all privileged identities, including application identities, to ensure compliance with audit and regulatory requirements.

5. Avoid Bad Habits
To better protect against breaches, organizations must establish best practices for securely exchanging privileged information. For instance, employees must avoid bad habits (such as sending sensitive or highly confidential information via e-mail or writing down privileged passwords on sticky notes). IT managers must also ensure they educate employees about the need to create and set secure passwords for their computers instead of using sequential password combinations or their first names.
The lesson here is that the risk of internal data misuse and accidental leakage can be significantly mitigated by implementing effective policies and technologies. In doing so, organizations can better manage, control, and monitor the power they provide to their employees and systems and avoid the negative economic and reputational impacts caused by an insider data breach, regardless of whether it was done maliciously or by human error. n
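Best practice #3 is essentially an exercise in inventory arithmetic, and even a trivial script can produce the first-pass numbers. Everything in the sketch below (the system counts, the accounts per system, the categories themselves) is hypothetical; only the one-to-five accounts-per-system rule of thumb comes from the text.

inventory = {  # system category: (number of systems, assumed privileged accounts each)
    "operating systems":  (120, 3),
    "databases":          (35, 4),
    "appliances/routers": (60, 2),
    "directories":        (5, 5),
    "applications":       (80, 2),
}

total = 0
for area, (systems, accounts_each) in sorted(
        inventory.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    exposure = systems * accounts_each
    total += exposure
    print(f"{area:20s} ~{exposure:4d} privileged accounts")
print(f"{'total':20s} ~{total:4d} accounts to secure, rotate and log")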


ITops
Energy Measurement
Methods for the Data Center
A recent Info-Tech study of over 800 mid-sized IT shops found that only
25% have fully adopted an IT energy measurement initiative.

For many shops, this information is unavailable: IT does not receive an energy bill, and does not use, or have, tools to identify its share of energy consumption. In the past, electricity costs, especially in smaller IT shops, were of minor concern; in many cases, the energy bill was simply left in the hands of the facilities director or company accountant to pay and file away.
However, in the same study, Info-Tech finds that 28% of IT departments are now piloting an energy measurement solution of some kind, and an additional one-quarter of shops are planning a measurement project within twelve months. Many converging factors drive interest in measuring and managing energy use, and the major ones are outlined here:

n Increasing energy costs
The US Energy Information Administration (EIA) reports that between 2000 and 2007, the average price of electricity for businesses increased from 7.4 cents per kilowatt-hour (kWh) to 9.7 cents per kWh, an increase of 30%.

n Burgeoning data center energy consumption
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the energy density of typical mid-range server setups has increased about four times between 2000 and 2009 (from about 1,000 watts per square foot to almost 4,000). Greater server consumption means more waste in the form of heat, so energy consumption of cooling and support systems also spikes simultaneously.

n Green considerations
Energy consumption has an associated carbon footprint. Interest in reducing energy use has increased in IT and senior management ranks.

Ultimately, interest in energy data is driven by the age-old accounting precept: what gets measured gets done. Realizing that energy use will become a compounding issue, a growing number of IT shops seek to quantify energy as an operational cost, just like line items such as staffing and maintenance. Once the cost is accounted for, IT has a number to improve on. In this note, learn about three options for obtaining energy numbers in the data center. A companion Info-Tech Advisor research note, Energy Measurement Methods for End-User Infrastructure, describes how to obtain energy data at the user infrastructure level (workstations, printers, and the like).

Considerations for Calculation
Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other bucket, and by tracking both, IT can understand this relationship. These buckets are also necessary for common efficiency calculations; for more information, refer to the Info-Tech Advisor research note, If You Measure It, They Will Green: Data Center Energy Efficiency Metrics. Software for tracking energy use and cost is another consideration. While assessing the need for a full energy management solution, IT shops can use something as simple as an Excel spreadsheet to enter energy figures and track costs over a few months. Specifics on collecting data-serving and support equipment energy data, and tracking software, are discussed further below.

Option One: You May Already Have Access to Energy Data
Depending on data center setup, and the vintage and pedigree of equipment, some IT shops can already collect energy numbers at the data-serving or support equipment levels. The following scenarios are common starting points when beginning data collection:

n Existing software metering
Newer servers, power-distribution units (PDUs) and UPS systems have monitoring built into the included management consoles. For example, newer HP ProLiant blades ship with power tracking features, and the HP Insight Control management console provides energy monitoring capabilities.

n Existing hardware metering
Some server racks and PDUs may have hardwired meters built in. For example, some of APC's more basic PDUs for racks have built-in power screens.

Unfortunately, built-in metering is rarer in the support equipment bucket. Many older data center air conditioning units and air handlers do not provide this data. In some cases, one can estimate this energy number by subtracting the data-serving bucket from the total data center energy draw. But, since older data centers may not be sub-metered (the draw of the data center is not measured separately from the rest of the building), one cannot always perform this calculation, and installation of a meter is necessary.
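A spreadsheet is all the tooling this requires, but the same arithmetic in a few lines of code shows the two buckets and the subtraction estimate at work. The meter readings and months below are invented; the tariff is an assumption matching the 2007 EIA commercial average quoted earlier.

KWH_PRICE = 0.097   # $/kWh (assumed tariff)

readings = [  # (month, total data center kWh from sub-meter, data-serving kWh)
    ("Sep", 41_000, 24_500),
    ("Oct", 43_500, 25_100),
    ("Nov", 40_200, 24_900),
]

for month, total_kwh, serving_kwh in readings:
    support_kwh = total_kwh - serving_kwh     # estimated support bucket
    print(f"{month}: serving {serving_kwh:,} kWh, support ~{support_kwh:,} kWh,"
          f" cost ${total_kwh * KWH_PRICE:,.0f}")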



If existing software or hardware metering includes management software for trending,
this may be enough to set up a baseline. However, if energy numbers need to be collected, IT
can record data from consoles, panels, or data files manually for short periods of time. This data
can be entered into spreadsheets or dedicated software. The US Department of Energy has a
directory of software packages, such as Energy Lens, a $595 US Excel plug-in, and offers a free
assessment tool, the Data Center Energy Profiler.

Option Two: Cheap & Cheerful


If energy numbers are not available through existing equipment or software, IT should
make an investment in this capability. This is a common scenario for smaller or older facilities,
and is often required to measure energy on the support equipment side for many shops. Cheap
and cheerful data collection options include:

n Basic watt readers


These measure wattage drawn from the plug. Inexpensive devices provide spot readings
only, starting around $20 US. However, a popular line at a slightly higher price point,
Watts up, offers energy tracking and PC connectivity with a graphing package, starting
around $130 US. These are best-suited to smaller server rooms and data centers but may
not be appropriate for larger or mission-critical facilities with aggressive energy needs.

n Industrial-strength meters
Standard Performance Evaluation Corporation (SPEC) provides a list of heavier-duty energy meters, which typically run $200 US to more than $2000 US. These meters, many of which are designed for manufacturing and industrial environments, include data connectivity and are better suited to handling the industrial-grade energy requirements of multiple PDUs and high-voltage components in data centers. SPEC provides free measurement software that is verified as compatible with these devices.

To collect data in both buckets, IT may need to have an electrician or data center professional install sub-meters or dedicated measurement devices. If the organization is not yet ready for such a move, cheap and cheerful options should at least provide a rough cost number for the data-serving bucket to quantify the true operational cost of servers and storage.
Note that options one or two often come with two major disadvantages. First, some solutions model energy use of isolated components in the data center. IT still won't understand how changing energy consumption of a group of components affects other components in the data center; for example, changing server loads affect heat output and thus air cooling needs. Second, measuring total data center energy use at only one or a few points causes flat trending; essentially, IT will have a total energy use/cost number, but won't understand how energy use trends up and down in different areas of the data center. With both of these disadvantages, long-term optimization remains difficult. Options one and two are good options to get an overall handle on energy costs, while major optimizations often require a bigger investment in option three, described next.

Option Three: Professional-Grade Management Solutions

An increasing number of hardware vendors and data center energy equipment providers offer full management packages for data centers, which include integrated hardware and software and extensive reporting and trending options. In addition, data center planners often include these features in new data center plans, since the additional cost of such a project is nominal. Complete management solutions tend to come in two forms:

n As an add-on to an existing facility
Both tier one and specialized vendors now provide power management capabilities for existing facilities. Sentilla, for example, recently introduced a solution that includes wireless meters which feed software, priced on a per-device basis, starting at $40 US per month and declining as volumes increase. The measurement devices can be installed directly or clamp onto cables of existing equipment. Sentilla has priced this solution to allow a return on investment of less than one year based on typical optimizations.

n Integrated into equipment upgrades or a new facility


New power equipment, servers, and other data center components often include power tracking and management features as standard. This may not provide complete data for both data-serving and support equipment buckets; however, if an upgrade is being performed anyway, getting these features without incurring additional costs is a bonus. Have the vendor demonstrate how these features work before buying.

Professional-grade solutions, whether installed independently or included with data center upgrades, obviously cost more than options one and two. These solutions, which automate collection of very granular data, are useful once data center operators and IT leaders fully understand energy use principles and baselines, and when the business is ready to move to energy optimization and reduction. Options one and two are better choices for starting to establish energy cost as an operational line item. Option three is better for long-term energy and cost reduction goals.

Recommendations
1. Go cheap and cheerful first. Automatic data collection and trending in both data-serving and support equipment is very useful; it allows IT to identify when and why energy use spikes. However, when piloting energy management, it may be sufficient to collect rough data and record energy figures manually, in a spreadsheet or basic tracking software, a few times a day for a month or two. Eventually, a more aggressive solution will be required, especially in organizations responsible for more than 50 servers.
2. Use basic data as a call to action. Tracking energy use for a month or two, cheaply and cheerfully, gives IT a silver bullet. Senior management now has a real number attached to the cost of energy; use this to get their attention. Moreover, a demonstrative energy figure provides a great starting point to build the business case for a comprehensive monitoring solution.

Bottom Line
In the data center, options for energy monitoring and measurement are beginning to proliferate. Understand why IT shops are benchmarking energy use now, which components need to be measured in data centers, and three options for getting started with data collection and trending. n

Info-Tech Research Group is a global leader in providing IT research and advice. Info-Tech's products and services combine actionable insight and relevant advice with ready-to-use tools and templates that cover the full spectrum of IT concerns. www.infotech.com

education corner

Common Mistakes in Existing Data Centers and How to Correct Them
By Christopher M. Johnston, PE and Vali Sorell, PE, Syska Hennessy Group, Inc.

After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them. Please understand that we are focusing on existing older (aka legacy) data centers that must remain in production.

1. Problem: Leaky raised access floor
Most existing data centers employ raised access floor to route cold air from cooling units to floor air outlet tiles and grilles that discharge the air where needed. However, leaks in the floor waste the cold air and reduce cooling ability.

r Remedy:
Identify the leaks and close them. Typical culprits are misfitted floor tiles, gaps between floor tiles and walls and columns, columns not built out completely to the structural floor beneath, and oversized floor cable cutouts. Unnecessary cutouts should be eliminated and necessary cutouts should be closed with brush-type closures.



2. Problem: Underfloor volume congested with cables
This condition often manifests itself in floor tiles that won't lay flat and floor air outlet tiles that won't discharge air.

r Remedy:
Identify control, signal, and power cables that are not in service, then carefully remove (mine) them. If you don't have this expertise on your staff, then you should engage a skilled IT cabling contractor.

3. Problem: Space temperature too cold
In the past, data center managers liked to keep the room like a meat locker, believing the theory that a colder space would buy a little more ride-through time when the cooling system went off and had to be restarted. The minuscule additional ride-through time (a few seconds) is gained at the high operating cost of keeping the room unnecessarily cold. The current ASHRAE TC9.9 Recommended Thermal Envelope is 64.4°F to 80.6°F dry bulb air at the server inlet; the warmer the air temperature, the lower your operating cost.

r Remedy:
Move the control thermostats in each of your cooling units to the discharge air side if not already located there (one unit at a time) and calibrate the thermostat. Set the thermostat to maintain 60°F discharge air. Once all of the thermostats are on the discharge air side, start raising their setpoints 1°F at a time and monitor the inlet temperature at your warmest servers for a day. If the inlet air temperature at your warmest server is less than 75°F after a day, raise the temperature leaving the cooling units another degree. Continue until the warmest server has 75°F entering air.

4. Problem: Cooling units fight each other
We cannot count how many times we've seen one cooling unit cooling and dehumidifying while the one beside it is humidifying. This is an energy-wasting process that is a relic of the days when the industry consensus design condition was 72°F +/- 2°F and 40% relative humidity +/- 5% (and before that, a relic of the paper punch card days). As mentioned above, today's thermal envelope is 64.4°F to 80.6°F dry bulb. The same thermal envelope specification also includes a recommended range of moisture content. That range is defined as 41°F dew point to 59°F dew point, with a maximum cap of 60% relative humidity. If the entering air temperature is 75°F, then the relative humidity can fall anywhere from 33% to 60%. The days of tight temperature and humidity control bands are past, and the need for simultaneous humidification and reheat is over.

r Remedy:
Disable humidification and reheat in all cooling units except two in each room (on opposite sides of the room). Change the controls for those units so they operate based on room dew point temperature. If multiple sensors are used, it's important that a single average value be used as the controlled value. This can prevent calibration errors between multiple sensors from forcing CRAC units to fight each other. Set the controls to maintain dew point within the ASHRAE TC9.9 Recommended Thermal Envelope.

5. Problem: Electrical redundancy for cooling units is lower than the mechanical redundancy
This is another one we've lost count of. The typical scenario is that the desired site redundancy is Tier III or Tier IV, the mechanical engineer has done a good job designing to the desired tier, but the electrical engineer lost focus and branch-circuited every cooling unit to one or two panelboards. The end result is that the redundancy of the site is Tier I, because the electrical redundancy for the cooling units is lower than the mechanical redundancy. For example, assume that the need is for 10 cooling units and 12 are provided, so the mechanical redundancy is N+2. The electrical engineer, however, has circuited all cooling units to one branch circuit panelboard, so the electrical redundancy is N: if the one panelboard fails, then all cooling fails.

r Remedy:
Identify another source to supply backup power for the cooling units; this source may be direct from the standby generator if need be. The main criterion for this Source 2 is that it is available if the original Source 1 fails. Then, add transfer switches for each cooling unit so that Source 2 will supply if Source 1 fails.

6. Problem: No hot aisle/cold aisle cabinet arrangement
This problem becomes more burdensome as the critical load density (watts/square foot) increases. At low critical load densities it is not a problem.

r Remedy:
As time passes and technology refreshes, migrate to a hot aisle/cold aisle arrangement. There is no magic bullet for this, just advance planning and attention to detail.

7. Problem: Too many CRAC units operating
This one may seem counterintuitive, so it's no surprise that this occurs in most legacy data centers. Poor air flow management creates hot spots, i.e. locations where the temperature entering the server cabinets is outside of the TC9.9 thermal envelope. The conclusion most data center managers and facilities managers make is that there is insufficient capacity, so they run more CRAC units.

r Remedy:
Adding more CRAC units when the capacity was already sufficient actually makes the problem worse, especially when using constant volume CRAC units. The CRAC units will operate less efficiently, using more energy to dehumidify the space, which in turn forces the reheat coils and the humidifiers to run concurrently. The solution is to eliminate the humidifiers in all but two units (see item #4 above) and disconnect all reheat coils. An equally important step is to match the load within the space to the capacity available. It is common to see 300% of the needed capacity actually on and operating at any time. Once the air flow management remedies listed in items #1 through #4 above are implemented, the more appropriate capacity that should be operating at any time is 125% to 150%.

8. Problem: The cabinets restrict airflow into the servers contained inside
Sometimes, the data center's worst enemies are the cabinets selected for the space. Legacy data centers often used cabinets with solid glass or panel doors. Even though some breathing holes are provided, they do in fact offer too much resistance to the air flow needed by the computer equipment inside.

r Remedy:
Replace doors with perforated doors of large free area. The larger the free area, the better. This applies to both front and rear doors of the cabinets. n
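Item #7's capacity match reduces to simple arithmetic, sketched below. The heat load and per-unit capacity are assumptions; the 125-150% operating target and the 300% over-operation figure come from the text.

import math

heat_load_kw = 400    # critical load to be removed (assumed)
crac_unit_kw = 90     # sensible capacity per CRAC unit (assumed)

needed_units = heat_load_kw / crac_unit_kw
low = math.ceil(needed_units * 1.25)
high = math.ceil(needed_units * 1.50)
typical = math.ceil(needed_units * 3.00)   # the common 300% over-operation

print(f"Capacity required: {needed_units:.1f} units")
print(f"Run {low}-{high} units (125-150% of need), not ~{typical} (300%)")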



Yourturn
From our Experts Blog:
www.datacenterjournal.com/blogs

Technology and the Economy


By Ken Baudry

The economy has certainly been tough on all of us these past 12 months. I thought it might be worthwhile to revisit an article we published on DCJ in 2006 concerning technology and its market potential and duration.

We believe that these questions can be easily answered by recalling something learned years ago in Econ 101: the S curve. The basic tenets of the S curve are that 1) all successful products follow a known and predictable path through three stages (Innovation, Growth and Maturity) and 2) these stages are of equal length. So let's explore the history of the computer and the Internet: events, dates and time frames.

The first electronic computer was developed for the US military and was first put in use in 1945. By today's standards for electronic computers the ENIAC was a grotesque monster. It had thirty separate units, weighed over thirty tons, used 19,000 vacuum tubes and 1,500 relays, and demanded almost 200,000 watts of electrical power. ENIAC was the prototype from which most other modern computers evolved.

1960: The first commercial computer with a monitor and keyboard was introduced, Digital's PDP-1.

1962: The first personal computer was introduced. It was called LINC and each unit cost over $40,000.

1969: ARPANET was created to link government researchers scattered across the US at universities and research facilities so that they could share data. This was the start of the Internet.

1976: Apple Computer Company was created, and around 1977 the first Apple computer was introduced. It was a kit that the customer assembled; the next year Apple introduced a factory-assembled version. The volume of sales was small and the costs high. The Apple was followed by an almost endless list of me-too computers: Timex Sinclair, Commodore, Tandy, Pet, etc.

1981: IBM introduced The Personal Computer. The IBM name and open architecture and DOS operating system enabled other manufacturers to introduce IBM Compatible PCs, also known as clones.

Then in 1985 Microsoft introduced Windows. Windows moved the PC from text-based commands to point and click. This transformed the PC from a tool for only the most dedicated to something that everyone could easily master, and moved the PC from something considered a toy by many to a legitimate business tool. Sales volumes kicked up, competition was fierce and prices dropped dramatically.

By 2001, the PC was readily available, inexpensive, and standard equipment on almost every desk in corporate America: a commodity product with low margins and slow growth. This could be the end of the story, but growth of another technology would overshadow the development of the PC, push technology into our everyday lives, and give the PC a new lease on life.

As PCs developed, so did ARPANET. The Internet was largely used by IT professionals, researchers, academia and other early adopters of technology. It was slow, text based and difficult to use. In 1994, Jim Clark and Marc Andreessen developed the Netscape Browser, and just as Windows had made the PC a practical tool, Netscape made the Internet practical.

There were many other milestones that deserve attention and were perhaps more important than some of the events mentioned here, such as the research performed at Xerox PARC, where modern desktop computing was created: windows, icons, mice, pull-down menus, What You See Is What You Get (WYSIWYG) printing, networked workstations, object-oriented programming, etc. What many don't know is that Xerox could have owned the PC revolution but simply couldn't bring itself to disrupt its core business of making copiers.

Why is all of this so important? Well, depending on your starting point, the innovation phase is likely to have been somewhere between 20 and 30 years, and possibly even longer. This isn't an exact science, since we don't know how large the market will ultimately grow or where the curve really starts. No matter how you draw the curve, we are likely below the 50% penetration level and have a long stretch to go.

The dot-com boom was fueled by the release of significant IT resources and talent as Y2K preparations drew to a close, an investment community that recognized the tremendous technology growth ahead, and significant innovation.

The dot-com bust occurred because an over-anxious investment community provided too much money too fast. The buying power of the Early Adopters, people and companies who want to be on the leading edge and are willing to pay high prices, just wasn't significant enough to absorb all of the innovation. This pushed the supply above the curve. As with all economic imbalances, the market forces a correction.

Further, many dot-com innovations lacked key infrastructure. Just as the automobile could not have been successful without the development of roads, bridges, gas stations, tire dealers, hotels and even fast food, many of the services that were introduced during the dot-com boom required significant development in other areas.

For example, hosting applications at remote unmanned data centers or collocation facilities is only practical with remote management applications and inexpensive bandwidth. We may take this for granted today, but bandwidth wasn't inexpensive seven years ago, and remote management tools were not as sophisticated as they are today.

Yes, there have been casualties along the way, but significant advancements were made during the dot-com boom, and early adopters have in many cases reaped many benefits. Managed Service Providers, Collocation and other services have seen significant growth and success since we first published this article and, if our numbers are correct, have quite a run to go…
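For readers who want to put numbers to the S curve, it is commonly modeled as a logistic function. The Python sketch below is illustrative only: the saturation, midpoint and steepness values are invented, since, as noted above, we don't know how large the market will ultimately grow or where the curve really starts.

import math

def s_curve(years_since_intro, saturation=1.0, midpoint=30.0, steepness=0.15):
    # Logistic S curve: fraction of the ultimate market reached at a given
    # time. Innovation, Growth and Maturity map onto its three regions.
    return saturation / (1.0 + math.exp(-steepness * (years_since_intro - midpoint)))

for years in (10, 20, 30, 40, 50):
    print(f"year {years}: {s_curve(years):.0%} of ultimate market penetration")

With these made-up parameters the curve crosses 50% penetration around year 30, which is the shape of the argument the article is making, not a forecast.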



A timeline of computing milestones, adapted from the IEEE Computer Society (www.computer.org/computer/timeline/timeline.pdf):

1960 Standards for Algol 60 are established jointly by American and European computer scientists. The timeline illustrates the entry with this Algol "Hello World" program (http://www.latec.edu/~acm/HelloWorld.shtml):

BEGIN
  FILE F (KIND=REMOTE);
  EBCDIC ARRAY E [0:11];
  REPLACE E BY "HELLO WORLD!";
  WHILE TRUE DO
    BEGIN
      WRITE (F, *, E);
    END;
END.

1960 In November, DEC introduces the PDP-1, the first commercial computer with a monitor and keyboard input.
1960 The Livermore Advanced Research Computer (LARC) by Remington Rand is designed for scientific work and uses 60,000 transistors.
1960 Working at Rand Corp., Paul Baran develops the packet-switching principle for data communications.
1960 At Cornell University, Frank Rosenblatt builds a computer, the Perceptron, that can learn by trial and error through a neural network.

1962 Max V. Mathews leads a Bell Labs team in developing software that can design, store, and edit synthesized music.
1962 The first video game is invented by MIT graduate student Steve Russell. It is soon played in computer labs all over the US.
1962 The Telstar communications satellite is launched on July 10 and relays the first transatlantic television pictures.
1962 Atlas, considered the world's most powerful computer, is inaugurated in England on December 7. Its advances include virtual memory and pipelined operations.
1962 Stanford and Purdue Universities establish the first departments of computer science.
1962 H. Ross Perot founds Electronic Data Systems, which will become the world's largest computer service bureau.

1963 On the basis of an idea of Alan Turing's, Joseph Weizenbaum at MIT develops a mechanical psychiatrist called Eliza that appears to possess intelligence.

1969 Bell Labs withdraws from Project MAC, which developed Multics, and begins to develop Unix.
1969 The RS-232-C standard is introduced to facilitate data exchange between computers and peripherals.
1969 The US Department of Defense commissions Arpanet for research networking, and the first four nodes become operational at UCLA, UC Santa Barbara, SRI, and the University of Utah.

1970 Shakey, developed at SRI International, is the first robot to use artificial intelligence to navigate.
1970 Winston Royce publishes "Managing the Development of Large Software Systems," which outlines the waterfall development method.

1976 Steve Jobs and Steve Wozniak design and build the Apple I, which consists mostly of a circuit board.

1977 Bill Gates and Paul Allen found Microsoft, setting up shop first in Albuquerque.
1977 Steve Jobs and Steve Wozniak incorporate Apple Computer on January 3.
1977 The Apple II is announced in the spring and establishes the benchmark for personal computers.
1977 Several companies begin experimenting with fiber-optic cable.

1980-1981 David A. Patterson at UC Berkeley begins using the term reduced-instruction set and, with John Hennessy of Stanford, develops the concept.

1981 Barry Boehm devises Cocomo (Constructive Cost Model), a software cost-estimation model.
1981 Japan grabs a big piece of the chip market by producing chips with 64 Kbits of memory.
1981 Xerox introduces a commercial version of the Alto called the Xerox Star.
1981 The open-architecture IBM PC is launched in August, signaling to corporate America that desktop computing is going mainstream.
Perfect for a picnic. Fine for a jog. Agony for a computer.

For over 40 years, we've been the industry innovator in precision environmental control. Specializing in:
- Precision cooling units built to your specifications
- Short lead times
- Advanced control systems
- Ultra-reliable technology

The reliable choice in precision cooling equipment.
714-921-6000
Get your free Building Owner's Guide to precision cooling at www.DataAire.com
