Contents

18 Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems
By Jim Davis, business unit manager, Eaton Power Quality and Control Operations
While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements [...]

7x24 Exchange Corner (www.7x24exchange.org)

30 Common Mistakes in Existing Data Centers & How to Correct Them
By Christopher M. Johnston, PE, and Vali Sorell, PE, Syska Hennessy Group, Inc.
After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them.

Your Turn
32 Technology and the Economy
By Ken Baudry. An article from our Experts Blog.

Advertiser Index
Info-Tech Research Group ....... pg 29 (www.infotech.com/measureit)
Data Aire ....................... Back cover (www.dataaire.com)

Calendar

November
November 15 - November 18, 2009
7x24 Exchange International 2009 Fall Conference
www.7x24exchange.org/fall09/index.htm

December
December 2 - December 3, 2009
KyotoCooling Seminar: The Cooling Problem Solved
www.kyotocooling.com/KyotoCooling%20Seminars.html

December 1 - December 10, 2009
Gartner 28th Annual Data Center Conference 2009
www.datacenterdynamics.com

An EDM2R Enterprises, Inc. publication | Alpharetta, GA 30022
Phone: 678-762-9366 | Fax: 866-708-3068 | www.datacenterjournal.com
Design: Neatworks, Inc. | Tel: 678-392-2992 | www.neatworksinc.com
All rights reserved. No portion of DATA CENTER Journal may be reproduced without written permission from the Executive Editor. The management of DATA CENTER Journal is not responsible for opinions expressed by its writers or editors. We assume that all rights in communications sent to our editorial staff are unconditionally assigned for publication. All submissions are subject to unrestricted right to edit and/or to comment editorially.
In today's data center world of ever-increasing power demand, the scale of mission-critical business dependent upon uninterruptible power grows ever larger. More power means more energy, and the battle to reduce running costs is increasingly fierce. Optimizing system efficiency without compromising reliability seems like an impossible task. Or is it?
A parallel redundant scheme usually provides N+1 redundancy to boost reliability but suffers from single points of failure, including the output paralleling bus, and the scheme is limited to around 5 or 6MVA at low voltages. The whole system is not fault tolerant and is difficult to concurrently maintain.

A System + System approach can overcome the maintenance and fault tolerance issues but suffers from a very low operating point on the efficiency curve. Like the parallel redundant scheme, it, too, is limited in scale at low voltages.

An isolated or distributed redundant scheme can be employed to tackle all these problems, but such schemes introduce additional requirements such as essential load sharing management and static transfer switches for single-corded loads.

The Isolated-Parallel (IP) rotary UPS system eliminates the fundamental drawbacks of conventional approaches to provide a highly reliable, fault tolerant, concurrently maintainable and, yes, highly efficient solution.

Table 1: Comparison of UPS scheme topologies.

                                   Parallel   System     Isolated   Distributed  Isolated
                                   Redundant  Redundant  Redundant  Redundant    Parallel
Fault tolerant                     No         Yes        Yes        Yes          Yes
Concurrently maintainable          No         Yes        Yes        Yes          Yes
Load management required           No         No         Yes        Yes          No
Typical UPS module loading (max)   85%        50%        100%*      85%          94%
Reliability order (1 = best)       5          1          4          3            2

* One module is always completely unloaded.
Maintainability

The IP bus system is probably the simplest system to concurrently maintain because the loads are independently fed by UPS sources and these sources can readily be removed from and returned to the system without load interruption. Not only that, but the ring bus can be maintained, as can the IP chokes, also without load interruption. All the other solutions with similar maintainability (System, Isolated and Distributed redundant) have far greater complexity of infrastructure, leading to more maintenance and increased risk during such operations.
Control

The regulation of voltage, power and frequency plus any synchronization is done by the controls inside each UPS module. The UPS also controls the UPS-related breakers and is able to synchronize itself to different sources. Each system is controlled by a separate system control PLC, which operates the system-related breakers and initializes synchronization processes if necessary. The system control PLC also remotely controls the UPS regarding all operations that are necessary for proper system integration. The redundant Master Control PLCs are used to control the IP system in total. Additional pilot wires interconnecting the system controls allow safe system operation in the improbable case that both master control PLCs fail.

Figure 5: Example of a fault current distribution in case of a short circuit on the load side of UPS #2.

Modes of operation

In case of a mains failure, each UPS automatically disconnects from the mains and the load is initially supplied from the energy storage device of the UPS. From this moment on, the load sharing between the units is done by a droop function based on a power-frequency characteristic which is implemented in each UPS. No load sharing communication between the units is required. After the Diesel Engines are started and engaged, the loads are automatically transferred from the UPS energy storage device to the Diesel Engine so the energy storage can be recharged and is then available for further use.

To achieve proper load sharing also in Diesel operation, each Diesel Engine is independently controlled by its UPS, whether the engine is mechanically coupled to the generator of the UPS (DRUPS) or an external Diesel-Generator (standby) is used. A special regulator structure inside the UPS, in combination with the bi-directional energy storage device, allows active frequency and phase stabilization while keeping the load supplied from the Diesel Engine.

The retransfer of the system to utility is controlled by the master control. The UPS units are re-transferred one by one, thereby avoiding severe load steps on the utility. After the whole system is synchronized and the first UPS system is reconnected to utility, the load [...]

Projects

The first IP system was realized in 2007 for a data center in Ashburn, VA. It consists of two IP systems, each equipped with 16 x Piller UNIBLOCK UBT 1670kVA UPS with flywheel energy storage (total installed capacity > 2 x 20MWatts at low voltage). Each of the UPS is backed up by a separate Diesel Generator with 2810kVA, which can be connected directly to the UPS load bus and which is able to supply both the critical and the essential loads. Since the success of this first installation, three more data centers have been commissioned, of which the first phase of one is complete (a further 20MWatts) as of today. There are further projects planned to be done in medium voltage, and a configuration combining the benefits of the IP system with the energy efficiency of natural gas engines is planned by Consulting Engineers.

Conclusion

In the form of an IP bus topology, a UPS scheme that combines high reliability with high efficiency is possible. High reliability is obtained by virtue of the use of rotary UPS (with MTBF values in the region of 3-5 times better than static technology), combined with the elimination of load sharing controls, no mode switching under failure conditions, load fault isolation and simplified maintenance.

High efficiency can be obtained with such a high reliability system because of the ability to simulate the System + System fault tolerance without the penalty of low operating efficiencies. A 20MWatt design load can run with modules that are 94% loaded and yet offer a reliability that is similar to the S+S scheme, which has a maximum module loading of just 50%. That can translate into a difference in UPS electrical efficiency of 3 or 4%. That means a potential waste in operating costs of $750,000 per year (ignoring additional cooling costs).

What's more, the solution is not only concurrently maintainable and fault tolerant with high reliability and high efficiency, but can also be realized at either low or medium voltages and can be implemented with DRUPS, separate standby diesel engines or even gas engines for super-efficient large-scale facilities.

For complete information on the invention and history of IP systems, refer to the Piller Group GmbH paper by Frank Herbener entitled Isolated-Parallel UPS Configuration at www.piller.com.
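The droop-based sharing described under "Modes of operation" can be sketched numerically. This is an illustrative model only; the nominal frequency, droop settings and ratings below are assumptions made for the sketch, not Piller's actual control parameters.

```python
# Power-frequency droop load sharing: each UPS reads only the common bus
# frequency, so no load-sharing communication between units is needed.
def droop_share(p_load_kw, units):
    """units: list of (p_rated_kw, droop), where droop is the per-unit
    frequency sag at full load (0.02 = 2 percent)."""
    f0 = 50.0  # nominal bus frequency in Hz (assumed)
    # Every unit settles at the same bus frequency f.  Solving
    # P_i = P_rated_i * (f0 - f) / (droop_i * f0) with sum(P_i) = P_load:
    stiffness_kw_per_hz = sum(p / (d * f0) for p, d in units)
    f = f0 - p_load_kw / stiffness_kw_per_hz
    shares = [p * (f0 - f) / (d * f0) for p, d in units]
    return f, shares

# Two identical units with identical droop settings supplying 1200 kW:
f, shares = droop_share(1200.0, [(1670.0, 0.02), (1670.0, 0.02)])
# identical settings -> the load splits evenly, 600 kW per unit
```

Because each unit infers its share purely from the frequency it measures, units can be added or removed and the sharing rebalances automatically, which is what allows the IP scheme to eliminate dedicated load-sharing controls.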
Facility Corner: Mechanical

Optimizing Air Cooling Using Dynamic Tracking
By John Peterson, Mission Critical Facility Expert, HP

Dynamic tracking should be considered a viable method of optimizing the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.
Inside the Data Center

One of the most challenging tasks of running a data center is managing the heat load within it. This requires balancing a number of factors including equipment location adjacencies, power accessibility and available cooling. As high-density servers continue to grow in popularity along with in-row and in-rack solutions, the need for adequate cooling in the data center will continue to grow at a substantial rate. To meet the need for cooling using a typical under-floor air distribution system, a manager often adjusts perforated floor tiles and lets the nearest Computer Room Air Conditioner (CRAC) unit react as necessary to each new load. However, this may cause a sudden and unpredictable fluctuation in the air distribution system due to changes in static pressure and air rerouting to available outlets, which can have a ripple effect on multiple units. With new outlets available, air, like water, will seek the path with less resistance; the new outlets may starve existing areas of cooling, causing the existing CRAC units to cycle the air faster. This becomes a wasteful use of fan energy, let alone fluctuations of cooling load energy allocation.

Most managers understand that the air supply plenum needs to be a totally enclosed space to achieve pressurization for air distribution. Oversized or unsealed cutouts allow air to escape the plenum, reducing the static pressure and effectiveness of the air distribution system. Cables, conduits for power and piping can also clog up the air distribution path, so thoughtful consideration and organization should be an essential part of the data center operations plan. However, even the best-laid plan can still end up with areas that are starved for cooling air.

In a typical layout, there are rows of computer equipment racks that draw cool air from the front and expel hot air at the rear. This requires an overall footprint larger than the rack itself (Figure 1).

Figure 1: Overall footprint needed per rack

When adding new data center equipment, data center managers need to manage unpredictable temperatures and identify a new perfect balance of how many perforated tiles to use and where to locate them. They involve maintenance personnel to adjust CRAC units, assist with tile layouts, and even possibly add or relocate the units as necessary. Due to the predetermined raised floor height, supply air temperature and humidity necessities, the volatile air distribution system becomes an inflexible piece of the overall puzzle, at the expense of energy and possibly performance due to inadequate cooling.

Meanwhile, the CRAC units are operating at variable rates to meet this load, but mostly they are operating at their maximum capacity instead of as-needed. Why? One reason is where the air temperature is measured. Each unit is operating on the return air temperature measured at the unit, and all units are sharing the same return air. This means that if the load is irregular in the racks, the units simply cool for the overall required capacity. Apply this across a data center, and the units are generally handling the cooling load without altering their flow based on changes happening in any localized area, which consequently allows that large variance of temperatures in the rows.

Temperature discrepancy is the main concern for most data center managers. They would like the air system not to be the limiting factor when adding new equipment to racks and prefer to remove the variable of fickle air cooling from the equation of equipment management. At the same time, almost behind the scenes, facility costs from cooling are increasing to match the new load,
driving the need for more efficient use of existing resources. A Gartner report shows that over 63% of respondents to a recent survey indicated that they rely on air systems to cool their data center over liquid cooling.1 Of those same respondents, nearly 45% shared that they are facing insufficient power which will need to be addressed in the near future. As IT managers are able to correct their power constraints, they are able to deploy a more demanding infrastructure and subsequently will require additional power and cooling.

Dynamic Tracking

Although the air flow in a data center is complex, an opportunity now exists to optimize the effectiveness of cooling resources and better manage the air system within the data center. There are ways to monitor air temperatures within each row of cooling, and even the temperature entering a specific rack at a particular height. From these temperatures, an intelligent system can react to meet the need for cooling air at that location, eliminating the work of juggling floor tiles and guessing at the air flow.

How is this done? To begin with, the temperature is measured differently. A number of racks are mounted with sensors that measure the supply air temperature at the front of the rack. This information is relayed to a central monitoring system that responds accordingly by adjusting the CRAC units. The units then function as a team and not independently, meeting specific needs as monitored in real time by the sensors. Since the temperature is tracked from the source and adjusts based on real-time needs, this method of measurement and control is sometimes referred to as dynamic tracking.

In the initial setup of dynamic tracking, the intelligent control system tests and learns which areas of the data center each CRAC unit affects. Then, the units are tested together, and the control system modulates them to provide the most uniform distribution within the constraints of the layout and room architecture. This data allows the air system to gather intelligence on how to compensate for leaks and barriers in the plenum. From there, the system knows how the units interact, and can intelligently judge how to respond to changes within the data center. It is also able to rebalance when one of the units fails or is being serviced.

To prevent a large fluctuation, the temperatures are measured over an extended
period of time and temperature is adjusted
depending on the cooling needs of the space.
The CRAC units respond based on the his-
tory of how each unit has affected the specific
area. The overarching intelligence of the dy-
namic tracking control system gauges wheth-
er an increase in temperature is sustained or
a series of momentary heat spikes and adjusts
itself accordingly. This prevents units from
cycling out of control from variables such as
human error, short peak demands, and sud-
den changes in load.
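The spike-versus-sustained-rise logic described above can be illustrated with a toy smoothing filter. The class name, setpoint, deadband and smoothing factor below are invented for this sketch and are not HP's actual control algorithm.

```python
# Sketch: distinguish a sustained supply-air temperature rise from a
# momentary spike using an exponential moving average (EMA).
class SupplyAirMonitor:
    def __init__(self, setpoint_c=25.0, alpha=0.1, deadband_c=1.0):
        self.setpoint = setpoint_c
        self.alpha = alpha        # smoothing factor: smaller = longer history
        self.deadband = deadband_c
        self.ema = setpoint_c     # smoothed estimate of the supply-air temp

    def update(self, reading_c):
        """Feed one sensor reading; return a CRAC action."""
        self.ema += self.alpha * (reading_c - self.ema)
        if self.ema > self.setpoint + self.deadband:
            return "increase cooling"   # the rise is sustained, not a spike
        if self.ema < self.setpoint - self.deadband:
            return "decrease cooling"
        return "hold"

m = SupplyAirMonitor()
actions = [m.update(t) for t in [25.2, 30.0, 25.1, 25.0]]  # one brief spike
# the single 30 degC spike barely moves the average, so the units hold steady
```

A sustained rise, by contrast, would pull the average past the deadband over several readings and only then trigger more cooling, which is exactly the behavior the article credits with preventing units from cycling on short peak demands.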
Once installed, a dynamic tracking
system can show how the CRAC units have
operated in the past and how they are cur-
rently performing. Most of the time, the units
operate at less than peak conditions, which is
an opportunity to increase energy efficiency
and create significant savings. Also, if the
units can measure and meet the load more
closely, the cost savings carry directly over to
the mechanical cooling plant as well.
Dynamic tracking systems can help
transform the air distribution and energy
use within a data center, and should be
considered as a viable solution to handle
variable and complex heat loads. The ability of dynamic tracking to reduce energy use and preserve data center flexibility makes it a promising driver of optimization.
1 Power & Cooling Remain the Top Data Center Infrastructure Issues, Gartner Research, February 2009
Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave, one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.
The 2009 edition of the annual Cisco Visual Networking Index predicts that the overall volume of Internet Protocol (IP) traffic flowing across global networks will quintuple between 2008 and 2013, with a compound annual growth rate (CAGR) of 40 percent. During that same period, business IP traffic moving on the public Internet will grow by 31 percent, according to the Cisco study, while enterprise IP traffic remaining within the corporate WAN will grow by 36 percent.

Faced with this looming challenge, data center managers know they must prepare now to deploy the solutions necessary to accomplish three tasks: transmit this deluge of information, store it and help lower total cost of ownership (TCO). Specifically, within the next five to seven years, they will need:
- more bandwidth
- faster connections
- more and faster servers and
- more and faster storage

Today's data center operations account for up to half of total costs over the life cycle of a typical enterprise, and retrofits make up another 25 percent. Managers want solutions that boost efficiencies immediately while also making future upgrades easier and more affordable.

Among the technologies that promise to provide these solutions are 40 and 100Gbps Ethernet (GbE); Fibre Channel over Ethernet (FCoE); and server virtualization. Because they directly affect the infrastructure, these technologies will require new approaches to cabling and connectors; higher fiber densities; higher bandwidth performance; and more reliable, flexible and scalable operations. Although managers want to deploy technologies that will satisfy their future requirements, they also want to determine to what extent they can leverage their existing infrastructures to meet those needs. As they do so, many are discovering there are strategies available today that can help them achieve both goals.

40GbE and 100GbE Are Coming

Although most data centers today run 10GbE between core devices, and some run 40GbE via aggregated 10GbE links, they inevitably will need even faster connections to support high-speed applications, new server technologies and greater aggregation. In response, the Institute of Electrical and Electronics Engineers (IEEE) is developing a standard for 40 and 100GbE data rates (IEEE 802.3ba). Scheduled for ratification next year, the standard addresses multimode and singlemode optical-fiber cabling, as well as copper cabling over very short distances (10 meters, as of publication date). It is helpful to examine the proposed standard and then look at various strategies for evolving the data center accordingly. Currently, IEEE 802.3ba specifies the following:

Multimode Fiber

Running 40 GbE and 100 GbE will require:
1) multi-fiber push-on (MPO) connectors
2) laser-optimized 50/125 micrometer (um) optical fiber and
3) an increase in the quantity of fiber: 40 GbE requires six times the number of fibers needed to run 10 GbE, and 100 GbE requires 12 times that amount.

MPO Connectors

A single MPO connector, factory-preterminated to multi-fiber cables purchased in predetermined lengths, terminates up to 12 or 24 fibers. 40-GbE transmission up to 100 meters will require parallel optics, with eight multimode fibers transmitting and receiving at 10 Gbps, using an MPO-style connector. Running 100 GbE will require 20 fibers, each transmitting and receiving at 10 Gbps, within a single 24-fiber MPO-style connector.

To achieve 10-GbE data rates for distances up to 300 meters, some managers have used MPO connectors to install laser-optimized multimode fiber cables, either ISO 11801 Optical Mode 3 (OM3 or 50/125 um) or OM4 (50/125 um) fiber cables. Thus they already have taken an important step to prepare for 40 and 100GbE transmission rates. Working with their vendors, they can retrofit their 12-fiber MPO connectors to support 40 GbE. It may even be possible to achieve 100GbE rates by creating a special patch cord that combines two of those 12-fiber MPO connectors. Although the proposed standard specifies 100 meters for 40 and 100GbE (a departure from 300 meters for 10GbE), the vast majority of data center links currently cover 55 meters or less.

Those who are not using MPO-style connectors today may have options other than forklift upgrades for achieving 40 and 100GbE data rates. Initially, most data center managers will only run 40 and 100GbE on a select few circuits, perhaps 10 percent or 20 percent. So, depending on when they will need more bandwidth, they can begin to deploy MPO-terminated, laser-optimized, multimode fiber cables and evolve gradually.

High-performance Cabling

Compliance with the proposed standard will require a minimum of OM3 laser-optimized 50 um multimode fiber with reduced insertion loss (2.0dB link loss) and minimal delay skew. As noted earlier, managers who cap their investments in OM1 (62.5/125 um) [...]
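As a quick sanity check on the Cisco forecast quoted above: a 40 percent CAGR sustained over the five years from 2008 to 2013 does compound to roughly a five-fold ("quintupled") traffic volume.

```python
# Compound annual growth: multiple after n years = (1 + CAGR) ** n
cagr = 0.40
years = 5  # 2008 through 2013
growth = (1 + cagr) ** years
print(f"traffic multiple after {years} years: {growth:.2f}")  # prints 5.38
```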
Today, data center and facility managers have many considerations to evaluate. Making matters worse, one dead cell in a battery string can render the entire battery bank useless: not good when you're depending on your power backup system to perform when you need it most. Every time the batteries are used (cycled), even for a split second, it becomes more likely that they will fail the next time they are needed.
A growing demand for network bandwidth and faster, fault-free data processing has driven an
exponential increase in data center energy consumption, a trend with no end in sight.
Industry reports show that data center energy costs as a percentage of total revenue are at an all-time high, and data center electricity consumption accounts for almost 0.5 percent of the world's greenhouse gas emissions. As a result, data center managers are under pressure to maximize data center performance while reducing cost and minimizing environmental impact, making data center energy efficiency critical.

According to a 2007 Frost & Sullivan survey of 400 information technology (IT) and facilities managers responsible for large data centers, 78 percent of respondents indicated that they were likely to adopt more energy-efficient power equipment in the next five years, a solution that's often less costly and more quickly and easily implemented than data virtualization or cooling systems.

While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution. However, today's 480V AC power distribution systems, standard in most U.S. data centers and IT facilities, are not optimized for efficiency. Of the several alternative power distribution systems currently available, 400V AC and 600V AC systems are generally accepted as the most viable. While both have been proven reliable in the field, conform to current National Electrical Code (NEC) guidelines, and can be easily deployed into existing 480V AC infrastructure, there are important differences in efficiency and cost that must be carefully weighed.

This article offers a quantitative comparison of 400V AC and 600V AC power distribution configurations at varying load levels using readily available equipment, taking into account the technology advancements and installation and operating costs that drive total cost of ownership (TCO).

The traditional U.S. data center power system

In most U.S. data centers today, after power is received from the electrical grid and distributed within the facility, the UPS ensures a reliable and consistent level of power and provides seamless backup power protection. Isolation transformers step down the incoming voltage to the utilization voltage, and power distribution units (PDUs) feed the power to multiple branch circuits. The isolation transformer and PDU are normally combined in a single PDU component, many of which are required throughout the facility. Finally, the server or equipment internal power supply converts the utilization voltage to the specific voltage needed. Most IT equipment can operate at multiple voltages. Losses through the UPS, the isolation transformer/PDU and the server equipment produce an overall end-to-end efficiency of approximately 76 percent.

Data center efficiency is often evaluated using the efficiency ratings of the server and IT equipment alone. Despite recent advances in energy management and server technology, maximum efficiency can be achieved only by taking a holistic view of the power distribution system. Each component impacts the end-to-end cost and efficiency of the system. The entire system must be optimized in order for the data center to fully realize the efficiency gains offered by new server technologies.

Figure 1: End-to-end efficiency in the 400V AC power distribution system
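An end-to-end figure like this is simply the product of the component efficiencies in the chain. The individual values below are illustrative assumptions chosen to land near the approximately 76 percent quoted for the traditional system, not measured Eaton data.

```python
# End-to-end efficiency of a power chain is the product of the stages.
ups_eff = 0.92   # double-conversion UPS (assumed)
pdu_eff = 0.97   # isolation transformer / PDU (assumed)
psu_eff = 0.85   # server internal power supply (assumed)

end_to_end = ups_eff * pdu_eff * psu_eff
print(f"end-to-end efficiency: {end_to_end:.0%}")  # prints 76%
```

The multiplication also shows why eliminating a stage helps: removing the isolation transformer, as the 400V AC design does, deletes one factor from the product entirely.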
The 400V AC power system

[...] of efficiency, reliability and cost, as compared to the 480V AC and 600V AC models. In a 400V system, the neutral is distributed throughout the building, eliminating the need for PDU isolation transformers and delivering 230V phase-to-neutral power directly to the load. This enables the system to perform more efficiently and reliably, and offers significantly lower overall cost by omitting multiple isolation transformers and branch circuit conductors. Figure 1 shows that losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 80 percent.

The 600V AC power system

The 600V AC power system, while offering certain advantages over both the 480V AC and 400V AC systems, carries inherent inefficiencies that make it an impractical solution for most U.S. data centers. The 600V AC system offers a small equipment cost savings over the 480V AC and 400V AC systems, requiring less copper wiring feeding and lower currents, which reduce energy cost. In unique circumstances where larger data centers deploy multi-module parallel redundant UPS systems, 600V AC power equipment can support more modules with a single 4000A switchboard than in a 400V AC system, allowing data center managers to add a small amount of extra capacity at a nominal cost and with no increase in footprint.

With 600V AC power, the distribution system requires multiple isolation transformer-based PDUs to step down the incoming voltage to the 208/120V AC utilization voltage in conjunction with a 480V AC UPS, reducing efficiency even further. As shown in Figure 2, losses through the UPS, the isolation transformer/PDU, and the server equipment produce an overall end-to-end efficiency of approximately 76 percent, comparable to the efficiency of today's traditional 480V AC power distribution system.

Figure 2: End-to-end efficiency in the 600V AC power distribution system

Comparing total cost of ownership

TCO for the power distribution system is determined by adding capital expenditures (CAPEX), such as equipment purchase, installation and commissioning costs, and operational expenditures (OPEX), which include the cost of electricity to run both the UPS and the cooling equipment that removes heat resulting from the normal operation of the UPS.

The end-to-end efficiency of the 400V AC power distribution system is 80 percent versus 76 percent efficiency in the 600V AC system, with both systems running in conventional double conversion mode. The 400V AC system's higher efficiency drives significant OPEX savings over the 600V AC system, substantially lowering the data center's TCO both in the first year of service and over the 15-year typical service life of the power equipment.

To further reduce OPEX, many UPS manufacturers offer high-efficiency systems that use various hardware- and software-based technologies to deliver efficiency ratings between 96 and 99 percent, without sacrificing reliability. The Energy Saver
[...] load. The intelligent power core continuously monitors incoming power conditions and balances the need for efficiency with the need for premium protection, to match the conditions of the moment. When high-efficiency UPS systems are deployed, losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 84 percent.

400V AC Powers Ahead

The 400V AC power distribution system's lower equipment cost and higher end-to-end efficiency deliver significant CAPEX, OPEX and TCO savings as compared to the 600V AC system. The 400V AC system running in conventional double conversion mode offers an average 10 percent first-year TCO savings and an average 5 percent TCO savings over its 15-year service life, as compared to the 600V AC system. When running the 400V AC UPS in high-efficiency mode, the first-year TCO savings increase to 16 percent, and the 15-year TCO savings increase to 17 percent, minimizing data center cost in terms of both CAPEX and OPEX.

In CAPEX investment alone, the 400V AC configuration offers an average 15 percent savings over the 600V AC configuration [...] mode. OPEX savings rates are linear across all system sizes, indicating that savings will continue to increase in direct proportion to the size of the system.

Therefore, the 400V AC power distribution system offers the highest degree of electrical efficiency for modern data centers, significantly reducing capital and operational expenditures and total cost of ownership as compared to 600V AC power systems. Recent developments in UPS technology, including the introduction of transformerless UPSs and new energy management features, further enhance the 400V AC power distribution system for maximum efficiency.

This conclusion is supported by IT industry experts who theorize that 400V AC power distribution will become standard as U.S. data centers transition away from 480V AC to a more efficient and cost-effective solution over the next one to four years.

Chart 1: 15-year TCO (400V AC Energy Saver System vs. 600V AC double conversion mode)

About The Author: Jim Davis is a business unit manager for Eaton's Power Quality and Control Operations Division. He can be reached at JimRDavis@eaton.com. For more information about the 400V UPS power scheme, visit www.eaton.com/400volt.
www.datacenterjournal.com All rights reserved. Upsite Technologies, Inc. 2009
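End-to-end figures like the approximately 84 percent quoted for the 600V AC chain are simply the product of per-stage efficiencies. A minimal sketch; the stage values below are illustrative placeholders, not Eaton's measured numbers:

```python
# Sketch: end-to-end efficiency of a power chain is the product of the
# per-stage efficiencies. Stage values are illustrative placeholders.
from functools import reduce

stages = {
    "auto-transformer": 0.98,
    "UPS (double conversion)": 0.92,
    "transformer-based PDU": 0.97,
    "downstream distribution": 0.96,
}

end_to_end = reduce(lambda acc, eff: acc * eff, stages.values(), 1.0)
print(f"End-to-end efficiency: {end_to_end:.1%}")  # ~84% with these values
```

Raising any single stage's efficiency raises the product directly, which is why removing a transformer stage (as the 400V AC scheme does) improves the end-to-end number.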
Data Center Efficiency
It's in the Design
By Lex Coors, Vice President, Data Center Technology and Engineering Group, Interxion
Most companies undergoing data center projects have the mindset of cutting costs rather than helping the environment; however, they may want to adjust their focus. With data center greenhouse emissions set to overtake those of the airline industry in the next five to ten years, quadrupling by 2020, it has never been more critical for organizations to optimize their data centers.
If the cost savings are half as great for data centers as they have been for the airline industry, we will need to fasten our seatbelts.

Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part. In the past year, PUEenergy ratios have become the de facto metric for a data center's efficiency or green credentials, adopted by the industry sector, institutions and government bodies alike as an agreed way to measure the energy overhead of a data center. PUE is a useful yardstick, but it may distract us from the ultimate goal: reducing total energy usage. A data center with a low PUEenergy of 1.5 implements lean design and is operated well; one with a high PUEenergy is typically not built in a modular way and/or not operated well. So, how does an organization go about optimizing data center efficiency and improving its PUEenergy?

For organizations to reduce their PUE, they need to have an active focus on the following three areas: external efficiency, internal efficiency and customer efficiency. They need to monitor their PUE ratios against industry best practice. The first step for an existing data center is to measure.
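The PUEenergy ratio discussed here divides total facility energy, measured at the transformer or other main source, by IT equipment energy over the same period. A minimal sketch; the kWh readings are illustrative:

```python
# Sketch of the PUEenergy calculation: total facility energy (measured at
# the transformer or other main source) divided by IT equipment energy,
# accumulated over a measurement period. Readings below are illustrative.

def pue_energy(facility_kwh: float, it_kwh: float) -> float:
    """Energy-based PUE over a measurement period."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return facility_kwh / it_kwh

before = pue_energy(facility_kwh=3_000_000, it_kwh=1_500_000)  # 2.0
after = pue_energy(facility_kwh=2_100_000, it_kwh=1_400_000)   # 1.5
print(f"PUEenergy before: {before:.2f}, after: {after:.2f}")
```

Note that because PUE is a ratio, total energy can fall while PUE rises (for example, if IT energy falls faster than facility energy), which is exactly the effect the step-by-step process below warns about.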
Bearing this in mind, taking some of the following steps to achieve a better total performance of the site in energy usage will help you achieve a more energy efficient operation.

Step 1:
Measure the transformer (or other main source) energy usage and the IT energy usage, and calculate the PUEenergy.

Step 2:
Start harvesting the low-hanging fruit, based on the Uptime Institute guidelines that have been set for many years and are available on their website.

Step 3:
Measure the transformer and the IT energy usage again and calculate your new PUEenergy. You may observe that while your total energy usage has decreased, your PUEenergy ratio has increased.

Step 4:
Start switching off unneeded infrastructure, while maintaining your redundancy levels.

Step 5:
Measure the transformer and the IT energy usage and calculate your PUEenergy. You may now observe that your PUEenergy has decreased, and again that your total energy usage has decreased.

It comes as no surprise that good design leads to lower capital expenditure (CAPEX) and better efficiency, but what is good design? A model that has proved successful both in terms of efficiency and green credentials is Modular Design. Modular Design was developed by Lex Coors, Vice President of Data Center Technology and Engineering Group, Interxion, and is unique since it allows for future data center expansion without interruption of services to customers.

Recent research by McKinsey and the Uptime Institute identified five key steps to achieving operational efficiency gains:
n Eliminate decommissioned servers, which will equal an overall gain of 10-25%
n Virtualize, which leads to gains of 25-30%
n Reduce demand for new servers, which can also increase efficiency by 10-20%
n Introduce greener and more power-efficient servers and enable power saving features, which also equates to a 10-20% gain

By following the above steps, an organization can look to achieve an overall efficiency gain of 65%, significantly improving its PUE ratio.

The third and final piece of the efficiency puzzle is customer focus. An efficient data center should have hands-on expert support in energy efficiency implementation efforts, as well as best practice customer installation checklists. Staff need to be able to advise on how to reduce temperatures and energy usage through things like innovative hot and cold aisle designs. They need to have the tools in place to measure and analyze efficiency, implement the latest efficiency ratings, develop and implement first phase actions, and integrate figures and ratings with customers' CSR. Without such expertise in place, organizations will find it hard to reach their desired efficiency gains.

Green and efficient data centers are real and achievable, but emissions and the cost of energy are rising fast (although people now and then forget that these costs sometimes decrease temporarily), so we need to do more now. Organizations must work together, especially when it comes to measurement. Vendors should be providing standard meters on all equipment to measure energy usage versus productivity; if you don't know whether you're wasting energy, how can you change it?

But it's not just vendors who are responsible. Data center providers should provide leadership for industry standards and ratings that work, data center design and operational efficiency steps, and support for all customer IT efficiency improvements. What is apparent is that the whole industry, from the power suppliers to the rack makers, all need to work together to improve efficiencies and ensure that we are all at the forefront of efficient, green data center design. n
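A note on the arithmetic: the gains in the McKinsey/Uptime list compound multiplicatively rather than adding up. Taking the top of each quoted range (an assumption made here for illustration), the combined gain lands near the 65% overall figure:

```python
# Sketch: the listed efficiency steps compound multiplicatively; each
# step's gain applies to the energy left over from the previous step.
# Per-step values use the upper end of each quoted range (an assumption).

steps = {
    "eliminate decommissioned servers": 0.25,
    "virtualize": 0.30,
    "reduce demand for new servers": 0.20,
    "greener, power-managed servers": 0.20,
}

remaining = 1.0
for gain in steps.values():
    remaining *= 1.0 - gain  # energy left after applying this step
overall_gain = 1.0 - remaining
print(f"Overall efficiency gain: {overall_gain:.0%}")  # ~66%, near the 65% quoted
```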
Backing up files and data online has been around for quite a while, but it has
never really taken off in a big way for business customers. There is also a
new solution coming onto the market which uses the cloud for backup and
recovery of company data. While these two approaches to disaster recovery
appear to be similar, there are some significant differences as well.
So which one would be right for you?
Cloud recovery can be a nebulous term, so I would define it based on the solution having the following features:

1. The ability to recover workloads in the cloud
2. Effectively unlimited scalability with little or no up-front provisioning
3. Pay-per-use billing model
4. An infrastructure that is more secure and more reliable than the one you would build yourself
5. Complete protection - i.e. non-expert users should be able to recover everything they need, by default.

If a solution does not meet these five criteria, then it should be called an online backup product. This may be right for your business, but such products typically require more IT knowledge and are based on specific resources.

There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an online account) to restore. You might need a replacement server, new storage, and maybe even a new data centre, depending on what went wrong. Traditionally, you would either keep spare servers in a disaster recovery data centre, or suffer a period of downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud; you want the ability to actually start up applications and use them, no matter what went wrong in your own environment.

The next area where cloud recovery can provide a better level of protection is around provisioning. Even using online backup systems, organizations would have to use replacement servers in the event of an outage. The whole point of recovering to the cloud is that cloud providers already have plenty of servers and additional capacity on tap. If you need more space to cope with a recovery incident, then you can add this to your account. Under this model, your costs are much lower than building the DR solution yourself, because you get the benefit of duplicating your environment without the upfront capital cost.

Removing the up-front price and long-term commitment shifts the risk away from the customer and onto the vendor. The vendor just has to keep the quality up to keep customers loyal, which requires great service and efficient handling of customer accounts. The cloud recovery provider takes on all the management effort and constant improvement of infrastructure that is required. A business without in-house staff familiar with business continuity planning may ultimately be much better off paying a monthly fee to someone who specializes in this area.

One area where cloud providers may be held to account is around security and reliability, but I think they hold the providers to the wrong standard. In the end, you have to compare the results that a cloud services provider can achieve, the service levels that they work to, and the cost compared to doing it yourself. The point is that security and reliability are hard, but they are easier at scale. Companies like Amazon and Rackspace do infrastructure for a living, and do it at huge scale. Amazon's outages get reported in the news, but how does this compare to what an individual business can achieve?

The last area where cloud recovery can deliver better results is through usability and protecting everything that a business needs. While some businesses know exactly what files should be protected, most either don't have this degree of control, or have got users into the habit of following standard formats or saving documents into specific places. The issues that people normally get bitten by are with databases, configuration changes and weird applications that only a couple of people within the organization use. Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.

Cloud means so many different things to so many people that it sometimes seems not to mean anything at all. If you are going to depend on it to protect your data, it had better mean something specific. These five points may not cover every possible protection goal, but they set a good minimum standard. n
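The author's five criteria amount to a checklist: a solution that fails any one of them is "online backup," not cloud recovery. A quick sketch; the class, field names, and example values are hypothetical, for illustration only:

```python
# Sketch: the five cloud-recovery criteria as a checklist. The dataclass
# fields and example values are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class DRSolution:
    recovers_workloads_in_cloud: bool
    scales_without_provisioning: bool
    pay_per_use: bool
    more_secure_reliable_than_diy: bool
    complete_by_default: bool

def is_cloud_recovery(s: DRSolution) -> bool:
    """True only if all five criteria hold; otherwise call it online backup."""
    return all(vars(s).values())

online_backup = DRSolution(False, True, True, False, False)
print(is_cloud_recovery(online_backup))  # False -> it's an online backup product
```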
According to a recent Computing Technology Industry Association (CompTIA) survey (see http://www.comptia.org/pressroom/get_pr.aspx?prid=1410), although most respondents still consider viruses and malware the top security threat, more than half (53 percent) attributed their data breaches to human error, presenting another dimension to the rising concern about insider threats. It should serve as a wake-up call to many organizations that inadvertent or malicious insider activity can create a security risk.

For instance, take the recent data breach that impacted the Metro Nashville Public Schools. In this case, a contractor unintentionally placed the personal information of more than 18,000 students and 6,000 parents on an unsecured Web server that was searchable via the Internet. Although this act was largely chalked up to human error and has since been corrected, anyone accessing the information when it was freely available online could create a data breach that could cause significant harm to these students and parents.

Moreover, the Identity Theft Resource Center (ITRC) recently reported that insider theft incidents more than doubled between 2007 and 2008, accounting for more than 15 percent of data breaches. According to the report, human error breaches, as well as those related to data-in-motion and accidental exposure, accounted for 35 percent of all data breaches reported, even after factoring in that the number of breaches declined slightly during this period.

To significantly cut the risk of these insider breaches, enterprises must have appropriate systems and processes in place to avoid or reduce human errors caused by inadvertent data leakage, sharing of passwords, and other seemingly harmless actions.

One approach to address these challenges is digital vault technology, which is especially valuable for users with high levels of enterprise/network access as well as those handling sensitive information and/or business processes: users with privileged access (including third-party vendors or consultants and executive-level personnel) or access to the core applications running within an organization's critical infrastructure.

Instead of trying to protect every facet of an enterprise network, digital vault technology creates safe havens (distinct areas for storing, protecting, and sharing the most critical business information) and provides a detailed audit trail for all activity associated within these safe havens. This encourages more secure employee behavior and significantly reduces the risk of human error.

Here are some best practices for organizations serious about preventing internal breaches, be they accidental or malicious, of any processes that involve privileged access, privileged data, or privileged users.

1
Establish a Safe Harbor
By establishing a safe harbor or vault for highly sensitive data (such as administrator account passwords, HR files, or intellectual property), build security directly into the business process, independent of the existing network infrastructure. This will protect the data from the security threats of hackers and accidental misuse by employees.
A digital vault is set up as a dedicated, hardened server that provides a single data access channel with only one way in and one way out. It is protected with multiple layers of integrated security including a firewall, VPN, authentication, access control, and full encryption. By separating the server interfaces from the storage engine, many of the security risks associated with widespread connectivity are removed.

2
Automate Privileged Identities and Activities
Ensure that administrative and application identities and passwords are changed regularly, highly guarded from unauthorized use, and closely monitored, including full activity capture and recording. Monitor and report actual adherence to the defined policies. This is a critical component in safeguarding organizations and helps to simplify audit and compliance requirements, as companies are able to answer questions associated with who has access and what is being accessed.
As listed among the Consensus Audit Guidelines' 20 critical security controls, the automated and continuous control of administrative privileges is essential to protecting against future breaches. [Editor's note: the guidelines are available at http://www.sans.org/cag/.]

3
...create a plan to secure, manage, automatically change, and log all privileged passwords.

4
Secure Embedded Application Accounts
Up to 80 percent of system breaches are caused by internal users, including privileged administrators and power users, who accidentally or deliberately damage IT systems or release confidential data assets, according to a recent Cyber-Ark survey.
Many times, the accounts leveraged by these users are the application identities embedded within scripts, configuration files, or an application. The identities are used to log into a target database or system and are often overlooked within a traditional security review. Even if located, the account identities are difficult to monitor and log because they appear to a monitoring system as if the application (not the person using the account) is logging in.

5
Avoid Bad Habits
To better protect against breaches, organizations must establish best practices for securely exchanging privileged information. For instance, employees must avoid bad habits (such as sending sensitive or highly confidential information via e-mail or writing down privileged passwords on sticky notes). IT managers must also ensure they educate employees about the need to create and set secure passwords for their computers instead of using sequential password combinations or their first names.
The lesson here is that the risk of internal data misuse and accidental leakage can be significantly mitigated by implementing effective policies and technologies. In doing so, organizations can better manage, control, and monitor the power they provide to their employees and systems and avoid the negative economic and reputational impacts caused by an insider data breach, regardless of whether it was done maliciously or by human error. n
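The rotate-regularly-and-record idea in best practice 2 can be sketched in a few lines. The account name, 30-day interval, and in-memory dictionaries are illustrative stand-ins; a real deployment would use a vault product's API:

```python
# Sketch: scheduled rotation of a privileged password with an audit log.
# Account name, interval, and in-memory storage are illustrative stand-ins.
import secrets
import string
from datetime import datetime, timedelta

ROTATION_INTERVAL = timedelta(days=30)  # illustrative policy
ALPHABET = string.ascii_letters + string.digits

# In-memory stand-ins for a hardened vault and its audit trail.
vault = {"db-admin": {"password": "initial", "rotated": datetime(2009, 1, 1)}}
audit_log = []

def rotate_if_due(account: str, now: datetime) -> bool:
    """Rotate the account's password if the interval has elapsed; log it."""
    entry = vault[account]
    if now - entry["rotated"] < ROTATION_INTERVAL:
        return False
    entry["password"] = "".join(secrets.choice(ALPHABET) for _ in range(20))
    entry["rotated"] = now
    audit_log.append((now, account, "rotated"))  # full activity capture
    return True

rotate_if_due("db-admin", datetime(2009, 3, 1))
print(len(audit_log))  # 1
```

The audit log is what lets an organization answer "who has access and what is being accessed" after the fact.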
For many shops, this information is unavailable: IT does not receive an energy bill, and does not use, or have, tools to identify its share of energy consumption. In the past, electricity costs, especially in smaller IT shops, were of minor concern; in many cases, the energy bill was simply left in the hands of the facilities director or company accountant to pay and file away.

However, in the same study, Info-Tech finds that 28% of IT departments are now piloting an energy measurement solution of some kind, and an additional one-quarter of shops are planning a measurement project within twelve months. Many converging factors drive interest in measuring and managing energy use, and the major ones are outlined here:

n Increasing energy costs
The US Energy Information Administration (EIA) reports that between 2000 and 2007, the average price of electricity for businesses increased from 7.4 cents per kilowatt-hour (kWh) to 9.7 cents per kWh, an increase of 30%.

n Burgeoning data center energy consumption
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the energy density of typical mid-range server setups has increased about four times between 2000 and 2009 (from about 1,000 watts per square foot to almost 4,000). Greater server consumption means more waste in the form of heat, so energy consumption of cooling and support systems also spikes simultaneously.

n Green considerations
Energy consumption has an associated carbon footprint. Interest in reducing energy use has increased in IT and senior management ranks.

Ultimately, interest in energy data is driven by the age-old accounting precept: What gets measured gets done. Realizing that energy use will become a compounding issue, a growing number of IT shops seek to quantify energy as an operational cost, just like line items such as staffing and maintenance. Once the cost is accounted for, IT has a number to improve on. In this note, learn about three options for obtaining energy numbers in the data center. A companion Info-Tech Advisor research note, Energy Measurement Methods for End-User Infrastructure, describes how to obtain energy data at the user infrastructure level (workstations, printers, and the like).

Considerations for Calculation
Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other bucket, and by tracking both, IT can understand this relationship. These buckets are also necessary for common efficiency calculations; for more information, refer to the Info-Tech Advisor research note, If You Measure It, They Will Green: Data Center Energy Efficiency Metrics. Software for tracking energy use and cost is another consideration. While assessing the need for a full energy management solution, IT shops can use something as simple as an Excel spreadsheet to enter energy figures and track costs over a few months. Specifics on collecting data-serving and support equipment energy data, and tracking software, are discussed further below.

Option One: You May Already Have Access to Energy Data
Depending on data center setup and the vintage and pedigree of equipment, some IT shops can already collect energy numbers at the data-serving or support equipment levels. The following scenarios are common starting points when beginning data collection:

n Existing software metering
Newer servers, power-distribution units (PDUs) and UPS systems have monitoring built into the included management consoles. For example, newer HP ProLiant blades ship with power tracking features, and the HP Insight Control management console provides energy monitoring capabilities.

n Existing hardware metering
Some server racks and PDUs may have hardwired meters built in. For example, some of APC's more basic PDUs for racks have built-in power screens.

Unfortunately, built-in metering is rarer in the support equipment bucket. Many older data center air conditioning units and air handlers do not provide this data. In some cases, one can estimate this energy number by subtracting the data-serving bucket from the total data center energy draw. But, since older data centers may not be sub-metered (the draw of the data center is not measured separately from the rest of the building), one cannot always perform this calculation, and installation of a meter is necessary.
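The subtraction estimate described above is simple to express; the readings below are illustrative and assume the data center is sub-metered:

```python
# Sketch: estimate the support-equipment bucket by subtracting the
# data-serving draw from the total data center draw. Only valid when the
# data center is sub-metered; the kWh readings are illustrative.

def support_kwh(total_dc_kwh: float, data_serving_kwh: float) -> float:
    if data_serving_kwh > total_dc_kwh:
        raise ValueError("IT draw cannot exceed total data center draw")
    return total_dc_kwh - data_serving_kwh

print(support_kwh(total_dc_kwh=120_000, data_serving_kwh=70_000))  # 50000.0
```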
n Industrial-strength meters
Standard Performance Evaluation Corporation (SPEC) provides a list of heavier-duty
energy meters, which typically run $200 US to more than $2000 US. These meters, many
of which are designed for manufacturing and industrial environments, include data con-
nectivity and are better-suited to handling the industrial-grade energy requirements of
multiple PDUs and high-voltage components in data centers. SPEC provides free measure-
ment software that is verified as compatible with these devices.
To collect data in both buckets, IT may need to have an electrician or data center profes-
sional install sub-meters or dedicated measurement devices. If the organization is not yet ready
for such a move, cheap and cheerful options should at least provide a rough cost number for the
data-serving bucket to quantify the true operational cost of servers and storage.
Note that options one and two often come with two major disadvantages. First, some solutions model energy use of isolated components in the data center. IT still won't understand how changing the energy consumption of a group of components affects other components in the data center; for example, changing server loads affect heat output and thus air cooling needs. Second, measuring total data center energy use at only one or a few points causes flat trending; essentially, IT will have a total energy use/cost number, but won't understand how energy use trends up and down in different areas of the data center. With both of these disadvantages, long-term optimization remains difficult. Options one and two are good ways to get an overall handle on energy costs, while major optimizations often require a bigger investment in option three, described next.
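As a stand-in for the "simple Excel spreadsheet" approach mentioned earlier, a few lines of code can turn monthly kWh readings into cost figures. The readings are made up; the rate is the 2007 EIA business average quoted above (9.7 cents per kWh):

```python
# Sketch: track monthly energy cost from kWh readings, the code
# equivalent of a simple tracking spreadsheet. Readings are sample data;
# the rate is the 2007 EIA business average quoted in the text.

RATE_USD_PER_KWH = 0.097

monthly_kwh = {"Jan": 41_200, "Feb": 38_900, "Mar": 42_750}
monthly_cost = {m: kwh * RATE_USD_PER_KWH for m, kwh in monthly_kwh.items()}

for month, cost in monthly_cost.items():
    print(f"{month}: ${cost:,.2f}")
print(f"Quarter total: ${sum(monthly_cost.values()):,.2f}")
```

Once a few months of figures exist, IT has the baseline number to improve on that the article calls for.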
After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them.

1
Problem: Leaky raised access floor
Most existing data centers employ raised access floor to route cold air from cooling units to floor air outlet tiles and grilles that discharge the air where needed. However, leaks in the floor waste the cold air and reduce cooling ability.

r Remedy:
...oversized floor cable cutouts. Unnecessary cutouts should be eliminated and necessary cutouts should be closed with brush-type closures.
2
...that are not in service, then carefully remove (mine) them. If you don't have this expertise in your staff, then you should engage a skilled IT cabling contractor.

3
Problem: Space temperature too cold
In the past, data center managers liked to keep the room like a meat locker, believing the theory that a colder space would buy a little more ride-through time when the cooling system went off and had to be restarted. The minuscule additional ride-through time (a few seconds) is gained at the high operating cost of keeping the room unnecessarily cold. The current ASHRAE TC9.9 Recommended Thermal Envelope is 64.4 F to 80.6 F dry bulb air at the server inlet; the warmer the air temperature, the lower your operating cost.

r Remedy:
Move the control thermostats in each of your cooling units to the discharge air side if not already located there (one unit at a time) and calibrate the thermostat. Set the thermostat to maintain 60 F discharge air. Once all of the thermostats are on the discharge air side, start raising their setpoints 1 F at a time and monitor the inlet temperature at your warmest servers for a day. If the inlet air temperature at your warmest server is less than 75 F after a day, raise the temperature leaving the cooling units another degree. Continue until the warmest server has 75 F entering air.

4
Problem: Cooling units fight each other
We cannot count how many times we've seen one cooling unit cooling and dehumidifying while the one beside it is humidifying. This is an energy-wasting process that is a relic of the days when the industry consensus design condition was 72 F +/- 2 F and 40% relative humidity +/- 5% (and before that, a relic of the paper punch card days). As mentioned above, today's thermal envelope is 64.4 F to 80.6 F dry bulb. The same thermal envelope specification also includes a recommended range of moisture content.

r Remedy:
Disable humidification and reheat in all cooling units except two in each room (on opposite sides of the room). Change the controls for those units so they operate based on room dew point temperature. If multiple sensors are used, it's important that a single average value be used as the controlled value. This can prevent calibration errors between multiple sensors from forcing CRAC units to fight each other. Set the controls to maintain dew point within the ASHRAE TC9.9 Recommended Thermal Envelope.

5
Problem: Electrical redundancy for cooling units is lower than the mechanical redundancy
This is another one we've lost count of. The typical scenario is that the desired site redundancy is Tier III or Tier IV; the mechanical engineer has done a good job designing to the desired tier, but the electrical engineer lost focus and branch circuited every cooling unit to one or two panelboards. The end result is that the redundancy of the site is Tier I, because the electrical redundancy for the cooling units is lower than the mechanical redundancy. For example, assume that the need is for 10 cooling units and 12 are provided, so the mechanical redundancy is N+2. The electrical engineer, however, has circuited all cooling units to one branch circuit panelboard, so the electrical redundancy is N: if the one panelboard fails, then all cooling fails.

r Remedy:
Identify another source to supply backup power for the cooling units; this source may be direct from the standby generator if need be. The main criterion for this Source 2 is that it is available if the original Source 1 fails. Then, add transfer switches for each cooling unit so that Source 2 will supply if Source 1 fails.

6
Problem: No hot aisle/cold aisle cabinet arrangement
This problem becomes more burdensome as the critical load density (watts/square...

7
Problem: Too many CRAC units operating
This one may seem counterintuitive, so it's no surprise that this occurs in most legacy data centers. Poor air flow management creates hot spots, i.e. locations where the temperature entering the server cabinets is outside of the TC9.9 thermal envelope. The conclusion most data center managers and facilities managers make is that there is insufficient capacity, so they run more CRAC units.

r Remedy:
Adding more CRAC units when the capacity was already sufficient actually makes the problem worse, especially when using constant volume CRAC units. The CRAC units will operate less efficiently, using more energy to dehumidify the space, which in turn forces the reheat coils and the humidifiers to run concurrently. The solution is to eliminate the humidifiers in all but two units (see item #4 above) and disconnect all reheat coils. An equally important step is to match the load within the space to the capacity available. It is common to see 300% of the needed capacity actually on and operating at any time. Once the air flow management remedies listed in items #1 through #4 above are implemented, the more appropriate capacity that should be operating at any time is 125% to 150%.

8
Problem: The cabinets restrict airflow into the servers contained inside
Sometimes, the data center's worst enemies are the cabinets selected for the space. Legacy data centers often used cabinets with solid glass or panel doors. Even though some breathing holes are provided, they do in fact offer too much resistance to the air flow needed by the computer equipment inside.

r Remedy:
Replace doors with perforated doors of large free area. The larger the free area, the better. This applies to both front and rear doors of the cabinets. n
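The remedy for Problem 3 is essentially a slow feedback loop: raise the discharge-air setpoint one degree per day until the warmest server inlet reaches 75 F. A minimal sketch, with an assumed fixed room-to-inlet offset standing in for real sensor readings:

```python
# Sketch of the Problem 3 remedy: bump the discharge-air setpoint 1 F per
# day of monitoring until the warmest server inlet reaches 75 F.
# The 12 F room-to-inlet offset below is an assumption for the simulation.

TARGET_INLET_F = 75.0

def raise_setpoint(setpoint_f: float, warmest_inlet_f: float) -> float:
    """Return tomorrow's discharge-air setpoint after a day of monitoring."""
    if warmest_inlet_f < TARGET_INLET_F:
        return setpoint_f + 1.0  # safe to warm the room another degree
    return setpoint_f            # hold: warmest server is at the limit

setpoint = 60.0  # starting discharge-air setpoint from the remedy
for _ in range(30):  # simulate a month of daily checks
    inlet = setpoint + 12.0  # hypothetical offset between discharge and inlet
    setpoint = raise_setpoint(setpoint, inlet)
print(setpoint)  # settles at 63.0 with these assumed numbers
```

The loop stops raising the setpoint the first day the warmest inlet is no longer below 75 F, which mirrors the one-degree-per-day procedure in the text.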
The economy has certainly been tough on all of us these past 12 months. I thought it might be worthwhile to revisit an article we published on DCJ in 2006 concerning technology and its market potential and duration.

We believe that these questions can be easily answered by recalling something learned years ago in Econ 101: the S curve. The basic tenets of the S curve are that 1) all successful products follow a known and predictable path through three stages (Innovation, Growth, and Maturity), and 2) these stages are of equal length.

So let's explore the history of the computer and the Internet: events, dates and time frames.

The first electronic computer was developed for the US military and was first put in use in 1945. By today's standards for electronic computers, the ENIAC was a grotesque monster. It had thirty separate units, weighed over thirty tons, used 19,000 vacuum tubes and 1,500 relays, and demanded almost 200,000 watts of electrical power. ENIAC was the prototype from which most other modern computers evolved.

In 1960, the first commercial computer with a monitor and keyboard was introduced: Digital's PDP-1.

In 1962, the first personal computer was introduced. It was called LINC, and each unit cost over $40,000.

In 1969, ARPANET was created to link government researchers scattered across the US at universities and research facilities so that they could share data. This was the start of the Internet.

In 1976, Apple Computer Company was created, and around 1977 the first Apple computer was introduced. It was a kit that the customer assembled. The next year Apple introduced a factory-assembled version. The volume of sales was small and the costs high. The Apple was followed by an almost endless list of me-too computers: Timex Sinclair, Commodore, Tandy, Pet, etc.

In 1981, IBM introduced The Personal Computer. The IBM name, open architecture, and DOS operating system enabled other manufacturers to introduce IBM-compatible PCs, also known as "clones."

Then in 1985 Microsoft introduced Windows. Windows moved the PC from text-based commands to point and click. This transformed the PC from a tool for only the most dedicated into something that everyone could easily master, and moved the PC from something considered a toy by many to a legitimate business tool. Sales volumes kicked up, competition was fierce, and prices dropped dramatically.

By 2001, the PC was readily available, inexpensive, and standard equipment on almost every desk in corporate America: a commodity product with low margins and slow growth. This could be the end of the story, but the growth of another technology would overshadow the development of the PC, push technology into our everyday lives, and give the PC a new lease on life.

As PCs developed, so did ARPANET. The Internet was largely used by IT professionals, researchers, academia and other early adopters of technology. It was slow, text based and difficult to use. In 1994, Jim Clark and Marc Andreessen developed the Netscape browser, and just as Windows had made the PC a practical tool, Netscape made the Internet practical.

There were many other milestones that deserve attention and were perhaps more important than some of the events mentioned here, such as the research performed at Xerox PARC, where modern desktop computing was created: windows, icons, mice, pull-down menus, What You See Is What You Get (WYSIWYG) printing, networked workstations, object-oriented programming, etc. What many don't know is that Xerox could have owned the PC revolution but simply couldn't bring itself to disrupt its core business of making copiers.

Why is all of this so important? Well, depending on your starting point, the innovation phase is likely to have been somewhere between 20 and 30 years, and possibly even longer. This isn't an exact science, since we don't know how large the market will ultimately grow or where the curve really starts. No matter how you draw the curve, we are likely below the 50% penetration level and have a long stretch to go.

The dot-com boom was fueled by the release of significant IT resources and talent as Y2K preparations drew to a close, an investment community that recognized the tremendous technology growth ahead, and significant innovation.

The dot-com bust occurred because an overanxious investment community provided too much money too fast. The buying power of the Early Adopters (the people and companies who want to be on the leading edge and are willing to pay high prices) just wasn't significant enough to absorb all of the innovation. This pushed the supply above the curve. As with all economic imbalances, the market forces a correction.

Further, many dot-com innovations lacked key infrastructure. Just as the automobile could not have been successful without the development of roads, bridges, gas stations, tire dealers, hotels and even fast food, many of the services introduced during the dot-com boom required significant development in other areas.

For example, hosting applications at remote unmanned data centers or collocation facilities is only practical with remote management applications and inexpensive bandwidth. We may take this for granted today, but bandwidth wasn't inexpensive seven years ago, and remote management tools were not as sophisticated as they are today.

Yes, there have been casualties along the way, but significant advancements were made during the dot-com boom, and early adopters have in many cases reaped many benefits. Managed Service Providers, collocation and other services have seen significant growth and success since we first published this article and, if our numbers are correct, have quite a run to go.
Timeline (The Computer Museum):

1962: Max V. Mathews leads a Bell Labs team in developing software that can design, store, and edit synthesized music.
1962: The first video game is invented by MIT graduate student Steve Russell. It is soon played in computer labs all over the US.
1962: The Telstar communications satellite is launched on July 10 and relays the first transatlantic television pictures.
1962: H. Ross Perot founds Electronic Data Systems, which will become the world's largest computer service bureau.
1963: On the basis of an idea of Alan Turing's, Joseph Weizenbaum at MIT develops a mechanical psychiatrist called Eliza that appears to possess intelligence.
1969: Bell Labs withdraws from Project MAC, which developed Multics, and begins to develop Unix.
1969: The RS-232-C standard is introduced to facilitate data exchange between computers and peripherals.
1970: Shakey, developed at SRI International, is the first robot to use artificial intelligence to navigate.
1977: Bill Gates and Paul A… setting up shop first in Al…
1977: The Apple II is announced in the spring and establishes the benchmark for personal computers.
1981: Barry Boehm devises Cocomo (Constructive Cost Model), a software cost-estimation model.
1981: Japan grabs a big piece of the …