Professional Documents
Culture Documents
Monday, November 18, 2013
Oser Communications Group
Denver
SCSD: Tell our readers about your company. What's your main line of
business?
BM: CogniMem stands for cognitive memory, and what we do is build general-purpose artificial intelligence hardware. It is different from traditional computing in that it
operates purely in parallel, and it is taught versus programmed.
Traditional computing, like what is in your smartphone or personal computer, has
reached its limits in going faster in a serial fashion. You have probably noticed that no
one talks about faster CPUs anymore, and when you try to put many of these processors together, they are very difficult to program.
What we build is practical and commercially available hardware that is patterned after how we biologically process information. That is, massively parallel
pattern recognition. We do not have physically separate processor and memory
Continued on Page 17
Continued on Page 12
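The "taught versus programmed" parallel recognition BM describes can be sketched in miniature. The sketch below is an illustrative nearest-prototype classifier in Python, not CogniMem's actual hardware interface; the class, method names and threshold-free winner-take-all rule are all invented for illustration.

```python
# Toy sketch of "taught, not programmed" pattern recognition: each
# stored example acts like a neuron that compares itself to the input.
# All comparisons are independent, which is why hardware can run them
# purely in parallel. Not CogniMem's API; names are illustrative.

def distance(a, b):
    """Manhattan distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class PatternMemory:
    def __init__(self):
        self.neurons = []  # list of (prototype_vector, category)

    def learn(self, vector, category):
        """Teaching = storing an example; no code changes needed."""
        self.neurons.append((vector, category))

    def classify(self, vector):
        """Every neuron computes its distance independently (parallel
        in hardware); the closest stored prototype wins."""
        if not self.neurons:
            return None
        _, category = min(
            (distance(proto, vector), cat) for proto, cat in self.neurons
        )
        return category

mem = PatternMemory()
mem.learn([0, 0, 0, 0], "dark")
mem.learn([9, 9, 9, 9], "bright")
print(mem.classify([1, 0, 2, 1]))  # prints "dark"
```

Adding a new category is just another `learn()` call, which is the point of the taught-versus-programmed distinction: behavior changes by storing examples, not by rewriting code.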
ADAPTIVE COMPUTING
ANNOUNCES NEW PRODUCT
SCSD: Tell our readers about your company. What is your main line of
business?
Continued on Page 12
SCSD: Does Adaptive help alleviate the logjam and solve these big
data challenges?
Continued on Page 17
The key is to target your performance, capacity and your budget. In our
labs, we explored all the options for
shared SSD storage that does
not break the bank. The test results, comparing mechanical-only with hybrid SSD
solutions, were disappointing. All of the
buzz about hybrid RAID controllers went
down the drain.
We started testing all-SSD drive
arrays. We tested a set of 16 drives with
good endurance for shared storage. It
started around 500K IOPS, and after an
hour of sustained data bombardment, the perform-
Super Computer Show Daily
Infrastructure (SDSI) knits the whole thing together with Super Node Manager (SNM), easy-to-use configuration, optimization, and I/O control of all deployed Super Nodes; PERCEUS, whole infrastructure OS and application stack provisioning; Abstractual, intelligent system management and workload scheduling; GravityFS, distributed, parallel file system; and GravityPark Open Parallel Toolkit, advanced, next-generation
Continued on Page 17
AN INDEPENDENT PUBLICATION
NOT AFFILIATED WITH SC
Lee M. Oser
CEO and Editor-in-Chief
Lyle Sapp
Senior Associate Publisher
Director of Sales
Kim Forrester
Jeff Rosano
Associate Publishers
Lorrie Baumann
Editorial Director
Hayden Neeley
Senior Associate Editor
Jeanie Catron
Associate Editor
Yasmine Brown
Keaton Kohl
Graphic Designers
Ruth Haltiwanger
Customer Service Managers
Lynn Hilton
Jeff Meyer
Account Managers
Enrico Cecchi
European Sales
Super Computer Show Daily is published by
Oser Communications Group. ©2013.
All rights reserved.
Executive and editorial offices at:
1877 N. Kolb Road, Tucson, AZ 85715
520.721.1300/Fax: 520.721.6300
www.oser.com
European offices located at Lungarno Benvenuto
Cellini, 11, 50125 Florence, Italy.
CIENA DEMONSTRATES
PROTOTYPE AT SC13
At Supercomputing, Ciena will demonstrate a prototype of an open, modular
multi-layer Software Defined Network
(SDN) controller and autonomic intelligence applications for use on carrier
grade wide area networks (WANs). The
SDN will connect to the industry's first
live, fully functional international
research testbed that unites all of the key
packet, optical and software building
blocks required to demonstrate and prove
the benefits of software-defined, multi-layer service provider WANs.
The testbed was created in collaboration with Ciena's research and education (R&E) partners CANARIE,
Internet2, StarLight and ESnet. It
spans more than 2,500 km and connects
Ciena labs in Ottawa, Canada and
Hanover, Md. with the R&E community via StarLight in Chicago. An important component of Ciena's OPn architecture, SDN supports open, application-driven and analytics-enhanced
control of wide area networks, laying
the groundwork for more efficient
capacity utilization and new advanced
research applications.
The testbed leverages OpenFlow
across both the packet and transport
layers, is supported by an open architecture carrier-scale SDN controller
and intrinsic multi-layer operation, and
incorporates real-time analytics software applications.
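A multi-layer SDN controller of this kind typically computes end-to-end paths by treating the packet, OTN and photonic layers as one weighted graph with inter-layer "adaptation" edges. A minimal Python sketch of such a path computation element follows; the topology, sites and costs are invented for illustration and are not Ciena's controller or its API.

```python
# Toy multi-layer path computation: nodes are (site, layer) pairs,
# adaptation between layers is just another weighted edge, and plain
# Dijkstra finds the cheapest end-to-end path across all layers.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: [(neighbor, cost), ...]}; returns (cost, path)
    or None if dst is unreachable."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None

# Invented three-site, three-layer topology; adaptation edges cost 1.
topology = {
    ("ottawa", "packet"):    [(("ottawa", "otn"), 1)],
    ("ottawa", "otn"):       [(("chicago", "otn"), 5)],
    ("chicago", "otn"):      [(("hanover", "photonic"), 4),
                              (("chicago", "packet"), 1)],
    ("hanover", "photonic"): [(("hanover", "packet"), 1)],
}
cost, path = shortest_path(topology, ("ottawa", "packet"), ("hanover", "packet"))
```

Because layers share one graph, the computed path naturally dips into whichever layer is cheapest for each hop, which is the essence of multi-layer control.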
The SDN controller incorporates a
multi-layer path computation element
and leverages OpenFlow v1.3 with
transport extensions across packet,
OTN and photonic layers for end-to-end flow/connection control of the following network elements: a prototype 4
NUMASCALE PROVIDES
PLUG-AND-PLAY SMP, SHARED
MEMORY AT A CLUSTER PRICE
By Trond Smestad, CEO, Numascale
one-person installation in data centers,
airborne ISR platforms, mobile shelters
and portable transit cases. The 3U x 18"
x 3.4" sleds fit easily into the enclosure to
protect your investment and your data in
highly secure environments.
The FSA supports OSS PCIe direct
attached storage as well as Fibre Channel
SAN or InfiniBand NAS storage options
via the Fusion-io ION Data Accelerator
software. In direct attached mode, an
internal switch matrix allows from one to
four servers to have direct access to the
Fusion ioScale memory in multiple configurations. The sleds act in concert or
separately to fit the changing needs of
any storage application while supporting
any RAID level available to the servers.
In network attached mode, the ION Data
Accelerator software provides a Fibre
Channel or InfiniBand path across servers,
virtual machines and more concurrent
users than the direct attached mode. Up
to 100TB of shared ioMemory becomes
available with industry-leading performance, minimum latency and comprehensive visibility.
The FSA achieves end-to-end high
availability at every level in the system.
At the ioMemory level, Fusion-io
Adaptive Flashback software increases
flash reliability and endurance by
rebuilding data at the individual NAND
banks. At the module level, the Fusion
ioScale flash memory offers the reliability proven in the world's largest datacenters. At the chassis level, the OSS switch
matrix, removable sleds and IPMI module allow for environmental monitoring,
physical rerouting of storage traffic and
hot-swap of the ioScale memory platform. At the array level, the Fusion-io
ION Data Accelerator software provides
replication clustering and SNMP real-time performance and physical array
monitoring.
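The bank-level rebuild described above depends on redundancy across NAND banks. As a toy illustration of the underlying principle (XOR parity, as in RAID-5; this is not Fusion-io's actual Adaptive Flashback algorithm, and the bank layout is invented):

```python
# XOR-parity rebuild in miniature: a parity bank is the byte-wise XOR
# of the data banks, so any single failed bank can be reconstructed
# by XOR-ing the survivors with the parity.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

banks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(banks)

# Simulate losing bank 1 and rebuilding it from parity + survivors.
survivors = [banks[0], banks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == banks[1]
```

The same arithmetic works at any granularity, which is why the technique scales from whole drives down to individual memory banks.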
VISUALIZE AT EXASCALE
WITH KITWARE
Advances in high-performance computing and data acquisition technologies are
allowing researchers to contemplate
more complex problems than ever before
in many scientific, engineering and medical fields. The research community is
facing the challenge of how to manage,
analyze and visualize data of such
unprecedented size in a meaningful way.
With expertise in high-performance, distributed visualization and data processing, Kitware is addressing these issues.
As a leader in scientific visualization, Kitware is developing the next-generation infrastructure that will power visualization at the exascale.
Visualization and analysis are critical to
understanding complex data, but current
analysis and visualization in an end-user
environment. ParaView users can quickly build visualizations to analyze their
data using qualitative and quantitative
techniques, and explore data interactively in 3D. ParaView was developed to analyze extremely large datasets using distributed memory computing resources, but it is also a powerful tool on a standard desktop or laptop computer.
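The distributed, data-parallel approach ParaView takes can be sketched in miniature: split the data into chunks, filter each chunk independently (on a cluster these would be separate nodes), and merge the partial results. The following is a pure-Python stand-in with an invented threshold filter, not ParaView's API.

```python
# Minimal data-parallel pipeline: chunk -> independent filter -> merge.
# Threads stand in for cluster nodes; the filter is a ParaView-style
# threshold that keeps values in a range.
from concurrent.futures import ThreadPoolExecutor

def threshold_chunk(chunk, lo, hi):
    """Keep only values inside [lo, hi] -- a typical threshold filter."""
    return [v for v in chunk if lo <= v <= hi]

def parallel_threshold(data, lo, hi, n_chunks=4):
    """Split data, filter each chunk independently, merge in order."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda c: threshold_chunk(c, lo, hi), chunks)
    return [v for part in parts for v in part]

values = list(range(100))
kept = parallel_threshold(values, 25, 74)  # the 50 values from 25..74
```

Because each chunk is processed without reference to the others, the same pipeline scales from a laptop to distributed memory machines, which is the property the DAX and PISTON efforts mentioned below generalize to many-core processors.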
To augment these products,
Kitware is contributing to the collaborative efforts on the Data Analysis at Extreme (DAX) toolkit and the Portable Data-Parallel Visualization and Analysis Library (PISTON). Both of
these efforts are targeted at delivering extremely scalable data analysis functionality using current and next-generation processors, including multi- and
many-core architectures. With the
architectures of DAX and PISTON,
researchers will be able to leverage
boards and barebones: 1U cold storage
barebone, 2U SSD cache barebone, 3U
GPGPU card barebone, 4U 60 x 3.5-inch
HDD barebone, Mini ITX Intel Avoton
board, and Half Width Intel Denlow
board.
Seeing the growth potential of this
industry segment, ASRock decided to
fund a new subsidiary, ASRock Rack
Inc. ASRock Rack Inc. aims to bring
the market a fast, flexible, efficient
product design and distribution business
model, which should have the ability
to rock the industry. Like Zara or
Uniqlo in the clothing business, we
want to bring a similar fast response
product design and distribution model
to the server industry. We believe it is
where the industry is going, and can
code compilation.
Saying it's a software-defined system makes it sound easy, but it takes a
tightly integrated and flexible hardware
platform to provide the Super Node foundation. Clustered Systems ExaBlade is
such a platform. The base unit of the
ExaBlade is a five-chassis blade rack
inclusive of power distribution and
cooling. With a minimum of 100kW
power, scalable to twice that, quiet two-phase cooling, and PUEs approaching 1,
it simply eliminates power, power densi-

Aberdeen (Contd. from p. 4)
Now the challenge is to push this performance out of the box. There are two
10GBASE-T ports in the system by
default, and six PCIe 3.0 x8 slots available
for that purpose. We performed our tests
with the default iSCSI function on our NAS
JT: Yes. In addition to continuing innovation of our market-leading supercomputer, SGI ICE X, and unparalleled
shared memory system, SGI UV, we are
introducing three new solutions: SGI
InfiniteData Cluster, delivering breakthrough compute and storage density that
scales seamlessly from a small number of
cluster nodes to several thousand; SGI
ObjectStore, delivering innovative
object-based storage; and new intelligent
management of active archives for our
SGI InfiniteStorage Gateway. These new
solutions enable organizations to perform
Big Data analytics with faster and greater
insight, achieve the extreme capacity and
scale needed for Big Data storage, and
manage storage investments more cost
effectively.
SCSD: What distinguishes your products
from the competition?
JT: SGI achieves competitive differentiation through compute and storage solutions built with innovative architectural
advantages utilizing industry-standard
components and tight integration. By
designing for performance, power, density and scalability, optimizing interconnections between layers, and engineering
to reduce overhead and accelerate
deployment, SGI solutions deliver industry-leading speed, scale and efficiency.