ScienceLogic Architecture

ScienceLogic Version 7.5.4

Table of Contents
Overview
Introduction
ScienceLogic Appliance Functions
User Interface
Database
Data Collection
Message Collection
API
Third-Party Functions
System Updates
Backup, Recovery & High Availability Options
Overview
Using a SAN
Configuration Backups
Scheduled Backups to External File Systems
Backup of Disaster Recovery Database
Database Replication for Disaster Recovery
High Availability for Database Servers
Differences Between Disaster Recovery and High Availability for Database Servers
High Availability for Data Collection
Restrictions
All-In-One Architecture
Overview
Disaster Recovery with an All-In-One Appliance
Scheduled Backup Requirement
Unsupported Configurations
Distributed Architecture
Overview
Architecture Outline
Database Capacity
Message Collection
Interface Layer Requirements
Scheduled Backup Requirements
Database Layer Configurations
Overview
Local Storage
Single Database Server Using Local Disk Space
Database with Disaster Recovery
Single Database attached to a SAN
Database attached to a SAN with Disaster Recovery
Database attached to a SAN with Disaster Recovery attached to a SAN
Clustered Databases with High Availability and SAN
Clustered Databases with High Availability using Local Disk
Clustered Databases with High Availability and Disaster Recovery
Clustered Databases with High Availability and Disaster Recovery attached to a SAN
Clustered Databases with High Availability Using Local Disk and Disaster Recovery
Interface Layer & API Configurations
Interface Layer Configurations
Single Administration Portal
Multiple Administration Portals

Load-Balanced Administration Portals


API Layer Configurations
Collector Group Configurations
Overview
"Traditional" and "Phone Home"Collectors
Using a Data Collector for Message Collection
Using Multiple Data Collectors in a Collector Group
How Collector Groups Handle Component Devices
High Availability for Data Collectors
Using Message Collectors in a Collector Group
Using Virtual Machines and Cloud Instances
Overview
Supported Hypervisors
Hardware Requirements
Database Servers
Data Collectors
Message Collectors
Administration Portals and Integration Servers
All-In-One Appliances
Amazon AWS Instances


Chapter 1: Overview

Introduction
This manual describes the architecture of ScienceLogic systems, covering all possible configurations of
ScienceLogic appliances. This manual is intended for System Administrators and ScienceLogic staff who are
responsible for planning the architecture and configuration of ScienceLogic systems.
There are two types of ScienceLogic system:
- All-In-One. In this type of system, a single appliance handles all the functions of the ScienceLogic platform. The monitoring capacity of an All-In-One system cannot be increased by adding additional appliances.
- Distributed. The functions of the ScienceLogic platform are divided between multiple appliances. A Distributed system can be as small as two appliances, or can include multiple instances of each type of appliance.

ScienceLogic Appliance Functions


There are five general functions that a ScienceLogic appliance can perform. In large ScienceLogic systems, a
dedicated appliance performs each function. In smaller systems, some appliances perform several functions. This
section describes each function and the ScienceLogic appliances that can perform each function.

User Interface


Administrators and users access the user interface through a web browser session. In the user interface, you can
view collected data and reports, define organizations and user accounts, define policies, view events, and create
and view tickets, among other tasks. The appliance that provides the user interface function also generates all
scheduled reports. The following appliances can provide the user interface:
- All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.
- Database Server. In small to mid-size ScienceLogic systems, a Database Server can provide the user interface in addition to its database function.
- Administration Portal. In mid-size and large ScienceLogic systems, a dedicated Administration Portal appliance provides the user interface.

Database
The appliance that provides the database function is responsible for:
- Storing all configuration data and data collected from managed devices.
- In a distributed system, pushing data to and retrieving data from the appliances responsible for collecting data.
- Processing and normalizing collected data.
- Allocating tasks to the other appliances in the ScienceLogic system.
- Executing automation actions in response to events.
- Sending all Email generated by the system.
- Receiving all inbound Email for events, ticketing, and round-trip Email monitoring.

The following appliances can perform these database functions:


- All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.
- Database Server. In all distributed ScienceLogic systems, a Database Server provides all database functions. In some small to mid-size systems, a Database Server might also provide the user interface in addition to its main database function.

Data Collection

The ScienceLogic appliances that perform the data collection function retrieve data from monitored devices and
applications in your network at regularly scheduled intervals. In a distributed system, appliances that perform the
data collection function also perform some pre-processing of collected data and execute automation actions in
response to events.
The following appliances can perform the collection function:
- All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.
- Data Collector. In all distributed ScienceLogic systems, one or more Data Collectors provide all data collection functions. In some systems, a Data Collector can also perform the message collection function.

Message Collection
The ScienceLogic appliances that perform the message collection function receive and process inbound,
asynchronous syslog and trap messages from monitored devices.
The following appliances can perform the message collection function:
- All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.
- Message Collector. In most distributed systems, dedicated Message Collector appliances perform message collection. A single Message Collector can handle syslog and trap messages from devices that are monitored by multiple Data Collectors.
- Data Collector. In some distributed systems, a Data Collector can also perform the message collection function. When a Data Collector is used for message collection, the Data Collector cannot be configured for high availability and handles fewer inbound messages than a dedicated Message Collector.

API
The ScienceLogic platform provides a REST-based API that external systems can use to configure the platform and
access collected data. For example, the API can be used to automate the provisioning of devices in the
ScienceLogic platform.
The API is available through the Administration Portal, the All-In-One Appliance, and the Database Server.

NOTE: In older ScienceLogic systems, a dedicated Integration Server appliance provides the API function.
Although current ScienceLogic systems do not offer the dedicated Integration Server appliance,
ScienceLogic will continue to support existing dedicated Integration Server appliances.
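As an illustration of using the API for automation, the following sketch retrieves a list of monitored devices with Python. The resource path, host name, and credentials are assumptions for illustration only; consult the ScienceLogic API documentation for the actual resource URIs and payloads.

import requests

# Placeholder values: replace with your appliance host and API credentials.
base_url = "https://em7.example.com"   # Administration Portal, All-In-One Appliance, or Database Server
auth = ("api_user", "api_password")

# Hypothetical device resource; appliances often use self-signed certificates,
# so adjust certificate verification for your environment.
response = requests.get(f"{base_url}/api/device", auth=auth, verify=False)
response.raise_for_status()
print(response.json())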

Third-Party Functions


ScienceLogic has partnered with LogicVein to provide Net LineDancer for managing the configuration of network devices, including routers and switches.
Net LineDancer provides inventory reports, detailed hardware and software information for managed devices, configuration comparison and history, password changes, automated detection of configuration changes, and integration with network monitoring systems. Net LineDancer is integrated with the ScienceLogic platform; you can view data from Net LineDancer in ScienceLogic dashboards.
Net LineDancer includes two modules: Core Server, installed on a server device, and Smart Bridge, installed on a
collector device. ScienceLogic supports installing Net LineDancer Core Server software on ScienceLogic Database
Servers and installing Net LineDancer Smart Bridge software on ScienceLogic Data Collectors.

Sys tem Updates


The ScienceLogic platform includes an automatic update tool, called System Updates, that automatically installs software updates and patches. The updated software or patch is loaded onto a single ScienceLogic server, and the software is automatically sent out to each ScienceLogic server that needs it.
The System Updates page in the user interface allows you to update the software on a ScienceLogic system. You
must first download the updates to the local computer (computer where you are running the browser). You can
then load the software update to the ScienceLogic system. When the software is loaded onto the ScienceLogic
system, you can install the software or schedule the software to be installed at a later time.
When you apply an update to your ScienceLogic system, the platform automatically uploads any newer versions of
the default Power-Packs. If a Power-Pack included in an update is not currently installed on your system, the
platform will automatically install the Power-Pack. If a Power-Pack included in an update is currently installed on
your system, the platform will automatically import the Power-Pack, but will not install the Power-Pack.
For full details on the System Update tool, see the System Administration manual.

Chapter 2: Backup, Recovery & High Availability Options

Overview
The ScienceLogic platform has multiple options for backup, recovery, and high availability. Different appliance
configurations support different options; your backup, recovery, and high availability requirements will guide the
configuration of your ScienceLogic system. This chapter describes each option and the requirements and
exclusions for each option.
Support for the options described in this chapter differs among:

- All-In-One Appliances
- Distributed systems that do not use a SAN for storage
- Distributed systems that use a SAN for storage

Using a SAN
A Database Server can be connected to a SAN for data storage. If a Database Server uses a SAN for data storage, you can use the snapshot features provided by your SAN system to back up all ScienceLogic data.
All-In-One Appliances cannot use a SAN for data storage.
In distributed systems, you can use a SAN for data storage only if the SAN meets the I/O requirements of the ScienceLogic system. If you have a large system and are unsure whether your SAN will meet these requirements, ScienceLogic recommends using Database Servers that include local solid-state drives (SSDs).
If your ScienceLogic system uses a SAN for data storage, you cannot use the scheduled full backup feature.
For more information on using a SAN with the ScienceLogic platform, including SAN hardware requirements, see
the Setting up a SAN manual.

Configuration Backups


The ScienceLogic platform allows you to configure a daily scheduled backup of all configuration data stored in the
primary database. Configuration data includes scope and policy information, but does not include collected data,
events, or logs.
The ScienceLogic platform can perform daily configuration backups while the database is running and does not
suspend any ScienceLogic services.
The ScienceLogic platform can save local copies of the last seven days of configuration backups and stores the first
configuration backup of the month for the current month and the first configuration backup of the month for the
three previous months. Optionally, you can configure the platform to either copy the daily configuration backup to
an external file system using FTP, SFTP, NFS, or SMB or write the daily configuration backup directly to an external
file system.
All configurations of the ScienceLogic platform support configuration backups.
The configuration backup process automatically verifies that the backup media is large enough: it calculates the sum of the sizes of all the tables to be backed up and then doubles that value; the result is the required disk space for configuration backups. In almost all cases, the required space is less than 1 GB.
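For example, if the configuration tables to be backed up total 400 MB, the process requires approximately 800 MB of free disk space for the backup.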

Scheduled Backups to External File Systems


The ScienceLogic platform allows you to configure a scheduled backup of all data stored in the primary database.
The platform performs scheduled backups while the database is running and does not suspend any services. By
default, the platform creates the backup file on the local disk of the Database Server or All-In-One Appliance,
transfers the backup file to the external system, then removes the backup file from the Database Server or All-In-One Appliance.

Optionally, if you are using an NFS or SMB mount to store the backup, the backup file can be created directly on the
remote system.
An All-In-One Appliance must meet the following requirement to use the scheduled full backup feature:

- The maximum amount of space used for data storage on the All-In-One Appliance is less than half of the total disk space on the All-In-One Appliance.

A Distributed ScienceLogic system must meet the following requirements to use the scheduled full backup feature:

- The maximum amount of space used for data storage on the Database Servers is less than half of the total disk space on the Database Servers.
- The Database Servers in the system do not use a SAN for data storage.
- For a system deployed with local disk storage, the database must be smaller than 250 GB in size.

NOTE: For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied, but instead is written directly to offline storage) from the Disaster Recovery database.

Backup of Disaster Recovery Database


For ScienceLogic systems configured for disaster recovery, you can back up the secondary Disaster Recovery database instead of backing up the primary Database Server. This backup option temporarily stops replication between the databases, performs a full backup of the secondary database, and then re-enables replication and performs a partial re-sync from the primary.
ScienceLogic recommends that you back up to an external file system when performing a DR backup.

- A DR backup includes all configuration data, performance data, and log data.
- During a DR backup, the primary Database Server remains online.
- DR backup is disabled by default. You can configure the ScienceLogic platform to automatically launch this backup at a frequency and time you specify.
- The backup is stored on an NFS mount or SMB mount.

Database Replication for Disaster Recovery


The ScienceLogic platform can be configured to replicate data stored on a Database Server or All-In-One
Appliance to a disaster recovery appliance with the same hardware specifications. The disaster recovery appliance
can be installed at the same site as the primary Database Server or All-In-One Appliance or can be installed at a
different location.

If the primary Database Server or All-In-One Appliance fails for any reason, failover to the disaster recovery
appliance is not automated by the ScienceLogic platform and must be performed manually.

NOTE: If the two Database Servers are not at the same site, you must also use the DRBD Proxy module to control network traffic and syncing.

For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.

High Availability for Database Servers


Database Servers can be clustered in the same location to allow for automatic failover.
A cluster includes an active Database Server and a passive Database Server. The passive Database Server
provides redundancy and is dormant unless a failure occurs on the active Database Server. The ScienceLogic
platform ensures that the data on each Database Server is synched and that each Database Server is ready for
failover if necessary. If the active Database Server fails, the passive Database Server automatically becomes active
and performs all required database tasks. The previously passive Database Server remains active until another
failure occurs.
Each database cluster uses a virtual IP address. No reconfiguration of Administration Portals is required in the
event of failover.
The cluster can use either DRBD replication (a software solution) or a SAN to synchronize data between the two Database Servers.
The following requirements must be met to cluster two Database Servers:
- The Database Servers must have the same hardware configuration.
- Two network paths must be configured between the two Database Servers. One of the network paths must be a direct connection between the Database Servers using a crossover cable.

All-In-One Appliances cannot be configured in a cluster for high availability.


For more information on database clustering for high availability, including a description of the scenarios under
which failover will occur, see the Database Clustering manual.

Differences Between Disaster Recovery and High Availability for Database Servers
The ScienceLogic platform provides two solutions that allow for failover to another Database Server if the primary
Database Server fails: Disaster Recovery and High Availability. There are several differences between these two
distinct features:
- The primary and secondary databases in a High Availability configuration must be located together to configure the heartbeat network. In a Disaster Recovery configuration, the primary and secondary databases can be in different locations.
- In a High Availability configuration, the ScienceLogic platform performs failover automatically, although a manual failover option is available. In a Disaster Recovery configuration, failover must be performed manually.
- A High Availability configuration is not supported for All-In-One Appliances. A Disaster Recovery configuration is supported for All-In-One Appliances.
- A High Availability configuration maintains ScienceLogic system operations if a failure occurs on the hardware or software on the primary Database Server. A Disaster Recovery configuration maintains ScienceLogic system operations if the data center where the primary Database Server is located has a major outage, provides a spare Database Server that can be quickly installed if the primary Database Server has a permanent hardware failure, and/or allows for rotation of ScienceLogic system operations between two data centers.

NOTE: A Distributed ScienceLogic system can be configured for both High Availability and Disaster Recovery.

High Availability for Data Collection


In a Distributed ScienceLogic system, the Data Collectors and Message Collectors are grouped into Collector
Groups. A Distributed ScienceLogic system must have one or more Collector Groups configured. The Data
Collectors included in a Collector Group must have the same hardware configuration.
In the ScienceLogic platform, each monitored device is aligned with a Collector Group and the platform
automatically determines which Data Collector in that collector group is responsible for collecting data from the
monitored device. The ScienceLogic platform evenly distributes the devices monitored by a collector group across
the Data Collectors in that collector group. Each monitored device can send syslog and trap messages to any of the
Message Collectors in the collector group aligned with the monitored device.
To use a Data Collector for message collection, the Data Collector must be in a collector group that contains no
other Data Collectors or Message Collectors.
If you require always-available data collection, you can configure a Collector Group to include redundancy. When
a Collector Group is configured for high availability (that is, to include redundancy), if one of the Data Collectors in
the collector group fails, the ScienceLogic platform will automatically redistribute the devices from the failed Data
Collector among the other Data Collectors in the Collector Group. Optionally, the ScienceLogic platform can
automatically redistribute the devices again when the failed Data Collector is restored.
Each collector group that is configured for high availability includes a setting for Maximum Allowed Collector Outage. This setting specifies the number of Data Collectors that can fail while data collection still continues as normal. If more Data Collectors than the specified maximum fail simultaneously, some or all monitored devices will not be monitored until the failed Data Collectors are restored.
High availability is configured per-Collector Group, so a ScienceLogic system can have a mix of high availability
and non-high availability collector groups, including non-high availability collector groups that contain a Data
Collector that is also being used for message collection.

Restrictions
High availability for data collection cannot be used:
- In All-In-One Appliance systems.
- For Collector Groups that include a Data Collector that is being used for message collection.

For more information on the possible configurations for a Collector Group, see the Collector Group Configurations chapter.

Chapter 3: All-In-One Architecture

Overview
In a ScienceLogic system that uses an All-In-One Appliance, a single appliance provides the user interface and database functions, performs data and message collection, and provides API access.


Disaster Recovery with an All-In-One Appliance


You can configure an All-In-One Appliance to perform data replication to a secondary All-In-One Appliance for disaster recovery. The secondary All-In-One Appliance must have the same hardware configuration as the primary All-In-One Appliance.

Scheduled Backup Requirement


To use the scheduled full backup feature with an All-In-One Appliance, the maximum amount of space used for
data storage on the All-In-One Appliance must be less than half the total disk space on the All-In-One Appliance.
To use the scheduled full backup feature, the All-In-One Appliance must have a database less than 250 GB in size.

Unsupported Configurations


The following features are not supported by All-In-One Appliances:

- Using a SAN for storage.
- High Availability for Database Servers.
- High Availability for Data Collectors.
- Additional Data Collectors, Message Collectors, or Administration Portals.

Chapter 4: Distributed Architecture

Overview

In a Distributed ScienceLogic system, the functions of the ScienceLogic platform are divided between multiple
appliances. The smallest Distributed ScienceLogic system has two appliances:
- A Database Server that provides the user interface and performs all database functions.
- A Data Collector that performs data collection and message collection.

Large ScienceLogic systems can include multiple instances of each type of appliance. For a description of the
appliance functions, see the Overview chapter.


Architecture Outline


The general architecture of a distributed system includes two required layers, the database layer and the collection layer, and an optional layer, the interface layer.

The Database Layer Configurations, Interface Layer & API Configurations, and Collector Group
Configurations chapters describe all the possible configurations for each layer.

Database Capacity
In a ScienceLogic system, Database Servers and All-In-One Appliances are labeled with a Capacity. This capacity
represents the total monitoring capacity of the ScienceLogic system.
For each discovered device, the ScienceLogic platform calculates a device rating based on the license that was
used to install the ScienceLogic system and the amount of collection the platform is performing for the device. For
example, a physical device that is polled frequently and has many aligned Dynamic Applications and monitoring
policies could have a higher device rating than a component device that is polled infrequently and is aligned with
only one or two Dynamic Applications.
The sum of all the rating values for all devices discovered in the system cannot exceed the Capacity for the Database Server or All-In-One Appliance. The Capacity Rating is defined by the license key issued to you by ScienceLogic. All Database Servers and All-In-One Appliances in a system must have the same capacity rating.
For details on sizing and scaling your ScienceLogic system to fit your workload, contact ScienceLogic Support or
your Account Manager.
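As an illustration of how capacity consumption adds up, the following sketch sums hypothetical device ratings and compares the total against a hypothetical licensed Capacity. The device names, rating values, and Capacity figure are assumptions for illustration only, not actual ScienceLogic ratings.

# Illustrative sketch only: the device ratings and licensed Capacity below are
# assumed values, not real ScienceLogic figures.
device_ratings = {
    "core-router-01": 8,    # polled frequently, many aligned Dynamic Applications
    "branch-switch-17": 3,  # polled less often, fewer monitoring policies
    "vm-component-042": 1,  # component device, minimal collection
}
licensed_capacity = 500     # Capacity Rating defined by the license key

used = sum(device_ratings.values())
print(f"Capacity used: {used} of {licensed_capacity}")
if used > licensed_capacity:
    print("The system has exceeded its licensed Capacity.")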


Message Collection


When a Data Collector is used for message collection, the Data Collector can process approximately 20 syslog or
trap messages per second.
When a Message Collector is used for message collection, the Message Collector can process approximately 100
to 300 syslog or trap messages per second. The number of syslog and trap messages that can be processed is
dependent on the presence and configuration of syslog and trap event policies.
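For example, if the devices aligned with a Collector Group generate roughly 150 syslog and trap messages per second, a Data Collector performing message collection (approximately 20 messages per second) could not keep up, whereas a dedicated Message Collector (approximately 100 to 300 messages per second, depending on event policies) could.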

Interface Layer Requirements


The interface layer includes one or more Administration Portals. The Administration Portal provides access to the
user interface and also generates all scheduled reports. However, in some Distributed ScienceLogic systems, the
interface layer is optional and the Database Server can provide all functions of the Administration Portal.
If your Distributed ScienceLogic system meets all of the following requirements, the interface layer is optional and
your Database Server can provide all functions of the Administration Portal. If your system does not meet all of the
following requirements, the interface layer is required and you must include at least one Administration Portal in
your system:
- The ScienceLogic system will have a low number of concurrent connections to the web interface.
- The ScienceLogic system will have a low number of simultaneously running reports.

Precise requirements for concurrent connections and simultaneously running reports vary with usage patterns and
report size. Typically, a dedicated Administration Portal is recommended for a ScienceLogic system with more than
fifty concurrent connections to the web interface or more than 10 scheduled reports per hour.

Scheduled Backup Requirements


The ScienceLogic platform allows you to configure a scheduled backup of all data stored in the primary database.
The platform performs scheduled backups while the database is running and does not suspend any services. By
default, the platform creates the backup file on the local disk of the Database Server or All-In-One Appliance,
transfers the backup file to the external system, then removes the backup file from the Database Server or All-In-One Appliance.
Optionally, if you are using an NFS or SMB mount to store the backup, the backup file can be created directly on the
remote system.
An All-In-One Appliance must meet the following requirement to use the scheduled full backup feature:

- The maximum amount of space used for data storage on the All-In-One Appliance is less than half of the total disk space on the All-In-One Appliance.


A Distributed ScienceLogic system must meet the following requirements to use the scheduled full backup feature:

- The maximum amount of space used for data storage on the Database Servers is less than half of the total disk space on the Database Servers.
- The Database Servers in the system do not use a SAN for data storage.
- For a system deployed with local disk storage, the database must be smaller than 250 GB in size.

NOTE: For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied, but instead is written directly to offline storage) from the Disaster Recovery database.

Chapter 5: Database Layer Configurations

Overview
This chapter contains diagrams for all possible configurations of the database layer in a distributed ScienceLogic
system. The possible configurations are:
- A single Database Server using local disk space for storage.
- A single Database Server using a SAN for storage.
- One primary Database Server and one secondary Database Server for disaster recovery, with the primary Database Server using a SAN for storage.
- One primary Database Server and one secondary Database Server for disaster recovery, with the primary Database Server and the secondary Database Server both using a SAN for storage.
- Two Database Servers in a high availability cluster using a SAN for storage.
- Two Database Servers in a high availability cluster using local disk space for storage.
- One primary Database Server and one secondary Database Server for disaster recovery, both using local disk space for storage.
- Two Database Servers in a high availability cluster using a SAN for storage, with an additional Database Server for disaster recovery.
- Two Database Servers in a high availability cluster using a SAN for storage, with an additional Database Server for disaster recovery that uses a SAN for storage.
- Two Database Servers in a high availability cluster using local disk for storage, with an additional Database Server for disaster recovery.


Local Storage
ScienceLogic Database Servers include one of two types of local storage: HDD or SSD.
- For ScienceLogic systems that monitor 5,000 or more devices, Database Servers with SSDs will provide significantly improved performance.
- For ScienceLogic systems that monitor 10,000 or more devices, Database Servers with SSDs are the required design standard.

Single Database Server Using Local Disk Space


The following restrictions apply to this configuration:


- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.
- To use the scheduled full backup feature with this configuration, the maximum amount of space used for data storage on the Database Server must be less than half the total disk space on the Database Server.

Database with Disaster Recovery

The following restrictions apply to this configuration:


- To use the scheduled full backup feature with this configuration, the maximum amount of space used for data storage on the Database Server must be less than half the total disk space on the Database Server.
- To use the scheduled online full backup feature, a Distributed ScienceLogic system deployed on a database with disk storage must have a database less than 250 GB in size.
- For large ScienceLogic systems, you must use the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Single Database attached to a SAN

The following restrictions apply to this configuration:


- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.
- The scheduled full backup feature cannot be used with this configuration. The SAN Configuration and Snapshots manual describes how to use SAN snapshots to back up the database in this configuration.

Database Layer Configurations

21

Database attached to a SAN with Disaster Recovery

The following restrictions apply to this configuration:


- The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Database attached to a SAN with Disaster Recovery attached to a SAN

The following restrictions apply to this configuration:


- The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Clustered Databases with High Availability and SAN

The following restrictions apply to this configuration:


- The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.
- The scheduled full backup feature cannot be used with this configuration.
- The backup from DR feature cannot be used with this configuration.

For details on configuring High Availability for Database Servers, see the manual Database Clustering.


Clustered Databases with High Availability using Local Disk

The following restrictions apply to this configuration:


- The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.
- The two clustered Database Servers must use DRBD replication to ensure data is synched between the two servers.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.
- To use the scheduled online full backup feature, a Distributed ScienceLogic system deployed on a database with disk storage must have a database less than 250 GB in size.
- The backup from DR feature cannot be used with this configuration.


For details on configuring High Availability for Database Servers, see the manual Database Clustering.

Clustered Databases with High Availability and Disaster Recovery

The following restrictions apply to this configuration:


- The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.
- The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring High Availability for Database Servers, see the manual Database Clustering.
For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Clustered Databases with High Availability and Disaster Recovery attached to a SAN

The following restrictions apply to this configuration:


- The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.
- The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring High Availability for Database Servers, see the manual Database Clustering.
For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Clustered Databases with High Availability Using Local Disk and Disaster Recovery

The following restrictions apply to this configuration:


- The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.
- The two clustered Database Servers must use DRBD replication to ensure data is synched between the two servers.
- The scheduled full backup feature cannot be used with this configuration.
- For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.
- The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

For details on configuring High Availability for Database Servers, see the manual Database Clustering.
For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


Chapter 6: Interface Layer & API Configurations

Interface Layer Configurations


In a Distributed ScienceLogic system, the interface layer is optional if the system meets all the requirements listed in
the Distributed Architecture chapter. If the interface layer is required, it must include at least one Administration
Portal.
For all Distributed ScienceLogic systems, browser session information is stored on the main database, not on the
Administration Portal currently in use by the administrator or user.


Single Administration Portal


In this configuration, the interface layer includes a single Administration Portal. An administrator or user can log in
to the ScienceLogic platform using the Administration Portal.


Multiple Administration Portals


The interface layer can include multiple Administration Portals. An administrator or user can log in to the
ScienceLogic platform using any of the Administration Portals.

Interface Layer & API Configurations

31

Load-Balanced Administration Portals


A third-party load-balancing solution can be used to distribute traffic evenly among the Administration Portals.

NOTE: ScienceLogic does not recommend a specific product for this purpose and does not provide technical
support for configuring or maintaining a third-party load-balancing solution.


API Layer Configurations


The ScienceLogic platform provides an optional REST-based API that external systems can use to configure the
platform and access collected data. For example, the API can be used to automate the provisioning of devices in
the ScienceLogic platform.
The API is available through the Administration Portal, the All-In-One Appliance, and the Database Server.

NOTE: In older ScienceLogic systems, a dedicated Integration Server appliance provides the API function.
Although current ScienceLogic systems do not offer the dedicated Integration Server appliance,
ScienceLogic will continue to support existing dedicated Integration Server appliances.


Chapter 7: Collector Group Configurations

Overview
For Distributed ScienceLogic systems, a collector group is a group of Data Collectors. Data Collectors retrieve data
from managed devices and applications. This collection occurs during initial discovery, during nightly updates, and
in response to policies and Dynamic Applications defined for each managed device. The collected data is used to
trigger events, display data in the ScienceLogic platform, and generate graphs and reports.
Grouping multiple Data Collectors allows you to:
- Create a load-balanced collection system, where you can manage more devices without loss of performance. At any given time, the Data Collector with the lightest load handles the next discovered device.
- Create a redundant, high-availability system that minimizes downtime should a failure occur. If a Data Collector fails, another Data Collector is available to handle collection until the problem is solved.


In a Distributed ScienceLogic system, the Data Collectors and Message Collectors are organized as Collector
Groups. Each monitored device is aligned with a Collector Group.

A Distributed ScienceLogic system must have one or more Collector Groups configured. The Data Collectors
included in a Collector Group must have the same hardware configuration.
A Distributed ScienceLogic system could include Collector Groups configured using each of the possible
configurations. For example, suppose an enterprise has a main data center that contains the majority of devices
monitored by the ScienceLogic system. Suppose the enterprise also has a second data center where only a few
devices are monitored by the ScienceLogic system. The ScienceLogic system might have two collector groups:
- In the main data center, a Collector Group configured with high availability that contains multiple Data Collectors and Message Collectors.
- In the second data center, a Collector Group that contains a single Data Collector that is also responsible for message collection.

"Tradition al" an d "Ph on e Home"Collectors


The ScienceLogic platform supports two methods for communication between Database Servers and the Data
Collectors and Message Collectors in a system:
- The traditional method, where the Database Server initiates communication with each Data Collector and Message Collector. The Database Server periodically pushes configuration data to the Data Collectors and Message Collectors and retrieves data from the Data Collectors and Message Collectors.
  - The benefit of this method is that communication to the Database Server is extremely limited, so the Database Server remains as secure as possible.
- The Phone Home method, where the Data Collectors and Message Collectors initiate communication with the Database Server. The Database Server then creates an SSH tunnel. The Database Server uses the SSH tunnel to periodically push configuration data to the Data Collectors and Message Collectors and retrieve data from the Data Collectors and Message Collectors.
  - The benefits of this method are that no firewall rules must be added on the network that contains the Data Collectors and no new TCP ports are opened on the network that contains the Data Collectors.

The Phone Home configuration uses public key/private key authentication to maintain the security of the Database Server. Each Data Collector is aligned with an SSH account on the Database Server and uses SSH to communicate with the Database Server. Each SSH account on the Database Server is highly restricted, has no login access, and cannot access a shell or execute commands on the Database Server.
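As a generic illustration of the phone-home pattern (an appliance behind a firewall opens an outbound SSH connection, and a reverse tunnel then allows it to be reached over that connection), the following sketch wraps an ssh reverse tunnel in Python. The host names, port numbers, account, and key path are hypothetical; this is not ScienceLogic's actual implementation or configuration.

import subprocess

# Hypothetical values for illustration only.
collector_service_port = 7700        # local port on the collector that the database polls
reverse_port_on_database = 22022     # port exposed on the database side of the tunnel

# Open an outbound SSH connection from the collector and forward a port on the
# database host back to the collector (ssh -R). The account uses key-based
# authentication and has no interactive shell.
subprocess.run([
    "ssh", "-N",
    "-i", "/home/phonehome/.ssh/id_rsa",
    "-R", f"{reverse_port_on_database}:localhost:{collector_service_port}",
    "phonehome@database.example.com",
])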

Using a Data Collector for Message Collection


To use a Data Collector for message collection, the Data Collector must be in a collector group that contains no
other Data Collectors or Message Collectors.


NOTE: When a Data Collector is used for message collection, the Data Collector can handle fewer inbound
messages than a dedicated Message Collector.

Using Multiple Data Collectors in a Collector Group


A Collector Group can include multiple Data Collectors to maximize the number of managed devices. In this
configuration, the Collector Group is not configured for high availability.

In this configuration:
- All Data Collectors in the Collector Group must have the same hardware configuration.
- If you need to collect syslog and trap messages from the devices aligned with the Collector Group, you must include a Message Collector in the Collector Group. For a description of how a Message Collector can be added to a Collector Group, see the Using Message Collectors in a Collector Group section of this chapter.
- The ScienceLogic platform evenly distributes the devices monitored by a collector group among the Data Collectors in the collector group. Devices are distributed based on the amount of time it takes to collect data for the Dynamic Applications aligned to each device.
- Component devices are distributed differently than physical devices; component devices are always aligned to the same Data Collector as their root device.

NOTE: If you merge a component device with a physical device, the ScienceLogic system allows data for the
merged component device and data from the physical device to be collected on different Data
Collectors. Data that was aligned with the component device is always collected on the Data
Collector for its root device. If necessary, data aligned with the physical device can be collected on a
different Data Collector.

How Collector Groups Handle Component Devices


Collector Groups handle component devices differently than physical devices.
For physical devices (as opposed to component devices), after the ScienceLogic system creates the device ID, the
ScienceLogic system distributes devices, round-robin, among the Data Collectors in the specified Collector
Group.
Each component device must use the same Data Collector as its root device (the physical device that manages the component devices); the ScienceLogic platform cannot distribute component devices among the other Data Collectors in the specified Collector Group.

NOTE: If you merge a component device with a physical device, the ScienceLogic system allows data for the
merged component device and data from the physical device to be collected on different Data
Collectors. Data that was aligned with the component device is always collected on the Data
Collector for its root device. If necessary, data aligned with the physical device can be collected on a
different Data Collector.


High Availability for Data Collectors


To configure a Collector Group for high availability, the Collector Group must include multiple Data Collectors.

In this configuration:
- All Data Collectors in the Collector Group must have the same hardware configuration.
- If you need to collect syslog and trap messages from the devices monitored by a high availability Collector Group, you must include a Message Collector in the Collector Group. For a description of how a Message Collector can be added to a Collector Group, see the Using Message Collectors in a Collector Group section of this chapter.
- Each collector group that is configured for high availability includes a setting for Maximum Allowed Collector Outage. This setting specifies the number of Data Collectors that can fail while data collection still continues. If more Data Collectors than the specified maximum fail simultaneously, some or all monitored devices will not be monitored until the failed Data Collectors are restored.

WARNING: If a collector group is configured for high availability and the number of failed Data Collectors in
that collector group becomes greater than the Maximum Allowed Collector Outage setting, the
platform will not failover within the Collector Group. The platform will not collect or store any
data from the devices aligned with the failed Data Collector(s) until the failure is fixed, and the
platform will generate a critical event. This is true regardless of whether the Data Collectors are
able to collect data.


In this example, the Collector Group includes four Data Collectors. The Collector Group is configured to allow for
an outage of two Data Collectors.
When all Data Collectors are available, the ScienceLogic system evenly distributes the devices monitored by a
collector group among the Data Collectors in the Collector Group. In this example, there are 200 devices
monitored by the Collector Group, with each of the four Data Collectors responsible for collecting data from 50
devices. For simplicity, this example assumes that the platform spends the same amount of time collecting
Dynamic Application data from every device; therefore, the devices are divided evenly across the four collectors.
If one of the Data Collectors in the example Collector Group fails, the 50 devices that the Data Collector was
monitoring are redistributed evenly among the other three Data Collectors.

If a second Data Collector in the example Collector Group fails, the 50 devices that the Data Collector was monitoring are redistributed evenly between the other two Data Collectors.

If a third Data Collector in the example Collector Group fails, the Collector Group has exceeded its maximum allowable outage. Until one of the three failed Data Collectors becomes available, 100 devices are not monitored.
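The following sketch reproduces the arithmetic of this example. It is a simplified illustration, not ScienceLogic's actual load-balancing algorithm, and it assumes every device requires the same amount of collection time.

# Simplified illustration of the example above: 200 devices, four Data Collectors,
# and a Maximum Allowed Collector Outage of 2.
TOTAL_DEVICES = 200
TOTAL_COLLECTORS = 4
MAX_ALLOWED_OUTAGE = 2

for failed in range(TOTAL_COLLECTORS):
    remaining = TOTAL_COLLECTORS - failed
    if failed <= MAX_ALLOWED_OUTAGE:
        per_collector = TOTAL_DEVICES / remaining
        print(f"{failed} failed: {remaining} Data Collectors monitor "
              f"about {per_collector:.0f} devices each")
    else:
        print(f"{failed} failed: Maximum Allowed Collector Outage exceeded; "
              "some devices are no longer monitored")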


Using Message Collectors in a Collector Group


If you need to collect syslog and trap messages from the devices monitored by a Collector Group that includes
multiple Data Collectors, you must include a Message Collector in the Collector Group.


If your monitored devices generate a large amount of syslog and trap messages, a Collector Group can include
multiple Message Collectors.

In this configuration, a monitored device can send syslog and trap messages to either Message Collector.

NOTE: Each syslog and trap message should be sent to only one Message Collector.

A third-party load-balancing solution can be used to distribute syslog and trap messages evenly among the
Message Collectors in a Collector Group.


NOTE: ScienceLogic does not recommend a specific product for this purpose and does not provide technical
support for configuring or maintaining a third-party load-balancing solution.

One or more Message Collectors can be included in multiple Collector Groups.

In this configuration, each managed device in Collector Group A and Collector Group B must use a unique IP
address when sending syslog and trap messages. The IP address used to send syslog and trap messages is called
the primary IP. For example, if a device monitored by Collector Group A and a device monitored by Collector
Group B use the same primary IP address for data collection, one of the two devices must be configured to use a
different IP address when sending syslog and trap messages.
A Collector Group can have multiple Message Collectors that are also included in other Collector Groups. It is
possible to include every Message Collector in your ScienceLogic system in every Collector Group in your
ScienceLogic system.


Chapter 8: Using Virtual Machines and Cloud Instances

Overview
Each appliance in your ScienceLogic configuration can be run as a virtual machine. This chapter provides software
and hardware requirements for ScienceLogic appliances running on VMs.

Supported Hypervisors


ScienceLogic supports deploying appliances as virtual machines on the following types of hypervisor systems:
- VMware vSphere Hypervisor (ESXi) 4.1
- VMware vSphere Hypervisor (ESXi) 5.0
- VMware vSphere Hypervisor (ESXi) 5.1
- VMware vSphere Hypervisor (ESXi) 5.5
- Citrix XenServer 5.6
- Citrix XenServer 6.1
- Citrix XenServer 6.2
- Microsoft Windows Server 2008 R2 SP1 Hyper-V
- Microsoft Windows Server 2012 Hyper-V
- Microsoft Windows Server 2012 R2 Hyper-V
- RedHat/CentOS 6.2 KVM
- Ubuntu 12 and later KVM

NOTE: A virtualized Database Server must not exceed 2,500 monitored devices.


NOTE: Microsoft Hyper-V Linux Integration software cannot be installed on any ScienceLogic appliances.

NOTE: ScienceLogic licensing relies on appliance MAC addresses and UUIDs remaining intact. VM
migrations which change these identifiers (such as storage vMotion with default .vmx settings, or
CloudStack orchestration move between XenServer hosts) will invalidate the license, thus limiting or
disabling the appliance operation until a new license can be applied.

NOTE: ScienceLogic databases have a very high bandwidth of memory changes under normal operations,
often in excess of 10Gb/sec. This rate of memory change limits the feasibility of VM live migration
methods (such as vMotion) for ScienceLogic appliances because on moderately large databases, the
rate of memory change is too high to be synchronized between hosts over a 10Gb/sec ethernet link.

Please contact ScienceLogic support for current recommended specifications and limitations for appliances
installed as virtual machines.
The following requirements apply to all virtualized appliances:
- Fixed storage is required. Dynamically-expanding storage is not supported.
- Memory over-commit is not supported. In the case of VMware, this means that 100% of memory must be reserved for all ScienceLogic appliances.
- Running on a virtualization server that is close to capacity might result in unexpected behavior.

Hardware Requirements
The following sections list the minimum and recommended specifications for virtual machines.

Database Servers

Capacity | RAM (GB) | CPU Cores | Disk (GB)
100 devices (Minimum Specification with separate Administration Portal) | | | 80
100 devices (Minimum Specification without separate Administration Portal) | | | 80
500 devices (Minimum Specification) | 16 | | 200
1,000 devices (Minimum Specification) | 24 | | 300
1,000 devices (Recommended Specification) | 48 | | 600
Additional Resources per 1,000 devices | 16 | | 300


Data Collectors

Capacity | RAM (GB) | CPU Cores | Disk (GB)
100 devices (Minimum Specification) | | | 60
500 devices (Minimum Specification) | 12 | | 90
1,000 devices (Minimum Specification) | 16 | | 120
1,000 devices (Recommended Specification) | 24 | | 150

NOTE: Data Collectors that monitor a large number of video devices can support half the number of devices
listed.

Message Collectors

Capacity | RAM (GB) | CPU Cores | Disk (GB)
100 devices (Minimum Specification) | | | 60
500+ devices (Minimum Specification) | | | 90

NOTE: Message Collectors cannot take advantage of additional resources in a single system, either on a
virtual device or a physical host, because the local database on each Message Collector is limited in
size and will not benefit from more memory. In addition, message processing is single-threaded and
will not benefit from more cores. For additional capacity, add additional Message Collectors behind a
load balancer.

Administration Portals and Integration Servers

Capacity | RAM (GB) | CPU Cores | Disk (GB)
100 devices (Minimum Specification) | | | 60
500+ devices (Minimum Specification) | | | 60

All-In-One Appliances

Capacity | RAM (GB) | CPU Cores | Disk (GB)
100 devices (Minimum Specification) | | | 80
500 devices (Minimum Specification) | 24 | | 180
1,000 devices (Minimum Specification) | 24 | | 300
1,000 devices (Recommended Specification) | 48 | | 600

Amazon AWS Instances


ScienceLogic supports deploying the following appliances as AWS instances:

ScienceLogic appliance | Type
All-In-One Appliance | General purpose: m3.2xlarge
Administration Portal | General purpose: m3.2xlarge
Database Server | i2.2xlarge
Data Collector | General purpose: m3.2xlarge
Message Collector | General purpose: m3.2xlarge

An instance is a virtual server that resides in the AWS cloud. An Amazon Machine Image (AMI) is the collection of files and information that AWS uses to create an instance. A single AMI can launch multiple instances.
For details on AMIs, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html.
The ScienceLogic AMI is defined by ScienceLogic. You can use the ScienceLogic AMI to create Elastic Compute Cloud (EC2) instances.

NOTE: Elastic Compute Cloud (EC2) instances are virtual servers that come in a variety of configurations and can be easily changed as your computing needs change. For more information on EC2, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html.

The ScienceLogic AMI is private and is for ScienceLogic customers only. After you collect specific information about
your AWS account, you can send a request (and the collected information) to ScienceLogic, and ScienceLogic will
share the ScienceLogic AMI with you.
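Once the AMI has been shared with your account, you can launch instances from it using the AWS Console, the AWS CLI, or an SDK. The following sketch uses boto3; the AMI ID, region, key pair, and security group are placeholders, and the actual values come from the AMI that ScienceLogic shares with you and from your own AWS environment.

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # placeholder region

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: the shared ScienceLogic AMI
    InstanceType="m3.2xlarge",                   # general purpose type listed above
    KeyName="my-keypair",                        # placeholder key pair name
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)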


© 2003 - 2015, ScienceLogic, Inc.


All rights reserved.
LIMITATION OF LIABILITY AND GENERAL DISCLAIMER
ALL INFORMATION AVAILABLE IN THIS GUIDE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED. SCIENCELOGIC AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES,
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT.
Although ScienceLogic has attempted to provide accurate information on this Site, information on this Site
may contain inadvertent technical inaccuracies or typographical errors, and ScienceLogic assumes no
responsibility for the accuracy of the information. Information may be changed or updated without notice.
ScienceLogic may also make improvements and / or changes in the products or services described in this
Site at any time without notice.

Copyrights and Trademarks


ScienceLogic, the ScienceLogic logo, and EM7 are trademarks of ScienceLogic, Inc. in the United States,
other countries, or both.
Below is a list of trademarks and service marks that should be credited to ScienceLogic, Inc. The ® and ™ symbols reflect the trademark registration status in the U.S. Patent and Trademark Office and may not be appropriate for materials to be distributed outside the United States.
- ScienceLogic
- EM7 and em7
- Simplify IT
- Dynamic Application
- Relational Infrastructure Management

The absence of a product or service name, slogan or logo from this list does not constitute a waiver of
ScienceLogic's trademark or other intellectual property rights concerning that name, slogan, or logo.
Please note that laws concerning use of trademarks or product names vary by country. Always consult a
local attorney for additional guidance.

Other
If any provision of this agreement shall be unlawful, void, or for any reason unenforceable, then that
provision shall be deemed severable from this agreement and shall not affect the validity and enforceability
of any remaining provisions. This is the entire agreement between the parties relating to the matters
contained herein.
In the U.S. and other jurisdictions, trademark owners have a duty to police the use of their marks. Therefore, if you become aware of any improper use of ScienceLogic Trademarks, including infringement or counterfeiting by third parties, report them to ScienceLogic's legal department immediately. Report as much detail as possible about the misuse, including the name of the party, contact information, and copies or photographs of the potential misuse to: legal@sciencelogic.com.

800-SCI-LOGIC (1-800-724-5644)
International: +1-703-354-1010
