
SOLARIS 10

SWAP ADMINISTRATION

Swap Management

 Virtual memory combines RAM and dedicated disk storage areas known as swap space.
 Virtual memory makes it possible for the operating environment (OE) to address more memory than is physically installed.
 When working with swap space, remember that RAM is the most critical resource in your system.


 Paging is the transfer of selected memory pages between RAM and the swap areas. When you page private data to swap space, physical RAM is made available for other processes to use.

 Swapping is the movement of all memory pages associated with a process between RAM and disk.

 The swap utility provides a method of adding, deleting, and monitoring the swap areas used
by the kernel. Swap area changes made from the command line are not permanent and are lost
after a reboot.
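
For reference, a minimal sketch of common swap administration commands; the swap-file path below is a hypothetical example:

    # swap -l                         (list the swap areas currently in use)
    # swap -s                         (print a summary of allocated, reserved, and available swap)
    # mkfile 512m /export/swapfile    (create a 512 MB swap file at a hypothetical path)
    # swap -a /export/swapfile        (add it as a swap area; lost at the next reboot)
    # swap -d /export/swapfile        (remove the swap area)

To make a swap area persistent across reboots, add an entry for it to /etc/vfstab, for example:

    /export/swapfile  -  -  swap  -  no  -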

END OF CONCEPT

SWAP ADMINISTRATION

SOLARIS 10
NETWORK FILE SYSTEM (NFS)

NETWORK FILE SYSTEM (NFS)

[Concept map: NFS overview. Purpose: accessing centralized data. Server daemons: mountd, nfsd, statd, lockd, nfs4cbd; client daemons: statd, lockd. Server files: /etc/dfs/dfstab, /etc/dfs/sharetab, /etc/rmtab; client files: /etc/vfstab, /etc/mnttab, /etc/dfs/fstypes. Advantages: reduces storage costs, provides data consistency, provides reliability.]
NETWORK FILE SYSTEM (NFS)

Introduction

 The NFS service enables computers of different architectures running different operating
systems to share file systems across a network.

 Instead of placing copies of commonly used files on every system, the NFS service
enables you to place one copy of the files on one computer’s hard disk. All other systems
can then access the files across the network. When using the NFS service, remote file
systems are almost indistinguishable from local file systems.

NFS has two components: the NFS server and the NFS client.

 The NFS server contains file resources shared with other systems on the network.

 A computer acts as a server when it makes files and directories on its hard disk available
to the other computers on the network.

 The NFS client system mounts file resources shared over the network and presents the
file resources to users as if they were local files.
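
As an illustration, a minimal sketch of sharing a directory from an NFS server and mounting it on a client; the directory path and host name are hypothetical:

    On the server:
    # share -F nfs -o ro -d "project docs" /export/docs   (share the directory read-only)
    # shareall                                             (share all resources listed in /etc/dfs/dfstab)
    # dfshares                                             (list the resources this server is sharing)

    On the client:
    # mount -F nfs server1:/export/docs /mnt               (mount the shared resource)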

NETWORK FILE SYSTEM (NFS)

NFS Server Files :-

File Description
/etc/dfs/dfstab Lists the local resources (file systems) to be shared at boot time.
/etc/dfs/sharetab Lists the local resources currently being shared.
/etc/rmtab Lists file systems remotely mounted by NFS clients.

NETWORK FILE SYSTEM (NFS)

NFS Client Files


File Description
/etc/vfstab Defines file systems to be mounted locally.
/etc/mnttab Lists currently mounted file systems, including automounted directories. The contents of this file are maintained by the kernel and cannot be edited.
/etc/dfs/fstypes Lists the default file system types for remote file systems.

Daemons :-
NFS Server
 mountd --- Handles file system mount requests from remote systems, and provides access control

 nfsd -- Handles client file system requests

 statd -- Works with the lockd daemon to provide crash recovery functions for the lock manager

 lockd -- Supports record-locking operations on NFS files

 nfslogd -- Provides operational logging

 nfs4cbd -- Used exclusively by the NFS version 4 client; manages the communication endpoints for the NFS version 4 callback program. The daemon has no user-accessible interface.

NFS-Client
 statd -- Works with the lockd daemon to provide crash recovery functions for the lock
manager
 lockd -- Supports record-locking operations on NFS files
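
In Solaris 10 these daemons are started through SMF services. A short sketch of checking and enabling them, using the standard service names:

    # svcs -a | grep nfs                        (list the NFS-related SMF services and their states)
    # svcadm enable svc:/network/nfs/server     (enable the NFS server service, which starts nfsd and mountd)
    # svcs -p svc:/network/nfs/server           (show the processes running under the service)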

END OF CONCEPT
NETWORK FILE SYSTEM (NFS)

SOLARIS 10
SOLARIS VOLUME MANAGER (SVM)

SOLARIS VOLUME MANAGER (SVM)

[Concept map: Solaris Volume Manager overview. Purpose: availability, redundancy, performance. Drawbacks of traditional UFS disk management: slices cannot be resized, file systems cannot span slices, no RAID implementation. Components: volumes, soft partitions, state database replicas, hot spare pool. Commands: metadb, metainit, metastat, metattach, metadetach, metasync, metaroot. RAID levels: RAID-0, RAID-1, RAID-5.]

Solaris Volume Manager is used to manage storage for high availability, reliability, and performance.

 Solaris Volume Manager overcomes the drawbacks of traditional disk management in Solaris.

 Drawbacks of traditional disk management:

 A slice cannot be resized, so the UFS file system on it cannot be resized.
 File systems cannot span multiple slices.
 RAID implementation is not possible.

RAID LEVELS:

RAID (Redundant Array of Inexpensive Disks) refers to a set of disks, called an array or a volume. This array provides improved reliability, performance, and data redundancy.

Solaris Volume Manager supports the following RAID levels.

RAID-0: Used for concatenation and striping. It does not provide any data redundancy. RAID-0 offers a high data transfer rate and high I/O throughput, but suffers lower reliability: any single drive failure can cause data loss.

RAID-1: Mirroring uses equal amounts of disk capacity to store data and a copy
(mirror) of the data. Data is duplicated, or mirrored over two or more physical disks.
Data can be read from both disks simultaneously. If one physical disk fails, you can
continue to use the mirror with no loss in performance or loss of data.

RAID-5: RAID-5 uses striping to spread the data over the disks in an array. It also records parity information to provide some data redundancy.

Components:-

Volumes: A volume is a group of physical slices that appears to the system as a single logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX terms.

Types of Volumes

 RAID-0 Volume
 RAID-1 Volume
 RAID-5 Volume
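
For illustration, a minimal sketch of creating each volume type with metainit; the device names and volume numbers are hypothetical:

    # metainit d10 1 2 c1t0d0s0 c1t1d0s0           (RAID-0: one stripe of two slices)
    # metainit d21 1 1 c1t0d0s1                    (simple concatenation, used as the first submirror)
    # metainit d22 1 1 c1t1d0s1                    (second submirror)
    # metainit d20 -m d21                          (RAID-1: create the mirror with one submirror)
    # metattach d20 d22                            (attach the second submirror)
    # metainit d30 -r c1t0d0s3 c1t1d0s3 c1t2d0s3   (RAID-5: requires at least three slices)
    # metastat                                     (check the status of all volumes)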


Volumes are the destination for user data, applications, and file systems. As with physical devices, volumes are accessed through block or raw device names. Solaris Volume Manager enables you to expand a volume by adding additional slices, and also to expand a UFS file system online.

The inability to reduce the size of a file system is a UFS limitation. Similarly, after a Solaris Volume Manager partition has been increased in size, it cannot be reduced.

Volume names must begin with the letter "d" followed by a number. Solaris Volume Manager has 128 default volume names, from d0 to d127.

State Database and State Database Replicas

The state database stores information about the state of your Solaris Volume Manager configuration. The state database records and tracks changes made to your configuration. Solaris Volume Manager automatically updates the state database when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.

The state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. A Solaris Volume Manager configuration must have an operating state database.

State database replicas can be configured on dedicated slices. These slices can later become part of volumes.
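
A minimal sketch of creating state database replicas on a dedicated slice; the slice name is a hypothetical example:

    # metadb -a -f -c 3 c0t0d0s7   (create the first three replicas; -f is required for the initial set)
    # metadb -i                    (list the replicas and explain the status flags)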

Hot Spare Pools

A hot spare pool is a collection of slices reserved by Solaris Volume Manager to be automatically substituted for failed components.

Hot spares provide increased data availability for RAID-1 and RAID-5 volumes.
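
A short sketch of creating a hot spare pool and associating it with a volume; the pool name, slices, and volume (d30) are hypothetical:

    # metainit hsp001 c2t1d0s0 c2t2d0s0   (create a hot spare pool with two spare slices)
    # metaparam -h hsp001 d30             (associate the pool with an existing RAID-5 volume)
    # metahs -i                           (display the status of hot spare pools)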

Soft Partition

A soft partition is used to make partitions on an existing physical slice.

By using soft partitions you can overcome the traditional limitation of seven slices per disk: a physical slice can be divided into as many partitions as needed.
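
A minimal sketch of creating soft partitions on an existing slice; the device and volume names are hypothetical:

    # metainit d40 -p c1t0d0s4 2g     (2 GB soft partition on the slice)
    # metainit d41 -p c1t0d0s4 500m   (another soft partition on the same slice)
    # newfs /dev/md/rdsk/d40          (build a UFS file system on the soft partition)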

END OF CONCEPT
SOLARIS VOLUME MANAGER (SVM)

SOLARIS 10
ZONES

ZONES

[Concept map: Solaris Zones overview. Purpose: virtualization of the operating system. Types: global and non-global (sparse root, whole root). States: configured, incomplete, installed, ready, running, shutting down. Commands: zonecfg, zoneadm, zlogin, zonename. Daemons: zoneadmd, zsched.]

Solaris Zones is a new feature of Solaris 10. Zones technology provides virtual operating system services that allow applications to run in an isolated and secure environment. A zone is a virtual environment that is created within a single running instance of the Solaris operating environment.

Applications running in one zone cannot affect applications running in a different zone, even though they exist and run on the same physical server.

Types of Zones:

 Global Zone
 Non-Global Zone

Global Zone

 The global zone is the default zone and is used for system-wide configuration and control.

 The global zone is assigned zone ID 0 by the system.

 It provides the single bootable instance of the Solaris operating environment that runs on the system.

 It contains a full installation of Solaris system packages.

 It is the only zone that is aware of non-global zones and their configuration.
 It is the only zone from which a non-global zone can be configured, installed, managed, and uninstalled.

Non-Global Zone

 A non-global zone is created from a global zone and is also managed by it.
 You can have up to 8192 non-global zones on a single physical system.
 A non-global zone is assigned a zone ID by the system when it is booted.
 It shares the Solaris kernel that is booted from the global zone.
 It contains a subset of the installed Solaris system packages.
 It can contain additional software packages shared from the global zone.
 It is not aware of the existence of other zones.

Non-global zones are of two types:

 Sparse root zone
 Whole root zone

ZONES

Sparse root zone

A sparse root zone optimizes sharing by implementing read-only loopback file systems from the global zone and only installing a subset of the system root packages locally. The majority of the root file system is shared from the global zone. This model generally requires about 100 MB of disk space.

Whole root zone

All of the required Solaris packages are copied to the zone's private file system.
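
A brief sketch of how the two types are typically configured with zonecfg; the zone names and paths are hypothetical:

    # zonecfg -z webzone
    zonecfg:webzone> create                       (default template: sparse root zone, inherits /lib, /platform, /sbin, /usr)
    zonecfg:webzone> set zonepath=/zones/webzone
    zonecfg:webzone> exit

    # zonecfg -z dbzone
    zonecfg:dbzone> create -b                     (blank template: whole root zone, no inherit-pkg-dir entries)
    zonecfg:dbzone> set zonepath=/zones/dbzone
    zonecfg:dbzone> exit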

Zone Daemons:

zoneadmd
zsched

zoneadmd:
This daemon starts when a zone needs to be managed. An instance of zoneadmd is started for each zone, so it is not uncommon to have multiple instances of this daemon running on a single server.


zoneadmd is responsible for the following:

 Allocating the zone ID and starting the zsched process.
 Setting system-wide resource controls.
 Plumbing the virtual network interface.
 Mounting any loopback file systems.

zsched:
The zsched process is started by zoneadmd and exists for each active zone. A zone is said to be active when it is in the ready, running, or shutting-down state. The job of zsched is to keep track of kernel threads running within the zone. It is also known as the zone scheduler.
Zone States:

 Configured: A zone is in this state when its configuration has been completed and committed to storage. Additional configuration that must be done after the initial boot has not yet been done.

 Incomplete: A zone is set to this state during an install or uninstall operation. Upon completion of the operation it changes to the correct state.

 Installed: A zone in this state has a confirmed configuration. The zoneadm command is used to verify that the zone will run on the designated Solaris system.


 Ready: The kernel creates the zsched process, the network interfaces are plumbed, and file systems are mounted. The system also assigns a zone ID at this stage, but there are no processes associated with the zone yet.

 Running: A zone enters this state when the first user process is created. This is the normal state for an operational zone.

 Shutting-Down: A transitional state that is only visible while a zone is in the process of being halted. If a zone cannot shut down for any reason, it also displays this state.
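
To tie the states to the commands, a minimal sketch of the zone life cycle; the zone name is hypothetical:

    # zonecfg -z webzone             (configure the zone        -> configured)
    # zoneadm -z webzone verify      (check the configuration)
    # zoneadm -z webzone install     (install the packages      -> installed)
    # zoneadm -z webzone ready       (optional: prepare it      -> ready)
    # zoneadm -z webzone boot        (start the zone            -> running)
    # zlogin -C webzone              (console login for the initial system identification)
    # zoneadm list -cv               (list all zones with their current states)
    # zoneadm -z webzone halt        (stop the zone)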

END OF CONCEPT
ZONES

Live Upgrade Overview

Solaris Live Upgrade provides a method of upgrading a system while the system continues to operate. While your current boot environment (BE) is running, you can:

 duplicate the boot environment, then upgrade the duplicate.
 install a Solaris Flash archive on a boot environment.

Live Upgrade Options

Target OS distribution can come from the following sources:

 Local lofi mount
 NFS mount share
 Local DVD
 Jumpstart server/profile file

Live Upgrade Steps

Step 1 – Create BE: lucreate
Step 2 – Upgrade BE: luupgrade
Step 3 – Activate BE: luactivate <be_oper2>
Step 4* – Fall back to the original BE: luactivate <be_orig>
Step 5* – Remove inactive BE: ludelete <be_oper2>
* optional
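
A minimal sketch of the full sequence; the BE names and the spare slice are hypothetical examples:

    # lucreate -c current_BE -n new_BE -m /:/dev/dsk/c0d0s3:ufs   (Step 1: copy the running BE to a spare slice)
    # luupgrade -u -n new_BE -s /cdrom/cdrom0                     (Step 2: upgrade the inactive BE from install media)
    # luactivate new_BE                                           (Step 3: make it the BE for the next boot)
    # init 6                                                      (reboot into the new BE)
    # luactivate current_BE                                       (Step 4, optional: fall back to the original BE, then reboot)
    # ludelete new_BE                                             (Step 5, optional: remove a BE that is no longer needed)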

Live Upgrade Packages

Before upgrading, you MUST install the Solaris Live Upgrade packages from the target OS. Why?

 Latest Live Upgrade enhancements
 Newest capabilities
 With the old packages there is a greater chance that you might run into problems

Solaris Live Upgrade packages:
 SUNWluu, SUNWlur, SUNWlucfg & SUNWluzone*
 Must remove these, then add the new packages from the target OS install disk

Minimum patch requirements:
 Sun Infodoc ID206844, Solaris Live Upgrade Minimum Patch Requirements.
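
A short sketch of swapping the packages; the media path is a typical example and may differ on your install image:

    # pkgrm SUNWlucfg SUNWluu SUNWlur                                        (remove the old Live Upgrade packages)
    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu   (add the versions shipped with the target OS)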

Live Upgrade Rules of Thumb

 N+3 upgrade paths
 Check that you are using the most updated LU packages
 Pre-patch your current OS installation
 Know your disk partition/slice layouts beforehand
 Set up your slices to support Live Upgrade from the beginning (the first time you install Solaris)
 If performing LU on a two-disk mirror, break the mirror and perform LU on disk2 (leaving the current BE on disk1). Then, once LU is completed and disk2 is the active BE, set up disk mirroring again.

Using LU in Solaris 10...

 UFS to UFS and UFS to ZFS
 Configure file systems on the new BE, then copy critical file systems/files to the new BE.
 Create a ZFS root pool BE from a UFS root (the ZFS root pool must exist before the lucreate).
 Set up slices for the new BE correctly from the beginning.
 Have 2 disks installed:
   One for the current BE
   One for Live Upgrade

Example with one 20 GB disk:

 c0d0s0, /, 9000 MB
 c0d0s1, swap, 2000 MB
 c0d0s2, backup, 20000 MB (automatically created)
 c0d0s3, /altroot, 9000 MB (this is where the new BE will be)
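
With that layout, the new BE could be created on the spare slice along these lines (the BE names are hypothetical):

    # lucreate -c s10_current -n s10_new -m /:/dev/dsk/c0d0s3:ufs
    # lustatus                                                     (confirm both boot environments are listed)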
LU Command Description

# lu :- A deprecated cursor-based menuing interface for creating and administering boot environments

# luactivate :- Designate the specified boot environment as the one to boot from in subsequent boots

# lucancel :- Cancel a scheduled Live Upgrade operation

# lucompare :- Compare the contents of two boot environments

# lucreate :- Create a boot environment

# lucurr :- Display the name of the currently booted boot environment

# ludelete :- Delete a boot environment

# lufslist :- List the file systems of a specified boot environment

# lumake :- Re-create a boot environment based on the current boot environment

# lumount / # luumount :- Mount/unmount the file systems of a specified boot environment

# lurename :- Rename a boot environment

# lustatus :- For every boot environment, list whether it is active, active upon the next boot, in the midst of a copy operation, or has a copy operation scheduled for it

# luupgrade :- Modify a boot environment by installing flash archives, installing a complete OS, installing and/or deleting OS and application packages, or installing OS patches
