SWAP ADMINISTRATION
Swap Management
Virtual memory combines physical RAM with dedicated disk storage areas known as swap space.
Virtual memory makes it possible for the operating environment (OE) to address more memory than is physically installed.
When working with swap space, remember that RAM is the most critical resource in your system.
Paging is the transfer of selected memory pages between RAM and the swap areas. When you page private data out to swap space, physical RAM is made available for other processes to use.
Swapping is the movement of all memory pages associated with a process between RAM and disk.
The swap utility provides a method of adding, deleting, and monitoring the swap areas used
by the kernel. Swap area changes made from the command line are not permanent and are lost
after a reboot.
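As a sketch of how the swap utility is typically used (the device name below is an example, not from this document):

```shell
# List the configured swap areas (device, swaplo, blocks, free)
swap -l

# Summarize swap usage (allocated, reserved, used, available)
swap -s

# Add a slice as an additional swap area; to make the change
# survive a reboot, also add a matching entry to /etc/vfstab
swap -a /dev/dsk/c0t0d0s1

# Remove a swap area
swap -d /dev/dsk/c0t0d0s1
```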
END OF CONCEPT
SWAP ADMINISTRATION
SOLARIS 10
NETWORK FILE SYSTEM (NFS)
Daemon overview:
Server daemons: nfsd, mountd, statd, lockd
Client daemons: statd, lockd, nfs4cbd
Introduction
The NFS service enables computers of different architectures running different operating
systems to share file systems across a network.
Instead of placing copies of commonly used files on every system, the NFS service
enables you to place one copy of the files on one computer's hard disk. All other systems
can then access the files across the network. When using the NFS service, remote file
systems are almost indistinguishable from local file systems.
The NFS service has two components: the NFS server and the NFS client.
The NFS server contains file resources shared with other systems on the network.
A computer acts as a server when it makes files and directories on its hard disk available
to the other computers on the network.
The NFS client system mounts file resources shared over the network and presents the
file resources to users as if they were local files.
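A minimal sketch of the server and client sides described above; the hostname, path, and options are illustrative examples, not from this document:

```shell
# On the NFS server: share a directory read-only over NFS
share -F nfs -o ro /export/docs

# Verify what the server is currently sharing
share

# On the NFS client: mount the shared resource at a local mount point
mount -F nfs server1:/export/docs /mnt
```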
Daemons:
NFS Server
mountd -- Handles file system mount requests from remote systems and provides access control
nfsd -- Handles client requests for file system access
statd -- Works with the lockd daemon to provide crash recovery functions for the lock manager
lockd -- Supports record-locking operations on NFS files
NFS Client
statd -- Works with the lockd daemon to provide crash recovery functions for the lock manager
lockd -- Supports record-locking operations on NFS files
nfs4cbd -- For the exclusive use of the NFS version 4 client; manages the communication endpoints for the NFS version 4 callback program. The daemon has no user-accessible interface.
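In Solaris 10 these daemons are managed by SMF rather than started by hand; a sketch of starting the NFS server service and confirming the daemons are up:

```shell
# Enable the NFS server service (SMF starts the required daemons)
svcadm enable network/nfs/server

# Check the service state
svcs network/nfs/server

# Confirm the server daemons are running
pgrep -l mountd
pgrep -l nfsd
```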
END OF CONCEPT
NETWORK FILE SYSTEM (NFS)
SOLARIS 10
SOLARIS VOLUME MANAGER (SVM)
Overview:
Purpose: availability, redundancy, performance
Drawbacks of traditional UFS disk management: volumes cannot be resized, file systems cannot span multiple slices, no RAID implementation
Components of SVM: volumes, state database replicas, hot spare pools, soft partitions
Commands: metadb, metainit, metastat, metattach, metadetach, metasync, metaroot
RAID levels: RAID-0, RAID-1, RAID-5
Solaris Volume Manager manages storage for high availability, reliability, and performance. It overcomes the drawbacks of traditional disk management in Solaris:
A slice cannot be resized, so a UFS file system cannot be resized.
File systems cannot span multiple slices.
RAID implementation is not possible.
RAID LEVELS:
RAID (Redundant Array of Inexpensive Disks) refers to a set of disks, called an array or a volume. An array provides improved reliability, performance, and data redundancy.
RAID-0: Used for concatenation and striping. It provides no data redundancy. RAID-0 offers a high data transfer rate and high I/O throughput, but suffers lower reliability: any single drive failure can cause data loss.
RAID-1: Mirroring uses equal amounts of disk capacity to store data and a copy
(mirror) of the data. Data is duplicated, or mirrored over two or more physical disks.
Data can be read from both disks simultaneously. If one physical disk fails, you can
continue to use the mirror with no loss in performance or loss of data.
RAID-5: RAID-5 uses striping to spread the data over the disks in an array. It also records parity information to provide data redundancy.
Components:
Volumes: A volume is a group of physical slices that appears to the system as a single logical device. Volumes are actually pseudo, or virtual, devices in standard UNIX terms.
Types of Volumes
RAID-0 Volume
RAID-1 Volume
RAID-5 Volume
Volumes are the destination for user data, applications, and file systems. As with physical devices, volumes are accessed through block or raw device names. Solaris Volume Manager enables you to expand a volume by adding additional slices, and to expand a UFS file system online.
The inability to reduce the size of a file system is a UFS limitation. Similarly, after a Solaris Volume Manager volume has been increased in size, it cannot be reduced.
Volume names must begin with the letter "d" followed by a number. Solaris Volume Manager has 128 default volume names, d0 through d127.
The state database stores information about the state of your Solaris Volume Manager configuration. It records and tracks changes made to your configuration; Solaris Volume Manager automatically updates it when a configuration or state change occurs. Creating a new volume is an example of a configuration change. A submirror failure is an example of a state change.
The state database is actually a collection of multiple replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. A Solaris Volume Manager configuration must have an operating state database.
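A sketch of creating and checking state database replicas with metadb; the slice names are examples only:

```shell
# Create two replicas on each of two dedicated slices;
# -f (force) is required when creating the very first replicas
metadb -a -f -c 2 c0t0d0s7 c0t1d0s7

# Display the status and location of all replicas
metadb -i
```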
State database replicas can be configured on dedicated slices. Those slices can later become part of volumes.
Hot spares provide increased data availability for RAID-1 and RAID-5 volumes.
Soft Partitions
Soft partitions overcome the traditional limitation of seven usable slices per disk: a physical slice can be divided into as many soft partitions as needed.
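A sketch of building a RAID-1 volume and a soft partition with the commands listed above; volume and slice names are illustrative:

```shell
# Create two single-slice concatenations to use as submirrors
metainit d11 1 1 c0t0d0s0
metainit d12 1 1 c0t1d0s0

# Create a one-way mirror from the first submirror,
# then attach the second submirror to make it two-way
metainit d10 -m d11
metattach d10 d12

# Create a 2 GB soft partition on a physical slice
metainit d20 -p c1t0d0s0 2g

# Check the volume status
metastat d10
```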
END OF CONCEPT
SOLARIS VOLUME MANAGER (SVM)
SOLARIS 10
ZONES
Overview:
Purpose: virtualization of the operating system
Types of zones: global, non-global
Zone states: configured, incomplete, installed, ready, running, shutting down
Commands: zonecfg, zoneadm, zlogin, zonename
Daemons: zoneadmd, zsched
Solaris Zones is a new feature of Solaris 10. Zones technology provides virtual operating system services that allow applications to run in an isolated and secure environment. A zone is a virtual environment created within a single running instance of the Solaris Operating Environment.
An application running in one zone cannot affect applications running in a different zone, even though they exist and run on the same physical server.
Types of Zones:
Global Zone
Non-Global Zone
Global Zone
The global zone is the default zone and is used for system-wide configuration and control.
It provides the single bootable instance of the Solaris Operating Environment that runs on the system.
Non-Global Zone
A non-global zone is created from, and managed by, the global zone.
You can have up to 8192 non-global zones on a single physical system.
A non-global zone is assigned a zone ID by the system when it is booted.
It shares the Solaris kernel that is booted from the global zone.
It contains a subset of the installed Solaris system packages.
It can contain additional software packages shared from the global zone.
It is not aware of the existence of other zones.
Zone Daemons:
zoneadmd
zsched
zoneadmd:
This daemon starts when a zone needs to be managed. An instance of zoneadmd is started for each zone, so it is not uncommon to have multiple instances of this daemon running on a single server.
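A sketch of the configure/install/boot sequence for a non-global zone; the zone name and zonepath below are hypothetical examples:

```shell
# Configure a new zone named webzone (name and path are examples)
zonecfg -z webzone <<EOF
create
set zonepath=/zones/webzone
set autoboot=true
commit
EOF

# Install the zone's files, boot it, then attach to its console
zoneadm -z webzone install
zoneadm -z webzone boot
zlogin -C webzone
```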
Installed: A zone in this state has a confirmed configuration. The zoneadm command is used to verify that the zone can run on the designated Solaris system.
Ready: The kernel creates the zsched process, network interfaces are plumbed, and file systems are mounted. The system also assigns a zone ID at this stage, but there are no processes associated with the zone yet.
Running: A zone enters this state when the first user process is created. This is the normal state for an operational zone.
Shutting-Down: A transitional state that is only visible while a zone is in the process of being halted. If a zone cannot shut down for any reason, it remains in this state.
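The states above can be observed from the global zone:

```shell
# List all zones (configured, installed, and running);
# columns show zone ID, name, state, path, and brand
zoneadm list -cv
```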
END OF CONCEPT
ZONES
Live Upgrade Overview
Live Upgrade Options
Live Upgrade Steps
Step 1 – Create BE
lucreate
Step 2 – Upgrade BE
luupgrade
Step 3 – Activate BE
luactivate <be_oper2>
Step 4* – Fall back to original BE
luactivate <be_orig>
Step 5* – Remove inactive BE
ludelete <be_oper2>
* optional
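The steps above can be sketched as a command sequence; the BE names, target slice, and media path are examples matched to the UFS layout shown later, not values from this document:

```shell
# Step 1: create a new boot environment on the spare slice
lucreate -n be_oper2 -m /:/dev/dsk/c0d0s3:ufs

# Step 2: upgrade the inactive BE from an OS image
luupgrade -u -n be_oper2 -s /cdrom/cdrom0

# Step 3: activate the new BE and reboot into it
luactivate be_oper2
init 6

# Step 5 (optional): remove the old BE once satisfied
ludelete be_orig
```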
Live Upgrade Packages
Live Upgrade Rules of Thumb
Using LU in Solaris 10...
Create ZFS root pool BE from UFS root (ZFS root pool must
c0d0s0, /, 9000MB
c0d0s1, swap, 2000MB
c0d0s2, backup, 20000MB (automatically created)
c0d0s3, /altroot, 9000MB (this is where BE will be)
LU Command Description
lumount / luumount :- Mount/unmount the file systems of a specified boot environment