
The Linux Logical Volume Manager (LVM)



The general concept of a logical volume manager (LVM) stems from the desire to create filesystems that span several physical disks, as well as to change the size of existing partitions on the fly. Another important feature is the ability to create snapshots. See Snapshots.
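For a taste of what snapshot usage looks like in practice, here is a minimal sketch (the volume group vg01, the LV data and the mount point are hypothetical names, not taken from this article):

lvcreate -s -L 1G -n data-snap /dev/vg01/data   # copy-on-write snapshot of LV "data"
mkdir -p /mnt/snap
mount -o ro /dev/vg01/data-snap /mnt/snap       # mount read-only, e.g. for a backup run
# ... back up /mnt/snap ...
umount /mnt/snap
lvremove -f /dev/vg01/data-snap                 # discard the snapshot when done

Note that the snapshot only needs enough space (-L) to hold the blocks that change on the origin volume while the snapshot exists.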

The Linux LVM implementation is similar to the HP-UX LVM implementation, although the actual code probably has more in common with the AIX implementation. Both of them are derivatives of VxFS, the Veritas filesystem. The Veritas filesystem and volume manager have their roots in a fault-tolerant proprietary minicomputer built by Veritas in the 1980s. They have been available for Solaris since at least 1993 and were later integrated into HP-UX, AIX and SCO UNIX. Veritas Volume Manager code has been used (in extensively modified form and without command line tools) in Windows. The quality and architectural integrity of the current implementation of LVM is low. Recovery tools in case of severe malfunction are limited. LVM is the source of many difficult-to-resolve problems, including, but not limited to, situations when a production server became unbootable after regular updates.

LVM adds an additional layer of complexity, and my recommendation is to avoid it unless you need some of the functions it provides.

Recovery of an LVM-controlled partition is more complex and time consuming. It helps if the installation DVD rescue mode automatically recognizes the LVM group; this is the case for Suse 10. See Recovering a Lost LVM Volume Disk (Novell User Communities) for a good explanation of the basic mechanism of recovery of LVM partitions.

Putting the root partition under LVM is a risky decision and you may pay the price unless you are an LVM expert. In a large enterprise environment, if the partition is not on SAN or NAS, extension of a partition usually means adding a new pair of hard drives. In such cases creating a cpio archive of the partition, recreating the partitions and restoring the content is a better deal, as such cases happen once in several years. Another path to avoid LVM is to start with it, optimize the size of the partitions based on actual usage, and then move all content to the second pair of drives without LVM, modify /etc/fstab and replace the original drives with the new ones. Among cases when LVM is essential are the following:

You need snapshots.

Some of your partitions are so dynamic that they can span several drives during their lifetime. In such cases SAN or NAS is a better solution than using LVM.

All-in-all, the current LVM is a pretty convoluted implementation of a three-tier storage hierarchy (physical volumes, volume groups, logical volumes). Such an implementation, both from an architectural and from an efficiency standpoint, is somewhat inferior to integrated solutions like ZFS.

For a regular sysadmin who does not have much LVM experience, the sense of desperation and cold running down the spine when an LVM-based partition goes south dampens all the advantages that LVM provides. You can find pretty interesting and opinionated tidbits about such situations on the Net. For example, this emotional statement in the discussion thread /dev/dm-0:

I only use those for mounting flash drives, and mapping encrypted partitions. Sorry, i dont do LVM anymore, after a small problem lost me 300GB of data. Its much easier to backup.

Putting the root filesystem under LVM often happens if the first partition is a service partition (for example a Dell service partition). In this case the swap partition and boot partition take another two primary partitions, the extended partition is the last one, and it is usually put completely under LVM. In this case it is better to allocate the swap partition on a different volume, or to a file, so that the root partition is a primary partition.

If you have the root system on an LVM volume you need to train yourself to use a recovery disk and mount those partitions. It also helps to have a separate backup on CD or other media of /etc/lvm. Among other things it contains the file with the structure of your LVM volume, for example /etc/lvm/backup/vg01.

LVM was originally written (adapted from IBM code?) in 1998 by Heinz Mauelshagen. A good introduction to the basic concepts can be found in the Wikipedia articles Logical volume management and Logical Volume Manager (Linux). Some code was donated by IBM [IBM pitches its open source side]. It is unclear if it is still used. See Enterprise Volume Management System - Wikipedia:

IBM has donated technology, code and skills to the Linux community, Kloeckner said, citing the company's donation of the Logical Volume Manager and its Journaling File System.

Matthew O'Keefe, who from 1990 to May 2000 taught and performed research in storage systems and parallel simulation software as a professor of electrical and computer engineering at the University of Minnesota, founded Sistina Software in May of 2000 to develop storage infrastructure software for Linux, including the Linux Logical Volume Manager (LVM). They created LVM2. Sistina was acquired by Red Hat in December 2003.

LVM2 is identical in Red Hat and Suse, although each has a different GUI interface for managing volumes. The installers for both Red Hat and Suse are LVM-aware.

Although the Linux volume manager works OK and is pretty reliable, the documentation sucks badly for a commercial product. The most readable documentation that I have found is the article by Klaus Heinrich Kiwi, Logical volume management, published at IBM DeveloperWorks on September 11, 2007. A good cheatsheet is available from RedHat - LVM cheatsheet.

It is now somewhat outdated. Moreover, in RHEL 4 the GUI interface is almost unusable, as the left pane cannot be enlarged. YAST in Suse 10 was a much better deal.

Terminology
The LVM hierarchy includes the Physical Volume (PV) (typically a hard disk or partition, though it may well just be a device that 'looks' like a hard disk, e.g. a RAID device), the Volume Group (VG) (the new virtual disk that can contain several physical disks) and Logical Volumes (LV) -- the equivalent of a disk partition in a non-LVM system. The Volume Group is the highest level abstraction used within the LVM.

    hda1   hdc1          (PV:s on partitions or whole disks)
       \   /
        \ /
      diskvg             (VG)
      /  |  \
     /   |   \
 usrlv rootlv varlv      (LV:s)
  ext2 reiserfs xfs      (filesystems)

The lowest level in the LVM storage hierarchy is the Physical Volume (PV). A PV is a single device or partition and is created with the command pvcreate device. This step initializes a partition for later use. During this step each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

Multiple Physical Volumes (initialized partitions) are merged into a Volume Group (VG). This is done with the command vgcreate volume_name device [device...]. This step also registers volume_name in the LVM kernel module, and therefore it is made accessible to the kernel I/O layer. For example:

vgcreate test-volume /dev/hda2 /dev/hda10

A Volume Group is a pool from which Logical Volumes (LV) can be allocated. An LV is the equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (e.g. /home). Creating an LV is done with the lvcreate command.

Here is a summary of the terminology used:

Partition - a portion of physical hard disk space. A hard disk may contain one or more partitions. Partitions are defined by the BIOS and described by partition tables stored on a hard drive.

Volume - a logical concept which hides the physical organization of storage space. A compatibility volume directly corresponds to a partition, while an LVM volume may span more than one partition on one or more physical disks. A volume is seen by users as a single drive letter.

Physical Volume (PV) - a single physical hard drive (synonym for "hard disk").

Volume Group (VG) - a set of one or more PVs which form a single storage pool. You can define multiple VGs on each system.

Logical Volume (LV) - a usable unit of disk space within a VG. LVs are used analogously to partitions on PCs or slices under Solaris: they usually contain filesystems or paging spaces ("swap"). Unlike a physical partition, an LV can span multiple physical volumes that constitute the VG.

Root partition - the physical or logical partition that holds the root filesystem and mount points for all other partitions. Can be a physical partition or a logical volume.
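Putting the three layers together, a minimal end-to-end sketch looks like this (device and volume names are hypothetical):

pvcreate /dev/sdb1 /dev/sdc1        # initialize partitions as physical volumes
vgcreate vg01 /dev/sdb1 /dev/sdc1   # pool them into a volume group
lvcreate -L 10G -n data vg01        # carve a 10 GB logical volume out of the pool
mkfs -t ext3 /dev/vg01/data         # the LV behaves as a normal block device
mount /dev/vg01/data /data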

LVM Tools
LVM Tool    Description
pvcreate    Create a physical volume from a hard drive
vgcreate    Create a volume group from one or more physical volumes
vgextend    Add a physical volume to an existing volume group
vgreduce    Remove a physical volume from a volume group
lvcreate    Create a logical volume from available space in the volume group
lvextend    Extend the size of a logical volume from free physical extents in the volume group
lvremove    Remove a logical volume from a volume group, after unmounting it
vgdisplay   Show properties of existing volume groups
lvdisplay   Show properties of existing logical volumes
pvscan      Show properties of existing physical volumes

Getting the map of the LVM environment



The commands vgdisplay, lvdisplay, and pvscan have man pages that provide a wealth of information on how to navigate the maze of volumes on a particular server.

The first command to use is pvscan, which provides you with information about the physical volumes available:

pvscan [-d|--debug] [-e|--exported] [-h|--help] [--ignorelockingfailure]
       [-n|--novolumegroup] [-s|--short] [-u|--uuid] [-v[v]|--verbose]

pvscan scans all supported LVM block devices in the system for physical volumes. See lvm(8) for common options.

-e, --exported
    Only show physical volumes belonging to exported volume groups.
-n, --novolumegroup
    Only show physical volumes not belonging to any volume group.
-s, --short
    Short listing format.
-u, --uuid
    Show UUIDs (Uniform Unique Identifiers) in addition to device special names.

vgdisplay shows volume groups one by one and provides information about free disk space in each:

vgdisplay vg0 | grep "Total PE"

Operations on Logical Volumes


Among typical operations (adapted from A Walkthrough of the LVM for Linux):

Adding a disk to the Volume Group. To add /dev/hda6 to the Volume Group just type vgextend vg01 /dev/hda6 and you're done! You can check this out by using vgdisplay -v vg01. Note that there are now a lot more PEs available!

Creating a striped Logical Volume. Note that LVM created your whole Logical Volume on one Physical Volume within the Volume Group. You can also stripe an LV across two Physical Volumes with the -i flag in lvcreate. We'll create a new LV, lv02, striped across hda5 and hda6. Type lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6. Specifying the PVs on the command line tells LVM which PEs to use, while the -i2 option tells it to stripe the LV across the two. You now have an LV striped across two PVs!

Moving data within a Volume Group. Up to now, PEs and LEs were pretty much interchangeable. They are the same size and are mapped automatically by LVM. This does not have to be the case, though. In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use! This will impact your performance, but it can prove useful. Let's move lv01 to hda6 from hda5. Type pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6. This will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it's done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!

Removing a Logical Volume from a Volume Group. Let's say we no longer need lv02. We can remove it and place its PEs back in the empty pool for the Volume Group. First, unmount its filesystem. Next, deactivate it with lvchange -a n /dev/vg01/lv02. Finally, delete it by typing lvremove /dev/vg01/lv02. Look at the Volume Group and notice that the PEs are now unused.

Removing a disk from the Volume Group. You can also remove a disk from a volume group. We aren't using hda5 anymore, so we can remove it from the Volume Group. Just type vgreduce vg01 /dev/hda5 and it's gone!

A file system on a logical volume may be extended. Also, more space may be added to a VG by adding new partitions or devices with the vgextend command. For example:

lvextend -L +4G /dev/VolGroup00/LogVol04

The command pvmove can be used in several ways to move any LV elsewhere. There are also many more commands to rename, remove, split, merge, activate, deactivate and get extended information about current PVs, VGs and LVs.
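As a hedged sketch of the pvmove workflow (device names are hypothetical), evacuating one PV before removing it might look like:

vgextend vg01 /dev/sdd1       # add a replacement PV to the volume group
pvmove /dev/sdb1 /dev/sdd1    # migrate all allocated extents off the old PV, online
vgreduce vg01 /dev/sdb1       # detach the now-empty PV from the volume group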

Here is a typical df map of a server with volume manager installed. As you can see, all partitions except the /boot partition are referred to via the path /dev/mapper/VolGroup00-LogVolxx, where xx is a two-digit number:

Filesystem                        1K-blocks    Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00     4128448  316304    3602432   9% /
/dev/hda3                            194449   22382     162027  13% /boot
none                                2020484       0    2020484   0% /dev/shm
/dev/mapper/VolGroup00-LogVol05     8256952 3189944    4647580  41% /usr
/dev/mapper/VolGroup00-LogVol03     4128448   42012    3876724   2% /home
/dev/mapper/VolGroup00-LogVol02     4128448   41640    3877096   2% /tmp
/dev/mapper/VolGroup00-LogVol04     8256952  174232    7663344   3% /var
/dev/hde                             594366  594366          0 100% /media/cdrecorder

Resiliency to renumbering of physical hard disks


LVM identifies PVs by UUID, not by device name. Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system. vgscan identifies this after a new disk is added that changes your drive numbering. Most distros run vgscan in the LVM startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand, I think. On the other hand, if your VG is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst case scenario is that the activation will fail without a vgscan, with a complaint about a missing PV.

The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.
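To inspect the UUID-to-device mapping and reactivate volume groups after a renumbering, something like the following should work (a sketch, not specific to any distribution):

pvscan -u      # list physical volumes together with their UUIDs
vgscan         # rebuild the volume group list after the hardware change
vgchange -ay   # (re)activate all volume groups that were found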

How to get information about free space


vgdisplay shows volume groups one by one and provides information about free disk space in each:

vgdisplay volume_group_one | grep "Total PE"
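The one-line LVM2 reporting commands give the same information more compactly; a sketch (the volume group name vg01 is hypothetical):

vgdisplay vg01 | grep "Free"   # free extents in a particular VG
vgs                            # one summary line per VG (see the VFree column)
pvs                            # per-PV size and free space
lvs                            # per-LV sizes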

How to create a new volume


# vgcreate vg01 /dev/hda2 /dev/hda10
  Volume group "vg01" successfully created
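Remember that the partitions must first be initialized as physical volumes; a sketch of the full sequence using the same partitions as above:

# pvcreate /dev/hda2 /dev/hda10
# vgcreate vg01 /dev/hda2 /dev/hda10
# vgdisplay vg01     # verify the result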

How to create and mount a partition


1. Create the partition with lvcreate:

# lvcreate -L 5G -n data vg02
Logical volume "data" created

2. Format the partition:

# mkfs -t ext3 /dev/vg02/data

3. Make the mount point and mount it:

# mkdir /data
# mount /dev/vg02/data /data/

4. Check the result:

# df -h /data
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg02-data   5.0G   33M  4.7G   1% /data

5. Add it to /etc/fstab.

You can create a shell function to simplify this task if you need to create many similar partitions, as is often the case with Oracle databases. For example:

# Create oracle archive filesystem
# Parameters:
#   1 - name of archive
#   2 - size in gigabytes
#   3 - name of volume group (default vg0)
function make_archive {
    local vg=${3:-vg0}
    mkdir -p /oracle/$1/archive
    chown oracle:dba /oracle/$1/archive
    lvcreate -L ${2}G -n archive $vg
    mkfs -t ext3 /dev/$vg/archive
    echo "/dev/$vg/archive /oracle/$1/archive ext3 defaults 1 2" >> /etc/fstab
    mount /oracle/$1/archive   # uses the fstab entry just added
    df -k /oracle/$1/archive   # check that the mount succeeded
}

How to extend the partition


If one wishes to use the free physical extents on the volume group, one can achieve this using the lvm lvextend command:

lvm lvextend -L +4G /dev/VolGroup00/LogVol04   # extend /var
ext2online /dev/VolGroup00/LogVol04

Option -l operates with free extents. This adds the 7153 free extents to the logical volume:

# lvm lvextend -l+7153 /dev/TestVG/TestLV
  Extending logical volume TestLV to 30.28 GB
  Logical volume TestLV successfully resized

"lvextend -L +54 /dev/vg01/lvol10 /dev/sdk3" tries to extend the size of that logical volume by 54MB on physical volume /dev/sdk3. This is only possible if /dev/sdk3 is a member of volume group vg01.

To grow a volume group, the pvcreate command is first used to create a new physical volume from a new partition, and pvs is used again to verify the new physical volume; the PV is then added with vgextend. See the redhat.com Knowledgebase.
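A hedged sketch of that sequence (the partition name /dev/sdb1 is hypothetical):

pvcreate /dev/sdb1                           # initialize the new partition as a PV
pvs                                          # verify that the new PV is visible
vgextend VolGroup00 /dev/sdb1                # grow the volume group
lvextend -L +4G /dev/VolGroup00/LogVol04     # grow the LV into the new space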

After extending the volume group and the logical volume, it is possible to resize the file system on the fly. This is done using ext2online. First verify the file system size, perform the resize, and then verify the size again:

# df -h /mnt/test
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/TestVG-TestLV  2.3G   36M  2.2G   2% /mnt/test
# ext2online /dev/TestVG/TestLV
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
# df -h /mnt/test
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/TestVG-TestLV   30G   39M   29G   1% /mnt/test

For more information see Resizing the filesystem.
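Note that on newer distributions ext2online has been folded into resize2fs, which (given kernel support) can likewise grow a mounted ext3 filesystem; an equivalent sequence would be:

lvextend -L +4G /dev/TestVG/TestLV
resize2fs /dev/TestVG/TestLV     # without an explicit size, grows to fill the LV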

How to remove an LVM partition


Use lvremove to remove a logical volume from a volume group, after unmounting it. Syntax:

lvremove [-A|--autobackup y|n] [-d|--debug] [-f|--force] [-h|-?|--help]
         [-t|--test] [-v|--verbose] LogicalVolumePath [LogicalVolumePath...]

lvremove removes one or more logical volumes. Confirmation will be requested before deactivating any active logical volume prior to removal. Logical volumes cannot be deactivated or removed while they are open (e.g. if they contain a mounted filesystem).


Options:

-f, --force
    Remove active logical volumes without confirmation.

For example, remove the active logical volume lvol1 in volume group vg00 without asking for confirmation:

lvremove -f vg00/lvol1

Remove all logical volumes in volume group vg00:

lvremove vg00
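A complete teardown, from filesystem down to the physical volume, would look roughly like this (names are hypothetical):

umount /data
lvchange -a n /dev/vg02/data   # deactivate the LV first
lvremove /dev/vg02/data
vgreduce vg02 /dev/hda10       # drop a now-unused PV from the volume group
pvremove /dev/hda10            # wipe the LVM label from the partition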


Old News ;-)


[Sep 14, 2011] Using the Multipathed storage with LVM
Now that the multipath is configured, you need to perform disk management to make the disks available for use. If your original install was on LVM, you may want to add the new disks to the existing volume group and create some new logical volumes for use. If your original install was on regular disk partitions, you may want to create new volume groups and logical volumes. In both cases, you might want to partition the volume groups and automate the mounting of these new partitions to certain mount points.

About this task: The following example illustrates how the above can be achieved. For detailed information about LVM administration, please consult the Red Hat LVM Administrator's Guide at http://www.redhat.com/docs/enUS/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/ or the SLES10 SP2 Storage Administration Guide at http://www.novell.com/documentation/sles10/stor_evms/index.html?page=/documentation/sles10/stor_evms/data/mpiousing.html

Starting with an existing Linux environment on a blade, and a multipath zone configuration that will allow the blade to access some storage, here is a set of generic steps to make use of the new storage:

Procedure

1. Determine which disks are not multipathed disks. With the existing configuration, the output of df and fdisk -l should indicate which disks are already in use before multipath was set up. In this example, sda is the only disk that exists before multipath was set up.

2. Create and/or open /etc/multipath.conf and blacklist the local disk. For a SLES10 machine:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

The /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file can be used as a reference to further determine how to configure your multipathing environment. For a RHEL5 machine, edit the /etc/multipath.conf that has already been created by default. Related documentation can be found in the /usr/share/doc/device-mapper-multipath-0.4.7/ directory.

3. Open the /etc/multipath.conf file, and edit the file to blacklist disks that are not meant to be multipathed. In this example, sda is blacklisted:

blacklist {
    devnode "^sda"
}

4. Enable and activate the multipath daemon(s). On both RHEL and SLES, the command is:

chkconfig multipathd on

Additionally, on a RHEL system, this command is required:

chkconfig mdmpd on

5. Reboot the blade. Note: if the machine is not rebooted, the latest configuration may not be detected.

6. Check if the multipathd daemon is running by issuing:

service multipathd status

Additionally, if you are running a RHEL system, check if the mdmpd daemon is running:

service mdmpd status

7. Run the command multipath -ll to verify that the disk(s) are now properly recognized as multipath devices:

multipath -ll
mpath2 (350010b900004b868) dm-3 IBM-ESXS,GNA073C3ESTT0Z
[size=68G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 1:0:1:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:3:0 sde 8:64 [active][ready]
mpath1 (35000cca0071acd29) dm-2 IBM-ESXS,VPA073C3-ETS10
[size=68G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 1:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:2:0 sdd 8:48 [active][ready]

As expected, two sets of paths are detected, two paths in each set. From examining the above output, notice that sdc and sde are actually the same physical disk, accessible from the blade via two different devices; similarly in the case of sdb and sdd. Note that the device names are dm-2 and dm-3.

8. If your disks are new, skip this step. Optionally, if you have previous data or a partition table, use the following command to erase the partition table (X is the device number as shown in step 7). Be very careful when doing this step, as it is destructive: your data will not be recoverable if you erase the partition table.

dd if=/dev/zero of=/dev/dm-X bs=8k count=100

In the test environment, both disks will be used in the existing volume group:

dd if=/dev/zero of=/dev/dm-2 bs=8k count=100
dd if=/dev/zero of=/dev/dm-3 bs=8k count=100

Note: this step will erase your disks.

9. Create a new physical volume with each disk by entering the following command:

pvcreate /dev/dm-X

In our environment:

pvcreate /dev/dm-2
pvcreate /dev/dm-3

10. Run lvm pvdisplay to see if the physical volumes are displayed correctly. If at any time in this LV management process you would like to view the status of existing related entities like physical volumes (pv), volume groups (vg), and logical volumes (lv), issue the corresponding command:

lvm pvdisplay
lvm vgdisplay
lvm lvdisplay

11. If the new entity you just created or changed could not be found, you may want to issue the corresponding command to scan for the device:

pvscan
vgscan
lvscan

12. Run the vgscan command to show any existing volume groups. On a RHEL system, the installer creates VolGroup00 by default if another partitioning scheme is not chosen. On a SLES system, no volume groups exist. The following shows the output for an existing volume group VolGroup00:

vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2

13. Add the physical volume(s) to an existing volume group using the vgextend command. In our environment, add /dev/dm-2 and /dev/dm-3 created in step 9 to the existing volume group VolGroup00 found in step 12:

vgextend VolGroup00 /dev/dm-2 /dev/dm-3
Volume group VolGroup00 successfully extended

14. If there is no existing volume group, create a new volume group using the vgcreate command. For example, to create a new volume group VolGroup00 with the physical volumes /dev/dm-2 and /dev/dm-3, run this command:

vgcreate VolGroup00 /dev/dm-2 /dev/dm-3
Volume group "VolGroup00" successfully created

Creating a new logical volume and setting up automounting: now that more storage is available in the VolGroup00 volume group, you can use the extra storage to save your data.

[May 14, 2010] Restoring LVM Volumes with Acronis True Image
Knowledge Base

You need to back up logical volumes of LVM and ordinary (non-LVM) partitions. There is no need to back up physical volumes of LVM, as they are backed up sector-by-sector and there is no guarantee that they will work after the restore. The listed Acronis products recognize logical LVM volumes as Dynamic or GPT volumes. Logical LVM volumes can be restored as non-LVM (regular) partitions in Acronis Rescue Mode. Logical LVM volumes can be restored on top of existing LVM volumes. See LVM Volumes Supported by Acronis True Image 9.1 Server for Linux or LVM Volumes Supported by Acronis True Image Echo.

Solution

Restoring LVM volumes as non-LVMs

1. Restore partitions. Restore logical LVM volumes and non-LVM partitions one by one with Acronis backup software. Do not forget to make the boot partition Active (/ or /boot if available).

2. Make the system bootable:

1. Boot from a Linux distribution rescue CD.
2. Enter rescue mode.
3. Mount the restored root (/) partition. If the rescue CD mounted partitions automatically, skip to the next step. Most distributions will try to mount the system partitions as designated in /etc/fstab of the restored system. Since there are no LVMs available, this process is likely to fail. This is why you might need to mount the restored partitions manually. Enter the following command:

#cat /proc/partitions

You will get the list of recognized partitions:

major minor  #blocks  name
   8      0  8388608  sda
   8      1   104391  sda1
   8      2  8281507  sda2

Mount the root (/) partition:

#mount -t [fs_type] [device] [system_mount_point]

In the example below /dev/sda2 is root, because it was restored as the second primary partition on a SATA disk:

#mount -t ext3 /dev/sda2 /mnt/sysimage

4. Mount /boot if it was not mounted automatically:

#mount -t [fs_type] /dev/[device] /[system_mount_point]/boot

Example:

#mount -t ext3 /dev/sda1 /mnt/sysimage/boot

5. chroot to the mounted / of the restored partition:

#chroot [mount_point]

6. Mount /proc in the chroot:

#mount -t proc proc /proc

7. Create hard disk devices in /dev if it was not populated automatically. Check existing partitions with cat /proc/partitions and create appropriate devices for them:

#/sbin/MAKEDEV [device]

8. Edit /etc/fstab on the restored partition: replace all entries of /dev/VolGroupXX/LogVolXX with the appropriate /dev/[device]. You can find which device you need to mount in cat /proc/partitions.

9. Edit grub.conf: open /boot/grub/grub.conf and edit it to replace /dev/VolGroupXX/LogVolXX with the appropriate /dev/[device].

10. Reactivate GRUB. Run the following command to re-activate GRUB automatically:

#grub-install /dev/[device]

11. Make sure the system boots fine.

Restoring LVM volumes on prepared LVMs

1. Prepare the LVM volumes.

Boot from Acronis Bootable Media. Press F11 after the "Starting Acronis Loader..." message appears and you get to the selection screen of the program. After you get the Linux Kernel Settings prompt, remove the word quiet and click OK. Select the Full version menu item to boot and wait for the # prompt to appear.

List the partitions you have on the hard disk:

#fdisk -l

This will give not only the list of partitions on the hard drive, but also the name of the device associated with the hard disk. Start creating partitions using fdisk:

#fdisk [device]

where [device] is the name of the device associated with the hard disk.

Create physical volumes for LVMs:

#lvm pvcreate [partition]

for example,

#lvm pvcreate /dev/sda2

Create the LVM group:

#lvm vgcreate [name] [device]

where [name] is the name of the Volume Group you create and [device] is the name of the device associated with the partition you want to add to the Volume Group; for example,

#lvm vgcreate VolGroup00 /dev/sda2

Create LVM volumes inside the group:

#lvm lvcreate -L[size] -n[name] [VolumeGroup]

where [size] is the size of the volume being created (e.g. 4G), [name] is the name of the volume being created, and [VolumeGroup] is the name of the Volume Group where we want to place the volume. For example,

#lvm lvcreate -L6G -nLogVol00 VolGroup00

Activate the created LVM:

#lvm vgchange -ay

Start the Acronis product:

#/bin/product

2. Restore partitions. Restore partitions from your backup archive to the created LVM volumes.

[Feb 9, 2009] USB Hard Drive in RAID1


January 31, 2008 | www.bgevolution.com

This concept works just as for an internal hard drive, although USB drives seem to not remain part of the array after a reboot. Therefore, to use a USB device in a RAID1 setup you will have to leave the drive connected and the computer running. Another tactic is to occasionally sync your USB drive to the array and shut down the USB drive after synchronization. Either tactic is effective. You can create a quick script to add the USB partitions to the RAID1. The first thing to do when synchronizing is to add the partition:

sudo mdadm --add /dev/md0 /dev/sdb1

I have 4 partitions, therefore my script contains 4 add commands. Then grow the arrays to fit the number of devices:

sudo mdadm --grow /dev/md0 --raid-devices=3

After growing the array your USB drive will magically sync. USB is substantially slower than SATA or PATA; anything over 100 Gigabytes will take some time. My 149 Gigabyte /home partition takes about an hour and a half to synchronize. Once it's synced I do not experience any apparent difference in system performance.

[Jan 9, 2009] Linux lvm


15.12.2008 | Linuxconfig.org

This article describes the basic logic behind the Linux logical volume manager by showing real examples of configuration and usage. Despite the fact that Debian Linux is used for this tutorial, you can also apply the same command line syntax with other Linux distributions such as Red Hat, Mandriva, SuSE Linux and others.

[Nov 12, 2008] /dev/dm-0


fdisk -l output in case you are using LVM contains many messages like "Disk /dev/dm-0 doesn't contain a valid partition table" (LinuxQuestions.org)

This has been very helpful to me. I found this thread via Google on dm-0 because I also got the "no partition table" error message. Here is what I think: when the programs fdisk and sfdisk are run with the option -l and no argument, e.g.

# /sbin/fdisk -l

they look for all devices that can have cylinders, heads, sectors, etc. If they find such a device, they output that information to standard output and they output the partition table to standard output. If there is no partition table, they have an error message (also standard output). One can see this by piping to 'less', e.g.

# /sbin/fdisk -l | less

/dev/dm-0 ... /dev/dm-3 on my Fedora C5 system seem to be device mappers associated with LVM. RAID might also require device mappers.

[Aug 26, 2008] Moving LVM volumes to a different volume group by Sander Marechal
2008-08-25 | www.jejik.com

I went with SystemRescueCD, which comes with both mdadm and LVM out-of-the-box. The system layout is quite simple: /dev/sda1 and /dev/sdb1 make up a 500 GB mdadm RAID1 volume. This RAID volume contains an LVM volume group called 3ware, named so because in my old server it was connected to my 3ware RAID card. It contains a single logical volume called media. The original 80 GB disk is on /dev/sdc1, which contains an LVM volume group called linuxvg. Inside that volume group are three volumes: boot, root and swap.

Goal: move linuxvg-root and linuxvg-boot to the 3ware volume group. Additional goal: rename 3ware to linuxvg. The latter is more for aesthetic reasons, but as a bonus it also means that there is no need to fiddle with grub or fstab settings after the move.

Before starting SystemRescueCD and moving things around there are a few things that need to be done first. Start by making a copy of /etc/mdadm/mdadm.conf because you will need it later. Also, because the machine will be booting from the RAID array I need to install grub to those two disks:

# grub-install /dev/sda
# grub-install /dev/sdb

Now it's time to boot into SystemRescueCD. I start off by copying /etc/mdadm/mdadm.conf back and starting the RAID1 array. This command scans for all the arrays defined in mdadm.conf and tries to start them:

# mdadm --assemble --scan

Next I need to make a couple of changes to /etc/lvm/lvm.conf. If I were to scan for LVM volume groups at this point, it would find the 3ware group three times: once in /dev/md0, /dev/sda1 and /dev/sdb1. So I adjust the filter setting in lvm.conf so it will not scan /dev/sda1 and /dev/sdb1.

filter = [ "r|/dev/cdrom|", "r|/dev/sd[ab]1|" ]

LVM can now scan the hard drives and find all the volume groups:

# vgscan

I disable the volume groups so that I can rename them. linuxvg becomes linuxold and 3ware becomes the new linuxvg. Then I re-enable the volume groups.

# vgchange -a n
# vgrename linuxvg linuxold
# vgrename 3ware linuxvg
# vgchange -a y

Now I can create a new logical volume in the 500 GB volume group for my boot partition and create an ext3 filesystem in it:

# lvcreate --name boot --size 512MB linuxvg
# mkfs.ext3 /dev/mapper/linuxvg-boot

I create mount points to mount the original boot partition and the new boot partition and then use rsync to copy all the data. Don't use cp for this! Rsync with the -ah option will preserve all soft links, hard links and file permissions, while cp does not. If you do not want to use rsync you could also use the dd command to transfer the data directly from block device to block device.

# mkdir /mnt/src /mnt/dst
# mount -t ext3 /dev/mapper/linuxold-boot /mnt/src
# mount -t ext3 /dev/mapper/linuxvg-boot /mnt/dst
# rsync -avh /mnt/src/ /mnt/dst/
# umount /mnt/src /mnt/dst

Rinse and repeat to copy over the root filesystem:

# lvcreate --name root --size 40960MB linuxvg
# mkfs.ext3 /dev/mapper/linuxvg-root
# mount -t ext3 /dev/mapper/linuxold-root /mnt/src
# mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst
# rsync -avh /mnt/src/ /mnt/dst/
# umount /mnt/src /mnt/dst

There's no sense in copying the swap volume. Simply create a new one:

# lvcreate --name swap --size 1024MB linuxvg
# mkswap /dev/mapper/linuxvg-swap

And that's it. I rebooted into Debian Lenny to make sure that everything worked and I removed the 80 GB disk from my server. While this wasn't particularly hard, I do hope that the maintainers of LVM create an lvmove command to make this even easier.

[Aug 15, 2008] Linux RAID Smackdown Crush RAID 5 with RAID 10
LinuxPlanet

Creating RAID 10

No Linux installer that I know of supports RAID 10, so we have to jump through some extra hoops to set it up in a fresh installation. This is my favorite layout for RAID systems: /dev/md0 is a RAID 1 array containing the root filesystem. /dev/md1 is a RAID 10 array containing a single LVM group divided into logical volumes for /home, /var, and /tmp, and anything else I feel like stuffing in there. Each disk has its own identical swap partition that is not part of RAID or LVM, just plain old ordinary swap.

One way is to use your Linux installer to create the RAID 1 array and the swap partitions, then boot into the new filesystem and create the RAID 10 array. This works, but then you have to move /home, /var, /tmp, and whatever else you want there, which means copying files and editing /etc/fstab. I get tired thinking about it. Another way is to prepare your arrays and logical volumes in advance and then install your new system over them, and that is what we are going to do.

You need a bootable live Linux that includes mdadm, LVM2 and GParted, unless you're a crusty old commandline commando that doesn't need any sissy GUIs, and are happy with fdisk. Two that I know have all of these are Knoppix and SystemRescueCD; I used SystemRescueCD.

Step one is to partition all of your drives identically. The partition sizes in my example system are small for faster testing; on a production system the 2nd primary partition would be as large as possible:

1st primary partition, 5GB
2nd primary partition, 7GB
swap partition, 1GB

The first partition on each drive must be marked as bootable, and the first two partitions must be marked as "fd Linux raid auto" in fdisk. In GParted, use Partition -> Manage Flags. Now you can create your RAID arrays with the mdadm command. This command creates the RAID 1 array for the root filesystem:

# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/hda1 /dev/sda1
mdadm: layout defaults to n1
mdadm: chunk size defaults to 64K
mdadm: size set to 3076352K
mdadm: array /dev/md0 started.

This will take some time, which cat /proc/mdstat will tell you:

Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] [raid10]
md0 : active raid10 sda1[1] hda1[0]
      3076352 blocks 2 near-copies [2/2] [UU]
      [====>................]  resync = 21.8% (673152/3076352) finish=3.2min speed=12471K/sec

This command creates the RAID 10 array:

# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2

Naturally you want to be very careful with your drive names, and give mdadm time to finish. It will tell you when it's done:

RAID10 conf printout:
 --- wd: rd:2
 disk 0, wo:0, o:1, dev:hda2
 disk 1, wo:0, o:1, dev:sda2

mdadm --detail /dev/md0 displays detailed information on your arrays.

Create LVM Group and Volumes

Now we'll put a LVM group and volumes on /dev/md1. I use vg- for volume group names and lv- for the logical volumes in the volume groups. Using descriptive names, like lv-home, will save your sanity later when you're creating filesystems and mountpoints. The -L option specifies the size of the volume:

# pvcreate /dev/md1
# vgcreate vg-server1 /dev/md1
# lvcreate -L4g -nlv-home vg-server1
# lvcreate -L2g -nlv-var vg-server1
# lvcreate -L1g -nlv-tmp vg-server1

You'll get confirmations for every command, and you can use vgdisplay and lvdisplay to see the fruits of your labors. Use vgdisplay to see how much space is left.

Getting E-mail notifications when MD devices fail


I use the MD (multiple device) logical volume manager to mirror the boot devices on the Linux servers I support. When I first started using MD, the mdadm utility was not available to manage and monitor MD devices. Since disk failures are relatively common in large shops, I used the shell script from my SysAdmin article Monitoring and Managing Linux Software RAID to send E-mail when a device entered the failed state.

While reading through the mdadm(8) manual page, I came across the --monitor and --mail options. These options can be used to monitor the operational state of the MD devices in a server, and generate E-mail notifications if a problem is detected. E-mail notification support can be enabled by running mdadm with the --monitor option to monitor devices, the --daemonise option to create a daemon process, and the --mail option to generate E-mail:

$ /sbin/mdadm --monitor --scan --daemonise --mail=root@localhost

Once mdadm is daemonized, an E-mail similar to the following will be sent each time a failure is detected:


From: mdadm monitoring
To: root@localhost.localdomain
Subject: Fail event on /dev/md1:biscuit

This is an automatically generated mail message from mdadm running on biscuit

A Fail event had been detected on md device /dev/md1.

Faithfully yours, etc.

I digs me some mdadm!
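The notification address can also be set persistently, so that the distribution's mdmonitor/mdadm init script picks it up at boot; the relevant /etc/mdadm.conf line would be (a sketch):

MAILADDR root@localhost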

Linux LVM silliness


While attempting to create a 2-way LVM mirror this weekend on my Fedora Core 5 workstation, I received the following error:

$ lvcreate -L1024 -m 1 vgdata
Not enough PVs with free space available for parallel allocation.
Consider --alloc anywhere if desperate.

Since the two devices were initialized specifically for this purpose and contained no other data, I was confused by this error message. After scouring Google for answers, I found a post that indicated that I needed a log LV for this to work, and the log LV had to be on its own disk. I am not sure about most people, but who on earth orders a box with three disks? Ugh!

Posted by matty, filed under Linux LVM. Date: May 3, 2006, 9:50 pm | 2 Comments

[linux-lvm] Raid 0+1


From: "Wayne Pascoe" <lists-june2004 penguinpowered org> To: linux-lvm redhat com Subject: [linux-lvm] Raid 0+1 Date: Wed, 21 Jul 2004 13:22:53 +0100 (BST) Hi all, I am working on a project to evaluate LVM2 against Veritas Volume Manager for a new Linux deployment. I am trying to get a Raid 0+1 solution working and I'm struggling. So far, this is where I am: 1. I created 8GB partitions on 4 disks, sdb, sdc, sdd and sde, and set their partition types to 8e with fdisk

2. I then ran vgscan, follwed by pvcreate /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1

3. Next, I created 2 volume groups as follows: vgcreate StripedData1 /dev/sdb1 /dev/sdc1 vgcreate StripedData2 /dev/sdd1 /dev/sde1 4. Next, I created 2 volumes, one in each group as follows: lvcreate -i 2 -I 64 -n Data1 -L 6G StripedData1 lvcreate -i 2 -I 64 -n Data2 -L 6G StripedData2 things start to go wrong.

Now I have 2 striped volumes, but no redundancy. This is where I think


5. I now create a raid device, /dev/md0, consisting of these two volumes. I run mkraid on this, create a file system, and mount it on /Data1. This all works fine, and I have a 6GB filesystem on /Data1.

Now I need to be able to resize this whole solution, and I'm not sure if the way I've built it caters for what I need to do...

I unmount /Data1 and use lvextend to extend the 2 volumes from 6GB to 7.5GB. This succeeds. Now even though both of the volumes that make up /dev/md0 are extended, I cannot resize /dev/md0 using resize2fs /dev/md0.

Can anyone advise me how I can achieve what I'm looking for here ? I'm guessing maybe I did things the wrong way around, but I can't find a solution that will give me both striping and mirroring :( Thanks in advance,

--

Wayne Pascoe

LVM HOWTO
Introduction
  1. Latest Version
  2. Disclaimer
  3. Contributors
1. What is LVM?
2. What is Logical Volume Management?
  2.1. Why would I want it?
  2.2. Benefits of Logical Volume Management on a Small System
  2.3. Benefits of Logical Volume Management on a Large System
3. Anatomy of LVM
  3.1. volume group (VG)
  3.2. physical volume (PV)
  3.3. logical volume (LV)
  3.4. physical extent (PE)
  3.7. mapping modes (linear/striped)
  3.8. Snapshots
4. Frequently Asked Questions
  4.1. LVM 2 FAQ
  4.2. LVM 1 FAQ
5. Acquiring LVM
  5.1. Download the source
  5.2. Download the development source via CVS
  5.3. Before You Begin
  5.4. Initial Setup
  5.5. Checking Out Source Code
  5.6. Code Updates
  5.7. Starting a Project
  5.8. Hacking the Code
  5.9. Conflicts
6. Building the kernel modules
  6.1. Building the device-mapper module
  6.2. Build the LVM 1 kernel module
7. LVM 1 Boot time scripts
  7.1. Caldera
  7.2. Debian
  7.3. Mandrake
  7.4. Redhat
  7.5. Slackware
  7.6. SuSE
8. LVM 2 Boot Time Scripts
9. Building LVM from the Source
  9.1. Make LVM library and tools
  9.2. Install LVM library and tools
  9.3. Removing LVM library and tools
10. Transitioning from previous versions of LVM to LVM 1.0.8
  10.1. Upgrading to LVM 1.0.8 with a non-LVM root partition
  10.2. Upgrading to LVM 1.0.8 with an LVM root partition and initrd
11. Common Tasks
  11.1. Initializing disks or disk partitions
  11.2. Creating a volume group
  11.3. Activating a volume group
  11.4. Removing a volume group
  11.5. Adding physical volumes to a volume group
  11.6. Removing physical volumes from a volume group
  11.7. Creating a logical volume
  11.8. Removing a logical volume
  11.9. Extending a logical volume
  11.10. Reducing a logical volume
  11.11. Migrating data off of a physical volume
12. Disk partitioning
  12.1. Multiple partitions on the same disk
  12.2. Sun disk labels
13. Recipes
  13.1. Setting up LVM on three SCSI disks
  13.2. Setting up LVM on three SCSI disks with striping
  13.3. Add a new disk to a multi-disk SCSI system
  13.4. Taking a Backup Using Snapshots
  13.5. Removing an Old Disk
  13.6. Moving a volume group to another system
  13.7. Splitting a volume group
  13.8. Converting a root filesystem to LVM 1
  13.9. Recover physical volume metadata
A. Dangerous Operations
  A.1. Restoring the VG UUIDs using uuid_fixer
  A.2. Sharing LVM volumes
B. Reporting Errors and Bugs
C. Contact and Links
  C.1. Mail lists
  C.2. Links
D. GNU Free Documentation License
  D.1. PREAMBLE
  D.2. APPLICABILITY AND DEFINITIONS
  D.3. VERBATIM COPYING
  D.4. COPYING IN QUANTITY
  D.5. MODIFICATIONS
  D.6. COMBINING DOCUMENTS
  D.7. COLLECTIONS OF DOCUMENTS
  D.8. AGGREGATION WITH INDEPENDENT WORKS
  D.9. TRANSLATION
  D.10. TERMINATION
  D.11. FUTURE REVISIONS OF THIS LICENSE
  D.12. ADDENDUM: How to use this License for your documents

The Linux and Unix Menagerie LVM Quick Command Reference For Linux And Unix
1. LVM basic relationships. A quick run-down on how the different parts are related:

Physical volume - consists of one, or many, partitions (or physical extent groups) on a physical drive.
Volume group - composed of one or more physical volumes and contains one or more logical volumes.
Logical volume - contained within a volume group.

2. LVM creation commands (these commands are used to initialize, or create, new logical objects). Note that we have yet to explore these fully, as they can be used to do much more than we've demonstrated so far in our simple setup.

pvcreate - Used to create physical volumes.
vgcreate - Used to create volume groups.
lvcreate - Used to create logical volumes.

3. LVM monitoring and display commands (these commands are used to discover, and display the properties of, existing logical objects). Note that some of these commands include cross-referenced information. For instance, pvdisplay includes information about volume groups associated with the physical volume.

pvscan - Used to scan the OS for physical volumes.
vgscan - Used to scan the OS for volume groups.
lvscan - Used to scan the OS for logical volumes.
pvdisplay - Used to display information about physical volumes.
vgdisplay - Used to display information about volume groups.
lvdisplay - Used to display information about logical volumes.

4. LVM destruction or removal commands (these commands are used to ensure that logical objects are no longer allocable and/or to remove them entirely). Note, again, that we haven't fully explored the possibilities with these commands either. The "change" commands in particular are good for a lot more than just prepping a logical object for destruction.

pvchange - Used to change the status of a physical volume.
vgchange - Used to change the status of a volume group.
lvchange - Used to change the status of a logical volume.
pvremove - Used to wipe the disk label of a physical drive so that LVM does not recognize it as a physical volume.
vgremove - Used to remove a volume group.
lvremove - Used to remove a logical volume.

5. Manipulation commands (these commands allow you to play around with your existing logical objects; we haven't posted on any of these commands yet, and some of them can be extremely dangerous to goof with for no reason).

pvextend - Used to add physical devices (or partitions of same) to a physical volume.
pvreduce - Used to remove physical devices (or partitions of same) from a physical volume.
vgextend - Used to add a new physical disk (or partitions of same) to a volume group.
vgreduce - Used to remove a physical disk (or partitions of same) from a volume group.
lvextend - Used to increase the size of a logical volume.
lvreduce - Used to decrease the size of a logical volume.

[Jun 19, 2011] Monitoring and Display Commands For LVM On Linux And Unix
The Linux and Unix Menagerie

Physical Volumes: The two commands we'll be using here are pvscan and pvdisplay. pvscan, as with all of the following commands, pretty much does what the name implies: it scans your system for LVM physical volumes. When used straight-up, it will list out all the physical volumes it can find on the system, including those not associated with volume groups (output truncated to save on space):

host # pvscan
pvscan -- reading all physical volumes (this may take a while...)
...
pvscan -- ACTIVE PV "/dev/hda1" is in no VG [512 MB]
...
pvscan -- ACTIVE PV "/dev/hdd1" of VG "vg01" [512 MB / 266 MB free]
...

Next, we'll use pvdisplay to display our only physical volume. (Note that you can leave /dev/hdd1, or any specification, off of the command line if you want to display all of your physical volumes. We just happen to know we only have one and are being particular ;)

host # pvdisplay /dev/hdd1
...
PV Name               /dev/hdd1
VG Name               vg01
PV Size               512 MB
...

Other output should include whether or not the physical volume is allocatable (or "can be used" ;), total physical extents (see our post on getting started with LVM for a little more information on PEs), free physical extents, allocated physical extents and the physical volume's UUID (identifier).

Volume Groups: The two commands we'll be using here are vgscan and vgdisplay. vgscan will report on all existing volume groups, as well as create a file (generally) called /etc/lvmtab (some versions will create an /etc/lvmtab.d directory as well):

host # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "vg01"
...

vgdisplay can be used to check on the state and condition of our volume group(s). Again, we're specifying our volume group on the command line, but this is not necessary:

host # vgdisplay vg01
...
VG Name               vg01
...
VG Size               246 MB
...

This command gives even more effusive output: everything from the maximum logical volumes the volume group can contain (including how many it currently does and how many of those are open), separate (yet similar) information with regards to the physical volumes it can encompass, all of the information you've come to expect about the physical extents and, of course, each volume's UUID.

redhat.com The Linux Logical Volume Manager by Heinz Mauelshagen and Matthew O'Keefe
Contents: Introduction | Basic LVM commands | Differences between LVM1 and LVM2 | Summary | About the authors

Storage technology plays a critical role in increasing the performance, availability, and manageability of Linux servers. One of the most important new developments in the Linux 2.6 kernel (on which the Red Hat Enterprise Linux 4 kernel is based) is the Linux Logical Volume Manager, version 2 (or LVM 2). It combines a more consistent and robust internal design with important new features including volume mirroring and clustering, yet it is upwardly compatible with the original Logical Volume Manager 1 (LVM 1) commands and metadata. This article summarizes the basic principles behind the LVM and provides examples of basic operations to be performed with it.

Introduction

Logical volume management is a widely-used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span across physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and logical volume groups (VGs) are created by combining PVs, as shown in Figure 1 (LVM internal organization). Notice that the VGs can be an aggregate of PVs from multiple physical disks.

Figure 2 (Mapping logical extents to physical extents) shows how the logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.

For example, multiple PVs can be connected together to create a single large logical volume, as shown in Figure 3 (LVM linear mapping). This approach, known as a linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is a striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4 (LVM striped mapping, 4 physical extents per stripe). The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers.

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:

1. Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server.

2. Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible.

3. Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing).

4. Logical volume snapshots can be created to represent the exact state of the volume at a certain point in time, allowing accurate backups to proceed simultaneously with regular system operation.

Basic LVM commands

Initializing disks or disk partitions


To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs, use the following commands:

    pvcreate /dev/hda
    pvcreate /dev/hdb

If a Linux partition is to be converted, make sure that it is given partition type 0x8E using fdisk, then use pvcreate:

    pvcreate /dev/hda1
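The same workflow applies to any block device. A minimal sketch, assuming /dev/sdb1 is an unused partition on your system (the device name here is an illustrative assumption, not taken from the article):

    # Device name is hypothetical; adjust to your system.
    fdisk /dev/sdb        # in fdisk, use 't' to set the partition type to 8e (Linux LVM), then 'w'
    pvcreate /dev/sdb1    # initialize the partition as a physical volume
    pvdisplay /dev/sdb1   # verify the new PV, its size, and its UUID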

Creating a volume group


Once you have one or more physical volumes created, you can create a volume group from these PVs using the vgcreate command. The following command:

    vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.
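If the default 4 MB extent size is not appropriate, it can be overridden at creation time with the -s option of vgcreate. A brief sketch, reusing the device names from the example above:

    # Create the VG with 32 MB physical extents instead of the default 4 MB.
    vgcreate -s 32M volume_group_one /dev/hda /dev/hdb
    vgdisplay volume_group_one    # confirm the 'PE Size' and 'Total PE' fields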

Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then add that PV to volume_group_one:

    pvcreate /dev/hdc
    vgextend volume_group_one /dev/hdc

This same PV can be removed from volume_group_one by the vgreduce command:

    vgreduce volume_group_one /dev/hdc

Note that a PV cannot be removed while logical volumes still use its physical extents; any data on /dev/hdc must first be migrated to other PVs in the group (for example, with pvmove). This raises the issue of how we create an LV within a volume group in the first place.
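For completeness, a hedged sketch of that migration path, reusing the names from the example above:

    # Evacuate all allocated extents from /dev/hdc onto the remaining PVs in the VG.
    pvmove /dev/hdc
    vgreduce volume_group_one /dev/hdc   # now safe: the PV holds no allocated extents
    pvremove /dev/hdc                    # optionally erase the LVM label from the disk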

Creating a logical volume


We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the volume group to a single linear LV called logical_volume_one with the following LVM command:

    lvcreate -n logical_volume_one --size 255G volume_group_one

Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in volume_group_one:

    vgdisplay volume_group_one | grep "Total PE"

which returns

    Total PE 65536
Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:

    lvcreate -n logical_volume_one -l 65536 volume_group_one

To create a 1500 MB linear LV named logical_volume_one and its block device special file /dev/volume_group_one/logical_volume_one, use the following command:

    lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default. Striped mappings can also be created with lvcreate. For example, to create a 255 GB logical volume with two stripes and a stripe size of 4 KB, the following command can be used:

    lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:

    lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a striped LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with a stripe size of 4 KB and a size of 128 GB. An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:

    umount /dev/volume_group_one/logical_volume_one
    lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:

    /dev/<volume group name>/<logical volume name>

so that if we had two volume groups, myvg1 and myvg2, each containing three logical volumes (lv01, lv02, lv03), six device special files would be created:

    /dev/myvg1/lv01
    /dev/myvg1/lv02
    /dev/myvg1/lv03
    /dev/myvg2/lv01
    /dev/myvg2/lv02
    /dev/myvg2/lv03
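To double-check what has been created so far, the LVM2 reporting commands are handy. A brief sketch using the names from the examples above:

    # Compact one-line-per-object summaries of PVs, VGs, and LVs.
    pvs
    vgs volume_group_one
    lvs volume_group_one
    lvdisplay /dev/volume_group_one/logical_volume_one   # full details for a single LV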

Extending a logical volume


An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:

    lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while

    lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3fs and GFS file systems online, without bringing the system down. (The ext3 file system can also be shrunk or expanded offline using the ext2resize command.) To resize ext3fs, the following command

    ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides. The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and it will be enlarged to fill the device by default. If an optional size parameter is specified, then this size will be used instead.
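Putting the two steps together: a minimal sketch, assuming /dev/myvg/homevol carries a mounted ext3 file system as in the example above, and that it is mounted on /home (a hypothetical mount point):

    lvextend -L+10G /dev/myvg/homevol   # grow the logical volume by 10 GB
    ext2online /dev/myvg/homevol        # grow the mounted ext3 file system to fill the LV
    df -h /home                         # verify the new size (mount point is hypothetical)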

Recommended Links
In case of broken links, please try a Google search. If you find the page at a new location, please notify us.

Logical Volume Manager (Linux) - Wikipedia, the free encyclopedia
Logical volume management - Wikipedia, the free encyclopedia
Enterprise Volume Management System - Wikipedia, the free encyclopedia
Linux Logical Volume Manager overview paper
LVM Administrator's Guide for RHEL 4.6
LVM Administrator's Guide (Red Hat Enterprise Linux 5 deployment guide)
Red Hat Magazine | Tips and tricks: What is the procedure to resize..
Logical Volume Management (LVM) Guide by A J Lewis
Using Logical Volume Management (LVM) to Organize Your Disks on SLES 10 - Novell User Communities
Partitioning Your Hard Disk Before Installing SUSE - Novell User Communities
How to Mount a Specific Partition of a Xen File-backed Virtual Disk
Understanding LVM tutorial - Howto and Tutorial
InformIT: Managing Storage in Red Hat Enterprise Linux
A Beginner's Guide To LVM | HowtoForge - Linux Howtos and Tutorials
How To Resize ext3 Partitions Without Losing Data | HowtoForge - Linux Howtos and Tutorials
Expanding Linux Partitions with LVM - FedoraNEWS.ORG
Linux Logical Volume Manager (LVM) on Software RAID - linuxconfig.org
lvm - Logical Volume Manager - Linux man page
LVM HOWTO (outdated and incomplete)
RHEL Logical Volume Manager (LVM) Configuration
LVM2 Resource Page - provides links to tarballs, mailing lists, source code, documentation, and chat channels for LVM2
An Introduction to Disk Partitions


See the Linux man page on LVM2 tools for more details. "Linux on System z: Volume management recommendations" (developerWorks, October 2005) discusses LVM2 schemes for kernel 2.6, as well as the Enterprise Volume Management System (EVMS) as an alternative.

"Common threads: Learning Linux LVM, Part 1" (developerWorks, March 2001) and "Common threads: Learning Linux Gentoo Technologies, Inc.

LVM, Part 2" (developerWorks, April 2001) outdated articles by Daniel Robbins (drobbins@gentoo.org), President/CEO,

Linux Documentation Project has a variety of useful documents, especially its HOWTOs:

LVM HOWTO
A Beginner's Guide To LVM | HowtoForge - Linux Howtos and Tutorials
Managing RAID and LVM with Linux
LinuxDevCenter.com -- Managing Disk Space with LVM
LVM2 Resource Page
Linux Logical Volume Manager (LVM) on Software RAID
Expanding Linux Partitions with LVM - FedoraNEWS.ORG

HOWTOs:

LVM Howto
CentOS/Red Hat Deployment Guide has a RAID/LVM howto
MythTV's RAID howto

Recommended Papers
[Aug 11, 2007] Logical volume management by Klaus Heinrich Kiwi
Sep 11, 2007 | IBM developerWorks. Volume management is not new in the -ix world (UNIX, AIX, and so forth), and logical volume management (LVM) has been around since Linux kernel 2.4 (LVM 1) and kernel 2.6.9 (LVM 2). This article reveals the most useful features of LVM2, a relatively new userspace toolset that provides logical volume management facilities, and suggests several ways to simplify your system administration tasks.

Volume Managers in Linux
Barriers and journaling filesystems
Linux Logical Volume Manager (LVM) on Software RAID

