
What is an over-committed aggregate?

Why do I see space available on my volume, but my filer tells me I don't have any space left on my device? Why do I need volume guarantees enabled? How do I find out how much space I am actually using in my aggregate?

What is an over-committed aggregate?

An over-committed aggregate is one in which the total space allocated to volumes exceeds the space the containing aggregate can actually provide. This situation arises when volume guarantees are turned off. When guarantees are off, the space a volume consumes in the aggregate reflects only the amount of data inside that volume. So if someone creates a 100GB volume that holds only 20GB of data, df -A will show that volume as using 20GB if volume guarantees are not enabled. If guarantees are turned on, that volume will show as using 100GB.

Example: Before creating volume "test":
filer> df -Ah
Aggregate               total       used      avail  capacity
aggr1                   705GB       51GB      654GB        7%
aggr1/.snapshot          37GB      133MB       37GB        0%

After creating 100GB volume "test" with no guarantee and 20GB of data:
filer> df -Ah
Aggregate               total       used      avail  capacity
aggr1                   705GB       71GB      634GB       10%
aggr1/.snapshot          37GB      133MB       37GB        0%

After enabling volume guarantee on "test":


filer> df -Ah
Aggregate               total       used      avail  capacity
aggr1                   705GB      151GB      554GB       21%
aggr1/.snapshot          37GB      133MB       37GB        0%
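
The effect of the guarantee setting on aggregate accounting can be modeled with a few lines of arithmetic. The following is a minimal sketch in Python, not a NetApp tool; the sizes are the hypothetical figures from the example above.

# Minimal sketch: how a volume's guarantee setting changes what the
# aggregate counts as "used" (hypothetical figures from the example above).

def aggregate_used(volumes):
    """Sum what each volume charges against the aggregate.

    Each volume is (allocated_size, data_used, guarantee_enabled).
    With a volume guarantee, the full allocated size is charged;
    without one, only the data actually written is charged.
    """
    return sum(size if guaranteed else used
               for size, used, guaranteed in volumes)

aggr_size_gb = 705  # usable space reported by 'df -A' for aggr1

# 100GB volume "test" holding 20GB of data, guarantee disabled vs. enabled
print(aggregate_used([(100, 20, False)]))  # 20  -> aggregate shows ~71GB used overall
print(aggregate_used([(100, 20, True)]))   # 100 -> aggregate shows ~151GB used overall

# Over-commitment check: total allocations vs. aggregate size
allocations = [100] * 8                    # eight 100GB volumes with no guarantees
print(sum(allocations) > aggr_size_gb)     # True: this aggregate is over-committed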

Why do I see space available on my volume, but my filer tells me I don't have any space left on my device?

An aggregate becomes over-committed when the total volume allocation exceeds the aggregate's capacity. If one creates a 500GB aggregate, then one is limited to 500GB of free space (after WAFL overhead). If volume guarantees are on, you could create five 100GB volumes and the aggregate would show 100% space used in df -A. However, if volume guarantees are disabled, you could create as many 100GB volumes as you wanted, and the aggregate would only count the data inside the volumes as used. The volumes then fill over time as they are used, and once they reach a total of 500GB used, the aggregate shows as full and no more writes can take place on that aggregate, even if the individual volumes have not been filled.

Why do I need volume guarantees enabled?

Volume guarantees need to be enabled in the majority of cases to avoid a situation where one can no longer write to an aggregate due to lack of space. If volume guarantees are on, space usage can be monitored on a per-volume basis, and there is an accurate representation of what you have allocated versus what you are using. By guaranteeing that space is available, you avoid not knowing when you are running out of space.

How do I find out how much space I am actually using in my aggregate?

df and df -A, when used together, can help illustrate how much space is actually used on an aggregate versus how much the volumes are using. However, these commands can be misinterpreted and, occasionally, inaccurate. The best way to show how much space is being used versus how much is allocated is "aggr show_space". This command reports the amount of space actually being used, regardless of guarantees.

aggr show_space with volume guarantee on for "test":
filer> aggr show_space aggr1 -h

Aggregate 'aggr1'
    Total space    WAFL reserve    Snap reserve    Usable space    BSR NVLOG
          825GB            82GB            37GB           705GB       1180MB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
syncdest                             50GB           214MB          volume
test                                100GB*          816KB          volume

Aggregate                       Allocated            Used           Avail
Total space                         150GB           215MB           554GB
Snap reserve                         37GB           133MB            37GB
WAFL reserve                         82GB          1207MB            81GB

*Note how the allocation of the volume "test" greatly differs from the "used" amount.

aggr show_space with volume guarantee disabled for "test":
filer> aggr show_space aggr1 -h

Aggregate 'aggr1'
    Total space    WAFL reserve    Snap reserve    Usable space    BSR NVLOG
          825GB            82GB            37GB           705GB       1180MB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
syncdest                             50GB           214MB          volume
test                                868KB*          868KB            none

Aggregate                       Allocated            Used           Avail
Total space                          50GB           215MB           654GB
Snap reserve                         37GB           133MB            37GB
WAFL reserve                         82GB          1207MB            81GB

*Notice how "test" shows only 868KB used - this is because the "20GB" inside of the volume is actually a LUN with space reservations turned on but no data inside of it. Additionally, note how the space allocated matches the space used. This is how the filer sees the space in a volume with no guarantee versus one with a guarantee enabled.
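
One way to keep an eye on over-commitment from an admin host is to compare the total size of the volumes (from df) against the aggregate's usable space (from df -A). The sketch below is a hypothetical helper, not a NetApp tool; it assumes the command output has already been captured to text and that sizes appear in the simple "NNNGB"/"NNNMB" form shown above.

# Hypothetical helper: flag an over-committed aggregate by comparing the
# total size of its volumes with the aggregate's usable space.
# Assumes simple "NNNGB"/"NNNMB" size tokens as shown in the output above.

import re

UNITS = {"KB": 1.0 / (1024 * 1024), "MB": 1.0 / 1024, "GB": 1.0, "TB": 1024.0}

def to_gb(token):
    """Convert a size token such as '705GB' or '133MB' to gigabytes."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(KB|MB|GB|TB)", token)
    if not m:
        raise ValueError("unrecognized size: %r" % token)
    return float(m.group(1)) * UNITS[m.group(2)]

def is_overcommitted(aggr_usable_gb, volume_sizes_gb):
    """True if the volumes' total size exceeds the aggregate's usable space."""
    return sum(volume_sizes_gb) > aggr_usable_gb

# Figures from the example above: aggr1 offers 705GB of usable space.
usable = to_gb("705GB")
volumes = [to_gb("100GB"), to_gb("500GB"), to_gb("200GB")]  # hypothetical volumes
print(is_overcommitted(usable, volumes))  # True: 800GB promised vs. 705GB usable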

To check the snap schedule on all aggregates: filer> snap sched -A
To check snap reserve space on aggregates: filer> snap reserve -A

What is WAFL_check?

WAFL_check is a diagnostic tool used to check WAFL file systems. It is normally run on WAFL-inconsistent volumes or aggregates in order to correct the inconsistencies. WAFL_check should only be run under the instruction of NetApp Technical Support, as it can alter file systems and may result in data loss if used incorrectly.

WAFL_check should never be run on striped aggregates, including striped member aggregates. Wafliron should be used instead.

If the root aggregate/volume is marked WAFL inconsistent, the filer will be unable to boot until the aggregate is checked or another aggregate is designated as root and a new root FlexVol is created on that aggregate. An aggregate or volume can be marked inconsistent for several reasons; one of the most common causes is a parity inconsistency due to FC-AL loop instability.

What is the difference between WAFL_check and wafliron?

WAFL_check and wafliron are both diagnostic tools used to check WAFL file systems. If WAFL_check is run, the administrator can choose whether or not to commit changes. Wafliron makes changes as it runs and reports these changes; the administrator has no choice over which changes wafliron will commit. Wafliron can be run while the filer is online and serving data from volumes/aggregates not being checked. WAFL_check, however, must be run from the Special Boot Menu, and the storage appliance will not serve data until the WAFL_check completes and the administrator chooses to commit changes. NetApp Technical Support should always be consulted before running either wafliron or WAFL_check.

Note: WAFL_check can take a long time to run, and the storage appliance will not serve ANY data during this time. It will remain unavailable from the network and only be accessible from a console connection.

What should be done prior to running WAFL_check?

Several steps need to be taken to prepare to run WAFL_check.

1. Identify and resolve the cause of the file system inconsistency. If the inconsistency was caused by FC-AL loop instability or errors, loop testing should be performed to isolate the problem. NetApp FC-AL diagnostics can be used for troubleshooting.

Note: If the cause of the inconsistency is not resolved prior to starting WAFL_check, then WAFL_check may be unable to correct the inconsistencies properly. Additionally, since the original problem still exists, the aggregate/volume could become inconsistent again.

2. Connect to a console port on the filer using a laptop or PC. If a laptop is used, ensure that the laptop is connected to AC power and that any power management/hibernation setting that might shut down the laptop after a period of time is disabled. It is critical that the laptop remain on for the entire duration of the WAFL_check.

For FAS3000/FAS6000 series filers, the Remote LAN Management (RLM) card can be used to connect to the filer's console. From a PC, open an SSH session to the RLM and enter system console. Additional details on using the RLM can be found in the Data ONTAP Systems Administration Guide.

WARNING: NetApp Bug 224882 tracks a problem in which WAFL_check may fail to prompt to commit changes if the RLM system console is detached around the time this commit message is printed. Be sure that the RLM console session is not disconnected while running WAFL_check. This bug is first fixed in Data ONTAP 7.2.4.

The filer requires the following settings on the terminal emulator:
- Bits per second: 9600
- Data bits: 8
- Parity: None
- Stop bits: 1
- Flow control: Hardware

3. Set up the laptop/PC to log all filer console output to a file.
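
Console logging is normally done with the terminal emulator's own capture feature (for example, PuTTY or HyperTerminal session logging). Purely as an illustration, the sketch below shows the same idea in Python using the third-party pyserial package with the serial parameters listed above; the port name and log file path are hypothetical.

# Illustration only: capture filer console output to a file over a serial line.
# Uses the third-party 'pyserial' package (pip install pyserial).
# Port name and log file path are hypothetical; 9600 8N1 with hardware flow
# control matches the terminal settings listed above.

import serial  # pyserial

ser = serial.Serial(
    port="/dev/ttyS0",            # e.g. "COM1" on Windows
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=True,                  # hardware flow control
    timeout=1,
)

with open("wafl_check_console.log", "ab") as log:
    while True:
        data = ser.read(4096)     # returns b"" on timeout
        if data:
            log.write(data)
            log.flush()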

How long does it take to run WAFL_check?

The time it takes to complete WAFL_check depends on many factors, and as such the time cannot be accurately calculated. Factors affecting the total run time include:
- Mean size of files in the file system being checked
- Number of inodes in use
- Layout of the data on the volume/aggregate
- Size of the volume/aggregate
- Number of file system inconsistencies, if any exist
- Storage appliance's CPU speed
- Storage appliance's memory
- Speed of the disk drives (i.e. 5400 RPM vs. 7200 RPM vs. 10000 RPM vs. 15000 RPM)
- Data ONTAP version
- Number of FlexVols contained in the aggregate being checked
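
In Data ONTAP 7.2.3 and later, WAFL_check prints "estimated time remaining" figures alongside its percent-done progress lines (see the example output below). The exact estimator used by Data ONTAP is not published; the sketch that follows is a simple linear extrapolation from elapsed time and percent complete, shown only to clarify how such an estimate can be read. The 5-minute elapsed figure is hypothetical.

# Simple linear extrapolation of remaining run time from percent complete.
# Illustration only; Data ONTAP's own estimator is not documented here.

def estimated_remaining(elapsed_min, percent_done):
    """Estimate minutes remaining, assuming a roughly constant scan rate."""
    if percent_done <= 0:
        raise ValueError("no progress yet; cannot extrapolate")
    return elapsed_min * (100.0 - percent_done) / percent_done

# If 3.56% of inodes were scanned in the first 5 minutes of phase 5.3b:
print(round(estimated_remaining(5, 3.56)))   # ~135 minutes (about 2 hrs 15 min)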

In general, WAFL_check will take several hours to several days depending on the above factors. WAFL_check runs through several phases during the file system check. In most cases, the scan of inode file normal files (phase 5.3b for a FlexVol or phase 3b for a traditional volume) will take the most time, since this comprises the bulk of the data to be checked.

In Data ONTAP 7.2.3 and later, WAFL_check includes time estimations during FlexVol check phases 5.3b (scanning volume inodes) and 5.4 (checking volume directories). For example:

Selection (1-5)? WAFL_check aggr1
...
Checking volume flexvol1 ...
Phase [5.3b]: Scan inode file normal files.
(inodes 3.56% done) 2 hrs 15 min estimated time remaining
(inodes 5.84% done) 2 hrs 41 min estimated time remaining
(inodes 8.13% done) 2 hrs 49 min estimated time remaining
...
Phase [5.4]: Scan directories.
(dirs 5.00% done) 0 hrs 50 min estimated time remaining
(dirs 10.00% done) 0 hrs 49 min estimated time remaining
(dirs 15.00% done) 0 hrs 58 min estimated time remaining
...

How to Run WAFL_check

WAFL_check is run from the Special Boot Menu, so console access to the storage appliance is required. Console output should be logged to a file when running WAFL_check so that the output can be reviewed by NetApp Technical Support. To access the Special Boot Menu, press Ctrl+C when prompted during boot.

Note: It is important to use the same release of Data ONTAP that the filer is running unless otherwise instructed by NetApp Technical Support.

Note: For storage appliances that have floppy disk drives, a set of Data ONTAP boot floppies is required. Boot the filer using the OS boot floppy diskettes.

The filer will boot to the following Special Boot Menu:

(1) Normal boot.

(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(5) Maintenance mode boot.
Selection (1-5)?

The WAFL_check option is hidden from this menu. Do NOT select one of the 1 - 5 options. Instead, type WAFL_check. This will start the file system check. It will ask for confirmation before checking each aggregate/volume. Check all aggregates/volumes unless instructed otherwise.

Note: To check a specific traditional volume or aggregate, use WAFL_check <volumename/aggregate_name>.

Note: When checking an aggregate, all associated FlexVols will also be checked. It is not possible to check a single FlexVol within an aggregate.

After starting the WAFL_check, the filer administrator should watch the console for the first 20 - 30 minutes to ensure WAFL_check is progressing and not logging excessive errors. If excessive errors are seen, NetApp Technical Support should be contacted immediately.

If WAFL_check finds problems on a volume, it will ask for confirmation before committing changes after checking each volume. You should review the changes it proposes to make.

WARNING: NetApp Technical Support should be consulted before committing any changes found by WAFL_check. Failure to do so may result in data loss.

In order to commit WAFL_check changes, you must enter "y" for yes. After WAFL_check is finished and the changes are committed or rejected, the storage appliance will prompt for a reboot. If this is a floppy-boot filer, make sure no floppies are in the floppy drive. Then press any key to reboot.

Can I stop WAFL_check?

WAFL_check can only be stopped by power-cycling the filer controller. Since WAFL_check does not make any changes until the filer administrator chooses to commit changes at the end of the check, it is safe to stop WAFL_check. However, any checking done up to the point where it was stopped will be lost. Therefore, it is best to let WAFL_check run to completion unless otherwise advised by NetApp Technical Support.

WAFL_check Phases

When running WAFL_check on a traditional volume, it will check the volume in several phases. When running WAFL_check on an aggregate, it will check both the aggregate and the contained flexible volumes.

Note: WAFL_check is a diagnostic tool, and its usage and output are subject to change.

The different phases are summarized in the following table:

Phase | Traditional volume | Aggregate | Flexible volume
1  | Verify fsinfo blocks | Verify fsinfo blocks | Verify fsinfo blocks
2  | Verify metadata indirect blocks | Verify metadata indirect blocks | Verify metadata indirect blocks
3  | Scan inode file | Scan inode file | Scan inode file
3a | Checks WAFL special metadata files | Checks WAFL special metadata files | Checks WAFL special metadata files
3b | Checks normal (user data) files | Checks normal (user data) files | Checks normal (user data) files
3c | Checks files that had been marked for deletion | Checks files that had been marked for deletion | Checks files that had been marked for deletion
4  | Scan directories | Scan directories | Scan directories
5  | N/A | Scans FlexVols: checks volume inodes and verifies the aggregate's access to the FlexVol, then verifies contents within the FlexVol | Scans FlexVols
6  | Clean up | Clean up | Clean up
6a | Finds lost streams (for example, CIFS metadata/ACLs) | Finds lost streams (for example, CIFS metadata/ACLs) | Finds lost streams (for example, CIFS metadata/ACLs)
6b | Finds lost files and moves them to lost+found | Finds lost files and moves them to lost+found | Finds lost files and moves them to lost+found
6c | Finds lost blocks and moves them to lost+found | Finds lost blocks and moves them to lost+found | Finds lost blocks and moves them to lost+found
6d | Checks blocks used | Checks blocks used | Checks blocks used
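
Because the console session is being logged to a file (step 3 above), the per-phase timings from a completed run can be summarized straight from that log. A minimal sketch follows, assuming a plain-text log containing "time in seconds" lines like the example outputs below; the log file name is hypothetical.

# Minimal sketch: total the per-phase timings recorded in a saved console log.
# Assumes lines of the form "Phase 3b time in seconds: 2" or
# "Phase [5.3b] time in seconds: 3964", as in the examples below.
# The log file name is hypothetical.

import re

PHASE_TIME = re.compile(r"Phase \[?([\w.]+)\]? time in seconds: (\d+)")

def phase_times(log_path):
    """Return {phase: seconds} for every phase timing found in the log."""
    times = {}
    with open(log_path, errors="replace") as log:
        for line in log:
            m = PHASE_TIME.search(line)
            if m:
                times[m.group(1)] = int(m.group(2))
    return times

for phase, secs in sorted(phase_times("wafl_check_console.log").items()):
    print("Phase %-6s %6d s" % (phase, secs))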

The following are examples of the output seen on the console when running WAFL_check.

WAFL_check on a traditional volume:

Selection (1-5)? WAFL_check vol1
Checking vol1...
WAFL_check NetApp Release 7.2.3 Starting at Tue Oct 23 20:30:06 GMT 2007
Phase 1: Verify fsinfo blocks.
Phase 2: Verify metadata indirect blocks.
Phase 3: Scan inode file.
Phase 3a: Scan inode file special files.
Phase 3a time in seconds: 0
Phase 3b: Scan inode file normal files.
Phase 3b time in seconds: 2
Phase 3 time in seconds: 2
Phase 4: Scan directories.
Phase 4 time in seconds: 2
Phase 6: Clean up.
Phase 6a: Find lost nt streams.
Phase 6a time in seconds: 0
Phase 6b: Find lost files.
Phase 6b time in seconds: 7
Phase 6c: Find lost blocks.
Phase 6c time in seconds: 0
Phase 6d: Check blocks used.

Phase 6d time in seconds: 0
Phase 6 time in seconds: 7
Clearing inconsistency flag on volume vol1.
WAFL_check total time in seconds: 11
Commit changes for volume vol1 to disk? y
Inconsistent vol vol1 marked clean.
WAFL_check output will be saved to file /vol/vol1/etc/crash/WAFL_check
Press any key to reboot system.

WAFL_check on an aggregate:

Selection (1-5)? WAFL_check aggr0
Checking aggr0...
WAFL_check NetApp Release 7.2.3 Starting at Tue Oct 23 18:52:17 GMT 2007
Phase 1: Verify fsinfo blocks.
Phase 2: Verify metadata indirect blocks.
Phase 3: Scan inode file.
Phase 3a: Scan inode file special files.
Phase 3a time in seconds: 1
Phase 3b: Scan inode file normal files.
(inodes 99.74% done)
(inodes 100.00% done)
Phase 3b time in seconds: 1762
Phase 3 time in seconds: 1763
Phase 4: Scan directories.
Phase 4 time in seconds: 0
Phase 5: Check volumes.
Phase 5a: Check volume inodes
Phase 5a time in seconds: 0
Phase 5b: Check volume contents
Checking volume flexvol1...
Phase [5.1]: Verify fsinfo blocks.
Phase [5.2]: Verify metadata indirect blocks.
Phase [5.3]: Scan inode file.
Phase [5.3a]: Scan inode file special files.
Phase [5.3a] time in seconds: 27
Phase [5.3b]: Scan inode file normal files.
(inodes 100.00% done) 0 hrs 0 min estimated time remaining
Phase [5.3b] time in seconds: 3964
Phase [5.3] time in seconds: 3992
Phase [5.4]: Scan directories.
Phase [5.4] time in seconds: 6
Phase [5.6]: Clean up.
Phase [5.6a]: Find lost nt streams.
Phase [5.6a] time in seconds: 0
Phase [5.6b]: Find lost files.
Phase [5.6b] time in seconds: 16
Phase [5.6c]: Find lost blocks.
Phase [5.6c] time in seconds: 0
Phase [5.6d]: Check blocks used.

Phase [5.6d] time in seconds: 19
Phase [5.6] time in seconds: 35
Volume flexvol1 WAFL_check time in seconds: 4033
(No filesystem state changed.)
Phase 5b time in seconds: 4098
Phase 6: Clean up.
Phase 6a: Find lost nt streams.
Phase 6a time in seconds: 0
Phase 6b: Find lost files.
Phase 6b time in seconds: 5
Phase 6c: Find lost blocks.
Phase 6c time in seconds: 0
Phase 6d: Check blocks used.
Phase 6d time in seconds: 1
Phase 6 time in seconds: 6
Clearing inconsistency flag on aggregate aggr0.
WAFL_check total time in seconds: 5867
Commit changes for aggregate aggr0 to disk? yes
Inconsistent aggr aggr0 marked clean.
WAFL_check output will be saved to file /etc/crash/aggregates/aggr0/WAFL_check on the root volume

Where to find the changes made by WAFL_check

The results of a WAFL_check are stored in /etc/crash/WAFL_check on the storage appliance's root volume (pre-7G) or in the /etc/crash folder within each volume to which changes were made (7G and later). After an AutoSupport is generated due to the reboot following WAFL_check, these files are rotated to WAFL_check.0, WAFL_check.1, and so on.

Can WAFL_check be run on a SnapMirror/SnapVault destination?

It is possible to run WAFL_check on a SnapMirror/SnapVault destination, but this will break the SnapMirror/SnapVault relationships if changes are needed. Depending on the changes made by WAFL_check, it may be possible to resync the SnapMirror/SnapVault relationships following the completion of the WAFL_check. However, resync is not guaranteed to succeed; in some cases, the relationships may need to be reinitialized.

Note: After WAFL_check is run on a destination volume for volume SnapMirror, a "block type initialization" scan will automatically start on the traditional/flexible volume that was checked. Until this scanner completes, volume SnapMirror relationships cannot be resynced, updated, or initialized. This limitation is tracked as NetApp Bug 142586; please review the Bugs Online report to verify whether this bug is fixed in your version of Data ONTAP. The "block type initialization" scan may take several days to complete depending on the size of the FlexVol and the load on the storage appliance. To check the status of the scan, use the wafl scan status command in priv set advanced mode:
filer> priv set advanced
filer*> wafl scan status
Volume sm_dest:
 Scan id               Type of scan    progress
       1  block type initialization    snap 0, inode 58059 of 30454809
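
The progress field reports which inode the scan has reached out of the total. A rough percent-complete figure can be derived from those two numbers; a minimal sketch using the values shown above:

# Rough percent-complete for the "block type initialization" scan,
# using the "inode 58059 of 30454809" progress figures shown above.

def scan_percent(current_inode, total_inodes):
    return 100.0 * current_inode / total_inodes

print("%.2f%% complete" % scan_percent(58059, 30454809))  # 0.19% complete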

Can WAFL_check be run on a SnapLock aggregate/volume?

WAFL_check can be run on both SnapLock Compliant and SnapLock Enterprise volumes and aggregates. However, SnapLock Compliant volumes have some restrictions that may prevent wafliron from functioning properly. NetApp Technical Support should always be consulted before starting wafliron on a SnapLock Compliant aggregate/volume.

Can WAFL_check be used on a foreign, inconsistent aggregate?

An aggregate or traditional volume will be marked foreign and taken offline if it is moved to a filer other than the one that created it. This could occur with an inconsistent aggregate if it is moved to another filer in order to run WAFL_check while minimizing the downtime on the original filer. The aggregate may or may not be in a degraded state.

WARNING: Do NOT attempt to online the aggregate. If an attempt is made to bring the aggregate online from Maintenance Mode, the following error will be generated:

Volume (aggrname) is inconsistent and has a degraded raidgroup with dirty parity. This volume can not be brought online prior to doing the recommended steps for recovery, as it raises the risk of further system panic. If this is a replica volume, the recommended steps for recovery are to run WAFL_check at source and then execute "snapmirror initialize" on this volume, otherwise run WAFL_check on the volume.

In order to run WAFL_check on a foreign, inconsistent aggregate, the aggregate must first be restricted so that the system marks it as a native aggregate. To do this:

1. Boot the filer to Maintenance Mode.
2. Run aggr restrict <aggregate_name>.
3. Exit Maintenance Mode.
4. Reboot the filer to the Special Boot Menu.
5. Start WAFL_check on the aggregate.

The aggr restrict command can also be used on traditional volumes. If the above procedure is not followed, the following error will be generated:

WAFL_check: volume/aggregate (aggrname) is foreign and cannot be checked.

Can WAFL_check be used to delete Snapshots?

WAFL_check can delete Snapshots using the -snapshots flag. It should only be used under the direction of NetApp Technical Support.

Note: Once the Snapshots are chosen for deletion, WAFL_check will automatically start on the aggregate and associated FlexVols.

To do this, boot to the Special Boot Menu and enter WAFL_check -snapshots <aggregate_name>. WAFL_check will then prompt whether each Snapshot on the aggregate and associated FlexVols should be deleted. Following is an example output:

(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize owned disks (28 disks are owned by this filer).
(4a) Same as option 4, but create a flexible root volume.

(5) Maintenance mode boot.

Selection (1-5)? WAFL_check -snapshots aggr1
Checking aggr1...
WAFL_check NetApp Release 7.2.2 Starting at Tue Jun 5 23:22:40 GMT 2007
Snapshot 19 of aggregate aggr1 with mod time Mon Jun 4 23:00:02 GMT 2007
Delete? yes
Deleting
Snapshot 20 of aggregate aggr1 with mod time Tue Jun 5 04:00:02 GMT 2007
Delete? no
Phase 1: Verify fsinfo blocks.
Phase 2: Verify metadata indirect blocks.
Phase 3: Scan inode file.
Phase 3a: Scan inode file special files.
Phase 3a time in seconds: 0
Phase 3b: Scan inode file normal files.
Phase 3b time in seconds: 0
Phase 3 time in seconds: 1
Phase 4: Scan directories.
Snapdir: directory references unused snapshot: hourly.3. Unlinking.
Inode 67, type 2: Setting link count to 6 (was 7).
Phase 4 time in seconds: 0
Phase 5: Check volumes.
Phase 5a: Check volume inodes
Phase 5a time in seconds: 0
Phase 5b: Check volume contents
Snapshot 18 of volume flexvol1 in aggregate aggr1 with mod time Sat Jun 04:00:01 GMT 2007
Delete? yes
Deleting
Snapshot 22 of volume flexvol1 in aggregate aggr1 with mod time Mon Jun 16:00:02 GMT 2007
Delete? no
Checking volume flexvol1...
Phase [5.1]: Verify fsinfo blocks.
Phase [5.2]: Verify metadata indirect blocks.
Phase [5.3]: Scan inode file.
Phase [5.3a]: Scan inode file special files.
Phase [5.3a] time in seconds: 0
Phase [5.3b]: Scan inode file normal files.
Phase [5.3b] time in seconds: 1
Phase [5.3] time in seconds: 1
Phase [5.4]: Scan directories.
Snapdir: directory references unused snapshot: nightly.1. Unlinking.
Inode 67, type 2: Setting link count to 9 (was 10).
Phase [5.4] time in seconds: 0
Phase [5.6]: Clean up.
Phase [5.6a]: Find lost nt streams.
Phase [5.6a] time in seconds: 0
Phase [5.6b]: Find lost files.
Phase [5.6b] time in seconds: 0
Phase [5.6c]: Find lost blocks.

Phase [5.6c] time in seconds: 0
Phase [5.6d]: Check blocks used.
Phase [5.6d] time in seconds: 0
Phase [5.6] time in seconds: 0
Volume flexvol1 WAFL_check time in seconds: 2
Directory link counts fixed: 1
Invalid snapshot directory entries cleared: 1
WAFL_check output will be saved to file /vol/flexvol1/etc/crash/WAFL_check
Phase 5b time in seconds: 9
Phase 6: Clean up.
Phase 6a: Find lost nt streams.
Phase 6a time in seconds: 0
Phase 6b: Find lost files.
Phase 6b time in seconds: 5
Phase 6c: Find lost blocks.
Phase 6c time in seconds: 0
Phase 6d: Check blocks used.
Phase 6d time in seconds: 1
Phase 6 time in seconds: 6
WAFL_check total time in seconds: 17
Directory link counts fixed: 1
Invalid snapshot directory entries cleared: 1
Commit changes for aggregate aggr1 to disk? yes
WAFL_check output will be saved to file /etc/crash/aggregates/aggr1/WAFL_check on the root volume
Press any key to reboot system.

Can WAFL_check be used on a 64-bit aggregate?

Data ONTAP 8.0 7-Mode includes a new type of aggregate called a 64-bit aggregate. WAFL_check cannot be used to perform file system checks on 64-bit aggregates. Please contact NetApp Support for assistance.

Can the inconsistent aggregate be moved to another filer before starting the WAFL_check?

It is possible to move the inconsistent aggregate to another storage controller in order to run WAFL_check. This action is usually taken if the original filer contains multiple aggregates and only one non-root aggregate is inconsistent. To prevent downtime on the other aggregates, the inconsistent aggregate can be moved to another storage controller.

Before moving the inconsistent aggregate to a new storage controller, the following conditions must be met on the new controller:

1. It is running the same release of Data ONTAP.
2. It can accept the additional storage of the inconsistent aggregate without exceeding its maximum capacity. To determine the maximum capacity, refer to the NetApp System Configuration Guide.
3. The disks, shelves, and shelf modules are supported on the new storage controller. To verify compatibility, refer to the NetApp System Configuration Guide.
4. Downtime on the new storage controller is acceptable.

Why do I see this error when I attempt to create an aggregate?

v6030-hds01> aggr create hds01_aggr1 -r 10 -d brcd-hds02:4.126L1
aggr create: Couldn't create aggregate: V-Series supports only raid0 raidtype.

Solution

You will need to assign the disks to the filer prior to creating the aggregate. Use the disk assign command to assign the LUNs to the filer:

filer01> disk assign san_switch01:4.126L121
Wed Feb 14 14:01:40 GMT [filer01: diskown.changingOwner:info]: changing ownership for disk san_switch01:4.126L121 (S/N 5005A620001) from unowned (ID -1) to v6030-hds01 (ID 118044401)
filer01> Wed Feb 14 14:01:40 GMT [filer01: raid.assim.lun.nolabels:info]: Disk san_switch01:4.126L121 Shelf - Bay - [HITACHI OPEN-V 0000] S/N [5005A620001] has uninitialized labels and is being treated as a hot spare.

Then create the aggregate with the aggr create command:

filer01> aggr create hds01_aggr3 -r 10 -d san_switch01:4.126L121
Creation of an aggregate with 1 disks has been initiated. The disks need to be zeroed before addition to the aggregate. The process has been initiated and you will be notified via the system log as disks are added.
filer01> Wed Feb 14 14:02:00 GMT [filer01: raid.vol.disk.add.done:notice]: Addition of Disk /hds01_aggr3/plex0/rg0/san_switch01:4.126L121 Shelf - Bay - [HITACHI OPEN-V 0000] S/N [5005A620001] to aggregate hds01_aggr3 has completed successfully
Wed Feb 14 14:02:00 GMT [filer01: wafl.vol.add:notice]: Aggregate hds01_aggr3 has been added to the system.

To check for a broken (failed) disk:

filer> vol status -f
filer> sysconfig -d
filer> rdfile /etc/messages

Then check the messages file for related log entries.
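
As a convenience, a copy of /etc/messages can be filtered for disk-related events from an admin host. A minimal sketch, assuming the file has been copied off the filer (for example via the root volume export) and searching for a few generic keywords rather than any specific EMS message format:

# Minimal sketch: filter a local copy of /etc/messages for disk-related lines.
# The keywords are generic guesses, not an exhaustive list of EMS events.

KEYWORDS = ("disk", "raid", "failed", "error")

def disk_related_lines(path):
    with open(path, errors="replace") as messages:
        for line in messages:
            if any(word in line.lower() for word in KEYWORDS):
                yield line.rstrip()

for line in disk_related_lines("messages"):   # local copy of /etc/messages
    print(line)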
