How To Use ‘zpool split’ to Split rpool in Solaris 11 (SPARC)
A mirrored ZFS storage pool can be quickly cloned as a backup pool by using the zpool split command. You can use this feature to split a mirrored root pool, but the pool that is split off is not bootable until you perform some additional steps.
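For orientation, the general form of the command is shown below; the pool names ‘tank’ and ‘tankcopy’ are illustrative placeholders, not part of this procedure:

# zpool split tank tankcopy

This detaches one side of each mirror in ‘tank’ and builds a new, independent pool named ‘tankcopy’ from the detached devices.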
Caveats and Assumptions
1. The new/split pool must be used on the same host or on the same server family type and architecture. For example, if the source rpool is imported and split on a T5120, the new pool must also be used on the same server type (T5xx0).
Splitting the rpool
1. Verify the current status of the rpool. At minimum it should be a mirror. Depending on the number of available devices or the desired end goal, an additional (third) mirror may be added and used to create the new rpool.
Two-way mirror example:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 22.3G in 0h7m with 0 errors on Thu Mar 13 05:55:12 2014
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c3t0d0s0  ONLINE       0     0     0
            c3t1d0s0  ONLINE       0     0     0
This post uses the two-way mirror example, though the procedure is no different for a three-way mirror, with the exception of the disk used to create the new pool.
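If a third mirror is wanted for the split, it can be attached first. A minimal sketch, assuming a hypothetical spare device c3t2d0s0 that is labeled and partitioned like the existing rpool disks:

# zpool attach rpool c3t0d0s0 c3t2d0s0
# zpool status rpool

Wait for the resilver reported by ‘zpool status’ to complete, then name the third device (c3t2d0s0 in this sketch) in the ‘zpool split’ command.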
2. According to the zpool man page, the last device in the pool is used to create the new pool unless a device is specified. It is best practice to specify the device regardless, to ensure the correct device is chosen.
# zpool split rpool newrpool c3t1d0s0
3. By default, any new pool created using ‘zpool split’ is automatically exported once the split completes, so the new pool won’t appear in ‘zpool list’.
# zpool list
NAME    SIZE  ALLOC   FREE  CAP   DEDUP  HEALTH  ALTROOT
rpool   136G  22.3G   114G  16%   1.00x  ONLINE  -
4. Use ‘zpool import’ to confirm the new pool was created with the correct device:
# zpool import
  pool: newrpool
    id: 712596357404561922
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        newrpool    ONLINE
          c3t1d0s0  ONLINE
5. Temporarily import the pool ‘newrpool’ under an alternate root so modifications to critical files can be made before the pool is used. The following imports the pool with the temporary alternate root /newrpool but does not mount any of the datasets.
# mkdir /newrpool
# zpool import -N -R /newrpool newrpool
# zfs list -r newrpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
newrpool                   23.0G   111G  73.5K  /newrpool/newrpool
newrpool/ROOT              2.44G   111G    31K  legacy
newrpool/ROOT/solaris      2.44G   111G  1.92G  /newrpool
newrpool/ROOT/solaris/var   475M   111G   449M  /newrpool/var
newrpool/VARSHARE          59.5K   111G  59.5K  /newrpool/var/share
newrpool/dump              16.4G   111G  15.9G  -
newrpool/export             100K   111G    35K  /newrpool/export
newrpool/export/home       65.5K   111G    32K  /newrpool/export/home
newrpool/export/home/jack  33.5K   111G  33.5K  /newrpool/export/home/jack
newrpool/swap              4.13G   111G  4.00G  -
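As an aside, if a pool with the same name were already imported on the host, the same import could be done using the numeric identifier reported by ‘zpool import’ in step 4; a sketch using the id from this example:

# zpool import -N -R /newrpool 712596357404561922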
6. Mount the top-level filesystem where the ‘menu.lst’ is located.
# zfs mount -vO -o mountpoint=/newrpool newrpool
7. Edit the menu.lst file to change the entry to point at the new pool name.
# cd /newrpool/boot
# ls
menu.lst
# cat menu.lst
title Oracle Solaris 11.1 SPARC
bootfs rpool/ROOT/solaris
# cp menu.lst menu.lst.orig
# vi menu.lst
title Oracle Solaris 11.1 SPARC
bootfs newrpool/ROOT/solaris
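For a non-interactive edit, a sed one-liner can make the same change; a sketch assuming the two-line menu.lst shown above:

# sed 's/^bootfs rpool/bootfs newrpool/' menu.lst.orig > menu.lst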
8. Mount the root filesystem.
# zfs mount -vO -o mountpoint=/newrpool newrpool/ROOT/solaris
# cd /newrpool
/newrpool# ls
bin        home          net       root     system
boot       home_hls-mfg  newrpool  rpool    tmp
dev        import        nfs4      sbin     usr
devices    kernel        opt       share    var
etc        lib           platform  shared   workspace
export     media         proc      support  ws
hls-mfg    mnt           re        sw
9. Remove the zpool.cache file. This file maintains an on-disk view of the imported ZFS pools and is read at boot time to populate the in-core ZFS storage pool configurations without delaying boot by scanning and reading every disk presented to the host. Removing it ensures the system does scan all the disks and re-creates the file with the new pool’s configuration when booted from ‘newrpool’.
# rm /newrpool/etc/zfs/zpool.cache
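After the first boot from the new pool (step 14 onward), the re-created file can be confirmed; a quick check, not part of the split itself:

# ls -l /etc/zfs/zpool.cache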
10. Edit the vfstab and change the pool name for the swap device. Change all entries referencing ‘rpool’ to the new pool name, ‘newrpool’:
# grep rpool /newrpool/etc/vfstab
/dev/zvol/dsk/rpool/swap        -       -       swap    -       no      -
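The edit can be made in vi, or non-interactively; a sketch assuming the swap line shown above is the only entry referencing the old pool:

# cp /newrpool/etc/vfstab /newrpool/etc/vfstab.orig
# sed 's|/dev/zvol/dsk/rpool/|/dev/zvol/dsk/newrpool/|' /newrpool/etc/vfstab.orig > /newrpool/etc/vfstab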
11. Update the boot archive for the newrpool.
# bootadm update-archive -v -R /newrpool
12. Unmount the filesystems for the new pool and export it.
# cd /
# zfs unmount /newrpool
# umount /newrpool
# zpool export newrpool
13. Shut down the system to the OK prompt.
# shutdown -y -i0 -g0
14. Boot the system from the disk used to create the new pool, ‘newrpool’. Use the full device path if the disk is external to the host, or the device alias or full device path if it is internal. Boot the host with a reconfiguration boot to ensure all device entries are updated.
{0} ok boot disk1 -rs
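The alias disk1 is an assumption that maps to c3t1d0 on this machine; if unsure, the standard OpenBoot commands below list the available aliases and disk device paths:

{0} ok devalias
{0} ok show-disks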
15. Watch the boot carefully for any errors or warnings. If the system boots and presents the login prompt with no errors, CONTINUE. If the system drops to maintenance mode and/or presents any SMF errors, STOP and resolve.
16. Verify the filesystems point to the correct new pool name, ‘newrpool’, and perform initial validation steps.
# zpool list
NAME       SIZE  ALLOC   FREE  CAP   DEDUP  HEALTH  ALTROOT
newrpool   136G  22.4G   114G  16%   1.00x  ONLINE  -

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
newrpool                   23.0G   111G    75K  /newrpool
newrpool/ROOT              2.45G   111G    31K  legacy
newrpool/ROOT/solaris      2.45G   111G  1.93G  /
newrpool/ROOT/solaris/var   476M   111G   450M  /var
newrpool/VARSHARE            61K   111G    61K  /var/share
newrpool/dump              16.4G   111G  15.9G  -
newrpool/export             100K   111G    35K  /export
newrpool/export/home       65.5K   111G    32K  /export/home
newrpool/export/home/jack  33.5K   111G  33.5K  /export/home/jack
newrpool/swap              4.13G   111G  4.00G  -

# swap -l
swapfile                      dev    swaplo   blocks     free
/dev/zvol/dsk/newrpool/swap   285,1      16  8388592  8388592

# svcs -xv
-- Resolve any SMF services that have faults
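Two further sanity checks, both standard Solaris 11 commands, round out the validation:

# zpool status newrpool
# beadm list

‘zpool status’ confirms the pool is built on the expected device, and ‘beadm list’ confirms the active boot environment now lives on newrpool.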