Solaris 11: Increasing the size of a vdisk in an LDom (with a ZFS volume as the backend device)
The virtual disk server in LDoms can export a number of different backend devices as virtual disks to a guest domain: a physical disk, a disk slice, a volume, or a file can each be exported as a block device. The following procedure is based on a setup that uses ZFS volumes (zvols) as the underlying backend devices on the primary domain; the basic steps work for other kinds of backend devices as well.
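For reference, a zvol-backed vdisk of the kind used throughout this post is typically set up as follows. This is a minimal sketch; the pool/volume name ldom01/vol01, the virtual disk service primary-vds0, and the guest domain name ldom01 are example names chosen to match the rest of this post:

primary-domain # zfs create -V 50g ldom01/vol01
primary-domain # ldm add-vdsdev /dev/zvol/dsk/ldom01/vol01 vol01@primary-vds0
primary-domain # ldm add-vdisk vdisk1 vol01@primary-vds0 ldom01
(all names above are examples; substitute your own)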
Assumptions:
– The guest domain is running Solaris 11
– The underlying backend device for the vdisk on the primary domain is a zvol
– The filesystem on the vdisk in the guest domain is ZFS
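To verify these assumptions on a given system, the release file and virtinfo(1M) report the guest's OS version and domain role (shown as a quick check; the output varies with your setup):

ldom01 # cat /etc/release
ldom01 # virtinfo -a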
Let us consider two cases; a quick way to tell which one applies is shown after the list:
1. zpool based on an EFI-labeled disk in the guest LDom
2. zpool based on an SMI-labeled disk in the guest LDom
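If you are not sure which case applies, the vdev name reported by zpool status is a quick hint: a whole-disk vdev such as c2d3 normally carries an EFI label, while a slice vdev such as c2d3s0 indicates an SMI label (the device names here are examples):

ldom01 # zpool status datapool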
Expanding the zvol on the Primary Domain
Get the current volsize of the underlying zvol on the primary domain, then increase it:
primary-domain # zfs get volsize ldom01/vol01
NAME          PROPERTY  VALUE  SOURCE
ldom01/vol01  volsize   50G    local
primary-domain # zfs set volsize=70g ldom01/vol01
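Before moving to the guest, it is worth confirming that the new size took effect; assuming the set succeeded, volsize should now report 70G:

primary-domain # zfs get volsize ldom01/vol01
NAME          PROPERTY  VALUE  SOURCE
ldom01/vol01  volsize   70G    local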
1. zpool based on an EFI-labeled disk in the guest LDom
On the Guest Domain
We use the ZFS autoexpand feature to make the EFI-labeled disk in the guest domain pick up the new size. First, check whether the autoexpand property is set on the pool in question (datapool):
ldom01 # zpool get autoexpand datapool
NAME      PROPERTY    VALUE  SOURCE
datapool  autoexpand  off    local
Check the size of datapool and set the autoexpand flag on:
ldom01 # df -kl /datapool
Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
datapool     51351552     31    51351465   1%        /datapool
ldom01 # zpool set autoexpand=on datapool
Check the size of datapool again:
ldom01 # df -kl /datapool
Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
datapool     71995392     31    71995304   1%        /datapool

ldom01 # zpool list datapool
NAME      SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
datapool  69.8G  88K    69.7G  0%   1.00x  ONLINE  -
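If you would rather not leave autoexpand enabled permanently, zpool online -e performs the same expansion once for a single device; c2d3 is an example device name, substitute the vdev shown by zpool status:

ldom01 # zpool online -e datapool c2d3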
2. zpool based on an SMI-labeled disk in the guest LDom
On the Guest Domain
We use the expand subcommand of the format(1M) partition menu to grow the disk label to the new size of the vdisk. Please note that the expand subcommand is available in Solaris 11 only.
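It can be helpful to record the current geometry and partition map before expanding, so the change is easy to verify afterwards; c2d3 is the example device used in the session below:

ldom01 # prtvtoc /dev/rdsk/c2d3s2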
# format -e /dev/rdsk/c2d3s0
format> partition
PARTITION MENU: 
        0      - change `0' partition 
        1      - change `1' partition 
        2      - change `2' partition 
        3      - change `3' partition 
        4      - change `4' partition 
        5      - change `5' partition 
        6      - change `6' partition 
        7      - change `7' partition 
        expand - expand label to use the maximum allowed space 
        select - select a predefined table 
        modify - modify a predefined partition table 
        name   - name the current table 
        print  - display the current table 
        label  - write partition map and label to the disk 
        ![cmd] - execute [cmd], then return 
        quit 
partition> expand 
Expansion of label cannot be undone; continue (y/n) ? y 
The expanded capacity was added to the disk label and "s2". 
Disk label was written to disk. 
partition> print
Current partition table (original): 
Total disk cylinders available: 3980 + 2 (reserved cylinders) 
Part      Tag    Flag     Cylinders        Size            Blocks 
  0       root    wm       0 - 1420       49.91GB    (2842/0/0) 209534976 
  1 unassigned    wu       0               0         (0/0/0)            0 
  2     backup    wu       0 - 2968      69.93GB    (3980/0/0) 293437440 
  3 unassigned    wm       0               0         (0/0/0)            0 
  4 unassigned    wm       0               0         (0/0/0)            0 
  5 unassigned    wm       0               0         (0/0/0)            0 
  6 unassigned    wm       0               0         (0/0/0)            0 
  7 unassigned    wm       0               0         (0/0/0)            0 
partition> 0 
Part      Tag    Flag     Cylinders        Size            Blocks 
  0       root    wm       0 - 1420       49.91GB    (2842/0/0) 209534976 
Enter partition id tag[root]: 
Enter partition permission flags[wm]: 
Enter new starting cyl[0]: 
Enter partition size[209534976b, 2842c, 2841e, 102312.00mb, 99.91gb]: 2968c
partition> print
Current partition table (unnamed): 
Total disk cylinders available: 2968 + 2 (reserved cylinders) 
Part      Tag    Flag     Cylinders        Size            Blocks 
  0       root    wm       0 - 2968      69.93GB    (3980/0/0) 293437440 
  1 unassigned    wu       0               0         (0/0/0)            0 
  2     backup    wu       0 - 2968      69.93GB    (3980/0/0) 293437440 
  3 unassigned    wm       0               0         (0/0/0)            0 
  4 unassigned    wm       0               0         (0/0/0)            0 
  5 unassigned    wm       0               0         (0/0/0)            0 
  6 unassigned    wm       0               0         (0/0/0)            0 
  7 unassigned    wm       0               0         (0/0/0)            0 
partition> label
[0] SMI Label 
[1] EFI Label 
Specify Label type[0]: 0 
Ready to label disk, continue? y 
partition> q
Now use the autoexpand feature to pick up the new space in the zpool. Check the size of datapool and set the autoexpand flag on:
ldom01 # df -kl /datapool
Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
datapool     51351552     31    51351465   1%        /datapool
ldom01 # zpool set autoexpand=on datapool
Check the size of datapool again:
ldom01 # df -kl /datapool
Filesystem   1024-blocks  Used  Available  Capacity  Mounted on
datapool     71995392     31    71995304   1%        /datapool

ldom01 # zpool list datapool
NAME      SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
datapool  69.8G  88K    69.7G  0%   1.00x  ONLINE  -
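Once the pool reports the new size, the autoexpand property can be turned back off if you do not want future device growth applied automatically:

ldom01 # zpool set autoexpand=off datapool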
 
 