Jorgen Lundman wrote:
Fairly new to XEN. I am currently exploring ways to grow the capacity of disks
given to a domU.

Currently I have dom01 running osol b119, and dom02 running CentOS. dom01 is set up
with a single 20G vdisk (a vmdk on NFS) with ZFS inside.

Growing the size of the vdisk is quite easy:

nfs# zfs set quota=30G zpool1/dom01
dom0# vi /xen.dom01/disk01/vdisk.xml

and increase the max-size and sector entries. I was then hoping that ZFS would just see the new space and use it:

dom01# zpool set autoexpand=on rpool
dom01# reboot
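
For reference, the sector arithmetic, assuming 512-byte sectors (field
names from memory, so check your own vdisk.xml):

  20G = 20 * 1024^3 / 512 = 41943040 sectors
  30G = 30 * 1024^3 / 512 = 62914560 sectors

i.e. the sector entry goes from 41943040 to 62914560, with max-size
bumped to match.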

But alas:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  19.9G  9.01G  10.9G    45%  ONLINE  -


It appears the .vmdk file also embeds the disk geometry. Amusingly though,
format/fdisk says I am only using 57%.

Yeah, we don't support growing a vdisk today..




Perhaps the reason is that, as it is a root pool, it has to have an SMI label, and can
only grow if the label is updated. Not sure I can do that "live".


For my second attempt I created /xen/dom02/disk02, a 10G vmdk. I used virsh
attach-disk to attach it, and very nicely it just shows up in format. The idea
was that the user could just attach it as a stripe.

Alas, root pools can not be on 2 disks:

# zpool add -f rpool c0t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs

So that will not work either. Ironically, had I gone with UFS, I probably could
just have attached the 2nd disk as a stripe with disksuite. Sigh. I suspect Linux
LVM would also be happy.
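
For the record, the disksuite route would look something like this (a
sketch, not tested in a domU; it needs state database replicas on a
spare slice first, and a reboot after metaroot):

dom02# metadb -a -f -c 2 c0t0d0s7      # create state database replicas
dom02# metainit -f d10 1 1 c0t0d0s0    # one-way concat over the UFS root
dom02# metaroot d10                    # update vfstab/system; reboot here
dom02# metattach d10 c0t1d0s0          # concat the new 10G disk onto root
dom02# growfs -M / /dev/md/rdsk/d10    # grow the mounted UFS live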

What would users of XEN recommend?

You can mirror the rpool and swap the disk.. However there is a bug in
b111a, which I think is 6852962 and is fixed in b119. So you will need
at least b119 for this to work.. I verified it on b121..
    http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6852962
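
You can check which build you're on with uname (the output here is
just an example):

# uname -v
snv_121

Anything at b119 or later should have the fix.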

Here's how to swap out the rpool disk...


####
#### check your current setup.
####
root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors
root@unknown:~# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      5.23G  14.3G  81.5K  /rpool
rpool/ROOT                 4.23G  14.3G    19K  legacy
rpool/ROOT/opensolaris     4.62M  14.3G  2.87G  /
rpool/ROOT/small-be        1.72M  14.3G   365M  /
rpool/ROOT/small-be-clone  3.89M  14.3G   481M  /
rpool/ROOT/snv111b         3.02G  14.3G  2.87G  /
rpool/ROOT/snv121          1.20G  14.3G   825M  /
rpool/dump                  512M  14.3G   512M  -
rpool/export               63.5K  14.3G    21K  /export
rpool/export/home          42.5K  14.3G    21K  /export/home
rpool/export/home/mrj      21.5K  14.3G  21.5K  /export/home/mrj
rpool/swap                  512M  14.7G   137M  -
root@unknown:~#



####
#### create a new vdisk and attach it to the guest
####
: core2[1]#; vdiskadm create -s 30g /vdisks/opensolaris-bigger
: core2[1]#; virsh attach-disk opensolaris /vdisks/opensolaris-bigger xvdb --driver tap --subdriver vdisk


####
#### fdisk and format the disk so all the space is in slice 0
####
root@unknown:~# echo "" | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <DEFAULT cyl 2609 alt 0 hd 255 sec 63>
          /xpvd/xdf@51712
       1. c7t1d0 <DEFAULT cyl 3916 alt 0 hd 255 sec 63>
          /xpvd/xdf@51728
Specify disk (enter its number): Specify disk (enter its number):
root@unknown:~#
root@unknown:~# /usr/sbin/fdisk -n -B /dev/rdsk/c7t1d0p0

root@unknown:~# format /dev/rdsk/c7t1d0p0
selecting /dev/rdsk/c7t1d0p0
[disk formatted, no defect list found]


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> partition


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 3914       29.99GB    (3915/0/0) 62894475

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[1]:
Enter partition size[62878410b, 3914c, 3914e, 30702.35mb, 29.98gb]: 3914e
partition> label
Ready to label disk, continue? y

partition> quit


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> qui
root@unknown:~#
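
If you'd rather script the labelling than drive format(1M)
interactively, fmthard(1M) should lay down the same single root slice.
The -d fields are slice:tag:flag:start:size; the numbers below are the
ones from the transcript above (root tag, wm flags, slice starting at
cylinder 1), so sanity-check prtvtoc output on your own disk first:

root@unknown:~# prtvtoc /dev/rdsk/c7t1d0s2
root@unknown:~# fmthard -d 0:2:00:16065:62878410 /dev/rdsk/c7t1d0s2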



####
#### mirror the rpool disk, install grub on the new disk
####
root@unknown:~# zpool attach -f rpool c7t0d0s0 c7t1d0s0
Please be sure to invoke installgrub(1M) to make 'c7t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@unknown:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
root@unknown:~#


####
#### wait for the resilver to complete
####
root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h3m, 35.98% done, 0h6m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0
            c7t1d0s0  ONLINE       0     0     0  1.75G resilvered

errors: No known data errors
root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h8m with 0 errors on Tue Sep  1 15:11:41 2009
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0
            c7t1d0s0  ONLINE       0     0     0  4.86G resilvered

errors: No known data errors
root@unknown:~#
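
If you're scripting this, a crude way to block until the resilver
finishes (the grep string is the one zpool prints above):

root@unknown:~# while zpool status rpool | grep -q "resilver in progress"; do sleep 60; done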


####
#### at this point I would shut down, snapshot your original disk, then
#### modify the guest config so the new disk is set as bootable
#### (e.g. xm list -l <guest> > /tmp/myguest.sxp; vim /tmp/myguest.sxp;
#### xm new -F /tmp/myguest.sxp)
####
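#### the vbd entry in the sxp dump looks something like this (a sketch
#### from memory; exact fields vary by build). Point the boot device,
#### e.g. xvda, at the new vdisk:
####   (device (vbd (uname tap:vdisk:/vdisks/opensolaris-bigger)
####           (dev xvda) (mode w)))
####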


####
#### reboot to test the new rpool, then remove the old disk
#### from the mirror
####
root@unknown:~# zpool detach rpool c7t0d0s0

####
#### Not sure if you need to reboot to see the larger size or not;
#### I rebooted before checking.
####
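#### (On builds with the autoexpand pool property you may be able to
#### skip the reboot with: zpool online -e rpool c7t1d0s0
#### but I haven't verified that under xVM.)
####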

root@unknown:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t1d0s0  ONLINE       0     0     0

errors: No known data errors
root@unknown:~# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      5.23G  24.2G  83.5K  /rpool
rpool/ROOT                 4.23G  24.2G    19K  legacy
rpool/ROOT/opensolaris     4.62M  24.2G  2.87G  /
rpool/ROOT/small-be        1.72M  24.2G   365M  /
rpool/ROOT/small-be-clone  3.89M  24.2G   481M  /
rpool/ROOT/snv111b         3.02G  24.2G  2.87G  /
rpool/ROOT/snv121          1.20G  24.2G   827M  /
rpool/dump                  512M  24.2G   512M  -
rpool/export               63.5K  24.2G    21K  /export
rpool/export/home          42.5K  24.2G    21K  /export/home
rpool/export/home/mrj      21.5K  24.2G  21.5K  /export/home/mrj
rpool/swap                  512M  24.5G   137M  -
root@unknown:~#