Hi Jordi,

After enlarging the disk, did you update the fdisk partitioning so it sees
the new cylinders?

And then update the VTOC label so it sees the new space too?

In fdisk you will need to delete the partition and create it again; as
long as you keep the same start cylinder and don't make the partition
smaller (unlikely in this case), all should be OK.
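As a rough sketch of that fdisk step (device name taken from your format output; the exact menu wording varies between Solaris releases, so treat this as a guide rather than a literal transcript):

```shell
# Run fdisk against the whole-disk p0 device:
#   fdisk /dev/rdsk/c0t0d0p0
#
# In the interactive menu:
#   3  - delete the existing SOLARIS2 partition
#   1  - create a new partition, type SOLARIS2, active,
#        starting at the SAME cylinder as before and
#        spanning 100% of the (now larger) disk
#   5  - update disk configuration and exit
```

Keeping the same start cylinder is what preserves the existing slices and their data.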

Similarly, for the VTOC label you will need to edit the specific slice
information.
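The VTOC edit is done from format(1M)'s partition menu, roughly like this (slice numbers from your zpool output; the cylinder figures here are placeholders, not values for your disk):

```shell
# format            -> select c0t0d0
# format> partition -> partition> print   (note current slice layout)
#
# For each slice that should grow, e.g. slice 7:
#   partition> 7
#     keep the same starting cylinder, enter a larger size
#     (e.g. up to the new last cylinder)
#
# Then write the new VTOC to disk:
#   partition> label
```

Only change the size, never the starting cylinder, of a slice that holds data.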

If you don't do this, ZFS will not see the new space, despite the disk
being bigger.
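Once the label is updated, rpool should grow on its own since it has autoexpand=on. For opt you can either turn autoexpand on too, or (on Solaris 10 releases whose zpool supports the -e flag) explicitly expand the device:

```shell
# Let 'opt' auto-grow like rpool already does:
zpool set autoexpand=on opt

# Or expand the vdev explicitly, if your zpool supports -e:
zpool online -e opt c0t0d0s7

# Then verify:
zpool list
```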

HTH,

Darren.

On 03/05/2012 06:44, Jordi Espasa Clofent wrote:
> Hi,
> 
> I have a Solaris 10 Update 10 box with one disk which is used for two 
> different zpools:
> 
> root@sct-jordi-02:~# cat /etc/release
>                      Oracle Solaris 10 8/11 s10x_u10wos_17b X86
>    Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights 
> reserved.
>                              Assembled 23 August 2011
> 
> root@sct-jordi-02:~# echo | format
> Searching for disks...done
> 
> 
> AVAILABLE DISK SELECTIONS:
>         0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
>            /pci@0,0/pci15ad,1976@10/sd@0,0
> Specify disk (enter its number): Specify disk (enter its number):
> 
> root@sct-jordi-02:~# zpool iostat -v
>                 capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> opt          290M  29.5G      0      0    213  5.33K
>    c0t0d0s7   290M  29.5G      0      0    213  5.33K
> ----------  -----  -----  -----  -----  -----  -----
> rpool       13.7G  16.3G      0      3  14.4K  53.8K
>    c0t0d0s0  13.7G  16.3G      0      3  14.4K  53.8K
> ----------  -----  -----  -----  -----  -----  -----
> 
> root@sct-jordi-02:~# zpool get all rpool opt
> NAME   PROPERTY       VALUE               SOURCE
> opt    size           29.8G               -
> opt    capacity       0%                  -
> opt    altroot        -                   default
> opt    health         ONLINE              -
> opt    guid           13450764434721172659  default
> opt    version        29                  default
> opt    bootfs         -                   default
> opt    delegation     on                  default
> opt    autoreplace    off                 default
> opt    cachefile      -                   default
> opt    failmode       wait                default
> opt    listsnapshots  on                  default
> opt    autoexpand     off                 default
> opt    free           29.5G               -
> opt    allocated      290M                -
> opt    readonly       off                 -
> rpool  size           30G                 -
> rpool  capacity       45%                 -
> rpool  altroot        -                   default
> rpool  health         ONLINE              -
> rpool  guid           16899781381017818003  default
> rpool  version        29                  default
> rpool  bootfs         rpool/ROOT/s10_u10  local
> rpool  delegation     on                  default
> rpool  autoreplace    off                 default
> rpool  cachefile      -                   default
> rpool  failmode       continue            local
> rpool  listsnapshots  on                  default
> rpool  autoexpand     on                  local
> rpool  free           16.3G               -
> rpool  allocated      13.7G               -
> rpool  readonly       off                 -
> 
> 
> Note, as you can see, slice 0 is used for 'rpool' and slice 7 is used 
> for 'opt'. The autoexpand property is enabled on 'rpool' but disabled 
> on 'opt'.
> 
> This machine is a virtual one (VMware), so I can enlarge the disk easily 
> if I need to. Let's say I enlarge the disk by 10 GB:
> 
> # Before enlarging the disk
> 
> root@sct-jordi-02:~# echo | format ; df -h
> Searching for disks...done
> 
> 
> AVAILABLE DISK SELECTIONS:
>         0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
>            /pci@0,0/pci15ad,1976@10/sd@0,0
> Specify disk (enter its number): Specify disk (enter its number):
> Filesystem             size   used  avail capacity  Mounted on
> rpool/ROOT/s10_u10      30G   5.7G   7.3G    44%    /
> /devices                 0K     0K     0K     0%    /devices
> ctfs                     0K     0K     0K     0%    /system/contract
> proc                     0K     0K     0K     0%    /proc
> mnttab                   0K     0K     0K     0%    /etc/mnttab
> swap                    10G   328K    10G     1%    /etc/svc/volatile
> objfs                    0K     0K     0K     0%    /system/object
> sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
> /usr/lib/libc/libc_hwcap2.so.1
>                          13G   5.7G   7.3G    44%    /lib/libc.so.1
> fd                       0K     0K     0K     0%    /dev/fd
> swap                    10G    36K    10G     1%    /tmp
> swap                    10G    40K    10G     1%    /var/run
> rpool/export            30G    32K   7.3G     1%    /export
> rpool/export/home       30G    31K   7.3G     1%    /export/home
> opt                     29G   290M    29G     1%    /opt
> opt/zones               29G    31K    29G     1%    /opt/zones
> rpool                   30G    42K   7.3G     1%    /rpool
> 
> 
> # After enlarging the disk by 10 GB
> 
> root@sct-jordi-02:~# devfsadm
> root@sct-jordi-02:~# echo | format ; df -h
> Searching for disks...done
> 
> 
> AVAILABLE DISK SELECTIONS:
>         0. c0t0d0 <DEFAULT cyl 7829 alt 2 hd 255 sec 63>
>            /pci@0,0/pci15ad,1976@10/sd@0,0
> Specify disk (enter its number): Specify disk (enter its number):
> Filesystem             size   used  avail capacity  Mounted on
> rpool/ROOT/s10_u10      30G   5.7G   7.3G    44%    /
> /devices                 0K     0K     0K     0%    /devices
> ctfs                     0K     0K     0K     0%    /system/contract
> proc                     0K     0K     0K     0%    /proc
> mnttab                   0K     0K     0K     0%    /etc/mnttab
> swap                    10G   328K    10G     1%    /etc/svc/volatile
> objfs                    0K     0K     0K     0%    /system/object
> sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
> /usr/lib/libc/libc_hwcap2.so.1
>                          13G   5.7G   7.3G    44%    /lib/libc.so.1
> fd                       0K     0K     0K     0%    /dev/fd
> swap                    10G    44K    10G     1%    /tmp
> swap                    10G    40K    10G     1%    /var/run
> rpool/export            30G    32K   7.3G     1%    /export
> rpool/export/home       30G    31K   7.3G     1%    /export/home
> opt                     29G   290M    29G     1%    /opt
> opt/zones               29G    31K    29G     1%    /opt/zones
> rpool                   30G    42K   7.3G     1%    /rpool
> 
> The size of the rpool zpool stays the same.
> 
> How can I do it?
> 
> PS. I know perfectly well how to expand a zpool by just adding a new 
> device; actually I think that is even better, but that's not the point.
> 
> 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
