On 2012-12-06 09:35, Albert Shih wrote:
1) add a 5th top-level vdev (e.g. another set of 12 disks)
That's not a problem.
That IS a problem if you're going to ultimately remove an enclosure -
once added, you won't be able to remove the extra top-level VDEV from
your ZFS pool.
2) replace the disks with larger ones one-by-one, waiting for a
resilver in between
This is the point where I don't see how to do it. I currently have 48 disks,
from /dev/da0 -> /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.
I have 4 raidz2 vdevs, the first from /dev/da0 -> /dev/da11, etc.
So I physically add a new enclosure with 12 new disks, for example 4 TB disks.
I'm going to have new devices /dev/da48 --> /dev/da59.
Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0.
I believe FreeBSD should behave similarly to the Solaris-based OSes here.
Since your pool is not yet "broken", and since you have the luxury of all
disks being present during the migration, it is safer not to pull out a
disk physically and put a new one in its place (physically or via
hot-sparing), but rather to do a software replacement with "zpool replace".
That way your pool does not lose redundancy for the duration of the
replacement.
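For example, a minimal sketch (assuming the pool is called "tank" -
substitute your real pool name, and check "zpool status" for the actual
layout first):

    # Replace the first old disk with the first new one in software,
    # while the old disk keeps serving data during the resilver:
    zpool replace tank da0 da48

    # Watch the resilver progress:
    zpool status tank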
The first raidz2 is going to be in a «degraded» state. So I'm going to tell
the pool the new disk is /dev/da48.
I repeat this process until /dev/da11 is replaced by /dev/da59.
Roughly so. Other list members might chime in - but MAYBE it is even
possible or advisable to do software replacement on all 12 disks in
parallel (since the originals are all present)?
But in the end, how much space am I going to use on those /dev/da48 -->
/dev/da59? Am I going to have 3 TB or 4 TB? Because before the replacement
is complete, ZFS is only going to use 3 TB of each disk; how, at the end,
is it magically going to use 4 TB?
While the migration is underway and some but not all disks have
completed it, you can only address the old size (3 TB); once your
active disks are all of the larger size, you'd suddenly see the pool
expand to use the available space - either automatically (if the
autoexpand property is on), or after a series of
"zpool online -e poolname componentname" commands.
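Something like this, again assuming a pool named "tank":

    # Either let the pool grow by itself once every disk in a vdev is larger:
    zpool set autoexpand=on tank

    # ...or trigger the expansion explicitly per device after the replacements:
    zpool online -e tank da48
    zpool online -e tank da49
    # ...and so on for the remaining new disks up to da59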
When I change the disks, I would also like to change the disk enclosure;
I don't want to keep using the old one.
Second question: when I pull out the first enclosure, meaning the old
/dev/da0 --> /dev/da11, and reboot the server, the kernel is going to give
new numbers to those disks, meaning
old /dev/da12 --> /dev/da0
old /dev/da13 --> /dev/da1
etc...
old /dev/da59 --> /dev/da47
How is ZFS going to manage that?
Supposedly, it should manage that well :)
Once your old enclosure's disks are no longer used and you can remove the
enclosure, you should "zpool export" your pool before turning off the
hardware. This removes the pool from the OS's ZFS cachefile, so upon the
next import the pool undergoes a full search for its components. This is
slower than using the cachefile when you have many devices at static
locations, but it ensures that all storage devices are consulted and a
fresh map of the pool components' locations is drawn. So even if the device
numbering changes due to the hardware changes and OS reconfiguration, the
full zpool import will take note of this and import the old data from the
new addresses (device names).
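In practice, something along these lines (a sketch; "tank" is a placeholder
pool name):

    # Before powering down / removing the old enclosure:
    zpool export tank

    # ...remove the enclosure, reboot, let the kernel renumber the disks...

    # Re-import: ZFS scans the devices and finds the pool by its on-disk
    # labels, regardless of the new /dev/daN numbering:
    zpool import tank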
HTH,
//Jim