We have a 24-disk server, so the current design is a 2-disk root mirror plus
2x 11-disk RAIDZ2 vdevs. Another option would have been 3x 7-disk vdevs plus
a hot spare, but that starts to compromise capacity (15 data disks versus 18,
assuming RAIDZ2). With 1TB disks, the current config gives us growth capacity
to around 16TiB. Obviously 3.5TiB is a small starting point, but we are facing
an exponential growth curve.
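
For reference, the data pool in this layout would be created with something
like the commands below. The device names (c1t2d0 through c1t23d0) are just
placeholders for illustration; the root mirror is normally set up at install
time or by attaching a second disk with "zpool attach".

  # two 11-disk RAIDZ2 vdevs in a single pool (device names are hypothetical)
  zpool create tank \
      raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
             c1t10d0 c1t11d0 c1t12d0 \
      raidz2 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 \
             c1t20d0 c1t21d0 c1t22d0 c1t23d0

  # sanity-check the resulting layout
  zpool status tank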
It seems like the recommendation is to keep expanding in disk quantity rather
than upgrading disk size to meet growth requirements. Perhaps a 1TB SATA disk
is already too big a lump for a ZFS resilver.
It looks like there is also some hope that, if "bp rewrite" becomes a reality,
it will help further by allowing online defragmentation:
http://www.opensolaris.org/jive/thread.jspa?messageID=186582

NEWS
Actually, I've just noticed that Matt Ahrens has updated the status of this
bug to "need more information" and added a comment that the fix for 6343667
involved a major rewrite, which might have removed the problem. Has anyone out
there got a large zpool running under snv_94 or later, and if so, have you had
to rebuild a disk yet? If this has fixed both bugs then I'm definitely hoping
for an early backport to Solaris 10.