> Ross wrote:
>> The problem is they might publish these numbers, but we really have
>> no way of controlling what number manufacturers will choose to use
>> in the future.
>>
>> If for some reason future 500GB drives all turn out to be slightly
>> smaller than the current ones you're going to be stuck. Reserving
>> 1-2% of space in exchange for greater flexibility in replacing
>> drives sounds like a good idea to me. As others have said, RAID
>> controllers have been doing this for long enough that even the very
>> basic models do it now, and I don't understand why simple
>> features like this would be left out of ZFS.
It would certainly be "terrible" to go back to the days when 5% of the
filesystem space was inaccessible to users, forcing the sysadmin to
manually change that percentage to 0 to get full use of the disk. Oh
wait, UFS still does that, and it's a configurable parameter at mkfs
time (and can be tuned on the fly).

For a ZFS pool (until block pointer rewrite capability arrives), this
would have to be a pool-create-time parameter. Perhaps a
--usable-size=N[%] option which would either cut down the size of the
EFI slices or fake the disk geometry so the EFI label ends early. Or it
would be a small matter of programming to build a perl wrapper for
zpool create that would accomplish the same thing.

--Joe
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
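[Editor's note: a minimal sketch of the wrapper idea discussed above, in
shell rather than perl. The RESERVE_PCT figure, the usable_sectors helper,
and the disk/slice names are all illustrative assumptions, not anything
shipped with ZFS. The real work of labeling the slice would be done with
format/fmthard before handing the slices, not the whole disks, to zpool
create.]

```shell
#!/bin/sh
# Hypothetical "reserve some headroom" wrapper sketch: leave a percentage
# of each disk unused so a future replacement disk that is slightly
# smaller than today's nominal 500GB still fits the vdev.

RESERVE_PCT=2   # illustrative: percent of each disk to leave unused

# Compute the usable sector count for a disk, rounding down.
usable_sectors() {
    total=$1
    echo $(( total * (100 - RESERVE_PCT) / 100 ))
}

# Example: a nominal "500GB" disk of 976773168 512-byte sectors.
usable_sectors 976773168

# A full wrapper would then write a slice of that size on each disk
# (e.g. with fmthard) and build the pool from the slices:
#   zpool create tank raidz c0t1d0s0 c0t2d0s0 c0t3d0s0
```

This is the same trick hardware RAID controllers use: the controller
rounds each member down to a common, slightly smaller capacity at array
creation time, so vendor-to-vendor size drift in replacement drives
doesn't matter.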