MC wrote:
> Thanks for the comprehensive replies!
> 
> I'll need some baby speak on this one though: 
> 
>> The recommended use of whole disks is for drives with volatile write 
>> caches where ZFS will enable the cache if it owns the whole disk. There 
>> may be an RFE lurking here, but it might be tricky to correctly implement 
>> to protect against future data corruptions by non-ZFS use.
> 
> I don't know what you mean by "drives with volatile write caches", but I'm 
> dealing with commodity SATA2 drives from WD/Seagate/Hitachi/Samsung.  

Those commodity SATA drives do have one; you may see it in the data sheet as
"buffer" or "cache buffer".  Usually 8-16 MBytes, with 32 MBytes for newer
drives.
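
That is the cache ZFS enables when it owns the whole disk, i.e. when you hand
it c1t0d0 rather than a slice like c1t0d0s0.  If you want to check what a
particular drive is doing, format(1M) in expert mode can usually show you,
roughly like this (the device name is only a placeholder, and the menu is
only offered for drives whose cache the sd driver can manage):

    # format -e
    ... pick the disk from the menu ...
    format> cache
    cache> write_cache
    write_cache> display
    Write Cache is enabled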

> This disk replacement thing is a pretty common use case, so I think it would 
> be smart to sort it out while someone cares, and then stick the authoritative
> answer into the zfs wiki.  This is what I can contribute without knowing the 
> answer:

The authoritative answer is in the man page for zpool.
    System Administration Commands                          zpool(1M)

             The size of new_device must be greater than or equal  to
             the minimum size of all the devices in a mirror or raidz
             configuration.
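
In practice the failure shows up immediately if the new drive is even
slightly short; something along these lines (device names are placeholders
and the exact wording may differ between releases):

    # zpool replace tank c1t2d0 c1t5d0
    cannot replace c1t2d0 with c1t5d0: device is too small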

> The best way to incorporate abnormal disk size variance tolerance into a
> raidz array is BLANK, and it has these BLANK side effects.

This is a problem for replacement, not creation.  For creation, the problem
becomes more generic, but can make use of automation.  I've got some
algorithms to do that, but am not quite ready with a generic solution which
is administrator friendly.  In other words, the science isn't difficult,
the automation is.
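
One manual workaround at creation time is to label each disk with a slice
somewhat smaller than the nominal capacity and build the raidz from the
slices, so a replacement drive that comes up a few MBytes short still passes
the size check quoted from zpool(1M) above.  The trade-off is the write
cache point from earlier in the thread, since ZFS no longer owns the whole
disk.  A sketch, with placeholder device names and assuming s0 has already
been sized down on each disk with format(1M):

    # zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0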
  -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
