>On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
>>> Suppose you find a weakness in a specific hash algorithm; you use this
>>> to create hash collisions, and now imagine you store the hash collisions
>>> in a zfs dataset with dedup enabled using the same hash algorithm.....
>> 
>> Sorry, but isn't this what dedup=verify solves? I don't see the problem
>> here. Maybe all that's needed is a comment in the manpage saying hash
>> algorithms aren't perfect.
>
>It does solve it, but at a cost to normal operation. Every write gets
>turned into a read. Assuming a big enough and reasonably busy dataset,
>this leads to tremendous write amplification.


If and only if the block is being dedup'ed.  (In that case, you're just
changing the write of a whole block into one read of the block and an
update to the dedup table; the whole block isn't written.)
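
For anyone following along, here is a rough sketch of that write path
(plain Python, not actual ZFS code; the names ddt, storage and
write_with_dedup are made up for illustration). The point is that the
extra read only happens on the dedup-hit path, and even there the data
block itself is not rewritten, only the DDT entry is updated:

    import hashlib

    ddt = {}      # checksum -> (block_pointer, refcount); stands in for the DDT
    storage = {}  # block_pointer -> data; stands in for on-disk blocks

    def write_with_dedup(data: bytes, verify: bool = True) -> str:
        csum = hashlib.sha256(data).hexdigest()
        entry = ddt.get(csum)
        if entry is not None:
            bp, refcount = entry
            # dedup=verify: one extra read, but only on this dedup-hit path.
            if not verify or storage[bp] == data:
                ddt[csum] = (bp, refcount + 1)  # just bump the refcount;
                return bp                       # the block is not rewritten
            # Verify failed: the checksum collided but the data differs,
            # so fall through and store the block as unique data.
        bp = "bp-%d" % len(storage)             # unique, non-dedup'ed write
        storage[bp] = data
        if entry is None:
            ddt[csum] = (bp, 1)
        return bp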

Casper
