Kjetil Torgrim Homme wrote:
> Andrey Kuzmin <andrey.v.kuz...@gmail.com> writes:

>> The downside you have described happens only when the same checksum is
>> used for data protection and duplicate detection. This implies sha256,
>> BTW, since fletcher-based dedupe has been dropped in recent builds.

> If the hash used for dedup is completely separate from the hash used
> for data protection, I don't see any downsides to computing the dedup
> hash from uncompressed data. Why isn't it done that way?
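
What you are proposing would look roughly like this (an illustrative
Python sketch, not ZFS code; every name in it is invented):

import hashlib
import zlib

# Hypothetical decoupled scheme: the dedup key is computed over the
# *uncompressed* block, separately from the integrity checksum over
# the bytes actually written. This is not how ZFS works.
def write_block_decoupled(data: bytes, ddt: dict) -> None:
    dedup_key = hashlib.sha256(data).digest()   # hash before compression
    if dedup_key in ddt:
        ddt[dedup_key]["refcount"] += 1         # hit: skip compression entirely
        return
    compressed = zlib.compress(data)            # stand-in for lzjb/gzip
    checksum = hashlib.sha256(compressed).digest()  # on-disk integrity check
    ddt[dedup_key] = {"block": compressed,
                      "checksum": checksum,
                      "refcount": 1}

Note the trade-off the sketch makes visible: a dedup hit skips
compression entirely, but a miss pays for two hashes instead of one.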

It isn't separate because that isn't how Jeff and Bill designed it. I think the design they have is great.
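
The coupled pipeline the thread is describing is, very roughly, this
(again an illustrative Python sketch, not the actual C):

import hashlib
import zlib

# Coupled design: compress first, then compute one sha256 over the
# compressed block, and use that single digest both as the block's
# integrity checksum and as the dedup-table (DDT) key. Names are
# invented for illustration.
def write_block_coupled(data: bytes, ddt: dict) -> bytes:
    compressed = zlib.compress(data)            # compression happens first
    key = hashlib.sha256(compressed).digest()   # one hash, two roles
    if key in ddt:
        ddt[key]["refcount"] += 1               # duplicate of the compressed block
    else:
        ddt[key] = {"block": compressed, "refcount": 1}
    return key                                  # the same digest is the checksum

One digest serving both roles is exactly why dedup necessarily sees
compressed bytes.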

Instead of trying to pick holes in the theory, can you demonstrate a real performance problem with compression=on and dedup=on, and show that it is because of the compression step?
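
If anyone wants to measure it, here is a rough starting point that
isolates the cost of the compression step from the cost of the sha256
dedup hash (plain Python; the block size matches the default 128K
recordsize, everything else is arbitrary):

import hashlib
import os
import time
import zlib

# Micro-benchmark separating compression cost from hashing cost.
# Treat the numbers as indicative only; a real demonstration would
# need actual pool I/O.
block = os.urandom(4096) * 32  # 128 KiB of partly compressible data

def bench(label, fn, iters=200):
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    ms = (time.perf_counter() - start) / iters * 1e3
    print(f"{label}: {ms:.3f} ms/block")

bench("compress only", lambda: zlib.compress(block))
bench("sha256 only", lambda: hashlib.sha256(block).digest())
bench("compress + sha256",
      lambda: hashlib.sha256(zlib.compress(block)).digest())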

Otherwise, if you want it changed, code it up and show that what you have done is better in all cases.

--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
