Roy Sigurd Karlsbakk wrote:
Hi all

I've been doing a lot of testing with dedup and have concluded it's not really 
ready for production. If something fails, it can render the pool unusable for 
hours or maybe days, perhaps due to single-threaded code in zfs. There is also 
very little data in the docs (beyond what I've picked up on this list) on how 
much memory one should have for deduping an x TiB dataset.

Does anyone know what the status of dedup is now? In b134 it doesn't work very 
well, but is it better in ON b140 etc.?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


I just integrated a performance improvement for dedup that will dramatically help when the dedup table (DDT) does not fit in memory. For more details, take a look at:

6938089 dedup-induced latency causes FC initiator logouts/FC port resets

This will improve performance for tasks such as rm-ing files in a dedup-enabled dataset and destroying a dedup-enabled dataset. It's still a best practice to size your system so that the dedup table can stay resident in the ARC or L2ARC.
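
If you want to see what the DDT actually costs on an existing pool, "zdb -DD 
<pool>" prints per-class summary lines giving the entry count and the per-entry 
on-disk and in-core sizes. Here's a rough Python sketch that totals the in-core 
footprint from that output; the pool name "tank" is just a placeholder, and the 
exact line format is an assumption that may vary between builds:

# Sum the in-core DDT footprint reported by zdb -- a sketch.
# ASSUMPTION: zdb emits lines like
#   "DDT-sha256-zap-duplicate: 110173 entries, size 295 on disk, 153 in core"
# where the last number is bytes per entry; adjust the regex if
# your build prints something different.

import re
import subprocess

def ddt_core_bytes(pool):
    out = subprocess.check_output(["zdb", "-DD", pool]).decode()
    total = 0
    for m in re.finditer(r"(\d+) entries, size \d+ on disk, (\d+) in core", out):
        entries, per_entry = int(m.group(1)), int(m.group(2))
        total += entries * per_entry  # entries times in-core bytes per entry
    return total

print("DDT in core: %.1f GiB" % (ddt_core_bytes("tank") / float(2 ** 30)))

Compare that number against your ARC size (and L2ARC headroom, remembering that 
L2ARC buffers also consume ARC for their headers) before turning dedup on.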

- George
