Miles Nordin wrote:
"et" == Erik Trimble <erik.trim...@oracle.com> writes:
et> frequently-accessed files from multiple VMs are in fact
et> identical, and thus with dedup, you'd only need to store one
et> copy in the cache.
Although counterintuitive, I thought this wasn't part of the initial
release. Maybe I'm wrong altogether, or maybe it got added later?
http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup#comment-1257191094000
No, you're reading that blog right - dedup is on a per-pool basis. What
I was talking about happens inside a single pool. Without dedup enabled
on a pool, if I have 2 VM images, both of which are, say, WinXP, then
I'd have to cache identical files twice. With dedup, I'd only have to
cache those blocks once, even if they were being accessed by both VMs.
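To make the cache effect concrete, here's a toy sketch (not real ZFS/ARC code, and the block names are made up) of a content-addressed cache: identical blocks from two VM images collapse into a single cache entry, the way dedup keys blocks by their checksum.

```python
# Toy model: with a content-addressed cache, identical blocks from
# two VM images occupy one cache entry instead of two.
import hashlib

def block_key(data: bytes) -> str:
    # ZFS dedup identifies blocks by checksum (e.g. SHA-256); same idea here.
    return hashlib.sha256(data).hexdigest()

# Hypothetical block contents for two WinXP images; only the last differs.
vm1_blocks = [b"ntoskrnl", b"kernel32", b"vm1-pagefile"]
vm2_blocks = [b"ntoskrnl", b"kernel32", b"vm2-pagefile"]

# Without dedup: one cache entry per (vm, block) -> 6 entries.
plain_cache = {(vm, i): data
               for vm, blocks in (("vm1", vm1_blocks), ("vm2", vm2_blocks))
               for i, data in enumerate(blocks)}

# With dedup: one cache entry per unique block content -> 4 entries.
dedup_cache = {block_key(d): d for d in vm1_blocks + vm2_blocks}

print(len(plain_cache), len(dedup_cache))  # 6 4
```

The shared blocks (the identical OS files) are exactly what you'd expect to be hot in both VMs, so the cache savings land on the most frequently accessed data.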
So, dedup is both hard on RAM (you need the DDT) and easier on it (it
lowers the number of actual data blocks that have to be stored in cache).
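For a rough sense of the RAM side of that trade-off, here's a back-of-envelope DDT sizing sketch. The ~320 bytes per in-core DDT entry and the 128 KiB recordsize are assumptions I'm plugging in for illustration, not authoritative figures; actual entry size and block size vary.

```python
# Back-of-envelope DDT RAM estimate. Both constants are assumptions:
# ~320 bytes is a commonly quoted rough in-core DDT entry size, and
# 128 KiB is the default ZFS recordsize.
DDT_ENTRY_BYTES = 320
RECORDSIZE = 128 * 1024

def ddt_ram_bytes(unique_data_bytes: int) -> int:
    # One DDT entry per unique block in the pool.
    unique_blocks = unique_data_bytes // RECORDSIZE
    return unique_blocks * DDT_ENTRY_BYTES

# 1 TiB of unique data -> about 2.5 GiB of DDT under these assumptions.
tib = 1024 ** 4
print(ddt_ram_bytes(tib) / 1024 ** 3)  # 2.5
```

Note the estimate scales with *unique* blocks, so a pool full of near-identical VM images needs far less DDT than the raw data size suggests.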
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss