On Wed, May 4, 2011 at 4:36 PM, Erik Trimble <erik.trim...@oracle.com> wrote:
> If so, I'm almost certain NetApp is doing post-write dedup.  That way, the
> strictly controlled max FlexVol size helps with keeping the resource limits
> down, as it will be able to round-robin the post-write dedup to each FlexVol
> in turn.

They are; it's in their docs. A volume is deduplicated once roughly 20%
new (non-deduplicated) data has been added to it, or something along
those lines. Up to 8 volumes can be processed at once, I believe, and
weaker systems may not be able to run as many in parallel.

> block usage has a significant 4k presence.  One way I reduced this initally
> was to have the VMdisk image stored on local disk, then copied the *entire*
> image to the ZFS server, so the server saw a single large file, which meant
> it tended to write full 128k blocks.  Do note, that my 30 images only takes

Wouldn't you have been better off cloning datasets that contain an
unconfigured install and customizing from there?
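For what it's worth, that workflow would look roughly like the sketch
below. Pool, dataset, and VM names are made up for illustration; this
assumes a golden image has already been installed into the dataset
being snapshotted.

```shell
# Snapshot the dataset holding the unconfigured "golden" install
# (names are hypothetical).
zfs snapshot tank/vmimages/golden@base

# Create one clone per guest; clones share blocks with the snapshot,
# so only per-guest customizations consume new space.
zfs clone tank/vmimages/golden@base tank/vmimages/guest01
zfs clone tank/vmimages/golden@base tank/vmimages/guest02

# Point each VM at its clone and customize from there.
```

Since clones share unmodified blocks with their origin snapshot, you
get most of the space savings up front without relying on dedup to
rediscover the duplication after the fact.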

-B

-- 
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss