On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:

> I sent a zvol from host a, to host b, twice.  Host b has two pools,
> one ashift=9, one ashift=12.  I sent the zvol to each of the pools on
> b.  The original source pool is ashift=9, and an old revision (2009.06,
> because it's still running Xen).

:-)
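
For concreteness, I'm assuming the sends looked something like this (host,
pool, and dataset names here are invented):

    a# zfs snapshot tank/xenvol@migrate
    a# zfs send tank/xenvol@migrate | ssh b zfs receive p4k/xenvol
    a# zfs send tank/xenvol@migrate | ssh b zfs receive p512/xenvol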

> I sent it twice, because something strange happened on the first send,
> to the ashift=12 pool.  "zfs list -o space" showed figures at least
> twice those on the source, maybe roughly 2.5 times.

Can you share the output?

"15% of nothin' is nothin'!'"
                 Jimmy Buffett
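
Something like the following, from the source and both destination pools,
would be a good start (dataset names are placeholders):

    a# zfs list -o space tank/xenvol
    b# zfs list -o space p512/xenvol p4k/xenvol

The columns to compare are USED, USEDDS, USEDSNAP, and USEDCHILD.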

> I suspected this might be related to ashift, so I tried the second send
> to the ashift=9 pool; those received snapshots line up with the same
> space consumption as the source.
> 
> What is going on? Is there really that much metadata overhead?  How
> many metadata blocks are needed for each 8k vol block, and are they
> each really only holding 512 bytes of metadata in a 4k allocation?
> Can they not be packed appropriately for the ashift?

It doesn't matter how small the metadata compresses: on an ashift=12 pool,
the minimum allocation is 4KB.
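
To put rough numbers on it (assuming the defaults, 128-byte block pointers
and copies=2 for metadata; illustrative, not measured):

    an indirect block compresses to ~512 bytes
    ashift=9  allocation: 512B x 2 ditto copies = 1KB
    ashift=12 allocation: 4KB  x 2 ditto copies = 8KB  (8x inflation)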

> 
> Longer term, if zfs were to pack metadata into full blocks by ashift,
> is it likely that this could be introduced via a zpool upgrade, with
> space recovered as metadata is rewritten - or would it need the pool
> to be recreated?  Or is there some other solution in the works?

I think we'd need to see the exact layout of the internal data. This can be
done with the ::zfs_blkstats dcmd in mdb. Perhaps we can take this offline
and report back?
 -- richard

-- 

ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9