I sent a zvol from host a to host b, twice.  Host b has two pools,
one ashift=9 and one ashift=12, and I sent the zvol to each of them.
The original source pool is ashift=9, on an old release (2009.06,
because that host is still running Xen).

I sent it twice because something strange happened on the first send,
to the ashift=12 pool: "zfs list -o space" showed figures at least
double those on the source, roughly 2.5 times as much.

I suspected this might be related to ashift, so I tried the second send
to the ashift=9 pool; the snapshots received there show the same
space consumption as the source.
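
For reference, this is roughly what I did (pool and dataset names here
are made up, and the exact snapshots don't matter):

  # full send of the zvol's snapshot from host a to each pool on host b
  zfs send srcpool/guest@snap | ssh hostb zfs recv pool12/guest  # ashift=12
  zfs send srcpool/guest@snap | ssh hostb zfs recv pool9/guest   # ashift=9

  # compare space accounting on the source and on both copies
  zfs list -o space srcpool/guest            # on host a
  zfs list -o space pool12/guest pool9/guest # on host b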

What is going on?  Is there really that much metadata overhead?  How
many metadata blocks are needed for each 8k volume block, and is each
really holding only 512 bytes of metadata in a 4k allocation?  Can they
not be packed appropriately for the ashift?
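
To put numbers behind that last question (the sizes are my assumptions
about what a small piece of metadata might occupy, not anything I've
measured), rounding a sub-sector metadata payload up to the pool's
minimum allocation looks like this:

  # round a metadata payload of $1 bytes up to a minimum allocation of $2 bytes
  alloc() { echo $(( ( ($1 + $2 - 1) / $2 ) * $2 )); }
  alloc 512 512     # ashift=9:  512 bytes allocated
  alloc 512 4096    # ashift=12: 4096 bytes allocated, 8x for the same payload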

Longer term, if ZFS were to pack metadata into full blocks according to
the ashift, could that be introduced via a zpool upgrade, with space
recovered as metadata is rewritten, or would the pool need to be
recreated?  Or is there some other solution in the works?

--
Dan.
