On 09/13/2010 01:28 PM, Kevin Wolf wrote:

Anytime you grow the freelist with qcow2, you have to write a brand new
freelist table and update the metadata synchronously to point to a new
version of it.  That means for a 1TB image, you're potentially writing
out 128MB of data just to allocate a new cluster.
No. qcow2 has two-level tables.

File size: 1 TB
Number of clusters: 1 TB / 64 kB = 16 M
Number of refcount blocks: (16 M * 2 B) / 64 kB = 512
Total size of all refcount blocks: 512 * 64 kB = 32 MB
Size of refcount table: 512 * 8 B = 4 kB

When we grow an image file, the refcount blocks can stay where they are,
only the refcount table needs to be rewritten. So we have to copy a
total of 4 kB for growing the image file when it's 1 TB in size (all
assuming 64k clusters).
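
To make the arithmetic concrete, here is a rough sketch of the same
calculation (my Python, not qcow2 code), assuming 64 kB clusters and
2-byte refcount entries:

TB = 1024 ** 4
KB = 1024

image_size = 1 * TB
cluster_size = 64 * KB
refcount_entry = 2                                   # bytes per refcount entry

clusters = image_size // cluster_size                                    # 16 M clusters
refcount_blocks = (clusters * refcount_entry + cluster_size - 1) // cluster_size   # 512
refcount_block_bytes = refcount_blocks * cluster_size                    # 32 MB
refcount_table_bytes = refcount_blocks * 8                               # 4 kB (8-byte entries)

print(clusters, refcount_blocks, refcount_block_bytes, refcount_table_bytes)
# 16777216 512 33554432 4096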

The other result of this calculation is that we need to grow the
refcount table each time we cross a 16 TB boundary. So in addition to
being a small amount of data, it doesn't happen in practice anyway.
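
The 16 TB figure is just the coverage of a single cluster-sized refcount
table; a sketch of that calculation, under the same assumptions as above:

cluster_size = 64 * 1024
entries_per_refcount_block = cluster_size // 2        # 32768 two-byte refcounts
data_per_refcount_block = entries_per_refcount_block * cluster_size   # 2 GB of image data
table_entries_per_cluster = cluster_size // 8         # 8192 eight-byte pointers
print(table_entries_per_cluster * data_per_refcount_block // 1024 ** 4)   # 16 (TB)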

Interesting, I misremembered it as 8 bytes per cluster, not 2. So it's actually fairly dense (though still not as dense as a bitmap).
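
For comparison, a quick sketch of that gap for the 1 TB example above
(assuming 2-byte refcounts vs. a hypothetical 1-bit-per-cluster allocation
bitmap):

clusters = (1024 ** 4) // (64 * 1024)   # 16 M clusters in a 1 TB image
refcount_bytes = clusters * 2           # 32 MB of refcount blocks
bitmap_bytes = clusters // 8            # 2 MB for a plain allocation bitmap
print(refcount_bytes // 2**20, bitmap_bytes // 2**20)   # 32 2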

--
error compiling committee.c: too many arguments to function
