On 7/2/20 9:50 AM, Max Reitz wrote:
On 28.06.20 13:02, Alberto Garcia wrote:
This field allows us to indicate that the L2 metadata update does not
come from a write request with actual data but from a preallocation
request.

For traditional images this does not make any difference, but for
images with extended L2 entries this means that the clusters are
allocated normally in the L2 table but individual subclusters are
marked as unallocated.

This will allow preallocating images that have a backing file.

There is one special case: when we resize an existing image we can
also request that the new clusters be preallocated. If the image
already had a backing file then we have to hide any possible stale
data and zero out the new clusters (see commit 955c7d6687 for more
details).

In this case the subclusters cannot be left as unallocated so the L2
bitmap must be updated.

Signed-off-by: Alberto Garcia <be...@igalia.com>
Reviewed-by: Eric Blake <ebl...@redhat.com>
---
  block/qcow2.h         | 8 ++++++++
  block/qcow2-cluster.c | 2 +-
  block/qcow2.c         | 6 ++++++
  3 files changed, 15 insertions(+), 1 deletion(-)

Sounds good, but I’m just not quite sure about the details on
falloc/full allocation: With .prealloc = true, writing to the
preallocated subclusters will require a COW operation.  That’s not
ideal, and avoiding those COWs may be a reason to do preallocation in
the first place.

I'm not sure I follow the complaint. If a cluster is preallocated but
a subcluster within it is marked unallocated, then a partial write to
that subcluster must provide the correct contents for the rest of the
subcluster (either zero-filling it, or reading from a backing file).
But this COW can be limited to just that portion of the subcluster,
and is no different from the COW you have to perform without
subclusters when writing to a preallocated cluster in general.
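To make the "limited to just that portion" point concrete, here is a
toy model of the COW geometry (hypothetical names and helper, not QEMU
code; real qcow2 uses 32 subclusters per cluster, sizes below are just
illustrative):

```python
# Toy model: how much data around a partial write needs COW when the
# COW is bounded by subcluster granularity rather than the cluster.
CLUSTER_SIZE = 65536
SUBCLUSTERS_PER_CLUSTER = 32
SUBCLUSTER_SIZE = CLUSTER_SIZE // SUBCLUSTERS_PER_CLUSTER  # 2048

def cow_regions(offset, length):
    """Return the (start, size) head and tail regions, within the
    touched subclusters, that must be filled from the backing file
    (or zeroed) around a partial write."""
    start_sc = offset // SUBCLUSTER_SIZE
    end = offset + length
    end_sc = (end + SUBCLUSTER_SIZE - 1) // SUBCLUSTER_SIZE
    head = (start_sc * SUBCLUSTER_SIZE,
            offset - start_sc * SUBCLUSTER_SIZE)
    tail = (end, end_sc * SUBCLUSTER_SIZE - end)
    return head, tail

# A 512-byte write at offset 1024 only needs COW for the remainder
# of one 2048-byte subcluster, not the rest of the 64k cluster.
head, tail = cow_regions(1024, 512)
```

So the worst case is bounded by the subclusters the write touches,
which is exactly the cost you already pay at cluster granularity
without extended L2 entries.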


Now, with backing files, it’s entirely correct.  You need a COW
operation, because that’s the point of having a backing file.

But without a backing file I wonder if it wouldn’t be better to set
.prealloc = false to avoid that COW.

Without a backing file, there is no read required: writing to an
unallocated subcluster within a preallocated cluster merely has to
provide zeroes for the rest of the subcluster. And if we can
intelligently guarantee that the underlying protocol already reads as
zeroes when preallocated, we even have an optimization where that is
unnecessary too. We can still lump it under the "COW" terminology, in
that the write is more complex than merely writing in place, but it is
not a true copy-on-write operation, as there is nothing to be copied.
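A minimal sketch of that distinction (hypothetical helper, not QEMU
code): with a backing file the padding comes from a read, without one
it is plain zero-fill, so nothing is copied at all.

```python
def pad_data(start, size, backing_read=None):
    """Bytes used to fill the untouched part of a subcluster.

    backing_read, if given, is a callable (offset, size) -> bytes
    reading from the backing file; otherwise the guest-visible
    contents of an unallocated subcluster are zeroes, so no read
    is needed at all.
    """
    if backing_read is None:
        return bytes(size)          # zero-fill, nothing copied
    return backing_read(start, size)

# No backing file: padding is just zeroes.
padding = pad_data(0, 2048)
```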


Of course, if we did that, you couldn’t create the overlay separately
from the backing file, preallocate it, and only then attach the backing
file to the overlay.  But is that a problem?

(Or are there other problems to consider?)

Max


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org