On Mon, May 18, 2020 at 15:49:02 -0500, Eric Blake wrote:
> On 5/13/20 10:49 PM, John Snow wrote:
[...]

> > +
> > +    /* NB: new bitmap is anonymous and enabled */
> > +    cluster_size = bdrv_dirty_bitmap_granularity(target_bitmap);
> > +    new_bitmap = bdrv_create_dirty_bitmap(bs, cluster_size, NULL, errp);
> > +    if (!new_bitmap) {
> > +        return NULL;
> > +    }
>
> This means if the guest writes to the disk while the job is ongoing, the
> bitmap will be updated to mark that portion of the bitmap as set, even if
> it was not allocated at the time the job started. But then again, the
> guest writes are causing allocation, so this seems like the right thing
> to do.

Well, this could be made the caller's problem by not recording newly
allocated sectors in the bitmap. The caller can then decide whether a
snapshot of the allocation map is needed, and thus a new inactive bitmap
should be used as the destination, or whether new writes should be
tracked by using an active bitmap.

> Do we need to worry about the converse case where the job started with
> something allocated but runs in parallel with the guest trimming, such
> that our bitmap marks something as set even though at the conclusion of
> our job it is no longer allocated?

Given the semantics above, this would conveniently not be a problem of
the population job. If we create a snapshot of the allocation map at any
point, we care about the state at that point. Anyway, from the point of
view of the bitmap code any write to a sector sets the bit, so trimming
should not be treated differently.

Specifically, libvirt plans to use this on overlay (snapshot) images
where the bitmaps are not present, so in that case even trimmed sectors
need to mask the data in the backing image, and they can technically be
considered allocated too.