During incremental backups, if the target has a cluster size that is larger than the backup cluster size and we are backing up to a target that cannot (for whatever reason) pull clusters up from a backing image, we may inadvertently create unusable incremental backup images.
For example: if the bitmap tracks changes at a 64KB granularity and we transmit 64KB of data at a time, but the target uses a 128KB cluster size, it is possible that only half of a target cluster will be recognized as dirty by the backup block job. When the cluster is allocated on the target image but only half populated with data, we lose the ability to distinguish between zero padding and uninitialized data.

This does not happen if the target image has a backing file that points to the last known good backup.

Even if we have a backing file, though, it's likely going to be faster to buffer the redundant data ourselves from the live image than to fetch it from the backing file, so let's just always round up to the target granularity.

Signed-off-by: John Snow <js...@redhat.com>
---
 block/backup.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index fcf0043..62faf81 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -568,9 +568,16 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
     job->on_target_error = on_target_error;
     job->target = target;
     job->sync_mode = sync_mode;
-    job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
-                       sync_bitmap : NULL;
-    job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
+    if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+        BlockDriverInfo bdi;
+
+        bdrv_get_info(job->target, &bdi);
+        job->sync_bitmap = sync_bitmap;
+        job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT,
+                                bdi.cluster_size);
+    } else {
+        job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
+    }
     job->sectors_per_cluster = job->cluster_size / BDRV_SECTOR_SIZE;
     job->common.len = len;
     job->common.co = qemu_coroutine_create(backup_run);
-- 
2.4.3
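
For readers following along, here is a minimal standalone sketch (not QEMU code) of the cluster-size selection the patch performs, using the 64KB/128KB figures from the commit message. The constant values, the MAX macro, and the target_cluster_size variable are illustrative assumptions for the demo, not taken verbatim from the QEMU tree.

/* Standalone sketch of the rounding behaviour described above: the backup
 * job copies in units of the larger of the default backup cluster size and
 * the target image's cluster size, so a 128KB-cluster target never ends up
 * with half-populated clusters. Values mirror the commit message example. */
#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

#define BACKUP_CLUSTER_SIZE_DEFAULT (64 * 1024)   /* 64KB dirty bitmap granularity */
#define SECTOR_SIZE 512

int main(void)
{
    int target_cluster_size = 128 * 1024;  /* e.g. a qcow2 target with 128KB clusters */

    /* Same selection as the patch: round the copy unit up to the target. */
    int cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, target_cluster_size);
    int sectors_per_cluster = cluster_size / SECTOR_SIZE;

    printf("copy unit: %d bytes (%d sectors)\n", cluster_size, sectors_per_cluster);
    /* Prints: copy unit: 131072 bytes (256 sectors) */
    return 0;
}

For targets that report a smaller (or zero) cluster size, the MAX() leaves the 64KB default in place, so those cases copy in the same units as before.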