Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-06-14 Thread Fam Zheng
On Tue, 06/14 16:49, Max Reitz wrote:
> On 18.05.2016 07:53, Fam Zheng wrote:
> > Otherwise the job is orphaned and block_job_cancel_sync in
> > bdrv_close_all() when quiting will hang.
> > 
> > A simple reproducer is running blockdev-backup from null-co:// to
> > null-co://.
> > 
> > Cc: qemu-sta...@nongnu.org
> > Signed-off-by: Fam Zheng 
> > ---
> >  block/backup.c | 34 ++
> >  1 file changed, 18 insertions(+), 16 deletions(-)
> 
> Sorry for having waited so long (I was thinking that maybe Jeff wanted
> to take this patch), but now the patch no longer applies and I don't
> feel comfortable with just fixing it up myself; git-backport-diff tells
> me the required changes are "[0015] [FC]".

Oh it turns out this patch is superseded by

commit 91ab68837933232bcef99da7c968e6d41900419b
Author: Kevin Wolf 
Date:   Thu Apr 14 12:59:55 2016 +0200

backup: Don't leak BackupBlockJob in error path

Signed-off-by: Kevin Wolf 
Reviewed-by: Max Reitz 
Reviewed-by: Alberto Garcia 

So let's just drop it.

Fam



Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-06-14 Thread Fam Zheng
On Tue, 06/14 16:49, Max Reitz wrote:
> On 18.05.2016 07:53, Fam Zheng wrote:
> > Otherwise the job is orphaned and block_job_cancel_sync in
> > bdrv_close_all() when quiting will hang.
> > 
> > A simple reproducer is running blockdev-backup from null-co:// to
> > null-co://.
> > 
> > Cc: qemu-sta...@nongnu.org
> > Signed-off-by: Fam Zheng 
> > ---
> >  block/backup.c | 34 ++
> >  1 file changed, 18 insertions(+), 16 deletions(-)
> 
> Sorry for having waited so long (I was thinking that maybe Jeff wanted
> to take this patch), but now the patch no longer applies and I don't
> feel comfortable with just fixing it up myself; git-backport-diff tells
> me the required changes are "[0015] [FC]".

No problem! I'll rebase it and submit again.

Fam



Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-06-14 Thread Max Reitz
On 18.05.2016 07:53, Fam Zheng wrote:
> Otherwise the job is orphaned and block_job_cancel_sync in
> bdrv_close_all() when quiting will hang.
> 
> A simple reproducer is running blockdev-backup from null-co:// to
> null-co://.
> 
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Fam Zheng 
> ---
>  block/backup.c | 34 ++
>  1 file changed, 18 insertions(+), 16 deletions(-)

Sorry for having waited so long (I was thinking that maybe Jeff wanted
to take this patch), but now the patch no longer applies and I don't
feel comfortable with just fixing it up myself; git-backport-diff tells
me the required changes are "[0015] [FC]".

Max





Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-05-18 Thread Fam Zheng
On Wed, 05/18 08:25, Eric Blake wrote:
> On 05/17/2016 11:53 PM, Fam Zheng wrote:
> > Otherwise the job is orphaned and block_job_cancel_sync in
> > bdrv_close_all() when quiting will hang.
> 
> s/quiting/quitting/
> 
> > 
> > A simple reproducer is running blockdev-backup from null-co:// to
> > null-co://.
> 
> Worth a qemu-iotests addition?

Sure, I'll send one as a follow-up.

Fam

> 
> > 
> > Cc: qemu-sta...@nongnu.org
> > Signed-off-by: Fam Zheng 
> > ---
> >  block/backup.c | 34 ++
> >  1 file changed, 18 insertions(+), 16 deletions(-)
> > 
> 
> -- 
> Eric Blake   eblake redhat com+1-919-301-3266
> Libvirt virtualization library http://libvirt.org
> 





Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-05-18 Thread Eric Blake
On 05/17/2016 11:53 PM, Fam Zheng wrote:
> Otherwise the job is orphaned and block_job_cancel_sync in
> bdrv_close_all() when quiting will hang.

s/quiting/quitting/

> 
> A simple reproducer is running blockdev-backup from null-co:// to
> null-co://.

Worth a qemu-iotests addition?

> 
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Fam Zheng 
> ---
>  block/backup.c | 34 ++
>  1 file changed, 18 insertions(+), 16 deletions(-)
> 

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-05-18 Thread John Snow


On 05/18/2016 01:53 AM, Fam Zheng wrote:
> Otherwise the job is orphaned and block_job_cancel_sync in
> bdrv_close_all() when quiting will hang.
> 

Tch, mea culpa.

> A simple reproducer is running blockdev-backup from null-co:// to
> null-co://.
> 
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Fam Zheng 
> ---
>  block/backup.c | 34 ++
>  1 file changed, 18 insertions(+), 16 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index 491fd14..3ff48c6 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -505,6 +505,7 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  int64_t len;
>  BlockDriverInfo bdi;
>  int ret;
> +int64_t cluster_size;
>  
>  assert(bs);
>  assert(target);
> @@ -568,23 +569,10 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  goto error;
>  }
>  
> -BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
> -   cb, opaque, errp);
> -if (!job) {
> -goto error;
> -}
> -
> -job->on_source_error = on_source_error;
> -job->on_target_error = on_target_error;
> -job->target = target;
> -job->sync_mode = sync_mode;
> -job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
> -   sync_bitmap : NULL;
> -
>  /* If there is no backing file on the target, we cannot rely on COW if our
>   * backup cluster size is smaller than the target cluster size. Even for
>   * targets with a backing file, try to avoid COW if possible. */
> -ret = bdrv_get_info(job->target, &bdi);
> +ret = bdrv_get_info(target, &bdi);
>  if (ret < 0 && !target->backing) {
>  error_setg_errno(errp, -ret,
>  "Couldn't determine the cluster size of the target image, "
> @@ -594,11 +582,25 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  goto error;
>  } else if (ret < 0 && target->backing) {
>  /* Not fatal; just trudge on ahead. */
> -job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
> +cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
>  } else {
> -job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
> +cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
>  }
>  
> +BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
> +   cb, opaque, errp);
> +if (!job) {
> +goto error;
> +}
> +
> +job->on_source_error = on_source_error;
> +job->on_target_error = on_target_error;
> +job->target = target;
> +job->sync_mode = sync_mode;
> +job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
> +   sync_bitmap : NULL;
> +job->cluster_size = cluster_size;
> +
>  bdrv_op_block_all(target, job->common.blocker);
>  job->common.len = len;
>  job->common.co = qemu_coroutine_create(backup_run);
> 

Thank you.

Reviewed-by: John Snow 




Re: [Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-05-18 Thread Jeff Cody
On Wed, May 18, 2016 at 01:53:27PM +0800, Fam Zheng wrote:
> Otherwise the job is orphaned and block_job_cancel_sync in
> bdrv_close_all() when quiting will hang.
> 
> A simple reproducer is running blockdev-backup from null-co:// to
> null-co://.
> 
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Fam Zheng 

Reviewed-by: Jeff Cody 

> ---
>  block/backup.c | 34 ++
>  1 file changed, 18 insertions(+), 16 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index 491fd14..3ff48c6 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -505,6 +505,7 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  int64_t len;
>  BlockDriverInfo bdi;
>  int ret;
> +int64_t cluster_size;
>  
>  assert(bs);
>  assert(target);
> @@ -568,23 +569,10 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  goto error;
>  }
>  
> -BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
> -   cb, opaque, errp);
> -if (!job) {
> -goto error;
> -}
> -
> -job->on_source_error = on_source_error;
> -job->on_target_error = on_target_error;
> -job->target = target;
> -job->sync_mode = sync_mode;
> -job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
> -   sync_bitmap : NULL;
> -
>  /* If there is no backing file on the target, we cannot rely on COW if our
>   * backup cluster size is smaller than the target cluster size. Even for
>   * targets with a backing file, try to avoid COW if possible. */
> -ret = bdrv_get_info(job->target, &bdi);
> +ret = bdrv_get_info(target, &bdi);
>  if (ret < 0 && !target->backing) {
>  error_setg_errno(errp, -ret,
>  "Couldn't determine the cluster size of the target image, "
> @@ -594,11 +582,25 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
>  goto error;
>  } else if (ret < 0 && target->backing) {
>  /* Not fatal; just trudge on ahead. */
> -job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
> +cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
>  } else {
> -job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
> +cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
>  }
>  
> +BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
> +   cb, opaque, errp);
> +if (!job) {
> +goto error;
> +}
> +
> +job->on_source_error = on_source_error;
> +job->on_target_error = on_target_error;
> +job->target = target;
> +job->sync_mode = sync_mode;
> +job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
> +   sync_bitmap : NULL;
> +job->cluster_size = cluster_size;
> +
>  bdrv_op_block_all(target, job->common.blocker);
>  job->common.len = len;
>  job->common.co = qemu_coroutine_create(backup_run);
> -- 
> 2.8.2
> 



[Qemu-devel] [PATCH] backup: Fail early if cannot determine cluster size

2016-05-17 Thread Fam Zheng
Otherwise the job is orphaned and block_job_cancel_sync in
bdrv_close_all() when quiting will hang.

A simple reproducer is running blockdev-backup from null-co:// to
null-co://.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Fam Zheng 
---
 block/backup.c | 34 ++
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 491fd14..3ff48c6 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -505,6 +505,7 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
 int64_t len;
 BlockDriverInfo bdi;
 int ret;
+int64_t cluster_size;
 
 assert(bs);
 assert(target);
@@ -568,23 +569,10 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
 goto error;
 }
 
-BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
-   cb, opaque, errp);
-if (!job) {
-goto error;
-}
-
-job->on_source_error = on_source_error;
-job->on_target_error = on_target_error;
-job->target = target;
-job->sync_mode = sync_mode;
-job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
-   sync_bitmap : NULL;
-
 /* If there is no backing file on the target, we cannot rely on COW if our
  * backup cluster size is smaller than the target cluster size. Even for
  * targets with a backing file, try to avoid COW if possible. */
-ret = bdrv_get_info(job->target, &bdi);
+ret = bdrv_get_info(target, &bdi);
 if (ret < 0 && !target->backing) {
 error_setg_errno(errp, -ret,
 "Couldn't determine the cluster size of the target image, "
@@ -594,11 +582,25 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target,
 goto error;
 } else if (ret < 0 && target->backing) {
 /* Not fatal; just trudge on ahead. */
-job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
+cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
 } else {
-job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
+cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
 }
 
+BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
+   cb, opaque, errp);
+if (!job) {
+goto error;
+}
+
+job->on_source_error = on_source_error;
+job->on_target_error = on_target_error;
+job->target = target;
+job->sync_mode = sync_mode;
+job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
+   sync_bitmap : NULL;
+job->cluster_size = cluster_size;
+
 bdrv_op_block_all(target, job->common.blocker);
 job->common.len = len;
 job->common.co = qemu_coroutine_create(backup_run);
-- 
2.8.2