On Mon, Jan 10, 2011 at 11:18:46AM +0100, René Nussbaumer wrote:
> ---
> Resent: it now accepts an array of disks
>  lib/backend.py |   24 ++++++++++++++++++++++++
>  1 files changed, 24 insertions(+), 0 deletions(-)
> 
> diff --git a/lib/backend.py b/lib/backend.py
> index 6b45a40..2bd9953 100644
> --- a/lib/backend.py
> +++ b/lib/backend.py
> @@ -1353,6 +1353,30 @@ def BlockdevWipe(disk, offset, size):
>    _WipeDevice(rdev.dev_path, offset, size)
>  
>  
> +def BlockdevPauseResumeSync(disks, pause):
> +  """Pause or resume the sync of the block device.
> +
> +  @type disks: list of L{objects.Disk}
> +  @param disks: the disk objects whose sync we want to pause/resume
> +  @type pause: bool
> +  @param pause: Whether to pause or resume the sync
> +
> +  """
> +  success = []
> +  for disk in disks:
> +    try:
> +      rdev = _RecursiveFindBD(disk)
> +    except errors.BlockDeviceError:
> +      rdev = None
> +
> +    if not rdev:
> +      _Fail("Cannot change sync for device %s: device not found",
> +            disk.iv_name)
> +
> +    success.append(rdev.PauseResumeSync(pause))
> +
> +  return success

The problem here is that, let's say we have two disks. The first one is
paused fine, the second one is not found, and thus the code raises an
exception, which is propagated to cmdlib, hence the job fails.

But no one does cleanup for the first disk, which remains paused.

This is a generic problem in the rpc/backend system, where we have a
nice _Fail for overall RPC failures, but no system for individual items
failure/success status in a RPC call.

I think here you should not use _Fail, but explicitly return a list of
(status, msg) tuples (and adjust the checks accordingly).
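Concretely, the per-disk reporting could look roughly like the sketch
below. The helper names (_RecursiveFindBD, PauseResumeSync) follow the
patch, but the stubs here are stand-ins so the example runs on its own,
not Ganeti's real implementations:

```python
class _FakeDev:
  """Stand-in for a resolved block device object."""
  def PauseResumeSync(self, pause):
    return True


def _RecursiveFindBD(disk):
  # Stand-in resolver: pretend only a disk named "found" maps to a
  # device; everything else is treated as missing.
  return _FakeDev() if disk == "found" else None


def BlockdevPauseResumeSync(disks, pause):
  """Pause or resume the sync of a list of block devices.

  Instead of raising on the first missing device (which would leave
  earlier disks paused with no cleanup), report a (status, msg) tuple
  per disk and let the caller decide what to do.

  """
  result = []
  for disk in disks:
    try:
      rdev = _RecursiveFindBD(disk)
    except Exception:
      rdev = None

    if rdev is None:
      # Record the per-item failure and keep going.
      result.append((False,
                     "Cannot change sync for device %s: device not"
                     " found" % disk))
      continue

    result.append((rdev.PauseResumeSync(pause), None))

  return result
```

This way a partial failure still returns results for the disks that
were handled, and cmdlib can resume the ones that did pause.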

iustin
