Hi all,

Bear with me, this is archaeology.
It appears that once upon a time this was working as expected:
https://code.google.com/p/ganeti/issues/detail?id=224

But there was an unrelated bug where a user could add an activated disk
to an instance with deactivated disks, which is bad as there is but one
active flag. To avoid this, the new disk was deactivated / shut down:
https://code.google.com/p/ganeti/issues/detail?id=471

However, as gnt-instance deactivate-disks warns, DRBD disks in use
should not be deactivated for fear of damage. Preventing the case where
--no-wait-for-sync is supplied while the instance's disks are
deactivated looks legitimate. However, this patch appears to do the
opposite: it disallows --no-wait-for-sync when the disks are activated.

I will try to contact the original author, but it would be nice if
someone could do a sanity check to see if I am missing something
obvious :)

Cheers,
Riba

On Tue, Jul 29, 2014 at 3:02 PM, Dimitris Aragiorgis <dim...@grnet.gr> wrote:
> Hi,
>
> Any comments on this? Is this really the intended behavior?
>
> Thanks,
> dimara
>
>
> * Stratos Psomadakis <pso...@grnet.gr> [2014-07-15 10:45:12 +0300]:
>
> > On 07/08/2014 11:49 PM, Neal Oakey wrote:
> > >
> > > Hi,
> > >
> > > I think it is related to my Bug report:
> > > https://code.google.com/p/ganeti/issues/detail?id=768
> >
> > Yeap, and since 2.10 you can no longer add a disk with
> > wait_for_sync=False. But I cannot see why you shouldn't be able to do
> > that.
> > >
> > > Greetings,
> > > Neal
> > >
> > > Am 08.07.2014 14:20, schrieb Stratos Psomadakis:
> > > > Hi,
> > > >
> > > > we tried adding new disks to running instances with the wait_for_sync
> > > > option set to False and the job failed with OpPrereqError("Can't add
> > > > a disk to an instance with activated disks and --no-wait-for-sync
> > > > given.").
> > > >
> > > > Commit 3c26084 deactivates disks which are added to instances with
> > > > instance.disks_active=False. However, it also forbids using
> > > > wait_for_sync=False when adding new disks to instances with
> > > > activated disks (e.g. running instances).
> > > >
> > > > This doesn't seem consistent with the corresponding 'gnt-instance add
> > > > --no-wait-for-sync' command. Is this the intended behavior?
> > > >
> > > > If not, is the following fix ok?
> > > >
> > > > diff --git a/lib/cmdlib/instance.py b/lib/cmdlib/instance.py
> > > > index 480198d..3cf674d 100644
> > > > --- a/lib/cmdlib/instance.py
> > > > +++ b/lib/cmdlib/instance.py
> > > > @@ -2819,14 +2819,6 @@ class LUInstanceSetParams(LogicalUnit):
> > > >                                         constants.DT_EXT),
> > > >                                        errors.ECODE_INVAL)
> > > >
> > > > -    if not self.op.wait_for_sync and self.instance.disks_active:
> > > > -      for mod in self.diskmod:
> > > > -        if mod[0] == constants.DDM_ADD:
> > > > -          raise errors.OpPrereqError("Can't add a disk to an instance with"
> > > > -                                     " activated disks and"
> > > > -                                     " --no-wait-for-sync given.",
> > > > -                                     errors.ECODE_INVAL)
> > > > -
> > > >      if self.op.disks and self.instance.disk_template == constants.DT_DISKLESS:
> > > >        raise errors.OpPrereqError("Disk operations not supported for"
> > > >                                   " diskless instances", errors.ECODE_INVAL)
> > > >
> > > > Thanks,
> > > > Stratos
> > >
> >
> > --
> > Stratos Psomadakis
> > <pso...@grnet.gr>
> >
>
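
For reference, here is a minimal sketch of the prereq check as the reading
above suggests it may have been intended, i.e. rejecting --no-wait-for-sync
only when the instance's disks are deactivated (since the freshly added disk
would then be shut down again before DRBD finishes syncing). The inverted
condition and the reworded message are only an illustration of that reading,
not a committed patch; the names used (self.op.wait_for_sync,
self.instance.disks_active, self.diskmod, constants.DDM_ADD,
errors.OpPrereqError) are taken from the hunk quoted above and would sit in
LUInstanceSetParams.CheckPrereq:

    # Sketch only: assumes the check was meant to protect a new disk that
    # will be deactivated right after being added, not a disk added to an
    # instance whose disks are already active.
    if not self.op.wait_for_sync and not self.instance.disks_active:
      for mod in self.diskmod:
        if mod[0] == constants.DDM_ADD:
          raise errors.OpPrereqError("Can't add a disk with"
                                     " --no-wait-for-sync to an instance"
                                     " with deactivated disks.",
                                     errors.ECODE_INVAL)

Whether a check like this should be kept at all, or simply dropped as in
Stratos' diff, is exactly the question raised above.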