On Thu, May 23, 2013 at 12:39 PM, Bernardo Dal Seno <[email protected]> wrote:

> On 23 May 2013 10:02, Thomas Thrainer <[email protected]> wrote:
> >
> >
> >
> > On Wed, May 22, 2013 at 6:39 PM, Bernardo Dal Seno <[email protected]>
> > wrote:
> >>
> >> On 22 May 2013 10:52, Thomas Thrainer <[email protected]> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, May 20, 2013 at 5:11 PM, Bernardo Dal Seno
> >> > <[email protected]> wrote:
> >> >>
> >> >> Masterd checks that specifications for new disks don't include
> >> >> spindles when exclusive storage is disabled.
> >> >>
> >> >> Signed-off-by: Bernardo Dal Seno <[email protected]>
> >> >> ---
> >> >>  lib/cmdlib/instance.py         | 27 +++++++++++++++++++++++----
> >> >>  lib/cmdlib/instance_storage.py | 26 ++++++++++++++++++++++----
> >> >>  2 files changed, 45 insertions(+), 8 deletions(-)
> >> >>
> >> >> diff --git a/lib/cmdlib/instance.py b/lib/cmdlib/instance.py
> >> >> index 6498ce2..392ae49 100644
> >> >> --- a/lib/cmdlib/instance.py
> >> >> +++ b/lib/cmdlib/instance.py
> >> >> @@ -1030,6 +1030,13 @@ class LUInstanceCreate(LogicalUnit):
> >> >>          raise errors.OpPrereqError("Disk template %s not supported with"
> >> >>                                     " exclusive storage" % self.op.disk_template,
> >> >>                                     errors.ECODE_STATE)
> >> >
> >> >
> >> > In case of DRBD, only one of the two nodes has to have exclusive
> >> > storage enabled to skip the "else" branch. The "then" branch would
> >> > clearly produce an error then, but in the spirit of defensive
> >> > programming a "compat.all(map(has_es, nodes))" instead of "else"
> >> > would reveal the intent a bit better and be more future-proof.
> >>
> >> I'm not sure I follow you. If we don't support DRBD with exclusive
> >> storage (e_s), as is the case right now, then if one node has e_s
> >> set we report an error, and that's it. If we start supporting
> >> DRBD, then the error is not reported and the instance is subject to
> >> the e_s restriction if any of its nodes has e_s set; so spindles are
> >> accepted as long as at least one node has e_s (they will be ignored
> >> when actually creating the LVs on the non-e_s node, of course). So
> >> the code is partially ready to support DRBD; the missing part is the
> >> low-level handling of the disks (in noded). Does that make sense?
> >
> >
> > Maybe I misunderstood: I thought that when we want to support spindles
> > on DRBD, we should allocate whole spindles on both nodes. That way you
> > still get dedicated storage, even if it spans more than one node. And
> > dedicated storage would also be preserved after a migration. And in
> > this case it would make sense to check e_s on all nodes, or am I wrong?
>
> So, most of the time, all == any. The problem is: what should happen
> when you have a split instance (note that e_s is a node parameter,
> not an instance one)? I'm not sure we'll ever allow this, but it
> could have legitimate uses, like moving an instance between
> nodegroups with different e_s settings. And even if it's not legal,
> it's good to handle it in a robust way. In the case of a split
> instance, first of all you want disk creation to happen uniformly on
> each node, so that either all disks created on a node are exclusive
> or none is; you cannot really have a mix, because then you'd end up
> with shared PVs. (This could change if/when we support multiple VGs
> per node, as then different VGs could be treated in different ways.)
> That's why disk creation uses the e_s flag on a per-node basis (see
> d4724b14). With explicit spindles, you want to be able (and will
> soon need) to specify the number of spindles if at least one node
> hosts exclusive disks, hence "any". Maybe there are cases where
> "all" is better, but so far I haven't found them.
>
> In any case, a split instance should be a transient configuration, and
> cluster-verify complains loudly about it, so it's okay if not
> everything is perfect in such a situation, e.g., it's okay if a split
> instance doesn't get the performance improvements of exclusive storage.
>
Thanks for the explanation!
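For what it's worth, the difference between the two policies for a split instance can be sketched in plain Python. This is illustrative only, not Ganeti code; the function names are made up, and `nodes_es` stands for the list of exclusive_storage (e_s) flags of the instance's nodes:

```python
# Illustrative sketch -- not Ganeti code. nodes_es is the list of
# exclusive_storage (e_s) flags of the instance's nodes.

def spindles_allowed_any(nodes_es):
    # "any": spindles may be specified if at least one node has e_s set;
    # the flag is then honoured per node at disk-creation time.
    return any(nodes_es)

def spindles_allowed_all(nodes_es):
    # "all": spindles only if every node has e_s set; a split instance
    # (one e_s node, one non-e_s node) could not specify them at all.
    return all(nodes_es)

split = [True, False]  # split instance, e.g. mid-move between nodegroups
print(spindles_allowed_any(split))  # True  -> spindles can be given
print(spindles_allowed_all(split))  # False -> spindles rejected
```

For a non-split instance the two predicates agree, which is why the observable behavior today is the same either way.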


> > But obviously, the behavior right now does not change at all when using
> > all instead of any, as there is always exactly one node involved.
> >
> > Anyway, if you prefer to leave it as is, that's fine with me. I just
> > wanted to understand the reasoning ;-).
>
> Which is a legitimate concern, I'd say :-)
>
> Thanks,
> Bernardo
>
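As a self-contained sketch of the check under discussion: the error message mirrors the quoted diff, while the helper name, the exception class, and the set of templates supporting exclusive storage are invented for this example and are not Ganeti's actual internals:

```python
# Sketch of the prereq check from the quoted diff. The message mirrors
# the patch; OpPrereqError, SUPPORTED_ES_TEMPLATES, and the function
# name are invented for illustration.

class OpPrereqError(Exception):
    pass

SUPPORTED_ES_TEMPLATES = frozenset(["plain"])  # assumption for the sketch

def check_es_disk_template(disk_template, node_es_flags):
    # Reject the template if any instance node has e_s set and the
    # template does not support exclusive storage ("any" semantics).
    if any(node_es_flags) and disk_template not in SUPPORTED_ES_TEMPLATES:
        raise OpPrereqError("Disk template %s not supported with"
                            " exclusive storage" % disk_template)

check_es_disk_template("plain", [True])  # accepted
try:
    check_es_disk_template("drbd", [True, False])  # split instance, rejected
except OpPrereqError as err:
    print("rejected: %s" % err)
```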



-- 
Thomas Thrainer | Software Engineer | [email protected] |

