On Tue, Sep 22, 2009 at 10:51 AM, Iustin Pop <[email protected]> wrote:
> On Tue, Sep 22, 2009 at 10:46:09AM +0100, Guido Trotter wrote:
>> On Tue, Sep 22, 2009 at 10:31 AM, Iustin Pop <[email protected]> wrote:
>> > On Tue, Sep 22, 2009 at 10:26:10AM +0100, Guido Trotter wrote:
>> >> > On Tue, Sep 22, 2009 at 10:25 AM, Guido Trotter <[email protected]> wrote:
>> >> > On Tue, Sep 22, 2009 at 10:24 AM, Iustin Pop <[email protected]> wrote:
>> >> >> On Tue, Sep 22, 2009 at 10:09:43AM +0100, Guido Trotter wrote:
>> >> >>> On Tue, Sep 22, 2009 at 9:41 AM, Iustin Pop <[email protected]> wrote:
>> >> >>> >
>> >> >>> > I was going to add another flag to the cluster settings, and to me
>> >> >>> > it seems that it's not really good to have N flags directly in the
>> >> >>> > cluster object. For example, we have “modify_etc_hosts” today, I now
>> >> >>> > want to add “enable_bridging”, and we'll also need “modify_root_ssh”,
>> >> >>> > etc.
>> >> >>> >
>> >> >>> > I think we should either name them all flag_$blah (not so nice for
>> >> >>> > the cmdline) or move them to a sub-object cluster.flags (this is not
>> >> >>> > so good, as then gnt-cluster modify is harder to work with, or maybe
>> >> >>> > not).
>> >> >>> >
>> >> >>> > Is it a non-issue actually? Or is it an issue, but I should just
>> >> >>> > ignore it and add my flag for now?
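
A minimal sketch of the two layouts being weighed here, purely for
illustration; only modify_etc_hosts exists today, and the other names and
the flags sub-object are hypothetical:

from types import SimpleNamespace

cluster = SimpleNamespace()

# Option 1: flat boolean flags directly on the cluster object
cluster.modify_etc_hosts = True
cluster.enable_bridging = True   # the new flag proposed in this thread
cluster.modify_root_ssh = True   # hypothetical future flag

# Option 2: group the booleans under a sub-object, e.g. cluster.flags
cluster.flags = {
    "modify_etc_hosts": True,
    "enable_bridging": True,
    "modify_root_ssh": True,
}
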
>> >> >>>
>> >> >>> Why make the interface to them more complicated by hiding them in a
>> >> >>> "flags" object?
>> >> >>> In the end what problem is there if they stay in the main cluster one?
>> >> >>> They're settings for a cluster.
>> >> >>
>> >> >> Yes, true, which is why I said maybe it's a non-issue.
>> >> >>
>> >> >> It just seems to me that if we have 8-10 boolean flags it is better to
>> >> >> separate them, since the name alone doesn't make clear what they mean.
>> >> >> I'm happy to leave things as they are if you think it's fine.
>> >> >>
>> >> >>> Also, for the bridge, should we have an explicit enable/disable for
>> >> >>> bridging, or just not populate the default bridge if --no-bridging is
>> >> >>> passed (same as we do for --no-lvm-...)?
>> >> >>
>> >> >> We need an explicit disable, in which case we won't run the cluster
>> >> >> verify checks for the dependencies of bridging.
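
A rough sketch of what the explicit-disable approach could look like in
cluster verify; the names (enable_bridging, default_bridge, has_bridge)
are hypothetical, not Ganeti's actual API:

def VerifyBridging(cluster, nodes):
  # With bridging explicitly disabled, skip the bridge dependency checks
  if not cluster.enable_bridging:
    return []
  errors = []
  for node in nodes:
    if not node.has_bridge(cluster.default_bridge):
      errors.append("bridge %s missing on node %s" %
                    (cluster.default_bridge, node.name))
  return errors
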
>> >> >
>> >> > Which we wouldn't run anyway, if there is no default bridge set (and
>> >> > no instance needs a bridge).
>> >>
>> >> == all the nics are in "routed" mode... Isn't this safer?
>> >
>> > We're back at square one, circa May 2009 :)
>> >
>>
>> Wow! Like playing snakes and ladders! :)
>>
>> > I still argue that explicit ("yes, I need bridging") is better than
>> > implicit. Otherwise, right after cluster creation, with no instances,
>> > your proposal would make cluster verify say everything's OK
>>
>> Not if a default bridge is set up
>
> Wrong context, I was discussing your proposal of only looking at whether
> any instance is in bridged mode.
>

I meant for that to happen only when there is no default bridge, not in
general even when one is set! :)
Sorry if that wasn't clear.

>> > whereas
>> > it's not. And I'd prefer it if gnt-instance add rejected my misguided
>> > attempts to create a bridged network when the cluster config doesn't
>> > allow it.
>> >
>>
>> But why not allow it? We already support more than one bridge, so: if
>> there is no default and no instance has a bridge, there's no need to
>> check; if there is no default and some instance has a bridge, we need
>> to check all the bridges specified by instances; if there is a default,
>> it should exist everywhere! And at instance creation time, if you
>> specify a bridge, we check it anyway!
>> (of course if no default one is there, the default nic mode is routed) :)
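
For comparison, a sketch of the implicit variant described above; field
names like default_bridge, nic.mode and nic.link are illustrative rather
than the exact object model:

def CollectBridgesToCheck(cluster, instances):
  bridges = set()
  if cluster.default_bridge:
    # a configured default bridge must exist on every node
    bridges.add(cluster.default_bridge)
  for inst in instances:
    for nic in inst.nics:
      if nic.mode == "bridged" and nic.link:
        # only verify bridges that some instance actually references
        bridges.add(nic.link)
  # an empty set means nothing to verify: all NICs are effectively routed
  return bridges
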
>
> Wrong context, I was discussing your proposal of only looking at whether
> any instance is in bridged mode, not at whether default_bridge is empty.
>
As above :)

Thanks,

Guido
