Hongbin, for the implementation of heterogeneous bays, I think we should avoid talking to Nova or other services directly, which would require a lot of extra code. Maybe the best way is to refactor our Heat templates and let a bay be backed by several Heat templates, so that we can scale out new nodes or delete individual nodes.
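The idea above (several small Heat templates/stacks per bay instead of one big stack) might be sketched roughly as follows. Everything here is an invented illustration, not real Magnum code: the helper names `render_node_template` and `plan_scale_out` and the naming scheme are assumptions, and only the shape of "one small stack per node" scaling is shown.

```python
# Hypothetical sketch: one small Heat stack per node instead of one big
# ResourceGroup. All names here are illustrative, not real Magnum code.

def render_node_template(role, flavor, availability_zone):
    """Render a minimal per-node Heat template (HOT) as a dict."""
    return {
        "heat_template_version": "2015-04-30",
        "resources": {
            "node": {
                "type": "OS::Nova::Server",
                "properties": {
                    "flavor": flavor,
                    "availability_zone": availability_zone,
                    # tag the server with its role (master/minion)
                    "metadata": {"magnum_role": role},
                },
            },
        },
    }


def plan_scale_out(bay_name, existing_count, new_nodes):
    """Return (stack_name, template) pairs for the nodes being added.

    Existing per-node stacks are left untouched, so deleting a specific
    node later just means deleting that node's own stack.
    """
    plans = []
    for i, (role, flavor, az) in enumerate(new_nodes, start=existing_count):
        stack_name = "%s-%s-%d" % (bay_name, role, i)
        plans.append((stack_name, render_node_template(role, flavor, az)))
    return plans
```

For example, scaling a 3-node bay out by one minion would plan a single new stack (`k8sbay-minion-3` under this invented naming scheme) while the three existing per-node stacks stay untouched.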
Eli.

2016-06-02 22:42 GMT+08:00 Hongbin Lu <hongbin...@huawei.com>:
> Madhuri,
>
> It looks like both of us agree on the idea of having a heterogeneous set
> of nodes. For the implementation, I am open to alternatives (I supported
> the work-around idea because I cannot think of a feasible implementation
> purely in Heat, unless Heat supports "for" logic, which is very unlikely
> to happen. However, if anyone can think of a pure Heat implementation, I
> am totally fine with that).
>
> Best regards,
> Hongbin
>
> > -----Original Message-----
> > From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> > Sent: June-02-16 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi Hongbin,
> >
> > I also like the idea of having a heterogeneous set of nodes, but IMO
> > such a feature should not be implemented in Magnum itself, which would
> > again deviate Magnum from its roadmap. Instead, we should leverage the
> > Heat (or maybe Senlin) APIs for the same.
> >
> > I vote +1 for this feature.
> >
> > Regards,
> > Madhuri
> >
> > -----Original Message-----
> > From: Hongbin Lu [mailto:hongbin...@huawei.com]
> > Sent: Thursday, June 2, 2016 3:33 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Personally, I think this is a good idea, since it can address a set of
> > similar use cases like the ones below:
> > * I want to deploy a k8s cluster across 2 availability zones (and in
> >   the future, 2 regions/clouds).
> > * I want to spin up N nodes in AZ1 and M nodes in AZ2.
> > * I want to scale the number of nodes in a specific AZ/region/cloud.
> >   For example, add/remove K nodes in AZ1 (with AZ2 untouched).
> >
> > The use cases above should be very common and universal everywhere.
> > To address these use cases, Magnum needs to support provisioning a
> > heterogeneous set of nodes at deploy time and managing them at runtime.
> > It looks like the proposed idea (manually managing individual nodes or
> > individual groups of nodes) can address this requirement very well.
> > Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> > > -----Original Message-----
> > > From: Hongbin Lu
> > > Sent: June-01-16 11:44 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > Hi team,
> > >
> > > A blueprint was created for tracking this idea:
> > > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes
> > > I won't approve the BP until there is a team decision on
> > > accepting/rejecting the idea.
> > >
> > > From the discussion at the design summit, it looks like everyone is
> > > OK with the idea in general (with some disagreement on the API
> > > style). However, from the last team meeting, it looks like some
> > > people disagree with the idea fundamentally, so I re-raised it on
> > > this ML thread for further discussion.
> > >
> > > If you agree or disagree with the idea of manually managing the Heat
> > > stacks (that contain individual bay nodes), please write down your
> > > arguments here. Then, we can start debating on that.
> > > Best regards,
> > > Hongbin
> > >
> > > > -----Original Message-----
> > > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > > Sent: May-16-16 5:28 AM
> > > > To: OpenStack Development Mailing List (not for usage questions)
> > > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > > managing the bay nodes
> > > >
> > > > The discussion at the summit was very positive around this
> > > > requirement, but as this change will have a large impact on Magnum,
> > > > it will need a spec.
> > > >
> > > > On the API side of things, I was thinking of a slightly more
> > > > generic approach that incorporates other lifecycle operations into
> > > > the same API. E.g.:
> > > >
> > > > magnum bay-manage <bay> <life-cycle-op>
> > > >
> > > > magnum bay-manage <bay> reset --hard
> > > > magnum bay-manage <bay> rebuild
> > > > magnum bay-manage <bay> node-delete <name/uuid>
> > > > magnum bay-manage <bay> node-add --flavor <flavor>
> > > > magnum bay-manage <bay> node-reset <name>
> > > > magnum bay-manage <bay> node-list
> > > >
> > > > Tom
> > > >
> > > > From: Yuanying OTSUKA <yuany...@oeilvert.org>
> > > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > > questions)" <openstack-dev@lists.openstack.org>
> > > > Date: Monday, 16 May 2016 at 01:07
> > > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > > <openstack-dev@lists.openstack.org>
> > > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > > managing the bay nodes
> > > >
> > > > Hi,
> > > >
> > > > I think users will also want to specify which node to delete, so we
> > > > should manage each "node" individually.
> > > >
> > > > For example:
> > > > $ magnum node-create --bay …
> > > > $ magnum node-list --bay
> > > > $ magnum node-delete $NODE_UUID
> > > >
> > > > In any case, if Magnum wants to manage the lifecycle of container
> > > > infrastructure, this feature is necessary.
> > > > Thanks
> > > > -yuanying
> > > >
> > > > On Monday, 16 May 2016 at 7:50, Hongbin Lu
> > > > <hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
> > > > Hi all,
> > > >
> > > > This is a continued discussion from the design summit. For recap,
> > > > Magnum manages bay nodes by using a ResourceGroup from Heat. This
> > > > approach works, but it is infeasible to manage heterogeneity across
> > > > bay nodes, which is a frequently demanded feature. As an example,
> > > > there is a request to provision bay nodes across availability
> > > > zones [1]. There is another request to provision bay nodes with
> > > > different sets of flavors [2]. For the requested features above,
> > > > ResourceGroup won't work very well.
> > > >
> > > > The proposal is to remove the usage of ResourceGroup and manually
> > > > create a Heat stack for each bay node. For example, for creating a
> > > > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> > > > Heat stacks (instead of 1 big Heat stack as right now):
> > > > * A kube cluster stack that manages the global resources
> > > > * Two kube master stacks that manage the two master nodes
> > > > * Three kube minion stacks that manage the three minion nodes
> > > >
> > > > The proposal might require an additional API endpoint to manage
> > > > nodes or groups of nodes. For example:
> > > > $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
> > > >   --availability-zone us-east-1 …
> > > > $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
> > > >   --availability-zone us-east-2 …
> > > >
> > > > Thoughts?
> > > > [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
> > > > [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
> > > >
> > > > Best regards,
> > > > Hongbin
> > > >
> > > > __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
No matter how far apart, we always meet again.
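As a rough illustration of the multi-stack proposal quoted above (one cluster stack plus one stack per master and per minion node): for a bay with 2 masters and 3 minions, Magnum would manage 6 Heat stacks. The sketch below is hypothetical — `plan_bay_stacks` and its stack-naming scheme are invented for illustration and are not actual Magnum code; the nodegroup tuples merely mirror the proposed `magnum nodegroup-create` arguments.

```python
# Hypothetical sketch of the proposed layout: 1 global cluster stack plus
# one stack per node, instead of a single stack holding a ResourceGroup.

def plan_bay_stacks(bay_name, nodegroups):
    """Return the list of Heat stack names Magnum would manage.

    nodegroups: list of (role, count, flavor, availability_zone) tuples,
    mirroring the proposed `magnum nodegroup-create` arguments.
    """
    # Global resources (networks, security groups, ...) live in one stack.
    stacks = ["%s-cluster" % bay_name]
    for role, count, flavor, az in nodegroups:
        for i in range(count):
            # One stack per node, so a single node (or a single AZ's
            # nodegroup) can be added or deleted without touching the rest.
            stacks.append("%s-%s-%s-%d" % (bay_name, role, az, i))
    return stacks
```

With the two example nodegroups from the proposal (2 masters in us-east-1, 3 minions in us-east-2), this yields six stack names, and scaling one AZ only touches that nodegroup's own stacks.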
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev