Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Hongbin Lu


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: May-29-16 3:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev]
> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a
> k8s orchestrator
> 
> Quick question below.
> 
> On 5/28/16, 1:16 PM, "Hongbin Lu"  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Zane Bitter [mailto:zbit...@redhat.com]
> >> Sent: May-27-16 6:31 PM
> >> To: OpenStack Development Mailing List
> >> Subject: [openstack-dev]
> >> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
> >> Gap analysis: Heat as a k8s orchestrator
> >>
> >> I spent a bit of time exploring the idea of using Heat as an
> >> external orchestration layer on top of Kubernetes - specifically in
> >> the case of TripleO controller nodes but I think it could be more
> >> generally useful too - but eventually came to the conclusion it
> >> doesn't work yet, and probably won't for a while. Nevertheless, I
> >> think it's helpful to document a bit to help other people avoid
> >> going down the same path, and also to help us focus on working
> >> toward the point where it _is_ possible, since I think there are
> >> other contexts where it would be useful too.
> >>
> >> We tend to refer to Kubernetes as a "Container Orchestration Engine"
> >> but it does not actually do any orchestration, unless you count just
> >> starting everything at roughly the same time as 'orchestration'.
> >> Which I wouldn't. You generally handle any orchestration
> >> requirements between services within the containers themselves,
> >> possibly using external services like etcd to co-ordinate. (The
> >> Kubernetes project refer to this as "choreography", and explicitly
> >> disclaim any attempt at orchestration.)
> >>
> >> What Kubernetes *does* do is more like an actively-managed version
> >> of Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief
> >> recap: SoftwareDeploymentGroup is a type of ResourceGroup; you give
> >> it a map of resource names to server UUIDs and it creates a
> >> SoftwareDeployment for each server. You have to generate the list of
> >> servers somehow to give it (the easiest way is to obtain it from the
> >> output of another ResourceGroup containing the servers). If e.g. a
> >> server goes down you have to detect that externally, and trigger a
> >> Heat update that removes it from the templates, redeploys a
> >> replacement server, and regenerates the server list before a
> >> replacement SoftwareDeployment is created. In contrast, Kubernetes
> >> is running on a cluster of servers, can use rules to determine where
> >> to run containers, and can very quickly redeploy without external
> >> intervention in response to a server or container falling over. (It
> >> also does rolling updates, which Heat can also do albeit in a
> >> somewhat hacky way when it comes to SoftwareDeployments - which
> >> we're planning to fix.)
> >>
> >> So this seems like an opportunity: if the dependencies between
> >> services could be encoded in Heat templates rather than baked into
> >> the containers then we could use Heat as the orchestration layer
> >> following the dependency-based style I outlined in [1]. (TripleO is
> >> already moving in this direction with the way that composable-roles
> >> uses SoftwareDeploymentGroups.) One caveat is that fully using this
> >> style likely rules out for all practical purposes the current
> >> Pacemaker-based HA solution. We'd need to move to a lighter-weight
> >> HA solution, but I know that TripleO is considering that anyway.
> >>
> >> What's more though, assuming this could be made to work for a
> >> Kubernetes cluster, a couple of remappings in the Heat environment
> >> file should get you an otherwise-equivalent single-node non-HA
> >> deployment basically for free. That's particularly exciting to me
> >> because there are definitely deployments of TripleO that need HA
> >> clustering and deployments that don't and which wouldn't want to pay
> >> the complexity cost of running Kubernetes when they don't make any
> >> real use of it.
> >>
> >> So you'd have a Heat resource type for the controlle

Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Hongbin Lu
I don’t think it is a good idea to re-invent docker-compose in Higgins. Instead, we 
should leverage existing libraries/tools if we can.

Frankly, I don’t think Higgins should interpret any docker-compose-like DSL on the 
server side, but maybe it is a good idea to have a CLI extension that interprets a 
specific DSL and translates it into a set of REST API calls to the Higgins server. The 
solution should be generic enough that we can re-use it to interpret other 
DSLs (e.g. pod, TOSCA, etc.) in the future.
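For illustration, here is a rough sketch of what such a CLI extension could do. 
The endpoint path and payload fields below are hypothetical, since the Higgins 
API has not been designed yet; only the overall parse-and-translate flow is the 
point:

  import requests
  import yaml

  def deploy(compose_file, higgins_url, token):
      # Parse the compose-like DSL entirely on the client side.
      with open(compose_file) as f:
          spec = yaml.safe_load(f)
      # Translate each service into one REST call to the Higgins server.
      for name, service in spec.get('services', {}).items():
          payload = {'name': name,
                     'image': service['image'],
                     'command': service.get('command'),
                     'environment': service.get('environment', {})}
          resp = requests.post(higgins_url + '/containers',
                               json=payload,
                               headers={'X-Auth-Token': token})
          resp.raise_for_status()

An interpreter for a different DSL (pod, TOSCA, etc.) would only need to replace 
the parsing step; the REST translation stays the same.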

Best regards,
Hongbin

From: Denis Makogon [mailto:lildee1...@gmail.com]
Sent: May-31-16 3:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Docker-compose support

Hello.

It is hard to tell if the given API will be the final version, but I tried to make it 
similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow <harlo...@fastmail.com>:
Cool good to know,

I see 
https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66

Would that be the primary API? Hard to tell what the API there actually is, 
haha. Is it the run() method?

I was thinking more along the lines that higgins could be an 'interpreter' of the 
same docker-compose format (or a similar format); if the library that is being 
created takes a docker-compose file and turns it into an 'intermediate' 
version/format, that'd be cool. The compiled version would then be 'executable' 
(and introspectable) by, say, higgins (which could traverse over that 
intermediate version and activate its own code to turn the intermediate 
version's primitives into reality), or a docker-compose service could, or ...
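For illustration, a rough sketch of that 'intermediate format' idea; none of 
these structures exist in docker-compose or higgins, they are just placeholders 
for what a compiled, introspectable form could look like:

  import yaml

  def compile_compose(path):
      # Reduce a compose file to a neutral list of primitives that any engine
      # (higgins, docker-compose itself, ...) could introspect, walk in
      # dependency order, and turn into reality with its own backend.
      with open(path) as f:
          spec = yaml.safe_load(f)
      primitives = []
      for name, service in spec.get('services', {}).items():
          primitives.append({'kind': 'container',
                             'name': name,
                             'image': service['image'],
                             'command': service.get('command'),
                             'depends_on': service.get('depends_on', [])})
      return primitives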

What about TOSCA? From my own perspective the compose format is too limited, so it 
is really necessary to consider the use of TOSCA in Higgins workflows.


Libcompose also seems to be targeted at being a higher-level library, from at least 
reading the summary. Neither seems to take a compose yaml file, turn it 
into an intermediate format, and expose that intermediate format to others for 
introspection/execution (while also likely providing a default execution engine 
that understands that format); instead both just provide an equivalent of:

That's why I've started this thread: as a community we have use cases for Higgins 
itself and for compose, but most of them are not formalized or even written down. 
Isn't this a good time to define them?

  project = make_project(yaml_file)
  project.run/up()

Which probably isn't the best API for something like a web-service that uses 
that same library to have. IMHO having a long running run() method

Well, compose allows running detached executions for most of its API calls. By 
using events, we can track service/container statuses (but it is not really 
trivial).

exposed, without the necessary state tracking or the ability to 
interrupt/pause/resume that run() method, is not going to end well for 
users of that lib (especially a web-service that periodically needs a 
`service webservice stop` or restart, or ...).

Yes, agreed. But docker or swarm by itself doesn't provide such an API (can't tell 
the same for K8s).
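For what it's worth, a minimal sketch of the event-based status tracking 
mentioned above, assuming the docker SDK for Python is installed; the 
bookkeeping is illustrative only:

  import docker

  client = docker.from_env()
  last_action = {}
  # Stream daemon events and record the last known action per container,
  # e.g. 'create', 'start', 'die', 'stop'.
  for event in client.events(decode=True):
      if event.get('Type') != 'container':
          continue
      name = event.get('Actor', {}).get('Attributes', {}).get('name')
      last_action[name] = event.get('Action')
      print(name, last_action[name])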

Denis Makogon wrote:
Hello Stackers.


As part of the discussions around what Higgins is and what its mission is, there
were a couple of you who mentioned docker-compose [1] and the necessity of
doing the same thing for Higgins but from scratch.

I don't think that going in that direction is the best way to spend
development cycles. That's why I ask you to take a look at a recent
patchset submitted to docker-compose upstream [2] that makes this tool
(initially designed as a CLI) become a library with a Python API. The
whole idea is to make docker-compose look similar to libcompose [3]
(written in Go).

If we need to utilize docker-compose features in Higgins, I'd recommend
working on this with the Docker community and convincing them to land that
patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Hongbin Lu
Hi team,

I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core. 
Normally, the requirement to join the core team is to consistently contribute 
to the project for a certain period of time. However, given the fact that the 
project is new and the initial core team was formed based on a commitment, I am 
fine with proposing a new core based on a strong commitment to contribute plus a 
few useful patches/reviews. In addition, Eli Qiao is currently a Magnum core 
and I believe his expertise will be an asset to the Higgins team.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from the existing Higgins core team within a 1-week voting window (consider 
this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
enough votes or there is a veto vote prior to the end of the voting window, Eli 
will not be able to join the core team and will need to wait 30 days to reapply.

The voting is open until Tuesday, June 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-31 Thread Hongbin Lu
Shu,

According to the feedback from the last team meeting, Gatling doesn't seem to 
be a suitable name. Are you able to find an alternative name?

Best regards,
Hongbin

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: May-24-16 4:30 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou
> Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Hi all,
> 
> Unfortunately "higgins" is used by media server project on Launchpad
> and CI software on PYPI. Now, we use "python-higgins" for our project
> on Launchpad.
> 
> IMO, we should rename the project to prevent an increasing number of points to patch.
> 
> How about "Gatling"? It's only association from Magnum. It's not used
> on both Launchpad and PYPI.
> Is there any idea?
> 
> The renaming opportunity will come (it seems to happen only twice a year) on
> Friday, June 3rd. A few projects will rename on this date.
> http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
> And if the project name issue is fixed, I'd like to propose a UI
> subproject.
> 
> Thanks,
> Shu
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Sheel,

Thanks for taking on this responsibility. I have assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in; vivek.jain.openst...@gmail.com; 
flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack Development 
Mailing List (not for usage questions); Chandan Kumar; hai...@xr.jp.nec.com; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Kumari, Madhuri; 
yanya...@cn.ibm.com
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" 
mailto:hongbin...@huawei.com>> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-01 Thread Hongbin Lu
Hi lbaas team,

I wonder if there is an operator-facing installation guide for neutron-lbaas. I 
ask because Magnum is working on an installation guide [1] and 
neutron-lbaas is a dependency of Magnum. We want to link to an official lbaas 
guide so that our users will have complete instructions. Any pointers?

[1] https://review.openstack.org/#/c/319399/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Hi team,

A blueprint was created for tracking this idea: 
https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes . I 
won't approve the BP until there is a team decision on accepting/rejecting the 
idea.

From the discussion at the design summit, it looks like everyone is OK with the idea in 
general (with some disagreements on the API style). However, from the last team 
meeting, it looks like some people disagree with the idea fundamentally, so I 
re-raised this on the ML to re-discuss.

If you agree or disagree with the idea of manually managing the Heat stacks 
(that contain individual bay nodes), please write down your arguments here. 
Then, we can start debating it.

Best regards,
Hongbin

> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: May-16-16 5:28 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> The discussion at the summit was very positive around this requirement
> but as this change will make a large impact to Magnum it will need a
> spec.
> 
> On the API of things, I was thinking a slightly more generic approach
> to incorporate other lifecycle operations into the same API.
> Eg:
> magnum bay-manage <bay> <command>
> 
> magnum bay-manage <bay> reset --hard
> magnum bay-manage <bay> rebuild
> magnum bay-manage <bay> node-delete <node>
> magnum bay-manage <bay> node-add --flavor <flavor>
> magnum bay-manage <bay> node-reset <node>
> magnum bay-manage <bay> node-list
> 
> Tom
> 
> From: Yuanying OTSUKA 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Monday, 16 May 2016 at 01:07
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi,
> 
> I think users also want to specify which node to delete.
> So we should manage “node” individually.
> 
> For example:
> $ magnum node-create —bay …
> $ magnum node-list —bay
> $ magnum node-delete $NODE_UUID
> 
> Anyway, if magnum wants to manage the lifecycle of container
> infrastructure, this feature is necessary.
> 
> Thanks
> -yuanying
> 
> 
> 2016-05-16 (Mon) 7:50 Hongbin Lu <hongbin...@huawei.com>:
> Hi all,
> 
> This is a continued discussion from the design summit. For recap,
> Magnum manages bay nodes by using ResourceGroup from Heat. This
> approach works but it is infeasible to manage the heterogeneity across
> bay nodes, which is a frequently demanded feature. As an example, there
> is a request to provision bay nodes across availability zones [1].
> There is another request to provision bay nodes with different sets of
> flavors [2]. For the requested features above, ResourceGroup won’t work
> very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually
> create a Heat stack for each bay node. For example, for creating a
> cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat
> stacks (instead of 1 big Heat stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes
> or a group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --availability-zone us-east-1 ….
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --availability-zone us-east-2 …
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
> 
> Best regards,
> Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Personally, I think this is a good idea, since it can address a set of similar 
use cases like the below:
* I want to deploy a k8s cluster to 2 availability zones (in the future, 2 
regions/clouds).
* I want to spin up N nodes in AZ1, M nodes in AZ2.
* I want to scale the number of nodes in a specific AZ/region/cloud. For example, 
add/remove K nodes from AZ1 (with AZ2 untouched).

The use cases above should be very common and universal everywhere. To address 
them, Magnum needs to support provisioning a heterogeneous set of nodes 
at deploy time and managing them at runtime. It looks like the proposed idea 
(manually managing individual nodes or individual groups of nodes) can address 
this requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on
> accepting/rejecting the idea.
> 
> From the discussion in design summit, it looks everyone is OK with the
> idea in general (with some disagreements in the API style). However,
> from the last team meeting, it looks some people disagree with the idea
> fundamentally. so I re-raised this ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat
> stacks (that contains individual bay nodes), please write down your
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this requirement
> > but as this change will make a large impact to Magnum it will need a
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic approach
> > to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage <bay> <command>
> >
> > magnum bay-manage <bay> reset --hard
> > magnum bay-manage <bay> rebuild
> > magnum bay-manage <bay> node-delete <node>
> > magnum bay-manage <bay> node-add --flavor <flavor>
> > magnum bay-manage <bay> node-reset <node>
> > magnum bay-manage <bay> node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi,
> >
> > I think, user also want to specify the deleting node.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create —bay …
> > $ magnum node-list —bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if magnum want to manage a lifecycle of container
> > infrastructure.
> > This feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > 2016-05-16 (Mon) 7:50 Hongbin Lu <hongbin...@huawei.com>:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap,
> > Magnum manages bay nodes by using ResourceGroup from Heat. This
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example,
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set of
> > flavors [2]. For the request features above, ResourceGroup won’t work
> > very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually
> > create Heat stack for each bay nodes. For example, for creating a
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> Heat
> > stacks (instead of 1 big Heat stack as right now):
> > * A kube cluster stack that manages the global resources
> > * Two kube master stacks that manage the two master nodes
> > * Three kube minion stacks that man

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Hongbin Lu
Madhuri,

It looks like both of us agree on the idea of having a heterogeneous set of nodes. For 
the implementation, I am open to alternatives (I supported the work-around idea 
because I cannot think of a feasible implementation purely using Heat, 
unless Heat supports "for" logic, which is very unlikely to happen. However, if 
anyone can think of a pure Heat implementation, I am totally fine with that).
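For illustration, here is a rough sketch of what that work-around could look 
like with python-heatclient; the template body, parameter names, and session 
setup are placeholders, not actual Magnum code:

  from heatclient import client as heat_client

  def create_node_group_stacks(session, bay_name, node_groups, template_body):
      # One Heat stack per node group instead of one big ResourceGroup.
      # 'session' is assumed to be an authenticated keystoneauth1 session.
      heat = heat_client.Client('1', session=session)
      created = []
      for index, group in enumerate(node_groups):
          created.append(heat.stacks.create(
              stack_name='%s-nodegroup-%d' % (bay_name, index),
              template=template_body,
              parameters={'flavor': group['flavor'],
                          'count': group['count'],
                          'availability_zone': group['availability_zone']}))
      return created

  # e.g. create_node_group_stacks(session, 'k8s-bay',
  #     [{'flavor': 'm1.small', 'count': 2, 'availability_zone': 'us-east-1'},
  #      {'flavor': 'm1.medium', 'count': 3, 'availability_zone': 'us-east-2'}],
  #     template_body)

Because each node group is its own stack, scaling or removing one group would 
not touch the others.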

Best regards,
Hongbin

> -Original Message-
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: June-02-16 12:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi Hongbin,
> 
> I also like the idea of having a heterogeneous set of nodes, but IMO such
> features should not be implemented in Magnum, thus deviating Magnum
> again from its roadmap. Instead, we should leverage Heat (or maybe
> Senlin) APIs for the same.
> 
> I vote +1 for this feature.
> 
> Regards,
> Madhuri
> 
> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, June 2, 2016 3:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Personally, I think this is a good idea, since it can address a set of
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Hongbin Lu
> > Sent: June-01-16 11:44 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi team,
> >
> > A blueprint was created for tracking this idea:
> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > nodes . I won't approve the BP until there is a team decision on
> > accepting/rejecting the idea.
> >
> > From the discussion in design summit, it looks everyone is OK with
> the
> > idea in general (with some disagreements in the API style). However,
> > from the last team meeting, it looks some people disagree with the
> > idea fundamentally. so I re-raised this ML to re-discuss.
> >
> > If you agree or disagree with the idea of manually managing the Heat
> > stacks (that contains individual bay nodes), please write down your
> > arguments here. Then, we can start debating on that.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > Sent: May-16-16 5:28 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > The discussion at the summit was very positive around this requirement
> > > but as this change will make a large impact to Magnum it will need a
> > > spec.
> > >
> > > On the API of things, I was thinking a slightly more generic
> > > approach to incorporate other lifecycle operations into the same API.
> > > Eg:
> > > magnum bay-manage <bay> <command>
> > >
> > > magnum bay-manage <bay> reset --hard
> > > magnum bay-manage <bay> rebuild
> > > magnum bay-manage <bay> node-delete <node>
> > > magnum bay-manage <bay> node-add --flavor <flavor>
> > > magnum bay-manage <bay> node-reset <node>
> > > magnum bay-manage <bay> node-list
> > >
> > > Tom
> > >
> > > From: Yuanying OTSUKA 
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > questions)" 
> > > Date: Monday, 16 May 2016 at 01:07
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Subject

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-03 Thread Hongbin Lu
I agree that a heterogeneous cluster is more advanced and harder to control, but 
I don't get why we (as service developers/providers) should care about that. If there 
is a significant portion of users asking for advanced topologies (i.e. 
heterogeneous clusters) and willing to deal with the complexities, Magnum should 
just provide them (unless there are technical difficulties or other valid 
arguments). From my point of view, Magnum should support the basic use cases 
well (i.e. homogeneous), *and* be flexible enough to accommodate various advanced 
use cases if we can.

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 7:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> I am really struggling to accept the idea of heterogeneous clusters. My
> experience causes me to question whether a heterogeneous cluster makes
> sense for Magnum. I will try to explain why I have this hesitation:
> 
> 1) If you have a heterogeneous cluster, it suggests that you are using
> external intelligence to manage the cluster, rather than relying on it
> to be self-managing. This is an anti-pattern that I refer to as “pets"
> rather than “cattle”. The anti-pattern results in brittle deployments
> that rely on external intelligence to manage (upgrade, diagnose, and
> repair) the cluster. The automation of the management is much harder
> when a cluster is heterogeneous.
> 
> 2) If you have a heterogeneous cluster, it can fall out of balance.
> This means that if one of your “important” or “large” members fails,
> there may not be adequate remaining members in the cluster to continue
> operating properly in the degraded state. The logic of how to track and
> deal with this needs to be handled. It’s much simpler in the
> homogeneous case.
> 
> 3) Heterogeneous clusters are complex compared to homogeneous clusters.
> They are harder to work with, and that usually means that unplanned
> outages are more frequent, and last longer than they with a homogeneous
> cluster.
> 
> Summary:
> 
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
> 
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the
> cluster controller
>   - More reliable
> 
> My bias is to assert that applications that want a heterogeneous mix of
> system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up
> with a composition of simple systems rather than a larger complex one.
> 
> Adrian
> 
> 
> > On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> >
> > Personally, I think this is a good idea, since it can address a set
> of similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud.
> For example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Message-
> >> From: Hongbin Lu
> >> Sent: June-01-16 11:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> >> managing the bay nodes
> >>
> >> Hi team,
> >>
> >> A blueprint was created for tracking this idea:
> >> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> >> nodes . I won't approve the BP until there is a team decision on
> >> accepting/rejecting the idea.
> >>
> >> From the discussion in design summit, it looks everyone is OK with
> >> the idea in general (with some disagreements in the API style).
> >> However, from the last team meeting, it looks some people disagree
> >> with the idea fundamentally. so I re-raised this ML to re-discuss.
> >>
> >> If you agree or disagree with the idea of m

[openstack-dev] Announcing Higgins -- Container management service for OpenStack

2016-06-03 Thread Hongbin Lu
Hi all,

We would like to introduce you to a new container project for OpenStack called 
Higgins (it might be renamed later [1]).

Higgins is a Container Management service for OpenStack. The key objective of 
the Higgins project is to enable tight integration between OpenStack and 
container technologies. Until now, there has been no perfect solution that can 
effectively bring containers to OpenStack. Magnum provides a service to provision 
and manage Container Orchestration Engines (COEs), such as Kubernetes, Docker 
Swarm, and Apache Mesos, on top of Nova instances, but container management is 
out of its scope [2]. Nova-docker enables operating Docker containers from 
existing Nova APIs, but it can't support container features that go beyond the 
compute model. The Heat docker plugin allows using Docker containers as Heat 
resources, but it has a similar limitation to nova-docker. Generally speaking, 
OpenStack lacks a container management service that can integrate containers 
with OpenStack, and Higgins was created to fill the gap.

Higgins aims to provide an OpenStack-native API for launching and managing 
containers backed by different container technologies, such as Docker, Rocket, 
etc. Higgins doesn't require calling other services/tools to provision the 
container infrastructure. Instead, it relies on existing infrastructure that is 
set up by operators. In our vision, the key value Higgins brings to OpenStack is 
enabling one platform for provisioning and managing VMs, baremetals, and 
containers as compute resources. In particular, VMs, baremetals, and containers 
will share the following:
- Single authentication and authorization system: Keystone
- Single UI Dashboard: Horizon
- Single resource and quota management
- Single block storage pools: Cinder
- Single networking layer: Neutron
- Single CLI: OpenStackClient
- Single image management: Glance
- Single Heat template for orchestration
- Single resource monitoring and metering system: Telemetry

For more information, please find below:
Wiki: https://wiki.openstack.org/wiki/Higgins
The core team: https://review.openstack.org/#/admin/groups/1382,members
Team meeting: Every Tuesday 0300 UTC at #openstack-meeting

NOTE: we are looking for feedback to shape the project roadmap. If you're 
interested in this project, we appreciate your inputs in the etherpad: 
https://etherpad.openstack.org/p/container-management-service

Best regards,
The Higgins team

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095746.html
[2] https://review.openstack.org/#/c/311476/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] The Magnum Midcycle

2016-06-07 Thread Hongbin Lu
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let us 
know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-06-07 Thread Hongbin Lu
Hi Heat team,

A question inline.

Best regards,
Hongbin

> -Original Message-
> From: Steven Hardy [mailto:sha...@redhat.com]
> Sent: March-03-16 3:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
> different availability zones
> 
> On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
> > On 02/03/16 05:50, Mathieu Velten wrote:
> > >Hi all,
> > >
> > >I am looking at a way to spawn nodes in different specified
> > >availability zones when deploying a cluster with Magnum.
> > >
> > >Currently Magnum directly uses predefined Heat templates with Heat
> > >parameters to handle configuration.
> > >I tried to reach my goal by sticking to this model, however I
> > >couldn't find a suitable Heat construct that would allow that.
> > >
> > >Here are the details of my investigation :
> > >- OS::Heat::ResourceGroup doesn't allow to specify a list as a
> > >variable that would be iterated over, so we would need one
> > >ResourceGroup by AZ
> > >- OS::Nova::ServerGroup only allows restriction at the hypervisor
> > >level
> > >- OS::Heat::InstanceGroup has an AZs parameter but it is marked
> > >unimplemented , and is CFN specific.
> > >- OS::Nova::HostAggregate only seems to allow adding some metadatas
> > >to a group of hosts in a defined availability zone
> > >- repeat function only works inside the properties section of a
> > >resource and can't be used at the resource level itself, hence
> > >something like that is not allowed :
> > >
> > >resources:
> > >   repeat:
> > > for_each:
> > >   <%az%>: { get_param: availability_zones }
> > > template:
> > >   rg-<%az%>:
> > > type: OS::Heat::ResourceGroup
> > > properties:
> > >   count: 2
> > >   resource_def:
> > > type: hot_single_server.yaml
> > > properties:
> > >   availability_zone: <%az%>
> > >
> > >
> > >The only possibility that I see is generating a ResourceGroup by AZ,
> > >but it would induce some big changes in Magnum to handle
> > >modification/generation of templates.
> > >
> > >Any ideas ?
> >
> > This is a long-standing missing feature in Heat. There are two
> > blueprints for this (I'm not sure why):
> >
> > https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
> > https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones
> >
> > The latter had a spec with quite a lot of discussion:
> >
> > https://review.openstack.org/#/c/105907
> >
> > And even an attempted implementation:
> >
> > https://review.openstack.org/#/c/116139/
> >
> > which was making some progress but is long out of date and would need
> > serious work to rebase. The good news is that some of the changes I
> > made in Liberty like https://review.openstack.org/#/c/213555/ should
> > hopefully make it simpler.
> >
> > All of which is to say, if you want to help then I think it would be
> > totally do-able to land support for this relatively early in Newton :)
> >
> >
> > Failing that, the only think I can think to try is something I am
> > pretty sure won't work: a ResourceGroup with something like:
> >
> >   availability_zone: {get_param: [AZ_map, "%i"]}
> >
> > where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
> > "az-1", ...} and you're using the member index to pick out the AZ to
> > use from the parameter. I don't think that works (if "%i" is resolved
> > after get_param then it won't, and I suspect that's the case) but
> it's
> > worth a try if you need a solution in Mitaka.
> 
> Yeah, this won't work if you attempt to do the map/index lookup in the
> top-level template where the ResourceGroup is defined, but it *does*
> work if you pass both the map and the index into the nested stack, e.g
> something like this (untested):
> 
> $ cat rg_az_map.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   az_map:
> type: json
> default:
>   '0': az1
>   '1': az2
> 
> resources:
>  AGroup:
> type: OS::Heat::ResourceGroup
> properties:
>   count: 2
>   resource_def:
> type: server_mapped_az.yaml
> properties:
>   availability_zone_map: {get_param: az_map}
>   index: '%index%'
> 
> $ cat server_mapped_az.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   availability_zone_map:
> type: json
>   index:
> type: string
> 
> resources:
>  server:
> type: OS::Nova::Server
> properties:
>   image: the_image
>   flavor: m1.foo
>   availability_zone: {get_param: [availability_zone_map, {get_param: index}]}

This is nice. It seems to address our heterogeneity requirement at *deploy* 
time. However, I wonder what the runtime behavior is. For example, I deploy a 
stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml 

Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-06-07 Thread Hongbin Lu
Hi all,

Thanks for your votes. Eli Qiao has been added to the core team: 
https://review.openstack.org/#/admin/groups/1382,members .

Best regards,
Hongbin

> -Original Message-
> From: Chandan Kumar [mailto:chku...@redhat.com]
> Sent: June-01-16 12:27 AM
> To: Sheel Rana Insaan
> Cc: Hongbin Lu; adit...@nectechnologies.in;
> vivek.jain.openst...@gmail.com; Shuu Mutou; Davanum Srinivas; hai-
> x...@xr.jp.nec.com; Yuanying; Kumari, Madhuri; yanya...@cn.ibm.com;
> flw...@catalyst.net.nz; OpenStack Development Mailing List (not for
> usage questions); Qi Ming Teng; sitlani.namr...@yahoo.in;
> qiaoliy...@gmail.com
> Subject: Re: [Higgins] Proposing Eli Qiao to be a Higgins core
> 
> Hello,
> 
> 
> > On Jun 1, 2016 3:09 AM, "Hongbin Lu"  wrote:
> >>
> >> Hi team,
> >>
> >>
> >>
> >> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins
> core.
> >> Normally, the requirement to join the core team is to consistently
> >> contribute to the project for a certain period of time. However,
> >> given the fact that the project is new and the initial core team was
> >> formed based on a commitment, I am fine to propose a new core based
> >> on a strong commitment to contribute plus a few useful
> >> patches/reviews. In addition, Eli Qiao is currently a Magnum core
> and
> >> I believe his expertise will be an asset of Higgins team.
> >>
> >>
> 
> +1 from my side.
> 
> Thanks,
> 
> Chandan Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-07 Thread Hongbin Lu
Hi all,

According to the decision at the last team meeting, we will rename the project 
to “Zun”. Eli Qiao has submitted a rename request: 
https://review.openstack.org/#/c/326306/ . The infra team will rename the 
project in gerrit and git in the next maintenance window (possibly a couple of 
months from now). In the meantime, I propose that we start using the new name 
ourselves. That includes:
- Use the new launchpad project: https://launchpad.net/zun (we need help copying 
your bugs and BPs to the new LP project)
- Send emails with “[Zun]” in the subject
- Use the new IRC channel #openstack-zun
(and others if you can think of any)

If you have any concerns or suggestions, please don’t hesitate to contact us. 
Thanks.

Best regards,
Hongbin

From: Yanyan Hu [mailto:huyanya...@gmail.com]
Sent: June-02-16 3:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Aha, it's pretty interesting, I vote for Zun as well :)

2016-06-02 12:56 GMT+08:00 Fei Long Wang <feil...@catalyst.net.nz>:
+1 for Zun, I love it and it's definitely a good container :)


On 02/06/16 15:46, Monty Taylor wrote:
> On 06/02/2016 06:29 AM, 秀才 wrote:
>> i suggest a name Zun :)
>> please see the reference: https://en.wikipedia.org/wiki/Zun
> It's available on pypi and launchpad. I especially love that one of the
> important examples is the "Four-goat square Zun"
>
> https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun
>
> I don't get a vote - but I vote for this one.
>
>> -- Original --
>> *From: * "Rochelle 
>> Grober";mailto:rochelle.gro...@huawei.com>>;
>> *Date: * Thu, Jun 2, 2016 09:47 AM
>> *To: * "OpenStack Development Mailing List (not for usage
>> questions)"mailto:openstack-dev@lists.openstack.org>>;
>> *Cc: * "Haruhiko 
>> Katou"mailto:har-ka...@ts.jp.nec.com>>;
>> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Well, you could stick with the wine bottle analogy  and go with a bigger
>> size:
>>
>> Jeroboam
>> Methuselah
>> Salmanazar
>> Balthazar
>> Nabuchadnezzar
>>
>> --Rocky
>>
>> -Original Message-
>> From: Kumari, Madhuri 
>> [mailto:madhuri.kum...@intel.com<mailto:madhuri.kum...@intel.com>]
>> Sent: Wednesday, June 01, 2016 3:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Haruhiko Katou
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Thanks Shu for providing suggestions.
>>
>> I wanted the new name to be related to containers as Magnum is also
>> synonym for containers. So I have few options here.
>>
>> 1. Casket
>> 2. Canister
>> 3. Cistern
>> 4. Hutch
>>
>> All above options are free to be taken on pypi and Launchpad.
>> Thoughts?
>>
>> Regards
>> Madhuri
>>
>> -Original Message-
>> From: Shuu Mutou 
>> [mailto:shu-mu...@rf.jp.nec.com<mailto:shu-mu...@rf.jp.nec.com>]
>> Sent: Wednesday, June 1, 2016 11:11 AM
>> To: 
>> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
>> Cc: Haruhiko Katou mailto:har-ka...@ts.jp.nec.com>>
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> I found container-related names and checked whether other projects use them.
>>
>> https://en.wikipedia.org/wiki/Straddle_carrier
>> https://en.wikipedia.org/wiki/Suezmax
>> https://en.wikipedia.org/wiki/Twistlock
>>
>> These words are not used by other project on PYPI and Launchpad.
>>
>> ex.)
>> https://pypi.python.org/pypi/straddle
>> https://launchpad.net/straddle
>>
>>
>> However, since the renaming chance in the N cycle will be handled by the Infra team
>> this Friday, we would not meet the deadline. So:
>>
>> 1. use 'Higgins' ('python-higgins' for package name)
>> 2. consider another name for the next renaming chance (after half a year)
>>
>> Thoughts?
>>
>>
>> Regards,
>> Shu
>>
>>
>>> -Original Message-
>>> From: Hongbin Lu 
>>> [mailto:hongbin...@huawei.com<mailto:hongbin...@huawei.com>]
>>> Sent: Wednesday, June 01, 2016 11:37 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> mailto:openstack-dev@lists.openstack.org>>
>>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>>

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Hongbin Lu
Ricardo,

Thanks for the offer. May I know where the exact location is?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> 
> Hi Hongbin.
> 
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
> 
> Let us know.
> 
> Ricardo
> 
> 
> 
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Hongbin Lu
Thanks to CERN for offering to host. We will discuss the dates and location at 
the next team meeting [1].

[1] 
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-06-14_1600_UTC

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the anti-matter factory and 
superconducting magnet test labs before or afterwards, if anyone is interested…

Tim

From: Spyros Trigazis mailto:strig...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Ricardo,

Thanks for the offer. Would I know where is the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com<mailto:rocha.po...@gmail.com>]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> mailto:hongbin...@huawei.com>>
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-10 Thread Hongbin Lu
+1

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: June-10-16 5:33 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou
> Subject: [openstack-dev] [magnum-ui][magnum] Proposed Core addition,
> and removal notice
> 
> Hi team,
> 
> I propose the following changes to the magnum-ui core group.
> 
> + Thai Tran
>   http://stackalytics.com/report/contribution/magnum-ui/90
>   I'm so happy to propose Thai as a core reviewer.
>   His reviews have been extremely valuable for us.
>   And he is an active Horizon core member.
>   I believe his help will lead us to the correct future.
> 
> - David Lyle
> 
> http://stackalytics.com/?metric=marks&project_type=openstack&release=al
> l&module=magnum-ui&user_id=david-lyle
>   No activities for Magnum-UI since Mitaka cycle.
> 
> - Harsh Shah
>   http://stackalytics.com/report/users/hshah
>   No activities for OpenStack in this year.
> 
> - Ritesh
>   http://stackalytics.com/report/users/rsritesh
>   No activities for OpenStack in this year.
> 
> Please respond with your +1 votes to approve this change or -1 votes to
> oppose.
> 
> Thanks,
> Shu
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-06-10 Thread Hongbin Lu
Yuanying,

The etherpads you pointed to are from a few years ago and the information looks a 
bit outdated. I think we can collaborate on a similar etherpad with updated 
information (i.e. remove container runtimes that we don't care about, add container 
runtimes that we do care about). The existing etherpad can be used as a starting 
point. What do you think?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: June-01-16 12:43 AM
To: OpenStack Development Mailing List (not for usage questions); Sheel Rana 
Insaan
Cc: adit...@nectechnologies.in; yanya...@cn.ibm.com; flw...@catalyst.net.nz; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Chandan Kumar
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Just F.Y.I.

When Magnum wanted to become “Container as a Service”,
There were some discussion about API design.

* https://etherpad.openstack.org/p/containers-service-api
* https://etherpad.openstack.org/p/openstack-containers-service-api



2016-06-01 (Wed) 12:09 Hongbin Lu <hongbin...@huawei.com>:
Sheel,

Thanks for taking the responsibility. Assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan 
[mailto:ranasheel2...@gmail.com<mailto:ranasheel2...@gmail.com>]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in<mailto:adit...@nectechnologies.in>; 
vivek.jain.openst...@gmail.com<mailto:vivek.jain.openst...@gmail.com>; 
flw...@catalyst.net.nz<mailto:flw...@catalyst.net.nz>; Shuu Mutou; Davanum 
Srinivas; OpenStack Development Mailing List (not for usage questions); Chandan 
Kumar; hai...@xr.jp.nec.com<mailto:hai...@xr.jp.nec.com>; Qi Ming Teng; 
sitlani.namr...@yahoo.in<mailto:sitlani.namr...@yahoo.in>; Yuanying; Kumari, 
Madhuri; yanya...@cn.ibm.com<mailto:yanya...@cn.ibm.com>
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" <hongbin...@huawei.com> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of 
time to work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin


Re: [openstack-dev] Question about OpenStack Containers and Magnum

2016-06-11 Thread Hongbin Lu
Hi,

It looks like Spyros already answered your question: 
http://lists.openstack.org/pipermail/openstack-dev/2016-June/097083.html . Is 
there anything else we can help with, or do you have further questions?

Best regards,
Hongbin

From: zhihao wang [mailto:wangzhihao...@hotmail.com]
Sent: June-11-16 1:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Question about OpenStack Containers and Magnum


Dear Openstack Dev Members:

I would like to install Magnum on OpenStack to manage Docker containers.
I have an OpenStack Liberty production setup: one controller node and a few 
compute nodes.

I am wondering how I can install OpenStack Magnum on OpenStack Liberty in a 
distributed production environment (1 controller node and some compute nodes)?

I know I can install Magnum using devstack, but I don't want the developer 
version.

Is there a way/guide to install it in a production environment?

Thanks
Wally


[openstack-dev] [Higgins][Zun] Project roadmap

2016-06-12 Thread Hongbin Lu
Hi team,

During the team meetings over the past few weeks, we collaborated on the 
initial project roadmap. I have summarized it below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD); a rough sketch of these operations follows this list.
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two sets of APIs to access containers: the Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and the Nova APIs will expose capabilities that are shared 
between containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volumes.
* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (i.e. support for layered container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable scheduler options to enforce that neighboring containers 
belong to the same tenant.
** Support hypervisor-based container runtimes.
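
To make the first bullet more concrete, here is a rough sketch of the kind of 
basic CRUD operations the common abstraction is expected to cover. It uses the 
Docker SDK for Python purely as a stand-in for one runtime backend; the names 
and calls below are illustrative assumptions, not Zun code.

    # Illustrative only: basic container CRUD against one runtime (Docker).
    # The common abstraction would expose similar operations across runtimes.
    import docker

    client = docker.from_env()

    # Create
    container = client.containers.run("nginx:latest", name="demo", detach=True)

    # Read
    print(client.containers.get("demo").status)

    # Update (the simplest state changes: stop/start)
    container.stop()
    container.start()

    # Delete
    container.remove(force=True)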

The following topics have been discussed, but the team could not reach 
consensus on including them in the short-term project scope. We skipped them 
for now and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (e.g. keeping containers alive, load balancer 
setup, rolling upgrades).
* Nested containers use cases (e.g. provisioning container hosts).
* Container composition (e.g. support for a docker-compose-like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free to 
point out anything that is wrong or missing.

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Hongbin Lu
[...] availability zones, flavors), but need to elaborate the details further.

The idea revolves around creating a heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.

Tom


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions at the Austin 
design summit. Please feel free to point out anything that is incorrect or 
incomplete. Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay drivers can have API extensions, and the Magnum CLI could load 
the extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credential in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

8. Unified abstraction for COEs: 
https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
- Create a new project for this efforts
- Alter Magnum mission statement to clarify its goal (Magnum is not a container 
service, it is sort of a COE management service)

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
- Clear Container support: Clear Container needs to integrate with COEs first. 
After the integration is done, Magnum team will revisit bringing the Clear 
Container COE to Magnum.
- Enhance mesos bay to DCOS bay: Need to do it step-by-step: First, create a 
new DCOS bay type. Then, deprecate and delete the mesos bay type.
- Start enforcing API deprecation policy: 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
- Freeze API v1 after some patches are merged.
- Multi-tenancy within a bay: not the priority in Newton cycle
- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (i.e. different 
availability zones, flavors), but the details need to be elaborated further.


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
Sent: June-13-16 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: yanya...@cn.ibm.com; Qi Ming Teng; adit...@nectechnologies.in; 
sitlani.namr...@yahoo.in; flw...@catalyst.net.nz; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap



On Mon, Jun 13, 2016 at 12:10 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi team,

During the team meetings these weeks, we collaborated the initial project 
roadmap. I summarized it as below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two set of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.

Great! Let us know anytime we can help

* Leverage Cinder for container data volume.
Have you considered fuxi?

https://github.com/openstack/fuxi
[Hongbin Lu] We discussed whether we should leverage Kuryr/Fuxi for storage, 
but we are unclear about what this project offers exactly and how it works. The 
maturity of the project is also a concern, but we will revisit it later.


* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (i.e. support layer of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable options for scheduler to enforce neighboring containers 
belonging to the same tenant.

What about making the scheduler pluggable instead of having a lot of 
configuration options?
[Hongbin Lu] In the short term, no. We will implement a very simple scheduler 
to start. In the long term, we will wait for the scheduler-as-a-service 
project: https://wiki.openstack.org/wiki/Gantt . I believe Gantt will have a 
pluggable architecture so that your requirement will be satisfied. If not, we 
will revisit it.


** Support hypervisor-based container runtimes.

Is that hyper.sh?
[Hongbin Lu] It could be, or Clear Container, or something similar.



The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

Will it have ordering primitives, i.e. this container won't start until that 
one is up and running?
I also wonder whether the Higgins container abstraction will have rich status 
reporting that can be used in ordering.
For example, whether it can differentiate started containers from those that 
are already listening on their exposed ports.
[Hongbin Lu] I am open to that, but we need to discuss the idea further.


NOTE: I might forgot and mis-understood something. Please feel free to point 
out if anything is wrong or missing.

Best regards,
Hongbin



Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


> -Original Message-
> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> Sent: June-13-16 1:43 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> 
> 
> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> Hi team,
> >>
> >> During the team meetings these weeks, we collaborated the initial
> >> project roadmap. I summarized it as below. Please review.
> >>
> >> * Implement a common container abstraction for different container
> >> runtimes. The initial implementation will focus on supporting basic
> >> container operations (i.e. CRUD).
> >
> > What COE's are being considered for the first implementation? Just
> > docker and kubernetes?
[Hongbin Lu] Container runtimes, docker in particular, are being considered for 
the first implementation. We discussed how to support COEs in Zun but could not 
reach an agreement on the direction. I will leave it for further discussion.

> >
> >> * Focus on non-nested containers use cases (running containers on
> >> physical hosts), and revisit nested containers use cases (running
> >> containers on VMs) later.
> >> * Provide two set of APIs to access containers: The Nova APIs and
> the
> >> Zun-native APIs. In particular, the Zun-native APIs will expose full
> >> container capabilities, and Nova APIs will expose capabilities that
> >> are shared between containers and VMs.
> >
> > - Is the nova side going to be implemented in the form of a Nova
> > driver (like ironic's?)? What do you mean by APIs here?
[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova. The idea 
is similar to Ironic.

> >
> > - What operations are we expecting this to support (just CRUD
> > operations on containers?)?
[Hongbin Lu] We are working on finding the list of operations to support. There 
is a BP for tracking this effort: 
https://blueprints.launchpad.net/zun/+spec/api-design .

> >
> > I can see this driver being useful for specialized services like
> Trove
> > but I'm curious/concerned about how this will be used by end users
> > (assuming that's the goal).
[Hongbin Lu] I agree that end users might not be satisfied by basic container 
operations like CRUD. We will discuss how to offer more to make the service 
useful in production.

> >
> >
> >> * Leverage Neutron (via Kuryr) for container networking.
> >> * Leverage Cinder for container data volume.
> >> * Leverage Glance for storing container images. If necessary,
> >> contribute to Glance for missing features (i.e. support layer of
> >> container images).
> >
> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> This support is very minimalistic in nature, since it doesn't do
> anything beyond just storing a docker FS tar ball.
> I think it was felt that further support for the docker FS was needed.
> While there were suggestions of a private docker registry, having
> something in band (w.r.t. OpenStack) may be desirable.
[Hongbin Lu] Yes, Glance doesn't support layered container images, which is a 
missing feature.
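
For reference, the minimalistic support discussed above roughly amounts to 
storing a "docker save" tarball as an opaque Glance image, with no layer 
awareness. A hedged sketch with python-glanceclient; the auth values, image 
name and tarball path are made-up placeholders, not anything from Magnum or Zun:

    # Illustration only: upload a "docker save" tarball to Glance v2.
    # Glance stores it as a single blob; image layers are not preserved.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from glanceclient import Client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_name='Default',
                       project_domain_name='Default')
    glance = Client('2', session=session.Session(auth=auth))

    image = glance.images.create(name='nginx', disk_format='raw',
                                 container_format='docker')
    # nginx.tar would come from: docker save nginx > nginx.tar
    with open('nginx.tar', 'rb') as data:
        glance.images.upload(image.id, data)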

> >> * Support enforcing multi-tenancy by doing the following:
> >> ** Add configurable options for scheduler to enforce neighboring
> >> containers belonging to the same tenant.
> >> ** Support hypervisor-based container runtimes.
> >>
> >> The following topics have been discussed, but the team cannot reach
> >> consensus on including them into the short-term project scope. We
> >> skipped them for now and might revisit them later.
> >> * Support proxying API calls to COEs.
> >
> > Any link to what this proxy will do and what service it'll talk to?
> > I'd generally advice against having proxy calls in services. We've
> > just done work in Nova to deprecate the Nova Image proxy.
[Hongbin Lu] Maybe "proxy" is not the right word. What I mean is translating 
the request into API calls to the COEs. For example, a user requests to create 
a container in Zun, and then Zun creates a single-container pod in k8s.
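
As a rough illustration of that translation (a sketch only, using the official 
Kubernetes Python client; the pod name and image are made-up placeholders):

    # Sketch: a "create container" request rendered as a single-container pod.
    from kubernetes import client, config

    config.load_kube_config()       # credentials for the bay's k8s API
    core_v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="zun-demo"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="zun-demo", image="nginx:latest"),
        ]),
    )
    core_v1.create_namespaced_pod(namespace="default", body=pod)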

> >
> >> * Advanced container operations (i.e. keep container alive, load
> >> balancer setup, rolling upgrade).
> >> * Nested containers use cases (i.e. provision container hosts).
> >> * Container composition (i.e. support docker-compose like DSL).
> >>
> >> NOTE: I might forgot and mis-understood something. Please feel free
> >> to point out if anything is wrong or missing.
> >
> > It sounds yo

Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-14 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: June-14-16 3:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> On 13/06/16 18:46 +, Hongbin Lu wrote:
> >
> >
> >> -Original Message-
> >> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> >> Sent: June-13-16 1:43 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> >>
> >>
> >>
> >> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> >> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> >> Hi team,
> >> >>
> >> >> During the team meetings these weeks, we collaborated the initial
> >> >> project roadmap. I summarized it as below. Please review.
> >> >>
> >> >> * Implement a common container abstraction for different
> container
> >> >> runtimes. The initial implementation will focus on supporting
> >> >> basic container operations (i.e. CRUD).
> >> >
> >> > What COE's are being considered for the first implementation? Just
> >> > docker and kubernetes?
> >[Hongbin Lu] Container runtimes, docker in particular, are being
> considered for the first implementation. We discussed how to support
> COEs in Zun but cannot reach an agreement on the direction. I will
> leave it for further discussion.
> >
> >> >
> >> >> * Focus on non-nested containers use cases (running containers on
> >> >> physical hosts), and revisit nested containers use cases (running
> >> >> containers on VMs) later.
> >> >> * Provide two set of APIs to access containers: The Nova APIs and
> >> the
> >> >> Zun-native APIs. In particular, the Zun-native APIs will expose
> >> >> full container capabilities, and Nova APIs will expose
> >> >> capabilities that are shared between containers and VMs.
> >> >
> >> > - Is the nova side going to be implemented in the form of a Nova
> >> > driver (like ironic's?)? What do you mean by APIs here?
> >[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova.
> The idea is similar to Ironic.
> >
> >> >
> >> > - What operations are we expecting this to support (just CRUD
> >> > operations on containers?)?
> >[Hongbin Lu] We are working on finding the list of operations to
> support. There is a BP for tracking this effort:
> https://blueprints.launchpad.net/zun/+spec/api-design .
> >
> >> >
> >> > I can see this driver being useful for specialized services like
> >> Trove
> >> > but I'm curious/concerned about how this will be used by end users
> >> > (assuming that's the goal).
> >[Hongbin Lu] I agree that end users might not be satisfied by basic
> container operations like CRUD. We will discuss how to offer more to
> make the service to be useful in production.
> 
> I'd probably leave this out for now but this is just my opinion.
> Personally, I think users, if presented with both APIs - nova's and
> Zun's - they'll prefer Zun's.
> 
> Specifically, you don't interact with a container the same way you
> interact with a VM (but I'm sure you know all these way better than me).
> I guess my concern is that I don't see too much value in this other
> than allowing specialized services to run containers through Nova.

ACK

> 
> 
> >> >
> >> >
> >> >> * Leverage Neutron (via Kuryr) for container networking.
> >> >> * Leverage Cinder for container data volume.
> >> >> * Leverage Glance for storing container images. If necessary,
> >> >> contribute to Glance for missing features (i.e. support layer of
> >> >> container images).
> >> >
> >> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> >> This support is very minimalistic in nature, since it doesn't do
> >> anything beyond just storing a docker FS tar ball.
> >> I think it was felt that, further support for docker FS was needed.
> >> While there were suggestions of private docker registry, having
> >> something in band (w.r.t openstack) maybe desirable.
> >[Hongbin Lu] Yes, Glance doesn't support layer of container images
> which is a missing feature.
> 
> Yup, I didn't mean to imply that would d

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi Tim,

Thanks for offering to host. We discussed the midcycle location at the last 
team meeting. It looks like a significant number of Magnum team members have 
difficulties traveling to Geneva, so we are not able to hold the midcycle at 
CERN. Thanks again for your willingness to host us.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the anti-matter factory and 
super conducting magnet test labs prior or afterwards if anyone is interested…

Tim

From: Spyros Trigazis <strig...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu <hongbin...@huawei.com> wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com<mailto:rocha.po...@gmail.com>]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi team,

As discussed in the team meeting, we are going to choose between Austin and San 
Francisco. A Doodle poll was created to select the location: 
http://doodle.com/poll/2x9utspir7vk8ter . Please cast your vote there. On 
behalf of the Magnum team, thanks to Rackspace for offering to host.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-09-16 3:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Rackspace is willing to host in Austin, TX or San Antonio, TX, or San 
Francisco, CA.

--
Adrian

On Jun 7, 2016, at 1:35 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let 
us know if you are interested in hosting us.

Best regards,
Hongbin


Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Hi neutron-lbaas team,

Could anyone confirm whether there is an operator-facing install guide for 
neutron-lbaas? So far, the closest one we could find is 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I ask because there are several users who 
want to install Magnum but couldn't find instructions for installing 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instructions for users who want a load balancer. If the 
install guide is missing, is there any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
> 
> Brandon,
> 
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
> 
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
> 
> Adrian
> 
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
>  wrote:
> >
> > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official LBaaS guide so that our users will
> >> have complete instructions. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>


Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Thanks.

Actually, we were looking for lbaas v1 and the linked document seems to mainly 
talk about v2, but we are migrating to v2 so I am satisfied.

Best regards,
Hongbin

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: June-14-16 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation 
guide

Let us know if this is what you're looking for:

http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

Thanks Major Hayden for writing it up.
Anne

On Tue, Jun 14, 2016 at 3:54 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi neutron-lbaas team,

Could anyone confirm if there is an operator-facing install guide for 
neutron-lbaas. So far, the closest one we could find is: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I asked that because there are several users 
who want to install Magnum but couldn't find an instruction to install 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instruction for users who want a load balancer. If the 
install guide is missing, any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto 
> [mailto:adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
>
> Brandon,
>
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
>
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
>
> Adrian
>
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
> > <brandon.lo...@rackspace.com> wrote:
> >
> > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official LBaaS guide so that our users will
> >> have complete instructions. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>



--
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-16 Thread Hongbin Lu
Welcome! Please feel free to ping us in IRC (#openstack-zun) or join our weekly 
meeting (https://wiki.openstack.org/wiki/Zun#Meetings). I am happy to discuss 
how to collaborate further.

Best regards,
Hongbin

From: Pengfei Ni [mailto:feisk...@gmail.com]
Sent: June-16-16 6:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qi Ming Teng; yanya...@cn.ibm.com; flw...@catalyst.net.nz; 
adit...@nectechnologies.in; sitlani.namr...@yahoo.in; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap

Hello, everyone,

Hypernetes has done some work similar to this project, that is:

- Leverage Neutron for container networking
- Leverage Cinder for storage
- Leverage Keystone for auth
- Leverage HyperContainer for a hypervisor-based container runtime

We could help to provide hypervisor-based container runtime (HyperContainer) 
integration for Zun.

See https://github.com/hyperhq/hypernetes and 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html
 for more information about Hypernetes, and see 
https://github.com/hyperhq/hyperd for more information about HyperContainer.


Best regards.


---
Pengfei Ni
Software Engineer @Hyper

2016-06-13 6:10 GMT+08:00 Hongbin Lu <hongbin...@huawei.com>:
Hi team,

During the team meetings these weeks, we collaborated the initial project 
roadmap. I summarized it as below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two set of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (i.e. support layer of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable options for scheduler to enforce neighboring containers 
belonging to the same tenant.
** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

NOTE: I might forgot and mis-understood something. Please feel free to point 
out if anything is wrong or missing.

Best regards,
Hongbin



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Hongbin Lu
Ricardo,

Thanks for sharing. It is good to hear that Magnum works well with a 200-node 
cluster.

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-17-16 11:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
> nodes
> 
> Hi.
> 
> Just thought the Magnum team would be happy to hear :)
> 
> We had access to some hardware the last couple days, and tried some
> tests with Magnum and Kubernetes - following an original blog post from
> the kubernetes team.
> 
> Got a 200 node kubernetes bay (800 cores) reaching 2 million requests /
> sec.
> 
> Check here for some details:
> https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-
> kubernetes-2-million.html
> 
> We'll try bigger in a couple weeks, also using the Rally work from
> Winnie, Ton and Spyros to see where it breaks. Already identified a
> couple issues, will add bugs or push patches for those. If you have
> ideas or suggestions for the next tests let us know.
> 
> Magnum is looking pretty good!
> 
> Cheers,
> Ricardo
> 


[openstack-dev] [Magnum] Midcycle location and date

2016-06-20 Thread Hongbin Lu
Hi all,

This is a reminder that there are Doodle polls for midcycle participants to 
select the location and time:

Location: http://doodle.com/poll/2x9utspir7vk8ter
Date: http://doodle.com/poll/5tbcyc37yb7ckiec

If you are able to attend the midcycle, I encourage you to vote for your 
preferred location and date. We will try to finalize everything in the team 
meeting tomorrow.

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-20 Thread Hongbin Lu
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097522.html
[2] https://review.openstack.org/#/c/328822/

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-07-16 3:02 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> +1 on this. Another use case would be 'fast storage' for dbs, 'any
> storage' for memcache and web servers. Relying on labels for this makes
> it really simple.
> 
> The alternative of doing it with multiple clusters adds complexity to
> the cluster(s) description by users.
> 
> On Fri, Jun 3, 2016 at 1:54 AM, Fox, Kevin M  wrote:
> > As an operator that has clouds that are partitioned into different
> host aggregates with different flavors targeting them, I totally
> believe we will have users that want to have a single k8s cluster span
> multiple different flavor types. I'm sure once I deploy magnum, I will
> want it too. You could have some special hardware on some nodes, not on
> others. but you can still have cattle, if you have enough of them and
> the labels are set appropriately. Labels allow you to continue to
> partition things when you need to, and ignore it when you dont, making
> administration significantly easier.
> >
> > Say I have a tenant with 5 gpu nodes, and 10 regular nodes allocated
> into a k8s cluster. I may want 30 instances of container x that doesn't
> care where they land, and prefer 5 instances that need cuda. The former
> can be deployed with a k8s deployment. The latter can be deployed with
> a daemonset. All should work well and very non pet'ish. The whole
> tenant could be viewed with a single pane of glass, making it easy to
> manage.
> >
> > Thanks,
> > Kevin
> > 
> > From: Adrian Otto [adrian.o...@rackspace.com]
> > Sent: Thursday, June 02, 2016 4:24 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > I am really struggling to accept the idea of heterogeneous clusters.
> My experience causes me to question whether a heterogeneus cluster
> makes sense for Magnum. I will try to explain why I have this
> hesitation:
> >
> > 1) If you have a heterogeneous cluster, it suggests that you are
> using external intelligence to manage the cluster, rather than relying
> on it to be self-managing. This is an anti-pattern that I refer to as
> “pets" rather than “cattle”. The anti-pattern results in brittle
> deployments that rely on external intelligence to manage (upgrade,
> diagnose, and repair) the cluster. The automation of the management is
> much harder when a cluster is heterogeneous.
> >
> > 2) If you have a heterogeneous cluster, it can fall out of balance.
> This means that if one of your “important” or “large” members fail,
> there may not be adequate remaining members in the cluster to continue
> operating properly in the degraded state. The logic of how to track and
> deal with this needs to be handled. It’s much simpler in the
> homogeneous case.
> >
> > 3) Heterogeneous clusters are complex compared to homogeneous
> clusters. They are harder to work with, and that usually means that
> unplanned outages are more frequent, and last longer than they with a
> homogeneous cluster.
> >
> > Summary:
> >
> > Heterogeneous:
> >   - Complex
> >   - Prone to imbalance upon node failure
> >   - Less reliable
> >
> > Homogeneous:
> >   - Simple
> >   - Don’t get imbalanced when a min_members concept is supported by
> the cluster controller
> >   - More reliable
> >
> > My bias is to assert that applications that want a heterogeneous mix
> of system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up
> with a composition of simple systems rather than a larger complex one.
> >
> > Adrian
> >
> >
> >> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> >>
> >> Personally, I think this is a good idea, since it can address a set
> of similar use cases like below:
> >> * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> >> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> >> * I want to scale the number of nodes in specific AZ/region/cloud.
> For example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >>
> >> The use case above should be very c

Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

2016-06-21 Thread Hongbin Lu
Gal,

Thanks for starting this ML thread. Since the work involves both teams, I think 
it is a good idea to start by splitting the task first. Then, we can see which 
items go to which team. Vikas, do you mind updating this thread once the task 
is split? Thanks in advance.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: June-21-16 2:14 AM
To: OpenStack Development Mailing List (not for usage questions); Vikas 
Choudhary; Antoni Segura Puimedon; Irena Berezovsky; Fawad Khaliq; Omer Anson; 
Hongbin Lu
Subject: [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

Hello all,

I am writing this out to provide awareness and hopefully get some work started
on the above topic.

We merged a spec about supporting nested containers and integration with Magnum 
some time ago [1]; Fawad (CCed) led this spec.

We are now seeking volunteers to start implementing this on both Kuryr and the 
needed parts in Magnum.

Vikas (CCed) volunteered in the last IRC meeting [2] to start and split this 
work into sub-tasks so it will be easier to share. Anyone else who is 
interested in joining this effort is more than welcome to join in and contact 
Vikas.
I know several other people have shown interest in working on this, so I hope 
we can pull everyone together in this thread, or online on IRC.

Thanks
Gal.

[1] https://review.openstack.org/#/c/269039/
[2] 
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-06-20-14.00.html


Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

2016-06-24 Thread Hongbin Lu


From: Vikas Choudhary [mailto:choudharyvika...@gmail.com]
Sent: June-22-16 3:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and 
Magnum Integration

Hi Eli Qiao,

Please find my responses inline.

On Wed, Jun 22, 2016 at 12:47 PM, taget <qiaoliy...@gmail.com> wrote:
Hi Vikas,
Thanks for your clarification; replies are inline.
On 2016-06-22 14:36, Vikas Choudhary wrote:

Magnum:

  1.  Support passing Neutron network names to container creation APIs, such as 
pod-create in the k8s case.
Hmm. Magnum has deleted all wrapper APIs for container creation, such as 
pod-create.
Oh, I was referring to the older design then. In that case, what would be the 
corresponding alternative now?
Is this related to Zun somehow?
[Hongbin Lu] The alternative is the native CLI tools (i.e. kubectl, docker). 
This is unrelated to Zun. Zun is a totally independent service, regardless of 
whether Magnum exists or not.




  1.  If Kuryr is used as the network driver at bay creation, update the Heat 
template creation logic to provision the kuryr-agent on all the bay nodes. This 
will also include passing the required configuration and credentials.

In this case, I am confused: we need to install the kuryr-agent on all bay 
nodes, and since the kuryr-agent binds Neutron ports to container ports, will 
we need to install the neutron-agent on the bay nodes too?
The neutron-agent will not be required on bay nodes. The kuryr-agent alone will 
be sufficient to plumb the VIFs and handle the VLAN tagging.





--

Best Regards,

Eli Qiao (乔立勇), Intel OTC.



[openstack-dev] [Magnum] Midcycle

2016-06-24 Thread Hongbin Lu
Hi all,

The Magnum midcycle will be held on Aug 4 - 5 in Austin. Below is the link to 
register. Hope to see you all there.

https://www.eventbrite.com/e/magnum-midcycle-tickets-26245489967

Best regards,
Hongbin


[openstack-dev] [Magnum] Want to know our users

2016-06-28 Thread Hongbin Lu
Hi all,

FYI, the Magnum team is collecting a list of Magnum users. If you are using 
Magnum, we would like to have your name and organization recorded in our wiki 
[1]. Please note that this is totally optional, but we hope you will let us 
know if you are one of our users.

[1] https://wiki.openstack.org/wiki/Magnum#Users

Best regards,
Hongbin


[openstack-dev] [Zun][Higgins] IRC channel rename notice

2016-06-28 Thread Hongbin Lu
Hi all,

This is a notice that we have moved our IRC channel from #openstack-higgins to 
#openstack-zun . All the bots have been moved to the new channel [1]. The old 
channel is no longer used.

[1] https://review.openstack.org/#/c/330017/

Best regards,
Hongbin


[openstack-dev] [Zun][Higgins] The design of Zun

2016-06-28 Thread Hongbin Lu
Hi all,

At the last team meeting, we made an important decision about the design. I 
would like to summarize it in this email so that everyone will be on the same 
page.

Short version:
* Zun aims to build a unified API layer to interface with various Container 
Orchestration Engines (COEs).
* Zun will provide a reference implementation of the API.

Long version:
The key objective of the Zun project is to bring various container technologies 
to OpenStack. Such container technologies include container runtimes (e.g. 
Docker, rkt, Clear Containers) and COEs (e.g. Kubernetes, Docker Swarm). The 
main obstacle is that these two groups of technologies look very different from 
each other, and it is hard to abstract all of them into a common set of APIs. 
Generally speaking, COEs are relatively high-level and focus on managing 
containerized applications, which typically consist of a set of containers, 
their inter-connections, their load balancers, and more. In comparison, 
container runtimes are relatively simple and focus on managing a single 
container. It doesn't seem to make sense to group them all together. A 
potential solution would be to drop one group of technologies and focus on the 
other. However, we decided to choose a better solution, which is to separate 
the support of these two groups of technologies.

First, we agreed to have Zun deeply integrate with COEs. In particular, Zun 
will build a unified API to abstract different COEs. The API should expose the 
common feature set among prevailing COEs, such as deploying an application to 
one or more containers, scaling the application, setting up a load balancer for 
the application, upgrading the application, etc. Second, we agreed to develop a 
reference implementation of the Zun API. The reference implementation will 
deeply integrate with various container runtimes, and focus on the basics of 
managing a single container and integrating containers with existing OpenStack 
primitives (i.e. networking, storage, authentication/authorization, monitoring, 
quota management, multi-tenancy, etc.). A rough sketch of this split follows.
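
To illustrate the split, here is a sketch of the two abstraction levels. This 
is purely illustrative; the class and method names are invented for this 
example and are not actual Zun code.

    # Illustrative sketch of the two abstraction levels discussed above.
    import abc


    class ContainerRuntimeDriver(abc.ABC):
        """Low-level interface: manage a single container (reference impl)."""

        @abc.abstractmethod
        def create(self, name, image, command=None):
            """Create a container from an image."""

        @abc.abstractmethod
        def delete(self, container_id):
            """Delete a container."""


    class COEDriver(abc.ABC):
        """High-level interface: manage containerized applications on a COE."""

        @abc.abstractmethod
        def deploy_app(self, app_spec):
            """Deploy an application to one or more containers."""

        @abc.abstractmethod
        def scale_app(self, app_id, replicas):
            """Scale the application."""

        @abc.abstractmethod
        def create_load_balancer(self, app_id, port):
            """Set up a load balancer for the application."""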

The details of the discussion can be found in this etherpad [1]. Please feel 
free to reply if you have any comments or if anything is unclear.

[1] https://etherpad.openstack.org/p/zun-architecture-decisions

Best regards,
Hongbin


[openstack-dev] [Higgins][Zun] Team meeting next week

2016-07-06 Thread Hongbin Lu
Hi all,

FYI, I won't be able to chair the next team meeting because I will be on a 
flight at that time. Madhuri will chair the next meeting on my behalf. Thanks.

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Use Keystone trusts in Magnum?

2016-07-06 Thread Hongbin Lu
Johannes,

Magnum generates a Keystone trust for each bay: 
https://blueprints.launchpad.net/magnum/+spec/create-trustee-user-for-each-bay 
. Possibly, you can reuse the trust stored in the bay for this purpose; a rough 
sketch follows.
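
For what it's worth, obtaining a trust-scoped session with keystoneauth1 and 
reading the bay's Heat stack could look roughly like this. This is a hedged 
sketch: the auth values are placeholders, and how the trust/stack IDs are 
looked up from the bay record is an assumption, not Magnum code.

    # Sketch: authenticate as the service user scoped to the bay's trust,
    # then fetch the bay's Heat stack. All values are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from heatclient import client as heat_client

    TRUST_ID = '<trust id stored with the bay>'
    STACK_ID = '<heat stack id of the bay>'

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='magnum',            # service user
                       password='service-password',
                       user_domain_name='Default',
                       trust_id=TRUST_ID)
    sess = session.Session(auth=auth)

    heat = heat_client.Client('1', session=sess)
    stack = heat.stacks.get(STACK_ID)
    print(stack.stack_status)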

Best regards,
Hongbin

> -Original Message-
> From: Johannes Grassler [mailto:jgrass...@suse.de]
> Sent: July-06-16 9:40 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [magnum] Use Keystone trusts in Magnum?
> 
> Hello,
> 
> I submitted https://review.openstack.org/#/c/326428 a while ago to get
> around having to configure Heat's policy.json in a very permissive
> manner[0]. I naively only tested it as one user, but gating caught that
> omission and dutifully failed (a user cannot stack-get another user's
> Heat stack, even if it's the Magnum service user). Ordinarily, that is.
> 
> Beyond the ordinary, Heat uses[1] Keystone trusts[2] to handle what is
> basically the same problem (acting on a user's behalf way past the time
> of the stack-create when the token used for the stack-create may have
> expired already).
> 
> I propose doing the same thing in Magnum to get the Magnum service user
> the ability to perform a stack-get on all of its bays' stacks. That way
> the hairy problems with the wide-open permissions neccessary for a
> global stack-list can be avoided entirely.
> 
> I'd be willing to implement this, either as part of the existing change
> referenced above or with a blueprint and all the bells and whistles.
> 
> So I have two questions:
> 
> 1) Is this an acceptable way to handle the issue?
> 
> 2) If so, is it blueprint material or can I get away with adding the
> code
> required for Keystone trusts to the existing change?
> 
> Cheers,
> 
> Johannes
> 
> 
> Footnotes:
> 
> [0] See Steven Hardy's excellent dissection of the problem at the root
> of it:
> 
>  http://lists.openstack.org/pipermail/openstack-dev/2016-
> July/098742.html
> 
> [1] http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-
> part-1-trusts.html
> 
> [2] https://wiki.openstack.org/wiki/Keystone/Trusts
> 
> --
> Johannes Grassler, Cloud Developer
> SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
> GF: Felix Imendörffer, Jane Smithard, Graham Norton Maxfeldstr. 5,
> 90409 Nürnberg, Germany
> 


[openstack-dev] [Zun][Higgins] Request initial feedback for the design spec

2016-07-08 Thread Hongbin Lu
Hi all,

Because I cannot attend the team meeting next week, I will leave a message 
here: I am working on a spec to define the high-level design of the Zun service.

https://etherpad.openstack.org/p/zun-containers-service-design-spec

I would encourage everyone to participate in this effort to shape our project. 
Please feel free to edit or leave comments on the etherpad. Your feedback is 
greatly appreciated.

Best regards,
Hongbin


Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-15 Thread Hongbin Lu
No, Magnum still uses Barbican as an optional dependency, and I believe nobody 
has proposed to remove Barbican entirely. I have no position on the big tent, 
but using Magnum as an example of "projects not working together" is 
inappropriate.

Best regards,
Hongbin

> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: July-15-16 2:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> Some specific things:
> 
> Magnum trying to not use Barbican as it adds an additional dependency.
> See the discussion on the devel mailing list for details.
> 
> Horizon discussions at the summit around wanting to use Zaqar for
> dynamic ui updates instead of polling, but couldn't depend on a non
> widely deployed subsystem.
> 
> Each Advanced OpenStack Service implementing a guest controller
> communication channels that are incompatible with each other and work
> around communications issues differently. This makes a lot more pain
> for Ops to debug or architect a viable solution. For example:
>  * Sahara uses ssh from the controllers to the vms. This does not play
> well with tenant networks. They have tried to work around this several
> ways:
> * require every vm to have a floating ip. (Unnecessary attack
> surface)
> * require the controller to be on the one and only network node
> (Uses ip netns exec to tunnel. Doesn't work for large clouds)
> * Double tunnel ssh via the controller vm's. so some vms have fips,
> some don't. Better than all, but still not good.
>   * Trove uses Rabbit for the guest agent to talk back to the
> controllers. This has traffic going the right direction to work well
> with tenant networks.
> * But Rabbit is not multitenant so a security risk if any user can
> get into any one of the database vm's.
> Really, I believe the right solution is to use a multitenant aware
> message queue so that the guest agent can pull in the right direction
> for tenant networks, and not have the security risk. We have such a
> system already, Zaqar, but its not widely deployed and projects don't
> want to depend on other projects that aren't widely deployed.
> 
> The lack of Instance Users has caused lots of projects to try and work
> around the lack thereof. I know for sure, Magnum, Heat, and Trove work
> around the lack. I'm positive others have too. As an operator, it makes
> me have to very carefully consider all the tradeoffs each project made,
> and decide if I can accept the same risk they assumed. Since each is
> different, thats much harder.
> 
> I'm sure there are more examples. but I hope you get I'm not just
> trying to troll.
> 
> The TC did apply inconsistent rules on letting projects in. That was
> for sure a negative before the big tent. But it also provided a way to
> apply pressure to projects to fix some of the issues that multiple
> projects face, and that plague user/operators and raise the whole
> community up, and that has fallen to the wayside since. Which is a big
> negative now. Maybe that could be bolted on top of the Big Tent I don't
> know.
> 
> I could write a very long description about the state of being an
> OpenStack App developer too that touches on all the problems with
> getting a consistent target and all the cross project communication
> issues thereof. But that's probably for some other time.
> 
> Thanks,
> Kevin
> 
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Friday, July 15, 2016 8:17 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> Kevin, can you please be *specific* about your complaints below? Saying
> things like "less project communication" and "projects not working
> together because of fear of adding dependencies" and "worse user
> experience" are your personal opinions. Please back those opinions up
> with specific examples of what you are talking about so that we may
> address specific things and not vague ideas.
> 
> Also, the overall goal of the Big Tent, as I've said repeatedly and
> people keep willfully ignoring, was *not* to "make the community more
> inclusive". It was to replace the inconsistently-applied-by-the-TC
> *subjective* criteria for project applications to OpenStack with an
> *objective* list of application requirements that could be
> *consistently* reviewed by the TC.
> 
> Thanks,
> -jay
> 
> On 07/14/2016 01:30 PM, Fox, Kevin M wrote:
> > I'm going to go ahead and ask the difficult question now as the
> answer is relevant to the attached proposal...
> >
> > Should we reconsider whether the big tent is the right approach going
> forward?
> >
> > There have been some major downsides I think to the Big Tent approach,
> such as:
> >   * Projects not working together because of fear of adding extra
> dependencies Ops don't already have
> >   * Reimplementing features, badly

[openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-22 Thread Hongbin Lu
Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what 
differentiates him from others is the significance of his contributions, which 
add concrete value to the project. For example, the operator-oriented install 
guide he delivered has attracted a significant number of users to install 
Magnum, which facilitates the adoption of the project. I would like to 
emphasize that the Magnum team has been working hard but struggling to increase 
adoption, and Spyros's contribution means a lot in this regard. He also 
completed several essential and challenging tasks, such as adding support for 
OverlayFS, adding a Rally job for Magnum, etc. Overall, I am impressed by the 
amount of high-quality patches he has submitted. He is also helpful in code 
reviews, and his comments often help us identify pitfalls that are not easy to 
spot. He is also very active on IRC and the ML. Based on his contribution and 
expertise, I think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of the Magnum team. According to 
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


[openstack-dev] [Magnum] Remove Davanum Srinivas from Magnum core team

2016-07-22 Thread Hongbin Lu
Hi all,

Based on Dims's request, I removed him from the Magnum core reviewer team. 
Dims's contribution started from the first commit of the Magnum tree, and he 
served as a Magnum core reviewer for a long time. I am sorry to hear that 
Dims wants to leave the team, but I thank him for his contribution and guidance 
to the project.

Note: this removal doesn't require a vote because Dims requested to be removed.

Best regards,
Hongbin




[openstack-dev] [Magnum] Select our project mascot/logo

2016-07-25 Thread Hongbin Lu
Hi team,

OpenStack wants to promote individual projects by choosing a mascot to represent 
each project. The idea is to create a family of logos for OpenStack projects 
that are unique, yet immediately identifiable as part of OpenStack. OpenStack 
will be using these logos to promote each project on the OpenStack website, at 
the Summit, and in marketing materials.

We can select our own mascot, and then OpenStack will have an illustrator 
create the logo for us. The mascot can be anything from the natural world: an 
animal, fish, plant, or natural feature such as a mountain or waterfall. We 
need to select our top mascot candidates by the first deadline (July 27, this 
Wednesday). There's more info on the website: 
http://www.openstack.org/project-mascots

Action Item: Everyone, please let me know what your favorite mascot is. You can 
either reply on this ML or discuss it in the next team meeting.

Best regards,
Hongbin


Re: [openstack-dev] [kolla] [daisycloud-core] [requirements] [magnum] [oslo] Do we really need to upgrade pbr, docker-py and oslo.utils

2017-04-19 Thread Hongbin Lu
Zun required docker-py to be 1.8 or higher because older versions of
docker-py didn't have the APIs we need. Sorry if this caused difficulties on
your side, but I don't think it is feasible to downgrade the version for now,
since doing so would affect a ton of other projects.

Best regards,
Hongbin

On Thu, Apr 20, 2017 at 12:15 AM, Steven Dake (stdake) wrote:

> Hu,
>
>
>
> Kolla does not manage the global requirements process as it is global to
> OpenStack.  The Kolla core reviewers essentially rubber stamp changes from
> the global requirements bot assuming they pass our gating.  If they don’t
> pass our gating, we work with the committer to sort out a working solution.
>
>
>
> Taking a look at the specific issues you raised:
>
>
>
> Pbr: https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L158
>
> Here is the change: https://github.com/openstack/requirements/commit/74a8e159e3eda7c702a39e38ab96327ba85ced3c
>
> (from the infrastructure team)
>
>
>
> Docker-py: https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L338
>
> Here is the change: https://github.com/openstack/requirements/commit/330139835347a26f435ab1262f16cf9e559f32a6
>
> (from the magnum team)
>
>
>
> oslo-utils: https://github.com/openstack/requirements/blame/62383acc175b77fe7f723979cefaaca65a8d12fe/global-requirements.txt#L136
>
> https://github.com/openstack/requirements/commit/510c4092f48a3a9ac7518decc5d3724df8088eb7
>
> (I am not sure which team this is – the oslo team perhaps?)
>
>
>
> I would recommend taking the changes up with the requirements team or the
> direct authors.
>
>
>
> Regards
>
> -steve
>
>
>
>
>
>
>
> *From: *"hu.zhiji...@zte.com.cn" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, April 19, 2017 at 8:45 PM
> *To: *"openstack-dev@lists.openstack.org"
> *Subject: *[openstack-dev] [kolla] [daisycloud-core] Do we really need to
> upgrade pbr, docker-py and oslo.utils
>
>
>
> Hello,
>
>
>
> As the global requirements changed in Ocata, Kolla upgraded to pbr>=1.8 [1]
> and docker-py>=1.8.1 [2]. Besides, Kolla also started depending on
> oslo.utils>=3.18.0 in order to use uuidutils.generate_uuid() instead of
> uuid.uuid4() to generate UUIDs.
>
> IMHO, the upgrades in [1] and [2] are not something Kolla really needs, and
> uuidutils.generate_uuid() is also supported by oslo.utils-3.16. I mean that
> if we keep Kolla's requirements in Ocata as they were in Newton, an upper
> layer user of Kolla like the daisycloud-core project can still keep other
> things unchanged while upgrading Kolla from stable/newton to stable/ocata.
> Otherwise, we have to upgrade from centos-release-openstack-newton to
> centos-release-openstack-ocata (we do not use pip since it conflicts with
> yum on files installed by the same packages), but this kind of upgrade may
> be too invasive and may impact other applications.
>
> I know that there were some discussions about global requirements updates
> these days. So if Kolla itself does not really need these upgrades, can we
> just keep the requirements unchanged as long as possible?
>
> My 2c.
>
> [1] https://github.com/openstack/kolla/commit/2f50beb452918e37dec6edd25c53e407c6e47f53
>
> [2] https://github.com/openstack/kolla/commit/85abee13ba284bb087af587b673f4e44187142da
>
> [3] https://github.com/openstack/kolla/commit/cee89ee8bef92914036189d02745c08894a9955b
>
> B. R.,
>
> Zhijiang
>
>


[openstack-dev] [Zun] Zun Mascot

2017-04-21 Thread Hongbin Lu
Hi team,

Please review the mascot below and let me know if you have any feedback. We 
will discuss/approve the mascot at the next team meeting.

Best regards,
Hongbin

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org]
Sent: April-21-17 6:16 PM
To: Hongbin Lu
Subject: Re: Zun mascot follow-up

Hi Hongbin,
Our designers came up with a great mascot (dolphin) for your team that looks 
substantially different than Magnum’s shark (which was my concern). Would you 
please let me know what your team thinks?

[Inline image: proposed Zun mascot (dolphin)]
On Feb 21, 2017, at 10:28 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

Heidi,

Thanks for following up on this and for the advice. No problem about the 
website issue. I will have the Zun team choose another mascot and let you know.

Best regards,
Hongbin

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org]
Sent: February-21-17 1:19 PM
To: Hongbin Lu
Subject: Zun mascot follow-up


Hi Hongbin,

I wanted to follow up to ensure you got my note to the Zun dev team list. I 
apologize that your mascot choice was listed wrong on 
openstack.org/project-mascots. It should 
have shown as Zun (mascot not chosen) but instead showed up as Tricircle’s 
chosen mascot, the panda.

The error is entirely my fault, and we’ll get it fixed on the website shortly. 
Thanks for your patience, and please carry on with your debate over the best 
Zun mascot!

Below, your choices can work except for the barrel, because there are no 
human-made objects allowed. Also you are correct that it could be confusing to 
have both a Hawk (Winstackers) and a Falcon, so I would advise the team to look 
at the stork, dolphin, or tiger.

Thank you!



Thanks for the inputs. By aggregating feedback from different sources, the 
choices are as below:

* Barrel

* Storks

* Falcon (I am not sure this one since another team already chose Hawk)

* Dolphins

* Tiger



[openstack-dev] [Zun] project on-board schedule

2017-04-27 Thread Hongbin Lu
Hi all,

There is a recent schedule change for the Zun new contributors onboarding 
session at the Boston Summit. The new time is Monday, May 8, 2:00pm-3:30pm [1]. 
Please feel free to let me know if the new time doesn't work for you. I look 
forward to seeing you all there.

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/zun-project-onboarding

Best regards,
Hongbin

From: joehuang
Sent: April-26-17 2:31 AM
To: Kendall Nelson; OpenStack Development Mailing List (not for usage 
questions); Hongbin Lu
Subject: RE: [openstack-dev] project on-board schedule

Thank you very much, Hongbin and Kendall.

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com]
Sent: 26 April 2017 11:19
To: joehuang; OpenStack Development Mailing List (not for usage questions); 
Hongbin Lu
Subject: Re: [openstack-dev] project on-board schedule

Yes, I should be able to make that happen :)

- Kendall

On Tue, Apr 25, 2017, 10:03 PM joehuang 
mailto:joehu...@huawei.com>> wrote:
Hello, Kendall,

Thank you very much for the slot you provided, but considering that it's lunch 
time, I am afraid that the audience needs to have lunch too.

I just discussed with Hongbin, the PTL of Zun, and he said it's OK to exchange 
the project onboarding time slots between Zun [1] and Tricircle [2].

After the exchange, Tricircle will share this time slot with Sahara and use the 
first half (45 minutes), just like Zun did, and Zun's onboarding session will 
be moved to Monday 2:00pm-3:30pm.

Is this exchange feasible?

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18701/zunsahara-project-onboarding
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/tricircle-project-onboarding

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com<mailto:kennelso...@gmail.com>]
Sent: 26 April 2017 4:07
To: OpenStack Development Mailing List (not for usage questions); joehuang

Subject: Re: [openstack-dev] project on-board schedule
Hello Joe,

I can offer TriCircle a lunch slot on Wednesday from 12:30-1:50?
-Kendall


On Tue, Apr 25, 2017 at 4:08 AM joehuang 
mailto:joehu...@huawei.com>> wrote:
Hi,

Thank you Tom, I found that the onboarding session of Tricircle [1] overlaps 
with my talk [2]:

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/tricircle-project-onboarding
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18076/when-one-cloud-is-not-enough-an-overview-of-sites-regions-edges-distributed-clouds-and-more

Is there any other project that can help us exchange the onboarding session? 
Thanks a lot; I just found the issue.

Best Regards
Chaoyi Huang (joehuang)


From: Tom Fifield [t...@openstack.org<mailto:t...@openstack.org>]
Sent: 25 April 2017 16:50
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] project on-board schedule

On 25/04/17 16:35, joehuang wrote:
> Hello,
>
> Where can I find the project on-board schedule in OpenStack Boston
> summit? I haven't found it yet, and maybe I missed some mail. Thanks a lot.

It's listed on the main summit schedule, under the Forum :)

Here's a direct link to the Forum category:

https://www.openstack.org/summit/boston-2017/summit-schedule/#track=146




[openstack-dev] [Zun] Proposal a change of Zun core team

2017-04-28 Thread Hongbin Lu
Hi all,

I propose a change to Zun's core team membership as below:

+ Feng Shengqin (feng-shengqin)
- Wang Feilong (flwang)

Feng Shengqin has contributed a lot to the Zun project. Her contribution 
includes BPs, bug fixes, and reviews. In particular, she completed an essential 
BP and had a lot of accepted commits in Zun's repositories. I think she is 
qualified for the core reviewer position. I would like to thank Wang Feilong 
for his interest in joining the team when the project was founded. I believe we 
will always be friends regardless of his core membership.

By convention, we require a minimum of 4 +1 votes from Zun core reviewers 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, this proposal is rejected.

Best regards,
Hongbin


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Hongbin Lu
Hi Sean,

I tried the new systemd devstack and frankly I don't like it. There are several 
handy operations in screen that seem to be impossible after switching to 
systemd. For example, freezing a process with "Ctrl + a + [". In addition, 
navigating through the logs seems difficult (perhaps I am not familiar with 
journalctl).

From my understanding, the plan is to drop screen entirely from devstack? I 
would argue that it is better to keep both screen and systemd, and let users 
choose one of them based on their preference.
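
For anyone else in the same situation, the journalctl invocations below seem to 
cover the common screen workflows. This is only a sketch: the unit names follow 
the devstack@<service> pattern and depend on which services are enabled in your 
environment.

  $ sudo journalctl -f --unit devstack@zun-api.service            # tail one service, like a screen window
  $ sudo journalctl --unit devstack@zun-api.service | less        # scroll back through the full log
  $ sudo journalctl --unit "devstack@*" --since "10 minutes ago"  # recent output from all devstack services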

Best regards,
Hongbin

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
> 
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for devstack in the gate yesterday, so far
> > so good.
> >
> > The following patch (which will hopefully land soon), will convert
> the
> > default local use of devstack to systemd as well -
> > https://review.openstack.org/#/c/461716/. It also includes
> > substantially updated documentation.
> >
> > Once you take this patch, a "./clean.sh" is recommended. Flipping
> > modes can cause some cruft to build up, and ./clean.sh should be
> > pretty good at eliminating them.
> >
> > https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> > is probably specifically interesting / useful for people to read, as
> > it shows how the standard development workflows will change (for the
> > better) with systemd.
> >
> > -Sean
> 
> As a follow up, there are definitely a few edge conditions we've hit
> with some jobs, so the following is provided as information in case you
> have a job that seems to fail in one of these ways.
> 
> Doing process stop / start
> ==
> 
> The nova live migration job is special, it was restarting services
> manually, however it was doing so with some copy / pasted devstack code,
> which means it didn't evolve with the rest of devstack. So the stop
> code stopped working (and wasn't robust enough to make it clear that
> was the issue).
> 
> https://review.openstack.org/#/c/461803/ is the fix (merged)
> 
> run_process limitations
> ===
> 
> When doing the systemd conversion I looked for a path forward which was
> going to make 90% of everything just work. The key trick here was that
> services start as the "stack" user, and aren't daemonizing away from
> the console. We can take the run_process command and make that the
> ExecStart in a unit file.
> 
> *Except* that only works if the command is specified by an *absolute
> path*.
> 
> So things like this in kuryr-libnetwork become an issue
> https://github.com/openstack/kuryr-
> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
> n.sh#L148
> 
> There is also a second issue there, which is calling sudo in the
> run_process line. If you need to run as a user/group different than the
> default, you need to specify that directly.
> 
> The run_process command now supports that -
> https://github.com/openstack-
> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
> common#L1531-L1535
> 
> And lastly, run_process really always did expect that the thing you
> started remained attached to the console. These are run as "simple"
> services in systemd. If you are running a thing which already
> daemonizes systemd is going to assume (correctly in this simple mode)
> the fact that the process detatched from it means it died, and kill and
> clean it up.
> 
> This is the issue the OpenDaylight plugin ran into.
> https://review.openstack.org/#/c/461889/ is the proposed fix.
> 
> 
> 
> If you run into any other issues please pop into #openstack-qa (or
> respond to this email) and we'll try to work through them.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 


Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun] Consolidate docker installation

2017-05-04 Thread Hongbin Lu
Hi all,

Just want to give a little update on this. After discussing with the QA 
team, we agreed to create a dedicated repo for this purpose: 
https://github.com/openstack/devstack-plugin-container . In addition, a few 
patches [1][2][3] were proposed to different projects for switching to this 
common devstack plugin. I hope more teams will be interested in using this 
plugin and will help out to improve and maintain it.

[1] https://review.openstack.org/#/c/457348/
[2] https://review.openstack.org/#/c/461210/
[3] https://review.openstack.org/#/c/461212/
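
For teams that want to try it, enabling the common plugin in a devstack 
local.conf should look roughly like this (a sketch; any branch or plugin 
options are omitted):

  [[local|localrc]]
  enable_plugin devstack-plugin-container https://github.com/openstack/devstack-plugin-container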

Best regards,
Hongbin

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: April-02-17 8:17 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun]
> Consolidate docker installation
> 
> Hongbin,
> 
> Nice. +1 in theory :) the etcd one i have a WIP for the etcd/DLM,
> please see here https://review.openstack.org/#/c/445432/
> 
> -- Dims
> 
> On Sun, Apr 2, 2017 at 8:13 PM, Hongbin Lu wrote:
> > Hi devstack team,
> >
> >
> >
> > Please find my proposal about consolidating docker installation into
> > one place that is devstack tree:
> >
> >
> >
> > https://review.openstack.org/#/c/452575/
> >
> >
> >
> > Currently, there are several projects that install docker in their
> > devstack plugins in various different ways. This potentially introduces
> > issues if more than one such service is enabled in devstack, because
> > the same software package will be installed and configured multiple
> > times. To resolve the problem, an option is to consolidate the docker
> > installation script into one place so that all projects will leverage
> > it. Before continuing this effort, I wanted to get early feedback to
> > confirm if this kind of work will be accepted. BTW, etcd installation
> > might have a similar problem and I would be happy to contribute
> > another patch to consolidate it if that is what will be accepted as
> > well.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-05 Thread Hongbin Lu
Hi all,

Thanks for your votes. Based on the feedback, I have adjusted the core team 
membership. Welcome, Feng Shengqin, to the core team.

https://review.openstack.org/#/admin/groups/1382,members

Best regards,
Hongbin

From: shubham sharma [mailto:shubham@gmail.com]
Sent: May-02-17 1:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Proposal a change of Zun core team

+1

Regards
Shubham

On Tue, May 2, 2017 at 6:33 AM, Qiming Teng <teng...@linux.vnet.ibm.com> wrote:
+1

Qiming




[openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-20 Thread Hongbin Lu
Hi QA team,

I have a proposal to create a subnetpool/subnet pair on dynamic credentials: 
https://review.openstack.org/#/c/466440/ . We (the Zun team) have use cases for 
using subnets with subnetpools. I wanted to get some early feedback on this 
proposal. Will this proposal be accepted? If not, I would appreciate any 
alternative suggestions. Thanks in advance.
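
For context, the kind of pre-created pair we would like each set of test 
credentials to have looks roughly like the sketch below (placeholder names, 
using the standard client):

  $ openstack subnet pool create --pool-prefix 10.0.0.0/16 \
        --default-prefix-length 24 zun-test-pool
  $ openstack subnet create --network <test-network> --subnet-pool zun-test-pool \
        --prefix-length 24 zun-test-subnet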

Best regards,
Hongbin


[openstack-dev] [fuxi][stackube][kuryr] IRC meeting

2017-05-22 Thread Hongbin Lu
Hi all,

We will have an IRC meeting at UTC 1400-1500 on Tuesday (2017-05-23). At the 
meeting, we will discuss the k8s storage integration with OpenStack. This 
effort might cross more than one team (i.e. Kuryr and Stackube). You are more 
than welcome to join us at #openstack-meeting-cp tomorrow.

Best regards,
Hongbin



Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-24 Thread Hongbin Lu
Hi Andrea,

Sorry, I just got a chance to get back to this. Yes, an advantage is 
creating/deleting the subnetpool once instead of creating/deleting it per test. 
It seems Neutron doesn’t support setting subnetpool_id after a subnet is 
created. If this is true, it means we cannot leverage the pre-created subnet 
from the credential provider, because we want to test against a subnet with a 
subnetpool. Eventually, we need to create a subnet/subnetpool pair for each 
test and take care of the configuration of these resources. This looks complex, 
especially for our contributors, most of whom don’t have a strong networking 
background.

Another motivation of this proposal is that we want to run all the tests 
against a subnet with a subnetpool. We currently run tests without a 
subnetpool, but it doesn’t work well in some dev environments [1]. The issue 
was tracked down to a limitation of the Docker networking model that makes it 
hard for its plugin to identify the correct subnet (unless the subnet has a 
subnetpool, because libnetwork will record the subnetpool's UUID). This is why 
I prefer to run tests against a pre-created subnet/subnetpool pair. Ideally, 
Tempest could provide a feasible solution to address our use cases.

[1] https://bugs.launchpad.net/zun/+bug/1690284

Best regards,
Hongbin

From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: May-22-17 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

Hi Hongbin,

If several of your test cases require a subnet pool, I think the simplest 
solution would be creating one in the resource creation step of the tests.
As I understand it, subnet pools can be created by regular projects (they do 
not require admin credentials).

The main advantage that I can think of for having subnet pools provisioned as 
part of the credential provider code is that - in case of pre-provisioned 
credentials - the subnet pool would be created and deleted once per test user as 
opposed to once per test class.

That said I'm not opposed to the proposal in general, but if possible I would 
prefer to avoid adding complexity to an already complex part of the code.

andrea

On Sun, May 21, 2017 at 2:54 AM Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi QA team,

I have a proposal to create subnetpool/subnet pair on dynamic credentials: 
https://review.openstack.org/#/c/466440/ . We (Zun team) have use cases for 
using subnets with subnetpools. I wanted to get some early feedback on this 
proposal. Will this proposal be accepted? If not, would appreciate alternative 
suggestion if any. Thanks in advance.

Best regards,
Hongbin


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Hongbin Lu
Please consider leveraging Fuxi instead. The Kuryr/Fuxi team is working very 
hard to deliver the Docker network/storage plugins. I hope you will work with 
us to get them integrated with Magnum-provisioned clusters. Currently, COE 
clusters provisioned by Magnum are far from enterprise-ready. I think the 
Magnum project will be better off if it can adopt Kuryr/Fuxi, which will give 
you better OpenStack integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support Manila too. 
Rexray also supports the popular cloud providers.

Magnum's docker swarm cluster driver already leverages rexray for cinder 
integration [3].

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen 
mailto:chenzeng...@163.com>> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen




On 2017-05-26 21:30:48, "John Griffith" <john.griffi...@gmail.com> wrote:



On Thu, May 25, 2017 at 10:01 PM, zengchen 
mailto:chenzeng...@163.com>> wrote:

Hi john:
I have seen your updates on the BP. I agree with your plan on how to 
develop the code. However, there is one issue I have to remind you of: at 
present, Fuxi can convert not only Cinder volumes but also Manila files for 
Docker. So, do you consider involving the Manila part of the code in the new 
Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander

is interested, we can check with him and make sure but I certainly hope that 
Manila would be interested.
Besides, IMO, it is better to create a repository for Fuxi-golang, because 
Fuxi is a project of OpenStack.
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charging ahead on new repos 
etc. Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen



At 2017-05-25 22:47:29, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:



On Thu, May 25, 2017 at 5:50 AM, zengchen 
mailto:chenzeng...@163.com>> wrote:
Very sorry, I forgot to attach the link for the BP of rewriting Fuxi in Go.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
mailto:chenzeng...@163.com>> wrote:

Hi guys:
Hongbin has committed a BP for rewriting Fuxi in Go [1]. My question is where 
to commit the code for it. We have two choices: 1. create a new repository, or 
2. create a new branch. IMO, the first one is much better, because there are 
many differences at the infrastructure layer, such as CI. What's your opinion? 
Thanks very much.

Best Wishes
zengchen

Hi Zengchen,

For now I was thinking just use Github and PR's outside of the OpenStack 
projects to bootstrap things and see how far we can get.  I'll update the BP 
this morning with what I believe to be the key tasks to work through.

Thanks,
John




Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread Hongbin Lu
Please find my replies inline.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang



On 30 May 2017 at 15:26, Hongbin Lu <hongbin...@huawei.com> wrote:
Please consider leveraging Fuxi instead.

Is there a missing functionality from rexray?

[Hongbin Lu] From my understanding, Rexray targets the overcloud use case and 
assumes that containers are running on top of Nova instances. You mentioned 
Magnum is leveraging Rexray for Cinder integration. Actually, I am the core 
reviewer who reviewed and approved those Rexray patches. From what I observed, 
the functionality provided by Rexray is minimal. What it does is simply call 
the Cinder API to search for an existing volume, attach the volume to the Nova 
instance, and let Docker bind-mount the volume into the container. At the time 
I was testing it, it seemed to have some mysterious bugs that prevented me from 
getting the cluster to work. It was packaged as a large container image, which 
might take more than 5 minutes to pull down. With that said, Rexray might be a 
choice for someone who is looking for a cross-cloud-provider solution. Fuxi 
will focus on OpenStack and targets both overcloud and undercloud use cases. 
That means Fuxi can work with Nova+Cinder or a standalone Cinder. As John 
pointed out in another reply, another benefit of Fuxi is to resolve the 
fragmentation problem of existing solutions. Those are the differentiators of 
Fuxi.
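
To make the comparison concrete, the user-facing workflow is the same for any 
Docker volume plugin. With Fuxi it would look roughly like the sketch below; 
the driver name and the size option are assumptions based on Fuxi's 
documentation, and the volume and image names are just placeholders:

  $ docker volume create --driver fuxi --name db-data -o size=1   # assumed to map to a 1 GB Cinder volume
  $ docker run -d -v db-data:/var/lib/mysql mysql                 # the Cinder-backed volume is bind-mounted into the container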

Kuryr/Fuxi team is working very hard to deliver the docker network/storage 
plugins. I wish you will work with us to get them integrated with 
Magnum-provisioned cluster.

Patches are welcome to support fuxi as an *option* instead of rexray, so users 
can choose.

Currently, COE clusters provisioned by Magnum is far away from 
enterprise-ready. I think the Magnum project will be better off if it can adopt 
Kuryr/Fuxi which will give you a better OpenStack integration.

Best regards,
Hongbin

fuxi feature request: Add authentication using a trustee and a trustID.

[Hongbin Lu] I believe this is already supported.

Cheers,
Spyros


From: Spyros Trigazis [mailto:strig...@gmail.com<mailto:strig...@gmail.com>]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support Manila too. 
Rexray also supports the popular cloud providers.

Magnum's docker swarm cluster driver already leverages rexray for cinder 
integration [3].

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen 
mailto:chenzeng...@163.com>> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen



On 2017-05-26 21:30:48, "John Griffith" <john.griffi...@gmail.com> wrote:


On Thu, May 25, 2017 at 10:01 PM, zengchen 
mailto:chenzeng...@163.com>> wrote:

Hi john:
I have seen your updates on the bp. I agree with your plan on how to 
develop the codes.
However, there is one issue I have to remind you that at present, Fuxi not 
only can convert
 Cinder volume to Docker, but also Manila file. So, do you consider to involve 
Manila part of codes
 in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander

is interested, we can check with him and make sure but I certainly hope that 
Manila would be interested.
Besides, IMO, It is better to create a repository for Fuxi-golang, because
 Fuxi is the project of Openstack,
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charing ahead on new repos 
etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen


At 2017-05-25 22:47:29, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:


On Thu, May 25, 2017 at 5:50 AM, zengchen 
mailto:chenzeng...@163.com>> wrote:
Very sorry to foget attaching the link for bp of rewriting Fuxi with go 
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
mailto:chenzeng...@163.com>> wrote:
Hi guys:
hongbin had committed a bp of rewriting Fuxi with go language[1]. My 
question i

[openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-19 Thread Hongbin Lu
Hi all,

I would like to propose the following change to the Zun core team:

+ Shunli Zhou (shunliz)

Shunli has been contributing to Zun for a while and has done a lot of work. He 
has completed the BP for supporting resource claims and is close to finishing 
the filter scheduler BP. He has shown a good understanding of Zun's code base 
and expertise in other OpenStack projects. The quantity [1] and quality of his 
submitted code also show his qualification. Therefore, I think he will be a 
good addition to the core team.

In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan Hu 
requested to be removed from the core team. Dims had been helping us since the 
inception of the project. I treated him as a mentor, and his guidance has 
always been helpful to the whole team. As the project becomes mature and 
stable, I agree with him that it is time to relieve him of the core reviewer 
responsibility, because he has many other important responsibilities in the 
OpenStack community. Yanyan is leaving because he has been relocated and is now 
focused on an area outside OpenStack. I would like to take this chance to thank 
Dims and Yanyan for their contributions to Zun.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin





Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Hongbin Lu
> Incidentally, the reason that discussions always come back to that is
> because OpenStack isn't very good at it, which is a huge problem not
> only for the *aaS projects but for user applications in general running
> on OpenStack.
> 
> If we had fine-grained authorisation and ubiquitous multi-tenant
> asynchronous messaging in OpenStack then I firmly believe that we, and
> application developers, would be in much better shape.
> 
> > If you create these projects as applications that run on cloud
> > infrastructure (OpenStack, k8s or otherwise),
> 
> I'm convinced there's an interesting idea here, but the terminology
> you're using doesn't really capture it. When you say 'as applications
> that run on cloud infrastructure', it sounds like you mean they should
> run in a Nova VM, or in a Kubernetes cluster somewhere, rather than on
> the OpenStack control plane. I don't think that's what you mean though,
> because you can (and IIUC Rackspace does) deploy OpenStack services
> that way already, and it has no real effect on the architecture of
> those services.
> 
> > then the discussions focus
> > instead on how the real end-users -- the ones that actually call the
> > APIs and utilize the service -- would interact with the APIs and not
> > the underlying infrastructure itself.
> >
> > Here's an example to think about...
> >
> > What if a provider of this DBaaS service wanted to jam 100 database
> > instances on a single VM and provide connectivity to those database
> > instances to 100 different tenants?
> >
> > Would those tenants know if those databases were all serviced from a
> > single database server process running on the VM?
> 
> You bet they would when one (or all) of the other 99 decided to run a
> really expensive query at an inopportune moment :)
> 
> > Or 100 containers each
> > running a separate database server process? Or 10 containers running
> > 10 database server processes each?
> >
> > No, of course not. And the tenant wouldn't care at all, because the
> 
> Well, if they had any kind of regulatory (or even performance)
> requirements then the tenant might care really quite a lot. But I take
> your point that many might not and it would be good to be able to offer
> them lower cost options.
> 
> > point of the DBaaS service is to get a database. It isn't to get one
> > or more VMs/containers/baremetal servers.
> 
> I'm not sure I entirely agree here. There are two kinds of DBaaS. One
> is a data API: a multitenant database a la DynamoDB. Those are very
> cool, and I'm excited about the potential to reduce the granularity of
> billing to a minimum, in much the same way Swift does for storage, and
> I'm sad that OpenStack's attempt in this space (MagnetoDB) didn't work
> out. But Trove is not that.
> 
> People use Trove because they want to use a *particular* database, but
> still have all the upgrades, backups, &c. handled for them. Given that
> the choice of database is explicitly *not* abstracted away from them,
> things like how many different VMs/containers/baremetal servers the
> database is running on are very much relevant IMHO, because what you
> want depends on both the database and how you're trying to use it. And
> because (afaik) none of them have native multitenancy, it's necessary
> that no tenant should have to share with any other.
> 
> Essentially Trove operates at a moderate level of abstraction -
> somewhere between managing the database + the infrastructure it runs on
> yourself and just an API endpoint you poke data into. It also operates
> at the coarse end of a granularity spectrum running from
> VMs->Containers->pay as you go.
> 
> It's reasonable to want to move closer to the middle of the granularity
> spectrum. But you can't go all the way to the high abstraction/fine
> grained ends of the spectra (which turn out to be equivalent) without
> becoming something qualitatively different.
> 
> > At the end of the day, I think Trove is best implemented as a hosted
> > application that exposes an API to its users that is entirely
> separate
> > from the underlying infrastructure APIs like Cinder/Nova/Neutron.
> >
> > This is similar to Kevin's k8s Operator idea, which I support but in
> a
> > generic fashion that isn't specific to k8s.
> >
> > In the same way that k8s abstracts the underlying infrastructure (via
> > its "cloud provider" concept), I think that Trove and similar
> projects
> > need to use a similar abstraction and focus on providing a different
> > API t

Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-24 Thread Hongbin Lu
Hi all,

Thanks for your votes. Based on the feedback, I have added Shunli to the core 
team [1].

Best regards,
Hongbin

[1] https://review.openstack.org/#/admin/groups/1382,members

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: June-21-17 8:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Haruhiko Katou
> Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> and removal notice
> 
> +1 to all from me.
> 
> Welcome Shunli! And great thanks to Dims and Yanyan!!
> 
> Best regards,
> Shu
> 
> > -Original Message-
> > From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> > Sent: Wednesday, June 21, 2017 12:30 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> > and removal notice
> >
> > +1 from me as well.
> >
> >
> >
> > Thanks Dims and Yanyan for you contribution to Zun :)
> >
> >
> >
> > Regards,
> >
> > Madhuri
> >
> >
> >
> > From: Kevin Zhao [mailto:kevin.z...@linaro.org]
> > Sent: Wednesday, June 21, 2017 6:37 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> > and removal notice
> >
> >
> >
> > +1 for me.
> >
> > Thx!
> >
> >
> >
> > On 20 June 2017 at 13:50, Pradeep Singh  > <mailto:ps4openst...@gmail.com> > wrote:
> >
> > +1 from me,
> >
> > Thanks Shunli for your great work :)
> >
> >
> >
> > On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu
>  > <mailto:hongbin...@huawei.com> > wrote:
> >
> > Hi all,
> >
> >
> >
> > I would like to propose the following change to the Zun
> core team:
> >
> >
> >
> > + Shunli Zhou (shunliz)
> >
> >
> >
> > Shunli has been contributing to Zun for a while and did a
> lot of
> > work. He has completed the BP for supporting resource claim and be
> > closed to finish the filter scheduler BP. He showed a good
> > understanding of the Zun’s code base and expertise on other OpenStack
> > projects. The quantity [1] and quality of his submitted code also
> shows his qualification.
> > Therefore, I think he will be a good addition to the core team.
> >
> >
> >
> > In addition, I have a removal notice. Davanum Srinivas
> > (Dims) and Yanyan Hu requested to be removed from the core team. Dims
> > had been helping us since the inception of the project. I treated him
> > as mentor and his guidance is always helpful for the whole team. As
> > the project becomes mature and stable, I agree with him that it is
> > time to relieve him from the core reviewer responsibility because he
> > has many other important responsibilities for the OpenStack community.
> > Yanyan’s leaving is because he has been relocated and focused on an
> > out-of-OpenStack area. I would like to take this chance to thank Dims
> and Yanyan for their contribution to Zun.
> >
> >
> >
> > Core reviewers, please cast your vote on this proposal.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >


[openstack-dev] [Zun] Upgrade from 'docker-py' to 'docker'

2017-06-24 Thread Hongbin Lu
Hi team,

We have recently finished the upgrade from 'docker-py' to 'docker'. If your 
devstack environment runs into errors due to the incompatibility between the 
old and new Docker Python packages (such as [1]), you can try the commands 
below:

  $ sudo pip uninstall docker-py docker-pycreds
  $ sudo pip install -c /opt/stack/requirements/upper-constraints.txt \
  -e /opt/stack/zun
  $ sudo systemctl restart devstack@kuryr*
  $ sudo systemctl restart devstack@zun*

For context, 'docker-py' is the old Python binding library for consuming the 
Docker REST API. It has been renamed to 'docker', and the old package will be 
dropped eventually. In the last few days, there have been several reports of 
errors due to double installation of both the 'docker-py' and 'docker' packages 
in the development environment, so we need to migrate from 'docker-py' to 
'docker' to resolve the issue.
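
If you want to double-check your environment after running the commands above, 
a quick sketch of a sanity check:

  $ pip list 2>/dev/null | grep -i docker
  # Expect 'docker' (and 'docker-pycreds') in the output; if 'docker-py' still
  # appears, rerun the uninstall step above before restarting the services.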

Right now, all of Zun's components and dependencies have finished the upgrade 
[2][3][4], and there is another proposed patch to drop 'docker-py' from global 
requirements [5]. The package conflict issue will be entirely resolved when the 
upgrade is finished globally.

[1] https://bugs.launchpad.net/zun/+bug/1693425
[2] https://review.openstack.org/#/c/475526/
[3] https://review.openstack.org/#/c/475863/
[4] https://review.openstack.org/#/c/475893/
[5] https://review.openstack.org/#/c/475962/

Best regards,
Hongbin


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-06 Thread Hongbin Lu
Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologies, I have some ‘newbie’ questions on Zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

o  I think the answer is both, and

o  I think technically ZUN will manage the containers in OpenStack VM(s) or 
OpenStack Host(s), thru a COE

•  where the COE is kubernetes, swarm, mesos ... or, initially, some very 
simple default COE provided by ZUN itself.

[Hongbin Lu] Yes. Zun aims to support containers in VMs, on baremetal, or in 
COEs in the long term. One clarification: Zun doesn’t aim to become a COE, but 
it could be used together with Heat [1] to achieve equivalent container 
orchestration functionality.

[1] https://review.openstack.org/#/c/437810/
Question 2:
-  what is currently supported in MASTER ?

[Hongbin Lu] What is currently supported is the container-in-baremetal 
scenario. The next release might introduce container-in-VM. COE integration is 
the longer-term pursuit.


Question 3:
-  in the scenario where ZUN is managing containers thru Kubernetes 
directly on OpenStack Host(s)
o  I believe the intent is that,
at the same time, and on the same OpenStack Host(s),
NOVA is managing VMs on the OpenStack Host(s)
o  ??? Has anyone started to look at the Resource Management / Arbitration of 
the OpenStack Host’s Resources,
   between ZUN and NOVA ???
[Hongbin Lu] No, it hasn’t. We started with the assumption that Zun and Nova 
manage disjoint sets of resources (i.e. compute hosts), so there is no resource 
contention. The ability to share compute resources across multiple OpenStack 
services for VMs and containers would be valuable, but it would require 
discussions across multiple teams to build consensus on this pursuit.
Question 4:
-  again, in the scenario where ZUN is managing containers thru 
Kubernetes directly on OpenStack Host(s)
-  what are the Technical Pros / Cons of this approach, relative to 
using OpenStack VM(s) ?
o  PROs
•  ??? does this really use less resources than the VM Scenario ???
•  is there an example you can walk me thru ?
•  I suppose that instead of pre-allocating resources to a fairly large VM for 
hosting containers,
you would only use the resources for the containers that are actually launched,
o  CONs
•  for application containers, you are restricted by the OS running on the 
OpenStack Host,

[Hongbin Lu] Yes, there are pros and cons to either approach, and Zun is not 
biased toward either one. Instead, Zun aims to support both where feasible.


Greg.
WIND RIVER
Titanium Cloud
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-07 Thread Hongbin Lu
Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not on an 
instance provided by Nova/Ironic). I think your second interpretation is right 
if “OpenStack Hosts” means a compute node. I think a diagram [1] could explain 
the current deployment scenario better.

For the container-in-coe scenario, it is outside the current focus, but the 
team is exploring ideas for it. I don’t have specific answers for the two 
questions you raised, but I encourage you to bring your use cases to the team 
and keep the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

o  is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

o  OR

o  do you mean that, in this scenario, OpenStack Hosts are 
deployed & configured with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

o  is the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

o  is the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 6, 2017 at 2:39 PM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologize I have some ‘newbie’ questions on zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

oI think the answer is both, and

oI think technically ZUN will manage the containers in OpenStack VM(s) or 
OpenStack Host(s), thru a COE

•  where the COE is kubernetes, swarm, mesos ... or, initially, some very 
simple default COE provided by ZUN itself.

[Hongbin Lu] Yes. Zun aims to support containers in VMs, baremetal, or COEs in 
long term. A clarification is Zun doesn’t aim to become a COE, but it could be 
used together with Heat [1] to achieve some container orchestration equivalent 
functionalities.

[1] https://review.openstack.org/#/c/437810/
Question 2:
-  what is currently supported in MASTER ?

[Hongbin Lu] What currently supported is container-in-baremetal scenario. The 
next release might introduce container-in-vm. COE integration might be the long 
term pursue.


Question 3:
-  in the scenario where ZUN is managing containers thru Kubernetes 
directly on OpenStack Host(s)
oI believe the intent is that,
at the same time, and on the same OpenStack Host(s),
NOVA is managing VMs on the OpenStack Host(s)
o??? Has anyone started to look at the Resource Management / Arbitration of 
the OpenStack Host’s Resources,
   between ZUN and NOVA ???
[Hongbin Lu] No, it hasn’t. We started with an assumption that Zun and Nova are 
managing disjoined set of resources (i.e. compute hosts) so there is not 
resource contention. The ability to share compute resources across multiple 
OpenStack services for VMs and containers is cool and it might require 
discussions across multiple teams to build consensus of this pursue.
Question 4:
-  again, in the scenario where ZUN is managing containers thru 
Kubernetes directly on OpenStack Host(s)
-  what are the Technical Pros / Cons of this approach, relative to 
using OpenStack VM(s) ?
oPROs
•  ??? does this really use less reso

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-07 Thread Hongbin Lu
Hi Greg,

Zun currently leverages the "--memory", "--cpu-period", and "--cpu-quota" 
options to limit CPU and memory. Zun does its own resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html
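
For illustration only (this is the underlying Docker mechanism, not Zun's
actual code path), the same constraints are exposed by the 'docker' Python SDK
as keyword arguments that map to --memory, --cpu-period and --cpu-quota:

  import docker

  client = docker.from_env()
  container = client.containers.run(
      'cirros',
      command='ping 8.8.8.8',
      detach=True,
      mem_limit='256m',    # --memory: cap memory at 256 MiB
      cpu_period=100000,   # --cpu-period: CFS period in microseconds
      cpu_quota=50000,     # --cpu-quota: at most 50% of one CPU per period
  )
  print(container.status)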

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o  but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern that containers are running on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

ois this done by leveraging Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

oOR

odo you must mean that, in this scenario, OpenStack Hosts are 
deployed&configured with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

ois the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

ois the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 6, 2017 at 2:39 PM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologize I have some ‘newbie’ questions on zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

oI think the answer is both,

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Hi Greg,

There is no such API in Zun yet. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available in the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usages, cpu usage, etc .

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is spited out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

obut does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes its using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern that containers are running on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

ois this done by leveraging Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

oOR

odo you must mean that, in this scenario, OpenStack Hosts are 
deployed&configured with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

ois the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

ois the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@li

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Greg,

No, it isn’t. We are working hard on integrating with Cinder (either via Fuxi 
or direct integration). Perhaps this design spec can provide some information 
about where we are heading: https://review.openstack.org/#/c/468658/ .

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 2:13 PM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin,

another quick question,
is ZUN integrated with FUXI for Container mounting of Cinder Volumes yet ?

( my guess is no ... don’t see any options for that in the zun cli for create 
or run )

Greg.

From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Tuesday, July 11, 2017 at 2:04 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available at the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usages, cpu usage, etc .

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is spited out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

obut does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes its using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern that containers are running on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of

Re: [openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Hongbin Lu
Hi Surya,

First, I would like to provide some context for folks who are not familiar with 
the sandbox concept in Zun. The "sandbox" provides an isolated environment for 
one or multiple containers. In the docker driver, we use it as a placeholder 
for a set of Linux namespaces (i.e. network, ipc, etc.) in which the "real" 
container(s) will run. For example, if an end user runs "zun run nginx", Zun 
will first create an infra container (the sandbox) and leverage the set of 
Linux namespaces it creates, then Zun will create the "real" (nginx) container 
inside the Linux namespaces of the infra container. Strictly speaking, this is 
not a container inside a container, but a container inside a set of 
pre-existing Linux namespaces.
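
To illustrate the mechanism (a rough sketch with the 'docker' Python SDK, not
Zun's exact implementation), the "real" container simply joins the namespaces
of the infra container instead of creating its own:

  import docker

  client = docker.from_env()

  # The infra container (sandbox) holds the network/IPC namespaces.
  sandbox = client.containers.run('kubernetes/pause', detach=True)

  # The "real" container joins the sandbox's namespaces, so it is a
  # container inside pre-existing namespaces, not a nested container.
  app = client.containers.run(
      'nginx',
      detach=True,
      network_mode='container:%s' % sandbox.id,
      ipc_mode='container:%s' % sandbox.id,
  )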

Second, we are working on making the sandbox optional [1]. After this feature 
is implemented (targeted for Pike), operators can configure Zun into one of two 
modes: "container-in-sandbox" and "standalone container". Each container driver 
can choose to support either mode or both. For clear containers, I assume they 
can be integrated with Zun via a clear container driver. The driver could then 
implement the "standalone" mode, in which there is only a bare clear container. 
An alternative is to implement the "container-in-sandbox" mode. In that 
scenario, the sandbox itself is a clear container, as you mentioned. Inside the 
clear container, I guess there is a kernel that can be used to boot the user's 
container image(s) (similar to how HyperContainer runs a pod [2]). However, I 
am not exactly sure whether this scenario is possible.

Hope this answers your question.

[1] https://blueprints.launchpad.net/zun/+spec/make-sandbox-optional
[2] 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html

Best regards,
Hongbin

From: surya.prabha...@dell.com [mailto:surya.prabha...@dell.com]
Sent: July-11-17 7:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] sandbox and clearcontainers

Hi Folks,
I am just trying to wrap my head around zun's sandboxing and clear 
containers.   From what Hongbin told in Barcelona ( see the attached pic which 
I scrapped from his video)

[attached image: slide from the Barcelona presentation]

current implementation in Zun is, Sandbox is the outer container and the real 
user container is nested inside the sandbox.  I am trying to figure out how 
this is going to play out
when we have clear containers.

I envision the following scenarios:


1)  Scenario 1: where the sandbox itself is a clear container and user will 
nest another clear container inside the sandbox. This is like nested 
virtualization.

But I am not sure how this is going to work since the nested containers won't 
get VT-D cpu flags.

2)  Scenario 2: the outer sandbox is just going to be a standard docker 
container without vt-d and the inside container is going to be the real clear 
container with vt-d.  Now this

might work well, but we might be losing the isolation features for the network 
and storage, which lie open in the sandbox. Won't this defeat the whole purpose 
of using clear containers?

I am just wondering what is the thought process for this design inside zun.  If 
this is trivial and if I am missing something please shed some light :).

Thanks
Surya ( spn )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] "--image-driver glance" doesn't seem to work @ master

2017-07-12 Thread Hongbin Lu
Hi Greg,

I created a bug to record the issue: 
https://bugs.launchpad.net/zun/+bug/1703955 . Due to this bug, Zun couldn’t 
find the docker image if the image was uploaded to glance under a different 
name. I think it will work if you upload the image to glance with the name 
“cirros”. For example:

$ docker pull cirros
$ docker save cirros | glance image-create --visibility public 
--container-format=docker --disk-format=raw --name cirros
$ zun run -i --name ctn-ping --image-driver glance cirros ping 8.8.8.8

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-12-17 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [zun] "--image-driver glance" doesn't seem to work @ 
master

Just tried this, this morning.
I can not launch a container when I specify to pull the container image from 
glance (instead of docker hub).
I get an error back from docker saying the “:latest” can not be 
found.
I tried renaming the glance image to “:latest” ... but that didn’t 
work either.


stack@devstack-zun:~/devstack$ glance image-list

+--+--+

| ID   | Name |

+--+--+

| 6483d319-69d8-4c58-b0fb-7338a1aff85f | cirros-0.3.5-x86_64-disk |

| 3055d450-d780-4699-bc7d-3b83f3391fe9 | gregos   |  <-- it 
is of container format docker

| e8f3cab8-056c-4851-9f67-141dda91b9a2 | kubernetes/pause |

+--+--+

stack@devstack-zun:~/devstack$ docker images

REPOSITORY  TAG IMAGE IDCREATED 
SIZE

scratch latest  019a481dc9ea5 days ago  
0B

kuryr/busybox   latest  a3bb6046b1195 days ago  
1.21MB

cirros  latest  f8ce316a37a718 months ago   
7.74MB

kubernetes/pauselatest  f9d5de0795392 years ago 
240kB

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --image-driver glance 
gregos ping 8.8.8.8

...

...
stack@devstack-zun:~/devstack$ zun show ctn-ping
+---+-+
| Property  | Value 


  |
+---+-+
| addresses | 10.0.0.6, fdac:1365:7242:0:f816:3eff:fea4:fb65


  |
| links | ["{u'href': 
u'http://10.10.10.17:9517/v1/containers/cb83a98c-776c-4ea8-83a7-ef3430f5e6d2', 
u'rel': u'self'}", "{u'href': 
u'http://10.10.10.17:9517/containers/cb83a98c-776c-4ea8-83a7-ef3430f5e6d2', 
u'rel': u'bookmark'}"] |
| image | gregos


  |
  

 |
| status| Error 


  |

| status_reason | Docker internal error: 404 Client Error: Not Found ("No 
such image: gregos:latest").

|


stack@devstack-zun:~/devstack$



Am I doing something wrong ?

Greg.





FULL logs below


stack@devstack-zun:~/devstack$ source openrc admin demo

WARNING: setting legacy OS_TENANT_NAME to support cli tools.

stack@devstack-zun:~/devstack$ docker images

REPOSITORY  TAG IMAGE IDCREATED 
SIZE

kuryr/busybox   latest  a3bb6046b1195 days ago  
1.21MB

scratch latest  019a481dc9ea5 days ago  
0B

kubernetes/pauselatest 

Re: [openstack-dev] [zun] "--nets network=..." usage question

2017-07-12 Thread Hongbin Lu
Hi Greg,

This parameter has just been added to the CLI and it hasn’t been fully 
implemented yet. Sorry for the confusion. Here is how I expect this parameter 
to work:

1. Create from neutron network name:
$ zun run --name ctn-ping --nets network=private …

2. Create from neutron network uuid:
$ zun run --name ctn-ping --nets network=c59455d9-c103-4c05-b28c-a1f5d041d804 …

3. Create from neutron port uuid/name:
$ zun run --name ctn-ping --nets port= …

4. Give me a network:
$ zun run --name ctn-ping --nets auto …

For now, please simply ignore this parameter. Zun will find a usable network 
under your tenant to boot the container.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-12-17 1:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [zun] "--nets network=..." usage question

What is expected for the “--nets network=...” parameter on zun run or create ?
Is it the network name, the subnet name, the network uuid, the subnet uuid, ... 
I think I’ve tried them all and none work.

Full logs:

stack@devstack-zun:~/devstack$ neutron net-list

neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.

+--+-+--+--+

| id   | name| tenant_id
| subnets  |

+--+-+--+--+

| c1731d77-c849-4b6b-b5e9-85030c8c6b52 | public  | 
dcea3cea809f40c1a53b85ec3522de36 | aec0bc66-fb6a-453b-93c7-d04537a6bb05 
2001:db8::/64   |

|  | |  
| 8c881229-982e-417b-bbaa-e86d6192afa6 172.24.4.0/24   |

| c59455d9-c103-4c05-b28c-a1f5d041d804 | private | 
c8398b3154094049960e86b3caba1a4a | e12679b1-87e6-42cf-a2fe-e0f954dbd15f 
fdac:1365:7242::/64 |

|  | |  
| a1fc0a84-8cae-4193-8d33-711b612529b7 10.0.0.0/26 |

+--+-+--+--+

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --nets network=private 
cirros ping 8.8.8.8
...
stack@devstack-zun:~/devstack$ zun list
+--+--++++---+---+
| uuid | name | image  | status | 
task_state | addresses | ports |
+--+--++++---+---+
| 649724f6-2ccd-4b21-8684-8f6616228d86 | ctn-ping | cirros | Error  | None  
 |   | []|
+--+--++++---+---+
stack@devstack-zun:~/devstack$ zun show ctn-ping | fgrep reason
| status_reason | Docker internal error: 404 Client Error: Not Found 
("network private not found").  

 |
stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun delete ctn-ping

Request to delete container ctn-ping has been accepted.

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --nets 
network=c59455d9-c103-4c05-b28c-a1f5d041d804 cirros ping 8.8.8.8

...
stack@devstack-zun:~/devstack$ zun list
+--+--++++---+---+
| uuid | name | image  | status | 
task_state | addresses | ports |
+--+--++++---+---+
| 6093bdc2-d288-4ea9-a98b-3ca055318c9e | ctn-ping | cirros | Error  | None  
 |   | []|
+--+--++++---+---+
stack@devstack-zun:~/devstack$ zun show ctn-ping | fgrep reason
| status_reason | Docker internal error: 404 Client Error: Not Found 
("network c59455d9-c103-4c05-b28c-a1f5d041d804 not found"). 

 |
stack@devstack-zun:~/devstack$



Any ideas ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/ope

Re: [openstack-dev] [zun][api version]Does anyone know the idea of default api version in the versioned API?

2017-07-26 Thread Hongbin Lu
Hi all,

Here is a bit of context. Zun has introduced API microversions in the server 
[1] and the client [2]. The microversion needs to be bumped on the server side 
[3] whenever a backward-incompatible change is made. On the client side, we 
currently hard-code the default version; the client will pick the default 
version unless a version is explicitly specified.

As far as I know, the OpenStack community doesn’t have consensus on how to 
specify the default API version. Some projects picked a stable version as the 
default, and other projects picked the latest version. How to bump the default 
version is also controversial. If the default version is hard-coded, it might 
need to be bumped every time a change is made. Alternatively, there are some 
workarounds to avoid hard-coding the default version. Each approach has pros 
and cons.

For Zun, I think the following options are available (refer to this spec [4] if 
you are interested in more details):
1. Negotiate the default version between client and server, and pick the 
maximum version that both client and server support (a rough sketch of this 
option is included after the references below).
2. Hard-code the default version and bump it manually or periodically (how 
would we bump it periodically?).
3. Hard-code the default version and keep it unchanged.
4. Pick the latest version as the default.

Thoughts on this?

[1] https://blueprints.launchpad.net/zun/+spec/api-microversion
[2] https://blueprints.launchpad.net/zun/+spec/api-microversion-cli
[3] 
https://docs.openstack.org/zun/latest/contributor/api-microversion.html#when-do-i-need-a-new-microversion
[4] 
https://specs.openstack.org/openstack/ironic-specs/specs/approved/cli-default-api-version.html
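
To make option 1 concrete, here is a rough sketch of the negotiation logic. It
assumes the server advertises its supported microversion range on the versions
endpoint, which is the usual pattern; the endpoint URL, field names and header
below are illustrative, not the final design:

  import requests

  CLIENT_MIN = (1, 1)
  CLIENT_MAX = (1, 5)   # highest microversion this client understands

  def parse(version):
      major, minor = version.split('.')
      return int(major), int(minor)

  def negotiate(endpoint):
      resp = requests.get(endpoint).json()
      server_min = parse(resp['min_version'])
      server_max = parse(resp['max_version'])
      if CLIENT_MAX < server_min or server_max < CLIENT_MIN:
          raise RuntimeError('no microversion supported by both sides')
      # Default to the highest version both client and server support.
      return min(CLIENT_MAX, server_max)

  # negotiated = negotiate('http://controller:9517/')
  # headers = {'OpenStack-API-Version': 'container %d.%d' % negotiated}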

Best regards,
Hongbin

From: Shunli Zhou [mailto:shunli6...@gmail.com]
Sent: July-25-17 9:29 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun][api version]Does anyone know the idea of default 
api version in the versioned API?

Does anyone know the idea of default api version in versioned api?
I'm not sure if we should bump the default api version everytime the api 
version bumped? Could anyone explain the policy of how to bump the default api 
version?

Thanks.
B.R.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][unit test] Could anyone help one the unittest fail?

2017-07-27 Thread Hongbin Lu
Hi Shunli,

Sorry for the late reply. I saw you uploaded a revision of the patch and got 
the gate to pass. I guess you have resolved this issue?

Best regards,
Hongbin

From: Shunli Zhou [mailto:shunli6...@gmail.com]
Sent: July-25-17 10:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun][unit test] Could anyone help one the unittest 
fail?

Could anyone help on the unittest fail about the pecan api, refer to 
http://logs.openstack.org/31/486931/1/check/gate-zun-python27-ubuntu-xenial/c329b47/console.html#_2017-07-25_08_13_05_180414

I have two APIs, added in two patches. The first is HostController:get_all, 
which lists all the Zun hosts. The second is HostController:get_one. The 
get_all version is restricted to 1.4 and the get_one version is restricted to 
1.5.

I don't know why pecan calls get_one when testing get_all. I debugged the 
code: pecan first calls get_all with version 1.4 and everything is OK, but 
after that pecan also routes the request to get_one, which requires version 
1.5. And then the test fails. The code works fine in devstack.

Could anyone help me why the test failed, what's wrong about the test code?


Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Queens PTL candidacy

2017-08-01 Thread Hongbin Lu
Hi all,

I nominated myself to be a candidate of Zun PTL for Queens. As the founder of 
this
project, it is my honor to work with all of you to build an innovative
OpenStack container service.

OpenStack provides a full-featured data center management solution which
includes multi-tenant security, networking, storage, management and monitoring,
and more. All these services are needed regardless of whether containers,
virtual machines, or baremetal servers are being used [1]. In this context,
Zun's role is to bring prevailing container technologies to OpenStack and
enable the reuse of existing infrastructure services for containers.
Eventually, different container technologies should be easily accessible by
cloud consumers, which is a goal Zun is contributing to.

Since April 2016, when the project was founded, the Zun team has been
working hard to achieve its mission. We managed to deliver most of the
important features, including:
* A full-featured container API.
* A docker driver that serves as reference implementation.
* Neutron integration via Kuryr-libnetwork.
* Two image drivers: Docker Registry (i.e. Docker Hub) and Glance.
* Multi-tenancy: Containers are isolated by Keystone projects.
* Horizon integration.
* OpenStack Client integration.
* Heat integration.

Looking ahead to Queens, I would suggest the Zun team focus on the
following:
* NFV: Containerized NFV workloads are emerging and we want to adapt to this trend.
* Containers-on-VMs: Provide an option to auto-provision VMs for containers.
  This is for use cases that containers need to be strongly isolated by VMs.
* Cinder integration: Leverage Cinder for providing data volume for containers.
* Alternative container runtime: Introduce a second container runtime as a
  Docker alternative.
* Capsule API: Pack multiple containers into a managed unit.

Beyond Pike, I would expect Zun to move in the following directions:
* Kubernetes: Kubernetes is probably the most popular container orchestration
  tool, but there are still some gaps that prevent Kubernetes from working well
  with OpenStack. I think Zun might be able to help reduce those gaps. We could
  explore integration options for Kubernetes to make OpenStack more appealing
  to cloud-native users.
* Placement API: Nova team is working to split its scheduler out and Zun would
  like to leverage this new service if appropriate.

[1] https://www.openstack.org/assets/pdf-downloads/Containers-and-OpenStack.pdf

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] PTL nomination is open until Jan 29

2017-01-23 Thread Hongbin Lu
Hi all,

I am sending this email to encourage you to run for the Magnum PTL for Pike 
[1]. I think most of the audience is on this ML, so I am sending the message 
here.

First, I would like to thank you for your interest in the Magnum project. It is 
great to work with you to build the project and make it better and better. 
Second, I would like to relay a reminder that the Pike PTL nomination is open 
*now* and will close at Jan 29 23:45 UTC [1]. I hope more than one of you will 
step up to run for the Magnum PTL position; I think the community will be 
healthier if there is more than one PTL candidate. If you are considering 
running, I think the blog post below will help you understand more about this 
role.

  http://blog.flaper87.com/something-about-being-a-ptl

I strongly agree with the following key points of being a PTL:
* Make sure you will have enough time dedicated to the upstream.
* Prepare to step down in a cycle or two and create the next PTLs.
* Community decides: PTLs are not dictators.

If you have any questions while deciding, feel free to reach out to me and I am 
happy to share my past experience of being a Magnum PTL. Below is the history 
of Magnum PTLs. I sincerely thank them for their leadership, but I would 
encourage a change in the upcoming cycles, simply to follow the convention of 
other OpenStack projects of rotating the PTL position, ideally to a new person 
of a different affiliation. I think this will let everyone feel ownership of 
the project and help the community in the long run.

Juno and earlier: Adrian Otto
Kilo: Adrian Otto
Liberty: Adrian Otto
Mitaka: Adrian Otto
Newton: Hongbin Lu
Ocata: Adrian Otto

[1] https://governance.openstack.org/election/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-23 Thread Hongbin Lu
Hi Zun cores,

I proposed a change of Zun core team membership as below:

+ Kevin Zhao (kevin-zhao)
- Haiwei Xu (xu-haiwei)

Kevin has been working on Zun for a while and has made significant 
contributions. He has submitted several non-trivial, high-quality patches. One 
of his challenging tasks is adding support for container interactive mode, and 
it looks like he is capable of handling it (his patches are under review now). 
I think he is a good addition to the core team. Haiwei is a member of the 
initial core team. Unfortunately, his activity has dropped over the past few 
months.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Hongbin Lu
As Spyros mentioned, an option is to start by cloning the existing templates. 
However, I have a concern with this approach because it will incur a lot of 
duplication. An alternative approach is modifying the existing CoreOS templates 
in place. It might be a little more difficult to implement, but it saves you 
the overhead of deprecating the old version and rolling out the new one.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: January-24-17 3:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] CoreOS template v2

Hi.

IMO, you should add a BP and start by adding a v2 driver in /contrib.

Cheers,
Spyros

On Jan 24, 2017 20:44, "Kevin Lefevre" 
mailto:lefevre.ke...@gmail.com>> wrote:
Hi,

The CoreOS template is not really up to date and in sync with upstream CoreOS « 
Best Practice » (https://github.com/coreos/coreos-kubernetes); it is more a 
port of the Fedora Atomic template, but CoreOS has its own Kubernetes 
deployment method.

I’d like to implement the changes to sync kubernetes deployment on CoreOS to 
latest kubernetes version (1.5.2) along with standards components according the 
CoreOS Kubernetes guide :
  - « Defaults » add ons like kube-dns , heapster and kube-dashboard (kube-ui 
has been deprecated for a long time and is obsolete)
  - Canal for network policy (Calico and Flannel)
  - Add support for RKT as container engine
  - Support sane default options recommended by Kubernetes upstream (admission 
control : https://kubernetes.io/docs/admin/admission-controllers/, using 
service account…)
  - Of course add every new parameters to HOT.

These changes are difficult to implement as is (due to the fragment concept and 
everything is a bit messy between common and specific template fragment, 
especially for CoreOS).

I’m wondering if it is better to clone the CoreOS v1 template to a new v2 
template and build from there ?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] PTL candidacy

2017-01-27 Thread Hongbin Lu
Hi all,

This is the first time Zun participates in an official PTL election, which is
exciting. I nominated myself to be a candidate for Zun PTL. As the founder of
this project, it is my honor to work with all of you to build an innovative
container service for OpenStack.

Since April 2016, when the project was founded, the Zun team has made
impressive progress during these 8 months. With the hard work of the whole
team, we delivered most of the fundamental capabilities. The completed
essential features include:
* A full-featured container API.
* Two drivers that serve as reference implementations.
* Neutron integration in one of the drivers.
* Two image drivers: Docker Registry (i.e. Docker Hub) and Glance.
* Multi-tenancy: Containers are isolated by OpenStack projects.
* HA deployment: Support multiple compute hosts.
* Horizon integration.
* OpenStack Client integration.

Zun is an important project for OpenStack because it enables unique use cases
that require containers to be an OpenStack-managed resource (i.e. orchestrating
containerized and virtualized resources). Furthermore, it allows users to use
one platform, that is OpenStack, to manage containers, VMs, and baremetal.
To achieve this goal, there are a lot of exciting tasks to do.

For Pike, I would suggest the Zun team focus on the following:
* Container network: Integrate with Kuryr-libnetwork for providing networking
  for Docker containers.
* Container storage: Leverage Cinder for providing container data volume.
* Nova integration: Enhance the existing Nova driver.
* Resource management: Enhance the management of compute host resources, and
  introduce different placement policy (i.e. pin container to dedicated
  CPU cores).

Beyond Pike, I would suggest targeting the following use cases:
* Strong isolation between neighboring containers. This could be solved by
  introducing Hypervisor-based container runtime.
* Containerize stateful application (i.e. DBMS).
* NFV/HPC workload.

Also, I would like to highlight that nova-docker is going to retire, and
the users who were using nova-docker might want to find a replacement.
If they are willing to migrate from nova-docker to Zun, I would encourage the
Zun team to help out.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-30 Thread Hongbin Lu
Hi all,

Thanks for the votes. According to the feedback, this proposal is approved. 
Welcome Kevin to the core team.

Best regards,
Hongbin

-Original Message-
From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com] 
Sent: January-29-17 6:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Propose a change of the Zun core team 
membership

+1 to both.

On Mon, Jan 23, 2017 at 10:56:00PM +, Hongbin Lu wrote:
> Hi Zun cores,
> 
> I proposed a change of Zun core team membership as below:
> 
> + Kevin Zhao (kevin-zhao)
> - Haiwei Xu (xu-haiwei)
> 
> Kevin has been working for Zun for a while, and made significant 
> contribution. He submitted several non-trivial patches with high quality. One 
> of his challenging task is adding support of container interactive mode, and 
> it looks he is capable to handle this challenging task (his patches are under 
> reviews now). I think he is a good addition to the core team. Haiwei is a 
> member of the initial core team. Unfortunately, his activity dropped down in 
> the past a few months.
> 
> According to the OpenStack Governance process [1], we require a minimum of 4 
> +1 votes from Zun core reviewers within a 1 week voting window (consider this 
> proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
> enough votes or there is a veto vote prior to the end of the voting window, 
> this proposal is rejected.
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> Best regards,
> Hongbin
> 

> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Choose a project mascot

2017-02-14 Thread Hongbin Lu
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as follows:
* We choose a mascot from the natural world, which can be an animal (e.g. fish, 
bird), natural feature (e.g. waterfall) or other natural element (e.g. flame).
* Once we choose a mascot, I communicate the choice to the OpenStack foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to represent our team. All, any ideas for the 
mascot choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Choose a project mascot

2017-02-17 Thread Hongbin Lu
Hi all,

Thanks for the input. Aggregating feedback from different sources, the 
candidates are as below:
* Barrel
* Storks
* Falcon (I am not sure about this one since another team already chose Hawk)
* Dolphins
* Tiger

We will make a decision at the next team meeting.

Best regards,
Hongbin

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: February-16-17 10:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Choose a project mascot

I was thinking about falcon(light, powerful and fast), or dolphins or tiger.

On Wed, Feb 15, 2017 at 12:29 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as following:
* We choose a mascot from the natural world, which can be an animal (i.e. fish, 
bird), natural feature (i.e. waterfall) or other natural element (i.e. flame).
* Once we choose a mascot, I communicate the choice with OpenStack foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to present our team. All, any idea for the mascot 
choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun]Use 'uuid' instead of 'id' as object ident in data model

2017-02-21 Thread Hongbin Lu
Gordon & Qiming,

Thanks for your input. The only reason Zun was using 'id' is that the data 
model was copied from other projects that use 'id', but I couldn't think of a 
reason why they were using 'id' in the first place. Aggregating the feedback so 
far, I think it makes sense for Zun to switch to 'uuid', since we introduced 
etcd as an alternative datastore and etcd doesn't support auto-increment 
primary keys, unless someone points out a valid reason to stay with 'id'...
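
For illustration (a simplified sketch, not the actual Zun data model), the
switch would essentially mean making a generated UUID string the primary key
instead of an auto-increment integer, which also works for datastores like
etcd that have no notion of auto-increment:

  import uuid

  from sqlalchemy import Column, String
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Container(Base):
      __tablename__ = 'container'

      uuid = Column(String(36), primary_key=True,
                    default=lambda: str(uuid.uuid4()))
      name = Column(String(255))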

Best regards,
Hongbin

> -Original Message-
> From: gordon chung [mailto:g...@live.ca]
> Sent: February-21-17 8:29 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Zun]Use 'uuid' instead of 'id' as object
> ident in data model
> 
> 
> 
> On 21/02/17 01:28 AM, Qiming Teng wrote:
> >> in mysql[2].
> > Can someone remind me the benefits we get from Integer over UUID as
> > primary key? UUID, as its name implies, is meant to be an identifier
> > for a resource. Why are we generating integer key values?
> 
> this ^. use UUID please. you can google why auto increment is a
> probably not a good idea.
> 
> from a selfish pov, as gnocchi captures data on all resources in
> openstack, we store everything as a uuid anyways. even if your id
> doesn't clash in zun, it has a higher chance of clashing when you
> consider all the other resources from other services.
> 
> cheers,
> --
> gord
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Choose a project mascot

2017-02-21 Thread Hongbin Lu
Hi all,

It looks like someone proposed “panda” as a mascot for the Zun team [1] 
(although I don’t know who proposed it), and I think a panda would be an 
interesting choice. Thoughts on this?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: February-16-17 10:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Choose a project mascot

I was thinking about falcon(light, powerful and fast), or dolphins or tiger.

On Wed, Feb 15, 2017 at 12:29 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as following:
* We choose a mascot from the natural world, which can be an animal (i.e. fish, 
bird), natural feature (i.e. waterfall) or other natural element (i.e. flame).
* Once we choose a mascot, I communicate the choice with OpenStack foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to present our team. All, any idea for the mascot 
choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Choose a project mascot

2017-02-27 Thread Hongbin Lu
Hi all,

We discussed the mascot choice a few times. At the last team meeting, we 
decided to choose dolphins as Zun’s mascot. Thanks Pradeep for proposing this 
mascot and thanks all for providing feedback.

Best regards,
Hongbin

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: February-16-17 10:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Choose a project mascot

I was thinking about a falcon (light, powerful and fast), or dolphins, or a tiger.

On Wed, Feb 15, 2017 at 12:29 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as follows:
* We choose a mascot from the natural world, which can be an animal (e.g. a fish 
or bird), a natural feature (e.g. a waterfall) or another natural element (e.g. a flame).
* Once we choose a mascot, I communicate the choice to the OpenStack Foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to represent our team. Any ideas for the mascot 
choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Hongbin Lu
The Zun team could squeeze its session into 45 minutes and give the other 45 
minutes to another team if anyone is interested.

Best regards,
Hongbin

From: Kendall Nelson [mailto:kennelso...@gmail.com]
Sent: March-16-17 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!

I am pleased to see how much interest there is in these onboarding rooms. As of 
right now I can accommodate all the official projects (sorry, Cyborg) that have 
requested a room. To make all the requests fit, I have combined Docs and i18n 
and taken Thierry's suggestion to combine Infra/QA/RelMgmt/Reqs/Stable.
These are the projects that have requested a slot:
Solum
Tricircle
Karbor
Freezer
Kuryr
Mistral
Dragonflow
CloudKitty
Designate
Trove
Watcher
Magnum
Barbican
Charms
Tacker
Zun
Swift
Kolla
Horizon
Keystone
Nova
Cinder
Telemetry
Infra/QA/RelMgmt/Reqs/Stable
Docs/i18n
If there are any other projects willing to share a slot together, please let me 
know!
-Kendall Nelson (diablo_rojo)

On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley <fu...@yuggoth.org> wrote:
On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and giving each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
--
Jeremy Stanley



Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu
Zun had a similar issue of colliding on the keyword "container", and we chose 
to use the alternative term "appcontainer", which is not perfect but acceptable. 
IMHO, this kind of top-level name collision would be better resolved by 
introducing a namespace per project, which is the approach adopted by AWS.
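
As a rough illustration of the difference (the command names below are just examples, not a proposal):

aws ec2 describe-instances      # every AWS command is namespaced by service
aws ecs list-clusters
openstack server list           # OSC uses one flat set of qualified resource names
openstack appcontainer list     # Zun's compromise under that model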

Best regards,
Hongbin

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: March-20-17 3:35 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > Team,
> >
> > Stephen Watson has been working on a magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/python-
> > magnumclient+osc
> >
> > In review of this work, a question has resurfaced as to what the
> client command name should be for magnum-related commands. Naturally,
> we’d like to have the name “cluster” but that word is already in use by
> Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this. This
> is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening. Otherwise,
> there ends up being a whole bunch of duplication and the same terms being
> used for entirely different things.
> 
> > Stephen opened a discussion with Dean Troyer about this, and found
> that “infra” might be a suitable name and began using that, but
> multiple team members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an
> actual "thing" that Magnum provides.
> 
>  > The name “magnum” was excluded from consideration because OSC aims
> to be project-name agnostic. We know that no matter what word we pick,
> it’s not going to be ideal. I’ve added an agenda item to our upcoming team
> meeting to judge community consensus about which alternative we should
> select:
> >
> > https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-
> 03
> > -21_1600_UTC
> >
> > Current choices on the table are:
> >
> >   * c_cluster (possible abbreviation alias for
> container_infra_cluster)
> >   * coe_cluster
> >   * mcluster
> >   * infra
> >
> > For example, our selected name would appear in “openstack …” commands.
> Such as:
> >
> > $ openstack c_cluster create …
> >
> > If you have input to share, I encourage you to reply to this thread,
> or come to the team meeting so we can consider your input before the
> team makes a selection.
> 
> What is Magnum's service-types-authority service_type?
> 
> Best,
> -jay
> 


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu


> -Original Message-
> From: Dean Troyer [mailto:dtro...@gmail.com]
> Sent: March-20-17 5:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto 
> wrote:
> > the  argument is actually the service name, such as “ec2”.
> This is the same way the openstack cli works. Perhaps there is another
> tool that you are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like namespacing,
> but it isn't. The Volume API commands are one example of this: nearly
> every resource there includes the word 'volume', not because that is
> the API name but because that is the correct name for those
> resources ('volume backup', etc.).

[Hongbin Lu] I might provide a minority point of view here. What confuses me is 
the inconsistent style of resource names. For example, there is a "container" 
resource for a Swift container, and there is a "secret container" resource for a 
Barbican container. I just find it odd to have both an unqualified resource name 
(i.e. "container") and a qualified resource name (i.e. "secret container") in the 
same CLI. It appears to me that some resources are namespaced and others are 
not, and this kind of style provides a suboptimal user experience from my 
point of view.

I think the style would be more consistent if all the resources were qualified, 
or all unqualified, rather than a mix of both.
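
To make the inconsistency concrete (as far as I understand the current plugins, so treat the exact command names as approximate):

openstack container list          # Swift container, unqualified
openstack secret container list   # Barbican container, qualified
openstack appcontainer list       # Zun container, qualified with a project-specific prefix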

> 
> > We could do the same thing and use the text “container_infra”, but we
> felt that might be burdensome for interactive use and wanted to find
> something shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my thought process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores please,
> and in fact no dash here; resource names have spaces in them.
> 
> dt
> 
> --
> 
> Dean Troyer
> dtro...@gmail.com
> 


Re: [openstack-dev] [Zun] Document about how to deploy Zun with multiple hosts

2017-03-22 Thread Hongbin Lu
Kevin,

I don’t think there is any such document right now. I submitted a ticket for 
creating one:

https://bugs.launchpad.net/zun/+bug/1675245

There is a guide for setting up a multi-host devstack environment: 
https://docs.openstack.org/developer/devstack/guides/multinode-lab.html . You 
could use it as a starting point and inject the Zun-specific configuration there. 
The guide divides nodes into two kinds: the cluster controller and compute nodes. 
In the case of Zun, zun-api and zun-compute can run on the cluster controller, 
and zun-compute can run on each compute node. Hope it helps.
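
As a very rough sketch of how that split could look in the local.conf of each node (the plugin toggles and service list below are assumptions, so please double-check them against the Zun and Kuryr devstack plugins):

# Controller node (runs zun-api and zun-compute):
[[local|localrc]]
HOST_IP=<controller-ip>
enable_plugin zun https://git.openstack.org/openstack/zun
enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork

# Compute node (runs zun-compute only; the service list is illustrative and you
# may also need to disable zun-api here, depending on the plugin defaults):
[[local|localrc]]
HOST_IP=<compute-ip>
SERVICE_HOST=<controller-ip>
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
DATABASE_TYPE=mysql
ENABLED_SERVICES=n-cpu,q-agt
enable_plugin zun https://git.openstack.org/openstack/zun
enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork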

Best regards,
Hongbin

From: Kevin Zhao [mailto:kevin.z...@linaro.org]
Sent: March-22-17 10:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Zun] Document about how to deploy Zun with multiple 
hosts

Hi guys,
    I want to try Zun on multiple hosts, but I didn't find a doc about how to 
deploy it.
    Where is the document that shows users how to deploy Zun on multiple hosts? 
That would make development easier.
Thanks  :-)


[openstack-dev] [qa][devstack][kuryr][fuxi][zun] Consolidate docker installation

2017-04-02 Thread Hongbin Lu
Hi devstack team,

Please find my proposal for consolidating docker installation into one place, 
namely the devstack tree:

https://review.openstack.org/#/c/452575/

Currently, several projects install docker in their devstack plugins in various 
different ways. This potentially introduces issues if more than one such service 
is enabled in devstack, because the same software package will be installed and 
configured multiple times. To resolve the problem, an option is to consolidate 
the docker installation script into one place so that all projects can leverage 
it. Before continuing this effort, I wanted to get early feedback to confirm 
whether this kind of work will be accepted. BTW, etcd installation might have a 
similar problem and I would be happy to contribute another patch to consolidate 
it as well if that would be accepted.
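
To make the idea concrete, here is a rough sketch of the kind of shared helper I have in mind (the function name and file location are hypothetical, not the actual content of the patch):

# Hypothetical lib/docker in the devstack tree:
function install_docker {
    if is_ubuntu; then
        install_package docker.io
    elif is_fedora; then
        install_package docker
    fi
    # Make sure the daemon is enabled and running.
    sudo systemctl enable docker
    sudo systemctl start docker
}
# Each plugin would then call install_docker instead of shipping its own copy.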

Best regards,
Hongbin


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-02 Thread Hongbin Lu
Hi Joao,

Right now, it is impossible to create containers with escalated privileges,
such as setting privileged mode or adding additional caps. This is
intentional for security reasons. Basically, what Zun currently provides is
"serverless" containers, which means Zun is not using VMs to isolate
containers (people who want VM-strength isolation can choose a secure
container runtime such as Clear Containers). Therefore, it is insecure to
give users control over any kind of privilege escalation. However, if you
want this feature, I would love to learn more about the use cases.
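
For reference, what is being requested corresponds to the following plain-Docker flags (illustrative only; Zun does not expose either of them today):

docker run --cap-add NET_ADMIN <image>   # grant one additional capability
docker run --privileged <image>          # full privileged mode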

Best regards,
Hongbin

On Tue, Jan 2, 2018 at 10:20 AM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Hello!
>
> Is it possible to create containers in privileged mode or to add caps such as
> NET_ADMIN?
>
>
>
> Kind regards,
>
> João
>
>
>


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-02 Thread Hongbin Lu
Please find my reply inline.

Best regards,
Hongbin

On Tue, Jan 2, 2018 at 2:06 PM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Thanks for your answer, Hongbin, it is very appreciated.
>
>
>
> The use case is to use Virtualized Network Functions (VNFs) in containers instead
> of virtual machines. The rationale for using containers instead of VMs is
> better VNF density on resource-constrained hosts.
>
> The goal is to have several VNFs (DHCP, FW, etc.) running on a severely
> resource-constrained OpenStack compute node.  But without the NET_ADMIN cap I
> can’t even start dnsmasq.
>
Makes sense. Would you help write a blueprint for this feature:
https://blueprints.launchpad.net/zun ? We use blueprints to track all
requested features.


>
>
> Is it possible to use clear container with zun/openstack?
>
Yes, it is possible. We are adding documentation about that:
https://review.openstack.org/#/c/527611/ .
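
For illustration only, per-container runtime selection would look roughly like the line below; I am assuming the CLI exposes a --runtime option here, so please check the documentation under review for the authoritative syntax:

openstack appcontainer run --runtime <clear-containers-runtime> cirros ping 8.8.8.8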

>
>
> From checking gerrit, it seems that this point was already addressed and
> dropped? Regarding the security concerns, I disagree: if users choose to
> allow such a situation, they should be allowed to.
>
> It is the user's responsibility to recognize the dangers and act
> accordingly.
>
>
>
> In Neutron you can go as far as fully disabling port security; this,
> again, was implemented with VNFs in mind.
>
Makes sense as well. IMHO, we should disallow privilege escalation by
default, but I am open to introducing a configurable option to allow it. I
can see that this is necessary for some use cases. Cloud administrators
should be reminded of the security implications of doing so.


>
>
> Kind regards,
>
> João


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-03 Thread Hongbin Lu
On Wed, Jan 3, 2018 at 10:41 AM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Hello,
>
>
>
> I created the BP: https://blueprints.launchpad.net/zun/+spec/add-capacities-to-containers .
>
Thanks for creating the BP.


>
>
> About the clear containers, I’m not quite sure how using them solves my
> capabilities situation. Can you elaborate on that?
>
What I was trying to say is that Zun offers a choice of container runtime:
runc or Clear Containers. I am not sure how Clear Containers deals with
capabilities and privilege escalation. I will leave this question to others.


>
>
> Will zun ever be able to launch LXD containers?
>
Not for now. Only Docker is supported.


>
>
> Kind regards,
>
> João
>


Re: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

2018-01-19 Thread Hongbin Lu
I remember there were several discussions about action APIs in the past. Here is 
one discussion I could find: 
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109136.html . 
An obvious alternative is to expose each action as an independent API 
endpoint. For example:

* POST /servers/{server_id}/start: Start a server
* POST /servers/{server_id}/stop: Stop a server
* POST /servers/{server_id}/reboot: Reboot a server
* POST /servers/{server_id}/pause: Pause a server

Several people pointed out the pros and cons of either approach and other 
alternatives [1] [2] [3]. Eventually, we (the OpenStack Zun team) adopted the 
alternative approach [4] above, and it works very well from my perspective. 
However, I understand that there is no consensus on this approach within the 
OpenStack community.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109178.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109208.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109248.html
[4] 
https://developer.openstack.org/api-ref/application-container/#manage-containers
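
To illustrate the two styles side by side (abbreviated curl sketches; $TOKEN, $NOVA and $ZUN are placeholders for a valid token and the service endpoints):

# Action-in-body style (nova):
curl -X POST "$NOVA/servers/$ID/action" -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" -d '{"os-start": null}'
# Action-in-URL style (zun):
curl -X POST "$ZUN/v1/containers/$ID/start" -H "X-Auth-Token: $TOKEN"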

Best regards,
Hongbin

From: TommyLike Hu [mailto:tommylik...@gmail.com]
Sent: January-18-18 5:07 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action 
name in request url

Hey all,
   Recently we found an issue related to our OpenStack action APIs. We usually 
expose our OpenStack APIs by registering them to our API gateway (for instance 
Kong [1]), but this becomes very difficult for action APIs. We cannot register 
and control them separately because they all share the same request URL, which 
is used as the identity in the gateway service, not to mention rate limiting 
and other advanced gateway features. Take a look at the basic resources in 
OpenStack:

   1. Server: "/servers/{server_id}/action"  35+ APIs are included.
   2. Volume: "/volumes/{volume_id}/action"  14 APIs are included.
   3. Other resources

We have tried to register different interfaces with the same upstream URL, such as:

   api gateway: /version/resource_one/action/action1 => upstream: 
/version/resource_one/action
   api gateway: /version/resource_one/action/action2 => upstream: 
/version/resource_one/action

But it's not secure enough because we can pass action2 in the request body while 
invoking /action/action1. Also, reading the full body for routing is not 
supported by most API gateways (maybe via plugins) and would have a performance 
impact when proxying. So my question is: do we have any solution or suggestion 
for this case? Could we support specifying the action name both in the request 
body and in the URL, such as:

URL:/volumes/{volume_id}/action
BODY:{'extend':{}}

and:

URL:/volumes/{volume_id}/action/extend
BODY: {'extend':{}}

Thanks
Tommy

[1]: https://github.com/Kong/kong


[openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens

2018-01-19 Thread Hongbin Lu
Hi Kuryr team,

I think Kuryr-libnetwork is ready to move out of beta status. I propose to
make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
branch on it. What do you think about this proposal?

Best regards,
Hongbin


[openstack-dev] [nova][neutron] Extend instance IP filter for floating IP

2018-01-24 Thread Hongbin Lu
Hi all,

Nova currently allows us to filter instances by fixed IP address(es). This 
feature is known to be useful in an operational scenario where cloud 
administrators detect abnormal traffic from an IP address and want to trace it 
down to the instance that the IP address belongs to. This feature works well 
except for one limitation: it only supports fixed IP address(es). In real 
operational scenarios, cloud administrators might find that the abused IP 
address is a floating IP and want to do the filtering in the same way as for a 
fixed IP.

Right now, unfortunately, the experience diverges between these two classes of 
IP address. Cloud administrators need to implement the logic to (i) detect the 
class of IP address (fixed or floating), (ii) use nova's IP filter if the 
address is a fixed IP address, and (iii) do manual filtering if the address is a 
floating IP address. I wonder if the nova team is willing to accept an 
enhancement that makes the IP filter support both. Optimally, cloud 
administrators could simply pass the abused IP address to nova and nova would 
handle the heterogeneity.
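
To illustrate the manual workflow today (a sketch only; exact option availability depends on the client versions in use):

openstack server list --all-projects --ip 10.0.0.5   # works only if 10.0.0.5 is a fixed IP
# If the abused address turns out to be a floating IP, the admin has to pivot by hand:
openstack floating ip show 203.0.113.7 -c port_id
openstack port show <port-id> -c device_id            # device_id is the instance uuid
openstack server show <device-id>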

In terms of implementation, I expect the change to be small. After this patch [1], 
Nova queries Neutron to compile a list of ports' device_ids (a port's device_id 
is equal to the UUID of the instance to which the port is bound) and uses those 
device_ids to query the instances. If Neutron returns an empty list, Nova can 
make a second query to Neutron for floating IPs. There is an RFE [2] and a POC 
[3] proposing to add a device_id attribute to the floating IP API resource. Nova 
could leverage this attribute to compile a list of instance UUIDs and use it as 
a filter when listing instances.

If this feature is implemented, will it benefit the general community? Finally, 
I also wonder how others are tackling a similar problem. Appreciate your 
feedback.

[1] https://review.openstack.org/#/c/525505/
[2] https://bugs.launchpad.net/neutron/+bug/1723026
[3] https://review.openstack.org/#/c/534882/

Best regards,
Hongbin


[openstack-dev] [Zun] PTL non candidacy

2018-02-06 Thread Hongbin Lu
Hi all,

Just to let you know that I won't run for Zun PTL for Rocky because we already
have a good PTL candidate:

  https://review.openstack.org/#/c/541187/

I am happy to see that we are able to rotate the PTL role, and this is an
indication that our project has become mature and our community is healthy. I
will definitely continue my contributions to Zun regardless of whether I am the
PTL or not.

It has been a pleasure to work with you, and I am looking forward to working
with our new PTL to continue building our project.

Best regards,
Hongbin


[openstack-dev] [osc][python-openstackclient] Consistency of option name

2018-02-11 Thread Hongbin Lu
Hi all,

I was working on the OSC plugin of my project and trying to choose a CLI
option to represent the availability zone of a container. When I came
across the existing commands, I saw some inconsistencies in the naming.
Some commands use the syntax '--zone <zone>', while others use the syntax
'--availability-zone <availability-zone>'. For example:

* openstack host list ... [--zone <zone>]
* openstack aggregate create ... [--zone <zone>]
* openstack volume create ... [--availability-zone <availability-zone>]
* openstack consistency group create ... [--availability-zone <availability-zone>]

I wonder if it makes sense to address this inconsistency. Is it possible to
have all commands use one syntax?

Best regards,
Hongbin


[openstack-dev] [Zun] Meeting cancel

2018-02-13 Thread Hongbin Lu
Hi team,

We won't have team meetings in the next two weeks. This is because next
week is Lunar New Year and the week after is the PTG. We will resume
the weekly team meeting on Mar 6, 2018. Please find the schedule in:
https://wiki.openstack.org/wiki/Zun#Meetings .

Happy holiday everyone!

Best regards,
Hongbin


[openstack-dev] [zun][kuryr][kuryr-libnetwork][neutron] Gate breakage due to removal of tag extension

2018-02-14 Thread Hongbin Lu
 Hi all,

Zun's gate is currently broken due to the removal of the tag extension [1] on
the neutron side. The reason is that Zun has a dependency on Kuryr-libnetwork,
and Kuryr-libnetwork relies on the tag extension that was removed.

A quick fix is to revert the tag extension removal patch [2]. This will
unblock the gate immediately. Potential alternative fixes are welcome as long
as they can quickly unblock the gate. Your help is greatly appreciated.

[1] https://review.openstack.org/#/c/534964/
[2] https://review.openstack.org/#/c/544179/

Best regards,
Hongbin


[openstack-dev] [docs] About the convention to use '.' instead of 'source'.

2018-02-17 Thread Hongbin Lu
Hi all,

We have contributors submitting patches [1] to switch over from 'source' to
'.'. Frankly, it is a bit confusing for reviewers to review those patches
since it is unclear what the rationale for the change is. Tracing down to the
patch [2] that introduced this convention unfortunately doesn't help, since
there is not much information in the commit message. Moreover, this convention
doesn't seem to be followed very well in the community. I see that devstack is
still using 'source' instead of '.' [3], which contradicts what the docs say [4].

If anyone can clarify the rationale for this convention, it would be really
helpful.

[1] https://review.openstack.org/#/c/543155/
[2] https://review.openstack.org/#/c/304545/3
[3] https://github.com/openstack-dev/devstack/blob/master/stack.sh#L592
[4]
https://docs.openstack.org/doc-contrib-guide/writing-style/code-conventions
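
For what it's worth, the two spellings behave identically in bash; '.' is the form specified by POSIX sh, while 'source' is a bash/zsh synonym for it:

. openrc admin admin       # POSIX sh builtin
source openrc admin admin  # same effect in bash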

Best regards,
Hongbin

