[openstack-dev] [networking-ovn] [patch update] b8af082
I would like to apply this patch to a standard deployment of Newton. After applying the patch (copying over the diffs), are there any steps needed for a db sync? There are a number of changes in ovn_db_sync.py: https://github.com/openstack/networking-ovn/commit/b8af082e326d1294ec3110a1d4ac266da84868b8

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
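For context (not from the thread itself): the job of a db-sync utility like ovn_db_sync.py is essentially a diff-and-repair between the Neutron database and the OVN northbound database. A minimal sketch of that pattern, with illustrative names only (this is not the actual networking-ovn API):

```python
# Sketch of the sync/repair pattern: compare resource IDs present in
# the Neutron DB against those in the OVN northbound DB, then create
# the missing ones and delete the stale ones. Illustrative names only.

def diff_resources(neutron_ids, ovn_ids):
    """Return (missing_in_ovn, stale_in_ovn) as sorted lists."""
    neutron, ovn = set(neutron_ids), set(ovn_ids)
    return sorted(neutron - ovn), sorted(ovn - neutron)

def sync(neutron_ids, ovn_ids, mode="log"):
    """Compute the repair actions; in "log" mode just report them."""
    missing, stale = diff_resources(neutron_ids, ovn_ids)
    actions = [("create", res) for res in missing]
    actions += [("delete", res) for res in stale]
    if mode == "repair":
        # The real utility would issue OVN NB transactions here.
        pass
    return actions
```

In this model, rerunning the sync after applying a patch is idempotent: once both databases agree, `sync()` returns an empty action list.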
Re: [openstack-dev] [Ironic] Baremetal Storage Service?
Hi Jay,

2016-11-13 3:12 GMT+09:00 Jay Pipes :
> On 11/12/2016 09:31 AM, Akira Yoshiyama wrote:
>>
>> Hi Stackers,
>>
>> In TripleO, Ironic provides physical servers for an OpenStack deployment but we have to configure physical storages manually, or with any tool, if required. It's better that an OpenStack service manages physical storages as same as Ironic for servers.
>>
>> IMO, there are 2 plans to manage physical storages:
>
> When you say "manage physical storage" are you referring to configuring something like Ceph or GlusterFS or even NFS on a bunch of baremetal servers?

No. "Physical storage" here means storage products like EMC VNX, NetApp Data ONTAP, HPE Lefthand and so on. Say there is a new service named X to manage them. A user, who will be a new IaaS admin, requests many baremetal servers from Ironic and some baremetal storage arrays from X. After they are provided, he/she starts to build a new OpenStack deployment with them. Nova in the new deployment provides VMs on the servers, and Cinder manages logical volumes on the storage arrays. X doesn't manage each logical volume, but rather the pools, user accounts and network connections of the arrays.

BR, Akira

> That isn't a multi-tenant HTTP API service designed for lots of users but rather a need to automate some mostly one-time storage setup actions.
>
> If so, I think that is more the realm of configuration management systems like Puppet or Ansible than OpenStack itself.
>
> Best,
> -jay
>
>> a) extends Ironic
>> b) creates a new service
>>
>> Which is better? Any ideas?
>>
>> Thank you,
>> Akira
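To make the proposed scope concrete (my illustration, not from the thread): service X would track whole arrays with their pools, accounts, and network connections, handing them to a tenant whose own Cinder then carves out logical volumes. A minimal sketch with hypothetical names:

```python
# Hypothetical resource model for the proposed service "X": it manages
# arrays at the granularity of pools, accounts, and network connections;
# per-volume management stays with the tenant's Cinder.
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    name: str
    capacity_gb: int

@dataclass
class StorageArray:
    vendor: str                                     # e.g. "NetApp"
    pools: list = field(default_factory=list)
    accounts: list = field(default_factory=list)    # credentials handed to the tenant
    networks: list = field(default_factory=list)    # iSCSI/FC connections

    def total_capacity_gb(self):
        return sum(p.capacity_gb for p in self.pools)

array = StorageArray(vendor="NetApp",
                     pools=[StoragePool("p1", 500), StoragePool("p2", 1500)])
```

The key design point it captures: X never sees individual volumes, only the array-level resources it allocates to a deployment.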
Re: [openstack-dev] [kolla][release] Version numbers for kolla-ansible repository.
Thanks, I have done exactly as you recommended.

Regards
-steve

On 11/12/16, 10:49 AM, "Andreas Jaeger" wrote:
>On 11/12/2016 11:15 AM, Steven Dake (stdake) wrote:
>> The proposal I think you made is what I was thinking we would do, so just to clarify:
>> Kolla keeps all branches/tags.
>> When kolla-ansible is created with an upstream tag in PC, the project-config will include no post jobs so no artifacts will be created.
>
>we import the complete repo - so create a copy of the repo for import, delete the branches and tags on that copy - and then it gets imported as is. Then you don't need post jobs removal.
>
>> Kolla-ansible will have all branches deleted from it (so we maintain the back ports in the kolla repo where the code originated).
>
>Do this as before.
>
>> New (ocata) versions of kolla-ansible will have a 4.0.0 tag but no branches, and possibly no tags.
>>
>> The delta between kolla and kolla-ansible is in the diagram in this thread, but in a nutshell:
>>
>> Today Kolla contains build.py, the docker dir, the ansible dir, and kolla-ansible.py.
>>
>> In the future:
>> Kolla contains build.py and the docker dir.
>> Kolla-ansible contains the ansible dir and kolla-ansible.py.
>>
>> Both repos contain whatever is needed to make those repos work with the various OpenStack processes.
>>
>> I agree back ports will be more challenging in general with a repo split. The core reviewers committed to maintaining back ports properly when they voted on the repo split.
>
>Andreas
>--
> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Re: [openstack-dev] [tc][kolla] Ansible module with GPLv3
Excerpts from Jeremy Stanley's message of 2016-11-05 14:08:29 +:
> On 2016-11-04 16:38:45 -0700 (-0700), Clint Byrum wrote:
> [...]
> > Modules are not plugins.
> [...]
> > This only refers to dynamic inventory, which is hardly even a plugin interface.
> >
> > Strategy plugins run in ansible itself and must import pieces of Ansible, and thus must be GPLv3:
> [...]
>
> On further reading I mostly concur. Unfamiliarity with Ansible led me to believe they used the terms plug-in and module interchangeably, and I missed that the "strategy" qualifier was key to Michał's original problem statement. Strategy plugins do seem to generally import lots of GPLv3 licensed Python modules from Ansible and call into them, which under conventional wisdom makes the result a derivative work of Ansible that therefore needs to be distributed under a license compatible with GPLv3 (Apache2 would qualify according to the FSF).

https://www.apache.org/licenses/GPL-compatibility.html

"This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with ASF's requirement that all Apache software must be distributed under the Apache License 2.0. We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. We want to honor their license. Unless GPLv3 licensors relax this interpretation of their own license regarding linking, our licensing philosophies are fundamentally incompatible. This is an identical issue for both GPLv2 and GPLv3. Despite our best efforts, the FSF has never considered the Apache License to be compatible with GPL version 2, citing the patent termination and indemnification provisions as restrictions not present in the older GPL license. The Apache Software Foundation believes that you should always try to obey the constraints expressed by the copyright holder when redistributing their work."

So, no, I don't believe that they're compatible, and I don't believe this could even be rewritten to be ASL 2.0. Ansible plugins need to be GPLv3. Period. I think it just has to be an exception and stored in a separate repo.
[openstack-dev] [nova][barbican] ocata summit security specs and testing session recap
At the Ocata summit we held a design summit session covering several security-related specs from Dane Fichter and Peter Hamilton. The full etherpad is here: https://etherpad.openstack.org/p/ocata-nova-summit-security

Dane was present, and the majority of the discussion was on the cert validation spec: https://review.openstack.org/#/c/357151/

Daniel Berrange has done the most review on the spec and was present to discuss some of the issues with the proposal. Ultimately there was agreement on an incremental step forward: allow passing a list of certificate uuids when creating a server, which would be used for signed image verification. The spec lays out several alternatives and options for improving on this later, but they are out of scope right now, so we're starting small to address the main problem defined in the spec. I missed some of the discussion in the room and there aren't many details in the etherpad, so if Dane or Daniel want to update the etherpad or expand on this thread that would be helpful.

I have reviewed the cert validation spec and added several questions and concerns, for example: how do we handle evacuate and migration when we don't persist the list of trusted cert IDs used to create the server? Discussion on that will continue in the spec.

The other thing we talked about during this session was the need for a CI job that can test a lot of the security-related features we already support, like signed image verification and using a real key manager like Barbican. The idea is that before we add more features in this space we really need to start doing integration testing of the code we already have. Dane Fichter has started working on some of this already. We shouldn't require any changes to Tempest as there are no API changes, but we need some work in devstack to configure it for signed images and using a real key manager.
And then we need a new CI job defined which uses the Barbican devstack plugin to deploy Barbican and configure the other services like Nova and Glance to use it. I've volunteered to help work on pulling those CI job pieces together.

--
Thanks,
Matt Riedemann
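The incremental step described above — pass a list of trusted certificate IDs at server create and use them during signed-image verification — can be sketched as follows. This is my illustration of the idea, not the actual Nova or cursive API; all names are hypothetical:

```python
# Sketch of the agreed incremental step: image signature verification
# succeeds only if the image's signing certificate is among the trusted
# cert IDs the user supplied at server create. Hypothetical names only.

class CertVerificationError(Exception):
    pass

def verify_image_cert(image_signing_cert_id, trusted_cert_ids):
    if not trusted_cert_ids:
        # No trust list supplied: fall back to signature-only checking,
        # as signed image verification works today.
        return "signature-only"
    if image_signing_cert_id not in set(trusted_cert_ids):
        raise CertVerificationError(
            "signing cert %s is not in the trusted list"
            % image_signing_cert_id)
    return "verified"
```

Note this sketch also makes the open question above concrete: if `trusted_cert_ids` is not persisted with the server, evacuate and migration have no list to pass back in.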
Re: [openstack-dev] [Ironic] Baremetal Storage Service?
On 11/12/2016 09:31 AM, Akira Yoshiyama wrote:
> Hi Stackers,
>
> In TripleO, Ironic provides physical servers for an OpenStack deployment but we have to configure physical storages manually, or with any tool, if required. It's better that an OpenStack service manages physical storages as same as Ironic for servers.
>
> IMO, there are 2 plans to manage physical storages:

When you say "manage physical storage" are you referring to configuring something like Ceph or GlusterFS or even NFS on a bunch of baremetal servers? That isn't a multi-tenant HTTP API service designed for lots of users but rather a need to automate some mostly one-time storage setup actions.

If so, I think that is more the realm of configuration management systems like Puppet or Ansible than OpenStack itself.

Best,
-jay

> a) extends Ironic
> b) creates a new service
>
> Which is better? Any ideas?
>
> Thank you,
> Akira
Re: [openstack-dev] [kolla][release] Version numbers for kolla-ansible repository.
On 11/12/2016 11:15 AM, Steven Dake (stdake) wrote:
> The proposal I think you made is what I was thinking we would do, so just to clarify:
> Kolla keeps all branches/tags.
> When kolla-ansible is created with an upstream tag in PC, the project-config will include no post jobs so no artifacts will be created.

we import the complete repo - so create a copy of the repo for import, delete the branches and tags on that copy - and then it gets imported as is. Then you don't need post jobs removal.

> Kolla-ansible will have all branches deleted from it (so we maintain the back ports in the kolla repo where the code originated).

Do this as before.

> New (ocata) versions of kolla-ansible will have a 4.0.0 tag but no branches, and possibly no tags.
>
> The delta between kolla and kolla-ansible is in the diagram in this thread, but in a nutshell:
>
> Today Kolla contains build.py, the docker dir, the ansible dir, and kolla-ansible.py.
>
> In the future:
> Kolla contains build.py and the docker dir.
> Kolla-ansible contains the ansible dir and kolla-ansible.py.
>
> Both repos contain whatever is needed to make those repos work with the various OpenStack processes.
>
> I agree back ports will be more challenging in general with a repo split. The core reviewers committed to maintaining back ports properly when they voted on the repo split.

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Re: [openstack-dev] [Openstack-operators] [trove][nova] More file injection woes
On 11/12/2016 8:19 AM, Amrith Kumar wrote:
> I'm adding [trove] to the subject as we're interested in where this ends up. Matt, it may make sense to include other projects that use service VMs if they are using file injection/config drive/...
>
> -amrith

Amrith, is that list of projects defined anywhere? If not, it'd be good to start documenting that in the nova devref so that the nova team has an idea of which projects are using it and how they are using it.

--
Thanks,
Matt Riedemann
[openstack-dev] [Ironic] Baremetal Storage Service?
Hi Stackers,

In TripleO, Ironic provides physical servers for an OpenStack deployment, but we have to configure physical storage manually, or with some other tool, if required. It would be better if an OpenStack service managed physical storage the same way Ironic manages servers.

IMO, there are 2 ways to manage physical storage:
a) extend Ironic
b) create a new service

Which is better? Any ideas?

Thank you,
Akira
Re: [openstack-dev] [Openstack-operators] [trove][nova] More file injection woes
I'm adding [trove] to the subject as we're interested in where this ends up. Matt, it may make sense to include other projects that use service VMs if they are using file injection/config drive/...

-amrith

-----Original Message-----
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
Sent: Friday, November 11, 2016 8:12 PM
To: OpenStack Development Mailing List (not for usage questions) ; openstack-operat...@lists.openstack.org
Subject: [Openstack-operators] [nova] More file injection woes

Chris Friesen reported a bug [1] where injected files on a server aren't in the guest after it's evacuated to another compute host. This is because the injected files aren't persisted in the nova database at all. Evacuate and rebuild use similar code paths, but rebuild is a user operation whose command line is similar to boot, while evacuate is an admin operation and the admin doesn't have the original injected files.

We've talked about issues with file injection before [2] - in that case, not being able to tell if it can be honored, so it just silently doesn't inject the files but the server build doesn't fail. We could eventually resolve that with capabilities discovery in the API.

There are other issues with file injection, like potential security issues, and we've talked about getting rid of it for years because you can use the config drive. The metadata service is not a replacement, as noted in the code [3], because the files aren't persisted in nova so they can't be served up later.

I'm sure we've talked about this before, but if we were to seriously consider deprecating file injection, what does that look like? Thoughts off the top of my head are:

1. Add a microversion to the server create and rebuild REST APIs such that the personality files aren't accepted unless:
   a) you're also building the server with a config drive
   b) or CONF.force_config_drive is True
   c) or the image has the 'img_config_drive=mandatory' property

2. Deprecate VFSLocalFS in Ocata for removal in Pike. That means libguestfs is required. We'd do this because I think VFSLocalFS is the one with potential security issues.

Am I missing anything? Does this sound like a reasonable path forward? Are there other use cases out there for file injection that we don't have alternatives for, like config drive?

Note I'm cross-posting to the operators list for operator feedback there too.

[1] https://bugs.launchpad.net/nova/+bug/1638961
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-July/098703.html
[3] https://github.com/openstack/nova/blob/b761ea47b97c6df09e21755f7fbaaa2061290fbb/nova/api/metadata/base.py#L179-L187

--
Thanks,
Matt Riedemann

___ OpenStack-operators mailing list openstack-operat...@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
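Option 1 above amounts to a simple gate in the API layer. A minimal sketch of that check (my illustration; the function name and exact microversion semantics are hypothetical, not actual Nova code):

```python
# Sketch of the proposed microversion gate: personality files are
# rejected unless a config drive is guaranteed by the request (a),
# the deployment (b), or the image (c). Illustrative only.

def personality_allowed(has_config_drive, force_config_drive,
                        image_properties):
    if has_config_drive:        # (a) server is being built with a config drive
        return True
    if force_config_drive:      # (b) CONF.force_config_drive is True
        return True
    # (c) the image demands a config drive
    return image_properties.get("img_config_drive") == "mandatory"
```

The point of the gate is that injected files are only accepted when they are certain to reach the guest via the config drive, so the non-persistence problem from the evacuate bug never arises.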
Re: [openstack-dev] [kolla][release] Version numbers for kolla-ansible repository.
The proposal I think you made is what I was thinking we would do, so just to clarify:

Kolla keeps all branches/tags.
When kolla-ansible is created with an upstream tag in PC, the project-config will include no post jobs so no artifacts will be created.
Kolla-ansible will have all branches deleted from it (so we maintain the back ports in the kolla repo where the code originated).
New (ocata) versions of kolla-ansible will have a 4.0.0 tag but no branches, and possibly no tags.

The delta between kolla and kolla-ansible is in the diagram in this thread, but in a nutshell:

Today Kolla contains build.py, the docker dir, the ansible dir, and kolla-ansible.py.

In the future:
Kolla contains build.py and the docker dir.
Kolla-ansible contains the ansible dir and kolla-ansible.py.

Both repos contain whatever is needed to make those repos work with the various OpenStack processes.

I agree back ports will be more challenging in general with a repo split. The core reviewers committed to maintaining back ports properly when they voted on the repo split.

Regards
-steve

On 11/11/16, 7:43 AM, "Doug Hellmann" wrote:
>Excerpts from Steven Dake (stdake)'s message of 2016-11-08 21:41:26 +:
>>
>> On 11/8/16, 9:08 AM, "Doug Hellmann" wrote:
>>
>> >Excerpts from Steven Dake (stdake)'s message of 2016-11-08 13:08:11 +:
>> >> Hey folks,
>> >>
>> >> As we split out the repository per our unanimous vote several months ago, we have a choice to make (I think, assuming we are given latitude by the release team, who is in the cc list) as to which version to call kolla-ansible.
>> >>
>> >> My personal preference is to completely retag our newly split repo with all old tags from kolla git revisions up to version 3.0.z. The main rationale I can think of is kolla-ansible 1 = liberty, 2 = mitaka, 3 = newton. I think calling kolla-ansible 1.0 = newton would be somewhat confusing, but we could do that as well.
>> >>
>> >> The reason the repository needs to be retagged in either case is to generate release artifacts (tarballs, pypi, etc).
>> >>
>> >> Would also like feedback from the release team on what they think is a best practice here (we may be breaking new ground for the OpenStack release team, maybe not – is there prior art here?)
>> >>
>> >> For a diagram (mostly for the release team) of the repository split check out: https://www.gliffy.com/go/share/sg9fc5eg9ktg9binvc89
>> >>
>> >> Thanks!
>> >> -steve
>> >
>> >When you say "split," I'm going to assume that you mean the openstack/kolla repo has the full history but that openstack/kolla-ansible only contains part of the files and their history.
>>
>> Doug,
>>
>> I'd like to maintain history for both repos, and then selectively remove the stuff not needed for each repo (so they will then diverge).
>
>Sure, that's one way to do it. I recommend picking just one of the repos to have the old tags. I'm not sure if it would be simpler to keep them in the repo that is current (openstack/kolla, I think?) because artifact names for the old versions won't change that way, or to keep all of that history and the stable branches in the repo where you'll be doing new work to make backporting simpler.
>
>What's the difference between kolla and kolla-ansible?
>
>> >Assuming the history is preserved in openstack/kolla, then I don't think you want new tags. The way to reproduce the 1, 2, or 3 versions is to check out the existing tag in openstack/kolla. Having similar tags in openstack/kolla-ansible will be confusing about which is the actual tag that produced the build artifacts that were shipped with those version numbers. New versions tagged on master in openstack/kolla-ansible can still start from 4 (or 3.x, I suppose).
>>
>> Ok, that works. I think the lesson there is we can't change the past :) I think we would want kolla-ansible
>>
>> >Do you maintain stable branches? Are those being kept in openstack/kolla or openstack/kolla-ansible?
>>
>> Great question and something I hadn't thought of.
>>
>> Yes, we maintain stable branches for liberty, mitaka, and newton. I'm not sure if a stable branch for liberty is in policy for OpenStack. Could you advise here?
>
>Liberty is scheduled to be EOL-ed around 17 Nov, so if you have the branch I would keep it for now and go through the EOL process normally.
>
>> I believe the result we want is to maintain the stable branches for liberty/mitaka/newton in kolla and then tag kolla-ansible Ocata as 4.0.0. I don't know if we need the 1/2/3 tags deleted in this case. Could you advise?
>>
>> Thanks for your help and contributions Doug :)
>>
>> Regards
>> -steve
>>
>> >Doug
[openstack-dev] [neutron][dvr][fip] router support two external network
Hi all,

Currently, the neutron model supports one router with one external network, which is used to connect the router to the outside world. A FIP can be allocated from the external network which is the gateway of a router. One private fixed IP of a port (usually a VM port) can only be associated with one floating IP.

In some deployment scenarios, all ports are served by one router; all ports need an IP address which can be accessed from the intranet, and some ports also need an IP address which can be accessed from the internet. I was wondering how neutron could resolve this kind of use case?

One idea is for one router to support two external networks (one for the intranet, the other for the internet, but with only one serving as the gateway). The other idea is for one router to still have only one external network, but for that external network to have two different types of subnet (one for the internet, the other for the intranet).

Any comment is welcome. Thanks.

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
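To illustrate the second idea above (my sketch, not an existing neutron API): a single external network carries two subnet "scopes", and a floating IP is allocated from whichever subnet matches the requested scope. All names here are hypothetical:

```python
# Toy model of the "one external network, two subnet types" idea:
# each subnet is tagged with a scope (intranet/internet) and FIP
# allocation draws from the matching subnet. Illustrative names only.
import ipaddress

class ExternalNetwork:
    def __init__(self):
        self.subnets = {}   # scope -> iterator over free host addresses

    def add_subnet(self, scope, cidr):
        self.subnets[scope] = ipaddress.ip_network(cidr).hosts()

    def allocate_fip(self, scope):
        if scope not in self.subnets:
            raise ValueError("no subnet for scope %r" % scope)
        return str(next(self.subnets[scope]))

ext = ExternalNetwork()
ext.add_subnet("intranet", "10.0.0.0/30")
ext.add_subnet("internet", "203.0.113.0/30")
```

The first idea (two external networks on one router) would instead key the allocation on the network rather than the subnet, but the API-visible question is the same: the FIP request must carry the intended scope.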