Re: [openstack-dev] [nova][python-novaclient] microversion implementation on client side
Thanks, Devananda, I read the ironic spec and it covers almost all the cases I'm looking for. The only thing we missed in nova is returning the max/min version in a header when nova can't process the requested version.

2015-04-28 15:38 GMT+08:00 Devananda van der Veen devananda@gmail.com: FWIW, we enumerated the use-cases and expected behavior for all combinations of server [pre versions, older version, newer version] and client [pre versions, older version, newer version, user-specified version] in this informational spec: http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html#proposed-change Not all of that is implemented yet within our client, but the auto-negotiation of version is done. While our clients probably don't share any code, maybe something here can help: http://git.openstack.org/cgit/openstack/python-ironicclient/tree/ironicclient/common/http.py#n72 -Deva

On Mon, Apr 27, 2015 at 2:49 AM, John Garbutt j...@johngarbutt.com wrote: I see these changes as really important. We need to establish good patterns other SDKs can copy.

On 24 April 2015 at 12:05, Alex Xu sou...@gmail.com wrote: 2015-04-24 18:15 GMT+08:00 Andrey Kurilin akuri...@mirantis.com: When the user executes a command without --os-compute-version, the nova client should discover the versions the nova server supports; the command then chooses the latest version supported by both client and server. In that case, why can X-Compute-API-Version accept the value "latest"? Also, such discovery will require an extra request to the API side for every client call. I think it is convenient in some cases, such as making it easier for users to try the nova api with code that accesses the nova api directly. Yes, it needs one extra request. But without discovery, we can't be sure the client supports the server; the client may be too old for the server and may not even support the server's min version. For a better user experience, I think it is worth discovering the version. And we already call keystone on every nova client cli call, so it is acceptable.
We might need to extend the API to make this easier, but I think we need to come up with a simple and efficient pattern here.

Case 1: Existing python-novaclient calls, now going to the v2.1 API. We can look for the transitional entry of computev21, as mentioned above, but it seems fair to assume v2.1 and v2.0 are accessed from the same service catalog entry of compute, by default (eventually). Let's be optimistic about what the cloud supports, and request the "latest" version from v2.1. If it's a v2.0-only API endpoint, we will not get back a version header with the response, and we could error out if the user requested a v2.x min_version via the CLI parameters. In most cases, we get the latest return values, and all is well.

Case 2: User wants some info they know was added to the response in a specific microversion. We can request "latest" and error out if we don't get a new enough version to meet the user's minimum requirement.

Case 3: Adding support for a new request added in a microversion. We could just send "latest" and assume the new functionality, then raise an error when we get bad request (or similar) and check the version header to see if that was the cause of the problem, so we can say why it failed. If it's supported, everything just works. If the user requests a specific version from before it was supported, we should error out as not supported, I guess? In a way it would be cleaner if we had a way for the client to say "latest but requires 2.3", so you get a bad version request if your minimum requirement is not respected; that is much clearer than misinterpreting random errors that you might generate. But I guess it's not strictly required here.

Would all that work? It should avoid an extra API call to discover the specific version we have available. '--os-compute-version=None' can be supported; that means the server's minimum supported version will be returned. From my point of view '--os-compute-version=None' is equal to not specifying a value.
Maybe it would be better to accept the value "min" for the os-compute-version option. I think '--os-compute-version=None' means not specifying the version request header when sending an api request to the server. The server behavior is that if no value is specified, the min version will be used. --os-compute-version=v2 means no version specified, I guess? Can we go back to the use cases here please? What do the users need here and why? 3. If the microversion is not supported, but the user calls a command with --os-compute-version, this should fail. Imo, it should be implemented on the API side (return BadRequest when the X-Compute-API-Version header is present in V2). V2 is already deployed now, and doesn't do that. No matter what happens we need to fix that. Hmm, I'm not sure, because GET '/v2/' can already be used to discover microversions
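Since much of the thread above hinges on comparing and negotiating microversions, here is a minimal sketch of the negotiation logic being discussed. The function names are illustrative, not the actual python-novaclient code, and the version ranges are assumed to come from the server's version discovery document.

```python
# Illustrative sketch of client-side microversion negotiation; the names
# here are made up for the example and are not actual novaclient code.

def parse_microversion(version):
    """Turn a string like "2.12" into the tuple (2, 12) for comparison.

    Plain string comparison is wrong here: "2.9" > "2.12" lexically,
    even though microversion 2.12 is the newer one.
    """
    major, minor = version.split(".")
    return (int(major), int(minor))

def negotiate(client_min, client_max, server_min, server_max):
    """Pick the newest microversion both sides support, or raise."""
    c_min, c_max = parse_microversion(client_min), parse_microversion(client_max)
    s_min, s_max = parse_microversion(server_min), parse_microversion(server_max)
    if c_max < s_min or s_max < c_min:
        # No overlap at all: e.g. the client is too old for the server,
        # the case Alex Xu raises above.
        raise ValueError("no common microversion: client %s-%s, server %s-%s"
                         % (client_min, client_max, server_min, server_max))
    return "%d.%d" % min(c_max, s_max)
```

For example, a client supporting 2.1-2.12 against a server supporting 2.1-2.30 would negotiate 2.12, which is the "latest version supported by both client and server" behavior described above.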
Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in
Thanks Kevin, answers inline.

On 6 May 2015 at 00:28, Fox, Kevin M kevin@pnnl.gov wrote: So... as an operator looking at #3, if I need to support lbaas, I'm getting pushed to run more and more services, like octavia, plus a neutron-lbaas service, plus neutron? This seems like an operator scalability issue... What benefit does splitting out the advanced services into their own services have?

You have a valid point here. In the past I was keen on insisting that neutron was supposed to be a management-layer-only service for most networking services. However, the consensus seems to be moving toward a microservices-style architecture. It would be interesting to get some feedback regarding the additional operational burden of managing a plethora of services, even if it is worth noting that when one deploys neutron with its reference architecture there are already plenty of moving parts. Regardless, I need to slap your hand, because this discussion is not really pertinent to this thread, which is specifically about having a strategy for the Neutron API. I would be happy to have a separate thread for defining a strategy for neutron services. I'm pretty sure Doug will be more than happy to slap your hands too.

Thanks, Kevin -- *From:* Salvatore Orlando [sorla...@nicira.com] *Sent:* Tuesday, May 05, 2015 3:13 PM *To:* OpenStack Development Mailing List *Subject:* [openstack-dev] [neutron][api] Extensions out, Micro-versions in

There have now been a few iterations on the specification for Neutron micro-versioning [1]. It seems that no one in the community opposes introducing versioning. In particular, API micro-versioning as implemented by Nova and Ironic seems a decent way to evolve the API incrementally. What the developer community seems not yet convinced about is moving away from extensions.
It seems everybody realises the flaws of evolving the API through extensions, but there are understandable concerns regarding the impact on plugins/drivers, as well as the ability to differentiate, which is something quite dear to several neutron teams. I tried to consider all the concerns and feedback received; hopefully everything has been captured in a satisfactory way in the latest revision of [1]. With this ML post I also seek feedback from the API-wg concerning the current proposal, whose salient points can be summarised as follows:

#1 Extensions are no longer part of the neutron API. Evolution of the API will now be handled through versioning. Once microversions are introduced: - current extensions will be progressively moved into the Neutron unified API - no more extensions will be accepted as part of the Neutron API

#2 Introduction of "features" for addressing diversity in Neutron plugins. It is possible that the combination of neutron plugins chosen by the operator won't be able to support the whole Neutron API. For this reason a concept of "feature" is included. Which features are provided depends on the plugins loaded. The list of features is hardcoded, as it is strictly dependent on the Neutron API version implemented by the server. The specification also mandates a minimum set of features every neutron deployment must provide (those would be the minimum set of features needed for integrating Neutron with Nova).

#3 Advanced services are still extensions. This is a temporary measure, as the APIs for load balancing, VPN, and Edge Firewall are still served through the neutron WSGI. As these APIs will live independently in the future, it does not make sense to version them with the Neutron APIs.

#4 Experimenting in the API. One thing that has plagued Neutron in the past is the impossibility of getting people to reach any sort of agreement over the shape of certain APIs. With the proposed plan we encourage developers to submit experimental APIs.
Experimental APIs are unversioned, and no guarantee is made regarding deprecation or backward compatibility. They are also optional, as a deployer can turn them off. While there are caveats, like forever-experimental APIs, this will enable developers to address user feedback during the APIs' experimental phase. The Neutron community and the API-wg can provide plenty of useful feedback, but ultimately it is user feedback that determines whether an API proved successful or not. Please note that the current proposal goes in a direction different from that approved in Nova when it comes to experimental APIs [3].

#5 Plugin/Vendor specific APIs. Neutron is without doubt the project with the highest number of 3rd party (OSS and commercial) integrations. After all, it was mostly vendors who started this project. Vendors [4] use the extension mechanism to expose features in their products not covered by the Neutron API, or to provide some sort of value-added service. The current proposal still allows 3rd parties to attach extensions to the neutron API, provided that: - they're not considered
Re: [openstack-dev] [Fuel] Transaction scheme
First of all I propose to wrap HTTP handlers by begin/commit/rollback

I don't know what you are talking about; we have wrapped handlers in a transaction for a long time. Here's the code: https://github.com/stackforge/fuel-web/blob/2de3806128f398d192d7e31f4ca3af571afeb0b2/nailgun/nailgun/api/v1/handlers/base.py#L53-L84 The issue is that we sometimes perform `.commit()` inside the code (e.g. `task.execute()`) and therefore it's hard to predict which data are committed and which are not. In order to avoid this, we have to declare strict scopes for the different layers. Yes, we should definitely build on the idea that handlers open a transaction at the beginning and close it at the end. But that won't solve all the problems, because sometimes we need to commit data before the handler's end, for instance, committing some task before sending a message to Astute. Such cases complicate things, and it would be cool if we could avoid them by refactoring our architecture. Perhaps we could send tasks to Astute when the handler is done? What do you think?

Thanks, igor

On Wed, May 6, 2015 at 12:15 PM, Lukasz Oles lo...@mirantis.com wrote: On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky akislit...@mirantis.com wrote: Hi! The refactoring of transaction management in Nailgun is critically required for scaling. First of all I propose to wrap HTTP handlers with a begin/commit/rollback decorator. After that we should introduce a transaction-wrapping decorator for Task execute/message calls. And the last one is the wrapping of receiver calls. As a result we should have begin/commit/rollback calls only in the transaction decorator. Big +1 for this. I always wondered why we don't have it. Also I propose to separate working with DB objects into a separate layer and use only high-level Nailgun objects in the code and tests. This work was started a long time ago, but is not finished yet. On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me wrote: Hi folks!
Recently I faced the pretty sad fact that in Nailgun there's no common approach to managing transactions. There are commits and flushes in random places in the code, and it used to work somehow just because it was all synchronous. However, after just a few of the subcomponents were moved to different processes, it all started producing races and deadlocks which are really hard to resolve, because there is absolutely no way to predict how a specific transaction is managed other than by analyzing the source code. That is a rather ineffective and error-prone approach that has to be fixed before it becomes uncontrollable. Let's arrange a discussion to design a document which will describe where and how transactions are managed, and refactor Nailgun according to it in 7.0. Otherwise the results may be sad.

- romcheg

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- Łukasz Oleś
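The begin/commit/rollback decorator proposed in this thread can be sketched as below. This is a minimal illustration, assuming a SQLAlchemy-style session object with commit()/rollback(); the names (transactional, session_factory) are made up for the example and are not the actual Nailgun code.

```python
import functools

def transactional(session_factory):
    """Run a handler inside exactly one begin/commit/rollback scope.

    session_factory is assumed to return an object exposing commit() and
    rollback(), e.g. a SQLAlchemy session. The point of the pattern is
    that no handler body ever calls commit() itself.
    """
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            session = session_factory()
            try:
                result = handler(session, *args, **kwargs)
                session.commit()    # the single commit point, at handler end
                return result
            except Exception:
                session.rollback()  # any failure undoes the whole handler
                raise
        return wrapper
    return decorator
```

With a wrapper like this, deferred side effects such as sending tasks to Astute would naturally run after the wrapper commits, which is the "send tasks to Astute when the handler is done" idea raised above.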
Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests
On Wed, 6 May 2015, Robert Collins wrote: It's actually an antipattern. It tells testr that tests are appearing and disappearing depending on what test entry point a user runs each time. testr expects the set of tests to only change when code changes. So I fully expect that this pattern is going to lead to wtf moments now, and likely more in the future. What's the right forum for discussing the pressures that led to this hack, so we can do something that works better with the underlying tooling, rather than in such a disruptive fashion?

I'd appreciate here (that is, on this list), because from my perspective there are a lot of embedded assumptions in the way testr does things and wants the environment to be that aren't immediately obvious, and which would perhaps be made clearer if you could expand on the details of what's wrong with this particular hack. I tried to come up with a specific question here to drive that illumination a bit more concretely, but a) not enough coffee yet, b) mostly I just want to know more detail about the first three paragraphs above. Thanks.

-- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova
Hi I noticed last night that there are 23 bugs currently filed in nova tagged as ironic related. Whilst some of those are scheduler issues, a lot of them seem like things in the ironic driver itself. Does the ironic team have someone assigned to work on these bugs and generally keep an eye on their driver in nova? How do we get these bugs resolved?

Thanks for this call-out. I don't think we have anyone specifically assigned to keep an eye on the Ironic Nova driver; we would look at it from time to time or when someone asks us to in the Ironic channel/ML/etc... But that said, I think we need to pay more attention to the bugs in Nova. I've added one item about it to be discussed in the next Ironic meeting [1]. And in the meantime, I will take a look at some of the bugs myself. [1] https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting Thanks again, Lucas
Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests
On 05/06/2015 04:57 AM, Chris Dent wrote: On Wed, 6 May 2015, Robert Collins wrote: It's actually an antipattern. It tells testr that tests are appearing and disappearing depending on what test entry point a user runs each time. testr expects the set of tests to only change when code changes. So I fully expect that this pattern is going to lead to wtf moments now, and likely more in the future. What's the right forum for discussing the pressures that led to this hack, so we can do something that works better with the underlying tooling, rather than in such a disruptive fashion? I'd appreciate here (that is, on this list), because from my perspective there are a lot of embedded assumptions in the way testr does things and wants the environment to be that aren't immediately obvious, and which would perhaps be made clearer if you could expand on the details of what's wrong with this particular hack. I tried to come up with a specific question here to drive that illumination a bit more concretely, but a) not enough coffee yet, b) mostly I just want to know more detail about the first three paragraphs above.

There are 2 reasons that pattern exists. 1) testr discovery is quite slow on large trees, especially if your intent is to run a small subset of tests by sending an argument. 2) testr still doesn't have an exclude facility, so top-level test exclusion has to be done with quite complicated negative-asserting regexes, which are very error prone (and have been done incorrectly many times), especially if you *still* want to support partial test passing. -Sean

-- Sean Dague http://dague.net
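To illustrate Sean's second point: since testr only accepts a *selection* regex, excluding tests means writing a negative lookahead that matches everything else, which is easy to get wrong. A hypothetical sketch (the test names here are made up for illustration):

```python
import re

# Select every test id that does NOT mention "functional" or "slow" --
# the only way to "exclude" tests when the tool lacks an exclude facility.
exclude = re.compile(r"^(?!.*(?:functional|slow)).*$")

tests = [
    "nova.tests.unit.test_flavors.TestFlavors.test_list",
    "nova.tests.functional.test_servers.TestServers.test_boot",
    "nova.tests.unit.test_utils.TestUtils.test_slow_path",
]
selected = [t for t in tests if exclude.match(t)]
# Note that the third (unit) test is dropped too, merely because its
# *name* contains "slow" -- exactly the kind of accidental exclusion
# that makes these regexes error prone.
```

The demonstration shows why such patterns "have been done incorrectly many times": the lookahead applies to the whole test id, so any incidental substring match silently widens the exclusion.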
Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?
Joe Gordon wrote: On Tue, May 5, 2015 at 9:53 AM, James Bottomley wrote: On Tue, 2015-05-05 at 10:45 +0200, Thierry Carrez wrote: The issue is, who can write such content? It is a full-time job to produce authored content; you can't just copy (or link to) content produced elsewhere. It takes a very special kind of individual to write such content: the person has to be highly technical, able to tackle any topic, and totally connected with the OpenStack development community. That person has to be cross-project and ideally have already-built legitimacy. Here, you're being overly restrictive. Lwn.net isn't staffed by top-level kernel maintainers (although it does solicit the occasional article from them). It's staffed by people who gained credibility via their insightful reporting rather than by their contributions. I see no reason why the same model wouldn't work for OpenStack. ++. I have a hunch that, like many things (in OpenStack), if you make a space for people to step up, they will. I guess being burnt trying to set that up in the past makes me overly pessimistic. Let's see... Anyone interested in producing that kind of OpenStack Developer Community Digest? -- Thierry Carrez (ttx)
Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova
On 6 May 2015 at 09:39, Lucas Alvares Gomes lucasago...@gmail.com wrote: Hi I noticed last night that there are 23 bugs currently filed in nova tagged as ironic related. Whilst some of those are scheduler issues, a lot of them seem like things in the ironic driver itself. Does the ironic team have someone assigned to work on these bugs and generally keep an eye on their driver in nova? How do we get these bugs resolved? Thanks for this call-out. I don't think we have anyone specifically assigned to keep an eye on the Ironic Nova driver; we would look at it from time to time or when someone asks us to in the Ironic channel/ML/etc... But that said, I think we need to pay more attention to the bugs in Nova. I've added one item about it to be discussed in the next Ironic meeting [1]. And in the meantime, I will take a look at some of the bugs myself. [1] https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

Thanks to you both for raising and pushing on this. Maybe we can get a named cross-project liaison to bridge the Ironic and Nova meetings. We are working on building a similar pattern for Neutron. It doesn't necessarily mean attending every nova meeting, just someone to act as an explicit bridge between our two projects? I am open to whatever works, though; I'm just hoping we can be more proactive about the issues and dependencies that pop up. Thanks, John
Re: [openstack-dev] [Fuel] Transaction scheme
I mean that we should have explicitly wrapped http handlers. For example: @transaction def PUT(...): ... We don't need transactions, for example, in GET methods. I propose to get rid of the complex data flows in our code. Code with a 'commit' call inside the method should be split into independent units. I like the solution of sending tasks to Astute at the end of handler execution.

On Wed, May 6, 2015 at 12:57 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: First of all I propose to wrap HTTP handlers by begin/commit/rollback I don't know what you are talking about; we have wrapped handlers in a transaction for a long time. Here's the code: https://github.com/stackforge/fuel-web/blob/2de3806128f398d192d7e31f4ca3af571afeb0b2/nailgun/nailgun/api/v1/handlers/base.py#L53-L84 The issue is that we sometimes perform `.commit()` inside the code (e.g. `task.execute()`) and therefore it's hard to predict which data are committed and which are not. In order to avoid this, we have to declare strict scopes for the different layers. Yes, we should definitely build on the idea that handlers open a transaction at the beginning and close it at the end. But that won't solve all the problems, because sometimes we need to commit data before the handler's end, for instance, committing some task before sending a message to Astute. Such cases complicate things, and it would be cool if we could avoid them by refactoring our architecture. Perhaps we could send tasks to Astute when the handler is done? What do you think? Thanks, igor

On Wed, May 6, 2015 at 12:15 PM, Lukasz Oles lo...@mirantis.com wrote: On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky akislit...@mirantis.com wrote: Hi! The refactoring of transaction management in Nailgun is critically required for scaling. First of all I propose to wrap HTTP handlers with a begin/commit/rollback decorator. After that we should introduce a transaction-wrapping decorator for Task execute/message calls. And the last one is the wrapping of receiver calls.
As a result we should have begin/commit/rollback calls only in the transaction decorator. Big +1 for this. I always wondered why we don't have it. Also I propose to separate working with DB objects into a separate layer and use only high-level Nailgun objects in the code and tests. This work was started a long time ago, but is not finished yet. On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me wrote: Hi folks! Recently I faced the pretty sad fact that in Nailgun there's no common approach to managing transactions. There are commits and flushes in random places in the code, and it used to work somehow just because it was all synchronous. However, after just a few of the subcomponents were moved to different processes, it all started producing races and deadlocks which are really hard to resolve, because there is absolutely no way to predict how a specific transaction is managed other than by analyzing the source code. That is a rather ineffective and error-prone approach that has to be fixed before it becomes uncontrollable. Let's arrange a discussion to design a document which will describe where and how transactions are managed, and refactor Nailgun according to it in 7.0. Otherwise the results may be sad.
- romcheg

-- Łukasz Oleś
Re: [openstack-dev] [TripleO] Core reviewer update proposal
On 05/05/2015 01:57 PM, James Slagle wrote: Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core. Giulio has been an active member of our community for a while. He worked on the HA implementation in the elements, and recently has been making a lot of valuable contributions and reviews related to puppet in the manifests, heat templates, ceph, and HA. Steve Hardy has been instrumental in providing a lot of Heat domain knowledge to TripleO, and his reviews and guidance have been very beneficial to a lot of the template refactoring. He's also been reviewing and contributing in other TripleO projects besides just the templates, and has shown a solid understanding of TripleO overall.

180 day stats:
| gfidente | 2080 42 166 0 079.8% | 16 ( 7.7%) |
| shardy | 2060 27 179 0 086.9% | 16 ( 7.8%) |

TripleO cores, please respond with +1/-1 votes and any comments/objections within 1 week. +1 Congrats! Giulio and Steve, please also let me know if you'd like to serve on the TripleO core team if there are no objections. I'd also like to give a heads-up to the following folks whose review activity has been very low for the last 90 days:

| tomas-8c8 ** | 80 0 0 8 2 100.0% | 0 ( 0.0%) |
| lsmola ** | 60 0 0 6 5 100.0% | 0 ( 0.0%) |
| cmsj ** | 60 2 0 4 266.7% | 0 ( 0.0%) |
| jprovazn ** | 10 1 0 0 0 0.0% | 0 ( 0.0%) |

I've shifted my focus to a slightly different area; although I plan to contribute to some parts of TripleO, I don't have an overall overview of all the major parts of the project, which is necessary for core reviews - feel free to remove me from the core team. Jan
Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?
Hugh Blemings wrote: +2 I think asking LWN if they have the bandwidth and interest to do this would be ideal - they've credibility in the Free/Open Source space and a proven track record. Nice people too. On the bandwidth side, as a regular reader I was under the impression that they struggle with their load already, but I guess if it comes with funding that could be an option. On the interest side, my past tries to invite them to the OpenStack Summit so that they could cover it (the way they cover other conferences) were rejected, so I have doubts in that area as well. Does anyone have a personal connection that we could leverage to pursue that option further? -- Thierry Carrez (ttx)
Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?
I think Paul is correctly scoping this discussion in terms of APIs and the management layer. For instance, it is true that dynamic routing support and BGP support might be prerequisites for BGP VPNs, but it should be possible to have at least an idea of how user and admin APIs for this VPN use case should look. In particular, the discussion on service chaining is a bit out of scope here. I'd just note that [1] seems to have a lot of overlap with group-based-policies [2], and that it appears to be a service that consumes Neutron rather than an extension to it. The current VPN service was conceived to be fairly generic. IPSEC VPN is the only implemented one, but SSL VPN and BGP VPN were on the map as far as I recall. Personally, I think having a lot of different VPN APIs is not ideal for users. As a user, I probably don't even care about configuring a VPN. What is important for me is to get L2 or L3 access to a network in the cloud; therefore I would seek common abstractions that might allow a user to configure a VPN service using the same APIs. Obviously then there will be parameters which will be specific to the particular class of VPN being created. I have listened to several contributors in this area in the past, and there are plenty of opinions across a spectrum which goes from total abstraction (just expose edges at the API layer) to what could be tantamount to a RESTful configuration of a VPN appliance. I am not in a position to prescribe what direction the community should take; so, for instance, if the people working on XXX VPN believe the best way forward for them is to start a new project, so be it. The other approach would obviously be to build onto the current APIs. The only way the Neutron API layer provides to do that is to extend an extension. This sounds terrible, and it is indeed terrible. There is a proposal for moving toward versioned APIs [3], but until that proposal is approved and implemented, extensions are the only thing we have.
From an API perspective the mechanism would be simple: 1 - declare the extension, and implement get_required_extensions to put 'vpnaas' as a requirement 2 - implement a DB mixin for it providing basic CRUD operations 3 - add it to the VPN service plugin and add its alias to 'supported_extension_aliases' (steps 2 and 3 can be merged if you wish not to have a mixin) What might be a bit more challenging is defining how this reflects onto the VPN drivers. Ideally you would have a driver for every VPN type you support, and then have a little dispatcher to route the API call to the appropriate driver according to the VPN type. Salvatore [1] https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining [2] https://wiki.openstack.org/wiki/GroupBasedPolicy [3] https://review.openstack.org/#/c/136760 On 6 May 2015 at 07:14, Vikram Choudhary vikram.choudh...@huawei.com wrote: Hi Paul, Thanks for starting this mail thread. We are also eyeing support for MPBGP in Neutron and would like to actively participate in this discussion. Please let me know about the IRC channels which we will be following for this discussion. Currently, I am following the BPs below for this work. https://blueprints.launchpad.net/neutron/+spec/edge-vpn https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol Moreover, a similar kind of work is being headed by Cathy for defining an intent framework which can be extended for various use cases. Currently it will be leveraged for SFC, but I feel the same can be used for providing the intent VPN use case.
https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining Thanks Vikram *From:* Paul Michali [mailto:p...@michali.net] *Sent:* 06 May 2015 01:38 *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [neutron] How should edge services APIs integrate into Neutron? There's been talk in VPN land about new services, like BGP VPN and DM VPN. I suspect there are similar things in other Advanced Services. I talked to Salvatore today, and he suggested starting a ML thread on this... Can someone elaborate on how we should integrate these API extensions into Neutron, both today, and in the future, assuming the proposal that Salvatore has is adopted? I could see two cases. The first, and simplest, is when a feature has an entirely new API that doesn't leverage off of an existing API. The other case would be when the feature's API would dovetail into the existing service API. For example, one may use the existing vpn_service API to create the service, but then create BGP VPN or DM VPN connections for that service, instead of the IPSec connections we have today. If there are examples already of how to extend an existing API extension that would help in
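The "little dispatcher" Salvatore describes — one driver per VPN type, with the service plugin routing each call by type — can be sketched as below. All class and method names here are illustrative, not real Neutron interfaces.

```python
class IPSecDriver:
    """Hypothetical driver for the existing IPSec connection type."""
    def create_connection(self, conn):
        return ("ipsec", conn["id"])

class BGPVPNDriver:
    """Hypothetical driver for a new BGP VPN connection type."""
    def create_connection(self, conn):
        return ("bgpvpn", conn["id"])

class VPNServicePlugin:
    """Routes API calls to the driver registered for the connection's type."""
    def __init__(self):
        # one driver per supported VPN type
        self._drivers = {"ipsec": IPSecDriver(), "bgpvpn": BGPVPNDriver()}

    def create_vpn_connection(self, conn):
        try:
            driver = self._drivers[conn["vpn_type"]]
        except KeyError:
            raise ValueError("unsupported vpn_type: %s" % conn["vpn_type"])
        return driver.create_connection(conn)
```

The point of the pattern is that adding DM VPN or SSL VPN later is a new driver plus a registry entry, while the user-facing API (and its extension) stays the same.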
Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6
Robert Li (baoli) ba...@cisco.com wrote on 05/05/2015 09:02:08 AM: Currently dual stack is supported. Can you be specific on what interoperation/transition techniques you are interested in? We’ve been thinking about NAT64 (stateless or stateful). thanks, Robert On 5/4/15, 9:56 PM, Mike Spreitzer mspre...@us.ibm.com wrote: Does Neutron support any of the 4/6 interoperation/transition techniques? I wear an operator's hat nowadays, and want to make IPv6 as useful and easy to use as possible for my tenants. I think the interoperation/transition techniques will play a big role in this. Is dual stacking working in routers now? At the moment I am still using Juno, but plan to move to Kilo. I want to encourage my tenants to use as much IPv6 as possible. But I expect some will have to keep some of their workload on v4 (I know there is on-going work to get many application frameworks up to v6 speed, and it is not complete yet). I expect some tenants could be mixed: some workload on v4 and some on v6. Such a tenant would appreciate a NAT between his v6 space and his v4 space. These are the easiest cases --- sections 2.5 and 2.6 --- of RFC 6144. I would prefer to do it in a stateless way if possible. That would be pretty easy if Neutron and Nova were willing to accept an IPv6 subnet that is much smaller than 2^64 addresses. I see that my MACs differ only in their last 24 bits. Some tenants could put their entire workload on v6, but that workload would be unreachable from customers of all those ISPs (such as mine, CableVision) that deny IPv6 service to their customers. There are techniques for coping, and Teredo looks like a pretty good one. It has been shipped in Windows for years. Yet I cannot find a Windows machine where Teredo actually works. What's up with that? If Windows somehow got its Teredo, or other, act together, that would be only half the job; Teredo requires something from the server side as well, right?
Supposing a focus on mobile, where IPv6 is much more available, and/or progress by Microsoft and/or other ISPs, my tenant might face a situation where his clients could come in over v6 but some of his servers still have to run on v4. That's section 2.3 of RFC 6144. While I am a Neutron operator, I am also a customer of a lower-layer network provider. That network provider will happily give me a few /64s. How do I serve IPv6 subnets to lots of my tenants? In the bad old v4 days this would be easy: a tenant puts all his stuff on his private networks and NATs (e.g., floating IP) his edge servers onto a public network --- no need to align tenant private subnets with public subnets. But with no NAT for v6, there is no public/private distinction --- I can only give out the public v6 subnets that I am given. Yes, NAT is bad. But not being able to get your job done is worse. Sean M. Collins s...@coreitpro.com wrote on 05/05/2015 06:26:28 AM: I think that Neutron exposes enough primitives through the API that advanced services for handling your transition technique of choice could be built. I think that is right, if I am willing to assume Neutron is using OVS --- or build a bunch of alternatives that correspond to all the Neutron plugins and mechanisms that I might encounter. And it would feel a lot like Neutron implementation work. Really, it is one instance of doing some NFV. Thanks, Mike
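For the stateless flavor of NAT64 that Robert and Mike mention, the address mapping itself is purely algorithmic: RFC 6052 embeds the IPv4 address in an IPv6 prefix, and with the well-known prefix 64:ff9b::/96 no per-flow state is needed for the address part. A small sketch of that mapping (not Neutron code):

```python
import ipaddress

def nat64_map(ipv4, prefix="64:ff9b::/96"):
    """Embed an IPv4 address in a NAT64 /96 prefix per RFC 6052."""
    net = ipaddress.ip_network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    # the IPv4 address occupies the low 32 bits of the /96
    return ipaddress.IPv6Address(int(net.network_address) + v4)
```

A v6-only client would reach IPv4 server 192.0.2.1 at 64:ff9b::c000:201; the stateful part of NAT64 (and DNS64) handles the reverse direction, which is where a Neutron advanced service would come in.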
Re: [openstack-dev] [PKG-Openstack-devel][horizon][xstatic] XStatic-Angular-Bootstrap in violation of the MIT/Expat license (forwarded from: python-xstatic-angular-bootstrap_0.11.0.2-1_amd64.changes R
On 05/05/2015 05:05 PM, Michael Krotscheck wrote: The real question seems to be whether packagers have a disproportionate amount of power to set development goals, tools, and policy. This is a common theme that I've encountered frequently, and it leads to no small amount of tension. This tension serves no-one, and really just causes all of us stress. How about we start a separate thread to discuss the roles of package maintainers in OpenStack? Michael Mostly, everyone has been super friendly in the OpenStack community, and reactions are almost always very constructive; plus, my concerns are almost always addressed (and when they are not, either there's a real reason why, or it's hard to do). I haven't felt tension as much as you're claiming, except maybe with a very small number of individuals, but that's unavoidable in such a large community. Thomas
Re: [openstack-dev] How to turn tempest CLI tests into python-*client in-tree functional tests
On 14 February 2015 at 10:26, Joe Gordon joe.gord...@gmail.com wrote: Digging through the logs, this originated from this bug: https://bugs.launchpad.net/tempest/+bug/1260710 It's probably not needed everywhere and in all the clients. So I've looked more closely at this. It's actually an antipattern. It tells testr that tests are appearing and disappearing depending on what test entry point a user runs each time. testr expects the set of tests to only change when code changes. So, I fully expect that this pattern is going to lead to wtf moments now, and likely more in future. What's the right forum for discussing the pressures that led to this hack, so we can do something that works better with the underlying tooling, rather than in such a disruptive fashion? -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
Re: [openstack-dev] [nova] Which error code should we return when OverQuota
On Wed, 6 May 2015, Sean Dague wrote: All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. Please do not do this. Please use the 4xx codes as best as you possibly can. Yes, they don't always match, but there are several of them for reasons™ and it is usually possible to find one that sort of fits. Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the most part people are talking to OpenStack through official clients but a) what happens when they aren't, b) is that the kind of world we want? I certainly don't. I want a world where the HTTP APIs that OpenStack and other services present actually use HTTP and allow a diversity of clients (machine and human). Using response codes effectively makes it easier to write client code that is either simple or is able to use generic libraries effectively. Let's be honest: OpenStack doesn't have a great record of using HTTP effectively or correctly. Let's not make it worse. In the case of quota, 403 is fairly reasonable because you are in fact Forbidden from doing the thing you want to do. Yes, with the passage of time you may very well not be forbidden, so the semantics are not strictly matching, but it is more immediately expressive yet not quite as troubling as 409 (which has a more specific meaning). 400 is a useful fallback for when nothing else will do. -- Chris Dent tw:@anticdent freenode:cdent https://tank.peermore.com/tanks/cdent
[openstack-dev] [cinder][nova] Question on Cinder client exception handling
Hi In order to work on [1], Nova needs to know what kinds of exceptions are raised when using cinderclient, so that it can handle them like [2] did. In this case, we don't need to distinguish the error cases based on string comparison; it's more accurate and less error-prone. Is anyone doing this, or are there any other methods I can use to catch Cinder-specific exceptions in Nova? Thanks [1] https://bugs.launchpad.net/nova/+bug/1450658 [2] https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L64 Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
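The pattern in [2] is to map HTTP status codes to typed client exceptions so that callers catch exception classes instead of comparing error strings. A minimal sketch of that idea — the class names and mapping here are illustrative, not cinderclient's (or neutronclient's) real API:

```python
class ClientException(Exception):
    """Base class; fallback when no specific class matches."""
    status_code = None

class BadRequest(ClientException):
    status_code = 400

class OverLimit(ClientException):
    status_code = 413

# status code -> exception class, built once from the known classes
_CODE_MAP = {c.status_code: c for c in (BadRequest, OverLimit)}

def exception_from_response(status_code, message):
    """Turn an HTTP error response into a typed exception."""
    cls = _CODE_MAP.get(status_code, ClientException)
    return cls(message)
```

With something like this exported by the client library, Nova could write `except cinder_exceptions.OverLimit:` rather than grepping the message text.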
[openstack-dev] [heat][ceilometer] autoscaling
hey there, Please, I want to know if there is any way I can have CPU, RAM and network meters for each VM returned by Ceilometer to Heat for autoscaling tasks? In advance, thank you for your response, Sara
Re: [openstack-dev] [nova] Which error code should we return when OverQuota
On 05/06/2015 07:11 AM, Chris Dent wrote: On Wed, 6 May 2015, Sean Dague wrote: All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. Please do not do this. Please use the 4xx codes as best as you possibly can. Yes, they don't always match, but there are several of them for reasons™ and it is usually possible to find one that sort of fits. Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the most part people are talking to OpenStack through official clients but a) what happens when they aren't, b) is that the kind of world we want? I certainly don't. I want a world where the HTTP APIs that OpenStack and other services present actually use HTTP and allow a diversity of clients (machine and human). Absolutely. And the problem is there is not enough namespace in the HTTP error codes to accurately reflect the error conditions we hit. So the current model means the following: If you get any error code, it means multiple failure conditions. Throw it away, grep the return string to decide if you can recover. My proposal is to be *extremely* specific for the use of anything besides 400, so there is only 1 situation that causes that to arise. So 403 means a thing, only one thing, ever. Not 2 kinds of things that you need to then figure out what you need to do. If you get a 400, well, that's multiple kinds of errors, and you need to then go conditional. This should provide a better experience for all clients, human and machine. Using response codes effectively makes it easier to write client code that is either simple or is able to use generic libraries effectively. Let's be honest: OpenStack doesn't have a great record of using HTTP effectively or correctly. Let's not make it worse. In the case of quota, 403 is fairly reasonable because you are in fact Forbidden from doing the thing you want to do. 
Yes, with the passage of time you may very well not be forbidden so the semantics are not strictly matching but it is more immediately expressive yet not quite as troubling as 409 (which has a more specific meaning). Except it's not, because you are saying to use 403 for 2 issues (Don't have permissions and Out of quota). Turns out, we have APIs for adjusting quotas, which your user might have access to. So part of 403 space is something you might be able to code yourself around, and part isn't. Which means you should always ignore it and write custom logic client side. Using something beyond 400 is *not* more expressive if it has more than one possible meaning. Then it's just muddy. My point is that all errors besides 400 should have *exactly* one cause, so they are specific. -Sean -- Sean Dague http://dague.net
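Sean's objection — that an overloaded 403 forces every client to inspect the body anyway — can be made concrete. Assuming an error-report JSON shape like `{"error": {"code": "OverQuota"}}` (the format is illustrative; the "emerging error reporting json" mentioned above was not yet standardized), a client deciding whether a retry after raising quota could help has to do something like:

```python
import json

def can_retry_after_quota_change(status_code, body):
    """Return True only if this 403 is specifically an over-quota error."""
    if status_code != 403:
        return False
    try:
        error = json.loads(body).get("error", {})
    except ValueError:
        # no structured body: the status code alone is ambiguous
        return False
    return error.get("code") == "OverQuota"
```

If 403 meant exactly one thing, the status code would suffice; because it means two, the structured body (or worse, the message string) carries the real signal.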
Re: [openstack-dev] [nova] Which error code should we return when OverQuota
It does, however I looked through the history of that repo, and that's just in one of Jay's documents that predates the group. I'm a little cautious to give it a lot of weight without rationale. Honestly, there is this obsession with assuming that there *are* good fits for HTTP status codes for non-web-browser interaction patterns. There are not. The error code set was based around a specific expected web browser / website model from 20 years ago. I honestly think we'd be better served by limiting our use of non-200 or 400 codes to really narrow conditions (the ones that you'd expect from the browser interaction pattern). This would approach the whole problem from the least-surprise perspective. 404 - resource doesn't exist (appropriate for GET /foo/ID_NUMBER where the thing isn't there) 403 - permissions error. Appropriate for a resource that exists, but you do not have enough permissions for it. I.e. it's an admin URL or someone else's resource. All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. -Sean On 05/05/2015 09:48 PM, Alex Xu wrote: From the API-WG guideline, exceeding quota should be 403: https://github.com/openstack/api-wg/blob/master/guidelines/http.rst 2015-05-06 3:30 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com: In doing patch [1], a suggestion was submitted that we should return 400 (Bad Request) instead of 403 (Forbidden). I took a look at [2]; 400 seems not a good candidate because 'The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.' Maybe a 409 (Conflict) error if we really don't think 403 is a good one? Thanks [1] https://review.openstack.org/#/c/173985/ [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html Best Regards!
Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC -- Sean Dague http://dague.net
[openstack-dev] [Fuel][Plugin] Contributor license agreement for fuel plugin code?
If fuel plugin code is checked into a stackforge repository (as suggested in the fuel plugin wiki https://wiki.openstack.org/wiki/Fuel/Plugins#Repo), who owns that code? Is there a contributor license agreement to sign? (For example, contributors to OpenStack would sign this https://review.openstack.org/static/cla.html) Thanks, Emma
Re: [openstack-dev] [Fuel][Plugin] Contributor license agreement for fuel plugin code?
On 2015-05-06 11:02:42 +0000, Emma Gordon (projectcalico.org) wrote: If fuel plugin code is checked into a stackforge repository (as suggested in the fuel plugin wiki https://wiki.openstack.org/wiki/Fuel/Plugins#Repo), who owns that code? I am not a lawyer, but my understanding is that the individual copyright holders mentioned in comments at the tops of various files, listed in an AUTHORS file (if included) and indicated within the repository's Git commit history retain rights over their contributions in a project relying on the Apache License (or those rights may belong to their individual respective employers in a work-for-hire situation as well). Is there a contributor license agreement to sign? (For example, contributors to OpenStack would sign this https://review.openstack.org/static/cla.html) If Fuel is planning to apply for inclusion in OpenStack, then it may make sense to get all current and former contributors to its source repositories to agree to the OpenStack Individual Contributor License Agreement. Note that it does _not_ change the ownership of the software (copyrights), it's intended to simply reinforce the OpenStack Foundation's ability to continue to redistribute the software under the Apache License by affirming that the terms of the license are applied correctly and intentionally. More detailed questions are probably best posed to the legal-disc...@lists.openstack.org mailing list, or to your own private legal representation. -- Jeremy Stanley
Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in
Hi Salvatore, Two questions/remarks below. From: Salvatore Orlando sorla...@nicira.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Wednesday, 6 May 2015 00:13 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: [openstack-dev] [neutron][api] Extensions out, Micro-versions in #5 Plugin/Vendor specific APIs Neutron is without doubt the project with the highest number of 3rd party (OSS and commercial) integrations. After all, it was mostly vendors who started this project. Vendors [4] use the extension mechanism to expose features in their products not covered by the Neutron API, or to provide some sort of value-added service. The current proposal still allows 3rd parties to attach extensions to the Neutron API, provided that: - they're not considered part of the Neutron API, in terms of versioning, documentation, and client support [BOB] There are today vendor-specific commands in the Neutron CLI client. Such commands are prefixed with the name of the vendor, like cisco_command and nec_command. I think that makes it quite visible to the user that the command is specific to a vendor feature and not part of Neutron core. Would it be possible to allow for that also going forward? I would think that from a user perspective it can be convenient to be able to access vendor add-on features using a single CLI client. - they do not redefine resources defined by the Neutron API. [BOB] Does “redefine” here include extending a resource with additional attributes? - they do not live in the neutron source tree The aim of the provisions above is to minimize the impact of such extensions on API portability.
Thanks for reading and thanks in advance for your feedback, Salvatore The title of this post has been inspired by [2] (the message in the banner may be unintelligible to readers not fluent in european football) [1] https://review.openstack.org/#/c/136760/ [2] http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpgw=738site=espnfc [3] http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html [4] By vendor here we refer either to a cloud provider or a company providing Neutron integration for their products.
Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt
ZhiQiang, Please log a bug and we can try to do what jd suggested. -- dims On Wed, May 6, 2015 at 9:21 AM, Julien Danjou jul...@danjou.info wrote: On Wed, May 06 2015, ZhiQiang Fan wrote: I come across a problem that crudini cannot handle MultiStrOpt[1], I don't know why such type configuration option is needed. It seems ListOpt is a better choice. Currently I find lots of MultiStrOpt options in both Nova and Ceilometer, and I think other projects have too. Here are my questions: 1) how can I update such type of option without manually rewrite the config file? (like devstack scenario) 2) Is there any way to migrate MultiStrOpt to ListOpt? The ListOpt will take last specified value while MultiStrOpt takes all, so the compatibility is a big problem Any hints? I didn't check extensively, but this is something I hit regularly. It seems to me we have two types doing more or less the same things and mapping to the same data structure (i.e. list). We should unify them. -- Julien Danjou // Free Software hacker // http://julien.danjou.info -- Davanum Srinivas :: https://twitter.com/dims
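The compatibility gap ZhiQiang mentions can be illustrated without oslo.config itself. The sketch below (plain Python, illustrative only) models the two parsing semantics for a single option in an INI file: a MultiStrOpt-like option keeps every occurrence of the key, while a ListOpt-like option keeps only the last occurrence and splits it on commas — which is why a tool like crudini, which rewrites one key = value per option, can handle the latter but not the former.

```python
def parse(pairs, multi):
    """pairs: (key, value) tuples for one option, in file order."""
    if multi:
        # MultiStrOpt-like: every occurrence of the key contributes a value
        return [value for _key, value in pairs]
    if not pairs:
        return []
    # ListOpt-like: the last occurrence wins, then it is split on commas
    return pairs[-1][1].split(",")

# the same config file read under the two semantics
lines = [("plugin", "a"), ("plugin", "b")]
```

Migrating from MultiStrOpt to ListOpt therefore means collapsing repeated keys into one comma-joined value, and silently dropping all but the last line for any deployment that doesn't.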
[openstack-dev] [fuel] Some changes in build script
Dear colleagues, Please be informed that I've made some changes to our build script in order to support priorities for RPM repositories. I've also removed some unnecessary variables (EXTRA_RPM_REPOS and EXTRA_DEB_REPOS) and renamed some others. We don't need EXTRA_DEB_REPOS any more because it is possible to set an arbitrary number of repositories together with their priorities at run time in the UI. The variable MULTI_MIRROR_CENTOS has been introduced, and it has the following format: 'repo1,pri1,url1 repo2,pri2,url2'. So we don't need EXTRA_RPM_REPOS either. Please make sure this patch [1] does not break anything for you. Review is welcome. [1] https://review.openstack.org/#/c/176438 Vladimir Kozhukalov
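For anyone consuming the new variable, the MULTI_MIRROR_CENTOS format quoted above ('repo1,pri1,url1 repo2,pri2,url2') parses as space-separated entries of three comma-separated fields. A minimal parser sketch (not the build script's actual code):

```python
def parse_multi_mirror(value):
    """Parse 'name,priority,url name,priority,url ...' into dicts."""
    repos = []
    for entry in value.split():
        # split on the first two commas only, so URLs containing commas survive
        name, priority, url = entry.split(",", 2)
        repos.append({"name": name, "priority": int(priority), "url": url})
    return repos
```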
Re: [openstack-dev] [meetings] Proposing changes in Rally meetings
Tony, many thanks for noticing it, I didn't see it for some reason while looking at the iCal file / checking the wiki. We will use another time then. Best regards, Mikhail Dubov Engineering OPS Mirantis, Inc. E-Mail: mdu...@mirantis.com Skype: msdubov On Wed, May 6, 2015 at 5:15 AM, Tony Breeds t...@bakeyournoodle.com wrote: On Tue, May 05, 2015 at 06:22:47PM +0300, Mikhail Dubov wrote: Hi Rally team, as mentioned in another message from me in this list, we have decided to use the meeting time on Wednesdays at 14:00 UTC for our *usual weekly meeting in #openstack-meeting*. This meeting time and channel will clash with the docs meeting every second week. Check May 13th (UTC) at: https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.comctz=Iceland/Reykjavik It looks like #openstack-meeting-4 is free at that time. As for the release meeting, we will hold it just before the main meeting, weekly on Wednesdays at 13:30 UTC in *#openstack-rally*. Do you want this listed as a meeting on https://wiki.openstack.org/wiki/Meetings and in the iCal above? Tony.
Re: [openstack-dev] [all] TC Communications planning
On 05/06/15 16:13, Anne Gentle wrote: Hi all, In the interest of communicating sooner rather than later, I wanted to write a new thread to say that Flavio Percoco and I are going to work on a TC communications plan as co-chairs of a TC communications working group. I think we can find a happy medium amongst meeting minutes, gerrit reviews, and irregular blog entries by applying some comms planning, so that Flavio and I can dive in. Please answer these questions on the list if you're interested in shaping the communications plan: Audience considerations: Is the primary audience current OpenStack contributors or those in consumer roles? I would think consumer roles; most contributors are already in the know and on the mailing lists and meetings. What percentage of the audience are fairly new contributors? Fairly new to OpenStack itself? Is the audience more likely to be an outsider looking in to OpenStack governance? I would think so. The 'insider' already knows where and how to find the information. Is the audience wanting to click links to learn more, or do they just want the summary? Both would be great. There will be those who only want a summary, but some would sometimes also like a bit more detail on a specific subject. Does the audience always want an action to take, or is simply getting information their goal? I would leave that up to the audience. But if this is a communication channel, we should decide whether it should be one-way or two-way. Channel considerations: Is this audience with their goals more likely to use blogs, RSS, and Twitter, or to subscribe to mailing lists? If we are talking about the non-contributor - a definite no to mailing lists and a huge yes to the first part of the list. Depending on the channels chosen, is cross-posting to multiple channels a huge error, or are we leaning towards a wide net rather than laser targeting?
Cross-posting should be fine since it will mostly be a link pointing to the main source of content - which will be a blog post of some sort. Is there another channel we haven't considered that is widely consumed? Facebook? But I personally don't go near it. Does the cadence have to be weekly, even if "not much happened with the TC" is the activity rate for the week? I do not think it has to be weekly, because perhaps that would become quite boring - if nothing really happened. I would say it should happen according to need - but at a minimum once a month (even if there was nothing exciting). Thanks all for participating and giving input. Anne and Flavio -- Best Regards, Maish Saidel-Keesing
[openstack-dev] Re: [heat][ceilometer] autoscaling
I don't understand what you mean. Firstly, Ceilometer doesn't return meters or samples to Heat. In fact, Heat configures an alarm in Ceilometer, and the action of this alarm is to send a REST request to Heat. When Heat receives this request, it triggers autoscaling. Besides, you can use #ceilometer alarm-list to see what alarms Heat configures, then you can run #ceilometer query-sample to see the meter and sample. Hope it helps. -- Luo Gangyi luogan...@chinamobile.com -- Original message -- From: ICHIBA Sara ichi.s...@gmail.com Date: 2015-05-06 (Wed) 8:25 To: openstack-dev openstack-dev@lists.openstack.org Subject: [openstack-dev] [heat][ceilometer] autoscaling hey there, Please, I want to know if there is any way I can have CPU, RAM and network meters for each VM returned by Ceilometer to Heat for autoscaling tasks? In advance, thank you for your response, Sara
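The alarm-driven flow described above (Heat creates a threshold alarm in Ceilometer, and the alarm's action is a REST call back to Heat that triggers the scaling action) can be sketched in plain Python. Everything here - the `Alarm` class, the `scale_up` callback - is illustrative only, not a real Ceilometer or Heat API:

```python
class Alarm:
    """Toy stand-in for a Ceilometer threshold alarm."""

    def __init__(self, meter, threshold, action):
        self.meter = meter          # e.g. "cpu_util"
        self.threshold = threshold  # e.g. 80.0 (%)
        self.action = action        # callback standing in for the REST webhook

    def evaluate(self, samples):
        # Fire the action when the average of the recent samples
        # crosses the threshold (roughly how threshold alarms behave).
        avg = sum(samples) / len(samples)
        if avg > self.threshold:
            self.action()
            return True
        return False


scaled = []
alarm = Alarm("cpu_util", 80.0, action=lambda: scaled.append("scale_up"))

alarm.evaluate([50.0, 60.0, 55.0])  # below threshold: nothing happens
alarm.evaluate([90.0, 95.0, 85.0])  # above threshold: the "webhook" fires
```

The point is that Heat never polls meters itself: it only reacts when the alarm's action is invoked, which is why meters are not "returned to Heat".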
[openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt
Hi, devs. I came across a problem that crudini cannot handle MultiStrOpt[1], and I don't know why such a configuration option type is needed. It seems ListOpt is a better choice. Currently I find lots of MultiStrOpt options in both Nova and Ceilometer, and I think other projects have them too. Here are my questions: 1) how can I update such a type of option without manually rewriting the config file? (like the devstack scenario) 2) Is there any way to migrate MultiStrOpt to ListOpt? ListOpt will take the last specified value while MultiStrOpt takes all of them, so compatibility is a big problem. Any hints? Thanks! [1] https://github.com/pixelb/crudini/blob/6c7cb8330d2b3606610af20c767433358c8d20ab/TODO#L19
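For readers unfamiliar with the two option types, the semantic difference the question hinges on can be sketched in plain Python. This deliberately does not use oslo.config itself; it only mimics the behaviour described in the thread, given a file that assigns the same key twice:

```python
# Two assignments of the same key, as they might appear in a config file:
#   pipeline = a
#   pipeline = b,c
lines = ["pipeline = a", "pipeline = b,c"]
pairs = [tuple(part.strip() for part in l.split("=", 1)) for l in lines]

# ListOpt-like semantics: the last assignment wins, and its value is
# split on commas into a list.
list_opt = dict(pairs)["pipeline"].split(",")

# MultiStrOpt-like semantics: every assignment is kept, one whole
# string per occurrence of the key.
multi_str_opt = [v for k, v in pairs if k == "pipeline"]
```

This is exactly why migration is awkward: a tool like crudini that models a file as key-to-single-value (the ListOpt view) cannot round-trip a file relying on repeated keys (the MultiStrOpt view) without losing data.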
Re: [openstack-dev] [heat][ceilometer] autoscaling
On 06/05/15 08:25, ICHIBA Sara wrote: hey there, Please, I want to know if there is any way I can have CPU, RAM and network meters for each VM returned by Ceilometer to Heat for autoscaling tasks? In advance, thank you for your response, Sara The openstack-dev list is for discussing future development plans for OpenStack only. For questions about how to use OpenStack, you can post to the regular openst...@lists.openstack.org list, but it's usually better to use http://ask.openstack.org/ cheers, Zane.
Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?
On Wed, 2015-05-06 at 11:54 +0200, Thierry Carrez wrote: Hugh Blemings wrote: +2 I think asking LWN if they have the bandwidth and interest to do this would be ideal - they have credibility in the Free/Open Source space and a proven track record. Nice people too. On the bandwidth side, as a regular reader I was under the impression that they struggled with their load already, but I guess if it comes with funding that could be an option. On the interest side, my past attempts to invite them to the OpenStack Summit so that they could cover it (the way they cover other conferences) were rejected, so I have doubts in that area as well. Anyone have a personal connection that we could leverage to pursue that option further? Sure, be glad to. I've added Jon to the cc list (if his openstack mail sorting scripts operate like mine, that will get his attention). I already had a preliminary discussion with him: lwn.net is interested but would need to hire an extra person to cover the added load. That makes it quite a big business investment for them. James
Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?
I agree, we should amend it to not run pluggable IPAM as the default for now. When we decide to make it the default, the migration scripts will be needed. John On 5/5/15, 1:47 PM, Salvatore Orlando sorla...@nicira.com wrote: Patch #153236 introduces pluggable IPAM in the db base plugin class, and defaults to it at the same time, I believe. If the consensus is to default to the IPAM driver, then in order to satisfy grenade requirements those migration scripts should be run. There should actually be a single script, run in a one-off fashion - even better if it is treated as a DB migration. However, the plan for Kilo was to not turn on pluggable IPAM by default. Now that we are targeting Liberty, we should have this discussion again, and not take for granted that we should default to pluggable IPAM just because a few months ago we assumed it would be the default by Liberty. I suggest not enabling it by default, and then considering in L-3 whether we should do this switch. For the time being, would it be possible to amend patch #153236 to not run pluggable IPAM by default? I appreciate this would have some impact on unit tests as well, which should be run both for pluggable and traditional IPAM. Salvatore On 4 May 2015 at 20:11, Pavel Bondar pbon...@infoblox.com wrote: Hi, While fixing failures in db_base_plugin_v2.py with the new IPAM[1] I ran into check-grenade-dsvm-neutron failures[2]. check-grenade-dsvm-neutron installs stable/kilo, creates networks/subnets and upgrades to the patched master. So it validates that the migrations pass and that the installation works after them. This is where the failure occurs. Earlier there was an agreement about using pluggable IPAM only for greenfield installations, so the migration script from built-in IPAM to pluggable IPAM was postponed. But check-grenade-dsvm-neutron validates the upgrade scenario.
So do we want to update this agreement and implement migration scripts from built-in IPAM to pluggable IPAM now? Details about the failures: subnets created before the patch was applied do not have a corresponding IPAM subnet, so a lot of failures like this are observed in [2]: Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found Currently the config option in the patch is modified to use pluggable_ipam by default (to catch all possible UT/tempest failures), but before the merge the patch will be switched back to the non-IPAM implementation by default. I would prefer to implement the migration script as a separate review, since [1] is already quite big and hard to review. [1] https://review.openstack.org/#/c/153236 [2] http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz - Pavel Bondar
Re: [openstack-dev] [Fuel] LBaaS in version 5.1
Hi Daniel, Unfortunately, we never supported LBaaS until Fuel 6.0, when the plugin system was introduced and the LBaaS plugin was created. So I think that docs about it never existed for 5.1. But as far as I know, you can easily install LBaaS in 5.1 (it should be shipped in our repos) and configure it in accordance with the standard OpenStack cloud administrator guide [1]. [1] http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html On Wed, May 6, 2015 at 2:12 PM, Daniel Comnea comnea.d...@gmail.com wrote: Hi all, Recently I used Fuel 5.1 to deploy OpenStack Icehouse on a lab (PoC) and a request came in for enabling Neutron LBaaS. I have looked at the Fuel docs to see if this is supported in the version I'm running but failed to find anything. Can anyone point me to any docs which mention a) yes, it is supported, and b) how to update it via Fuel? Thanks, Dani
[openstack-dev] [RefStack] - http://refstack.org/ - does not resolve
What are we doing to have the name resolved? Meanwhile, what is the IP address to reach it? Do we really expect people to submit results to that web site? Thanks, Arkady
Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt
On Wed, May 06 2015, ZhiQiang Fan wrote: I came across a problem that crudini cannot handle MultiStrOpt[1], and I don't know why such a configuration option type is needed. It seems ListOpt is a better choice. Currently I find lots of MultiStrOpt options in both Nova and Ceilometer, and I think other projects have them too. Here are my questions: 1) how can I update such a type of option without manually rewriting the config file? (like the devstack scenario) 2) Is there any way to migrate MultiStrOpt to ListOpt? ListOpt will take the last specified value while MultiStrOpt takes all of them, so compatibility is a big problem. Any hints? I didn't check extensively, but this is something I hit regularly. It seems to me we have two types doing more or less the same thing and mapping to the same data structure (i.e. a list). We should unify them. -- Julien Danjou // Free Software hacker // http://julien.danjou.info
[openstack-dev] [Fuel] LBaaS in version 5.1
Hi all, Recently I used Fuel 5.1 to deploy OpenStack Icehouse on a lab (PoC) and a request came in for enabling Neutron LBaaS. I have looked at the Fuel docs to see if this is supported in the version I'm running but failed to find anything. Can anyone point me to any docs which mention a) yes, it is supported, and b) how to update it via Fuel? Thanks, Dani
Re: [openstack-dev] [TripleO] Core reviewer update proposal
On Tue, 2015-05-05 at 07:57 -0400, James Slagle wrote: Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core. Giulio has been an active member of our community for a while. He worked on the HA implementation in the elements and recently has been making a lot of valuable contributions and reviews related to puppet in the manifests, heat templates, ceph, and HA. +1 Giulio has become one of our resident HA experts. Steve Hardy has been instrumental in providing a lot of Heat domain knowledge to TripleO and his reviews and guidance have been very beneficial to a lot of the template refactoring. He's also been reviewing and contributing in other TripleO projects besides just the templates, and has shown a solid understanding of TripleO overall. +1 Steve's Heat expertise has been invaluable. 180 day stats:
| gfidente | 208 0 42 166 0 0 | 79.8% | 16 ( 7.7%) |
| shardy   | 206 0 27 179 0 0 | 86.9% | 16 ( 7.8%) |
TripleO cores, please respond with +1/-1 votes and any comments/objections within 1 week. Giulio and Steve, also please do let me know if you'd like to serve on the TripleO core team if there are no objections. I'd also like to give a heads-up to the following folks whose review activity is very low for the last 90 days:
| tomas-8c8 **        | 8 0 0 0 8 2 | 100.0% | 0 ( 0.0%) |
| lsmola **           | 6 0 0 0 6 5 | 100.0% | 0 ( 0.0%) |
| cmsj **             | 6 0 2 0 4 2 |  66.7% | 0 ( 0.0%) |
| jprovazn **         | 1 0 1 0 0 0 |   0.0% | 0 ( 0.0%) |
| jonpaul-sullivan ** | no activity |
Helping out with reviewing contributions is one of the best ways we can make good forward progress in TripleO. All of the above folks are valued reviewers and we'd love to see you review more submissions. If you feel that your focus has shifted away from TripleO and you'd no longer like to serve on the core team, please let me know. I also plan to remove Alexis Lee from core, who has previously expressed that he'd be stepping away from TripleO for a while. Alexis, thank you for your reviews and contributions!
Re: [openstack-dev] [Fuel][Plugin] Contributor license agreement for fuel plugin code?
One thing the developer community seems not yet convinced about is moving away from extensions. It seems everybody realises the flaws of evolving the API through extensions, but there are understandable concerns regarding the impact on plugins/drivers as well as the ability to differentiate, which is something quite dear to several neutron teams. I tried to consider all those concerns and the feedback received; hopefully everything has been captured in a satisfactory way in the latest revision of [1]. With this ML post I also seek feedback from the API-wg concerning the current proposal, whose salient points can be summarised as follows: #1 Extensions are no longer part of the Neutron API. Evolution of the API will now be handled through versioning. Once microversions are introduced: - current extensions will be progressively moved into the Neutron unified API - no more extensions will be accepted as part of the Neutron API #2 Introduction of features for addressing diversity in Neutron plugins. It is possible that the combination of neutron plugins chosen by the operator won't be able to support the whole Neutron API. For this reason a concept of feature is included. What features are provided depends on the plugins loaded. The list of features is hardcoded, as it is strictly dependent on the Neutron API version implemented by the server. The specification also mandates a minimum set of features every neutron deployment must provide (those would be the minimum set of features needed for integrating Neutron with Nova). #3 Advanced services are still extensions. This is a temporary measure, as the APIs for load balancing, VPN, and Edge Firewall are still served through the neutron WSGI. As these APIs will live independently in the future, it does not make sense to version them with the Neutron API. #4 Experimenting in the API. One thing that has plagued Neutron in the past is the impossibility of getting people to reach any sort of agreement over the shape of certain APIs.
With the proposed plan we encourage developers to submit experimental APIs. Experimental APIs are unversioned, and no guarantee is made regarding deprecation or backward compatibility. They are also optional, as a deployer can turn them off. While there are caveats, like forever-experimental APIs, this will enable developers to address user feedback during the APIs' experimental phase. The Neutron community and the API-wg can provide plenty of useful feedback, but ultimately it is user feedback that determines whether an API proved successful or not. Please note that the current proposal goes in a direction different from that approved in Nova when it comes to experimental APIs [3]. #5 Plugin/vendor specific APIs. Neutron is without doubt the project with the highest number of 3rd party (OSS and commercial) integrations. After all, it was mostly vendors who started this project. Vendors [4] use the extension mechanism to expose features in their products not covered by the Neutron API or to provide some sort of value-added service. The current proposal still allows 3rd parties to attach extensions to the neutron API, provided that: - they're not considered part of the Neutron API, in terms of versioning, documentation, and client support - they do not redefine resources defined by the Neutron API - they do not live in the neutron source tree The aim of the provisions above is to minimize the impact of such extensions on API portability.
Thanks for reading and thanks in advance for your feedback, Salvatore The title of this post has been inspired by [2] (the message in the banner may be unintelligible to readers not fluent in European football) [1] https://review.openstack.org/#/c/136760/ [2] http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpgw=738site=espnfc [3] http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html [4] By vendor here we refer either to a cloud provider or a company providing Neutron integration for their products.
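The "features" idea in point #2 amounts to a client discovering, per deployment, which parts of the versioned API the loaded plugins actually support. A purely hypothetical sketch of what that could look like follows; none of these field names come from an approved Neutron spec, and `supports` is an invented helper:

```python
# Hypothetical version-discovery response: the "features" list varies
# with the plugins the operator loaded, while "version" does not.
discovery = {
    "version": "2.0",
    "features": [
        "router",           # part of the mandated minimum set (Nova integration)
        "security-groups",  # likewise mandated
        "qos",              # present only because the loaded plugins support it
    ],
}


def supports(response, feature):
    """Client-side check before calling a feature-dependent API."""
    return feature in response.get("features", [])
```

A client would call `supports(discovery, "qos")` before issuing QoS requests, instead of probing the extensions list as it does today.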
[openstack-dev] [all] TC Communications planning
Hi all, In the interest of communicating sooner rather than later, I wanted to write a new thread to say that Flavio Percoco and I are going to work on a TC communications plan as co-chairs of a TC communications working group. I think we can find a happy medium amongst meeting minutes, gerrit reviews, and irregular blog entries by applying some comms planning, so that Flavio and I can dive in. Please answer these questions on the list if you're interested in shaping the communications plan: Audience considerations: Is the primary audience current OpenStack contributors or those in consumer roles? What percentage of the audience are fairly new contributors? Fairly new to OpenStack itself? Is the audience more likely to be an outsider looking in to OpenStack governance? Is the audience wanting to click links to learn more, or do they just want the summary? Does the audience always want an action to take, or is simply getting information their goal? Channel considerations: Is this audience with their goals more likely to use blogs, RSS, and Twitter or subscribe to mailing lists? Depending on the channels chosen, is cross-posting to multiple channels a huge error, or are we leaning towards a wide net rather than laser targeting? Is there another channel we haven't considered that is widely consumed? Does the cadence have to be weekly, even if "not much happened with the TC" is the activity rate for the week? Thanks all for participating and giving input. Anne and Flavio
Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?
On Tue, May 5, 2015 at 11:47 AM, Salvatore Orlando sorla...@nicira.com wrote: I suggest to not enable it by default, and then consider in L-3 whether we should do this switch. I agree. At the least, the switch should be decoupled from that patch. I think decoupling them before merging the patch was the plan all along; it just hasn't happened yet. We should create a new patch, dependent on this one, to make it the default. This will tee it up for discussion, and we should put a hold on that new patch until we can discuss it in Liberty-3. I currently lean toward not making it the default in Liberty, but we can discuss later. Carl
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
On May 6, 2015, at 1:58 PM, David Kranz dkr...@redhat.com wrote: +1 The basic problem is we are trying to fit a square (generic api) peg in a round (HTTP request/response) hole. But if we do say we are recognizing sub-error-codes, it might be good to actually give them numbers somewhere in the response (maybe an error code header) rather than relying on string matching to determine the real error. String matching is fragile and has icky i18n implications. There is an effort underway around defining such sub-error-codes [1]. Those error codes would be surfaced in the REST API here [2]. Naturally, feedback is welcome. Everett [1] https://review.openstack.org/#/c/167793/ [2] https://review.openstack.org/#/c/167793/
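The fragility David describes is the difference between matching on a human-readable message and matching on a machine-readable code. A hypothetical illustration (the `subcode` field name and body shape are invented here, not taken from the linked spec):

```python
import json

# A 403 OverQuota response body as a service might return it.
# The "message" is for humans and may be translated; the "subcode"
# is stable and safe for programs to match on.
body = json.dumps({
    "overLimit": {
        "code": 403,
        "subcode": "quota-exceeded",
        "message": "Quota exceeded for instances",
    }
})

err = json.loads(body)["overLimit"]

# Match on the structured code, never on the message string.
is_quota_error = err["subcode"] == "quota-exceeded"
```

If the deployment runs in another locale, `message` changes but `subcode` does not, which is exactly the i18n problem string matching cannot survive.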
Re: [openstack-dev] [all] TC Communications planning
Excerpts from Maish Saidel-Keesing's message of 2015-05-06 17:11:23 +0300: On 05/06/15 16:13, Anne Gentle wrote: Hi all, In the interest of communicating sooner rather than later, I wanted to write a new thread to say that Flavio Percoco and I are going to work on a TC communications plan as co-chairs of a TC communications working group. I think we can find a happy medium amongst meeting minutes, gerrit reviews, and irregular blog entries by applying some comms planning, so that Flavio and I can dive in. Please answer these questions on the list if you're interested in shaping the communications plan: Audience considerations: Is the primary audience current OpenStack contributors or those in consumer roles? I would think Consumer Roles, most contributors are already in the know and on the mailing lists and meetings. I think you're overestimating the number of contributors who actually manage to keep up with all of the traffic on this list. We should address both audiences. What percentage of the audience are fairly new contributors? Fairly new to OpenStack itself? Is the audience more likely to be an outsider looking in to OpenStack governance? I would think so. The 'insider' already knows where and how to find the information. Is the audience wanting to click links to learn more, or do they just want the summary? Both would be great. There will be those who only want a summary, but sometimes would also like a bit more detail on a specific subject Does the audience always want an action to take, or is simply getting information their goal? I would leave that up to the audience. But if this is a communication channel - we should decide if it should be one-way or both ways. Channel considerations: Is this audience with their goals more likely to use blogs, RSS, and Twitter or subscribe to mailing lists? If we are talking about the non-contributor - a definite no to mailing and a huge yes to the first part of the list. 
Depending on the channels chosen, is cross-posting to multiple channels a huge error, or are we leaning towards a wide net rather than laser targeting? Cross-posting should be fine since it will mostly be a link pointing to the main source of content - which will be a blog post of some sort. Is there another channel we haven't considered that is widely consumed? Facebook? But I personally don't go near it. Does the cadence have to be weekly, even if "not much happened with the TC" is the activity rate for the week? I do not think it has to be weekly, because perhaps that would become quite boring if nothing really happened. I would say it should be according to need - but with a minimum of once a month (even if there was nothing exciting). Thanks all for participating and giving input. Anne and Flavio
Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?
We also run all masterless/puppet apply. And we just populate a bare bones keystone.conf on any box that does not have keystone installed, but Puppet needs to be able to create keystone resources. Also agreed on avoiding puppetdb, for the same reasons. (Something to note for those of us doing masterless today: there are plans from Puppet to move more of the manifest compiling functionality to run only in the puppet master process. So at some point, it’s likely that masterless setups may not be possible.) Mike If you do not wish to explicitly define Keystone resources for Glance on Keystone nodes but instead let Glance nodes manage their own resources, you could always use exported resources. You let Glance nodes export their keystone resources and then you ask Keystone nodes to realize them where admin credentials are available. (I know some people don't really like exported resources for various reasons) I'm not familiar with exported resources. Is this a viable option that has less impact than just requiring Keystone resources to be realized on the Keystone node? I'm not in favor of having exported resources because it requires PuppetDB, and a lot of people try to avoid that. For now, we've been able to setup all OpenStack without PuppetDB in TripleO and in some other installers, we might want to keep this benefit. +100 We're looking at using these puppet modules in a bit, but we're also a few steps away from getting rid of our puppetmaster and moving to a completely puppet apply based workflow. I would be double-plus sad-panda if we were not able to use the openstack puppet modules to do openstack because they'd been done in such as way as to require a puppetmaster or puppetdb. 100% agree. Even if you had a puppetmaster and puppetdb, you would still end up in this eventual consistency dance of puppet runs. 
[openstack-dev] [neutron] Neutron meeting for the next few weeks is cancelled
Hi folks! Given most of us will be in Vancouver for the Summit and we've finished planning out the design summit, we'll go ahead and cancel the Neutron meeting for the next 3 weeks. We'll resume the week after the Summit, which is 6/2/2015 at 1400UTC [1]. Thanks! Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings
Re: [openstack-dev] [all] TC Communications planning
On 06/05/15 09:13, Anne Gentle wrote: Hi all, In the interest of communicating sooner rather than later, I wanted to write a new thread to say that Flavio Percoco and I are going to work on a TC communications plan as co-chairs of a TC communications working group. I think we can find a happy medium amongst meeting minutes, gerrit reviews, and irregular blog entries by applying some comms planning, so that Flavio and I can dive in. Please answer these questions on the list if you're interested in shaping the communications plan: Audience considerations: Is the primary audience current OpenStack contributors or those in consumer roles? I think it has to be both. Maish's suggestion that most contributors are already in the know and on the mailing lists and meetings is absurd. Beyond the group of probably 25 people who pay close attention to governance, most core reviewers and even PTLs I speak to have a vague idea what is going on in the TC only when it pertains to an issue that was heavily discussed on openstack-dev, and even then they're unlikely to know what the outcome was unless/until it starts affecting them directly. What percentage of the audience are fairly new contributors? Fairly new to OpenStack itself? Is the audience more likely to be an outsider looking in to OpenStack governance? Not sure how to parse this. Substantially everybody is an outsider to OpenStack governance, so yes, but I think it should be primarily for insiders to OpenStack. Is the audience wanting to click links to learn more, or do they just want the summary? I don't think they want to be clicking links through to the governance repo (though it doesn't hurt to have them). IMHO folks need a summary, and maybe a summary of the summary so they can figure out when the summary is worth reading. Does the audience always want an action to take, or is simply getting information their goal? Information. 
Channel considerations: Is this audience with their goals more likely to use blogs, RSS, and Twitter or subscribe to mailing lists? Contributors are mostly on openstack-dev, and that's an audience the blog posts haven't been hitting. So I think executive summaries on mailing lists with links to blog posts will work and improve the current reach. I think the blog + RSS + newsletter approach used up until now is probably the best chance to get through to the non-openstack-dev readers. It's always going to be an uphill battle though, because people have to choose to subscribe to a mailing list or the newsletter or the feed or Planet OpenStack or whatever - there's no place we can go to them. Depending on the channels chosen, is cross-posting to multiple channels a huge error, or are we leaning towards a wide net rather than laser targeting? IMHO cross-posting is fine, but I wouldn't necessarily replicate the entire content to every channel. Is there another channel we haven't considered that is widely consumed? Not AFAIK. Does the cadence have to be weekly, even if "not much happened with the TC" is the activity rate for the week? IMO no. It's more likely to be read if it's just posted when there is actual important news to report. cheers, Zane.
Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?
On Wed, May 6, 2015 at 4:26 PM, Mike Dorman mdor...@godaddy.com wrote: We also run all masterless/puppet apply. And we just populate a bare bones keystone.conf on any box that does not have keystone installed, but Puppet needs to be able to create keystone resources. Also agreed on avoiding puppetdb, for the same reasons. (Something to note for those of us doing masterless today: there are plans from Puppet to move more of the manifest compiling functionality to run only in the puppet master process. So at some point, it's likely that masterless setups may not be possible.) I don't think that's true. I think making sure puppet apply works is a priority for them; it's just that the implementation, as they move to a C++-based agent, has yet to be figured out. Colleen Mike
Re: [openstack-dev] [puppet][operators] How to specify Keystone v3 credentials?
Cool, fair enough. Pretty glad to hear that actually! Mike
Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.
Nice summary Henry. My comments in brown. From: Adam Young [mailto:ayo...@redhat.com] Sent: Tuesday, May 5, 2015 8:35 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc. On 05/05/2015 07:05 AM, Henry Nash wrote: We've been discussing changes to these areas for a while - and although I think there is general agreement among the keystone cores that we need to change *something*, we've been struggling to get agreement on exactly how. So to try and ground the discussion that will (I am sure) occur in Vancouver, here's an attempt to take a step back, look at what we have now, as well as where, perhaps, we want to get to. This is a great summary. Thanks Henry. david8hu: We need at least one use case to tie all of the specs together. I think a use case would really help the dynamic policy overview spec. I can help add 1 or 2. The core functionality all this is related to is how keystone policy allows checking whether a given API call to an OpenStack service should be allowed to take place or not. Within OpenStack this is a two-step process for an API caller: 1) Get yourself a token by authenticating and getting authorised for a particular scope (e.g. a given project), and then 2) Use that token as part of your API call to the service you are interested in. Assuming you do, indeed, have the rights to execute this API, somehow steps 1) and 2) give the policy engine enough info to say yes or no. So first, how does this work today and (conceptually) how should we describe that? Well first of all, in fact, strictly we don't control access at the raw API level. In fact, each service defines a series of capabilities (which usually, but not always, map one-to-one with an API call). These capabilities represent the finest-grained access control we support via the policy engine. 
Now, in theory, the most transparent way we could have implemented steps 1) and 2) above would have been to say that users should be assigned capabilities on projects, and then those capabilities would be placed in the token, allowing the policy engine to check if they match what is needed for a given capability to be executed. We didn't do that since, a) this would probably end up being very laborious for the administrator (there would be lots of capabilities any given user would need), and b) the tokens would get very big storing all those capabilities. Instead, it was recognised that, usually, there are sets of these capabilities that nearly always go together - so instead let's allow the creation of such sets, and we'll assign those to users instead. So far, so good. What is perhaps unusual is how this was implemented. These capability sets are, today, called Roles... but rather than having a role definition that describes the capabilities represented by that role, instead roles are just labels - which can be assigned to users/projects and get placed in tokens. The expansion to capabilities happens through the definition of a json policy file (one for each service) which must be processed by the policy engine in order to work out whether the roles in a token, and the role-capability mapping, mean that a given API call can go ahead. This implementation leads to a number of issues (these have all been raised by others, just pulling them together here): i) The role-capability mapping is rather static. Until recently it had to be stored in service-specific files pushed out to the service nodes out-of-band. Keystone does now provide some REST APIs to store and retrieve whole policy files, but these are a) coarse-grained and b) not really used by services anyway yet. ii) As more and more clouds become multi-customer (i.e. 
a cloud provider hosting multiple companies on a single OpenStack installation), cloud providers will want to allow those customers to administer their bit of the cloud. Keystone uses the Domains concept to allow a cloud provider to create a namespace for a customer to create their own projects, users and groups, and there is a version of the keystone policy file that allows a cloud provider to effectively delegate management of these items to an administrator of that customer (sometimes called a domain administrator). However, Roles are not part of that namespace - they exist in a global namespace (within a keystone installation). Diverse customers may have different interpretations of what a VM admin or a net admin should be allowed to do for their bit of the cloud - but right now that differentiation is hard to provide. We have no support for roles or policy that are domain-specific. david8hu: I can see per-domain policy becoming a hot topic for the reseller scenario. iii) Although as stated in ii) above, you can write a policy file that differentiates between various levels of admin, or fine-tunes access to certain
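Henry's description of roles as opaque labels that a per-service policy file expands into capabilities can be pictured with a minimal sketch. This is illustrative only: real deployments use oslo.policy and json policy files, and every role and capability name below is made up, not an actual Nova or Keystone rule.

```python
# Minimal illustration of the role -> capability expansion described above:
# roles are opaque labels carried in the token; a per-service mapping
# expands them into the fine-grained "capabilities" actually checked.
# All names here are hypothetical.

POLICY = {
    # capability name       -> roles allowed to exercise it
    "compute:start_server":  {"vm_admin", "project_admin"},
    "compute:delete_server": {"project_admin"},
}

def enforce(capability, token_roles):
    """Return True if any role label in the token grants the capability."""
    allowed = POLICY.get(capability, set())
    return bool(allowed & set(token_roles))

# A token carrying only the "vm_admin" label can start but not delete:
print(enforce("compute:start_server", ["vm_admin"]))   # True
print(enforce("compute:delete_server", ["vm_admin"]))  # False
```

The point of the sketch is the indirection Henry highlights: the token never contains capabilities, only labels, so the meaning of a role lives entirely in the (currently global, static) mapping.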
Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve
On 2015-05-06 19:53:44 +0000 (+0000), Rochelle Grober wrote: The Refstack team is working with Infra to get refstack.org up in a VM under Infra's purview. Right now, the demo is on refstack.net; refstack.net will go away once refstack.org is up and managed. Yep, I recall the discussion. I simply didn't know if the Refstack developers needed that domain pointed to some particular demo site until ready to go live with the official infra-tized server. Sounds like it can just wait for the moment. Thanks! -- Jeremy Stanley
[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting today
All, In order to work on the demo for Vancouver we will be skipping today's (5/6/15) meeting. We will have another meeting on 5/13 to finalize for the summit. If you have questions you can find us in the channel — and again, please keep up the good work with reviews! Thanks, German
Re: [openstack-dev] [Murano] [Mistral] SSH workflow action
Hi, From Murano experience I can tell you that SSH to a VM will not work in the general case. In order to have ssh access you will have to assign floating IPs so that the Mistral service will be able to connect to the VM. That is exactly the reason why Murano uses an agent-and-MQ mechanism, where the client on the VM initiates the connection. I believe the same issue existed in Sahara when they used direct ssh connections to VMs. Thanks Gosha On Wed, May 6, 2015 at 9:00 AM, Pospisil, Radek radek.pospi...@hp.com wrote: Hello, I think that the generic question is: can OpenStack services also be accessible on Neutron networks, so that a VM (created by Nova) can access them? We (Filip and I) were discussing this today and did not reach a final decision. Another example is the Murano agent running on VMs - it connects to RabbitMQ, which is also accessed by the Murano engine. Regards, Radek -Original Message- From: Blaha, Filip Sent: Wednesday, May 06, 2015 5:43 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello We are considering implementing actions on services of a murano environment via mistral workflows. We are considering whether the mistral std.ssh action could be used to run some command on an instance. An example of such an action in murano could be a restart action on a Mysql DB service. The Mistral workflow would ssh to the instance running Mysql and run service mysql restart. From my point of view, trying to use SSH to access instances from a mistral workflow is not a good idea, but I would like to confirm it. The biggest problem I see there is openstack networking. The Mistral service running on some openstack node would not be able to access an instance via its fixed IP (e.g. 10.0.0.5) via SSH. An instance could be accessed via ssh from the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on an implementation detail of neutron and use it. In a multinode openstack deployment it could be even more complicated. 
In other words, I am asking whether we can use the std.ssh mistral action to access instances via ssh on their fixed IPs? I think no, but I would like to confirm it. Thanks Filip -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
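The namespace workaround Filip mentions amounts to wrapping the ssh invocation in ip netns exec for the qrouter namespace of the subnet's gateway. A sketch of building that command line (the router ID, user, and IP below are hypothetical placeholders, and as Filip says this leans on a Neutron implementation detail):

```python
# Sketch of the "ip netns exec" workaround described above: reach a VM's
# fixed IP by running ssh inside the qrouter-<id> namespace of its gateway
# router. Router ID, user and IP are hypothetical placeholders; this is not
# a recommended pattern, just an illustration of what the workaround does.

def netns_ssh_cmd(router_id, user, fixed_ip, remote_cmd):
    """Build the argv list for ssh-ing to a fixed IP via a router namespace."""
    return [
        "ip", "netns", "exec", "qrouter-%s" % router_id,
        "ssh", "%s@%s" % (user, fixed_ip), remote_cmd,
    ]

cmd = netns_ssh_cmd("r1", "cirros", "10.0.0.5", "service mysql restart")
print(" ".join(cmd))
```

It also makes the limitation obvious: the command must run on whichever node hosts that qrouter namespace, which is exactly why a service like Mistral on an arbitrary controller node cannot use it reliably.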
Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6
On Wed, May 6, 2015 at 12:46 AM, Mike Spreitzer mspre...@us.ibm.com wrote: While I am a Neutron operator, I am also a customer of a lower-layer network provider. That network provider will happily give me a few /64s. How do I serve IPv6 subnets to lots of my tenants? In the bad old v4 days this would be easy: a tenant puts all his stuff on his private networks and NATs (e.g., floating IP) his edge servers onto a public network --- no need to align tenant private subnets with public subnets. But with no NAT for v6, there is no public/private distinction --- I can only give out the public v6 subnets that I am given. Yes, NAT is bad. But not being able to get your job done is worse. Mike, in this paragraph, you're hitting on something that has been on my mind for a while. We plan to cover this problem in detail in this talk [1] and we're defining some work for Liberty to better address it [2][3]. You hit the nail on the head: there is no distinguishing private and public IP addresses in Neutron currently with IPv6. Kilo's new subnet pool feature is a start. It will allow you to create a shared subnet pool including the /64s from your service provider. Tenants can then create a subnet, getting an allocation from it automatically. However, given the current state of things, there will be some manual work on the gateway router to route them to the tenant's router. Prefix delegation -- which looks on track for Liberty -- is another option which could fill this void. It will allow a router to get a prefix delegation from an external PD system which will be usable on a tenant subnet. Presumably the external system will take care of routing the subnet to the appropriate tenant router. 
Carl [1] http://sched.co/2qdm [2] https://review.openstack.org/#/c/180267/ [3] https://review.openstack.org/#/c/125401/
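The subnet-pool behaviour Carl describes — tenants drawing fixed-size allocations out of a larger provider prefix — can be sketched with the stdlib ipaddress module. This is illustrative only (Neutron's real allocator lives in its IPAM code), and the 2001:db8::/48 documentation prefix stands in for whatever the upstream provider actually delegates:

```python
import itertools
import ipaddress

# Sketch of subnet-pool style allocation: carve per-tenant /64s out of a
# provider-delegated prefix. 2001:db8::/48 is the RFC 3849 documentation
# prefix, standing in for a real provider delegation.

def allocate(pool_cidr, new_prefix, count):
    """Return the first `count` subnets of size /new_prefix from the pool."""
    pool = ipaddress.ip_network(pool_cidr)
    return list(itertools.islice(pool.subnets(new_prefix=new_prefix), count))

for subnet in allocate("2001:db8::/48", 64, 3):
    print(subnet)
# 2001:db8::/64
# 2001:db8:0:1::/64
# 2001:db8:0:2::/64
```

The remaining problem Carl points out is not the carving but the routing: something on the gateway router still has to know that each allocated /64 is reachable via the right tenant router, which is the gap prefix delegation would close.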
Re: [openstack-dev] [Neutron] Success of the IPv6 Subteam - Proposal to disband
On 05/04/2015 08:37 PM, Sean M. Collins wrote: It is a bittersweet moment - I am proposing that, due to the amazing success we have had as a subteam and because we have accomplished so much, it makes sense for our team to disband and re-integrate with other subteams (the L3 subteam comes to mind) or have items in the on-demand agenda of the main meeting. Unless there is any pressing business, I believe that we will not need a recurring meeting, and tomorrow's meeting is cancelled. As always, I am in #openstack-neutron and happy to help. Sean, Thanks for leading the team, IPv6 is in a much better place now in Kilo! I'll be the first one to buy you a beer (beers?) in Vancouver. As long as we adopt the Linux kernel mantra of You can't do the IPv4 work now and punt the IPv6 work for later, I'm fine with pushing future IPv6 work into the respective L3/L2/etc sub-teams. -Brian
Re: [openstack-dev] [neutron] Are routed TAP interfaces (and DHCP for them) in scope for Neutron?
This brings up something I'd like to discuss. We have a config option called allow_overlapping_ips which actually defaults to False. It has been suggested [1] that this should be removed from Neutron and I've just started playing around with ripping it out [2] to see what the consequences are. A purely L3 routed network, like Calico, is a case where it is more complex to implement allowing overlapping ip addresses. If we deprecate and eventually remove allow_overlapping_ips, will this be a problem for Calico? Is the shared address space in Calico confined to a single flat network or do you already support tenant private networks with this technology? If I recall from previous discussions, I think that it only supports Neutron's flat network model in the current form, so I don't think it should be a problem. Am I correct? Please confirm. Carl [1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036336.html [2] https://review.openstack.org/#/c/179953/ On Fri, May 1, 2015 at 8:22 AM, Neil Jerram neil.jer...@metaswitch.com wrote: Thanks for your reply, Kevin, and sorry for the delay in following up. On 21/04/15 09:40, Kevin Benton wrote: Is it compatible with overlapping IPs? i.e. Will it give two different VMs the same IP address if the reservations are setup that way? No, not as I've described it below, and as we've implemented Calico so far. Calico's first target is a shared address space without overlapping IPs, so that we can handle everything within the default namespace. But we do also anticipate a future Calico release to support private address spaces with overlapping IPs, while still routing all VM data rather than bridging. That will need the private address TAP interfaces to go into a separate namespace (per address space), and have their data routed there; and we'd run a Dnsmasq in that namespace to provide that space's IP addresses. 
Within each namespace - whether the default one or private ones - we'd still use the other changes I've described below for how the DHCP agent creates the ns-XXX interface and launches Dnsmasq. Does that make sense? Do you think that this kind of approach could be in scope under the Neutron umbrella, as an alternative to bridging the TAP interfaces? Thanks, Neil On 16/04/15 15:12, Neil Jerram wrote: I have a Neutron DHCP agent patch whose purpose is to launch dnsmasq with options such that it works (= provides DHCP service) for TAP interfaces that are _not_ bridged to the DHCP interface (ns-XXX). For the sake of being concrete, this involves: - creating the ns-XXX interface as a dummy, instead of as a veth pair - launching dnsmasq with --bind-dynamic --listen=ns-XXX --listen=tap* --bridge-interface=ns-XXX,tap* - not running in a separate namespace - running the DHCP agent on every compute host, instead of only on the network node - using the relevant subnet's gateway IP on the ns-XXX interface (on every host), instead of allocating a different IP for each ns-XXX interface. I proposed a spec for this in the Kilo cycle [1], but it didn't get enough traction, and I'm now wondering what to do with this work/function. Specifically, whether to look again at integrating it into Neutron during the Liberty cycle, or whether to maintain an independent DHCP agent for my project outside the upstream Neutron tree. I would very much appreciate any comments or advice on this. For answering that last question, I suspect the biggest factor is whether routed TAP interfaces - i.e. forms of networking implementation that rely on routing data between VMs instead of bridging it - is in scope for Neutron, at all. If it is, I understand that there could be a lot more detail to work on, such as how it meshes with other Neutron features such as DVR and the IPAM work, and that it might end up being quite different from the blueprint linked below. 
But it would be good to know whether this would ultimately be in scope and of interest for Neutron at all. Please do let me know what you think. Many thanks, Neil [1] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
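The dnsmasq launch options Neil lists can be pictured as the argv his modified DHCP agent would assemble. The flag strings are taken verbatim from his description (they are his proposal, not a claim about stock dnsmasq defaults), and the interface names are the placeholders he uses:

```python
# Sketch of the dnsmasq invocation described above for routed (unbridged)
# TAP interfaces. Flags are copied from Neil's list; "ns-XXX" and "tap*"
# are his placeholder interface names, not real device names.

def dnsmasq_argv(ns_if="ns-XXX", tap_glob="tap*"):
    """Assemble the dnsmasq command line from the flags in the proposal."""
    return [
        "dnsmasq",
        "--bind-dynamic",
        "--listen=%s" % ns_if,
        "--listen=%s" % tap_glob,
        "--bridge-interface=%s,%s" % (ns_if, tap_glob),
    ]

print(" ".join(dnsmasq_argv()))
```

The key departure from the standard agent is visible in the flags: instead of one veth-bridged interface inside a namespace, dnsmasq listens on the dummy ns-XXX interface and on the TAP interfaces directly, with --bridge-interface aliasing DHCP traffic arriving on the TAPs to the ns-XXX identity.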
Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.
Hi all, Inline. From: Adam Young ayo...@redhat.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, May 5, 2015 at 8:34 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc. On 05/05/2015 07:05 AM, Henry Nash wrote: We’ve been discussing changes to these areas for a while - and although I think there is general agreement among the keystone cores that we need to change *something*, we’ve been struggling to get agreement on exactly how. So to try and ground the discussion that will (I am sure) occur in Vancouver, here’s an attempt to take a step back, look at what we have now, as well as where, perhaps, we want to get to. This is a great summary. Thanks Henry. Super helpful for sure! The core functionality all this is related to is how keystone policy allows checking whether a given API call to an OpenStack service should be allowed to take place or not. Within OpenStack this is a two-step process for an API caller… 1) Get yourself a token by authenticating and getting authorised for a particular scope (e.g. a given project), and then 2) Use that token as part of your API call to the service you are interested in. Assuming you do, indeed, have the rights to execute this API, somehow steps 1) and 2) give the policy engine enough info to say yes or no. So first, how does this work today and (conceptually) how should we describe that? Well first of all, in fact, strictly we don’t control access at the raw API level. In fact, each service defines a series of “capabilities” (which usually, but not always, map one-to-one with an API call). 
These capabilities represent the finest-grained access control we support via the policy engine. Now, in theory, the most transparent way we could have implemented steps 1) and 2) above would have been to say that users should be assigned capabilities on projects… and then those capabilities would be placed in the token… allowing the policy engine to check if they match what is needed for a given capability to be executed. We didn’t do that since, a) this would probably end up being very laborious for the administrator (there would be lots of capabilities any given user would need), and b) the tokens would get very big storing all those capabilities. Instead, it was recognised that, usually, there are sets of these capabilities that nearly always go together - so instead let’s allow the creation of such sets… and we’ll assign those to users instead. So far, so good. What is perhaps unusual is how this was implemented. These capability sets are, today, called Roles… but rather than having a role definition that describes the capabilities represented by that role… instead roles are just labels - which can be assigned to users/projects and get placed in tokens. The expansion to capabilities happens through the definition of a json policy file (one for each service) which must be processed by the policy engine in order to work out whether the roles in a token, and the role-capability mapping, mean that a given API call can go ahead. This implementation leads to a number of issues (these have all been raised by others, just pulling them together here): As I understand how this works conceptually, a policy makes go/no-go decisions based on two kinds of properties: (1) properties about the user making the API call (which are encoded in the token) and (2) the API call name and arguments. Is that right? i) The role-capability mapping is rather static. Until recently it had to be stored in service-specific files pushed out to the service nodes out-of-band. 
Keystone does now provide some REST APIs to store and retrieve whole policy files, but these are a) coarse-grained and b) not really used by services anyway yet. ii) As more and more clouds become multi-customer (i.e. a cloud provider hosting multiple companies on a single OpenStack installation), cloud providers will want to allow those customers to administer “their bit of the cloud”. Keystone uses the Domains concept to allow a cloud provider to create a namespace for a customer to create their own projects, users and groups… and there is a version of the keystone policy file that allows a cloud provider to effectively delegate management of these items to an administrator of that customer (sometimes called a domain administrator). However, Roles are not part of that namespace - they exist in a global namespace (within a keystone installation). Diverse customers may have different interpretations of what a “VM admin” or a “net admin” should be allowed to do for their bit of the cloud - but right now that
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
On 05/06/2015 02:07 PM, Jay Pipes wrote: Adding [api] topic. API WG members, please do comment. On 05/06/2015 08:01 AM, Sean Dague wrote: On 05/06/2015 07:11 AM, Chris Dent wrote: On Wed, 6 May 2015, Sean Dague wrote: All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. Please do not do this. Please use the 4xx codes as best as you possibly can. Yes, they don't always match, but there are several of them for reasons™ and it is usually possible to find one that sort of fits. I agree with Jay here: there are only 100 error codes in the 400 namespace, and (way) more than 100 possible errors. The general 400 is perfectly good as a catch-all where the user can be expected to read the JSON error response for more information, and the other error codes should be used to make it easier for folks to distinguish specific conditions. Let's take the 403 case. If you are denied with your credentials, there's no error handling that you're going to be able to fix that. Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the most part people are talking to OpenStack through official clients but a) what happens when they aren't, b) is that the kind of world we want? I certainly don't. I want a world where the HTTP APIs that OpenStack and other services present actually use HTTP and allow a diversity of clients (machine and human). Wanting other clients to be able to plug right in is why we try to be RESTful and make error codes that are usable by any client (see the error codes and messages specs). Using Conflict and Forbidden codes in addition to good error messages will help, if they denote very specific conditions that the user can act on. Absolutely. And the problem is there is not enough namespace in the HTTP error codes to accurately reflect the error conditions we hit. So the current model means the following: If you get any error code, it means multiple failure conditions. 
Throw it away, grep the return string to decide if you can recover. My proposal is to be *extremely* specific for the use of anything besides 400, so there is only 1 situation that causes that to arise. So 403 means a thing, only one thing, ever. Not 2 kinds of things that you need to then figure out what you need to do. Agreed If you get a 400, well, that's multiple kinds of errors, and you need to then go conditional. This should provide a better experience for all clients, human and machine. I agree with Sean on this one. Using response codes effectively makes it easier to write client code that is either simple or is able to use generic libraries effectively. Let's be honest: OpenStack doesn't have a great record of using HTTP effectively or correctly. Let's not make it worse. In the case of quota, 403 is fairly reasonable because you are in fact Forbidden from doing the thing you want to do. Yes, with the passage of time you may very well not be forbidden so the semantics are not strictly matching but it is more immediately expressive yet not quite as troubling as 409 (which has a more specific meaning). Except it's not, because you are saying to use 403 for 2 issues (Don't have permissions and Out of quota). Turns out, we have APIs for adjusting quotas, which your user might have access to. So part of 403 space is something you might be able to code yourself around, and part isn't. Which means you should always ignore it and write custom logic client side. Using something beyond 400 is *not* more expressive if it has more than one possible meaning. Then it's just muddy. My point is that all errors besides 400 should have *exactly* one cause, so they are specific. Yes, agreed. I think Sean makes an excellent point that if you have 1 condition that results in a 403 Forbidden, it actually does not make things more expressive. 
It actually just means both humans and clients need to now delve deeper into the error context to determine if this is something they actually don't have permission to do, or whether they've exceeded their quota but otherwise have permission to do some action. Best, -jay p.s. And, yes, Chris, I definitely do see your side of the coin on this. It's nuanced, and a grey area... -- Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
On 05/06/2015 02:07 PM, Jay Pipes wrote: Adding [api] topic. API WG members, please do comment. On 05/06/2015 08:01 AM, Sean Dague wrote: On 05/06/2015 07:11 AM, Chris Dent wrote: On Wed, 6 May 2015, Sean Dague wrote: All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. Please do not do this. Please use the 4xx codes as best as you possibly can. Yes, they don't always match, but there are several of them for reasons™ and it is usually possible to find one that sort of fits. Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the most part people are talking to OpenStack through official clients but a) what happens when they aren't, b) is that the kind of world we want? I certainly don't. I want a world where the HTTP APIs that OpenStack and other services present actually use HTTP and allow a diversity of clients (machine and human). Absolutely. And the problem is there is not enough namespace in the HTTP error codes to accurately reflect the error conditions we hit. So the current model means the following: If you get any error code, it means multiple failure conditions. Throw it away, grep the return string to decide if you can recover. My proposal is to be *extremely* specific for the use of anything besides 400, so there is only 1 situation that causes that to arise. So 403 means a thing, only one thing, ever. Not 2 kinds of things that you need to then figure out what you need to do. If you get a 400, well, that's multiple kinds of errors, and you need to then go conditional. This should provide a better experience for all clients, human and machine. I agree with Sean on this one. Using response codes effectively makes it easier to write client code that is either simple or is able to use generic libraries effectively. Let's be honest: OpenStack doesn't have a great record of using HTTP effectively or correctly. Let's not make it worse. 
In the case of quota, 403 is fairly reasonable because you are in fact Forbidden from doing the thing you want to do. Yes, with the passage of time you may very well not be forbidden so the semantics are not strictly matching but it is more immediately expressive yet not quite as troubling as 409 (which has a more specific meaning). Except it's not, because you are saying to use 403 for 2 issues (Don't have permissions and Out of quota). Turns out, we have APIs for adjusting quotas, which your user might have access to. So part of 403 space is something you might be able to code yourself around, and part isn't. Which means you should always ignore it and write custom logic client side. Using something beyond 400 is *not* more expressive if it has more than one possible meaning. Then it's just muddy. My point is that all errors besides 400 should have *exactly* one cause, so they are specific. Yes, agreed. I think Sean makes an excellent point that if you have 1 condition that results in a 403 Forbidden, it actually does not make things more expressive. It actually just means both humans and clients need to now delve deeper into the error context to determine if this is something they actually don't have permission to do, or whether they've exceeded their quota but otherwise have permission to do some action. Best, -jay +1 The basic problem is we are trying to fit a square (generic api) peg in a round (HTTP request/response) hole. But if we do say we are recognizing sub-error-codes, it might be good to actually give them numbers somewhere in the response (maybe an error code header) rather than relying on string matching to determine the real error. String matching is fragile and has icky i18n implications. -David p.s. And, yes, Chris, I definitely do see your side of the coin on this. It's nuanced, and a grey area... 
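Sean's proposal — each non-400 status has exactly one meaning, and everything funneled through 400 carries a machine-readable code so clients never grep message strings (David's point) — might look like this sketch. The error-code strings and JSON shape here are invented for illustration, not an actual Nova or API-WG format:

```python
import json

# Sketch of the scheme discussed above: 403 is reserved for the single
# "caller is not permitted" condition; quota exhaustion and other client
# errors return 400 with a machine-readable sub-code in the body, so
# clients branch on the code rather than string-matching messages.
# Status mapping follows the thread; codes and JSON shape are invented.

def error_response(condition):
    """Map an internal error condition to (HTTP status, JSON body)."""
    if condition == "no_permission":
        status = 403            # exactly one meaning: caller is forbidden
        code = "Forbidden"
    else:
        status = 400            # catch-all; the body disambiguates
        code = {"over_quota": "OverQuota"}.get(condition, "BadRequest")
    body = json.dumps({"error": {"code": code, "condition": condition}})
    return status, body

status, body = error_response("over_quota")
print(status, json.loads(body)["error"]["code"])  # 400 OverQuota
```

A client can then treat 403 as terminal while recovering from a 400 by switching on error.code — which also sidesteps the i18n fragility David mentions with message-string matching.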
__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve
The Refstack team is working with Infra to get refstack.org up in a vm under Infra's purview. Right now, the demo is on refstack.net. refstack.net will go away once refstack.org is up and managed. --rocky -Original Message- From: Jeremy Stanley [mailto:fu...@yuggoth.org] Sent: Wednesday, May 06, 2015 08:02 To: OpenStack Development Mailing List (not for usage questions) Cc: r...@zehicle.com Subject: Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve On 2015-05-06 09:37:26 -0500 (-0500), arkady_kanev...@dell.com wrote: What are we doing to have name resolved? Meanwhile what is IP address to reach it? Do we really expect people to submit results to that web site? It looks like I can add that domain and whatever records we want for it... I'd simply need to know the IP address(es) and name(s) you want in those resource records. -- Jeremy Stanley
Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs in nova
JohnG, I work on Ironic and would be willing to be a cross project liaison for Nova and Ironic. I would just need a little info on what to do from the Nova side. Meetings to attend, web pages to monitor, etc... I assume I would start with this page: https://bugs.launchpad.net/nova/+bugs?field.tag=ironic And try to work with the Ironic and Nova teams on getting bugs resolved. I would appreciate any other info and suggestions to help improve the process. John On Wed, May 6, 2015 at 2:55 AM, John Garbutt j...@johngarbutt.com wrote: On 6 May 2015 at 09:39, Lucas Alvares Gomes lucasago...@gmail.com wrote: Hi I noticed last night that there are 23 bugs currently filed in nova tagged as ironic related. Whilst some of those are scheduler issues, a lot of them seem like things in the ironic driver itself. Does the ironic team have someone assigned to work on these bugs and generally keep an eye on their driver in nova? How do we get these bugs resolved? Thanks for this call out. I don't think we have anyone specifically assigned to keep an eye on the Ironic Nova driver, we would look at it from time to time or when someone ask us to in the Ironic channel/ML/etc... But that said, I think we need to pay more attention to the bugs in Nova. I've added one item about it to be discussed in the next Ironic meeting[1]. And in the meantime, I will take a look at some of the bugs myself. [1] https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting Thanks to you both for raising this and pushing on this. Maybe we can get a named cross project liaison to bridge the Ironic and Nova meetings. We are working on building a similar pattern for Neutron. It doesn't necessarily mean attending every nova-meeting, just someone to act as an explicit bridge between our two projects? I am open to whatever works though, just hoping we can be more proactive about issues and dependencies that pop up. 
Thanks, John
Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?
On 05/06/2015 01:36 PM, Tim Bell wrote: Julien, Has anyone started on the RPMs and/or Puppet modules? We'd be interested in trying this out. We wrote https://github.com/stackforge/puppet-gnocchi But we have to wait for packaging. I know it's WIP in RDO, no clue for Debian/Ubuntu. Thanks Tim -Original Message- From: Julien Danjou [mailto:jul...@danjou.info] Sent: 06 May 2015 17:24 To: Luo Gangyi Cc: OpenStack Development Mailing L Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ? On Wed, May 06 2015, Luo Gangyi wrote: Hi Luo, I want to try using ceilometer with gnocchi, but I didn't find any docs about how to configure it. Everything should be documented at: http://docs.openstack.org/developer/gnocchi/ The devstack installation should be pretty straightforward: http://docs.openstack.org/developer/gnocchi/devstack.html (and don't forget to also enable Ceilometer) I have checked the master branch of ceilometer and didn't see how ceilometer interacts with gnocchi either (I think there should be something like a gnocchi-dispatcher?) The dispatcher is in the Gnocchi source tree (for now, we're moving it to Ceilometer for Liberty). -- Julien Danjou // Free Software hacker // http://julien.danjou.info -- Emilien Macchi
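For anyone wiring the dispatcher up by hand once it is installed, the fragment below is a hedged sketch of what the ceilometer.conf side could look like. The option names (`dispatcher`, the `[dispatcher_gnocchi]` section, `url`) and the port are assumptions based on the usual dispatcher pattern, not confirmed by this thread; check them against the Gnocchi docs linked above for your release.

```ini
# Hypothetical ceilometer.conf fragment -- option names and the
# default Gnocchi port (8041) are assumptions, verify per release.
[DEFAULT]
dispatcher = gnocchi

[dispatcher_gnocchi]
url = http://localhost:8041
```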
[openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart
Hi, As we swapped a fraction of our Ceph mon servers between the pre-production and production cluster — something we considered to be transparent as the Ceph config points to the mon alias—, we ended up in a situation where VMs with volumes attached were not able to boot (with a probability that matched the fraction of the servers moved between the Ceph instances). We found that the reason for this is the connection_info in block_device_mapping which contains the IP addresses of the mon servers as extracted by the rbd driver in initialize_connection() at the moment when the connection is established. From what we see, however, this information is not updated as long as the connection exists, and will hence be re-applied without checking even when the XML is recreated. The idea to extract the mon servers by IP from the mon map was probably to get all mon servers (rather than only one from a load-balancer or an alias), but while our current scenario may be special, we will face a similar problem the day the Ceph mons need to be replaced. And that makes it a more general issue. For our current problem: Is there a user-transparent way to force an update of that connection information? (Apart from fiddling with the database entries, of course.) For the general issue: Would it be possible to simply use the information from the ceph.conf file directly (an alias in our case) throughout the whole stack to avoid hard-coding IPs that will be obsolete one day? Thanks! Arne — Arne Wiebalck CERN IT
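To make concrete what "fiddling with the database entries" amounts to: the stored connection_info is a JSON blob whose mon address list is frozen at attach time. The sketch below uses a made-up record of roughly the shape the rbd connector stores (field names and IPs are illustrative, not dumped from a real deployment), and shows the kind of rewrite such a manual fix-up would perform — which is exactly the non-transparent surgery Arne is asking to avoid.

```python
import json

# Made-up connection_info record of roughly the shape the rbd driver
# stores in block_device_mapping; IPs are documentation addresses.
record = {
    "driver_volume_type": "rbd",
    "data": {
        "name": "volumes/volume-1234",
        "hosts": ["192.0.2.10", "192.0.2.11"],   # frozen at attach time
        "ports": ["6789", "6789"],
    },
}

def refresh_mon_hosts(connection_info_json, new_hosts):
    """Return the JSON blob with the mon address list replaced."""
    info = json.loads(connection_info_json)
    info["data"]["hosts"] = list(new_hosts)
    info["data"]["ports"] = ["6789"] * len(new_hosts)
    return json.dumps(info)

updated = refresh_mon_hosts(json.dumps(record),
                            ["198.51.100.5", "198.51.100.6"])
assert json.loads(updated)["data"]["hosts"] == ["198.51.100.5", "198.51.100.6"]
```

Storing the mon alias from ceph.conf instead of resolved IPs, as Arne suggests, would make this rewrite unnecessary in the first place.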
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
On Wed, 6 May 2015, Jay Pipes wrote: I think Sean makes an excellent point that if you have 1 condition that results in a 403 Forbidden, it actually does not make things more expressive. It actually just means both humans and clients need to now delve deeper into the error context to determine if this is something they actually don't have permission to do, or whether they've exceeded their quota but otherwise have permission to do some action. As I said to Sean in IRC, I can see where you guys are coming from and I haven't really got a better counter-proposal than my experience writing servers and clients doesn't like this so it's not like I want to fight about it. I do think it is worth discussion and there are obviously costs either way that we should identify and balance. Interestingly, in the process of writing this response I think I've managed to come up with a few reasons. On the other hand maybe I'm just getting out yet more paint for the shed. Basically it seems to me that the proposal to use 400 just moves the problem around, one option has conditionals localized under 400, centralizing the ambiguity. The other option puts the ambiguity in categories. I guess my brain works better with the latter: a kind of cascading decision tree. Note: I think we're all perpetuating the myth or wish that we actually do do something in code in response to 400 errors. Maybe in some very special clients it might happen, but in ad-hoc clients (the best kind) for the most part we report the status and fail and let the human decide what's next. 
In that sort of context I want the response _codes_ to have some semantics because I want to branch on the codes (if I branch at all) and nothing else:

* 400: bro, something bogus happened, I'm pretty sure it was your fault
* 401: Tell me who you are and you might get to do this
* 402: You might get to do this if you pay
* 403: You didn't get to do this because the _server_ forbids you
* 404: You didn't get to do this because it ain't there
* 405: You didn't get to do this because that action is not available
* 406: I've got the thing you want, but not in the form you want it
* 407: Some man in the middle proxy needs auth
* 408: You spoke too slowly for my awesome brains
* 409: Somebody else got there first
* 410: Seriously, it ain't there and it never will be
* 411: Why u no content-length!?
* 412: You sent conditional headers and I can't meet their requirements
* 413: Too big in the body!
* 414: Too big in the URI!
* 415: You sent me a thing and I might have been able to do something with it if it were in a different form

[...] These all mean things as defined by RFCs 7231 and 7235. Those RFCs were not pulled out of thin air: They are part of the suite of RFCs that define HTTP. Do we want to do HTTP? Yes, I think so. In that case, we ought to follow it where possible. Each of those codes above has different levels of ambiguity. Some are quite specific. For example 405, 406, 411, 412 and 415. Where we can be sure they are the correct response we should use them and most assuredly _not_ 400. 403, as you've both identified, is a lot more squiffy: the server understood the request but refuses to authorize it...a request might be forbidden for reasons unrelated to the credentials. Which leads us to 400.
How I tend to use 400 is when none of 405, 406, 409, 411, 412 or 415 can be used because the representation is _claiming_ legitimate form (based on the headers) and no conditionals are being violated and where none of 401, 403 or 404 can be used because the thing is there, I am authentic and the server is not forbidding it. What that means is that there's something crufty about the otherwise good representation: You've claimed to be sending JSON and you did, but you left out a required field. There is no other 4xx that covers that, thus 400. Now if we try to meld my rules with this idea about signifying over quota, I feel we've now discovered some collisions: My use of 400 means there's something wrong with your request. This is also what the spec says: the client seems to have erred. Both of these essentially say that request was pretty okay, but not quite right and you can change the _request_ (or perhaps the client side environment) and achieve success. In the case of quota you need to change the server side environment, not this request. In fact if you do change the server (your quota) and then do the same request again it will likely work. Looking at 403 again: the server understood the request but refuses to authorize it. 4xx means client side error (The 4xx (Client Error) class of status code indicates that the client seems to have erred.), so arguably over quota doesn't really work in _any_ 4xx because the client made no error, the service just has a quota lower than they need. We don't want to go down the non 4xx road at this time, so given our choices 403 is the one that most says the server
[openstack-dev] [QA] Meeting Thursday May 7th at 17:00 UTC
Hi everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be tomorrow Thursday, May 7th at 17:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. To help people figure out what time 17:00 UTC is in other timezones tomorrow's meeting will be at: 13:00 EDT 02:00 JST 02:30 ACST 19:00 CEST 12:00 CDT 10:00 PDT -Matt Treinish
Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart
Hi Arne, We've had this EXACT same issue. I don't know of a way to force an update as you are basically pulling the rug out from under a running instance. I don't know if it is possible/feasible to update the virsh xml in place and then migrate to get it to actually use that data. (I think we tried that to no avail.) dumpxml=massage cephmons=import xml If you find a way, let me know, and that's part of the reason I'm replying so that I stay on this thread. NOTE: We did this on icehouse. Haven't tried since upgrading to Juno but I don't note any change therein that would mitigate this. So I'm guessing Liberty/post-Liberty for a real fix. On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck arne.wieba...@cern.ch wrote: Hi, As we swapped a fraction of our Ceph mon servers between the pre-production and production cluster -- something we considered to be transparent as the Ceph config points to the mon alias--, we ended up in a situation where VMs with volumes attached were not able to boot (with a probability that matched the fraction of the servers moved between the Ceph instances). We found that the reason for this is the connection_info in block_device_mapping which contains the IP adresses of the mon servers as extracted by the rbd driver in initialize_connection() at the moment when the connection is established. From what we see, however, this information is not updated as long as the connection exists, and will hence be re-applied without checking even when the XML is recreated. The idea to extract the mon servers by IP from the mon map was probably to get all mon servers (rather than only one from a load-balancer or an alias), but while our current scenario may be special, we will face a similar problem the day the Ceph mons need to be replaced. And that makes it a more general issue. For our current problem: Is there a user-transparent way to force an update of that connection information? (Apart from fiddling with the database entries, of course.) 
For the general issue: Would it be possible to simply use the information from the ceph.conf file directly (an alias in our case) throughout the whole stack to avoid hard-coding IPs that will be obsolete one day? Thanks! Arne -- Arne Wiebalck CERN IT
Re: [openstack-dev] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC
On Tue, Apr 14, 2015 at 2:57 PM, James E. Blair cor...@inaugust.com wrote: On Saturday, May 9 at 16:00 UTC Gerrit will be unavailable for about 4 hours while we upgrade to the latest release of Gerrit: version 2.10. We are currently running Gerrit 2.8 so this is an upgrade across two major releases of Gerrit. The release notes for both versions are here: https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.10.html https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.9.html If you have any questions about the upgrade, please feel free to reply here or contact us in #openstack-infra on Freenode. Just a quick reminder that this upgrade is coming up this Saturday, May 9th, starting at 16:00 UTC. During this upgrade we anticipate that Gerrit will be unavailable for about 4 hours. -- Elizabeth Krumbach Joseph || Lyz || pleia2
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
On 05/06/2015 03:15 PM, Chris Dent wrote: On Wed, 6 May 2015, Jay Pipes wrote: I think Sean makes an excellent point that if you have 1 condition that results in a 403 Forbidden, it actually does not make things more expressive. It actually just means both humans and clients need to now delve deeper into the error context to determine if this is something they actually don't have permission to do, or whether they've exceeded their quota but otherwise have permission to do some action. As I said to Sean in IRC, I can see where you guys are coming from and I haven't really got a better counter-proposal than my experience writing servers and clients doesn't like this so it's not like I want to fight about it. I do think it is worth discussion and there are obviously costs either way that we should identify and balance. Interestingly, in the process of writing this response I think I've managed to come up with a few reasons. On the other hand maybe I'm just getting out yet more paint for the shed. Basically it seems to me that the proposal to use 400 just moves the problem around, one option has conditionals localized under 400, centralizing the ambiguity. The other option puts the ambiguity in categories. I guess my brain works better with the latter: a kind of cascading decision tree. Note: I think we're all perpetuating the myth or wish that we actually do do something in code in response to 400 errors. Maybe in some very special clients it might happen, but in ad-hoc clients (the best kind) for the most part we report the status and fail and let the human decide what's next. Guilty as charged. It may be that the benefit of moving 403-400 isn't worth the trouble in any case (though I'd prefer it) since there are already clients out in the world that may/may not rely on this behavior. 
In that sort of context I want the response _codes_ to have some semantics because I want to branch on the codes (if I branch at all) and nothing else:

* 400: bro, something bogus happened, I'm pretty sure it was your fault
* 401: Tell me who you are and you might get to do this
* 402: You might get to do this if you pay
* 403: You didn't get to do this because the _server_ forbids you
* 404: You didn't get to do this because it ain't there
* 405: You didn't get to do this because that action is not available
* 406: I've got the thing you want, but not in the form you want it
* 407: Some man in the middle proxy needs auth
* 408: You spoke too slowly for my awesome brains
* 409: Somebody else got there first
* 410: Seriously, it ain't there and it never will be
* 411: Why u no content-length!?
* 412: You sent conditional headers and I can't meet their requirements
* 413: Too big in the body!
* 414: Too big in the URI!
* 415: You sent me a thing and I might have been able to do something with it if it were in a different form

[...] These all mean things as defined by RFCs 7231 and 7235. Those RFCs were not pulled out of thin air: They are part of the suite of RFCs that define HTTP. Do we want to do HTTP? Yes, I think so. In that case, we ought to follow it where possible. Each of those codes above has different levels of ambiguity. Some are quite specific. For example 405, 406, 411, 412 and 415. Where we can be sure they are the correct response we should use them and most assuredly _not_ 400. 403, as you've both identified, is a lot more squiffy: the server understood the request but refuses to authorize it...a request might be forbidden for reasons unrelated to the credentials. Which leads us to 400.
How I tend to use 400 is when none of 405, 406, 409, 411, 412 or 415 can be used because the representation is _claiming_ legitimate form (based on the headers) and no conditionals are being violated and where none of 401, 403 or 404 can be used because the thing is there, I am authentic and the server is not forbidding it. What that means is that there's something crufty about the otherwise good representation: You've claimed to be sending JSON and you did, but you left out a required field. There is no other 4xx that covers that, thus 400. Now if we try to meld my rules with this idea about signifying over quota, I feel we've now discovered some collisions: My use of 400 means there's something wrong with your request. This is also what the spec says: the client seems to have erred. Both of these essentially say that request was pretty okay, but not quite right and you can change the _request_ (or perhaps the client side environment) and achieve success. In the case of quota you need to change the server side environment, not this request. In fact if you do change the server (your quota) and then do the same request again it will likely work. Looking at 403 again: the server understood the request but refuses to authorize it. Very good point. I was thinking
Re: [openstack-dev] [Fuel] Nominate Julia Aranovich for fuel-web core
So, there is no objections and Julia is now a core reviewer for fuel-web. Congratulations! 2015-05-05 16:17 GMT+03:00 Vitaly Kramskikh vkramsk...@mirantis.com: Thanks for voting. If nobody has objections by tomorrow, Julia will get +2 rights for fuel-web. 2015-05-05 15:30 GMT+03:00 Dmitry Pyzhov dpyz...@mirantis.com: +1 On Tue, May 5, 2015 at 1:06 PM, Evgeniy L e...@mirantis.com wrote: +1 On Tue, May 5, 2015 at 12:55 PM, Sebastian Kalinowski skalinow...@mirantis.com wrote: +1 2015-04-30 11:33 GMT+02:00 Przemyslaw Kaminski pkamin...@mirantis.com : +1, indeed Julia's reviews are very thorough. P. On 04/30/2015 11:28 AM, Vitaly Kramskikh wrote: Hi, I'd like to nominate Julia Aranovich http://stackalytics.com/report/users/jkirnosova for fuel-web https://github.com/stackforge/fuel-web core team. Julia's reviews are always thorough and have decent quality. She is one of the top contributors and reviewers in fuel-web repo (mostly for JS/UI stuff). Please vote by replying with +1/-1. -- Vitaly Kramskikh, Fuel UI Tech Lead, Mirantis, Inc. 
-- Vitaly Kramskikh, Fuel UI Tech Lead, Mirantis, Inc.
Re: [openstack-dev] [RefStackl] - http://refstack.org/ - does not resolve
- Original Message - From: Jeremy Stanley fu...@yuggoth.org To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org On 2015-05-06 09:37:26 -0500 (-0500), arkady_kanev...@dell.com wrote: What are we doing to have name resolved? Meanwhile what is IP address to reach it? Do we really expect people to submit results to that web site? It looks like I can add that domain and whatever records we want for it... I'd simply need to know the IP address(es) and name(s) you want in those resource records. -- Jeremy Stanley It looks like it got moved to refstack.net (or at least, that address resolves and looks to be the right content...). Thanks, Steve
Re: [openstack-dev] [chef] Feedback to move IRC Monday meeting and time.
Hi, for me (i live in Germany) the full hour (so 15:00 UTC) is fine. Cheers, Jan On May 6, 2015, at 7:11 PM, JJ Asghar jasg...@chef.io wrote: Hey everyone! As we move forward with our big tent move[1] Jan suggested we move from our traditional IRC meeting in our main channel #openstack-chef to one of the official OpenStack meeting channels[2]. This has actually caused a situation that I’d like to make public. In the documentation the times for the meetings are suggested at the top of the hour, we have ours that start at :30 past. This allows for our friends and community members on the west coast of the United States able to join at a pseudo-reasonable time. The challenge is, if we move it forward to the top of the hour, we may lose the west coast, but if we move it back to the top of the next hour we may lose our friends in Germany and earlier time zones. I’m not sure what to do here, so i’d like some feedback from the community. When we’ve come to a consensus we can attempt to find the open slot in the official IRC channels and i can put the stake in the ground here[3]. Thoughts, questions, concerns? -JJ [1]: https://review.openstack.org/#/c/175000/ [2]: https://wiki.openstack.org/wiki/Meetings/CreateaMeeting [3]: https://wiki.openstack.org/wiki/Meetings#Chef_Cookbook_meetings
Re: [openstack-dev] [oslo] Adding Joshua Harlow to oslo-core
Not a core but definitely a +1 from my side. Has great technical insights and is someone who is always happy to help others. -Vilobh On Tue, May 5, 2015 at 12:55 PM, David Medberry openst...@medberry.net wrote: Not a voting member, but +1 from me. He's core in my book. On Tue, May 5, 2015 at 11:27 AM, Ben Nemec openst...@nemebean.com wrote: +1 from me as well! On 05/05/2015 09:47 AM, Julien Danjou wrote: Hi fellows, I'd like to propose that we add Joshua Harlow to oslo-core. He is already maintaining some of the Oslo libraries (taskflow, tooz…) and he's helping on a lot of other ones for a while now. Let's bring him in for real!
Re: [openstack-dev] [Fuel] LBaaS in version 5.1
Thanks Stanislaw for the reply. sure i can do that, the only unknown question i have is related to the Fuel HA controllers. I assume i can easily ignore the controller HA (LBaaS doesn't support HA :) ) and just go the standard LBaaS? On Wed, May 6, 2015 at 2:55 PM, Stanislaw Bogatkin sbogat...@mirantis.com wrote: Hi Daniel, Unfortunately, we never supported LBaaS until Fuel 6.0 when the plugin system was introduced and the LBaaS plugin was created. So, I think that docs about it never existed for 5.1. But as I know, you can easily install LBaaS in 5.1 (it should be shipped in our repos) and configure it in accordance with the standard OpenStack cloud administrator guide [1]. [1] http://docs.openstack.org/admin-guide-cloud/content/install_neutron-lbaas-agent.html On Wed, May 6, 2015 at 2:12 PM, Daniel Comnea comnea.d...@gmail.com wrote: Hi all, Recently i used Fuel 5.1 to deploy Openstack Icehouse on a Lab (PoC) and a request came in for enabling Neutron LBaaS. I have looked up on Fuel doc to see if this is supported in the version i'm running but failed to find anything. Anyone can point me to any docs which mention a) yes it is supported and b) how to update it via Fuel? Thanks, Dani
Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?
Sorry to add another question, can Gnocchi be installed on a Juno cloud or do we need to be running Kilo? Tim -Original Message- From: Tim Bell [mailto:tim.b...@cern.ch] Sent: 06 May 2015 19:36 To: OpenStack Development Mailing List (not for usage questions); Luo Gangyi Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ? Julien, Has anyone started on the RPMs and/or Puppet modules? We'd be interested in trying this out. Thanks Tim -Original Message- From: Julien Danjou [mailto:jul...@danjou.info] Sent: 06 May 2015 17:24 To: Luo Gangyi Cc: OpenStack Development Mailing L Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ? On Wed, May 06 2015, Luo Gangyi wrote: Hi Luo, I want to try using ceilometer with gnocchi, but I didn't find any docs about how to configure it. Everything should be documented at: http://docs.openstack.org/developer/gnocchi/ The devstack installation should be pretty straightforward: http://docs.openstack.org/developer/gnocchi/devstack.html (and don't forget to also enable Ceilometer) I have checked the master branch of ceilometer and didn't see how ceilometer interacts with gnocchi either (I think there should be something like a gnocchi-dispatcher?) The dispatcher is in the Gnocchi source tree (for now, we're moving it to Ceilometer for Liberty). -- Julien Danjou // Free Software hacker // http://julien.danjou.info
Re: [openstack-dev] [Murano] [Mistral] SSH workflow action
If your Mistral engine is on the same host as the network node hosting the router for the tenant, then it would probably work. There are a lot of conditions in that statement though... Too many for my tastes. :/ While I dislike agents running in the VMs, this still might be a good use case for one... This would also probably be a good use case for Zaqar, I think. Have a generic run-shell-commands-from-Zaqar-queue agent that pulls commands from a Zaqar queue and executes them. The VMs don't have to be directly reachable from the network then. You just have to push messages into Zaqar. From Murano's perspective, though, maybe it shouldn't care. Should Mistral abstract away how to execute the action, leaving it up to Mistral how to get the action to the VM? If that's the case, then ssh vs. queue/agent is just a Mistral implementation detail? Maybe the OpenStack deployer chooses what's the best route for their cloud? Thanks, Kevin From: Filip Blaha [filip.bl...@hp.com] Sent: Wednesday, May 06, 2015 8:42 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello We are considering implementing actions on services of a Murano environment via Mistral workflows, specifically whether the Mistral std.ssh action could be used to run a command on an instance. An example of such an action in Murano could be a restart action on a MySQL DB service: the Mistral workflow would ssh to the instance running MySQL and run service mysql restart. From my point of view, trying to use SSH to access instances from a Mistral workflow is not a good idea, but I would like to confirm it. The biggest problem I see there is OpenStack networking. A Mistral service running on some OpenStack node would not be able to access an instance via its fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh from the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on an implementation detail of Neutron. In a multi-node OpenStack deployment it could be even more complicated. In other words, I am asking whether we can use the std.ssh Mistral action to access instances via ssh on their fixed IPs? I think not, but I would like to confirm it. Thanks Filip __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Murano] [Mistral] SSH workflow action
Connection direction matters here only for solving the networking connectivity problem. Networking in OpenStack generally works in such a way that connections from a VM are allowed to almost anywhere. In a Murano production deployment we use a separate MQ instance so that VMs have no access to the OpenStack MQ. As for who initiates task execution, it is always the Murano service, which publishes tasks (shell script + necessary files) in the MQ so that the agent can pull and execute them. Thanks Gosha On Wed, May 6, 2015 at 9:31 AM, Filip Blaha filip.bl...@hp.com wrote: Hello, one more note on that. There is a difference in the direction of who initiates the connection. In the case of murano agent -- rabbit MQ, the connection is initiated from the VM to the OpenStack service (rabbit). In the case of the std.ssh mistral action, the direction is the opposite: from the OpenStack service (mistral) to the ssh server on the VM. Filip On 05/06/2015 06:00 PM, Pospisil, Radek wrote: Hello, I think the generic question is: can OpenStack services also be accessible on Neutron networks, so that a VM (created by Nova) can access them? Filip and I were discussing this today and did not reach a final decision. Another example is the Murano agent running on VMs - it connects to RabbitMQ, which is also accessed by the Murano engine. Regards, Radek -Original Message- From: Blaha, Filip Sent: Wednesday, May 06, 2015 5:43 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello We are considering implementing actions on services of a Murano environment via Mistral workflows, specifically whether the Mistral std.ssh action could be used to run a command on an instance. An example of such an action in Murano could be a restart action on a MySQL DB service: the Mistral workflow would ssh to the instance running MySQL and run service mysql restart. From my point of view, trying to use SSH to access instances from a Mistral workflow is not a good idea, but I would like to confirm it. The biggest problem I see there is OpenStack networking. A Mistral service running on some OpenStack node would not be able to access an instance via its fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh from the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on an implementation detail of Neutron. In a multi-node OpenStack deployment it could be even more complicated. In other words, I am asking whether we can use the std.ssh Mistral action to access instances via ssh on their fixed IPs? I think not, but I would like to confirm it. Thanks Filip __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Neutron] Success of the IPv6 Subteam - Proposal to disband
On 05/04/2015 09:51 PM, Kyle Mestery wrote: On Mon, May 4, 2015 at 7:37 PM, Sean M. Collins s...@coreitpro.com wrote: It is a bittersweet moment - I am proposing that, because of the amazing success we have had as a subteam and how much we have accomplished, it makes sense for our team to disband and re-integrate with other subteams (the L3 subteam comes to mind) or have items in the on-demand agenda of the main meeting. Unless there is any pressing business, I believe that we will not need a recurring meeting, and tomorrow's meeting is cancelled. As always, I am in #openstack-neutron and happy to help. Sean and team, thank you for all your awesome work on IPv6 in Neutron over the past two cycles! And thanks for volunteering to disband and go out on top, integrating back into the broader team. It's a good move, and it would make sense to cover IPv6 items in the L3 meeting as you say. Kyle Sean -- Sean M. Collins __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Congratulations Sean and the IPv6 team. It is nice to see that your purpose has been served. Anita.
[openstack-dev] [Gnocchi] Gnocchi 1.0.0 released
Hi fellows, I'm pleased to announce that Gnocchi 1.0.0 has been released today. https://bugs.launchpad.net/gnocchi/1.0/1.0.0 https://pypi.python.org/pypi/gnocchi The full documentation is online at: http://docs.openstack.org/developer/gnocchi Happy hacking! Cheers, -- Julien Danjou ;; Free Software hacker ;; http://julien.danjou.info __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Murano] [Mistral] SSH workflow action
On Wed, May 6, 2015 at 9:26 AM, Fox, Kevin M kevin@pnnl.gov wrote: If your Mistral engine is on the same host as the network node hosting the router for the tenant, then it would probably work. There are a lot of conditions in that statement though... Too many for my tastes. :/ While I dislike agents running in the VMs, this still might be a good use case for one... This would also probably be a good use case for Zaqar, I think. Have a generic run-shell-commands-from-Zaqar-queue agent that pulls commands from a Zaqar queue and executes them. The VMs don't have to be directly reachable from the network then. You just have to push messages into Zaqar. From Murano's perspective, though, maybe it shouldn't care. Should Mistral abstract away how to execute the action, leaving it up to Mistral how to get the action to the VM? If that's the case, then ssh vs. queue/agent is just a Mistral implementation detail? Maybe the OpenStack deployer chooses what's the best route for their cloud? Thanks, Kevin +1 for MQ. That is the path which proved itself to be working in most of the cases. -1 for ssh, as this is a big headache. Thanks, Gosha From: Filip Blaha [filip.bl...@hp.com] Sent: Wednesday, May 06, 2015 8:42 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello We are considering implementing actions on services of a Murano environment via Mistral workflows, specifically whether the Mistral std.ssh action could be used to run a command on an instance. An example of such an action in Murano could be a restart action on a MySQL DB service: the Mistral workflow would ssh to the instance running MySQL and run service mysql restart. From my point of view, trying to use SSH to access instances from a Mistral workflow is not a good idea, but I would like to confirm it. The biggest problem I see there is OpenStack networking. A Mistral service running on some OpenStack node would not be able to access an instance via its fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh from the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on an implementation detail of Neutron. In a multi-node OpenStack deployment it could be even more complicated. In other words, I am asking whether we can use the std.ssh Mistral action to access instances via ssh on their fixed IPs? I think not, but I would like to confirm it. Thanks Filip __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Murano] [Mistral] SSH workflow action
Hello, one more note on that. There is a difference in the direction of who initiates the connection. In the case of murano agent -- rabbit MQ, the connection is initiated from the VM to the OpenStack service (rabbit). In the case of the std.ssh mistral action, the direction is the opposite: from the OpenStack service (mistral) to the ssh server on the VM. Filip On 05/06/2015 06:00 PM, Pospisil, Radek wrote: Hello, I think the generic question is: can OpenStack services also be accessible on Neutron networks, so that a VM (created by Nova) can access them? Filip and I were discussing this today and did not reach a final decision. Another example is the Murano agent running on VMs - it connects to RabbitMQ, which is also accessed by the Murano engine. Regards, Radek -Original Message- From: Blaha, Filip Sent: Wednesday, May 06, 2015 5:43 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Murano] [Mistral] SSH workflow action Hello We are considering implementing actions on services of a Murano environment via Mistral workflows, specifically whether the Mistral std.ssh action could be used to run a command on an instance. An example of such an action in Murano could be a restart action on a MySQL DB service: the Mistral workflow would ssh to the instance running MySQL and run service mysql restart. From my point of view, trying to use SSH to access instances from a Mistral workflow is not a good idea, but I would like to confirm it. The biggest problem I see there is OpenStack networking. A Mistral service running on some OpenStack node would not be able to access an instance via its fixed IP (e.g. 10.0.0.5) via SSH. The instance could be accessed via ssh from the namespace of its gateway router, e.g. ip netns exec qrouter-... ssh cirros@10.0.0.5, but I think it is not good to rely on an implementation detail of Neutron. In a multi-node OpenStack deployment it could be even more complicated. In other words, I am asking whether we can use the std.ssh Mistral action to access instances via ssh on their fixed IPs? I think not, but I would like to confirm it.
Thanks Filip __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [chef] Feedback to move IRC Monday meeting and time.
Hey everyone! As we move forward with our big tent move[1], Jan suggested we move from our traditional IRC meeting in our main channel #openstack-chef to one of the official OpenStack meeting channels[2]. This has actually surfaced a situation that I’d like to make public. In the documentation, the suggested meeting times are at the top of the hour; ours starts at :30 past. This allows our friends and community members on the west coast of the United States to join at a pseudo-reasonable time. The challenge is, if we move it forward to the top of the hour, we may lose the west coast, but if we move it back to the top of the next hour we may lose our friends in Germany and earlier time zones. I’m not sure what to do here, so I’d like some feedback from the community. When we’ve come to a consensus we can attempt to find an open slot in the official IRC channels and I can put the stake in the ground here[3]. Thoughts, questions, concerns? -JJ [1]: https://review.openstack.org/#/c/175000/ [2]: https://wiki.openstack.org/wiki/Meetings/CreateaMeeting [3]: https://wiki.openstack.org/wiki/Meetings#Chef_Cookbook_meetings __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core
Not a core but definitely a +1. She is very helpful. On Fri, May 1, 2015 at 9:43 PM, Gary Kotton gkot...@vmware.com wrote: +1 From: Alex Xu sou...@gmail.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Friday, May 1, 2015 at 6:30 AM To: OpenStack List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core I'm not core, but I want to +1 :) 2015-04-30 19:30 GMT+08:00 John Garbutt j...@johngarbutt.com: Hi, I propose we add Melanie to nova-core. She has been consistently doing great quality code reviews[1], alongside a wide array of other really valuable contributions to the Nova project. Please respond with comments, +1s, or objections within one week. Many thanks, John [1] https://review.openstack.org/#/dashboard/4690 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?
Julien, Has anyone started on the RPMs and/or Puppet modules? We'd be interested in trying this out. Thanks Tim -Original Message- From: Julien Danjou [mailto:jul...@danjou.info] Sent: 06 May 2015 17:24 To: Luo Gangyi Cc: OpenStack Development Mailing L Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ? On Wed, May 06 2015, Luo Gangyi wrote: Hi Luo, I want to try using ceilometer with gnocchi, but I didn't find any docs about how to configure it. Everything should be documented at: http://docs.openstack.org/developer/gnocchi/ The devstack installation should be pretty straightforward: http://docs.openstack.org/developer/gnocchi/devstack.html (and don't forget to also enable Ceilometer) I have checked the master branch of ceilometer and didn't see how ceilometer interacts with gnocchi either (I think there should be something like a gnocchi-dispatcher?) The dispatcher is in the Gnocchi source tree (for now, we're moving it to Ceilometer for Liberty). -- Julien Danjou // Free Software hacker // http://julien.danjou.info __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
Adding [api] topic. API WG members, please do comment. On 05/06/2015 08:01 AM, Sean Dague wrote: On 05/06/2015 07:11 AM, Chris Dent wrote: On Wed, 6 May 2015, Sean Dague wrote: All other client errors, just be a 400. And use the emerging error reporting json to actually tell the client what's going on. Please do not do this. Please use the 4xx codes as best as you possibly can. Yes, they don't always match, but there are several of them for reasons™ and it is usually possible to find one that sort of fits. Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the most part people are talking to OpenStack through official clients but a) what happens when they aren't, b) is that the kind of world we want? I certainly don't. I want a world where the HTTP APIs that OpenStack and other services present actually use HTTP and allow a diversity of clients (machine and human). Absolutely. And the problem is there is not enough namespace in the HTTP error codes to accurately reflect the error conditions we hit. So the current model means the following: If you get any error code, it means multiple failure conditions. Throw it away, grep the return string to decide if you can recover. My proposal is to be *extremely* specific for the use of anything besides 400, so there is only 1 situation that causes that to arise. So 403 means a thing, only one thing, ever. Not 2 kinds of things that you need to then figure out what you need to do. If you get a 400, well, that's multiple kinds of errors, and you need to then go conditional. This should provide a better experience for all clients, human and machine. I agree with Sean on this one. Using response codes effectively makes it easier to write client code that is either simple or is able to use generic libraries effectively. Let's be honest: OpenStack doesn't have a great record of using HTTP effectively or correctly. Let's not make it worse. 
In the case of quota, 403 is fairly reasonable because you are in fact Forbidden from doing the thing you want to do. Yes, with the passage of time you may very well not be forbidden, so the semantics are not strictly matching, but it is more immediately expressive yet not quite as troubling as 409 (which has a more specific meaning). Except it's not, because you are saying to use 403 for 2 issues (Don't have permissions and Out of quota). It turns out we have APIs for adjusting quotas, which your user might have access to. So part of the 403 space is something you might be able to code yourself around, and part isn't. Which means you should always ignore it and write custom logic client side. Using something beyond 400 is *not* more expressive if it has more than one possible meaning. Then it's just muddy. My point is that all errors besides 400 should have *exactly* one cause, so they are specific. Yes, agreed. I think Sean makes an excellent point that if you have more than one condition that results in a 403 Forbidden, it actually does not make things more expressive. It actually just means both humans and clients now need to delve deeper into the error context to determine whether this is something they actually don't have permission to do, or whether they've exceeded their quota but otherwise have permission to perform the action. Best, -jay p.s. And, yes, Chris, I definitely do see your side of the coin on this. It's nuanced, and a grey area... __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
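The sub-error-code approach discussed in this thread can be sketched from the client side. The JSON payload shape and the code string below are hypothetical (the actual format was still under review at the time), but they illustrate why a machine-readable code beats string matching:

```python
import json

# Hypothetical error payload; the "code" field is the machine-readable
# sub-error code the thread argues for, so clients can branch on it
# instead of grepping the human-readable message.
QUOTA_ERROR = json.dumps({
    "error": {
        "code": "compute.quota.exceeded",   # hypothetical code name
        "message": "Instance quota exceeded for project demo",
    }
})

def classify(status, body):
    """React based on the structured code, never on the message text."""
    code = json.loads(body)["error"]["code"]
    if status == 403 and code == "compute.quota.exceeded":
        # Recoverable: the caller may be allowed to raise the quota.
        return "over-quota"
    if status == 403:
        # Not recoverable client-side: a genuine permissions problem.
        return "forbidden"
    return "bad-request"

print(classify(403, QUOTA_ERROR))  # over-quota
```

With a single 403 covering both cases and no structured code, the two branches above would collapse into fragile message grepping, which is exactly the objection raised here.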
Re: [openstack-dev] [chef] Feedback to move IRC Monday meeting and time.
On Wed, May 06, 2015 at 12:11:16PM -0500, JJ Asghar wrote: Hey everyone! As we move forward with our big tent move[1] Jan suggested we move from our traditional IRC meeting in our main channel #openstack-chef to one of the official OpenStack meeting channels[2]. Sounds like a good idea :) This has actually caused a situation that I’d like to make public. In the documentation the times for the meetings are suggested at the top of the hour, we have ours that start at :30 past. This allows for our friends and community members on the west coast of the United States able to join at a pseudo-reasonable time. The challenge is, if we move it forward to the top of the hour, we may lose the west coast, but if we move it back to the top of the next hour we may lose our friends in Germany and earlier time zones. Having a meeting at :30 past is fine; I believe it's discouraged because it makes attending back-to-back meetings harder. There are several meetings that do this. Yours Tony. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Swift] HEAD Object API status code
Hello Swift developers, I would like to ask you about the Swift API specification. Swift returns a 204 status code to a valid HEAD Object request with a Content-Length header, whereas the latest HTTP/1.1 specification (RFC 7230) states that you must not send that header with a 204 status code. 3.3.2. Content-Length (snip) A server MUST NOT send a Content-Length header field in any response with a status code of 1xx (Informational) or 204 (No Content). A server MUST NOT send a Content-Length header field in any 2xx (Successful) response to a CONNECT request (Section 4.3.6 of [RFC7231]). What I would like to know is: when you designed the Swift APIs, what was the reasoning behind choosing the 204 status code for HEAD Object over other status codes such as 200? Thanks, Atsuo -- Ouchi Atsuo / ouchi.at...@jp.fujitsu.com tel. 03-6424-6612 / ext. 72-60728968 Service Development Department, Foundation Service Division Fujitsu Limited __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
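The RFC 7230 rule quoted above can be captured as a small check. This is a sketch of a response validator, not Swift code; the CONNECT case from the quoted paragraph is omitted for brevity:

```python
def content_length_allowed(status, headers):
    """Return False when RFC 7230 section 3.3.2 forbids a
    Content-Length header for the given response status code."""
    has_cl = any(name.lower() == "content-length" for name in headers)
    if not has_cl:
        return True
    # A server MUST NOT send Content-Length with 1xx or 204 responses.
    if 100 <= status < 200 or status == 204:
        return False
    return True

# Swift's current HEAD Object behavior: 204 plus Content-Length.
print(content_length_allowed(204, {"Content-Length": "42"}))  # False
# A 200 response with Content-Length is fine.
print(content_length_allowed(200, {"Content-Length": "42"}))  # True
```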
Re: [openstack-dev] [puppet] Re: Puppet-OpenStack API providers - Follow up
It seems ~/.openrc is the only default, so just replacing the RC file default: Workflow to find credentials details: 1. From the environment (ENV[token] or ENV[project], in short) 2. From an RC file located by convention in the current user's home directory: ~/.openrc 3. From an OpenStack configuration file such as keystone.conf, glance.conf, etc. Thanks, Gilles On 06/05/15 12:49, Gilles Dubreuil wrote: Hi, To summarize from the latest 2 discussions about this matter. Workflow to find credentials details: 1. From the environment (ENV[token] or ENV[project], in short) 2. From an RC file located by convention in the current user's home directory: ~/openstackrc 3. From an OpenStack configuration file such as keystone.conf, glance.conf, etc. Just to avoid confusion, any user/tenant and password or token details could be used, but they have to come from the above list. The change, impacting openstacklib and the current reviews depending on it, is therefore to remove any way of passing credentials other than the above list, and more specifically to remove passing authentication dynamically to the provider, for the many reasons evoked. Also note that 3. would need to be added afterward, as this represents factorization work away from the providers. Regards, Gilles __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
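The lookup order above could be sketched as follows. The actual providers are written in Ruby; this Python sketch only illustrates the precedence, and the environment variable names (OS_AUTH_TOKEN, OS_USERNAME) and config paths are illustrative assumptions:

```python
import os
from pathlib import Path

def credential_source(conf_paths=("/etc/keystone/keystone.conf",)):
    """Return which of the three agreed sources would supply credentials:
    1. environment, 2. ~/.openrc, 3. a service configuration file."""
    if os.environ.get("OS_AUTH_TOKEN") or os.environ.get("OS_USERNAME"):
        return "environment"
    if (Path.home() / ".openrc").exists():
        return "rc-file"
    for path in conf_paths:
        if Path(path).exists():
            return "service-config"
    return None  # no credentials found anywhere

os.environ["OS_AUTH_TOKEN"] = "secret"  # the environment wins
print(credential_source())  # environment
```

The point of fixing the order is that a token set in the environment always overrides the RC file, which in turn overrides anything parsed out of a service configuration file.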
Re: [openstack-dev] [puppet] Re: Puppet-OpenStack API providers - Follow up
On Wed, May 6, 2015 at 6:26 PM, Gilles Dubreuil gil...@redhat.com wrote: It seems ~/.openrc is the only default [...] The extras module places it at '/root/openrc' [1], so either the extras module should be changed or the providers should look in /root/openrc, either way it should be consistent. Colleen [1] http://git.openstack.org/cgit/stackforge/puppet-openstack_extras/tree/manifests/auth_file.pp#n86 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat]: stack stays interminably under the status create in progress
BTW, you may want to post your questions on using Heat to openst...@lists.openstack.org and/or https://ask.openstack.org, instead of this mailing list. - QM On Tue, May 05, 2015 at 03:02:59PM +0200, ICHIBA Sara wrote: Hello there, I started a project where I need to deploy stacks and orchestrate them using heat (autoscaling and so on..). I just started playing with heat, and the creation of my first stack never completes. It stays in the status create in progress. My log files don't say much. For my template I'm using a very simple one to launch a small instance. Any ideas what the cause might be? In advance, thank you for your response. Sara __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea
Looking over some of the Puppet pacemaker stuff today. I appreciate all the hard work going into this effort, but I'm not quite happy about all of the conditionals we are adding to our puppet overcloud_controller.pp manifest. Specifically, it seems that every service will basically have its resources duplicated for the pacemaker and non-pacemaker versions of the controller by checking the $enable_pacemaker variable. After seeing it play out for a couple of services, I think I would prefer an entirely separate template for the pacemaker version of the controller. One easy way to kick off this effort would be to use the Heat resource registry to enable pacemaker rather than a parameter. Something like this: https://review.openstack.org/#/c/180833/ If we were to split the controller into two separate templates, I think it might be appropriate to move a few things into puppet-tripleo to de-duplicate a bit - things like the database creation, for example. But probably not all of the services... because we are trying as much as possible to use the stackforge puppet modules directly (and not our own composition layer). I think this split is a good compromise and would probably even speed up the implementation of the remaining pacemaker features. And removing all the pacemaker conditionals we have from the non-pacemaker version puts us back in a reasonably clean state, I think. Dan __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
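The resource-registry idea can be sketched as a Heat environment file. The resource type and template path below are illustrative assumptions, not the actual tripleo-heat-templates layout:

```yaml
# Hypothetical environment file: select the pacemaker flavour of the
# controller by mapping the resource type to a different template,
# instead of threading an enable_pacemaker parameter through a single
# overcloud_controller.pp-backed template.
resource_registry:
  OS::TripleO::ControllerConfig: puppet/controller-pacemaker.yaml
```

Selecting the non-pacemaker controller would then just mean pointing the same resource type at the plain template, with no conditionals inside either manifest.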
Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota
Learned a lot again! ++ for sub-error-codes. From: Everett Toews [mailto:everett.to...@rackspace.com] Sent: Thursday, May 7, 2015 6:26 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota On May 6, 2015, at 1:58 PM, David Kranz dkr...@redhat.com wrote: +1 The basic problem is we are trying to fit a square (generic api) peg in a round (HTTP request/response) hole. But if we do say we are recognizing sub-error-codes, it might be good to actually give them numbers somewhere in the response (maybe an error code header) rather than relying on string matching to determine the real error. String matching is fragile and has icky i18n implications. There is an effort underway around defining such sub-error-codes [1]. Those error codes would be surfaced in the REST API here [2]. Naturally, feedback is welcome. Everett [1] https://review.openstack.org/#/c/167793/ [2] https://review.openstack.org/#/c/167793/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings
On Wed, May 06, 2015 at 05:09:07PM +0300, Mikhail Dubov wrote: many thanks for noticing it, I didn't see it for some reason while looking at the iCal file / checking the wiki. No problem. Seeing conflicts is non-trivial with 90+ meetings in 4+ IRC channels. We will use another time then. As I said, that time is fine as long as you switch to #openstack-meeting-4 Yours Tony.
Re: [openstack-dev] [puppet] Re: Puppet-OpenStack API providers - Follow up
On 07/05/15 11:33, Colleen Murphy wrote: On Wed, May 6, 2015 at 6:26 PM, Gilles Dubreuil gil...@redhat.com wrote: It seems ~/.openrc is the only default [...] The extras module places it at '/root/openrc' [1], so either the extras module should be changed or the providers should look in /root/openrc, either way it should be consistent. Agreed. Let's use ~/openrc for now then. Colleen [1] http://git.openstack.org/cgit/stackforge/puppet-openstack_extras/tree/manifests/auth_file.pp#n86
Re: [openstack-dev] [all] cross project communication: periodic developer newsletter?
On 6/05/2015 23:34, James Bottomley wrote: On Wed, 2015-05-06 at 11:54 +0200, Thierry Carrez wrote: Hugh Blemings wrote: +2 I think asking LWN if they have the bandwidth and interest to do this would be ideal - they have credibility in the Free/Open Source space and a proven track record. Nice people too. On the bandwidth side, as a regular reader I was under the impression that they struggled with their load already, but I guess if it comes with funding that could be an option. On the interest side, my past tries to invite them to the OpenStack Summit so that they could cover it (the way they cover other conferences) were rejected, so I have doubts in that area as well. Anyone having a personal connection that we could leverage to pursue that option further? Sure, be glad to. I've added Jon to the cc list (if his openstack mail sorting scripts operate like mine, that will get his attention). I already had a preliminary discussion with him: lwn.net is interested but would need to hire an extra person to cover the added load. That makes it quite a big business investment for them. Excellent - I think Jon and Co. could bring a great deal to the table here and if that means finding a way to provide funding that would be effort well spent. Cheers, Hugh
Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?
Ok, sounds good to me. I'll switch #153236 to the built-in IPAM implementation by default, and pay additional attention to testing the pluggable IPAM routines in this case. - Pavel On 06.05.2015 16:50, John Belamaric wrote: I agree, we should amend it to not run pluggable IPAM as the default for now. When we decide to make it the default, the migration scripts will be needed. John On 5/5/15, 1:47 PM, Salvatore Orlando sorla...@nicira.com wrote: Patch #153236 introduces pluggable IPAM in the db base plugin class, and defaults to it at the same time, I believe. If the consensus is to default to the IPAM driver, then in order to satisfy grenade requirements those migration scripts should be run. There should actually be a single script to be run in a one-off fashion; even better if it is treated as a DB migration. However, the plan for Kilo was to not turn on pluggable IPAM by default. Now that we are targeting Liberty, we should have this discussion again, and not take it for granted that we should default to pluggable IPAM just because a few months ago we assumed it would be the default by Liberty. I suggest not enabling it by default, and then considering in L-3 whether we should make this switch. For the time being, would it be possible to amend patch #153236 to not run pluggable IPAM by default? I appreciate this would have some impact on unit tests as well, which should be run both for pluggable and traditional IPAM. Salvatore On 4 May 2015 at 20:11, Pavel Bondar pbon...@infoblox.com wrote: Hi, While fixing failures in db_base_plugin_v2.py with the new IPAM [1], I ran into check-grenade-dsvm-neutron failures [2]. check-grenade-dsvm-neutron installs stable/kilo, creates networks/subnets and upgrades to the patched master, so it validates that migrations pass and the installation works afterwards. This is where the failure occurs.
Earlier there was an agreement to use pluggable IPAM only for greenfield installations, so a migration script from built-in IPAM to pluggable IPAM was postponed. And check-grenade-dsvm-neutron validates the brownfield (upgrade) scenario. So do we want to update this agreement and implement migration scripts from built-in IPAM to pluggable IPAM now? Details about the failures: subnets created before the patch was applied do not have a corresponding IPAM subnet, so I observed a lot of failures like this in [2]: Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found Currently the config option in the patch is modified to use pluggable_ipam by default (to catch all possible UT/tempest failures). But before the merge the patch will be switched back to the non-IPAM implementation by default. I would prefer to implement the migration script as a separate review, since [1] is already quite big and hard to review. [1] https://review.openstack.org/#/c/153236 [2] http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz - Pavel Bondar
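For what it's worth, the shape of the one-off migration being discussed can be sketched in a few lines of Python. Everything below (the dict-based "models", field names, function name) is a placeholder for illustration; a real script would walk the actual Neutron DB models rather than in-memory dicts:

```python
# Rough sketch of the one-off migration the thread discusses: for every
# Neutron subnet that predates pluggable IPAM, create the corresponding
# IPAM subnet and record its already-allocated IPs, so lookups like the
# failing "Subnet ... could not be found" succeed after the upgrade.
# All structures here are placeholders, not the real Neutron/IPAM models.

def migrate_to_pluggable_ipam(subnets, allocations, ipam_subnets):
    """Backfill an IPAM subnet for each pre-existing Neutron subnet."""
    for subnet in subnets:
        if subnet["id"] in ipam_subnets:
            continue  # already migrated; keeps the script re-runnable
        ipam_subnets[subnet["id"]] = {
            "cidr": subnet["cidr"],
            "allocated": {ip for (sid, ip) in allocations if sid == subnet["id"]},
        }
    return ipam_subnets

# Toy data standing in for rows created by the stable/kilo deployment.
subnets = [{"id": "2c702e2a", "cidr": "10.0.0.0/24"}]
allocations = [("2c702e2a", "10.0.0.3")]

result = migrate_to_pluggable_ipam(subnets, allocations, {})
print(sorted(result["2c702e2a"]["allocated"]))
```

The early `continue` is the important design point for a grenade-style upgrade: the script must be safe to run again on an installation that was already partially migrated.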
Re: [openstack-dev] [Ceilometer][Gnocchi] How to try ceilometer with gnocchi ?
On Wed, May 06 2015, Luo Gangyi wrote: Hi Luo, I want to try using ceilometer with gnocchi, but I didn't find any docs about how to configure it. Everything should be documented at: http://docs.openstack.org/developer/gnocchi/ The devstack installation should be pretty straightforward: http://docs.openstack.org/developer/gnocchi/devstack.html (and don't forget to also enable Ceilometer) I have checked the master branch of ceilometer and didn't see how ceilometer interacts with gnocchi either (I think there should be something like a gnocchi-dispatcher?) The dispatcher is in the Gnocchi source tree (for now; we're moving it to Ceilometer for Liberty). -- Julien Danjou // Free Software hacker // http://julien.danjou.info
Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt
On Wed, May 06 2015, Steve Martinelli wrote: One key difference about this is that AFAIR ListOpt is delimited by commas? Whereas MultiStrOpt is specified multiple times. In the case of Keystone, we include LDAP values which often include commas. Also for longer values, it is easier to read MultiStrOpt instead of ListOpt Yes, that's the only difference I'm aware of. It should be easy enough to support both formats with one type. -- Julien Danjou // Free Software hacker // http://julien.danjou.info
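The comma problem Steve describes is easy to demonstrate without oslo.config at all. The sketch below simulates the two parsing styles in plain Python (this is not the oslo.config implementation; `user_tree_dn` is just a Keystone-flavoured example key):

```python
# Minimal simulation of the two option styles under discussion.
# ListOpt-style parsing splits one value on commas, which corrupts
# LDAP DNs; MultiStrOpt-style collects each occurrence of the key whole.

def parse_list_opt(value):
    """ListOpt style: a single value, comma-delimited."""
    return [v.strip() for v in value.split(",")]

def parse_multi_str_opt(lines, key):
    """MultiStrOpt style: the same key may appear on multiple lines."""
    return [v for (k, v) in lines if k == key]

dn = "cn=admin,dc=example,dc=org"

# ListOpt: the commas inside the DN are misread as delimiters,
# yielding three fragments instead of one DN.
print(parse_list_opt(dn))

# MultiStrOpt: each 'user_tree_dn = ...' line survives intact.
lines = [("user_tree_dn", dn), ("user_tree_dn", "cn=svc,dc=example,dc=org")]
print(parse_multi_str_opt(lines, "user_tree_dn"))
```

This is why unifying the two types is not purely cosmetic: any merged type needs either an escaping scheme or a configurable delimiter so that comma-bearing values like DNs keep working.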
Re: [openstack-dev] [Fuel] Transaction scheme
We don't need transactions, for example, in GET methods. It doesn't matter whether we want them or not. SQLAlchemy implicitly starts a transaction on the first select query and that's ok. I mean, perhaps it's not ok, but it definitely won't lead to great performance degradation. A large number of projects prove this. I propose to get rid of complex data flows in our code. Code with a 'commit' call inside the method should be split into independent units. Agree. We should get rid of non-obvious and unexpected commits. I like the solution with sending tasks to Astute at the end of handler execution. I don't know how much effort we should apply to implement this, but at first look it seems ok. I mean, we save the tasks to send in some queue and then send them if and only if the HTTP handler reports success. Currently we send them eagerly, but there are a few places where the HTTP handler may fail, report an error and perform a partial rollback (why partial? because we commit the task before sending it to Astute) and it looks weird. :( On Wed, May 6, 2015 at 1:22 PM, Alexander Kislitsky akislit...@mirantis.com wrote: I mean that we should have explicitly wrapped http handlers. For example: @transaction def PUT(...): ... We don't need transactions, for example, in GET methods. I propose to get rid of complex data flows in our code. Code with a 'commit' call inside the method should be split into independent units. I like the solution with sending tasks to Astute at the end of handler execution. On Wed, May 6, 2015 at 12:57 PM, Igor Kalnitsky ikalnit...@mirantis.com wrote: First of all I propose to wrap HTTP handlers by begin/commit/rollback I don't know what you are talking about, but we have wrapped handlers in a transaction for a long time. Here's the code https://github.com/stackforge/fuel-web/blob/2de3806128f398d192d7e31f4ca3af571afeb0b2/nailgun/nailgun/api/v1/handlers/base.py#L53-L84 The issue is that we sometimes perform `.commit()` inside the code (e.g.
`task.execute()`) and therefore it's hard to predict which data is committed and which is not. In order to avoid this, we have to declare strict scopes for the different layers. Yes, we definitely should build on the idea that handlers open a transaction at the beginning and close it at the end. But that won't solve all the problems, because sometimes we have to commit data before the handler's end. For instance, committing some task before sending a message to Astute. Such cases complicate things... and it would be cool if we could avoid them by refactoring our architecture. Perhaps we could send tasks to Astute when the handler is done? What do you think? Thanks, igor On Wed, May 6, 2015 at 12:15 PM, Lukasz Oles lo...@mirantis.com wrote: On Wed, May 6, 2015 at 10:51 AM, Alexander Kislitsky akislit...@mirantis.com wrote: Hi! The refactoring of transaction management in Nailgun is critically required for scaling. First of all I propose to wrap HTTP handlers with a begin/commit/rollback decorator. After that we should introduce a transaction-wrapping decorator into Task execute/message calls. And the last one is the wrapping of receiver calls. As a result we should have begin/commit/rollback calls only in the transactions decorator. Big +1 for this. I always wondered why we don't have it. Also I propose to separate working with DB objects into a separate layer and use only high-level Nailgun objects in the code and tests. This work was started a long time ago, but is not finished yet. On Thu, Apr 30, 2015 at 12:36 PM, Roman Prykhodchenko m...@romcheg.me wrote: Hi folks! Recently I faced the pretty sad fact that in Nailgun there’s no common approach to managing transactions. There are commits and flushes in random places of the code and it used to work somehow just because it was all synchronous.
However, after just a few of the subcomponents were moved to different processes, it all started producing races and deadlocks which are really hard to resolve, because there is absolutely no way to predict how a specific transaction is managed other than by analyzing the source code. That is a rather ineffective and error-prone approach that has to be fixed before it becomes uncontrollable. Let’s arrange a discussion to design a document which will describe where and how transactions are managed, and refactor Nailgun according to it in 7.0. Otherwise the results may be sad. - romcheg
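The pattern this thread converges on (wrap the handler in begin/commit/rollback, queue tasks during the handler, and hand them to the message bus only after a successful commit) can be sketched in a few lines. `FakeSession`, `transactional`, and the "outbox" below are illustrative stand-ins, not Nailgun or SQLAlchemy code:

```python
# Sketch of the begin/commit/rollback wrapper plus deferred Astute
# notification discussed above. The key property: tasks reach the
# outbox only after the DB commit succeeds, so a failed handler never
# leaks half-committed work to the message bus.

def transactional(session, outbox):
    def decorator(handler):
        def wrapper(*args, **kwargs):
            pending = []  # tasks queued by the handler during this request
            try:
                result = handler(pending, *args, **kwargs)
                session.commit()
            except Exception:
                session.rollback()
                raise
            # Notify "Astute" only once the DB state is durable.
            outbox.extend(pending)
            return result
        return wrapper
    return decorator

class FakeSession:
    """Stand-in for a SQLAlchemy session."""
    def __init__(self):
        self.state = "open"
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"

session, sent = FakeSession(), []

@transactional(session, sent)
def put_handler(pending):
    pending.append("deploy")  # queued, not sent, inside the transaction
    return "ok"

print(put_handler(), session.state, sent)
```

The decorator is the single place where commit/rollback happens, which is exactly the "begin/commit/rollback calls only in the transactions decorator" goal stated earlier in the thread.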
Re: [openstack-dev] [neutron][api] Extensions out, Micro-versions in
Thanks Bob. Two answers/comments below. On 6 May 2015 at 14:59, Bob Melander (bmelande) bmela...@cisco.com wrote: Hi Salvatore, Two questions/remarks below. From: Salvatore Orlando sorla...@nicira.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: onsdag 6 maj 2015 00:13 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: [openstack-dev] [neutron][api] Extensions out, Micro-versions in #5 Plugin/Vendor specific APIs Neutron is without doubt the project with the highest number of 3rd party (OSS and commercial) integration. After all it was mostly vendors who started this project. Vendors [4] use the extension mechanism to expose features in their products not covered by the Neutron API or to provide some sort of value-added service. The current proposal still allows 3rd parties to attach extensions to the neutron API, provided that: - they're not considered part of the Neutron API, in terms of versioning, documentation, and client support BOB There are today vendor specific commands in the Neutron CLI client. Such commands are prepended with the name of the vendor, like cisco_command and nec_command. I think that makes it quite visible to the user that the command is specific to a vendor feature and not part of neutron core. Would it be possible to allow for that also going forward? I would think that from a user perspective it can be convenient to be able to access vendor add-on features using a single CLI client. In a nutshell no, but maybe. Vendor extensions are not part of the Neutron API, but if the community decides to support them in the official client anyway, you will still be able to run vendor-specific CLI commands. Otherwise vendors will have to provide their own client tools, which is feasible as well. Personally, I would be against having vendor-specific CLI commands in python-neutronclient. 
To me it will be tantamount to saying: yes please do versioning, but don't take extensions away from us. However the developer, user, and operator community might have a different opinion, and as usual the decision will derive from community consensus. - they do not redefine resources defined by the Neutron API. BOB Does “redefine” here include extending a resource with additional attributes? In my opinion yes. But I do not have a very strong point here. Also, enforcing this will require many vendors to make backward-incompatible changes in their APIs, and therefore we would need a deprecation cycle. So let's say that, ideally, modifying the shape of a neutron resource by adding attributes should be considered a discouraged, but not forbidden, practice. For instance, if you want to attach a qos profile to a port, rather than adding a 'vendor_qos_profile' attribute to the port resource you might add a vendor_port_info resource with a reference to the vendor_qos_profile_id and the neutron port_id. - they do not live in the neutron source tree The aim of the provisions above is to minimize the impact of such extensions on API portability. Thanks for reading and thanks in advance for your feedback, Salvatore The title of this post has been inspired by [2] (the message in the banner may be unintelligible to readers not fluent in european football) [1] https://review.openstack.org/#/c/136760/ [2] http://a.espncdn.com/combiner/i/?img=/photo/2015/0502/fc-banner-jd-1296x729.jpgw=738site=espnfc [3] http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html [4] By vendor here we refer either to a cloud provider or a company providing Neutron integration for their products.
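Salvatore's suggested pattern (a separate vendor resource that references the core one, instead of grafting attributes onto it) can be illustrated with plain dicts. All resource and field names here are hypothetical, echoing the examples in the mail:

```python
# Two ways a vendor might expose a QoS feature; names are hypothetical.

# Discouraged: reshaping the core port resource with a vendor attribute.
# Any client that validates or round-trips the port now sees a
# non-portable field, which hurts API portability across clouds.
port_with_vendor_attr = {
    "id": "port-1",
    "name": "web",
    "vendor_qos_profile": "gold",
}

# Preferred: a separate vendor resource that points at the core port,
# leaving the shape of the Neutron 'port' resource untouched.
core_port = {"id": "port-1", "name": "web"}
vendor_port_info = {
    "port_id": core_port["id"],
    "vendor_qos_profile_id": "gold",
}

# The core resource keeps only core attributes.
print(sorted(core_port))
```

With the second style, a portable client can ignore `vendor_port_info` entirely and still handle every port it sees, which is the portability property the provisions above aim for.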
Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt
One key difference about this is that AFAIR ListOpt is delimited by commas? Whereas MultiStrOpt is specified multiple times. In the case of Keystone, we include LDAP values which often include commas. Also, for longer values, it is easier to read MultiStrOpt instead of ListOpt. Thanks, Steve Martinelli OpenStack Keystone Core Davanum Srinivas dava...@gmail.com wrote on 05/06/2015 10:06:38 AM: From: Davanum Srinivas dava...@gmail.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, ZhiQiang Fan aji.zq...@gmail.com Date: 05/06/2015 10:15 AM Subject: Re: [openstack-dev] [oslo.config] MultiStrOpt VS. ListOpt ZhiQiang, Please log a bug and we can try to do what jd suggested. -- dims On Wed, May 6, 2015 at 9:21 AM, Julien Danjou jul...@danjou.info wrote: On Wed, May 06 2015, ZhiQiang Fan wrote: I came across a problem that crudini cannot handle MultiStrOpt [1], and I don't know why such a type of configuration option is needed. It seems ListOpt is a better choice. Currently I find lots of MultiStrOpt options in both Nova and Ceilometer, and I think other projects have them too. Here are my questions: 1) How can I update such an option without manually rewriting the config file? (like the devstack scenario) 2) Is there any way to migrate MultiStrOpt to ListOpt? ListOpt will take the last specified value while MultiStrOpt takes all of them, so compatibility is a big problem. Any hints? I didn't check extensively, but this is something I hit regularly. It seems to me we have two types doing more or less the same thing and mapping to the same data structure (i.e. list). We should unify them.
-- Julien Danjou // Free Software hacker // http://julien.danjou.info -- Davanum Srinivas :: https://twitter.com/dims