Re: [openstack-dev] [qa] Proposals for Tempest core
+1 for both! From: Sean Dague [s...@dague.net] Sent: Friday, November 15, 2013 2:38 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [qa] Proposals for Tempest core It's post-summit time, so time to evaluate our current core group for Tempest. There are a few community members I'd like to nominate for Tempest core, as I've found their review feedback over the last few months to be invaluable. Tempest core folks, please +1 or -1 as you feel appropriate: Masayuki Igawa His review history is here - https://review.openstack.org/#/q/reviewer:masayuki.igawa%2540gmail.com+project:openstack/tempest,n,z Ken'ichi Ohmichi His review history is here - https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com+project:openstack/tempest,n,z They have both been actively engaged in the Tempest community, and have been actively contributing to both Tempest and OpenStack integrated projects, working hard both to enhance test coverage and to fix the issues found in the projects themselves. This has been hugely beneficial to OpenStack as a whole. At the same time, it's also time, I think, to remove Jay Pipes from tempest-core. Jay's not had much time for reviews of late, and it's important that membership in the core review team reflects actively reviewing code. With this change Tempest core would end up no longer being majority North American, or even majority English-as-first-language (that kind of excites me). Related to both changes, there will be another mailing list thread about changing our weekly meeting time to make it more friendly to our APAC contributors. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [stable/havana] gate broken
Hi, The gating for the stable version is broken when running the neutron gate. Locally this works, but the gate has a problem. All of the services are up and running correctly. There are some exceptions with the ceilometer service, but that is not related to the neutron gating. The error message is as follows: 2013-11-17 11:00:05.855 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_05_855 | 2013-11-17 11:00:05 2013-11-17 11:00:17.239 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_239 | Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information 2013-11-17 11:00:17.437 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_437 | Build step 'Execute shell' marked build as failure 2013-11-17 11:00:19.129 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_19_129 | [SCP] Connecting to static.openstack.org Thanks Gary
Re: [openstack-dev] [stable/havana] gate broken
On 11/17/2013 08:46 AM, Gary Kotton wrote: Hi, The gating for the stable version is broken when running the neutron gate. Locally this works, but the gate has a problem. All of the services are up and running correctly. There are some exceptions with the ceilometer service, but that is not related to the neutron gating. Already? Wow, that was quick. The error message is as follows: 2013-11-17 11:00:05.855 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_05_855 | 2013-11-17 11:00:05 2013-11-17 11:00:17.239 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_239 | Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information 2013-11-17 11:00:17.437 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_437 | Build step 'Execute shell' marked build as failure 2013-11-17 11:00:19.129 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_19_129 | [SCP] Connecting to static.openstack.org Those messages are normal and have to do with Jenkins slave communication. The error will have happened before that - but I have no idea what the issue is. Hopefully someone else will...
Re: [openstack-dev] [qa] Proposals for Tempest core
On 11/16/2013 11:26 PM, Matthew Treinish wrote: I think responding to this is worth breaking my vacation email embargo. Although I seem to remember a certain PTL whose vacation we waited for the last time we did core proposals... I think we need to put post-summit vacation week on ttx's release planning sheets. Sean Dague s...@dague.net wrote: It's post-summit time, so time to evaluate our current core group for Tempest. There are a few community members I'd like to nominate for Tempest core, as I've found their review feedback over the last few months to be invaluable. Tempest core folks, please +1 or -1 as you feel appropriate: Masayuki Igawa His review history is here - https://review.openstack.org/#/q/reviewer:masayuki.igawa%2540gmail.com+project:openstack/tempest,n,z +1 Ken'ichi Ohmichi His review history is here - https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com+project:openstack/tempest,n,z +1 They have both been actively engaged in the Tempest community, and have been actively contributing to both Tempest and OpenStack integrated projects, working hard both to enhance test coverage and to fix the issues found in the projects themselves. This has been hugely beneficial to OpenStack as a whole. At the same time, it's also time, I think, to remove Jay Pipes from tempest-core. Jay's not had much time for reviews of late, and it's important that membership in the core review team reflects actively reviewing code. +1, but sad to see you go Jay. Thanks for all the past effort. With this change Tempest core would end up no longer being majority North American, or even majority English-as-first-language (that kind of excites me). Related to both changes, there will be another mailing list thread about changing our weekly meeting time to make it more friendly to our APAC contributors.
-Sean
Re: [openstack-dev] [stable/havana] gate broken
On Sunday, November 17, 2013 7:46:39 AM, Gary Kotton wrote: Hi, The gating for the stable version is broken when running the neutron gate. [snip] I've seen this fail on at least two stable/havana patches in nova today, so I opened this bug: https://bugs.launchpad.net/openstack-ci/+bug/1252024 -- Thanks, Matt Riedemann
Re: [openstack-dev] [stable/havana] gate broken
Thanks. Looks like something fishy with tempest. When I just run neutron everything is well and fine. On 11/17/13 5:50 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On Sunday, November 17, 2013 7:46:39 AM, Gary Kotton wrote: Hi, The gating for the stable version is broken when running the neutron gate. [snip] I've seen this fail on at least two stable/havana patches in nova today, so I opened this bug: https://bugs.launchpad.net/openstack-ci/+bug/1252024 -- Thanks, Matt Riedemann
Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder
Boris Pavlovic bpavlo...@mirantis.com wrote on 15/11/2013 05:57:20 PM: How do you envision the life cycle of such a scheduler in terms of code repository, build, test, etc? As a first step we could just build it inside nova; when we finish and prove that this approach works well, we could split it out of nova into a separate project and integrate it with devstack, and so on... So, Cinder (as well as Neutron, and potentially others) would need to be hooked to Nova rpc? What kind of changes to provisioning APIs do you envision to 'feed' such a scheduler? At this moment nova.scheduler is already a separate service with an AMQP queue; what we need at this moment is to add one new RPC method to it that will update the state of some host. I was referring to external (REST) APIs. E.g., to specify affinity. Also, there are some interesting technical challenges (e.g., state management across a potentially large number of instances of memcached). 10-100k key-values is nothing for memcached. So what kind of instances? Instances of memcached. In an environment with multiple schedulers. I think you mentioned that if we have, say, 10 schedulers, we will also have 10 instances of memcached. Regards, Alex Best regards, Boris Pavlovic On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson glik...@il.ibm.com wrote: Hi Boris, This is a very interesting approach. How do you envision the life cycle of such a scheduler in terms of code repository, build, test, etc? What kind of changes to provisioning APIs do you envision to 'feed' such a scheduler? Any particular reason you didn't mention Neutron? Also, there are some interesting technical challenges (e.g., state management across a potentially large number of instances of memcached).
Thanks, Alex Boris Pavlovic bpavlo...@mirantis.com wrote on 10/11/2013 07:05:42 PM: From: Boris Pavlovic bpavlo...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 10/11/2013 07:07 PM Subject: Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder Jay, Hi Jay, yes, we were working on putting all the common stuff in oslo-scheduler (not only filters). As a result of this work we understood that this is the wrong approach, because it makes the resulting code very complex and unclear. And actually we didn't find a way to put all the common stuff inside oslo. Instead of trying to make life too complex we found a better approach: implement a scheduler-as-a-service that can scale (the current solution has some scale issues) and store all data from nova, cinder and probably other places. To implement such an approach we should change the current architecture a bit: 1) The scheduler should store all its data (not in nova.db or cinder.db) 2) The scheduler should always have its own snapshot of the world state, and sync it with other schedulers using something that is quite fast (e.g. memcached) 3) Merge the scheduler RPC methods from nova and cinder into one scheduler (this is possible if we store all data from cinder and nova in one scheduler). 4) Drop the cinder and nova tables that store host states (as we don't need them) We have already implemented a base start (a mechanism that stores a snapshot of the world state and syncs it between different schedulers): https://review.openstack.org/#/c/45867/ (it is still a bit WIP) Best regards, Boris Pavlovic --- Mirantis Inc.
On Sun, Nov 10, 2013 at 1:59 PM, Jay Lau jay.lau@gmail.com wrote: I noticed that there is already a bp in oslo tracking what I want to do: https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler Thanks, Jay 2013/11/9 Jay Lau jay.lau@gmail.com Greetings, Now in oslo, we already put some scheduler filters/weights logic there, and cinder is using the oslo scheduler filters/weights logic; it seems we want both nova and cinder to use this logic in the future. I found some problems, as follows: 1) In cinder, some filters/weights logic resides in cinder/openstack/common/scheduler and some filter/weight logic in cinder/scheduler; this is not consistent and will also confuse some cinder hackers: where shall I put the scheduler filter/weight? 2) Nova is not using filter/weight from oslo and also not using entry points to handle all filters/weights. 3) There are not enough filters in oslo; we may need to add more there, such as a same-host filter, different-host filter, retry filter etc. So my proposal is as follows: 1) Add more filters to oslo, such as a same-host filter, different-host filter, retry filter etc. 2) Move all filters/weights logic in cinder from cinder/scheduler to cinder/openstack/common/scheduler 3) Enable nova to use filter/weight logic from oslo (move all filter logic to nova/openstack/common/scheduler) and also use entry points to
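The world-state sync that Boris describes in this thread (each scheduler publishes host state to a shared fast store and rebuilds its local snapshot from it) can be sketched roughly as follows. This is a hedged illustration, not the code under review: a plain dict stands in for a real memcached client so the example is self-contained, and the key layout, field names, and the report_host_state/refresh_snapshot helpers are all assumptions.

```python
import json
import time

# Stand-in for memcached: key -> serialized value. A real deployment would
# use a memcached client (and a multi-get in refresh_snapshot).
store = {}

def report_host_state(host, state):
    """Roughly what the proposed new scheduler RPC method would do:
    publish one host's state so every scheduler instance can see it."""
    store["host/" + host] = json.dumps({"state": state, "ts": time.time()})

def refresh_snapshot(known_hosts):
    """Each scheduler rebuilds its local world-state snapshot by reading
    the shared store, instead of querying nova.db/cinder.db tables."""
    snapshot = {}
    for host in known_hosts:
        raw = store.get("host/" + host)
        if raw is not None:
            snapshot[host] = json.loads(raw)["state"]
    return snapshot

# Two hosts report in; a third is known but has not reported yet.
report_host_state("compute-1", {"free_ram_mb": 2048})
report_host_state("compute-2", {"free_ram_mb": 512})
snapshot = refresh_snapshot(["compute-1", "compute-2", "compute-3"])
```

The point of the design is that the snapshot is soft state: any scheduler can rebuild it from the shared store at any time, which is what makes running many scheduler instances feasible.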
Re: [openstack-dev] [stable/havana] gate broken
On Sun, Nov 17, 2013 at 7:53 AM, Gary Kotton gkot...@vmware.com wrote: Thanks. Looks like something fishy with tempest. When I just run neutron everything is well and fine. [snip] There was a recent change to tempest to add a new tox environment (smoke-serial). This tox environment is used by devstack-gate when running the neutron tests. As a result, the new tox env needs to be backported to tempest stable/havana and stable/grizzly. I have proposed those changes at https://review.openstack.org/#/c/56825/ and https://review.openstack.org/#/c/56826/ and have updated the bug. Once the first change merges the gate should move for stable/havana. Clark
Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API
On 11/15/2013 05:19 AM, Christopher Armstrong wrote: http://docs.heatautoscale.apiary.io/ I've thrown together a rough sketch of the proposed API for autoscaling. It's written in API-Blueprint format (which is a simple subset of Markdown) and provides schemas for inputs and outputs using JSON-Schema. The source document is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp Things we still need to figure out: - how to scope projects/domains. put them in the URL? get them from the token? - how webhooks are done (though this shouldn't affect the API too much; they're basically just opaque) Please read and comment :) Looking at the scaling policy I see change: { type: integer, description: a number that has an effect based on change_type }, change_type: { type: string, enum: [change_in_capacity, percentage_change_in_capacity, exact_capacity], description: describes the way that 'change' will apply to the active capacity of the scaling group } There could be an issue with percentage_change_in_capacity whenever that evaluates to needing to scale by between zero and one resources. I thought that maybe the percentage_change_in_capacity option should be dropped, but it might be enough to always round up any non-zero capacity change.
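The "round up any non-zero capacity change" suggestion can be made concrete with a small sketch. This is only an illustration of the proposed rounding rule, not Heat code: the apply_policy helper and its exact semantics are assumptions, with the change_type values taken from the draft schema quoted above.

```python
import math

def apply_policy(current, change, change_type):
    """Compute the new capacity of a scaling group for one policy firing.

    A non-zero percentage change always moves capacity by at least one
    resource, rounding away from zero.
    """
    if change_type == "exact_capacity":
        return change
    if change_type == "change_in_capacity":
        return current + change
    if change_type == "percentage_change_in_capacity":
        delta = current * change / 100.0
        if delta != 0:
            # e.g. +10% of 4 instances is 0.4 -> still add one instance
            delta = math.copysign(math.ceil(abs(delta)), delta)
        return current + int(delta)
    raise ValueError("unknown change_type: %s" % change_type)
```

With this rule a +10% policy on a group of 4 grows it to 5 rather than silently doing nothing, which addresses the zero-to-one-resource concern without dropping the option.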
Re: [openstack-dev] [stable/havana] gate broken
Clark, Thanks for tracking that down, just pushed those changes to the gate. On Sun, Nov 17, 2013 at 3:03 PM, Clark Boylan clark.boy...@gmail.com wrote: [snip] There was a recent change to tempest to add a new tox environment (smoke-serial). This tox environment is used by devstack-gate when running the neutron tests. As a result, the new tox env needs to be backported to tempest stable/havana and stable/grizzly. I have proposed those changes at https://review.openstack.org/#/c/56825/ and https://review.openstack.org/#/c/56826/ and have updated the bug. Once the first change merges the gate should move for stable/havana. Clark -- Sean Dague http://dague.net
Re: [openstack-dev] sqlalchemy-migrate needs a new release
Given that sqlalchemy-migrate is now in stackforge, it would be a really good idea to go down the path that wsme and pecan are going and run a devstack job to make sure they don't break the rest of OpenStack. It will definitely help with feeling confident about releases. On Fri, Nov 15, 2013 at 4:13 PM, Bhuvan Arumugam bhu...@apache.org wrote: On Fri, Nov 15, 2013 at 11:03 AM, Dan Prince dpri...@redhat.com wrote: - Original Message - From: David Ripton drip...@redhat.com To: openstack-dev@lists.openstack.org Sent: Friday, November 15, 2013 1:47:58 PM Subject: Re: [openstack-dev] sqlalchemy-migrate needs a new release On 11/15/2013 10:41 AM, David Ripton wrote: sqlalchemy-migrate-0.8.1 is now up on PyPI. Thanks fungi for kicking PyPI for me. So there was a hardcoded version number inside migrate/__init__.py. Correct. Here was my fix for Nova earlier today (for sqlalchemy-migrate 0.8.1... now bumped to 0.8.2 though): https://review.openstack.org/#/c/56667/ If you use 0.8.2 this shouldn't be required... but it does eliminate some code so I figure we should probably go ahead and do it. It broke the build, Dan. You may not want to change the requirements here, but in the requirements repo. -- Regards, Bhuvan Arumugam www.livecipher.com -- Sean Dague http://dague.net
Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API
On 11/15/2013 05:19 AM, Christopher Armstrong wrote: http://docs.heatautoscale.apiary.io/ I've thrown together a rough sketch of the proposed API for autoscaling. It's written in API-Blueprint format (which is a simple subset of Markdown) and provides schemas for inputs and outputs using JSON-Schema. The source document is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp Apologies if I'm about to re-litigate an old argument, but... At summit we discussed creating a new endpoint (and new pythonclient) for autoscaling. Instead I think the autoscaling API could just be added to the existing heat-api endpoint. Arguments for just making autoscaling part of the heat API include: * Significantly less development, packaging and deployment configuration by not creating a heat-autoscaling-api and python-autoscalingclient * Autoscaling is orchestration (for some definition of orchestration) so belongs in the orchestration service endpoint * The autoscaling API includes heat template snippets, so a heat service is a required dependency for deployers anyway * End-users are still free to use the autoscaling portion of the heat API without necessarily being aware of (or directly using) heat templates and stacks * It seems acceptable for single endpoints to manage many resources (e.g., the increasingly disparate list of resources available via the neutron API) Arguments for making a new autoscaling API include: * Autoscaling is not orchestration (for some narrower definition of orchestration) * The autoscaling implementation will be handled by something other than heat engine (I have assumed the opposite) (no doubt this list will be added to in this thread) What do you think?
Re: [openstack-dev] sqlalchemy-migrate needs a new release
On 11/17/2013 03:56 PM, Sean Dague wrote: Given that sqlalchemy-migrate is now in stackforge, it would be a really good idea to go down the path that wsme and pecan are going and run a devstack job to make sure they don't break the rest of OpenStack. It will definitely help with feeling confident about releases. Totally agree. [snip]
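On the hardcoded-version point raised in this thread: one way to avoid a version string baked into migrate/__init__.py is to derive it from the installed package's metadata, so a release bump happens in exactly one place. A minimal sketch, using the modern importlib.metadata purely for illustration; the package_version helper is an assumption, not what sqlalchemy-migrate actually does.

```python
from importlib import metadata

def package_version(name, default="0.0.0"):
    """Return the installed version of a distribution, or a fallback
    when the package is not installed (e.g. running from a checkout)."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return default

# A package's __init__ could then do something like:
# __version__ = package_version("sqlalchemy-migrate")
```

The fallback matters for the run-from-source case, where no distribution metadata exists yet.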
[openstack-dev] [nova] [api] How to handle bug 1249526?
This is mainly just a newbie question, but it looks like it could be an easy fix. The bug report is just asking for the nova os-fixed-ips API extension to return the 'reserved' status for the fixed IP. I don't see that in the v3 API list though; was that dropped in v3? If it's not being ported to v3 I'm sure there was a good reason, so maybe this isn't worth implementing in the v2 API, even though it seems like a pretty harmless backwards-compatible change. Am I missing something here? -- Thanks, Matt Riedemann
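For concreteness, the change being asked for amounts to one extra boolean in the extension's response. A hypothetical sketch of the extended payload; the fields other than 'reserved' are assumptions modeled on the existing os-fixed-ips response shape, not the actual nova API document.

```python
# Hypothetical os-fixed-ips GET response once the 'reserved' status is
# exposed. Everything except the new field is illustrative.
fixed_ip_response = {
    "fixed_ip": {
        "address": "192.168.1.5",
        "cidr": "192.168.1.0/24",
        "host": "compute-1",
        "hostname": "test-vm",
        "reserved": False,  # the new field requested in the bug report
    }
}
```

Since existing clients would simply ignore the extra key, this is why the change reads as backwards-compatible for v2.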
Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API
On Sun, Nov 17, 2013 at 2:57 PM, Steve Baker sba...@redhat.com wrote: On 11/15/2013 05:19 AM, Christopher Armstrong wrote: http://docs.heatautoscale.apiary.io/ I've thrown together a rough sketch of the proposed API for autoscaling. It's written in API-Blueprint format (which is a simple subset of Markdown) and provides schemas for inputs and outputs using JSON-Schema. The source document is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp Apologies if I'm about to re-litigate an old argument, but... At summit we discussed creating a new endpoint (and new pythonclient) for autoscaling. Instead I think the autoscaling API could just be added to the existing heat-api endpoint. Arguments for just making auto scaling part of heat api include: * Significantly less development, packaging and deployment configuration of not creating a heat-autoscaling-api and python-autoscalingclient * Autoscaling is orchestration (for some definition of orchestration) so belongs in the orchestration service endpoint * The autoscaling API includes heat template snippets, so a heat service is a required dependency for deployers anyway * End-users are still free to use the autoscaling portion of the heat API without necessarily being aware of (or directly using) heat templates and stacks * It seems acceptable for single endpoints to manage many resources (eg, the increasingly disparate list of resources available via the neutron API) Arguments for making a new auto scaling api include: * Autoscaling is not orchestration (for some narrower definition of orchestration) * Autoscaling implementation will be handled by something other than heat engine (I have assumed the opposite) (no doubt this list will be added to in this thread) What do you think? I would be fine with this. Putting the API at the same endpoint as Heat's API can be done whether we decide to document the API as a separate thing or not. 
Would you prefer to see it as literally just more features added to the Heat API, or an autoscaling API that just happens to live at the same endpoint? -- IRC: radix Christopher Armstrong Rackspace ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [api] How to handle bug 1249526?
Hi Matt, On Mon, Nov 18, 2013 at 8:35 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: This is mainly just a newbie question but looks like it could be an easy fix. The bug report is just asking for the nova os-fixed-ips API extension to return the 'reserved' status for the fixed IP. I don't see that in the v3 API list though, was that dropped in V3? If it's not being ported to V3 I'm sure there was a good reason so maybe this isn't worth implementing in the V2 API, even though it seems like a pretty harmless backwards compatible change. Am I missing something here? It's not ported to the V3 API because we only support neutron in the V3 API and fixed ip related queries/settings can be made directly to the neutron API. I think adding the reserved status for the fixed IP would be ok. It would have to be handled in the usual way for extending the V2 API though - e.g. adding another extension so it can be detected when that feature will be available. More broadly speaking, we discussed at summit perhaps closing V2 API development at the end of I-2 (assuming V3 is looking good by that stage) - except for bug fixes of course. Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
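For concreteness, here is a small sketch (hypothetical payloads; the exact v2 os-fixed-ips response fields may differ) of why the change is backwards compatible: adding a 'reserved' key to the response leaves old clients untouched, and new clients can default safely when the extension is absent.

```python
# Hypothetical payloads illustrating bug 1249526: adding a 'reserved'
# field to the os-fixed-ips response. Field names are illustrative.

current_response = {
    "fixed_ip": {
        "address": "192.168.1.1",
        "cidr": "192.168.1.0/24",
        "host": "host",
        "hostname": "openstack",
    }
}

proposed_response = {
    "fixed_ip": {
        "address": "192.168.1.1",
        "cidr": "192.168.1.0/24",
        "host": "host",
        "hostname": "openstack",
        "reserved": True,  # proposed new field
    }
}

def is_reserved(payload):
    # Old clients simply ignore the extra key; new clients default safely
    # when talking to a deployment without the extension.
    return payload["fixed_ip"].get("reserved", False)
```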
Re: [openstack-dev] [Neutron][LBaaS] Loadbalancer instance design.
At Fri, 15 Nov 2013 17:14:47 +0400, Eugene Nikanorov wrote: Hi folks, I've created a brief description of this feature. You can find it here: https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance https://blueprints.launchpad.net/neutron/+spec/lbaas-service-instance I would appreciate any comments/ideas about this. How do you plan to handle API compatibility? I think that was a major part of the discussion at the design summit. -- IWAMOTO Toshihiro ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] New API requirements, review of GCE
On Sat, Nov 16, 2013 at 6:01 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Nov 15, 2013 at 9:01 AM, Mark McLoughlin mar...@redhat.com wrote: On Fri, 2013-11-15 at 11:28 -0500, Russell Bryant wrote: Greetings, We've talked a lot about requirements for new compute drivers [1]. I think the same sort of standards should be applied for a new third-party API, such as the GCE API [2]. Before we can consider taking on a new API, it should have full test suite coverage. Ideally this would be extensions to tempest. It should also be set up to run on every Nova patch via CI. Beyond that point, now is a good time to re-consider how we want to support new third-party APIs. Just because EC2 is in the tree doesn't mean that has to be how we support them going forward. Should new APIs go into their own repositories? I used to be against this idea. However, as Nova has grown, the importance of finding the right spots to split is even higher. My objection was primarily based on assuming we'd have to make the Python APIs stable. I still do not think we should make them stable, but I don't think that's a huge issue, since it should be mitigated by running CI so the API maintainers quickly get notified when updates are necessary. Taking on a whole new API seems like an even bigger deal than accepting a new compute driver, so it's an important question. If we went this route, I would encourage new third-party APIs to build themselves up in a stackforge repo. Once it's far enough along, we could then evaluate officially bringing it in as an official sub-project of the OpenStack Compute program. I do think there should be a high bar for new APIs. More than just CI, but that there is a viable group of contributors around the API who are involved in OpenStack more generally than just maintaining the API in question. I don't at all like the idea of drivers or APIs living in separate repos and building on unstable Nova APIs.
Anything which we accept is a part of OpenStack should not get randomly made unusable by one contributor while other contributors constantly have to scramble to catch up. Either stuff winds up being broken too often or we stifle progress in Nova because we're afraid to make breaking changes. the ceilometer plugin for nova hit this, and had to be scrapped. It hooked into nova-compute and at one point made nova-compute hang there for minutes at a time. I agree, that hooking into our underlying python APIs is a bad idea and a recipe for disaster. But at the same time I do like having things live in a separate repo, at the very least until they are mature enough to be pulled into mainline. But if we do go with the separate repo solution, what are the issues with proxying third party APIs on top of OpenStack REST APIs? Using the REST APIs would mean we have a stable contract for these third party APIs to consume, and we also get more feedback about fixing our own API at the same time. I'd love to hear what the barriers are to doing this. As you say it would have a lot of advantages if its feasible. One other issue with the proposed GCE changes is that it uses the custom wsgi which we are trying to phase out eventually. Should we be suggesting that new APIs use Pecan/WSME? Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
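To make the proxying suggestion concrete, here is a minimal sketch of the idea: the third-party API layer translates incoming requests into equivalent OpenStack REST calls instead of importing Nova's internal (unstable) Python APIs. The GCE-style paths and the Nova route mapping below are purely illustrative, not the real routes of either API.

```python
# Illustrative sketch of "proxy a third-party API on top of the REST API":
# translate an incoming GCE-style (method, path) into a Nova v2-style
# request. All paths and mappings here are invented for illustration.

def translate_gce_request(method, path):
    """Map a GCE-style (method, path) onto a Nova v2-style request."""
    parts = path.strip("/").split("/")
    # e.g. GET /<project>/zones/<zone>/instances -> GET /v2/<tenant>/servers/detail
    if method == "GET" and parts[-1] == "instances":
        return ("GET", "/v2/%s/servers/detail" % parts[0])
    # e.g. DELETE .../instances/<name> -> DELETE /v2/<tenant>/servers/<id>
    if method == "DELETE" and parts[-2] == "instances":
        return ("DELETE", "/v2/%s/servers/%s" % (parts[0], parts[-1]))
    raise NotImplementedError(path)
```

The appeal of this shape is exactly the stable-contract argument: the proxy only ever depends on documented REST routes, so internal Nova refactoring cannot break it.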
Re: [openstack-dev] [Nova] New API requirements, review of GCE
On Sat, Nov 16, 2013 at 4:02 AM, Mark McLoughlin mar...@redhat.com wrote: If so, do we apply the same standards to the EC2 API? How much time do we give EC2 to get up to this standard before we rip it out? I hadn't understood the EC2 API to be in such a woeful state. Are we saying the implementation is so bad it's not at all useful for users? Or that a lack of testing means we see a far higher rate of regressions than in e.g. the OpenStack API? Or just that we don't see much progress on it? Of the 18 testcases we have in tempest for the ec2 api, 5 of them are currently skipped due to bugs. And from what I can remember there wasn't a whole lot of movement in Havana. They're a good place to start for someone wanting to improve the ec2 api quality, as in the past fixing the bugs has resulted in the tests still needing to be skipped because fixing one has uncovered more bugs :-) Chris ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Loadbalancer instance design.
Hi, 2. Loadbalancer can be used to bind configuration to a provider, device, agent (host), router What's the plan about this? Is an extension for each (e.g. add router_id to a loadbalancer resource) necessary? Thanks. Itsuro Oda On Fri, 15 Nov 2013 17:14:47 +0400 Eugene Nikanorov enikano...@mirantis.com wrote: Hi folks, I've created a brief description of this feature. You can find it here: https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance https://blueprints.launchpad.net/neutron/+spec/lbaas-service-instance I would appreciate any comments/ideas about this. Thanks, Eugene. -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Mistral] Agenda for IRC community meeting - 11/18/2013
Hi, Here’s the agenda for today’s IRC community meeting on #openstack-meeting at 16.00 UTC: Review last week's action items Discuss PoC scope Discuss Roadmap Discuss Blueprints Discuss and update https://etherpad.openstack.org/p/MistralDesignAndDependencies Open discussion You can also find it at https://wiki.openstack.org/wiki/Meetings/MistralAgenda as well as the links to the logs and minutes for the previous meetings. Thanks! Renat Akhmerov @ Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Loadbalancer instance design.
How do you plan to handle API compatibility? The new API is not compatible, and I think there was a consensus that such a change is needed and the incompatibility is justified. Is an extension for each (e.g. add router_id to a loadbalancer resource) necessary? Basically, yes, there should be an extension for each kind of binding, with the exception that binding to providers is a part of the lbaas API. Thanks, Eugene. On Mon, Nov 18, 2013 at 7:26 AM, Itsuro ODA o...@valinux.co.jp wrote: Hi, 2. Loadbalancer can be used to bind configuration to a provider, device, agent (host), router What's the plan about this? Is an extension for each (e.g. add router_id to a loadbalancer resource) necessary? Thanks. Itsuro Oda On Fri, 15 Nov 2013 17:14:47 +0400 Eugene Nikanorov enikano...@mirantis.com wrote: Hi folks, I've created a brief description of this feature. You can find it here: https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance https://blueprints.launchpad.net/neutron/+spec/lbaas-service-instance I would appreciate any comments/ideas about this. Thanks, Eugene. -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
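As a rough illustration of the per-binding extension approach Eugene describes (attribute and resource names invented here, not the actual Neutron schema), each binding such as router_id would arrive as an extension attribute layered onto the core loadbalancer resource, while the provider binding stays in the core lbaas API:

```python
# Sketch: core loadbalancer resource plus extension-supplied attributes.
# CORE_ATTRS and the attribute names are hypothetical.

CORE_ATTRS = {"id", "name", "tenant_id", "provider"}

def apply_extension(resource, extension_attrs):
    """Merge extension attributes without clobbering core ones."""
    clash = CORE_ATTRS & set(extension_attrs)
    if clash:
        raise ValueError("extension redefines core attrs: %s" % clash)
    merged = dict(resource)
    merged.update(extension_attrs)
    return merged

lb = {"id": "lb-1", "name": "web", "tenant_id": "t1", "provider": "haproxy"}
# hypothetical router-binding extension adding router_id:
lb = apply_extension(lb, {"router_id": "r-42"})
```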
Re: [openstack-dev] [Neutron] Plugin and Driver Inclusion Requirements
For third party testing, I am afraid these tests will make the patch merge process much longer, since each patch, even one which has nothing to do with the specific plugins, will trigger unwanted third party testing jobs. On Fri, Nov 15, 2013 at 4:20 AM, Mark McClain mark.mccl...@dreamhost.com wrote: tl;dr - The Neutron team has experienced tremendous growth in vendor plugins and drivers over the last few cycles. As a result of the growth, the Neutron team is implementing new requirements for plugin and driver code for the Icehouse cycle to ensure continued code quality and stability. - Each third party plugin/driver shall designate a point of contact for the coordinated release cycle. - To be designated as compatible, a third-party plugin and/or driver code must implement external third party integration testing. - Policy is in effect immediately for new plugin/drivers. - Existing plugin/drivers have until Icehouse-2 to become compliant. Introduction --- Ensuring release quality through proper testing is an important tenet of the OpenStack community, and the Neutron team wants to do our part. The changes we are introducing below provide more visibility into the quality and stability of vendor plugin and driver code. The policies described here are in effect immediately. Rationale Code proposals for third party plugins have always presented a review challenge for the Neutron core team. In the early days, code was often proposed by core project contributors and our review process only validated whether the requirements were met for community coding style and unit testing. As Neutron has added new resources via extensions, it has become more difficult for Neutron reviewers to ensure the proposed code is functional. Many of the plugins and/or drivers require proprietary hardware and/or software to conduct such testing. In addition to testing changes, the Neutron team is revising the requirements for the point of contact for third party code.
The changes bring the written expectations for contacts in line with current practice. Point of Contact Requirements - Each third party plugin and/or driver shall designate a point of contact for each coordinated release cycle. The contact will serve as a liaison between the Neutron core team and the vendor or community supporting the plugin or driver. The contact shall: - Attend weekly Neutron team IRC meetings - Be an active reviewer and contributor - Be an active participant on openstack-dev mailing list - Assist the core team with triaging bugs specific to the plugin and/or driver - Ensure OpenStack development deadlines are properly communicated back to their company and/or community NOTE: This information can be maintained here: https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers Testing Requirements - To be designated as compatible, a third-party plugin and/or driver code must implement external third party testing. The testing should be Tempest executed against a Devstack build with the proposed code changes. The environment managed by the vendor should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at: http://ci.openstack.org/third_party.html and Tempest can be found at: https://github.com/openstack/tempest The Neutron team expects that the third party testing will provide a +/-1 verify vote for all changes to a plugin or driver’s code. In addition, the Neutron team expects that the third party test will also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes and the Neutron team hopes to catch any possible regressions as early as possible. Existing Plugin and Drivers - Plugins and drivers currently in the Neutron project repository will be given a grace period until the Icehouse-2 milestone to implement external third party testing.
At that time, the Neutron team will release a list of the compatible plugins and drivers (i.e. those that meet the testing requirements). Plugins and drivers that do not have external testing will be deprecated at the Icehouse release and will be candidates for removal when the J-release cycle opens. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][Tempest] Tempest API test for Neutron LBaaS
Hi folks, I'm working on a major change to the Neutron LBaaS API; it will obviously break the existing tempest API tests for LBaaS. What would be the right process to deal with this? I guess I can't just push fixed tests to tempest because they will not pass against existing neutron code, and vice versa. Thanks, Eugene. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer
Hi stackers, During the summit session Expose hardware sensor (IPMI) data https://etherpad.openstack.org/p/icehouse-summit-ceilometer-hardware-sensors, it was proposed to deploy a ceilometer agent next to the ironic conductor to get the IPMI data. Here I'd like to ask some questions to figure out what the current missing pieces in ironic and ceilometer are for that proposal. 1. Just to double check, ironic won't provide an API to get IPMI data, right? 2. If deploying a ceilometer agent next to the ironic conductor, how does the agent talk to the conductor? Through rpc? 3. Does the current ironic conductor have an rpc_method to support getting generic ipmi data, i.e. letting the rpc_method caller specify an arbitrary netfn/command to get any type of ipmi data? 4. I believe the ironic conductor uses some kind of node_id to associate the bmc with its credentials, right? If so, how can the ceilometer agent get those node_ids to ask the ironic conductor to poll the ipmi data? And how can the ceilometer agent extract meaningful information from that node_id to set those fields in the ceilometer Sample (e.g. resource_id, project_id, user_id, etc.) to identify which physical node the ipmi data is coming from? Best Regards, -Lianhao Lu ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
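As a rough sketch of what a ceilometer agent colocated with the conductor might do once it obtains raw sensor output (assuming the pipe-separated column format of `ipmitool sensor`; the sample fields below mirror the ones listed in question 4, with the node UUID standing in for resource_id):

```python
# Sketch: turn raw `ipmitool sensor`-style output into ceilometer-style
# sample dicts. The metric naming scheme and sample shape are invented
# for illustration, not ceilometer's actual meter definitions.

def parse_ipmi_sensors(output, node_uuid):
    samples = []
    for line in output.strip().splitlines():
        cols = [c.strip() for c in line.split("|")]
        name, value, unit = cols[0], cols[1], cols[2]
        if value in ("na", ""):
            continue  # sensor not readable on this node
        samples.append({
            "name": "hardware.ipmi.%s" % name.lower().replace(" ", "_"),
            "volume": float(value),
            "unit": unit,
            "resource_id": node_uuid,  # which physical node this came from
        })
    return samples

# Canned output in the pipe-separated style of `ipmitool sensor`:
RAW = """CPU Temp         | 42.000     | degrees C  | ok
Fan1             | 5400.000   | RPM        | ok
PS2 Status       | na         | discrete   | na"""
```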
[openstack-dev] [nova] Blueprint - Disable child cell support
Hello folks, I proposed a blueprint https://blueprints.launchpad.net/nova/+spec/disable-child-cell-support but have not received any feedback yet. Anyone interested in this, please have a look; any feedback would be appreciated! Thanks, Yingjun ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder
Khanh-Toan, There is a need for a scheduler that can schedule a group of resources as a whole, which is difficult to realize due to the separation of the Nova and Cinder schedulers. Thus I'm in favor of a dedicated scheduler component. What we need is to have one scheduler that is able to store all data effectively. However, before talking about API or implementation, wouldn't it be better to see if the nova/cinder scheduler is independent enough to be separated from the core, in particular the data that they require to make a proper scheduling decision? It is reasonable to look again at the current architecture of Nova and Cinder to see which relations nova-scheduler and cinder-scheduler have with the rest of the nova/cinder components, which data they take from the nova/cinder DB, and whether these data can be separated from Nova/Cinder. Before starting implementation of the new scheduler we made this investigation. Actually the schedulers in nova and cinder are almost the same. And they are pretty well separated from the core of the projects: 1) They are separate services 2) Other services (e.g. compute api) already call the scheduler through rpc 3) Scheduler services call other services (e.g. compute manager) through rpc 4) The code base for the scheduler service and other services are different (common parts are already mostly in oslo) The only thing that is hard-bound to the scheduler is the project DB. But after switching to separated storage (e.g. memcached) the scheduler won't depend on the project db. Is it really OK to drop these tables? Could Nova work without them (e.g. rollback)? And what if Ceilometer is about to ask nova for host state metrics? Yes, it is OK, because now ceilometer and other projects could ask the scheduler about host state. (I don't see any problems) Alex, So, Cinder (as well as Neutron, and potentially others) would need to be hooked to Nova rpc? As a first step, to prove the approach, yes, but I hope that eventually we won't have a nova or cinder scheduler at all.
We will have just one scheduler that works well. I was referring to external (REST) APIs. E.g., to specify affinity. Yes, this should be moved to the scheduler API as well. Instances of memcached. In an environment with multiple schedulers. I think you mentioned that if we have, say, 10 schedulers, we will also have 10 instances of memcached. Actually we are going to make an implementation based on sqlalchemy as well. In the case of memcached I was just describing one possible architecture, where you could run a memcached instance on each server with a scheduler service. But it is not required; you could even have just one memcached instance for all schedulers (though that is not HA). Best regards, Boris Pavlovic --- Mirantis Inc. On Sun, Nov 17, 2013 at 9:27 PM, Alex Glikson glik...@il.ibm.com wrote: Boris Pavlovic bpavlo...@mirantis.com wrote on 15/11/2013 05:57:20 PM: How do you envision the life cycle of such a scheduler in terms of code repository, build, test, etc? As a first step we could just make it inside nova; when we finish and prove that this approach works well we could split it out of nova into a separate project and integrate it with devstack and so on... So, Cinder (as well as Neutron, and potentially others) would need to be hooked to Nova rpc? What kind of changes to provisioning APIs do you envision to 'feed' such a scheduler? At this moment nova.scheduler is already a separate service with an amqp queue; what we need at this moment is to add 1 new rpc method to it that will update the state of some host. I was referring to external (REST) APIs. E.g., to specify affinity. Also, there are some interesting technical challenges (e.g., state management across potentially large number of instances of memcached). 10-100k key-values is nothing for memcached. So what kind of instances? Instances of memcached. In an environment with multiple schedulers. I think you mentioned that if we have, say, 10 schedulers, we will also have 10 instances of memcached.
Regards, Alex Best regards, Boris Pavlovic On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson glik...@il.ibm.com wrote: Hi Boris, This is a very interesting approach. How do you envision the life cycle of such a scheduler in terms of code repository, build, test, etc? What kind of changes to provisioning APIs do you envision to 'feed' such a scheduler? Any particular reason you didn't mention Neutron? Also, there are some interesting technical challenges (e.g., state management across potentially large number of instances of memcached). Thanks, Alex Boris Pavlovic bpavlo...@mirantis.com wrote on 10/11/2013 07:05:42 PM: From: Boris Pavlovic bpavlo...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Date: 10/11/2013 07:07 PM Subject: Re: [openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo schduler/filters for nova and cinder Jay, Hi Jay, yes
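The "host state in memcached" idea under discussion can be sketched as a small key-value facade: schedulers read and write host state through it, and the backend can be one shared memcached, one instance per scheduler host, or (as here, purely for illustration) a plain dict. The key layout and state fields are invented for this sketch.

```python
# Sketch of scheduler host-state storage decoupled from the project DB.
# `client` is anything exposing set(key, value) / get(key), e.g. a
# python-memcached client; DictClient is an in-process stand-in.

import json
import time

class HostStateStore:
    def __init__(self, client):
        self.client = client  # set()/get() key-value backend

    def update_host(self, host, free_ram_mb, free_disk_gb):
        # The rpc method Boris mentions would end up calling this.
        state = {"free_ram_mb": free_ram_mb,
                 "free_disk_gb": free_disk_gb,
                 "updated_at": time.time()}
        self.client.set("host_state/%s" % host, json.dumps(state))

    def get_host(self, host):
        raw = self.client.get("host_state/%s" % host)
        return json.loads(raw) if raw else None

class DictClient:
    """Stand-in for a memcached client, for this sketch only."""
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)
```

Because the facade only assumes set/get, the question of one-memcached-per-scheduler versus one shared instance becomes a deployment choice rather than a code change.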
Re: [openstack-dev] Split of the openstack-dev list (summary so far)
A couple of quick points. 1) I think that splitting the list is the wrong approach. 2) Perhaps we need to look at adding a mechanism that enforces the use of tags in the subject line (send a nice "sorry, but you need to indicate the topic(s) you are mailing about" error back if it doesn't exist; keep an active list of these via infra?). 3) It might also make sense to have all stackforge projects include [stackforge] in the topic. That will help make filtering easier. Finally, I notice the difference in a threaded client from a flat client. I don't think I could subscribe to this list without a threaded client. TL;DR Don't split the community, work to improve the tools for those who are overwhelmed. (Email clients, enforcing use of subject tags, etc) On Sat, Nov 16, 2013 at 8:01 AM, Nick Chase nch...@mirantis.com wrote: I am one of those horizontal people (working on docs and basically one of the people responsible at my organization for keeping a handle on what's going on) and I'm totally against a split. Of COURSE we need to maintain the integrated/incubated/proposed spectrum. Saying that we need to keep all traffic on one list isn't suggesting we do away with that. But it IS a spectrum, and we should maintain that. Splitting the list is definitely splitting the community and I agree that it's a poison pill. Integrating new projects into the community is just as important as integrating them into the codebase. Without one the other won't happen nearly as effectively, and we do lose one of the strengths of the community as a whole. Part of this is psychology. Many of us are familiar with broken windows theory[1] in terms of code. For those of you who aren't, the idea is based on an experiment where they left an expensive car in a crime-ridden neighborhood and nothing happened to it -- until they broke a window. In coding it means you're less likely to kludge a patch to pristine code, but once you do you are more likely to do it again.
Projects work hard to do things the OpenStack way because they feel from the start that they are already part of OpenStack, even if they aren't integrated. It also leads to another side effect, which I'll leave to you to decide whether it's good or bad. We do have a culture of there can be only one. Once a project is proposed in a space, that's it (mostly). We typically don't have multiple projects in that space. That's bad because it reduces innovation through competition, but it's good because we get focused development from the finite number of developers we have available. As I said, YMMV. Look, Monty is right: a good threaded client solves a multitude of problems. Definitely try that for a week before you set your mind on a decision. TL; DR Splitting the list is splitting the community, and that will lead to a decline in overall quality. [1] http://en.wikipedia.org/wiki/Broken_windows_theory ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
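The tag-enforcement idea from point 2 above is a small check in practice: bounce a post whose subject carries no recognized [tag]. The tag list and the check itself are a sketch; a real hook would live in the list server (e.g. a Mailman handler) with the tag list maintained by infra.

```python
# Sketch: accept a post only if its subject contains at least one
# recognized [tag]. KNOWN_TAGS is an invented subset for illustration.

import re

KNOWN_TAGS = {"nova", "neutron", "qa", "heat", "stackforge"}

def check_subject(subject):
    """Return True if the subject carries at least one known [tag]."""
    tags = re.findall(r"\[([^\]]+)\]", subject)
    return any(t.strip().lower() in KNOWN_TAGS for t in tags)
```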
Re: [openstack-dev] Openstack + OpenContrail
On Nov 16, 2013, at 9:43 AM, Dean Troyer dtro...@gmail.com wrote: On Sat, Nov 16, 2013 at 11:15 AM, Harshad Nakil hna...@contrailsystems.com wrote: Sean, We have diffs in three repositories: Nova, Neutron and devstack. Each review is requiring the other to happen first. Do you have a recommendation on how we deal with these dependencies? First off, if you rebase on to a current DevStack you will find a new plugin mechanism specifically designed to address this sort of problem. Dean, Does it address the catch-22 problem where a Neutron reviewer asks for the plugin to be accepted upstream into devstack as a pre-condition for acceptance, while a devstack reviewer asks for the plugin to be upstreamed into Neutron first? You've already worked out the plugin bits for Neutron, the parts for stack.sh are similar, located in extras.d. http://devstack.org/plugins.html describes how it works. Also, DevStack does not install support services that are not packaged in the underlying distro. Look at Docker's split between the support service(s) that start before stack.sh runs and the parts that specifically configure Nova. Can you please elaborate as to what support services are in this context? The patches to Neutron and Nova should be handled by setting the *_BRANCH and *_REPO variables to point to your repo and branch. DevStack will check them out for you when it installs the project source. That would mean forking Neutron and Nova; the code in question is a plugin. It actually should not need to be a diff other than for the fact that the Nova network plugins have so far been done via if ... else statements in nova/virt/libvirt/vif.py. The patches that are being applied by the script have been submitted for review. Using a patch approach is helpful in attempting to demonstrate that the resulting code works against the master branch of Nova/Neutron. You should be able to re-arrange things to support this architecture.
Also, expect to break the remaining DevStack changes into digestible bits. As it stands, your branch is unmergable, even if it was based on a semi-current commit. Can you please suggest a sequence of steps... ? I understand that it makes sense to follow the plugin documentation specified above. That would be the first step. But it is not clear to me how to break it down further while having something that is still functional. FWIW, the opencontrail.org website appears to be off the air making it harder to understand what it is you are trying to integrate here. It must have been a transient error. The web site is working at the moment. OpenContrail is a network virtualization solution that provides a service model comparable to AWS VPC without the need for L2 support for the underlying switching infrastructure. It implements distributed router functionality so that traffic that crosses different Neutron networks doesn't have to traverse a router appliance (virtual or otherwise). Thank you, Pedro. dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable/havana] gate broken
Thanks for dealing with this! On 11/17/13 10:45 PM, Sean Dague s...@dague.net wrote: Clark, Thanks for tracking that down, just pushed those changes to the gate. On Sun, Nov 17, 2013 at 3:03 PM, Clark Boylan clark.boy...@gmail.com wrote: On Sun, Nov 17, 2013 at 7:53 AM, Gary Kotton gkot...@vmware.com wrote: Thanks. Looks like something fishy with tempest. When I just run neutron everything is well and fine. On 11/17/13 5:50 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On Sunday, November 17, 2013 7:46:39 AM, Gary Kotton wrote: Hi, The gating for the stable version is broken when running the neutron gate. Locally this works but the gate has a problem. All of the services are up and running correctly. There are some exceptions with the ceilometer service but that is not related to the neutron gating. The error message is as follows: 2013-11-17 11:00:05.855 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_05_855 | 2013-11-17 11:00:05 2013-11-17 11:00:17.239 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_239 | Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information 2013-11-17 11:00:17.437 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_437 | Build step 'Execute shell' marked build as failure 2013-11-17 11:00:19.129 http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_19_129 | [SCP] Connecting to static.openstack.org Thanks Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I've seen this fail on at least two stable/havana patches in nova today, so I opened this bug: https://bugs.launchpad.net/openstack-ci/+bug/1252024 -- Thanks, Matt Riedemann ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev There was a recent change to tempest to add a new tox environment (smoke-serial). This tox environment is used by devstack-gate when running the neutron tests. As a result, the new tox env needs to be backported to tempest stable/havana and stable/grizzly. I have proposed those changes at https://review.openstack.org/#/c/56825/ and https://review.openstack.org/#/c/56826/