[openstack-dev] [cinder] Etherpad for the Cinder midcycle meetup
The meetup will be in Austin, TX on January 27-29. You can find more information and post your topics on the etherpad: https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup

-- Mike Perez

__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers
I think what we discussed was that existing drivers were supposed to have something working by the end of k-2, or at least something close to working. New drivers had to have third-party CI working by the end of Kilo. Duncan, correct me if I am wrong.

Jay

On 01/10/2015 04:52 PM, Mike Perez wrote: [earlier thread quoted in full; snipped]
Re: [openstack-dev] [oslo] dropping namespace packages
Ihar, I agree that we should do something to enforce using the appropriate namespace so that the wrong usage doesn't sneak in. I haven't gotten any rules written yet; I've had to attend to a family commitment the last few days. I hope to tackle the namespace changes next week.

Jay

On 01/08/2015 12:24 PM, Ihar Hrachyshka wrote: On 01/08/2015 07:03 PM, Doug Hellmann wrote: I'm not sure that's something we need to enforce. Liaisons should be updating projects now as we release libraries, and then we'll consider whether we can drop the namespace packages when we plan the next cycle.

Without a hacking rule, there is a chance old namespace usage will sneak in, and then we'll need to get back to updating imports. I would rather avoid that and get the migration committed with enforcement. /Ihar
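For reference, a hacking-style check for the old namespace could look something like the sketch below. The function name, the N999 error code, and the package list are all hypothetical illustrations, not anything agreed in this thread; hacking/flake8 logical-line checks do take this general shape (a function over the logical line that yields `(offset, message)` pairs).

```python
import re

# Hypothetical pattern: catch imports of the deprecated oslo.* namespace
# packages. The concrete package list here is illustrative only.
OLD_NAMESPACE_RE = re.compile(r"^\s*(import|from)\s+oslo\.(config|messaging|db)\b")


def check_oslo_namespace_imports(logical_line):
    """N999: flag imports that use the deprecated oslo.* namespace packages.

    Yields (offset, message) tuples, as flake8/hacking checks do.
    """
    if OLD_NAMESPACE_RE.match(logical_line):
        yield (0, "N999: use oslo_* instead of the oslo.* namespace package")
```

A check like this would typically be registered as a project-local check so the gate rejects reintroductions of the old imports automatically.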
[openstack-dev] [Neutron][L3][Devstack] Bug during delete floating IPs?
Not sure if it's something seen by others. I hit this when I run tempest.scenario.test_network_basic_ops.TestNetworkBasicOps against master:

2015-01-10 17:45:13.227 5350 DEBUG neutron.plugins.ml2.plugin [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Deleting port e5deb014-0063-4d55-8ee3-5ba3524fee14 delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:995
2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Created new semaphore db-access internal_lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:206
2015-01-10 17:45:13.228 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Acquired semaphore db-access lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:229
2015-01-10 17:45:13.252 5350 DEBUG neutron.plugins.ml2.plugin [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] Calling delete_port for e5deb014-0063-4d55-8ee3-5ba3524fee14 owned by network:floatingip delete_port /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1043
2015-01-10 17:45:13.254 5350 DEBUG neutron.openstack.common.lockutils [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f ] Releasing semaphore db-access lock /opt/stack/new/neutron/neutron/openstack/common/lockutils.py:238
2015-01-10 17:45:13.282 5350 ERROR neutron.api.v2.resource [req-2ab4b380-cf3a-4663-90c3-a05ef5f4da0f None] delete failed
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/api/v2/base.py, line 479, in delete
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 198, in delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     self).delete_floatingip(context, id)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/db/l3_db.py, line 1237, in delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     router_id = self._delete_floatingip(context, id)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/db/l3_db.py, line 902, in _delete_floatingip
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     l3_port_check=False)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 1050, in delete_port
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     l3plugin.notify_routers_updated(context, router_ids)
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource   File /opt/stack/new/neutron/neutron/db/l3_db.py, line 1260, in notify_routers_updated
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource     context, list(router_ids), 'disassociate_floatingips', {})
2015-01-10 17:45:13.282 5350 TRACE neutron.api.v2.resource TypeError: 'NoneType' object is not iterable

It looks like the code assumes router_ids can never be None, which it clearly is here. Is that a bug? Elsewhere in l3_db.py, L3RpcNotifierMixin.notify_routers_updated() does check router_ids (which means that function does expect it to be empty sometimes), but the list() call blows up before the check is reached. This traceback repeats many, many times in the neutron logs. Thanks for your help.

-Sunil
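The failure mode in the traceback can be reproduced in isolation. A minimal sketch follows; the function names are stand-ins for the neutron methods, not the real code:

```python
# Minimal reproduction of the failure mode: list() applied to None raises
# TypeError, exactly as in the traceback above.

def notify_routers_updated(router_ids):
    # Mirrors the failing call: list() runs before any None check.
    return list(router_ids)


def notify_routers_updated_safe(router_ids):
    # Defensive variant: normalize None (or any falsy value) to an empty list
    # before materializing it.
    return list(router_ids or [])


try:
    notify_routers_updated(None)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable

print(notify_routers_updated_safe(None))  # []
```

The `router_ids or []` idiom keeps the call site a one-liner while letting any downstream emptiness check behave as intended.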
[openstack-dev] [all] Hacking 0.10 released
Hi all, I am happy to announce the release of hacking 0.10. Below is a list of what's new. Unlike most dependencies, hacking changes are not automatically pushed out by the OpenStack Proposal Bot. In order to migrate to the new release, each project will need a patch like this: https://review.openstack.org/#/c/145570/

- flake8 now uses multiprocessing by default!
- Remove H402: first line of docstring should end with punctuation
- Remove H904: Wrap long lines in parentheses and not backslash for line continuation
- Update H501 (don't use locals() for formatting strings) to also check for self.__dict__
- Add H105: don't use author tags
- Add H238: check for old style class declarations
- Remove all git commit message rules: H801, H802, H803
- Remove complex import rules: H302, H306, H307

Dependency changes:

- pep8 from 1.5.6 to 1.5.7 (https://pypi.python.org/pypi/pep8)
- flake8 from 2.1.0 to 2.2.4 (https://pypi.python.org/pypi/flake8)
- six from >=1.6.0 to >=1.7.0
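For illustration, here is the sort of code the two new checks target. These snippets are assumed examples based on the rule descriptions above, not taken from the hacking source:

```python
# H105 flags author tags like the following (shown commented out so this
# file itself stays clean):
# __author__ = "jane@example.com"

# H238 flags old-style class declarations. This matters under Python 2,
# where omitting the base class creates an old-style class:
class LegacyWidget:
    pass


# Preferred new-style declaration, explicit about inheriting from object:
class Widget(object):
    pass
```

Under Python 3 the two forms are equivalent (everything is new-style), so the rule mainly protects Python 2 code paths.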
Re: [openstack-dev] Vancouver Design Summit format changes
On 15:50 Fri 09 Jan , Thierry Carrez wrote:

Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the Design Summit format for Vancouver, changes on which we'd very much like to hear your feedback. The problems we are trying to solve are the following:

- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more work done

While some sessions benefit from large exposure, loads of feedback and large rooms, others are just workgroup-oriented work sessions that benefit from smaller rooms, less exposure and more whiteboards. Smaller rooms are also cheaper space-wise, so they allow us to scale more easily to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at the Design Summit. Ops feedback is in my opinion part of the design of OpenStack, so the Ops Summit would become a track within the forward-looking Design Summit. Tracks may use two separate types of sessions:

* Fishbowl sessions

Those sessions are for open discussions where a lot of participation and feedback is desirable. They would happen in large rooms (100 to 300 people, organized in fishbowl style with a projector), have catchy titles, and appear on the general Design Summit schedule. We would have space for 6 or 7 of those in parallel during the first 3 days of the Design Summit (we would not run them on Friday, to reproduce the successful Friday format we had in Paris).

* Working sessions

Those sessions are for a smaller group of contributors to get specific work done or prioritized. They would happen in smaller rooms (20 to 40 people, organized in boardroom style with loads of whiteboards), have a blanket title (like "infra team working session"), and redirect to an etherpad for more precise and current content, which should limit out-of-team participation. They would replace project pods. We would have space for 10 to 12 of those in parallel for the first 3 days, and 18 to 20 in parallel on the Friday (by reusing fishbowl rooms).

Each project track would request some mix of sessions ("We'd like 4 fishbowl sessions, 8 working sessions on Tue-Thu + half a day on Friday") and the TC would arbitrate how to allocate the limited resources. The agenda for the fishbowl sessions would need to be published in advance, but the agenda for the working sessions could be decided dynamically from an etherpad.

By making larger use of smaller spaces, we expect that setup to let us accommodate the needs of more projects. By merging the two separate Ops Summit and Design Summit events, it should make the Ops feedback an integral part of the Design process rather than a second-class citizen. By creating separate working session rooms, we hope to evolve the pod concept into something where it's easier for teams to get work done (less noise, more whiteboards, clearer agenda).

What do you think? Could that work? If not, do you have alternate suggestions?

Sounds good to me. Glad we're keeping the Friday format too!

-- Mike Perez
Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers
On 14:42 Fri 09 Jan , Ivan Kolodyazhny wrote:

Hi Erlon, We've got a mailing-list thread [1] for it and some details in the wiki [2]. Anyway, we need to get confirmation from our core devs and/or Mike.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html
[2] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Testing_requirements_for_Kilo_release_and_beyond

Regards, Ivan Kolodyazhny

On Fri, Jan 9, 2015 at 2:26 PM, Erlon Cruz sombra...@gmail.com wrote: Hi all, hi cinder core devs, I have read IRC discussions about a deadline for driver vendors to have their CI running and voting by kilo-2, but I didn't find any post on this list to confirm it. Can anyone confirm? Thanks, Erlon

We did discuss and agree in the Cinder meeting that the deadline would be k-2, but I don't think anyone reached out to the driver maintainers about the deadline. Duncan had this action item [1], perhaps he can speak more about it.

[1] http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html

-- Mike Perez
Re: [openstack-dev] [cinder] Etherpad for the Cinder midcycle meetup
Mike, Thanks for putting this out there again. A reminder: if you are planning to attend, please update the etherpad so that I can get an accurate attendance count and get all the security details set up. Thank you!

Jay

On 01/10/2015 04:00 PM, Mike Perez wrote: The meetup will be in Austin, TX on January 27-29. You can find more information and post your topics on the etherpad: https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
Re: [openstack-dev] [Neutron][L3][Devstack] Bug during delete floating IPs?
This trivial patch fixes the tracebacks:

$ cat disassociate_floating_ips.patch
--- neutron/db/l3_db.py.orig	2015-01-10 22:20:30.101506298 -0800
+++ neutron/db/l3_db.py	2015-01-10 22:24:18.111479818 -0800
@@ -1257,4 +1257,4 @@
     def notify_routers_updated(self, context, router_ids):
         super(L3_NAT_db_mixin, self).notify_routers_updated(
-            context, list(router_ids), 'disassociate_floatingips', {})
+            context, list(router_ids) if router_ids else None, 'disassociate_floatingips', {})

-Sunil

From: Sunil Kumar [su...@embrane.com]
Sent: Saturday, January 10, 2015 7:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][L3][Devstack] Bug during delete floating IPs?

[original message quoted in full above in this thread; snipped]
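A quick sanity check of the patch's intent can be sketched with a stand-in for the real method. The function name and the downstream falsy-check are assumptions based on the discussion in this thread, not the actual neutron code:

```python
def notify_routers_updated(router_ids):
    # Patched form: only materialize the list when router_ids is truthy,
    # passing None through so the downstream guard can handle it.
    ids = list(router_ids) if router_ids else None
    if not ids:
        # Stand-in for L3RpcNotifierMixin's existing check on router_ids:
        # nothing to notify when the set is empty or None.
        return []
    return sorted(ids)


print(notify_routers_updated(None))    # []
print(notify_routers_updated({"r1"}))  # ['r1']
```

With the original code, the first call would have raised TypeError before the guard ever ran; after the patch, None and an empty set both take the no-op path.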