Re: [openstack-dev] [stable] Exception proposals for 2014.2.1
2014-12-02 17:15 GMT+01:00 Jay S. Bryant jsbry...@electronicjungle.net:
>> Cinder https://review.openstack.org/137537 - small change and limited to the VMWare driver
>
> +1 I think this is fine to make an exception for.
>
> One more Cinder exception proposal was added in the StableJuno etherpad:
> * https://review.openstack.org/#/c/138526/ (This is currently the master version, but I will be proposing it to stable/juno as soon as it is approved in master.)
> The Brocade FC SAN lookup facility is currently broken and this revert is necessary to get it working again.

Jay, what's the status there? I see the master change failed in the gate.

Cheers,
Alan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable] Exception proposals for 2014.2.1
> Horizon standing-after-freeze translation update, coming on Dec 3

This is now posted: https://review.openstack.org/138798

David, Matthias, I'd appreciate it if one of you could have a quick look before approving.

Cheers,
Alan
Re: [openstack-dev] [stable] Exception proposals for 2014.2.1
> https://review.openstack.org/136294 - default SNAT, see review for details, I cannot distil a one-liner :)

-1: I would rather fix the doc to match the behavior than change the behavior to match the doc and lose people who were relying on it. Consensus is not to merge this and to keep the current behavior.

> https://review.openstack.org/136275 - self-contained to the vendor code, extensively tested in several deployments

+0: feels a bit large for a last-minute exception. Kyle, Ihar, I'd like to see a +2 from Neutron stable-maint before approving the exception.

Cheers,
Alan
Re: [openstack-dev] [stable] New config options, no default change
> What is the text that should be included in the commit messages to make sure that it is picked up for release notes?

I'm not sure anyone tracks commit messages to create release notes. Let's use the existing DocImpact tag; I'll add a check for this in the release scripts. But I'd prefer if you could directly include the proposed text in the draft release notes (link below). A better way to handle this is to create a draft, post it in the review comments, and copy it to the release notes draft right before/after pushing the patch into the gate.

> Forgive me, I think my question is more basic, then. Where are the release notes for a stable branch located to make such changes?

https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.1#Known_Issues_and_Limitations

Cheers,
Alan
Re: [openstack-dev] [stable] Re: [neutron] the hostname regex pattern fix also changed behaviour :(
> With the change, will existing instances work as before?

Yes, this cuts straight to the heart of the matter: what's the purpose of these validation checks? Specifically, if someone is using an invalid hostname that passed the previous check but doesn't pass an improved/updated check, should we continue to allow it?

...snip...

I suggest they also consider the DoS-fix-backport and the Kilo-and-forwards cases separately. If we don't have a solution for stable/juno yet, I need someone to propose a release note for 2014.2.1 (which is already frozen) at https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.1#Known_Issues_and_Limitations

Thanks,
Alan
[openstack-dev] [stable] Exception proposals for 2014.2.1
Hi all,

here are the exception proposals I have collected while preparing for the 2014.2.1 release; stable-maint members, please have a look!

General:
* cap Oslo and client library versions - sync from openstack/requirements stable/juno, would be good to include in the release.
  https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z

Ceilometer (all proposed by the Ceilometer PTL):
* https://review.openstack.org/138315
* https://review.openstack.org/138317
* https://review.openstack.org/138320
* https://review.openstack.org/138321
* https://review.openstack.org/138322

Cinder:
* https://review.openstack.org/137537 - small change and limited to the VMWare driver

Glance:
* https://review.openstack.org/137704 - glance_store is backward compatible, but not sure about forcing a version bump on stable
* https://review.openstack.org/137862 - disable osprofiler by default to prevent upgrade issues; it is disabled by default in other services

Horizon:
* standing-after-freeze translation update, coming on Dec 3
* https://review.openstack.org/138018 - visible issue, no translation string changes
* https://review.openstack.org/138313 - low-risk patch for a highly problematic issue

Neutron:
* https://review.openstack.org/136294 - default SNAT, see review for details, I cannot distil a one-liner :)
* https://review.openstack.org/136275 - self-contained to the vendor code, extensively tested in several deployments

Nova:
* https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z - soaked more than a week in master, makes NUMA actually work in Juno

Sahara:
* https://review.openstack.org/135549 - fix for auto security groups; there were some concerns, see review for details
Re: [openstack-dev] [stable] New config options, no default change
> Will the bugs created by this end up in the openstack-manuals project (which I don't think is the right place for them in this case), or has it been set up to create them somewhere else (or not at all) when the commits are against the stable branches?

Docs team, how do you handle DocImpact on stable/* branches? Do you mind (mis)using your tag like this? I thought DocImpact only triggered a notification for the docs team; if there's more machinery behind it, like automatic bug filing, then we should use a different tag as the stable relnotes marker.

Cheers,
Alan
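For illustration, this is roughly where a DocImpact flag sits in a commit message footer; the change description and review context below are hypothetical, not taken from any real patch:

```
Add a frobnicate_timeout config option

The new option defaults to the previous hard-coded value,
so behaviour is unchanged on upgrade.

DocImpact
Closes-Bug: #XXXXXXX
```

Gerrit's hooks scan the commit message for the bare DocImpact keyword, which is what makes per-branch handling awkward: the tag itself carries no branch information.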
Re: [openstack-dev] [stable] Exception proposals for 2014.2.1
> General: cap Oslo and client library versions - sync from openstack/requirements stable/juno, would be good to include in the release. https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z
>
> +2, let's keep all deps in sync. Those updates do not break anything for existing users.

Just spotted it: there is now a proposal to revert the caps in Juno: https://review.openstack.org/138546

Doug, shall we stop merging caps to projects in Juno?

Cheers,
Alan
Re: [openstack-dev] [OpenStack-docs] [stable] New config options, no default change
> In this case we're referring to how we handle DocImpact for specific branches of a project (say stable/juno of Nova, for example). I don't think we'd want to change the DocImpact handling for the entire project to go somewhere other than openstack-manuals. As far as I know the current setup doesn't support us changing the handling per branch though, only per project.
>
> Yep, I do agree. And honestly we don't have the resources to cover stable branch docs in addition to the ongoing work.
>
> Is there another way to tag your work so that you remember to put it in release notes?

We just started discussing this; it is not used yet, and we'll pick some other tag. While at it, is there an OpenStack-wide registry for tags in commit messages? For now I've been simply collecting stable releasenote-worthy changes manually in the etherpad https://etherpad.openstack.org/p/StableJuno

Cheers,
Alan
Re: [openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core
+1
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
> We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be.
>
> We still support 2.6 for the python clients and oslo libraries - but indeed not for trove itself on master.

What Andreas said. Also, testtools claims that "testtools gives you the very latest in unit testing technology in a way that will work with Python 2.6, 2.7, 3.1 and 3.2", so it should be fixed, OpenStack or not.

Cheers,
Alan
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
2014-11-15 23:06 GMT+01:00 Robert Collins robe...@robertcollins.net:
> We did find a further issue, which was due to the use of setUpClass in tempest (a thing that testtools has never supported per se - it's always been a happy accident that it worked). I've hopefully fixed that in 1.3.0 and we're babysitting tempest now to see.

Trove stable/juno py26 unit tests (py27 works) are failing with testtools 1.3.0:
http://logs.openstack.org/periodic-stable/periodic-trove-python26-juno/fcf4db2/testr_results.html.gz

  ...
  File "/home/jenkins/workspace/periodic-trove-python26-juno/trove/tests/unittests/mgmt/test_models.py", line 60, in setUpClass
    super(MockMgmtInstanceTest, cls).setUpClass()
  AttributeError: 'super' object has no attribute 'setUpClass'

The pip freeze diff since the last good report is:

  -testtools==1.1.0
  +testtools==1.3.0
  +unittest2==0.8.0

Any ideas?

Cheers,
Alan
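For anyone puzzling over that traceback: on Python 2.6 the stock unittest.TestCase has no setUpClass at all, so delegating up the class hierarchy raises exactly this AttributeError. A minimal self-contained sketch of the failure mode (the LegacyTestCase base here is a stand-in, not Trove's real base class):

```python
class LegacyTestCase(object):
    """Stands in for a TestCase base class that predates setUpClass,
    as unittest.TestCase did on Python 2.6."""


class MockMgmtInstanceTest(LegacyTestCase):
    @classmethod
    def setUpClass(cls):
        # No ancestor defines setUpClass, so this delegation fails.
        super(MockMgmtInstanceTest, cls).setUpClass()


try:
    MockMgmtInstanceTest.setUpClass()
except AttributeError as exc:
    # e.g. 'super' object has no attribute 'setUpClass'
    print(exc)
```

This is also why unittest2 (which backports setUpClass to 2.6) shows up in the pip freeze diff: whether the test passes depends on which base class ends up in the MRO.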
[openstack-dev] [stable] openstack-stable-maint list has been made read-only
2014-11-11 11:01 GMT+01:00 Alan Pevec ape...@gmail.com:
> ... All stable maintenance related discussion should happen on openstack-dev with the [stable] tag in the subject.

The openstack-stable-maint list is now configured to discard posts from non-members and to reject all posts from members with the following message:

  "openstack-stable-maint list has been made read-only, explicit Reply-To: header is set to openstack-dev@lists.openstack.org in list options. All stable branch maintenance related discussion should happen on openstack-dev list with [stable] tag in the subject."

Currently the only address allowed to post is jenk...@openstack.org, for periodic job failures. Distros are encouraged to apply with the email address from which they will post their 3rd-party CI results on stable branches.

Cheers,
Alan
[openstack-dev] [stable] Re: [Openstack-stable-maint] Stable check of openstack/glance failed
2014-11-11 7:26 GMT+01:00 jenk...@openstack.org:
> - periodic-glance-python26-juno http://logs.openstack.org/periodic-stable/periodic-glance-python26-juno/d0ea683 : FAILURE in 21m 09s
> - periodic-glance-python27-juno http://logs.openstack.org/periodic-stable/periodic-glance-python27-juno/ec60681 : FAILURE in 17m 01s

Glance Juno fails after the glance-store 0.1.9 release. I see glance_store has only a master branch; what's the plan for stable/juno: will there be glance_store stable/* branches, or should it keep backward compatibility?

Cheers,
Alan
[openstack-dev] [stable] Fwd: New config options, no default change
Hi Dave,

I'm forwarding your message from openstack-stable-maint; that list has been made read-only, as decided at the release management meetup on Friday, Nov 7 2014 - see under "* Stable branches" in https://etherpad.openstack.org/p/kilo-infrastructure-summit-topics

I have set an explicit Reply-To: header to openstack-dev@lists.openstack.org in the list options, but it doesn't seem to work; I'll investigate that. All stable maintenance related discussion should happen on openstack-dev with the [stable] tag in the subject.

Cheers,
Alan

---------- Forwarded message ----------
From: Dave Walker em...@daviey.com
To: openstack-stable-maint openstack-stable-ma...@lists.openstack.org
Date: Mon, 10 Nov 2014 21:52:23 +
Subject: New config options, no default change

Hi,

Looking at a stable/juno cinder proposed change [0], I came across one that introduces a new config option. The default is a no-op change for the behaviour, so no bad surprises on upgrade. These sorts of changes feel like they are outside the 'no config changes' rule, but we have not really discussed this. What do others think?

Thanks

[0] https://review.openstack.org/#/c/131987/

--
Kind Regards,
Dave Walker
Re: [openstack-dev] [stable] Re: [Openstack-stable-maint] Stable check of openstack/glance failed
> https://bugs.launchpad.net/glance/+bug/1391437 and marked it as Critical.

OK, so the current issue is a bug on both master and stable/juno, but I'd still like glance_store backward compatibility to be clarified by the Glance team.

Cheers,
Alan
Re: [openstack-dev] [stable] New config options, no default change
> New config options may not change behavior (if the default value preserves behavior), but they still make documentation more incomplete (docs, books, and/or blog posts about Juno won't mention that option). That's why we definitely need such changes described clearly in the stable release notes.

I also lean toward accepting this as an exception for stable/juno; I'll request relnote text in the review.

Cheers,
Alan
[openstack-dev] [cinder] [barbican] Stable check of openstack/cinder failed
Hi,

Cinder Juno tests are failing after the new barbicanclient release:

- periodic-cinder-python26-juno http://logs.openstack.org/periodic-stable/periodic-cinder-python26-juno/d660c21 : FAILURE in 11m 37s
- periodic-cinder-python27-juno http://logs.openstack.org/periodic-stable/periodic-cinder-python27-juno/d9bf4cb : FAILURE in 9m 04s

I've filed https://bugs.launchpad.net/cinder/+bug/1388461
AFAICT this affects master too.

Cheers,
Alan
Re: [openstack-dev] Cross distribution talks on Friday
> %install
> export OSLO_PACKAGE_VERSION=%{version}
> %{__python} setup.py install -O1 --skip-build --root %{buildroot}
>
> Then everything should be ok and PBR will become your friend.

Still not my friend, because I don't want a _build_ tool as a runtime dependency :) e.g. you don't ship make(1) to run C programs, do you? For runtime, only the pbr.version part is required, but unfortunately oslo.version was abandoned.

Cheers,
Alan
Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/glance failed
2014-10-15 11:24 GMT+02:00 Ihar Hrachyshka ihrac...@redhat.com:
> I've reported a bug for the failure [1], marked it as Critical and nominated it for Juno and Icehouse. I guess that's all we need to do to draw the attention of glance developers to the failure, right?

Thanks Ihar. We have a few Glance developers on stable-maint, but since this is not just a stable issue AFAICT, I'm adding openstack-dev for the wider audience.

Cheers,
Alan

[1] https://bugs.launchpad.net/glance/+bug/1381419

> On 15/10/14 08:28, jenk...@openstack.org wrote:
>> Build failed.
>> - periodic-glance-docs-icehouse http://logs.openstack.org/periodic-stable/periodic-glance-docs-icehouse/16541e4 : SUCCESS in 1m 46s
>> - periodic-glance-python26-icehouse http://logs.openstack.org/periodic-stable/periodic-glance-python26-icehouse/7c14d20 : FAILURE in 19m 40s
>> - periodic-glance-python27-icehouse http://logs.openstack.org/periodic-stable/periodic-glance-python27-icehouse/880455f : SUCCESS in 15m 39s
>> ___
>> Openstack-stable-maint mailing list
>> openstack-stable-ma...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?
> The original idea was that these stable branches would be maintained by the distros, and that is clearly not happening if you look at the code review

Stable branches are maintained by the _upstream_ stable-maint team [1], where most members might be from (two) distros, but please note that all PTLs are also included and there are members who are not from a distro. But you're right: if this stays mostly a one-distro effort, we'll pull out and do it on our own. /me looks at other non-named distros

> latency there. We need to sort that out before we even consider supporting a release for more than the one year we currently do.

Please consider that stable branches are also needed for security fixes, and we, as a responsible upstream project, need to provide them with or without distros. A stable branch was the master branch just a few months ago and it inherited all the bugs present there, so everybody fixing a gate bug on master should consider backporting to stable at the same time. It can't be a stable-maint-only responsibility; e.g. stable-maint doesn't have +2 in devstack stable/* or in tempest (now branchless, so master) branches.

Cheers,
Alan

[1] https://review.openstack.org/#/admin/groups/120,members
Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?
> I'm on retry #7 of modifying the tox.ini file in devstack.

Which review # is that, so I can have a look?

Cheers,
Alan
Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)
2014-09-17 17:15 GMT+02:00 Thomas Goirand z...@debian.org:
>   File "bla/tests/test_ssl.py", line 19, in <module>
>     from requests.packages.urllib3 import poolmanager
> ImportError: No module named packages.urllib3

This is in tests only; in runtime code there is a conditional import of the vendorized urllib3, falling back to the system library. So what about https://review.openstack.org/122379 ?

Cheers,
Alan
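The conditional import pattern mentioned above usually looks something like this sketch; the exact fallback chain in glanceclient may differ, so treat this as an assumption rather than a quote of its code:

```python
# Prefer the urllib3 copy vendored inside requests; fall back to the
# system-wide urllib3 on distros that de-vendorize requests (as
# Debian does), so runtime code works either way.
try:
    from requests.packages.urllib3 import poolmanager  # vendored copy
except ImportError:
    try:
        from urllib3 import poolmanager  # system library
    except ImportError:
        poolmanager = None  # neither is installed in this environment
```

Runtime code then uses poolmanager without caring which copy it got, while a test that imports the vendored path directly (as in the traceback above) breaks on de-vendorized distros.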
[openstack-dev] [all] Stable Havana 2013.2.4 preparation
Hi all,

as planned [1], the stable-maint team is going to prepare the final stable Havana release, 2013.2.4, now that Juno-3 has been released. The proposed release date is Sep 18, with the code freeze on stable/havana branches a week before, on Sep 11.

Stable-maint members: please review the open backports [2], taking into account that this is the final release. Potentially risky changes should be reviewed very closely, since there won't be another release to fix possible regressions.

Cheers,
Alan

[1] https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fhavana_releases
    design summit notes: https://etherpad.openstack.org/p/StableIcehouse
[2] https://review.openstack.org/#/q/status:open+AND+branch:stable/havana+AND+(project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer),n,z
Re: [openstack-dev] [Openstack-stable-maint] [Neutron][stable] How to backport database schema fixes
> It seems that currently it's hard to backport any database schema fix to Neutron [1], which uses alembic to manage the db schema version. Nova had the same issue before, and a workaround is to put some placeholder files before each release. So first, do we allow db schema fixes to be backported to stable for Neutron?

DB schema backports were a topic at the StableBranch session at the last design summit [*] and the policy did not change: not allowed in general, but exceptions can always be discussed on the stable-maint list.

> If we do, then how about putting some placeholder files, similar to Nova, at the end of each release cycle? Or do we have some better solution for alembic?

AFAIK you can't have placeholders in alembic; there was an action item from the design session for Mark to summarize his best practices for db backports. Mark, do you have that published somewhere?

> From the stable maintainer side, we have a policy for stable backports https://wiki.openstack.org/wiki/StableBranch - "DB schema changes" are forbidden. If we allow db schema backports for more than one project, I think we need to update the wiki.

Again, the policy stays, but we can use this thread as an exception request for [1].

My thoughts: adding an index on (agent_type, host) is safe for backports as it doesn't affect code, but we need to do it properly, e.g. it must not break Icehouse-Juno upgrades, and we need clear instructions on how to apply it in the stable release notes, e.g. [2] for a similar case in Keystone Havana. Also it would be good to describe the impact and the "why" part in the commit message and/or the bug 1350326 description; IIUC that would be "prevent race condition in L2 plugin"?

Cheers,
Alan

[1] https://review.openstack.org/#/c/110642/
[*] https://etherpad.openstack.org/p/StableIcehouse
[2] https://wiki.openstack.org/wiki/ReleaseNotes/2013.2.2#Keystone
Re: [openstack-dev] [Openstack-stable-maint] [all][oslo] official recommendations to handle oslo-incubator sync requests
> 2. For stable branches, the process is a bit different. For those branches, we don't generally want to introduce changes that are not related to specific issues in a project. So in case of backports, we tend to do per-patch consideration when synchronizing from incubator.

I'd call this "cherry-sync": format-patch a commit from oslo stable, update the file and import paths, and apply it on the project's stable branch. That could be an oslo-incubator RFE: a command option for update.py, --cherry-pick COMMIT.

Cheers,
Alan
Re: [openstack-dev] stable/havana jobs failing due to keystone bug 1357652
2014-08-17 18:30 GMT+02:00 Nathan Kinder nkin...@redhat.com:
> This requirement change was backported for stable/icehouse: https://review.openstack.org/#/c/112337/
> It seems like the right thing to do is to propose a similar change for stable/havana instead of reverting the keystoneclient change.

We had a similar issue last month; it's due to the fact that we test master clients against stable releases of the services. The change is appropriate for stable because it changes test requirements, not runtime deps of the project itself: https://review.openstack.org/106974

Cheers,
Alan
Re: [openstack-dev] [Openstack-stable-maint] Preparing for 2014.1.2 -- branches freeze Aug 7
> ... above relates only to freeze exceptions for critical issues which must be raised and discussed on openstack-stable-maint release. Hopefully there won't be any.

openstack-stable-maint _list_
Re: [openstack-dev] keystone
> After restarting keystone with the following command, $ service openstack-keystone restart, it is giving the message "Aborting wait for keystone to start." Could you please help with what the problem could be?

This is not an appropriate topic for the development mailing list; please open a question on ask.openstack.org with relevant information like the operating system and package versions, logfiles, etc.

Cheers,
Alan
Re: [openstack-dev] Help a poor Nova Grizzy Backport Bug Fix
Hi Michael,

2014-02-25 5:07 GMT+01:00 Michael Davies mich...@the-davies.net:
> I have a Nova Grizzly backport bug [1] in review [2] that has been hanging around for 4 months waiting for one more +2 from a stable team person.

Thanks for the backport! BTW, the stable team list is openstack-stable-maint (CCed).

> If there's someone kind enough to bump this through, it'd be appreciated ;)

Grizzly branches are supposed to receive only security and life-support patches (those keeping them working when dependencies change). Currently Grizzly Nova needs https://review.openstack.org/76020 to support the latest Boto.

> [1] https://launchpad.net/bugs/1188543

That bug is Low - does that correctly reflect its priority? Backports are generally Medium.

> [2] https://review.openstack.org/#/c/54460/

I've sent it to the check queue; it should fail due to the Boto issue above.

Cheers,
Alan
Re: [openstack-dev] Help a poor Nova Grizzy Backport Bug Fix
>> [2] https://review.openstack.org/#/c/54460/
>
> I've sent it to the check queue; it should fail due to the Boto issue above.

It also failed due to the pyOpenSSL 0.14 update pulling in a new dep which fails to build in Grizzly devstack; this should be fixed by https://review.openstack.org/76189

Cheers,
Alan
Re: [openstack-dev] [All] Fixed recent gate issues
2014-02-23 10:52 GMT+01:00 Gary Kotton gkot...@vmware.com:
> It looks like this does not solve the issue.

Yeah, https://review.openstack.org/74451 doesn't solve the issue completely. We have SKIP_EXERCISES=boot_from_volume,bundle,client-env,euca,swift,client-args but the failure is now in Grenade's Javelin script:

  + swift upload javelin /etc/hosts
  ...(same Traceback)...
  [ERROR] /opt/stack/new/grenade/setup-javelin:151 Swift upload failed

> I wonder if we need the same change for stable/havana.

The devstack-gate master branch handles all projects' branches; the patch above was inside "if stable/grizzly".

Cheers,
Alan
Re: [openstack-dev] [All] Fixed recent gate issues
> https://review.openstack.org/74451 doesn't solve the issue completely, we have SKIP_EXERCISES=boot_from_volume,bundle,client-env,euca,swift,client-args but the failure is now in Grenade's Javelin script:
>
>   + swift upload javelin /etc/hosts
>   ...(same Traceback)...
>   [ERROR] /opt/stack/new/grenade/setup-javelin:151 Swift upload failed

What about just removing that test from Javelin in stable/havana? https://review.openstack.org/76058

Cheers,
Alan
Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
2014-02-20 8:57 GMT+01:00 Miguel Angel Ajo Pelayo mangel...@redhat.com:
> I rebased the https://review.openstack.org/#/c/72576/ no-op change.

And it failed in check-tempest-dsvm-neutron-pg with bug 1254890 ("Timed out waiting for thing ... to become ACTIVE"), while the previous check on Feb 17 failed in check-tempest-dsvm-neutron-isolated with bug 1253896 ("Attempts to verify guests are running via SSH fails. SSH connection to guest does not work.").

Cheers,
Alan
Re: [openstack-dev] [swift]stable/havana Jenkins failed
> I notice that we have changed "from swiftclient import Connection, HTTPException" to "from swiftclient import Connection, RequestException" on 2014-02-14; I don't know if it is related. I have reported a bug for this: https://bugs.launchpad.net/swift/+bug/1281886

The bug is a duplicate of https://bugs.launchpad.net/openstack-ci/+bug/1281540 and has also been discussed in the other thread: http://lists.openstack.org/pipermail/openstack-dev/2014-February/027476.html

Cheers,
Alan
Re: [openstack-dev] [All] Fixed recent gate issues
> Yeah, it's pip weirdness where things fall apart because of the version cap. It's basically installing bin/swift from 1.9 when it sees the version requirement, but it leaves everything in the python-swiftclient namespace from master.
>
> So I've actually been looking at this since late yesterday; the conclusion we've reached is to just skip the exercises on grizzly. Removing the version cap isn't going to be simple on grizzly because global requirements weren't enforced back in grizzly. We'd have to change the requirement for glance, horizon, and swift, and being ~3 weeks away from EOL for grizzly I don't think we should mess with that. This failure is only an issue with the CLI swiftclient on grizzly (and one swift functional test), which as it sits now is just the devstack exercises in grenade. So if we just don't run those exercises on the grizzly side of a grenade run, there shouldn't be an issue. I've got 2 patches to do this here:
> https://review.openstack.org/#/c/74419/
> https://review.openstack.org/#/c/74451/

Looks like only the latter is needed; devstack-gate cores, please approve it to unblock stable/havana.

Cheers,
Alan
Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
2014-02-11 16:14 GMT+01:00 Anita Kuno:
> On 02/11/2014 04:57 AM, Alan Pevec wrote:
>> Hi Mark and Anita, could we declare stable/havana neutron gate jobs good enough at this point? There are still random failures, as this no-op change shows https://review.openstack.org/72576 but I don't think they're stable/havana specific. ...
>
> I will reaffirm here what I had stated in IRC. If Mark McClain gives his assent for stable/havana patches to be approved, I will not remove Neutron stable/havana patches from the gate queue before they start running tests. If, after they start running tests, they demonstrate that they are failing, I will remove them from the gate as a means to keep the gate flowing. If the stable/havana gate jobs are indeed stable, I will not be removing any patches that should be merged.

As discussed on #openstack-infra last week, the stable-maint team should start looking more closely at the Tempest stable/havana branch, and Matthew Treinish from Tempest core has joined the stable-maint team to help us there. In the meantime, we need to do something more urgent: there are remaining failures showing up frequently in stable/havana jobs which seem to have been fixed, or at least improved, on master:

* bug 1254890 - "Timed out waiting for thing ... to become ACTIVE" causes tempest-dsvm-* failures; resolution unclear?
* bug 1253896 - "Attempts to verify guests are running via SSH fails. SSH connection to guest does not work." Based on Salvatore's comment 56, I've marked it as Won't Fix in neutron/havana and opened tempest/havana to propose which Tempest tests or jobs should be skipped for Havana.

Please chime in in the bugs if you have suggestions.

Cheers,
Alan
Re: [openstack-dev] [All] Fixed recent gate issues
Hi John, thanks for the summary. I've noticed one more fallout from the swiftclient update, in Grenade jobs running on stable/havana changes, e.g. http://logs.openstack.org/02/73402/1/check/check-grenade-dsvm/a5650ac/console.html ... 2014-02-18 13:00:02.103 | Test Swift 2014-02-18 13:00:02.103 | + swift --os-tenant-name=demo --os-username=demo --os-password=secret --os-auth-url=http://127.0.0.1:5000/v2.0 stat 2014-02-18 13:00:02.284 | Traceback (most recent call last): 2014-02-18 13:00:02.284 | File "/usr/local/bin/swift", line 35, in <module> 2014-02-18 13:00:02.284 | from swiftclient import Connection, HTTPException 2014-02-18 13:00:02.285 | ImportError: cannot import name HTTPException 2014-02-18 13:00:02.295 | + STATUS_SWIFT=Failed ... The Grenade job installs swiftclient from git master, but then later, due to the python-swiftclient>=1.2,<2 requirement in Grizzly, the older version 1.9.0 is pulled from PyPI and then half-installed or something, producing the above conflict between the swift CLI binary and the libs. A solution could be to remove the swiftclient cap in Grizzly; any other suggestions? Cheers, Alan
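The failure mode above (a console script importing names that the installed library no longer provides) can be sketched with a small check. The swiftclient-specific names come from the traceback; the helper itself and the stdlib stand-in module used for self-containment are ours, not part of any gate tooling.

```python
import importlib


def missing_exports(module_name, names):
    """Return the names the installed module does not actually provide."""
    mod = importlib.import_module(module_name)
    return [n for n in names if not hasattr(mod, n)]


# In the gate this would be checked against the real library, i.e.
#   missing_exports("swiftclient", ["Connection", "HTTPException"])
# Demonstrated here against a stdlib module instead: json has dumps but
# no HTTPException, the same shape of failure as the traceback above.
print(missing_exports("json", ["dumps", "HTTPException"]))  # -> ['HTTPException']
```

Any non-empty result means the console script on disk and the importable library have skewed, which is exactly what a half-installed downgrade produces.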
Re: [openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests
2014-02-12 10:48 GMT+01:00 Miguel Angel Ajo Pelayo mangel...@redhat.com: Could any core developer check/approve this if it looks good? https://review.openstack.org/#/c/68601/ I'd like to get it into the new stable/havana release if possible. I'm afraid it's too late for 2013.2.2 (to be released tomorrow, after a week's delay). It would be the same answer as http://lists.openstack.org/pipermail/openstack-stable-maint/2014-February/002124.html - both linked bugs are Medium only and have been known for a long time, so targeting 2013.2.3 is more reasonable. Cheers, Alan
Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
2014-02-04 Doug Hellmann doug.hellm...@dreamhost.com: Do we have a list of those somewhere? Pulled out were the following Neutron patches (IMHO all innocent of breaking the gate): https://review.openstack.org/62206 https://review.openstack.org/67214 https://review.openstack.org/70232 I'm particularly interested in https://review.openstack.org/#/c/66149/ as a fix for https://bugs.launchpad.net/keystone/+bug/1251123 We can discuss it as an exception; I opened the 2013.2.2 exception requests thread on stable-maint last week. I was only +1 on that patch because it's stable/havana only and I didn't see reports from anyone running Havana with this fix; only Kieran reported running a similar patch on _Grizzly_. There's a minor inline comment from Kieran, but that's not a blocker AFAICT. Also, reviews from Keystone core members would help. Cheers, Alan
Re: [openstack-dev] [nova] [stable] stevedore 0.14
2014-01-27 Doug Hellmann doug.hellm...@dreamhost.com: I have just released a new version of stevedore, 0.14, which includes a change to stop checking version numbers of dependencies for plugins. This should eliminate one class of problems we've seen, where we get conflicting requirements to install, the libraries are compatible, but the way stevedore was using pkg_resources was causing errors when the plugins were loaded. Thanks, that will be useful, especially for stable releases. But it looks like this broke Nova unit tests on stable/*: http://lists.openstack.org/pipermail/openstack-stable-maint/2014-January/002055.html Master is not affected; here's the latest successful run from a few minutes ago: http://logs.openstack.org/11/69411/2/check/gate-nova-python27/8252be2/ Shall we pin stevedore on stable/*, or fix the tests on stable/*? Cheers, Alan
Re: [openstack-dev] [nova] [stable] stevedore 0.14
... or fix the tests on stable/* ? That would be: https://review.openstack.org/#/q/I5063c652c705fd512f90ff3897a4c590f7ba7c02,n,z and is already proposed for Havana. Sean, please submit it for stable/grizzly too. Thanks, Alan
Re: [openstack-dev] [Openstack-dev] [Nova] Updated Feature in Next Havana release 2013.2.2
2014/1/21 cosmos cosmos cosmos0...@gmail.com: But now I am wondering about the start/stop and shelve/unshelve functions, because boot from image (creates a new volume) is not working. That's not enough information; have you tried to find Launchpad bug(s) which would match the error? Then we could determine whether the fix is already backported, or could still be backported before the planned 2013.2.2 freeze next week, Jan 30. Cheers, Alan
[openstack-dev] Call for testing: 2013.2.1 candidate tarballs
Hi, We are scheduled to publish Nova, Keystone, Glance, Neutron, Cinder, Horizon, Heat and Ceilometer 2013.2.1 stable Havana releases on Thursday Dec 12. The list of issues fixed so far can be seen here: https://launchpad.net/nova/+milestone/2013.2.1 https://launchpad.net/keystone/+milestone/2013.2.1 https://launchpad.net/glance/+milestone/2013.2.1 https://launchpad.net/neutron/+milestone/2013.2.1 https://launchpad.net/cinder/+milestone/2013.2.1 https://launchpad.net/horizon/+milestone/2013.2.1 https://launchpad.net/heat/+milestone/2013.2.1 https://launchpad.net/ceilometer/+milestone/2013.2.1 We'd appreciate anyone who could test the candidate 2013.2.1 tarballs: http://tarballs.openstack.org/nova/nova-stable-havana.tar.gz http://tarballs.openstack.org/keystone/keystone-stable-havana.tar.gz http://tarballs.openstack.org/glance/glance-stable-havana.tar.gz http://tarballs.openstack.org/neutron/neutron-stable-havana.tar.gz http://tarballs.openstack.org/cinder/cinder-stable-havana.tar.gz http://tarballs.openstack.org/horizon/horizon-stable-havana.tar.gz [*] http://tarballs.openstack.org/heat/heat-stable-havana.tar.gz http://tarballs.openstack.org/ceilometer/ceilometer-stable-havana.tar.gz [*] Horizon will include a translations update in review https://review.openstack.org/60713 Thanks Alan
Re: [openstack-dev] stable/havana 2013.2.1 freeze tomorrow
2013/12/6 Alan Pevec ape...@gmail.com: 2013/12/4 Alan Pevec ape...@gmail.com: the first stable/havana release, 2013.2.1, is scheduled[1] to be released next week on December 12th, so the freeze on stable/havana goes into effect tomorrow EOD, one week before the release. We're behind with reviewing, so we'll be doing a soft-freeze today: stable-maint members can review and approve currently open stable reviews during Friday, but any new reviews coming in will be blocked. Remaining open reviews will get a temporary automatic -2 at EOD today, when the call for testing is posted. Just to give a quick status update: the call for testing is delayed; a bunch of approved changes were hit by Tempest failures in gate jobs, and I'm going to keep reverifying them until they get lucky and pass. The freeze is now in full effect; stable-maint members, please don't approve any more changes unless they're freeze exceptions. The only requested freeze exceptions so far are the sync of the rpc fix from oslo-incubator for the qpid issues https://launchpad.net/bugs/1251757 and https://launchpad.net/bugs/1257293 If you have any other exception proposals, please raise them on the stable-maint list. Cheers, Alan
Re: [openstack-dev] stable/havana 2013.2.1 freeze tomorrow
2013/12/4 Alan Pevec ape...@gmail.com: the first stable/havana release, 2013.2.1, is scheduled[1] to be released next week on December 12th, so the freeze on stable/havana goes into effect tomorrow EOD, one week before the release. We're behind with reviewing, so we'll be doing a soft-freeze today: stable-maint members can review and approve currently open stable reviews during Friday, but any new reviews coming in will be blocked. Remaining open reviews will get a temporary automatic -2 at EOD today, when the call for testing is posted. Cheers, Alan
[openstack-dev] stable/havana 2013.2.1 freeze tomorrow
Hi, the first stable/havana release, 2013.2.1, is scheduled[1] to be released next week on December 12th, so the freeze on stable/havana goes into effect tomorrow EOD, one week before the release. Everybody is welcome to help review the proposed changes[2], taking into account the criteria for stable fixes[3]. Cheers, Alan [1] https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fhavana_releases [2] https://review.openstack.org/#/q/status:open+AND+branch:stable/havana+AND+(project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer),n,z [3] https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps
2013/11/27 Sean Dague s...@dague.net: The problem is you can't really support both. iso8601 was dormant for years, and the revived version isn't compatible with the old version. So supporting both means basically forking iso8601 and maintaining your own version of it, monkey-patched in your own tree. Right, hence https://review.openstack.org/55998 was added to glance to unblock the previous gate failure. The issue now is that stable/grizzly Tempest uses clients from git trunk, which is not going to work since trunk will add more and more incompatible dependencies, even if backward compatibility is preserved against the old service APIs! Solutions could be for Tempest to install the clients into a separate venv to avoid dependency conflicts, or to establish stable/* branches for the clients[1], created around OpenStack release time. Cheers, Alan [1] we have those for OpenStack client packages in Fedora/RDO, e.g. https://github.com/redhat-openstack/python-novaclient/branches Here's a nice explanation by Jakub: http://openstack.redhat.com/Clients On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang yaguang.t...@canonical.com wrote: after the update to iso8601=0.1.8, it breaks stable/neutron jenkins tests, because stable/glance requires iso8601=0.1.4; log info: https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console I have filed a bug to track this: https://bugs.launchpad.net/glance/+bug/1255419.
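The separate-venv option can be sketched minimally. This is our illustration, not an actual Tempest change; the directory prefix and the check are invented for the example. The point is that a virtualenv carries its own site-packages, so whatever client versions get installed there cannot conflict with the dependencies of the services under test.

```python
import os
import tempfile
import venv

# Create an isolated environment; with_pip=False keeps this offline.
# The directory name is illustrative only.
target = tempfile.mkdtemp(prefix="tempest-clients-")
venv.create(target, with_pip=False)

# pyvenv.cfg marks the isolated environment; its bin/python resolves
# imports from the venv's own site-packages before the system ones,
# which is what would keep e.g. a newer iso8601 needed by the clients
# away from the services' pinned version.
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))
```

In a real job, Tempest would install the trunk clients into that environment with its pip and invoke them via the venv's interpreter, leaving the system Python untouched.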
Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly
2013/10/10 Sean Dague s...@dague.net: Hmph. So boto changed their connection function signatures to have a 3rd argument, put it second, and nothing has defaults. So isn't that a boto bug? I'm not sure what their backward-compatibility statement is, but it is silly to break the API just like that[1]. Cheers, Alan [1] https://github.com/boto/boto/commit/789ace93be380ecd36220b7009f0b497dacdc1cb
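Why inserting a new positional parameter in second place, with no default, breaks existing callers can be shown with a tiny sketch. These are hypothetical stand-in functions, not boto's actual API:

```python
def connect_v1(host, port):
    """Old signature: callers pass (host, port) positionally."""
    return {"host": host, "port": port}


def connect_v2(host, region, port):
    """New signature: a third parameter inserted second, with no default."""
    return {"host": host, "region": region, "port": port}


# An existing caller written against v1 still works against v1:
args = ("s3.example.com", 8080)
assert connect_v1(*args) == {"host": "s3.example.com", "port": 8080}

# Against v2, the same call raises TypeError (missing argument); had the
# caller passed three values, the wrong one would silently bind to region.
try:
    connect_v2(*args)
    print("unexpectedly worked")
except TypeError:
    print("TypeError: old positional callers break")
```

Adding the new parameter last with a default (`def connect_v2(host, port, region=None)`) would have preserved both positional and keyword callers.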
Re: [openstack-dev] Gate issues - what you can do to help
2013/10/3 Gary Kotton gkot...@vmware.com: Please see https://review.openstack.org/#/c/49483/ That's s/quantum/neutron/ on stable - I'm confused why that is; it should have been quantum everywhere in Grizzly. Could you please expand your reasoning in the commit message? It also doesn't help: the check-tempest-devstack-vm-neutron job still failed. Cheers, Alan
Re: [openstack-dev] Gate issues - what you can do to help
The problems occur when the following line is invoked: https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302 But that line is reached only if baremetal is enabled, which isn't the case in the gate, is it? Cheers, Alan
Re: [openstack-dev] Gate issues - what you can do to help
Hi, quantumclient is now fixed for stable/grizzly, but there are issues with the check-tempest-devstack-vm-neutron job, where the devstack install is dying in the middle of create_quantum_initial_network() without a trace, e.g. http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html Any ideas? Cheers, Alan
Re: [openstack-dev] Gate issues - what you can do to help
1) Please *do not* Approve or Reverify stable/* patches. The pyparsing requirements conflict with the neutron client from earlier in the week is still not resolved on stable/*. Also, there's an issue with quantumclient and Nova stable/grizzly: https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console ... nova/network/security_group/quantum_driver.py, line 101, in get: id = quantumv20.find_resourceid_by_name_or_id( AttributeError: 'module' object has no attribute 'find_resourceid_by_name_or_id' That should be fixed by https://review.openstack.org/49006 plus a new quantumclient release, thanks Matt! Adam, Thierry - given that stable/grizzly is still blocked by this, I suppose we should delay the 2013.1.4 freeze (which was planned for this Thursday) until stable/grizzly is back in shape? Cheers, Alan
Re: [openstack-dev] Gate issues - what you can do to help
1) Please *do not* Approve or Reverify stable/* patches. The pyparsing requirements conflict with the neutron client from earlier in the week is still not resolved on stable/*. Also, there's an issue with quantumclient and Nova stable/grizzly: https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console ... nova/network/security_group/quantum_driver.py, line 101, in get: id = quantumv20.find_resourceid_by_name_or_id( AttributeError: 'module' object has no attribute 'find_resourceid_by_name_or_id' The relevant difference in pip freeze from the last good run is: -python-quantumclient==2.2.3 +python-neutronclient==2.3.1 +python-quantumclient==2.2.4.2 Looks like the new quantumclient compatibility layer is missing a few methods. Cheers, Alan
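The shape of this failure is easy to reproduce with stand-ins. Everything below is an illustrative sketch (hypothetical stand-in modules, not the real clients): unless every public helper of the old quantumclient namespace is forwarded to its neutronclient equivalent, old callers hit exactly the AttributeError from the traceback above.

```python
from types import SimpleNamespace

# Stand-in for the new library's helpers (the real function lives in
# the neutronclient v2_0 module; the "fake-id" return is ours).
neutron_v20 = SimpleNamespace(
    find_resourceid_by_name_or_id=lambda client, resource, name: "fake-id",
)

# Stand-in for an incomplete compatibility shim that forgot one helper:
quantum_v20 = SimpleNamespace()

try:
    quantum_v20.find_resourceid_by_name_or_id  # what quantum_driver.py calls
except AttributeError as exc:
    print("broken shim:", exc)

# Forwarding the attribute restores the old entry point:
quantum_v20.find_resourceid_by_name_or_id = (
    neutron_v20.find_resourceid_by_name_or_id
)
print(quantum_v20.find_resourceid_by_name_or_id(None, "network", "private"))
```

A complete shim would do this forwarding for the whole public surface of the old module, which is why a missing method only shows up when some caller (here, Nova's security group driver) happens to use it.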
[openstack-dev] Call for testing: 2013.1.3 candidate tarballs
Hi all, We are scheduled to publish Nova, Keystone, Glance, Networking, Cinder and Horizon 2013.1.3 releases on Thursday Aug 8. Ceilometer and Heat were incubating in Grizzly, so they're not covered by the stable branch policy, but the Ceilometer and Heat teams are preparing 2013.1.3 releases on their own, to coincide with the other 2013.1.3 stable releases. The list of issues fixed so far can be seen here: https://launchpad.net/nova/+milestone/2013.1.3 https://launchpad.net/keystone/+milestone/2013.1.3 https://launchpad.net/glance/+milestone/2013.1.3 https://launchpad.net/neutron/+milestone/2013.1.3 https://launchpad.net/cinder/+milestone/2013.1.3 https://launchpad.net/horizon/+milestone/2013.1.3 https://launchpad.net/heat/+milestone/2013.1.3 https://launchpad.net/ceilometer/+milestone/2013.1.3 We'd appreciate anyone who could test the candidate 2013.1.3 tarballs: http://tarballs.openstack.org/nova/nova-stable-grizzly.tar.gz http://tarballs.openstack.org/keystone/keystone-stable-grizzly.tar.gz http://tarballs.openstack.org/glance/glance-stable-grizzly.tar.gz http://tarballs.openstack.org/neutron/neutron-stable-grizzly.tar.gz [*] http://tarballs.openstack.org/cinder/cinder-stable-grizzly.tar.gz http://tarballs.openstack.org/horizon/horizon-stable-grizzly.tar.gz http://tarballs.openstack.org/heat/heat-stable-grizzly.tar.gz http://tarballs.openstack.org/ceilometer/ceilometer-stable-grizzly.tar.gz Thanks Alan [*] The stable Networking tarball will be renamed to quantum-2013.1.3.tar.gz before upload to Launchpad
[openstack-dev] stable/grizzly 2013.1.3 approaching WAS Re: No Project release status meeting tomorrow
Hi Thierry, we'll be skipping the release status meeting tomorrow at 21:00 UTC I wanted to remind people at that meeting about the next stable/grizzly release, 2013.1.3; the meeting next week would be too late, so I'll piggy-back here. The proposed freeze is Aug 1st and the release Aug 8th. Milestone 2013.1.3 has been created in Launchpad, and I'd like to ask PTLs to target the bugs they consider important to that milestone, even if a backport is not proposed yet. That will help us prioritize among the bugs tagged for grizzly. Cheers, Alan