[openstack-dev] [cyborg] Weekly Team Meeting 2018.07.11
Hi Team,

Weekly meeting as usual starting at UTC 1400 at #openstack-cyborg. Since the holiday is over, let's focus on getting Rocky features done :)

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tripleo] Plan to switch the undercloud to be containerized by default
with [tripleo] tag...

On Tue, Jul 10, 2018 at 7:56 PM Emilien Macchi wrote:
> This is an update on where things are regarding $topic, based on feedback
> I've got from the work done recently:
>
> 1) Switch --use-heat to take a boolean and deprecate it
>
> We still want to allow users to deploy non-containerized underclouds, so
> we made this patch so they can use --use-heat=False:
> https://review.openstack.org/#/c/581467/
> Also https://review.openstack.org/#/c/581468 and
> https://review.openstack.org/581180 as dependencies
>
> 2) Configure CI jobs for the containerized undercloud, except scenario001/002
> for timeout reasons (and figure out this problem in a parallel effort)
>
> https://review.openstack.org/#/c/575330
> https://review.openstack.org/#/c/579755
>
> 3) Switch tripleoclient to deploy a containerized undercloud by default
>
> https://review.openstack.org/576218
>
> 4) Improve performance in general so scenario001/002 don't time out when
> the containerized undercloud is enabled
>
> https://review.openstack.org/#/c/581183 is the patch that'll enable the
> containerized undercloud
> https://review.openstack.org/#/c/577889/ is a patch that enables
> pipelining in ansible/quickstart, but more is about to come; I'll update
> the patches tonight.
>
> 5) Clean up quickstart to stop using use-heat except for fs003 (needed to
> disable containers and keep coverage for the non-containerized undercloud)
>
> https://review.openstack.org/#/c/581534/
>
> Reviews are welcome; we aim to merge this work by milestone 3, in less
> than 2 weeks from now.
>
> Thanks!
> --
> Emilien Macchi
--
Emilien Macchi
[openstack-dev] Plan to switch the undercloud to be containerized by default
This is an update on where things are regarding $topic, based on feedback I've got from the work done recently:

1) Switch --use-heat to take a boolean and deprecate it

We still want to allow users to deploy non-containerized underclouds, so we made this patch so they can use --use-heat=False:
https://review.openstack.org/#/c/581467/
Also https://review.openstack.org/#/c/581468 and https://review.openstack.org/581180 as dependencies

2) Configure CI jobs for the containerized undercloud, except scenario001/002 for timeout reasons (and figure out this problem in a parallel effort)

https://review.openstack.org/#/c/575330
https://review.openstack.org/#/c/579755

3) Switch tripleoclient to deploy a containerized undercloud by default

https://review.openstack.org/576218

4) Improve performance in general so scenario001/002 don't time out when the containerized undercloud is enabled

https://review.openstack.org/#/c/581183 is the patch that'll enable the containerized undercloud
https://review.openstack.org/#/c/577889/ is a patch that enables pipelining in ansible/quickstart, but more is about to come; I'll update the patches tonight.

5) Clean up quickstart to stop using use-heat except for fs003 (needed to disable containers and keep coverage for the non-containerized undercloud)

https://review.openstack.org/#/c/581534/

Reviews are welcome; we aim to merge this work by milestone 3, in less than 2 weeks from now.

Thanks!
--
Emilien Macchi
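Item 1 converts --use-heat from a plain switch into a flag that accepts a boolean. The pattern can be sketched with stdlib argparse; this is a hypothetical illustration of that flag style, not tripleoclient's actual code:

```python
import argparse

def str_to_bool(value):
    """Parse values like "True"/"False" given as --use-heat=<value>."""
    if value.lower() in ('1', 'true', 'yes'):
        return True
    if value.lower() in ('0', 'false', 'no'):
        return False
    raise argparse.ArgumentTypeError('boolean value expected: %r' % value)

parser = argparse.ArgumentParser()
# nargs='?' lets the flag take an optional value: "--use-heat=False"
# selects the non-containerized path, while a bare "--use-heat" (or
# omitting the flag entirely) keeps the previous default behavior.
parser.add_argument('--use-heat', nargs='?', const=True, default=True,
                    type=str_to_bool)

print(parser.parse_args(['--use-heat=False']).use_heat)  # False
print(parser.parse_args(['--use-heat']).use_heat)        # True
print(parser.parse_args([]).use_heat)                    # True
```

With this shape, existing callers passing the bare flag keep working while new callers can opt out explicitly, which is what makes the deprecation path graceful.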
Re: [openstack-dev] [requirements][taskflow] networkx migration
Octavia passed tempest with this change and networkx 2.1.

Michael

On Tue, Jul 10, 2018 at 9:29 AM Doug Hellmann wrote:
>
> Excerpts from Matthew Thode's message of 2018-07-10 10:59:33 -0500:
> > On 18-07-09 15:15:23, Matthew Thode wrote:
> > > We have a patch that looks good, can we get it merged?
> > >
> > > https://review.openstack.org/#/c/577833/
> > >
> >
> > Anyone from taskflow around? Maybe it's better to just mail the PTL.
> >
>
> We could use more reviewers on taskflow (and have needed them for a
> while). Perhaps we can get some of the consuming projects to give it a
> little love so the Oslo folks who are less familiar with it feel
> confident of the change(s) this close to the final release date for
> non-client libraries.
>
> Doug
[openstack-dev] [keystone] Adding Wangxiyuan to keystone core
Hi all,

Today we added Wangxiyuan to the keystone core team [0]. He's been doing a bunch of great work over the last couple of releases and has become a valuable reviewer [1][2]. He's also been instrumental in pushing forward the unified limits work, not only in keystone but across projects.

Thanks Wangxiyuan for all your help, and welcome to the team!

Lance

[0] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-07-10-16.00.log.html#l-100
[1] http://stackalytics.com/?module=keystone-group
[2] http://stackalytics.com/?module=keystone-group=queens
Re: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed
Excerpts from Sumit Naiksatam's message of 2018-07-10 13:24:25 -0700:
> Thanks Doug for noticing this. I am guessing this was a transient
> issue. How do we trigger this job again to confirm?

Someone from the infra team with access to the zuul interface can help you with that.

> On Tue, Jul 10, 2018 at 9:21 AM, Doug Hellmann wrote:
> > Excerpts from zuul's message of 2018-07-10 06:38:24 +0000:
> >> Build failed.
> >>
> >> - release-openstack-python
> >>   http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/
> >>   : FAILURE in 6m 31s
> >> - announce-release announce-release : SKIPPED
> >> - propose-update-constraints propose-update-constraints : SKIPPED
> >>
> >
> > The release job failed trying to pip install something due to an SSL
> > error.
> >
> > http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386
[openstack-dev] [ironic] Ironic Bug Day July 12 2018 1:00 - 2:00 PM UTC
Hey all,

This month's bug day was delayed a week and will take place on Thursday the 12th from 1:00 UTC to 2:00 UTC.

For location, time, and agenda details please see:

https://etherpad.openstack.org/p/ironic-bug-day-july-2018

If you would like to propose topics, feel free to do it in the etherpad!

Thanks,
Mike Turek
Re: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed
Thanks Doug for noticing this. I am guessing this was a transient issue. How do we trigger this job again to confirm?

On Tue, Jul 10, 2018 at 9:21 AM, Doug Hellmann wrote:
> Excerpts from zuul's message of 2018-07-10 06:38:24 +0000:
>> Build failed.
>>
>> - release-openstack-python
>>   http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/
>>   : FAILURE in 6m 31s
>> - announce-release announce-release : SKIPPED
>> - propose-update-constraints propose-update-constraints : SKIPPED
>>
>
> The release job failed trying to pip install something due to an SSL
> error.
>
> http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386
Re: [openstack-dev] [cinder] Planning Etherpad for Denver 2018 PTG
Thanks, Jay!

On Fri, 6 Jul 2018 at 14:30, Jay S Bryant wrote:
> All,
>
> I have created an etherpad to start planning for the Denver PTG in
> September. [1] Please start adding topics to the etherpad.
>
> Look forward to seeing you all there!
>
> Jay
> (jungleboyj)
>
> [1] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
Re: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume
On Tue, Jul 10, 2018 at 3:28 PM, Martin Chlumsky wrote:
> It is the workaround that is right and the discussion part that is wrong.
>
> I am familiar with this bug. Using thin volumes *and/or* enabling zero
> padding DOES ensure data contained in a volume is actually deleted.

Great, that's super helpful. Thanks!

Is there someone (Luke?) on the list that can send a correction for this OSSN to all the lists it needs to go to?

// jim

> On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen wrote:
>> On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote:
>>> Data retained after deletion of a ScaleIO volume
>>> ---
>>>
>>> ### Summary ###
>>> Certain storage volume configurations allow newly created volumes to
>>> contain previous data. This could lead to leakage of sensitive
>>> information between tenants.
>>>
>>> ### Affected Services / Software ###
>>> Cinder releases up to and including Queens with ScaleIO volumes
>>> using thin volumes and zero padding.
>>
>> According to discussion in the bug, this bug occurs with ScaleIO volumes
>> using thick volumes and with zero padding disabled.
>>
>> If the bug is with thin volumes and zero padding, then the workaround
>> seems quite wrong. :)
>>
>> I'm not super familiar with Cinder, so could some Cinder folks check this
>> out and re-issue a more accurate OSSN, please?
>>
>> // jim
>>
>>> ### Discussion ###
>>> Using both thin volumes and zero padding does not ensure data contained
>>> in a volume is actually deleted. The default volume provisioning rule is
>>> set to thick so most installations are likely not affected. Operators
>>> can check their configuration in `cinder.conf` or check for zero padding
>>> with this command: `scli --query_all`.
>>>
>>> Recommended Actions
>>>
>>> Operators can use the following two workarounds until the release of
>>> Rocky (planned 30th August 2018), which resolves the issue.
>>>
>>> 1. Swap to thin volumes
>>>
>>> 2. Ensure ScaleIO storage pools use zero padding with:
>>>
>>> `scli --modify_zero_padding_policy
>>> (((--protection_domain_id | --protection_domain_name )
>>> --storage_pool_name ) | --storage_pool_id )
>>> (--enable_zero_padding | --disable_zero_padding)`
>>>
>>> ### Contacts / References ###
>>> Author: Nick Tait
>>> This OSSN: https://wiki.openstack.org/wiki/OSSN/OSSN-0084
>>> Original LaunchPad Bug: https://bugs.launchpad.net/ossn/+bug/1699573
>>> Mailing List: [Security] tag on openstack-dev@lists.openstack.org
>>> OpenStack Security Project: https://launchpad.net/~openstack-ossg
Re: [openstack-dev] [Openstack-sigs] [first-contact] Forum summary on recommendations for contributing organizations
Cross-posting this to the dev list as I think there will be good input from there as well :)

-Kendall (diablo_rojo)

On Tue, Jul 10, 2018 at 11:56 AM Jeremy Stanley wrote:
> On 2018-06-12 19:53:25 +0000 (+0000), Jeremy Stanley wrote:
> [...]
> > Finally, we came up with a handful of action items. One was me
> > sending this summary (only a couple weeks late!), another was
> > Matthew Oliver submitting a patch to the contributor guide repo
> > with our initial stub text.
> [...]
>
> An early draft for the Contributor Guide addition with
> recommendations to contributing organizations was subsequently
> proposed as https://review.openstack.org/578676 but could use some
> additional input and polish from other interested members of the
> community. Please have a look and provide any feedback you have as
> review comments there or via followup to this thread (whichever is
> more convenient).
> --
> Jeremy Stanley
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
Re: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume
It is the workaround that is right and the discussion part that is wrong.

I am familiar with this bug. Using thin volumes *and/or* enabling zero padding DOES ensure data contained in a volume is actually deleted.

On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen wrote:
> On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote:
>> Data retained after deletion of a ScaleIO volume
>> ---
>>
>> ### Summary ###
>> Certain storage volume configurations allow newly created volumes to
>> contain previous data. This could lead to leakage of sensitive
>> information between tenants.
>>
>> ### Affected Services / Software ###
>> Cinder releases up to and including Queens with ScaleIO volumes
>> using thin volumes and zero padding.
>
> According to discussion in the bug, this bug occurs with ScaleIO volumes
> using thick volumes and with zero padding disabled.
>
> If the bug is with thin volumes and zero padding, then the workaround
> seems quite wrong. :)
>
> I'm not super familiar with Cinder, so could some Cinder folks check this
> out and re-issue a more accurate OSSN, please?
>
> // jim
>
>> ### Discussion ###
>> Using both thin volumes and zero padding does not ensure data contained
>> in a volume is actually deleted. The default volume provisioning rule is
>> set to thick so most installations are likely not affected. Operators
>> can check their configuration in `cinder.conf` or check for zero padding
>> with this command: `scli --query_all`.
>>
>> Recommended Actions
>>
>> Operators can use the following two workarounds until the release of
>> Rocky (planned 30th August 2018), which resolves the issue.
>>
>> 1. Swap to thin volumes
>>
>> 2. Ensure ScaleIO storage pools use zero padding with:
>>
>> `scli --modify_zero_padding_policy
>> (((--protection_domain_id | --protection_domain_name )
>> --storage_pool_name ) | --storage_pool_id )
>> (--enable_zero_padding | --disable_zero_padding)`
>>
>> ### Contacts / References ###
>> Author: Nick Tait
>> This OSSN: https://wiki.openstack.org/wiki/OSSN/OSSN-0084
>> Original LaunchPad Bug: https://bugs.launchpad.net/ossn/+bug/1699573
>> Mailing List: [Security] tag on openstack-dev@lists.openstack.org
>> OpenStack Security Project: https://launchpad.net/~openstack-ossg
Re: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks
On Tue, Jul 10, 2018 at 10:21 AM Jiří Stránský wrote:
>
> Hi,
>
> with the move to config-download deployments, we'll be moving from
> executing external installers (like ceph-ansible) via Heat resources
> encapsulating Mistral workflows towards executing them via Ansible
> directly (a nested Ansible process via external_deploy_tasks).
>
> Updates and upgrades still need to be addressed here. I think we should
> introduce external_update_tasks and external_upgrade_tasks for this
> purpose, but I see two options for how to construct the workflow with them.
>
> During an update (mentioning just updates, but upgrades would be done
> analogously) we could either:
>
> A) Run external_update_tasks, then external_deploy_tasks.
>
> This works with the assumption that updates are done very similarly to
> deployment. The external_update_tasks could do some prep work and/or
> export Ansible variables which then affect what external_deploy_tasks
> do (e.g. in the case of ceph-ansible we'd probably override the playbook
> path). This way we could also disable specific parts of
> external_deploy_tasks on update, in case reuse is undesirable in some
> places.
>
> B) Run only external_update_tasks.
>
> This would mean code for updates/upgrades of externally deployed
> services would be completely separated from how their deployment is
> done. If we wanted to reuse some of the deployment tasks, we'd have to
> use the YAML anchor referencing mechanisms (&anchor, *anchor).
>
> I think the options are comparable in terms of what is possible to
> implement with them; the main difference is which use cases we want to
> optimize for.
>
> Looking at what we currently have in external_deploy_tasks (e.g.
> [1][2]), I think we'd have to do a lot of explicit reuse if we went with
> B (inventory and variables generation, ...). So I'm leaning towards
> option A (WIP patch at [3]), which should give us this reuse more
> naturally. This approach would also be more in line with how we already
> do normal updates and upgrades (also reusing deployment tasks). Please
> let me know in case you have any concerns about such an approach (looking
> especially at the Ceph and OpenShift integrators :) ).

Thanks for thinking of this, Jirka. I like option A and your WIP patch (579170). As you say, it fits with what we're already doing and avoids explicit reuse.

John

> Thanks
>
> Jirka
>
> [1] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467
> [2] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231
> [3] https://review.openstack.org/#/c/579170/
Re: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure
On Tue, Jul 10, 2018 at 10:16:37AM +0100, Chris Dent wrote:
> On Mon, 9 Jul 2018, Matthew Treinish wrote:
>
> > It's definitely a bug, and likely a bug in stestr (or one of the lower
> > level packages like testtools or python-subunit), because that's what's
> > generating the return code. Tox just looks at the return code from the
> > commands to figure out if things were successful or not. I'm a bit
> > surprised by this though; I thought we covered the unxsuccess and xfail
> > cases, because I would have expected cdent to file a bug if it didn't.
> > Looking at the stestr tests, we don't have coverage for the unxsuccess
> > case, so I can see how this slipped through.
>
> This was reported on testrepository some years ago and a bit of
> analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196

This actually helps a lot, because I was seeing the same issue when I tried writing a quick patch to address this. When I manually poked the TestResult object it didn't have anything in the unxsuccess list. So instead of relying on that I wrote this patch:

https://github.com/mtreinish/stestr/pull/188

which uses the output filter's internal function for counting results to find unxsuccess tests. It's still not perfect though, because if someone runs with the --no-subunit-trace flag it still doesn't work (that call path never gets run), but it's at least a starting point. I've marked it as WIP for now, but I'm thinking we could merge it as is and leave --no-subunit-trace with unxsuccess as a known issue for now, since xfail and unxsuccess are pretty uncommon in practice. (gabbi is the only thing I've seen really use it.)

-Matt Treinish

> So yeah, I did file a bug but it fell off the radar during those
> dark times.
[openstack-dev] [designate] Meeting tomorrow
Unfortunately something has come up and I have an appointment I have to be at during our scheduled slot (11:00 UTC).

Can someone else chair, or we can postpone the meeting for 1 week. Does anyone have any preferences?

Thanks,
- Graham
[openstack-dev] [first-contact] Recommendations for contributing organizations
If you're interested in helping with an addition to the Contributor Guide detailing places where those employing contributors to OpenStack might be able to help improve the experience for their employees and increase their ability to succeed within the community, please chime in on this SIG's ML thread or the review linked from it:

http://lists.openstack.org/pipermail/openstack-sigs/2018-July/000429.html

--
Jeremy Stanley
[openstack-dev] [tc] Technical Committee update for 2018-07-10
This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:

https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Other approved changes:

* remove project team diversity tags: https://review.openstack.org/#/c/579870/

Office hour logs:

* http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T01:00:01
* http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:00:09
* http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-10.log.html#t2018-07-10T09:01:41

== Ongoing Discussions ==

The Adjutant team application has the minimum number of votes required to be approved. It cannot be formally accepted until 14 July, and I know we have several TC members traveling this week, so I will hold it open until next week to allow for final votes and discussion.

* https://review.openstack.org/553643

Colleen is going to contact the election officials about scheduling the elections for the end of Rocky / beginning of Stein.

Project team "health check" discussions are continuing. As Chris mentioned in his email this week, the point of this process is to have TC members actively engage with each team to understand any potential issues they are facing. We have a few teams impacted by the ZTE situation, and we have a few other teams with some affiliation diversity concerns that we would like to try to help address. We have also discovered that some teams are healthier than we expected based on how obvious (or not) their activity was.

* http://lists.openstack.org/pipermail/openstack-dev/2018-July/132101.html

I have made a few revisions to the python3-first goal based on feedback on the patch and testing. I expect a few more small updates with links to examples.

* https://review.openstack.org/575933

I have also proposed a PTI update for the documentation jobs that is a prerequisite to moving ahead with the python 3 changes during Stein.

* https://review.openstack.org/580495
* http://lists.openstack.org/pipermail/openstack-dev/2018-July/132025.html

== TC member actions/focus/discussions for the coming week(s) ==

Thierry's changes to the Project Team Guide to include a technical guidance section need reviewers.

* https://review.openstack.org/#/c/578070/1

Zane needs to update the proposal for diversity requirements or guidance for new project teams based on existing feedback.

* https://review.openstack.org/#/c/567944/

Please vote on the Adjutant team application: https://review.openstack.org/553643

Remember that we agreed to send status updates on initiatives separately to openstack-dev every two weeks. If you are working on something for which there has not been an update in a couple of weeks, please consider summarizing the status.

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones.

Office hour times in #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at:

https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/

If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members.
Re: [openstack-dev] [requirements][taskflow] networkx migration
Excerpts from Matthew Thode's message of 2018-07-10 10:59:33 -0500:
> On 18-07-09 15:15:23, Matthew Thode wrote:
> > We have a patch that looks good, can we get it merged?
> >
> > https://review.openstack.org/#/c/577833/
> >
>
> Anyone from taskflow around? Maybe it's better to just mail the PTL.
>

We could use more reviewers on taskflow (and have needed them for a while). Perhaps we can get some of the consuming projects to give it a little love so the Oslo folks who are less familiar with it feel confident of the change(s) this close to the final release date for non-client libraries.

Doug
[openstack-dev] [ironic] "mid-cycle" call Tuesday, July 17th - 3 PM UTC
Fellow ironicans! Lend me your ears!

With the cycle quickly coming to a close, we wanted to take a couple of hours for high-bandwidth discussions covering the end of cycle for Ironic, as well as any items that need to be established in advance of the PTG.

We're going to use bluejeans[1] since it seems to work well for everyone, and I've posted a rough agenda[2] to an etherpad. If there are additional items, please feel free to add them to the etherpad.

-Julia

[1]: https://bluejeans.com/437242882/
[2]: https://etherpad.openstack.org/p/ironic-rocky-midcycle
Re: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed
Excerpts from zuul's message of 2018-07-10 06:38:24 +0000:
> Build failed.
>
> - release-openstack-python
>   http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/
>   : FAILURE in 6m 31s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>

The release job failed trying to pip install something due to an SSL error.

http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386
Re: [openstack-dev] [nova] [placement] placement update 18-27
On 07/06/2018 10:09 AM, Chris Dent wrote: # Questions * Will consumer id, project and user id always be a UUID? We've established for certain that user id will not, but things are less clear for the other two. This issue is compounded by the fact that these two strings are different but the same UUID: 5eb033fd-c550-420e-a31c-3ec2703a403c, 5eb033fdc550420ea31c3ec2703a403c (bug 1758057 mentioned above) but we treat them differently in our code. As mentioned by a couple people on IRC, a consumer's external project identifier and external user identifier come directly from Keystone. Since Keystone has no rule about these values being UUIDs or even UUID-like, we clearly cannot treat them as UUIDs in the placement service. Our backend data storage for these attributes is suitably a String(255) column and there is no validation done on these values. In fact, the project and user external identifiers are taken directly from the nova.context WSGI environ when sending from the placement client [1]. So, really, the only thing we're discussing is whether consumer_id is always a UUID. I believe it should be, and the fact that it's referred to as consumer_uuid in so many places should be indicative of its purpose. I know originally the field in the DB was a String(64), but it's since been changed to String(36), further evidence that consumer_id was intended to be a UUID. I believe we should validate it as such at the placement API layer. The only current consumers in the placement service are instances and migrations, both of which use a UUID identifier. I don't think it's too onerous to require future consumers to be identified with a UUID, and it would be nice to be able to rely on a structured, agreed format for unique identification of consumers across services. As noted the project_id and user_id are not required to be UUIDs and I don't believe we should add any validation for those fields. 
Best, -jay [1] For those curious, nova-scheduler calls scheduler.utils.claim_resources(...): https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/scheduler/filter_scheduler.py#L219 which itself calls reportclient.claim_resources(...) with the instance.user_id and instance.project_id values: https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/scheduler/utils.py#L500 The instance.project_id and instance.user_id values are populated from the WSGI environ here: https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/compute/api.py#L831-L832 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
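Jay's point about the two spellings being "different strings but the same UUID" is easy to see with Python's stdlib. This is a minimal illustration (not placement code): `uuid.UUID` accepts both the hyphenated and unhyphenated forms and yields the same value, so validating and canonicalizing consumer_id at the API layer would make the two strings from bug 1758057 compare equal.

```python
import uuid

# The two strings from bug 1758057: different strings, same UUID.
hyphenated = "5eb033fd-c550-420e-a31c-3ec2703a403c"
unhyphenated = "5eb033fdc550420ea31c3ec2703a403c"

def normalize_consumer_id(value):
    """Validate that value is a UUID and return its canonical form.

    Raises ValueError for non-UUID input (e.g. an arbitrary Keystone
    user id), which is why a check like this would apply to
    consumer_id only, not to project_id or user_id.
    """
    return str(uuid.UUID(value))

# Both spellings normalize to the same canonical, hyphenated form.
assert normalize_consumer_id(hyphenated) == normalize_consumer_id(unhyphenated)
```

The function name here is hypothetical; the placement service would presumably do this at its API layer rather than via a helper like this.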
Re: [openstack-dev] [requirements][taskflow] networkx migration
On 18-07-09 15:15:23, Matthew Thode wrote: > We have a patch that looks good, can we get it merged? > > https://review.openstack.org/#/c/577833/ > Anyone from taskflow around? Maybe it's better to just mail the PTL. -- Matthew Thode (prometheanfire) __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
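For context on why the networkx migration patch needs careful review: networkx 2.x changed several graph accessors from concrete lists/dicts into views, so 1.x-era calling patterns break. A small sketch of the kind of change involved (illustrative only, not taken from the taskflow patch):

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("a", "b")

# networkx 1.x: g.nodes() returned a list; in 2.x it returns a
# NodeView, so code that relied on list behaviour (indexing,
# mutation) must wrap it explicitly.
nodes = list(g.nodes())

# networkx 1.x: per-node attributes lived under g.node[...]; in 2.x
# the same data is reached through g.nodes[...].
g.nodes["a"]["weight"] = 1
```

Migrations like taskflow's are largely a matter of hunting down these idioms, which is why extra reviewers familiar with the callers help.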
Re: [openstack-dev] [nova] [placement] placement update 18-27
On 07/09/2018 02:52 PM, Chris Dent wrote: On Fri, 6 Jul 2018, Chris Dent wrote: This is placement update 18-27, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This is a contract version. Forgot to mention: There won't be an 18-28 this Friday, I'll be out and about. If someone else would like to do one, that would be great. On it. -jay __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] creating instance
On 07/10/2018 03:04 AM, jayshankar nair wrote: Hi, I am trying to create an instance of cirros os(Project/Compute/Instances). I am getting the following error. Error: Failed to perform requested operation on instance "cirros1", the instance has an error status: Please try again later [Error: Build of instance 5de65e6d-fca6-4e78-a688-ead942e8ed2a aborted: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-91535564-4caf-4975-8eff-7bca515d414e)]. How to debug the error. You'll want to look at the logs for the individual service. Since you were trying to create a server instance, you probably want to start with the logs for the "nova-api" service to see if there are any failure messages. You can then check the logs for "nova-scheduler", "nova-conductor", and "nova-compute". There should be something useful in one of those. Chris __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume
On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote: > Data retained after deletion of a ScaleIO volume > --- > > ### Summary ### > Certain storage volume configurations allow newly created volumes to > contain previous data. This could lead to leakage of sensitive > information between tenants. > > ### Affected Services / Software ### > Cinder releases up to and including Queens with ScaleIO volumes > using thin volumes and zero padding. > According to discussion in the bug, this bug occurs with ScaleIO volumes using thick volumes and with zero padding disabled. If the bug is with thin volumes and zero padding, then the workaround seems quite wrong. :) I'm not super familiar with Cinder, so could some Cinder folks check this out and re-issue a more accurate OSSN, please? // jim > > ### Discussion ### > Using both thin volumes and zero padding does not ensure data contained > in a volume is actually deleted. The default volume provisioning rule is > set to thick so most installations are likely not affected. Operators > can check their configuration in `cinder.conf` or check for zero padding > with this command `scli --query_all`. > > ### Recommended Actions ### > > Operators can use the following two workarounds, until the release of > Rocky (planned 30th August 2018) which resolves the issue. > > 1. Swap to thin volumes > > 2. 
Ensure ScaleIO storage pools use zero-padding with: > > `scli --modify_zero_padding_policy > (((--protection_domain_id <id> | > --protection_domain_name <name>) > --storage_pool_name <name>) | --storage_pool_id <id>) > (--enable_zero_padding | --disable_zero_padding)` > > ### Contacts / References ### > Author: Nick Tait > This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 > Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 > Mailing List : [Security] tag on openstack-dev@lists.openstack.org > OpenStack Security Project : https://launchpad.net/~openstack-ossg > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks
Hi, with the move to config-download deployments, we'll be moving from executing external installers (like ceph-ansible) via Heat resources encapsulating Mistral workflows towards executing them via Ansible directly (nested Ansible process via external_deploy_tasks). Updates and upgrades still need to be addressed here. I think we should introduce external_update_tasks and external_upgrade_tasks for this purpose, but I see two options for how to construct the workflow with them. During update (mentioning just updates, but upgrades would be done analogously) we could either: A) Run external_update_tasks, then external_deploy_tasks. This works with the assumption that updates are done very similarly to deployment. The external_update_tasks could do some prep work and/or export Ansible variables which then could affect what external_deploy_tasks do (e.g. in the case of ceph-ansible we'd probably override the playbook path). This way we could also disable specific parts of external_deploy_tasks on update, in case reuse is undesirable in some places. B) Run only external_update_tasks. This would mean code for updates/upgrades of externally deployed services would be completely separated from how their deployment is done. If we wanted to reuse some of the deployment tasks, we'd have to use the YAML anchor referencing mechanisms (&anchor, *anchor). I think the options are comparable in terms of what is possible to implement with them; the main difference is which use cases we want to optimize for. Looking at what we currently have in external_deploy_tasks (e.g. [1][2]), I think we'd have to do a lot of explicit reuse if we went with B (inventory and variables generation, ...). So I'm leaning towards option A (WIP patch at [3]), which should give us this reuse more naturally. This approach would also be more in line with how we already do normal updates and upgrades (also reusing deployment tasks). 
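For the record, the anchor-based reuse that option B would require looks roughly like this. This is a hypothetical sketch, not taken from tripleo-heat-templates; the task names are invented for illustration:

```yaml
# Hypothetical sketch of option B: reuse deploy tasks via YAML anchors.
outputs:
  role_data:
    value:
      external_deploy_tasks: &external_deploy_tasks
        - name: generate inventory and variables
          # ... deployment-time setup ...
        - name: run the external installer playbook
          # ... e.g. invoke ceph-ansible ...
      # The alias reuses the deploy task list wholesale; any
      # update-specific behaviour would need its own tasks.
      external_update_tasks: *external_deploy_tasks
```

As the email notes, every service template would have to carry this kind of explicit reuse under option B, whereas option A gets it implicitly.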
Please let me know in case you have any concerns about this approach (looking especially at Ceph and OpenShift integrators :) ). Thanks Jirka [1] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467 [2] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231 [3] https://review.openstack.org/#/c/579170/ __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-28
HTML: https://anticdent.org/tc-report-18-28.html With feature freeze approaching at the end of this month, it seems that people are busily working on getting-stuff-done, so there are not vast amounts of TC discussion to report this week. Actually that's not entirely true. There's quite a bit of interesting discussion in [the logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/) but it ranges widely and resists summary. If you're a fast reader, it can be pretty straightforward to read the whole week. Some highlights: ## Contextualizing Change The topics of sharing personal context, creating a new technical vision for OpenStack, and trying to breach the boundaries between the various OpenStack sub-projects flowed in amongst one another. In a vast bit of background and perspective sharing, Zane provided his feelings on [what OpenStack ought to be](http://lists.openstack.org/pipermail/openstack-dev/2018-July/132047.html). While long, such things help provide much more context for understanding some of the issues. Reading such things can take effort, but they fill in blanks in understanding, even if you don't agree. Meanwhile, and related, there are [continued requests](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-06.log.html#t2018-07-06T15:20:50) for nova to engage in orchestration, in large part because there's nothing else commonly available to do it, and while that's true we can't serve people's needs well. Some have said that the need for orchestration could in part be addressed by breaking down some of the boundaries between projects, but [which boundaries is unclear](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T01:12:27). Thierry says we should [organize work based on objectives](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T08:33:44). 
## Goals of Health Tracking In [last week's report](/tc-report-18-27.html) I drew a connection between the [removal of diversity tags](https://review.openstack.org/#/c/579870/) and the [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker). This [created some](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:29:01) concern that there were going to be renewed evaluations of projects that would impact their standing in the community and that these evaluations were going to be too subjective. While it is true that the health tracker is a subjective review of how a project is doing, the evaluation is a way to discover and act on opportunities to help a project, not punish it or give it a black mark. It is important, however, that the TC is making an [independent evaluation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:45:59). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] Bug deputy report
Hi all, I'm zhaobo. I was the bug deputy for the last week, and I'm afraid I cannot attend the coming upstream meeting, so I'm sending out this report: Last week there were some high-priority bugs for Neutron, and some other bugs that need attention. I list them here: [High] Deleting a port on a system with 1K ports takes too long https://bugs.launchpad.net/neutron/+bug/1779882 As described, deleting 1K ports can take around 35s, and the time seems mostly related to policy checking. We need to check how to improve the performance if it's an issue. Also, thanks @Ajo for the correction. L3 AttributeError in doc job https://bugs.launchpad.net/neutron/+bug/1779801 Queens neutron broken with recent L3 removal from neutron-lib.constants https://bugs.launchpad.net/neutron/+bug/1780376 These bugs need attention, as newer neutron-lib releases removed some in-use code (https://github.com/openstack/neutron-lib/commit/ec829f9384547864aebb56390da8e17df7051aac). This has already affected the Neutron Queens release. [Medium] A race condition may occur when concurrent agent scheduling happens https://bugs.launchpad.net/neutron/+bug/1780357 The DHCP and L3 agents may be in a race condition during the scheduling process. [Need Attention] Sending SIGHUP to neutron-server process causes it to hang https://bugs.launchpad.net/neutron/+bug/1780139 This bug was hit in the Queens release in a container environment; we need help testing from someone who is familiar with it. [fwaas] FWaaS instance stuck in PENDING_CREATE when devstack enable fwaas-v1 https://bugs.launchpad.net/neutron/+bug/1779978 (FWaaS v1?) Add or remove firewall rules, caused the status of associated firewall becomes "PENDING_UPDATE" https://bugs.launchpad.net/neutron/+bug/1780883 These two bugs seem to hit the same issue; I will fix it and associate the fix with the first one. Both of them seem to be FWaaS v1 devstack configuration issues. 
Thanks, Best Regards, ZhaoBo __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova]Notification update week 28
On Mon, Jul 9, 2018 at 12:38 PM, Balázs Gibizer wrote: Hi, Here is the latest notification subteam update. [...] Weekly meeting -- The next meeting is planned to be held on 10th of July on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180710T17 I cannot make it to the meeting today. Sorry for the short notice but the meeting is cancelled. Cheers, gibi __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure
On Mon, 9 Jul 2018, Matthew Treinish wrote: It's definitely a bug, and likely a bug in stestr (or one of the lower level packages like testtools or python-subunit), because that's what's generating the return code. Tox just looks at the return code from the commands to figure out if things were successful or not. I'm a bit surprised by this though; I thought we covered the unxsuccess and xfail cases, because I would have expected cdent to file a bug if it didn't. Looking at the stestr tests, we don't have coverage for the unxsuccess case, so I can see how this slipped through. This was reported on testrepository some years ago and a bit of analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196 So yeah, I did file a bug but it fell off the radar during those dark times. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] creating instance
Hi, I am trying to create an instance of cirros os (Project/Compute/Instances). I am getting the following error. Error: Failed to perform requested operation on instance "cirros1", the instance has an error status: Please try again later [Error: Build of instance 5de65e6d-fca6-4e78-a688-ead942e8ed2a aborted: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-91535564-4caf-4975-8eff-7bca515d414e)]. How can I debug the error? Thanks, Jayshankar __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume
Data retained after deletion of a ScaleIO volume --- ### Summary ### Certain storage volume configurations allow newly created volumes to contain previous data. This could lead to leakage of sensitive information between tenants. ### Affected Services / Software ### Cinder releases up to and including Queens with ScaleIO volumes using thin volumes and zero padding. ### Discussion ### Using both thin volumes and zero padding does not ensure data contained in a volume is actually deleted. The default volume provisioning rule is set to thick so most installations are likely not affected. Operators can check their configuration in `cinder.conf` or check for zero padding with this command `scli --query_all`. ### Recommended Actions ### Operators can use the following two workarounds, until the release of Rocky (planned 30th August 2018) which resolves the issue. 1. Swap to thin volumes 2. Ensure ScaleIO storage pools use zero-padding with: `scli --modify_zero_padding_policy (((--protection_domain_id <id> | --protection_domain_name <name>) --storage_pool_name <name>) | --storage_pool_id <id>) (--enable_zero_padding | --disable_zero_padding)` ### Contacts / References ### Author: Nick Tait This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 Mailing List : [Security] tag on openstack-dev@lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [mistral] Clearing out old gerrit reviews
Agreed. On 10 July 2018 at 05:42, Renat Akhmerov wrote: > Dougal, I’m totally OK with this idea. > > Thanks > > Renat Akhmerov > @Nokia > On 9 Jul 2018, 22:14 +0700, Dougal Matthews , wrote: > > Hey folks, > > I'd like to propose that we start abandoning old Gerrit reviews. > > This report shows how stale and out of date some of the reviews are: > http://stackalytics.com/report/reviews/mistral-group/open > > I would like to initially abandon anything without any activity for a > year, but we might want to consider a shorter limit - maybe 6 months. > Reviews can be restored, so the risk is low. > > What do you think? Any objections or counter suggestions? > > If I don't hear any complaints, I'll go ahead with this next week (or > maybe the following week). > > Cheers, > Dougal > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] swift containers.
Hi, The debugging output is as below. python firststack.py Manager defaults:unknown running task compute.GET.servers.detail REQ: curl -g -i -X GET http://192.168.0.19:5000/v3 -H "Accept: application/json" -H "User-Agent: openstacksdk/0.11.3 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" RESP: [200] Date: Tue, 10 Jul 2018 01:15:04 GMT Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding x-openstack-request-id: req-3c244cef-1c7c-4d51-9b18-e6e1d5418713 Content-Encoding: gzip Content-Length: 196 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json RESP BODY: {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.0.19:5000/v3/", "rel": "self"}]}} GET call to None for http://192.168.0.19:5000/v3 used request id req-3c244cef-1c7c-4d51-9b18-e6e1d5418713 Making authentication request to http://192.168.0.19:5000/v3/auth/tokens {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "2e79a56540684ebb8fc177433d67b2a5", "name": "admin"}], "expires_at": "2018-07-10T02:15:05.00Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "4a0d46f830044e74b1a84c93e5dbacda", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.168.0.19:9696", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "0b574bc23cc54bd8a1266ed858a2e87f"}, {"url": "http://192.168.0.19:9696", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "3066119a6c9147fa8e4626725c3a34ad"}, {"url": "http://192.168.0.19:9696", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "4243ebf7df0f46fbb062b828d7147ca4"}], "type": "network", "id": "2c9f0da1dc514008bdc8bf967be6eeaa", "name": "neutron"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "59c73f5064b7494faa5ca3b389403746"}, {"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "addbbfdb56a244eba884e3995a548b16"}, {"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "f44419832a474e3fa08716945b520219"}], "type": "volumev2", "id": "31160ee3b1c54c8ca5a90c417f4f1425", "name": "cinderv2"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "19b9b5d72f4540f183c4ab574d3efd71"}, {"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "1ded29d260604e9b9cf14706fa558a21"}, {"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c4021436be5845cf8efa797f27e48b63"}], "type": "volumev3", "id": "3da7323094724d35b987fe60fbc7ea38", "name": "cinderv3"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "6b5f1f96bef1441fa16947e3d2578732"}, {"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "d9dfe7db65824874af7a093f16a7ebd0"}, {"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "fed975472ca849b0a4d39570c3ab941b"}], "type": "volume", "id": "600f1705da8a41aeb87d22cff26a7d49", "name": "cinder"}, {"endpoints": [{"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": 
"admin", "region": "RegionOne", "region_id": "RegionOne", "id": "2bea479cb5ea4d128ce9e7f8009be760"}, {"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "babb14683847492eb3129535bda12f78"}, {"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c82c93df6ffa4780a1e4c8912877f710"}], "type": "compute", "id": "6b4e3642519941bbbfb9c4163da331c7", "name": "nova"}, {"endpoints": [{"url": "http://192.168.0.12:8041", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "00055fa2240248bf9e693a1d446c7c59"}, {"url": "http://192.168.0.12:8041", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "88431c7c2f67409fb0fc41fe68ec3ead"}, {"url": "http://192.168.0.12:8041", "interface": "admin",