Re: [openstack-dev] [python3] Enabling py37 unit tests
On Tue, 6 Nov 2018, Corey Bryant wrote:

> I'd like to get an official +1 here on the ML from parties such as
> the TC and infra in particular but anyone else's input would be
> welcomed too. Obviously individual projects would have the right to
> reject proposed changes that enable py37 unit tests. Hopefully they
> wouldn't, of course, but they could individually vote that way.

Speaking as someone on the TC but not "the TC", as well as someone active in a few projects: +1. As shown elsewhere in the thread, the impact on node consumption and queue lengths shouldn't be huge, and the benefits are high. From an openstack/placement standpoint, please go for it if nobody else beats you to it.

To me the benefits are simply that we find bugs sooner. It's bizarre to me that we even need to think about this. The sooner we find them, the less they impact people who want to use our code. Will it cause breakage and extra work for us now? Possibly, but it's like making an early payment on the mortgage: we are saving cost later.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
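For a project opting in, the proposed change is typically a small addition to its Zuul configuration. A minimal sketch, assuming the py37 unit test job is named openstack-tox-py37 (the actual job or template name is whatever Corey's patches define):

```yaml
# Sketch of a per-project change enabling py37 unit tests. The job name
# openstack-tox-py37 is an assumption; use whatever the real patches add.
- project:
    check:
      jobs:
        - openstack-tox-py37
    gate:
      jobs:
        - openstack-tox-py37
```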
Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir
On Sun, 4 Nov 2018, Monty Taylor wrote:

> I've floated a half-baked version of this idea to a few people, but
> lemme try again with some new words. What if we added support for
> serving vendor data files from the root of a primary URL as-per RFC
> 5785. Specifically, support deployers adding a json file to
> .well-known/openstack/client that would contain what we currently
> store in the openstacksdk repo and were just discussing splitting
> out.

Sounds like a good plan. I'm still a bit vexed that we need to know a cloud's primary host, then this URL, then get a URL for auth, and from there start gathering up information about the services and then their endpoints. All of that seems of one piece to me and there should be one way to do it. But in the absence of that, this is a good plan.

> What do people think?

I think cats are nice and so is this plan.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
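To make the idea concrete, the served document would presumably mirror one of the vendor profiles already carried in the openstacksdk repo. A hypothetical sketch of what a cloud might publish at /.well-known/openstack/client, shown as YAML for readability (the real file would be the JSON equivalent, and the exact schema is whatever openstacksdk currently uses):

```yaml
# Hypothetical vendor profile; field names mirror the openstacksdk
# vendor files and are illustrative, not a definitive schema.
name: example-cloud
profile:
  auth:
    auth_url: https://identity.example.cloud/v3
  identity_api_version: "3"
  regions:
    - region-one
    - region-two
  interface: public
```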
Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker
ituated to do some deeper profiling and benchmarking of placement to find the elbows in that. * It seems like Eric and Jay are probably best situated to define and refine what should really be going on with the resource tracker and other actions on the compute-node. * We need to have further discussion and investigation on allocations getting out of sync. Volunteers? What else? [1] https://review.openstack.org/#/c/614886/ [2] https://docs.google.com/document/d/1d5k1hA3DbGmMyJbXdVcekR12gyrFTaj_tJdFwdQy-8E/edit -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [placement] update 18-44
enstack/blazar+topic:bp/placement-api> Blazar using the placement-api * <https://review.openstack.org/#/c/614896/> Placement role for ansible project config * <https://review.openstack.org/#/c/614285/> hyperv bump placement version # End Apologies if this is messier than normal, I'm rushing to get it out before I travel. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling
On Wed, 31 Oct 2018, Eduardo Gonzalez wrote: - Run db syncs as there is not command for that yet in the master branch - Apply upgrade process for db changes The placement-side pieces for this are nearly ready, see the stack beginning at https://review.openstack.org/#/c/611441/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling
On Tue, 30 Oct 2018, Mohammed Naser wrote: We spoke about this today in the OpenStack Ansible meeting, we've come up with the following steps: Great! Thank you, Guilherme, and Lee very much. 1) Create a role for placement which will be called `os_placement` located in `openstack/openstack-ansible-os_placement` 2) Integrate that role with the OSA master and stop using the built-in placement service 3) Update the playbooks to handle upgrades and verify using our periodic upgrade jobs Makes sense. The difficult task really comes in the upgrade jobs, I really hope that we can get some help on this as this probably puts a bit of a load already on Guilherme, so anyone up to look into that part when the first 2 are completed? :) The upgrade-nova script in https://review.openstack.org/#/c/604454/ has been written to make it pretty clear what each of the steps mean. With luck those steps can translate to both the ansible and tripleo environments. Please feel free to add me to any of the reviews and come calling in #openstack-placement with questions if there are any. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
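For readers unfamiliar with OpenStack-Ansible, steps 1 and 2 amount to a new role repository plus a playbook that applies it to the relevant hosts. A heavily hedged sketch, in which the play name, host group, and layout are assumptions rather than real OSA configuration:

```yaml
# Hypothetical playbook consuming the proposed os_placement role; the
# host group and other details are assumptions, not actual OSA config.
- name: Install the placement service
  hosts: placement_all
  user: root
  roles:
    - role: os_placement
```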
[openstack-dev] [qa] [api] [all] gabbi-tempest for integration tests
Earlier this month I produced a blog post on something I was working on to combine gabbi (the API tester used in placement, gnocchi, heat, and a few other projects) with tempest to create a simple two step process for having some purely YAML-driven and HTTP API-based testing of any project that can test with tempest. That blog posting is at: https://anticdent.org/gabbi-in-the-gate.html

I've got it working now, the necessary patches have merged in tempest, and gabbi-tempest is now part of openstack's infra. A pending patch in nova shows how it can work: https://review.openstack.org/#/c/613386/

The two steps are:

* Add a new job in .zuul.yaml with a parent of 'gabbi-tempest'.
* Create some gabbi YAML files containing tests in a directory named in that zuul job.
* Profit.

There are a few different pieces that have come together to make this possible:

* The magic of zuul v3, local job config and job inheritance.
* gabbi: https://gabbi.readthedocs.io/
* gabbi-tempest: https://gabbi-tempest.readthedocs.io/ and https://git.openstack.org/cgit/openstack/gabbi-tempest and the specific gabbi-tempest zuul job: https://git.openstack.org/cgit/openstack/gabbi-tempest/tree/.zuul.yaml#n11
* tempest plugins and other useful ways of getting placement to run in different ways

I hope this is useful for people. Using gabbi is a great way to make sure that your HTTP API is usable with lots of different clients and without maintaining a lot of state. Let me know if you have any questions or if you are interested in helping to make gabbi-tempest more complete and well documented. I've been approving my own code the past few patches and that feels a bit dirty.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
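A sketch of what those two steps might look like in practice. The job name, the gabbits directory, and the variable used to point at it are assumptions; the gabbi-tempest .zuul.yaml linked above is the authority for the real knobs:

```yaml
# .zuul.yaml in the consuming project (sketch)
- job:
    name: myproject-gabbi-tempest    # hypothetical job name
    parent: gabbi-tempest
    vars:
      # Assumed variable pointing at the directory of gabbi YAML files;
      # check the gabbi-tempest job definition for the real setting.
      gabbi_tempest_path: gate/gabbits
```

And a minimal gabbi file to go in that directory:

```yaml
# gate/gabbits/smoke.yaml (sketch). How the target service endpoint is
# selected depends on gabbi-tempest's configuration, so the URL here is
# illustrative only.
tests:
  - name: placement api responds
    GET: /resource_providers
    status: 200
    response_strings:
      - resource_providers
```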
[openstack-dev] [placement] update 18-43
eview.openstack.org/#/c/601866/> Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.) * <https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios> Improve handling of default allocation ratios * <https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api> Neutron minimum bandwidth implementation * <https://review.openstack.org/#/c/602160/> Add OWNERSHIP $SERVICE traits * <https://review.openstack.org/#/c/604182/> Puppet: Initial cookiecutter and import from nova::placement * <https://review.openstack.org/#/c/586960/> zun: Use placement for unified resource management * <https://review.openstack.org/#/q/topic:bug/1799727> Update allocation ratio when config changes * <https://review.openstack.org/#/q/topic:bug/1799892> Deal with root_id None in resource provider * <https://review.openstack.org/#/q/topic:bug/1795992> Use long rpc timeout in select_destinations * <https://review.openstack.org/#/c/529343/> Cleanups for scheduler code * <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider> Bandwith Resource Providers! * <https://review.openstack.org/#/q/topic:bug/1799246> Harden placement init under wsgi * <https://review.openstack.org/#/q/topic:cd/gabbi-tempest-job> Using gabbi-tempest for integration tests. * <https://review.openstack.org/#/c/613118/> Make tox -ereleasenotes work * <https://review.openstack.org/#/c/613343/> placement: Add a doc describing a quick live environment # End It's tired around here. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [tc] [api] Paste Maintenance
On Wed, 24 Oct 2018, Jean-Philippe Evrard wrote:

> On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote:
>> Also, doesn't bitbucket have a git interface now too (optionally)?
>
> It does :) But I think it requires a new repo, so it means that could
> as well move to somewhere else like github or openstack infra :p

Right, so that, combined with bitbucket oozing surveys and assorted other annoyances over me, has meant that I've moved paste to github: https://github.com/cdent/paste

I merged some of the outstanding patches, forced Zane to fix up a few more Python 3.7 related things, fixed up some of the docs and released a new version (3.0.0) to pypi: https://pypi.org/p/Paste

And I published the docs (linked from the new release and the repo) to a new URL on RTD, as older versions of the docs were not something I was able to adopt: https://pythonpaste.readthedocs.io

And some travis-ci stuff.

I didn't bother to bring Paste into OpenDev infra because that felt like it was indicating a longer and more engaged commitment than the responses here suggested should happen. We want to encourage migration away. As Morgan stated elsewhere in the thread [1], work is in progress to make using something else easier for people.

If you want to help with Paste, make some issues and pull requests in the repo above. Thanks.

Next step? paste.deploy (which is a separate repo).

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135937.html

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [all] [tc] [api] Paste Maintenance
On Mon, 22 Oct 2018, Chris Dent wrote: Thus far I'm not hearing any volunteers. If that continues to be the case, I'll just keep it on bitbucket as that's the minimal change. As there was some noise that suggested "if you make it use git I might help", I put it on github: https://github.com/cdent/paste I'm now in the process of getting it somewhat sane for modern python, however test coverage isn't that great so additional work is required. Once it seems mostly okay, I'll push out a new version to pypi. I welcome assistance from any and all. And, rather importantly, we also need to take over pastedeploy as well, as the functionality there is also important. I've started that ball rolling. If having it live in my github proves a problem we can easily move it along somewhere else, but this was the shortest hop. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Proposal for a process to keep up with Python releases
On Fri, 19 Oct 2018, Zane Bitter wrote: Just to make it easier to visualise, here is an example for how the Zuul config _might_ look now if we had adopted this proposal during Rocky: https://review.openstack.org/611947 And instead of having a project-wide goal in Stein to add `openstack-python36-jobs` to the list that currently includes `openstack-python35-jobs` in each project's Zuul config[1], we'd have had a goal to change `openstack-python3-rocky-jobs` to `openstack-python3-stein-jobs` in each project's Zuul config. I like this, because it involves conscious actions, awareness and self-testing by each project to move forward to a thing with a reasonable name (the cycle name). I don't think we should call that "churn". "Intention" might be a better word. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
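Sketching the difference in a project's .zuul.yaml makes the proposal easier to see; the per-cycle template name below comes from the proposal itself, not from any template that exists today:

```yaml
- project:
    templates:
      # Status quo: one template per Python minor version, appended to
      # over time as new interpreters appear:
      #   - openstack-python35-jobs
      #   - openstack-python36-jobs
      # Proposed: a single per-cycle template, consciously bumped each
      # cycle (openstack-python3-rocky-jobs -> openstack-python3-stein-jobs):
      - openstack-python3-stein-jobs
```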
Re: [openstack-dev] [all] [tc] [api] Paste Maintenance
On Fri, 19 Oct 2018, Thierry Carrez wrote:

> Ed Leafe wrote:
>> On Oct 15, 2018, at 7:40 AM, Chris Dent wrote:
>>> I'd like some input from the community on how we'd like this to go.
>>
>> I would say it depends on the long-term plans for paste. Are we
>> planning on weaning ourselves off of paste, and simply need to
>> maintain it until that can be completed, or are we planning on
>> encouraging its use?
>
> Agree with Ed... is this something we plan to minimally maintain
> because we depend on it, something that needs feature work and that
> we want to encourage the adoption of, or something that we want to
> keep on life-support while we move away from it?

That is indeed the question. I was rather hoping that some people who are using paste (besides Keystone) would chime in here with what they would like to do.

My preference would be that we immediately start moving away from it and keep paste barely on life-support (a bit like WSME, which I also somehow managed to get involved with despite thinking it is horrible). However, that's not easy to do, because the paste.ini files have to be considered config, given the way some projects and deployments use them to drive custom middleware and the ordering of middleware. So we're in for at least a year or so.

> My assumption is that it's "something we plan to minimally maintain
> because we depend on it", in which case all options would work: the
> exact choice depends on whether there is anybody interested in
> helping maintain it, and where those contributors prefer to do the
> work.

Thus far I'm not hearing any volunteers. If that continues to be the case, I'll just keep it on bitbucket as that's the minimal change. My concern with that is my aforementioned feeling that "it is horrible". It might be better if someone who actually appreciates Paste were involved as well.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [placement] update 18-42
bit closer](https://review.openstack.org/#/c/611678/). Successful devstack is dependent on us having a reasonable solution to (2). For the moment [a hacked up script](https://review.openstack.org/#/c/600161/) is being used to create tables. Ed has started some work on [moving to alembic](https://review.openstack.org/#/q/topic:1alembic). We have work in progress to tune up the documentation but we are not yet publishing documentation (3). We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out. # Other Various placement changes out in the world. * <https://review.openstack.org/#/q/topic:bug/1798163> The fix, in placement, for the consumer id group by problem. * <https://review.openstack.org/#/c/601866/> Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.) * <https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios> Improve handling of default allocation ratios * <https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api> Neutron minimum bandwidth implementation * <https://review.openstack.org/#/c/607953/> TripleO: Use valid_interfaces instead of os_interface for placement * <https://review.openstack.org/#/c/602160/> Add OWNERSHIP $SERVICE traits * <https://review.openstack.org/#/c/604182/> Puppet: Initial cookiecutter and import from nova::placement * <https://review.openstack.org/#/c/601407/> WIP: Add placement to devstack-gate PROJECTS This was done somewhere else wasn't it, so could this be abandoned? * <https://review.openstack.org/#/c/586960/> zun: Use placement for unified resource management # End Hi! -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [placement] devstack, grenade, database management
TL;DR: We need reviews on https://review.openstack.org/#/q/topic:cd/placement-solo+status:open and work on database management command line tools. More detail within. The stack of code, mostly put together by Matt, to get migrating placement-in-nova to placement-in-placement working is passing its tests. You can see the remaining pieces of not yet merged code at https://review.openstack.org/#/q/topic:cd/placement-solo+status:open Once that is fully merged, the first bullet point on the extraction plan at http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html will be complete and we'll have a model for how the next two bullet points can be done. At this time, there are two main sticking points to getting things merged: * The devstack, grenade, and devstack-gate changes need some review to make sure that some of the tricks Matt and I performed are acceptable to everyone. They are at: https://review.openstack.org/600162 https://review.openstack.org/604454 https://review.openstack.org/606853 * We need to address database creation scripts and database migrations. There's a general consensus that we should use alembic, and start things from a collapsed state. That is, we don't need to represent already existing migrations in the new repo, just the present-day structure of the tables. Right now the devstack code relies on a stubbed out command line tool at https://review.openstack.org/#/c/600161/ to create tables with a metadata.create_all(). This is a useful thing to have but doesn't follow the "db_sync" pattern set elsewhere, so I haven't followed through on making it pretty but can do so if people think it is useful. Whether we do that or not, we'll still need some kind of "db_sync" command. Do people want me to make a cleaned up "create" command? Ed has expressed some interest in exploring setting up alembic and the associated tools but that can easily be a more than one person job. Is anyone else interested? It would be great to get all this stuff working sooner than later. Without it we can't do two important tasks: * Integration tests with the extracted placement [1]. * Hacking on extracted placement in/with devstack. Another issue that needs some attention, but is not quite as urgent is the desire to support other databases during the upgrade, captured in this change https://review.openstack.org/#/c/604028/ [1] There's a stack of code for enabling placement integration tests starting at https://review.openstack.org/#/c/601614/ . It depends on the devstack changes. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] [tc] [api] Paste Maintenance
Back in August [1] there was an email thread about the Paste package being essentially unmaintained while several OpenStack projects still use it. At that time we reached the conclusion that we should investigate having OpenStack adopt Paste in some form, as it would take some time, or not be worth it, to migrate services away from it.

I went about trying to locate the last set of maintainers and get access to pick it up. It took a while, but I've now got owner bits for both bitbucket and PyPI and enthusiastic support from the previous maintainer for OpenStack to be the responsible party.

I'd like some input from the community on how we'd like this to go. Some options:

* Chris becomes the de-facto maintainer of paste and I do whatever I like to get it healthy and released.
* Several volunteers from the community take over the existing bitbucket setup [2] and keep it going there.
* Several volunteers from the community import the existing bitbucket setup to OpenStack^wOpenDev infra and manage it.

What would people like? Who would like to volunteer?

At this stage the main piece of blocking work is a patch [3] (and subsequent release) to get things working happily in Python 3.7.

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/132792.html
[2] https://bitbucket.org/ianb/paste
[3] https://bitbucket.org/ianb/paste/pull-requests/41/python-37-support/diff

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours
On Sat, 13 Oct 2018, Mohammed Naser wrote:

> Does this seem like it would be of interest to the community? I am
> currently trying to transform our office hours to be more of a space
> where we have more of the community and less of discussion between
> us.

If we want discussion to actually be with the community at large (rather than giving lip service to the idea), then we need to be more oriented to using email. Each time we have an office hour or a meeting in IRC or elsewhere, or an ad hoc Hangout, unless we are super disciplined about reporting the details to email afterwards, a great deal of information falls on the floor, and individuals who are unable to attend because of time, space, language or other constraints are left out.

For community-wide issues, synchronous discussion should be the mode of last resort. Anything else creates a priesthood with a disempowered laity wondering how things got away from them. For community goals, in particular, preferring email for discussion and planning seems pretty key.

I wonder if, instead of specifying topics for TC office hours, we should kill them? They've turned into gossiping echo chambers.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo
On Wed, 10 Oct 2018, Greg Hill wrote: I guess I'm putting it forward to the larger community. Does anyone have any objections to us doing this? Are there any non-obvious technicalities that might make such a transition difficult? Who would need to be made aware so they could adjust their own workflows? I've been on both sides of conversations like this a few different times. Generally speaking people who are not already in the OpenStack environment express an unwillingness to participate because of perceptions of walled-garden and too-many-hoops. Whatever the reality of the situation, those perceptions matter, and for libraries that are already or potentially useful to people who are not in OpenStack, being "outside" is probably beneficial. And for a library that is normally installed (or should optimally be installed because, really, isn't it nice to be decoupled?) via pip, does it matter to OpenStack where it comes from? Or would it be preferable to just fork and rename the project so openstack can continue to use the current taskflow version without worry of us breaking features? Fork sounds worse. I've had gabbi contributors tell me, explicitly, that they would not bother contributing if they had to go through what they perceive to be the OpenStack hoops. That's anecdata, but for me it is pretty compelling. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [placement] update 18-40
oviders happy and fixing bugs discovered in placement in the process. This is creeping ahead. There is plenty of discussion going along nearby with regards to various ways they are being used, notably GPUs. * <https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates> I feel like I'm missing some things in this area. Please let me know if there are others. This is related: * <https://review.openstack.org/#/c/589085/> Pass allocations to virt drivers when resizing ## Extraction There continue to be three main tasks in regard to placement extraction: 1. upgrade and integration testing 2. database schema migration and management 3. documentation publishing The upgrade aspect of (1) is in progress with a [patch to grenade](https://review.openstack.org/#/c/604454/) and a [patch to devstack](https://review.openstack.org/#/c/600162/). This is very close to working. The remaining failures are with jobs that do not have `openstack/placement` in `$PROJECTS`. Once devstack is happy then we can start thinking about integration testing using tempest. I've started some experiments with [using gabbi](https://review.openstack.org/#/c/601614/) for that. I've explained my reasoning in [a blog post](https://anticdent.org/gabbi-in-the-gate.html). Successful devstack is dependent on us having a reasonable solution to (2). For the moment [a hacked up script](https://review.openstack.org/#/c/600161/) is being used to create tables. This works, but is not sufficient for deployers nor for any migrations we might need to do. Moving to alembic seems a reasonable thing to do, as a part of that. We have work in progress to tune up the documentation but we are not yet publishing documentation (3). We need to work out a plan for this. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out. # Other Going to start highlighting some specific changes across several projects. If you're aware of something I'm missing, please let me know. * <https://review.openstack.org/#/c/601866/> Generate sample policy in placement directory (This is a bit stuck on not being sure what the right thing to do is.) * <https://review.openstack.org/#/q/topic:reduce-complexity+status:open> Some efforts by Eric to reduce code complexity * <https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios> Improve handling of default allocation ratios * <https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api> Neutron minimum bandwidth implementation * <https://review.openstack.org/#/c/607953/> TripleO: Use valid_interfaces instead of os_interface for placement * <https://review.openstack.org/#/c/605507/> Puppet: Separate placement database is not deprecated * <https://review.openstack.org/#/c/602160/> Add OWNERSHIP $SERVICE traits * <https://review.openstack.org/#/c/604182/> Puppet: Initial cookiecutter and import from nova::placement * <https://review.openstack.org/#/c/601407/> WIP: Add placement to devstack-gate PROJECTS * <https://review.openstack.org/#/c/586960/> zun: Use placement for unified resource management # End I'm going to be away next week, so if any my pending code needs some fixes and is blocking other stuff, please fix it. Also, there will be no pupdate next week (unless someone else does one). 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [tc] bringing back formal TC meetings
On Thu, 4 Oct 2018, Doug Hellmann wrote: TC members, please reply to this thread and indicate if you would find meeting at 1300 UTC on the first Thursday of every month acceptable, and of course include any other comments you might have (including alternate times). +1 Also, if we're going to set aside a time for a semi-formal meeting, I hope we will have some form of agenda and minutes, with a fairly clear process for setting that agenda as well as a process for making sure that the fast and/or rude typers do not dominate the discussion during the meetings, as they used to back in the day when there were weekly meetings. The "raising hands" thing that came along towards the end sort of worked, so a variant on that may be sufficient. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
On Wed, 3 Oct 2018, Chris Dent wrote: I'd really like to see this become a real thing, so if I could get some help from tempest people on how to make it in line with expectations that would be great. I've written up the end game of what I'm trying to achieve in a bit more detail at https://anticdent.org/gabbi-in-the-gate.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
On Tue, 2 Oct 2018, Chris Dent wrote:

> One of the comments in there is about the idea of making a zuul job
> which is effectively "run the gabbits in these dirs" against a
> tempest set up. Doing so will require some minor changes to the
> tempest tox passenv settings but I think it ought to be
> straightforwardish.

I've made a first stab at this:

* Small number of changes to tempest: https://review.openstack.org/#/c/607507/ (The important change here, the one that strictly required changes to tempest, is adjusting passenv in tox.ini.)
* A much smaller job on the placement side: https://review.openstack.org/#/c/607508/

I'd really like to see this become a real thing, so if I could get some help from tempest people on how to make it in line with expectations, that would be great.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [tc] [all] TC Report 18-40
HTML: https://anticdent.org/tc-report-18-40.html I'm going to take a break from writing the TC reports for a while. If other people (whether on the TC or not) are interested in producing their own form of a subjective review of the week's TC activity, I very much encourage you to do so. It's proven an effective way to help at least some people maintain engagement. I may pick it up again when I feel like I have sufficient focus and energy to produce something that has more value and interpretation than simply pointing at [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/). However, at this time, I'm not producing a product that is worth the time it takes me to do it and the time it takes away from doing other things. I'd rather make more significant progress on fewer things. In the meantime, please join me in congratulating and welcoming the newly elected members of the TC: Lance Bragstad, Jean-Philippe Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy Stanley. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
On Wed, 19 Sep 2018, Monty Taylor wrote:

> Yes. Your life will be much better if you do not make more legacy
> jobs. They are brittle and hard to work with. New jobs should either
> use the devstack base job, the devstack-tempest base job or the
> devstack-tox-functional base job - depending on what things are
> intended.

I have a thing mostly working at https://review.openstack.org/#/c/601614/ The commit message has some ideas on how it could be better and the various hacks I needed to do to get things working.

One of the comments in there is about the idea of making a zuul job which is effectively "run the gabbits in these dirs" against a tempest set up. Doing so will require some minor changes to the tempest tox passenv settings, but I think it ought to be straightforwardish.

Some reviews from people who understand these things more than me would be most welcome.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [placement] The "intended purpose" of traits
On Sat, 29 Sep 2018, Jay Pipes wrote: I don't think that's a fair statement. You absolutely *do* care which way we go. You want to encode multiple bits of information into a trait string -- such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to understand that this trait string has multiple bits of information encoded in it (the fact that it's a PCI device and that the PCI device is at 01_AB_23_CD). You don't see a problem encoding these variants inside a string. Chris doesn't either. Lest I be misconstrued, I'd like to clarify: What I was trying to say elsewhere in the thread was that placement should never be aware of _anything_ that is in the trait string (except CUSTOM_* when validating ones that are added, and MISC_SHARES[...] for sharing providers). On the placement server side, input is compared solely for equality with stored data and nothing else, and we should never allow value comparisons, string fragments, regex, etc. So from a code perspective _placement_ is completely agnostic to whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or "JAY_LIKES_CRUNCHIE_BARS". However, things which are using traits (e.g., nova, ironic) need to make their own decisions about how the value of traits are interpreted. I don't have a strong position on that except to say that _if_ we end up in a position of there being lots of traits willy nilly, people who have chosen to do that need to know that the contract presented by traits right now (present or not present, no value comprehension) is fixed. I *do* see a problem with it, based on my experience in Nova where this kind of thing leads to ugly, unmaintainable, and incomprehensible code as I have pointed to in previous responses. I think there are many factors that have led to nova being incomprehensible and indeed bad representations is one of them, but I think reasonable people can disagree on which factors are the most important and with sufficient discussion come to some reasonable compromises. I personally feel that while the bad representations (encoding stuff in strings or json blobs) thing is a big deal, another major factor is a predilection to make new apis, new abstractions, and new representations rather than working with and adhering to the constraints of the existing ones. This leads to a lot of code that encodes business logic in itself (e.g., several different ways and layers of indirection to think about allocation ratios) rather than working within strong and constraining contracts. From my standpoint there isn't much to talk about here from a placement code standpoint. We should clearly document the functional contract (and stick to it) and we should come up with exemplars for how to make the best use of traits. I think this conversation could allow us to find those examples. I don't, however, want placement to be a traffic officer for how people do things. In the context of the orchestration between nova and ironic and how that interaction happens, nova has every right to set some guidelines if it needs to. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [placement] The "intended purpose" of traits
On Fri, 28 Sep 2018, melanie witt wrote:

> I'm concerned about a lot of repetition here and maintenance headache
> for operators. That's where the thoughts about whether we should
> provide something like a key-value construct to API callers where
> they can instead say:
>
> * OWNER=CINDER
> * RAID=10
> * NUMA_CELL=0
>
> for each resource provider. If I'm off base with my example, please
> let me know. I'm not a placement expert. Anyway, I hope that gives an
> idea of what I'm thinking about in this discussion. I agree we need
> to pick a direction and go with it. I'm just trying to look out for
> the experience operators are going to be using this and maintaining
> it in their deployments.

Despite saying "let's never do this" with regard to having formal support for key/values in placement, if we did choose to do it (if that's what we chose, I'd live with it), when would we do it? We have a very long backlog of features that are not yet done. I believe (I hope obviously) that we will be able to accelerate placement's velocity with it being extracted, but that won't be enough to suddenly be able to quickly do all the things we have on the plate. Are we going to make people wait for some unknown amount of time in the meantime, while there is a grammar that could do some of these things? Unless additional resources come on the scene, I don't think it is either feasible or reasonable for us to consider doing any model extension at this time (irrespective of the merit of the idea).

In some kind of weird belief way I'd really prefer we keep the grammar placement exposes simple, because my experience with HTTP APIs strongly suggests that's very important, and that experience is effectively why I am here, but I have no interest in being a fundamentalist about it. We should argue about it strongly to make sure we get the right result, but it's not a huge deal either way.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
Re: [openstack-dev] [placement] The "intended purpose" of traits
us you (Jay) made and I'd hope we can keep it clean like that.

If we weren't a multiple-service oriented system, and instead had some kind of k8s-like etcd-like keeper-of-all-the-info-about-everything, then sure, having what we currently model as resource providers be a giant blob of metadata (with quantities, qualities, and key-values) that is an authority for the entire system might make some kind of sense. But we don't. If we wanted to migrate to having something like that, using placement as the trojan horse for such a change, either with intent or by accident, would be unfortunate.

> Propose such a thing and I'll gladly support it. But I won't support
> bastardizing the simple concept of a boolean capability just because
> we don't want to change the API or database schema.

For me, it is not a matter of not wanting to change the API or the database schema. It's about not wanting to expand the concepts, and thus the purpose, of the system. It's about wanting to keep focus and functionality narrow so we can have a target which is "maturity" and know when we're there.

My summary: Traits are symbols up to 255 characters long that are associated with a resource provider. It's possible to query for resource providers that have or do not have a specific trait. This has the effect of making the meaning of a trait a descriptor of the resource provider. What the descriptor signifies is up to the thing creating and using the resource provider, not placement. We need to harden that contract and stick to it.

Placement is like a common carrier: it doesn't care what's in the box.

/me cues brad pitt

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
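To illustrate that contract in API terms: the only thing a caller can do with a trait is set it on a provider and then filter on its presence (or, in later microversions, its absence). A sketch in gabbi YAML; the microversion is an assumption and should be checked against the placement API reference:

```yaml
# Traits are matched only for presence or absence, never by value.
defaults:
  request_headers:
    accept: application/json
    # assumed microversion; forbidden (!) traits need a recent one
    openstack-api-version: placement 1.22

tests:
  - name: providers that have a trait
    GET: /resource_providers?required=STORAGE_DISK_SSD
    status: 200

  - name: providers that do not have a custom trait
    GET: /resource_providers?required=!CUSTOM_JAY_LIKES_CRUNCHIE_BARS
    status: 200
```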
[openstack-dev] [placement] update 18-39
. Presumably we don't want to be publishing docs until we are publishing code, but the interdependencies need to be teased out. * We need to decide how we are going to manage database schema migrations (alembic is the modern way) and we need to create the tooling for running those migrations (as well as upgrade checks). This includes deciding how we want to manage command line tools (using nova's example or something else). Until those things happen we don't have a "thing" which people can install and run, unless they do some extra hacking around which we don't want to impose upon people any longer than necessary. # Other As with last time, I'm not going to make a list of links to pending changes that aren't already listed above. I'll start doing that again eventually (once priorities are more clear), but for now it is useful to look at [open placement patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) and patches from everywhere which [mention placement in the commit message](https://review.openstack.org/#/q/message:placement+status:open). # End Taking a few days off is a great way to get out of sync. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv
On Fri, 28 Sep 2018, Matthew Treinish wrote: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683 Right above this line it shows that the gabbi-tempest plugin is installed in the venv: http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661 Ah, so it is, thanks. My grepping and visual-grepping failed because of the weird linebreaks. Le sigh. For curiosity: What's the processing that is making it be installed twice? I ask because I'm hoping to (eventually) trim this to as small and light as possible. And then even more eventually I hope to make it so that if a project chooses the right job and has a gabbits directory, they'll get run. The part that was confusing for me was that the virtual env that lib/tempest (from devstack) uses is not even mentioned in tempest's tox.ini, so is using its own directory as far as I could tell. My guess is that the plugin isn't returning any tests that match the regex. I'm going to run it without a regex and see what it produces. It might be that pre job I'm using to try to get the gabbits in the right place is not working as desired. A few patchsets ago when I was using the oogly way of doing things it was all working. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv
I'm still trying to figure out how to properly create a "modern" (as in zuul v3 oriented) integration test for placement using gabbi and tempest. That work is happening at https://review.openstack.org/#/c/601614/

There was lots of progress made after the last message on this topic http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html but I've reached another interesting impasse.

From devstack's standpoint, the way to say "I want to use a tempest plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are. devstack:lib/tempest then does a:

    tox -evenv-tempest -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163

I have this part working as expected.

However, the advice is then to create a new job that has a parent of devstack-tempest. That zuul job runs a variety of tox environments, depending on the setting of the `tox_envlist` var. If you wish to use a `tempest_test_regex` (I do), the preferred tox environment is 'all'. That venv doesn't have the plugin installed, thus no gabbi tests are found:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683

How do I get my plugin installed into the right venv while still following the guidelines for good zuul behavior?

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [placement] Tetsuro Nakamura now core
Since there were no objections and a week has passed, I've made Tetsuro a member of placement-core. Thanks for your willingness and continued help. Use your powers wisely. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [placement] update 18-38
NUMA topology with RPs * <https://review.openstack.org/#/c/552105/> Support initial allocation ratios (There are at least two pending allocation ratio handling cleanup specs. It's not clear from the PTG etherpad which of these was chosen as the future (we did choose, but the etherpad is confusing). 544683 (above) is the other one.) * <https://review.openstack.org/#/c/569011/> Count quota based on resource class # Main Themes These are interim themes while we work out what priorities are. ## Making Nested Useful An acknowledged outcome from the PTG was that we need to do the work to make workloads that want to use nested resource providers actually able to land on a host somewhere. This involves work across many parts of nova and could easily lead to a mass of bug fixes in placement. I'm probably missing a fair bit but the following topics are good starting points: * <https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates> * <https://review.openstack.org/#/q/topic:use-nested-allocation-candidates> * <https://review.openstack.org/#/q/topic:bug/1792503> ## Consumer Generations gibi is still working hard to drive home support for consumer generations on the nova side. Because of some dependency management that stuff is currently in the following topic: * <https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates> ## Extraction As mentioned above, getting the extracted placement happy is proceeding apace. Besides many of the generic cleanups happening [to the repo](https://review.openstack.org/#/q/project:openstack/placement+status:open) we need to focus some effort on upgrade and integration testing, docs publishing, and doc correctness. Dan has started a [database migration script](https://review.openstack.org/#/c/603234/) which will be used by deployers and grenade for upgrades. Matt is hoping to make some progress on the grenade side of things. I have a [hacked up devstack](https://review.openstack.org/#/c/600162/) for using the extracted placement. All of this is dependent on: * database migrations being "collapsed" * the existence of a `placement-manage` script to initialize the database I made a faked up [placement-manage](https://review.openstack.org/#/c/600161/) for the devstack patch above, but it only creates tables, doesn't migrate, and is not fit for purpose as a generic CLI. I have started [some experiments](https://review.openstack.org/#/c/601614/) on using [gabbi-tempest](https://pypi.org/project/gabbi-tempest/) to drive some integration tests for placement with solely gabbi YAML files. I initially did this using "legacy" style zuul jobs, and made it work, but it was ugly and I've since started using more modern zuul, but haven't yet made it work. # Other As with last time, I'm not going to make a list of links to pending changes that aren't already listed above. I'll start doing that again eventually (once priorities are more clear), but for now it is useful to look at [open placement patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) and patches from everywhere which [mention placement in the commit message](https://review.openstack.org/#/q/message:placement+status:open). # End In case anyone is wondering where I am, I'm out M-W next week. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] Nominating Tetsuro Nakamura for placement-core
I'd like to nominate Tetsuro Nakamura for membership in the placement-core team. Throughout placement's development Tetsuro has provided quality reviews; done the hard work of creating rigorous functional tests, making them fail, and fixing them; and implemented some of the complex functionality required at the persistence layer. He's aware of and respects the overarching goals of placement and has demonstrated pragmatism when balancing those goals against the requirements of nova, blazar and other projects.

Please follow up with a +1/-1 to express your preference. No need to be an existing placement core, everyone with an interest is welcome. Thanks.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"
I have a patch in progress to add some simple integration tests to placement: https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that the method for adding more tests is to simply add more yaml in gate/gabbits, without needing to worry about adding to or think about tempest.

What I have at that patch works; there are two yaml files, one of which goes through the process of confirming the existence of a resource provider and inventory, booting a server, seeing a change in allocations, resizing the server, and seeing a change in allocations again. But this is kludgy in a variety of ways and I'm hoping to get some help or pointers to the right way. I'm posting here instead of asking in IRC as I assume other people confront these same confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled "legacy" that I was able to find in nova's jobs. I get the impression that these are more verbose and duplicative than they need to be and are not aligned with modern zuul v3 coolness.

* It takes an age for the underlying devstack to build. I can presumably save some time by installing fewer services, and by making it obvious how to add more when more are required. What's the canonical way to do this? Mess with {enable,disable}_service, cook the ENABLED_SERVICES var, do something with required_projects?

* This patch, and the one that follows it [1], dynamically install stuff from pypi in the post test hooks, simply because that was the quick and dirty way to get those libs in the environment. What's the clean and proper way? gabbi-tempest itself needs to be in the tempest virtualenv.

* The post.yaml playbook which gathers up logs seems like a common thing, so I would hope it could be DRYed up a bit. What's the best way to do that?

Thanks very much for any input.

[1] perf logging of a loaded placement: https://review.openstack.org/#/c/602484/

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
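For reference, the direction the rest of this thread converges on is roughly: inherit from devstack-tempest, trim services through job variables, and let required-projects put the plugin source on the node. A hedged sketch, in which the job name is made up and the variable names (tox_envlist, tempest_test_regex, devstack_services, devstack_localrc, TEMPEST_PLUGINS) should be checked against the devstack and tempest job definitions:

```yaml
# Sketch of a non-legacy job; names and values are assumptions to be
# verified against the devstack-tempest job definition.
- job:
    name: placement-gabbi-tempest    # hypothetical job name
    parent: devstack-tempest
    required-projects:
      - openstack/gabbi-tempest      # plugin source checked out on the node
    vars:
      tox_envlist: all               # the env that honours tempest_test_regex
      tempest_test_regex: gabbi
      devstack_localrc:
        # devstack's lib/tempest installs plugins listed here into the
        # tempest virtualenv
        TEMPEST_PLUGINS: /opt/stack/gabbi-tempest
      devstack_services:
        # skip services the gabbi tests don't need
        horizon: false
        c-bak: false
```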
Re: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting
On Tue, 18 Sep 2018, Doug Hellmann wrote:

> [Redirecting this from the openstack-tc list to the -dev list.]
>
> Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500:
>> UC is proposing a joint UC/TC meeting at the end of the month say
>> starting after Berlin to work more closely together. The last Monday
>> of the month at 1pm US Central time is current proposal, throwing it
>> out here now for feedback/discussion, so that would make the first
>> one Monday, November 26th, 2018.

I agree that the UC and TC should work more closely together. If the best way to do that is to have a meeting then great, let's do it. Were you thinking IRC or something else?

But we probably need to resolve our ambivalence towards meetings. On Sunday at the PTG we discussed maybe going back to having a TC meeting but didn't really decide (at least as far as I recall), and didn't discuss in much depth the reasons why we killed meetings in the first place. How would this meeting be different?

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
[openstack-dev] [tc] [all] TC Report 18-38
HTML: https://anticdent.org/tc-report-18-38.html Rather than writing a TC Report this week, I've written a report on the [OpenStack Stein PTG](https://anticdent.org/openstack-stein-ptg.html). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [openstack][infra]Including Functional Tests in Coverage
On Wed, 12 Sep 2018, Michael Johnson wrote: We do this in Octavia. The openstack-tox-cover calls the cover environment in tox.ini, so you can add it there. We've got this in progress for placement as well: https://review.openstack.org/#/c/600501/ https://review.openstack.org/#/c/600502/ It works well and is pretty critical in placement because most of the "important" tests are functional. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
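For anyone wanting to copy the pattern, a sketch of the sort of cover environment involved follows. The source name and test path are assumptions (adjust them to your project); the essential trick is simply that the coverage-driven stestr run points at a test path which includes the functional tests, so they count toward the report. This is a sketch, not a copy of the placement reviews above.

```ini
[testenv:cover]
setenv =
  PYTHON=coverage run --source placement --parallel-mode
commands =
  coverage erase
  # the test path here covers both unit and functional tests
  stestr --test-path=./placement/tests run {posargs}
  coverage combine
  coverage html -d cover
  coverage xml -o cover/coverage.xml
  coverage report
```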
[openstack-dev] [placement] update 18-36
org/#/q/topic:consumer_gen> # Other The placement repo is currently small enough that looking at [all open patches](https://review.openstack.org/#/q/project:openstack/placement+status:open) isn't too overwhelming. Because of all the recent work with extraction, and because the PTG is next week, I'm not up to date on which patches related to placement are in need of review. In the meantime if you want to go looking around, [anything with 'placement' in the commit message](https://review.openstack.org/#/q/message:placement+status:open) is fun. Next time I'll provide more detail. # End Thanks to everyone for getting placement this far. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] modified devstack using openstack/placement
Yesterday I experimented to discover the changes needed in devstack to get it working with the code in openstack/placement. The results are at https://review.openstack.org/#/c/600162/ and it is passing tempest. It isn't passing grenade but that's expected at this stage. Firstly, thanks to everyone who helped this week to create and merge a bunch of placement code to get the repo working. Waking up this morning to see a green tempest was rather nice. Secondly, the work—as expected—exposes a few gaps, most of which are already known. If you're not interested in the details, here's a good place to stop reading, but if you are, see below. This is mostly notes, for the sake of sharing information, not a plan. Please help me make a plan. 1) To work around the fact that there is currently no "placement-manage db_sync" equivalent I needed to hack up something to make sure the database tables exist. So I faked a "placement-manage db table_create". That's in https://review.openstack.org/#/c/600161/ That uses sqlalchemy's 'create_all' functionality to create the tables from their Models, rather than using any migrations. I did it this way for two reasons: 1) I already had code for it in placedock[1] that I could copy, 2) I wanted to set aside migrations for the immediate tests. We'll need to come back to that, because the lack of dealing with already existing tables is _part_ of what is blocking grenade. However, for new installs 'create_all' is fast and correct and something we might want to keep. 2) The grenade jobs don't have 'placement' in $PROJECTS so they die during upgrade. 3) The nova upgrade.sh will need some adjustments to do the data migrations we've talked about over the "(technical)" thread. Also we'll need to decide how much of the placement stuff stays in there and how much goes somewhere else. That's all stuff we can work out, especially if some grenade-oriented people join in the fun. One question I have on the lib/placement changes in devstack: Is it useful to make those changes be guarded by a conditional of the form: if placement came from its own repo: do the new stuff else: do the old stuff ? [1] https://github.com/cdent/placedock -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
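For the curious, the create_all approach amounts to something like the sketch below. The model module path and the name of the declarative base are assumptions borrowed from the nova layout, not the real placement module layout; the real thing lives in the placedock sync.py referenced above.

```python
# a rough sketch of the create_all shortcut; the import path and the
# BASE attribute name are assumptions, not the actual module layout
from sqlalchemy import create_engine

from placement.db.sqlalchemy import models  # assumed location


def create_tables(connection_url):
    """Create the placement tables straight from the model metadata.

    create_all skips tables that already exist and will not alter them,
    which is why this is fine for fresh installs but is no substitute
    for migrations on an upgrading deployment.
    """
    engine = create_engine(connection_url)
    models.BASE.metadata.create_all(engine)
```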
[openstack-dev] [api] microversion-parse core updates
After some discussion with other cores I've made some adjustments to the core team on microversion-parse [1] * added dtantsur (welcome!) * removed sdague In case you're not aware, microversion-parse is middleware and utilities for managing microversions in openstack service apis. [1] https://pypi.org/project/microversion_parse/ http://git.openstack.org/cgit/openstack/microversion-parse -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
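If you haven't used it, the core of the library is a small helper for digging the requested microversion out of request headers. From memory the call looks roughly like the sketch below, but treat the exact signature and return values as assumptions and check the docs linked above.

```python
# a hedged sketch of typical usage; signature details are from memory,
# not authoritative documentation
import microversion_parse


def requested_version(headers):
    """Return the microversion the client asked for, if any.

    headers is a dict such as {'openstack-api-version': 'placement 1.10'}.
    """
    return microversion_parse.get_version(headers, service_type='placement')
```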
Re: [openstack-dev] [nova] [placement] extraction (technical) update
On Tue, 4 Sep 2018, Eric Fried wrote: 030 is okay as long as nothing goes wrong. If something does, it raises exceptions which would currently fail as the exceptions are not there. See below for more about exceptions. Maybe I'm misunderstanding what these migration thingies are supposed to be doing, but 030 [1] seems like it's totally not applicable to placement and should be removed. The placement database doesn't (and shouldn't) have 'flavors', 'cell_mappings', or 'host_mappings' tables in the first place. What am I missing? Nothing, as far as I can tell, but as we hadn't had a clear plan about how to proceed with the trimming of migrations, I've been trying to point out where they form little speed bumps as we've gone through this process and carried them with us. And tried to annotate where they may present some more, until we trim them. There are numerous limits to my expertise, and the db migrations are one of several areas where I decided I wasn't going to hold the ball, I'd just get us to the game and hope other people would find and fill in the blanks. That seems to be working okay, so far. * Presumably we can trim the placement DB migrations to just stuff that is relevant to placement Yah, I would hope so. What possible reason could there be to do otherwise? Mel's plan looks good to me. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-36
HTML: https://anticdent.org/tc-report-18-36.html It's been a rather busy day, so this TC Report will be a quick update of some discussions that have happened in the past week. # PEP 8002 With Guido van Rossum stepping back from his role as the BDFL of Python, there's work in progress to review different methods of governance used in other communities to come up with some ideas for the future of Python. Those reviews are being gathered in PEP 8002. Doug Hellmann has been helping with those conversations and asked for [input on a draft](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-28.log.html#t2018-08-28T20:40:41). There was some good conversation, especially the bits about the differences between ["direct democracy" and whatever it is we do here in OpenStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T11:00:50). The result of the draft was quickly merged into [PEP 8002](https://www.python.org/dev/peps/pep-8002/). # Summit Sessions There was discussion about concerns some people experience with some [summit sessions feeling like advertising](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T18:21:08). # PTG Coming Soon The PTG is next week! TC sessions are described on [this etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). # Elections Reminder TC [election season](https://governance.openstack.org/election/) is right now. The nomination period ends at the end of the day (UTC) on the 6th of September, so there isn't much time left. If you're toying with the idea, nominate yourself, the community wants your input. If you have any questions please feel free to ask. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] better name for placement
On Tue, 4 Sep 2018, Jay Pipes wrote: I wasn't in YVR, which explains why I'd never heard of it. There's a number of misconceptions in the above document about the placement service that don't seem to have been addressed. I'm wondering if it's worth revisiting the topic in Denver with the Cinder team or whether the Cinder team isn't interested in working with the placement service? It was also discussed as part of the reshaper spec and implemented for future use by a potential fast forward upgrade tool: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html#direct-placement https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/placement/direct.py I agree, talking to Cinder some more in Denver about use of placement, either over HTTP or direct, whatever form, is good. But I don't think any of that should impact the naming situation. It's placement now, and placement is not really any less unique than a lot of the other words we use; the direct situation is a very special edge case (likely in containers anyway, so naming is not as much of a big deal). Changing the name, again, is painful. Please, let's not do it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] better name for placement
On Tue, 4 Sep 2018, Jay Pipes wrote: Either one works for me. Though I'm pretty sure that it isn't necessary. The reason it isn't necessary is because the stuff in the top-level placement package isn't meant to be imported by anything at all. It's the placement server code. Yes. If some part of the server repo is meant to be imported into some other system, say nova, then it will be pulled into a separate lib, a la ironic-lib or neutron-lib. Also yes. At this stage I _really_ don't want to go through the trouble of doing a second rename: we're in the process of finishing a rename now. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] better name for placement
On Tue, 4 Sep 2018, Jay Pipes wrote: Is there a reason we couldn't have openstack-placement be the package name? I would hope we'd be able to do that, and probably should do that. 'openstack-placement' seems a fine pypi package name for a thing from which you do 'import placement' to do some openstack stuff, yeah? Last I checked the concept of the package name is sort of put off until we have passing tests, but we're nearly there on that. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core)
On Tue, 4 Sep 2018, Thomas Goirand wrote: Just a nit-pick... It's a shame we call it just placement. It could have been something like: foo: OpenStack placement Just like we have: nova: OpenStack compute No? Is it too late? There was some discussion about this on one of the extraction-related etherpads [1] and the gist is that while it would be possible to change it, at this point "placement" is the name people use and are used to, so there would have to be a very good reason to change it. All the docs and code talk about "placement", and python package names are already placement. It used to be the case that the service-oriented projects would have a project name different from their service-type because that was cool/fun [2] and it allowed for the possibility that there could be another project which provided the same service-type. That hasn't really come to pass and now that we are on the far side of the hype curve, doesn't really make much sense in terms of focusing energy. My feeling is that there is already a lot of identity associated with the term "placement" and changing it would be too disruptive. Also, I hope that it will operate as a constraint on feature creep. But if we were to change it, I vote for "katabatic", as a noun, even though it is an adjective. [1] https://etherpad.openstack.org/p/placement-extract-stein-copy That was a copy of the original, which stopped working, but now that one has stopped working too. I'm going to attempt to reconstruct it today from copies that people have kept. [2] For certain values of... -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [placement] extraction (technical) update
There's been some progress on the technical side of extracting placement to its own repo. The summary is: * https://git.openstack.org/cgit/openstack/placement exists * https://review.openstack.org/#/c/599416/ is at the top of a series of patches. That patch is passing and voting on unit and functional for py 2.7 and 3.5 and is passing pep8. More below, in the steps. On Tue, 28 Aug 2018, Chris Dent wrote: On Mon, 27 Aug 2018, melanie witt wrote: 1. We copy the placement code into the openstack/placement repo and have it passing all of its own unit and functional tests. To break that down to more detail, how does this look? (note the ALL CAPS where more than acknowledgement is requested) 1.1 Run the git filter-branch on a copy of nova 1.1.1 Add missing files to the file list: 1.1.1.1 .gitignore 1.1.1.2 # ANYTHING ELSE? 1.2 Push -f that thing, acknowledged to be broken, to a seed repo on github (ed's repo should be fine) 1.3 Do the repo creation bits described in https://docs.openstack.org/infra/manual/creators.html to seed openstack/placement 1.3.1 set zuul jobs. Either to noop-jobs, or non voting basic func and unit # INPUT DESIRED HERE 1.4 Once the repo exists with some content, incrementally bring it to working 1.4.1 Update tox.ini to be placement oriented 1.4.2 Update setup.cfg to be placement oriented 1.4.3 Correct .stestr.conf 1.4.4 Move base of placement to "right" place 1.4.5 Move unit and functionals to right place 1.4.6 Do automated path fixings 1.4.7 Set up translation domain and i18n.py correctly 1.4.8 Trim placement/conf to just the conf settings required (api, base, database, keystone, paths, placement) 1.4.9 Remove database files that are not relevant (the db api is not used by placement) 1.4.10 Fix the Database Fixture to be just one database 1.4.11 Disable migrations that can't work (because of dependencies on nova code, 014 and 030 are examples) # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL 030 is okay as long as nothing goes wrong. If something does, it raises exceptions which would currently fail as the exceptions are not there. See below for more about exceptions. 1.4.12 Incrementally get tests working 1.4.13 Fix pep8 1.5 Make zuul pep, unit and functional voting This is where we are now at https://review.openstack.org/#/c/599416/ 1.6 Create tools for db table sync/create I made some TODOs about this in setup.cfg, also noting that in addition to a placement-manage we'll want a placement-status. 1.7 Concurrently go to step 2, where the harder magic happens. 1.8 Find and remove dead code (there will be some). Some dead code has been removed, but there will definitely be plenty more to find. 1.9 Tune up and confirm docs 1.10 Grep for remaining "nova" (as string and spirit) and fix Item 1.4.12 may deserve some discussion. The several times I've done this before, the strategy I've used is to be test driven: run either functional or unit tests, find and fix one of the errors revealed, commit, move on. In the patch set that ends with the review linked above, this is pretty much what I did, switching between a tox run of the full suite and using testtools.run to run an individual test file. 2. We have a stack of changes to zuul jobs that show nova working but deploying placement in devstack from the new repo instead of nova's repo. This includes the grenade job, ensuring that upgrade works. Do people have the time or info needed to break this step down into multiple steps like the '1' section above? 
Things I can think of: * devstack patch to deploy placement from the new repo * and use placement.conf * stripping of placement out of nova, a bit like https://review.openstack.org/#/c/596291/ , unless we leave that entirely to step 4 * grenade tweaks (?) * more 3. When those pass, we merge them, effectively orphaning nova's copy of placement. Switch those jobs to voting. 4. Finally, we delete the orphaned code from nova (without needing to make any changes to non-placement-only test code -- code is truly orphaned). Some questions I have: * Presumably we can trim the placement DB migrations to just stuff that is relevant to placement and renumber accordingly? * Could we also make it so we only run the migrations if we are not in a fresh install? In a fresh install we ought to be able to skip the migrations entirely and create the tables by reflection with the class models [1]. * I had another but I forgot. [1] I did something similar in placedock for when starting from scratch: https://github.com/cdent/placedock/blob/b5ca753a0d97e0d9a324e196349e3a19eb62668b/sync.py#L68-L73 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__
[openstack-dev] [all][api] POST /api-sig/news
Greetings OpenStack community, There was nothing specific on the agenda this week, so much of the API-SIG meeting was spent discussing API-related topics that we'd encountered recently. One was: K8s Custom Resources [9] Cool or Chaos? The answer is, of course, "it depends". Another was a recent thread asking about the relevance of Open API 3.0 in the OpenStack environment [10]. We had trouble deciding what the desired outcome is, so for now are merely tracking the thread. In the world of guidelines and bugs, not a lot of recent action. Some approved changes need to be rebased to actually get published, and the stack about version discovery [11] needs to be refreshed and potentially adopted by someone who is not Monty. If you're reading, Monty, and have thoughts on that, share them. Next week we will be actively planning [7] for the PTG [8]. We have a room on Monday. We always have interesting and fun discussions when we're at the PTG, join us. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Add an api-design doc with design advice https://review.openstack.org/592003 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://storyboard.openstack.org/#!/project/1039 [6] https://git.openstack.org/cgit/openstack/api-sig [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://www.openstack.org/ptg/ [9] https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ [10] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133960.html [11] https://review.openstack.org/#/c/459405/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-35
ctions, the tensions that are playing out are not directly tied to any specific technical issues (which, thankfully, are resolving in the short term for placement) but are from the accumulation and aggregation over time of difficulties and frustrations associated with unresolved problems in the exercise and distribution of control and trust, unfinished goals, and unfulfilled promises. When changes like the placement extraction come up, they can act as proxies for deep and lingering problems that we have not developed good systems for resolving. What we do instead of investigating the deep issues is address the immediate symptomatic problems in a technical way and try to move on. People who are not satisfied with this have little recourse. They can either move elsewhere or attempt to cope. We've lost plenty of good people as a result. Some of those that choose to stick around get tetchy. If you have thoughts and feelings about these (or any other) deep and systemic issues in OpenStack, anyone in the TC should be happy to speak with you about them. For best results you should be willing to speak about your concerns publicly. If for some reason you are not comfortable doing so, that is itself an issue that needs to be addressed, but starting out privately is welcomed. The big goal here is for OpenStack to be good, as a technical production _and_ as a community. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [placement] extraction (technical) update
On Tue, 28 Aug 2018, Matt Riedemann wrote: Are people okay with that and willing to commit to being okay with that answer in reviews? To some extent we need to have some faith in the end result: the tests work. If people are not okay with that, we need the people who are not to determine and prove the alternate strategy. I've had this one work and work well. Seems reasonable to me. But to be clear, if there are 70 failed tests, are you going to have 70 separate patches? Or is this just one of those things where you start with 70, fix something, get down to 50 failed tests, and iterate until you're down to all passing. If so, I'm OK with that. It's hard to say without knowing how many patches get from 70 failures to 0 and what the size/complexity of those changes is, but without knowing I'd default to the incremental approach for ease of review. It's lumpy. But at least at the beginning it will be something like: 0 passing, still 0 passing; still 0 passing; still 0 passing; 150 passing, 700 failing; 295 passing, X failing, etc. Because in the early stages, test discovery and listing doesn't work at all, for quite a few different reasons. Based on the discussion here, resolving those "different reasons" is something people want to see in different commits. One way to optimize this (if people preferred) would be to not use stestr as called by tox, with its built-in test discovery, but instead run testtools or subunit in a non-parallel, failfast mode where not all tests need to be discovered first. That would provide a more visible sense of "it's getting better" to someone who is running the tests locally using that alternate method, but would not do much for the jobs run by zuul, so probably not all that useful. Thanks for the other info on the devstack and grenade stuff. If I read you right, from your perspective it's a case of "we'll see" and "we'll figure it out", which sounds good to me. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
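For anyone following along locally, the alternate method mentioned above is nothing fancier than something like this (the test module path is invented for illustration):

```
# run one test file serially, without stestr's discovery step
python -m testtools.run placement.tests.functional.db.test_resource_provider

# versus the usual discovery-based run driven by tox/stestr
tox -e functional
```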
Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update
On Tue, 28 Aug 2018, Bob Ball wrote: Just looking at Naichuan's output, I wonder if this is because allocation_ratio is registered as 0 in the inventory. Yes. Whatever happened to cause that is the root; that will throw the math off into zeroness in lots of different places. The default (if you don't send an allocation_ratio) is 1.0, so maybe there's some code somewhere that is trying to use the default (by not sending) but is accidentally sending 0 instead? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
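To make the shape of the problem concrete, here is a hedged illustration (numbers invented) of an inventory record as placement sees it. Capacity is computed as (total - reserved) * allocation_ratio, so an explicit 0 zeroes everything out, while omitting the key gets the 1.0 default.

```python
# illustrative values only
inventory_payload = {
    'resource_provider_generation': 1,
    'inventories': {
        'VCPU': {
            'total': 8,
            'reserved': 0,
            'min_unit': 1,
            'max_unit': 8,
            'step_size': 1,
            # omit this key to get the 1.0 default; never send 0
            'allocation_ratio': 16.0,
        },
    },
}
```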
Re: [openstack-dev] [nova] [placement] extraction (technical) update
comes up. I'm neither able nor willing to be responsible for creating those lists for all these points, but very happy to help. 3. When those pass, we merge them, effectively orphaning nova's copy of placement. Switch those jobs to voting. 4. Finally, we delete the orphaned code from nova (without needing to make any changes to non-placement-only test code -- code is truly orphaned). In case you missed it, one of the things I did earlier in the discussion was make it so that the wsgi script for placement defined in nova's setup.cfg [1] could: * continue to exist * with the same name * using the nova.conf file * running the extracted placement code That was easy to do because of the work over the last year or so that has been hardening the boundary between placement and nova, in place. I've been assuming that maintaining the option to use the original conf file is a helpful trick for people. Is that the case? Thanks. [1] https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] UUID sentinel needs a home
On Fri, 24 Aug 2018, Doug Hellmann wrote: I guess all of the people who complained so loudly about the global in oslo.config are gone? It's a different context. In a testing environment where there is already a well-established pattern of use it's not a big deal. Global in oslo.config is still horrible, but again: a well-established pattern of use. This is part of why I think it is better positioned in oslotest as that signals its limitations. However, like I said in my other message, copying nova's thing has proven fine. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [placement] extraction (technical) update
On Fri, 24 Aug 2018, Chris Dent wrote: That work is in gerrit at https://review.openstack.org/#/c/596291/ with a hopefully clear commit message about what's going on. As with the rest of this work, this is not something to merge, rather an experiment to learn from. The hot spots in the changes are relatively limited and about what you would expect so, with luck, should be pretty easy to deal with, some of them even before we actually do any extracting (to enhance the boundaries between the two services). After some prompting from gibi, that code has now been adjusted so that requirements.txt and tox.ini [1] make sure that the extracted placement branch is installed into the test virtualenvs. So in the gate the unit and functional tests pass. Other jobs do not because of [1]. In the intervening time I've taken that code, built a devstack that uses a nova-placement-api wsgi script that uses nova.conf and the extracted placement code. It runs against the nova-api database. Created a few servers. Worked. Then I switched the devstack@placement-unit unit file to point to the placement-api wsgi script, and configured /etc/placement/placement.conf to have a [placement_database]/connection of the nova-api db. Created a few servers. Worked. Thanks. [1] As far as I can tell a requirements.txt entry of -e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement will install just fine with 'pip install -r requirements.txt', but if I do 'pip install nova' and that line is in requirements.txt it does not work. This means I had to change tox.ini to have a deps setting of: deps = -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt to get the functional and unit tests to build working virtualenvs. That this is not happening in the dsvm-based zuul jobs means that the tests can't run or pass. What's going on here? Ideas? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] extraction (technical) update
Over the past few days a few of us have been experimenting with extracting placement to its own repo, as has been discussed at length on this list, and in some etherpads: https://etherpad.openstack.org/p/placement-extract-stein https://etherpad.openstack.org/p/placement-extraction-file-notes As part of that, I've been doing some exploration to tease out the issues we're going to hit as we do it. None of this is work that will be merged, rather it is stuff to figure out what we need to know to do the eventual merging correctly and efficiently. Please note that doing that is just the near edge of a large collection of changes that will cascade in many ways to many projects, tools, distros, etc. The people doing this are aware of that, and the relative simplicity (and fairly immediate success) of these experiments is not misleading people into thinking "hey, no big deal". It's a big deal. There's a strategy now (described at the end of the first etherpad listed above) for trimming the nova history to create a thing which is placement. From the first run of that Ed created a github repo and I branched that to eventually create: https://github.com/EdLeafe/placement/pull/2 In that, all the placement unit and functional tests are now passing, and my placecat [1] integration suite also passes. That work has highlighted some gaps in the process for trimming history which will be refined to create another interim repo. We'll repeat this until the process is smooth, eventually resulting in an openstack/placement. To take things further, this morning I pip installed the placement code represented by that pull request into a nova repo and made some changes to remove placement from nova. With some minor adjustments I got the remaining unit and functional tests working. That work is in gerrit at https://review.openstack.org/#/c/596291/ with a hopefully clear commit message about what's going on. As with the rest of this work, this is not something to merge, rather an experiment to learn from. The hot spots in the changes are relatively limited and about what you would expect so, with luck, should be pretty easy to deal with, some of them even before we actually do any extracting (to enhance the boundaries between the two services). If you're interested in this process please have a look at all the links and leave comments there, in response to this email, or join #openstack-placement on freenode to talk about it. Thanks. [1] https://github.com/cdent/placecat -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] UUID sentinel needs a home
On Thu, 23 Aug 2018, Dan Smith wrote: ...and it doesn't work like mock.sentinel does, which is part of the value. I really think we should put this wherever it needs to be so that it can continue to be as useful as it is today. Even if that means just copying it into another project -- it's not that complicated of a thing. Yeah, I agree. I had hoped that we could make something that was generally useful, but its main value is its interface and if we can't have that interface in a library, having it per codebase is no biggie. For example it's been copied straight from nova into the placement extraction experiments with no changes and, as one would expect, works just fine. Unless people are wed to doing something else, Dan's right, let's just do that. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
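For anyone who hasn't seen it, the pattern being copied around is roughly the following (a reconstruction from memory, not the nova file itself): attribute access returns a stable, valid UUID string per name, like mock.sentinel but usable anywhere a real UUID is required.

```python
# roughly the uuidsentinel pattern, reconstructed from memory
import sys
import uuid


class _UUIDSentinels(object):
    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError('Sentinel names may not start with "_"')
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


# replacing the module means tests can do:
#   from myproject.tests import uuidsentinel as uuids
#   uuids.compute_node  # same uuid every time it is referenced
sys.modules[__name__] = _UUIDSentinels()
```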
Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
by whom. [1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update [2] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T20:35:51 [3] https://pypi.org/project/microversion_parse/ [4] http://specs.openstack.org/openstack/api-sig/guidelines/api_interoperability.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
On Mon, 20 Aug 2018, Zane Bitter wrote: If you want my personal opinion then I'm a big believer in incremental change. So, despite recognising that it is born of long experience of which I have been blissfully mostly unaware, I have to disagree with Chris's position that if anybody lets you change something then you should try to change as much as possible in case they don't let you try again. Because you called me out specifically, I feel obliged to say, this is neither what I said nor what I meant. It wasn't "in case they don't let you try again". It was "we've been trying to do some of this for two years and if we do it incrementally, the end game is further away, because it seems to take us forever to do anything." Perhaps not a huge difference. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] [tc] Who is responsible for the mission and scope of OpenStack?
TC-members and everyone else, In the discussion on a draft technical vision for OpenStack at https://review.openstack.org/#/c/592205/ there is a question about whether the TC is "responsible for the mission and scope of OpenStack". As the discussion there indicates, there is plenty of nuance, but underlying it is a pretty fundamental question that seems important to answer as we're going into yet another TC election period. I've always assumed it was the case: the TC is an elected representative body of the so-called active technical contributors to OpenStack. So while the TC is not responsible for creating the mission from whole cloth, they are responsible for representing the goals of the people who elected them and thus for refining, documenting and caring for the mission and scope while working with all the other people invested in the community. Does anyone disagree? If so, who is responsible if not the TC? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
On Fri, 17 Aug 2018, Tom Barron wrote: Has there been a discussion on record of how use of placement by cinder would affect "standalone" cinder (or manila) initiatives where there is a desire to be able to run cinder by itself (with no-auth) or just with keystone (where OpenStack style multi-tenancy is desired)? This has been sort of glancingly addressed elsewhere in the thread, but I wanted to make it explicit: * It's possible now to run placement with faked auth (the noauth2 concept) or keystone. Making auth handling more flexible would be a matter of choosing a different piece of middleware. * Partly driven by discussion with Cinder people and also with fast forward upgrade people, there's a feature in placement called "PlacementDirect". This makes it possible to interact with placement in the same process as the thing that is using it, rather than over HTTP. So no additional placement server is required, if that's how people want it. More info at: https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/direct.py http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html#direct-interface-to-placement However, since placement is lightweight (a simple-ish wsgi app over some database tables) it is likely easier just to run it like normal, maybe in some containers to allow it to scale up and down easily. If you have a look at https://github.com/cdent/placedock and some of the links in the README, the flexibility and lightness may become a bit more clear. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
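To give a feel for what PlacementDirect looks like from the caller's side, a sketch from memory follows; the constructor arguments and the client behaviour should be treated as assumptions, with the direct.py linked above as the authority.

```python
# a hedged sketch of in-process use of placement; details are assumptions
from nova.api.openstack.placement import direct


def list_resource_providers(conf):
    # conf: an oslo.config object with the placement database configured
    with direct.PlacementDirect(conf) as client:
        # client behaves like a keystoneauth adapter, but the request is
        # handled in-process; no separate placement service is running
        resp = client.get('/resource_providers')
        return resp.json()['resource_providers']
```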
Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
within the compute umbrella might help a small amount with some of the competition described in item 3, it would be insufficient. The same forces would apply. Similarly, _if_ there are factors which are preventing some people from being willing to participate with a compute-associated project, a repo within compute is an insufficient break. Also, if we are going to go to the trouble of doing any kind of disruptive transition of the placement code, we may as well take as big a step as possible in this one instance as these opportunities are rare and our capacity for change is slow. I started working on placement in early 2016; at that time we had plans to extract it to "its own thing". We've passed the half-way point in 2018. 5. In OpenStack we have a tradition of the contributors having a strong degree of self-determination. If that tradition is to be upheld, then it would make sense that the people who designed and wrote the code that is being extracted would get to choose what happens with it. As much as Mel's and Dan's (only picking on them here because they are the dissenting voices that have showed up so far) input has been extremely important and helpful in the evolution of placement, they are not those people. So my hope is that (in no particular order) Jay Pipes, Eric Fried, Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to placement whom I'm forgetting [1] would express their preference on what they'd like to see happen. At the same time, if people from neutron, cinder, blazar, zun, mogan, ironic, and cyborg could express their preferences, we can get through this by acclaim and get on with getting things done. Thank you. [1] My apologies if I have left you out. It's Saturday, I'm tired from trying to make this happen for so long, and I'm using various forms of git blame and git log to extract names from the git history and there's some degree of magic and guessing going on. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?
Earlier I posted a message about a planning etherpad for the extraction of placement http://lists.openstack.org/pipermail/openstack-dev/2018-August/133319.html https://etherpad.openstack.org/p/placement-extract-stein One of the goals of doing the planning and having the etherpad was to be able to get to the PTG with some of the issues resolved so that what little time we had at the PTG could be devoted to resolving any difficult technical details we uncovered in the lead up. One of the questions that has come up on the etherpad is about how placement should be positioned, as a project, after the extraction. The options are: * A repo within the compute project * Its own project, either: * working towards being official and governed * official and governed from the start The etherpad has some discussion about this, but since that etherpad is primarily for listing out the technical concerns I thought it might be useful to bring the discussion out into a wider audience, in a medium more oriented towards discussion. As placement is a service targeted to serving the entire OpenStack community, talking about it widely seems warranted. The outcome I'd like to see happen is the one that makes sure placement becomes useful to the most people and is worked on by the most people, as quickly as possible. If how it is arranged as a project will impact that, now is a good time to figure that out. If you have thoughts about this, please share them in response. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [placement] extraction etherpad for PTG
I've created an etherpad to prepare ideas and plans for a discussion at the PTG about extracting placement to its own thing. https://etherpad.openstack.org/p/placement-extract-stein Right now it is in a fairly long form as it gathers ideas and references. The goal is to compress it to something concise after we've had plenty of input so we have a (small) series of discussion points to resolve at the PTG. If this is a topic you think is important or you have an interest in, please add your thoughts to the etherpad. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-33
HTML: https://anticdent.org/tc-report-18-33.html ## Dead, Gone or Stable [Last week](https://anticdent.org/tc-report-18-32.html) saw plenty of discussion about how to deal with projects for which no PTL was found by election or acclaim. That discussion continued this week, but also stimulated discussion on the differences between a project being dead, gone from OpenStack, or stable. * [Needing a point person](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-07.log.html#t2018-08-07T13:05:48) * [Risks (or lack) of being leaderless](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-13.log.html#t2018-08-13T08:09:23) Mixed in with that are some dribbles of a theme which has become increasingly common of late: * [Contribution from foundation member corps](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-09.log.html#t2018-08-09T12:49:27) * [The need for janitors, and board members not being the people able to provide](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-09.log.html#t2018-08-09T13:30:12) As a group, the TC has very mixed feelings on these issues. On the one hand everyone would like to keep projects healthy and within OpenStack, where possible. On the other hand, it is important that people who are upstream contributors stop over-committing to compensate for lack of commitment from a downstream that benefits hugely from their labor. Letting projects "die" or become unofficial is one way to clearly signal that there are resourcing problems. In fact, doing so with both [Searchlight](https://review.openstack.org/#/c/588644/) and [Freezer](https://review.openstack.org/#/c/588645/) raised some volunteers to help out as PTLs. However, both of those projects have been languishing for many months. How many second chances do projects get? ## IRC for PTLs Within all the discussion about the health of projects, there was some [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-08.log.html#t2018-08-08T17:25:20) of whether it was appropriate to expect PTLs to have and use IRC nicks. As the character of the pool of potential PTLs evolves, it might not fit. See the log for a bit more nuance on the issues. ## TC Elections Soon Six seats on the TC will be up for election. The nomination period will start at the end of this month. If you're considering running and have any questions, please feel free to ask me. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] documenting appointed PTLs
We've had several appointed PTLs this cycle, in some cases because people forgot to nominate themselves, in other cases because existing maintainers have been pulled away and volunteers stepped up. Thanks to those people who did. We haven't had a formal process for documenting those appointments and there's been some confusion on who and where it should all happen. I've proposed a plan at https://review.openstack.org/#/c/590790/ that may not yet be perfect, but gives a starting point on which to accrete a reasonable solution. If you have opinions on the matter, please leave something on the review. Thanks. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all][api] POST /api-sig/news
Greetings OpenStack community, As is our recent custom, short meeting this week. Our main topic of conversation was discussing the planning etherpad [7] for the API-SIG gathering at the Denver PTG. If you will be there, and have topics of interest, please add them to the etherpad. There are no new guidelines under review, but there is a stack of changes which do some reformatting and explicitly link to useful resources [8]. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * None # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-stein-ptg [8] https://review.openstack.org/#/c/589131/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Paste unmaintained
On Wed, 8 Aug 2018, Thomas Goirand wrote: I'll try to investigate then. However, the way you're suggesting mandates systemd which is probably not desirable. "or some other supervisor" -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Paste unmaintained
On Wed, 8 Aug 2018, Thomas Goirand wrote: I'd be more than happy to have better logging without the need for paste/pastescript, but so far, that's the only way I found that worked with uwsgi. Do you know any other way? Yes, use systemd or some other supervisor which is responsible for catching stderr. That's why I pointed to devstack and my container thing. Not because I think devstack is glorious or anything, but because the logging works and presumably something can be learned from that. Apparently what you're doing in the debian packages doesn't work (without logging middleware), which isn't surprising because that's exactly how uwsgi and WSGI are supposed to work. What I've been trying to suggest throughout this subthread is that it sounds like however things are being packaged in debian is not right, and that something needs to be changed. Also that your bold assertion that uwsgi doesn't work without paste is only true in the narrow way in which you are using it (which is the wrong way to use it). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Paste unmaintained
On Wed, 8 Aug 2018, Thomas Goirand wrote: If you don't configure uwsgi to do any special logging, then the only thing you'll see in the log file is client requests, without any kind of logging from the wsgi application. To have proper logging, one needs to add, in the uwsgi config file: paste-logger = true If you do that, then you need the python3-pastescript package installed, which itself depends on the python3-paste package. Really, I don't see how an operator could run without the paste-logger option activated. Without it, you see nothing in the logs. I'm pretty sure your statements here are not true. In the uwsgi configs for services in devstack, paste-logger is not used. In the uwsgi set up [1] I use in placedock [2], paste-logger is not used. Yet both have perfectly reasonable logs showing a variety of log levels, including request logs at INFO, and server debugging and warnings where you would expect them to be. Can you please point me to where you are seeing these problems? Clearly something is confused somewhere. Is the difference in our experiences that both of the situations I describe above are happy with logging being on stderr and you're talking about being able to configure logging to files, within the application itself? If that's the case then my response would be: don't do that. Let systemd, or your container, or apache2, or whatever process/service orchestration system you have going manage that. That's what they are there for. [1] https://github.com/cdent/placedock/blob/master/shared/placement-uwsgi.ini [2] https://github.com/cdent/placedock -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
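For reference, a trimmed sketch along the lines of that placement-uwsgi.ini follows; the values are illustrative rather than copied from the file. The point is that there is no paste-logger line at all: request and application logging lands on stderr, where systemd, apache2 or the container runtime collects it.

```ini
# illustrative values, not a copy of the placedock file
[uwsgi]
wsgi-file = /usr/local/bin/placement-api
http-socket = :8080
processes = 2
threads = 10
# optional: tune the request log lines, still with no paste involvement
log-format = %(addr) "%(method) %(uri)" %(status)
```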
Re: [openstack-dev] Paste unmaintained
On Tue, 7 Aug 2018, Thomas Goirand wrote: That's nice to have a direct dependency, but this doesn't cover everything. If using uwsgi, if you want any kind of logging from the wsgi application, you need to use pastescript, which itself runtime-depends on paste. So, anything which potentially has an API also depends indirectly on Paste. Can you point to more info on this, as it doesn't correspond with my experience of using uwsgi? In my experience uwsgi has built-in support for logging without dependencies: https://uwsgi-docs.readthedocs.io/en/latest/LogFormat.html As I said in IRC a while ago: It doesn't really matter how many of our projects are using Paste or PasteDeploy: If any of them are, then we have a problem to address. We already know that some of the big/popular ones use it. That's enough to require us to work on a solution. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-32
HTML: https://anticdent.org/tc-report-18-32.html The TC discussions of interest in the past week have been related to the recent [PTL elections](https://governance.openstack.org/election/) and planning for the [forthcoming PTG](https://www.openstack.org/ptg). ## PTL Election Gaps A few official projects had no nominee for the PTL position. An [etherpad](https://etherpad.openstack.org/p/stein-leaderless) was created to track this, and most of the situations have been resolved. Pointers to some of the discussion: * [Near the end of nomination period](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-31.log.html#t2018-07-31T17:39:28). * [Discussion about Trove](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T13:59:11). There's quite a bit here about how we evaluate the health of a project and the value of volunteers, and for how long we are willing to extend grace periods for projects which have a history of imperfect health. * [What to do about RefStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-02.log.html#t2018-08-02T16:01:12) which evolved to become a discussion about the role of the QA team. * [Freezer and Searchlight](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-07.log.html#t2018-08-07T09:06:37). Where we (the TC) seem to have some minor disagreement is the extent to which we should be extending a lifeline to official projects which are (for whatever reason) struggling to keep up with responsibilities, or whether we should be using the power to remove official status as a way to highlight need. ## PTG Planning The PTG is a month away, so the TC is doing a bit of planning to prepare. There will be two different days during which the TC will meet: Sunday afternoon before the PTG, and all day Friday. Most planning is happening on [this etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). There is also a specific etherpad about [the relationship between the TC and the Foundation and Foundation corporate members](https://etherpad.openstack.org/p/tc-board-foundation). And one for [post-lunch topics](https://etherpad.openstack.org/p/PTG4-postlunch). IRC links: * [Discussion about limiting the agenda](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-03.log.html#t2018-08-03T12:31:38). If there's any disagreement in this planning process, it is over whether we should focus our time on topics where we have some chance of resolving them or at least making some concrete progress, or whether we should spend the time having open-ended discussions. Ideally there would be time for both, as the latter is required to develop the shared language that is needed to take real action. But, as is rampant in the community, we are constrained by time and other responsibilities. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] placement update 18-31
w.openstack.org/#/c/542745/> Random names for [osc-placement] functional tests * <https://review.openstack.org/#/c/588470/> Fix nits in resource_provider.py * <https://review.openstack.org/#/c/588350/> [placement] Debug log per granular request group * <https://review.openstack.org/#/c/584218/> Consider forbidden traits in early exit of _get_by_one_request * <https://review.openstack.org/#/c/585672/> Enable nested allocation candidates in scheduler * <https://review.openstack.org/#/q/topic:refactor-fixture> Placement fixture refactorings and cleanups * <https://review.openstack.org/#/c/561770/> PCPU: Define numa dedicated CPU resource class * <https://review.openstack.org/#/c/567191/> Imposing restrictions on resource providers create uuid # End This is the last one of these I'm going to do for a while. It's less useful at the end and beginning of the cycle when there are often plenty of other resources shaping our attention. Also, I pretty badly need a break and an opportunity to more narrowly focus on fewer things for a while (you can translate that as "get things done rather than tracking things"). Unless someone else would like to pick up the mantle, I expect to pick it back up sometime in September. Ideally someone else would do it. It's been a very useful tool for me, and I hope for others, so it's not my wish that it go away. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Paste unmaintained
On Thu, 2 Aug 2018, Stephen Finucane wrote: Given that multiple projects are using this, we may want to think about reaching out to the author and seeing if there's anything we can do to at least keep this maintained going forward. I've talked to cdent about this already but if anyone else has ideas, please let me know. I've sent some exploratory email to Ian, the original author, to get a sense of where things are and whether there's an option for us (or, if for some reason "us" wasn't okay, me) to adopt it. If email doesn't land I'll try again with other media. I agree with the idea of trying to move away from using it, as mentioned elsewhere in this thread and in IRC, but it's not a simple step, as at least in some projects we are using paste files as configuration that people are allowed to (and do) change. Moving away from that is the hard part, not figuring out how to load WSGI middleware in a modern way. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] How to debug no valid host failures with placement
Responses to some of Jay's comments below, but first, to keep this on track with the original goal of the thread ("How to debug no valid host failures with placement") before I drag it to the side, some questions. When people ask for something like what Chris mentioned: hosts with enough CPU: hosts that also have enough disk: hosts that also have enough memory: hosts that also meet extra spec host aggregate keys: hosts that also meet image properties host aggregate keys: hosts that also have requested PCI devices: What are the operational questions that people are trying to answer with those results? Is the idea to be able to have some insight into the resource usage and reporting on and from the various hosts and discover that things are being used differently than thought? Is placement a resource monitoring tool, or is it simpler and more focused than that? Or is it that we might have flavors or other resource requesting constraints that have bad logic and we want to see at what stage the failure is? I don't know and I haven't really seen it stated explicitly here, and knowing it would help. Do people want info like this for requests as they happen, or to be able to go back later and try the same request again with some flag on that says: "diagnose what happened"? Or to put it another way: Before we design something that provides the information above, which is a solution to an undescribed problem, can we describe the problem more completely first to make sure that the solution we get is the right one? The thing above, that set of information, is context free. On Wed, 1 Aug 2018, Jay Pipes wrote: On 08/01/2018 02:02 PM, Chris Friesen wrote: I think the only way to get useful info on a failure would be to break down the huge SQL statement into subclauses and store the results of the intermediate queries. This is a good idea and something that can be done. I can see how it would be a good idea from an explicit debugging standpoint, but is it a good idea on all fronts? From the very early days when placement was just a thing under your pen on whiteboards, we were trying to achieve something that wasn't the FilterScheduler but achieved efficiencies and some measure of black boxed-ness by being as near to a single giant SQL statement as we could get. Do we want to get too far away from that? Another thing to consider is that in a large installation, logging these intermediate results (if done in the listing-hosts way indicated above) would be very large without some truncating or "only if < N results" guards. Would another approach be to make it easy to replay a resource request that incrementally retries the request with a less constrained set of requirements (expanding by some heuristic we design)? (A rough sketch of what that replay might look like follows this message.) Something on a different URI where the response is neither what /allocation_candidates nor /resource_providers returns, but allows the caller to know where the boundary between results and no results is. One could also imagine a non-http interface to placement that outputs something a bit like 'top': a regularly updating scan of resource usage. But it's hard to know if that is even relevant without more info as asked above. It could very well be that explicit debugging of filtering stages is the right way to go, but we should look closely at the costs of doing so. Part of me is all: Please, yes, let's do it, it would make the code _so_ much more comprehensible. But there were reasons we made the complex SQL in the first place. 
Unfortunately, it's refactoring work and as a community, we tend to prioritize fancy features like NUMA topology and CPU pinning over refactoring work. I think if we, as a community, said "no", that would be okay. That's really all it would take. We effectively say "no" to features all the time anyway, because we've generated software to which it takes 3 years to add something like placement, for very little appreciable gain in that time (Yes, there are many improvements under the surface and with things like race conditions, but in terms of what can be accomplished with the new tooling, we're still not there). If our labour is indeed valuable, we can choose to exercise greater control over its direction. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
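[Editorial aside: the rough, hypothetical sketch of the "replay with relaxed constraints" idea mentioned above. Nothing like this exists in placement today; the query parameters follow the real /allocation_candidates API, but the endpoint URL, token handling, and relaxation heuristic are stand-ins for illustration.]

```python
# Hypothetical: replay a GET /allocation_candidates request, dropping the
# most recently added constraint each time zero candidates come back, to
# see roughly where the request stops matching anything.
import requests

PLACEMENT = 'http://placement.example.com'        # assumed endpoint
HEADERS = {
    'x-auth-token': 'replace-with-a-real-token',  # assumed auth
    'openstack-api-version': 'placement 1.25',
}

params = {
    'resources': 'VCPU:2,MEMORY_MB:2048,DISK_GB:20',
    'required': 'HW_CPU_X86_AVX2',
}

while params:
    resp = requests.get(PLACEMENT + '/allocation_candidates',
                        params=params, headers=HEADERS)
    summaries = resp.json().get('provider_summaries', {})
    print('%d providers for %s' % (len(summaries), sorted(params)))
    if summaries:
        break
    params.popitem()  # naive heuristic: drop the last constraint and retry
```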
[openstack-dev] [placement] #openstack-placement IRC channel requires registered nicks
I thought I should post a message here for visibility that yesterday we made the openstack-placement IRC channel +r so that the recent spate of spammers could be blocked. This means that you must have a registered nick to gain access to the channel. There's information on how to register at: https://freenode.net/kb/answer/registration Plenty of other channels have been doing the same thing, see: https://etherpad.openstack.org/p/freenode-plus-r-08-2018 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-31
HTML: https://anticdent.org/tc-report-18-31.html Welcome to this week's TC Report. Again a slow week. A small number of highlights to report. [Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-26.log.html#t2018-07-26T15:03:57) there was some discussion of the health of the Trove project and how one of the issues that may have limited their success was the struggle to achieve a [sane security model](https://review.openstack.org/#/c/438134/). That and other struggles led to lots of downstream forking and variance, which complicates presenting a useful tool. [On Monday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-30.log.html) there was talk about the nature of the PTL role and whether it needs to change somewhat to help break down the silos between projects and curtail burnout. This was initially prompted by some concern that PTL nominations were lagging. As usual, there were many last-minute nominations. The volume of work that continues to consolidate on individuals is concerning. We must figure out how to let some things drop. This is an area where the TC must demonstrate some leadership, but it's very unclear at this point how to change things. Based on [this message](http://lists.openstack.org/pipermail/openstack-dev/2018-July/132651.html) from Thierry on a slightly longer Stein cycle, the idea that the first PTG in 2019 is going to be co-located with the Summit is, if not definite, as near as makes no difference. There's more on that in the second paragraph of the [Vancouver Summit Joint Leadership Meeting Update](http://lists.openstack.org/pipermail/foundation/2018-June/002598.html). If you have issues that you would like the TC to discuss—or to discuss with the TC—at the [PTG coming in September](https://www.openstack.org/ptg), please add to the [planning etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [placement] compute nodes use of placement
On Mon, 30 Jul 2018, Jay Pipes wrote: On 07/26/2018 12:15 PM, Chris Dent wrote: The `in_tree` calls happen from the report client method `_get_providers_in_tree` which is called by `_ensure_resource_provider` which can be called from multiple places, but in this case is being called both times from `get_provider_tree_and_ensure_root`, which is also responsible for two of the inventory requests. `get_provider_tree_and_ensure_root` is called by `_update` in the resource tracker. `_update` is called by both `_init_compute_node` and `_update_available_resource`. Every single periodic job iteration. `_init_compute_node` is called from `_update_available_resource` itself. That accounts for the overall doubling. Actually, no. What accounts for the overall doubling is the fact that we no longer short-circuit return from _update() when there are no known changes in the node's resources. I think we're basically agreeing on this: I'm describing the current state of affairs, not attempting to describe why it is that way. Your insight helps to explain why. I have a set of changes in progress which experiment with what happens if we don't call placement a second time in the _update call: https://review.openstack.org/#/c/587050/ Just to see what might blow up. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] placement update 18-30
ed (This is addressing a TODO in the report client) * <https://review.openstack.org/#/q/topic:bug/1469179> local disk inventory reporting related * <https://review.openstack.org/#/c/579922/> Delete orphan compute nodes before updating resources * <https://review.openstack.org/#/c/583489/> Remove Ocata comments which expires now * <https://review.openstack.org/#/c/523006/> Ignore some updates from virt driver * <https://review.openstack.org/#/c/584338/> Docs: Add Placement to Nova system architecture # End Lots to review, test, and document. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] compute nodes use of placement
quests for aggregates and traits happen via `_refresh_associations` in `_ensure_resource_provider`. The single allocation request is from the resource tracker calling `_remove_deleted_instances_allocations`, checking to see if it is possible to clean up any allocations left over from migrations. ## Summary/Actions So what now? There are two avenues for potential investigation: 1. Each time `_update` is called it calls `get_provider_tree_and_ensure_root`. Can one of those be skipped while keeping the rest of `_update`? Or perhaps it is possible to avoid one of the calls to `_update` entirely? 2. Can the way `get_provider_tree_and_ensure_root` tries to manage inventory twice be rationalized for simple cases? I've run out of time for now, so this doesn't address the requests that happen once an instance exists. I'll get to that another time. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-30
HTML: https://anticdent.org/tc-report-18-30.html Yet another slow week at TC office hours. This is part of the normal ebb and flow of work, especially with feature freeze looming, but for some reason it bothers me. It reinforces my fears that the TC is either not particularly relevant or looking at the wrong things. Help make sure we are looking at the right things by: * coming to office hours and telling us what matters * responding to these reports and the ones that Doug produces * adding something to the [PTG planning etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). [Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-19.log.html#t2018-07-19T15:07:31) there was some discussion about forthcoming elections. First up are PTL elections for Stein. Note that it is quite likely that _if_ (as far as I can tell there's not much if about it, it is going to happen, sadly there's not very much transparency on these decisions and discussions, I wish there were) the Denver PTG is the last standalone PTG, then the Stein cycle may be longer than normal to sync up with summit schedules. [On Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-20.log.html#t2018-07-20T14:14:12) there was a bit of discussion on progress towards upgrading to Mailman 3 and using that as an opportunity to shrink the number of mailing lists. By having fewer, the hope is that some of the boundaries between groups within the community will be more permeable and will help email be the reliable information sharing mechanism. [This morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-24.log.html#t2018-07-24T12:08:03) there was yet more discussion about differences of opinion and approach when it comes to accepting projects to be official OpenStack projects. This is something that will be discussed at the PTG. It would be helpful if people who care about this could make their positions known. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] placement update 18-29
up_policy qparams * <https://review.openstack.org/#/c/578048/> [placement] api-ref: add traits parameter * <https://review.openstack.org/#/c/578826/> Convert 'placement_api_docs' into a Sphinx extension * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams * <https://review.openstack.org/#/c/576693/> Disable limits if force_hosts or force_nodes is set * <https://review.openstack.org/#/c/576820/> Rename auth_uri to www_authenticate_uri * <https://review.openstack.org/#/q/project:openstack/blazar+topic:bp/placement-api> Blazar's work on using placement * <https://review.openstack.org/#/c/581771/> Add placement.concurrent_udpate to generation pre-checks * <https://review.openstack.org/#/c/583907/> [placement] disallow additional fields in allocations * <https://review.openstack.org/#/c/582899/> Delete allocations when it is re-allocated (This is addressing a TODO in the report client) * <https://review.openstack.org/#/q/topic:bug/1469179> local disk inventory reporting related * <https://review.openstack.org/#/c/579922/> Delete orphan compute nodes before updating resources * <https://review.openstack.org/#/c/584218/> Consider forbidden traits in early exit of _get_by_one_request (Another TODO-related fix) * <https://review.openstack.org/#/c/583489/> Remove Ocata comments which expires now * <https://review.openstack.org/#/c/523006/> Ignore some updates from virt driver * <https://review.openstack.org/#/c/584338/> Docs: Add Placement to Nova system architecture * <https://review.openstack.org/#/c/553461/> Resource provider examples (osc-placement) # End Thanks to everyone for all their hard work making this happen. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all][api] POST /api-sig/news
Greetings OpenStack community, Today's meeting was again very brief as this time elmiko and dtantsur were out. There were no major items of discussion, but we made plans to check on the status of the GraphQL prototyping (Hi! How's it going?). In addition to the light discussion there was also one guideline that was frozen for wider review and a new one introduced (see below). Both are related to the handling of the "code" attribute in error responses. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Add links to errors-example.json https://review.openstack.org/#/c/578369/ # Guidelines Currently Under Review [3] * Expand schema for error.codes to reflect reality https://review.openstack.org/#/c/580703/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-29
HTML: https://anticdent.org/tc-report-18-29.html Again a relatively slow week for TC discussion. Several members were travelling for one reason or another. A theme from the past week is a recurring one: How can OpenStack, the community, highlight gaps where additional contribution may be needed, and what can the TC, specifically, do to help? Julia relayed [that question on Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16) and it meandered a bit from there. Are the mechanics of open source a bit strange in OpenStack because of continuing boundaries between the people who sell it, package it, build it, deploy it, operate it, and use it? If so, how do we accelerate blurring those boundaries? The [combined PTG](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16) will help, some. At Thursday's office hours Alan Clark [listened in](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:02:34). He's a welcome presence from the Foundation Board. At the last summit in Vancouver members of the TC and the Board made a commitment to improve communication. Meanwhile, [back on Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T15:29:30) I expressed a weird sense of jealousy of all the nice visible things one sees the foundation doing for the newer strategic areas in the foundation. The issue here is not that the foundation doesn't do stuff for OpenStack-classic, but that the new stuff is visible and _over there_. That office hour included [more talk](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:07:27) about project-need visibility. Lately, I've been feeling that it is more important to make the gaps in contribution visible than it is to fill them. If we continue to perform above and beyond, there is no incentive for our corporate value extractors to supplement their investment. That way lies burnout. The [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker) is part of making things more visible. So are [OpenStack wide goals](https://governance.openstack.org/tc/goals/index.html). But there is more we can do as a community and as individuals. Don't be a hero: If you're overwhelmed or overworked tell your peers and your management. In other news: Zane summarized some of his thoughts about [Limitations of the Layered Model of OpenStack](https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations). This is a continuation of the technical vision discussions that have been happening on [an etherpad](https://etherpad.openstack.org/p/tech-vision-2018) and [email thread](http://lists.openstack.org/pipermail/openstack-dev/2018-July/131955.html). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-28
HTML: https://anticdent.org/tc-report-18-28.html With feature freeze approaching at the end of this month, it seems that people are busily working on getting-stuff-done so there is not vast amounts of TC discussion to report this week. Actually that's not entirely true. There's quite a bit of interesting discussion in [the logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/) but it ranges widely and resists summary. If you're a fast reader, it can be pretty straightforward to read the whole week. Some highlights: ## Contextualizing Change The topics of sharing personal context, creating a new technical vision for OpenStack, and trying to breach the boundaries between the various OpenStack sub-projects flowed in amongst one another. In a vast bit of background and perspective sharing, Zane provided his feelings on [what OpenStack ought to be](http://lists.openstack.org/pipermail/openstack-dev/2018-July/132047.html). While long, such things help provide much more context to understanding some of the issues. Reading such things can be effort, but they fill in blanks in understanding, even if you don't agree. Meanwhile, and related, there are [continued requests](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-06.log.html#t2018-07-06T15:20:50) for nova to engage in orchestration, in large part because there's nothing else commonly available to do it and while that's true we can't serve people's needs well. Some have said that the need for orchestration could in part be addressed by breaking down some of the boundaries between projects but [which boundaries is unclear](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T01:12:27). Thierry says we should [organize work based on objectives](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T08:33:44). ## Goals of Health Tracking In [last week's report](/tc-report-18-27.html) I drew a connection between the [removal of diversity tags](https://review.openstack.org/#/c/579870/) and the [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker). This [created some](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:29:01) concern that there were going to be renewed evaluations of projects that would impact their standing in the community and that these evaluations were going to be too subjective. While it is true that the health tracker is a subjective review of how a project is doing, the evaluation is a way to discover and act on opportunities to help a project, not punish it or give it a black mark. It is important, however, that the TC is making an [independent evaluation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:45:59). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure
On Mon, 9 Jul 2018, Matthew Treinish wrote: It's definitely a bug, and likely a bug in stestr (or one of the lower level packages like testtools or python-subunit), because that's what's generating the return code. Tox just looks at the return code from the commands to figure out if things were successful or not. I'm a bit surprised by this though I thought we covered the unxsuccess and xfail cases because I would have expected cdent to file a bug if it didn't. Looking at the stestr tests we don't have coverage for the unxsuccess case so I can see how this slipped through. This was reported on testrepository some years ago and a bit of analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196 So yeah, I did file a bug but it fell off the radar during those dark times. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
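[Editorial aside: for anyone who wants to reproduce the situation locally, a minimal sketch of the case under discussion, using plain unittest rather than the original reproduction from the bug report.]

```python
# A test marked as an expected failure that actually passes produces an
# "unexpected success" result; the runner's exit code should treat that
# as a failure, which is the behaviour being discussed above.
import unittest


class TestUnexpectedSuccess(unittest.TestCase):

    @unittest.expectedFailure
    def test_marked_xfail_but_passes(self):
        # Passing here yields an "unexpected success" result.
        self.assertEqual(1, 1)


if __name__ == '__main__':
    unittest.main()
```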
Re: [openstack-dev] [nova] [placement] placement update 18-27
On Fri, 6 Jul 2018, Chris Dent wrote: This is placement update 18-27, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This is a contract version. Forgot to mention: There won't be an 18-28 this Friday, I'll be out and about. If someone else would like to do one, that would be great. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] placement update 18-27
ng whatever nova-based test fixture we might like # Other 24 entries last week. 20 now (this is a contract week, there's plenty of new reviews not listed here). * <https://review.openstack.org/#/c/546660/> Purge comp_node and res_prvdr records during deletion of cells/hosts * <https://review.openstack.org/#/c/527791/> Get resource provider by uuid or name (osc-placement) * <https://review.openstack.org/#/c/556669/> Tighten up ReportClient use of generation * <https://review.openstack.org/#/c/537614/> Add unit test for non-placement resize * <https://review.openstack.org/#/c/535517/> Move refresh time from report client to prov tree * <https://review.openstack.org/#/c/561770/> PCPU resource class * <https://review.openstack.org/#/c/566166/> rework how we pass candidate request information * <https://review.openstack.org/#/c/564876/> add root parent NULL online migration * <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider> add resource_requests field to RequestSpec * <https://review.openstack.org/#/c/538498/> Convert driver supported capabilities to compute node provider traits * <https://review.openstack.org/#/c/568639/> Use placement.inventory.inuse in report client * <https://review.openstack.org/#/c/517921/> ironic: Report resources as reserved when needed * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams * <https://review.openstack.org/#/c/578048/> [placement] api-ref: add traits parameter * <https://review.openstack.org/#/c/578826/> Convert 'placement_api_docs' into a Sphinx extension * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams * <https://review.openstack.org/#/c/576693/> Disable limits if force_hosts or force_nodes is set * <https://review.openstack.org/#/c/576820/> Rename auth_uri to www_authenticate_uri * <https://review.openstack.org/#/q/project:openstack/blazar+topic:bp/placement-api> Blazar's work on using placement # End You are the key to the coming revolution. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all][api] POST /api-sig/news
Greetings OpenStack community, At today's meeting we discussed an issue that came up on a nova/placement review [9] wherein there was some indecision about whether a response code of 400 or 404 is more appropriate when a path segment expects a UUID, the request doesn't supply something that is actually a UUID, and the method being used on the URI may be creating a resource. We agreed with the earlier discussion that a 400 was appropriate in this narrow case. Other cases may be different. With that warm-up exercise out of the way, we moved on to discussing pending guidelines, freezing one of them [10] and declaring that another [11] required a followup to clarify the format of string codes used in error responses. After that, we did some group learning about StoryBoard [8]. This is becoming something of a regular activity. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ # Guidelines Currently Under Review [3] * Add links to errors-example.json https://review.openstack.org/#/c/578369/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html [8] https://storyboard.openstack.org/#!/project/1039 [9] https://review.openstack.org/#/c/580373/ [10] https://review.openstack.org/#/c/577118/ [11] https://review.openstack.org/#/c/578369/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-27
HTML: https://anticdent.org/tc-report-18-27.html This week's TC Report will be relatively short. I wrote a lot of OpenStack-related words yesterday in [Some Opinions On Openstack](https://anticdent.org/some-opinions-on-openstack.html). That post was related to one of the main themes that has shown up in IRC and email discussions recently: creating a [technical vision](https://etherpad.openstack.org/p/tech-vision-2018) for the near future of OpenStack. One idea has been to [separate plumbing from porcelain](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33). There's also a [long email thread](http://lists.openstack.org/pipermail/openstack-dev/2018-July/131944.html) considering many ideas. One idea from that which I particularly like is unifying all the various agents that live on a compute node into one agent, one that likely talks to `etcd`. A `nodelet` like a `kubelet`. None of this is something that will happen overnight. I hope at least some of it does, eventually. Some change that's actually in progress now: For a long time OpenStack has tracked the organizational diversity of contributors to the various sub-projects. There's been a fair bit of talk that the tracking doesn't map to reality in a useful way and we need to [figure out what to do about it](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-28.log.html#t2018-06-28T15:06:49). That has resulted in a plan to [remove team diversity tags](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-28.log.html#t2018-06-28T15:06:49) and instead use a more holistic approach to being aware of and dealing with what's now being called "fragility" in teams. One aspect of this is the human-managed [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker). Julia went to China for an OpenStack event and her eyes were opened about the different context contributors there experience. She wrote a [superuser post](http://superuser.openstack.org/articles/translating-context-understanding-the-global-open-source-community/) and there's been subsequent related IRC discussion about [the challenges that people in China experience](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-27.log.html#t2018-06-27T14:06:00) trying to participate in OpenStack. More generally, there is a need to figure out some ways to build a [shared context](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-03.log.html#t2018-07-03T09:11:10) that involves people who are not a part of our usual circles. As usual, one of the main outcomes of that was that we need to make the time to share and talk more and in a more accessible fashion. We see bursts of that (we're seeing one now) but how do we sustain it and how do we extract some agreement that can lead to concerted action? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tc] Technical Committee Update for 3 July
On Tue, 3 Jul 2018, Doug Hellmann wrote: Chris and Thierry have been discussing a technical vision for OpenStack. * https://etherpad.openstack.org/p/tech-vision-2018 Just so it's clear and credit where credit (or blame!) is due: Zane has been a leading part of this too. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [tc] [all] TC Report 18-26
On Thu, 28 Jun 2018, Fox, Kevin M wrote: I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side effect of structure) * focus on the commons first. * simplify the architecture for ops: * make as much as possible stateless and centralize remaining state. * stop moving config options around with every release. Make it promote automatically and persist it somewhere. * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. These are ideas worth thinking about. We may not be able to do them (unclear), but they are stimulating and interesting and we need to keep the conversation going. Thank you. I referenced this thread from a blog post I just made, https://anticdent.org/some-opinions-on-openstack.html which is just a bunch of random ideas on tweaking OpenStack in the face of growth and change. It's quite likely it's junk, but there may be something useful to extract as we try to achieve some focus. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [placement] placement update 18-26
Thanks for the notes, Matt. I'll try to incorporate this stuff into the next one where it makes sense. A response within. On Fri, 29 Jun 2018, Matt Riedemann wrote: On 6/29/2018 8:03 AM, Chris Dent wrote: There are patches left on the consumer generation topic, some tidy ups, and some stuff related to healing allocations: * <https://review.openstack.org/#/q/topic:bp/add-consumer-generation> Is someone already working on code for making use of this in the resource tracker? In what way? The RT, except for I think the Ironic driver, shouldn't be dealing with allocations (PUTing them anyway). I know such things never happen in my writing, but that's basically a typo. It should say "report client". By which I mean, is anyone working on handling a generation conflict on PUT /allocations from nova-scheduler? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
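[Editorial aside: for context on what "handling a generation conflict" might look like, a rough hypothetical sketch, not nova's report client code: re-read the consumer's current allocations to get a fresh consumer generation and retry the PUT. The endpoint, payload shape, and microversion follow the placement API; the retry policy and names are made up for illustration. `session` is assumed to be a keystoneauth1 Adapter already pointed at placement.]

```python
# Hypothetical retry-on-conflict for PUT /allocations/{consumer_uuid}
# (placement microversion 1.28+ carries consumer_generation).
def put_allocations_with_retry(session, consumer_uuid, allocations,
                               project_id, user_id, retries=3):
    url = '/allocations/%s' % consumer_uuid
    headers = {'OpenStack-API-Version': 'placement 1.28'}
    for _ in range(retries):
        # Re-read to learn the current consumer generation.
        current = session.get(url, headers=headers,
                              raise_exc=False).json()
        payload = {
            'allocations': allocations,
            'project_id': project_id,
            'user_id': user_id,
            # Echo back the generation we just read; a concurrent writer
            # will cause placement to return 409.
            'consumer_generation': current.get('consumer_generation'),
        }
        resp = session.put(url, json=payload, headers=headers,
                           raise_exc=False)
        if resp.status_code != 409:
            return resp
    return resp
```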
[openstack-dev] [nova] [placement] placement update 18-26
ew.openstack.org/#/c/556669/> Tighten up ReportClient use of generation * <https://review.openstack.org/#/c/537614/> Add unit test for non-placement resize * <https://review.openstack.org/#/c/493865/> cover migration cases with functional tests * <https://review.openstack.org/#/c/535517/> Move refresh time from report client to prov tree * <https://review.openstack.org/#/c/561770/> PCPU resource class * <https://review.openstack.org/#/c/566166/> rework how we pass candidate request information * <https://review.openstack.org/#/c/564876/> add root parent NULL online migration * <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider> add resource_requests field to RequestSpec * <https://review.openstack.org/#/c/560107/> normalize_name helper (in os-traits) * <https://review.openstack.org/#/c/538498/> Convert driver supported capabilities to compute node provider traits * <https://review.openstack.org/#/c/568639/> Use placement.inventory.inuse in report client * <https://review.openstack.org/#/c/517921/> ironic: Report resources as reserved when needed * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams * <https://review.openstack.org/#/c/578048/> [placement] api-ref: add traits parameter * <https://review.openstack.org/#/c/578826/> Convert 'placement_api_docs' into a Sphinx extension * <https://review.openstack.org/#/c/577915/> [placement] Fix bug in consumer generation handling (This is likely to be replaced by something better, but including it for reference.) * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams * <https://review.openstack.org/#/c/579110/> Fix placement incompatible with webob 1.7 * <https://review.openstack.org/#/c/576693/> Disable limits if force_hosts or force_nodes is set * <https://review.openstack.org/#/c/576820/> Rename auth_uri to www_authenticate_uri * <https://review.openstack.org/#/q/project:openstack/blazar+topic:bp/placement-api> Blazar's work on using placement # End A butterfly just used my house as a south to north shortcut. That'll do. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [tc] [all] TC Report 18-26
HTML: https://anticdent.org/tc-report-18-26.html All the bits and pieces of OpenStack are interconnected and interdependent across the many groupings of technology and people. When we plan or make changes, wiggling something _here_ has consequences over _there_. Some intended, some unintended. This is such commonly accepted wisdom that to say it risks being a cliche, but acting accordingly remains hard. This [morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T09:09:57) Thierry and I had a useful conversation about the [Tech Vision 2018 etherpad](https://etherpad.openstack.org/p/tech-vision-2018). One of the issues there is agreeing on what we're even talking about. How can we have a vision for a "cloud" if we don't agree what that is? There's hope that clarifying the vision will help unify and direct energy, but as the discussion and the etherpad show, there's work to do. The lack of clarity on the vision is one of the reasons why Adjutant's [application to be official](https://review.openstack.org/#/c/553643/) still has [no clear outcome](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T18:59:43). Meanwhile, to continue [last week's theme](/tc-report-18-25.html), the TC's role as listener, mediator, and influencer lacks definition. Zane wrote up a blog post explaining the various ways in which the OpenStack Foundation is [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion). But this raises [questions](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-20.log.html#t2018-06-20T15:41:41) about what, if any, role the TC has in that expansion. It appears that the board has decided not to do a [joint leadership meeting](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:32:17) at the PTG, which means discussions about such things will need to happen in other media, or be delayed until the next summit in Berlin. To make up for the gap, the TC is [planning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:54:43) to hold [a gathering](http://lists.openstack.org/pipermail/openstack-tc/2018-June/001510.html) to work on some of the much-needed big-picture and shared-understanding building. While that shared understanding is critical, we have to be sure that it incorporates what we can hear from people who are not long-term members of the community. In a long discussion asking if [our tooling makes things harder for new contributors](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T15:21:24), several of us tried to make it clear that we have an incomplete understanding about the barriers people experience, that we often assume rather than verify, and that sometimes our interest in and enthusiasm for making incremental progress (because if iterating in code is good and just, perhaps it is in social groups too?) can mean that we avoid the deeper analysis required for paradigm shifts. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] [placement] placement update 18-25
reshape-provider-tree> There are WIPs for the HTTP parts and the resource tracker parts, on that topic. ## Mirror Host Aggregates I thought this was done but there's one thing left. A command line tool: * <https://review.openstack.org/#/c/575912/> ## Extraction The optional placement database stuff has merged, and is running in the nova-next job. As mentioned above there are documentation tasks to do with this. A while back, Jay made a first pass at an [os-resource-classes](https://github.com/jaypipes/os-resource-classes/), which needs some additional eyes on it. I personally thought it might be heavier than required. If you have ideas please share them. An area we will need to prepare for is dealing with the various infra and co-gating issues that will come up once placement is extracted. We also need to think about how to manage the fixtures currently made available by nova that we might need or want to use in placement. Some of them might be worth sharing. How should we do that? # Other 23 entries last week. 18 now. Nice merging. But we've added quite a few, we just don't see them because this is a contract week. * <https://review.openstack.org/#/c/546660/> Purge comp_node and res_prvdr records during deletion of cells/hosts * <https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky> A huge pile of improvements to osc-placement * <https://review.openstack.org/#/c/527791/> Get resource provider by uuid or name (osc-placement) * <https://review.openstack.org/#/c/477478/> placement: Make API history doc more consistent * <https://review.openstack.org/#/c/556669/> Tighten up ReportClient use of generation * <https://review.openstack.org/#/c/537614/> Add unit test for non-placement resize * <https://review.openstack.org/#/c/493865/> cover migration cases with functional tests * <https://review.openstack.org/#/q/topic:bug/1732731> Bug fixes for sharing resource providers * <https://review.openstack.org/#/c/535517/> Move refresh time from report client to prov tree * <https://review.openstack.org/#/c/561770/> PCPU resource class * <https://review.openstack.org/#/c/566166/> rework how we pass candidate request information * <https://review.openstack.org/#/c/564876/> add root parent NULL online migration * <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider> add resource_requests field to RequestSpec * <https://review.openstack.org/#/c/560107/> normalize_name helper (in os-traits) * <https://review.openstack.org/#/c/538498/> Convert driver supported capabilities to compute node provider traits * <https://review.openstack.org/#/c/568639/> Use placement.inventory.inuse in report client * <https://review.openstack.org/#/c/517921/> ironic: Report resources as reserved when needed * <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams # End Hi. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer
On Fri, 22 Jun 2018, Bedyk, Witold wrote: You've said lacking manpower is currently the main issue in Ceilometer, which stops you from accepting new publishers, and that you don't want to add maintenance overhead. I've lost track of the details of the thread: can you remind me why keeping the plugin as an external one (perhaps packaged with monasca itself) is not a good option? As I understood things, that was the benefit of the plugin architecture. We're offering help on maintaining the project. I think this could potentially be a great option, if everyone involved thinks it is a good idea, but it is somewhat orthogonal to the question above about being an external plugin. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] [placement] setting oslo config opts from environment
On Tue, 19 Jun 2018, Doug Hellmann wrote: I certainly have no objection to doing the work in oslo.config. As I described on IRC today, I think we would want to implement it using the new driver feature we're working on this cycle, even if the driver is enabled automatically so users don't have to turn it on. We already special case command line options and the point of the driver interface is to give us a way to extend the lookup logic without having to add more special cases. I've started a draft spec at https://review.openstack.org/#/c/576860/ Some details still need to be filled in, but it's enough to frame the idea. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [oslo] [placement] setting oslo config opts from environment
Every now and again I keep working on my experiments to containerize placement in a useful way [1]. At the moment I have it down to using a very small oslo_config-style conf file. I'd like to take it the rest of the way and have no file at all, so that my container can be an immutable black box whose presence on the network and use of a database are entirely external to itself, and I can add and remove containers at will with very little effort and no mounts or file copies.

This is the way placement has been designed from the start. Internally, all it really knows is what database it wants to talk to, and how to talk to keystone for auth. That's what's in the conf file. We recently added support for policy, but it is policy-in-code and the defaults are okay, so no policy file is required. Placement cannot create fully qualified URLs within itself. This is good and correct: it doesn't need to.

With that preamble out of the way, what I'd like to be able to do is make it so the placement service can start up and get its necessary configuration information from environment variables (which docker or k8s or whatever other orchestration you're using would set). There are plenty of ways to hack this into the existing code, but I would prefer to do it in a way that is useful and reusable by other people who want to do the same thing. So I'd like people's feedback and ideas on what they think of the following ways, and any other ideas they have. Or if oslo_config already does this and I just missed it, please set me straight.

1) I initially thought that the simplest way to do this would be to set a default when describing the options, doing something like `default=os.environ.get('something', the_original_default)`, but this has a bit of a flaw. It means that the conf file wins over the environment, and this is backwards from the expected [2] priority.

2) When the service starts up, after it reads its own config, but before it actually does anything, it inspects the environment for a suite of variables which it uses to clobber the settings that came from files with the values in the environment.

3) The same as 2, but it happens in oslo_config instead of the service's own code, perhaps with a `from_env` kwarg when defining the opts. Maybe just for StrOpt, and maybe with some kind of automated env-naming scheme.

4) Something else?

What do you think? Note that the main goal here is to avoid files, so solutions that are "read the environment variables to then write a custom config file" are not in this domain (although surely useful in other domains).

We had some IRC discussion about this [3] if you want a bit more context. Thanks for your interest and attention.

[1] https://anticdent.org/placement-container-playground-6.html
[2] https://bugs.launchpad.net/oslo-incubator/+bug/1196368
[3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-06-19.log.html#t2018-06-19T18:30:12

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
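To make options 1 and 2 in the message above concrete, here is a minimal sketch using oslo_config. The option, group, and environment variable names are placeholders for illustration, not placement's actual configuration.

```python
import os

from oslo_config import cfg

# Option 1: bake the environment lookup into the option's default.
# The flaw noted above: a value set in the conf file still wins over the
# environment variable, which is backwards from the expected priority.
opts = [
    cfg.StrOpt(
        'connection',
        default=os.environ.get('PLACEMENT_DB_CONNECTION', 'sqlite://'),
        help='Database connection string (env var name is made up).'),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(opts, group='placement_database')

# Option 2: read config files first, then clobber file-derived values with
# anything found in the environment, so the environment wins.
ENV_OVERRIDES = {
    'PLACEMENT_DB_CONNECTION': ('placement_database', 'connection'),
}


def apply_env_overrides(conf):
    for env_name, (group, name) in ENV_OVERRIDES.items():
        value = os.environ.get(env_name)
        if value is not None:
            conf.set_override(name, value, group=group)


if __name__ == '__main__':
    CONF([], project='placement')  # read any conf files found on disk
    apply_env_overrides(CONF)      # then let the environment win
    print(CONF.placement_database.connection)
```

Option 3 would move the `apply_env_overrides` idea inside oslo_config itself so every service gets it for free, which is the direction the spec linked in the follow-up explores.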
[openstack-dev] [tc] [all] TC Report 18-25
HTML: https://anticdent.org/tc-report-18-25.html

Over the time that I've been observing the TC, there's been quite a lot of indecision about how and when to exercise power. The rules and regulations of OpenStack governance have it that the TC has pretty broad powers in terms of allowing and disallowing projects to be "official" and in terms of causing or preventing the merging of _any_ code in _any_ of those official projects. Unfortunately, the negative aspect of these powers makes them the sort of powers that no one really wants to use.

Instead the TC has a history of, when it wants to pro-actively change things, using techniques of gently nudging or trying to make obvious the activities that would be useful. [OpenStack-wide goals](https://governance.openstack.org/tc/goals/index.html) and the [help most-needed list](https://governance.openstack.org/tc/reference/help-most-needed.html) are examples of this sort of thing.

Now that OpenStack is no longer sailing high on the hype seas, resources are more scarce and some tactics and strategies are no longer as useful as they once were. Some have expressed a desire for the TC to play a more active leadership role. One that allows the community to adapt more quickly to changing times. There's a delicate balance here that a few different conversations in the past week have highlighted.

[Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T15:07:31), a discussion about the (vast) volume of code getting reviewed and merged in the nova project led to some discussion on how to either enforce or support a goal of decomposing nova into smaller, less-coupled pieces. It was hard to find middle ground between outright blocking code that didn't fit with that goal and believing nothing could be done. Mixed in with that were valid concerns that the TC [shouldn't be parenting people who are adults](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:03:23) and [is unable to be effective](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:17:31). (_Note: the context of those two linked statements is very important, lest you be inclined to consider them out of context._)

And then [today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:03:19), some discussion about keeping the help wanted list up to date led to thinking about ways to encourage reorganizing "[work around objectives rather than code boundaries](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:17:27)", despite that being a very large cultural shift that may be very difficult to make.

So what is the TC (or any vaguely powered governance group) to do? We have some recent examples of the right thing: These are written works—some completed, some in-progress—that lay out a vision of how things could or should be that community members can react to and refer to. As concrete documents they provide what amounts to an evolving constitution of who we are or what we intend to be that people may point to as a third-party authority that they choose to accept, reject or modify without the complexity of "so and so said…".
* [Written principles for peer review](https://governance.openstack.org/tc/reference/principles.html#we-value-constructive-peer-review) and [clear documentation](https://docs.openstack.org/project-team-guide/review-the-openstack-way.html) of the same.
* Starting a [Technical Vision for 2018](https://etherpad.openstack.org/p/tech-vision-2018).
* There should be more here. There will be more here.

Many of the things that get written will start off wrong but the only way they have a chance of becoming right is if they are written in the first place. Providing ideas allows people to say "that's right" or "that's wrong" or "that's right, except...". Writing provides a focal point for including many different people in the generation and refinement of ideas and an archive of long-lived meaning and shared belief.

Beliefs are what we use to choose between what matters and what does not. As the community evolves, and in some ways shrinks while demands remain high, we have to make it easier for people to find and understand, with greater alacrity, what we, as a community, choose to care about. We've done a pretty good job in the past talking about things like the [four opens](https://governance.openstack.org/tc/reference/opens.html), but now we need to be more explicit about what we are making and how we make it.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__
[openstack-dev] [nova] [placement] placement update 18-24
ration conflict in report client
* <https://review.openstack.org/#/c/537614/> Add unit test for non-placement resize
* <https://review.openstack.org/#/c/493865/> cover migration cases with functional tests
* <https://review.openstack.org/#/q/topic:bug/1732731> Bug fixes for sharing resource providers
* <https://review.openstack.org/#/c/535517/> Move refresh time from report client to prov tree
* <https://review.openstack.org/#/c/561770/> PCPU resource class
* <https://review.openstack.org/#/c/566166/> rework how we pass candidate request information
* <https://review.openstack.org/#/c/564876/> add root parent NULL online migration
* <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider> add resource_requests field to RequestSpec
* <https://review.openstack.org/#/c/575127/> replace deprecated accept.best_match
* <https://review.openstack.org/#/c/575222/> Don't heal allocations for deleted servers
* <https://review.openstack.org/#/c/575237/> Ignore UserWarning for scope checks during test runs
* <https://review.openstack.org/#/c/568965/> Enforce placement minimum in nova.cmd.status
* <https://review.openstack.org/#/c/560107/> normalize_name helper (in os-traits)
* <https://review.openstack.org/#/c/573475/> Fix nits in nested provider allocation candidates(2)
* <https://review.openstack.org/#/c/538498/> Convert driver supported capabilities to compute node provider traits
* <https://review.openstack.org/#/c/568639/> Use placement.inventory.inuse in report client
* <https://review.openstack.org/#/c/517921/> ironic: Report resources as reserved when needed
* <https://review.openstack.org/#/c/568713/> Test for multiple limit/group_policy qparams

# End

Yow. That was long. Thanks for reading. Review some code please.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad
On Fri, 15 Jun 2018, Eric Fried wrote:

> We just merged an initial pass at direct access to the placement service [1]. See the test_direct suite for simple usage examples. Note that this was written primarily to satisfy the FFU use case in blueprint reshape-provider-tree [2] and therefore likely won't have everything cinder needs. So play around with it, but please do not put it anywhere near production until we've had some more collab. Find us in #openstack-placement.

Just to word this a bit more strongly (see also http://p.anticdent.org/2nbF, where this is paraphrased from): It would be bad news for cinder to start from placement direct. Better would be for cinder to figure out how to use placement "normally", and then for the standalone special case, consider placement direct or something derived from it. PlacementDirect, as currently written, is really for special cases, for use in extremis only.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent__ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
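For contrast with placement direct, "using placement normally" in the message above means going through keystoneauth like any other service client: a session plus an adapter that discovers the placement endpoint from the service catalog. A rough sketch, with placeholder credentials and a microversion chosen only as an example:

```python
from keystoneauth1 import adapter
from keystoneauth1 import identity
from keystoneauth1 import session

# Placeholder credentials; a real service loads these from its own config.
auth = identity.Password(
    auth_url='http://keystone.example.com/v3',
    username='cinder',
    password='secret',
    project_name='service',
    user_domain_id='default',
    project_domain_id='default')

sess = session.Session(auth=auth)

# The adapter discovers the placement endpoint from the service catalog,
# so nothing here hard-codes a placement URL.
placement = adapter.Adapter(
    sess,
    service_type='placement',
    interface='public',
    default_microversion='1.17')  # whatever microversion the caller needs

resp = placement.get('/resource_providers')
print(resp.json())
```

The design point is that nothing in this path reaches inside the service or bypasses auth and the catalog; PlacementDirect short-circuits exactly those pieces, which is why it is an in-extremis tool rather than a starting point.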