Re: [OpenStack-Infra] OpenDev git hosting migration and Gerrit downtime April 19, 2019

2019-04-17 Thread Chris Dent

On Wed, 17 Apr 2019, Thierry Carrez wrote:


Clark Boylan wrote:
Fungi has generated a master list of project renames for the openstack 
namespaces: http://paste.openstack.org/show/749402/. If you have a moment 
please quickly review these planned renames for any obvious errors or 
issues.


One thing that bothers me is the massive openstack-infra/ -> openstack/ 
rename, with things like:


openstack-infra/gerrit -> openstack/gerrit

Shouldn't that be directly moved to opendev/gerrit? Moving it to openstack/ 
sounds like a step backward.


Yeah, agree, that does seem the wrong direction.

The impression I had was that one of the outcomes of this would be
that what's in openstack/* would be a clearer picture of the thing
which is OpenStack.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent

Re: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes??

2018-11-19 Thread Chris Friesen

On 11/19/2018 10:25 AM, Yedhu Sastri wrote:

Hello All,

I have some use cases which I want to test on the PowerPC 
architecture (ppc64). As I don't have any Power machines, I would like to 
try it with ppc64 VMs. Is it possible to run these kinds of VMs on my 
OpenStack cluster (Queens), which runs on x86_64 architecture nodes (OS 
RHEL 7)?


I set the image property architecture=ppc64 on the ppc64 image I 
uploaded to Glance, but had no success launching VMs with those images. I 
am using KVM as the hypervisor (qemu 2.10.0) on my compute nodes, and I 
think it is not built to support the Power architecture. For testing 
without OpenStack I manually built qemu on an x86_64 host with ppc64 
support (qemu-ppc64), and then I am able to host the ppc64 VM. But I 
don't know how to do this on my OpenStack cluster. Do I need to manually 
build qemu with ppc64 support on the compute nodes, or do I need to add 
some lines to my nova.conf? Any help with this issue would be much 
appreciated.


I think that within an OpenStack cluster you'd have to dedicate a whole 
compute node to running ppc64 and have it advertise the architecture as 
ppc64.  Then when you ask for "architecture=ppc64" it should land on 
that node.
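
As a sketch of the image-side half of that (the image name here is
illustrative), the property below is what the scheduler's
ImagePropertiesFilter compares against the architectures a host
advertises:

    # tag the Glance image with its required architecture
    $ openstack image set --property architecture=ppc64 my-ppc64-image

    # confirm the property is stored
    $ openstack image show -c properties my-ppc64-image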


If this is for "development, testing or migration of applications to 
Power" have you checked out these people?  They provide free Power VMs.


http://openpower.ic.unicamp.br/minicloud/

Chris



[Openstack-operators] operators get-together today at the Berlin Summit

2018-11-13 Thread Chris Morgan
We never did come up with a good plan for a separate event for operators
this evening, so I think maybe we should just meet up at the marketplace
mixer. May I propose we meet at the front at 6pm?

Chris

-- 
Chris Morgan 


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Chris Apsey
Did you change the nova_metadata_ip option to nova_metadata_host in 
metadata_agent.ini?  The former option was deprecated several releases ago 
and no longer functions as of Pike.  The metadata service will throw 
500 errors if you don't change it.
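
For reference, a minimal sketch of the relevant metadata_agent.ini
change (the host value is illustrative):

    [DEFAULT]
    # deprecated, no longer honored:
    # nova_metadata_ip = 192.0.2.10
    nova_metadata_host = 192.0.2.10
    nova_metadata_port = 8775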


On November 12, 2018 19:00:46 Ignazio Cassano  wrote:

Any other suggestions?
It does not work.
Nova metadata is listening on port 8775, but I still can't solve this issue.
Thanks
Ignazio

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski wrote:

Hi,

From the logs you attached it looks like your neutron-metadata-agent 
can't connect to the nova-api service. Please check whether the nova 
metadata API is reachable from the node where your neutron-metadata-agent 
is running.
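
A quick way to sketch that check from the node running the
neutron-metadata-agent (the host placeholder is left for you to fill
in; 8775 is the default nova metadata port):

    # any HTTP response at all, even an error page,
    # proves basic reachability
    $ curl -i http://<nova-api-host>:8775/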


Message from Ignazio Cassano on 12.11.2018 at 22:34:


Hello again,
I have another installation of Ocata.
On Ocata the metadata proxy for a network ID is shown by ps -afe like this:
 /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
 --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid 
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
 --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
 --metadata_proxy_group=993 
 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log 
 --log-dir=/var/log/neutron


On Queens it looks like this:
 haproxy -f 
 /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf


Is this the correct behaviour?


Yes, that is correct. It was changed some time ago, see 
https://bugs.launchpad.net/neutron/+bug/1524916




Regards
Ignazio



On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski wrote:

Hi,

Can you share logs from your haproxy metadata proxy service which is 
running in the qdhcp namespace? There should be some info about the 
reason for those 500 errors.


> Message from Ignazio Cassano on 12.11.2018 at 19:49:

>
> Hi All,
> I manually upgraded my CentOS 7 OpenStack from Ocata to Pike.
> All worked fine.
> Then I upgraded from Pike to Queens, and instances stopped being able to 
reach metadata on 169.254.169.254, with error 500.
> I am using isolated metadata (true) in my DHCP config, and in the DHCP 
namespace port 80 is listening.

> Please, anyone can help me?
> Regards
> Ignazio
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

—
Slawek Kaplonski
Senior software engineer
Red Hat



—
Slawek Kaplonski
Senior software engineer
Red Hat



[openstack-dev] [docs] New Four Opens Project

2018-11-12 Thread Chris Hoge
Earlier this year, the OpenStack Foundation staff had the opportunity to 
brainstorm some ideas about how to express the values behind The Four Opens and 
how they are applied in practice. As the Foundation grows in scope to include 
new strategic focus areas and new projects, we felt it was important to provide 
explanation and guidance on the principles that guide our community.

We’ve collected these notes and have written some seeds to start this document. 
I’ve staged this work on GitHub and have prepared a review to move the work 
into OpenStack hosting, turning this over to the community to help guide and 
shape the document.

This is very much a work in progress, but we have a goal to polish this up and 
make it an important document that captures our vision and values for the 
OpenStack development community, guides the establishment of governance for new 
top-level projects, and is a reference for the open-source development 
community as a whole.

I also want to be clear that the original Four Opens, as listed in the 
OpenStack governance page, is an OpenStack TC document. This project doesn’t 
change that. Instead, it is meant to be applied to the Foundation as a whole 
and be a reference to the new projects that land both as pilot top-level 
projects and projects hosted by our new infrastructure efforts.

Thanks to all of the original authors of the Four Opens for your visionary work 
that started this process, and thanks in advance to the community members who 
will continue to grow and evolve this document.

Chris Hoge
OpenStack Foundation

Four Opens: https://governance.openstack.org/tc/reference/opens.html
New Project Review Patch: https://review.openstack.org/#/c/617005/
Four Opens Document Staging: https://github.com/hogepodge/four-opens



Re: [openstack-dev] [Openstack-operators] [nova] about resize the instance

2018-11-08 Thread Chris Friesen

On 11/8/2018 5:30 AM, Rambo wrote:


  When I resize the instance, the compute node reports:
"libvirtError: internal error: qemu unexpectedly closed the monitor:
2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory
'pc.ram': Cannot allocate memory". Has anyone seen this situation?
ram_allocation_ratio is set to 3 in nova.conf, and the total memory is
125G. When I use the "nova hypervisor-show server" command, the compute
node's free_ram_mb shows -45G. Is this the result of excessive memory
overcommit?

Can you give me some suggestions about this? Thank you very much.


I suspect that you simply don't have any available memory on that system.

What is your kernel overcommit setting on the host?  If 
/proc/sys/vm/overcommit_memory is set to 2, then try either changing the 
overcommit ratio or setting it to 1 to see if that makes a difference.
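
A quick sketch of checking and adjusting that (the values shown are
illustrative):

    # 2 = strict accounting, 1 = always allow, 0 = heuristic (default)
    $ cat /proc/sys/vm/overcommit_memory
    2
    $ cat /proc/sys/vm/overcommit_ratio
    50
    # try heuristic overcommit to see if the resize then succeeds
    $ sudo sysctl vm.overcommit_memory=1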


Chris



Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-11-07 Thread Chris Dent

On Tue, 6 Nov 2018, Corey Bryant wrote:


I'd like to get an official +1 here on the ML from parties such as the TC
and infra in particular but anyone else's input would be welcomed too.
Obviously individual projects would have the right to reject proposed
changes that enable py37 unit tests. Hopefully they wouldn't, of course,
but they could individually vote that way.


Speaking as someone on the TC but not "the TC" as well as someone
active in a few projects: +1. As shown elsewhere in the thread the
impact on node consumption and queue lengths should be small and the
benefits are high.


From an openstack/placement standpoint, please go for it if nobody
else beats you to it.

To me the benefits are simply that we find bugs sooner. It's bizarre
to me that we even need to think about this. The sooner we find
them, the less they impact people who want to use our code. Will it
cause breakage and extra work for us now? Possibly, but it's like making
an early payment on the mortgage: we are saving cost later.
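
For projects that want to opt in, the change is typically a one-line
addition to the project's Zuul config, something like the following
(template names as discussed in this thread; treat the exact py37
template name as an assumption):

    # .zuul.yaml
    - project:
        templates:
          - openstack-python36-jobs
          - openstack-python37-jobs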

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[Openstack-operators] no formal ops meetups team meeting today

2018-11-06 Thread Chris Morgan
Hello Ops,

It appears there will not be enough attendance on IRC today for a useful
ops meetups team meeting. I think everyone is getting ready for Berlin next
week, which at this stage is likely a better use of the time. We'll try to
find a good venue for a social get-together on the Tuesday, which will be
communicated nearer the time on the IRC channel and via email. Otherwise we
will see you at the Forum!

Chris

-- 
Chris Morgan 


Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-05 Thread Chris Dent

On Sun, 4 Nov 2018, Monty Taylor wrote:

I've floated a half-baked version of this idea to a few people, but lemme try 
again with some new words.


What if we added support for serving vendor data files from the root of a 
primary URL as-per RFC 5785. Specifically, support deployers adding a json 
file to .well-known/openstack/client that would contain what we currently 
store in the openstacksdk repo and were just discussing splitting out.


Sounds like a good plan.

I'm still a bit vexed that we need to know a cloud's primary host, then
this URL, then get a URL for auth, and from there start gathering up
information about the services and then their endpoints.

All of that seems of one piece to me and there should be one way to
do it.

But in the absence of that, this is a good plan.
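
To make the idea concrete, a deployer's file at
https://example.com/.well-known/openstack/client might hold something
like this purely illustrative profile (all keys and values here are
hypothetical, mirroring the sort of data the openstacksdk vendor
files carry):

    {
      "name": "example-cloud",
      "profile": {
        "auth": {
          "auth_url": "https://identity.example.com/v3"
        },
        "identity_api_version": "3",
        "regions": ["region-a", "region-b"]
      }
    }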


What do people think?


I think cats are nice and so is this plan.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-05 Thread Chris Dent
bably best situated to define
  and refine what should really be going on with the resource
  tracker and other actions on the compute-node.

* We need to have further discussion and investigation on
  allocations getting out of sync. Volunteers?

What else?

[1] https://review.openstack.org/#/c/614886/
[2] 
https://docs.google.com/document/d/1d5k1hA3DbGmMyJbXdVcekR12gyrFTaj_tJdFwdQy-8E/edit

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [placement] update 18-44

2018-11-02 Thread Chris Dent
topic:bp/placement-api>
  Blazar using the placement-api

* <https://review.openstack.org/#/c/614896/>
  Placement role for ansible project config

* <https://review.openstack.org/#/c/614285/>
  hyperv bump placement version

# End

Apologies if this is messier than normal, I'm rushing to get it out
before I travel.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling

2018-10-31 Thread Chris Dent

On Wed, 31 Oct 2018, Eduardo Gonzalez wrote:


- Run db syncs, as there is no command for that yet in the master branch
- Apply upgrade process for db changes


The placement-side pieces for this are nearly ready, see the stack
beginning at https://review.openstack.org/#/c/611441/

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling

2018-10-30 Thread Chris Dent

On Tue, 30 Oct 2018, Mohammed Naser wrote:


We spoke about this today in the OpenStack Ansible meeting, we've come
up with the following steps:


Great! Thank you, Guilherme, and Lee very much.


1) Create a role for placement which will be called `os_placement`
located in `openstack/openstack-ansible-os_placement`
2) Integrate that role with the OSA master and stop using the built-in
placement service
3) Update the playbooks to handle upgrades and verify using our
periodic upgrade jobs


Makes sense.


The difficult part really comes in the upgrade jobs. I really hope
that we can get some help on this, as it probably already puts a bit
of a load on Guilherme. Is anyone up for looking into that part once
the first two are completed? :)


The upgrade-nova script in https://review.openstack.org/#/c/604454/
has been written to make it pretty clear what each of the steps
mean. With luck those steps can translate to both the ansible and
tripleo environments.

Please feel free to add me to any of the reviews and come calling in
#openstack-placement with questions if there are any.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[Openstack-operators] Ops Meetups team meeting 2018-10-30

2018-10-30 Thread Chris Morgan
Brief meeting today on #openstack-operators, minutes below.

If you are attending Berlin, please start contributing to the Forum by
selecting sessions of interest and then adding to the etherpads (see
https://wiki.openstack.org/wiki/Forum/Berlin2018). I hear there's going to
be a really great one about ceph, for example.

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-30-14.01.log.html

Chris

-- 
Chris Morgan 


[openstack-dev] [qa] [api] [all] gabbi-tempest for integration tests

2018-10-29 Thread Chris Dent


Earlier this month I produced a blog post on something I was working
on to combine gabbi (the API tester used in placement, gnocchi,
heat, and a few other projects) with tempest, to create a simple two
step process for purely YAML-driven, HTTP-API-based testing of any
project that can test with tempest. That blog posting is at:

   https://anticdent.org/gabbi-in-the-gate.html

I've got it working now and the necessary patches have merged in
tempest and gabbi-tempest is now part of openstack's infra.

A pending patch in nova shows how it can work:

   https://review.openstack.org/#/c/613386/

The two steps are:

* Add a new job in .zuul.yaml with a parent of 'gabbi-tempest'
* Create some gabbi YAML files containing tests in a directory
  named in that zuul job.
* Profit.
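
A minimal sketch of what one of those gabbi YAML files can look like
(the path and the assertion values are illustrative):

    # gabbits/example.yaml
    tests:
      - name: the API root returns a version document
        GET: /
        status: 200
        response_json_paths:
          $.versions[0].status: CURRENT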

There are a few different pieces that have come together to make
this possible:

* The magic of zuul v3, local job config and job inheritance.
* gabbi: https://gabbi.readthedocs.io/
* gabbi-tempest: https://gabbi-tempest.readthedocs.io/ and
  https://git.openstack.org/cgit/openstack/gabbi-tempest
  and the specific gabbi-tempest zuul job:
  https://git.openstack.org/cgit/openstack/gabbi-tempest/tree/.zuul.yaml#n11
* tempest plugins and other useful ways of getting placement to run
  in different ways

I hope this is useful for people. Using gabbi is a great way to make
sure that your HTTP API is usable with lots of different clients and
without maintaining a lot of state.

Let me know if you have any questions or if you are interested in
helping to make gabbi-tempest more complete and well documented.
I've been approving my own code the past few patches and that feels
a bit dirty.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [placement] update 18-43

2018-10-26 Thread Chris Dent
* <https://review.openstack.org/#/c/601866/>
  Generate sample policy in placement directory
  (This is a bit stuck on not being sure what the right thing to do
  is.)

* <https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios>
  Improve handling of default allocation ratios

* 
<https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api>
  Neutron minimum bandwidth implementation

* <https://review.openstack.org/#/c/602160/>
  Add OWNERSHIP $SERVICE traits

* <https://review.openstack.org/#/c/604182/>
  Puppet: Initial cookiecutter and import from nova::placement

* <https://review.openstack.org/#/c/586960/>
  zun: Use placement for unified resource management

* <https://review.openstack.org/#/q/topic:bug/1799727>
  Update allocation ratio when config changes

* <https://review.openstack.org/#/q/topic:bug/1799892>
  Deal with root_id None in resource provider

* <https://review.openstack.org/#/q/topic:bug/1795992>
  Use long rpc timeout in select_destinations

* <https://review.openstack.org/#/c/529343/>
  Cleanups for scheduler code

* <https://review.openstack.org/#/q/topic:bp/bandwidth-resource-provider>
  Bandwith Resource Providers!

* <https://review.openstack.org/#/q/topic:bug/1799246>
  Harden placement init under wsgi

* <https://review.openstack.org/#/q/topic:cd/gabbi-tempest-job>
  Using gabbi-tempest for integration tests.

* <https://review.openstack.org/#/c/613118/>
  Make tox -ereleasenotes work

* <https://review.openstack.org/#/c/613343/>
  placement: Add a doc describing a quick live environment

# End

It's tired around here.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-25 Thread Chris Friesen

On 10/25/2018 12:00 PM, Jay Pipes wrote:

On 10/25/2018 01:38 PM, Chris Friesen wrote:

On 10/24/2018 9:10 AM, Jay Pipes wrote:
Nova's API has the ability to create "quota classes", which are 
basically limits for a set of resource types. There is something 
called the "default quota class" which corresponds to the limits in 
the CONF.quota section. Quota classes are basically templates of 
limits to be applied if the calling project doesn't have any stored 
project-specific limits.


Has anyone ever created a quota class that is different from "default"?


The Compute API specifically says:

"Only ‘default’ quota class is valid and used to set the default 
quotas, all other quota class would not be used anywhere."


What this API does provide is the ability to set new default quotas 
for *all* projects at once rather than individually specifying new 
defaults for each project.


It's a "defaults template", yes.


Chris, are you advocating for *keeping* the os-quota-classes API?


Nope.  I had two points:

1) It's kind of irrelevant whether anyone has created a quota class 
other than "default", because nova wouldn't use it anyway.


2) The main benefit (as I see it) of the quota class API is to allow 
dynamic adjustment of the default quotas without restarting services.


I totally agree that keystone limits should replace it.  I just didn't 
want the discussion to be focused on the non-default class portion 
because it doesn't matter.


Chris



Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-25 Thread Chris Friesen

On 10/24/2018 9:10 AM, Jay Pipes wrote:
Nova's API has the ability to create "quota classes", which are 
basically limits for a set of resource types. There is something called 
the "default quota class" which corresponds to the limits in the 
CONF.quota section. Quota classes are basically templates of limits to 
be applied if the calling project doesn't have any stored 
project-specific limits.


Has anyone ever created a quota class that is different from "default"?


The Compute API specifically says:

"Only ‘default’ quota class is valid and used to set the default quotas, 
all other quota class would not be used anywhere."


What this API does provide is the ability to set new default quotas for 
*all* projects at once rather than individually specifying new defaults 
for each project.
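
For illustration, that defaults template is driven through commands
like the following (values illustrative, using the legacy novaclient
interface):

    # raise the default instance quota for all projects at once
    $ nova quota-class-update --instances 20 default

    # inspect the current defaults
    $ nova quota-class-show default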


Chris



Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-24 Thread Chris Dent

On Wed, 24 Oct 2018, Jean-Philippe Evrard wrote:


On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote:

Also, doesn't bitbucket have a git interface now too (optionally)?


It does :)
But I think it requires a new repo, so it could just as well move
to somewhere else like GitHub or OpenStack infra :p


Right, so that combined with bitbucket oozing surveys and assorted
other annoyances over me has meant that I've moved paste to github:

https://github.com/cdent/paste

I merged some of the outstanding patches, forced Zane to fix up a few
more Python 3.7 related things, fixed up some of the docs and
released a new version (3.0.0) to pypi:

https://pypi.org/p/Paste

And I published the docs (linked from the new release and the repo) to
a new URL on RTD, as older versions of the docs were not something I
was able to adopt:

https://pythonpaste.readthedocs.io

And some travis-ci stuff.

I didn't bother to bring Paste into OpenDev infra because that felt
like it would signal a longer and more engaged commitment than the
responses here indicated should happen. We want to
encourage migration away. As Morgan stated elsewhere in the thread [1]
work is in progress to make using something else easier for people.

If you want to help with Paste, make some issues and pull requests
in the repo above. Thanks.

Next step? paste.deploy (which is a separate repo).

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135937.html

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] OpenStack Foundation Community Meeting - October 24 - StarlingX

2018-10-23 Thread Chris Hoge
On Wednesday, October 24 we will host our next Foundation community
meeting at 8:00 PT / 15:00 UTC. This meeting will focus on an update
on StarlingX, one of the projects in the Edge Computing Strategic Focus
Area.

The full agenda is here:
https://etherpad.openstack.org/p/openstack-community-meeting

Do you have something you'd like to discuss or share with the community?
Please share them with me so that I can schedule them for future meetings.

Thanks,
Chris

(Calendar attachment: "StarlingX First Release, Community Webinar",
2018-10-24 15:00-16:00 UTC, location https://zoom.us/j/112003649,
agenda https://etherpad.openstack.org/p/openstack-community-meeting)


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-23 Thread Chris Dent

On Mon, 22 Oct 2018, Chris Dent wrote:


Thus far I'm not hearing any volunteers. If that continues to be the
case, I'll just keep it on bitbucket as that's the minimal change.


As there was some noise that suggested "if you make it use git I
might help", I put it on github:

https://github.com/cdent/paste

I'm now in the process of getting it somewhat sane for modern
Python; however, test coverage isn't that great, so additional work
is required. Once it seems mostly okay, I'll push out a new version
to PyPI.

I welcome assistance from any and all.

Rather importantly, we also need to take over pastedeploy, as the
functionality there is important too. I've started that ball
rolling.

If having it live in my github proves a problem we can easily move
it along somewhere else, but this was the shortest hop.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-22 Thread Chris Dent

On Fri, 19 Oct 2018, Zane Bitter wrote:

Just to make it easier to visualise, here is an example for how the Zuul 
config _might_ look now if we had adopted this proposal during Rocky:


https://review.openstack.org/611947

And instead of having a project-wide goal in Stein to add 
`openstack-python36-jobs` to the list that currently includes 
`openstack-python35-jobs` in each project's Zuul config[1], we'd have had a 
goal to change `openstack-python3-rocky-jobs` to 
`openstack-python3-stein-jobs` in each project's Zuul config.


I like this, because it involves conscious actions, awareness and
self-testing by each project to move forward to a thing with a
reasonable name (the cycle name).

I don't think we should call that "churn". "Intention" might be a
better word.


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Chris Dent

On Fri, 19 Oct 2018, Thierry Carrez wrote:


Ed Leafe wrote:

On Oct 15, 2018, at 7:40 AM, Chris Dent  wrote:


I'd like some input from the community on how we'd like this to go.


I would say it depends on the long-term plans for paste. Are we planning on 
weaning ourselves off of paste, and simply need to maintain it until that 
can be completed, or are we planning on encouraging its use?


Agree with Ed... is this something we plan to minimally maintain because we 
depend on it, something that needs feature work and that we want to encourage 
the adoption of, or something that we want to keep on life-support while we 
move away from it?


That is indeed the question. I was rather hoping that some people
who are using paste (besides Keystone) would chime in here with what
they would like to do.

My preference would be that we immediately start moving away from it
and keep paste barely on life-support (a bit like WSME which I also
somehow managed to get involved with despite thinking it is horrible).

However, that's not easy to do, because the paste.ini files have to
be considered config: some projects and deployments use them to
drive custom middleware and the ordering of middleware. So we're in
for at least a year or so.
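
For context, here is a sketch of the kind of paste.ini pipeline that
deployers customize; the app and its factory are hypothetical, while
the filter factories shown are commonly used real ones:

    [pipeline:main]
    pipeline = request_id authtoken myapp

    [filter:request_id]
    paste.filter_factory = oslo_middleware:RequestId.factory

    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory

    [app:myapp]
    paste.app_factory = myservice.api:app_factory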

My assumption is that it's "something we plan to minimally maintain because 
we depend on it". in which case all options would work: the exact choice 
depends on whether there is anybody interested in helping maintaining it, and 
where those contributors prefer to do the work.


Thus far I'm not hearing any volunteers. If that continues to be the
case, I'll just keep it on bitbucket as that's the minimal change.

My concern with that is my aforementioned feelings of "it is
horrible". It might be better if someone who actually appreciates
Paste was involved as well.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [placement] update 18-42

2018-10-19 Thread Chris Dent
enstack.org/#/c/611678/).

Successful devstack is dependent on us having a reasonable solution
to (2). For the moment [a hacked up
script](https://review.openstack.org/#/c/600161/) is being used to
create tables. Ed has started some work on [moving to
alembic](https://review.openstack.org/#/q/topic:1alembic).

We have work in progress to tune up the documentation but we are not
yet publishing documentation (3). We need to work out a plan for
this. Presumably we don't want to be publishing docs until we are
publishing code, but the interdependencies need to be teased out.

# Other

Various placement changes out in the world.

* <https://review.openstack.org/#/q/topic:bug/1798163>
  The fix, in placement, for the consumer id group by problem.

* <https://review.openstack.org/#/c/601866/>
  Generate sample policy in placement directory
  (This is a bit stuck on not being sure what the right thing to do
  is.)

* <https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios>
  Improve handling of default allocation ratios

* 
<https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api>
  Neutron minimum bandwidth implementation

* <https://review.openstack.org/#/c/607953/>
  TripleO: Use valid_interfaces instead of os_interface for placement

* <https://review.openstack.org/#/c/602160/>
  Add OWNERSHIP $SERVICE traits

* <https://review.openstack.org/#/c/604182/>
  Puppet: Initial cookiecutter and import from nova::placement

* <https://review.openstack.org/#/c/601407/>
  WIP: Add placement to devstack-gate PROJECTS
  This was done somewhere else, wasn't it? Could this be
  abandoned?

* <https://review.openstack.org/#/c/586960/>
  zun: Use placement for unified resource management

# End

Hi!

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants

2018-10-18 Thread Chris Apsey

We are using multiple keystone domains - still can't reproduce this.

Do you happen to have a customized keystone policy.json?

Worst case, I would launch a devstack of your targeted release.  If you 
can't reproduce the issue there, you would at least know its caused by a 
nonstandard config rather than a bug (or at least not a bug that's present 
when using a default config)


On October 18, 2018 18:50:12 iain MacDonnell  
wrote:



That all looks fine.

I believe that the "default" policy applies in place of any that's not
explicitly specified - i.e. "if there's no matching policy below, you
need to have the admin role to be able to do it". I do have that line in
my policy.json, and I cannot reproduce your problem (see below).
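
For reference, that catch-all rule looks like this line in
policy.json (a sketch of the rule being described):

    "default": "role:admin",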

I'm not using domains (other than "default"). I wonder if that's a factor...

~iain


$ openstack user create --password foo user1
+-+--+
| Field   | Value|
+-+--+
| domain_id   | default  |
| enabled | True |
| id  | d18c0031ec56430499a2d690cb1f125c |
| name| user1|
| options | {}   |
| password_expires_at | None |
+-+--+
$ openstack user create --password foo user2
+-+--+
| Field   | Value|
+-+--+
| domain_id   | default  |
| enabled | True |
| id  | be9f1061a5104abd834eabe98dff055d |
| name| user2|
| options | {}   |
| password_expires_at | None |
+-+--+
$ openstack project create project1
+-+--+
| Field   | Value|
+-+--+
| description |  |
| domain_id   | default  |
| enabled | True |
| id  | 826876d6d3724018bae6253c7f540cb3 |
| is_domain   | False|
| name| project1 |
| parent_id   | default  |
| tags| []   |
+-+--+
$ openstack project create project2
+-+--+
| Field   | Value|
+-+--+
| description |  |
| domain_id   | default  |
| enabled | True |
| id  | b446b93ac6e24d538c1943acbdd13cb2 |
| is_domain   | False|
| name| project2 |
| parent_id   | default  |
| tags| []   |
+-+--+
$ openstack role add --user user1 --project project1 _member_
$ openstack role add --user user2 --project project2 _member_
$ export OS_PASSWORD=foo
$ export OS_USERNAME=user1
$ export OS_PROJECT_NAME=project1
$ openstack image list
+--+++
| ID   | Name   | Status |
+--+++
| ad497523-b497-4500-8e6c-b5fb12a30cee | cirros | active |
+--+++
$ openstack image create --private image1
+--+--+
| Field| Value
 |
+--+--+
| checksum | None
 |
| container_format | bare
 |
| created_at   | 2018-10-18T22:17:41Z
 |
| disk_format  | raw
 |
| file |
/v2/images/6a0c1928-b79c-4dbf-a9c9-305b599056e4/file
|
| id   | 6a0c1928-b79c-4dbf-a9c9-305b599056e4
 |
| min_disk | 0
 |
| min_ram  | 0
 |
| name | image1
 |
| owner| 826876d6d3724018bae6253c7f540cb3
 |
| properties   | locations='[]', os_hash_algo='None',
os_hash_value='None', os_hidden='False' |
| protected| False
  

Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants

2018-10-18 Thread Chris Apsey
Do you have a liberal/custom policy.json that perhaps is causing unexpected 
behavior?  Can't seem to reproduce this.


On October 18, 2018 18:13:22 "Moore, Michael Dane (GSFC-720.0)[BUSINESS 
INTEGRA, INC.]"  wrote:


I have replicated this unexpected behavior in a Pike test environment, in 
addition to our Queens environment.




Mike Moore, M.S.S.E.

Systems Engineer, Goddard Private Cloud
michael.d.mo...@nasa.gov

Hydrogen fusion brightens my day.


On 10/18/18, 2:30 PM, "Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, 
INC.]"  wrote:


   Yes. I verified it by creating a non-admin user in a different tenant. I 
   created a new image, set to private with the project defined as our admin 
   tenant.


   In the database I can see that the image is 'private' and the owner is the 
   ID of the admin tenant.


   Mike Moore, M.S.S.E.

   Systems Engineer, Goddard Private Cloud
   michael.d.mo...@nasa.gov

   Hydrogen fusion brightens my day.


   On 10/18/18, 1:07 AM, "iain MacDonnell"  wrote:



   On 10/17/2018 12:29 PM, Moore, Michael Dane (GSFC-720.0)[BUSINESS
   INTEGRA, INC.] wrote:

I’m seeing unexpected behavior in our Queens environment related to
Glance image visibility. Specifically users who, based on my
understanding of the visibility and ownership fields, should NOT be able
to see or view the image.

If I create a new image with openstack image create and specify --project
 and --private, a non-admin user in a different tenant can see and
boot that image.

That seems to be the opposite of what should happen. Any ideas?


   Yep, something's not right there.

   Are you sure that the user that can see the image doesn't have the admin
   role (for the project in its keystone token) ?

   Did you verify that the image's owner is what you intended, and that the
   visibility really is "private" ?

~iain



[Openstack-operators] Ops Meetups team meeting 2018-10-16

2018-10-16 Thread Chris Morgan
The OpenStack Ops Meetups team met today on #openstack-operators, meeting
minutes linked below.

As discussed previously the ops meetups team intends to arrange two ops
meetups in 2019, the first aimed for February or March in Europe, the
second in August or September in North America. A Call for Proposals (CFP)
will be issued shortly.

For those of you attending the OpenStack Summit in Berlin next month,
please note we'll arrange an informal social event for OpenStack operators
(and anyone else who wants to come) on the Tuesday night. Several of the
meetups team are also moderating sessions at the forum. See you there!

Chris

Minutes :
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.txt
Log   :
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-16-14.04.log.html

-- 
Chris Morgan 


[openstack-dev] [placement] devstack, grenade, database management

2018-10-16 Thread Chris Dent


TL;DR: We need reviews on
https://review.openstack.org/#/q/topic:cd/placement-solo+status:open
and work on database management command line tools. More detail
within.

The stack of code, mostly put together by Matt, to get migrating
placement-in-nova to placement-in-placement working is passing its
tests. You can see the remaining pieces of not yet merged code at

https://review.openstack.org/#/q/topic:cd/placement-solo+status:open

Once that is fully merged, the first bullet point on the extraction
plan at


http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html

will be complete and we'll have a model for how the next two bullet
points can be done.

At this time, there are two main sticking points to getting things
merged:

* The devstack, grenade, and devstack-gate changes need some review
  to make sure that some of the tricks Matt and I performed are
  acceptable to everyone. They are at:

  https://review.openstack.org/600162
  https://review.openstack.org/604454
  https://review.openstack.org/606853

* We need to address database creation scripts and database migrations.

  There's a general consensus that we should use alembic, and start
  things from a collapsed state. That is, we don't need to represent
  already existing migrations in the new repo, just the present-day
  structure of the tables.

  Right now the devstack code relies on a stubbed out command line
  tool at https://review.openstack.org/#/c/600161/ to create tables
  with a metadata.create_all() (see the sketch after this list). This
  is a useful thing to have but doesn't follow the "db_sync" pattern
  set elsewhere, so I haven't followed through on making it pretty,
  but can do so if people think it is useful. Whether we do that or
  not, we'll still need some kind of "db_sync" command. Do people
  want me to make a cleaned up "create" command?

  Ed has expressed some interest in exploring setting up alembic and
  the associated tools but that can easily be a more than one person
  job. Is anyone else interested?
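
Here is the metadata.create_all() idea as a minimal sketch; the model
import path and database URL are illustrative stand-ins, not the
actual tool under review:

    # create_tables.py - a stand-in for the stubbed CLI mentioned above
    from sqlalchemy import create_engine

    # hypothetical import path; the real models live in the placement tree
    from placement.db.sqlalchemy import models


    def create_tables(db_url):
        """Create all tables from the model metadata, skipping migrations."""
        engine = create_engine(db_url)
        models.BASE.metadata.create_all(engine)


    if __name__ == '__main__':
        create_tables('mysql+pymysql://user:secret@localhost/placement')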

It would be great to get all this stuff working sooner than later.
Without it we can't do two important tasks:

* Integration tests with the extracted placement [1].
* Hacking on extracted placement in/with devstack.

Another issue that needs some attention, but is not quite as urgent
is the desire to support other databases during the upgrade,
captured in this change

https://review.openstack.org/#/c/604028/

[1] There's a stack of code for enabling placement integration tests
starting at https://review.openstack.org/#/c/601614/ . It depends on
the devstack changes.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-15 Thread Chris Dent


Back in August [1] there was an email thread about the Paste package
being essentially unmaintained and several OpenStack projects still
using it. At that time we reached the conclusion that we should
investigate having OpenStack adopt Paste in some form, since migrating
services away from it would take some time or not be worth it.

I went about trying to locate the last set of maintainers and get
access to picking it up. It took a while, but I've now got owner
bits for both bitbucket and PyPI and enthusiastic support from the
previous maintainer for OpenStack to be the responsible party.

I'd like some input from the community on how we'd like this to go.
Some options.

* Chris becomes the de-facto maintainer of paste and I do whatever I
  like to get it healthy and released.

* Several volunteers from the community take over the existing
  bitbucket setup [2] and keep it going there.

* Several volunteers from the community import the existing
  bitbucket setup to OpenStack^wOpenDev infra and manage it.

What would people like? Who would like to volunteer?

At this stage the main piece of blocking work is a patch [3] (and
subsequent release) to get things working happily in Python 3.7.

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/132792.html
[2] https://bitbucket.org/ianb/paste
[3] https://bitbucket.org/ianb/paste/pull-requests/41/python-37-support/diff

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours

2018-10-15 Thread Chris Dent

On Sat, 13 Oct 2018, Mohammed Naser wrote:


Does this seem like it would be of interest to the community?  I am
currently trying to transform our office hours to be more of a space
where we have more of the community and less of discussion between us.


If we want discussion to actually be with the community at large
(rather than giving lip service to the idea), then we need to be
more oriented to using email. Each time we have an office hour or a
meeting in IRC or elsewhere, or an ad hoc Hangout, unless we are
super disciplined about reporting the details to email afterwards, a
great deal of information falls on the floor and individuals who are
unable to attend because of time, space, language or other
constraints are left out.

For community-wide issues, synchronous discussion should be the mode
of last resort. Anything else creates a priesthood with a
disempowered laity wondering how things got away from them.

For community goals, in particular, preferring email for discussion
and planning seems pretty key.

I wonder if, instead of specifying topics for TC office hours, we
should kill them? They've turned into gossiping echo chambers.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Chris Dent

On Wed, 10 Oct 2018, Greg Hill wrote:


I guess I'm putting it forward to the larger community. Does anyone have
any objections to us doing this? Are there any non-obvious technicalities
that might make such a transition difficult? Who would need to be made
aware so they could adjust their own workflows?


I've been on both sides of conversations like this a few different
times. Generally speaking people who are not already in the
OpenStack environment express an unwillingness to participate
because of perceptions of walled-garden and too-many-hoops.

Whatever the reality of the situation, those perceptions matter, and
for libraries that are already or potentially useful to people who
are not in OpenStack, being "outside" is probably beneficial. And
for a library that is normally installed (or should optimally be
installed because, really, isn't it nice to be decoupled?) via pip,
does it matter to OpenStack where it comes from?


Or would it be preferable to just fork and rename the project so openstack
can continue to use the current taskflow version without worry of us
breaking features?


Fork sounds worse.

I've had gabbi contributors tell me, explicitly, that they would not
bother contributing if they had to go through what they perceive to
be the OpenStack hoops. That's anecdata, but for me it is pretty
compelling.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [Openstack] [openstack] openstack setups at Universities

2018-10-15 Thread Chris Dent

On Wed, 10 Oct 2018, Jay See wrote:


Hi everyone,

Maybe a different question, not completely related to issues
associated with OpenStack.

Does anyone know of any university or universities using OpenStack
for cloud deployment and resource sharing?


Jetstream is OpenStack-based and put together by a consortium of
universities: https://jetstream-cloud.org/


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Chris Friesen

On 10/9/2018 1:20 PM, Jay Pipes wrote:

On 10/09/2018 11:04 AM, Balázs Gibizer wrote:

If you do the force flag removal in a new microversion, that also means
(at least to me) that you should not change the behavior of the force
flag in the old microversions.


Agreed.

Keep the old, buggy and unsafe behaviour for the old microversion and in 
a new microversion remove the --force flag entirely and always call GET 
/a_c, followed by a claim_resources() on the destination host.


Agreed.  Once you start looking at more complicated resource topologies, 
you pretty much need to handle allocations properly.


Chris



[openstack-dev] [placement] update 18-40

2018-10-05 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-40.html

Here's this week's placement update. We remain focused on
specs and pressing issues with extraction, mostly because until the
extraction is "done" in some form doing much other work is a bit
premature.

# Most Important

There have been several discussions recently about what to do with
options that impact both scheduling and configuration. Some of this
was in the thread about [intended purposes of
traits](http://lists.openstack.org/pipermail/openstack-dev/2018-October/thread.html#135301),
but more recently there was discussion on how to support guests
that want an HPET. Chris Friesen [summarized a
hangout](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135446.html)
that happened yesterday that will presumably be reflected in an
[in-progress spec](https://review.openstack.org/#/c/607989/1).

The work to get [grenade upgrading to
placement](https://review.openstack.org/#/c/604454/) is very close.
After several iterations of tweaking, the grenade jobs are now
passing. There are still some adjustments to get devstack jobs
working, but the way is relatively clear. More on this in
"extraction" below, but the reason this is a most important is that
this stuff allows us to do proper integration and upgrade testing,
without which it is hard to have confidence.

# What's Changed

In both placement and nova, placement is no longer using
`get_legacy_facade()`. This will remove some annoying deprecation
warnings.

The nova->placement database migration script for MySQL has merged.
The postgresql version is still [up for
review](https://review.openstack.org/#/c/604028/).

Consumer generations are now being used in some allocation handling
in nova.

# Questions

* What should we do about nova calling the placement db, like in
  
[nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416)
  and
  
[nova-status](https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L254).

* Should we consider starting a new extraction etherpad? The [old
  one](https://etherpad.openstack.org/p/placement-extract-stein-3)
  has become a bit noisy and out of date.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17.
  -1.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 8. -1.

# Specs

Many of these specs don't seem to be getting much attention. Can the
dead ones be abandoned?

* <https://review.openstack.org/#/c/544683/>
  Account for host agg allocation ratio in placement
  (Still in rocky/)

* <https://review.openstack.org/#/c/595236/>
  Add subtree filter for GET /resource_providers

* <https://review.openstack.org/#/c/597601/>
  Resource provider - request group mapping in allocation candidate

* <https://review.openstack.org/#/c/549067/>
  VMware: place instances on resource pool
  (still in rocky/)

* <https://review.openstack.org/#/c/555081/>
  Standardize CPU resource tracking

* <https://review.openstack.org/#/c/599957/>
  Allow overcommit of dedicated CPU
  (Has an alternative which changes allocations to a float)

* <https://review.openstack.org/#/c/600016/>
  List resource providers having inventory

* <https://review.openstack.org/#/c/593475/>
  Bi-directional enforcement of traits

* <https://review.openstack.org/#/c/599598/>
  allow transferring ownership of instance

* <https://review.openstack.org/#/c/591037/>
  Modelling passthrough devices for report to placement

* <https://review.openstack.org/#/c/509042/>
  Propose counting quota usage from placement and API database
  (A bit out of date but may be worth resurrecting)

* <https://review.openstack.org/#/c/603585/>
  Spec: allocation candidates in tree

* <https://review.openstack.org/#/c/603805/>
  [WIP] generic device discovery policy

* <https://review.openstack.org/#/c/603955/>
  Nova Cyborg interaction specification.

* <https://review.openstack.org/#/c/601596/>
  supporting virtual NVDIMM devices

* <https://review.openstack.org/#/c/603352/>
  Spec: Support filtering by forbidden aggregate

* <https://review.openstack.org/#/c/552924/>
  Proposes NUMA topology with RPs

* <https://review.openstack.org/#/c/552105/>
  Support initial allocation ratios

* <https://review.openstack.org/#/c/569011/>
  Count quota based on resource class

* <https://review.openstack.org/#/c/607989/>
  WIP: High Precision Event Timer (HPET) on x86 guests

* <https://review.openstack.org/#/c/57/>
  Add support for emulated virtual TPM

* <https://review.openstack.org/#/c/510235/>
  Limit instance create max_count (spec) (has some concurrency
  issues related to placement)

* <https://review.openstack.org/#/c/141219/>
  Adds spec for instance live resize

So many specs.

# Main Themes

## Making Nested Useful

Work on getting nova's use of nested resource providers happy and
fixing 

Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Chris Dent

On Thu, 4 Oct 2018, Doug Hellmann wrote:


TC members, please reply to this thread and indicate if you would find
meeting at 1300 UTC on the first Thursday of every month acceptable, and
of course include any other comments you might have (including alternate
times).


+1

Also, if we're going to set aside a time for a semi-formal meeting, I
hope we will have some form of agenda and minutes, with a fairly
clear process for setting that agenda as well as a process for
making sure that the fast and/or rude typers do not dominate the
discussion during the meetings, as they used to back in the day when
there were weekly meetings.

The "raising hands" thing that came along towards the end sort of
worked, so a variant on that may be sufficient.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] agreement on how to specify options that impact scheduling and configuration

2018-10-04 Thread Chris Friesen
While discussing the "Add HPET timer support for x86 guests" 
blueprint[1] one of the items that came up was how to represent what are 
essentially flags that impact both scheduling and configuration.  Eric 
Fried posted a spec to start a discussion[2], and a number of nova 
developers met on a hangout to hash it out.  This is the result.


In this specific scenario the goal was to allow the user to specify that 
their image required a virtual HPET.  For efficient scheduling we wanted 
this to map to a placement trait, and the virt driver also needed to 
enable the feature when booting the instance.  (This can be generalized 
to other similar problems, including how to specify scheduling and 
configuration information for Ironic.)


We discussed two primary approaches:

The first approach was to specify an arbitrary "key=val" in flavor 
extra-specs or image properties, which nova would automatically 
translate into the appropriate placement trait before passing it to 
placement.  Once scheduled to a compute node, the virt driver would look 
for "key=val" in the flavor/image to determine how to proceed.


The second approach was to directly specify the placement trait in the 
flavor extra-specs or image properties.  Once scheduled to a compute 
node, the virt driver would look for the placement trait in the 
flavor/image to determine how to proceed.


Ultimately, the decision was made to go with the second approach.  The 
result is that it is officially acceptable for virt drivers to key off 
placement traits specified in the image/flavor in order to turn on/off 
configuration options for the instance.  If we do get down to the virt 
driver and the trait is set, and the driver for whatever reason 
determines it's not capable of flipping the switch, it should fail.


It should be noted that it only makes sense to use placement traits for 
things that affect scheduling.  If it doesn't affect scheduling, then it 
can be stored in the flavor extra-specs or image properties separate 
from the placement traits.  Also, this approach only makes sense for 
simple booleans.  Anything requiring more complex configuration will 
likely need additional extra-spec and/or config and/or unicorn dust.
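
As a rough sketch of what that decision means in a driver (the trait
name and the "trait:<NAME>=required" syntax below mirror nova's
existing required-traits convention, but treat every name here as an
illustrative assumption):

    # Hedged sketch of a virt driver keying off a placement trait.
    HPET_TRAIT = 'COMPUTE_TIME_HPET'    # assumed trait name

    def wants_hpet(flavor_extra_specs, image_properties):
        key = 'trait:%s' % HPET_TRAIT
        return (flavor_extra_specs.get(key) == 'required'
                or image_properties.get(key) == 'required')

    def build_guest_config(flavor_extra_specs, image_properties):
        config = {}
        if wants_hpet(flavor_extra_specs, image_properties):
            # Per the agreement above, a driver that cannot actually
            # provide a virtual HPET should fail here, not ignore it.
            config['hpet'] = True
        return config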


Chris

[1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest
[2] 
https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Community Meeting - October 10 - Strategic Area Governance Update

2018-10-04 Thread Chris Hoge
Following the interest in the first OpenStack Foundation community
meeting, where we discussed the OpenStack Rocky release as well as quick
updates from Kata, Airship, StarlingX and Zuul, we want to keep the
community meetings going. The second community meeting will be October 10
at 8:00 PT / 15:00 UTC, and the agenda is for Jonathan Bryce and Thierry
Carrez to share the latest plans for strategic project governance at the
Foundation. These updates will include the process for creating new
Strategic Focus Areas, and the lifecycle of new Foundation supported
projects. We will have an opportunity to share feedback as well as a
question and answer session at the end of the presentation.

For a little context about the proposed plan for strategic project
governance, you can read Jonathan’s email to the Foundation mailing list:
http://lists.openstack.org/pipermail/foundation/2018-August/002617.html

This meeting will be recorded and made publicly available. This is part
of our plan to introduce bi-weekly OpenStack Foundation community meetings
that will cover topics like Foundation strategic area updates, project
demonstrations, and other community efforts. We expect the next meeting
to take place October 24 and focus on the anticipated StarlingX release.
Do you have something you'd like to discuss or share with the community?
Please share it with me so that I can schedule it for future meetings.

OpenStack Community Meeting - Strategic Area Governance Update
Date & Time: October 10, 8:00 PT / 15:00 UTC
Zoom Meeting Link: https://zoom.us/j/312447172
Agenda: https://etherpad.openstack.org/p/openstack-community-meeting

Thanks!
Chris Hoge
Strategic Program Manager
OpenStack Foundation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-10-04 Thread Chris Dent

On Wed, 3 Oct 2018, Chris Dent wrote:

I'd really like to see this become a real thing, so if I could get
some help from tempest people on how to make it in line with
expectations that would be great.


I've written up the end game of what I'm trying to achieve in a bit
more detail at https://anticdent.org/gabbi-in-the-gate.html

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-10-03 Thread Chris Dent

On Tue, 2 Oct 2018, Chris Dent wrote:


One of the comments in there is about the idea of making a zuul job
which is effectively "run the gabbits in these dirs" against a
tempest set up. Doing so will require some minor changes to the
tempest tox passenv settings but I think it ought to be
straightforwardish.


I've made a first stab at this:

* Small number of changes to tempest:
  https://review.openstack.org/#/c/607507/
  (The important change here, the one that strictly required changes
  to tempest, is adjusting passenv in tox.ini)

* Much smaller job on the placement side:
  https://review.openstack.org/#/c/607508/

I'd really like to see this become a real thing, so if I could get
some help from tempest people on how to make it in line with
expectations that would be great.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [helm] multiple nova compute nodes

2018-10-02 Thread Chris Friesen

On 10/2/2018 4:15 PM, Giridhar Jayavelu wrote:

Hi,
Currently, all nova components are packaged in the same helm chart "nova". Are 
there any plans to separate nova-compute from the rest of the services?
What should be the approach for deploying multiple nova compute nodes using 
OpenStack helm charts?


The nova-compute pods are part of a daemonset which will automatically 
create a nova-compute pod on each node that has the 
"openstack-compute-node=enabled" label.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-40

2018-10-02 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-40.html

I'm going to take a break from writing the TC reports for a while.
If other people (whether on the TC or not) are interested in
producing their own form of a subjective review of the week's TC
activity, I very much encourage you to do so. It's proven an
effective way to help at least some people maintain engagement.

I may pick it up again when I feel like I have sufficient focus and
energy to produce something that has more value and interpretation
than simply pointing at
[the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
However, at this time, I'm not producing a product that is worth the
time it takes me to do it and the time it takes away from doing
other things. I'd rather make more significant progress on fewer
things.

In the meantime, please join me in congratulating and welcoming the
newly elected members of the TC: Lance Bragstad, Jean-Philippe
Evrard, Doug Hellmann, Julia Kreger, Ghanshyam Mann, and Jeremy
Stanley.


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Ops Meetup Team meeting 2018/10/2

2018-10-02 Thread Chris Morgan
We had a good meeting on IRC today, minutes below. Current focus is on the
Forum event at the upcoming OpenStack Summit in Berlin in November. We are
going to try and pull together a social event for OpenStack operators on
the Tuesday night after the marketplace mixer. Further items under
discussion include the first inter-summit meetup, which is likely to be in
Europe in early March and will most likely feature a research track; early
discussions about the first 2019 summit/forum/PTG event; and finally the
target region for the second meetup (likely to be North America).

If you'd like to hear more, get involved, or submit your suggestions for
future OpenStack Operators related events, please get in touch!

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-02-14.02.log.html

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [k8s][magnum][zun] Notification of removal of in-tree K8s OpenStack Provider

2018-10-02 Thread Chris Hoge
For those projects that use OpenStack as a cloud provider for K8s, there
is a patch in flight[1] to remove the in-tree OpenStack provider from the
kubernetes/kubernetes repository. The provider has been deprecated for
two releases, with a replacement external provider available[2]. Before
we merge this patch for the 1.13 K8s release cycle, we want to make sure
that projects dependent on the in-tree provider (especially thinking
about projects like Magnum and Zun) have an opportunity to express their
readiness to switch over.

[1] https://github.com/kubernetes/kubernetes/pull/67782
[2] https://github.com/kubernetes/cloud-provider-openstack


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-10-02 Thread Chris Dent

On Wed, 19 Sep 2018, Monty Taylor wrote:

Yes. Your life will be much better if you do not make more legacy jobs. They 
are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest base 
job or the devstack-tox-functional base job - depending on what things are 
intended.


I have a thing mostly working at https://review.openstack.org/#/c/601614/

The commit message has some ideas on how it could be better and the
various hacks I needed to do to get things working.

One of the comments in there is about the idea of making a zuul job
which is effectively "run the gabbits in these dirs" against a
tempest set up. Doing so will require some minor changes to the
tempest tox passenv settings but I think it ought to be
straightforwardish.

Some reviews from people who understand these things more than me
would be most welcome.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Chris Dent

On Sat, 29 Sep 2018, Jay Pipes wrote:
I don't think that's a fair statement. You absolutely *do* care which way we 
go. You want to encode multiple bits of information into a trait string -- 
such as "PCI_ADDRESS_01_AB_23_CD" -- and leave it up to the caller to have to 
understand that this trait string has multiple bits of information encoded in 
it (the fact that it's a PCI device and that the PCI device is at 
01_AB_23_CD).


You don't see a problem encoding these variants inside a string. Chris 
doesn't either.


Lest I be misconstrued, I'd like to clarify: What I was trying to
say elsewhere in the thread was that placement should never be aware
of _anything_ that is in the trait string (except CUSTOM_* when
validating ones that are added, and MISC_SHARES[...] for sharing
providers). On the placement server side, input is compared solely
for equality with stored data and nothing else, and we should never
allow value comparisons, string fragments, regex, etc.

So from a code perspective _placement_ is completely agnostic to
whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or
"JAY_LIKES_CRUNCHIE_BARS".

However, things which are using traits (e.g., nova, ironic) need to
make their own decisions about how the value of traits are
interpreted. I don't have a strong position on that except to say
that _if_ we end up in a position of there being lots of traits
willy nilly, people who have chosen to do that need to know that the
contract presented by traits right now (present or not present, no
value comprehension) is fixed.

I *do* see a problem with it, based on my experience in Nova where this kind 
of thing leads to ugly, unmaintainable, and incomprehensible code as I have 
pointed to in previous responses.


I think there are many factors that have led to nova being
incomprehensible and indeed bad representations is one of them, but
I think reasonable people can disagree on which factors are the most
important and with sufficient discussion come to some reasonable
compromises. I personally feel that while the bad representations
(encoding stuff in strings or json blobs) thing is a big deal,
another major factor is a predilection to make new apis, new
abstractions, and new representations rather than working with and
adhering to the constraints of the existing ones. This leads to a
lot of code that encodes business logic in itself (e.g., several
different ways and layers of indirection to think about allocation
ratios) rather than working within strong and constraining
contracts.


From my standpoint there isn't much to talk about here from a
placement code standpoint. We should clearly document the functional
contract (and stick to it) and we should come up with exemplars
for how to make the best use of traits.

I think this conversation could allow us to find those examples.

I don't, however, want placement to be a traffic officer for how
people do things. In the context of the orchestration between nova
and ironic and how that interaction happens, nova has every right to
set some guidelines if it needs to.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Chris Dent

On Fri, 28 Sep 2018, melanie witt wrote:

I'm concerned about a lot of repetition here and maintenance headache for 
operators. That's where the thoughts about whether we should provide 
something like a key-value construct to API callers come in, where they can 
instead say:


* OWNER=CINDER
* RAID=10
* NUMA_CELL=0

for each resource provider.

If I'm off base with my example, please let me know. I'm not a placement 
expert.


Anyway, I hope that gives an idea of what I'm thinking about in this 
discussion. I agree we need to pick a direction and go with it. I'm just 
trying to look out for the experience operators are going to be using this 
and maintaining it in their deployments.


Despite saying "let's never do this" with regard to having formal
support for key/values in placement, if we did choose to do it (if
that's what we chose, I'd live with it), when would we do it? We
have a very long backlog of features that are not yet done. I
believe (I hope obviously) that we will be able to accelerate
placement's velocity with it being extracted, but that won't be
enough to suddenly be able to do quickly do all the things we have
on the plate.

Are we going to make people wait for some unknown amount of time,
in the meantime? While there is a grammar that could do some of
these things?

Unless additional resources come on the scene I don't think it is
either feasible or reasonable for us to consider doing any model
extending at this time (irrespective of the merit of the idea).

In some kind of weird belief way I'd really prefer we keep the
grammar placement exposes simple, because my experience with HTTP
APIs strongly suggests that's very important, and that experience is
effectively why I am here, but I have no interest in being a
fundamentalist about it. We should argue about it strongly to make
sure we get the right result, but it's not a huge deal either way.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Chris Dent
em, and instead had
some kind of k8s-like etcd-like
keeper-of-all-the-info-about-everything, then sure, having what we
currently model as resource providers be a giant blob of metadata
(with quantities, qualitiies, and key-values) that is an authority
for the entire system might make some kind of sense.

But we don't. If we wanted to migrate to having something like that,
using placement as the trojan horse for such a change, either with
intent or by accident, would be unfortunate.

Propose such a thing and I'll gladly support it. But I won't support 
bastardizing the simple concept of a boolean capability just because we don't 
want to change the API or database schema.


For me, it is not a matter of not wanting to change the API or the
database schema. It's about not wanting to expand the concepts, and
thus the purpose, of the system. It's about wanting to keep focus
and functionality narrow so we can have a target which is "maturity"
and know when we're there.

My summary: Traits are symbols, at most 255 characters long, that are
associated with a resource provider. It's possible to query for
resource providers that have or do not have a specific trait. This
has the effect of making the meaning of a trait a descriptor of the
resource provider. What the descriptor signifies is up to the thing
creating and using the resource provider, not placement. We need to
harden that contract and stick to it. Placement is like a common
carrier, it doesn't care what's in the box.

/me cues brad pitt
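
To make that contract concrete, the only kind of trait question
placement will answer looks like this (endpoint and token invented;
the "!" forbidden-trait syntax assumes microversion 1.22 or later):

    import requests

    resp = requests.get(
        'http://placement.example.com/resource_providers',
        # "has STORAGE_DISK_SSD and does not have CUSTOM_FOO": pure
        # presence/absence, never substrings, values, or regexes.
        params={'required': 'STORAGE_DISK_SSD,!CUSTOM_FOO'},
        headers={'X-Auth-Token': 'ADMIN_TOKEN',     # assumed auth
                 'OpenStack-API-Version': 'placement 1.22'})
    resp.raise_for_status()
    for rp in resp.json()['resource_providers']:
        print(rp['uuid'], rp['name'])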

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] update 18-39

2018-09-28 Thread Chris Dent
* We don't want to be publishing docs until we are
  publishing code, but the interdependencies need to be teased out.

* We need to decide how we are going to manage database schema
  migrations (alembic is the modern way) and we need to create the
  tooling for running those migrations (as well as upgrade checks);
  see the sketch below. This includes deciding how we want to manage
  command line tools (using nova's example or something else).

Until those things happen we don't have a "thing" which people can
install and run, unless they do some extra hacking around which we
don't want to impose upon people any longer than necessary.
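
To give a flavour of how small the alembic piece could be, a minimal
sketch (the config path and the db_sync name are assumptions, not a
decision):

    import sys

    from alembic import command
    from alembic.config import Config

    def db_sync(alembic_ini='alembic.ini'):
        # Apply every unapplied schema migration up to the newest
        # revision; upgrade checks would live alongside this.
        command.upgrade(Config(alembic_ini), 'head')

    if __name__ == '__main__':
        db_sync(sys.argv[1] if len(sys.argv) > 1 else 'alembic.ini')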

# Other

As with last time, I'm not going to make a list of links to pending
changes that aren't already listed above. I'll start doing that again
eventually (once priorities are more clear), but for now it is
useful to look at [open placement
patches](https://review.openstack.org/#/q/project:openstack/placement+status:open)
and patches from everywhere which [mention placement in the commit
message](https://review.openstack.org/#/q/message:placement+status:open).

# End

Taking a few days off is a great way to get out of sync.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Chris Dent

On Fri, 28 Sep 2018, Matthew Treinish wrote:


http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683


Right above this line it shows that the gabbi-tempest plugin is installed in
the venv:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661


Ah, so it is, thanks. My grepping and visual-grepping failed
because of the weird linebreaks. Le sigh.

For curiosity: What's the processing that is making it be installed
twice? I ask because I'm hoping to (eventually) trim this to as
small and light as possible. And then even more eventually I hope to
make it so that if a project chooses the right job and has a gabbits
directory, they'll get run.

The part that was confusing for me was that the virtual env that
lib/tempest (from devstack) uses is not even mentioned in tempest's
tox.ini, so it is using its own directory as far as I could tell.


My guess is that the plugin isn't returning any tests that match the regex.


I'm going to run it without a regex and see what it produces.

It might be that the pre job I'm using to try to get the gabbits in the
right place is not working as desired.

A few patchsets ago when I was using the oogly way of doing things
it was all working.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Chris Dent


I'm still trying to figure out how to properly create a "modern" (as
in zuul v3 oriented) integration test for placement using gabbi and
tempest. That work is happening at https://review.openstack.org/#/c/601614/

There was lots of progress made after the last message on this
topic 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
but I've reached another interesting impasse.


From devstack's standpoint, the way to say "I want to use a tempest
plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are.
devstack:lib/tempest then does a:

tox -evenv-tempest -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163

I have this part working as expected.

However,

The advice is then to create a new job that has a parent of
devstack-tempest. That zuul job runs a variety of tox environments,
depending on the setting of the `tox_envlist` var. If you wish to
use a `tempest_test_regex` (I do) the preferred tox environment is
'all'.

That venv doesn't have the plugin installed, thus no gabbi tests are
found:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683

How do I get my plugin installed into the right venv while still
following the guidelines for good zuul behavior?

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [k8s][tc] List of OpenStack and K8s Community Updates

2018-09-27 Thread Chris Hoge
In the last year the SIG-K8s/SIG-OpenStack group has facilitated quite
a bit of discussion between the OpenStack and Kubernetes communities.
In doing this work we've delivered a number of presentations and held
several working sessions. I've created an etherpad that contains links
to these documents as a reference to the work and the progress we've
made. I'll continue to keep the document updated, and if I've missed
any links please feel free to add them.

https://etherpad.openstack.org/p/k8s-openstack-updates

-Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] Tetsuro Nakamura now core

2018-09-27 Thread Chris Dent


Since there were no objections and a week has passed, I've made
Tetsuro a member of placement-core.

Thanks for your willingness and continued help. Use your powers
wisely.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] ops meetup team meeting 2018-9-25 (minutes)

2018-09-26 Thread Chris Morgan
There was an ops meetups team meeting yesterday on #openstack-operators.
Minutes linked below.

Please note that submissions for the forum in Berlin this November close
today. If you were thinking of adding to the planning etherpad for
Ops-related sessions, it's too late for that now; please go directly to the
official submission tool:

https://www.openstack.org/summit-login/login?BackURL=%2Fsummit%2Fberlin-2018%2Fcall-for-presentations

Meeting ended Tue Sep 25 14:51:06 2018 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot . (v 0.1.4)
Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-09-25-14.00.log.html

Chris
-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [storyboard] why use different "bug" tags per project?

2018-09-26 Thread Chris Friesen

Hi,

At the PTG, it was suggested that each project should tag their bugs 
with "-bug" to avoid tags being "leaked" across projects, or 
something like that.


Could someone elaborate on why this was recommended?  It seems to me 
that it'd be better for all projects to just use the "bug" tag for 
consistency.


If you want to get all bugs in a specific project it would be pretty 
easy to search for stories with a tag of "bug" and a project of "X".
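
For what it's worth, a sketch of that search against StoryBoard's
REST API (treat the filter parameter names as assumptions about the
API, and the project id is made up):

    import requests

    resp = requests.get(
        'https://storyboard.openstack.org/api/v1/stories',
        params={'tags': 'bug', 'project_id': 1234})  # assumed filters
    resp.raise_for_status()
    for story in resp.json():
        print(story['id'], story['title'])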


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] ops meetups team meeting in 30 minutes

2018-09-25 Thread Chris Morgan
Oops my mistake, it's in almost an hour from now, sorry

On Tue, Sep 25, 2018 at 9:00 AM Chris Morgan  wrote:

> Hey All,
>   The Ops Meetups team meeting is in 30 minutes on #openstack-operators
>
>   Forum submissions for the Denver summit are due TODAY, please see the
> links on today's agenda here :
> https://etherpad.openstack.org/p/ops-meetups-team
>
> Chris
>
> --
> Chris Morgan 
>


-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ops meetups team meeting in 30 minutes

2018-09-25 Thread Chris Morgan
Hey All,
  The Ops Meetups team meeting is in 30 minutes on #openstack-operators

  Forum submissions for the Denver summit are due TODAY, please see the
links on today's agenda here :
https://etherpad.openstack.org/p/ops-meetups-team

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [placement] update 18-38

2018-09-21 Thread Chris Dent
* <https://review.openstack.org/#/c/552105/>
  Support initial allocation ratios
  (There are at least two pending allocation ratio handling cleanup
  specs. It's not clear from the PTG etherpad which of these was
  chosen as the future (we did choose, but the etherpad is
  confusing). 544683 (above) is the other one.)

* <https://review.openstack.org/#/c/569011/>
  Count quota based on resource class

# Main Themes

These are interim themes while we work out what priorities are.

## Making Nested Useful

An acknowledged outcome from the PTG was that we need to do the work
to make workloads that want to use nested resource providers
actually able to land on a host somewhere. This involves work across
many parts of nova and could easily lead to a mass of bug fixes in
placement. I'm probably missing a fair bit but the following topics
are good starting points:

* <https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates>
* <https://review.openstack.org/#/q/topic:use-nested-allocation-candidates>
* <https://review.openstack.org/#/q/topic:bug/1792503>

## Consumer Generations

gibi is still working hard to drive home support for consumer
generations on the nova side. Because of some dependency management
that stuff is currently in the following topic:

* <https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates>

## Extraction

As mentioned above, getting the extracted placement happy is
proceeding apace. Besides many of the generic cleanups happening [to
the
repo](https://review.openstack.org/#/q/project:openstack/placement+status:open)
we need to focus some effort on upgrade and integration testing,
docs publishing, and doc correctness.

Dan has started a [database migration
script](https://review.openstack.org/#/c/603234/) which will be used
by deployers and grenade for upgrades. Matt is hoping to make some
progress on the grenade side of things. I have a [hacked up
devstack](https://review.openstack.org/#/c/600162/) for using the
extracted placement.

All of this is dependent on:

* database migrations being "collapsed"
* the existence of a `placement-manage` script to initialize the
  database

I made a faked up
[placement-manage](https://review.openstack.org/#/c/600161/) for the
devstack patch above, but it only creates tables, doesn't migrate,
and is not fit for purpose as a generic CLI.

I have started [some
experiments](https://review.openstack.org/#/c/601614/) on using
[gabbi-tempest](https://pypi.org/project/gabbi-tempest/) to drive
some integration tests for placement with solely gabbi YAML files. I
initially did this using "legacy" style zuul jobs, and made it work,
but it was ugly and I've since started using more modern zuul, but
haven't yet made it work.

# Other

As with last time, I'm not going to make a list of links to pending
changes that aren't already listed above. I'll start doing that again
eventually (once priorities are more clear), but for now it is
useful to look at [open placement
patches](https://review.openstack.org/#/q/project:openstack/placement+status:open)
and patches from everywhere which [mention placement in the commit
message](https://review.openstack.org/#/q/message:placement+status:open).

# End

In case anyone is wondering where I am, I'm out M-W next week.
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Denver Ops Meetup post-mortem

2018-09-20 Thread Chris Morgan
The issue I ran into with IRC was a bit more obscure. "real IRC" is
entirely blocked from all networks provided to me by my employer (even the
office wifi).

The web interface I was using (irccloud) didn't work for nickname
registration either.

When trying real (non-web-wrapped) IRC from my laptop via an LTE hotspot it
also failed. We eventually worked out that it's because Freenode has
blacklisted large IP ranges including my AT&T service.

Can't connect unless authenticated, can't register nickname for auth
because not connected.

The answer in that case is to register the nickname on
http://webchat.freenode.net

This "chicken and egg" problem is explained here:
https://superuser.com/questions/1220409/irc-how-to-register-on-freenode-using-hexchat-when-i-get-disconnected-immediat

Chris

On Thu, Sep 20, 2018 at 12:18 AM Kendall Nelson 
wrote:

> Hello!
>
> On Tue, Sep 18, 2018 at 12:36 PM Chris Morgan  wrote:
>
>>
>>
>> ------ Forwarded message -
>> From: Chris Morgan 
>> Date: Tue, Sep 18, 2018 at 2:13 PM
>> Subject: Denver Ops Meetup post-mortem
>> To: OpenStack Operators 
>>
>>
>>  Hello All,
>>   Last week we had a successful Ops Meetup embedded in the OpenStack
>> Project Team Gathering in Denver.
>>
>> Despite generally being a useful gathering, there were definitely lessons
>> learned and things to work on, so I thought it would be useful to share a
>> post-mortem. I encourage everyone to share their thoughts on this as well.
>>
>> What went well:
>>
>> - some of the sessions were great and a lot of progress was made
>> - overall attendance in the ops room was good
>> - more developers were able to join the discussions
>> - facilities were generally fine
>> - some operators leveraged being at PTG to have useful involvement in
>> other sessions/discussions such as Keystone, User Committee, Self-Healing
>> SIG, not to mention the usual "hallway conversations", and similarly some
>> project devs were able to bring pressing questions directly to operators.
>>
>> What didn't go so well:
>>
>> - Merging into upgrade SIG didn't go particularly well
>> - fewer ops attended (in particular there were fewer from outside the US)
>> - Some of the proposed sessions were not well vetted
>> - some ops who did attend stated the event identity was diluted, it was
>> less attractive
>> - we tried to adjust the day 2 schedule to include late submissions,
>> however it was probably too late in some cases
>>
>> I don't think it's so important to drill down into all the whys and
>> wherefores of how we fell down here except to say that the ops meetups team
>> is a small bunch of volunteers all with day jobs (presumably just like
>> everyone else on this mailing list). The usual, basically.
>>
>> Much more important : what will be done to improve things going forward:
>>
>> - The User Committee has offered to get involved with the technical
>> content. In particular to bring forward topics from other relevant events
>> into the ops meetup planning process, and then take output from ops meetups
>> forward to subsequent events. We (ops meetup team) have welcomed this.
>>
>> - The Ops Meetups Team will endeavor to start topic selection earlier and
>> have a more critical approach. Having a longer list of possible sessions
>> (when starting with material from earlier events) should make it at least
>> possible to devise a better agenda. Agenda quality drives attendance to
>> some extent and so can ensure a virtuous circle.
>>
>> - We need to work out whether we're doing fixed schedule events (similar
>> to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but
>> grafting one onto the other ad-hoc clearly is a terrible idea. This needs
>> more discussion.
>>
>> - The Ops Meetups Team continues to explore strange new worlds, or at
>> least get in touch with more and more OpenStack operators to find out what
>> the meetups team and these events could do for them and hence drive the
>> process better. One specific work item here is to help the (widely
>> disparate) operator community with technical issues such as getting setup
>> with the openstack git/gerrit and IRC. The latter is the preferred way for
>> the community to meet, but is particularly difficult now with the
>> registered nickname requirement. We will add help documentation on how to
>> get over this hurdle.
>>
>
> After you get onto freenode at IRC you can register your nickname with a
> single command and then you should be able to join any of the channels. The
> c

[openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Chris Dent



I'd like to nominate Tetsuro Nakamura for membership in the
placement-core team. Throughout placement's development Tetsuro has
provided quality reviews; done the hard work of creating rigorous
functional tests, making them fail, and fixing them; and implemented
some of the complex functionality required at the persistence layer.
He's aware of and respects the overarching goals of placement and has
demonstrated pragmatism when balancing those goals against the
requirements of nova, blazar and other projects.

Please follow up with a +1/-1 to express your preference. No need to
be an existing placement core, everyone with an interest is welcome.

Thanks.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Chris Dent


I have a patch in progress to add some simple integration tests to
placement:

https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
  "legacy" that I was able to find in nova's jobs. I get the
  impression that these are more verbose and duplicative than they
  need to be and are not aligned with modern zuul v3 coolness.

* It takes an age for the underlying devstack to build. I can
  presumably save some time by installing fewer services and by making
  it obvious how to add more when they are required. What's the
  canonical way to do this? Mess with {enable,disable}_service, cook
  the ENABLED_SERVICES var, do something with required_projects?

* This patch, and the one that follows it [1] dynamically install
  stuff from pypi in the post test hooks, simply because that was
  the quick and dirty way to get those libs in the environment.
  What's the clean and proper way? gabbi-tempest itself needs to be
  in the tempest virtualenv.

* The post.yaml playbook which gathers up logs seems like a common
  thing, so I would hope it could be DRYed up a bit. What's the best
  way to do that?

Thanks very much for any input.

[1] perf logging of a loaded placement: https://review.openstack.org/#/c/602484/

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting

2018-09-19 Thread Chris Dent

On Tue, 18 Sep 2018, Doug Hellmann wrote:


[Redirecting this from the openstack-tc list to the -dev list.]
Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500:

UC is proposing a joint UC/TC meeting at the end of the month, say starting
after Berlin, to work more closely together. The last Monday of the month at
1pm US Central time is current proposal, throwing it out here now for
feedback/discussion, so that would make the first one Monday, November
26th, 2018.


I agree that the UC and TC should work more closely together. If the
best way to do that is to have a meeting then great, let's do it.
Were you thinking IRC or something else?

But we probably need to resolve our ambivalence towards meetings. On
Sunday at the PTG we discussed maybe going back to having a TC
meeting but didn't really decide (at least as far as I recall) and
didn't discuss in too much depth the reasons why we killed meetings
in the first place. How would this meeting be different?

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Denver Ops Meetup post-mortem

2018-09-18 Thread Chris Morgan
-- Forwarded message -
From: Chris Morgan 
Date: Tue, Sep 18, 2018 at 2:13 PM
Subject: Denver Ops Meetup post-mortem
To: OpenStack Operators 


 Hello All,
  Last week we had a successful Ops Meetup embedded in the OpenStack
Project Team Gathering in Denver.

Despite generally being a useful gathering, there were definitely lessons
learned and things to work on, so I thought it would be useful to share a
post-mortem. I encourage everyone to share their thoughts on this as well.

What went well:

- some of the sessions were great and a lot of progress was made
- overall attendance in the ops room was good
- more developers were able to join the discussions
- facilities were generally fine
- some operators leveraged being at PTG to have useful involvement in other
sessions/discussions such as Keystone, User Committee, Self-Healing SIG,
not to mention the usual "hallway conversations", and similarly some
project devs were able to bring pressing questions directly to operators.

What didn't go so well:

- Merging into upgrade SIG didn't go particularly well
- fewer ops attended (in particular there were fewer from outside the US)
- Some of the proposed sessions were not well vetted
- some ops who did attend stated the event identity was diluted, it was
less attractive
- we tried to adjust the day 2 schedule to include late submissions,
however it was probably too late in some cases

I don't think it's so important to drill down into all the whys and
wherefores of how we fell down here except to say that the ops meetups team
is a small bunch of volunteers all with day jobs (presumably just like
everyone else on this mailing list). The usual, basically.

Much more important : what will be done to improve things going forward:

- The User Committee has offered to get involved with the technical
content. In particular to bring forward topics from other relevant events
into the ops meetup planning process, and then take output from ops meetups
forward to subsequent events. We (ops meetup team) have welcomed this.

- The Ops Meetups Team will endeavor to start topic selection earlier and
have a more critical approach. Having a longer list of possible sessions
(when starting with material from earlier events) should make it at least
possible to devise a better agenda. Agenda quality drives attendance to
some extent and so can ensure a virtuous circle.

- We need to work out whether we're doing fixed schedule events (similar to
previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but
grafting one onto the other ad-hoc clearly is a terrible idea. This needs
more discussion.

- The Ops Meetups Team continues to explore strange new worlds, or at least
get in touch with more and more OpenStack operators to find out what the
meetups team and these events could do for them and hence drive the process
better. One specific work item here is to help the (widely disparate)
operator community with technical issues such as getting setup with the
openstack git/gerrit and IRC. The latter is the preferred way for the
community to meet, but is particularly difficult now with the registered
nickname requirement. We will add help documentation on how to get over
this hurdle.

- YOUR SUGGESTION HERE

Chris

-- 
Chris Morgan 


-- 
Chris Morgan 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Denver Ops Meetup post-mortem

2018-09-18 Thread Chris Morgan
 Hello All,
  Last week we had a successful Ops Meetup embedded in the OpenStack
Project Team Gathering in Denver.

Despite generally being a useful gathering, there were definitely lessons
learned and things to work on, so I thought it would be useful to share a
post-mortem. I encourage everyone to share their thoughts on this as well.

What went well:

- some of the sessions were great and a lot of progress was made
- overall attendance in the ops room was good
- more developers were able to join the discussions
- facilities were generally fine
- some operators leveraged being at PTG to have useful involvement in other
sessions/discussions such as Keystone, User Committee, Self-Healing SIG,
not to mention the usual "hallway conversations", and similarly some
project devs were able to bring pressing questions directly to operators.

What didn't go so well:

- Merging into upgrade SIG didn't go particularly well
- fewer ops attended (in particular there were fewer from outside the US)
- Some of the proposed sessions were not well vetted
- some ops who did attend stated the event identity was diluted, it was
less attractive
- we tried to adjust the day 2 schedule to include late submissions,
however it was probably too late in some cases

I don't think it's so important to drill down into all the whys and
wherefores of how we fell down here except to say that the ops meetups team
is a small bunch of volunteers all with day jobs (presumably just like
everyone else on this mailing list). The usual, basically.

Much more important : what will be done to improve things going forward:

- The User Committee has offered to get involved with the technical
content. In particular to bring forward topics from other relevant events
into the ops meetup planning process, and then take output from ops meetups
forward to subsequent events. We (ops meetup team) have welcomed this.

- The Ops Meetups Team will endeavor to start topic selection earlier and
have a more critical approach. Having a longer list of possible sessions
(when starting with material from earlier events) should make it at least
possible to devise a better agenda. Agenda quality drives attendance to
some extent and so can ensure a virtuous circle.

- We need to work out whether we're doing fixed schedule events (similar to
previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but
grafting one onto the other ad-hoc clearly is a terrible idea. This needs
more discussion.

- The Ops Meetups Team continues to explore strange new worlds, or at least
get in touch with more and more OpenStack operators to find out what the
meetups team and these events could do for them and hence drive the process
better. One specific work item here is to help the (widely disparate)
operator community with technical issues such as getting setup with the
openstack git/gerrit and IRC. The latter is the preferred way for the
community to meet, but is particularly difficult now with the registered
nickname requirement. We will add help documentation on how to get over
this hurdle.

- YOUR SUGGESTION HERE

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tc] [all] TC Report 18-38

2018-09-18 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-38.html

Rather than writing a TC Report this week, I've written a report on
the [OpenStack Stein
PTG](https://anticdent.org/openstack-stein-ptg.html).

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] OpenStack Ops Meetups team meeting in ~40 minutes

2018-09-18 Thread Chris Morgan
Calendar link http://eavesdrop.openstack.org/calendars/ops-meetup-team.ics

Join us on #openstack-operators to discuss last week's embedded ops meetup
at the Denver PTG, the upcoming Forum at the Summit in Berlin this November
and possible meetups in 2019.

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [openstack][infra]Including Functional Tests in Coverage

2018-09-13 Thread Chris Dent

On Wed, 12 Sep 2018, Michael Johnson wrote:


We do this in Octavia. The openstack-tox-cover job calls the cover
environment in tox.ini, so you can add it there.


We've got this in progress for placement as well:

https://review.openstack.org/#/c/600501/
https://review.openstack.org/#/c/600502/

It works well and is pretty critical in placement because most of
the "important" tests are functional.
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][python3] mixed versions?

2018-09-12 Thread Chris Friesen

On 9/12/2018 12:04 PM, Doug Hellmann wrote:


This came up in a Vancouver summit session (the python3 one I think). General 
consensus there seemed to be that we should have grenade jobs that run python2 
on the old side and python3 on the new side and test the update from one to 
another through a release that way. Additionally there was thought that the 
nova partial job (and similar grenade jobs) could hold the non upgraded node on 
python2 and that would talk to a python3 control plane.

I haven't seen or heard of anyone working on this yet though.

Clark



IIRC, we also talked about not supporting multiple versions of
python on a given node, so all of the services on a node would need
to be upgraded together.


As I understand it, the various services talk to each other using 
over-the-wire protocols.  Assuming this is correct, why would we need to 
ensure they are using the same python version?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team

2018-09-11 Thread Chris MacNaughton
+1 Felipe has been a solid contributor to the OpenStack Charms for some 
time now.


Chris


On 11-09-18 23:07, Ryan Beisner wrote:

+1  I'm always happy to see Felipe's contributions and fixes come through.

Cheers!

Ryan




On Tue, Sep 11, 2018 at 1:10 PM James Page <james.p...@canonical.com> wrote:


+1

On Wed, 5 Sep 2018 at 15:48 Billy Olsen <billy.ol...@gmail.com> wrote:

Hi,

I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
a core member. Over the past couple of years Felipe has contributed
numerous patches and reviews to the OpenStack charms [0]. His experience
and knowledge of the charms used in OpenStack and the usage of Juju make
him a great candidate.

[0] -

https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22

Thanks,

Billy Olsen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Finishing off feedback and Berlin planning?

2018-09-11 Thread Chris Morgan
For those of us still at the PTG, we have a bit more to usefully discuss
about this PTG, Berlin Forum topics etc. Perhaps we can use the same room
(Aspen) tomorrow (Wednesday) and get a bit more done? We have the room,
just no projector.

If you can join on Wednesday, what time works? Shintaro will leave after
Wednesday and anyone remotely near North or South Carolina may well also
want to get out, understandably. There's a couple of items Lance Bragstad
wants to go over at 9.30 and then some ops will be heading to the UC
meeting. So maybe something after lunch?

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] revamped ops meetup day 2

2018-09-10 Thread Chris Morgan
Hi All,
  We (ops meetups team) got several additional suggestions for ops meetup
sessions, so we've attempted to revamp day 2 to fit them in; please see

https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=981527336

Given the timing, we'll attempt to confirm the rest of the day starting at
9am over coffee. If you're moderating something tomorrow please check out
the adjusted times. If something doesn't work for you we'll try and swap
sessions to make it work.

Cheers
Chris, Erik, Sean

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [k8s] SIG-K8s PTG Meetings, Monday September 10, 2018

2018-09-09 Thread Chris Hoge
SIG-K8s has space reserved in Ballroom A for all of Monday, September 10
at the PTG. We will begin at 9:00 with a planning session, similar to that
in Dublin, where we will organize topics and times for the remainder of
the day.

The planning etherpad can be found here: 
https://etherpad.openstack.org/p/sig-k8s-2018-denver-ptg
The link to the Dublin planning etherpad: 
https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg

Thanks,
Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] update 18-36

2018-09-07 Thread Chris Dent
is currently small enough that looking at [all
open 
patches](https://review.openstack.org/#/q/project:openstack/placement+status:open)
isn't too overwhelming.

Because of all the recent work with extraction, and because the
PTG is next week, I'm not up to date on what patches related to
placement are in need of review. In the meantime if you want to
go looking around, [anything with 'placement' in the commit
message](https://review.openstack.org/#/q/message:placement+status:open)
is fun.

Next time I'll provide more detail.

# End

Thanks to everyone for getting placement this far.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Draft Ops Meetup schedule for Denver PTG

2018-09-06 Thread Chris Morgan
Hello Everyone,
  The Ops Meetups team is happy to announce we've put together a schedule
for the ops meetup days at next week's OpenStack PTG, please see the
attached PDF. Not all moderators are confirmed and the schedule is subject
to further change for other reasons, so if you have feedback please share
in this email thread.

After working hard all Monday, a bunch of operators and other openstack
folk are considering venturing to the Wynkoop Brewing Co. for refreshments
and perhaps a game of pool. This is not currently sponsored, but should
still be a fun outing.

See you in Denver

Chris
-- 
Chris Morgan 


Ops Meetup Planning (PHL, YVR, PAO, TYO, MAN, AUS, NYC, BCN, MIL, MEC, DEN) - Denver.pdf
Description: Adobe PDF document
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [nova] [placement] modified devstack using openstack/placement

2018-09-06 Thread Chris Dent


Yesterday I experimented to discover the changes needed in devstack
to get it working with the code in openstack/placement. The results
are at

https://review.openstack.org/#/c/600162/

and it is passing tempest. It isn't passing grenade but that's
expected at this stage.

Firstly, thanks to everyone who helped this week to create and merge
a bunch of placement code to get the repo working. Waking up this
morning to see a green tempest was rather nice.

Secondly, the work exposes a few gaps, as expected, most of which are
already known. If you're not interested in the details, here's a
good place to stop reading, but if you are, see below. This is
mostly notes, for sake of sharing information, not a plan. Please
help me make a plan.

1) To work around the fact that there is currently no
"placement-manage db_sync" equivalent I needed to hack up something
to make sure the database tables exist. So I faked a
"placmeent-manage db table_create". That's in

https://review.openstack.org/#/c/600161/

That uses sqlalchemy's 'create_all' functionality to create the
tables from their Models, rather than using any migrations. I did it
this way for two reasons: 1) I already had code for it in placedock[1]
that I could copy, 2) I wanted to set aside migrations for the
immediate tests.

We'll need to come back to that, because the lack of dealing with
already existing tables is _part_ of what is blocking grenade.
However, for new installs 'create_all' is fast and correct and
something we might want to keep.
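
A minimal sketch of the idea (module path and BASE name are
assumptions, not the actual patch):

```
# Build the schema straight from the declarative models; create_all
# emits CREATE TABLE for anything missing and consults no migration
# history. Module path and BASE name are assumptions.
import sqlalchemy as sa

from placement.db.sqlalchemy import models  # assumed layout

def table_create(db_url):
    engine = sa.create_engine(db_url)
    models.BASE.metadata.create_all(engine)
```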

2) The grenade jobs don't have 'placement' in $PROJECTS so die
during upgrade.

3) The nova upgrade.sh will need some adjustments to do the data
migrations we've talked about over the "(technical)" thread. Also
we'll need to decide how much of the placement stuff stays in there
and how much goes somewhere else.

That's all stuff we can work out, especially if some
grenade-oriented people join in the fun.

One question I have on the lib/placement changes in devstack: Is it
useful to guard those changes with a conditional of the
form:

   if placement came from its own repo:
   do the new stuff
   else:
   do the old stuff

?


[1] https://github.com/cdent/placedock
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] microversion-parse core updates

2018-09-05 Thread Chris Dent


After some discussion with other cores I've made some adjustments to
the core team on microversion-parse [1]

* added dtantsur (welcome!)
* removed sdague

In case you're not aware, microversion-parse is middleware and
utilities for managing microversions in openstack service apis.

[1] https://pypi.org/project/microversion_parse/
http://git.openstack.org/cgit/openstack/microversion-parse
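
A hedged usage sketch (consult the repo above for the authoritative
signatures):

```
# Typical use: pull the requested microversion out of request headers.
import microversion_parse

headers = {'openstack-api-version': 'placement 1.10'}
version = microversion_parse.get_version(headers,
                                         service_type='placement')
# version is '1.10'; with no matching header it would be None.
```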

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-09-04 Thread Chris Dent

On Tue, 4 Sep 2018, Eric Fried wrote:


030 is okay as long as nothing goes wrong. If something does it
raises exceptions which would currently fail as the exceptions are
not there. See below for more about exceptions.


Maybe I'm misunderstanding what these migration thingies are supposed to
be doing, but 030 [1] seems like it's totally not applicable to
placement and should be removed. The placement database doesn't (and
shouldn't) have 'flavors', 'cell_mappings', or 'host_mappings' tables in
the first place.

What am I missing?


Nothing, as far as I can tell, but as we hadn't had a clear
plan about how to proceed with the trimming of migrations, I've been
trying to point out where they form little speed bumps as we've
gone through this process and carried them with us. And tried to
annotate where they may present some more, until we trim them.

There are numerous limits to my expertise, and the db migrations is
one of several areas where I decided I wasn't going to hold the ball,
I'd just get us to the game and hope other people would find and
fill in the blanks. That seems to be working okay, so far.


* Presumably we can trim the placement DB migrations to just stuff
  that is relevant to placement


Yah, I would hope so. What possible reason could there be to do otherwise?


Mel's plans looks good to me.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-36

2018-09-04 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-36.html

It's been a rather busy day, so this TC Report will be a quick
update of some discussions that have happened in the past week.

# PEP 8002

With Guido van Rossum stepping back from his role as the BDFL of
Python, there's work in progress to review different methods of
governance used in other communities to come up with some ideas for
the future of Python. Those reviews are being gathered in PEP 8002.
Doug Hellman has been helping with those conversations and asked for
[input on a 
draft](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-28.log.html#t2018-08-28T20:40:41).

There was some good conversation, especially the bits about the
differences between ["direct democracy" and whatever what we do here
in 
OpenStack](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T11:00:50).

The result of the draft was quickly merged into
[PEP 8002](https://www.python.org/dev/peps/pep-8002/).

# Summit Sessions

There was discussion about concerns some people experience with some
[summit sessions feeling like 
advertising](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-29.log.html#t2018-08-29T18:21:08).

# PTG Coming Soon

The PTG is next week! TC sessions are described on [this
etherpad](https://etherpad.openstack.org/p/tc-stein-ptg).

# Elections Reminder

TC [election season](https://governance.openstack.org/election/) is
right now. Nomination period ends at the end of the day (UTC) 6th of
September so there isn't much time left. If you're toying with the idea,
nominate yourself, the community wants your input. If you have any
questions please feel free to ask.
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement

2018-09-04 Thread Chris Dent

On Tue, 4 Sep 2018, Jay Pipes wrote:

I wasn't in YVR, which explains why I'd never heard of it. There's a number
of misconceptions in the above document about the placement service that
don't seem to have been addressed. I'm wondering if it's worth revisiting the
topic in Denver with the Cinder team or whether the Cinder team isn't
interested in working with the placement service?


It was also discussed as part of the reshaper spec and implemented
for future use by a potential fast forward upgrade tool:


http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html#direct-placement

https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/placement/direct.py

I agree, talking to Cinder some more in Denver about use of
placement, either over HTTP or direct, whatever form, is good.

But I don't think any of that should impact the naming situation.
It's placement now, and placement is not really any less unique than
a lot of the other words we use, the direct situation is a very
special and edge case (likely in containers anyway, so naming not as
much of a big deal). Changing the name, again, is painful. Please,
let's not do it.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement

2018-09-04 Thread Chris Dent

On Tue, 4 Sep 2018, Jay Pipes wrote:

Either one works for me. Though I'm pretty sure that it isn't necessary. The 
reason it isn't necessary is because the stuff in the top-level placement 
package isn't meant to be imported by anything at all. It's the placement 
server code.


Yes.

If some part of the server repo is meant to be imported into some other 
system, say nova, then it will be pulled into a separate lib, a la
ironic-lib or neutron-lib.


Also yes.

At this stage I _really_ don't want to go through the trouble of
doing a second rename: we're in the process of finishing a rename
now.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement

2018-09-04 Thread Chris Dent

On Tue, 4 Sep 2018, Jay Pipes wrote:


Is there a reason we couldn't have openstack-placement be the package name?


I would hope we'd be able to do that, and probably should do that.
'openstack-placement' seems a fine pypi package name for a thing
from which you do 'import placement' to do some openstack stuff,
yeah?

Last I checked the concept of the package name is sort of put off
until we have passing tests, but we're nearly there on that.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement (was:Nominating Chris Dent for placement-core)

2018-09-04 Thread Chris Dent

On Tue, 4 Sep 2018, Thomas Goirand wrote:


Just a nit-pick... It's a shame we call it just placement. It could have
been something like:

foo: OpenStack placement

Just like we have:

nova: OpenStack compute

No? Is it too late?


There was some discussion about this on one of the
extraction-related etherpads [1] and the gist is that while it would
be possible to change it, at this point "placement" is the name
people use and are used to so there would have to be a very good
reason to change it. All the docs and code talk about "placement",
and python package names are already placement.

It used to be the case that the service-oriented projects would have
a project name different from their service-type because that was
cool/fun [2] and it allowed for the possibility that there could be
another project which provided the same service-type. That hasn't
really come to pass and now that we are on the far side of the hype
curve, doesn't really make much sense in terms of focusing energy.

My feeling is that there is already a lot of identity associated
with the term "placement" and changing it would be too disruptive.
Also, I hope that it will operate as a constraint on feature creep.

But if we were to change it, I vote for "katabatic", as a noun, even
though it is an adjective.

[1] https://etherpad.openstack.org/p/placement-extract-stein-copy
That was a copy of the original, which stopped working, but now that
one has stopped working too. I'm going to attempt to reconstruct it
today from copies that people have kept.

[2] For certain values of...

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-09-03 Thread Chris Dent


There's been some progress on the technical side of extracting
placement to it own repo. The summary is:

* https://git.openstack.org/cgit/openstack/placement exists
* https://review.openstack.org/#/c/599416/ is at the top of a
  series of patches. That patch is passing and voting on unit and
  functional for py 2.7 and 3.5 and is passing pep8.

More below, in the steps.

On Tue, 28 Aug 2018, Chris Dent wrote:

On Mon, 27 Aug 2018, melanie witt wrote:
1. We copy the placement code into the openstack/placement repo and have it 
passing all of its own unit and functional tests.


To break that down to more detail, how does this look?
(note the ALL CAPS where more than acknowledgement is requested)

1.1 Run the git filter-branch on a copy of nova
   1.1.1 Add missing files to the file list:
 1.1.1.1 .gitignore
 1.1.1.2 # ANYTHING ELSE?
1.2 Push -f that thing, acknowledge to be broken, to a seed repo on github
   (ed's repo should be fine)
1.3 Do the repo creation bits described in
   https://docs.openstack.org/infra/manual/creators.html
   to seed openstack/placement
   1.3.1 set zuul jobs. Either to noop-jobs, or non voting basic
   func and unit # INPUT DESIRED HERE
1.4 Once the repo exists with some content, incrementally bring it to
   working
   1.4.1 Update tox.ini to be placement oriented
   1.4.2 Update setup.cfg to be placement oriented
   1.4.3 Correct .stestr.conf
   1.4.4 Move base of placement to "right" place
   1.4.5 Move unit and functionals to right place
   1.4.6 Do automated path fixings
   1.4.7 Set up translation domain and i18n.py correctly
   1.4.8 Trim placement/conf to just the conf settings required
 (api, base, database, keystone, paths, placement)
   1.4.9 Remove database files that are not relevant (the db api is
 not used by placement)
   1.4.10 Fix the Database Fixture to be just one database
   1.4.11 Disable migrations that can't work (because of
  dependencies on nova code, 014 and 030 are examples)
  # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL


030 is okay as long as nothing goes wrong. If something does it
raises exceptions which would currently fail as the exceptions are
not there. See below for more about exceptions.


   1.4.12 Incrementally get tests working
   1.4.13 Fix pep8
1.5 Make zuul pep, unit and functional voting


This is where we are now at https://review.openstack.org/#/c/599416/


1.6 Create tools for db table sync/create


I made some TODOs about this in setup.cfg, also noting that in
addition to a placement-manage we'll want a placement-status.


1.7 Concurrently go to step 2, where the harder magic happens.
1.8 Find and remove dead code (there will be some).


Some dead code has been removed, but there will definitely be plenty
more to find.


1.9 Tune up and confirm docs
1.10 Grep for remaining "nova" (as string and spirit) and fix



Item 1.4.12 may deserve some discussion. When I've done this
several times before, the strategy I've used is to be test driven:
run either functional or unit tests, find and fix one of the errors
revealed, commit, move on.


In the patch set that ends with the review linked above, this is
pretty much what I did. Switching between a tox run of the full
suite and using testtools.run to run an individual test file.

2. We have a stack of changes to zuul jobs that show nova working but 
deploying placement in devstack from the new repo instead of nova's repo. 
This includes the grenade job, ensuring that upgrade works.


Do people have the time or info needed to break this step down into
multiple steps like the '1' section above? Things I can think of:

* devstack patch to deploy placement from the new repo
  * and use placement.conf
* stripping of placement out of nova, a bit like
  https://review.openstack.org/#/c/596291/ , unless we leave that
  entirely to step 4
* grenade tweaks (?)
* more

3. When those pass, we merge them, effectively orphaning nova's copy of 
placement. Switch those jobs to voting.


4. Finally, we delete the orphaned code from nova (without needing to make 
any changes to non-placement-only test code -- code is truly orphaned).


Some questions I have:

* Presumably we can trim the placement DB migrations to just stuff
  that is relevant to placement and renumber accordingly?

* Could we also make it so we only run the migrations if we are not
  in a fresh install? In a fresh install we ought to be able to skip
  the migrations entirely and create the tables directly from the
  class models [1] (a rough sketch of that check follows the footnote).

* I had another but I forgot.

[1] I did something similar in placedock for when starting from
scratch:
https://github.com/cdent/placedock/blob/b5ca753a0d97e0d9a324e196349e3a19eb62668b/sync.py#L68-L73
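
A rough sketch of that fresh-install check (illustration only;
module names are assumptions and the migration call is a
placeholder for whatever tooling we settle on):

```
# Create tables directly on an empty database, otherwise fall back
# to the normal migration path. Names here are illustrative.
import sqlalchemy as sa

from placement.db.sqlalchemy import models  # assumed layout

def ensure_schema(engine):
    if not sa.inspect(engine).get_table_names():
        # Fresh install: no tables yet, build them from the models.
        models.BASE.metadata.create_all(engine)
    else:
        run_migrations(engine)  # placeholder for the migration tooling
```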


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent

Re: [Openstack] [openstack-dev] [all] Bringing the community together (combine the lists!)

2018-08-30 Thread Chris Friesen

On 08/30/2018 11:03 AM, Jeremy Stanley wrote:


The proposal is simple: create a new openstack-discuss mailing list
to cover all the above sorts of discussion and stop using the other
four.


Do we want to merge usage and development onto one list?  That could be a busy 
list for someone who's just asking a simple usage question.


Alternately, if we are going to merge everything then why not just use the 
"openstack" mailing list since it already exists and there are references to it 
on the web.


(Or do you want to force people to move to something new to make them recognize 
that something has changed?)


Chris

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] [Openstack-sigs] [all] Bringing the community together (combine the lists!)

2018-08-30 Thread Chris Hoge
I propose that we also merge the interop-wg mailing list,
as the volume on that list is small but topics posted to it are of
general interest to the community.

Chris Hoge
(Interop WG Secretary, amongst other things)

> On Aug 30, 2018, at 10:03 AM, Jeremy Stanley  wrote:
> 
> The openstack, openstack-dev, openstack-sigs and openstack-operators
> mailing lists on lists.openstack.org see an increasing amount of
> cross-posting and thread fragmentation as conversants attempt to
> reach various corners of our community with topics of interest to
> one or more (and sometimes all) of those overlapping groups of
> subscribers. For some time we've been discussing and trying ways to
> bring our developers, distributors, operators and end users together
> into a less isolated, more cohesive community. An option which keeps
> coming up is to combine these different but overlapping mailing
> lists into one single discussion list. As we covered[1] in Vancouver
> at the last Forum there are a lot of potential up-sides:
> 
> 1. People with questions are no longer asking them in a different
> place than many of the people who have the answers to those
> questions (the "not for usage questions" in the openstack-dev ML
> title only serves to drive the wedge between developers and users
> deeper).
> 
> 2. The openstack-sigs mailing list hasn't seen much uptake (an order
> of magnitude fewer subscribers and posts) compared to the other
> three lists, yet it was intended to bridge the communication gap
> between them; combining those lists would have been a better
> solution to the problem than adding yet another turned out to be.
> 
> 3. At least one out of every ten messages to any of these lists is
> cross-posted to one or more of the others, because we have topics
> that span across these divided groups yet nobody is quite sure which
> one is the best venue for them; combining would eliminate the
> fragmented/duplicative/divergent discussion which results from
> participants following up on the different subsets of lists to which
> they're subscribed.
> 
> 4. Half of the people who are actively posting to at least one of
> the four lists subscribe to two or more, and a quarter to three if
> not all four; they would no longer be receiving multiple copies of
> the various cross-posts if these lists were combined.
> 
> The proposal is simple: create a new openstack-discuss mailing list
> to cover all the above sorts of discussion and stop using the other
> four. As the OpenStack ecosystem continues to mature and its
> software and services stabilize, the nature of our discourse is
> changing (becoming increasingly focused with fewer heated debates,
> distilling to a more manageable volume), so this option is looking
> much more attractive than in the past. That's not to say it's quiet
> (we're looking at roughly 40 messages a day across them on average,
> after deduplicating the cross-posts), but we've grown accustomed to
> tagging the subjects of these messages to make it easier for other
> participants to quickly filter topics which are relevant to them and
> so would want a good set of guidelines on how to do so for the
> combined list (a suggested set is already being brainstormed[2]).
> None of this is set in stone of course, and I expect a lot of
> continued discussion across these lists (oh, the irony) while we try
> to settle on a plan, so definitely please follow up with your
> questions, concerns, ideas, et cetera.
> 
> As an aside, some of you have probably also seen me talking about
> experiments I've been doing with Mailman 3... I'm hoping new
> features in its Hyperkitty and Postorius WebUIs make some of this
> easier or more accessible to casual participants (particularly in
> light of the combined list scenario), but none of the plan above
> hinges on MM3 and should be entirely doable with the MM2 version
> we're currently using.
> 
> Also, in case you were wondering, no the irony of cross-posting this
> message to four mailing lists is not lost on me. ;)
> 
> [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community
> [2] https://etherpad.openstack.org/p/common-openstack-ml-topics
> -- 
> Jeremy Stanley
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-08-30 Thread Chris Friesen

On 08/30/2018 08:54 AM, Eugen Block wrote:

Hi Jay,


You need to set your ram_allocation_ratio nova.CONF option to 1.0 if you're
running into OOM issues. This will prevent overcommit of memory on your
compute nodes.


I understand that; the overcommitment works quite well most of the time.

It has just been an issue twice, when I booted an instance that had been shut
down a while ago. In the meantime there were new instances created on that
hypervisor, and this old instance caused the OOM.

I would expect that with a ratio of 1.0 I would experience the same issue,
wouldn't I? As far as I understand the scheduler only checks at instance
creation, not when booting existing instances. Is that a correct assumption?


The system keeps track of how much memory is available and how much has been 
assigned to instances on each compute node.  With a ratio of 1.0 it shouldn't 
let you consume more RAM than is available even if the instances have been shut 
down.
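
A toy version of that accounting (numbers invented):

```
# Toy model of the scheduler's RAM claim check, not nova code.
host_ram_mb = 131072                 # 128 GiB on the compute node
ram_allocation_ratio = 1.0
claimed_mb = 16384 + 32768 + 65536   # all instances, running or stopped
new_flavor_mb = 32768

fits = claimed_mb + new_flavor_mb <= host_ram_mb * ram_allocation_ratio
# fits is False: stopped instances still count against capacity.
```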


Chris

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [all][api] POST /api-sig/news

2018-08-30 Thread Chris Dent


Greetings OpenStack community,

There was nothing specific on the agenda this week, so much of the API-SIG meeting was 
spent discussing API-related topics that we'd encountered recently. One was: K8s Custom 
Resources [9] Cool or Chaos? The answer is, of course, "it depends". Another 
was a recent thread asking about the relevance of Open API 3.0 in the OpenStack 
environment [10]. We had trouble deciding what the desired outcome is, so for now are 
merely tracking the thread.

In the world of guidelines and bugs, not a lot of recent action. Some approved 
changes need to be rebased to actually get published, and the stack about 
version discovery [11] needs to be refreshed and potentially adopted by someone 
who is not Monty. If you're reading, Monty, and have thoughts on that, share 
them.

Next week we will be actively planning [7] for the PTG [8]. We have a room on 
Monday. We always have interesting and fun discussions when we're at the PTG, 
join us.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Add an api-design doc with design advice
  https://review.openstack.org/592003

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-sig,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://storyboard.openstack.org/#!/project/1039
[6] https://git.openstack.org/cgit/openstack/api-sig
[7] https://etherpad.openstack.org/p/api-sig-stein-ptg
[8] https://www.openstack.org/ptg/
[9] 
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[10] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133960.html
[11] https://review.openstack.org/#/c/459405/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Chris Friesen

On 08/29/2018 10:02 AM, Jay Pipes wrote:


Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).


I thought cold migration of instances with such devices was supported upstream?

Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tc] [all] TC Report 18-35

2018-08-28 Thread Chris Dent
directly tied to any specific technical issues (which,
thankfully, are resolving in the short term for placement) but are
from the accumulation and aggregation over time of difficulties and
frustrations associated with unresolved problems in the exercise and
distribution of control and trust, unfinished goals, and unfulfilled
promises. When changes like the placement extraction come up, they
can act as proxies for deep and lingering problems that we have not
developed good systems for resolving.

What we do instead of investigating the deep issues is address the
immediate symptomatic problems in a technical way and try to move
on. People who are not satisfied with this have little recourse.
They can either move elsewhere or attempt to cope. We've lost plenty
of good people as a result. Some of those that choose to stick
around get tetchy.

If you have thoughts and feelings about these (or any other) deep
and systemic issues in OpenStack, anyone in the TC should be happy
to speak with you about them. For best results you should be willing
to speak about your concerns publicly. If for some reason you are
not comfortable doing so, that is itself an issue that needs to be
addressed, but starting out privately is welcomed.

The big goal here is for OpenStack to be good, as a technical
production _and_ as a community.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-28 Thread Chris Dent

On Tue, 28 Aug 2018, Matt Riedemann wrote:


Are people okay with that and willing to commit to being okay with
that answer in reviews? To some extent we need to have some faith on
the end result: the tests work. If people are not okay with that, we
need the people who are not to determine and prove the alternate
strategy. I've had this one work and work well.


Seems reasonable to me. But to be clear, if there are 70 failed tests, are 
you going to have 70 separate patches? Or this is just one of those things 
where you start with 70, fix something, get down to 50 failed tests, and 
iterate until you're down to all passing. If so, I'm OK with that. It's hard 
to say without knowing how many patches get from 70 failures to 0 and what 
the size/complexity of those changes is, but without knowing I'd default to 
the incremental approach for ease of review.


It's lumpy. But at least at the beginning it will be something like:
0 passing, still 0 passing; still 0 passing; still 0 passing; 150
passing, 700 failing; 295 passing, X failing, etc. Because in the
early stages, test discovery and listing doesn't work at all, for
quite a few different reasons. Based on the discussion here,
resolving those "different reasons" is things people want to see in
different commits.

One way to optimize this (if people preferred) would be to not use
stestr as called by tox, with its built in test discovery, but
instead run testtools or subunit in a non-parallel and failfast
where not all tests need to be discovered first. That would provide a
more visible sense of "it's getting better" to someone who is running
the tests locally using that alternate method, but would not do much
for the jobs run by zuul, so probably not all that useful.

Thanks for the other info on the devstack and grenade stuff. If I
read you right, from your perspective it's a case of "we'll see" and
"we'll figure it out", which sounds good to me.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update

2018-08-28 Thread Chris Dent

On Tue, 28 Aug 2018, Bob Ball wrote:


Just looking at Naichuan's output, I wonder if this is because allocation_ratio 
is registered as 0 in the inventory.


Yes.

Whatever happened to cause that is the root, that will throw the
math off into zeroness in lots of different places. The default (if
you don't send an allocation_ratio) is 1.0, so maybe there's some
code somewhere that is trying to use the default (by not sending)
but is accidentally sending 0 instead?
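
The arithmetic shows why (toy numbers):

```
# A zero allocation_ratio zeroes out capacity everywhere it is used.
total, reserved = 8, 0
allocation_ratio = 0.0   # should have defaulted to 1.0
capacity = int((total - reserved) * allocation_ratio)
# capacity == 0, so every allocation against this inventory fails.
```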

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-28 Thread Chris Dent
 these points, but very happy to help.

3. When those pass, we merge them, effectively orphaning nova's copy of 
placement. Switch those jobs to voting.


4. Finally, we delete the orphaned code from nova (without needing to make 
any changes to non-placement-only test code -- code is truly orphaned).


In case you missed it, one of the things I did earlier in the
discussion was make it so that the wsgi script for placement defined
in nova's setup.cfg [1] could:

* continue to exist
* with the same name
* using the nova.conf file
* running the extracted placement code

That was easy to do because of the work over the last year or so
that has been hardening the boundary between placement and nova, in
place. I've been assuming that maintaining the option to use
the original conf file is a helpful trick for people. Is that the case?
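
In spirit the shim is something like this (names are illustrative,
not the actual review):

```
# Illustrative only -- see [1] for the real thing. The old entry
# point name survives, but it loads the extracted placement
# application, configured from nova.conf.
from oslo_config import cfg

from placement import deploy  # assumed extracted package layout

def init_application():
    conf = cfg.ConfigOpts()
    conf([], project='nova',
         default_config_files=['/etc/nova/nova.conf'])
    return deploy.loadapp(conf)
```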

Thanks.

[1] 
https://review.openstack.org/#/c/596291/3/nova/api/openstack/placement/wsgi.py
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Chris Dent

On Fri, 24 Aug 2018, Doug Hellmann wrote:


I guess all of the people who complained so loudly about the global in 
oslo.config are gone?


It's a different context. In a testing environment where there is
already a well established pattern of use it's not a big deal.
Global in oslo.config is still horrible, but again: a well
established pattern of use.

This is part of why I think it is better positioned in oslotest as
that signals its limitations.

However, like I said in my other message, copying nova's thing has
proven fine.
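
For anyone who hasn't seen it, nova's thing is roughly this (a
from-memory sketch, not the exact module):

```
# Sketch of the uuid sentinel pattern: attribute access hands back a
# uuid that is stable for the life of the process.
import uuid

class UUIDSentinels(object):
    def __init__(self):
        self._uuids = {}

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        return self._uuids.setdefault(name, str(uuid.uuid4()))

uuids = UUIDSentinels()
assert uuids.instance1 == uuids.instance1  # same name, same uuid
assert uuids.instance1 != uuids.instance2
```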

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Chris Dent

On Fri, 24 Aug 2018, Chris Dent wrote:


That work is in gerrit at

   https://review.openstack.org/#/c/596291/

with a hopefully clear commit message about what's going on. As with
the rest of this work, this is not something to merge, rather an
experiment to learn from. The hot spots in the changes are
relatively limited and about what you would expect so, with luck,
should be pretty easy to deal with, some of them even before we
actually do any extracting (to enhance the boundaries between the
two services).


After some prompting from gibi, that code has now been adjusted so
that requirements.txt and tox.ini [1] make sure that the extract
placement branch is installed into the test virtualenvs. So in the
gate the unit and functional tests pass. Other jobs do not because
of [1].

In the intervening time I've taken that code, built a devstack that
uses a nova-placement-api wsgi script that uses nova.conf and the
extracted placement code. It runs against the nova-api database.

Created a few servers. Worked.

Then I switched the devstack@placement-unit unit file to point to
the placement-api wsgi script, and configured
/etc/placement/placement.conf to have a
[placement_database]/connection of the nova-api db.

Created a few servers. Worked.

Thanks.

[1] As far as I can tell a requirements.txt entry of

-e 
git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement

will install just fine with 'pip install -r requirements.txt', but
if I do 'pip install nova' and that line is in requirements.txt it
does not work. This means I had to change tox.ini to have a deps
setting of:

deps = -r{toxinidir}/test-requirements.txt
   -r{toxinidir}/requirements.txt

to get the functional and unit tests to build working virtualenvs.
That this is not happening in the dsvm-based zuul jobs means that the
tests can't run or pass. What's going on here? Ideas?
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Chris Dent


Over the past few days a few of us have been experimenting with
extracting placement to its own repo, as has been discussed at
length on this list, and in some etherpads:

https://etherpad.openstack.org/p/placement-extract-stein
https://etherpad.openstack.org/p/placement-extraction-file-notes

As part of that, I've been doing some exploration to tease out the
issues we're going to hit as we do it. None of this is work that
will be merged, rather it is stuff to figure out what we need to
know to do the eventual merging correctly and efficiently.

Please note that doing that is just the near edge of a large
collection of changes that will cascade in many ways to many
projects, tools, distros, etc. The people doing this are aware of
that, and the relative simplicity (and fairly immediate success) of
these experiments is not misleading people into thinking "hey, no
big deal". It's a big deal.

There's a strategy now (described at the end of the first etherpad
listed above) for trimming the nova history to create a thing which
is placement. From the first run of that Ed created a github repo
and I branched that to eventually create:

https://github.com/EdLeafe/placement/pull/2

In that, all the placement unit and functional tests are now
passing, and my placecat [1] integration suite also passes.

That work has highlighted some gaps in the process for trimming
history which will be refined to create another interim repo. We'll
repeat this until the process is smooth, eventually resulting in an
openstack/placement.

To take things further, this morning I pip installed the placement
code represented by that pull request into a nova repo and made some
changes to remove placement from nova.

With some minor adjustments I got the remaining unit and functional
tests working.

That work is in gerrit at

https://review.openstack.org/#/c/596291/

with a hopefully clear commit message about what's going on. As with
the rest of this work, this is not something to merge, rather an
experiment to learn from. The hot spots in the changes are
relatively limited and about what you would expect so, with luck,
should be pretty easy to deal with, some of them even before we
actually do any extracting (to enhance the boundaries between the
two services).

If you're interested in this process please have a look at all the
links and leave comments there, in response to this email, or join
#openstack-placement on freenode to talk about it.

Thanks.

[1] https://github.com/cdent/placecat
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-23 Thread Chris Martin
Apologies -- I'm running Pike release of Cinder, Luminous release of
Ceph. Deployed with OpenStack-Ansible and Ceph-Ansible respectively.

On Thu, Aug 23, 2018 at 8:27 PM, David Medberry  wrote:
> Hi Chris,
>
> Unless I overlooked something, I don't see Cinder or Ceph versions posted.
>
> Feel free to just post the codenames but give us some inkling.
>
> On Thu, Aug 23, 2018 at 3:26 PM, Chris Martin  wrote:
>>
>> I back up my volumes daily, using incremental backups to minimize
>> network traffic and storage consumption. I want to periodically remove
>> old backups, and during this pruning operation, avoid entering a state
>> where a volume has no recent backups. Ceph RBD appears to support this
>> workflow, but unfortunately, Cinder does not. I can only delete the
>> *latest* backup of a given volume, and this precludes any reasonable
>> way to prune backups. Here, I'll show you.
>>
>> Let's make three backups of the same volume:
>> ```
>> openstack volume backup create --name backup-1 --force volume-foo
>> openstack volume backup create --name backup-2 --force volume-foo
>> openstack volume backup create --name backup-3 --force volume-foo
>> ```
>>
>> Cinder reports the following via `volume backup show`:
>> - backup-1 is not an incremental backup, but backup-2 and backup-3 are
>> (`is_incremental`).
>> - All but the latest backup have dependent backups
>> (`has_dependent_backups`).
>>
>> We take a backup every day, and after a week we're on backup-7. We
>> want to start deleting older backups so that we don't keep
>> accumulating backups forever! What happens when we try?
>>
>> ```
>> # openstack volume backup delete backup-1
>> Failed to delete backup with name or ID 'backup-1': Invalid backup:
>> Incremental backups exist for this backup. (HTTP 400)
>> ```
>>
>> We can't delete backup-1 because Cinder considers it a "base" backup
>> which `has_dependent_backups`. What about backup-2? Same story. Adding
>> the `--force` flag just gives a slightly different error message. The
>> *only* backup that Cinder will delete is backup-7 -- the very latest
>> one. This means that if we want to remove the oldest backups of a
>> volume, *we must first remove all newer backups of the same volume*,
>> i.e. delete literally all of our backups.
>>
>> Also, we cannot force creation of another *full* (non-incremental)
>> backup in order to free all of the earlier backups for removal.
>> (Omitting the `--incremental` flag has no effect; you still get an
>> incremental backup.)
>>
>> Can we hope for better? Let's reach behind Cinder to the Ceph backend.
>> Volume backups are represented as a "base" RBD image with a snapshot
>> for each incremental backup:
>>
>> ```
>> # rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base
>> SNAPID NAME                                                            SIZE      TIMESTAMP
>>    577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 10240 MB  Thu Aug 23 10:57:48 2018
>>    578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 10240 MB  Thu Aug 23 11:05:43 2018
>>    579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 10240 MB  Thu Aug 23 11:06:47 2018
>>    580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 10240 MB  Thu Aug 23 11:22:23 2018
>>    581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 10240 MB  Thu Aug 23 11:22:47 2018
>>    582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 10240 MB  Thu Aug 23 11:23:04 2018
>>    583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 10240 MB  Thu Aug 23 11:23:31 2018
>>    584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 10240 MB  Thu Aug 23 12:32:43 2018
>> ```
>>
>> It seems that each snapshot stands alone and doesn't depend on others.
>> Ceph lets me delete the older snapshots.
>>
>> ```
>> # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43
>> Removing snap: 100% complete...done.
>> # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71
>> Removing snap: 100% complete...done.
>> ```
>>
>> Now that we nuked backup-1 and backup-4, can we still restore from
>> backup-7 and launch an instance with it?
>>
>> ```
>> openstack volume create --size 10 --bootable volume-foo-restored
>> openstack volume backup restore backup-7 volume-foo-restored

[Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-23 Thread Chris Martin
I back up my volumes daily, using incremental backups to minimize
network traffic and storage consumption. I want to periodically remove
old backups, and during this pruning operation, avoid entering a state
where a volume has no recent backups. Ceph RBD appears to support this
workflow, but unfortunately, Cinder does not. I can only delete the
*latest* backup of a given volume, and this precludes any reasonable
way to prune backups. Here, I'll show you.

Let's make three backups of the same volume:
```
openstack volume backup create --name backup-1 --force volume-foo
openstack volume backup create --name backup-2 --force volume-foo
openstack volume backup create --name backup-3 --force volume-foo
```

Cinder reports the following via `volume backup show`:
- backup-1 is not an incremental backup, but backup-2 and backup-3 are
(`is_incremental`).
- All but the latest backup have dependent backups (`has_dependent_backups`).
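
Those two fields can be read straight off the CLI; a quick sketch using
the names from the example (`-c` column selection is standard
openstackclient behavior):

```
openstack volume backup show backup-2 -c is_incremental -c has_dependent_backups
```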

We take a backup every day, and after a week we're on backup-7. We
want to start deleting older backups so that we don't keep
accumulating backups forever! What happens when we try?

```
# openstack volume backup delete backup-1
Failed to delete backup with name or ID 'backup-1': Invalid backup:
Incremental backups exist for this backup. (HTTP 400)
```

We can't delete backup-1 because Cinder considers it a "base" backup
which `has_dependent_backups`. What about backup-2? Same story. Adding
the `--force` flag just gives a slightly different error message. The
*only* backup that Cinder will delete is backup-7 -- the very latest
one. This means that if we want to remove the oldest backups of a
volume, *we must first remove all newer backups of the same volume*,
i.e. delete literally all of our backups.
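
To make the failure mode concrete, here is the retention loop I would
*like* to run -- keep the newest seven, delete the rest. With the
behavior above, every delete of a non-latest backup fails. (A sketch
only; it assumes your client supports the `--volume` filter and GNU
`head`, and a real script should sort by creation time rather than
trust the list order.)

```
# Hypothetical retention policy: keep the 7 newest backups of volume-foo.
# Each delete of a non-latest backup fails with
# "Incremental backups exist for this backup."
for b in $(openstack volume backup list --volume volume-foo -f value -c ID | head -n -7); do
  openstack volume backup delete "$b"
done
```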

Also, we cannot force creation of another *full* (non-incremental)
backup in order to free all of the earlier backups for removal.
(Omitting the `--incremental` flag has no effect; you still get an
incremental backup.)
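
If you want to reproduce that, the check is quick (`backup-8-full` is
just a name for the attempt):

```
openstack volume backup create --name backup-8-full --force volume-foo
openstack volume backup show backup-8-full -c is_incremental
# is_incremental still comes back True once a base backup exists
```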

Can we hope for better? Let's reach behind Cinder to the Ceph backend.
Volume backups are represented as a "base" RBD image with a snapshot
for each incremental backup:

```
# rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base
SNAPID NAME                                                            SIZE      TIMESTAMP
   577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 10240 MB  Thu Aug 23 10:57:48 2018
   578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 10240 MB  Thu Aug 23 11:05:43 2018
   579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 10240 MB  Thu Aug 23 11:06:47 2018
   580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 10240 MB  Thu Aug 23 11:22:23 2018
   581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 10240 MB  Thu Aug 23 11:22:47 2018
   582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 10240 MB  Thu Aug 23 11:23:04 2018
   583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 10240 MB  Thu Aug 23 11:23:31 2018
   584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 10240 MB  Thu Aug 23 12:32:43 2018
```
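
Note the naming convention: each snapshot is
`backup.<Cinder backup UUID>.snap.<timestamp>`, which makes the two
views easy to correlate. A rough mapping loop (the `backups` pool name
is an assumption -- check `backup_ceph_pool` in cinder.conf):

```
# For each Cinder backup record, find its RBD snapshot, if any.
for id in $(openstack volume backup list -f value -c ID); do
  rbd snap ls backups/volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base \
    | grep "backup.${id}.snap" || echo "no snapshot found for ${id}"
done
```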

It seems that each snapshot stands alone and doesn't depend on others.
Ceph lets me delete the older snapshots.

```
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43
Removing snap: 100% complete...done.
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71
Removing snap: 100% complete...done.
```
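
One caveat before anyone copies this: Cinder has no idea the snapshots
are gone, so its records for backup-1 and backup-4 remain, and I'd
expect a restore from those particular records to fail now. Easy to
see:

```
# Cinder still lists backup-1 and backup-4 even though their RBD
# snapshots were removed behind its back.
openstack volume backup list
```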

Now that we nuked backup-1 and backup-4, can we still restore from
backup-7 and launch an instance with it?

```
openstack volume create --size 10 --bootable volume-foo-restored
openstack volume backup restore backup-7 volume-foo-restored
openstack server create --volume volume-foo-restored --flavor medium1
instance-restored-from-backup-7
```

Yes! We can SSH to the instance and it appears intact.

Perhaps each snapshot in Ceph stores a complete diff from the base RBD
image (rather than each successive snapshot depending on the last). If
this is true, then Cinder is unnecessarily protective of older
backups. Cinder represents these as "with dependents" and doesn't let
us touch them, even though Ceph will let us delete older RBD
snapshots, apparently without disrupting newer snapshots of the same
volume. If we could remove this limitation, Cinder backups would be
significantly more useful for us. We mostly host servers with
non-cloud-native workloads (IaaS for research scientists). For these,
full-disk backups at the infrastructure level are an important
supplement to file-level or application-level backups.
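
For what it's worth, my understanding (worth verifying) is that RBD
snapshots are independent point-in-time views rather than a chain of
diffs, and that the incremental data is computed at backup time, e.g.
with `export-diff` between two snapshots. That would explain why
intermediate snapshots can vanish without harming later ones. A sketch
using the second and third snapshots from the listing above:

```
# A diff can be produced between ANY two snapshots of the base image,
# so no snapshot "contains" another's data.
rbd export-diff \
  --from-snap backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 \
  volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 \
  incr-2-to-3.diff
```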

It would be great if someone else could confirm or disprove what I'm
seeing here. I'd also love to hear from anyone else using Cinder
backups this way.

Regards,

Chris Martin at CyVerse

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
