Re: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread melanie witt

On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote:

On 9/27/2018 3:02 PM, Jay Pipes wrote:

A great example of this would be the proposed "deploy template" from
[2]. This is nothing more than abusing the placement traits API in order
to allow passthrough of instance configuration data from the nova flavor
extra spec directly into the nodes.instance_info field in the Ironic
database. It's a hack that abuses the entire concept of placement
traits, IMHO.

We should have a way *in Nova* of allowing instance configuration
key/value information to be passed through to the virt driver's spawn()
method, much the same way we provide for user_data that gets exposed
after boot to the guest instance via configdrive or the metadata service
API. This deploy template thing is just a hack to get around the fact
that nova doesn't have a basic way of passing through collated instance
configuration key/value information, which is a darn shame, and I'm
really kind of annoyed with myself for not noticing this sooner. :(


We talked about this in Dublin, though, right? We said a good thing to do
would be to have some kind of template/profile/config/whatever stored off
in glare, where a schema could be registered for it, and then you'd pass a
handle (ID reference) to nova when creating the (baremetal) server; nova
would pull it down from glare and hand it off to the virt driver. It's
just that no one is doing that work.


If I understood correctly, that discussion was around adding a way to 
pass a desired hardware configuration to nova when booting an ironic 
instance. And that it's something that isn't yet possible to do using 
the existing ComputeCapabilitiesFilter. Someone please correct me if I'm 
wrong there.


That said, I still don't understand why we are talking about deprecating 
the ComputeCapabilitiesFilter if there's no supported way to replace it 
yet. If boolean traits are not enough to replace it, then we need to 
hold off on deprecating it, right? Would the 
template/profile/config/whatever in glare approach replace what the 
ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly 
understanding this yet.


-melanie






Re: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread Matt Riedemann

On 9/27/2018 3:02 PM, Jay Pipes wrote:
A great example of this would be the proposed "deploy template" from
[2]. This is nothing more than abusing the placement traits API in order
to allow passthrough of instance configuration data from the nova flavor
extra spec directly into the nodes.instance_info field in the Ironic
database. It's a hack that abuses the entire concept of placement
traits, IMHO.


We should have a way *in Nova* of allowing instance configuration
key/value information to be passed through to the virt driver's spawn()
method, much the same way we provide for user_data that gets exposed
after boot to the guest instance via configdrive or the metadata service
API. This deploy template thing is just a hack to get around the fact
that nova doesn't have a basic way of passing through collated instance
configuration key/value information, which is a darn shame, and I'm
really kind of annoyed with myself for not noticing this sooner. :(


We talked about this in Dublin, though, right? We said a good thing to do
would be to have some kind of template/profile/config/whatever stored off
in glare, where a schema could be registered for it, and then you'd pass a
handle (ID reference) to nova when creating the (baremetal) server; nova
would pull it down from glare and hand it off to the virt driver. It's
just that no one is doing that work.


--

Thanks,

Matt



Re: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup

2018-09-27 Thread Jean-Philippe Méthot
I got some answers from the openvswitch mailing list, essentially indicating 
the issue is in the connection between neutron-openvswitch-agent and ovs.

Here’s an output of ovs-vsctl list controller:

_uuid   : ff2dca74-9628-43c8-b89c-8d2f1242dd3f
connection_mode : out-of-band
controller_burst_limit: []
controller_rate_limit: []
enable_async_messages: []
external_ids: {}
inactivity_probe: []
is_connected: false
local_gateway   : []
local_ip: []
local_netmask   : []
max_backoff : []
other_config: {}
role: other
status  : {last_error="Connection timed out", sec_since_connect="22", sec_since_disconnect="1", state=BACKOFF}
target  : "tcp:127.0.0.1:6633"

So OVS is still working, but the connection between neutron-openvswitch-agent
and OVS gets interrupted somehow. It may also be linked to the HA VRRP
switching hosts at random, as the connection between both network nodes gets
severed. We also see SSH lagging momentarily. I'm starting to think that a
limit of some kind in Linux is being reached, preventing connections from
happening. However, I don't think it's the max open files limit, since the
number of open files is nowhere close to what I've set it to.

Ideas?
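
In case it helps anyone hitting the same thing: the probe being timed out in
these logs is OVS's inactivity probe, and its interval can be raised. A
minimal sketch, assuming the usual br-int/br-tun bridge names (values are in
milliseconds; treat it as a workaround for a busy control plane, not a fix):

  # show the current probe interval and target per bridge controller record
  ovs-vsctl --columns=target,inactivity_probe,is_connected list controller

  # raise the OpenFlow inactivity probe to 30 seconds
  ovs-vsctl set controller br-int inactivity_probe=30000
  ovs-vsctl set controller br-tun inactivity_probe=30000

Recent neutron releases also expose an of_inactivity_probe option in the
[OVS] section of openvswitch_agent.ini; check whether your release has it,
as the agent may otherwise reset the value when it reconnects.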
  
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.




> On 26 Sep 2018, at 15:16, Jean-Philippe Méthot wrote:
> 
> Yes, I notice that every time that message appears, at least a few packets 
> get dropped and some of our instances pop up in nagios, even though they are 
> reachable 1 or 2 seconds later. It's really causing us some issues, as we 
> can't ensure proper network quality for our customers. Have you noticed the 
> same?
> 
> At this point I think it may be best to contact openvswitch directly, since 
> it seems to be an issue with their component. I am about to do that and hope 
> I don't get sent back to the openstack mailing list. I would really like to 
> know what this probe is and why it disconnects constantly under load.
> 
> Jean-Philippe Méthot
> Openstack system administrator
> Administrateur système Openstack
> PlanetHoster inc.
> 
> 
> 
> 
>> On 26 Sep 2018, at 11:48, Simon Leinen wrote:
>> 
>> Jean-Philippe Méthot writes:
>>> This particular message makes it sound as if openvswitch is getting 
>>> overloaded.
>>> Sep 23 03:54:08 network1 ovsdb-server: 
>>> ovs|01253|reconnect|ERR|tcp:127.0.0.1:50814: no response to inactivity 
>>> probe after 5.01 seconds, disconnecting
>> 
>> We get these as well :-(
>> 
>>> A lot of those keep appearing, and openvswitch always reconnects almost
>>> instantly though. I've done some research about that particular
>>> message, but it didn't give me anything I can use to fix it.
>> 
>> Would be interested in solutions as well.  But I'm sceptical whether
>> kernel settings can help here, because the timeout/slowness seems to be
>> located in the user-space/control-plane parts of Open vSwitch,
>> i.e. OVSDB.
>> -- 
>> Simon.
>> 
>>> Jean-Philippe Méthot
>>> Openstack system administrator
>>> Administrateur système Openstack
>>> PlanetHoster inc.
>> 
>>> On 25 Sep 2018, at 19:37, Erik McCormick wrote:
>> 
>>> Are you getting any particular log messages that lead you to conclude your 
>>> issue lies with OVS? I've hit lots of kernel limits under those conditions 
>>> before OVS itself ever noticed. Anything of interest in dmesg, the journal, 
>>> or the neutron logs?
>> 
>>> On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot wrote:
>> 
>>> Hi,
>> 
>>> Are there any recommendations regarding kernel settings configuration for 
>>> openvswitch? We've just been hit by what we believe may be an attack of 
>>> some kind we have never seen before, and we're wondering if there's a way 
>>> to optimize our network nodes' kernels for openvswitch operation and thus 
>>> minimize the impact of such an attack, or whatever it was.
>> 
>>> Best regards,
>> 
>>> Jean-Philippe Méthot
>>> Openstack system administrator
>>> Administrateur système Openstack
>>> PlanetHoster inc.
>> 

Re: [Openstack-operators] [openstack-dev] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-27 Thread Jay Pipes

On 09/26/2018 05:48 PM, melanie witt wrote:

On Tue, 25 Sep 2018 12:08:03 -0500, Matt Riedemann wrote:

On 9/25/2018 8:36 AM, John Garbutt wrote:

 Another thing is about existing flavors configured for these
 capabilities-scoped specs. Are you saying during the deprecation we'd
 continue to use those even if the filter is disabled? In the review I had
 suggested that we add a pre-upgrade check which inspects the flavors, and
 if any of these are found, we report a warning meaning those flavors need
 to be updated to use traits rather than capabilities. Would that be
 reasonable?


I like the idea of a warning, but there are features that have not yet
moved to traits:
https://specs.openstack.org/openstack/ironic-specs/specs/juno-implemented/uefi-boot-for-ironic.html

There is a more general plan that will help, but it's not quite ready yet:

https://review.openstack.org/#/c/504952/

As such, I don't think we can pull the plug on flavors including
capabilities and passing them to Ironic, but (after a cycle of
deprecation) I think we can now stop pushing capabilities from Ironic
into Nova and using them for placement.


Forgive my ignorance, but if traits are not on par with capabilities,
why are we deprecating the capabilities filter?


I would like to know the answer to this as well.


In short, traits were never designed to be key/value pairs. They are 
simple strings indicating boolean capabilities.


Ironic "capabilities" are key/value metadata pairs. *Some* of those 
Ironic "capabilities" are possible to create as boolean traits.


For example, you can change the boot_mode=uefi and boot_mode=bios Ironic 
capabilities to be a trait called CUSTOM_BOOT_MODE_UEFI or 
CUSTOM_BOOT_MODE_BIOS [1].
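
Concretely, the difference shows up in the flavor extra specs like this (a
sketch; the flavor name is hypothetical, and the trait must also be set on
the node's resource provider):

  # old style: key/value capability matched by ComputeCapabilitiesFilter
  openstack flavor set bm.gold --property capabilities:boot_mode=uefi

  # new style: boolean placement trait required by the scheduler
  openstack flavor set bm.gold --property trait:CUSTOM_BOOT_MODE_UEFI=required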


Other Ironic "capabilities" are not, in fact, capabilities at all. 
Instead, they are just random key/value pairs that are not boolean in 
nature nor do they represent a capability of the baremetal hardware.


A great example of this would be the proposed "deploy template" from 
[2]. This is nothing more than abusing the placement traits API in order 
to allow passthrough of instance configuration data from the nova flavor 
extra spec directly into the nodes.instance_info field in the Ironic 
database. It's a hack that abuses the entire concept of placement 
traits, IMHO.


We should have a way *in Nova* of allowing instance configuration 
key/value information to be passed through to the virt driver's spawn() 
method, much the same way we provide for user_data that gets exposed 
after boot to the guest instance via configdrive or the metadata service 
API. This deploy template thing is just a hack to get around the fact 
that nova doesn't have a basic way of passing through collated instance 
configuration key/value information, which is a darn shame, and I'm 
really kind of annoyed with myself for not noticing this sooner. :(


-jay

[1] As I've asked for in the past, it would be great to have Ironic 
contributors push patches to the os-traits library for standardized 
baremetal capabilities like boot modes. Please do consider contributing 
there.


[2] 
https://review.openstack.org/#/c/504952/16/specs/approved/deploy-templates.rst




[Openstack-operators] QoS Nova and Cinder

2018-09-27 Thread Florian Engelmann

Hi,

When starting a new instance on ephemeral storage, all "quota:disk_*" 
settings are honored and work great with Ceph as the ephemeral backend 
and KVM as the hypervisor.
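
For reference, these are the flavor properties I mean, as a sketch with a
hypothetical flavor name and example limits:

  openstack flavor set m1.small \
    --property quota:disk_read_iops_sec=500 \
    --property quota:disk_write_iops_sec=500 \
    --property quota:disk_total_bytes_sec=104857600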

Starting a new instance with "--volume" instead:

--volume  Create server using this volume as the boot disk 

the flavor's quota settings are not honored. Questions:

1. Is there any way to tell nova to still honor the flavor quota 
settings if --volume is used?


2. How do I create a default volume type with an associated cinder QoS, so 
there is still a way to prevent that volume from getting unlimited IOPS?
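
For question 2, what I have pieced together so far looks like this (a sketch
with hypothetical names and example limits; front-end QoS is enforced by the
hypervisor, so it should also apply to Ceph-backed volumes). Is this the
right approach?

  openstack volume qos create --consumer front-end \
    --property read_iops_sec=500 --property write_iops_sec=500 limited-iops
  openstack volume type create standard
  openstack volume qos associate limited-iops standard

  # cinder.conf, so new volumes get this type when none is specified:
  # [DEFAULT]
  # default_volume_type = standard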


Thank you so much!

All the best,
Florian





Re: [Openstack-operators] [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Doug Hellmann
Dean Troyer writes:

> On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann wrote:
>> I started documenting the compute API gaps in OSC last release [1]. It's a
>> big gap and needs a lot of work, even for existing CLIs (the cold/live
>> migration CLIs in OSC are a mess, and you can't even boot from volume where
>> nova creates the volume for you). That's also why I put something into the
>> etherpad about the OSC core team even being able to handle an onslaught of
>> changes for a goal like this.
>
> The OSC core team is very thin, yes, it seems as though companies
> don't like to spend money on client-facing things...I'll be in the
> hall following this thread should anyone want to talk...
>
> The migration commands are a mess, mostly because I got them wrong to
> start with and we have only tried to patch them up since. This is one
> area I think we need to wipe clean and fix properly.  Yay! Major version
> release!

I definitely think having details about the gaps would be a prerequisite
for approving a goal, but I wonder if that's something 1 person could
even do alone. Is this an area where a small team is needed?

>> I thought the same, and we talked about this at the Austin summit, but OSC
>> is inconsistent about this (you can live migrate a server but you can't
>> evacuate it - there is no CLI for evacuation). It also came up at the Stein
>> PTG with Dean in the nova room giving us some direction. [2] I believe the
>> summary of that discussion was:
>
>> a) to deal with the core team sprawl, we could move the compute stuff out of
>> python-openstackclient and into an osc-compute plugin (like the
>> osc-placement plugin for the placement service); then we could create a new
>> core team which would have python-openstackclient-core as a superset
>
> This is not my first choice but is not terrible either...

We built cliff to be based on plugins to support this sort of work
distribution, right?
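
For reference, a sketch of how an out-of-tree OSC plugin wires in commands
via entry points today, modeled on osc-placement (the package, module, and
command names here are hypothetical):

    # setup.cfg of a hypothetical osc-compute plugin
    [entry_points]
    openstack.cli.extension =
        compute = osc_compute.plugin

    openstack.compute.v2 =
        server_evacuate = osc_compute.v2.server:EvacuateServer
        server_migrate = osc_compute.v2.server:MigrateServer

cliff resolves each command name through these entry points, which is also
where a collision between two plugins defining the same command would have
to be arbitrated.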

>> b) Dean suggested that we close the compute API gaps in the SDK first, but
>> that could take a long time as well...but it sounded like we could use the
>> SDK for things that existed in the SDK and use novaclient for things that
>> didn't yet exist in the SDK
>
> Yup, this can be done in parallel.  The unit of decision for use sdk
> vs use XXXclient lib is per-API call.  If the client lib can use an
> SDK adapter/session it becomes even better.  I think the priority for
> what to address first should be guided by complete gaps in coverage
> and the need for microversion-driven changes.
>
>> This might be a candidate for one of these multi-release goals that the TC
>> started talking about at the Stein PTG. I could see something like this
>> being a goal for Stein:
>>
>> "Each project owns its own osc- plugin for OSC CLIs"
>>
>> That deals with the core team and sprawl issue, especially with stevemar
>> being gone and dtroyer being distracted by shiny x-men bird related things.
>> That also seems relatively manageable for all projects to do in a single
>> release. Having a single-release goal of "close all gaps across all service
>> types" is going to be extremely tough for any older projects that had CLIs
>> before OSC was created (nova/cinder/glance/keystone). For newer projects,
>> like placement, it's not a problem because they never created any other CLI
>> outside of OSC.

Yeah, I agree this work is going to need to be split up. I'm still not
sold on the idea of multi-cycle goals, personally.

> I think the major difficulty here is simply how to migrate users from
> today state to future state in a reasonable manner.  If we could teach
> OSC how to handle the same command being defined in multiple plugins
> properly (hello entrypoints!) it could be much simpler as we could
> start creating the new plugins and switch as the new command
> implementations become available rather than having a hard cutover.
>
> Or maybe the definition of OSC v4 is as above and we just work at it
> until complete and cut over at the end.  Note that the current APIs
> that are in-repo (Compute, Identity, Image, Network, Object, Volume)
> are all implemented using the plugin structure, OSC v4 could start as
> the breaking out of those without command changes (except new
> migration commands!) and then the plugins all re-write and update at
> their own tempo.  Dang, did I just deconstruct my project?

It sure sounds like it. Congratulations!

I like the idea of moving the existing code into libraries, having
python-openstackclient depend on them, and then asking project teams for
more help with them.

> One thing I don't like about that is we just replace N client libs
> with N (or more) plugins now and the number of things a user must
> install doesn't go down.  I would like to hear from anyone who deals
> with installing OSC if that is still a big deal or should I let go of
> that worry?

Don't package managers just deal with this? I can pip/yum/apt install
something and get all of its dependencies, right?

Doug


Re: [Openstack-operators] [OpenStack][Neutron][SFC] Regarding SFC support on provider VLAN N/W

2018-09-27 Thread nicolas

On 2018-09-26 14:06, Amit Kumar wrote:


Hi All,

We are using the Ocata release and we have installed networking-sfc for 
Service Function Chaining functionality. Installation was successful, and 
then we tried to create port pairs on a VLAN network and it failed. We 
tried creating port pairs on a VXLAN-based network and it worked. So, is 
SFC functionality supported only on VXLAN-based networks?


Regards,
Amit


Hi,
I had similar problems with networking-sfc (not able to create port pair 
groups and not able to delete port pairs). I also had trouble 
understanding the documentation of networking-sfc.


I sent a mail (see below) to the people listed in the doc and to committers 
on the GitHub repo, but I didn't get any answer.


I am interested in any feedback about my questions below! TY!



~~~
My previous email about networking-sfc begins here.
~~~

Hi,

I want to test the Service Function Chaining (SFC) functionalities of 
OpenStack when using the networking_sfc driver. But I have some problems 
reproducing the tutorial in the doc [1][2].
If I execute the commands in the tutorial [1][2], they fail.

There is a chance that I am missing something, either in the networking_sfc
installation phase or in the tutorial's test configuration phase. If you
could be kind enough to read the following, that could help me and maybe
improve my understanding of the tutorial/doc.

You need to read this with a text editor to see the figures.



#################################
## Installation of networking_sfc
#################################

## My environment

First, I deploy my OpenStack env with the OpenStack Ansible framework.
This is a quick description of my lab environment:

  OpenStack version : stable/queens
  OpenStack Ansible OSA version : 17.0.9.dev22
  python env version: python2.7
  operating system  : Ubuntu Server 16.04
  1 controller node, 1 dedicated neutron node, 2 compute nodes


## Installation of networking_sfc

Then, I manually install [over my OSA deployment] and configure
networking_sfc following these links:
* https://docs.openstack.org/networking-sfc/latest/install/install.html
* https://docs.openstack.org/releasenotes/networking-sfc/queens.html

I install with pip (python2.7).


First, I must source the right python venv (OSA is prepared for that 
[3]):

  ```
  user@neutron-serveur: source /openstack/venvs/neutron-17.0.9/bin/activate
  ```
(NB: following [3], OSA should deploy OpenStack with networking-sfc, but it
did not work for me. Therefore I installed networking-sfc manually.)


Then I install networking-sfc:
  ```
  (neutron-17.0.9) user@neutron-serveur: pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens networking-sfc==6.0.0
  ```

The install seems to be OK (no errors, only "Ignoring" notices for the
python3.x versions of packages).


Then, I modify the neutron config files to meet this:
https://docs.openstack.org/networking-sfc/latest/install/configuration.html
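
For anyone reproducing this, that page boils down to registering the two
service plugins and their OVS drivers in neutron.conf, roughly like this
(a sketch; double-check the exact names against your release):

  ```
  # neutron.conf on the neutron server
  [DEFAULT]
  service_plugins = ...,flow_classifier,sfc

  [sfc]
  drivers = ovs

  [flowclassifier]
  drivers = ovs
  ```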




#################################
## Using networking_sfc CLI
#################################

I want to reproduce the following steps to check my installation and get a
better understanding:
* [1] https://docs.openstack.org/newton/networking-guide/config-sfc.html
* [2] https://docs.openstack.org/networking-sfc/latest/contributor/system_design_and_workflow.html


But after reading this, I don't understand a few things.


When I read the description of the example, this is what I understand:

```
+--------------+      +-----+        +-----+        +-----+      +---------------+
| service      |      | VM1 |        | VM2 |        | VM3 |      | service       |
| VM vm1       |->--p1| SF1 |p2->--p3| SF2 |p4->--p5| SF3 |p6->--| VM vm2        |
| 22.1.20.1:23 |      +-----+        +-----+        +-----+      | 171.4.5.6:100 |
| Source       |                                                 | Destination   |
+--------------+                                                 +---------------+
```




But when I read the next steps, this is what I see:

```
                  +-----+        +-----+        +-----+
                  | VM1 |        | VM2 |        | VM3 |
22.1.20.1:23->--p1| SF1 |p2->--p3| SF2 |p4->--p5| SF3 |p6->--171.4.5.6:100
                  +-----+        +-----+        +-----+
```
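
In CLI terms, I understand the chain in this figure gets wired up roughly as
follows, per the tutorial (a sketch using the port names from the figure):

```
neutron port-pair-create --ingress p1 --egress p2 PP1
neutron port-pair-create --ingress p3 --egress p4 PP2
neutron port-pair-create --ingress p5 --egress p6 PP3

neutron port-pair-group-create --port-pair PP1 PG1
neutron port-pair-group-create --port-pair PP2 PG2
neutron port-pair-group-create --port-pair PP3 PG3

neutron flow-classifier-create --ethertype IPv4 --protocol tcp \
  --source-ip-prefix 22.1.20.1/32 --destination-ip-prefix 171.4.5.6/32 FC1

neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 \
  --port-pair-group PG3 --flow-classifier FC1 PC1
```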




Here I have several questions:
 1. How do you configure the net1 network?
 2. Shouldn't we add an IP subnet to net1? Because I cannot create an
instance if there is no IP subnet. Maybe the 3 SFx instances VM1, 2 & 3
need 1 port for admin and 2 ports for their sfc port pair.
 3. Where are the 2 objects (the 2 service VMs) with the IP addresses
22.1.20.1 and 172.4.5.6?
 4. Is the proxy 

Re: [Openstack-operators] [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Thierry Carrez

First, I think that is a great goal, but I want to pick up on Dean's comment:

Dean Troyer wrote:

[...]
The OSC core team is very thin, yes, it seems as though companies
don't like to spend money on client-facing things...I'll be in the
hall following this thread should anyone want to talk...


I think OSC (and client-facing tooling in general) is a great place for 
OpenStack users (deployers of OpenStack clouds) to contribute. It's a 
smaller territory, it's less time-consuming than the service side, they 
are the most obvious interested party, and a small, 20% time investment 
would have a dramatic impact.


It's arguably difficult for OpenStack users to get involved in 
"OpenStack development": keeping track of what's happening in a large 
team is already likely to consume most of the time you can dedicate to 
it. But OSC is a specific, smaller area which would be a good match for 
the expertise and time availability of anybody running an OpenStack 
cloud that wants to contribute back and make OpenStack better.


Shameless plug: I proposed a Forum session in Berlin to discuss "Getting 
OpenStack users involved in the project" -- and we'll discuss such areas 
that are a particularly good match for users to get involved.


--
Thierry Carrez (ttx)



Re: [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for 'T' release

2018-09-27 Thread Kashyap Chamarthy
On Mon, Sep 24, 2018 at 09:11:42AM -0700, iain MacDonnell wrote:
> 
> 
> On 09/24/2018 06:22 AM, Kashyap Chamarthy wrote:
> > (b) Oracle Linux: Can you please confirm that you'll be able to
> >  update libvirt and QEMU to 4.0.0 and 2.11, respectively?
> 
> Hi Kashyap,
> 
> Those are already available at:
> 
> http://yum.oracle.com/repo/OracleLinux/OL7/developer/kvm/utils/x86_64/index.html

Hi Iain,

Thanks for confirming.  When you get a moment, please update the "FIXME"
for Oracle Linux:
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

-- 
/kashyap



[Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG

2018-09-27 Thread Tobias Rydberg

Hi everyone,

Time for a new meeting for PCWG - today (27th) 1400 UTC in 
#openstack-publiccloud! Agenda found at 
https://etherpad.openstack.org/p/publiccloud-wg


We will again have a short briefing from the PTG for those of you that 
missed it last week. Also, it is time to start planning for the upcoming 
summit - forum sessions submitted etc. Another important item on the 
agenda is the prioritization/ranking of our "missing features" list. We 
have identified a few cross-project goals already that we see as 
important, but we need more operators to engage in this ranking.


Talk to you later today!

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

