Re: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-25 Thread Michael Johnson
FYI, I took some time out this afternoon and wrote a detailed
certificate configuration guide. Hopefully this will help.

https://review.openstack.org/613454

Reviews would be welcome!

Michael
On Thu, Oct 25, 2018 at 7:00 AM Tobias Urdin  wrote:
>
> Might as well throw it out here.
>
> After a lot of troubleshooting we were able to narrow our issue down to
> our test environment running qemu virtualization; we moved our compute
> node to hardware and used KVM full virtualization instead.
>
> We could reliably reproduce the issue: generating a CSR from a
> private key and then trying to verify the CSR would fail, complaining about
> "Signature did not match the certificate request".
>
> We suspect qemu's floating-point emulation caused this; the same OpenSSL
> function that validates a CSR is the one used when validating the SSL
> handshake, which caused our issue.
> After going through the whole stack, we have Octavia working flawlessly
> without any issues at all.
>
> Best regards
> Tobias
>
> On 10/23/2018 04:31 PM, Tobias Urdin wrote:
> > Hello Erik,
> >
> > Could you specify the DNs you used for all certificates just so that I
> > can rule it out on my side.
> > You can redact anything sensitive with some placeholder, just to get a feel
> > for how it's configured.
> >
> > Best regards
> > Tobias
> >
> > On 10/22/2018 04:47 PM, Erik McCormick wrote:
> >> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  
> >> wrote:
> >>> Hello,
> >>>
> >>> I've been having a lot of issues with SSL certificates myself, on my
> >>> second trip now trying to get it working.
> >>>
> >>> Before this I spent a lot of time walking through every line in the DevStack
> >>> plugin and fixing my config options, used the generate
> >>> script [1], and still it didn't work.
> >>>
> >>> When I got the "invalid padding" issue it was because of the DN I used
> >>> for the CA and the certificate IIRC.
> >>>
> >>>> 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
> >>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
> >>> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
> >>> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
> >>> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
> >>> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
> >>>> 19:47 < tobias-urdin> after a quick google "The problem was that my
> >>> CA DN was the same as the certificate DN."
> >>>
> >>> IIRC I think that solved it, but then again I wouldn't remember fully
> >>> since I've been at so many different angles by now.
> >>>
> >>> Here is my IRC logs history from the #openstack-lbaas channel, perhaps
> >>> it can help you out
> >>> http://paste.openstack.org/show/732575/
> >>>
> >> Tobias, I owe you a beer. This was precisely the issue. I'm deploying
> >> Octavia with kolla-ansible. It only deploys a single CA. After hacking
> >> the templates and playbook to incorporate a separate server CA, the
> >> amphorae now load and provision the required namespace. I'm adding a
> >> kolla tag to the subject of this in hopes that someone might want to
> >> take on changing this behavior in the project. Hopefully after I get
> >> through Upstream Institute in Berlin I'll be able to do it myself if
> >> nobody else wants to do it.
> >>
> >> For certificate generation, I extracted the contents of
> >> octavia_certs_install.yml (which sets up the directory structure,
> >> openssl.cnf, and the client CA), and octavia_certs.yml (which creates
> >> the server CA and the client certificate) and mashed them into a
> >> separate playbook just for this purpose. At the end I get:
> >>
> >> ca_01.pem - Client CA Certificate
> >> ca_01.key - Client CA Key
> >> ca_server_01.pem - Server CA Certificate
> >> cakey.pem - Server CA Key
> >> client.pem - Concatenated Client Key and Certificate
> >>
> >> If it would help to have the playbook, I can stick it up on github
> >> with a huge "This is a hack" disclaimer on it.
> >>
> >>> -
> >>>
> >>> Sorry for hijacking the thread but I'm stuck as well.
> >>>
> >>> In the past I've tried to generate the certificates with [1], but have now
> >>> moved on to using the openstack-ansible way of generating them [2]
> >>> with some modifications.
> >>>
> >>> Right now I'm just getting: Could not connect to instance. Retrying.:
> >>> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
> >>> from the amphorae; haven't gotten any further, but I've eliminated a lot of
> >>> stuff in the middle.
> >>>
> >>> Tried deploying Octavia on Ubuntu with python3 just to make sure there
> >>> wasn't an issue with CentOS and OpenSSL versions, since it tends to lag
> >>> behind.
> >>> Checking the amphora with openssl s_client [3] gives the same error,
> >>> but the verification is successful; I just don't understand what the
> >>> "bad signature" part is about. From browsing some OpenSSL code it seems
> >>> to be related to RSA signatures somehow.
> >>>
> >>> 

Re: [Openstack-operators] Glance Image Visibility Issue? - Non admin users can see private images from other tenants

2018-10-25 Thread Moore, Michael Dane (GSFC-720.0)[BUSINESS INTEGRA, INC.]

I have dug deep into the code for glance, adding debug output to see what I
can find in our Queens environment.

Here is my debug code (I have a lot more but this is the salient part)

LOG.debug("in enforce(), action='%s', policyvalues='%s'", action, 
context.to_policy_values())
return super(Enforcer, self).enforce(action, target,
 context.to_policy_values(),
 do_raise=True,
 exc=exception.Forbidden,
 action=action)

Below is the output attempting to set an image that I own, while being an admin,
to public via `openstack image set --public cirros`

2018-10-25 18:29:16.575 17561 DEBUG glance.api.policy 
[req-e343bb10-8ec8-40df-8c0c-47d1b217ca0d - - - - -] in enforce(), 
action='publicize_image', policyvalues='{'service_roles': [], 'user_id': None, 
'roles': [], 'user_domain_id': None, 'service_project_id': None, 
'service_user_id': None, 'service_user_domain_id': None, 
'service_project_domain_id': None, 'is_admin_project': True, 'user': None, 
'project_id': None, 'tenant': None, 'project_domain_id': None}' enforce 
/usr/lib/python2.7/site-packages/glance/api/policy.py:64

And here is what shows up when I run `openstack image list` as our test user
(`jonathan`) who is NOT an admin

2018-10-25 18:32:24.841 17564 DEBUG glance.api.policy 
[req-22abdcf2-14cd-4680-8deb-e48902a7ddef - - - - -] in enforce(), 
action='get_images', policyvalues='{'service_roles': [], 'user_id': None, 
'roles': [], 'user_domain_id': None, 'service_project_id': None, 
'service_user_id': None, 'service_user_domain_id': None, 
'service_project_domain_id': None, 'is_admin_project': True, 'user': None, 
'project_id': None, 'tenant': None, 'project_domain_id': None}' enforce 
/usr/lib/python2.7/site-packages/glance/api/policy.py:64


The takeaway I have is that in the case of get_images, is_admin_project is
True, which is WRONG for that test, but since it’s a read-only operation it’s
content to short-circuit and return all those images.

In the case of publicize_image, is_admin_project being True isn’t enough,
and when it checks user (which is None) it says NOPE.
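
If your policy.json still carries the stock Queens defaults, the two rules in
play look roughly like this (an illustrative excerpt, not a dump of your actual
file -- check /etc/glance/policy.json to confirm):

{
    "get_images": "",
    "publicize_image": "role:admin"
}

An empty rule passes no matter how broken the policy values are, which is why
get_images returns everything, while publicize_image needs a role the empty
context can never supply.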


So somehow, for some reason, the glance API's request context is super duper wrong.
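
One thing worth ruling out (a guess on my part, not a diagnosis): whether the
keystone middleware is actually in glance-api's paste pipeline, since an
unauthenticated pipeline would yield exactly this kind of empty context. In
glance-api.conf that is controlled by something like:

[paste_deploy]
flavor = keystone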


Mike Moore, M.S.S.E.

Systems Engineer, Goddard Private Cloud
michael.d.mo...@nasa.gov

Hydrogen fusion brightens my day.

 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-25 Thread William M Edmonds

melanie witt  wrote on 10/25/2018 02:14:40 AM:
> On Thu, 25 Oct 2018 14:12:51 +0900, ボーアディネシュ[bhor Dinesh] wrote:
> > We were having a similar use case like *Preemptible Instances*, called
> > *Rich-VMs*, which are high in resources and are deployed one per
> > hypervisor. We have custom code in production which tracks the quota for
> > such instances separately, and for the same reason we have a
> > *rich_instances* custom quota class, same as the *instances* quota class.
>
> Please see the last reply I recently sent on this thread. I have been
> thinking the same as you about how we could use quota classes to
> implement the quota piece of preemptible instances. I think we can
> achieve the same thing using unified limits, specifically registered
> limits [1], which span across all projects. So, I think we are covered
> moving forward with migrating to unified limits and deprecation of quota
> classes. Let me know if you spot any issues with this idea.

And we could finally close https://bugs.launchpad.net/nova/+bug/1602396
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Fox, Kevin M
Can you use a provider network to expose galera to the VM?

Alternatively, you could put a DB on the VM side. You don't strictly need to
use the same DB for every component. If crossing the streams is hard, maybe
avoiding crossing at all is easier?

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 8:37 AM
To: Fox, Kevin M; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
DVR

You mean deploy Octavia into an OpenStack project? But I will then need
to connect the Octavia services to my galera DBs... so, same problem.

Am 10/25/18 um 5:31 PM schrieb Fox, Kevin M:
> Would it make sense to move the control plane for this piece into the
> cluster? (a VM in a management tenant?)
>
> Thanks,
> Kevin
> 
> From: Florian Engelmann [florian.engelm...@everyware.ch]
> Sent: Thursday, October 25, 2018 7:39 AM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
> DVR
>
> It looks like devstack implemented some o-hm0 interface to connect the
> physical control host to a VxLAN.
> In our case there is no VxLAN at the control nodes, nor is OVS present there.
>
> Is it an option to deploy those Octavia services needing this connection
> to the compute or network nodes and use o-hm0?
>
> Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
>> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
>> this VxLAN? How to do something like that?
>>
>> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>>> Hmm - so right now I can't see any routed option because:
>>>
>>> The gateway connected to the VLAN provider networks (bond1 on the
>>> network nodes) is not able to route any traffic to my control nodes in
>>> the spine-leaf layer3 backend network.
>>>
>>> And right now there is no br-ex at all nor any "streched" L2 domain
>>> connecting all compute nodes.
>>>
>>>
>>> So the only solution I can think of right now is to create an overlay
>>> VxLAN in the spine-leaf backend network, connect all compute and
>>> control nodes to this overlay L2 network, create a OVS bridge
>>> connected to that network on the compute nodes and allow the Amphorae
>>> to get an IPin this network as well.
>>> Not to forget about DHCP... so the network nodes will need this bridge
>>> as well.
>>>
>>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:


 On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
 >>> > wrote:

  On the network nodes we've got a dedicated interface to deploy VLANs
  (like the provider network for internet access). What about creating
  another VLAN on the network nodes, give that bridge a IP which is
  part of the subnet of lb-mgmt-net and start the octavia worker,
  healthmanager and controller on the network nodes binding to that
 IP?

 The problem with that is you can't put an IP on the VLAN interface
 and also use it as an OVS bridge, so the Octavia processes would have
 nothing to bind to.


 
  *From:* Erik McCormick >>>  >
  *Sent:* Wednesday, October 24, 2018 6:18 PM
  *To:* Engelmann Florian
  *Cc:* openstack-operators
  *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
  VxLAN without DVR


  On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
  >>>  > wrote:

  Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
   >
   >
   > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
   > >>>  
  >>
   > wrote:
   >
   > Ohoh - thank you for your empathy :)
   > And those great details about how to setup this mgmt
 network.
   > I will try to do so this afternoon but solving that
  routing "puzzle"
   > (virtual network to control nodes) I will need our
  network guys to help
   > me out...
   >
   > But I will need to tell all Amphorae a static route to
  the gateway that
   > is routing to the control nodes?
   >
   >
   > Just set the default gateway when you create the neutron
  subnet. No need
   > for excess static routes. The route on the other connection
  won't
   > interfere with it as it lives in a namespace.

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Florian Engelmann
You mean deploy Octavia into an OpenStack project? But I will then need
to connect the Octavia services to my galera DBs... so, same problem.


Am 10/25/18 um 5:31 PM schrieb Fox, Kevin M:

Would it make sense to move the control plane for this piece into the cluster?
(a VM in a management tenant?)

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
DVR

It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor is OVS present there.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?

Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:

Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
this VxLAN? How to do something like that?

Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:

Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the
network nodes) is not able to route any traffic to my control nodes in
the spine-leaf layer3 backend network.

And right now there is no br-ex at all nor any "stretched" L2 domain
connecting all compute nodes.


So the only solution I can think of right now is to create an overlay
VxLAN in the spine-leaf backend network, connect all compute and
control nodes to this overlay L2 network, create an OVS bridge
connected to that network on the compute nodes and allow the Amphorae
to get an IP in this network as well.
Not to forget about DHCP... so the network nodes will need this bridge
as well.

Am 10/24/18 um 10:01 PM schrieb Erik McCormick:



On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
mailto:florian.engelm...@everyware.ch>> wrote:

 On the network nodes we've got a dedicated interface to deploy VLANs
 (like the provider network for internet access). What about creating
 another VLAN on the network nodes, give that bridge a IP which is
 part of the subnet of lb-mgmt-net and start the octavia worker,
 healthmanager and controller on the network nodes binding to that
IP?

The problem with that is you can't put an IP on the VLAN interface
and also use it as an OVS bridge, so the Octavia processes would have
nothing to bind to.



 *From:* Erik McCormick mailto:emccorm...@cirrusseven.com>>
 *Sent:* Wednesday, October 24, 2018 6:18 PM
 *To:* Engelmann Florian
 *Cc:* openstack-operators
 *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
 VxLAN without DVR


 On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
 mailto:florian.engelm...@everyware.ch>> wrote:

 Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
  >
  >
  > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
  > mailto:florian.engelm...@everyware.ch>
 >>
  > wrote:
  >
  > Ohoh - thank you for your empathy :)
  > And those great details about how to setup this mgmt
network.
  > I will try to do so this afternoon but solving that
 routing "puzzle"
  > (virtual network to control nodes) I will need our
 network guys to help
  > me out...
  >
  > But I will need to tell all Amphorae a static route to
 the gateway that
  > is routing to the control nodes?
  >
  >
  > Just set the default gateway when you create the neutron
 subnet. No need
  > for excess static routes. The route on the other connection
 won't
  > interfere with it as it lives in a namespace.


 My compute nodes have no br-ex and there is no L2 domain spread
 over all
 compute nodes. As far as I understood lb-mgmt-net is a provider
 network
 and has to be flat or VLAN and will need a "physical" gateway
 (as there
 is no virtual router).
 So the question - is it possible to get octavia up and running
 without a
 br-ex (L2 domain spread over all compute nodes) on the compute
 nodes?


 Sorry, I only meant it was *like* br-ex on your network nodes. You
 don't need that on your computes.

 The router here would be whatever does routing in your physical
 network. Setting the gateway in the neutron subnet simply adds that
 to the DHCP information sent to the amphorae.

 This does bring up another thing I forgot, though. You'll probably
 want to add the management network / bridge to your network nodes or
 wherever you run the DHCP agents. 

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Florian Engelmann
I managed to configure o-hm0 on the compute nodes and I am able to 
communicate with the amphorae:



# create the Octavia management net
openstack network create lb-mgmt-net -f value -c id
# and the subnet
openstack subnet create --subnet-range 172.31.0.0/16 \
  --allocation-pool start=172.31.17.10,end=172.31.255.250 \
  --network lb-mgmt-net lb-mgmt-subnet

# get the subnet ID
openstack subnet show lb-mgmt-subnet -f value -c id
# create a port in this subnet for the compute node (ewos1-com1a-poc2)
openstack port create --security-group octavia --device-owner Octavia:health-mgr \
  --host=ewos1-com1a-poc2 -c id -f value --network lb-mgmt-net \
  --fixed-ip subnet=b4c70178-949b-4d60-8d9f-09d13f720b6a,ip-address=172.31.0.101 \
  octavia-health-manager-ewos1-com1a-poc2-listen-port

openstack port show 6fb13c3f-469e-4a81-a504-a161c6848654
openstack network show lb-mgmt-net -f value -c id
# edit octavia_amp_boot_network_list: 3633be41-926f-4a2c-8803-36965f76ea8d
vi /etc/kolla/globals.yml
# reconfigure octavia
kolla-ansible -i inventory reconfigure -t octavia


# create o-hm0 on the compute node
# (container name assumed to be openvswitch_vswitchd, kolla's default OVS container)
docker exec openvswitch_vswitchd ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \
  set Interface o-hm0 type=internal -- \
  set Interface o-hm0 external-ids:iface-status=active -- \
  set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \
  set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \
  set Interface o-hm0 external-ids:skip_cleanup=true

# fix the MAC of o-hm0 so it matches the neutron port
ip link set dev o-hm0 address fa:16:3e:51:e9:c3

# get an IP from the neutron DHCP agent (should get 172.31.0.101 in this example)
ip link set dev o-hm0 up
dhclient -v o-hm0

# create a loadbalancer and test connectivity, e.g. amphora IP is 172.31.17.15

root@ewos1-com1a-poc2:~# ping 172.31.17.15

But

octavia_worker
octavia_housekeeping
octavia_health_manager

are running on our control nodes and those are not running any OVS networks.

Next test is to deploy those three services to my network nodes and 
configure o-hm0 on the network nodes. I will have to fix


bind_port = 
bind_ip = 10.33.16.11
controller_ip_port_list = 10.33.16.11:

to bind to all IPs or the IP of o-hm0.
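
For reference, those options live in octavia.conf under [health_manager]; once
the services sit next to o-hm0 it would look roughly like this (a sketch -- the
ports were stripped above, 5555 is Octavia's default health manager port, so
adjust to whatever your deployment actually uses):

[health_manager]
bind_ip = 172.31.0.101
bind_port = 5555
controller_ip_port_list = 172.31.0.101:5555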





Am 10/25/18 um 4:39 PM schrieb Florian Engelmann:
It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.

In our case there is no VxLAN at the control nodes, nor is OVS present there.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?


Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
Or could I create lb-mgmt-net as VxLAN and connect the control nodes 
to this VxLAN? How to do something like that?


Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:

Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the 
network nodes) is not able to route any traffic to my control nodes 
in the spine-leaf layer3 backend network.


And right now there is no br-ex at all nor any "stretched" L2 domain
connecting all compute nodes.



So the only solution I can think of right now is to create an overlay 
VxLAN in the spine-leaf backend network, connect all compute and 
control nodes to this overlay L2 network, create an OVS bridge
connected to that network on the compute nodes and allow the Amphorae
to get an IP in this network as well.
Not to forget about DHCP... so the network nodes will need this 
bridge as well.


Am 10/24/18 um 10:01 PM schrieb Erik McCormick:



On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian 
> wrote:


    On the network nodes we've got a dedicated interface to deploy 
VLANs
    (like the provider network for internet access). What about 
creating

    another VLAN on the network nodes, give that bridge a IP which is
    part of the subnet of lb-mgmt-net and start the octavia worker,
    healthmanager and controller on the network nodes binding to 
that IP?


The problem with that is you can't put an IP on the VLAN interface
and also use it as an OVS bridge, so the Octavia processes would 
have nothing to bind to.



 


    *From:* Erik McCormick mailto:emccorm...@cirrusseven.com>>
    *Sent:* Wednesday, October 24, 2018 6:18 PM
    *To:* Engelmann Florian
    *Cc:* openstack-operators
    *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
    VxLAN without DVR


    On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
    mailto:florian.engelm...@everyware.ch>> wrote:

    Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
 >
 >
 > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
 > mailto:florian.engelm...@everyware.ch>
    >>
 > wrote:
 >
 >     Ohoh - thank you for your empathy :)
 >     And those great details about 

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Fox, Kevin M
Would it make sense to move the control plane for this piece into the cluster?
(a VM in a management tenant?)

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
DVR

It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor is OVS present there.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?

Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
> this VxLAN? How to do something like that?
>
> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>> Hmm - so right now I can't see any routed option because:
>>
>> The gateway connected to the VLAN provider networks (bond1 on the
>> network nodes) is not able to route any traffic to my control nodes in
>> the spine-leaf layer3 backend network.
>>
>> And right now there is no br-ex at all nor any "stretched" L2 domain
>> connecting all compute nodes.
>>
>>
>> So the only solution I can think of right now is to create an overlay
>> VxLAN in the spine-leaf backend network, connect all compute and
>> control nodes to this overlay L2 network, create an OVS bridge
>> connected to that network on the compute nodes and allow the Amphorae
>> to get an IP in this network as well.
>> Not to forget about DHCP... so the network nodes will need this bridge
>> as well.
>>
>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>>
>>>
>>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
>>> >> > wrote:
>>>
>>> On the network nodes we've got a dedicated interface to deploy VLANs
>>> (like the provider network for internet access). What about creating
>>> another VLAN on the network nodes, give that bridge a IP which is
>>> part of the subnet of lb-mgmt-net and start the octavia worker,
>>> healthmanager and controller on the network nodes binding to that
>>> IP?
>>>
>>> The problem with that is you can't put an IP on the VLAN interface
>>> and also use it as an OVS bridge, so the Octavia processes would have
>>> nothing to bind to.
>>>
>>>
>>> 
>>> *From:* Erik McCormick >> >
>>> *Sent:* Wednesday, October 24, 2018 6:18 PM
>>> *To:* Engelmann Florian
>>> *Cc:* openstack-operators
>>> *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>> VxLAN without DVR
>>>
>>>
>>> On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>> >> > wrote:
>>>
>>> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>>>  >
>>>  >
>>>  > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>>  > >> 
>>> >> >>
>>>  > wrote:
>>>  >
>>>  > Ohoh - thank you for your empathy :)
>>>  > And those great details about how to setup this mgmt
>>> network.
>>>  > I will try to do so this afternoon but solving that
>>> routing "puzzle"
>>>  > (virtual network to control nodes) I will need our
>>> network guys to help
>>>  > me out...
>>>  >
>>>  > But I will need to tell all Amphorae a static route to
>>> the gateway that
>>>  > is routing to the control nodes?
>>>  >
>>>  >
>>>  > Just set the default gateway when you create the neutron
>>> subnet. No need
>>>  > for excess static routes. The route on the other connection
>>> won't
>>>  > interfere with it as it lives in a namespace.
>>>
>>>
>>> My compute nodes have no br-ex and there is no L2 domain spread
>>> over all
>>> compute nodes. As far as I understood lb-mgmt-net is a provider
>>> network
>>> and has to be flat or VLAN and will need a "physical" gateway
>>> (as there
>>> is no virtual router).
>>> So the question - is it possible to get octavia up and running
>>> without a
>>> br-ex (L2 domain spread over all compute nodes) on the compute
>>> nodes?
>>>
>>>
>>> Sorry, I only meant it was *like* br-ex on your network nodes. You
>>> don't need that on your computes.
>>>
>>> The router here would be whatever does routing in your physical
>>> network. Setting the gateway in the neutron subnet simply adds that
>>> to the DHCP information sent to the amphorae.
>>>
>>> This does bring up another thing I 

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Florian Engelmann
It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.

In our case there is no VxLAN at the control nodes, nor is OVS present there.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?


Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
Or could I create lb-mgmt-net as VxLAN and connect the control nodes to 
this VxLAN? How to do something like that?


Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:

Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the 
network nodes) is not able to route any traffic to my control nodes in 
the spine-leaf layer3 backend network.


And right now there is no br-ex at all nor any "stretched" L2 domain
connecting all compute nodes.



So the only solution I can think of right now is to create an overlay 
VxLAN in the spine-leaf backend network, connect all compute and 
control nodes to this overlay L2 network, create an OVS bridge
connected to that network on the compute nodes and allow the Amphorae
to get an IP in this network as well.
Not to forget about DHCP... so the network nodes will need this bridge 
as well.


Am 10/24/18 um 10:01 PM schrieb Erik McCormick:



On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian 
> wrote:


    On the network nodes we've got a dedicated interface to deploy VLANs
    (like the provider network for internet access). What about creating
    another VLAN on the network nodes, give that bridge a IP which is
    part of the subnet of lb-mgmt-net and start the octavia worker,
    healthmanager and controller on the network nodes binding to that 
IP?


The problem with that is you can't put an IP on the VLAN interface
and also use it as an OVS bridge, so the Octavia processes would have 
nothing to bind to.




    *From:* Erik McCormick mailto:emccorm...@cirrusseven.com>>
    *Sent:* Wednesday, October 24, 2018 6:18 PM
    *To:* Engelmann Florian
    *Cc:* openstack-operators
    *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
    VxLAN without DVR


    On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
    mailto:florian.engelm...@everyware.ch>> wrote:

    Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
 >
 >
 > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
 > mailto:florian.engelm...@everyware.ch>
    >>
 > wrote:
 >
 >     Ohoh - thank you for your empathy :)
 >     And those great details about how to setup this mgmt 
network.

 >     I will try to do so this afternoon but solving that
    routing "puzzle"
 >     (virtual network to control nodes) I will need our
    network guys to help
 >     me out...
 >
 >     But I will need to tell all Amphorae a static route to
    the gateway that
 >     is routing to the control nodes?
 >
 >
 > Just set the default gateway when you create the neutron
    subnet. No need
 > for excess static routes. The route on the other connection
    won't
 > interfere with it as it lives in a namespace.


    My compute nodes have no br-ex and there is no L2 domain spread
    over all
    compute nodes. As far as I understood lb-mgmt-net is a provider
    network
    and has to be flat or VLAN and will need a "physical" gateway
    (as there
    is no virtual router).
    So the question - is it possible to get octavia up and running
    without a
    br-ex (L2 domain spread over all compute nodes) on the compute
    nodes?


    Sorry, I only meant it was *like* br-ex on your network nodes. You
    don't need that on your computes.

    The router here would be whatever does routing in your physical
    network. Setting the gateway in the neutron subnet simply adds that
    to the DHCP information sent to the amphorae.

    This does bring up another thing I forgot, though. You'll probably
    want to add the management network / bridge to your network nodes or
    wherever you run the DHCP agents. When you create the subnet, be
    sure to leave some space in the address scope for the physical
    devices with static IPs.

    As for multiple L2 domains, I can't think of a way to go about that
    for the lb-mgmt network. It's a single network with a single subnet.
    Perhaps you could limit load balancers to an AZ in a single rack?
    Seems not very HA friendly.



 >
 >
 >
 >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
 >      > So in your other email you said asked if there was a
    guide for
 >      > deploying it with Kolla ansible...
  

Re: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-25 Thread Tobias Urdin

Might as well throw it out here.

After a lot of troubleshooting we were able to narrow our issue down to
our test environment running qemu virtualization; we moved our compute
node to hardware and used KVM full virtualization instead.

We could reliably reproduce the issue: generating a CSR from a
private key and then trying to verify the CSR would fail, complaining about
"Signature did not match the certificate request".

We suspect qemu's floating-point emulation caused this; the same OpenSSL
function that validates a CSR is the one used when validating the SSL
handshake, which caused our issue.
After going through the whole stack, we now have Octavia working flawlessly
without any issues at all.


Best regards
Tobias

On 10/23/2018 04:31 PM, Tobias Urdin wrote:

Hello Erik,

Could you specify the DNs you used for all certificates just so that I
can rule it out on my side.
You can redact anything sensitive with some placeholder, just to get a feel
for how it's configured.

Best regards
Tobias

On 10/22/2018 04:47 PM, Erik McCormick wrote:

On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:

Hello,

I've been having a lot of issues with SSL certificates myself, on my
second trip now trying to get it working.

Before this I spent a lot of time walking through every line in the DevStack
plugin and fixing my config options, used the generate
script [1], and still it didn't work.

When I got the "invalid padding" issue it was because of the DN I used
for the CA and the certificate IIRC.

   > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
   > 19:47 < tobias-urdin> after a quick google "The problem was that my
CA DN was the same as the certificate DN."

IIRC I think that solved it, but then again I wouldn't remember fully
since I've been at so many different angles by now.

Here is my IRC logs history from the #openstack-lbaas channel, perhaps
it can help you out
http://paste.openstack.org/show/732575/


Tobias, I owe you a beer. This was precisely the issue. I'm deploying
Octavia with kolla-ansible. It only deploys a single CA. After hacking
the templates and playbook to incorporate a separate server CA, the
amphorae now load and provision the required namespace. I'm adding a
kolla tag to the subject of this in hopes that someone might want to
take on changing this behavior in the project. Hopefully after I get
through Upstream Institute in Berlin I'll be able to do it myself if
nobody else wants to do it.

For certificate generation, I extracted the contents of
octavia_certs_install.yml (which sets up the directory structure,
openssl.cnf, and the client CA), and octavia_certs.yml (which creates
the server CA and the client certificate) and mashed them into a
separate playbook just for this purpose. At the end I get:

ca_01.pem - Client CA Certificate
ca_01.key - Client CA Key
ca_server_01.pem - Server CA Certificate
cakey.pem - Server CA Key
client.pem - Concatenated Client Key and Certificate

If it would help to have the playbook, I can stick it up on github
with a huge "This is a hack" disclaimer on it.
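
For anyone who just wants the gist without the playbook, the two-CA layout
described above boils down to roughly the following hand-rolled sketch (made-up
subject names, and note the CA DNs must differ from the certificate DNs; the
certificate configuration guide Michael posted for review,
https://review.openstack.org/613454, is the authoritative reference):

# client CA - the amphora agent uses this to verify the controller
openssl genrsa -out ca_01.key 4096
openssl req -x509 -new -key ca_01.key -days 3650 -subj "/CN=octavia-client-ca" -out ca_01.pem

# server CA - signs the per-amphora server certificates
openssl genrsa -out cakey.pem 4096
openssl req -x509 -new -key cakey.pem -days 3650 -subj "/CN=octavia-server-ca" -out ca_server_01.pem

# controller client certificate, signed by the client CA, then concatenated
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=octavia-controller" -out client.csr
openssl x509 -req -in client.csr -CA ca_01.pem -CAkey ca_01.key -CAcreateserial -days 365 -out client.crt
cat client.key client.crt > client.pem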


-

Sorry for hijacking the thread but I'm stuck as well.

In the past I've tried to generate the certificates with [1], but have now
moved on to using the openstack-ansible way of generating them [2]
with some modifications.

Right now I'm just getting: Could not connect to instance. Retrying.:
SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
from the amphorae; haven't gotten any further, but I've eliminated a lot of
stuff in the middle.

Tried deploying Octavia on Ubuntu with python3 just to make sure there
wasn't an issue with CentOS and OpenSSL versions, since it tends to lag
behind.
Checking the amphora with openssl s_client [3] gives the same error,
but the verification is successful; I just don't understand what the
"bad signature" part is about. From browsing some OpenSSL code it seems
to be related to RSA signatures somehow.

140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
signature:s3_clnt.c:2032:
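
For reference, the s_client check against the amphora agent looks roughly like
this -- 9443 is the default amphora agent port, and the certificate file names
follow the layout listed above, so adjust both to your own setup:

openssl s_client -connect <amphora-lb-mgmt-ip>:9443 \
    -cert client.pem -key client.pem -CAfile ca_server_01.pem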

So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS
(openssl-1.0.2k) being the problem, and ruled out signing_digest, so I'm
back to something related to the certificates or the communication between
the endpoints, or whatever actually responds inside the amphora (gunicorn
IIUC?). Based on the "verify" functions actually causing that bad signature
error, I would assume it's the generated certificate that the amphora
presents that is causing it.

I'll have to continue the troubleshooting inside the amphora. I've used the
test-only amphora image before but have now built my own one that is
using the amphora-agent from the actual 

[Openstack-operators] [publiccloud-wg] Reminder weekly meeting Public Cloud WG

2018-10-25 Thread Tobias Rydberg

Hi everyone,

Time for a new meeting for PCWG - today 1400 UTC in 
#openstack-publiccloud! Agenda found at 
https://etherpad.openstack.org/p/publiccloud-wg


Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Florian Engelmann
Or could I create lb-mgmt-net as a VxLAN and connect the control nodes to
this VxLAN? How would I do something like that?


Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:

Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the 
network nodes) is not able to route any traffic to my control nodes in 
the spine-leaf layer3 backend network.


And right now there is no br-ex at all nor any "stretched" L2 domain
connecting all compute nodes.



So the only solution I can think of right now is to create an overlay 
VxLAN in the spine-leaf backend network, connect all compute and control 
nodes to this overlay L2 network, create an OVS bridge connected to that
network on the compute nodes and allow the Amphorae to get an IP in this
network as well.
Not to forget about DHCP... so the network nodes will need this bridge 
as well.


Am 10/24/18 um 10:01 PM schrieb Erik McCormick:



On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian 
> wrote:


    On the network nodes we've got a dedicated interface to deploy VLANs
    (like the provider network for internet access). What about creating
    another VLAN on the network nodes, give that bridge a IP which is
    part of the subnet of lb-mgmt-net and start the octavia worker,
    healthmanager and controller on the network nodes binding to that IP?

The problem with that is you can't put an IP on the VLAN interface and
also use it as an OVS bridge, so the Octavia processes would have 
nothing to bind to.






    *From:* Erik McCormick mailto:emccorm...@cirrusseven.com>>
    *Sent:* Wednesday, October 24, 2018 6:18 PM
    *To:* Engelmann Florian
    *Cc:* openstack-operators
    *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
    VxLAN without DVR


    On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
    mailto:florian.engelm...@everyware.ch>> wrote:

    Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
 >
 >
 > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
 > mailto:florian.engelm...@everyware.ch>
    >>
 > wrote:
 >
 >     Ohoh - thank you for your empathy :)
 >     And those great details about how to setup this mgmt 
network.

 >     I will try to do so this afternoon but solving that
    routing "puzzle"
 >     (virtual network to control nodes) I will need our
    network guys to help
 >     me out...
 >
 >     But I will need to tell all Amphorae a static route to
    the gateway that
 >     is routing to the control nodes?
 >
 >
 > Just set the default gateway when you create the neutron
    subnet. No need
 > for excess static routes. The route on the other connection
    won't
 > interfere with it as it lives in a namespace.


    My compute nodes have no br-ex and there is no L2 domain spread
    over all
    compute nodes. As far as I understood lb-mgmt-net is a provider
    network
    and has to be flat or VLAN and will need a "physical" gateway
    (as there
    is no virtual router).
    So the question - is it possible to get octavia up and running
    without a
    br-ex (L2 domain spread over all compute nodes) on the compute
    nodes?


    Sorry, I only meant it was *like* br-ex on your network nodes. You
    don't need that on your computes.

    The router here would be whatever does routing in your physical
    network. Setting the gateway in the neutron subnet simply adds that
    to the DHCP information sent to the amphorae.

    This does bring up another thing I forgot, though. You'll probably
    want to add the management network / bridge to your network nodes or
    wherever you run the DHCP agents. When you create the subnet, be
    sure to leave some space in the address scope for the physical
    devices with static IPs.

    As for multiple L2 domains, I can't think of a way to go about that
    for the lb-mgmt network. It's a single network with a single subnet.
    Perhaps you could limit load balancers to an AZ in a single rack?
    Seems not very HA friendly.



 >
 >
 >
 >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
 >      > So in your other email you said asked if there was a
    guide for
 >      > deploying it with Kolla ansible...
 >      >
 >      > Oh boy. No there's not. I don't know if you've seen my
    recent
 >     mails on
 >      > Octavia, but I am going through this deployment
    process with
 >      > kolla-ansible right now and it is lacking in a few 
areas.

 >      >
 >      > If you 

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Florian Engelmann

Hmm - so right now I can't see any routed option because:

The gateway connected to the VLAN provider networks (bond1 on the 
network nodes) is not able to route any traffic to my control nodes in 
the spine-leaf layer3 backend network.


And right now there is no br-ex at all nor any "stretched" L2 domain
connecting all compute nodes.



So the only solution I can think of right now is to create an overlay 
VxLAN in the spine-leaf backend network, connect all compute and control 
nodes to this overlay L2 network, create an OVS bridge connected to that
network on the compute nodes and allow the Amphorae to get an IP in this
network as well.
Not to forget about DHCP... so the network nodes will need this bridge 
as well.


Am 10/24/18 um 10:01 PM schrieb Erik McCormick:



On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
<florian.engelm...@everyware.ch> wrote:


On the network nodes we've got a dedicated interface to deploy VLANs
(like the provider network for internet access). What about creating
another VLAN on the network nodes, give that bridge a IP which is
part of the subnet of lb-mgmt-net and start the octavia worker,
healthmanager and controller on the network nodes binding to that IP?

The problem with that is you can't put an IP on the VLAN interface and
also use it as an OVS bridge, so the Octavia processes would have 
nothing to bind to.




*From:* Erik McCormick mailto:emccorm...@cirrusseven.com>>
*Sent:* Wednesday, October 24, 2018 6:18 PM
*To:* Engelmann Florian
*Cc:* openstack-operators
*Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
VxLAN without DVR


On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
mailto:florian.engelm...@everyware.ch>> wrote:

Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
 >
 >
 > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
 > mailto:florian.engelm...@everyware.ch>
>>
 > wrote:
 >
 >     Ohoh - thank you for your empathy :)
 >     And those great details about how to setup this mgmt network.
 >     I will try to do so this afternoon but solving that
routing "puzzle"
 >     (virtual network to control nodes) I will need our
network guys to help
 >     me out...
 >
 >     But I will need to tell all Amphorae a static route to
the gateway that
 >     is routing to the control nodes?
 >
 >
 > Just set the default gateway when you create the neutron
subnet. No need
 > for excess static routes. The route on the other connection
won't
 > interfere with it as it lives in a namespace.


My compute nodes have no br-ex and there is no L2 domain spread
over all
compute nodes. As far as I understood lb-mgmt-net is a provider
network
and has to be flat or VLAN and will need a "physical" gateway
(as there
is no virtual router).
So the question - is it possible to get octavia up and running
without a
br-ex (L2 domain spread over all compute nodes) on the compute
nodes?


Sorry, I only meant it was *like* br-ex on your network nodes. You
don't need that on your computes.

The router here would be whatever does routing in your physical
network. Setting the gateway in the neutron subnet simply adds that
to the DHCP information sent to the amphorae.

This does bring up another thing I forgot, though. You'll probably
want to add the management network / bridge to your network nodes or
wherever you run the DHCP agents. When you create the subnet, be
sure to leave some space in the address scope for the physical
devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that
for the lb-mgmt network. It's a single network with a single subnet.
Perhaps you could limit load balancers to an AZ in a single rack?
Seems not very HA friendly.



 >
 >
 >
 >     Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
 >      > So in your other email you said asked if there was a
guide for
 >      > deploying it with Kolla ansible...
 >      >
 >      > Oh boy. No there's not. I don't know if you've seen my
recent
 >     mails on
 >      > Octavia, but I am going through this deployment
process with
 >      > kolla-ansible right now and it is lacking in a few areas.
 >      >
 >      > If you plan to use different CA certificates for
client and server in
 >      > Octavia, you'll need to add that into the playbook.
Presently it only
 >  

Re: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-25 Thread melanie witt

On Thu, 25 Oct 2018 14:12:51 +0900, ボーアディネシュ[bhor Dinesh] wrote:
We were having a similar use case like *Preemptible Instances*, called
*Rich-VMs*, which are high in resources and are deployed one per hypervisor.
We have custom code in production which tracks the quota for such instances
separately, and for the same reason we have a *rich_instances* custom quota
class, same as the *instances* quota class.


Please see the last reply I recently sent on this thread. I have been 
thinking the same as you about how we could use quota classes to 
implement the quota piece of preemptible instances. I think we can 
achieve the same thing using unified limits, specifically registered 
limits [1], which span across all projects. So, I think we are covered 
moving forward with migrating to unified limits and deprecation of quota 
classes. Let me know if you spot any issues with this idea.


Cheers,
-melanie

[1] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-25 Thread melanie witt

On Wed, 24 Oct 2018 12:54:00 -0700, Melanie Witt wrote:

On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote:

On 10/24/2018 10:10 AM, Jay Pipes wrote:

I'd like to propose deprecating this API and getting rid of this
functionality since it conflicts with the new Keystone /limits endpoint,
is highly coupled with RAX's turnstile middleware and I can't seem to
find anyone who has ever used it. Deprecating this API and functionality
would make the transition to a saner quota management system much easier
and straightforward.

I was trying to do this before it was cool:

https://review.openstack.org/#/c/411035/

I think it was the Pike PTG in ATL where people said, "meh, let's just
wait for unified limits from keystone and let this rot on the vine".

I'd be happy to restore and update that spec.


Yeah, we were thinking the presence of the API and code isn't harming
anything and sometimes we talk about situations where we could use them.

Quota classes come up occasionally whenever we talk about preemptible
instances. Example: we could create and use a quota class "preemptible"
and decorate preemptible flavors with that quota_class in order to give
them unlimited quota. There's also talk of quota classes in the "Count
quota based on resource class" spec [1] where we could have leveraged
quota classes to create and enforce quota limits per custom resource
class. But I think the consensus there was to hold off on quota by
custom resource class until we migrate to unified limits and oslo.limit.

So, I think my concern in removing the internal code that is capable of
enforcing quota limit per quota class is the preemptible instance use
case. I don't have my mind wrapped around if/how we could solve it using
unified limits yet.

And I was just thinking, if we added a project_id column to the
quota_classes table and correspondingly added it to the
os-quota-class-sets API, we could pretty simply implement quota by
flavor, which is a feature operators like Oath need. An operator could
create a quota class limit per project_id and then decorate flavors with
quota_class to enforce them per flavor.

I recognize that maybe it would be too confusing to solve use cases with
quota classes given that we're going to migrate to united limits. At the
same time, I'm hesitant to close the door on a possibility before we
have some idea about how we'll solve them without quota classes. Has
anyone thought about how we can solve the use cases with unified limits
for things like preemptible instances and quota by flavor?

[1] https://review.openstack.org/56901


After I sent this, I realized that I _have_ thought about how to solve 
these use cases with unified limits before and commented about it on the 
"Count quota based on resource class" spec some months ago.


For preemptible instances, we could leverage registered limits in 
keystone [2] (registered limits span across all projects) by creating a 
limit with resource_name='preemptible', for example. Then we could 
decorate a flavor with quota_resource_name='preemptible' which would 
designate a preemptible instance type. Then we use the 
quota_resource_name from the flavor to check the quota for the 
corresponding registered limit in keystone. This way, preemptible 
instances can be assigned their own special quota (probably unlimited).
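
To make that concrete, such a registered limit could be created roughly like
this through the keystone unified limits API via OSC (the resource names here
are illustrative, and the exact flags may vary by client version):

# registered limit that applies across all projects
openstack registered limit create --service nova --default-limit 1000 preemptible

# optional per-project override
openstack limit create --service nova --project demo --resource-limit 2000 preemptible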


And for quota by flavor, same concept. I think we could use registered 
limits and project limits [3] by creating limits with 
resource_name='flavorX', for example. We could decorate flavors with 
quota_resource_name='flavorX' and check quota for special quota for flavorX.


Unified limits provide all of the same ability as quota classes, as far 
as I can tell. Given that, I think we are OK to deprecate quota classes.


Cheers,
-melanie

[2] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits
[3] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-limits





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators