Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Matt Riedemann

On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than compute nodes
falling over in the future.


Note that once a compute has a successful build, the consecutive build 
failures counter is reset. So if your limit is the default (10) and you 
have 10 failures in a row, the compute service is auto-disabled. But if 
you have, say, 5 failures and then a successful build, the counter is 
reset to 0.
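
For reference, that knob lives in nova.conf; a rough sketch (the option
group below is what I recall for Pike, so double-check the config
reference for your release):

  [compute]
  # Consecutive failed builds before nova-compute disables itself.
  # 10 is the default; 0 turns the auto-disable behavior off entirely.
  consecutive_build_service_disable_threshold = 10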


Obviously if you're doing a pack-first scheduling strategy rather than 
spreading instances across the deployment, a burst of failures could 
easily disable a compute, especially if that host is overloaded like you 
saw. I'm not sure if rescheduling is helping you or not - that would be 
useful information, since we consider the need to reschedule off a failed 
compute host a bad thing. When this idea came up at the Forum in Boston, 
it was specifically because operators in the room didn't want a bad 
compute to become a "black hole" in their deployment, causing lots of 
reschedules until they got that one fixed.
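
If it helps, a quick way to spot and recover auto-disabled computes from
the CLI (the hostname below is just a placeholder):

  $ openstack compute service list --service nova-compute --long
  $ openstack compute service set --enable compute-01 nova-compute

The --long output includes the "Disabled Reason" column, which should
tell you whether the service disabled itself or someone disabled it.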


--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-community] Feb 8 CFP Deadline - OpenStack Summit Vancouver

2018-02-06 Thread Jimmy McArthur

Hi everyone,

The Vancouver Summit CFP closes in two days: February 8 at 11:59pm 
Pacific Time (February 9 at 6:59am UTC).


For Vancouver, the Summit Tracks have evolved to cover the entire 
open infrastructure landscape. Get your talks in for:

• Container infrastructure
• Edge computing
• CI/CD
• HPC/GPU/AI
• Open source community
• OpenStack private, public and hybrid cloud

The Programming Committees for each Track have provided suggested topics 
for Summit sessions. View the topic ideas for each track and submit your 
proposals before this week's deadline!


If you have any questions, please email sum...@openstack.org.


Cheers,
Kendall


Kendall Waters
OpenStack Marketing
kend...@openstack.org 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Chris Apsey

All,

This was the core issue - setting 
consecutive_build_service_disable_threshold = 0 in nova.conf (on 
controllers and compute nodes) solved this.  It was being triggered by 
neutron dropping requests (and/or responses) for vif-plugging because CPU 
usage on the neutron endpoints was pegged at 100% for too long.  We 
increased our rpc_response_timeout value and this issue appears to be 
resolved for the time being.  We could probably safely remove the 
consecutive_build_service_disable_threshold setting at this point, but we 
would rather have intermittent build failures than compute nodes falling 
over in the future.
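
For anyone who finds this thread later, the relevant pieces of config look
roughly like this (a sketch - the rpc_response_timeout value below is just
an illustrative number, not the exact one we used):

  # nova.conf on controllers and compute nodes
  [compute]
  # 0 disables the consecutive-build-failure auto-disable behavior
  consecutive_build_service_disable_threshold = 0

  # neutron.conf on the neutron endpoints
  [DEFAULT]
  # default is 60 seconds; raise it so vif-plug RPCs survive CPU spikes
  rpc_response_timeout = 180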


Slightly related, we are noticing that the neutron endpoints are using 
noticeably more CPU time than in the past with a similar workload 
(we run linuxbridge with VXLAN).  We believe this is tied to our 
application of KPTI for Meltdown mitigation across the various hosts in 
our cluster (the timeline matches).  Has anyone else experienced similar 
impacts, or can anyone suggest ways to lessen them?
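
In case it helps with comparing notes, this is roughly how we have been
checking whether KPTI is active on a host (a sketch; the sysfs path exists
on kernels with the Meltdown mitigation backported):

  $ grep . /sys/devices/system/cpu/vulnerabilities/meltdown
  $ dmesg | grep -i "page tables isolation"

The first should report "Mitigation: PTI" when page-table isolation is on.
KPTI can be turned off with the pti=off kernel parameter, but that
obviously trades away the Meltdown protection.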


---
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

On 2018-01-31 04:47 PM, Chris Apsey wrote:

That looks promising.  I'll report back to confirm the solution.

Thanks!

---
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

On 2018-01-31 04:40 PM, Matt Riedemann wrote:

On 1/31/2018 3:16 PM, Chris Apsey wrote:

All,

Running in to a strange issue I haven't seen before.

Randomly, the nova-compute services on compute nodes are disabling 
themselves (as if someone ran "openstack compute service set --disable 
hostX nova-compute").  When this happens, the node continues to report 
itself as 'up' - the service is just disabled.  As a result, if 
enough of these occur, we get scheduling errors due to lack of 
available resources (which makes sense).  Re-enabling them works just 
fine and they continue on as if nothing happened.  I looked through 
the logs and I can find the API calls where we re-enable the services 
(PUT /v2.1/os-services/enable), but I do not see any API calls where 
the services are getting disabled initially.


Is anyone aware of any cases where compute nodes will automatically 
disable their nova-compute service on their own, or has anyone seen 
this before and might know a root cause?  We have plenty of spare 
vcpus and RAM on each node - like less than 25% utilization (both in 
absolute terms and in terms of applied ratios).


We're seeing follow-on errors regarding rmq messages getting lost and 
vif-plug failures, but we think those are a symptom, not a cause.


Currently running pike on Xenial.

---
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



This is actually a feature added in Pike:

https://review.openstack.org/#/c/463597/

This came up in discussion with operators at the Forum in Boston.

The vif-plug failures are likely the reason those computes are getting 
disabled.


There is a config option, "consecutive_build_service_disable_threshold",
which you can set to 0 to disable the auto-disable behavior, as some have
experienced issues with it:

https://bugs.launchpad.net/nova/+bug/1742102


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Michael Johnson
No issue with using an L2 network for the lb-mgmt-net.

It only requires the following:
- Controllers can reach the amphora-agent IPs on the TCP bind_port (default 9443)
- Amphora agents can reach the controllers in the controller_ip_port_list
  via UDP (default )

This can be via an L2 lb-mgmt-net (provider or other) or in some
routing combination.
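
For reference, the corresponding bits of octavia.conf look roughly like
this (a sketch; the addresses and ports below are placeholders, so check
the defaults for your release):

  [haproxy_amphora]
  # TCP port the amphora-agent listens on; controllers must reach it
  bind_port = 9443

  [health_manager]
  # UDP endpoints the amphorae send heartbeats to; amphorae must reach these
  controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555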

I have started work on a detailed installation guide that will cover
these options. Hopefully I can get it done during Rocky.

Michael

On Tue, Feb 6, 2018 at 1:57 AM, Flint WALRUS  wrote:
> Ok, that’s what I understood from the documentation, but as I couldn’t find
> any information related to the L3 specifics I preferred to get another
> check besides my own x)
>
> I’ll have to install and operate Octavia within an unusual L2-only network
> and I would like to be sure I’m not pushing myself off the cliff :-)
>
> On Tue, Feb 6, 2018 at 10:53, Volodymyr Litovka wrote:
>>
>> Hi Flint,
>>
>> I think Octavia expects reachability between components over the
>> management network, regardless of the network's technology.
>>
>>
>> On 2/6/18 11:41 AM, Flint WALRUS wrote:
>>
>> Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider
>> network instead of a neutron L3 VXLAN?
>>
>> Is Octavia specifically relying on L3 networking, or can it operate
>> without neutron L3 features?
>>
>> I didn't find anything specifically related to the network requirements
>> except for the network itself.
>>
>> Thanks guys!
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>> --
>> Volodymyr Litovka
>>   "Vision without Execution is Hallucination." -- Thomas Edison
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetups Team meeting minutes and so much more!

2018-02-06 Thread Chris Morgan
Good meeting today, here are the minutes:

Meeting ended Tue Feb 6 15:47:56 2018 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot . (v 0.1.4)
10:47 AM Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.html
10:48 AM Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.txt
10:48 AM Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.log.html

The new time slot is always 10am US Eastern time, which (if our sums are
correct) works out to 1400 UTC during DST and 1500 UTC otherwise. The team
wiki page has been updated with this (see
https://wiki.openstack.org/wiki/Ops_Meetups_Team).

The OpenStack Operations Guide has been successfully converted from
official OpenStack documentation to a community-edited wiki; see
https://wiki.openstack.org/wiki/OpsGuide. A few volunteers, such as
myself, Sean McGinnis, David Medberry (comments received), and possibly
others, plan to modernize it, since most of the content is a bit dated, so
if you want to contribute please get in touch here on the operators
mailing list.

Reminder: The next Ops Meetup is coming up in March in Tokyo, see
https://www.eventbrite.com/e/openstack-ops-meetup-tokyo-tickets-39089912982

The Meetup after that looks to be either in NYC or possibly part of
OpenStack Days Nordic. Please attend next week's meeting to learn more
about those possibilities.

Cheers,

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetups team meeting 10 minute warning!

2018-02-06 Thread Chris Morgan
Meeting starts in 10 minutes - new regular time. See you on
#openstack-operators !

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Is there an Ops Meetup today?

2018-02-06 Thread Erik McCormick
It was moved to 10am EST due to lots of conflicts. Need to update the wiki.

On Feb 6, 2018 9:11 AM, "Jimmy McArthur"  wrote:

> Was it canceled?
>
> https://wiki.openstack.org/wiki/Ops_Meetups_Team
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Is there an Ops Meetup today?

2018-02-06 Thread Jimmy McArthur

Was it canceled?

https://wiki.openstack.org/wiki/Ops_Meetups_Team

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Jay Pipes

On 02/06/2018 04:26 AM, Flint WALRUS wrote:

Aren’t CellsV2 more adapted to what you’re trying to do?


No, cells v2 are not user-facing, nor is there a way to segregate certain 
tenants onto certain cells.


Host aggregates are the appropriate way to structure this grouping.
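
A rough sketch of wiring that up with the standard client (the aggregate
name, host and project ID below are placeholders):

  $ openstack aggregate create group1-aggregate
  $ openstack aggregate add host group1-aggregate compute-01
  # AggregateMultiTenancyIsolation reads the filter_tenant_id metadata key
  $ openstack aggregate set --property filter_tenant_id=<project-uuid> group1-aggregate

The "is too long" error below is the length limit on a single metadata
value, which is why the one-aggregate-per-project workaround sidesteps it.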

Best,
-jay

On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto wrote:


Hi

I want to partition my OpenStack cloud so that:

- Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
- Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy

I read that CERN addressed this use case by implementing the
ProjectsToAggregateFilter but, as far as I understand, this in-house
development eventually wasn't pushed upstream.

So I am trying to rely on the  AggregateMultiTenancyIsolation filter
to create  2 host aggregates:

- the first one including C1, C2, ... Cx and with
filter_tenant_id=p1, p2, .., pn
- the second one including Cx+1 ... Cy and with
filter_tenant_id=pn+1.. pm


But if I try to specify the long list of projects, I get a "Value
... is too long" error message [*].

I can see two workarounds for this problem:

1) Create a host aggregate per project:

HA1 including C1, C2, ... Cx and with filter_tenant_id=p1
HA2 including C1, C2, ... Cx and with filter_tenant_id=p2
etc.

2) Use the AggregateInstanceExtraSpecsFilter, creating two
aggregates and having each flavor visible only to a set of projects,
and tagged with a specific string that should match the value
specified in the corresponding host aggregate

Is this correct? Can you see better options?

Thanks, Massimo



[*]
# nova aggregate-set-metadata 1

filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298
ERROR (BadRequest): Invalid input for field/attribute
filter_tenant_id. Value:

ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298.

u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298'
is too long (HTTP 400) (Request-ID:
req-b971d686-72e5-4c54-aaa1-fef5eb7c7001)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
If you’re willing, I could share with you a way to get a FrankenCloud
using a Docker-based method with Kolla, to run a Pike/Queens/whatever
cloud at the same time as your Ocata one.
On Tue, Feb 6, 2018 at 11:15, Massimo Sgaravatto
<massimo.sgarava...@gmail.com> wrote:

> Thanks for your answer.
> As far as I understand, cells v2 are present in Pike and later. I need to
> implement this use case in an Ocata-based OpenStack cloud.
>
> Thanks, Massimo
>
> 2018-02-06 10:26 GMT+01:00 Flint WALRUS :
>
>> Aren’t CellsV2 more adapted to what you’re trying to do?
>> On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto
>> <massimo.sgarava...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> I want to partition my OpenStack cloud so that:
>>>
>>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
>>> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy
>>>
>>> I read that CERN addressed this use case by implementing the
>>> ProjectsToAggregateFilter but, as far as I understand, this in-house
>>> development eventually wasn't pushed upstream.
>>>
>>> So I am trying to rely on the  AggregateMultiTenancyIsolation filter to
>>> create  2 host aggregates:
>>>
>>> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1,
>>> p2, .., pn
>>> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1..
>>> pm
>>>
>>>
>>> But if I try to specify the long list of projects, I get a "Value ... is
>>> too long" error message [*].
>>>
>>> I can see two workarounds for this problem:
>>>
>>> 1) Create a host aggregate per project:
>>>
>>> HA1 including C1, C2, ... Cx and with filter_tenant_id=p1
>>> HA2 including C1, C2, ... Cx and with filter_tenant_id=p2
>>> etc.
>>>
>>> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates
>>> and having each flavor visible only to a set of projects, and tagged with a
>>> specific string that should match the value specified in the corresponding
>>> host aggregate
>>>
>>> Is this correct? Can you see better options?
>>>
>>> Thanks, Massimo
>>>
>>>
>>>
>>> [*]
>>> # nova aggregate-set-metadata 1
>>> filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298
>>> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id.
>>> Value:
>>> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298.
>>> u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298'
>>> is too long (HTTP 400) (Request-ID:
>>> req-b971d686-72e5-4c54-aaa1-fef5eb7c7001)
>>>
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Massimo Sgaravatto
Thanks for your answer.
As far as I understand, cells v2 are present in Pike and later. I need to
implement this use case in an Ocata-based OpenStack cloud.

Thanks, Massimo

2018-02-06 10:26 GMT+01:00 Flint WALRUS :

> Aren’t CellsV2 more adapted to what you’re trying to do?
> On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto
> <massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>> I want to partition my OpenStack cloud so that:
>>
>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
>> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy
>>
>> I read that CERN addressed this use case by implementing the
>> ProjectsToAggregateFilter but, as far as I understand, this in-house
>> development eventually wasn't pushed upstream.
>>
>> So I am trying to rely on the  AggregateMultiTenancyIsolation filter to
>> create  2 host aggregates:
>>
>> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1,
>> p2, .., pn
>> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1..
>> pm
>>
>>
>> But if I try to specify the long list of projects, I get a "Value ... is
>> too long" error message [*].
>>
>> I can see two workarounds for this problem:
>>
>> 1) Create a host aggregate per project:
>>
>> HA1 including C1, C2, ... Cx and with filter_tenant_id=p1
>> HA2 including C1, C2, ... Cx and with filter_tenant_id=p2
>> etc.
>>
>> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates
>> and having each flavor visible only to a set of projects, and tagged with a
>> specific string that should match the value specified in the corresponding
>> host aggregate
>>
>> Is this correct? Can you see better options?
>>
>> Thanks, Massimo
>>
>>
>>
>> [*]
>> # nova aggregate-set-metadata 1 filter_tenant_id=
>> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,
>> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,
>> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,
>> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,
>> 2b92483138dc4a61b1133c8c177ff298
>> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id.
>> Value: ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,
>> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,
>> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,
>> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,
>> 2b92483138dc4a61b1133c8c177ff298. u'ee1865a76440481cbcff08544c7d580a,
>> 1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,
>> b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,
>> e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,
>> 29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' is
>> too long (HTTP 400) (Request-ID: req-b971d686-72e5-4c54-aaa1-
>> fef5eb7c7001)
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
Ok, that’s what I understood from the documentation, but as I couldn’t
find any information related to the L3 specifics I preferred to get
another check besides my own x)

I’ll have to install and operate Octavia within an unusual L2-only network
and I would like to be sure I’m not pushing myself off the cliff :-)
On Tue, Feb 6, 2018 at 10:53, Volodymyr Litovka wrote:

> Hi Flint,
>
> I think Octavia expects reachability between components over the
> management network, regardless of the network's technology.
>
>
> On 2/6/18 11:41 AM, Flint WALRUS wrote:
>
> Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider
> network instead of a neutron L3 VXLAN?
>
> Is Octavia specifically relying on L3 networking, or can it operate
> without neutron L3 features?
>
> I didn't find anything specifically related to the network requirements
> except for the network itself.
>
> Thanks guys!
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> --
> Volodymyr Litovka
>   "Vision without Execution is Hallucination." -- Thomas Edison
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Volodymyr Litovka

Hi Flint,

I think Octavia expects reachability between components over the 
management network, regardless of the network's technology.


On 2/6/18 11:41 AM, Flint WALRUS wrote:
Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider 
network instead of a neutron L3 VXLAN?


Is Octavia specifically relying on L3 networking, or can it operate 
without neutron L3 features?


I didn't find anything specifically related to the network 
requirements except for the network itself.


Thanks guys!


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider
network instead of a neutron L3 VXLAN?

Is Octavia specifically relying on L3 networking, or can it operate
without neutron L3 features?

I didn't find anything specifically related to the network requirements
except for the network itself.

Thanks guys!
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
Aren’t CellsV2 more adapted to what you’re trying to do?
On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto
<massimo.sgarava...@gmail.com> wrote:

> Hi
>
> I want to partition my OpenStack cloud so that:
>
> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy
>
> I read that CERN addressed this use case by implementing the
> ProjectsToAggregateFilter but, as far as I understand, this in-house
> development eventually wasn't pushed upstream.
>
> So I am trying to rely on the  AggregateMultiTenancyIsolation filter to
> create  2 host aggregates:
>
> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1, p2,
> .., pn
> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. pm
>
>
> But if I try to specify the long list of projects, I get a "Value ... is
> too long" error message [*].
>
> I can see two workarounds for this problem:
>
> 1) Create a host aggregate per project:
>
> HA1 including C1, C2, ... Cx and with filter_tenant_id=p1
> HA2 including C1, C2, ... Cx and with filter_tenant_id=p2
> etc.
>
> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates and
> having each flavor visible only to a set of projects, and tagged with a
> specific string that should match the value specified in the corresponding
> host aggregate
>
> Is this correct? Can you see better options?
>
> Thanks, Massimo
>
>
>
> [*]
> # nova aggregate-set-metadata 1
> filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298
> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id.
> Value:
> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298.
> u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298'
> is too long (HTTP 400) (Request-ID:
> req-b971d686-72e5-4c54-aaa1-fef5eb7c7001)
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators