Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15, 16)

2016-02-12 Thread Jose Castro Leon
Hi Tom,

> * Keystone Federation - discussion session
I will be attending the operators meetup in Manchester as a member of the CERN 
Cloud service.
I'd be really interested in moderating the Keystone federation session.
Kind regards,

Jose Castro Leon
CERN IT-RPS
tel: +41.22.76.74272
mob: +41.75.41.19222
fax: +41.22.76.67955
Office: 31-R-021, CH-1211 Geneve 23
email: jose.castro.l...@cern.ch

>-Original Message-
>From: Tom Fifield [mailto:t...@openstack.org] 
>Sent: Thursday, February 04, 2016 8:28 PM
>To: OpenStack Operators 
>Subject: Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15,
>16)
>
>Hi all,
>
>We still need moderators for the following:
>
>* Upgrade challenges, LTS releases, patches and packaging
>* Keystone Federation - discussion session
>* Post-Puppet deployment patterns - discussion
>* HA at the Hypervisor level
>* OSOps - what is it, where is it going, what you can do
>* OSOps working session
>
>
>Have a look at the moderator's guide @
>https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide
>
>and let us know if you're interested!
>
>
>Regards,
>
>
>Tom
>
>On 01/02/16 17:29, Matt Jarvis wrote:
>> That's a very good point !
>>
>> OK, so the ones we definitely don't seem to have anyone moderating or 
>> presenting against currently are :
>>
>> Tokyo Highlights - going to assume this was a talk
>> Keystone Federation - discussion session
>> Post-Puppet deployment patterns - discussion
>> HA at the Hypervisor level - assume this was a talk too
>> Ceph integration - discussion
>> Writing User Stories - working group
>> OSOps - what is it, where is it going, what you can do
>> OSOps working session
>> Monitoring and Tools WG
>>
>> These were almost all taken from the original etherpad ( 
>> https://etherpad.openstack.org/p/MAN-ops-meetup ), so if you suggested 
>> them or would like to present/moderate then let us know.
>>
>> If you would like to help with moderating any of the other sessions 
>> apart from those above, let us know - for most of the sessions we can 
>> always use two moderators.
>>
>>
>>
>>
>>
>>
>>
>> On 1 February 2016 at 09:20, Shamail Tahir wrote:
>>
>> Hi Matt,
>>
>>
>> On Mon, Feb 1, 2016 at 3:47 AM, Matt Jarvis wrote:
>>
>> Hello All
>>
>> The event in Manchester is rapidly approaching, and we're still
>> looking for moderators and presenters for some of these
>> sessions. If you proposed one of the sessions currently in the
>> schedule, please let us know so we can assign you in the
>> schedule. If you'd be willing to help out and moderate one or
>> more sessions, we'd really appreciate your help. Thanks to
>> everyone who's volunteered so far !
>>
>> How can we identify which sessions are missing moderators currently?
>>
>>
>> On 20 January 2016 at 17:54, Tom Fifield wrote:
>>
>> Hi all,
>>
>> Matt, Nick and myself took some time to take our suggestions
>> from the etherpad and attempted to wrangle them into
>> something that would fit in the space we have over 2 days.
>>
>> As a reminder, we have two different kinds of sessions -
>> General Sessions, which are discussions for the operator
>> community aimed at producing actions (e.g. best practices,
>> feedback on badness), and Working Groups, which focus on specific
>> topics aiming to make concrete progress on tasks in that area.
>>
>> As always, some stuff has been munged and mangled in an
>> attempt to fit it in so please take a look at the below and
>> reply with your comments! Is anything missing? Something
>> look like a terrible idea? Want to completely change the
>> room layout? There's still a little bit of flexibility at
>> this stage.
>>
>>
>> Day 1            Room 1 | Room 2 | Room 3
>> 9:00 - 10:00     Registration
>> 10:00 - 10:30    Introduction + History of Ops Meetups + Intro to working groups
>> 10:30 - 11:15    How to engage with the community
>> 11:15 - 11:20    Breakout explanation
>> 11:20 - 12:00    Tokyo highlights | Keystone and Federation |
>> 12:00 - 13:30    Lunch
>> 13:30 - 14:10    Upgrade challenges, LTS releases, patches and packaging | Experience with Puppet Deployments | HPC / Scientific WG

Re: [Openstack-operators] Openstack liberty neutron vmware nsx plugin

2016-02-12 Thread Ihar Hrachyshka

Ignazio Cassano  wrote:


Hi all,
I am going to configure OpenStack Liberty with VMware NSX.
I am using a CentOS 7 controller but I cannot find the Neutron plugin
package for VMware NSX.
It seems to exist in the Kilo repositories but not in Liberty.
Regards
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


I guess you want to ask in rdo-list instead.

https://www.redhat.com/mailman/listinfo/rdo-list

Ihar
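
Independent of which list answers, a quick way to see what a given RDO
repository actually ships is a scoped package search; the repo id below is
only a placeholder for whatever repo is actually enabled on the controller:

    # Illustrative: list NSX-related packages visible in a single repo.
    yum --disablerepo='*' --enablerepo='rdo-liberty' search nsx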

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Openstack liberty neutron vmware nsx plugin

2016-02-12 Thread Ignazio Cassano
Hi all,
I am going to configure OpenStack Liberty with VMware NSX.
I am using a CentOS 7 controller but I cannot find the Neutron plugin package
for VMware NSX.
It seems to exist in the Kilo repositories but not in Liberty.
Regards
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Openstack liberty neutron vmware nsx plugin

2016-02-12 Thread Ignazio Cassano
Many thanks

2016-02-12 12:14 GMT+01:00 Ihar Hrachyshka :

> Ignazio Cassano  wrote:
>
> Hi all,
>> I am going to configure openstack liberty with vmware nsx.
>> I am using centos 7 controller but I cannot find the neutron plugin
>> package for
>> vmware nsx .
>> It seems to exist in kilo but not in liberty repositories.
>> Regards
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> I guess you want to ask in rdo-list instead.
>
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> Ihar
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [kolla] Question about how Operators deploy

2016-02-12 Thread Steven Dake (stdake)
Hi folks,

Unfortunately I won't be able to make it to the Operator midcycle because of 
budget constraints or I would find the answer to this question there.  The 
Kolla upstream is busy sorting out external ssl termination and a question 
arose in the Kolla community around operator requirements for publicURL vs 
internalURL VIP management.

At present, Kolla creates 3 HAProxy containers across 3 HA nodes with one VIP 
managed by keepalived.  The VIP is used for internal communication only.  Our 
public_url is set to a DNS name, and we expect the Operator to sort out how to 
map that DNS name to the internal VIP used by Kolla.  The way I do this in my 
home lab is to NAT my public_url from the internet (hosted by dyndns) to the 
internal VIP that haproxies to my 3 HA control nodes.  This is secure assuming 
someone doesn't bust through my NAT.

An alternative has been suggested, which is to use TWO VIPs: one for 
internal_url, one for public_url.  Then the operator would only be responsible 
for selecting where to allocate the public_url endpoint's VIP.  I think this 
allows more flexibility without necessarily requiring NAT while still 
delivering a secure solution.
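
To make the two-VIP alternative concrete, here is a rough keepalived sketch;
the instance names, interfaces and addresses are invented for illustration and
are not Kolla's actual configuration:

    # VIP backing the internal_url endpoints (illustrative)
    vrrp_instance internal_api {
        state BACKUP
        interface eth1
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            10.10.10.254/24
        }
    }

    # VIP backing the public_url endpoints (illustrative)
    vrrp_instance public_api {
        state BACKUP
        interface eth0
        virtual_router_id 52
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.254/24
        }
    }

HAProxy would then bind the internal service frontends to the first VIP and
only the public-facing frontends (and any SSL termination) to the second.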

Not having ever run an OpenStack cloud in production, how do the Operators want 
it?  Our deciding factor here is what Operators want, not what is necessarily 
currently in the code base.  We still have time to make this work differently 
for Mitaka, but I need feedback/advice quickly.

The security guide seems to imply two VIPs are the way to Operate: (big 
diagram):
http://docs.openstack.org/security-guide/networking/architecture.html

The IRC discussion is here for reference:
http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-02-12.log.html#t2016-02-12T12:09:08

Thanks in Advance!
-steve

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [kolla] Question about how Operators deploy

2016-02-12 Thread Fox, Kevin M
We usually use two vips.

Thanks,
Kevin


From: Steven Dake (stdake)
Sent: Friday, February 12, 2016 6:04:45 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [kolla] Question about how Operators deploy

Hi folks,

Unfortunately I won't be able to make it to the Operator midcycle because of 
budget constraints or I would find the answer to this question there.  The 
Kolla upstream is busy sorting out external ssl termination and a question 
arose in the Kolla community around operator requirements for publicURL vs 
internalURL VIP management.

At present, Kolla creates 3 Haproxy containers across 3 HA nodes with one VIP 
managed by keepalived.  The VIP is used for internal communication only.  Our 
PUBLIC_URL is set to a DNS name, and we expect the Operator to sort out how to 
map that DNS name to the internal VIP used by Kolla.  The way I do this in my 
home lab is to use NAT to NAT my public_URL from the internet (hosted by 
dyndns) to my internal VIP that haproxies to my 3 HA control nodes.  This is 
secure assuming someone doesn't bust through my NAT.

An alternative has been suggested which is to use TWO vips.  One for 
internal_url, one for public_url.  Then the operator would only be responsible 
for selecting where to allocate the public_url endpoint's VIP.  I think this 
allows more flexibility without necessarily requiring NAT while still 
delivering a secure solution.

Not having ever run an OpenStack cloud in production, how do the Operators want 
it?  Our deciding factor here is what Operators want, not what is necessarily 
currently in the code base.  We still have time to make this work differently 
for Mitaka, but I need feedback/advice quickly.

The security guide seems to imply two VIPs are the way to Operate: (big 
diagram):
http://docs.openstack.org/security-guide/networking/architecture.html

The IRC discussion is here for reference:
http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-02-12.log.html#t2016-02-12T12:09:08

Thanks in Advance!
-steve

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Liberty and OVS Agent restarts

2016-02-12 Thread Matt Kassawara
Out of curiosity, what do you have for the "external_network_bridge" option
in the L3 agent config?
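
For reference, that option lives in the [DEFAULT] section of the L3 agent
config; a minimal, illustrative l3_agent.ini fragment (the bridge name here is
only an example):

    [DEFAULT]
    # Legacy option that ties the L3 agent to one external OVS bridge.
    # It historically defaulted to br-ex; an empty value makes the agent
    # rely on the L2 agent's provider bridge mappings instead.
    external_network_bridge = br-ex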

On Wed, Feb 10, 2016 at 2:42 PM, Bajin, Joseph  wrote:

> Clayton,
>
> This is really good information.
>
> I’m wondering how we can help support you and get the necessary dev
> support to get this resolved sooner than later. I totally agree with you
> that this should be backported to at least Liberty.
>
> Please let me know how I and others can help!
>
> —Joe
>
>
>
>
>
>
>
>
>
> On 2/10/16, 8:55 AM, "Clayton O'Neill"  wrote:
>
> >Summary: Liberty OVS agent restarts are better, but still need work.
> >See: https://bugs.launchpad.net/neutron/+bug/1514056
> >
> >As many of you know, Liberty has a fix for OVS agent restarts such
> >that it doesn’t dump all flows when starting, resulting in a loss of
> >traffic.  Unfortunately, Liberty neutron still has issues with OVS
> >agent restarts.  The fix that went into Liberty prevents it from
> >dropping flows on the br-tun and br-int bridges and that helps
> >greatly, but the br-ex bridge still has its flows cleared on startup.
> >
> >You may be thinking: Wait, br-ex only has like 3 flows on it, how can
> >that be a problem?  The issue appears to be that the br-ex flows are
> >cleared early and not set up again until late in the process.  This
> >means that routers on the node where the OVS agent is restarting lose network
> >connectivity for the majority of the restart time.
> >
> >I did some testing with this yesterday, comparing a few scenarios with
> >100 FIPS, 100 instances and various scenarios for routers.  You can
> >find the complete data here:
> >
> https://docs.google.com/spreadsheets/d/1ZGra_MszBlL0fNsFqd4nOvh1PsgWu58-GxEeh1m1BPw/edit?usp=sharing
> >
> >The summary looks like this:
> >100 routers, 100 networks, 100 floating ips, 100 instances, single node
> test:
> >Kilo average outage time: 47 seconds
> >Liberty average outage time: 37 seconds
> >
> >1 router, 1 network, 100 floating ips, 100 instances, single node test:
> >Kilo average outage time: 46 seconds
> >Liberty average outage time: 13 seconds
> >
> >1 router, 1 network, 100 floating ips, 100 instances, router on a
> >separate node, all instances on a single node, OVS restart on compute
> >node:
> >Kilo average outage time: 25 seconds
> >Liberty average outage time: 0 to 1 seconds
> >
> >I did my testing using 1 second pings using fping to all of the
> >floating IPs.  With the last test, it frequently lost no packets, and
> >as a result I was not really able to test the scenario other than to
> >qualify it as good.
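
For reference, a measurement like this can be reproduced with something along
these lines (addresses and file name are invented for illustration):

    # Ping a range of floating IPs once per second with fping and log the
    # results while the OVS agent is restarted on the node under test.
    fping -l -p 1000 -g 203.0.113.10 203.0.113.110 2>&1 | tee ovs-restart-pings.log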
> >
> >This is a huge operational issue for us and I suspect for many of the
> >rest of you using OVS.  I’d encourage everyone that is using OVS to
> >register interest in having this fixed in the LP bug
> >(https://bugs.launchpad.net/neutron/+bug/1514056).  Right now this bug
> >is marked as low priority.
> >
> >___
> >OpenStack-operators mailing list
> >OpenStack-operators@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Liberty and OVS Agent restarts

2016-02-12 Thread Clayton O'Neill
I’ve tried it with both a blank value and the specific value.  It
doesn’t appear to make a difference.

In other news, Assaf Muller has upgraded the priority of the bug from
low to high.

On Fri, Feb 12, 2016 at 9:27 AM, Matt Kassawara  wrote:
> Out of curiosity, what do you have for the "external_network_bridge" option
> in the L3 agent config?
>
> On Wed, Feb 10, 2016 at 2:42 PM, Bajin, Joseph  wrote:
>>
>> Clayton,
>>
>> This is really good information.
>>
>> I’m wondering how we can help support you and get the necessary dev
>> support to get this resolved sooner than later. I totally agree with you
>> that this should be backported to at least Liberty.
>>
>> Please let me know how I and others can help!
>>
>> —Joe
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On 2/10/16, 8:55 AM, "Clayton O'Neill"  wrote:
>>
>> >Summary: Liberty OVS agent restarts are better, but still need work.
>> >See: https://bugs.launchpad.net/neutron/+bug/1514056
>> >
>> >As many of you know, Liberty has a fix for OVS agent restarts such
>> >that it doesn’t dump all flows when starting, resulting in a loss of
>> >traffic.  Unfortunately, Liberty neutron still has issues with OVS
>> >agent restarts.  The fix that went into Liberty prevents it from
>> >dropping flows on the br-tun and br-int bridges and that helps
>> >greatly, but the br-ex bridge still has its flows cleared on startup.
>> >
>> >You may be thinking: Wait, br-ex only has like 3 flows on it, how can
>> >that be a problem?  The issue appears to be that the br-ex flows are
>> >cleared early and not set up again until late in the process.  This
>> >means that routers on the node where the OVS agent is restarting lose network
>> >connectivity for the majority of the restart time.
>> >
>> >I did some testing with this yesterday, comparing a few scenarios with
>> >100 FIPS, 100 instances and various scenarios for routers.  You can
>> >find the complete data here:
>>
>> > >https://docs.google.com/spreadsheets/d/1ZGra_MszBlL0fNsFqd4nOvh1PsgWu58-GxEeh1m1BPw/edit?usp=sharing
>> >
>> >The summary looks like this:
>> >100 routers, 100 networks, 100 floating ips, 100 instances, single node
>> > test:
>> >Kilo average outage time: 47 seconds
>> >Liberty average outage time: 37 seconds
>> >
>> >1 router, 1 network, 100 floating ips, 100 instances, single node test:
>> >Kilo average outage time: 46 seconds
>> >Liberty average outage time: 13 seconds
>> >
>> >1 router, 1 network, 100 floating ips, 100 instances, router on a
>> >separate node, all instances on a single node, OVS restart on compute
>> >node:
>> >Kilo average outage time: 25 seconds
>> >Liberty average outage time: 0 to 1 seconds
>> >
>> >I did my testing using 1 second pings using fping to all of the
>> >floating IPs.  With the last test, it frequently lost no packets, and
>> >as a result I was not really able to test the scenario other than to
>> >qualify it as good.
>> >
>> >This is a huge operational issue for us and I suspect for many of the
>> >rest of you using OVS.  I’d encourage everyone that is using OVS to
>> >register interest in having this fixed in the LP bug
>> >(https://bugs.launchpad.net/neutron/+bug/1514056).  Right now this bug
>> >is marked as low priority.
>> >
>> >___
>> >OpenStack-operators mailing list
>> >OpenStack-operators@lists.openstack.org
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15, 16)

2016-02-12 Thread Toshikazu Ichikawa
Hi Tom,

> * HA at the Hypervisor level

Some members from NTT and I can share what we are doing (Masakari) and
recent activities from the HA team meetings, if the time slot is still available.

Thanks,
Kazu

-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Thursday, February 04, 2016 8:28 PM
To: OpenStack Operators 
Subject: Re: [Openstack-operators] Draft Agenda for MAN Ops Meetup (Feb 15,
16)

Hi all,

We still need moderators for the following:

* Upgrade challenges, LTS releases, patches and packaging
* Keystone Federation - discussion session
* Post-Puppet deployment patterns - discussion
* HA at the Hypervisor level
* OSOps - what is it, where is it going, what you can do
* OSOps working session


Have a look at the moderator's guide @
https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide

and let us know if you're interested!


Regards,


Tom

On 01/02/16 17:29, Matt Jarvis wrote:
> That's a very good point !
>
> OK, so the ones we definitely don't seem to have anyone moderating or 
> presenting against currently are :
>
> Tokyo Highlights - going to assume this was a talk
> Keystone Federation - discussion session
> Post-Puppet deployment patterns - discussion
> HA at the Hypervisor level - assume this was a talk too
> Ceph integration - discussion
> Writing User Stories - working group
> OSOps - what is it, where is it going, what you can do
> OSOps working session
> Monitoring and Tools WG
>
> These were almost all taken from the original etherpad ( 
> https://etherpad.openstack.org/p/MAN-ops-meetup ), so if you suggested 
> them or would like to present/moderate then let us know.
>
> If you would like to help with moderating any of the other sessions 
> apart from those above, let us know - for most of the sessions we can 
> always use two moderators.
>
>
>
>
>
>
>
> On 1 February 2016 at 09:20, Shamail Tahir wrote:
>
> Hi Matt,
>
>
> On Mon, Feb 1, 2016 at 3:47 AM, Matt Jarvis wrote:
>
> Hello All
>
> The event in Manchester is rapidly approaching, and we're still
> looking for moderators and presenters for some of these
> sessions. If you proposed one of the sessions currently in the
> schedule, please let us know so we can assign you in the
> schedule. If you'd be willing to help out and moderate one or
> more sessions, we'd really appreciate your help. Thanks to
> everyone who's volunteered so far !
>
> How can we identify which sessions are missing moderators currently?
>
>
> On 20 January 2016 at 17:54, Tom Fifield wrote:
>
> Hi all,
>
> Matt, Nick and myself took some time to take our suggestions
> from the etherpad and attempted to wrangle them into
> something that would fit in the space we have over 2 days.
>
> As a reminder, we have two different kinds of sessions -
> General Sessions, which are discussions for the operator
> community aimed at producing actions (e.g. best practices,
> feedback on badness), and Working Groups, which focus on specific
> topics aiming to make concrete progress on tasks in that area.
>
> As always, some stuff has been munged and mangled in an
> attempt to fit it in so please take a look at the below and
> reply with your comments! Is anything missing? Something
> look like a terrible idea? Want to completely change the
> room layout? There's still a little bit of flexibility at
> this stage.
>
>
> Day 1            Room 1 | Room 2 | Room 3
> 9:00 - 10:00     Registration
> 10:00 - 10:30    Introduction + History of Ops Meetups + Intro to working groups
> 10:30 - 11:15    How to engage with the community
> 11:15 - 11:20    Breakout explanation
> 11:20 - 12:00    Tokyo highlights | Keystone and Federation |
> 12:00 - 13:30    Lunch
> 13:30 - 14:10    Upgrade challenges, LTS releases, patches and packaging | Experience with Puppet Deployments | HPC / Scientific WG
> 14:10 - 14:50    Upgrade challenges, LTS releases, patches and packaging | Post-Puppet deployment patterns | HPC / Scientific WG
> 14:50 - 15:20    Coffee
> 15:20 - 16:00    Neutron Operational best practices | HA at the Hypervisor level |
> 16:00 - 16:40    OSOps - what 

Re: [Openstack-operators] [kolla] Question about how Operators deploy

2016-02-12 Thread Dan Sneddon
On 02/12/2016 06:04 AM, Steven Dake (stdake) wrote:
> Hi folks,
> 
> Unfortunately I won't be able to make it to the Operator midcycle
> because of budget constraints or I would find the answer to this
> question there.  The Kolla upstream is busy sorting out external ssl
> termination and a question arose in the Kolla community around operator
> requirements for publicURL vs internalURL VIP management.
> 
> At present, Kolla creates 3 Haproxy containers across 3 HA nodes with
> one VIP managed by keepalived.  The VIP is used for internal
> communication only.  Our PUBLIC_URL is set to a DNS name, and we expect
> the Operator to sort out how to map that DNS name to the internal VIP
> used by Kolla.  The way I do this in my home lab is to use NAT to NAT
> my public_URL from the internet (hosted by dyndns) to my internal VIP
> that haproxies to my 3 HA control nodes.  This is secure assuming
> someone doesn't bust through my NAT.
> 
> An alternative has been suggested which is to use TWO vips.  One for
> internal_url, one for public_url.  Then the operator would only be
> responsible for selecting where to allocate the public_url
> endpoint's VIP.  I think this allows more flexibility without
> necessarily requiring NAT while still delivering a secure solution.
> 
> Not having ever run an OpenStack cloud in production, how do the
> Operators want it?  Our deciding factor here is what Operators want,
> not what is necessarily currently in the code base.  We still have time
> to make this work differently for Mitaka, but I need feedback/advice
> quickly.
> 
> The security guide seems to imply two VIPs are the way to Operate: (big
> diagram):
> http://docs.openstack.org/security-guide/networking/architecture.html
> 
> The IRC discussion is here for reference:
> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-02-12.log.html#t2016-02-12T12:09:08
> 
> Thanks in Advance!
> -steve
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

I am not an operator, but I work with large-scale operators to design
OpenStack networks regularly (more than one per week). In general, the
operators I work with want a separation of their Public from their
Internal APIs. This helps with accounting, since tracking accesses to
the Public API is easier when you don't have to filter out all the
internal service API calls. I have also seen some operators place the
Public APIs into a protected zone that required VPN access to get to,
while the Internal APIs were only accessible from inside the deployment.
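
That separation shows up directly in the Keystone catalog; a hypothetical
example of registering a service with distinct public and internal endpoints,
one per VIP (all names and addresses below are made up):

    openstack endpoint create --region RegionOne glance public   https://api.cloud.example.com:9292
    openstack endpoint create --region RegionOne glance internal http://10.10.10.254:9292
    openstack endpoint create --region RegionOne glance admin    http://10.10.10.254:9292

Internal consumers can then be pointed at the internal interface (for example
via OS_INTERFACE in recent clients) while everything else resolves the public
name.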

Another interesting use case I have seen several times is when a
service VM needs to connect to the Public APIs. I have seen this when a
VM inside the cloud was used to host a self-service portal, so that VM
needs to be able to issue commands against the Public APIs in order to
provision services. In this case, it would have been difficult to
engineer a solution that allowed both the VM and the internal services
to connect to a single API without increasing the attack surface and
reducing security.

-- 
Dan Sneddon |  Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
650.254.4025|  dsneddon:irc   @dxs:twitter

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [kolla] Question about how Operators deploy

2016-02-12 Thread Joe Topjian
2 VIPs as well.

On Fri, Feb 12, 2016 at 8:27 AM, Matt Fischer  wrote:

> We also use 2 VIPs. public and internal, with admin being a CNAME for
> internal.
>
> On Fri, Feb 12, 2016 at 7:28 AM, Fox, Kevin M  wrote:
>
>> We usually use two vips.
>>
>> Thanks,
>> Kevin
>>
>> --
>> *From:* Steven Dake (stdake)
>> *Sent:* Friday, February 12, 2016 6:04:45 AM
>> *To:* openstack-operators@lists.openstack.org
>> *Subject:* [Openstack-operators] [kolla] Question about how Operators
>> deploy
>>
>> Hi folks,
>>
>> Unfortunately I won't be able to make it to the Operator midcycle because
>> of budget constraints or I would find the answer to this question there.
>> The Kolla upstream is busy sorting out external ssl termination and a
>> question arose in the Kolla community around operator requirements for
>> publicURL vs internalURL VIP management.
>>
>> At present, Kolla creates 3 Haproxy containers across 3 HA nodes with one
>> VIP managed by keepalived.  The VIP is used for internal communication
>> only.  Our PUBLIC_URL is set to a DNS name, and we expect the Operator to
>> sort out how to map that DNS name to the internal VIP used by Kolla.  The
>> way I do this in my home lab is to use NAT to NAT my public_URL from the
>> internet (hosted by dyndns) to my internal VIP that haproxies to my 3 HA
>> control nodes.  This is secure assuming someone doesn't bust through my NAT.
>>
>> An alternative has been suggested which is to use TWO vips.  One for
>> internal_url, one for public_url.  Then the operator would only be
>> responsible for selecting where to allocate the public_url endpoint's
>> VIP.  I think this allows more flexibility without necessarily requiring
>> NAT while still delivering a secure solution.
>>
>> Not having ever run an OpenStack cloud in production, how do the
>> Operators want it?  Our deciding factor here is what Operators want, not
>> what is necessarily currently in the code base.  We still have time to make
>> this work differently for Mitaka, but I need feedback/advice quickly.
>>
>> The security guide seems to imply two VIPs are the way to Operate: (big
>> diagram):
>> http://docs.openstack.org/security-guide/networking/architecture.html
>>
>> The IRC discussion is here for reference:
>>
>> http://eavesdrop.openstack.org/irclogs/%23kolla/%23kolla.2016-02-12.log.html#t2016-02-12T12:09:08
>>
>> Thanks in Advance!
>> -steve
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Help Needed! Architecture Guide OpenStack Bug Smash

2016-02-12 Thread Devon Boatwright
Hi everyone! My name is Devon Boatwright and I am a part of the Ops Guide
specialty team, which is part of the Documentation team.

There is a global OpenStack bug smash scheduled for the Mitaka release in
March. Details can be found here:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

We're looking for people to specifically help with the Architecture Guide,
which is currently going through a reorganization. Here is the link for
more details:
https://wiki.openstack.org/wiki/Architecture_Design_Guide_work_items

Please see the site-specific etherpads if you are interested and sign up
to help.

If you are interested in helping out, more details about our team and our
meetings can be found here:
https://wiki.openstack.org/wiki/Documentation/OpsGuide

Please attend our meetings if you are interested. We'd love to have you!

Thank you!
Devon Boatwright
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators