Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Tomas Vondra

Dear Robert,
thanks for clarifying why there always has to be an address in the FIP
namespace. But it still feels like something someone left there from an
alpha phase. If I need AN address, I would use a worthless one, like a
169.254 link-local address, not a public IP. There are already link-local
addresses in use in Neutron... somewhere :-).
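
In fact, you can see both the extra external address and the 169.254
addresses DVR already uses by inspecting the namespaces on a compute node
that hosts a floating IP. An illustrative sketch; the fip- namespace suffix
is the external network's UUID and will differ per deployment:

ip netns list                                 # qrouter-... and fip-... namespaces
ip netns exec fip-<external-net-uuid> ip addr
# the fg- port carries the extra external address; the rfp-/fpr- veth
# pairs toward each qrouter- namespace already use 169.254.x.x addresses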

The IP consumption that bothers me more than this is that of the Router
External Interfaces, which all sit on the network nodes and do SNAT for
every tenant separately. I would like the centralized SNAT of DVR to be
more... centralized. The quickest way, I think, would be to allocate these
from a different pool than the Floating IPs and let my datacenter SNAT take
care of them.
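
Since an external network can carry more than one subnet, something along
these lines might already get partway there. A hedged sketch with made-up
names and addresses; whether router gateway ports can actually be pinned to
one subnet depends on your Neutron release:

# hypothetical second subnet on ext-net, reserved for SNAT gateways, taken
# from space the datacenter NAT already translates
neutron subnet-create ext-net 10.66.0.0/24 --name snat-pool \
  --allocation-pool start=10.66.0.10,end=10.66.0.250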

To your earlier post: some of the Neutron provider implementations are much
more L3-oriented than the default implementation. But I have, for example,
scratched Contrail from my installation, because the added cost of 2
Juniper or Cisco routers does not balance out the benefits, IMHO. And it is
another complex system besides OpenStack, consisting of about 6 components
written in 4 programming languages, that you have to take care of.

Would Contrail use fewer IP addresses per tenant and node? (They will be
worth their weight in gold soon :-).) What about SDNs more open-source than
Contrail, which use software routers at the edge? Is anyone using MidoNet
or OpenDaylight?

I personally think that DVR, DragonFlow, or the next integrated Neutron
solution is the way to go in OpenStack, not some external plugin. But DVR,
as I am finding out, has its quirks, which could be solved by introducing a
few more configuration options. I like the way it can use L2 and provider
networks to integrate with the rest of the datacenter. No BGP L3VPN
tunnels, which cannot be done with open source.

Tomas


[Openstack-operators] Announcing Debian Mitaka b2 packages with backports for Jessie and Trusty

2016-01-28 Thread Thomas Goirand
Hi everyone,

I'm delighted to announce the release of Debian packages for Mitaka b2.

Debian Experimental
===================
I have uploaded it all to Debian Experimental. This is the only place
where you may find official packages. It will stay this way until Debian
Bikesheds (Bikesheds will be Debian's version of PPAs) are operational, or
until Mitaka final is out, at which point everything will be uploaded to
Sid, then to Jessie-backports.

Mitaka in Debian Stretch
========================
As the Debian release team has announced that Debian 9.0 (aka Stretch)
will be frozen late in 2016, Mitaka will be the OpenStack release that I
maintain in Stretch. However, if we have working Bikesheds before the
release, I will ask for the removal of all OpenStack stuff from Stable and
Testing, and will only maintain the last 2 stable releases in specific
Bikesheds.

Non-official Jessie and Trusty backports
========================================
All of Mitaka b2 is also available on the automatic Jenkins backport
build servers for Debian Jessie and Ubuntu Trusty. The repository
addresses are described here:

http://mitaka-jessie.pkgs.mirantis.com/

and here:

http://mitaka-trusty.pkgs.mirantis.com/

Note: these are Mirantis-sponsored servers that automatically rebuild
backports, but the source packages are exact copies of what's in Debian,
without any change.
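
If you want to point apt at them directly, the entries look something like
the following; the suite and component names below are only illustrative,
so take the exact values from the repository pages above:

# /etc/apt/sources.list.d/mitaka-backports.list (suite/component are assumptions)
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main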

If you use puppet-openstack and would like to use Ubuntu Trusty as the
base OS, you need to install the puppet-openstack-debian-fact package on
all of your servers, so that the Puppet scripts know that you're using
Debian-style packages on top of Ubuntu. This way, Puppet will know the
difference for Horizon, Nova & Neutron (the other packages use the same
names). Alternatively, you can set the fact manually (same effect):

echo os_package_type=debian > /etc/facter/facts.d/os_package_type.txt
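
To check that the fact is visible (assuming a facter version that reads
external facts from /etc/facter/facts.d, i.e. 1.7 or later):

mkdir -p /etc/facter/facts.d   # the directory may not exist yet
facter os_package_type         # should print: debian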

Also, note that the Trusty backports have been rebuilt entirely from
Debian packages. No source package available in this repository was
downloaded from Ubuntu (only from Debian), meaning that these packages
are fully redistributable as you please, without modification or rebuild,
and without any risk from the Ubuntu trademark problems [1] (of course,
there remains the problem of redistributing the base OS... but I'm not
redistributing that myself!).

Included in this release
========================
The following server packages are available:

* aodh
* barbican
* ceilometer
* cinder
* designate
* glance
* gnocchi
* heat
* ironic
* keystone
* manila
* mistral
* murano
* murano-agent
* neutron
* nova
* openstack-trove
* sahara
* senlin
* zaqar

I couldn't upload these to Experimental (as Horizon still needs Django 1.9
support), but the packages are done and backported to both Jessie and
Trusty:
- horizon
- murano-dashboard
- designate-dashboard
- trove-dashboard
- sahara-dashboard
- senlin-dashboard

Even though the package is functional, I have a working Congress package
that I can't upload to Debian, due to its "thirdparty" folder containing
non-free files, such as Windows .dll files. I hope the upstream
maintainers can fix that:
- congress

These were still not tagged for Mitaka b2, so I haven't packaged them yet:
- magnum
- manila-ui
- zaqar-ui

Report bugs
===========
This is a preview, which hasn't been tested much. Bugs are to be expected,
just as they are in the upstream code. So by all means, report bugs to the
Debian BTS [2] if you find any.
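
From an affected machine, something like the following works (assuming the
standard reportbug tool is installed):

reportbug nova   # or whichever package misbehaves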

Thanks to so many people
========================
I'd like to hereby thank everyone who helped this release happen. This
includes, but is not limited to: cdent, whom I annoyed with one package
when the issue was really in Debian; Corey Bryant from Canonical, for
continuing to co-maintain the OpenStack Python modules directly in Debian;
and the Telemetry folks, who are always very helpful when I need to fix a
few things. Thanks to anyone who helped close the bugs I've opened. I'm
sure I am forgetting many people who helped a lot.

No keyboard (or any other hardware) was hurt doing this release.

Cheers,

Thomas Goirand (zigo)

[1] If you don't know what I'm talking about, you'd better urgently read
these blog posts from Matthew Garrett:
http://mjg59.dreamwidth.org/35969.html
http://mjg59.dreamwidth.org/36312.html
http://mjg59.dreamwidth.org/37113.html
http://mjg59.dreamwidth.org/38467.html

[2] https://www.debian.org/Bugs/Reporting



Re: [Openstack-operators] Live migration neutron issues

2016-01-28 Thread Adam Dibiase
I have an update on my migration issue and thought I would share. It seems
this happens on instances running CentOS 5.x; it does not happen on
instances running CentOS 6.x. Does anyone know what Neutron sends to the
guest to restart the network post-migration, and why it would not work on
CentOS 5.x?
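
For reference, the usual mechanism after a live migration is that qemu
itself announces the guest's MAC with a few gratuitous ARP/RARP frames on
the destination node; whether a very old guest network stack copes with
that is another question. A rough way to watch for them, with an
illustrative placeholder for the instance's tap interface:

# run on the destination compute node during the migration
tcpdump -n -e -i <tap-interface-of-the-instance> 'arp or rarp'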

Thanks,

Adam




On Thu, Jan 7, 2016 at 4:00 PM, Adam Dibiase 
wrote:

> Yes. I have tried hitting it from an instance on the same L2 network with
> the same result.
>
> Just to make things interesting: I cannot be sure yet, but I think I am
> seeing a correlation. Instances created in Liberty, letting nova create
> the neutron port, always seem to migrate successfully, while instances to
> which I have attached a manually created port always seem to fail.
>
>
>
> Thanks,
>
> Adam
>
>
>
>
> On Thu, Jan 7, 2016 at 3:36 PM, Mathieu Gagné  wrote:
>
>> On 2016-01-07 3:28 PM, Adam Dibiase wrote:
>> > Greetings,
>> >
>> > I am having connectivity issues to instances after live migrating
>> > between compute nodes. Scenario is as follows:
>> >
>> >   * Start pinging instance IP
>> >   * Send nova live-migration  
>> >   * Once nova reports the pause/resume on the new compute node after
>> > migration, ping replies stop
>> >   * Instance migrates successfully and I can hit it via VNC console, but
>> > no network connectivity
>> >   * If I live migrate it back to the losing compute node, it will still NOT
>> > restore network connectivity
>> >   * The only way to restore network is to nova stop/start or nova reboot
>> > instance
>> >   * When the instance comes back up, I am able to hit it again on the
>> > new compute node
>> >
>> > I have checked to see if the neutron port is showing the new
>> > binding:host_id, ovs shows the correct port for instance on new compute
>> > node, and no other errors present in the logs.
>> >
>> > I am running Liberty with cinder FYI. Anyone else having similar issues?
>> >
>>
>> Have you tried starting another instance on the same L2 network and
>> pinging from there once it's migrated? This is to make sure it's not an
>> issue with ARP caching on a device between you and the instance.
>>
>> --
>> Mathieu
>>
>
>


Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Fox, Kevin M
Ah, so it was done just to make it simple to reuse lots of existing code
and get DVR working quickly, and it is thus a current requirement, but
there is nothing stopping further enhancements from eliminating it in the
future?

What about a step in between what's there now and eliminating it
completely? If the router code expects an IP to be allocated for it on
every compute node, could you share one external IP between all the
compute-node routers? Since the network will never actually use it, it
probably doesn't matter if it conflicts, and it would still allow the
existing code to function the way it always has, greatly simplifying
implementation.

Thanks,
Kevin


From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 8:34 PM
To: Fox, Kevin M
Cc: Carl Baldwin; OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

I think I've created a bit of confusion, because I forgot that DVR still does
SNAT (generic, non-floating-IP NAT) on a central network node just like in
the non-DVR model.  The extra address that is consumed is allocated to a
FIP-specific namespace when a DVR is made responsible for supporting a
tenant's floating IP, and the question then is: why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since it's
the allocation of a tenant-requested floating IP to a tenant VM that triggers
the DVR to implement the FIP namespace function in the first place?

In both the Paris and Vancouver DVR presentations, the message was: "We add
distributed FIP support at the expense of an _extra_ external address per
device, but the FIP namespace is then shared across all tenants". Given that
there is no "external" interface on the DVR for floating IPs until at least
one tenant allocates one, a new namespace needs to be created to act as the
termination for the tenant's floating IP.  A normal tenant router would have
an address allocated already, because it has a port allocated onto the
external network (this is the address that SNAT overloads for those
non-floating-associated machines, letting them communicate with the Internet
at large), but in this case no such interface exists until the namespace is
created and attached to the external network, so when the floating IP port
is created, an address is simply allocated from the external (i.e. floating)
pool for the interface.  And _then_ the floating IP is allocated to the
namespace as well. The fact that this extra address is used is a part of the
normal port allocation process (and default port-security anti-spoofing
processes) that exists already, and it simplifies the process of moving
tenant-allocated floating addresses around (the port state for the floating
namespace doesn't change; it keeps its special MAC and address regardless of
whatever else goes on). So don't think of it as a floating IP allocated to
the DVR; it's just the DVR's local representative for its port on the
external network.  Tenant addresses are then "on top" of this setup.

So: inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference; that was just me adding my own layer of "this is how it should
be" on top.

Robert

On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M 
<kevin@pnnl.gov> wrote:
But there already is a second external address, the FIP address that's
NATing. Is there a double NAT? I'm a little confused.

Thanks,
Kevin

From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 3:20 PM
To: Carl Baldwin
Cc: OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

You can't get rid of the "external" address, as it's used to direct return
traffic to the right router node.  DVR as implemented is really just a local
NAT gateway per physical compute node.  The outside of your NAT needs to be
publicly unique, so it needs its own address.  Some SDN solutions can provide
a truly distributed router model, because they globally know the inside state
of the NAT environment and can forward packets back to the internal source
properly, regardless of which distributed forwarder receives the incoming
"external" packets.

If the number of external addresses consumed is an issue, you may consider
the dual-gateway HA model instead of DVR.  This uses classic multi-router
models where one router takes on the task of forwarding packets and the
other device just acts as a backup.  You do still have a software bottleneck
at your router, unless you then also use one of the plugins that supports
hardware L3 (last I checked, Juniper, Arista, Cisco, etc. all provide an L3
plugin that is HA capable), but you only burn 3 external addresses for the
router (and 3 internal network addresses per tenant-side interface

Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Fox, Kevin M
Hi Tomas,

Using an external address per tenant router is a feature for a lot of sites,
like ours. We want to know for sure, at minimum, which tenant was responsible
for bad activity on the external network. Having the external address tied to
a tenant router allows you to track bad activity back at least to the IP, and
from there to the tenant router. You won't be able to tell which of the
tenant's VMs performed the bad activity, because of the SNAT, but you at
least have someone to talk to about it, instead of your local security
friends asking you to unplug the whole cloud.
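
In practice the trace looks something like this (old neutron CLI; the
address is made up for illustration):

# find the port holding the external address in question
neutron port-list -- --fixed_ips ip_address=203.0.113.17
# the port's device_id is the tenant router; router-show reveals the tenant
neutron router-show <device_id-from-the-port-above>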

Thanks,
Kevin

From: Tomas Vondra [von...@czech-itc.cz]
Sent: Thursday, January 28, 2016 3:15 AM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] DVR and public IP consumption


[Openstack-operators] best practice to manage multiple Data center using openstack

2016-01-28 Thread XueSong Ma
Hi,

Can anyone tell me the best way to manage multiple data centers using one
OpenStack system? Or the good reasons for doing so?
We have several OpenStack environments (multiple regions), but it's really
difficult to manage them: Python code, software updates, etc.
Thanks a lot!

Jeff

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators