Re: [Openstack-operators] DVR and public IP consumption

2016-02-10 Thread Carl Baldwin
On Thu, Feb 4, 2016 at 5:41 AM, Tomas Vondra  wrote:
> Hi Carl,
> sorry for the late reply, but these links of yours expanded to about 12 tabs
> in my browser, most with several pages of text. "Given lots of thought" may
> be an understatement.
>
> Both the specs sound very reasonable to me. The second one is exactly what I
> was saying here before. (Evidently I was not the first.) Why was it not
> accepted? It seems quite easy to implement in contrast to full routed 
> networks.

All of those links are out of date.  As I mentioned to Neil in another
thread just now, I'm going to write a new spec for this based on the
current direction Neutron is taking.

> The work on routed networks will be beneficial mainly for large deployments,
> whose needs exceed the capacity of a few L2 domains. Small public deployers
> are working on the scale of tens of boxes, but hundreds of tenants. Each
> tenant gets a virtual router, which eats an IP. I only have 1024 IPs from
> RIPE and will probably get no more. If most of the tenants are small and
> only use one or two VMs, I'm wasting up to 50% of my addresses and it is
> severely limiting my growth potential.

Understood.  I think it is about time we solved this.  Let's see what
we can get going in the rfe / spec process for Newton.

> I do not really understand why routed networks would be a prerequisite to
> using private IPs for router interfaces. I'm aiming at the last point from
> the Etherpad - Carrier grade NAT. Do you think that I could use the "Allow
> setting a tenant router's external IP" function and disable any checks if
> the specified IP is in the network defined as external? I already have a
> private subnet on the same L2 segment, that is NATted by the datacenter
> routers. The API is admin-only, so it would not create a risk. I would
> pre-create a router for each tenant and everyone would be happy. Floating
> IPs are taken care of at the compute nodes in DVR.

It isn't necessarily a prerequisite.  It has just been given more
priority and the work for routed networks will include a solution (at
least in part) for this.

I'm not sure that setting the router's external IP will work.  If you
decide to experiment, I'd be very interested in your results.  I think
we need a way to distinguish between two pools on the same network.
Find the post where I just replied to Neil and read that.  Hopefully
it makes sense.  This is exactly what I have in mind currently and
hopefully can propose it as a spec or rfe soon.
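To make the two-pool idea concrete, here is a toy Python sketch of it; the CIDRs and the allocator are invented for illustration, and none of this is Neutron code. Router gateway ports draw from a private pool (a /24 carved from the RFC 6598 shared/CGN space) while floating IPs keep drawing from the public pool:

```python
# Toy sketch of "two pools on the same external network": gateways
# come from carrier-grade NAT space, floating IPs from public space.
# Pool CIDRs and the allocator are made up for illustration.
import ipaddress

public_pool = list(ipaddress.ip_network("203.0.113.0/24").hosts())
gateway_pool = list(ipaddress.ip_network("100.64.0.0/24").hosts())  # RFC 6598 space

def allocate(pool):
    """Pop the next free address from a pool."""
    return pool.pop(0)

# A tenant router's gateway no longer burns a public address...
router_gw = allocate(gateway_pool)
# ...so the public pool is spent only on real floating IPs.
fip = allocate(public_pool)

print(router_gw)  # 100.64.0.1
print(fip)        # 203.0.113.1
```

The point is only that the two allocations are distinguishable by pool, which is what the spec would have to teach Neutron's IPAM to do.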

Carl

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] DVR and public IP consumption

2016-02-04 Thread Tomas Vondra
Carl Baldwin  writes:

> You're right, the IP in the fip namespace doesn't ever get written in
> to any packets or used as an arp destination.  It is currently
> meaningless.  That will change with BGP's capability to routed DVR
> traffic in Mitaka because that IP will be used as a next hop.
> However, it still doesn't need to be a public IP.  The routed networks
> work that I'm doing in Newton will allow us to eventually make these
> private IPs instead of public so that public IPs are not wasted.
> 
> I've given these things a lot of thought but haven't had time to
> pursue any such thoughts yet except to implement routed networks as
> groundwork.  Here are a few old links [1][2] but they are really out
> of date.  I need to write another spec following the first routed
> networks spec explaining how these things will work.
> 
> Here is an etherpad [3] that I put together a couple of years ago
> trying to compare different approaches to getting rid of centralized
> SNAT too.  We just never got any traction on any of these approaches.
> Also, without the routed networks work in Newton, many of them are
> difficult to accomplish.
> 
> Let me know if anything resonates with you.  We might be in a better
> position to do some of this work when routed networks is under way.
> For example, one thing that routed networks may allow is using private
> IPs for the router's address.  I think that was in one of the above
> blueprints somewhere.  Let me go write a new spec and post it.  I'll
> update this thread when I've got it up.
> 
> Carl
> 
> [1]
https://review.openstack.org/#/c/174657/2/specs/liberty/eliminate-dvr-fip-ns.rst
> [2] https://review.openstack.org/#/c/175517/1/specs/liberty/no-router-ip.rst
> [3] https://etherpad.openstack.org/p/decentralized-snat
> 

Hi Carl,
sorry for the late reply, but these links of yours expanded to about 12 tabs
in my browser, most with several pages of text. "Given lots of thought" may
be an understatement.

Both the specs sound very reasonable to me. The second one is exactly what I
was saying here before. (Evidently I was not the first.) Why was it not
accepted? It seems quite easy to implement in contrast to full routed networks.

The work on routed networks will be beneficial mainly for large deployments,
whose needs exceed the capacity of a few L2 domains. Small public deployers
are working on the scale of tens of boxes, but hundreds of tenants. Each
tenant gets a virtual router, which eats an IP. I only have 1024 IPs from
RIPE and will probably get no more. If most of the tenants are small and
only use one or two VMs, I'm wasting up to 50% of my addresses and it is
severely limiting my growth potential.
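As a back-of-the-envelope check of that waste figure (the tenant mix is an assumed example, not a measurement):

```python
# Back-of-the-envelope sketch of the address-waste argument above:
# 1024 public IPs from RIPE, one router gateway IP per tenant, and
# each small tenant attaching a single floating IP to its only VM.
total_ips = 1024
ips_per_tenant = 1 + 1   # router gateway port + one floating IP

max_tenants = total_ips // ips_per_tenant
router_overhead = max_tenants / total_ips  # share burned by gateways

print(max_tenants)       # 512 tenants fill the whole allocation
print(router_overhead)   # 0.5 -> half the public space does no real work
```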

I do not really understand why routed networks would be a prerequisite to
using private IPs for router interfaces. I'm aiming at the last point from
the Etherpad - Carrier grade NAT. Do you think that I could use the "Allow
setting a tenant router's external IP" function and disable any checks if
the specified IP is in the network defined as external? I already have a
private subnet on the same L2 segment, that is NATted by the datacenter
routers. The API is admin-only, so it would not create a risk. I would
pre-create a router for each tenant and everyone would be happy. Floating
IPs are taken care of at the compute nodes in DVR.
Tomas




Re: [Openstack-operators] DVR and public IP consumption

2016-01-29 Thread Tomas Vondra
Fox, Kevin M  writes:

> 
> Hi Tomas,
> 
> Using external addresses per tenant router is a feature for a lot of
sites, like ours. We want to know for
> sure, at minimum, which tenant was responsible for bad activity on the
external network. Having the
> external address tied to a tenant router allows you to track bad activity
back at least to the ip, then to the
> tenant router. You won't be able to tell which vm's of the tenant
performed the bad activity because of the
> snat, but you at least have someone to talk to about it, instead of your
local security friends asking you to
> unplug the whole cloud.
> 
> Thanks,
> Kevin

Hi Kevin!
Don't worry, I also had this in mind. We do traffic logging at the
datacenter's firewall, so using a private IP per tenant router would still
satisfy this requirement.
Tomas






Re: [Openstack-operators] DVR and public IP consumption

2016-01-29 Thread Robert Starmer
I don't think there's anything wrong with your suggestion, as I can't find
a path where the extra address is actually used (it doesn't get used in any
NAT mapping, so it is really vestigial). The question now is, will anyone
in the community be interested in extending the DVR code in this fashion
(interested in writing a spec?).

I personally am a bigger proponent of dropping the whole Floating IP
charade, and moving wholesale to v6 and routing right to the VM/container
endpoint.  But maybe that's just my own odd view.


On Thu, Jan 28, 2016 at 10:10 AM, Fox, Kevin M <kevin@pnnl.gov> wrote:

> Ah. so it was done just to make it simple to reuse lots of existing code
> to get DVR working quickly and thus a current requirement, but there is
> nothing stopping further enhancements to be made to eliminate it in the
> future?
>
> What about a step in between what's there now, and eliminating it
> completely. If the router code expects there to be an ip allocated for it
> on every compute node, could you share one external ip between all the
> compute node routers? Since the network will never actually use it, it
> probably doesn't matter if it's conflicting, but it would still allow the
> existing code to function the way it always has, greatly simplifying
> implementation?
>
> Thanks,
> Kevin
>
> --
> *From:* Robert Starmer [rob...@kumul.us]
> *Sent:* Wednesday, January 27, 2016 8:34 PM
> *To:* Fox, Kevin M
> *Cc:* Carl Baldwin; OpenStack Operators; Tomas Vondra
>
> *Subject:* Re: [Openstack-operators] DVR and public IP consumption
>
> I think I've created a bit of confusion, because I forgot that DVR still
> does SNAT (generic non Floating IP tied NAT) on a central network node just
> like in the non-DVR model.  The extra address that is consumed is allocated
> to a FIP specific namespace when a DVR is made responsible for supporting a
> tenant's floating IP, and the question then is: Why do I need this _extra_
> external address from the floating IP pool for the FIP namespace, since
> it's the allocation of a tenant requested floating IP to a tenant VM that
> triggers the DVR to implement the FIP namespace function in the first
> place.
>
> In both the Paris and Vancouver DVR presentations "We add distributed FIP
> support at the expense of an _extra_ external address per device, but the
> FIP namespace is then shared across all tenants". Given that there is no
> "external" interface for the DVR interface for floating IPs until at least
> one tenant allocates one, a new namespace needs to be created to act as the
> termination for the tenant's floating IP.  A normal tenant router would
> have an address allocated already, because it has a port allocated onto the
> external network (this is the address that SNAT overloads for those
> non-floating associated machines that lets them communicate with the
> Internet at large), but in this case, no such interface exists until the
> namespace is created and attached to the external network, so when the
> floating IP port is created, an address is simply allocated from the
> External (e.g. floating) pool for the interface.  And _then_ the floating
> IP is allocated to the namespace as well. The fact that this extra address
> is used is a part of the normal port allocation process (and default
> port-security anti-spoofing processes) that exist already, and simplifies
> the process of moving tenant allocated floating addresses around (the port
> state for the floating namespace doesn't change, it keeps its special mac
> and address regardless of whatever else goes on). So don't think of it as
> a Floating IP allocated to the DVR, it's just the DVR's local
> representative for its port on the external network.  Tenant addresses are
> then "on top" of this setup.
>
> So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
> network mortals, yes.  But that's how I see it. And sorry for the SNAT
> reference, just adding my own additional layer of "this is how it should
> be"  on top.
>
> Robert
>
> On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>
>> But there already is a second external address, the fip address that's
>> nating. Is there a double nat? I'm a little confused.
>>
>> Thanks,
>> Kevin
>> --
>> *From:* Robert Starmer [rob...@kumul.us]
>> *Sent:* Wednesday, January 27, 2016 3:20 PM
>> *To:* Carl Baldwin
>> *Cc:* OpenStack Operators; Tomas Vondra
>> *Subject:* Re: [Openstack-operators] DVR and public IP consumption
>>
>> You can't get rid of the "External" address as it's used to direct return
>> traffic to the right router node.

Re: [Openstack-operators] DVR and public IP consumption

2016-01-29 Thread Fox, Kevin M
I wish I had time for it. :/ Maybe in a few months when I get this new 
production cloud deployed. Can't promise anything though


I'm very much in favor of floating IPs. I'd even argue for them for IPv6. Not
because of SNAT, or saving IPs. It's because they are an important abstraction
that lets you turn pets into cattle.

Let me explain... ip addresses tend to be state that gets placed into various 
places: DNS, config files for other services, etc. As such it is expected to be
somewhat long lived. A VM on the other hand, shouldn't be. You should be able 
to stand up a new one, get it ready, and once checked out, move load from the 
old to the new. A floating ip is a perfect way to do that. You leave it on the 
old vm until everything checks out, then move the floating ip. You can even use 
the trick to further scale things: it's originally on a VM, but you make a
load balancer, add the old vm to the load balancer, then move the floating ip 
to the load balancer. Then you can add more VMs to the LB. Seamless. Without a
floating ip, none of that is possible. You have to take painful downtimes.
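The abstraction Kevin describes is easy to model: a floating IP is a stable name whose backing port can be swapped underneath it. The dict-based toy below is a stand-in for Neutron's FIP association, not real API:

```python
# Toy model of "pets into cattle": clients keep one stable address
# while the backend behind it is replaced.  The dict stands in for
# Neutron's floating-IP association table.
associations = {}  # floating IP -> backing instance

def associate(fip, target):
    associations[fip] = target

def lookup(fip):
    return associations[fip]

associate("198.51.100.7", "old-vm")
assert lookup("198.51.100.7") == "old-vm"

# Health-check the replacement, then move the address: DNS entries,
# config files, and clients keep using 198.51.100.7 throughout.
associate("198.51.100.7", "load-balancer")
assert lookup("198.51.100.7") == "load-balancer"
print("address never changed; backend did")
```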

So, please neutron team, floating ip's for v6 are a thing that should be 
possible. The implementation can be quite different. I think ipv6 has a 
mobility extension that maybe can be used instead of snatting. But the goal of 
having an abstraction for a network service endpoint that can be moved around 
(floating ip) really needs to stay.

I'd also really like to see them for tenant networks for similar reasons. That
they only work on external networks is limiting.

Thanks,
Kevin




From: Robert Starmer [rob...@kumul.us]
Sent: Friday, January 29, 2016 1:21 AM
To: Fox, Kevin M
Cc: Carl Baldwin; OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

I don't think there's anything wrong with your suggestion, as I can't find a 
path where the extra address is actually used (it doesn't get used in any NAT 
mapping, so it is really vestigial). The question now is, will anyone in the 
community be interested in extending the DVR code in this fashion (interested 
in writing a spec?).

I personally am a bigger proponent of dropping the whole Floating IP charade, 
and moving wholesale to v6 and routing right to the VM/container endpoint.  But 
maybe that's just my own odd view.


On Thu, Jan 28, 2016 at 10:10 AM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
Ah. so it was done just to make it simple to reuse lots of existing code to get 
DVR working quickly and thus a current requirement, but there is nothing 
stopping further enhancements to be made to eliminate it in the future?

What about a step in between what's there now, and eliminating it completely. 
If the router code expects there to be an ip allocated for it on every compute 
node, could you share one external ip between all the compute node routers? 
Since the network will never actually use it, it probably doesn't matter if it's
conflicting, but it would still allow the existing code to function the way it
always has, greatly simplifying implementation?

Thanks,
Kevin


From: Robert Starmer [rob...@kumul.us<mailto:rob...@kumul.us>]
Sent: Wednesday, January 27, 2016 8:34 PM
To: Fox, Kevin M
Cc: Carl Baldwin; OpenStack Operators; Tomas Vondra

Subject: Re: [Openstack-operators] DVR and public IP consumption

I think I've created a bit of confusion, because I forgot that DVR still does 
SNAT (generic non Floating IP tied NAT) on a central network node just like in 
the non-DVR model.  The extra address that is consumed is allocated to a FIP 
specific namespace when a DVR is made responsible for supporting a tenant's 
floating IP, and the question then is: Why do I need this _extra_ external 
address from the floating IP pool for the FIP namespace, since it's the 
allocation of a tenant requested floating IP to a tenant VM that triggers the 
DVR to implement the FIP namespace function in the first place.

In both the Paris and Vancouver DVR presentations "We add distributed FIP 
support at the expense of an _extra_ external address per device, but the FIP 
namespace is then shared across all tenants". Given that there is no "external" 
interface for the DVR interface for floating IPs until at least one tenant 
allocates one, a new namespace needs to be created to act as the termination 
for the tenant's floating IP.  A normal tenant router would have an address 
allocated already, because it has a port allocated onto the external network 
(this is the address that SNAT overloads for those non-floating associated 
machines that lets them communicate with the Internet at large), but in this 
case, no such interface exists until the namespace is created and attached to 
the external network, so when the floating IP port is created, an address is 
simply allocated from the External (e.g. floating) pool for the interface.

Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Fox, Kevin M
Hi Tomas,

Using external addresses per tenant router is a feature for a lot of sites,
like ours. We want to know for sure, at minimum, which tenant was responsible 
for bad activity on the external network. Having the external address tied to a 
tenant router allows you to track bad activity back at least to the ip, then to 
the tenant router. You won't be able to tell which vm's of the tenant performed 
the bad activity because of the snat, but you at least have someone to talk to
about it, instead of your local security friends asking you to unplug the whole 
cloud.

Thanks,
Kevin

From: Tomas Vondra [von...@czech-itc.cz]
Sent: Thursday, January 28, 2016 3:15 AM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] DVR and public IP consumption

Robert Starmer <robert@...> writes:

>
>
> I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic non Floating IP tied NAT) on a central network node just
like in the non-DVR model.  The extra address that is consumed is allocated
to a FIP specific namespace when a DVR is made responsible for supporting a
tenant's floating IP, and the question then is: Why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since it's
the allocation of a tenant requested floating IP to a tenant VM that
triggers the DVR to implement the FIP namespace function in the first place.
> In both the Paris and Vancouver DVR presentations "We add distributed FIP
support at the expense of an _extra_ external address per device, but the
FIP namespace is then shared across all tenants". Given that there is no
"external" interface for the DVR interface for floating IPs until at least
one tenant allocates one, a new namespace needs to be created to act as the
termination for the tenant's floating IP.  A normal tenant router would have
an address allocated already, because it has a port allocated onto the
external network (this is the address that SNAT overloads for those
non-floating associated machines that lets them communicate with the
Internet at large), but in this case, no such interface exists until the
namespace is created and attached to the external network, so when the
floating IP port is created, an address is simply allocated from the
External (e.g. floating) pool for the interface.  And _then_ the floating IP
is allocated to the namespace as well. The fact that this extra address is
used is a part of the normal port allocation process (and default
port-security anti-spoofing processes) that exist already, and simplifies
the process of moving tenant allocated floating addresses around (the port
state for the floating namespace doesn't change, it keeps its special mac
and address regardless of whatever else goes on). So don't think of it as a
Floating IP allocated to the DVR, it's just the DVR's local representative
for its port on the external network.  Tenant addresses are then "on top"
of this setup.
>
> So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference, just adding my own additional layer of "this is how it should be"
 on top.
>
> Robert
>

Dear Robert,
thanks for clarifying why there has to always be an address in the FIP
namespace. But it still feels like someone left it there from an alpha
phase. If I need AN address, I would use a worthless one like a 169.254
link-local one, not a public IP. There are already link-local addresses in
use in Neutron... somewhere :-).

The IP consumption that bothers me more than this is that of the Router
External Interfaces, which are all on the network nodes and do SNAT for
every tenant, separately. I would like the centralized SNAT of DVR to be
more ..centralized. The quickest way, I think, would be to allocate these
from a different pool than the Floating IPs and let my datacenter SNAT take
care of them.
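The arithmetic behind that suggestion, with an assumed tenant count and assuming the datacenter CGN needs only a single public address of its own:

```python
# Rough sketch of the saving described above: today every tenant
# router's external interface takes one public IP for SNAT; drawing
# those gateways from a private pool behind the datacenter NAT would
# spend only the CGN's own address.  Tenant count is an assumed
# example figure, not a measurement.
tenants = 300

public_ips_today = tenants        # one SNAT address per tenant router
public_ips_centralized = 1        # single datacenter CGN address (assumed)

saved = public_ips_today - public_ips_centralized
print(saved)   # 299 public addresses returned to the floating IP pool
```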

To your earlier post, some of the Neutron provider implementations are much
more L3-oriented than the default implementation. But I have, for example,
scratched Contrail out of my installation, because the added cost of 2
Juniper or Cisco routers does not balance out the benefits, IMHO. And it is
another complex system besides OpenStack consisting of about 6 components
written in 4 programming languages that you have to take care of.

Would Contrail use less IP addresses per tenant and node? (They will be
worth their weight in gold soon :-).) What about other open-source
SDNs than Contrail, which use software routers at the edge? Is
anyone using Midonet or OpenDaylight?

I personally think that DVR, DragonFlow, or the next integrated Neutron
solution are the ways to go in OpenStack, not some external plugins. But
DVR, as I find out, has its quirks. Which could be solved by introducing a
few more configuration options.

Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Tomas Vondra
Robert Starmer <robert@...> writes:

> 
> 
> I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic non Floating IP tied NAT) on a central network node just
like in the non-DVR model.  The extra address that is consumed is allocated
to a FIP specific namespace when a DVR is made responsible for supporting a
tenant's floating IP, and the question then is: Why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since it's
the allocation of a tenant requested floating IP to a tenant VM that
triggers the DVR to implement the FIP namespace function in the first place. 
> In both the Paris and Vancouver DVR presentations "We add distributed FIP
support at the expense of an _extra_ external address per device, but the
FIP namespace is then shared across all tenants". Given that there is no
"external" interface for the DVR interface for floating IPs until at least
one tenant allocates one, a new namespace needs to be created to act as the
termination for the tenant's floating IP.  A normal tenant router would have
an address allocated already, because it has a port allocated onto the
external network (this is the address that SNAT overloads for those
non-floating associated machines that lets them communicate with the
Internet at large), but in this case, no such interface exists until the
namespace is created and attached to the external network, so when the
floating IP port is created, an address is simply allocated from the
External (e.g. floating) pool for the interface.  And _then_ the floating IP
is allocated to the namespace as well. The fact that this extra address is
used is a part of the normal port allocation process (and default
port-security anti-spoofing processes) that exist already, and simplifies
the process of moving tenant allocated floating addresses around (the port
state for the floating namespace doesn't change, it keeps its special mac
and address regardless of whatever else goes on). So don't think of it as a
Floating IP allocated to the DVR, it's just the DVR's local representative
for its port on the external network.  Tenant addresses are then "on top"
of this setup.
> 
> So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference, just adding my own additional layer of "this is how it should be"
 on top.
> 
> Robert
> 

Dear Robert,
thanks for clarifying why there has to always be an address in the FIP
namespace. But it still feels like someone left it there from an alpha
phase. If I need AN address, I would use a worthless one like a 169.254
link-local one, not a public IP. There are already link-local addresses in
use in Neutron... somewhere :-).

The IP consumption that bothers me more than this is that of the Router
External Interfaces, which are all on the network nodes and do SNAT for
every tenant, separately. I would like the centralized SNAT of DVR to be
more ..centralized. The quickest way, I think, would be to allocate these
from a different pool than the Floating IPs and let my datacenter SNAT take
care of them.

To your earlier post, some of the Neutron provider implementations are much
more L3-oriented than the default implementation. But I have, for example,
scratched Contrail out of my installation, because the added cost of 2
Juniper or Cisco routers does not balance out the benefits, IMHO. And it is
another complex system besides OpenStack consisting of about 6 components
written in 4 programming languages that you have to take care of.

Would Contrail use less IP addresses per tenant and node? (They will be
worth their weight in gold soon :-).) What about other open-source
SDNs than Contrail, which use software routers at the edge? Is
anyone using Midonet or OpenDaylight?

I personally think that DVR, DragonFlow, or the next integrated Neutron
solution are the ways to go in OpenStack, not some external plugins. But
DVR, as I find out, has its quirks. Which could be solved by introducing a
few more configuration options. I like the way it can use L2 and provider
networks to integrate with the rest of the datacenter. No BGP L3VPN tunnels,
which cannot be done in open-source.

Tomas

> 
> On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M
<kevin.fox-mijbx5db...@public.gmane.org> wrote:
> 
> 
> 
> 
> 
> But there already is a second external address, the fip address that's
nating. Is there a double nat? I'm a little confused.
> Thanks,
> Kevin
> From: Robert Starmer [robert-irot69hj...@public.gmane.org]Sent: Wednesday,
January 27, 2016 3:20 PMTo: Carl BaldwinCc: OpenStack Operators; Tomas
VondraSubject: Re: [Openstack-operators] DVR and public IP consumption
> 
> 
> 
> 
> You can't get rid of the "External" address as it's used to direct return
traffic to the right router node.

Re: [Openstack-operators] DVR and public IP consumption

2016-01-28 Thread Fox, Kevin M
Ah. so it was done just to make it simple to reuse lots of existing code to get 
DVR working quickly and thus a current requirement, but there is nothing 
stopping further enhancements to be made to eliminate it in the future?

What about a step in between what's there now, and eliminating it completely. 
If the router code expects there to be an ip allocated for it on every compute 
node, could you share one external ip between all the compute node routers? 
Since the network will never actually use it, it probably doesn't matter if it's
conflicting, but it would still allow the existing code to function the way it
always has, greatly simplifying implementation?

Thanks,
Kevin


From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 8:34 PM
To: Fox, Kevin M
Cc: Carl Baldwin; OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

I think I've created a bit of confusion, because I forgot that DVR still does 
SNAT (generic non Floating IP tied NAT) on a central network node just like in 
the non-DVR model.  The extra address that is consumed is allocated to a FIP 
specific namespace when a DVR is made responsible for supporting a tenant's 
floating IP, and the question then is: Why do I need this _extra_ external 
address from the floating IP pool for the FIP namespace, since it's the 
allocation of a tenant requested floating IP to a tenant VM that triggers the 
DVR to implement the FIP namespace function in the first place.

In both the Paris and Vancouver DVR presentations "We add distributed FIP 
support at the expense of an _extra_ external address per device, but the FIP 
namespace is then shared across all tenants". Given that there is no "external" 
interface for the DVR interface for floating IPs until at least one tenant 
allocates one, a new namespace needs to be created to act as the termination 
for the tenant's floating IP.  A normal tenant router would have an address 
allocated already, because it has a port allocated onto the external network 
(this is the address that SNAT overloads for those non-floating associated 
machines that lets them communicate with the Internet at large), but in this 
case, no such interface exists until the namespace is created and attached to 
the external network, so when the floating IP port is created, an address is 
simply allocated from the External (e.g. floating) pool for the interface.  And 
_then_ the floating IP is allocated to the namespace as well. The fact that 
this extra address is used is a part of the normal port allocation process (and 
default port-security anti-spoofing processes) that exist already, and 
simplifies the process of moving tenant allocated floating addresses around 
(the port state for the floating namespace doesn't change, it keeps its
special mac and address regardless of whatever else goes on). So don't think
of it as a Floating IP allocated to the DVR, it's just the DVR's local
representative for its port on the external network.  Tenant addresses are
then "on top" of this setup.

So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere network
mortals, yes.  But that's how I see it. And sorry for the SNAT reference, just 
adding my own additional layer of "this is how it should be"  on top.

Robert

On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
But there already is a second external address, the fip address that's nating. 
Is there a double nat? I'm a little confused.

Thanks,
Kevin

From: Robert Starmer [rob...@kumul.us<mailto:rob...@kumul.us>]
Sent: Wednesday, January 27, 2016 3:20 PM
To: Carl Baldwin
Cc: OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

You can't get rid of the "External" address as it's used to direct return 
traffic to the right router node.  DVR as implemented is really just a local 
NAT gateway per physical compute node.  The outside of your NAT needs to be 
publicly unique, so it needs its own address.  Some SDN solutions can provide
a truly distributed router model, because they globally know the inside state 
of the NAT environment, and can forward packets back to the internal source 
properly, regardless of which distributed forwarder receives the incoming 
"external" packets.

If the number of external addresses consumed is an issue, you may consider the 
dual gateway HA model instead of DVR.  This uses classic multi-router models 
where one router takes on the task of forwarding packets, and the other device
just acts as a backup.  You do still have a software bottleneck at your router, 
unless you then also use one of the plugins that supports hardware L3 (last I 
checked, Juniper, Arista, Cisco, etc. all provide an L3 plugin that is HA 
capable), but you only burn 3 External addresses for the router (and 3 internal
network addresses per tenant side interface if that matters).

Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Fox, Kevin M
But there already is a second external address, the FIP address that's NATing. 
Is there a double NAT? I'm a little confused.

Thanks,
Kevin

From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 3:20 PM
To: Carl Baldwin
Cc: OpenStack Operators; Tomas Vondra
Subject: Re: [Openstack-operators] DVR and public IP consumption

You can't get rid of the "External" address, as it's used to direct return 
traffic to the right router node.  DVR as implemented is really just a local 
NAT gateway per physical compute node.  The outside of your NAT needs to be 
publicly unique, so it needs its own address.  Some SDN solutions can provide 
a truly distributed router model, because they globally know the inside state 
of the NAT environment and can forward packets back to the internal source 
properly, regardless of which distributed forwarder receives the incoming 
"external" packets.

If the number of external addresses consumed is an issue, you may consider the 
dual-gateway HA model instead of DVR.  This uses classic multi-router models 
where one router takes on the task of forwarding packets and the other device 
just acts as a backup.  You do still have a software bottleneck at your router, 
unless you also use one of the plugins that supports hardware L3 (last I 
checked, Juniper, Arista, Cisco, etc. all provide an L3 plugin that is HA 
capable), but you only burn 3 external addresses for the router (and 3 internal 
network addresses per tenant-side interface, if that matters).
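As a rough back-of-the-envelope comparison of the two models above (a sketch only: the counts are taken from the figures mentioned in this thread, and the scenario numbers are invented for illustration):

```python
def dvr_external_ips(n_routers, n_dvr_compute_hosts):
    # One SNAT/gateway IP per virtual router, plus one FIP-namespace
    # gateway IP per compute host running an L3 agent in DVR mode.
    # Tenant floating IPs themselves are counted separately.
    return n_routers + n_dvr_compute_hosts


def ha_external_ips(n_routers):
    # The ~3 external addresses per router cited above (an assumption
    # taken from the message, e.g. active, standby, and shared address).
    return 3 * n_routers


# Hypothetical cloud: 100 tenant routers across 40 compute nodes.
print(dvr_external_ips(100, 40))  # 140
print(ha_external_ips(100))       # 300
```

Which model burns fewer addresses clearly depends on the ratio of routers to compute hosts, which is why small public deployers with many tiny tenants feel this differently than large private clouds.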

Hope that clarifies a bit,
Robert

On Tue, Jan 26, 2016 at 4:14 AM, Carl Baldwin 
<c...@ecbaldwin.net<mailto:c...@ecbaldwin.net>> wrote:
On Thu, Jan 14, 2016 at 2:45 AM, Tomas Vondra 
<von...@czech-itc.cz<mailto:von...@czech-itc.cz>> wrote:
> Hi!
> I have just deployed an OpenStack Kilo installation with DVR and expected
> that it will consume one Public IP per network node as per
> http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/,
> but it still eats one per virtual Router.
> What is the correct behavior?

Regardless of DVR, a Neutron router burns one IP per virtual router
which it uses to SNAT traffic from instances that do not have floating
IPs.

When you use DVR, an additional IP is consumed for each compute host
running an L3 agent in DVR mode.  There has been some discussion about
how this can be eliminated but no action has been taken to do this.
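Put as arithmetic (a sketch; it assumes, per the description above, that every compute host runs an L3 agent in DVR mode, and the scenario numbers are invented), the public-IP bill for a DVR deployment is roughly:

```python
def public_ips_consumed(n_routers, n_dvr_compute_hosts, n_floating_ips):
    # 1 SNAT IP per virtual router (burned with or without DVR)
    # + 1 FIP-namespace IP per compute host with a DVR-mode L3 agent
    # + the tenant floating IPs themselves.
    return n_routers + n_dvr_compute_hosts + n_floating_ips


# Hypothetical small public cloud: 200 tenant routers, 30 compute
# nodes, 150 floating IPs, out of a single /22 (1024 addresses).
used = public_ips_consumed(200, 30, 150)
print(used)         # 380
print(1024 - used)  # 644 addresses left for growth
```

The per-router term dominates for deployers with many small tenants, which is exactly the waste discussed in this thread.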

> Otherwise, it works as a DVR should according to documentation. There are
> router namespaces at both compute and network nodes, snat namespaces at the
> network nodes and fip namespaces at the compute nodes. Every router has a
> router_interface_distributed and a router_centralized_snat with private IPs,
> however the router_gateway has a public IP, which I would like to get rid of
> to increase density.

I'm not sure if it is possible to avoid burning these IPs at this
time.  Maybe someone else can chime in with more detail.

Carl

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Robert Starmer
I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic, non-floating-IP NAT) on a central network node, just
like in the non-DVR model.  The extra address that is consumed is allocated
to a FIP-specific namespace when a DVR is made responsible for supporting a
tenant's floating IP.  The question then is: why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since
it's the allocation of a tenant-requested floating IP to a tenant VM that
triggers the DVR to implement the FIP namespace function in the first
place?

In both the Paris and Vancouver DVR presentations it was noted that "we add
distributed FIP support at the expense of an _extra_ external address per
device, but the FIP namespace is then shared across all tenants". Given that
there is no "external" interface for the DVR's floating IP function until at
least one tenant allocates a floating IP, a new namespace needs to be created
to act as the termination point for the tenant's floating IP.  A normal tenant
router would already have an address allocated, because it has a port on the
external network (this is the address that SNAT overloads so that machines
without associated floating IPs can still communicate with the Internet at
large). In this case, though, no such interface exists until the namespace is
created and attached to the external network, so when the floating IP port is
created, an address is simply allocated from the external (e.g. floating) pool
for the interface.  And _then_ the floating IP is allocated to the namespace
as well.

The fact that this extra address is used is part of the normal port allocation
process (and the default port-security anti-spoofing processes) that exists
already, and it simplifies moving tenant-allocated floating addresses around:
the port state for the floating namespace doesn't change, and it keeps its
special MAC and address regardless of whatever else goes on. So don't think of
it as a floating IP allocated to the DVR; it's just the DVR's local
representative for its port on the external network.  Tenant addresses are
then "on top" of this setup.
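To make that allocation sequence concrete, here is a small toy model (pure Python, no Neutron involved; the pool class, addresses, and variable names are all illustrative) of why the FIP namespace consumes one pool address before any tenant floating IP is even associated:

```python
class ExternalPool:
    """Toy allocation pool for an external subnet (illustrative only)."""

    def __init__(self, addresses):
        self.free = list(addresses)

    def allocate(self):
        # Hand out the next free address, as normal port creation would.
        return self.free.pop(0)


# TEST-NET-3 range used purely as example data.
pool = ExternalPool([f"203.0.113.{i}" for i in range(2, 255)])

# Step 1: the first floating IP handled on a compute node forces creation
# of the FIP namespace, whose port on the external network gets its own
# address through the ordinary port-allocation path (the "extra" address).
fip_namespace_gateway = pool.allocate()

# Step 2: only then is the tenant's floating IP itself allocated and
# routed via that namespace.
tenant_floating_ip = pool.allocate()

print(fip_namespace_gateway)  # 203.0.113.2
print(tenant_floating_ip)     # 203.0.113.3
```

Subsequent floating IPs on the same host reuse the existing namespace, so the "extra" address is paid once per compute node, not once per floating IP.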

So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference; I was just adding my own additional layer of "this is how it
should be" on top.

Robert

On Wed, Jan 27, 2016 at 3:33 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:

> But there already is a second external address, the fip address that's
> nating. Is there a double nat? I'm a little confused.
>
> Thanks,
> Kevin
> --
> *From:* Robert Starmer [rob...@kumul.us]
> *Sent:* Wednesday, January 27, 2016 3:20 PM
> *To:* Carl Baldwin
> *Cc:* OpenStack Operators; Tomas Vondra
> *Subject:* Re: [Openstack-operators] DVR and public IP consumption
>
> You can't get rid of the "External" address as it's used to direct return
> traffic to the right router node.  DVR as implemented is really just a
> local NAT gateway per physical compute node.  The outside of your NAT needs
> to be publicly unique, so it needs its own address.  Some SDN solutions
> can provide a truly distributed router model, because they globally know
> the inside state of the NAT environment, and can forward packets back to
> the internal source properly, regardless of which distributed forwarder
> receives the incoming "external" packets.
>
> If the number of external addresses consumed is an issue, you may consider
> the dual gateway HA model instead of DVR.  This uses classic multi-router
> models where one router takes on the task of forwarding packets, and the
> other device just acts as a backup.  You do still have a software
> bottleneck at your router, unless you then also use one of the plugins that
> supports hardware L3 (last I checked, Juniper, Arista, Cisco, etc. all
> provide an L3 plugin that is HA capable), but you only burn 3 External
> addresses for the router (and 3 internal network addresses per tenant side
> interface if that matters).
>
> Hope that clarifies a bit,
> Robert
>
> On Tue, Jan 26, 2016 at 4:14 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
>
>> On Thu, Jan 14, 2016 at 2:45 AM, Tomas Vondra <von...@czech-itc.cz>
>> wrote:
>> > Hi!
>> > I have just deployed an OpenStack Kilo installation with DVR and
>> expected
>> > that it will consume one Public IP per network node as per
>> >
>> http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/
>> ,
>> > but it still eats one per virtual Router.
>> > What is the correct behavior?
>>
>> Regardless of DVR, a Neutron router burns one IP per virtual router
>> which it uses to SNAT traffic from instances that do not have floating
>> IPs.

Re: [Openstack-operators] DVR and public IP consumption

2016-01-26 Thread Carl Baldwin
On Thu, Jan 14, 2016 at 2:45 AM, Tomas Vondra  wrote:
> Hi!
> I have just deployed an OpenStack Kilo installation with DVR and expected
> that it will consume one Public IP per network node as per
> http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/,
> but it still eats one per virtual Router.
> What is the correct behavior?

Regardless of DVR, a Neutron router burns one IP per virtual router
which it uses to SNAT traffic from instances that do not have floating
IPs.

When you use DVR, an additional IP is consumed for each compute host
running an L3 agent in DVR mode.  There has been some discussion about
how this can be eliminated but no action has been taken to do this.

> Otherwise, it works as a DVR should according to documentation. There are
> router namespaces at both compute and network nodes, snat namespaces at the
> network nodes and fip namespaces at the compute nodes. Every router has a
> router_interface_distributed and a router_centralized_snat with private IPs,
> however the router_gateway has a public IP, which I would like to get rid of
> to increase density.

I'm not sure if it is possible to avoid burning these IPs at this
time.  Maybe someone else can chime in with more detail.

Carl

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators