Re: [Openstack-operators] Routed provider networks...

2017-05-26 Thread Saverio Proto
> We use provider networks to essentially take neutron-l3 out of the equation. 
> Generally they are shared on all compute hosts, but usually there aren't huge 
> numbers of computes.

Hello,

We have a datacenter that is completely L3, routed all the way to the host.

To implement the provider networks we are using the neutron-l2gw plugin.

Basically, there is a switch on which we bridge a physical port attached
to the provider network with a virtual VXLAN tenant network. The switch
configuration is managed by Neutron through the l2gw plugin.
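
If it helps, this is roughly what that boils down to as API calls. A minimal
sketch, assuming the networking-l2gw REST endpoints; the endpoint URL, token,
switch name, interface, network ID and VLAN tag are placeholders rather than
our real configuration:

import requests

NEUTRON = "http://controller:9696/v2.0"          # placeholder Neutron endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",        # placeholder admin token
           "Content-Type": "application/json"}

# 1) Register the switch and the physical port attached to the provider network.
gw = requests.post(
    f"{NEUTRON}/l2-gateways",
    headers=HEADERS,
    json={"l2_gateway": {
        "name": "provider-gw",
        "devices": [{"device_name": "tor-switch-1",
                     "interfaces": [{"name": "eth0"}]}],
    }},
).json()["l2_gateway"]

# 2) Bridge that port with the VXLAN tenant network; segmentation_id is the
#    VLAN tag carried on the physical side.
requests.post(
    f"{NEUTRON}/l2-gateway-connections",
    headers=HEADERS,
    json={"l2_gateway_connection": {
        "l2_gateway_id": gw["id"],
        "network_id": "VXLAN_TENANT_NET_UUID",    # placeholder
        "segmentation_id": 100,                   # placeholder VLAN tag
    }},
)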

Cheers,

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Routed provider networks...

2017-05-23 Thread Curtis
On Mon, May 22, 2017 at 1:47 PM, Chris Marino  wrote:

> Hello operators, I will be talking about the new routed provider network
> features in OpenStack at a Meetup next week and would like to get a better
> sense of how provider networks are currently being used and if anyone has
> deployed routed provider networks?
>

We use provider networks to essentially take neutron-l3 out of the
equation. Generally they are shared on all compute hosts, but usually there
aren't huge numbers of computes.

I have not deployed routed provider networks yet, but I think the premise
is great, and the /23 per rack is probably what we would do with routed
provider networks for multi-rack deployments. Combining routed provider
networks with Cells V2 (once Cells V2 is completely done, and assuming the
two work well together) could be quite powerful IMHO.
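
A rough sketch of what that per-rack /23 layout could look like through
openstacksdk is below; the cloud name, physnet names, VLAN ID and CIDRs are
placeholders, and I haven't tried this against Cells V2:

import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# Network created with the first rack's provider details; the remaining
# segments are added explicitly afterwards.
net = conn.network.create_network(
    name="provider-routed",
    provider_network_type="vlan",
    provider_physical_network="physnet-rack1",
    provider_segmentation_id=2016,
)

# One /23 per rack; physnet names, VLAN ID and CIDRs are made up.
racks = {
    "physnet-rack1": "10.0.0.0/23",
    "physnet-rack2": "10.0.2.0/23",
    "physnet-rack3": "10.0.4.0/23",
}

segments = {s.physical_network: s
            for s in conn.network.segments(network_id=net.id)}
for physnet in racks:
    if physnet not in segments:
        segments[physnet] = conn.network.create_segment(
            network_id=net.id,
            network_type="vlan",
            physical_network=physnet,
            segmentation_id=2016,
        )

# Bind a /23 subnet to each segment, with the ToR (first host address) as gateway.
for physnet, cidr in racks.items():
    conn.network.create_subnet(
        network_id=net.id,
        segment_id=segments[physnet].id,
        ip_version=4,
        cidr=cidr,
        gateway_ip=cidr.split("/")[0].rsplit(".", 1)[0] + ".1",
    )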


>
> A typical L2 provider network is deployed as VLANs to every host. But
> curious to know how many hosts or VMs an operator would allow on this
> network before you wanted to split into segments? Would you split hosts
> between VLANs, or trunk the VLANs to all hosts? How do you handle
> scheduling VMs across two provider networks?
>
> If you were to go with L3 provider networks, would it be L3 to the ToR, or
> L3 to the host?
>
> Are the new routed provider network features useful in their current form?
>
> Any experience you can share would be very helpful.
> CM
>
>
>
>
>


-- 
Blog: serverascode.com


Re: [Openstack-operators] Routed provider networks...

2017-05-23 Thread Chris Marino
Kevin, I should have been more clear.

For the specific operator that is running L3 to the host with only a few /20
blocks, dynamic routing is absolutely necessary.

The /16 scenario you describe is totally fine without it.

CM



On Tue, May 23, 2017 at 2:40 PM, Kevin Benton  wrote:

> >Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
> addresses are scarce, even inside the DC.
>
> I just described a 65 thousand VM topology and it used a /16. Dynamic
> routing is not necessary or even helpful in this scenario if you plan on
> ever running close to your max server density.
>
> Routed networks allow you to size your subnets specifically to the
> maximum number of VMs you can support in a segment, so there is very little
> IP waste once you actually start to use your servers to run VMs.
>
> On Tue, May 23, 2017 at 6:38 AM, Chris Marino  wrote:
>
>> On Mon, May 22, 2017 at 9:12 PM, Kevin Benton  wrote:
>>
>>> The operators that were asking for the spec were using private IP space
>>> and that is probably going to be the most common use case for routed
>>> networks. Splitting a /21 up across the entire data center isn't really
>>> something you would want to do because you would run out of IPs quickly
>>> like you mentioned.
>>>
>>> The use case for routed networks is almost exactly like your Romana project.
>>> For example, you have a large chunk of IPs (e.g. 10.0.0.0/16) and
>>> you've set up the infrastructure so each rack gets a /23 with the ToR as the
>>> gateway which would buy you 509 VMs across 128 racks.
>>>
>>
>> Yes, it is. That's what brought me back to this. Working with an operator
>> that's using L2 provider networks today, but will bring L3 to host in their
>> new design.
>>
>> Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
>> addresses are scarce, even inside the DC. VRFs and/or NAT are just not an
>> option.
>>
>> CM
>>
>>
>>>
>>>
>>> On May 22, 2017 2:53 PM, "Chris Marino"  wrote:
>>>
>>> Thanks Jon, very helpful.
>>>
>>> I think a more common use case for provider networks (in enterprise,
>>> AFAIK) is that they'd have a small number of /20 or /21 networks (VLANs)
>>> that they would trunk to all hosts. The /21s are part of the larger
>>> datacenter network with segment firewalls and access to other datacenter
>>> resources (no NAT). Each functional area would get their own network (e.g.
>>> QA, Prod, Dev, Test, etc.) but users would have access to only certain
>>> networks.
>>>
>>> For various reasons, they're moving to spine/leaf L3 networks and they
>>> want to use the same provider network CIDRs with the new L3 network. While
>>> technically this is covered by the use case described in the spec,
>>> splitting a /21 into segments (i.e. one for each rack/ToR) severely limits
>>> the scheduler (since each rack only gets a part of the whole /21).
>>>
>>> This can be solved with route advertisement/distribution and/or IPAM
>>> coordination w/Nova, but this isn't possible today. Which brings me back to
>>> my earlier question, how useful are routed provider networks?
>>>
>>> CM
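
To put a number on the scheduler limitation above: splitting a /21 evenly
across racks leaves each rack only a small fixed slice of the address space.
A quick sketch with Python's ipaddress module (the rack count is hypothetical):

import ipaddress

provider = ipaddress.ip_network("10.10.0.0/21")     # placeholder /21 provider range
per_rack = list(provider.subnets(new_prefix=24))    # one slice per rack

usable = per_rack[0].num_addresses - 3              # network, broadcast, gateway reserved
print(len(per_rack), usable)
# -> 8 253   (a rack with free hypervisor capacity still can't take more VMs
#             once its 253-address slice is used up)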
>>>
>>> On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx 
>>> wrote:
>>>

 Not sure if this is what you're looking for but...

 For my private cloud in research environment we have a public provider
 network available to all projects.

 This is externally routed and has basically been in the same config
 since Folsom (currently we're up to Mitaka).  It provides public IPv4
 addresses. DHCP is done in Neutron (of course); the lower portion of
 the allocated subnet is excluded from the dynamic range.  We allow
 users to register DNS names in this range (through pre-existing
 custom, external IPAM tools) and to specify the fixed IP address when
 launching VMs.

 This network typically has 1k VMs running. We've assigned it a /18,
 which is obviously overkill.

 A few projects also have provider networks plumbed in to bridge their
 legacy physical networks into OpenStack.  For these there's no dynamic
 range and users must specify a fixed IP; these are generally considered
 "a bad idea" and were used to facilitate dumping VMs from old Xen
 infrastructures into OpenStack with minimal changes.

 These are old patterns I wouldn't necessarily suggest anyone
 replicate, but they are the truth of my world...

 -Jon
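
The allocation-pool pattern Jon describes, with the lower portion of the
subnet kept out of the dynamic range so those addresses are only handed out
as fixed IPs, would look roughly like this through openstacksdk; the network
name, CIDR and pool boundaries are invented for illustration:

import openstack

conn = openstack.connect(cloud="mycloud")           # placeholder clouds.yaml entry
net = conn.network.find_network("public-provider")  # placeholder network name

conn.network.create_subnet(
    network_id=net.id,
    name="public-v4",
    ip_version=4,
    cidr="198.51.100.0/24",        # invented range, stands in for the real /18
    gateway_ip="198.51.100.1",
    is_dhcp_enabled=True,
    # .2 - .99 stay outside the pool, so they are only handed out when a user
    # asks for that fixed IP at boot; dynamic allocation starts at .100.
    allocation_pools=[{"start": "198.51.100.100", "end": "198.51.100.254"}],
)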

 On Mon, May 22, 2017 at 12:47:01PM -0700, Chris Marino wrote:
 :Hello operators, I will be talking about the new routed provider network
 :features in OpenStack at a Meetup next week and would like to get a better
 :sense of how provider networks are currently being used and if anyone has
 :deployed routed provider networks?

Re: [Openstack-operators] Routed provider networks...

2017-05-23 Thread Kevin Benton
>Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
addresses are scarce, even inside the DC.

I just described a 65 thousand VM topology and it used a /16. Dynamic
routing is not necessary or even helpful in this scenario if you plan on
ever running close to your max server density.

Routed networks allow you to size your subnets specifically to the maximum
number of VMs you can support in a segment, so there is very little IP
waste once you actually start to use your servers to run VMs.
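
For concreteness, the arithmetic behind those numbers, checked with Python's
stdlib ipaddress module:

import ipaddress

block = ipaddress.ip_network("10.0.0.0/16")
racks = list(block.subnets(new_prefix=23))    # one /23 segment per rack

usable = racks[0].num_addresses - 3           # minus network, broadcast, ToR gateway
print(len(racks), usable, len(racks) * usable)
# -> 128 509 65152   (the ~65 thousand VMs mentioned above)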

On Tue, May 23, 2017 at 6:38 AM, Chris Marino  wrote:

> On Mon, May 22, 2017 at 9:12 PM, Kevin Benton  wrote:
>
>> The operators that were asking for the spec were using private IP space
>> and that is probably going to be the most common use case for routed
>> networks. Splitting a /21 up across the entire data center isn't really
>> something you would want to do because you would run out of IPs quickly
>> like you mentioned.
>>
>> The use case for routed networks is almost exactly like your Romana project.
>> For example, you have a large chunk of IPs (e.g. 10.0.0.0/16) and you've
>> set up the infrastructure so each rack gets a /23 with the ToR as the
>> gateway which would buy you 509 VMs across 128 racks.
>>
>
> Yes, it is. That's what brought me back to this. Working with an operator
> that's using L2 provider networks today, but will bring L3 to host in their
> new design.
>
> Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
> addresses are scarce, even inside the DC. VRFs and/or NAT are just not an
> option.
>
> CM
>
>
>>
>>
>> On May 22, 2017 2:53 PM, "Chris Marino"  wrote:
>>
>> Thanks Jon, very helpful.
>>
>> I think a more common use case for provider networks (in enterprise,
>> AFAIK) is that they'd have a small number of /20 or /21 networks (VLANs)
>> that they would trunk to all hosts. The /21s are part of the larger
>> datacenter network with segment firewalls and access to other datacenter
>> resources (no NAT). Each functional area would get their own network (e.g.
>> QA, Prod, Dev, Test, etc.) but users would have access to only certain
>> networks.
>>
>> For various reasons, they're moving to spine/leaf L3 networks and they
>> want to use the same provider network CIDRs with the new L3 network. While
>> technically this is covered by the use case described in the spec,
>> splitting a /21 into segments (i.e. one for each rack/ToR) severely limits
>> the scheduler (since each rack only gets a part of the whole /21).
>>
>> This can be solved with route advertisement/distribution and/or IPAM
>> coordination w/Nova, but this isn't possible today. Which brings me back to
>> my earlier question, how useful are routed provider networks?
>>
>> CM
>>
>> On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx 
>> wrote:
>>
>>>
>>> Not sure if this is what you're looking for but...
>>>
>>> For my private cloud in research environment we have a public provider
>>> network available to all projects.
>>>
>>> This is externally routed and has basically been in the same config
>>> since Folsom (currently we're up to Mitaka).  It provides public IPv4
>>> addresses. DHCP is done in Neutron (of course); the lower portion of
>>> the allocated subnet is excluded from the dynamic range.  We allow
>>> users to register DNS names in this range (through pre-existing
>>> custom, external IPAM tools) and to specify the fixed IP address when
>>> launching VMs.
>>>
>>> This network typically has 1k VMs running. We've assigned it a /18,
>>> which is obviously overkill.
>>>
>>> A few projects also have provider networks plumbed in to bridge their
>>> legacy physical networks into OpenStack.  For these there's no dynamic
>>> range and users must specify a fixed IP; these are generally considered
>>> "a bad idea" and were used to facilitate dumping VMs from old Xen
>>> infrastructures into OpenStack with minimal changes.
>>>
>>> These are old patterns I wouldn't necessarily suggest anyone
>>> replicate, but they are the truth of my world...
>>>
>>> -Jon
>>>
>>> On Mon, May 22, 2017 at 12:47:01PM -0700, Chris Marino wrote:
>>> :Hello operators, I will be talking about the new routed provider network
>>> :features in OpenStack at a Meetup
>>> :next week and would
>>> :like to get a better sense of how provider networks are currently being
>>> :used and if anyone has deployed routed provider networks?
>>> :
>>> :A typical L2 provider network is deployed as VLANs to every host. But
>>> :curious to know how many hosts or VMs an operator would allow on this
>>> :network before you wanted to split into segments? Would you split hosts
>>> :between VLANs, or trunk the VLANs to all hosts? How do you handle
>>> :scheduling VMs across two provider networks?
>>> :
>>> :If you were to go with L3 provider networks, would it be L3 to the ToR, or
>>> :L3 to the host?

Re: [Openstack-operators] Routed provider networks...

2017-05-23 Thread Chris Marino
On Mon, May 22, 2017 at 9:12 PM, Kevin Benton  wrote:

> The operators that were asking for the spec were using private IP space
> and that is probably going to be the most common use case for routed
> networks. Splitting a /21 up across the entire data center isn't really
> something you would want to do because you would run out of IPs quickly
> like you mentioned.
>
> The use case for routed networks is almost exactly like your Romana project.
> For example, you have a large chunk of IPs (e.g. 10.0.0.0/16) and you've
> set up the infrastructure so each rack gets a /23 with the ToR as the
> gateway which would buy you 509 VMs across 128 racks.
>

Yes, it is. That's what brought me back to this. Working with an operator
that's using L2 provider networks today, but will bring L3 to host in their
new design.

Dynamic routing is absolutely necessary, though. Large blocks of RFC 1918
addresses are scarce, even inside the DC. VRFs and/or NAT are just not an
option.

CM


>
>
> On May 22, 2017 2:53 PM, "Chris Marino"  wrote:
>
> Thanks Jon, very helpful.
>
> I think a more common use case for provider networks (in enterprise,
> AFAIK) is that they'd have a small number of /20 or /21 networks (VLANs)
> that they would trunk to all hosts. The /21s are part of the larger
> datacenter network with segment firewalls and access to other datacenter
> resources (no NAT). Each functional area would get their own network (e.g.
> QA, Prod, Dev, Test, etc.) but users would have access to only certain
> networks.
>
> For various reasons, they're moving to spine/leaf L3 networks and they
> want to use the same provider network CIDRs with the new L3 network. While
> technically this is covered by the use case described in the spec,
> splitting a /21 into segments (i.e. one for each rack/ToR) severely limits
> the scheduler (since each rack only gets a part of the whole /21).
>
> This can be solved with route advertisement/distribution and/or IPAM
> coordination w/Nova, but this isn't possible today. Which brings me back to
> my earlier question, how useful are routed provider networks?
>
> CM
>
> On Mon, May 22, 2017 at 1:08 PM, Jonathan Proulx 
> wrote:
>
>>
>> Not sure if this is what you're looking for but...
>>
>> For my private cloud in research environment we have a public provider
>> network available to all projects.
>>
>> This is externally routed and has basically been in the same config
>> since Folsom (currently we're up to Mitaka).  It provides public IPv4
>> addresses. DHCP is done in Neutron (of course); the lower portion of
>> the allocated subnet is excluded from the dynamic range.  We allow
>> users to register DNS names in this range (through pre-existing
>> custom, external IPAM tools) and to specify the fixed IP address when
>> launching VMs.
>>
>> This network typically has 1k VMs running. We've assigned it a /18,
>> which is obviously overkill.
>>
>> A few projects also have provider networks plumbed in to bridge their
>> legacy physical networks into OpenStack.  For these there's no dynamic
>> range and users must specify a fixed IP; these are generally considered
>> "a bad idea" and were used to facilitate dumping VMs from old Xen
>> infrastructures into OpenStack with minimal changes.
>>
>> These are old patterns I wouldn't necessarily suggest anyone
>> replicate, but they are the truth of my world...
>>
>> -Jon
>>
>> On Mon, May 22, 2017 at 12:47:01PM -0700, Chris Marino wrote:
>> :Hello operators, I will be talking about the new routed provider network
>> :features in OpenStack at a Meetup
>> :next week and would
>> :like to get a better sense of how provider networks are currently being
>> :used and if anyone has deployed routed provider networks?
>> :
>> :A typical L2 provider network is deployed as VLANs to every host. But
>> :curious to know how many hosts or VMs an operator would allow on this
>> :network before you wanted to split into segments? Would you split hosts
>> :between VLANs, or trunk the VLANs to all hosts? How do you handle
>> :scheduling VMs across two provider networks?
>> :
>> :If you were to go with L3 provider networks, would it be L3 to the ToR, or
>> :L3 to the host?
>> :
>> :Are the new routed provider network features useful in their current
>> :form?
>> :
>> :Any experience you can share would be very helpful.
>> :CM
>> :
>> :
>>
>>
>>
>> --
>>
>
>
>
>
>