> On 17 Jun 2015, at 10:56 am, Armando M. <arma...@gmail.com> wrote:
> 
> 
> 
> On 16 June 2015 at 17:31, Sam Morrison <sorri...@gmail.com> wrote:
> We at NeCTAR are starting the transition from nova-net to neutron, and
> neutron almost does what we want.
> 
> We have 10 "public" networks and 10 "service" networks, and depending on
> which compute node you land on, you get attached to one of them.
> 
> In neutron-speak, we have multiple shared, externally routed provider
> networks. We don't have any tenant networks or any other fancy stuff yet.
> I've currently set this up by creating 10 networks, each with a
> corresponding subnet, e.g. public-1, public-2, public-3 … and service-1,
> service-2, service-3, and so on.
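> 
> For concreteness, each pair is created along these lines (a sketch using
> python-neutronclient; the names, VLAN IDs and CIDRs are placeholders and
> credentials are elided):
> 
>     from neutronclient.v2_0 import client
> 
>     neutron = client.Client(username='admin', password='...',
>                             tenant_name='admin',
>                             auth_url='http://keystone:5000/v2.0')
> 
>     # One shared, externally routed provider network per physical segment.
>     net = neutron.create_network({'network': {
>         'name': 'public-1',
>         'shared': True,
>         'router:external': True,
>         'provider:network_type': 'vlan',
>         'provider:physical_network': 'physnet-public-1',  # placeholder
>         'provider:segmentation_id': 101,                  # placeholder
>     }})['network']
> 
>     # ...and its corresponding subnet.
>     neutron.create_subnet({'subnet': {
>         'network_id': net['id'],
>         'name': 'public-1-subnet',
>         'ip_version': 4,
>         'cidr': '203.0.113.0/24',  # placeholder
>     }})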
> 
> In nova we have made a slight change to allocate_for_instance [1], whereby
> each compute node has designated, hardcoded network IDs for the public and
> service networks it is physically attached to.
> We have also changed the nova API so that users can't select a network, and
> the neutron endpoint is not registered in keystone.
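> 
> The gist of the nova change (an illustrative sketch only -- the real patch
> is in [1] below; the names and UUIDs here are made up):
> 
>     # Map each compute host to the provider networks it is physically
>     # attached to; allocate_for_instance consults this instead of a
>     # user-supplied network.
>     HOST_NETWORKS = {
>         'compute-01': {'public': 'PUB1_NET_UUID', 'service': 'SVC1_NET_UUID'},
>         'compute-02': {'public': 'PUB2_NET_UUID', 'service': 'SVC2_NET_UUID'},
>     }
> 
>     def networks_for_host(hostname):
>         """Return the network IDs an instance on this host should use."""
>         nets = HOST_NETWORKS[hostname]
>         return [nets['public'], nets['service']]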
> 
> That all works fine, but ideally I want users to be able to choose whether
> they want a public and/or a service network. We can't let them choose
> directly because we have 10 public networks; we almost need something in
> neutron like a "network group" that allows a user to select "public" and
> allocates them a port on one of the underlying public networks.
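> 
> Something like this, purely hypothetical since neutron has no such
> construct today:
> 
>     # Hypothetical "network group": a user asks for "public" and gets a
>     # port on whichever member network their scheduled host can reach.
>     NETWORK_GROUPS = {
>         'public': ['public-1', 'public-2', 'public-3'],
>         'service': ['service-1', 'service-2', 'service-3'],
>     }
>     HOST_ATTACHED = {
>         'compute-01': {'public-1', 'service-1'},
>         'compute-02': {'public-2', 'service-2'},
>     }
> 
>     def resolve_group(group, host):
>         """Pick the member network of `group` that `host` is attached to."""
>         for net in NETWORK_GROUPS[group]:
>             if net in HOST_ATTACHED[host]:
>                 return net
>         raise LookupError('%s has no network in group %r' % (host, group))
> 
>     # resolve_group('public', 'compute-02') -> 'public-2'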
> 
> I tried going down the route of having one public and one service network
> in neutron, then creating 10 subnets under each. That works until you get
> to things like the dhcp-agent and metadata agent, although it looks like it
> could work with a few minor changes: basically, I'd need a dhcp-agent to be
> spun up per subnet, and a way to ensure each one is spun up in the right
> place.
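> 
> That layout is easy enough to build (a sketch again; placeholder CIDRs,
> credentials elided):
> 
>     from neutronclient.v2_0 import client
> 
>     neutron = client.Client(username='admin', password='...',
>                             tenant_name='admin',
>                             auth_url='http://keystone:5000/v2.0')
> 
>     public = neutron.create_network(
>         {'network': {'name': 'public', 'shared': True}})['network']
> 
>     # Ten subnets under the one network, one per physical segment.
>     for i in range(1, 11):
>         neutron.create_subnet({'subnet': {
>             'network_id': public['id'],
>             'name': 'public-subnet-%d' % i,
>             'ip_version': 4,
>             'cidr': '203.0.%d.0/24' % i,  # placeholder
>         }})
> 
> The catch is that the agents don't know that each subnet only exists on
> certain hosts, which is where it falls down.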
> 
> I'm not sure what the correct way of doing this is. What are other people
> doing in the interim, until this kind of use case can be handled in
> Neutron?
> 
> Would something like [1] be adequate to address your use case? If not, I'd
> suggest you file an RFE bug (more details in [2]), so that we can keep the
> discussion focused on this specific case.
> 
> HTH
> Armando
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks
That's not applicable here. We don't care about tenants in this case.

> [2] https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements
The bug Kris mentioned outlines all I want too, I think.

Sam


>
> Cheers,
> Sam
> 
> [1] https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12
> 
> 
> 
> > On 17 Jun 2015, at 12:20 am, Jay Pipes <jaypi...@gmail.com> wrote:
> >
> > Adding -dev because of the reference to the Neutron "Get me a network"
> > spec. Also adding [nova] and [neutron] subject markers.
> >
> > Comments inline, Kris.
> >
> > On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
> >> During the OpenStack summit this week I got to talk to a number of other
> >> operators of large OpenStack deployments about how they do networking.
> >> I was happy, surprised even, to find that a number of us are using a
> >> similar type of networking strategy: we have similar challenges around
> >> networking and are solving them in our own, but very similar, ways.
> >> It is always nice to see that other people are doing the same things and
> >> hitting the same issues as you, and that "you are not crazy". So, in
> >> that vein, I wanted to reach out to the rest of the ops community and
> >> ask one pretty simple question.
> >>
> >> Would it be accurate to say that most of your end users want almost
> >> nothing to do with the network?
> >
> > That was my experience at AT&T, yes. The vast majority of end users could 
> > not care less about networking, as long as the connectivity was reliable, 
> > performed well, and they could connect to the Internet (and have others 
> > connect from the Internet to their VMs) when needed.
> >
> >> In my experience, what the majority of them (both internal and external)
> >> want is to consume a compute resource from OpenStack, one property of
> >> which is that the resource has an IP address. At most, they care about
> >> which "network" they are on, where a "network" is usually an arbitrary
> >> definition around a set of real networks, constrained to a location, to
> >> which the company has attached some sort of policy. For example: I want
> >> to be in the production network versus the xyz lab network, versus the
> >> backup network, versus the corp network. I would say that for GoDaddy,
> >> 99% of our use cases would be defined as: I want a compute resource in
> >> the production network zone, or I want a compute resource in this other
> >> network zone. The end user only cares that the IP the VM receives works
> >> in that zone; beyond that, they don't care about any other property of
> >> that IP. They do not care what subnet it is in, what VLAN it is on, what
> >> switch it is attached to, what router it's attached to, or how data
> >> flows in and out of that network. It just needs to work. We have also
> >> found that by giving users a floating IP address that can be moved
> >> between VMs (but still constrained within a "network" zone), we can
> >> satisfy almost all of our users' asks. Typically, the internal need for
> >> a floating IP is when a compute resource needs to talk to another
> >> protected internal or external resource, where it is painful (read:
> >> slow) to have the ACLs on that protected resource updated. The external
> >> need is from our hosting customers who have a domain name (or many) tied
> >> to an IP address, for whom changing IPs/DNS is particularly painful.
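> >>
> >> (For reference, re-pointing a floating IP is a single port update -- a
> >> sketch with python-neutronclient; the UUIDs are placeholders:)
> >>
> >>     from neutronclient.v2_0 import client
> >>
> >>     neutron = client.Client(username='demo', password='...',
> >>                             tenant_name='demo',
> >>                             auth_url='http://keystone:5000/v2.0')
> >>
> >>     # Allocate a floating IP from the zone's external network and
> >>     # attach it to the first VM's port.
> >>     fip = neutron.create_floatingip({'floatingip': {
> >>         'floating_network_id': 'EXT_NET_UUID',
> >>         'port_id': 'VM1_PORT_UUID',
> >>     }})['floatingip']
> >>
> >>     # Moving it to another VM is just a port update; DNS never changes.
> >>     neutron.update_floatingip(
> >>         fip['id'], {'floatingip': {'port_id': 'VM2_PORT_UUID'}})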
> >
> > This is precisely my experience as well.
> >
> >> Since the vast majority of our end users don't care about any of the
> >> technical network stuff, we spend a large amount of time and effort
> >> abstracting or hiding the technical details from the user's view, which
> >> has led to a number of patches that we carry on both nova and neutron
> >> (and which are available on our public GitHub).
> >
> > You may be interested to learn about the "Get Me a Network" specification
> > that was discussed in a session at the summit. I had requested some time
> > at the summit to discuss this exact use case -- where users of Nova
> > actually didn't care much at all about network constructs and just wanted
> > Nova to behave the way nova-network did: the admin sets up a bunch of
> > unassigned networks and, the first time a tenant launches a VM, she just
> > gets an available network and everything is done for her.
> >
> > The spec is here:
> >
> > https://review.openstack.org/#/c/184857/
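> >
> > The target experience, sketched with python-novaclient (hypothetical
> > until the spec lands -- note there is no network argument at all):
> >
> >     from novaclient import client
> >
> >     nova = client.Client('2', 'demo', 'PASSWORD', 'demo',
> >                          'http://keystone:5000/v2.0')
> >
> >     # No nics/network arguments: the spec's goal is that the tenant
> >     # simply gets a usable network. Image/flavor IDs are placeholders.
> >     nova.servers.create(name='vm1', image='IMAGE_UUID',
> >                         flavor='FLAVOR_ID')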
> >
> >> At the same time, we also have a *very* small subset of (internal) users
> >> who are at the exact opposite end of the scale. They care very much
> >> about the network details, possibly all the way down to wanting to boot
> >> a VM to a specific HV, with a specific IP address, on a specific network
> >> segment. The difference, however, is that these users are completely
> >> aware of the topology of the network, know which HVs map to which
> >> network segments, and are essentially trying to make a very specific
> >> scheduling ask.
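> >>
> >> (That fully pinned ask is already expressible today -- a novaclient
> >> sketch; forcing a host via zone:host is admin-only, and the UUIDs are
> >> placeholders:)
> >>
> >>     from novaclient import client
> >>
> >>     nova = client.Client('2', 'admin', 'PASSWORD', 'admin',
> >>                          'http://keystone:5000/v2.0')
> >>
> >>     # Boot on a specific hypervisor, with a specific fixed IP on a
> >>     # specific network segment.
> >>     nova.servers.create(
> >>         name='pinned-vm', image='IMAGE_UUID', flavor='FLAVOR_ID',
> >>         availability_zone='nova:compute-01',
> >>         nics=[{'net-id': 'SEGMENT_NET_UUID',
> >>                'v4-fixed-ip': '203.0.113.50'}])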
> >
> > Agreed: at Mirantis (and occasionally at AT&T), we do get some customers
> > (mostly telcos, of course) who would like total control over all things
> > networking.
> >
> > Nothing wrong with this, of course. But the point of the above spec is to
> > allow "normal" users not to have to think or know about all the advanced
> > networking stuff if they don't need it. The Neutron API should be able to
> > handle both sets of users equally well.
> >
> > Best,
> > -jay
> >