On 17/06/15 16:17, Kris G. Lindgren wrote:
See inline.
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 6/17/15, 5:12 AM, "Neil Jerram" <neil.jer...@metaswitch.com> wrote:

Hi Kris,

Apologies in advance for questions that are probably really dumb - but
there are several points here that I don't understand.

On 17/06/15 03:44, Kris G. Lindgren wrote:
We are doing pretty much the same thing - but in a slightly different way.
We extended the Nova scheduler to help choose networks (i.e. don't put
VMs on a network/host that doesn't have any available IP addresses).

Why would a particular network/host not have any available IP address?

  If a created network has 1024 IPs on it (a /22) and we provision 1020
  VMs, anything deployed after that will not get an IP address, because
  the network doesn't have any available IP addresses (you lose some IPs
  to the network itself - at minimum the network, broadcast, and gateway
  addresses).
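
  (For concreteness, the arithmetic - a minimal sketch using Python's
  stdlib ipaddress module; exactly how many addresses are reserved beyond
  the network and broadcast addresses, e.g. for the gateway and a DHCP
  port, depends on the deployment:)

    import ipaddress

    net = ipaddress.ip_network("10.0.0.0/22")
    print(net.num_addresses)             # 1024 addresses in a /22

    # At minimum the network and broadcast addresses are unusable;
    # Neutron typically also reserves a gateway address, and a DHCP
    # port if DHCP is enabled (assumed here), leaving ~1020 usable.
    reserved = 2 + 1 + 1  # network, broadcast, gateway, DHCP (assumed)
    print(net.num_addresses - reserved)  # 1020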

OK, thanks, that certainly explains the "particular network" possibility.

So I guess this applies where your preference would be for network A, but it would be OK to fall back to network B, and so on. That sounds like it could be a useful general enhancement.

(But, if a new VM absolutely _has_ to be on, say, the 'production' network, and the 'production' network is already fully used, you're fundamentally stuck, aren't you?)

What about the "/host" part? Is it possible in your system for a network to have IP addresses available, but for them not to be usable on a particular host?

Then, we add to the host aggregate that each HV is attached to a metadata
item which maps to the names of the Neutron networks that host supports.
This basically creates the mapping of which host supports which networks,
so we can correctly filter hosts out during scheduling.  We do allow
people to choose a network if they wish, and we do have the Neutron
endpoint exposed.  However, by default, if they do not supply a boot
command with a network, we will filter the networks down and choose one
for them.  That way they never hit [1].  This also works well for us,
because the default UI that we provide our end users is not Horizon.
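
(A minimal sketch of what that aggregate metadata and the derived mapping
might look like - the "networks" metadata key and the data shapes here
are assumptions for illustration, not necessarily what the real
deployment uses:)

    # Hypothetical sketch: each host aggregate (one per rack / L2
    # segment) lists the Neutron networks it supports in a "networks"
    # metadata key; from that we derive the host -> networks mapping
    # that a scheduler filter can consume.
    aggregates = {
        "rack-a": {"hosts": ["hv-01", "hv-02"],
                   "metadata": {"networks": "prod-1,dev-1"}},
        "rack-b": {"hosts": ["hv-03"],
                   "metadata": {"networks": "prod-2,dev-1"}},
    }

    host_networks = {}
    for agg in aggregates.values():
        nets = set(agg["metadata"]["networks"].split(","))
        for host in agg["hosts"]:
            host_networks.setdefault(host, set()).update(nets)

    print(host_networks)
    # e.g. {'hv-01': {'prod-1', 'dev-1'}, 'hv-02': {'prod-1', 'dev-1'},
    #       'hv-03': {'prod-2', 'dev-1'}}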

Why do you define multiple networks - as opposed to just one - and why
would one of your users want to choose a particular one of those?

(Do you mean multiple as in public-1, public-2, ...; or multiple as in
public, service, ...?)

  This is answered in the other email and the original email as well.  But
  basically we have multiple L2 segments that only exist on certain
  switches and thus are only tied to certain hosts.  With the way Neutron
  is currently structured we need to create a network for each L2 segment.
  So that's why we define multiple networks.

Thanks!  Ok, just to check that I really understand this:

- You have real L2 segments connecting some of your compute hosts together - and also I guess to a ToR that does L3 to the rest of the data center.

- You presumably then just bridge all the TAP interfaces, on each host, to the host's outwards-facing interface.

                       +---- VM
                       |
       +----- Host ----+---- VM
       |               |
       |               +---- VM
       |
       |               +---- VM
       |               |
       +----- Host ----+---- VM
       |               |
ToR ---+               +---- VM
       |
       |               +---- VM
       |               |
       +----- Host ----+---- VM
                       |
                       +---- VM

- You specify each such setup as a network in the Neutron API - and hence you have multiple similar networks, for your data center as a whole.

Out of interest, do you do this just because it's the Right Thing according to the current Neutron API - i.e. because a Neutron network is L2 - or also because it's needed in order to get the Neutron implementation components that you use to work correctly? For example, so that you have a DHCP agent for each L2 network (if you use the Neutron DHCP agent).

  For our end users - they only care about getting a VM with a single IP
  address in a "network", which is really a zone like "prod" or "dev" or
  "test".  They stop caring after that point.  So in the scheduler filter
  that we created, we do exactly that: we filter all the hosts and
  networks down to a combination that intersects at a host that has
  space, with a network that has space, where that network is actually
  available to that host.
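
  (A minimal sketch of that intersection in plain Python - the data
  shapes and field names are illustrative, not the actual filter code:)

    # Keep (host, network) pairs where the host has capacity, the
    # network is in the requested zone and still has free IPs, and
    # the network is actually reachable from that host.
    def pick_host_and_network(hosts, networks, host_networks, zone):
        candidates = [
            (host["name"], net["name"])
            for host in hosts
            if host["free_vcpus"] > 0
            for net in networks
            if net["zone"] == zone          # e.g. "prod", "dev", "test"
            and net["free_ips"] > 0         # network has addresses left
            and net["name"] in host_networks.get(host["name"], set())
        ]
        return candidates[0] if candidates else None

    hosts = [{"name": "hv-01", "free_vcpus": 8},
             {"name": "hv-03", "free_vcpus": 0}]
    networks = [{"name": "prod-1", "zone": "prod", "free_ips": 0},
                {"name": "prod-2", "zone": "prod", "free_ips": 40}]
    host_networks = {"hv-01": {"prod-1", "prod-2"}, "hv-03": {"prod-2"}}

    print(pick_host_and_network(hosts, networks, host_networks, "prod"))
    # ('hv-01', 'prod-2')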

Thanks, makes perfect sense now.

So I think there are two possible representations, overall, of what you are looking for.

1. A 'network group' of similar L2 networks. When a VM is launched, the tenant specifies the network group instead of a particular L2 network, and Nova/Neutron select a host and network with available compute capacity and IP addressing. This sounds like what you've described above.

2. A new kind of network whose ports are partitioned into various L2 segments. This is like what I've described at [1].

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067274.html

I would prefer (2) over (1), because I'm interested in a fully routed form of connectivity, and if that was expressed in model (1) it would need a network definition for every VM.

Also, with (1) I guess individual IP ranges (or subnet pools?) would need defining for each network, whereas with (2) there would naturally be a single IP range or subnet pool definition for the whole network.
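
(Sketching that difference in data-model terms - purely illustrative, not
actual Neutron objects: in (1) each member L2 network carries its own
subnet, while in (2) a single network owns one subnet pool and each port
binds to one of its L2 segments:)

    # (1) A 'network group': one subnet per member L2 network, so IP
    #     ranges must be carved up per network in advance.
    network_group = {
        "name": "prod",
        "networks": [
            {"name": "prod-1", "subnet": "10.0.0.0/22"},
            {"name": "prod-2", "subnet": "10.0.4.0/22"},
        ],
    }

    # (2) A single network whose ports are partitioned into L2
    #     segments, all drawing addresses from one pool.
    segmented_network = {
        "name": "prod",
        "subnet_pool": "10.0.0.0/16",
        "segments": ["rack-a", "rack-b"],  # a port binds to one segment
    }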

Although you have currently modified the Nova scheduler for an approach like (1), do you think (2) would work in principle for you as well?

Many thanks,
        Neil
