small update:

I am still stuck on the question of how to route the L2 lb-mgmt-net to our
different physical L3 networks.

As we want to distribute our control nodes over different leafs, each with
its own L2 domain, we will have to route to all of those leafs.

Should we enable the compute nodes to route to those controller networks?
And how?


On 10/24/18 9:52 AM, Florian Engelmann wrote:
Hi Michael,

yes I definitely would prefer to build a routed setup. Would it be an option for you to provide some rough step by step "how-to" with openvswitch in a non-DVR setup?

All the best,
Flo


On 10/23/18 7:48 PM, Michael Johnson wrote:
I am still catching up on e-mail from the weekend.

There are a lot of different options for how to implement the
lb-mgmt-network for the controller<->amphora communication. I can't
speak to what options Kolla provides, but I can speak to how Octavia
works.

One thing to note on the lb-mgmt-net issue: if you can set up routes
such that the controllers can reach the IP addresses used for the
lb-mgmt-net, and the amphorae can reach the controllers, Octavia
can run with a routed lb-mgmt-net setup. There is no L2 requirement
between the controllers and the amphora instances.
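As a purely illustrative sketch (the prefix, next-hop and interface name
below are made up): if the leaf gateways already carry a route for the
lb-mgmt prefix, a static route like this on a controller is all that is
needed, together with a return route (or default route) on the amphora
side back to the controller networks:

ip route add 10.42.0.0/16 via 10.0.10.1 dev eth1   # lb-mgmt prefix via the local leaf gateway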

Michael

On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
<emccorm...@cirrusseven.com> wrote:

So in your other email you asked if there was a guide for
deploying it with kolla-ansible...

Oh boy. No, there's not. I don't know if you've seen my recent mails on
Octavia, but I am going through this deployment process with
kolla-ansible right now and it is lacking in a few areas.

If you plan to use different CA certificates for client and server in
Octavia, you'll need to add that into the playbook. Presently it only
copies over ca_01.pem, cacert.key, and client.pem and uses them for
everything. I was completely unable to make it work with only one CA,
as I got some SSL errors. It passes the gate though, so I assume it must
work? I dunno.

Networking comments and a really messy kolla-ansible / octavia how-to below...

On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
<florian.engelm...@everyware.ch> wrote:

On 10/23/18 3:20 PM, Erik McCormick wrote:
On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
<florian.engelm...@everyware.ch> wrote:

Hi,

We did test Octavia with Pike (DVR deployment) and everything was
working right out of the box. We have now changed our underlay network to a
Layer 3 spine-leaf network and did not deploy DVR, as we didn't want
that many cables in a rack.

Octavia is not working right now, as the lb-mgmt-net does not exist on
the compute nodes, and neither does a br-ex.

The control nodes are running:

octavia_worker
octavia_housekeeping
octavia_health_manager
octavia_api

and as far as I understood, octavia_worker, octavia_housekeeping and
octavia_health_manager have to talk to the amphora instances. But the
control nodes are spread over three different leafs, so each control
node is in a different L2 domain.

So the question is how to deploy a lb-mgmt-net network in our setup?

- Compute nodes have no "stretched" L2 domain
- Control nodes, compute nodes and network nodes are in L3 networks like
api, storage, ...
- Only network nodes are connected to an L2 domain (with a separate NIC)
providing the "public" network

You'll need to add a new bridge to your compute nodes and create a
provider network associated with that bridge. In my setup this is
simply a flat network tied to a tagged interface. In your case it
probably makes more sense to make a new VNI and create a vxlan
provider network. The routing in your switches should handle the rest.

OK, that's what I am trying right now. But I don't get how to set up
something like a VXLAN provider network. I thought only vlan and flat are
supported as provider networks? I guess it is not possible to use the tunnel
interface that is used for tenant networks?
So I have to create a separate VXLAN on the control and compute nodes, like:
# ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1 \
      dev vlan3535 ttl 5
# ip addr add 172.16.1.11/20 dev vxoctavia
# ip link set vxoctavia up

and use it like a flat provider network, true?

This is a fine way of doing things, but it's only half the battle.
You'll need to add a bridge on the compute nodes and bind it to that
new interface. Something like this if you're using openvswitch:

docker exec openvswitch_db \
    /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia

Also, you'll want to remove the IP address from that interface, as it's
going to be attached to a bridge. Think of it like your public (br-ex)
interface on your network nodes.
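For example, assuming the vxoctavia interface created above, something like
this should clear the address again before the interface goes into the bridge:

ip addr flush dev vxoctavia   # drop the 172.16.1.11/20 address from the port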

From there you'll need to update the bridge mappings via kolla
overrides. This would usually be in /etc/kolla/config/neutron. Create
a subdirectory for your compute inventory group and create an
ml2_conf.ini there. So you'd end up with something like:

[root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
[ml2_type_flat]
flat_networks = mgmt-net

[ovs]
bridge_mappings = mgmt-net:br-mgmt

Run 'kolla-ansible --tags neutron reconfigure' to push out the new
configs. Note that there is a bug where the neutron containers may not
restart after the change, so you'll probably need to do a 'docker
container restart neutron_openvswitch_agent' on each compute node.
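If you would rather not log in to every compute node by hand, an ad-hoc run
against the compute group of your kolla-ansible inventory does the same thing
(the inventory path here is only an assumption, use your own):

ansible compute -i /etc/kolla/inventory -b -m command \
    -a "docker container restart neutron_openvswitch_agent"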

At this point, you'll need to create the provider network in the admin
project like:

openstack network create --provider-network-type flat \
    --provider-physical-network mgmt-net lb-mgmt-net

And then create a normal subnet attached to this network with some
largeish address scope. I wouldn't use 172.16.0.0/16 because docker
uses that by default. I'm not sure if it matters since the network
traffic will be isolated on a bridge, but it makes me paranoid so I
avoided it.
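For example, something along these lines (the CIDR and allocation pool are
placeholders, pick whatever fits your environment and stays clear of Docker):

openstack subnet create --network lb-mgmt-net \
    --subnet-range 10.42.0.0/16 \
    --allocation-pool start=10.42.0.10,end=10.42.15.254 \
    lb-mgmt-subnet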

For your controllers, I think you can just let everything function off
your api interface, since you're routing in your spines. Set up a
gateway somewhere from that lb-mgmt network and save yourself the
complication of adding an interface to your controllers. If you choose
to use a separate interface on your controllers, you'll need to make
sure this patch is in your kolla-ansible install or cherry-pick it:

https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59

I don't think that's been backported at all, so unless you're running
off master you'll need to go get it.
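If you do end up needing it, pulling that one commit into a kolla-ansible
source checkout is enough (the checkout path below is only an example):

cd /opt/kolla-ansible                 # your kolla-ansible git checkout
git fetch origin master
git cherry-pick 0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60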

From here on out, the regular Octavia instructions should serve you.
Create a flavor, create a security group, and capture their UUIDs
along with the UUID of the provider network you made. Override them in
globals.yml with:

octavia_amp_boot_network_list: <uuid>
octavia_amp_secgroup_list: <uuid>
octavia_amp_flavor_id: <uuid>
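For reference, a rough sketch of creating those resources and grabbing the
UUIDs (the flavor sizing and the security group rules are assumptions; adjust
them to your amphora image and to what your controllers need to reach):

openstack flavor create --vcpus 1 --ram 1024 --disk 2 --private amphora
openstack security group create lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp   # controller -> amphora agent
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp     # optional, SSH for debugging
openstack network show lb-mgmt-net -f value -c id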

This is all from my scattered notes and bad memory. Hopefully it makes
sense. Corrections welcome.

-Erik






-Erik

All the best,
Florian




--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelm...@everyware.ch
web: http://www.everyware.ch


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
