Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Piotr Misiak


On 19.10.2018 10:21, Florian Engelmann wrote:


On 17.10.2018 15:45, Florian Engelmann wrote:

On 10.10.2018 09:06, Florian Engelmann wrote:
Now I get you. I would say all configuration templates need to be
changed to allow, e.g.:


$ grep http /etc/kolla/cinder-volume/cinder.conf
glance_api_servers = http://10.10.10.5:9292
auth_url = http://internal.somedomain.tld:35357
www_authenticate_uri = http://internal.somedomain.tld:5000
auth_url = http://internal.somedomain.tld:35357
auth_endpoint = http://internal.somedomain.tld:5000

to look like:

glance_api_servers = http://glance.service.somedomain.consul:9292
auth_url = http://keystone.service.somedomain.consul:35357
www_authenticate_uri = http://keystone.service.somedomain.consul:5000
auth_url = http://keystone.service.somedomain.consul:35357
auth_endpoint = http://keystone.service.somedomain.consul:5000
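
As a rough sketch of how such names could be served (not part of the
current kolla templates; the agent layout, file name, port, health check
and addresses below are assumptions): every controller would run a
Consul agent with its datacenter set to "somedomain" and register its
local services, e.g. in /etc/consul.d/keystone.json:

{
  "service": {
    "name": "keystone",
    "port": 5000,
    "check": {
      "http": "http://localhost:5000/v3",
      "interval": "10s"
    }
  }
}

Consul's DNS interface (port 8600 by default) then answers with the
healthy instances only, so such a name would replace the single VIP
address shown in the grep output above:

$ dig @127.0.0.1 -p 8600 keystone.service.somedomain.consul +short
10.10.10.11
10.10.10.12
10.10.10.13

(the three addresses are placeholders for the controller nodes)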



The idea with Consul looks interesting.

But I don't get your issue with the VIP address and a spine-leaf network.

What we have:
- controller1 behind leaf1 A/B pair with MLAG
- controller2 behind leaf2 A/B pair with MLAG
- controller3 behind leaf3 A/B pair with MLAG

The VIP address is active on one controller server.
When the server fails, the VIP will move to another controller server.

Where do you see a SPOF in this configuration?
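
(For reference, the internal VIP in a kolla-ansible deployment is
typically held by keepalived using VRRP; a minimal sketch of such an
instance, where the interface name, router ID and password are
assumptions and 10.10.10.5 is the address from the grep output above:

vrrp_instance kolla_internal_vip {
    state BACKUP
    nopreempt
    interface eth0              # assumed API interface
    virtual_router_id 51        # must match on all three controllers
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secretpass    # placeholder
    }
    virtual_ipaddress {
        10.10.10.5/24 dev eth0
    }
}

The failover itself has no SPOF, but every controller must be able to
claim the same address, which is where the L2 question below comes from.)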



So leaf1, 2 and 3 have to share the same L2 domain, right (in an IPv4
network)?



Yes, they share the L2 domain, but we have ARP and ND suppression enabled.

It is an EVPN network with an L3 fabric running VxLANs between the
leafs and spines.


So we don't care where a server is connected. It can be connected to 
any leaf.


Ok, that sounds very interesting. Is it possible to share some
internals? Which switch vendor/model do you use? What does your IP
addressing scheme look like?
If VxLAN is used between the spines and leafs, are you using VxLAN
networking for OpenStack as well? Where is your VTEP?




We have Mellanox switches with Cumulus Linux installed. Here is the
documentation:
https://docs.cumulusnetworks.com/display/DOCS/Ethernet+Virtual+Private+Network+-+EVPN


EVPN is a well-known standard and is also supported by Juniper, Cisco,
etc.


We have standard VLANs between the servers and leaf switches; they are
mapped to VxLANs between the leafs and spines. In our environment every
leaf switch is a VTEP. Servers have MLAG/CLAG connections to two leaf
switches. We also have anycast gateways on the leaf switches. From the
servers' point of view, our network is like one very big switch with
hundreds of ports and standard VLANs.
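
As a rough illustration only (not taken from that environment; the
VLAN/VNI numbers, interface names and AS number are made up), a Cumulus
Linux leaf mapping VLAN 100 to VNI 10100 with ARP/ND suppression and
EVPN would be configured roughly like this in /etc/network/interfaces:

auto vni10100
iface vni10100
    vxlan-id 10100
    vxlan-local-tunnelip 10.0.0.11   # this leaf's VTEP loopback
    bridge-access 100
    bridge-arp-nd-suppress on
    bridge-learning off

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1 vni10100
    bridge-vids 100

plus EVPN route advertisement in FRR:

router bgp 65011
 neighbor swp51 interface remote-as external
 address-family l2vpn evpn
  neighbor swp51 activate
  advertise-all-vni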


We are using VxLAN networking for OpenStack, but it is configured on top
of the network VxLANs; we don't mix the two.






Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-17 Thread Piotr Misiak


On 17.10.2018 15:45, Florian Engelmann wrote:

On 10.10.2018 09:06, Florian Engelmann wrote:
Now I get you. I would say all configuration templates need to be
changed to allow, e.g.:


$ grep http /etc/kolla/cinder-volume/cinder.conf
glance_api_servers = http://10.10.10.5:9292
auth_url = http://internal.somedomain.tld:35357
www_authenticate_uri = http://internal.somedomain.tld:5000
auth_url = http://internal.somedomain.tld:35357
auth_endpoint = http://internal.somedomain.tld:5000

to look like:

glance_api_servers = http://glance.service.somedomain.consul:9292
auth_url = http://keystone.service.somedomain.consul:35357
www_authenticate_uri = http://keystone.service.somedomain.consul:5000
auth_url = http://keystone.service.somedomain.consul:35357
auth_endpoint = http://keystone.service.somedomain.consul:5000



The idea with Consul looks interesting.

But I don't get your issue with the VIP address and a spine-leaf network.

What we have:
- controller1 behind leaf1 A/B pair with MLAG
- controller2 behind leaf2 A/B pair with MLAG
- controller3 behind leaf3 A/B pair with MLAG

The VIP address is active on one controller server.
When the server fails, the VIP will move to another controller server.

Where do you see a SPOF in this configuration?



So leaf1, 2 and 3 have to share the same L2 domain, right (in an IPv4
network)?



Yes, they share the L2 domain, but we have ARP and ND suppression enabled.

It is an EVPN network with an L3 fabric running VxLANs between the
leafs and spines.


So we don't care where a server is connected. It can be connected to any 
leaf.



But we want to deploy a layer-3 spine-leaf network where every leaf is
its own L2 domain and everything above it is layer 3.


eg:

leaf1 = 10.1.1.0/24
leaf2 = 10.1.2.0/24
leaf3 = 10.1.3.0/24

So a VIP like e.g. 10.1.1.10 could only exist in leaf1's subnet.


In my opinion that's a very constrained environment; I don't like the
idea.


Regards,

Piotr





Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-10 Thread Piotr Misiak

On 10.10.2018 09:06, Florian Engelmann wrote:
Now I get you. I would say all configuration templates need to be
changed to allow, e.g.:


$ grep http /etc/kolla/cinder-volume/cinder.conf
glance_api_servers = http://10.10.10.5:9292
auth_url = http://internal.somedomain.tld:35357
www_authenticate_uri = http://internal.somedomain.tld:5000
auth_url = http://internal.somedomain.tld:35357
auth_endpoint = http://internal.somedomain.tld:5000

to look like:

glance_api_servers = http://glance.service.somedomain.consul:9292
auth_url = http://keystone.service.somedomain.consul:35357
www_authenticate_uri = http://keystone.service.somedomain.consul:5000
auth_url = http://keystone.service.somedomain.consul:35357
auth_endpoint = http://keystone.service.somedomain.consul:5000



The idea with Consul looks interesting.

But I don't get your issue with the VIP address and a spine-leaf network.

What we have:
- controller1 behind leaf1 A/B pair with MLAG
- controller2 behind leaf2 A/B pair with MLAG
- controller3 behind leaf3 A/B pair with MLAG

The VIP address is active on one controller server.
When the server fails, the VIP will move to another controller server.
Where do you see a SPOF in this configuration?

Thanks




[openstack-dev] [neutron] one SDN controller and many OpenStack environments

2015-10-14 Thread Piotr Misiak
Hi,

Do you know if it is possible to manage tenant networks in many
OpenStack environments with one SDN controller (via a Neutron plugin)?

Suppose I have two or three OpenStack environments in one DC (deployed,
for example, by Fuel) and an SDN controller which manages all the
switches in the DC. Can I connect all of those OpenStack environments to
the SDN controller using a Neutron plugin? I'm curious what would happen
if, for example, the same MAC address appeared in two OpenStack
environments. Maybe I should not connect those OpenStack environments to
the SDN controller and instead use a standard OVS configuration?
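
(For illustration only: a single deployment is normally pointed at
OpenDaylight through the ML2 mechanism driver from networking-odl,
roughly like the sketch below; the URL, credentials and driver name
depend on the ODL and plugin versions:

[ml2]
mechanism_drivers = opendaylight
tenant_network_types = vxlan

[ml2_odl]
url = http://odl.example.com:8080/controller/nb/v2/neutron
username = admin
password = admin

Whether several environments can safely share one controller, and how it
would isolate overlapping MAC addresses, depends on the specific
controller.)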

Which SDN controller would you recommend? I'm currently researching these:
- OpenDaylight
- Floodlight
- Ryu

Thanks,
Piotr Misiak
