Hi Hanis,
See my reply inline.
On 17/05/2024 at 12:38, Hanis Irfan wrote:
I think this is more about BGP EVPN than CloudStack, but I would
appreciate help from anyone who can. Basically, I’ve tried Advanced
Networking with VLAN isolation for my POC and now want to migrate to
VXLAN.
I have little to no knowledge about VXLAN and BGP EVPN in particular.
The reason I’m not using multicast is that our spine-leaf switches have
already been configured with basic EVPN (though not tested yet).
We have 2 spine switches (Mellanox SN2700) and 2 leaf switches (Mellanox
SN2410) running BGP unnumbered for the underlay between them. The
underlay between the hypervisor (which runs FRR) and the leaf switches
is also configured with BGP unnumbered.
Each switch and hypervisor is individually assigned a 4-byte private
ASN. Here is the FRR config on the hypervisor, all basic config:
```
ip forwarding
ipv6 forwarding
!
interface ens3f0np0
 no ipv6 nd suppress-ra
exit
!
interface ens3f1np1
 no ipv6 nd suppress-ra
exit
!
router bgp 4200100005
 bgp router-id 10.XXX.118.1
 no bgp ebgp-requires-policy
 neighbor uplink peer-group
 neighbor uplink remote-as external
 neighbor ens3f0np0 interface peer-group uplink
 neighbor ens3f1np1 interface peer-group uplink
 !
 address-family ipv4 unicast
  network 10.XXX.118.1/32
 exit-address-family
 !
 address-family ipv6 unicast
  network 2407:XXXX:0:1::1/128
  neighbor uplink activate
  neighbor uplink soft-reconfiguration inbound
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor uplink activate
  ! keep the next-hop (VTEP address) unchanged across the eBGP hops
  neighbor uplink attribute-unchanged next-hop
  ! advertise all locally configured VNIs
  advertise-all-vni
  advertise-svi-ip
 exit-address-family
```
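For context, a rough sketch of what the leaf side of one of these
unnumbered sessions might look like, assuming the SN2410s run an
FRR/Cumulus-style config (the interface name swp1, the ASN and the
router-id here are placeholders, not our real values):
```
router bgp 4200100101
 bgp router-id 10.XXX.118.2
 neighbor hypervisors peer-group
 neighbor hypervisors remote-as external
 ! swp1 is the unnumbered interface facing the hypervisor
 neighbor swp1 interface peer-group hypervisors
 !
 address-family l2vpn evpn
  neighbor hypervisors activate
 exit-address-family
```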
Now I want to configure the cloudbr0 bridge as the management interface
for ACS. I’ve done it like so:
```
nmcli connection add type bridge con-name cloudbr0 ifname cloudbr0 \
  ipv4.method manual ipv4.addresses 10.XXX.113.11/24 ipv4.gateway 10.XXX.113.1 \
  ipv4.dns 1.1.1.1,8.8.8.8 \
  ipv6.method manual ipv6.addresses 2407:XXXX:200:c002::11/64 ipv6.gateway 2407:XXXX:200:c002::1 \
  ipv6.dns 2606:4700:4700::1111,2001:4860:4860::8888 \
  bridge.stp no ethernet.mtu 9216

nmcli connection add type vxlan slave-type bridge con-name vxlan10027 ifname vxlan10027 \
  id 10027 destination-port 4789 local 2407:XXXX:0:1::1 vxlan.learning no \
  master cloudbr0 ethernet.mtu 9216 dev lo

nmcli connection up cloudbr0
nmcli connection up vxlan10027
```
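In plain iproute2 terms, I believe this is roughly what those nmcli
commands create (untested sketch for readability only; the IPv6
address, DNS and default routes are omitted):
```
# Management bridge, STP off, jumbo frames
ip link add cloudbr0 type bridge stp_state 0
ip link set cloudbr0 mtu 9216 up
ip addr add 10.XXX.113.11/24 dev cloudbr0

# VXLAN device for VNI 10027, sourced from the loopback IPv6, no MAC learning
ip link add vxlan10027 type vxlan id 10027 dstport 4789 \
    local 2407:XXXX:0:1::1 nolearning dev lo
ip link set vxlan10027 master cloudbr0 mtu 9216 up
```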
Why are you setting anything on cloudbr0? There is no need to create
cloudbr0 with VXLAN.
We have only created cloudbr1 (using systemd-networkd) for the POD
communication, but that's all:
*cloudbr1.network*
```
[Match]
Name=cloudbr1

[Network]
LinkLocalAddressing=no

[Address]
Address=10.100.2.108/20

[Route]
Gateway=10.100.1.1

[Link]
MTUBytes=1500
```

*cloudbr1.netdev*
```
[NetDev]
Name=cloudbr1
Kind=bridge
```
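Assuming systemd-networkd is managing the host network, a quick sanity
check with the standard systemd tooling:
```
networkctl reload           # re-read the .network/.netdev files (systemd >= 244)
networkctl status cloudbr1  # should report "State: routable" once the address is up
```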
This IPv4 /20 is used for all of CloudStack's internal communication.
In *agent.properties* we have only set 'private.network.device=cloudbr1'.
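For completeness, that is the only bridge-related line in our
agent.properties (the path below is the stock location of the
CloudStack agent config):
```
# /etc/cloudstack/agent/agent.properties
private.network.device=cloudbr1
```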
The EVPN route with the MAC address of cloudbr0 shows up on both leaf
switches. However, I can’t ping from the hypervisor to its gateway
(.1), which is a firewall connected to a switch port tagged with VLAN
27.
You first need to make sure that the HV can ping the loopback addresses
of all the leaf and spine switches, and that all HVs can reach each
other via their loopback addresses.
Can you check that? That's neither EVPN nor VXLAN, just plain /32
(IPv4) routing with BGP.
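Something like this, run from the hypervisor, should tell you; the
loopback addresses below are placeholders, substitute your real ones:
```
# All BGP sessions should be in state Established
vtysh -c 'show bgp summary'

# The /32 loopbacks of the spines, leaves and other HVs
# must be present as BGP routes
vtysh -c 'show ip route bgp'

# Placeholder loopbacks; fill in your own
ping -c 3 10.XXX.118.2    # leaf 1
ping -c 3 10.XXX.118.3    # leaf 2
ping -c 3 10.XXX.118.12   # another hypervisor
```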
Wido