On 1/9/26 2:45 PM, Lorenzo Bianconi wrote:
> Introduce the capability to specify multiple ips for ovn-evpn-local-ip
> option in order to enable dual-stack for EVPN vxlan tunnels.
> The IPs used for vxlan tunnel can be specified using the following
> syntax in the external_ids column of Open_vSwitch table:
>
> external_ids:ovn-evpn-local-ip=10-1.1.1.1,20-2::2,30-3.3.3.3,40-4::4,30-3::3
>
> Reported-at: https://issues.redhat.com/browse/FDP-2768
> Signed-off-by: Lorenzo Bianconi <[email protected]>
> ---
Hi Lorenzo,
Thanks for this new version!
> Changes in v5:
> - Drop ovn-evpn-local-ip-mapping config option and rely on the following
> syntax:
> external_ids:ovn-evpn-local-ip=10-1.1.1.1,20-2::2,30-3.3.3.3,40-4::4,30-3::3
> - Do not use global variable for default_ip4 and default_ip6
> - improve code readability
> - cosmetics
> Changes in v4:
> - Do not use linear lookup in evpn_local_ip_lookup_by_vni()
> - Introduce ovn-evpn-local-ip-mapping config option
> Changes in v3:
> - Add default IP configuration
> - Fix possible parsing crashes in evpn_vni_local_ip_map_alloc()
> - Use hashmap for vni_local_ip mapping
> Changes in v2:
> - Add NEWS entry
> - Update documentation
> - Add ip_address_from_str utility routine
> - Fix IPv4 vs IPv6 openflow management in physical_consider_evpn_multicast()
> - Add Dual-Stack entry in EVPN_SWITCH_TESTS function
> ---
> NEWS | 2 +
> TODO.rst | 4 -
> controller/neighbor.c | 5 +-
> controller/ovn-controller.8.xml | 11 +-
> controller/physical.c | 324 +++++++++++++----
> tests/system-common-macros.at | 10 +-
> tests/system-ovn.at | 597 +++++++++++++++++++++++++++++++-
> 7 files changed, 867 insertions(+), 86 deletions(-)
>
> diff --git a/NEWS b/NEWS
> index 87500de03..9883fb81d 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -80,6 +80,8 @@ Post v25.09.0
> - Add fallback support for Network Function.
> - Introduce the capability to specify EVPN device names using Logical_Switch
> other_config column.
> + - Introduce the capability to specify multiple ips for ovn-evpn-local-ip
> + option.
>
> OVN v25.09.0 - xxx xx xxxx
> --------------------------
> diff --git a/TODO.rst b/TODO.rst
> index 0c9b9598b..9f5e0976d 100644
> --- a/TODO.rst
> +++ b/TODO.rst
> @@ -156,10 +156,6 @@ OVN To-do List
> Otherwise we could try to add duplicated Learned_Routes and the ovnsb
> commit would fail.
>
> - * Allow ovn-evpn-local-ip to accept list of
> - $VNI1:$LOCAL_IP1,$VNI2:$LOCAL_IP2 combinations which will be properly
> - reflected in physical flows for given LS with VNI.
> -
> * Add support for EVPN L3, that involves MAC Binding learning and
> advertisement.
>
> diff --git a/controller/neighbor.c b/controller/neighbor.c
> index 9aeb1e36b..e77b41d29 100644
> --- a/controller/neighbor.c
> +++ b/controller/neighbor.c
> @@ -91,7 +91,6 @@ neigh_parse_device_name(struct sset *device_names, struct local_datapath *ld,
> {
> const char *names = smap_get_def(&ld->datapath->external_ids,
> neighbor_opt_name[type], "");
> - sset_clear(device_names);
> sset_from_delimited_string(device_names, names, ",");
> if (sset_is_empty(device_names)) {
> /* Default device name if not specified. */
> @@ -123,7 +122,7 @@ neighbor_run(struct neighbor_ctx_in *n_ctx_in,
> continue;
> }
>
> - struct sset device_names = SSET_INITIALIZER(&device_names);
> + struct sset device_names;
> neigh_parse_device_name(&device_names, ld, NEIGH_IFACE_VXLAN, vni);
> const char *name;
> SSET_FOR_EACH (name, &device_names) {
> @@ -132,6 +131,7 @@ neighbor_run(struct neighbor_ctx_in *n_ctx_in,
> NEIGH_IFACE_VXLAN, vni,
> name);
> vector_push(n_ctx_out->monitored_interfaces, &vxlan);
> }
> + sset_destroy(&device_names);
>
> neigh_parse_device_name(&device_names, ld, NEIGH_IFACE_LOOPBACK, vni);
> if (sset_count(&device_names) > 1) {
> @@ -145,6 +145,7 @@ neighbor_run(struct neighbor_ctx_in *n_ctx_in,
> NEIGH_IFACE_LOOPBACK, vni,
> SSET_FIRST(&device_names));
> vector_push(n_ctx_out->monitored_interfaces, &lo);
> + sset_destroy(&device_names);
>
> neigh_parse_device_name(&device_names, ld, NEIGH_IFACE_BRIDGE, vni);
> if (sset_count(&device_names) > 1) {
> diff --git a/controller/ovn-controller.8.xml b/controller/ovn-controller.8.xml
> index dfc7cc217..0c5535f3e 100644
> --- a/controller/ovn-controller.8.xml
> +++ b/controller/ovn-controller.8.xml
> @@ -432,10 +432,13 @@
>
> <dt><code>external_ids:ovn-evpn-local-ip</code></dt>
> <dd>
> - IP address used as a source address for the EVPN traffic leaving this
> - OVN setup. There is currently support only for single IP address
> - being specified. NOTE: this feature is experimental and may be subject
> - to removal/change in the future.
> + IP address list used as a source addresses for the EVPN traffic
> + leaving this OVN setup. The <code>ovn-evpn-local-ip</code>
> + can be specified by the CMS according to the following syntax:
> + <code>external_ids:ovn-evpn-local-ip=vni0-IPv4,vni1-IPv4,
> + vni1-IPv6,IPv4,IPv6</code>, where if no VNI value is specified,
> + OVN will use the provided IP as default IP. NOTE: this feature
> + is experimental and may be object to removal/change in the future.
Nit: s/object/subject/
> </dd>
> </dl>
>
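Just to illustrate the documented syntax for anyone reading along (the VNIs and addresses below are arbitrary examples, not values from the patch):

```shell
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-evpn-local-ip=10-1.1.1.1,20-2::2,3.3.3.3,4::4
```

Here VNI 10 sources from 1.1.1.1, VNI 20 from 2::2, and the bare 3.3.3.3/4::4 entries act as the IPv4/IPv6 defaults for any VNI without an explicit mapping.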
> diff --git a/controller/physical.c b/controller/physical.c
> index b9c60c8ab..ac685f268 100644
> --- a/controller/physical.c
> +++ b/controller/physical.c
> @@ -3178,12 +3178,161 @@ physical_eval_remote_chassis_flows(const struct physical_ctx *ctx,
> ofpbuf_uninit(&ingress_ofpacts);
> }
>
> +struct vni_local_ip {
> + struct hmap_node hmap_node;
> + struct in6_addr ip;
> + uint32_t vni;
> +};
> +
> +struct evpn_local_ip_map {
> + struct hmap vni_ip_v4; /* Per VNI local IPv4 vni_local_ips. */
I know I suggested this, but looking at it again, it's probably better to
call these vni_ip4 and vni_ip6 so they're aligned with the default field
names below.
> + struct hmap vni_ip_v6; /* Per VNI local IPv4 vni_local_ips. */
Typo: should be IPv6
> + struct in6_addr default_ip4; /* Default local IPv4. */
> + struct in6_addr default_ip6; /* Default local IPv6. */
> +};
> +
> +static const struct in6_addr *
> +evpn_local_ip_lookup(const struct hmap *map, uint32_t vni)
> +{
> + struct vni_local_ip *e;
> + HMAP_FOR_EACH_WITH_HASH (e, hmap_node, hash_add(vni, 0), map) {
> + if (e->vni == vni) {
> + return &e->ip;
> + }
> + }
> + return NULL;
> +}
> +
> +static ovs_be32
> +evpn_local_ip_find_v4(const struct evpn_local_ip_map *vni_ip_map,
> + uint32_t vni)
> +{
> + const struct in6_addr *addr = evpn_local_ip_lookup(&vni_ip_map->vni_ip_v4,
> + vni);
> + if (addr) {
> + return in6_addr_get_mapped_ipv4(addr);
> + }
> +
> + if (ipv6_addr_is_set(&vni_ip_map->default_ip4)) {
> + return in6_addr_get_mapped_ipv4(&vni_ip_map->default_ip4);
> + }
> +
> + return 0;
> +}
> +
> +static const struct in6_addr *
> +evpn_local_ip_find_v6(const struct evpn_local_ip_map *vni_ip_map,
> + uint32_t vni)
> +{
> + const struct in6_addr *addr = evpn_local_ip_lookup(&vni_ip_map->vni_ip_v6,
> + vni);
> + if (addr) {
> + return addr;
> + }
> +
> + if (ipv6_addr_is_set(&vni_ip_map->default_ip6)) {
> + return &vni_ip_map->default_ip6;
> + }
> +
> + return NULL;
> +}
> +
> +static void
> +evpn_local_ip_map_init(struct evpn_local_ip_map *vni_ip_map,
> + const struct smap *config)
> +{
> + char *tokstr, *token, *ptr0 = NULL;
> +
> + const char *local_ip_str = smap_get_def(config, "ovn-evpn-local-ip", "");
> + tokstr = xstrdup(local_ip_str);
> + for (token = strtok_r(tokstr, ",", &ptr0); token;
> + token = strtok_r(NULL, ",", &ptr0)) {
> + char *ptr1 = NULL, *vni_str = strtok_r(token, "-", &ptr1);
> + char *ip_str = strtok_r(NULL, "-", &ptr1);
> + struct in6_addr ip;
> + uint32_t vni;
> +
> + if (strlen(ptr1)) {
Nit: to be on the safe side I'd write this as "if (ptr1 && *ptr1) {"
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Malformed ovn-evpn-local-ip: %s", tokstr);
> + break;
> + }
> +
> + if (!ip_str) { /* default IP */
> + if (!ip46_parse(vni_str, &ip)) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Invalid IP: %s", vni_str);
Nit: I'd say "Invalid ovn-evpn-local-ip IP: %s".
> + continue;
> + }
> + struct in6_addr *ip_ptr = IN6_IS_ADDR_V4MAPPED(&ip)
> + ? &vni_ip_map->default_ip4 : &vni_ip_map->default_ip6;
Nit: indentation
> + if (ipv6_addr_is_set(ip_ptr)) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Default EVPN local IP%d already configured",
> + IN6_IS_ADDR_V4MAPPED(&ip) ? 4 : 6);
Here too.
> + continue;
> + }
> + *ip_ptr = ip;
> + } else {
> + if (!ip46_parse(ip_str, &ip)) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl,
> + "EVPN enabled, but required 'evpn-local-ip' is "
> + "missing or invalid %s ", local_ip_str);
> + continue;
> + }
> +
> + if (!vni_str) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Required VNI not configured");
> + continue;
> + }
> +
> + if (!ovs_scan(vni_str, "%u", &vni) || !ovn_is_valid_vni(vni)) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Invalid VNI: %s", vni_str);
> + continue;
> + }
> +
> + struct hmap *map = IN6_IS_ADDR_V4MAPPED(&ip)
> + ? &vni_ip_map->vni_ip_v4 : &vni_ip_map->vni_ip_v6;
Nit: I'd align this under the '=' and move the "else" on a separate line.
> + if (evpn_local_ip_lookup(map, vni)) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "Duplicated VNI entry: %s", vni_str);
> + continue;
> + }
For all the warnings above we should include 'ovn-evpn-local-ip' in the
message; otherwise it's too generic.
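For reference, the token-shape rule being reviewed here ("VNI-IP" entry vs. bare default IP) can be sketched standalone. This is not the patch's code; split_entry() is a hypothetical helper, and the real code additionally validates the IP with ip46_parse() and rejects trailing garbage after the second token:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Splits one comma-separated entry in place at the first '-'.
 * Returns true when a "VNI-" prefix is present (stored in *vni);
 * false for a bare token, which is treated as a default IP and
 * returned in *ip only. */
static bool
split_entry(char *entry, char **vni, char **ip)
{
    char *saveptr = NULL;
    char *first = strtok_r(entry, "-", &saveptr);
    char *second = strtok_r(NULL, "-", &saveptr);

    if (!second) {
        /* No '-' in the token (IPv6 addresses never contain one),
         * so the whole token is a candidate default IP. */
        *vni = NULL;
        *ip = first;
        return false;
    }
    *vni = first;
    *ip = second;
    return true;
}
```

A caller would still need to check that nothing remains after the IP, which is exactly what the ptr1 nit above is about.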
> +
> + struct vni_local_ip *e = xmalloc(sizeof *e);
> + *e = (struct vni_local_ip) {
> + .vni = vni,
> + .ip = ip,
> + };
> + hmap_insert(map, &e->hmap_node, hash_add(vni, 0));
> + }
> + }
> + free(tokstr);
> +}
> +
> +static void
> +evpn_local_ip_map_destroy(struct evpn_local_ip_map *map)
> +{
> + struct vni_local_ip *e;
> + HMAP_FOR_EACH_POP (e, hmap_node, &map->vni_ip_v4) {
> + free(e);
> + }
> + hmap_destroy(&map->vni_ip_v4);
> +
> + HMAP_FOR_EACH_POP (e, hmap_node, &map->vni_ip_v6) {
> + free(e);
> + }
> + hmap_destroy(&map->vni_ip_v6);
> +}
> +
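The lookup policy implemented above (prefer the per-VNI local IP, fall back to the configured default, otherwise report none) can be modeled in a few lines. A toy sketch with a plain array instead of OVN's hmap, and IPs kept as strings for simplicity:

```c
#include <stddef.h>

struct vni_ip_entry {
    unsigned int vni;
    const char *ip;     /* Local IP kept as a string for simplicity. */
};

/* Returns the per-VNI IP if one is configured, else 'default_ip'
 * (which may be NULL when no default was configured either). */
static const char *
local_ip_find(const struct vni_ip_entry *entries, size_t n,
              unsigned int vni, const char *default_ip)
{
    for (size_t i = 0; i < n; i++) {
        if (entries[i].vni == vni) {
            return entries[i].ip;
        }
    }
    return default_ip;
}
```

The real evpn_local_ip_find_v4()/_v6() pair applies this policy once per address family, which is what makes the dual-stack behavior work.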
> static void
> physical_consider_evpn_binding(const struct evpn_binding *binding,
> - const struct in6_addr *local_ip,
> + const struct evpn_local_ip_map *vni_ip_map,
> struct ofpbuf *ofpacts, struct match *match,
> - struct ovn_desired_flow_table *flow_table,
> - bool ipv4)
> + struct ovn_desired_flow_table *flow_table)
> {
> /* Ingress flows. */
> ofpbuf_clear(ofpacts);
> @@ -3191,13 +3340,30 @@ physical_consider_evpn_binding(const struct evpn_binding *binding,
>
> match_set_in_port(match, binding->tunnel_ofport);
> match_set_tun_id(match, htonll(binding->vni));
> - if (ipv4) {
> +
> + const struct in6_addr *local_ip6 = NULL;
> + ovs_be32 local_ip4 = 0;
> +
> + if (IN6_IS_ADDR_V4MAPPED(&binding->remote_ip)) {
> + local_ip4 = evpn_local_ip_find_v4(vni_ip_map, binding->vni);
> + } else {
> + local_ip6 = evpn_local_ip_find_v6(vni_ip_map, binding->vni);
> + }
> +
> + if (!local_ip4 && !local_ip6) {
> + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> + VLOG_WARN_RL(&rl, "failed to get local tunnel ip for VNI %d",
> + binding->vni);
> + return;
> + }
> +
> + if (local_ip4) {
> match_set_tun_src(match,
> in6_addr_get_mapped_ipv4(&binding->remote_ip));
> - match_set_tun_dst(match, in6_addr_get_mapped_ipv4(local_ip));
> + match_set_tun_dst(match, local_ip4);
> } else {
> match_set_tun_ipv6_src(match, &binding->remote_ip);
> - match_set_tun_ipv6_dst(match, local_ip);
> + match_set_tun_ipv6_dst(match, local_ip6);
> }
>
> put_load(binding->dp_key, MFF_LOG_DATAPATH, 0, 32, ofpacts);
> @@ -3216,13 +3382,13 @@ physical_consider_evpn_binding(const struct evpn_binding *binding,
> match_outport_dp_and_port_keys(match, binding->dp_key,
> binding->binding_key);
>
> - if (ipv4) {
> - ovs_be32 ip4 = in6_addr_get_mapped_ipv4(local_ip);
> - put_load_bytes(&ip4, sizeof ip4, MFF_TUN_SRC, 0, 32, ofpacts);
> - ip4 = in6_addr_get_mapped_ipv4(&binding->remote_ip);
> + if (local_ip4) {
> + put_load_bytes(&local_ip4, sizeof local_ip4, MFF_TUN_SRC, 0, 32,
> + ofpacts);
> + ovs_be32 ip4 = in6_addr_get_mapped_ipv4(&binding->remote_ip);
> put_load_bytes(&ip4, sizeof ip4, MFF_TUN_DST, 0, 32, ofpacts);
> } else {
> - put_load_bytes(local_ip, sizeof *local_ip, MFF_TUN_IPV6_SRC,
> + put_load_bytes(local_ip6, sizeof *local_ip6, MFF_TUN_IPV6_SRC,
> 0, 128, ofpacts);
> put_load_bytes(&binding->remote_ip, sizeof binding->remote_ip,
> MFF_TUN_IPV6_DST, 0, 128, ofpacts);
> @@ -3308,49 +3474,82 @@ physical_consider_evpn_binding(const struct evpn_binding *binding,
>
> static void
> physical_consider_evpn_multicast(const struct evpn_multicast_group *mc_group,
> - const struct in6_addr *local_ip,
> + const struct evpn_local_ip_map *vni_ip_map,
> struct ofpbuf *ofpacts, struct match *match,
> - struct ovn_desired_flow_table *flow_table,
> - bool ipv4)
> + struct ovn_desired_flow_table *flow_table)
> {
> - const struct evpn_binding *binding = NULL;
> + ovs_be32 local_ip4 = evpn_local_ip_find_v4(vni_ip_map, mc_group->vni);
> + const struct in6_addr *local_ip6 = evpn_local_ip_find_v6(vni_ip_map,
> + mc_group->vni);
>
> ofpbuf_clear(ofpacts);
> uint32_t multicast_tunnel_keys[] = {OVN_MCAST_FLOOD_TUNNEL_KEY,
> OVN_MCAST_UNKNOWN_TUNNEL_KEY,
> OVN_MCAST_FLOOD_L2_TUNNEL_KEY};
> - if (ipv4) {
> - ovs_be32 ip4 = in6_addr_get_mapped_ipv4(local_ip);
> - put_load_bytes(&ip4, sizeof ip4, MFF_TUN_SRC, 0, 32, ofpacts);
> - } else {
> - put_load_bytes(local_ip, sizeof *local_ip, MFF_TUN_IPV6_SRC,
> - 0, 128, ofpacts);
> - }
> put_load(mc_group->vni, MFF_TUN_ID, 0, 24, ofpacts);
>
> - const struct hmapx_node *node;
> - HMAPX_FOR_EACH (node, &mc_group->bindings) {
> - binding = node->data;
> - if (ipv4) {
> - ovs_be32 ip4 = in6_addr_get_mapped_ipv4(&binding->remote_ip);
> - put_load_bytes(&ip4, sizeof ip4, MFF_TUN_DST, 0, 32, ofpacts);
> - } else {
> + const struct evpn_binding *binding = NULL;
> + if (local_ip4) {
> + put_load_bytes(&local_ip4, sizeof local_ip4, MFF_TUN_SRC,
> + 0, 32, ofpacts);
> +
> + const struct hmapx_node *node;
> + HMAPX_FOR_EACH (node, &mc_group->bindings) {
> + binding = node->data;
> + if (!IN6_IS_ADDR_V4MAPPED(&binding->remote_ip)) {
> + continue;
> + }
> +
> + ovs_be32 remote_ip4 =
> + in6_addr_get_mapped_ipv4(&binding->remote_ip);
> + put_load_bytes(&remote_ip4, sizeof remote_ip4, MFF_TUN_DST, 0, 32,
> + ofpacts);
> + ofpact_put_OUTPUT(ofpacts)->port = binding->tunnel_ofport;
> + }
> + put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts);
> + }
> +
> + /* We first walk all the v4 remote IPs and generate tun encap actions for
> + * them. Then we will iterate over all the v6 remotes and generate tun
> + * encap actions for them.
> + * We need to set tun_v4 OVS fields to zero here in order to avoid having
> + * inconsistent OVS flow actions. */
> + if (local_ip4 && local_ip6) {
> + local_ip4 = 0;
> + put_load_bytes(&local_ip4, sizeof local_ip4, MFF_TUN_SRC, 0, 32,
> + ofpacts);
> + put_load_bytes(&local_ip4, sizeof local_ip4, MFF_TUN_DST, 0, 32,
> + ofpacts);
> + }
> +
> + if (local_ip6) {
> + put_load_bytes(local_ip6, sizeof *local_ip6, MFF_TUN_IPV6_SRC,
> + 0, 128, ofpacts);
> + const struct hmapx_node *node;
> + HMAPX_FOR_EACH (node, &mc_group->bindings) {
> + binding = node->data;
> + if (IN6_IS_ADDR_V4MAPPED(&binding->remote_ip)) {
> + continue;
> + }
> +
> put_load_bytes(&binding->remote_ip, sizeof binding->remote_ip,
> MFF_TUN_IPV6_DST, 0, 128, ofpacts);
> + ofpact_put_OUTPUT(ofpacts)->port = binding->tunnel_ofport;
> }
> - ofpact_put_OUTPUT(ofpacts)->port = binding->tunnel_ofport;
> + put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts);
> }
> - put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts);
>
> - ovs_assert(!hmapx_is_empty(&mc_group->bindings));
> - for (size_t i = 0; i < ARRAY_SIZE(multicast_tunnel_keys); i++) {
> - match_init_catchall(match);
> - match_outport_dp_and_port_keys(match, binding->dp_key,
> - multicast_tunnel_keys[i]);
> + if (binding) {
> + ovs_assert(!hmapx_is_empty(&mc_group->bindings));
> + for (size_t i = 0; i < ARRAY_SIZE(multicast_tunnel_keys); i++) {
> + match_init_catchall(match);
> + match_outport_dp_and_port_keys(match, binding->dp_key,
> + multicast_tunnel_keys[i]);
>
> - ofctrl_add_flow(flow_table, OFTABLE_REMOTE_VTEP_OUTPUT, 50,
> - mc_group->flow_uuid.parts[0],
> - match, ofpacts, &mc_group->flow_uuid);
> + ofctrl_add_flow(flow_table, OFTABLE_REMOTE_VTEP_OUTPUT, 50,
> + mc_group->flow_uuid.parts[0],
> + match, ofpacts, &mc_group->flow_uuid);
> + }
> }
> }
>
> @@ -3418,30 +3617,26 @@ physical_eval_evpn_flows(const struct physical_ctx *ctx,
> return;
> }
>
> - const char *local_ip_str = smap_get_def(&ctx->chassis->other_config,
> - "ovn-evpn-local-ip", "");
> - struct in6_addr local_ip;
> - if (!ip46_parse(local_ip_str, &local_ip)) {
> - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> - VLOG_WARN_RL(&rl, "EVPN enabled, but required 'evpn-local-ip' is "
> - "missing or invalid %s ", local_ip_str);
> - return;
> - }
> + struct evpn_local_ip_map vni_ip_map = {
> + .vni_ip_v4 = HMAP_INITIALIZER(&vni_ip_map.vni_ip_v4),
> + .vni_ip_v6 = HMAP_INITIALIZER(&vni_ip_map.vni_ip_v6),
> + };
> + evpn_local_ip_map_init(&vni_ip_map, &ctx->chassis->other_config);
>
> struct match match = MATCH_CATCHALL_INITIALIZER;
> - bool ipv4 = IN6_IS_ADDR_V4MAPPED(&local_ip);
> -
> const struct evpn_binding *binding;
> +
> HMAP_FOR_EACH (binding, hmap_node, ctx->evpn_bindings) {
> - physical_consider_evpn_binding(binding, &local_ip, ofpacts,
> - &match, flow_table, ipv4);
> + physical_consider_evpn_binding(binding, &vni_ip_map, ofpacts,
> + &match, flow_table);
> }
>
> const struct evpn_multicast_group *mc_group;
> HMAP_FOR_EACH (mc_group, hmap_node, ctx->evpn_multicast_groups) {
> - physical_consider_evpn_multicast(mc_group, &local_ip, ofpacts,
> - &match, flow_table, ipv4);
> + physical_consider_evpn_multicast(mc_group, &vni_ip_map, ofpacts,
> + &match, flow_table);
> }
> + evpn_local_ip_map_destroy(&vni_ip_map);
>
> const struct evpn_fdb *fdb;
> HMAP_FOR_EACH (fdb, hmap_node, ctx->evpn_fdbs) {
> @@ -3569,34 +3764,33 @@ physical_handle_evpn_binding_changes(
> const struct uuidset *removed_bindings,
> const struct uuidset *removed_multicast_groups)
> {
> - const char *local_ip_str = smap_get_def(&ctx->chassis->other_config,
> - "ovn-evpn-local-ip", "");
> - struct in6_addr local_ip;
> - if (!ip46_parse(local_ip_str, &local_ip)) {
> - return;
> - }
> + struct evpn_local_ip_map vni_ip_map = {
> + .vni_ip_v4 = HMAP_INITIALIZER(&vni_ip_map.vni_ip_v4),
> + .vni_ip_v6 = HMAP_INITIALIZER(&vni_ip_map.vni_ip_v6),
> + };
> + evpn_local_ip_map_init(&vni_ip_map, &ctx->chassis->other_config);
>
> struct ofpbuf ofpacts;
> ofpbuf_init(&ofpacts, 0);
> struct match match = MATCH_CATCHALL_INITIALIZER;
> - bool ipv4 = IN6_IS_ADDR_V4MAPPED(&local_ip);
>
> const struct hmapx_node *node;
> HMAPX_FOR_EACH (node, updated_bindings) {
> const struct evpn_binding *binding = node->data;
>
> ofctrl_remove_flows(flow_table, &binding->flow_uuid);
> - physical_consider_evpn_binding(binding, &local_ip, &ofpacts,
> - &match, flow_table, ipv4);
> + physical_consider_evpn_binding(binding, &vni_ip_map, &ofpacts,
> + &match, flow_table);
> }
>
> HMAPX_FOR_EACH (node, updated_multicast_groups) {
> const struct evpn_multicast_group *mc_group = node->data;
>
> ofctrl_remove_flows(flow_table, &mc_group->flow_uuid);
> - physical_consider_evpn_multicast(mc_group, &local_ip, &ofpacts,
> - &match, flow_table, ipv4);
> + physical_consider_evpn_multicast(mc_group, &vni_ip_map, &ofpacts,
> + &match, flow_table);
> }
> + evpn_local_ip_map_destroy(&vni_ip_map);
>
> ofpbuf_uninit(&ofpacts);
>
> diff --git a/tests/system-common-macros.at b/tests/system-common-macros.at
> index 9c5a124a0..0f4f8952c 100644
> --- a/tests/system-common-macros.at
> +++ b/tests/system-common-macros.at
> @@ -209,18 +209,20 @@ m4_define([SET_EVPN_IFACE_NAMES],
> ifname=$1 switch=$2 vni=$3
>
> [[ $ifname = "default" ]] && BR_NAME=br-$vni || BR_NAME=br-$ifname
> - [[ $ifname = "default" ]] && VXLAN_NAME=vxlan-$vni || VXLAN_NAME=vxlan-$ifname
> + [[ $ifname = "default" ]] && VXLAN_NAME=vxlan-$vni || VXLAN_NAME=vxlan-v4-$ifname
> + [[ $ifname = "default" ]] && VXLAN_V6_NAME=vxlan-v6-$vni || VXLAN_V6_NAME=vxlan-v6-$ifname
> [[ $ifname = "default" ]] && LO_NAME=lo-$vni || LO_NAME=lo-$ifname
>
> if [[ $ifname != "default" ]]; then
> check ovn-nbctl set logical_switch $switch \
> other_config:dynamic-routing-bridge-ifname=$BR_NAME \
> - other_config:dynamic-routing-vxlan-ifname=$VXLAN_NAME \
> other_config:dynamic-routing-advertise-ifname=$LO_NAME
> fi
> + check ovn-nbctl set logical_switch $switch \
> + other_config:dynamic-routing-vxlan-ifname=$VXLAN_NAME","$VXLAN_V6_NAME
>
> - export BR_NAME VXLAN_NAME LO_NAME
> - on_exit 'unset BR_NAME VXLAN_NAME LO_NAME'
> + export BR_NAME VXLAN_NAME VXLAN_V6_NAME LO_NAME
> + on_exit 'unset BR_NAME VXLAN_NAME VXLAN_V6_NAME LO_NAME'
> ]
> )
>
> diff --git a/tests/system-ovn.at b/tests/system-ovn.at
> index ec3b3735f..584f91894 100644
> --- a/tests/system-ovn.at
> +++ b/tests/system-ovn.at
> @@ -17731,10 +17731,10 @@ priority=1050,tun_id=0xa,tun_src=169.0.0.10,tun_dst=169.0.0.1,in_port=$ofport ac
>
AT_CHECK_UNQUOTED([ovs-ofctl dump-flows br-int table=OFTABLE_REMOTE_VTEP_OUTPUT | grep output | \
> awk '{print $[7], $[8]}' | sort], [0], [dnl
> -priority=50,reg15=0x8000,metadata=0x$dp_key actions=load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8000,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> priority=50,reg15=0x80000001,metadata=0x$dp_key actions=load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],load:0xa->NXM_NX_TUN_ID[[0..23]],output:$ofport
> -priority=50,reg15=0x8001,metadata=0x$dp_key actions=load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> -priority=50,reg15=0x8004,metadata=0x$dp_key actions=load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8001,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8004,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> priority=55,reg10=0x1/0x1,reg15=0x80000001,metadata=0x$dp_key actions=load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],load:0xa900000a->NXM_NX_TUN_IPV4_DST[[]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xffff->NXM_OF_IN_PORT[[]],output:$ofport
> ])
>
> @@ -18170,7 +18170,7 @@ check ovs-vsctl \
> -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
> -- set Open_vSwitch . external-ids:ovn-encap-ip=169::1 \
> -- set Open_vSwitch . external-ids:ovn-bridge-mappings=phynet:br-ext \
> - -- set Open_vSwitch . external-ids:ovn-evpn-local-ip=169::1 \
> + -- set Open_vSwitch . external-ids:ovn-evpn-local-ip=169.0.0.1,169::1 \
> -- set Open_vSwitch . external-ids:ovn-evpn-vxlan-ports=4789 \
> -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
>
> @@ -18268,10 +18268,10 @@ priority=1050,tun_id=0xa,tun_ipv6_src=169::10,tun_ipv6_dst=169::1,in_port=$ofport
>
AT_CHECK_UNQUOTED([ovs-ofctl dump-flows br-int table=OFTABLE_REMOTE_VTEP_OUTPUT | grep output | \
> awk '{print $[7], $[8]}' | sort], [0], [dnl
> -priority=50,reg15=0x8000,metadata=0x$dp_key actions=load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8000,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],resubmit(,OFTABLE_LOCAL_OUTPUT),load:0->NXM_NX_TUN_IPV4_SRC[[]],load:0->NXM_NX_TUN_IPV4_DST[[]],load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> priority=50,reg15=0x80000001,metadata=0x$dp_key actions=load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],load:0xa->NXM_NX_TUN_ID[[0..23]],output:$ofport
> -priority=50,reg15=0x8001,metadata=0x$dp_key actions=load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> -priority=50,reg15=0x8004,metadata=0x$dp_key actions=load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8001,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],resubmit(,OFTABLE_LOCAL_OUTPUT),load:0->NXM_NX_TUN_IPV4_SRC[[]],load:0->NXM_NX_TUN_IPV4_DST[[]],load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> +priority=50,reg15=0x8004,metadata=0x$dp_key actions=load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xa9000001->NXM_NX_TUN_IPV4_SRC[[]],resubmit(,OFTABLE_LOCAL_OUTPUT),load:0->NXM_NX_TUN_IPV4_SRC[[]],load:0->NXM_NX_TUN_IPV4_DST[[]],load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],output:$ofport,resubmit(,OFTABLE_LOCAL_OUTPUT)
> priority=55,reg10=0x1/0x1,reg15=0x80000001,metadata=0x$dp_key actions=load:0x1->NXM_NX_TUN_IPV6_SRC[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_SRC[[64..127]],load:0x10->NXM_NX_TUN_IPV6_DST[[0..63]],load:0x169000000000000->NXM_NX_TUN_IPV6_DST[[64..127]],load:0xa->NXM_NX_TUN_ID[[0..23]],load:0xffff->NXM_OF_IN_PORT[[]],output:$ofport
> ])
>
> @@ -18684,6 +18684,589 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> AT_CLEANUP
> ])
>
> +OVN_FOR_EACH_NORTHD([
> +AT_SETUP([dynamic-routing - EVPN $1 naming - Dual Stack])
> +AT_KEYWORDS([dynamic-routing])
> +
> +CHECK_VRF()
> +CHECK_CONNTRACK()
> +CHECK_CONNTRACK_NAT()
> +
> +IFNAME=$1
> +vni=10
> +VRF_RESERVE([$vni])
> +ovn_start
> +OVS_TRAFFIC_VSWITCHD_START()
> +ADD_BR([br-int])
> +ADD_BR([br-ext], [set Bridge br-ext fail-mode=standalone])
> +
> +# Set external-ids in br-int needed for ovn-controller.
> +check ovs-vsctl \
> + -- set Open_vSwitch . external-ids:system-id=hv1 \
> + -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
> + -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
> + -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
> + -- set Open_vSwitch . external-ids:ovn-bridge-mappings=phynet:br-ext \
> + -- set Open_vSwitch . external-ids:ovn-evpn-local-ip=$vni-169.0.0.1,$vni-169::1 \
> + -- set Open_vSwitch . external-ids:ovn-evpn-vxlan-ports=4789 \
> + -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
> +
> +# Start ovn-controller.
> +start_daemon ovn-controller
> +
> +OVS_WAIT_WHILE([ip link | grep -q ovnvrf$vni:.*UP])
> +
> +check ovn-nbctl \
> + -- ls-add ls-evpn \
> + -- lsp-add ls-evpn workload1 \
> + -- lsp-set-addresses workload1 "f0:00:0f:16:01:10 172.16.1.10 172:16::10" \
> + -- lsp-add ls-evpn workload2 \
> + -- lsp-set-addresses workload2 "f0:00:0f:16:01:20 172.16.1.20 172:16::20" \
> + -- lsp-add-localnet-port ls-evpn ln_port phynet
> +
> +SET_EVPN_IFACE_NAMES([$IFNAME], [ls-evpn], [$vni])
> +
> +ADD_NAMESPACES(workload1)
> +ADD_VETH(workload1, workload1, br-int, "172:16::10/64", "f0:00:0f:16:01:10", \
> + "172:16::1", "nodad", "172.16.1.10/24", "172.16.1.1")
> +
> +ADD_NAMESPACES(workload2)
> +ADD_VETH(workload2, workload2, br-int, "172:16::20/64", "f0:00:0f:16:01:20", \
> + "172:16::1", "nodad", "172.16.1.20/24", "172.16.1.1")
> +
> +OVN_POPULATE_ARP
> +check ovn-nbctl --wait=hv sync
> +wait_for_ports_up
> +
> +# Setup a VRF for the VNI.
> +check ip link add vrf-$vni type vrf table $vni
> +on_exit "ip link del vrf-$vni"
> +check ip link set vrf-$vni up
> +
> +# Add VNI bridge.
> +check ip link add $BR_NAME type bridge
> +on_exit "ip link del $BR_NAME"
> +check ip link set $BR_NAME master vrf-$vni addrgenmode none
> +check ip link set dev $BR_NAME up
> +
> +# Add VXLAN VTEP for the VNI (linked to the OVS vxlan_sys_<port> interface).
> +# Use a dstport different than the one used by OVS.
> +# This is fine because we don't actually want traffic to pass through
> +# the $vxlan interface. FRR should read the dstport from the linked
> +# vxlan_sys_${vxlan_port} device.
> +dstport=$((60000 + $vni))
> +check ip link add $VXLAN_NAME type vxlan \
> + id $vni dstport $dstport local 169.0.0.1 nolearning
> +check ip link add $VXLAN_V6_NAME type vxlan \
> + id $vni dstport $dstport local 169::1 nolearning
> +on_exit "ip link del $VXLAN_NAME"
> +on_exit "ip link del $VXLAN_V6_NAME"
Leftover from v4. The cleanup of the v4 interface should come right after
its ip link add. Otherwise we won't clean it up if adding the v6
interface fails.
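That is, keeping each cleanup paired with its addition, something along these lines (same macros as in the test above):

```shell
check ip link add $VXLAN_NAME type vxlan \
    id $vni dstport $dstport local 169.0.0.1 nolearning
on_exit "ip link del $VXLAN_NAME"

check ip link add $VXLAN_V6_NAME type vxlan \
    id $vni dstport $dstport local 169::1 nolearning
on_exit "ip link del $VXLAN_V6_NAME"
```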
Since the comments I had are minor and easy to fix, I went ahead, fixed
them, and applied the patch to main.
Thanks again!
Regards,
Dumitru
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev