On 3/18/26 10:08 PM, Mark Michelson wrote:
> Hi Dumitru,
> 

Hi Mark,

> The only nit I have is with the system test you have added. The test
> name is the same as the test added in ovn.at "Load balancer health

Right, I had copy/pasted it from the ovn-northd.at test and struggled
with changing it to a better name.  I'll do that in v2.

> checks - service monitor source MAC". However, it doesn't check the
> source MAC address of the health check packets to ensure that the
> global service monitor MAC is being used. Instead, it seems to be a
> basic test to ensure that health checks work when the load balancer is
> applied to the switch and the logical router's IP address is used as
> the source IP address of the health checks. The test is useful, since

Well, it's more than just a basic test IMO because it actually fails if
the wrong source MAC is used.

> I don't think we have coverage for this case, but I suggest that we
> add some sort of check to ensure that the source MAC of the health
> checks is what we expect. Otherwise, the test should be renamed to
> indicate it is a basic functionality test.

As said above, this test fails without the source MAC fix because OVN
expects probe replies to be sent to the $svc_monitor_mac destination
while the probe requests were actually sent from the LRP MAC.

I'll try to add an explicit check in v2 to make it more obvious.
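
For reference, something along these lines is what I have in mind for
the system test (just a sketch; the tcpdump-based capture and reading
the auto-generated MAC back from NB_Global are assumptions I still need
to validate for v2):

```
dnl Sketch only: capture the health check probes seen by the backend
dnl and verify their Ethernet source is the global service monitor MAC,
dnl not the router port MAC (00:00:00:00:00:01).  Assumes tcpdump is
dnl available in the test environment.
svc_mon_mac=$(ovn-nbctl get NB_Global . options:svc_monitor_mac | tr -d '"')
NETNS_DAEMONIZE([ls1p1],
    [tcpdump -l -nne -i ls1p1 udp and dst port 42 > probes.txt 2>/dev/null],
    [tcpdump1.pid])
OVS_WAIT_UNTIL([grep -qi "$svc_mon_mac" probes.txt])
AT_CHECK([grep -qi "00:00:00:00:00:01" probes.txt], [1])
```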

Thanks,
Dumitru

> 
> On Wed, Mar 18, 2026 at 11:23 AM Dumitru Ceara via dev
> <[email protected]> wrote:
>>
>> It doesn't really make sense to use the router port MAC as the service
>> monitor source otherwise.  Worse, it may even break existing use cases
>> when the load balancer is only applied to logical switches and not to
>> the routers connected to them.
>>
>> In order for the change to work properly we also had to fall back to
>> recompute in all cases when load balancers with health check
>> configuration are added/updated/deleted (through load balancer groups
>> too).  This was not happening until now but didn't really cause issues
>> before ceea1d7cccad ("northd: Allow LB health checks from router IPs.").
>>
>> This also uncovered a bug in pinctrl, where we didn't resync the
>> in-memory version of the service monitor when the SB contents changed
>> (e.g., the source MAC address).  The patch fixes that part too; it
>> wasn't especially visible until now, when we started changing the
>> source MAC of a monitor that gets applied to a router and uses the
>> router IP.
>>
>> Fixes: a0a5dd8ce56f ("controller: Store src_mac, src_ip in svc_monitor struct.")
>> Fixes: ceea1d7cccad ("northd: Allow LB health checks from router IPs.")
>> Reported-at: https://redhat.atlassian.net/browse/FDP-3453
>> Signed-off-by: Dumitru Ceara <[email protected]>
>> ---
>>  controller/pinctrl.c | 11 ++++---
>>  northd/en-lb-data.c  | 20 ++++++++----
>>  northd/lb.c          | 13 +++++++-
>>  northd/lb.h          |  8 +++--
>>  northd/northd.c      |  8 +++--
>>  tests/ovn-northd.at  | 77 ++++++++++++++++++++++++++++++++++++++++++++
>>  tests/system-ovn.at  | 65 +++++++++++++++++++++++++++++++++++++
>>  7 files changed, 184 insertions(+), 18 deletions(-)
>>
>> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
>> index 35c42942f3..18b7b0df2e 100644
>> --- a/controller/pinctrl.c
>> +++ b/controller/pinctrl.c
>> @@ -7197,16 +7197,13 @@ sync_svc_monitors(struct ovsdb_idl_txn *ovnsb_idl_txn,
>>          if (!svc_mon) {
>>              svc_mon = xmalloc(sizeof *svc_mon);
>>              svc_mon->dp_key = dp_key;
>> -            svc_mon->input_port_key = input_port_key;
>>              svc_mon->port_key = port_key;
>>              svc_mon->proto_port = sb_svc_mon->port;
>>              svc_mon->ip = ip_addr;
>> -            svc_mon->src_ip = ip_addr_src;
>>              svc_mon->is_ip6 = !is_ipv4;
>>              svc_mon->state = SVC_MON_S_INIT;
>>              svc_mon->status = SVC_MON_ST_UNKNOWN;
>>              svc_mon->protocol = protocol;
>> -            svc_mon->type = mon_type;
>>
>>              smap_init(&svc_mon->options);
>>              svc_mon->interval =
>> @@ -7220,15 +7217,19 @@ sync_svc_monitors(struct ovsdb_idl_txn *ovnsb_idl_txn,
>>              svc_mon->n_success = 0;
>>              svc_mon->n_failures = 0;
>>
>> -            eth_addr_from_string(sb_svc_mon->src_mac, &svc_mon->src_mac);
>> -
>>              hmap_insert(&svc_monitors_map, &svc_mon->hmap_node, hash);
>>              ovs_list_push_back(&svc_monitors, &svc_mon->list_node);
>>              changed = true;
>>          }
>>
>>          svc_mon->sb_svc_mon = sb_svc_mon;
>> +
>> +        svc_mon->input_port_key = input_port_key;
>> +        svc_mon->src_ip = ip_addr_src;
>> +        svc_mon->type = mon_type;
>> +        eth_addr_from_string(sb_svc_mon->src_mac, &svc_mon->src_mac);
>>          svc_mon->ea = ea;
>> +
>>          if (!smap_equal(&svc_mon->options, &sb_svc_mon->options)) {
>>              smap_destroy(&svc_mon->options);
>>              smap_clone(&svc_mon->options, &sb_svc_mon->options);
>> diff --git a/northd/en-lb-data.c b/northd/en-lb-data.c
>> index a64c06bfda..04ef20bcc6 100644
>> --- a/northd/en-lb-data.c
>> +++ b/northd/en-lb-data.c
>> @@ -268,6 +268,7 @@ lb_data_load_balancer_group_handler(struct engine_node *node, void *data)
>>                  hmapx_add(&clbg->assoc_lbs, lb_group->lbs[i]);
>>              }
>>
>> +            trk_lb_data->has_health_checks |= lb_group->has_health_checks;
>>              trk_lb_data->has_routable_lb |= lb_group->has_routable_lb;
>>              continue;
>>          }
>> @@ -283,6 +284,7 @@ lb_data_load_balancer_group_handler(struct engine_node *node, void *data)
>>          if (nbrec_load_balancer_group_is_deleted(tracked_lb_group)) {
>>              hmap_remove(&lb_data->lbgrps, &lb_group->hmap_node);
>>              add_deleted_lbgrp_to_tracked_data(lb_group, trk_lb_data);
>> +            trk_lb_data->has_health_checks |= lb_group->has_health_checks;
>>              trk_lb_data->has_routable_lb |= lb_group->has_routable_lb;
>>          } else {
>>              /* Determine the lbs which are added or deleted for this
>> @@ -299,6 +301,7 @@ lb_data_load_balancer_group_handler(struct engine_node *node, void *data)
>>                  build_lrouter_lb_ips(lb_group->lb_ips, lb_group->lbs[i]);
>>              }
>>
>> +            trk_lb_data->has_health_checks |= lb_group->has_health_checks;
>>              trk_lb_data->has_routable_lb |= lb_group->has_routable_lb;
>>              struct crupdated_lbgrp *clbg =
>>                  add_crupdated_lbgrp_to_tracked_data(lb_group, trk_lb_data);
>> @@ -690,10 +693,12 @@ handle_od_lb_changes(struct nbrec_load_balancer **nbrec_lbs,
>>              /* Add this lb to the tracked data. */
>>              uuidset_insert(&codlb->assoc_lbs, lb_uuid);
>>
>> +            struct ovn_northd_lb *lb = ovn_northd_lb_find(&lb_data->lbs,
>> +                                                          lb_uuid);
>> +            ovs_assert(lb);
>> +
>> +            trk_lb_data->has_health_checks |= lb->health_checks;
>>              if (!trk_lb_data->has_routable_lb) {
>> -                struct ovn_northd_lb *lb = ovn_northd_lb_find(&lb_data->lbs,
>> -                                                              lb_uuid);
>> -                ovs_assert(lb);
>>                  trk_lb_data->has_routable_lb |= lb->routable;
>>                  trk_lb_data->has_distributed_lb |= lb->is_distributed;
>>              }
>> @@ -730,10 +735,12 @@ handle_od_lbgrp_changes(struct nbrec_load_balancer_group **nbrec_lbgrps,
>>              /* Add this lb group to the tracked data. */
>>              uuidset_insert(&codlb->assoc_lbgrps, lbgrp_uuid);
>>
>> +            struct ovn_lb_group *lbgrp =
>> +                ovn_lb_group_find(&lb_data->lbgrps, lbgrp_uuid);
>> +            ovs_assert(lbgrp);
>> +
>> +            trk_lb_data->has_health_checks |= lbgrp->has_health_checks;
>>              if (!trk_lb_data->has_routable_lb) {
>> -                struct ovn_lb_group *lbgrp =
>> -                    ovn_lb_group_find(&lb_data->lbgrps, lbgrp_uuid);
>> -                ovs_assert(lbgrp);
>>                  trk_lb_data->has_routable_lb |= lbgrp->has_routable_lb;
>>              }
>>          }
>> @@ -755,6 +762,7 @@ destroy_tracked_data(struct ed_type_lb_data *lb_data)
>>      lb_data->tracked_lb_data.has_dissassoc_lbs_from_lbgrps = false;
>>      lb_data->tracked_lb_data.has_dissassoc_lbs_from_od = false;
>>      lb_data->tracked_lb_data.has_dissassoc_lbgrps_from_od = false;
>> +    lb_data->tracked_lb_data.has_health_checks = false;
>>      lb_data->tracked_lb_data.has_routable_lb = false;
>>
>>      struct hmapx_node *node;
>> diff --git a/northd/lb.c b/northd/lb.c
>> index 83c1bf97e1..2f683cef1b 100644
>> --- a/northd/lb.c
>> +++ b/northd/lb.c
>> @@ -464,7 +464,8 @@ ovn_northd_lb_init(struct ovn_northd_lb *lb,
>>   * considered.  If no matching LRP is found, the 'svc_mon_lrp' is set to
>>   * NULL. */
>>  void
>> -ovn_northd_lb_backend_set_mon_port(const struct ovn_port *backend_op,
>> +ovn_northd_lb_backend_set_mon_port(const struct ovn_lb_datapaths *lb_dps,
>> +                                   const struct ovn_port *backend_op,
>>                                     struct ovn_northd_lb_backend *backend_nb)
>>  {
>>      if (backend_op && !backend_nb->remote_backend) {
>> @@ -473,6 +474,14 @@ ovn_northd_lb_backend_set_mon_port(const struct ovn_port *backend_op,
>>              if (!svc_mon_op->peer) {
>>                  continue;
>>              }
>> +
>> +            /* Skip routers this load balancer is not applied onto. */
>> +            const struct ovn_datapath *rtr_od = svc_mon_op->peer->od;
>> +            if (!dynamic_bitmap_is_set(&lb_dps->nb_lr_map,
>> +                                       rtr_od->sdp->index)) {
>> +                continue;
>> +            }
>> +
>>              const char *lrp_ip = lrp_find_member_ip(
>>                  svc_mon_op->peer,
>>                  backend_nb->svc_mon_src_ip);
>> @@ -564,6 +573,7 @@ ovn_lb_group_init(struct ovn_lb_group *lb_group,
>>          const struct uuid *lb_uuid =
>>              &nbrec_lb_group->load_balancer[i]->header_.uuid;
>>          lb_group->lbs[i] = ovn_northd_lb_find(lbs, lb_uuid);
>> +        lb_group->has_health_checks |= lb_group->lbs[i]->health_checks;
>>          lb_group->has_routable_lb |= lb_group->lbs[i]->routable;
>>      }
>>  }
>> @@ -586,6 +596,7 @@ ovn_lb_group_cleanup(struct ovn_lb_group *lb_group)
>>  {
>>      ovn_lb_ip_set_destroy(lb_group->lb_ips);
>>      lb_group->lb_ips = NULL;
>> +    lb_group->has_health_checks = false;
>>      lb_group->has_routable_lb = false;
>>      free(lb_group->lbs);
>>  }
>> diff --git a/northd/lb.h b/northd/lb.h
>> index c1f0c95da1..97d2c4c366 100644
>> --- a/northd/lb.h
>> +++ b/northd/lb.h
>> @@ -108,9 +108,6 @@ struct ovn_northd_lb_backend {
>>      struct ovn_port *svc_mon_lrp;
>>  };
>>
>> -void ovn_northd_lb_backend_set_mon_port(const struct ovn_port *,
>> -                                        struct ovn_northd_lb_backend *);
>> -
>>  struct ovn_northd_lb *ovn_northd_lb_create(const struct nbrec_load_balancer *);
>>  struct ovn_northd_lb *ovn_northd_lb_find(const struct hmap *,
>>                                           const struct uuid *);
>> @@ -136,6 +133,7 @@ struct ovn_lb_group {
>>      size_t n_lbs;
>>      struct ovn_northd_lb **lbs;
>>      struct ovn_lb_ip_set *lb_ips;
>> +    bool has_health_checks;
>>      bool has_routable_lb;
>>  };
>>
>> @@ -203,6 +201,10 @@ void ovn_lb_datapaths_add_ls(struct ovn_lb_datapaths *, size_t n,
>>                               struct ovn_datapath **,
>>                               size_t n_ls_datapaths);
>>
>> +void ovn_northd_lb_backend_set_mon_port(const struct ovn_lb_datapaths *,
>> +                                        const struct ovn_port *,
>> +                                        struct ovn_northd_lb_backend *);
>> +
>>  struct ovn_lb_group_datapaths {
>>      struct hmap_node hmap_node;
>>
>> diff --git a/northd/northd.c b/northd/northd.c
>> index 734efea657..c607c4c08d 100644
>> --- a/northd/northd.c
>> +++ b/northd/northd.c
>> @@ -3332,13 +3332,15 @@ ovn_nf_svc_create(struct ovsdb_idl_txn *ovnsb_txn,
>>
>>  static void
>>  ovn_lb_svc_create(struct ovsdb_idl_txn *ovnsb_txn,
>> -                  const struct ovn_northd_lb *lb,
>> +                  const struct ovn_lb_datapaths *lb_dps,
>>                    const struct svc_monitor_addresses *svc_global_addresses,
>>                    struct hmap *ls_ports,
>>                    struct sset *svc_monitor_lsps,
>>                    struct hmap *local_svc_monitors_map,
>>                    struct hmap *ic_learned_svc_monitors_map)
>>  {
>> +    const struct ovn_northd_lb *lb = lb_dps->lb;
>> +
>>      if (lb->template) {
>>          return;
>>      }
>> @@ -3366,7 +3368,7 @@ ovn_lb_svc_create(struct ovsdb_idl_txn *ovnsb_txn,
>>                  continue;
>>              }
>>
>> -            ovn_northd_lb_backend_set_mon_port(op, backend_nb);
>> +            ovn_northd_lb_backend_set_mon_port(lb_dps, op, backend_nb);
>>
>>              /* If the service monitor is backed by a real port, use its MAC
>>               * address instead of the default service check MAC. */
>> @@ -3787,7 +3789,7 @@ build_svc_monitors_data(
>>
>>      struct ovn_lb_datapaths *lb_dps;
>>      HMAP_FOR_EACH (lb_dps, hmap_node, lb_dps_map) {
>> -        ovn_lb_svc_create(ovnsb_txn, lb_dps->lb,
>> +        ovn_lb_svc_create(ovnsb_txn, lb_dps,
>>                            svc_global_addresses,
>>                            ls_ports,
>>                            svc_monitor_lsps,
>> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
>> index 60258a6f25..f068e49465 100644
>> --- a/tests/ovn-northd.at
>> +++ b/tests/ovn-northd.at
>> @@ -20170,3 +20170,80 @@ AT_CHECK([ovn-sbctl get Port_Binding $lsp_pb_uuid external_ids:name],
>>  OVN_CLEANUP_NORTHD
>>  AT_CLEANUP
>>  ])
>> +
>> +OVN_FOR_EACH_NORTHD_NO_HV([
>> +AT_SETUP([Load balancer health checks - service monitor source MAC])
>> +ovn_start
>> +
>> +global_svc_mon_mac="11:11:11:11:11:11"
>> +router_port_mac="00:00:00:00:00:01"
>> +check ovn-nbctl set NB_Global . options:svc_monitor_mac="$global_svc_mon_mac"
>> +
>> +check ovn-nbctl ls-add ls                                   \
>> +      -- lsp-add ls lsp1                                    \
>> +      -- lsp-add ls lsp2                                    \
>> +    -- lr-add lr                                            \
>> +      -- lrp-add lr lr-ls $router_port_mac 192.168.1.254/24 \
>> +      -- lsp-add-router-port ls ls-lr lr-ls
>> +
>> +check ovn-nbctl                                                 \
>> +    -- lb-add lb0 192.168.5.1:1 192.168.1.1:1,192.168.1.2:1 udp \
>> +    -- lb-add lb1 192.168.5.1:2 192.168.1.1:2,192.168.1.2:2 udp
>> +lb0=$(fetch_column nb:Load_Balancer _uuid name=lb0)
>> +lb1=$(fetch_column nb:Load_Balancer _uuid name=lb1)
>> +
>> +check ovn-nbctl                                                               \
>> +    -- set load_balancer $lb0 ip_port_mappings:192.168.1.1=lsp1:192.168.1.254 \
>> +    -- set load_balancer $lb1 ip_port_mappings:192.168.1.1=lsp1:192.168.1.254
>> +
>> +check_uuid ovn-nbctl --id=@hc create Load_Balancer_Health_Check vip="192.168.5.1\:1" \
>> +    -- add Load_Balancer $lb0 health_check @hc
>> +check_uuid ovn-nbctl --id=@hc create Load_Balancer_Health_Check vip="192.168.5.1\:2" \
>> +    -- add Load_Balancer $lb1 health_check @hc
>> +
>> +dnl Add LB0 and LB1 to the switch (LB1 via load balancer group).
>> +check ovn-nbctl ls-lb-add ls lb0
>> +lbg=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg)
>> +check ovn-nbctl add load_balancer_group lbg load_balancer $lb1
>> +check ovn-nbctl add logical_switch ls load_balancer_group $lbg
>> +
>> +check ovn-nbctl --wait=sb sync
>> +check_row_count sb:Service_Monitor 2
>> +
>> +AS_BOX([LB0 on switch, LB1 on switch])
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=1
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch + router, LB1 on switch])
>> +check ovn-nbctl --wait=sb lr-lb-add lr lb0
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=1
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch + router, LB1 on switch + router])
>> +check ovn-nbctl --wait=sb add logical_router lr load_balancer_group $lbg
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=1
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch + router, LB1 nowhere])
>> +check ovn-nbctl --wait=sb remove load_balancer_group lbg load_balancer $lb1
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=1
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch + router, LB1 on switch + router, readd])
>> +check ovn-nbctl --wait=sb add load_balancer_group lbg load_balancer $lb1
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=1
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch + router, LB1 on switch, remove])
>> +check ovn-nbctl --wait=sb remove logical_router lr load_balancer_group $lbg
>> +check_column "$router_port_mac"    sb:Service_Monitor src_mac port=1
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=2
>> +
>> +AS_BOX([LB0 on switch, LB1 on switch, remove])
>> +check ovn-nbctl --wait=sb lr-lb-del lr lb0
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=1
>> +check_column "$global_svc_mon_mac" sb:Service_Monitor src_mac port=2
>> +
>> +OVN_CLEANUP_NORTHD
>> +AT_CLEANUP
>> +])
>> diff --git a/tests/system-ovn.at b/tests/system-ovn.at
>> index 7edb8a45b0..72edc6956f 100644
>> --- a/tests/system-ovn.at
>> +++ b/tests/system-ovn.at
>> @@ -21496,3 +21496,68 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
>>  /connection dropped.*/d"])
>>  AT_CLEANUP
>>  ])
>> +
>> +OVN_FOR_EACH_NORTHD([
>> +AT_SETUP([Load balancer health checks - service monitor source MAC])
>> +AT_SKIP_IF([test $HAVE_NC = no])
>> +
>> +ovn_start
>> +OVS_TRAFFIC_VSWITCHD_START()
>> +ADD_BR([br-int])
>> +
>> +check ovs-vsctl \
>> +        -- set Open_vSwitch . external-ids:system-id=hv1 \
>> +        -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
>> +        -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
>> +        -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
>> +        -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
>> +
>> +start_daemon ovn-controller
>> +
>> +check ovn-nbctl ls-add ls1
>> +check ovn-nbctl lsp-add ls1 ls1p1
>> +check ovn-nbctl lsp-set-addresses ls1p1 "00:00:00:01:01:01 192.168.1.1"
>> +
>> +check ovn-nbctl lr-add lr1
>> +check ovn-nbctl lrp-add lr1 lr1-ls1 00:00:00:00:00:01 192.168.1.254/24
>> +check ovn-nbctl -- lsp-add-router-port ls1 ls1-lr1 lr1-ls1
>> +
>> +check ovn-nbctl lb-add lb 192.168.5.1:42 192.168.1.1:42 udp
>> +check ovn-nbctl ls-lb-add ls1 lb
>> +lb=$(fetch_column nb:Load_Balancer _uuid name=lb)
>> +
>> +check ovn-nbctl set Logical_Router lr1 options:chassis="hv1"
>> +
>> +lbhc=$(ovn-nbctl                                                     \
>> +    --id=@hc create Load_Balancer_Health_Check vip="192.168.5.1\:42" \
>> +    -- add Load_Balancer $lb health_check @hc)
>> +check ovn-nbctl set Load_Balancer_Health_Check $lbhc \
>> +    options:interval=1 options:timeout=1             \
>> +    options:success_count=2                          \
>> +    options:failure_count=2
>> +check ovn-nbctl --wait=sb set load_balancer $lb \
>> +    ip_port_mappings:192.168.1.1=ls1p1:192.168.1.254
>> +
>> +ADD_NAMESPACES(ls1p1)
>> +ADD_VETH(ls1p1, ls1p1, br-int, "192.168.1.1/24", "00:00:00:01:01:01",
>> +         "192.168.1.254")
>> +
>> +wait_for_ports_up
>> +check ovn-nbctl --wait=hv sync
>> +
>> +dnl Start a backend server, the monitor should change status to "online".
>> +NETNS_DAEMONIZE([ls1p1], [nc -l -u 42], [nc1.pid])
>> +wait_row_count Service_Monitor 1 status=online
>> +
>> +dnl Kill the backend server, the monitor should change status to "offline".
>> +check kill -9 `cat nc1.pid`
>> +wait_row_count Service_Monitor 1 status=offline
>> +
>> +OVN_CLEANUP_CONTROLLER([hv1])
>> +OVN_CLEANUP_NORTHD
>> +
>> +as
>> +OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
>> +/connection dropped.*/d"])
>> +AT_CLEANUP
>> +])
>> --
>> 2.53.0
>>
>> _______________________________________________
>> dev mailing list
>> [email protected]
>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>>
> 
