Re: [ovs-dev] [PATCH RESEND v11] netdev-dpdk: Add custom rx-steering configuration.

2023-06-19 Thread Robin Jarry
Kevin Traynor, Jun 15, 2023 at 12:53:
> >      new_n_rxq = MAX(smap_get_int(args, "n_rxq", NR_QUEUE), 1);
> > +    if (dev->requested_rx_steer_flags != 0) {
> > +        new_n_rxq += 1;
>
> If rx-steering is set for the port and the flow has previously not been
> able to be offloaded, the dev->requested_n_rxq will always be different
> from the netdev->n_rxq. That means this device will do a reconfigure
> anytime there is a config change on any device.
>
> e.g. If rx-steering is enabled on device A and device A cannot offload
> flows (this is acceptable), any config change to device B will result in
> a reconfigure of device A, not based on flags but based on the number of
> rxqs.
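
For reference, the check that triggers this behaviour looks roughly like
the following (a simplified sketch of netdev_dpdk_reconfigure(), not the
exact code):

    static int
    netdev_dpdk_reconfigure(struct netdev *netdev)
    {
        struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);

        /* Simplified: reconfiguration is skipped only when every
         * requested_* field matches the currently applied value. If
         * requested_n_rxq includes the extra steering queue but the
         * offload failed, this comparison never becomes true again. */
        if (netdev->n_rxq == dev->requested_n_rxq
            && netdev->n_txq == dev->requested_n_txq
            && dev->mtu == dev->requested_mtu) {
            return 0; /* Nothing to do. */
        }

        /* ... stop the port, apply the requested values, restart ... */
        return 0;
    }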

This is why I had added an intermediate dev->user_n_rxq field in v10.
I tried to get rid of it as you suggested but it causes this issue.

Also, I think there's another side effect that is actually worse: if the
rte_flow offload fails, dev->requested_n_rxq is decremented by one. If
netdev_dpdk_reconfigure() is then called without reinitializing
dev->requested_n_rxq (an MTU change or some other event), the value will
eventually drop down to zero and then to negative values. For example,
with n_rxq=2 and rx-steering enabled, requested_n_rxq starts at 3; after
a failed offload it becomes 2, and every subsequent reconfigure that
fails the offload again drops it one lower.

I'll reintroduce user_n_rxq as an intermediate value that cannot be
modified by anything other than the user, so that these problems cannot
occur.
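
Something along these lines (a rough sketch of the v10 approach, not the
final code): the user's value is stored once at configuration time and
requested_n_rxq is recomputed from it on every reconfigure, so repeated
offload failures cannot make it drift.

    /* In netdev_dpdk_set_config(): remember the user's value only. */
    dev->user_n_rxq = MAX(smap_get_int(args, "n_rxq", NR_QUEUE), 1);

    /* In netdev_dpdk_reconfigure(): derive the effective queue count
     * from the pristine user value every time. A failed offload can
     * disable steering for this pass but cannot corrupt user_n_rxq. */
    dev->requested_n_rxq = dev->user_n_rxq;
    if (dev->requested_rx_steer_flags != 0) {
        dev->requested_n_rxq += 1; /* Extra queue for control packets. */
    }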


[ovs-dev] [PATCH RESEND v11] netdev-dpdk: Add custom rx-steering configuration.

2023-06-01 Thread Robin Jarry
Some control protocols are used to maintain link status between
forwarding engines (e.g. LACP). When the system is not sized properly,
the PMD threads may not be able to process all incoming traffic from the
configured Rx queues. When a signaling packet of such protocols is
dropped, it can cause link flapping, worsening the situation.

Use the RTE flow API to redirect these protocols into a dedicated Rx
queue. The assumption is made that the ratio between control protocol
traffic and user data traffic is very low and thus this dedicated Rx
queue will never get full. Re-program the RSS redirection table to only
use the other Rx queues.
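
To illustrate the mechanism (a minimal standalone sketch of the rte_flow
calls involved, not the patch itself; the helper name and error handling
are hypothetical):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Hypothetical helper: steer LACP frames (ethertype 0x8809) received
     * on 'port_id' into 'steer_queue'. Validation is done first so that
     * the feature can be cleanly disabled on NICs without support. */
    static struct rte_flow *
    steer_lacp_to_queue(uint16_t port_id, uint16_t steer_queue)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_eth eth_spec = {
            .type = RTE_BE16(RTE_ETHER_TYPE_SLOW), /* LACP/slow protocols. */
        };
        struct rte_flow_item_eth eth_mask = {
            .type = RTE_BE16(0xffff), /* Match on the ethertype only. */
        };
        struct rte_flow_item items[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH,
              .spec = &eth_spec, .mask = &eth_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = steer_queue };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        if (rte_flow_validate(port_id, &attr, items, actions, &error)) {
            return NULL; /* Not supported: caller disables rx-steering. */
        }
        return rte_flow_create(port_id, &attr, items, actions, &error);
    }

The RSS redirection table is then reprogrammed (via
rte_eth_dev_rss_reta_update()) so that regular traffic only hashes onto
the remaining queues.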

The additional Rx queue will be assigned a PMD core like any other Rx
queue. Polling that extra queue may introduce increased latency and
a slight performance penalty, for the benefit of preventing link flapping.

This feature must be enabled per port on specific protocols via the
rx-steering option. This option takes "rss" followed by a "+" separated
list of protocol names. It is only supported on ethernet ports. This
feature is experimental.

If the user has already configured multiple Rx queues on the port, an
additional one will be allocated for control plane packets. If the
hardware cannot satisfy the requested number of Rx queues, the last Rx
queue will be assigned for the control plane. If only one Rx queue is
available, the rx-steering feature will be disabled. If the hardware
does not support the RTE flow matchers/actions, the feature will be
disabled.
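
The fallback logic reads roughly as follows (an illustrative sketch
under assumed names, not the actual patch code):

    #include <stdbool.h>

    /* 'user_n_rxq' is what the user configured, 'hw_max_rxq' is what
     * the hardware reports. All names here are illustrative. */
    static void
    choose_rx_steering(int user_n_rxq, int hw_max_rxq,
                       int *n_rxq, int *steer_rxq, bool *enabled)
    {
        int wanted = user_n_rxq + 1;   /* Extra control plane queue. */

        *n_rxq = wanted <= hw_max_rxq ? wanted : hw_max_rxq;
        if (*n_rxq > 1) {
            *steer_rxq = *n_rxq - 1;   /* Last queue gets control packets. */
            *enabled = true;
        } else {
            *steer_rxq = -1;           /* Single queue: feature disabled. */
            *enabled = false;
        }
    }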

It cannot be enabled when other_config:hw-offload=true as it may
conflict with the offloaded RTE flows. Similarly, if hw-offload is
enabled, custom rx-steering will be forcibly disabled on all ports.

Example use:

 ovs-vsctl add-bond br-phy bond0 phy0 phy1 -- \
   set interface phy0 type=dpdk options:dpdk-devargs=:ca:00.0 -- \
   set interface phy0 options:rx-steering=rss+lacp -- \
   set interface phy1 type=dpdk options:dpdk-devargs=:ca:00.1 -- \
   set interface phy1 options:rx-steering=rss+lacp

As a starting point, only one protocol is supported: LACP. Other
protocols can be added in the future. NIC compatibility should be
checked.

To validate that this works as intended, I used a traffic generator to
generate random traffic slightly above the machine capacity at line rate
on a two-port bond interface. OVS is configured to receive traffic on
two VLANs and pop/push them in a br-int bridge based on tags set on
patch ports.

   +--+
   | DUT  |
   |++|
   ||   br-int   || 
in_port=patch10,actions=mod_dl_src:$patch11,mod_dl_dst:$tgen1,output:patch11
   |||| 
in_port=patch11,actions=mod_dl_src:$patch10,mod_dl_dst:$tgen0,output:patch10
   || patch10patch11 ||
   |+---|---|+|
   ||   | |
   |+---|---|+|
   || patch00patch01 ||
   ||  tag:10tag:20  ||
   ||||
   ||   br-phy   || default flow, action=NORMAL
   ||||
   ||   bond0|| balance-slb, lacp=passive, lacp-time=fast
   ||phy0   phy1 ||
   |+--|-|---+|
   +---|-|+
   | |
   +---|-|+
   | port0  port1 | balance L3/L4, lacp=active, lacp-time=fast
   | lag  | mode trunk VLANs 10, 20
   |  |
   |switch|
   |  |
   |  vlan 10vlan 20  |  mode access
   |   port2  port3   |
   +-|--|-+
 |  |
   +-|--|-+
   |   tgen0  tgen1   |  Random traffic that is properly balanced
   |  |  across the bond ports in both directions.
   |  traffic generator   |
   +--+

Without rx-steering, the bond0 links randomly switch to "defaulted" when
one of the LACP packets sent by the switch is dropped because the Rx
queues are full and the PMD threads did not process them fast enough.
When that happens, all traffic must go through a single link, which
causes traffic above line rate to be dropped.

  ~# ovs-appctl lacp/show-stats bond0
   ---- bond0 statistics ----
  member: phy0:
    TX PDUs: 347246
    RX PDUs: 14865
    RX Bad PDUs: 0
    RX Marker Request PDUs: 0
    Link Expired: 168
    Link Defaulted: 0
    Carrier Status Changed: 0
  member: phy1:
    TX PDUs: 347245
    RX PDUs: 14919
    RX Bad PDUs: 0
    RX Marker Request PDUs: 0
    Link Expired: 147
    Link Defaulted: 1
    Carrier Status Changed: 0

When rx-steering is enabled, no LACP packet is dropped and the bond
links remain enabled at all times, maximizing the throughput. Neither
the "Link Expired" nor the "Link Defaulted" counters are incremented
anymore.

This feature may be considered as "QoS". However, it does not work by
limiting the rate of traffic explicitly. It only guarantees that some
protocols have a lower chance of being dropped because the PMD cores
cannot keep up with regular traffic.
