> Hi,
> 
> Here is a joint work from Mellanox and Napatech, to enable the flow hw
> offload with the DPDK generic flow interface (rte_flow).
> 
> The basic idea is to associate each flow with a mark id (a uint32_t
> number).
> Later, we retrieve the flow directly from the mark id, which bypasses
> some heavy CPU operations, including but not limited to miniflow
> extract, EMC lookup, dpcls lookup, etc.
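>
> As a rough illustration (the names and the bound below are made up for
> this sketch; the real patches work on the dpif-netdev internals), the
> association boils down to a simple mark-indexed lookup:
>
>     /* Illustrative sketch only: a direct mark -> flow table, keyed on
>      * the mark that the NIC writes into the received packet. */
>     #include <stdint.h>
>
>     #define MAX_MARKS 65536                    /* hypothetical bound */
>
>     struct dp_netdev_flow;                     /* opaque here */
>
>     static struct dp_netdev_flow *mark_to_flow[MAX_MARKS];
>
>     /* Offload path: remember which flow owns a given mark. */
>     static void
>     mark_to_flow_associate(uint32_t mark, struct dp_netdev_flow *flow)
>     {
>         if (mark < MAX_MARKS) {
>             mark_to_flow[mark] = flow;
>         }
>     }
>
>     /* Fast path: if the packet carries a valid mark, skip miniflow
>      * extract and EMC/dpcls lookup entirely. */
>     static struct dp_netdev_flow *
>     mark_to_flow_find(uint32_t mark)
>     {
>         return mark < MAX_MARKS ? mark_to_flow[mark] : NULL;
>     }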
> 
> The association is done with CMAP in patch 1. The CPU workload bypassing
> is done in patch 2. The flow offload is done in patch 3, which mainly does
> two things:
> 
> - translate the OVS match to DPDK rte_flow patterns
> - bind those patterns with an RSS + MARK action (see the sketch below)
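>
> A minimal sketch of such a translation for a fully wildcarded UDP flow
> (the function name is made up; the rte_flow_action_rss layout below
> follows newer DPDK releases, and the remaining RSS fields are left at
> their defaults for brevity):
>
>     #include <rte_flow.h>
>
>     static struct rte_flow *
>     offload_udp_flow(uint16_t port_id, uint32_t mark_id,
>                      const uint16_t *queues, uint32_t n_queues)
>     {
>         struct rte_flow_attr attr = { .ingress = 1 };
>
>         /* Pattern: any Ethernet/IPv4/UDP packet (no spec given, so
>          * every header field is wildcarded). */
>         struct rte_flow_item pattern[] = {
>             { .type = RTE_FLOW_ITEM_TYPE_ETH },
>             { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
>             { .type = RTE_FLOW_ITEM_TYPE_UDP },
>             { .type = RTE_FLOW_ITEM_TYPE_END },
>         };
>
>         /* Actions: tag matching packets with the mark id, then spread
>          * them over the given queues with RSS so multi-queue works. */
>         struct rte_flow_action_mark mark = { .id = mark_id };
>         struct rte_flow_action_rss rss = {
>             .queue_num = n_queues,
>             .queue = queues,
>         };
>         struct rte_flow_action actions[] = {
>             { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
>             { .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = &rss },
>             { .type = RTE_FLOW_ACTION_TYPE_END },
>         };
>
>         struct rte_flow_error error;
>         return rte_flow_create(port_id, &attr, pattern, actions, &error);
>     }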
> 
> Patch 5 moves the offload work into a separate thread, to keep the
> datapath as light as possible.
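>
> One way to picture this (a pthread-based sketch with made-up names; the
> actual patch uses OVS's own thread and list helpers) is a work list
> drained by a single dedicated offload thread:
>
>     #include <pthread.h>
>     #include <stdlib.h>
>
>     struct offload_request {
>         struct offload_request *next;
>         void *flow;               /* what to offload (opaque here) */
>         int op;                   /* add or delete */
>     };
>
>     static struct offload_request *requests;   /* simple LIFO list */
>     static pthread_mutex_t req_mutex = PTHREAD_MUTEX_INITIALIZER;
>     static pthread_cond_t req_cond = PTHREAD_COND_INITIALIZER;
>
>     /* Datapath side: a cheap enqueue that never touches the NIC. */
>     static void
>     offload_request_enqueue(struct offload_request *req)
>     {
>         pthread_mutex_lock(&req_mutex);
>         req->next = requests;
>         requests = req;
>         pthread_cond_signal(&req_cond);
>         pthread_mutex_unlock(&req_mutex);
>     }
>
>     /* Offload thread: drains the list and issues the potentially slow
>      * rte_flow_create()/rte_flow_destroy() calls. */
>     static void *
>     offload_thread_main(void *arg)
>     {
>         (void) arg;
>         for (;;) {
>             pthread_mutex_lock(&req_mutex);
>             while (!requests) {
>                 pthread_cond_wait(&req_cond, &req_mutex);
>             }
>             struct offload_request *req = requests;
>             requests = req->next;
>             pthread_mutex_unlock(&req_mutex);
>
>             /* ... perform the hardware offload for req->flow ... */
>             free(req);
>         }
>         return NULL;
>     }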
> 
> A PHY-PHY forwarding test with 1000 megaflows (udp,tp_src=1000-1999) and
> 1 million streams (tp_src=1000-1999, tp_dst=2000-2999) shows a
> performance boost of more than 260%.
> 
> Note that it's disabled by default; it can be enabled with:
> 
>     $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

Hi Yuanhan,

Thanks for working on this. I'll be looking at it over the coming week, so 
don't consider this a full review.

Just a general comment: at first glance there doesn't seem to be any 
documentation in the patchset?

I would expect a patch for the DPDK section of the OVS docs at minimum 
detailing:

(i) HWOL requirements (Any specific SW/drivers required or DPDK libs/PMDs etc. 
that have to be enabled for HWOL).
(ii) HWOL Usage (Enablement and disablement as shown above).
(iii) List of validated HW devices (As HWOL functionality will vary between 
devices it would be good to provide some known 'verified' cards to use with it. 
At this stage we don't have to list every card that it will work with, but 
I'd like to see a list of NIC models that this has been validated on to date).
(iv) Any Known limitations.

You'll also have to add an entry to the NEWS document to flag that HWOL has 
been introduced to OVS with DPDK.
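
For reference, something along these lines in NEWS would do (the wording
and placement are only a suggestion):

    - DPDK:
      * Add experimental support for hardware offload of flows via
        rte_flow (disabled by default; enable with
        other_config:hw-offload=true).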

As discussed previously on the community call, at this point the feature should 
be marked experimental in both NEWS and the documentation. This is just to 
signify that it is subject to change as more capabilities, such as full 
offload, are added over time.

Thanks
Ian

> 
> 
> v5: - fixed an issue where it took too long when doing flow add/remove
>       repeatedly.
>     - removed an unused mutex lock
>     - lowered most log messages to DBG level
>     - rebased on top of the latest code
> 
> v4: - use RSS action instead of QUEUE action with MARK
>     - make it work with multiple queues (see patch 1)
>     - rebased on top of latest code
> 
> v3: - The mark-to-flow association is done with an array instead of a
>       CMAP.
>     - Added a thread to do hw offload operations
>     - Removed macros completely
>     - dropped the patch to set FDIR_CONF, which was a workaround for
>       some Intel NICs.
>     - Added a debug patch to show all flow patterns we have created.
>     - Misc fixes
> 
> v2: - workaround the queue action issue
>     - fixed the tcp_flags being skipped issue, which also fixed the
>       build warnings
>     - fixed L2 patterns for Intel NICs
>     - converted some macros to functions
>     - no longer hardcode the max number of flows/actions
>     - rebased on top of the latest code
> 
> Thanks.
> 
>     --yliu
> 
> 
> ---
> Finn Christensen (1):
>   netdev-dpdk: implement flow offload with rte flow
> 
> Yuanhan Liu (4):
>   dpif-netdev: associate flow with a mark id
>   dpif-netdev: retrieve flow directly from the flow mark
>   netdev-dpdk: add debug for rte flow patterns
>   dpif-netdev: do hw flow offload in a thread
> 
>  lib/dp-packet.h   |  13 +
>  lib/dpif-netdev.c | 473 ++++++++++++++++++++++++++++++++++-
>  lib/flow.c        | 155 +++++++++---
>  lib/flow.h        |   1 +
>  lib/netdev-dpdk.c | 732 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  lib/netdev.h      |   6 +
>  6 files changed, 1341 insertions(+), 39 deletions(-)
> 
> --
> 2.7.4

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev