This patch set implements the hairpin feature.
The hairpin feature was introduced in RFC [1].

The hairpin feature (another name could be forward) acts as a "bump on the
wire": a packet received from the wire can be modified using offloaded
actions and then sent back to the wire without application intervention,
which saves CPU cycles.

Hairpin is the inverse of loopback, in which the application sends a
packet that is then received again by the application without ever being
sent to the wire.

Hairpin can be used by a number of different NFV applications, for
example load balancers, gateways and so on.

As the description suggests, hairpin is basically an Rx queue connected
to a Tx queue.

During the design phase I considered two ways to implement this feature:
the first is adding a new rte_flow action, and the second is creating a
special kind of queue.

The advantages of the queue approach:
1. More control for the application, e.g. over the queue depth (the
amount of memory that should be used).
2. Enables QoS. QoS is normally a parameter of a queue, so with this
approach it is easy to integrate with such a system.
3. Native integration with the rte_flow API. Simply setting the target
of a queue/RSS action to a hairpin queue routes the traffic to that
queue (see the flow sketch further below).
4. Enables queue offloading.

Each hairpin Rxq can be connected to one or more Txqs, which may belong
to different ports if the PMD supports it. The same goes the other way:
each hairpin Txq can be connected to one or more Rxqs. This is the
reason that both the Rxq setup and the Txq setup take the hairpin
configuration structure.
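
For illustration, a rough sketch of how I expect the configuration
structure to be filled (field names are from patch 2; the port/queue
indexes are made-up placeholders), here for one Rx queue peered with two
Tx queues on two different ports:

#include <rte_ethdev.h>

/* Hairpin conf for one Rx queue peered with two Tx queues living on
 * two different ports (only valid if the PMD supports it).
 */
static const struct rte_eth_hairpin_conf rxq_hairpin_conf = {
	.peer_count = 2,
	.peers = {
		{ .port = 0, .queue = 3 },  /* Tx queue 3 on port 0 */
		{ .port = 1, .queue = 2 },  /* Tx queue 2 on port 1 */
	},
};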

From the PMD perspective the number of Rxqs/Txqs is the total of
standard queues plus hairpin queues.

To configure a hairpin queue the user should call
rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
of the normal queue setup functions.
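
A minimal setup sketch, assuming one standard queue plus one hairpin
queue per direction on the same port (descriptor counts and indexes are
placeholders):

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
setup_port_with_hairpin(uint16_t port_id,
			const struct rte_eth_conf *dev_conf,
			struct rte_mempool *mb_pool)
{
	/* Rx hairpin queue 1 is peered with Tx hairpin queue 1 on the
	 * same port, so the same conf can be passed to both setups.
	 */
	struct rte_eth_hairpin_conf hairpin_conf = {
		.peer_count = 1,
		.peers[0] = { .port = port_id, .queue = 1 },
	};
	int ret;

	/* The queue totals given to configure include the hairpin
	 * queues: 1 standard + 1 hairpin per direction.
	 */
	ret = rte_eth_dev_configure(port_id, 2, 2, dev_conf);
	if (ret != 0)
		return ret;
	/* Standard queues keep using the normal setup functions. */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL, mb_pool);
	if (ret != 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL);
	if (ret != 0)
		return ret;
	/* Hairpin queues use the new dedicated setup functions. */
	ret = rte_eth_rx_hairpin_queue_setup(port_id, 1, 512,
					     &hairpin_conf);
	if (ret != 0)
		return ret;
	return rte_eth_tx_hairpin_queue_setup(port_id, 1, 512,
					      &hairpin_conf);
}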

The hairpin queues are not part of the normal RSS functionality.

To use the queues the user simply creates a flow with a queue/RSS
action that targets hairpin queues.
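
For example, a sketch of such a flow, steering all ingress traffic on a
port to hairpin queue 1 (the pattern and queue index are just
placeholders):

#include <rte_flow.h>

/* Steer every ingress packet to hairpin Rx queue 1; the PMD then
 * forwards it to the peered Tx queue without application involvement.
 */
static struct rte_flow *
create_hairpin_flow(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}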
The reasons for adding two new functions for hairpin queue setup are:
1. Avoid an API break.
2. Avoid extra and unused parameters.



[1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-or...@mellanox.com/

Cc: wenzhuo...@intel.com
Cc: bernard.iremon...@intel.com
Cc: tho...@monjalon.net
Cc: ferruh.yi...@intel.com
Cc: arybche...@solarflare.com
Cc: viachesl...@mellanox.com

------
V7:
 - all changes are in patch 2: ethdev: add support for hairpin queue
   - Move is_rx/tx_hairpin_queue to ethdev.c and ethdev.h, and remove
     the inline.
   - change checks for max number of hairpin queues.
   - modify log messages.

V6:
 - add missing include in nfb driver.
 - change the comparison in rte_eth_dev_is_tx_hairpin_queue /
   rte_eth_dev_is_rx_hairpin_queue to a boolean operator.
 - split the doc patch to the relevant patches.

V5:
 - modify log messages to be more distinct.
 - keep each log message on a single line even if longer than 80 chars.
 - change peer_n to peer_count.
 - add functions to query whether a queue is a hairpin queue.

V4:
 - update according to comments from ML.

V3:
 - update according to comments from ML.

V2:
 - update according to comments from ML.




Ori Kam (14):
  ethdev: move queue state defines to private file
  ethdev: add support for hairpin queue
  net/mlx5: query hca hairpin capabilities
  net/mlx5: support Rx hairpin queues
  net/mlx5: prepare txq to work with different types
  net/mlx5: support Tx hairpin queues
  net/mlx5: add get hairpin capabilities
  app/testpmd: add hairpin support
  net/mlx5: add hairpin binding function
  net/mlx5: add support for hairpin hrxq
  net/mlx5: add internal tag item and action
  net/mlx5: add id generation function
  net/mlx5: add default flows for hairpin
  net/mlx5: split hairpin flows

 app/test-pmd/parameters.c                |  28 +++
 app/test-pmd/testpmd.c                   | 109 ++++++++-
 app/test-pmd/testpmd.h                   |   3 +
 doc/guides/rel_notes/release_19_11.rst   |   6 +
 drivers/net/mlx5/mlx5.c                  | 170 ++++++++++++-
 drivers/net/mlx5/mlx5.h                  |  69 +++++-
 drivers/net/mlx5/mlx5_devx_cmds.c        | 194 +++++++++++++++
 drivers/net/mlx5/mlx5_ethdev.c           | 129 ++++++++--
 drivers/net/mlx5/mlx5_flow.c             | 393 ++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow.h             |  67 +++++-
 drivers/net/mlx5/mlx5_flow_dv.c          | 231 +++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_verbs.c       |  11 +-
 drivers/net/mlx5/mlx5_prm.h              | 127 +++++++++-
 drivers/net/mlx5/mlx5_rss.c              |   1 +
 drivers/net/mlx5/mlx5_rxq.c              | 318 ++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_rxtx.c             |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h             |  68 +++++-
 drivers/net/mlx5/mlx5_trigger.c          | 140 ++++++++++-
 drivers/net/mlx5/mlx5_txq.c              | 294 +++++++++++++++++++----
 drivers/net/nfb/nfb_tx.h                 |   1 +
 lib/librte_ethdev/rte_ethdev.c           | 232 ++++++++++++++++++
 lib/librte_ethdev/rte_ethdev.h           | 177 +++++++++++++-
 lib/librte_ethdev/rte_ethdev_core.h      |  91 ++++++-
 lib/librte_ethdev/rte_ethdev_driver.h    |   7 +
 lib/librte_ethdev/rte_ethdev_version.map |   3 +
 25 files changed, 2704 insertions(+), 167 deletions(-)

-- 
1.8.3.1
