On 4/18/2024 10:00 PM, Rahul Rameshbabu wrote:
On Thu, 18 Apr, 2024 01:24:57 -0400 Mateusz Polchlopek
<mateusz.polchlo...@intel.com> wrote:
From: Jacob Keller <jacob.e.kel...@intel.com>
Using VIRTCHNL_VF_OFFLOAD_FLEX_DESC, the iAVF driver is capable of
negotiating to enable the advanced flexible descriptor layout. Add the
flexible NIC layout (RXDID=2) as a member of the Rx descriptor union.
Also add bit position definitions for the status and error indications
that are needed.
The iavf_clean_rx_irq function needs to extract a few fields from the Rx
descriptor, including the size, rx_ptype, and vlan_tag.
Move the extraction to a separate function that decodes the fields into
a structure. This will reduce the burden for handling multiple
descriptor types by keeping the relevant extraction logic in one place.
To support handling an additional descriptor format with minimal code
duplication, refactor Rx checksum handling so that the general logic
is separated from the bit calculations. Introduce an iavf_rx_desc_decoded
structure which holds the relevant bits decoded from the Rx descriptor.
This will enable implementing flexible descriptor handling without
duplicating the general logic.
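The decoded-fields approach described above can be sketched in miniature as follows. This is an illustrative userspace stand-in, not the driver code: the type and function names only mirror the patch in spirit, and the bodies are stubs.

```c
#include <stdint.h>

/* Illustrative stand-ins for the patch's types; values are hypothetical. */
enum rxdid { RXDID_1_32B_BASE = 1, RXDID_2_FLEX_SQ_NIC = 2 };

struct rx_extracted {           /* mirrors iavf_rx_extracted in spirit */
	uint16_t size;
	uint16_t vlan_tag;
	uint16_t rx_ptype;
};

struct rx_ring { enum rxdid rxdid; };

/* One extraction routine per descriptor layout keeps the layout-specific
 * bit twiddling in one place ... */
static void extract_legacy(struct rx_extracted *f) { f->rx_ptype = 0; }
static void extract_flex(struct rx_extracted *f)   { f->rx_ptype = 1; }

/* ... and the hot path picks the decoder per descriptor based on the
 * RXDID negotiated via VIRTCHNL_VF_OFFLOAD_FLEX_DESC. */
static void extract_fields(const struct rx_ring *ring,
			   struct rx_extracted *f)
{
	if (ring->rxdid == RXDID_1_32B_BASE)
		extract_legacy(f);
	else
		extract_flex(f);
}

/* Helper for exercising the dispatch in isolation. */
static int decoded_ptype(enum rxdid id)
{
	struct rx_ring ring = { .rxdid = id };
	struct rx_extracted f = { 0 };

	extract_fields(&ring, &f);
	return f.rx_ptype;
}
```

The generic handling (checksum, hash, skb construction) then operates only on struct rx_extracted and never touches raw descriptor bits.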
Introduce iavf_extract_flex_rx_fields, iavf_flex_rx_hash, and
iavf_flex_rx_csum functions which operate on the flexible NIC descriptor
format instead of the legacy 32-byte format. Based on the negotiated
RXDID, select the correct function for processing the Rx descriptors.
With this change, the Rx hot path should be functional when using either
the default legacy 32-byte format or the flexible NIC layout.
Modify the Rx hot path to add support for the flexible descriptor
format and request enabling Rx timestamps on all queues.
As in ice, make sure we bump the checksum level if the hardware detected
a packet type which could have an outer checksum. This is important
because hardware only verifies the inner checksum.
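The checksum-level bump mentioned above can be sketched like this. The struct and field names here are simplified stand-ins (a pared-down skb and a hypothetical packet-type flag), not the actual iavf or ice definitions.

```c
/* Hypothetical stand-ins: a packet-type summary and a pared-down skb. */
struct ptype_info {
	int outer_csum;	/* packet type can carry a verifiable outer checksum */
};

struct fake_skb {
	int ip_summed;	/* CHECKSUM_UNNECESSARY-style status */
	int csum_level;	/* number of checksums hardware validated minus one */
};

#define CHECKSUM_UNNECESSARY 1

/* Hardware only verifies the inner checksum of a tunneled packet, so when
 * the packet type could also carry an outer checksum, report one extra
 * validated level (mirroring what the ice driver does, per the commit
 * message above). */
static void rx_csum_level(struct fake_skb *skb, const struct ptype_info *pt)
{
	skb->ip_summed = CHECKSUM_UNNECESSARY;
	skb->csum_level = pt->outer_csum ? 1 : 0;
}

/* Helper for exercising the level calculation in isolation. */
static int reported_level(int outer)
{
	struct ptype_info pt = { .outer_csum = outer };
	struct fake_skb skb = { 0 };

	rx_csum_level(&skb, &pt);
	return skb.csum_level;
}
```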
Reviewed-by: Wojciech Drewek <wojciech.dre...@intel.com>
Signed-off-by: Jacob Keller <jacob.e.kel...@intel.com>
Co-developed-by: Mateusz Polchlopek <mateusz.polchlo...@intel.com>
Signed-off-by: Mateusz Polchlopek <mateusz.polchlo...@intel.com>
---
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 354 +++++++++++++-----
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 8 +
drivers/net/ethernet/intel/iavf/iavf_type.h | 149 ++++++--
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 5 +
4 files changed, 390 insertions(+), 126 deletions(-)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
<snip>
+/**
+ * iavf_flex_rx_hash - set the hash value in the skb
+ * @ring: descriptor ring
+ * @rx_desc: specific descriptor
+ * @skb: skb currently being received and modified
+ * @rx_ptype: Rx packet type
+ *
+ * This function only operates on the VIRTCHNL_RXDID_2_FLEX_SQ_NIC flexible
+ * descriptor writeback format.
+ **/
+static void iavf_flex_rx_hash(struct iavf_ring *ring,
+ union iavf_rx_desc *rx_desc,
+ struct sk_buff *skb, u16 rx_ptype)
+{
+ __le16 status0;
+
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
+ return;
+
+ status0 = rx_desc->flex_wb.status_error0;
Any reason to not convert rx_desc->flex_wb.status_error0 to
CPU-endianness for the bit check?
+ if (status0 & cpu_to_le16(IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_M)) {
+ u32 hash = le32_to_cpu(rx_desc->flex_wb.rss_hash);
+
+ skb_set_hash(skb, hash, iavf_ptype_to_htype(rx_ptype));
+ }
+}
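For what it's worth, both idioms test the same bit; keeping the descriptor field in little-endian and converting the constant mask instead lets cpu_to_le16() fold at compile time, so big-endian hosts avoid a byte swap per descriptor. A standalone sketch with minimal stand-ins for the kernel's byte-order helpers (the bit position is hypothetical, not the real IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_M value):

```c
#include <stdint.h>

/* Minimal userspace stand-ins for the kernel's 16-bit byte-order helpers. */
static uint16_t cpu_to_le16(uint16_t v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return (uint16_t)((v << 8) | (v >> 8));
#else
	return v;
#endif
}

static uint16_t le16_to_cpu(uint16_t v)
{
	return cpu_to_le16(v);	/* the 16-bit byte swap is its own inverse */
}

#define RSS_VALID_M (1u << 12)	/* hypothetical bit position */

/* Idiom 1 (as in the patch): convert the constant mask to the descriptor's
 * little-endian byte order; folds at compile time, so the per-packet cost
 * is a plain AND. */
static int rss_valid_mask_swap(uint16_t status0_le)
{
	return (status0_le & cpu_to_le16(RSS_VALID_M)) != 0;
}

/* Idiom 2 (as the review suggests): convert the descriptor field to CPU
 * order; one byte swap per descriptor on big-endian hosts. */
static int rss_valid_field_swap(uint16_t status0_le)
{
	return (le16_to_cpu(status0_le) & RSS_VALID_M) != 0;
}
```

Either way the result is identical; the choice is mostly a style/perf trade-off.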
<snip>
+/**
+ * iavf_extract_flex_rx_fields - Extract fields from the Rx descriptor
+ * @rx_ring: rx descriptor ring
+ * @rx_desc: the descriptor to process
+ * @fields: storage for extracted values
+ *
+ * Decode the Rx descriptor and extract relevant information including the
+ * size, VLAN tag, Rx packet type, end of packet field and RXE field value.
+ *
+ * This function only operates on the VIRTCHNL_RXDID_2_FLEX_SQ_NIC flexible
+ * descriptor writeback format.
+ */
+static void iavf_extract_flex_rx_fields(struct iavf_ring *rx_ring,
+ union iavf_rx_desc *rx_desc,
+ struct iavf_rx_extracted *fields)
+{
+ __le16 status0, status1, flexi_flags0;
+
+ fields->size = FIELD_GET(IAVF_RX_FLEX_DESC_PKT_LEN_M,
+ le16_to_cpu(rx_desc->flex_wb.pkt_len));
+
+ flexi_flags0 = rx_desc->flex_wb.ptype_flexi_flags0;
+
+ fields->rx_ptype = FIELD_GET(IAVF_RX_FLEX_DESC_PTYPE_M,
+ le16_to_cpu(flexi_flags0));
+
+ status0 = rx_desc->flex_wb.status_error0;
+ if (status0 & cpu_to_le16(IAVF_RX_FLEX_DESC_STATUS0_L2TAG1P_M) &&
+ rx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1)
+ fields->vlan_tag = le16_to_cpu(rx_desc->flex_wb.l2tag1);
+
+ status1 = rx_desc->flex_wb.status_error1;
Similar comment to previous in this function.
Thanks for the comment, I will take a look at that and will probably
change it in the next version.
+ if (status1 & cpu_to_le16(IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_M) &&
+ rx_ring->flags & IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2)
+ fields->vlan_tag = le16_to_cpu(rx_desc->flex_wb.l2tag2_2nd);
+
+ fields->end_of_packet = FIELD_GET(IAVF_RX_FLEX_DESC_STATUS_ERR0_EOP_BIT,
+ le16_to_cpu(status0));
+ fields->rxe = FIELD_GET(IAVF_RX_FLEX_DESC_STATUS_ERR0_RXE_BIT,
+ le16_to_cpu(status0));
+}
+
+static void iavf_extract_rx_fields(struct iavf_ring *rx_ring,
+ union iavf_rx_desc *rx_desc,
+ struct iavf_rx_extracted *fields)
+{
+ if (rx_ring->rxdid == VIRTCHNL_RXDID_1_32B_BASE)
+ iavf_extract_legacy_rx_fields(rx_ring, rx_desc, fields);
+ else
+ iavf_extract_flex_rx_fields(rx_ring, rx_desc, fields);
+}
+
<snip>
--
Thanks,
Rahul Rameshbabu