Hi Ori,

Sorry for the delay in replying.

On 12/17/23 13:07, Ori Kam wrote:
Hi Andrew,

-----Original Message-----
From: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
Sent: Saturday, December 16, 2023 11:19 AM

On 12/10/23 11:30, Ori Kam wrote:
When offloading rules with the encap action, the HW may calculate entropy
based on the encap protocol. Each HW can implement a different algorithm.

When the application receives packets that should have been encapsulated
by the HW but have not reached that stage yet (for example, TCP SYN
packets), and the encap is therefore done in SW, the application must
apply the same entropy calculation algorithm.

Using the new API, the application can request the PMD to calculate the
value as if the packet had passed through the HW.

I'm still wondering why the API is required. Does the app install the
encap rule when the first packet is processed? The rule can contain all
outer headers (as they are calculated in SW anyway) and the HW does not
need to calculate anything.

Yes, the application installs a rule based on the first packet.
As a result, all the rest of the packets are encapsulated by the HW.
This API allows the application to use the same value as the HW will use
when encapsulating the packet.
In other words, we have 2 paths:
Path 1: SW, for the first packet
Path 2: HW, for all the rest of the packets
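
For concreteness, a minimal sketch of what the SW path could look like
with the proposed API; build_inner_pattern() and sw_encap_vxlan() are
hypothetical application helpers, not DPDK functions:

#include <rte_flow.h>
#include <rte_mbuf.h>

static int
handle_first_packet(uint16_t port_id, struct rte_mbuf *m)
{
	struct rte_flow_item pattern[8];
	struct rte_flow_error error;
	uint8_t entropy[2]; /* 16 bits: VXLAN stores the hash in the UDP source port */
	int ret;

	/* The pattern holds the headers that become inner headers after encap. */
	build_inner_pattern(m, pattern); /* hypothetical helper */

	ret = rte_flow_calc_encap_entropy(port_id, pattern,
					  RTE_FLOW_ENTROPY_DEST_UDP_SRC_PORT,
					  entropy, &error);
	if (ret != 0)
		return ret;

	/* Entropy is returned in network order, entropy[0] is the MSB. */
	sw_encap_vxlan(m, (uint16_t)((entropy[0] << 8) | entropy[1])); /* hypothetical helper */
	return 0;
}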

I get it, but I still don't understand why the HW should calculate
anything. Can it simply use the value provided by SW in the encap rule?
If so, the calculation becomes not HW-specific and does not require
a driver callback.
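
For illustration, this alternative could be sketched with the existing
RAW_ENCAP action: SW bakes the value it computed into the outer header
bytes, so the HW inserts them verbatim. ENCAP_LEN and the buffer layout
are application-defined here:

/* SW already builds the full outer header for the first packet, so the
 * encap rule can carry the ready-made value. */
uint8_t outer[ENCAP_LEN]; /* Ethernet/IPv4/UDP/VXLAN headers with the
			   * UDP source port already set by SW */
struct rte_flow_action_raw_encap encap = {
	.data = outer,
	.size = sizeof(outer),
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};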


Signed-off-by: Ori Kam <or...@nvidia.com>
---
   lib/ethdev/rte_flow.h | 49 +++++++++++++++++++++++++++++++++++++++++++
   1 file changed, 49 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index affdc8121b..3989b089dd 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -6753,6 +6753,55 @@ rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table
 			 const struct rte_flow_item pattern[], uint8_t pattern_template_index,
 			 uint32_t *hash, struct rte_flow_error *error);

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destination field type for the entropy calculation.
+ *
+ * @see function rte_flow_calc_encap_entropy
+ */
+enum rte_flow_entropy_dest {
+       /* Calculate entropy placed in UDP source port field. */
+       RTE_FLOW_ENTROPY_DEST_UDP_SRC_PORT,

Since source and destination are used together but for different
purposes, it is very hard to follow which is used for which purpose.
I'd avoid the term "dest" in the enum naming. Maybe the "flow" already
present in the name is good enough to highlight that it is per-flow?
rte_flow_encap_hash? rte_flow_encap_field_hash?

I'm open to any suggestions. This enum is supposed to show to which
field the HW inserts the calculated value. This field is defined by the
encapsulation protocol. For example, in VXLAN the hash is stored in the
source port, while in NVGRE it is stored in the flow_id field. The
destination field also impacts the size.

What do you think about:
RTE_FLOW_ENCAP_HASH_SRC_PORT?

Sounds better.

What if we change the destination to an enum that holds the destination
tunnel type:
RTE_FLOW_TUNNEL_TYPE_VXLAN,
RTE_FLOW_TUNNEL_TYPE_NVGRE

It could be an option as well, but it binds the tunnel type to the size
of the hash to be calculated. It looks OK right now, but may be wrong in
the future.
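
For concreteness, the tunnel-type variant might look like this sketch
(the enum type name is hypothetical); each value then implies both the
destination field and the hash width:

enum rte_flow_encap_hash_tunnel {
	RTE_FLOW_TUNNEL_TYPE_VXLAN, /* hash in UDP source port, 16 bits */
	RTE_FLOW_TUNNEL_TYPE_NVGRE, /* hash in flow ID field, 8 bits */
};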

+       /* Calculate entropy placed in NVGRE flow ID field. */
+       RTE_FLOW_ENTROPY_DEST_NVGRE_FLOW_ID,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Calculate the entropy generated by the HW for a given pattern,
+ * when encapsulation flow action is executed.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern
+ *   The values to be used in the entropy calculation.

Is it the inner packet fields? Should all fields be used for hash
calculation? Maybe it is better to describe them as inner packet
headers? How does the app know which headers to extract and provide?

The fields are the outer fields that will become inner fields after the
encapsulation.
The fields depend on the HW (in the standard case, the 5-tuple), but the
application can/should set all the fields it got from the packet; in the
end, those are the fields that the HW will see.

OK I see.
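
As an illustration, the pattern for a plain TCP/IPv4 packet could be
built like this (field values are placeholders):

/* The outer headers of the received packet, which become the inner
 * headers after encapsulation: here a standard 5-tuple. */
struct rte_flow_item_ipv4 ipv4 = {
	.hdr.src_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
	.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 2)),
};
struct rte_flow_item_tcp tcp = {
	.hdr.src_port = RTE_BE16(1234),
	.hdr.dst_port = RTE_BE16(4321),
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4 },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};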


+ * @param[in] dest_field
+ *   Type of destination field for entropy calculation.
+ * @param[out] entropy
+ *   Used to return the calculated entropy. It will be written in network
+ *   order, so entropy[0] is the MSB.
+ *   The number of bytes is based on the destination field type.

The API contract is a bit unclear here. Maybe it is safer to provide the
buffer size explicitly?

The size of the field is defined by the destination field, which in turn
is defined by the protocol.

I don't think adding a size has any meaning when you know that the value
is going to be set in the source port, which has a size of 16 bits.
Based on the enum suggestion of tunnel type, I think it will also
explain the dest and size. What do you think?

I see, but IMHO it is still very unsafe, simply too error-prone, and
there is no way to catch it. IMHO a buffer size will not overload it too
much, but will make the API clearer and safer.
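
For clarity, the safer variant being suggested might look like the
following sketch (the size parameter name is hypothetical):

__rte_experimental
int
rte_flow_calc_encap_entropy(uint16_t port_id,
			    const struct rte_flow_item pattern[],
			    enum rte_flow_entropy_dest dest_field,
			    uint8_t *entropy, uint8_t entropy_size,
			    struct rte_flow_error *error);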



+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOTSUP) if underlying device does not support this functionality.
+ *   - (-EINVAL) if *pattern* doesn't hold enough information to calculate
+ *               the entropy or the dest is not supported.
+ */
+__rte_experimental
+int
+rte_flow_calc_encap_entropy(uint16_t port_id, const struct rte_flow_item pattern[],
+			    enum rte_flow_entropy_dest dest_field, uint8_t *entropy,
+			    struct rte_flow_error *error);
+
   #ifdef __cplusplus
   }
   #endif

