On 17/12/15 08:54, Savolainen, Petri (Nokia - FI/Espoo) wrote:


-----Original Message-----
From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
Sent: Wednesday, December 16, 2015 5:48 PM
To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [API-NEXT PATCH v5 2/7] api: pktio: added multiple
pktio input queues

Hi,

On 16/12/15 12:56, Savolainen, Petri (Nokia - FI/Espoo) wrote:
+typedef struct odp_pktio_input_queue_param_t {
+       /** Single thread per queue. Enable performance optimization when each
+         * queue has only single user.
+         * 0: Queue is multi-thread safe
+         * 1: Queue is used by single thread only */
+       odp_bool_t single_user;
+
+       /** Enable flow hashing
+         * 0: Do not hash flows
+         * 1: Hash flows to input queues */
+       odp_bool_t hash_enable;
+
+       /** Protocol field selection for hashing. Multiple protocols can be
+         * selected. */
+       odp_pktin_hash_proto_t hash_proto;
+
+       /** Number of input queues to be created. More than one input queue
+         * require input hashing or classifier setup. Hash_proto is ignored
+         * when hash_enable is zero or num_queues is one. This value must be

OVS uses the hash even with one queue; it's useful for fast lookup in
flow tables. Is there any reason to automatically disable it?


This describes input queue hashing (hashing of flows into queues).

Let me reword my concerns:
- What happens if num_queues > 1 and hash_enable = 0? What do we know
about the distribution of packets between queues?
Not specified yet. In practice you need to use the classifier, or accept
that ODP does not maintain packet order.



- If num_queues = 1, hash_proto is ignored (no matter what hash_enable
is), so how do we set up hashing? As I said, we still need it.
As said, this controls hashing between N queues. No hashing is needed
with a single queue.
It is needed. I think the root of our misunderstanding is that your use
of "hashing" also covers "distributing packets between queues", while
mine only covers "generating the hash". DPDK also separates these
concerns, especially because hashing is not the only way of deciding
where a packet goes; the recently discussed Flow Director (which extends
RSS) is also a valid way.


There are many ways to distribute packets, but this API controls hashing for 
distribution of packets. The user defines the fields used for hashing 
(odp_pktin_hash_proto_t), but not e.g. the algorithm. With this version of the 
API, the user cannot pin flows, and the API does not specify how the algorithm 
works (whether it favors some cores, etc.).


Input flow hashing is a pretty standard term - e.g. ethtool calls it "rx flow 
hash" (get/set rx flow hash configuration).


In the context of input queue configuration (odp_pktio_input_queue_param_t), I 
think the spec is pretty clear.

        /** Enable flow hashing
          * 0: Do not hash flows
          * 1: Hash flows to input queues */
        odp_bool_t hash_enable;


Hash vs. do not hash flows to input queues - it does not say anything about 
which hash value is stored (or not stored) in the packet.

We already have a flow hash API to query the computed hash; this wording suggests that you are configuring which hash value is generated for it. That would be my first guess at least, and I think it's the sensible interpretation. I don't think there is any platform where you can ask the hardware for two different hashes of the same packet, based on different fields.

If I adapted ODP-OVS to this, it would lose performance when only one queue is used. OVS would still use the flow hash value, but the spec says:

"Hash_proto is ignored when hash_enable is zero or num_queues is one"

As I said, this wording implies that odp_packet_has_flow_hash() should return 0 when num_queues == 1, since hash_proto cannot be used - which essentially means the platform should not compute the hash at all.




This API can be extended later with additional API spec and related flags: 
flow_pinning_enable, symmetric_hash, ...





  > It does not control or comment about the flow hash value stored in the
packet. We don't have an API to control it yet.

I think it would be very confusing to have separate settings for the "flow
hash" and "the hash which decides which queue is chosen", and it is quite
unlikely that hardware could implement the case where the two differ, as
the card usually computes only one kind of hash.


Does HW always output the hash value it generated for queue selection (and 
store it in the packet)? If it does not, a spec demanding that *the same 
hash* is always stored in the packet would require a SW-based hash calculation 
per packet.

I'm not demanding that. The platform can return 0 from odp_packet_has_flow_hash() if it never stores the computed hash. I'm talking about not creating two separate interfaces to configure two hash values for the same packet.

We can add control for generating the flow hash stored in the packet; it could be as simple as another flag (store_flow_hash) in this struct, or something else, depending on HW capability.


-Petri

_______________________________________________
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp
