On Mon, May 21, 2018 at 8:12 AM, Willem de Bruijn
<willemdebruijn.ker...@gmail.com> wrote:
> On Mon, May 21, 2018 at 10:51 AM, Tom Herbert <t...@herbertland.com> wrote:
>> On Sat, May 19, 2018 at 1:27 PM, Willem de Bruijn
>> <willemdebruijn.ker...@gmail.com> wrote:
>>> On Sat, May 19, 2018 at 4:13 PM, Willem de Bruijn
>>> <willemdebruijn.ker...@gmail.com> wrote:
>>>> On Fri, May 18, 2018 at 12:03 AM, Tom Herbert <t...@herbertland.com> wrote:
>>>>> On Tue, May 15, 2018 at 6:26 PM, Amritha Nambiar
>>>>> <amritha.namb...@intel.com> wrote:
>>>>>> This patch adds support to pick Tx queue based on the Rx queue map
>>>>>> configuration set by the admin through the sysfs attribute
>>>>>> for each Tx queue. If the user configuration for receive
>>>>>> queue map does not apply, then the Tx queue selection falls back
>>>>>> to CPU map based selection and finally to hashing.
>>>>>>
>>>>>> Signed-off-by: Amritha Nambiar <amritha.namb...@intel.com>
>>>>>> Signed-off-by: Sridhar Samudrala <sridhar.samudr...@intel.com>
>>>>>> ---
>>>
>>>>>> +static int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
>>>>>> +{
>>>>>> +#ifdef CONFIG_XPS
>>>>>> +       enum xps_map_type i = XPS_MAP_RXQS;
>>>>>> +       struct xps_dev_maps *dev_maps;
>>>>>> +       struct sock *sk = skb->sk;
>>>>>> +       int queue_index = -1;
>>>>>> +       unsigned int tci = 0;
>>>>>> +
>>>>>> +       if (sk && sk->sk_rx_queue_mapping < dev->real_num_rx_queues &&
>>>>>> +           dev->ifindex == sk->sk_rx_ifindex)
>>>>>> +               tci = sk->sk_rx_queue_mapping;
>>>>>> +
>>>>>> +       rcu_read_lock();
>>>>>> +       while (queue_index < 0 && i < __XPS_MAP_MAX) {
>>>>>> +               if (i == XPS_MAP_CPUS)
>>>>>
>>>>> This while loop typifies exactly why I don't think the XPS maps should
>>>>> be an array.
>>>>
>>>> +1
>>>
>>> As a matter of fact, since enabling both the cpu and rxqueue maps at
>>> the same time makes no sense, only one map is needed at any one time.
>>> The only difference is in how it is indexed. It should probably not be
>>> possible to configure both at the same time. Keeping a single map
>>> would probably also significantly simplify patch 1/4.
>>
>> Willem,
>>
>> I think it might make sense to have both. Maybe one application that
>> needs this is spin polling, while others might be happy with normal
>> CPU mappings as the default.
>
> Some entries in the rx_queue table have queue_pair affinity
> configured, while the others return -1 to fall through to the cpu
> affinity table?
>
Right, that's the intent of the while loop.
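
To spell out that fall-through, something along these lines (a rough
sketch only; xps_lookup() is a hypothetical helper that indexes one
map and returns -1 on a miss, not the patch's actual code):

static int pick_tx_queue(struct net_device *dev, struct sk_buff *skb)
{
	int qidx;

	/* 1. Try the per-socket rx queue recorded at receive time. */
	qidx = xps_lookup(dev, skb, XPS_MAP_RXQS);
	if (qidx >= 0)
		return qidx;

	/* 2. Fall back to the classic per-cpu XPS map. */
	qidx = xps_lookup(dev, skb, XPS_MAP_CPUS);
	if (qidx >= 0)
		return qidx;

	/* 3. No map hit: caller falls back to flow hashing. */
	return -1;	/* __netdev_pick_tx() then uses skb_tx_hash() */
}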

> I guess that implies flow steering to those special purpose
> queues. I wonder whether this would be used in practice.
> It does make the code more complex by having to duplicate
> the map lookup logic (mostly patch 1/4).

That's a good point. I think we need more information on how the
feature is going to be used in practice. My assumption is that there
are some number of "special" queues for which spin polling is being
done.
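
To make that concrete, the kind of application I have in mind would
opt in with the existing SO_BUSY_POLL socket option; the symmetric
rx/tx queue pick is what this series would add on top. A userspace
sketch (nothing here is from the patch itself):

#include <sys/socket.h>
#include <netinet/in.h>

/* Latency-sensitive app enables busy polling on its socket. The
 * kernel records which rx queue the flow lands on
 * (sk->sk_rx_queue_mapping); with this series, transmit would then
 * pick the matching queue pair.
 */
static int setup_busy_poll_socket(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int usecs = 50;	/* spin up to 50us waiting for packets */

	if (fd < 0)
		return -1;
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &usecs, sizeof(usecs)) < 0) {
		/* busy polling unavailable; normal interrupt path used */
	}
	return fd;
}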

Tom
