Yeah, I did not expect a change, but at least we know that RSS is configured on 
this card:

    rss avail:         ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp port 
                       vxlan geneve nvgre 
    rss active:        ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp 


I'm afraid this is not a VPP issue; you will need to talk to the Cavium guys and 
ask them why RSS doesn't work properly...
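
If you want to convince yourself that traffic varying only in src/dst IPv4 address 
should spread across both queues, you can model the hash offline. Below is a 
minimal Python sketch (my own illustration, not VPP or DPDK code). It assumes the 
classic Toeplitz hash with the well-known default Microsoft RSS key and a plain 
modulo instead of the NIC's indirection table; the ThunderX firmware may program a 
different key or algorithm, which is exactly the kind of detail to ask Cavium about:

    #!/usr/bin/env python3
    # Sketch: classic Toeplitz RSS hash over the IPv4 2-tuple, using the
    # default Microsoft RSS key. For the TCP/UDP hash types the ports would
    # be appended to the input; the test traffic varies only the addresses.
    import socket

    RSS_KEY = bytes([
        0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
        0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
        0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
        0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
        0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa])

    def toeplitz_hash(key, data):
        # For every set bit of the input, XOR in the 32-bit window of the
        # key that starts at that bit position.
        key_int = int.from_bytes(key, "big")
        key_bits = len(key) * 8
        result = 0
        for i, byte in enumerate(data):
            for b in range(8):
                if byte & (0x80 >> b):
                    shift = key_bits - 32 - (i * 8 + b)
                    result ^= (key_int >> shift) & 0xFFFFFFFF
        return result

    def queue_for(src, dst, n_queues=2):
        # Real hardware maps the hash through an indirection table; a plain
        # modulo over the queue count is a simplification.
        data = socket.inet_aton(src) + socket.inet_aton(dst)
        return toeplitz_hash(RSS_KEY, data) % n_queues

    if __name__ == "__main__":
        counts = [0, 0]
        for i in range(10000):
            counts[queue_for("10.0.%d.%d" % (i >> 8, i & 0xff), "192.168.1.1")] += 1
        print("queue distribution:", counts)  # should come out roughly even

If that shows an even split for your address pattern, the imbalance is almost 
certainly on the NIC/PMD side rather than in how the flows are built.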

-- 
Damjan

> On 19 Dec 2018, at 09:00, mik...@yeah.net wrote:
> 
> The result is the same. The 2nd queue only received 24 packets, just as in 
> 18.07.
> 
> version : vpp v18.10-19~ga8e3001-dirty 
> vpp# show hardware-interfaces 
>               Name                Idx   Link  Hardware
> VirtualFunctionEthernet5/0/2       1     up   VirtualFunctionEthernet5/0/2
>   Ethernet address 12:90:b1:65:2d:19
>   Cavium ThunderX
>     carrier up full duplex speed 10000 mtu 9190 
>     flags: admin-up pmd maybe-multiseg
>     rx: queues 2 (max 96), desc 1024 (min 0 max 65535 align 1)
>     tx: queues 2 (max 96), desc 1024 (min 0 max 65535 align 1)
>     pci: device 177d:a034 subsystem 177d:a234 address 0000:05:00.02 numa 0
>     module: unknown
>     max rx packet len: 9204
>     promiscuous: unicast off all-multicast off
>     vlan offload: strip off filter off qinq off
>     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum jumbo-frame 
>                        crc-strip scatter 
>     rx offload active: jumbo-frame crc-strip scatter 
>     tx offload avail:  ipv4-cksum udp-cksum tcp-cksum outer-ipv4-cksum 
>     tx offload active: 
>     rss avail:         ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp port 
>                        vxlan geneve nvgre 
>     rss active:        ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp 
>     tx burst function: nicvf_xmit_pkts_multiseg
>     rx burst function: nicvf_recv_pkts_multiseg_no_offload
> 
>     tx frames ok                                       11064
>     tx bytes ok                                      3042600
>     rx frames ok                                     3995658
>     rx bytes ok                                    687253292
>     rx missed                                           4342
>     extended stats:
>       rx good packets                                3995658
>       tx good packets                                  11064
>       rx good bytes                                687253292
>       tx good bytes                                  3042600
>       rx missed errors                                  4342
>       rx q0packets                                   3995634
>       rx q0bytes                                   687249164
>       rx q1packets                                        24
>       rx q1bytes                                        4128
>       tx q0packets                                        12
>       tx q0bytes                                        3300
>       tx q1packets                                     11052
>       tx q1bytes                                     3039300
> 
> 
> mik...@yeah.net
>  
> From: Damjan Marion <dmar...@me.com>
> Date: 2018-12-19 15:46
> To: mik...@yeah.net
> CC: vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] dpdk-input : serious load imbalance
> Can you try to capture "show hardw" with 18.10?
> 
> Looks like the ThunderX is not acting as a PCI device, so part of the output is 
> suppressed in 18.07; we changed that behaviour in 18.10.
> 
> I'm looking for something like:
> 
>     rss avail:         ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp ipv6-tcp-ex
>                        ipv6-udp-ex ipv6-ex ipv6-tcp-ex ipv6-udp-ex
>     rss active:        none
> 
> 
> -- 
> Damjan
> 
>> On 19 Dec 2018, at 08:26, mik...@yeah.net wrote:
>> 
>> vpp v18.07.1-10~gc548f5d-dirty 
>> 
>> mik...@yeah.net
>>  
>> From: Damjan Marion <dmar...@me.com>
>> Date: 2018-12-19 15:21
>> To: mik...@yeah.net
>> CC: vpp-dev <vpp-dev@lists.fd.io>
>> Subject: Re: [vpp-dev] dpdk-input : serious load imbalance
>> 
>> What version of VPP do you use?
>> I'm missing some outputs in "show hardware"...
>> 
>> -- 
>> Damjan
>> 
>>> On 19 Dec 2018, at 02:19, mik...@yeah.net wrote:
>>> 
>>> The "show hardw" output is as follows; the statistics may differ from 
>>> yesterday's.
>>> 
>>> vpp# show hardware-interfaces 
>>>               Name                Idx   Link  Hardware
>>> VirtualFunctionEthernet5/0/2       1     up   VirtualFunctionEthernet5/0/2
>>>   Ethernet address 72:62:8a:40:43:12
>>>   Cavium ThunderX
>>>     carrier up full duplex speed 10000 mtu 9190 
>>>     flags: admin-up pmd maybe-multiseg
>>>     rx queues 2, rx desc 1024, tx queues 2, tx desc 1024
>>>     cpu socket 0
>>> 
>>>     tx frames ok                                      268302
>>>     tx bytes ok                                     74319654
>>>     rx frames ok                                     4000000
>>>     rx bytes ok                                    688000000
>>>     extended stats:
>>>       rx good packets                                4000000
>>>       tx good packets                                 268302
>>>       rx good bytes                                688000000
>>>       tx good bytes                                 74319654
>>>       rx q0packets                                   3999976
>>>       rx q0bytes                                   687995872
>>>       rx q1packets                                        24
>>>       rx q1bytes                                        4128
>>>       tx q0packets                                        12
>>>       tx q0bytes                                        3324
>>>       tx q1packets                                    268290
>>>       tx q1bytes                                    74316330
>>> VirtualFunctionEthernet5/0/3       2    down  VirtualFunctionEthernet5/0/3
>>>   Ethernet address 2a:f2:d5:47:67:f1
>>>   Cavium ThunderX
>>>     carrier down 
>>>     flags: pmd maybe-multiseg
>>>     rx queues 2, rx desc 1024, tx queues 2, tx desc 1024
>>>     cpu socket 0
>>> 
>>> local0                             0    down  local0
>>>   local
>>> 
>>> mik...@yeah.net
>>>  
>>> From: Damjan Marion <dmar...@me.com>
>>> Date: 2018-12-18 20:38
>>> To: mik...@yeah.net
>>> CC: vpp-dev@lists.fd.io
>>> Subject: Re: [vpp-dev] dpdk-input : serious load imbalance
>>> 
>>> What kind of NIC do you have? Can you capture "show hardw"?
>>> 
>>> -- 
>>> Damjan
>>> 
>>>> On 18 Dec 2018, at 04:03, mik...@yeah.net wrote:
>>>> 
>>>> Hi,
>>>>    I configured 2 worker threads and 2 DPDK rx-queues in startup.conf. Then 
>>>> I forged 4,000,000 packets and sent them to a single DPDK interface. It turns 
>>>> out that the second thread received only 24 packets. I tested it several times 
>>>> and the results are almost the same. Why did this happen?
>>>> 
>>>>    Here is some config and the "show" output:
>>>> VPP : 18.07
>>>> startup.conf:
>>>> cpu {
>>>>         main-core 1
>>>>         corelist-workers 2,3
>>>> }
>>>> 
>>>> dpdk {
>>>>          dev default {
>>>>                  num-rx-queues 2
>>>>                  num-tx-queues 2
>>>>          }
>>>> }
>>>> 
>>>> packets:
>>>> these packets share the same src MAC, dst MAC, and IPv4 payload; only the 
>>>> IPv4 src and dst addresses differ from packet to packet.
>>>> <Catch.jpg>
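>>>> 
>>>> A rough Scapy sketch of how such traffic could be generated (illustration
>>>> only: the MACs, address ranges, UDP header, payload and interface name are
>>>> placeholders, not the exact values used in this test):
>>>> 
>>>> #!/usr/bin/env python3
>>>> # Packets that share MACs and payload but differ in the IPv4 src/dst
>>>> # addresses, so an ipv4/ipv4-udp RSS hash should spread them over queues.
>>>> from scapy.all import Ether, IP, UDP, Raw, sendp
>>>> 
>>>> def make_pkts(n):
>>>>     pkts = []
>>>>     for i in range(n):
>>>>         pkts.append(Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") /
>>>>                     IP(src="10.0.%d.%d" % ((i >> 8) & 0xff, i & 0xff),
>>>>                        dst="20.0.%d.%d" % ((i >> 8) & 0xff, i & 0xff)) /
>>>>                     UDP(sport=1000, dport=2000) / Raw(b"x" * 128))
>>>>     return pkts
>>>> 
>>>> if __name__ == "__main__":
>>>>     # Send a small sample; the real test used 4,000,000 packets.
>>>>     sendp(make_pkts(1000), iface="eth0", verbose=False)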
>>>> 
>>>> ------------------------------------------------------------------
>>>> # sh runtime
>>>> Thread 1 vpp_wk_0 (lcore 2)
>>>> Time 69.8, average vectors/node 1.02, last 128 main loops 0.00 per node 0.00
>>>>   vector rates in 5.7911e4, out 5.0225e2, drop 5.7783e4, punt 0.0000e0
>>>>              Name                 State         Calls          Vectors        Suspends         Clocks       Vectors/Call
>>>> dpdk-input                       polling         104246554         3999630               0          7.84e2             .04
>>>> ---------------
>>>> Thread 2 vpp_wk_1 (lcore 3)
>>>> Time 69.8, average vectors/node 1.00, last 128 main loops 0.00 per node 0.00
>>>>   vector rates in 5.1557e-1, out 1.7186e-1, drop 5.1557e-1, punt 0.0000e0
>>>>              Name                 State         Calls          Vectors        Suspends         Clocks       Vectors/Call
>>>> dpdk-input                       polling         132390000              24               0          1.59e8            0.00
>>>> -----------------------------------------------------------------
>>>> # show interface rx-placement 
>>>> Thread 1 (vpp_wk_0):
>>>>   node dpdk-input:
>>>>     VirtualFunctionEthernet5/0/2 queue 0 (polling)
>>>>     VirtualFunctionEthernet5/0/3 queue 0 (polling)
>>>> Thread 2 (vpp_wk_1):
>>>>   node dpdk-input:
>>>>     VirtualFunctionEthernet5/0/2 queue 1 (polling)
>>>>     VirtualFunctionEthernet5/0/3 queue 1 (polling)
>>>> -----------------------------------------------------------------
>>>> vpp# show interface 
>>>>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     
>>>> VirtualFunctionEthernet5/0/2      1      up          9000/0/0/0     rx packets               3999654
>>>>                                                                     rx bytes               687940488
>>>>                                                                     tx packets                 35082
>>>>                                                                     tx bytes                 9647550
>>>>                                                                     rx-miss                      346
>>>> VirtualFunctionEthernet5/0/3      2     down         9000/0/0/0     
>>>> local0                            0     down          0/0/0/0       
>>>> 
>>>> 
>>>> Thanks in advance.
>>>> Mikado
>>>> mik...@yeah.net
