Resending, as my last mail to vpp-dev failed to send.

zhangguangm...@baicells.com
 
From: zhangguangm...@baicells.com
Date: 2021-11-05 09:54
To: bganne
CC: vpp-dev
Subject: Re: RE: Is there a bug in IKEv2 when enable multithread ?
      Yes, a flow with the same source/destination IP and ports being assigned to
the same NIC queue is what I expected. But the result is that the init and auth
packets were assigned to the same queue, while the informational reply (the
informational request was sent by VPP) was assigned to a different queue.
 
    I also ran another test: I captured all the IKEv2 packets and replayed
those whose destination address is VPP; all of those packets were assigned to
the same queue.
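
(For anyone trying to reproduce this: one way to see which worker and rx queue a
packet lands on is the standard VPP packet trace; the output is grouped per
thread and the dpdk-input entry shows the rx queue. A minimal sketch using the
usual trace CLI:)

DBGvpp# clear trace
DBGvpp# trace add dpdk-input 50
   ... send the IKE_SA_INIT / IKE_AUTH / INFORMATIONAL packets ...
DBGvpp# show trace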

I think there are two possible causes: either NIC RSS is not working, or it is
the IKEv2 code. The first is the most likely, but it cannot explain the result
of the second test.
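
(To rule out a 4-tuple mismatch, the capture can also be filtered on the IKE
ports to confirm that every packet really carries the same source/destination
addresses and UDP ports; the interface name below is just the one from my setup:)

tcpdump -nn -i enp2s13 'udp port 500 or udp port 4500'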

 I have reported the first cause through the Intel Premier Support site.
My physical NIC is an 82599; the VF (SR-IOV) is used by a VM. The question is
whether RSS is supported by ixgbevf:

root@gmzhang:~/vpn# ethtool -i enp2s13
driver: ixgbevf
version: 4.1.0-k
firmware-version: 
expansion-rom-version: 
bus-info: 0000:02:0d.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
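
(On drivers that support it, the RSS hash fields and indirection table can also
be queried directly; on this 82599 VF the commands below may simply return
"Operation not supported", which would itself be a useful data point for the
Intel case:)

root@gmzhang:~/vpn# ethtool -n enp2s13 rx-flow-hash udp4   # fields hashed for IPv4/UDP flows
root@gmzhang:~/vpn# ethtool -x enp2s13                     # RSS indirection table and hash key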


Thanks,
Guangming



zhangguangm...@baicells.com
 
From: Benoit Ganne (bganne)
Date: 2021-11-05 01:12
To: zhangguangm...@baicells.com
CC: vpp-dev
Subject: RE: Is there a bug in IKEv2 when enable multithread ?
Hi,
 
Why do you receive those packets on different workers? I would expect all the 
IKE packets to use the same source/dest IP and ports, and hence to arrive on the 
same worker. Is that not the case?
 
Best
ben
 
> -----Original Message-----
> From: zhangguangm...@baicells.com <zhangguangm...@baicells.com>
> Sent: mardi 2 novembre 2021 10:15
> To: Damjan Marion (damarion) <damar...@cisco.com>; Filip Tehlar -X
> (ftehlar - PANTHEON TECH SRO at Cisco) <fteh...@cisco.com>; nranns
> <nra...@cisco.com>; Benoit Ganne (bganne) <bga...@cisco.com>
> Subject: Is there a bug in IKEv2 when enable multithread ?
> 
> 
> 
> 
> ________________________________
> 
> zhangguangm...@baicells.com
> 
> 
> From: zhangguangm...@baicells.com
> <mailto:zhangguangm...@baicells.com>
> Date: 2021-11-02 17:01
> To: vpp-dev <mailto:vpp-dev@lists.fd.io>
> CC: ftehlar <mailto:fteh...@cisco.com>
> Subject: Is there a bug in IKEv2 when enable multithread ?
> Hi,
> 
>      When I tested IKEv2, I found that with multithreading enabled, the IKE
> SA gets deleted quickly after the IKE negotiation completes.
> The root cause is that the init and auth packets are handled by one worker
> thread, but the informational packet is handled by another thread.
> RSS is enabled.
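> 
> (For completeness, the SA state right after the negotiation can be inspected
> with the ikev2 CLI:)
> 
> DBGvpp# show ikev2 sa
> DBGvpp# show ikev2 sa details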
> 
> 
> The following is my configuration:
> 
> 
> 
> cpu {
> ## In the VPP there is one main thread and optionally the user can create worker(s)
> ## The main thread and worker thread(s) can be pinned to CPU core(s) manually or automatically
> 
> 
> ## Manual pinning of thread(s) to CPU core(s)
> 
> 
> ## Set logical CPU core where main thread runs, if main core is not set
> ## VPP will use core 1 if available
> main-core 1
> 
> 
> ## Set logical CPU core(s) where worker threads are running
> # corelist-workers 2-3,18-19
> corelist-workers 2-3,4-5
> 
> 
> ## Automatic pinning of thread(s) to CPU core(s)
> 
> 
> ## Sets number of CPU core(s) to be skipped (1 ... N-1)
> ## Skipped CPU core(s) are not used for pinning main thread and working thread(s).
> ## The main thread is automatically pinned to the first available CPU core and worker(s)
> ## are pinned to next free CPU core(s) after core assigned to main thread
> # skip-cores 4
> 
> 
> ## Specify a number of workers to be created
> ## Workers are pinned to N consecutive CPU cores while skipping "skip-cores" CPU core(s)
> ## and main thread's CPU core
> #workers 2
> 
> 
> ## Set scheduling policy and priority of main and worker threads
> 
> 
> ## Scheduling policy options are: other (SCHED_OTHER), batch (SCHED_BATCH)
> ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
> # scheduler-policy fifo
> 
> 
> ## Scheduling priority is used only for "real-time policies (fifo and rr),
> ## and has to be in the range of priorities supported for a particular policy
> # scheduler-priority 50
> }
> 
> 
> 
> 
> dpdk {
> ## Change default settings for all interfaces
> dev default {
> ## Number of receive queues, enables RSS
> ## Default is 1
> num-rx-queues 4
> 
> 
> ## Number of transmit queues, Default is equal
> ## to number of worker threads or 1 if no worker threads
>        num-tx-queues 4
> 
> 
> ## Number of descriptors in transmit and receive rings
> ## increasing or reducing number can impact performance
> ## Default is 1024 for both rx and tx
> # num-rx-desc 512
> # num-tx-desc 512
> 
> 
> ## VLAN strip offload mode for interface
> ## Default is off
> # vlan-strip-offload on
> 
> 
> ## TCP Segment Offload
> ## Default is off
> ## To enable TSO, 'enable-tcp-udp-checksum' must be set
> # tso on
> 
> 
> ## Devargs
>                 ## device specific init args
>                 ## Default is NULL
> # devargs safe-mode-support=1,pipeline-mode-support=1
> 
>                 #rss 3
> ## rss-queues
> ## set valid rss steering queues
> # rss-queues 0,2,5-7
> #rss-queues 0,1
> }
> 
> 
> ## Whitelist specific interface by specifying PCI address
> # dev 0000:02:00.0
> 
>         dev 0000:00:14.0
>         dev 0000:00:15.0
>         dev 0000:00:10.0
>         dev 0000:00:11.0
>         #vdev crypto_aesni_mb0,socket_id=1
>         #vdev crypto_aesni_mb1,socket_id=1
> 
> ## Blacklist specific device type by specifying PCI vendor:device
>         ## Whitelist entries take precedence
> # blacklist 8086:10fb
> 
> 
> ## Set interface name
> # dev 0000:02:00.1 {
> # name eth0
> # }
> 
> 
> ## Whitelist specific interface by specifying PCI address and in
> ## addition specify custom parameters for this interface
> # dev 0000:02:00.1 {
> # num-rx-queues 2
> # }
> 
> 
> ## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,
> ## uio_pci_generic or auto (default)
> # uio-driver vfio-pci
>         #uio-driver igb_uio
> 
> 
> ## Disable multi-segment buffers, improves performance but
> ## disables Jumbo MTU support
> # no-multi-seg
> 
> 
> ## Change hugepages allocation per-socket, needed only if there is need for
> ## larger number of mbufs. Default is 256M on each detected CPU socket
> # socket-mem 2048,2048
> 
> 
> ## Disables UDP / TCP TX checksum offload. Typically needed for use
> ## faster vector PMDs (together with no-multi-seg)
> # no-tx-checksum-offload
> 
> 
> ## Enable UDP / TCP TX checksum offload
> ## This is the reversed option of 'no-tx-checksum-offload'
> # enable-tcp-udp-checksum
> 
> 
> ## Enable/Disable AVX-512 vPMDs
> # max-simd-bitwidth <256|512>
> }
> 
> DBGvpp# show threads
> ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State
> 0      vpp_main                        2306    other (0)                1      1      0
> 1      vpp_wk_0            workers     2308    other (0)                2      2      0
> 2      vpp_wk_1            workers     2309    other (0)                3      3      0
> 3      vpp_wk_2            workers     2310    other (0)                4      4      0
> 4      vpp_wk_3            workers     2311    other (0)                5      5      0
> DBGvpp#
> DBGvpp# show hardware-interfaces
>               Name                Idx   Link  Hardware
> 0: format_dpdk_device:598: rte_eth_dev_rss_hash_conf_get returned -95
> GigabitEthernet0/14/0              1     up   GigabitEthernet0/14/0
>   Link speed: 4294 Gbps
>   RX Queues:
>     queue thread         mode
>     0     vpp_wk_0 (1)   polling
>     1     vpp_wk_1 (2)   polling
>     2     vpp_wk_2 (3)   polling
>     3     vpp_wk_3 (4)   polling
>   Ethernet address 5a:9b:03:80:93:cf
>   Red Hat Virtio
>     carrier up full duplex mtu 9206
>     flags: admin-up pmd maybe-multiseg int-supported
>     Devargs:
>     rx: queues 4 (max 4), desc 256 (min 0 max 65535 align 1)
>     tx: queues 4 (max 4), desc 256 (min 0 max 65535 align 1)
>     pci: device 1af4:1000 subsystem 1af4:0001 address 0000:00:14.00 numa 0
>     max rx packet len: 9728
>     promiscuous: unicast off all-multicast on
>     vlan offload: strip off filter off qinq off
>     rx offload avail:  vlan-strip udp-cksum tcp-cksum tcp-lro vlan-filter jumbo-frame scatter
>     rx offload active: jumbo-frame scatter
>     tx offload avail:  vlan-insert udp-cksum tcp-cksum tcp-tso multi-segs
>     tx offload active: multi-segs
>     rss avail:         none
>     rss active:        none
>     tx burst function: virtio_xmit_pkts
>     rx burst function: virtio_recv_mergeable_pkts
> 
> DBGvpp# show ikev2 profile
> profile profile1
>   auth-method shared-key-mic auth data foobarblah
>   local id-type ip4-addr data 10.10.10.15
>   remote id-type ip4-addr data 10.10.10.2
>   local traffic-selector addr 10.10.20.0 - 10.10.20.255 port 0 - 65535 protocol 0
>   remote traffic-selector addr 172.16.2.0 - 172.16.2.255 port 0 - 65535 protocol 0
>   lifetime 0 jitter 0 handover 0 maxdata 0
> 
> DBGvpp# show interface addr
> GigabitEthernet0/14/0 (up):
>   L3 10.10.10.15/24
> GigabitEthernet0/15/0 (up):
>   L3 10.10.20.15/24
> local0 (dn):
> 
> 
> 
> I also configured a flow with RSS (enabling it on the interface is sketched after the output below):
> 
> DBGvpp# show flow entry
> flow-index 0 type ipv4 active 0
>   match: src_addr any, dst_addr any, protocol UDP
>   action: rss
>     rss function default, rss types ipv4-udp
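> 
> (If I remember the flow CLI correctly, the flow also has to be enabled on the
> rx interface, roughly as below; the exact syntax may differ between VPP releases:)
> 
> DBGvpp# test flow enable index 0 GigabitEthernet0/14/0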
> 
> Is this a bug, or is my configuration wrong?
> 
> Thanks
> guangming
> 
> ________________________________
> 
> 
> zhangguangm...@baicells.com
 