Re: [vpp-dev] Is there a bug in IKEv2 when enable multithread ?

2021-11-05 Thread Guangming

Thanks ben
With software RSS, the test result is OK.
This appears to be an ixgbevf RSS issue.




zhangguangm...@baicells.com
 
From: Benoit Ganne (bganne) via lists.fd.io
Date: 2021-11-05 16:15
To: zhangguangm...@baicells.com
CC: vpp-dev
Subject: Re: [vpp-dev] Is there a bug in IKEv2 when enable multithread ?
Here is my take: currently in the ike plugin we expect all related ike packets 
to arrive on the same NIC queue. I expect all ike packets to use the same UDP 
5-tuple, hence I think this assumption is correct.
If you can share a scenario (in the RFC or even better with an existing ike 
implementation) where it is not correct, we should probably reconsider this.
If it is an RSS issue, you should fix it. You can use 'vppctl set interface 
handoff' for software RSS as a workaround.
 
Best
ben
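For reference, a minimal sketch of the software-RSS workaround described above, assuming the 'set interface handoff' CLI of recent VPP releases and an illustrative interface name; the exact syntax can differ per build, so check the CLI help first:

# interface name below is illustrative; pick the NIC that receives the IKE traffic
vppctl set interface handoff GigabitEthernet0/14/0 workers 1-2
# confirm the resulting RX-queue-to-worker mapping
vppctl show interface rx-placement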
 
> -Original Message-
> From: zhangguangm...@baicells.com 
> Sent: vendredi 5 novembre 2021 02:55
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev 
> Subject: Re: RE: Is there a bug in IKEv2 when enable multithread ?
> 
>   Yes, it is expected that a flow with the same source/dest IP and ports
> is assigned to the same NIC queue. But the result is that the init and auth
> packets were assigned to the same queue, while the informational reply (the
> informational request was sent by VPP) was assigned to another queue.
> 
> I also ran another test: I captured all the IKEv2 packets and replayed
> the packets whose destination address is VPP; all the packets were assigned
> to the same queue.
> 
> I think there are two possible causes: either NIC RSS is not working, or it
> is the IKEv2 code. The first is the most likely, but it cannot explain the
> result of the second test.
> 
>  I have reported the first cause through the Intel® Premier Support
> site.
> My physical NIC is 82599. The VF (SR-IOV) is used by a VM. The question is
> about RSS support in ixgbevf.
> 
> root@gmzhang:~/vpn# ethtool -i enp2s13
> driver: ixgbevf
> version: 4.1.0-k
> firmware-version:
> expansion-rom-version:
> bus-info: :02:0d.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: yes
> supports-priv-flags: no
> 
> 
> Guangming
> 
> Thanks
> 
> 
> 
> zhangguangm...@baicells.com
> 
> 
> From: Benoit Ganne (bganne) <mailto:bga...@cisco.com>
> Date: 2021-11-05 01:12
> To: zhangguangm...@baicells.com <mailto:zhangguangm...@baicells.com>
> CC: vpp-dev <mailto:vpp-dev@lists.fd.io>
> Subject: RE: Is there a bug in IKEv2 when enable multithread ?
> Hi,
> 
> Why do you receive those packets on different workers? I would
> expect all the IKE packets to use the same source/dest IP and ports hence
> arriving in the same worker. Is it not the case?
> 
> Best
> ben
> 
> > -Original Message-
> > From: zhangguangm...@baicells.com 
> > Sent: mardi 2 novembre 2021 10:15
> > To: Damjan Marion (damarion) ; Filip Tehlar -X
> > (ftehlar - PANTHEON TECH SRO at Cisco) ; nranns
> > ; Benoit Ganne (bganne) 
> > Subject: Is there a bug in IKEv2 when enable multithread ?
> >
> >
> >
> >
> > 
> >
> > zhangguangm...@baicells.com
> >
> >
> > From: zhangguangm...@baicells.com
> > <mailto:zhangguangm...@baicells.com>
> > Date: 2021-11-02 17:01
> > To: vpp-dev <mailto:vpp-dev@lists.fd.io>
> > CC: ftehlar <mailto:fteh...@cisco.com>
> > Subject: Is there a bug in IKEv2 when enable multithread ?
> > Hi,
> >
> >  When I test IKEv2, I found that when multithreading is enabled, the
> > IKE SA is deleted quickly after the IKE negotiation completes.
> > The root cause is that the init and auth packets are handled by one worker
> > thread, but the informational packet is handled by another thread.
> > RSS is enabled.
> >
> >
> > The following is my configuration
> >
> >
> >
> > cpu {
> > ## In the VPP there is one main thread and optionally the user can
> > create worker(s)
> > ## The main thread and worker thread(s) can be pinned to CPU
> core(s)
> > manually or automatically
> >
> >
> > ## Manual pinning of thread(s) to CPU core(s)
> >
> >
> > ## Set logical CPU core where main thread runs, if main core is
> not
> > set
> > ## VPP will use core 1 if available
> > main-core 1
> >
> >
> > ## Set logical CPU core(s) where worker threads are running
> > # corelist-workers 2-3,18-19
> > corelist-workers 2-3,4-5
> >
> >
> > ## Automatic pinning of thread(s) to CPU core(s)
> >
> >
> > 

Re: [vpp-dev] Is there a bug in IKEv2 when enable multithread ?

2021-11-05 Thread Benoit Ganne (bganne) via lists.fd.io
Here is my take: currently in the ike plugin we expect all related ike packets 
to arrive on the same NIC queue. I expect all ike packets to use the same UDP 
5-tuple, hence I think this assumption is correct.
If you can share a scenario (in the RFC or even better with an existing ike 
implementation) where it is not correct, we should probably reconsider this.
If it is an RSS issue, you should fix it. You can use 'vppctl set interface 
handoff' for software RSS as a workaround.

Best
ben

> -Original Message-
> From: zhangguangm...@baicells.com 
> Sent: vendredi 5 novembre 2021 02:55
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev 
> Subject: Re: RE: Is there a bug in IKEv2 when enable multithread ?
> 
>   Yes, it is expected that a flow with the same source/dest IP and ports
> is assigned to the same NIC queue. But the result is that the init and auth
> packets were assigned to the same queue, while the informational reply (the
> informational request was sent by VPP) was assigned to another queue.
> 
> I also ran another test: I captured all the IKEv2 packets and replayed
> the packets whose destination address is VPP; all the packets were assigned
> to the same queue.
> 
> I think there are two possible causes: either NIC RSS is not working, or it
> is the IKEv2 code. The first is the most likely, but it cannot explain the
> result of the second test.
> 
>  I have reported the first cause through the Intel® Premier Support
> site.
> My physical NIC is 82599. The VF (SR-IOV) is used by a VM. The question is
> about RSS support in ixgbevf.
> 
> root@gmzhang:~/vpn# ethtool -i enp2s13
> driver: ixgbevf
> version: 4.1.0-k
> firmware-version:
> expansion-rom-version:
> bus-info: :02:0d.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: yes
> supports-priv-flags: no
> 
> 
> Guangming
> 
> Thanks
> 
> 
> 
> zhangguangm...@baicells.com
> 
> 
> From: Benoit Ganne (bganne) <mailto:bga...@cisco.com>
> Date: 2021-11-05 01:12
> To: zhangguangm...@baicells.com <mailto:zhangguangm...@baicells.com>
> CC: vpp-dev <mailto:vpp-dev@lists.fd.io>
> Subject: RE: Is there a bug in IKEv2 when enable multithread ?
> Hi,
> 
> Why do you receive those packets on different workers? I would
> expect all the IKE packets to use the same source/dest IP and ports hence
> arriving in the same worker. Is it not the case?
> 
> Best
> ben
> 
>   > -Original Message-
>   > From: zhangguangm...@baicells.com
>   > Sent: mardi 2 novembre 2021 10:15
>   > To: Damjan Marion (damarion) ; Filip Tehlar -X
>   > (ftehlar - PANTHEON TECH SRO at Cisco) ; nranns
>   > ; Benoit Ganne (bganne)
>   > Subject: Is there a bug in IKEv2 when enable multithread ?
>   >
>   >
>   >
>   >
>   >
>   > zhangguangm...@baicells.com
>   >
>   >
>   > From: zhangguangm...@baicells.com
>   > <mailto:zhangguangm...@baicells.com>
>   > Date: 2021-11-02 17:01
>   > To: vpp-dev <mailto:vpp-dev@lists.fd.io>
>   > CC: ftehlar <mailto:fteh...@cisco.com>
>   > Subject: Is there a bug in IKEv2 when enable multithread ?
>   > Hi,
>   >
>   >  When I test IKEv2, I found that when multithreading is enabled, the
>   > IKE SA is deleted quickly after the IKE negotiation completes.
>   > The root cause is that the init and auth packets are handled by one worker
>   > thread, but the informational packet is handled by another thread.
>   > RSS is enabled.
>   >
>   >
>   > The following is my configuration
>   >
>   >
>   >
>   > cpu {
>   > ## In the VPP there is one main thread and optionally the user can
>   > create worker(s)
>   > ## The main thread and worker thread(s) can be pinned to CPU
> core(s)
>   > manually or automatically
>   >
>   >
>   > ## Manual pinning of thread(s) to CPU core(s)
>   >
>   >
>   > ## Set logical CPU core where main thread runs, if main core is
> not
>   > set
>   > ## VPP will use core 1 if available
>   > main-core 1
>   >
>   >
>   > ## Set logical CPU core(s) where worker threads are running
>   > # corelist-workers 2-3,18-19
>   > corelist-workers 2-3,4-5
>   >
>   >
>   > ## Automatic pinning of thread(s) to CPU core(s)
>   >
>   >
>   > ## Sets number of CPU core(s) to be skipped (1 ... N-1)
>   > ## Skipped CPU core(s) are not used for pinning main thread and
>   > working thread(s).
>   > ## The main thread is automatically pinned to the first available
>   > CPU core and worker(s)
>   > ## are pinned to next free CPU core(s) after core assigned to main
>   > thread
>   > # skip-cores 4
>   >
>   >
>   > ## Specify a number of workers to be created
>   > ## Workers are pinned to N consecutive CPU c

Re: [vpp-dev] Is there a bug in IKEv2 when enable multithread ?

2021-11-04 Thread Guangming

Resending; the last mail failed to send to vpp-dev.


zhangguangm...@baicells.com
 
From: zhangguangm...@baicells.com
Date: 2021-11-05 09:54
To: bganne
CC: vpp-dev
Subject: Re: RE: Is there a bug in IKEv2 when enable multithread ?
  Yes, it is expected that a flow with the same source/dest IP and ports is
assigned to the same NIC queue. But the result is that the init and auth
packets were assigned to the same queue, while the informational reply (the
informational request was sent by VPP) was assigned to another queue.
 
I also ran another test: I captured all the IKEv2 packets and replayed the
packets whose destination address is VPP; all the packets were assigned to
the same queue.

I think there are two possible causes: either NIC RSS is not working, or it
is the IKEv2 code. The first is the most likely, but it cannot explain the
result of the second test.
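For anyone who wants to reproduce the capture-and-replay test above, here is a rough sketch using standard tcpdump/tcpreplay; the interface name and VPP address are placeholders:

# capture all IKEv2 traffic on the test interface
tcpdump -i enp2s13 -w ike.pcap 'udp port 500 or udp port 4500'
# keep only the packets whose destination is the VPP address (placeholder IP)
tcpdump -r ike.pcap -w ike-to-vpp.pcap dst host 192.0.2.1
# replay them toward VPP and watch which NIC queue / worker receives them
tcpreplay --intf1=enp2s13 ike-to-vpp.pcap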

 I have reported the first cause through the Intel® Premier Support site.
My physical NIC is 82599. The VF (SR-IOV) is used by a VM. The question is
about RSS support in ixgbevf.

root@gmzhang:~/vpn# ethtool -i enp2s13
driver: ixgbevf
version: 4.1.0-k
firmware-version: 
expansion-rom-version: 
bus-info: :02:0d.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
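A few ethtool queries that may help confirm whether the VF actually spreads UDP flows with RSS; driver support varies, and ixgbevf may report some of these operations as unsupported:

# RSS indirection table and hash key, if the VF exposes them
ethtool -x enp2s13
# header fields hashed for UDP-over-IPv4 flows
ethtool -n enp2s13 rx-flow-hash udp4
# per-queue RX counters (names vary by driver) show where packets land
ethtool -S enp2s13 | grep -i rx_queue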


Guangming 

Thanks



zhangguangm...@baicells.com
 
From: Benoit Ganne (bganne)
Date: 2021-11-05 01:12
To: zhangguangm...@baicells.com
CC: vpp-dev
Subject: RE: Is there a bug in IKEv2 when enable multithread ?
Hi,
 
Why do you receive those packets on different workers? I would expect all the 
IKE packets to use the same source/dest IP and ports hence arriving in the same 
worker. Is it not the case?
 
Best
ben
 
> -Original Message-
> From: zhangguangm...@baicells.com 
> Sent: mardi 2 novembre 2021 10:15
> To: Damjan Marion (damarion) ; Filip Tehlar -X
> (ftehlar - PANTHEON TECH SRO at Cisco) ; nranns
> ; Benoit Ganne (bganne) 
> Subject: Is there a bug in IKEv2 when enable multithread ?
> 
> 
> 
> 
> 
> 
> zhangguangm...@baicells.com
> 
> 
> From: zhangguangm...@baicells.com
> 
> Date: 2021-11-02 17:01
> To: vpp-dev 
> CC: ftehlar 
> Subject: Is there a bug in IKEv2 when enable multithread ?
> Hi,
> 
>  When I test IKEv2, I found that when multithreading is enabled, the IKE
> SA is deleted quickly after the IKE negotiation completes.
> The root cause is that the init and auth packets are handled by one worker
> thread, but the informational packet is handled by another thread.
> RSS is enabled.
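As a rough way to confirm this from the VPP side (assuming the DPDK input node and the standard trace/rx-placement CLI of recent VPP releases):

# show which worker thread polls which RX queue
vppctl show interface rx-placement
# trace the next 50 packets at dpdk-input, then run the IKE negotiation
vppctl trace add dpdk-input 50
# each trace entry is tagged with the handling thread, so you can see whether
# the IKE_SA_INIT/IKE_AUTH and INFORMATIONAL packets hit different workers
vppctl show trace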
> 
> 
> The following is my configuration
> 
> 
> 
> cpu {
> ## In the VPP there is one main thread and optionally the user can
> create worker(s)
> ## The main thread and worker thread(s) can be pinned to CPU core(s)
> manually or automatically
> 
> 
> ## Manual pinning of thread(s) to CPU core(s)
> 
> 
> ## Set logical CPU core where main thread runs, if main core is not
> set
> ## VPP will use core 1 if available
> main-core 1
> 
> 
> ## Set logical CPU core(s) where worker threads are running
> # corelist-workers 2-3,18-19
> corelist-workers 2-3,4-5
> 
> 
> ## Automatic pinning of thread(s) to CPU core(s)
> 
> 
> ## Sets number of CPU core(s) to be skipped (1 ... N-1)
> ## Skipped CPU core(s) are not used for pinning main thread and
> working thread(s).
> ## The main thread is automatically pinned to the first available
> CPU core and worker(s)
> ## are pinned to next free CPU core(s) after core assigned to main
> thread
> # skip-cores 4
> 
> 
> ## Specify a number of workers to be created
> ## Workers are pinned to N consecutive CPU cores while skipping
> "skip-cores" CPU core(s)
> ## and main thread's CPU core
> #workers 2
> 
> 
> ## Set scheduling policy and priority of main and worker threads
> 
> 
> ## Scheduling policy options are: other (SCHED_OTHER), batch
> (SCHED_BATCH)
> ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
> # scheduler-policy fifo
> 
> 
> ## Scheduling priority is used only for "real-time policies (fifo
> and rr),
> ## and has to be in the range of priorities supported for a
> particular policy
> # scheduler-priority 50
> }
> 
> 
> 
> 
> dpdk {
> ## Change default settings for all interfaces
> dev default {
> ## Number of receive queues, enables RSS
> ## Default is 1
> num-rx-queues 4
> 
> 
> ## Number of transmit queues, Default is equal
> ## to number of worker threads, or 1 if there are no worker threads
>num-tx-queues 4
> 
> 
> ## Number of descriptors in transmit and receive rings
> ## increasing or reducing number can impact performance
> ## Default is 1024 for both rx and tx
> # num-rx-desc 512
> # num-tx-desc 512
> 
> 
> ## VLAN strip offload mode for interface
> ## Default is off
> # vlan-strip-offload on
> 
> 
> ## TCP Segment Offload
> ## Default is off
> ## To enable TSO, 'enable-tcp-udp-checksum' must be set
> # tso on
> 
> 
> ## Devargs
> ## device s

Re: [vpp-dev] Is there a bug in IKEv2 when enable multithread ?

2021-11-04 Thread Benoit Ganne (bganne) via lists.fd.io
Hi,

Why do you receive those packets on different workers? I would expect all the 
IKE packets to use the same source/dest IP and ports hence arriving in the same 
worker. Is it not the case?

Best
ben

> -Original Message-
> From: zhangguangm...@baicells.com 
> Sent: mardi 2 novembre 2021 10:15
> To: Damjan Marion (damarion) ; Filip Tehlar -X
> (ftehlar - PANTHEON TECH SRO at Cisco) ; nranns
> ; Benoit Ganne (bganne) 
> Subject: Is there a bug in IKEv2 when enable multithread ?
> 
> 
> 
> 
> 
> 
> zhangguangm...@baicells.com
> 
> 
>   From: zhangguangm...@baicells.com
> 
>   Date: 2021-11-02 17:01
>   To: vpp-dev 
>   CC: ftehlar 
>   Subject: Is there a bug in IKEv2 when enable multithread ?
>   Hi,
> 
>    When I test IKEv2, I found that when multithreading is enabled, the IKE
> SA is deleted quickly after the IKE negotiation completes.
>   The root cause is that the init and auth packets are handled by one worker
> thread, but the informational packet is handled by another thread.
>   RSS is enabled.
> 
> 
>   The following is my configuration
> 
> 
> 
>   cpu {
>   ## In the VPP there is one main thread and optionally the user can
> create worker(s)
>   ## The main thread and worker thread(s) can be pinned to CPU core(s)
> manually or automatically
> 
> 
>   ## Manual pinning of thread(s) to CPU core(s)
> 
> 
>   ## Set logical CPU core where main thread runs, if main core is not
> set
>   ## VPP will use core 1 if available
>   main-core 1
> 
> 
>   ## Set logical CPU core(s) where worker threads are running
>   # corelist-workers 2-3,18-19
>   corelist-workers 2-3,4-5
> 
> 
>   ## Automatic pinning of thread(s) to CPU core(s)
> 
> 
>   ## Sets number of CPU core(s) to be skipped (1 ... N-1)
>   ## Skipped CPU core(s) are not used for pinning main thread and
> working thread(s).
>   ## The main thread is automatically pinned to the first available
> CPU core and worker(s)
>   ## are pinned to next free CPU core(s) after core assigned to main
> thread
>   # skip-cores 4
> 
> 
>   ## Specify a number of workers to be created
>   ## Workers are pinned to N consecutive CPU cores while skipping
> "skip-cores" CPU core(s)
>   ## and main thread's CPU core
>   #workers 2
> 
> 
>   ## Set scheduling policy and priority of main and worker threads
> 
> 
>   ## Scheduling policy options are: other (SCHED_OTHER), batch
> (SCHED_BATCH)
>   ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
>   # scheduler-policy fifo
> 
> 
>   ## Scheduling priority is used only for "real-time policies (fifo
> and rr),
>   ## and has to be in the range of priorities supported for a
> particular policy
>   # scheduler-priority 50
>   }
> 
> 
> 
> 
>   dpdk {
>   ## Change default settings for all interfaces
>   dev default {
>   ## Number of receive queues, enables RSS
>   ## Default is 1
>   num-rx-queues 4
> 
> 
>   ## Number of transmit queues, Default is equal
>   ## to number of worker threads, or 1 if there are no worker threads
>  num-tx-queues 4
> 
> 
>   ## Number of descriptors in transmit and receive rings
>   ## increasing or reducing number can impact performance
>   ## Default is 1024 for both rx and tx
>   # num-rx-desc 512
>   # num-tx-desc 512
> 
> 
>   ## VLAN strip offload mode for interface
>   ## Default is off
>   # vlan-strip-offload on
> 
> 
>   ## TCP Segment Offload
>   ## Default is off
>   ## To enable TSO, 'enable-tcp-udp-checksum' must be set
>   # tso on
> 
> 
>   ## Devargs
>   ## device specific init args
>   ## Default is NULL
>   # devargs safe-mode-support=1,pipeline-mode-support=1
> 
>   #rss 3
>   ## rss-queues
>   ## set valid rss steering queues
>   # rss-queues 0,2,5-7
>   #rss-queues 0,1
>   }
> 
> 
>   ## Whitelist specific interface by specifying PCI address
>   # dev :02:00.0
> 
>   dev :00:14.0
>   dev :00:15.0
>   dev :00:10.0
>   dev :00:11.0
>   #vdev crypto_aesni_mb0,socket_id=1
>   #vdev crypto_aesni_mb1,socket_id=1
> 
>   ## Blacklist specific device type by specifying PCI vendor:device
>   ## Whitelist entries take precedence
>   # blacklist 8086:10fb
> 
> 
>   ## Set interface name
>   # dev :02:00.1 {
>   # name eth0
>   # }
> 
> 
>   ## Whitelist specific interface by specifying PCI address and in
>   ## addition specify custom parameters for this interface
>   # dev :02:00.1 {
>   # num-rx-queues 2
>   # }
> 
> 
>   ## Change UIO driver used by VPP, Options are: igb_uio, vfio-p