Re: [vpp-dev] vpp+dpdk #dpdk

2022-12-19 Thread zheng jie
I have never seen two net devices, even SR-IOV devices, share the same PCI address. Could you dump your devices via lspci or /sys/…? PCI bus addresses are always unique.

Personally, I think the PCI addresses in your screenshot are inaccurate.
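
For example (the interface name below is just a placeholder, substitute your own), the PCI address behind each net device can be checked with standard Linux tools:

```
# list Ethernet-class PCI devices with their full domain:bus:device.function addresses
lspci -D | grep -i ethernet

# show which PCI address a given netdev is bound to (placeholder interface name)
readlink /sys/class/net/enp6s0f0/device
```

Two different netdevs should resolve to two different PCI addresses here.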


From:  on behalf of "first_se...@163.com" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Monday, December 12, 2022 at 11:16 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vpp+dpdk #dpdk

I have an issue where two devices show the same bus info but different device names, as in the picture below. What should I do to bind the two devices, named enp6s0f01d and enp6s0f02d? Thanks.
[screenshot attachment: image001.png]




[vpp-dev] vlib buffer allocate/free consumes too many CPU cycles

2022-08-23 Thread zheng jie
hi team,

Recently, while doing plugin development with VPP, I found that all worker threads spend too many CPU cycles locking/unlocking buffer_main's buffer_known_hash_lockp (with one NUMA node), and it hinders burst performance.

```
+6.00%  vpp_wk_4   libvlib.so.22.02.0    [.] clib_spinlock_lock
+5.33%  vpp_wk_7   libvlib.so.22.02.0    [.] clib_spinlock_lock
+5.29%  vpp_wk_9   libvlib.so.22.02.0    [.] clib_spinlock_lock
+5.18%  vpp_wk_2   libvlib.so.22.02.0    [.] clib_spinlock_lock
+5.05%  vpp_wk_0   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.95%  vpp_wk_10  libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.92%  vpp_wk_6   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.90%  vpp_wk_3   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.63%  vpp_wk_8   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.41%  vpp_wk_5   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.33%  vpp_wk_1   libvlib.so.22.02.0    [.] clib_spinlock_lock
+4.24%  vpp_wk_11  libvlib.so.22.02.0    [.] clib_spinlock_lock
+2.27%  vpp_main   libvlib.so.22.02.0    [.] dispatch_node
+0.89%  vpp_main   libvlib.so.22.02.0    [.] vlib_main_or_worker_loop
+0.85%  vpp_main   libvlib.so.22.02.0    [.] vlib_get_node
+0.83%  vpp_wk_11  libvlib.so.22.02.0    [.] vlib_frame_free
+0.78%  vpp_wk_1   libvlib.so.22.02.0    [.] vlib_frame_free
+0.65%  vpp_wk_5   libvlib.so.22.02.0    [.] vlib_frame_free
+0.51%  vpp_main   vcgnat_dp_plugin.so   [.] node_channel_tx_process
```

the calling graph:

```
-5.33%  vpp_wk_7   libvlib.so.22.02.0    [.] clib_spinlock_lock
   - clib_spinlock_lock
      - 4.89% vlib_buffer_is_known
         - vlib_buffer_validate_alloc_free
            + 1.94% vlib_buffer_alloc_from_pool
            + 1.45% vlib_buffer_pool_put
            + 1.42% vlib_buffer_alloc_from_pool
-5.29%  vpp_wk_9   libvlib.so.22.02.0    [.] clib_spinlock_lock
   - clib_spinlock_lock
      - 4.89% vlib_buffer_is_known
         - vlib_buffer_validate_alloc_free
            + 2.14% vlib_buffer_alloc_from_pool
            + 1.40% vlib_buffer_alloc_from_pool
            + 1.28% vlib_buffer_pool_put
+5.18%  vpp_wk_2   libvlib.so.22.02.0    [.] clib_spinlock_lock
-5.05%  vpp_wk_0   libvlib.so.22.02.0    [.] clib_spinlock_lock
   - clib_spinlock_lock
      - 4.57% vlib_buffer_is_known
         - vlib_buffer_validate_alloc_free
            + 2.25% vlib_buffer_alloc_from_pool
            + 1.20% vlib_buffer_pool_put
            + 1.03% vlib_buffer_alloc_from_pool
```

thanks,
Looking forward to your help.

