Hi Benoit Ganne,
I have upgraded the Linux kernel to 5.17.3 and rerun the same test. The
result is OK: the kernel error message "libbpf: Kernel error message:
virtio_net: Too few free TX rings available" no longer occurs.
A discussion on the bpf.vger.kernel.org mailing list also supports this
guess. Link for reference:
https://lore.kernel.org/bpf/20210314045946-mutt-send-email-...@kernel.org/

vhost135:~$ uname -a
Linux vhost135 5.17.3-051703-generic #202204131853 SMP PREEMPT Wed Apr 13 
18:57:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
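
This matches the kernel-side behaviour discussed in that thread: before the
"virtio-net: support XDP when not more queues" change (merged around v5.13),
virtio_net refused to attach an XDP program unless it could reserve one
dedicated TX queue per CPU, whereas newer kernels fall back to sharing the
regular TX queues. A rough sketch of the old check in
drivers/net/virtio_net.c (paraphrased, not verbatim; exact code differs by
kernel version):

    /* virtnet_xdp_set(), paraphrased pre-v5.13 logic */
    curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
    if (prog)
            xdp_qp = nr_cpu_ids;    /* one XDP_TX queue per CPU */
    if (curr_qp + xdp_qp > vi->max_queue_pairs) {
            /* this extack string is what libbpf relays to VPP */
            NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
            return -ENOMEM;
    }

So on 5.4 the attach fails outright unless the virtio device exposes at
least (current + nr_cpu_ids) queue pairs, which a typical KVM guest NIC
does not.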

On Wed, Apr 13, 2022 at 03:48 PM, Benoit Ganne (bganne) wrote:

> 
> Hmm, sounds like the virtio driver does not return an error to userspace
> when failing to create a txq?
> With af_xdp we try to create 1 txq per VPP worker, up to the number of
> txq advertised by the kernel for the interface. If txq creation fails,
> we just use the txq we got (sketched after the list below). Unless there
> is a bug in our logic (which is totally possible 😊), ending up in this
> situation would mean:
> 1) the kernel advertises X txq, but that looks like a lie
> 2) we try to create Y txq <= X
> 3) the kernel returns success to us but that's a lie
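
A minimal sketch of that allocation logic (hypothetical helper and field
names, not the actual code; the real logic lives in
src/plugins/af_xdp/device.c):

    /* sketch only: create up to one txq per worker, capped by what the
     * kernel advertises, and keep whatever subset actually succeeded;
     * create_one_txq() is a hypothetical stand-in for the real call */
    static int
    create_txqs (af_xdp_device_t *ad, int n_workers, int n_advertised)
    {
      int n_wanted = n_workers < n_advertised ? n_workers : n_advertised;
      int n_created = 0;
      while (n_created < n_wanted && create_one_txq (ad, n_created) == 0)
        n_created++;
      ad->txq_num = n_created; /* the tx path later indexes queues by this */
      return n_created > 0 ? 0 : -1; /* zero txq should be a hard failure */
    }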
> 
> What you can try:
> 1) run VPP with a single thread (remove any 'cpu { ... }' section in your
> startup.conf): that will make af_xdp create a single txq
> 2) check whether all calls to
> src/plugins/af_xdp/device.c:af_xdp_create_queue() succeed prior to the
> crash: that would confirm whether the kernel is lying to us or whether
> our logic is buggy (see the snippet after this list)
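
Each af_xdp_create_queue() call comes down to libbpf's xsk_socket__create(),
so the per-queue return code is where a kernel refusal should surface.
Roughly (simplified, with hypothetical variable names):

    /* a non-zero return here is the kernel refusing the queue, e.g.
     * -ENOMEM when virtio_net reports too few free TX rings */
    int ret = xsk_socket__create (&xsk, ifname, qid, umem,
                                  &rx_ring, &tx_ring, &cfg);
    if (ret)
      return ret; /* must propagate so interface creation fails loudly */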
> 
> Best
> ben
> 
> 
>> -----Original Message-----
>> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Smith Beirvin
>> 
>> Sent: Wednesday, April 13, 2022 5:48
>> To: vpp-dev@lists.fd.io
>> Subject: [vpp-dev] create af_xdp interface with exception kernel info
>> "Kernel error message: virtio_net: Too few free TX rings available"
>> 
>> 
>> Hi VPP fellows,
>> When I create an af_xdp interface based on a KVM host network interface,
>> I get the kernel error "Kernel error message: virtio_net: Too few free
>> TX rings available". However, when I run the same test on a hardware
>> NIC, it works fine. Please help me out with this issue.
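
For reference, the interface is created via the VPP debug CLI along these
lines (illustrative only; the host interface name ens3 is an assumption,
and exact options may differ in 21.10):

    vpp# create interface af_xdp host-if ens3 name xdp0
    vpp# set interface state xdp0 up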
>> 
>> 1. VPP version: 21.10
>> 
>> 2. KVM host Linux kernel version:
>> vhost135:~$ uname -a
>> Linux vhost135 5.4.0-050400rc1-generic #201909301433 SMP Mon Sep 30
>> 18:37:54 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>> 
>> 3. Libbpf version: 0.5.0
>> 
>> 4. Exception info:
>> localhost vpp[1068]: libbpf: Kernel error message: virtio_net: Too few
>> free TX rings available
>> 
>> 5. Coredump info:
>> Apr 13 11:04:18 localhost vnet[1068]: received signal SIGFPE, PC
>> 0x7f0f1a50166f
>> Apr 13 11:04:18 localhost vnet[1068]: #0 0x00007f0f5c7683f4
>> unix_signal_handler + 0x2a4
>> Apr 13 11:04:18 localhost vnet[1068]: #1 0x00007f0f5bff88a0
>> 0x7f0f5bff88a0
>> Apr 13 11:04:18 localhost vnet[1068]: #2 0x00007f0f1a50166f
>> af_xdp_device_class_tx_fn + 0x18f
>> Apr 13 11:04:18 localhost vnet[1068]: #3 0x00007f0f5c6e3ab5 dispatch_node
>> + 0x365
>> Apr 13 11:04:18 localhost vnet[1068]: #4 0x00007f0f5c6e4387
>> dispatch_pending_node + 0x3c7
>> Apr 13 11:04:18 localhost vnet[1068]: #5 0x00007f0f5c6de7e1
>> vlib_main_or_worker_loop + 0xc51
>> Apr 13 11:04:18 localhost vnet[1068]: #6 0x00007f0f5c6e052a
>> vlib_main_loop + 0x1a
>> Apr 13 11:04:18 localhost vnet[1068]: #7 0x00007f0f5c6e02ff vlib_main +
>> 0xacf
>> Apr 13 11:04:18 localhost vnet[1068]: #8 0x00007f0f5c7671b5 thread0 +
>> 0x35
>> Apr 13 11:04:18 localhost vnet[1068]: #9 0x00007f0f5bb17a24
>> 0x7f0f5bb17a24
>> 
>> (gdb) bt
>> #0 0x00007f200415c66f in af_xdp_device_class_tx_fn (
>> vm=0x7f20465f9c00 , node=0x7f2006479700,
>> frame=0x7f201db68180)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/plugins/af_xdp/output.c:220
>> #1 0x00007f204633eab5 in dispatch_node (vm=0x7f20465f9c00 ,
>> node=0x7f2006479700, type=VLIB_NODE_TYPE_INTERNAL,
>> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f201db68180,
>> last_time_stamp=10107226988068)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/main.c:1235
>> #2 0x00007f204633f387 in dispatch_pending_node (
>> vm=0x7f20465f9c00 , pending_frame_index=1,
>> last_time_stamp=10107226988068)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/main.c:1403
>> #3 0x00007f20463397e1 in vlib_main_or_worker_loop (
>> vm=0x7f20465f9c00 , is_main=1)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/main.c:1862
>> #4 0x00007f204633b52a in vlib_main_loop (vm=0x7f20465f9c00 )
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/main.c:1990
>> #5 0x00007f204633b2ff in vlib_main (vm=0x7f20465f9c00 ,
>> input=0x7f2005e60fa8)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/main.c:2236
>> #6 0x00007f20463c21b5 in thread0 (arg=139776596352000)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/unix/main.c:658
>> #7 0x00007f2045772a24 in clib_calljmp ()
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vppinfra/longjmp.S:123
>> #8 0x00007ffeb586ff70 in ?? ()
>> #9 0x00007f20463c1d47 in vlib_unix_main (argc=54, argv=0x179f2b0)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vlib/unix/main.c:730
>> #10 0x000000000040693f in main (argc=54, argv=0x179f2b0)
>> at /home/wjyl/ncclwan-agent/ncclwan-agent-
>> test/router/vpp/src/vpp/vnet/main.c:300
> 
>
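
The SIGFPE at output.c:220 fits this picture: if interface creation ended
up with zero usable TX queues, the TX-queue selection in
af_xdp_device_class_tx_fn would perform an integer modulo by zero.
Illustrative sketch only (hypothetical names, not the verbatim VPP source):

    /* if ad->txq_num ended up 0 because no txq was created, this
     * integer modulo raises SIGFPE */
    u32 qid = thread_index % ad->txq_num;

That again points at the kernel refusing the queues without the failure
being treated as fatal at interface-creation time.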