Re: [vpp-dev] create af_xdp interface with exception kernel info "Kernel error message: virtio_net: Too few free TX rings available"
[Edited Message Follows] Hi Benoit Ganne: I have upgraded the Linux kernel to version 5.17.3 and rerun the same test. The test now passes, with no kernel error message "libbpf: Kernel error message: virtio_net: Too few free TX rings available". A discussion on the bpf.vger.kernel.org mailing list also supports this guess. The link is here for reference: https://lore.kernel.org/bpf/20210314045946-mutt-send-email-...@kernel.org/

vhost135:~$ uname -a
Linux vhost135 5.17.3-051703-generic #202204131853 SMP PREEMPT Wed Apr 13 18:57:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

On Wed, Apr 13, 2022 at 03:48 PM, Benoit Ganne (bganne) wrote:
> Hmm, sounds like the virtio driver does not return an error to userspace
> when failing to create a txq?
> With af_xdp we try to create 1 txq per VPP worker, up to the number of
> available txq advertised by the kernel for the interface. If the txq
> creation failed, we just use the txq we got. Unless there is a bug in our
> logic (which is totally possible) to end up in this situation:
> 1) the kernel advertises X txq but that looks like a lie
> 2) we try to create Y txq <= X
> 3) the kernel returns success to us but that's a lie
>
> What you can try:
> 1) run VPP with a single thread (remove any 'cpu { ... }' section in your
> startup.conf): that will make af_xdp create a single txq
> 2) check whether all calls to
> src/plugins/af_xdp/device.c:af_xdp_create_queue() succeed prior to the
> crash: that would confirm whether the kernel is lying to us or if our
> logic is buggy
>
> Best
> ben
>
>> -----Original Message-----
>> From: vpp-dev@lists.fd.io On Behalf Of Smith Beirvin
>> Sent: Wednesday, April 13, 2022 5:48
>> To: vpp-dev@lists.fd.io
>> Subject: [vpp-dev] create af_xdp interface with exception kernel info
>> "Kernel error message: virtio_net: Too few free TX rings available"
>>
>> [Edited Message Follows]
>>
>> Hi VPP fellows:
>> When I create an af_xdp interface based on a KVM host network interface,
>> there is an exception kernel message "Kernel error message: virtio_net: Too few
>> free TX rings available". However, when I run the same test on a hardware
>> network card, it is OK. Please help me out with this issue.
>>
>> 1. VPP version: 21.10
>>
>> 2. KVM host Linux kernel version:
>> vhost135:~$ uname -a
>> Linux vhost135 5.4.0-050400rc1-generic #201909301433 SMP Mon Sep 30
>> 18:37:54 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>>
>> 3. libbpf version: 0.5.0
>>
>> 4. Exception info:
>> localhost vpp[1068]: libbpf: Kernel error message: virtio_net: Too few
>> free TX rings available
>>
>> 5.
coredump info:
>> Apr 13 11:04:18 localhost vnet[1068]: received signal SIGFPE, PC 0x7f0f1a50166f
>> Apr 13 11:04:18 localhost vnet[1068]: #0 0x7f0f5c7683f4 unix_signal_handler + 0x2a4
>> Apr 13 11:04:18 localhost vnet[1068]: #1 0x7f0f5bff88a0 0x7f0f5bff88a0
>> Apr 13 11:04:18 localhost vnet[1068]: #2 0x7f0f1a50166f af_xdp_device_class_tx_fn + 0x18f
>> Apr 13 11:04:18 localhost vnet[1068]: #3 0x7f0f5c6e3ab5 dispatch_node + 0x365
>> Apr 13 11:04:18 localhost vnet[1068]: #4 0x7f0f5c6e4387 dispatch_pending_node + 0x3c7
>> Apr 13 11:04:18 localhost vnet[1068]: #5 0x7f0f5c6de7e1 vlib_main_or_worker_loop + 0xc51
>> Apr 13 11:04:18 localhost vnet[1068]: #6 0x7f0f5c6e052a vlib_main_loop + 0x1a
>> Apr 13 11:04:18 localhost vnet[1068]: #7 0x7f0f5c6e02ff vlib_main + 0xacf
>> Apr 13 11:04:18 localhost vnet[1068]: #8 0x7f0f5c7671b5 thread0 + 0x35
>> Apr 13 11:04:18 localhost vnet[1068]: #9 0x7f0f5bb17a24 0x7f0f5bb17a24
>>
>> (gdb) bt
>> #0 0x7f200415c66f in af_xdp_device_class_tx_fn (vm=0x7f20465f9c00 , node=0x7f2006479700, frame=0x7f201db68180)
>>    at /home/wjyl/ncclwan-agent/ncclwan-agent-test/router/vpp/src/plugins/af_xdp/output.c:220
>> #1 0x7f204633eab5 in dispatch_node (vm=0x7f20465f9c00 , node=0x7f2006479700, type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7f201db68180, last_time_stamp=10107226988068)
>>    at /home/wjyl/ncclwan-agent/ncclwan-agent-test/router/vpp/src/vlib/main.c:1235
>> #2 0x7f204633f387 in dispatch_pending_node (vm=0x7f20465f9c00 , pending_frame_index=1, last_time_stamp=10107226988068)
>>    at /home/wjyl/ncclwan-agent/ncclwan-agent-test/router/vpp/src/vlib/ma
Re: [vpp-dev] create af_xdp interface with exception kernel info "Kernel error message: virtio_net: Too few free TX rings available"
Hi Benoit: Thanks for your reply. I have run the test under gdb to track the af_xdp_create_queue() function. Actually, af_xdp_create_queue() returns an error value. The function call flow is as below; I suspect that Linux kernel 5.4 does not support AF_XDP for the virtio network card.

af_xdp_create_if()
-->af_xdp_create_queue()
  -->xsk_socket__create()
    -->xsk_socket__create_shared()
      -->__xsk_setup_xdp_prog()
        -->bpf_set_link_xdp_fd() // kernel error message occurs here: libbpf: Kernel error message: virtio_net: Too few free TX rings available
Re: [vpp-dev] create af_xdp interface with exception kernel info "Kernel error message: virtio_net: Too few free TX rings available"
vppctl enable tap-inject
create interface af_xdp host-if ens9 num-rx-queues all
set int mac address ens9/0 52:54:00:ba:be:14
sudo ifconfig vpp0 inet 172.16.2.135 netmask 255.255.255.0 up hw ether 52:54:00:ba:be:14
delete interface af_xdp ens9/0

af_xdp_create_if()
-->af_xdp_create_queue()
  -->xsk_umem__create()
    -->xsk_umem__create_v0_0_4()
      -->umem->fd = socket(AF_XDP, SOCK_RAW, 0);
         xsk_set_umem_config(&umem->config, usr_config);
         setsockopt(umem->fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));
         xsk_create_umem_rings(umem, umem->fd, fill, comp);
         -->setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &umem->config.fill_size, sizeof(umem->config.fill_size));
            setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &umem->config.comp_size, sizeof(umem->config.comp_size));
            xsk_get_mmap_offsets(fd, &off);
            mmap(NULL, off.fr.desc + umem->config.fill_size * sizeof(__u64), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, XDP_UMEM_PGOFF_FILL_RING);
            mmap(NULL, off.cr.desc + umem->config.comp_size * sizeof(__u64), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, XDP_UMEM_PGOFF_COMPLETION_RING);
  -->xsk_socket__create()
    -->xsk_socket__create_shared()
      -->xsk_set_xdp_socket_config(&xsk->config, usr_config);
         xsk->fd = umem->fd;
         ctx = xsk_create_ctx(xsk, umem, ifindex, ifname, queue_id, fill, comp);
         setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING, &xsk->config.rx_size, sizeof(xsk->config.rx_size));
         setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING, &xsk->config.tx_size, sizeof(xsk->config.tx_size));
         xsk_get_mmap_offsets(xsk->fd, &off);
         mmap(NULL, off.rx.desc + xsk->config.rx_size * sizeof(struct xdp_desc), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, xsk->fd, XDP_PGOFF_RX_RING);
         mmap(NULL, off.tx.desc + xsk->config.tx_size * sizeof(struct xdp_desc), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, xsk->fd, XDP_PGOFF_TX_RING);
         bind(xsk->fd, (struct sockaddr *)&sxdp, sizeof(sxdp)); /* The second bind fails: whether with multiple queues, or after deleting and re-creating the port, the second bind fails with high probability */
-->ad->pool = vlib_buffer_pool_get_default_for_numa (vm, af_xdp_get_numa (ad->linux_ifname));
   ethernet_mac_address_generate (ad->hwaddr);
   ethernet_register_interface (vnm, af_xdp_device_class.index, ad->dev_instance, ad->hwaddr, >hw_if_index, af_xdp_flag_change);
   sw = vnet_get_hw_sw_interface (vnm, ad->hw_if_index);
   hw = vnet_get_hw_interface (vnm, ad->hw_if_index);
   vnet_hw_if_set_input_node (vnm, ad->hw_if_index, af_xdp_input_node.index);
   rxq->queue_index = vnet_hw_if_register_rx_queue (vnm, ad->hw_if_index, i, VNET_HW_IF_RXQ_THREAD_ANY);
   rxq->file_index = clib_file_add (_main, );
   vnet_hw_if_set_rx_queue_file_index (vnm, rxq->queue_index, rxq->file_index);

(gdb) bt
#0 0x7f6fe0fcf958 in xsk_create_bpf_maps (xsk=0xf1b040) at xsk.c:589
#1 0x7f6fe0fd01c1 in xsk_init_xdp_res (xsk=0xf1b040, xsks_map_fd=0x0) at xsk.c:800
#2 0x7f6fe0fd042e in __xsk_setup_xdp_prog (_xdp=0xf1b040, xsks_map_fd=0x0) at xsk.c:886
#3 0x7f6fe0fd108c in xsk_socket__create_shared (xsk_ptr=0x7f6fe421aa00, ifname=0x7f6fe421a930 "ens9", queue_id=0, umem=0xf88b20, rx=0x7f6fe421aa48, tx=0x7f6fe421ab50, fill=0x7f6fe421aa78, comp=0x7f6fe421ab80, usr_config=0x7f6ffac1ce40) at xsk.c:1170
#4 0x7f6fe0fd122e in xsk_socket__create (xsk_ptr=0x7f6fe421aa00, ifname=0x7f6fe421a930 "ens9", queue_id=0, umem=0xf88b20, rx=0x7f6fe421aa48, tx=0x7f6fe421ab50, usr_config=0x7f6ffac1ce40) at xsk.c:1206
#5 0x7f6fe121a237 in af_xdp_create_queue (vm=0x7f70236bec00 , args=0x7f6ffac1d460, ad=0x7f6fe421a880, qid=0) at /home/beirvin/workspace/flexirouter_new/vpp/src/plugins/af_xdp/device.c:254
#6 0x7f6fe121879d in af_xdp_create_if (vm=0x7f70236bec00 , args=0x7f6ffac1d460) at /home/beirvin/workspace/flexirouter_new/vpp/src/plugins/af_xdp/device.c:468
#7 0x7f6fe1215936 in af_xdp_create_command_fn (

err = bpf_set_link_xdp_fd(xsk->ctx->ifindex, ctx->prog_fd, xsk->config.xdp_flags);

af_xdp_create_if()
-->af_xdp_create_queue()
  -->xsk_socket__create()
    -->xsk_socket__create_shared()
      -->__xsk_setup_xdp_prog()
        -->bpf_set_link_xdp_fd() *// kernel error message occurs here: libbpf: Kernel error message: virtio_net: Too few free TX rings available*

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#21261): https://lists.fd.io/g/vpp-dev/message/21261 Mute This Topic: https://lists.fd.io/mt/90434107/21656 Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] create af_xdp interface with exception kernel info "Kernel error message: virtio_net: Too few free TX rings available"
On Wed, Apr 13, 2022 at 03:48 PM, Benoit Ganne (bganne) wrote: > > single -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#21260): https://lists.fd.io/g/vpp-dev/message/21260 Mute This Topic: https://lists.fd.io/mt/90434107/21656 Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[vpp-dev] #vpp-dev AF_XDP interface create and delete problem
Hi VPP fellows: I got problems when I tried to create and delete an AF_XDP interface. Please give me some suggestions to figure out the problem. The issue description is as below:

Step 1: create the af_xdp interface based on host if wlan0: create interface af_xdp host-if wlan0 num-rx-queues 1
Step 2: delete the af_xdp interface: delete interface af_xdp wlan0/0
Step 3: create the af_xdp interface based on host if wlan0 again: create interface af_xdp host-if wlan0 num-rx-queues 1

Here I get exception info as below:
DBGvpp# create interface af_xdp host-if wlan0 num-rx-queues 1
libbpf: can't get next link: Invalid argument
create interface af_xdp: xsk_socket__create() failed (is linux netdev wlan0 up?): Device or resource busy

If I try to create the af_xdp interface by using the command: create interface af_xdp host-if wlan0 num-rx-queues all

Step 1: create the af_xdp interface based on host if wlan0: create interface af_xdp host-if wlan0 num-rx-queues all
Step 2: delete the af_xdp interface: delete interface af_xdp wlan0/0
Step 3: create the af_xdp interface based on host if wlan0 again: create interface af_xdp host-if wlan0 num-rx-queues all

Here I find that af_xdp_create_queue() fails; as a result, there is no queue bound to the interface because the bind() function failed.

af_xdp_create_queue()
->xsk_socket__create()
->xsk_socket__create_shared()
->err = bind(xsk->fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
  if (err) {
    err = -errno;
    goto out_mmap_tx;
  }

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#21171): https://lists.fd.io/g/vpp-dev/message/21171 Mute This Topic: https://lists.fd.io/mt/90169229/21656 Mute #vpp-dev:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-dev Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] Coredump occur when duplicate a packet at "interface-output" arc. #vpp
On Mon, Mar 7, 2022 at 04:38 PM, Benoit Ganne (bganne) wrote: > > instead Hi Benoit Bro, I think you are right, thanks again. -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20970): https://lists.fd.io/g/vpp-dev/message/20970 Mute This Topic: https://lists.fd.io/mt/89604305/21656 Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] Coredump occur when duplicate a packet at "interface-output" arc. #vpp
On Mon, Mar 7, 2022 at 04:38 PM, Benoit Ganne (bganne) wrote: > > bi0 okay, I will check again, thanks -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20969): https://lists.fd.io/g/vpp-dev/message/20969 Mute This Topic: https://lists.fd.io/mt/89604305/21656 Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] Coredump occur when duplicate a packet at "interface-output" arc. #vpp
Hi Buddy, thanks for the reply. I have already taken into account the out-of-bounds issue of enqueuing two packets in one loop iteration (hence the check if (is_send_syn && (1 < n_left_to_next))). Actually, I couldn't find a logic issue in the source code. I will read the "l2-flood" node source code for reference. -=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20968): https://lists.fd.io/g/vpp-dev/message/20968 Mute This Topic: https://lists.fd.io/mt/89604305/21656 Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[vpp-dev] Coredump occur when duplicate a packet at "interface-output" arc. #vpp
Hi fellows: I got a problem when duplicating a packet at the "interface-output" arc: it triggers a coredump as below, while I can't find any issue in the source code. So please help me.

Coredump information:
Mar 7 02:54:24 localhost vnet[9542]: vlib_buffer_validate_alloc_free:367: freeing known-free buffer 0x9f947
Mar 7 02:54:24 localhost vnet[9542]: received signal SIGABRT, PC 0x7f2b4330ce87
Mar 7 02:54:24 localhost vnet[9542]: #0 0x7f2b44e5e914 unix_signal_handler + 0x2a4
Mar 7 02:54:24 localhost vnet[9542]: #1 0x7f2b446f1980 0x7f2b446f1980
Mar 7 02:54:24 localhost vnet[9542]: #2 0x7f2b4330ce87 gsignal + 0xc7
Mar 7 02:54:24 localhost vnet[9542]: #3 0x7f2b4330e7f1 abort + 0x141
Mar 7 02:54:24 localhost vnet[9542]: #4 0x00407263 0x407263
Mar 7 02:54:24 localhost vnet[9542]: #5 0x7f2b441ed7f9 debugger + 0x9
Mar 7 02:54:24 localhost vnet[9542]: #6 0x7f2b441ed577 _clib_error + 0x3b7
Mar 7 02:54:24 localhost vnet[9542]: #7 0x7f2b44d6b4a7 vlib_buffer_validate_alloc_free + 0x117
Mar 7 02:54:24 localhost vnet[9542]: #8 0x7f2afa231267 vlib_buffer_free_inline.constprop.9 + 0x1167
Mar 7 02:54:24 localhost vnet[9542]: #9 0x7f2afa2328bb tap_inject_tx + 0x64b
Mar 7 02:54:24 localhost vnet[9542]: #10 0x7f2b44dd9fd5 dispatch_node + 0x365
Mar 7 02:54:24 localhost vnet[9542]: #11 0x7f2b44dda8a7 dispatch_pending_node + 0x3c7
Mar 7 02:54:24 localhost vnet[9542]: #12 0x7f2b44dd4d01 vlib_main_or_worker_loop + 0xc51
Mar 7 02:54:24 localhost vnet[9542]: #13 0x7f2b44dd6a4a vlib_main_loop + 0x1a
Mar 7 02:54:24 localhost vnet[9542]: #14 0x7f2b44dd681f vlib_main + 0xacf

Source Code:

// decode
typedef enum
{
  UDP_TCP_FAKE_OUT_NEXT_DROP,
  UDP_TCP_FAKE_OUT_NEXT_INT_TX,
  UDP_TCP_FAKE_OUT_N_NEXT,
} UDP_TCP_FAKE_OUT_NEXT_E;

always_inline uword
tcp_fake_out_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
                     vlib_frame_t * frame, u8 is_ip4)
{
  u32 n_left_from, *from, *to_next, next_index, matches;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;
  next_index = node->cached_next_index;
  matches = 0;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);
      while (n_left_from > 0 && n_left_to_next > 0 && n_left_to_next <= 256)
        {
          u32 next0;
          vlib_buffer_t *b0;
          u32 bi0;

          bi0 = from[0];
          b0 = vlib_get_buffer (vm, bi0);
          vnet_feature_next (&next0, b0);

          if (is_ip4)
            {
              u8 is_send_syn = 1;
              if (is_send_syn && (1 < n_left_to_next))
                {
                  u32 syn_bi = 0;
                  vlib_buffer_t *syn_b;
                  // syn_b = vlib_buffer_copy3(vm, b0, _bi);
                  syn_b = vlib_buffer_copy (vm, b0);
                  // send syn ack
                  if (syn_b)
                    {
                      syn_bi = vlib_get_buffer_index (vm, syn_b);
                      to_next[0] = syn_bi;
                      to_next += 1;
                      n_left_to_next -= 1;
                      vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                                       to_next, n_left_to_next,
                                                       bi0, next0);
                      printf ("%s %d %s %s: tcp_fake track.\r\n",
                              __FUNCTION__, __LINE__, __DATE__, __TIME__);
                    }
                  else
                    {
                      printf ("beirvin note %s %d: syn_b is NULL;\r\n",
                              __FUNCTION__, __LINE__);
                    }
                }
            }

          to_next[0] = bi0;
          from += 1;
          to_next += 1;
          n_left_from -= 1;
          n_left_to_next -= 1;
          vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
                                           n_left_to_next, bi0, next0);
        }
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }
  // vlib_node_increment_counter (vm, fast-vxlan-output.index, FWABF_ERROR_MATCHED, matches);
  return frame->n_vectors;
}

always_inline uword
tcp_fake_out_ipv4 (vlib_main_t * vm, vlib_node_runtime_t * node,
                   vlib_frame_t * frame)
{
  return tcp_fake_out_inline (vm, node, frame, 1);
}

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (tcp_fake_out_node) = {
  .function = tcp_fake_out_ipv4,
  .name = "udp-tcp-fake-out",
  /* Takes a vector of packets. */
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_next_nodes = UDP_TCP_FAKE_OUT_N_NEXT,
  .next_nodes = {
    [UDP_TCP_FAKE_OUT_NEXT_DROP] = "error-drop",
    [UDP_TCP_FAKE_OUT_NEXT_INT_TX] = "interface-tx",
  },
  //.format_buffer = format_tcp_fake_header,
  //.format_trace = format_tcp_fake_trace,
};

VNET_FEATURE_INIT (tcp_fake_out_feat, static) = {
  .arc_name = "interface-output",
  .node_name = "udp-tcp-fake-out",
  .runs_before = VNET_FEATURES ("interface-tx"),
};

static_always_inline clib_error_t *
tcp_fake_init (vlib_main_t * vm)
{
  tcp_fake_main_t *ufm = vnet_get_tcp_fake_main ();
  ufm->enable = 0;
  ufm->vm = vm;
  ufm->enable_interfaces = NULL;
  return 0;
}

VLIB_INIT_FUNCTION (tcp_fake_init);

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20958): https://lists.fd.io/g/vpp-dev/message/20958 Mute This Topic: https://lists.fd.io/mt/89604305/21656 Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[vpp-dev] [Bug or Not] QoS l2-policer-classify match question #vpp_qos
Hi VPP fellows: Recently I tried to do QoS rate limiting based on a loop port (used for ESP/VxLAN encapsulation). But I found that the "l2-policer-classify" node would not hit the packet; by contrast, an ACL match works fine. So I tracked the source code of the "l2-policer-classify" node function "policer_classify_inline": it uses "h0 = b0->data;" for its match action, however I think it should use "h0 = (u8 *) vlib_buffer_get_current (b0);" instead. By the way, the "l2-input-acl" node function "l2_in_out_acl_node_fn" also uses "h0 = (u8 *) vlib_buffer_get_current (b0);" for the corresponding match action. So I modified the "policer_classify_inline" source code as below, and then it works fine. *So I wonder: is this a bug, or did I not configure VPP QoS with the correct steps? I hope to get a reply from the VPP fellows.*

1. QoS limit rate to 800 kbps configuration:
configure policer name policy1 cir 800 cb 9 rate kbps round closest type 1r2c conform-action transmit exceed-action drop
classify table mask l3 ip4 src proto
classify session policer-hit-next policy1 exceed-color table-index 0 match l3 ip4 src 10.100.0.176 proto 50
set policer classify interface loop48 l2-table 0

2. ACL deny configuration:
classify table mask l3 ip4 src proto
classify session acl-hit-next deny table-index 1 match l3 ip4 src 10.100.0.176 proto 50
set int input acl intfc loop48 l2-table 1

3. show interface features loop48
l2-input:
POLICER_CLAS (l2-policer-classify)
ACL (l2-input-acl)
FWD (l2-fwd)
UU_FLOOD (l2-flood)
ARP_TERM (arp-term-l2bd)
FLOOD (l2-flood)

4.
"policer_classify_inline" source code modification:

diff --git a/src/vnet/policer/node_funcs.c b/src/vnet/policer/node_funcs.c
index fd7f197e9..bdbb17087 100644
--- a/src/vnet/policer/node_funcs.c
+++ b/src/vnet/policer/node_funcs.c
@@ -559,11 +559,17 @@ policer_classify_inline (vlib_main_t * vm,
   bi0 = from[0];
   b0 = vlib_get_buffer (vm, bi0);
-  h0 = b0->data;

   bi1 = from[1];
   b1 = vlib_get_buffer (vm, bi1);
+
+#if 0 /* modified by liuman for policer match */
+  h0 = b0->data;
   h1 = b1->data;
+#else
+  h0 = (u8 *) vlib_buffer_get_current (b0);
+  h1 = (u8 *) vlib_buffer_get_current (b1);
+#endif

   sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
   table_index0 =
@@ -606,7 +612,11 @@ policer_classify_inline (vlib_main_t * vm,
   bi0 = from[0];
   b0 = vlib_get_buffer (vm, bi0);
+#if 0 /* modified by liuman for policer match */
   h0 = b0->data;
+#else
+  h0 = (u8 *) vlib_buffer_get_current (b0);
+#endif

   sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
   table_index0 =
@@ -673,7 +683,11 @@ policer_classify_inline (vlib_main_t * vm,
   n_left_to_next -= 1;

   b0 = vlib_get_buffer (vm, bi0);
-  h0 = b0->data;
+#if 0 /* modified by liuman for policer match */
+  h0 = b0->data;
+#else
+  h0 = (u8 *) vlib_buffer_get_current (b0);
+#endif

   table_index0 = vnet_buffer (b0)->l2_classify.table_index;
   e0 = 0;
   t0 = 0;

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20511): https://lists.fd.io/g/vpp-dev/message/20511 Mute This Topic: https://lists.fd.io/mt/87137335/21656 Mute #vpp_qos:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp_qos Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] how to configure vpp acl to match VxLAN field
Hi guys: I figured out that the following commands can match the VxLAN field.

configure policer name policy1 cir 800 cb 9 rate kbps round closest type 1r2c conform-action transmit exceed-action drop
classify table mask hex 00ffff00
classify session policer-hit-next policy1 exceed-color table-index 0 match hex 081112b5d600
set policer classify interface GigabitEthernet5/0/0 ip4-table 0
show classify tables verbose

-=-=-=-=-=-=-=-=-=-=-=- Links: You receive all messages sent to this group. View/Reply Online (#20399): https://lists.fd.io/g/vpp-dev/message/20399 Mute This Topic: https://lists.fd.io/mt/86509296/21656 Group Owner: vpp-dev+ow...@lists.fd.io Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-