Re: [vpp-dev] Vhost-user interface not working

2023-03-02 Thread steven luong via lists.fd.io
It is likely that you are missing memAccess=’shared’. See https://fdio-vpp.readthedocs.io/en/latest/usecases/vhost/xmlexample.html#:~:text=%3Ccell%20id%3D%270%27%20cpus%3D%270%27%20memory%3D%27262144%27%20unit%3D%27KiB%27%20memAccess%3D%27shared%27/%3E From: on behalf of Benjamin Vandendriessche
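For reference, the linked example places memAccess='shared' on the guest NUMA cell; a minimal fragment of the libvirt domain XML (memory size is illustrative) looks like:
  <cpu>
    <numa>
      <cell id='0' cpus='0' memory='262144' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>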

Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread steven luong via lists.fd.io
I bet you didn’t limit the number of API trace entries. Try limiting the number of API trace entries that VPP keeps with nitems and give it a reasonable number: api-trace { on nitems 65535 } Steven From: on behalf of
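Spelled out in startup.conf, that stanza would look something like this (65535 is just an example value):
  api-trace {
    on
    nitems 65535
  }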

Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread steven luong via lists.fd.io
Sunil is using the dpdk vmxnet3 driver, so he doesn’t need to load the VPP native vmxnet3 plugin. Use gdb on the dpdk code to see why it returns -22 when VPP adds the NIC to dpdk: rte_eth_dev_start[port:1, errno:-22]: Unknown error -22 Steven From: on behalf of Guangming Reply-To: "vpp-dev@lists.fd.io"

Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-04 Thread steven luong via lists.fd.io
Did you try vppctl show log? Steven From: on behalf of "Tripathi, VinayX" Reply-To: "vpp-dev@lists.fd.io" Date: Saturday, February 4, 2023 at 4:19 AM To: "vpp-dev@lists.fd.io" Cc: "Ji, Kai" Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message Hi Team , Any suggestion

Re: [vpp-dev] LACP issues w/ cdma/connectX 6

2022-12-05 Thread steven luong via lists.fd.io
Type show lacp details to see if the member interface that is not forming the bundle receives and sends LACP PDUs. Type show hardware to see if both member interfaces have the same mac address. From: on behalf of Eyle Brinkhuis Reply-To: "vpp-dev@lists.fd.io" Date: Monday,

Re: [vpp-dev] LACP bonding not working with RDMA driver

2022-11-15 Thread steven luong via lists.fd.io
In addition, do:
1. show hardware: the bond, eth1/0, and eth2/0 should have the same mac address.
2. show lacp details. Check these statistics for the interface that is not forming the bond: Good LACP PDUs received: 13, Bad LACP PDUs received: 0, LACP PDUs sent: 14, last LACP PDU

Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-24 Thread steven luong via lists.fd.io
Use “virsh dumpxml” to check the output and confirm that you have memAccess='shared' as below. Steven From: on behalf of suresh vuppala Reply-To: "vpp-dev@lists.fd.io" Date: Friday, October 21, 2022 at 5:23 PM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] #vpp-dev No packets generated from
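A quick way to check is to dump the domain XML and grep for the attribute (the domain name vm1 is hypothetical):
  virsh dumpxml vm1 | grep memAccess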

Re: [vpp-dev] #vpp-dev No packets generated from Vhost user interface

2022-10-21 Thread steven luong via lists.fd.io
Your Qemu command to launch the VM is likely missing the hugepage or share option.
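A commonly used qemu fragment for hugepage-backed, shared guest memory (size and path are illustrative) is along these lines:
  -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc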

Re: [vpp-dev] VPP crashing if we configure srv6 policy with five sids in the sidlist

2022-08-05 Thread steven luong via lists.fd.io
Can you provide the topology, configurations, and steps to recreate this crash? Steven From: on behalf of Chinmaya Aggarwal Reply-To: "vpp-dev@lists.fd.io" Date: Wednesday, July 13, 2022 at 4:07 AM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] VPP crashing if we configure srv6 policy

Re: [vpp-dev] LACP bond interface not working

2022-08-04 Thread steven luong via lists.fd.io
Please check that the interfaces can ping each other prior to adding them to the bond. Type “show lacp details” to verify that VPP receives LACP PDUs from each side and to check the state machine. Steven From: on behalf of Chinmaya Aggarwal Reply-To: "vpp-dev@lists.fd.io" Date: Tuesday, June

Re: [vpp-dev] Memory region shows empty for vhost interface

2022-08-04 Thread steven luong via lists.fd.io
It is related to memoryBacking, missing hugepages, or missing shared option. What does your qemu launch command look like? Steven From: on behalf of Chinmaya Aggarwal Reply-To: "vpp-dev@lists.fd.io" Date: Thursday, July 14, 2022 at 3:31 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev]

Re: [vpp-dev] VPP crashes when lcp host interface is added in network bridge

2022-08-04 Thread steven luong via lists.fd.io
Please try a debug image and provide a sane backtrace. Steven From: on behalf of Chinmaya Aggarwal Reply-To: "vpp-dev@lists.fd.io" Date: Thursday, July 21, 2022 at 4:42 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] VPP crashes when lcp host interface is added in network bridge Hi, As

Re: [vpp-dev] Bridge-domain function and usage.

2022-08-01 Thread steven luong via lists.fd.io
Pragya, UU-Flood stands for Unknown Unicast Flooding. It does not flood multicast or broadcast packets. You need “Flooding” on to flood multicast/broadcast packets. Steven From: on behalf of Pragya Nand Bhagat Reply-To: "vpp-dev@lists.fd.io" Date: Monday, August 1, 2022 at 2:59 AM To:

[vpp-dev] Please include Fixes: tag for regression fix

2021-11-02 Thread steven luong via lists.fd.io
Folks, In case you don’t already know, there is a tag called Fixes in the commit message which allows one to specify if the current patch fixes a regression. See an example usage in https://gerrit.fd.io/r/c/vpp/+/34212 When you commit a patch which fixes a known regression, please make use of
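An illustrative commit message carrying the tag (component, subject, and SHA are made up):
  bonding: fix traffic drop on member link flap
  Type: fix
  Fixes: 1234567890abcdef1234567890abcdef12345678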

Re: [vpp-dev] DPDK PMD vs native VPP bonding driver

2021-09-16 Thread steven luong via lists.fd.io
Srikanth, You are correct that dpdk bonding has been deprecated for a while; I don’t remember exactly when. The performance of VPP native bonding when compared to dpdk bonding is about the same. With VPP native bonding, you have an additional option to configure LACP which was not supported

Re: [vpp-dev] fail_over_mac=1 (Active) Bonding

2021-09-15 Thread steven luong via lists.fd.io
Chetan, I have a patch in gerrit from a long time ago and I just rebased it to the latest master: https://gerrit.fd.io/r/c/vpp/+/30866 Please feel free to test it thoroughly and let me know if you encounter any problems. Steven From: on behalf of chetan bhasin Date: Tuesday, September 14,

Re: [vpp-dev] vnet bonding crashes - need some suggestions to narrow down

2021-05-22 Thread steven luong via lists.fd.io
I set up the same bonding with dot1q and subinterface configuration as given, but using a tap interface to connect to Linux instead. It works just fine. I believe the crash was due to using a custom plugin which is cloned from the VPP DPDK plugin to handle the Octeon-tx2 SoC. When bonding gets the

Re: [vpp-dev] observing issue with LACP port selection logic

2021-05-12 Thread steven luong via lists.fd.io
Sudhir, It is an erroneous topology/configuration that we don’t currently handle. Please try this and report back: https://gerrit.fd.io/r/c/vpp/+/32292 The behavior is that container-1 will form one bonding group with container-2, with either BondEthernet0 or BondEthernet1. Steven From: on behalf

Re: [vpp-dev] lawful intercept

2021-04-27 Thread steven luong via lists.fd.io
Your commit subject line is missing a component name. The commit comment is missing “Type:”. Steven From: on behalf of "hemant via lists.fd.io" Reply-To: "hem...@mnkcg.com" Date: Tuesday, April 27, 2021 at 12:56 PM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] lawful intercept Newer

Re: [vpp-dev] LACP Troubleshooting

2021-02-16 Thread steven luong via lists.fd.io
VPP implements both active and passive modes. The default operation mode is active. The current setting for the port, active/passive, can be inferred from the output of show lacp. In the active state column, I see act=1 for all 4 ports. The output of the show bond command looks like VPP is

Re: [vpp-dev] VPP Packet Generator and Packet Tracer

2021-01-06 Thread steven luong via lists.fd.io
“make build” from the top of the workspace will generate the debug image for you to run gdb. Steven From: on behalf of Yaser Azfar Date: Wednesday, January 6, 2021 at 1:21 PM To: "Benoit Ganne (bganne)" Cc: "fdio+vpp-...@groups.io" Subject: Re: [vpp-dev] VPP Packet Generator and Packet

Re: [vpp-dev] Blackholed packets after forwarding interface output

2020-12-20 Thread steven luong via lists.fd.io
Additionally, please figure out why carrier is down. It needs to be up. Intel 82599 carrier down Steven From: on behalf of Dave Barach Date: Sunday, December 20, 2020 at 4:58 AM To: 'Merve' , "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] Blackholed packets after forwarding interface

Re: [vpp-dev] Multicast packets sent via memif when rule says to forward through another interface

2020-12-17 Thread steven luong via lists.fd.io
show interface displays the interface’s admin state. show hardware displays the interface’s operational link state. The link down is likely caused by a memif configuration error. Please check your configuration on both sides to make sure they match. Some tips to debug: show memif, set logging class
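The debug commands referenced above, spelled out (assuming memif is the logging class name):
  show memif
  set logging class memif level debug
  show log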

Re: [vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread steven luong via lists.fd.io
Can you check the output of show hardware? I suspect the link is down for the corresponding memif interface. Steven From: on behalf of "tahir.a.sangli...@gmail.com" Date: Friday, December 11, 2020 at 1:14 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] #vpp #vpp-memif #vppcom in our

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Right, it should not crash. With the patch, the VM just refuses to come up unless we raise the queue support. Steven On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)" wrote: > This argument in your qemu command line, > queues=16, > is over our current limit. We support up to 8. I can

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, This argument in your qemu command line, queues=16, is over our current limit. We support up to 8. I can submit an improvement patch. But I think it will be master only. Steven From: Eyle Brinkhuis Date: Wednesday, December 9, 2020 at 9:24 AM To: "Steven Luong (sluong)" Cc: "Benoit

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread steven luong via lists.fd.io
Eyle, Can you also show me the qemu command line to bring up the VM? I think it is asking for more than 16 queues. VPP supports up to 16. Steven On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) via lists.fd.io" wrote: Hi Eyle, could you share the associated

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-06 Thread steven luong via lists.fd.io
When you create the bond interface using either lacp or xor mode, there is an option to specify load-balance l2, l23, or l34 which is equivalent to linux xmit_hash_policy. Steven From: on behalf of "ashish.sax...@hsc.com" Date: Sunday, December 6, 2020 at 3:24 AM To: "vpp-dev@lists.fd.io"
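For example (interface names are illustrative):
  create bond mode lacp load-balance l34
  bond add BondEthernet0 GigabitEthernet3/0/0
  bond add BondEthernet0 GigabitEthernet3/0/1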

Re: [vpp-dev] multicast traffic getting duplicated in lacp bond mode

2020-12-02 Thread steven luong via lists.fd.io
Bonding doesn’t care whether the traffic is unicast or multicast. It just hashes the packet header and selects one of the members as the outgoing interface. The only bonding mode in which it replicates packets across all members is broadcast, which you

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread steven luong via lists.fd.io
Please use gdb to provide a meaningful backtrace. Steven From: on behalf of Eyle Brinkhuis Date: Wednesday, December 2, 2020 at 5:59 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface Hi all, In our environment (vpp 20.05.1, ubuntu 18.04.5,

Re: [vpp-dev] unformat fails processing > 3 variables

2020-11-27 Thread steven luong via lists.fd.io
You have 17 format tags, but you pass 18 arguments to the unformat function. Is that intentional? Steven From: on behalf of "hemant via lists.fd.io" Reply-To: "hem...@mnkcg.com" Date: Friday, November 27, 2020 at 3:52 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] unformat fails

Re: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP

2020-07-20 Thread steven luong via lists.fd.io
It is not supported. From: on behalf of Venkatarao M Date: Monday, July 20, 2020 at 8:35 AM To: "vpp-dev@lists.fd.io" Cc: praveenkumar A S , Lokesh Chimbili , Mahesh Sivapuram Subject: [vpp-dev] How to do Bond interface configuration as fail_over_mac=active in VPP Hi all, We are trying

Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
ists.fd.io>> wrote: Hi Steven, Please note that per this proposition, https://lkml.org/lkml/2020/7/4/229, slave must be avoided but master can be kept. Maybe master/member or master/secondary could be options too. Jerome Le 14/07/2020 18:32, « vpp-dev@lists.fd.io<mailto:vpp-dev@lists

Re: [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread steven luong via lists.fd.io
I am in the process of pushing a patch to replace master/slave with aggregator/member for the bonding. Steven On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via lists.fd.io" wrote: +1, especially since our next release will be supported for a year, and API name

Re: [vpp-dev] Userspace tcp between two vms using vhost user interface?

2020-07-02 Thread steven luong via lists.fd.io
Inline. From: on behalf of "sadhanakesa...@gmail.com" Date: Thursday, July 2, 2020 at 9:55 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Userspace tcp between two vms using vhost user interface? Hi, there seems like lot of ways to setup userspace tcp with vpp, hoststack , with and without

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-15 Thread steven luong via lists.fd.io
E: [vpp-dev] Need help with setup.. cannot ping a VPP interface. +check hardware addresses with “show hardware”, to make sure you’ve configured the interface which is actually connected to the peer system / switch... HTH... Dave From: vpp-dev@lists.fd.io On Behalf Of steven luong via lists.fd

Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-12 Thread steven luong via lists.fd.io
Please correct the subnet mask first. L3 10.1.1.10/24. <-- system A inet 10.1.1.11 netmask 255.0.0.0 broadcast 10.255.255.255 <--- system B Steven From: on behalf of Manoj Iyer Date: Friday, June 12, 2020 at 12:28 PM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Need help with setup..

Re: [vpp-dev] Unable to ping vpp interface from outside after configuring vrrp on vpp interface and making it as Master

2020-06-08 Thread steven luong via lists.fd.io
Vmxnet3 is a paravirtualized device. I could be wrong, but it does not appear to support adding a virtual mac address. This error returned from dpdk indicates just that. Jun 8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: Adding virtual MAC address 00:00:5e:00:01:01 on

Re: [vpp-dev] worker thread deadlock for current master branch, started with commit "bonding: adjust link state based on active slaves"

2020-05-29 Thread steven luong via lists.fd.io
The problem is the aforementioned commit added a call to invoke vnet_hw_interface_set_flags() in the worker thread. That is no can do. We are in the process of reverting the commit. Steven On 5/29/20, 10:02 AM, "vpp-dev@lists.fd.io on behalf of Elias Rudberg" wrote: Hello, We now

Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread steven luong via lists.fd.io
First, your question has nothing to do with bonding. Whatever you are seeing is true regardless of whether bonding is configured or not. Show interfaces displays the admin state of the interface. Whenever you set the admin state to up, it is displayed as up regardless of whether the physical carrier is up or

Re: [vpp-dev] Unknown input `tap connect` #vpp

2020-04-14 Thread steven luong via lists.fd.io
tapcli was deprecated a few releases ago. It has been replaced by virtio over tap. The new cli is create tap … Steven From: on behalf of "mauricio.solisjr via lists.fd.io" Reply-To: "mauricio.soli...@tno.nl" Date: Tuesday, April 14, 2020 at 3:55 AM To: "vpp-dev@lists.fd.io" Subject:
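A minimal example of the new CLI (the id and host interface name are illustrative):
  create tap id 0 host-if-name vpp-tap0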

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
s! Will do the same on the other two branches later today if we are all happy about the fix on master... --a On 6 Apr 2020, at 17:03, steven luong via lists.fd.io wrote: Folks, It looks like jobs for all branches, 19.08, 20.01, and master, are failing due to this inspect.py error. Could some

Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
sts.fd.io>> wrote: Andrew submitted a changeset that backs out the updated Sphinx package. I am building the target 'test-doc' to try to learn the root cause. On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io<http://lists.fd.io> mailto:cisco@lists.fd.io>> wrote:

[vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Folks, It looks like jobs for all branches, 19.08, 20.01, and master, are failing due to this inspect.py error. Could somebody who is familiar with the issue please take a look at it? 18:59:12 Exception occurred: 18:59:12 File "/usr/lib/python3.6/inspect.py", line 516, in unwrap 18:59:12

Re: [vpp-dev] Unknown input `ping' #vpp

2020-03-26 Thread steven luong via Lists.Fd.Io
The ping command has been moved to a separate plugin. You probably didn’t have the ping plugin enabled in your startup.conf. Please add the ping plugin to your startup.conf. Something like this will do the trick: plugins { … plugin ping_plugin.so { enable } } From: on behalf of

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-25 Thread steven luong via Lists.Fd.Io
From: on behalf of "ravinder.ya...@hughes.com" Date: Tuesday, February 25, 2020 at 7:27 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec [Edited Message Follows] VPP IPsec responder on ESXI VM RHEL 7.6 Is

Re: [vpp-dev] vmxnet3 rx-queue error "vmxnet3 failed to activate dev error 1" #vmxnet3 #ipsec

2020-02-24 Thread steven luong via Lists.Fd.Io
It works for me, although I am on an ubuntu 1804 VM. It is unclear to me whether your problem is strictly related to more than 4 rx-queues when you say “but when i try to associate more than 4 num-rx-queues i get error”. Does it work fine when you reduce the number of rx-queues less

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-07 Thread steven luong via Lists.Fd.Io
So you now know which command in the dpdk section dpdk doesn’t like. Try adding “log-level debug” in the dpdk section of startup.conf to see if you can find more helpful messages from dpdk in “vppctl show log” about why it fails to probe the NIC. Steven From: on behalf of Gencli Liu
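That is, in startup.conf, something like:
  dpdk {
    log-level debug
    # keep your existing dev / uio-driver lines as they are
  }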

Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread steven luong via Lists.Fd.Io
It is likely a resource problem – when VPP requests more descriptors and/or TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the interface. There are a few ways to figure out what the problem is. 1. Bypass VPP and run testpmd with debug options turned on, something like
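A rough sketch only; the binary name, PCI address, and debug flags all vary by DPDK release:
  dpdk-testpmd -l 0-1 -a 0000:0b:00.0 --log-level=debug -- -i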

Re: [vpp-dev] #vpp #bond How to config bond mode in vpp?

2020-01-03 Thread steven luong via Lists.Fd.Io
DPDK bonding is no longer supported in 19.08. However, you can use VPP native bonding to accomplish the same thing.
create bond mode active-backup load-balance l34
set interface state BondEthernet0 up
bond add BondEthernet0 GigabitEthernet1/0/0
bond add BondEthernet0 GigabitEthernet1/0/1
Steven

Re: [vpp-dev] VPP with DPDK vhost ---- VPP with DPDK virtio

2019-08-12 Thread steven luong via Lists.Fd.Io
Using VPP+DPDK virtio to connect with VPP + vhost-user is not actively maintained. I got it working a couple of years ago by committing some changes to the DPDK virtio code. Since then, I haven’t been playing with it anymore. Breakage is possible. I could spend a whole week on it to get it working

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-29 Thread steven luong via Lists.Fd.Io
create interface virtio. Or just use a memif interface; that is what it is built for. Steven From: on behalf of "mojtaba.eshghi" Date: Monday, July 29, 2019 at 5:50 AM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using

Re: [vpp-dev] #vpp Connecting a VPP inside a container to a VPP inside host using vhost-virtio-user interfaces

2019-07-28 Thread steven luong via Lists.Fd.Io
The debug CLI was replaced by set logging class vhost-user level debug. Use show log to view the messages. Did you configure 1GB hugepages on the container? It used to be that dpdk virtio required 1GB huge pages. Not sure if it is still the case nowadays. If you use VPP 19.04 or later, you could try

Re: [vpp-dev] Many "tx packet drops (no available descriptors)" #vpp

2019-07-11 Thread steven luong via Lists.Fd.Io
Packet drops due to “no available descriptors” on a vhost-user interface are extremely likely when doing performance tests with qemu’s default vring queue size. You need to specify a vring queue size of 1024 (the default is 256) when you bring up the VM. The queue size can be specified either via
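With qemu this is typically set on the virtio-net device; an illustrative fragment (netdev id and other properties are assumptions):
  -device virtio-net-pci,netdev=net0,mq=on,rx_queue_size=1024,tx_queue_size=1024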

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-13 Thread steven luong via Lists.Fd.Io
Yes on both counts. From: on behalf of Zhiyong Yang Date: Wednesday, June 12, 2019 at 10:33 PM To: "Yang, Zhiyong" , "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" , "Carter, Thomas N" Cc: "Kinsella, Ray" Subject: Re: [vpp-dev] some questions about LACP(link bonding mode 4) I mean, Is

Re: [vpp-dev] some questions about LACP(link bonding mode 4)

2019-06-12 Thread steven luong via Lists.Fd.Io
There is no limit on the number of slaves in a bonding group in VPP’s implementation. I don’t know/remember how to select one port over another from the spec without reading it carefully again. Steven From: "Yang, Zhiyong" Date: Tuesday, June 11, 2019 at 11:09 PM To: "vpp-dev@lists.fd.io" ,

Re: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting address 0x0

2019-05-29 Thread steven luong via Lists.Fd.Io
Clueless with useless tracebacks. Please hook up gdb and get the complete human-readable backtrace. Steven From: on behalf of Mostafa Salari Date: Wednesday, May 29, 2019 at 10:24 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] vpp received signal SIGSEGV, PC 0x7fe54936bc40, faulting

Re: [vpp-dev] Vpp 1904 does not recognize vmxnet3 interfaces

2019-05-12 Thread steven luong via Lists.Fd.Io
Mostafa, Vmxnet3 NICs are in the blacklist by default. Please specify the vmxnet3 PCI addresses in the dpdk section of startup.conf. Steven From: on behalf of Mostafa Salari Date: Sunday, May 12, 2019 at 4:52 AM To: "vpp-dev@lists.fd.io" Subject: [vpp-dev] Vpp 1904 does not recognize vmxnet3
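For example, in startup.conf (the PCI addresses are illustrative):
  dpdk {
    dev 0000:0b:00.0
    dev 0000:13:00.0
  }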

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-23 Thread steven luong via Lists.Fd.Io
Dear Anthony, Please check whether the bond interface's active slaves count is a positive number using show bond. Since you didn’t configure LACP on VM2, I believe you haven’t gotten any active slave in VPP. Your solution is to configure a bond interface in VM2 using mode 4 (I believe)

Re: [vpp-dev] Bond interface won't respond ping #vnet #vpp

2019-02-19 Thread steven luong via Lists.Fd.Io
Anthony, The L3 address should be configured on the bond interface, not the slave interface. If there is a switch in between VPP’s physical NICs and the VM, the switch should be configured to do the bonding, not the remote VM. Use show bond to check that the bundle is created successfully between VPP

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-25 Thread steven luong via Lists.Fd.Io
to ethernet-input */ ethernet_set_rx_redirect (vnm, sif_hw, 1); } } return 0; } when I switch the mode of the bonding interface to l2, the function (the code above) redirects all the members to ethernet-input, but when I switch it back to l3, all the members don't redirect to b

Re: [vpp-dev] Problem switching a bonded interface from L2 to L3 mode

2018-10-24 Thread steven luong via Lists.Fd.Io
Are you using VPP native bonding driver or DPDK bonding driver? How do you configure the bonding interface? Please include the configuration and process to recreate the problem. Steven From: on behalf of "saint_sun 孙 via Lists.Fd.Io" Reply-To: "saint_...@aliyun.com" Date: Wednesday,

Re: [vpp-dev] "Incompatible UPT version” error when running VPP v18.01 with DPDK v17.11 on VMWare with VMXNET3 interface ,ESXI Version 6.5/6.7

2018-10-01 Thread steven luong via Lists.Fd.Io
DPDK is expecting UPT version > 0, and ESXi 6.5/6.7 seems to return UPT version 0 when queried, which is not a supported version. I am using ESXi 6.0 and it is working fine. You could try ESXi 6.0 to see if it helps. Steven From: on behalf of truring truring Date: Monday,

Re: [vpp-dev] [BUG] vhost-user display bug

2018-09-20 Thread steven luong via Lists.Fd.Io
Stephen, Fix for vhost https://gerrit.fd.io/r/14920 I'll take care of vmxnet3 later. Steven On 9/20/18, 10:57 AM, "vpp-dev@lists.fd.io on behalf of Stephen Hemminger" wrote: Why is there not a simple link on FD.io developer web page to report bugs. Reporting bugs page talks

Re: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-30 Thread steven luong via Lists.Fd.Io
, "vpp-dev@lists.fd.io" Subject: [**EXTERNAL**] Re: [vpp-dev] tx-drops with vhost-user interface Hi, Vijay, Sorry to ask dumb question, can you make sure the interface in your VM (either Linux Kernel or DPDK) is “UP”? Regards, Yichen From: on behalf of "steven luong via Lists

Re: [vpp-dev] LACP link bonding issue

2018-08-17 Thread steven luong via Lists.Fd.Io
Aleksander, I found the CLI bug. You can easily work around it. Please set the physical interface state up first in your CLI sequence and it will work.
create bond mode lacp load-balance l23
bond add BondEthernet0 GigabitEtherneta/0/0
bond add BondEthernet0 GigabitEtherneta/0/1
set
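In other words, a sequence along these lines should work (interface names taken from the report above):
  set interface state GigabitEtherneta/0/0 up
  set interface state GigabitEtherneta/0/1 up
  create bond mode lacp load-balance l23
  bond add BondEthernet0 GigabitEtherneta/0/0
  bond add BondEthernet0 GigabitEtherneta/0/1
  set interface state BondEthernet0 up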

Re: [vpp-dev] LACP link bonding issue

2018-08-16 Thread steven luong via Lists.Fd.Io
Aleksander, This problem should be easy to figure out if you can gdb the code. When the very first slave interface is added to the bonding group via the command “bond add BondEthernet0 GigabitEthnerneta/0/0/1”, - The PTX machine schedules the interface with the periodic timer via

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
This configuration is not supported in VPP. Steven From: on behalf of Aleksander Djuric Date: Wednesday, August 15, 2018 at 12:33 AM To: "vpp-dev@lists.fd.io" Subject: Re: [vpp-dev] LACP link bonding issue In addition.. I have tried to configure LACP in dpdk section of vpp startup.conf..

Re: [vpp-dev] LACP link bonding issue

2018-08-15 Thread steven luong via Lists.Fd.Io
Aleksander, The problem is that the LACP periodic timer is not running, as shown in your output. I wonder if lacp-process was launched properly or got stuck. Could you please do show run and check on the health of lacp-process? periodic timer: not running Steven From: on behalf of Aleksander

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
I forgot to ask if these 2 boxes’ interfaces are connected back to back or through a switch. Steven From: on behalf of "steven luong via Lists.Fd.Io" Reply-To: "Steven Luong (sluong)" Date: Tuesday, August 14, 2018 at 8:24 AM To: Aleksander Djuric , "vpp-dev@li

Re: [vpp-dev] LACP link bonding issue

2018-08-14 Thread steven luong via Lists.Fd.Io
Aleksander, It looks like the LACP packets are not going out of the interfaces as expected, or are being dropped. Additional output and traces are needed to determine why. Please collect the following from both sides:
clear hardware
clear error
wait a few seconds
show hardware
show error
show lacp

Re: [vpp-dev] tx-drops with vhost-user interface

2018-08-06 Thread steven luong via Lists.Fd.Io
Vijay, From the show output, I can’t really tell what your problem is. If you could provide additional information about your environment, I could try setting it up and see what’s wrong. Things I need from you are the exact VPP version, VPP configuration, and the qemu startup command line or the XML

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
Ravi, I suppose you already checked the obvious: that the vhost connection is established and the shared memory has at least 1 region in show vhost. For traffic issues, use show error to see why packets are dropping, and use trace add vhost-user-input followed by show trace to see if vhost is getting the packet.
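For example (the packet count is arbitrary):
  trace add vhost-user-input 50
  show trace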

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi, I only have an SSE machine (Ivy Bridge), and DPDK is using the ring mempool as far as I can tell from gdb. You are using AVX2, which I don't have, so I can't try it to see whether the Octeontx mempool is the default mempool for AVX2. What do you put in the dpdk section of the host startup.conf? What is the output

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi, In order to use dpdk virtio_user, you need 1GB huge pages. Steven On 6/5/18, 11:17 AM, "Ravi Kerur" wrote: Hi Steven, Connection is the problem. I don't see memory regions setup correctly. Below are some details. Currently I am using 2MB hugepages. (1) Create

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi, Do this:
1. Run VPP native vhost-user in the host. Turn on debug: "debug vhost-user on".
2. Bring up the container with the vdev virtio_user commands that you have as before.
3. show vhost-user in the host and verify that it has a shared memory region. If not, the connection has a problem.

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread steven luong via Lists.Fd.Io
Ravi, VPP only supports vhost-user in device mode. In your example, the host in device mode and the container also in device mode do not make a happy couple. You need one of them, either the host or the container, running in driver mode using the dpdk vdev virtio_user command in