[dpdk-users] expose kni interface on VM to host

2021-04-10 Thread Quanren Xiong
Hi all,

I have successfully installed DPDK on a Linux guest VM, which is launched
by VirtualBox on a Mac host.

After running the KNI example application on my VM by following the DPDK KNI
example guide, I am able to assign an IP address to it by doing
"ip addr add dev vEth0_0 192.168.56.103". The VM has another NIC with IP
address 192.168.56.101, and the host has an interface at 192.168.56.1. I can
ping .101 from the host, but NOT .103.

Any idea how to make pinging the KNI interface (.103) from the host work? Or
is the KNI interface different and not meant to work this way?

thanks
xiong


Re: [dpdk-users] mlx5: packets lost between good+discard and phy counters

2021-04-10 Thread Gerry Wan
After further investigation, I think this may be a bug introduced in DPDK
v20.11, where these "lost" packets should be counted as "rx_out_of_buffer"
and "rx_missed_errors". On v20.08 both of these counters increment, but on
v20.11 and v21.02 these counters always remain 0.

Any workarounds for this? This is an important statistic for my use case.
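
For reference, a minimal sketch of reading these two counters by name through
the xstats-by-id API (the port ID, includes, and error handling below are
assumptions for illustration, not taken from the application discussed here):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print rx_out_of_buffer and rx_missed_errors for one port, looked up by
 * name. Assumes the port has already been configured and started. */
static void
print_drop_counters(uint16_t port_id)
{
        const char *names[] = { "rx_out_of_buffer", "rx_missed_errors" };

        for (unsigned int i = 0; i < RTE_DIM(names); i++) {
                uint64_t id, value;

                /* Resolve the xstat name to its per-port numeric id. */
                if (rte_eth_xstats_get_id_by_name(port_id, names[i], &id) != 0) {
                        printf("%s: not exposed by this PMD\n", names[i]);
                        continue;
                }
                /* Fetch just that single counter. */
                if (rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
                        printf("%s: %" PRIu64 "\n", names[i], value);
        }
}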

On Fri, Apr 2, 2021 at 5:03 PM Gerry Wan  wrote:

> I have a simple forwarding experiment using a mlx5 NIC directly connected
> to a generator. I am noticing that at high enough throughput,
> rx_good_packets + rx_phy_discard_packets may not equal rx_phy_packets.
> Where are these packets being dropped?
>
> Below is an example xstats where I receive at almost the limit of what my
> application can handle with no loss. It shows rx_phy_discard_packets is 0
> but the number actually received by the CPU is less than rx_phy_packets.
> rx_out_of_buffer and other errors are also 0.
>
> I have disabled Ethernet flow control via rte_eth_dev_flow_ctrl_set with
> mode = RTE_FC_NONE, if that matters.
>
> {
> "rx_good_packets": 319992439,
> "tx_good_packets": 0,
> "rx_good_bytes": 19199546340,
> "tx_good_bytes": 0,
> "rx_missed_errors": 0,
> "rx_errors": 0,
> "tx_errors": 0,
> "rx_mbuf_allocation_errors": 0,
> "rx_q0_packets": 319992439,
> "rx_q0_bytes": 19199546340,
> "rx_q0_errors": 0,
> "rx_wqe_errors": 0,
> "rx_unicast_packets": 31892,
> "rx_unicast_bytes": 1913520,
> "tx_unicast_packets": 0,
> "tx_unicast_bytes": 0,
> "rx_multicast_packets": 0,
> "rx_multicast_bytes": 0,
> "tx_multicast_packets": 0,
> "tx_multicast_bytes": 0,
> "rx_broadcast_packets": 0,
> "rx_broadcast_bytes": 0,
> "tx_broadcast_packets": 0,
> "tx_broadcast_bytes": 0,
> "tx_phy_packets": 0,
> "rx_phy_packets": 31892,
> "rx_phy_crc_errors": 0,
> "tx_phy_bytes": 0,
> "rx_phy_bytes": 20479993088,
> "rx_phy_in_range_len_errors": 0,
> "rx_phy_symbol_errors": 0,
> "rx_phy_discard_packets": 0,
> "tx_phy_discard_packets": 0,
> "tx_phy_errors": 0,
> "rx_out_of_buffer": 0,
> "tx_pp_missed_interrupt_errors": 0,
> "tx_pp_rearm_queue_errors": 0,
> "tx_pp_clock_queue_errors": 0,
> "tx_pp_timestamp_past_errors": 0,
> "tx_pp_timestamp_future_errors": 0,
> "tx_pp_jitter": 0,
> "tx_pp_wander": 0,
> "tx_pp_sync_lost": 0,
> }
>
>
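
For context, the flow-control setting mentioned above (mode = RTE_FC_NONE) is
applied through rte_eth_dev_flow_ctrl_set; a minimal sketch, assuming a single
port and zero-initialized watermark/pause fields, looks like this:

#include <string.h>
#include <rte_ethdev.h>

/* Disable link-level (802.3x) pause frames on a port. Sketch only: real code
 * should check both return values against the PMD's capabilities. */
static int
disable_flow_control(uint16_t port_id)
{
        struct rte_eth_fc_conf fc_conf;

        memset(&fc_conf, 0, sizeof(fc_conf));

        /* Start from the current settings where the PMD can report them;
         * the zero-initialized struct is used if this call is unsupported. */
        rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

        fc_conf.mode = RTE_FC_NONE; /* no pause frames in either direction */

        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}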


Re: [dpdk-users] All links down with Chelsio T6 NICs

2021-04-10 Thread Danushka Menikkumbura
Thank you for your reply, Thomas!

This is the port summary.

Number of available ports: 4
Port MAC Address        Name        Driver     Status  Link
0    00:07:43:5D:4E:60  :05:00.4    net_cxgbe  down    100 Gbps
1    00:07:43:5D:4E:68  :05:00.4_1  net_cxgbe  down    100 Gbps
2    00:07:43:5D:51:00  :0b:00.4    net_cxgbe  down    100 Gbps
3    00:07:43:5D:51:08  :0b:00.4_1  net_cxgbe  down    100 Gbps

Additionally, here is the info for one of the ports.

* Infos for port 0  *
MAC address: 00:07:43:5D:4E:60
Device name: :05:00.4
Driver name: net_cxgbe
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: down
Link speed: 100 Gbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 32
Supported RSS offload flow types:
  ipv4
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-other
  ipv6
  ipv6-frag
  ipv6-tcp
  ipv6-udp
  ipv6-other
  user defined 15
  user defined 16
  user defined 17
Minimum size of RX buffer: 68
Maximum configurable length of RX packet: 9018
Maximum configurable size of LRO aggregated packet: 0
Maximum number of VFs: 256
Current number of RX queues: 1
Max possible RX queues: 114
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 114
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 128
TXDs number alignment: 1
Max segment number per packet: 0
Max segment number per MTU/TSO: 0
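
For what it's worth, the "Link status" line above reflects the same state an
application sees from rte_eth_link_get_nowait(); a small sketch of polling it
(the port ID and output format are assumptions for illustration):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Query link state without waiting for autonegotiation to complete. */
static void
show_link(uint16_t port_id)
{
        struct rte_eth_link link;

        memset(&link, 0, sizeof(link));
        if (rte_eth_link_get_nowait(port_id, &link) < 0) {
                printf("port %u: failed to read link\n", port_id);
                return;
        }
        printf("port %u: link %s, %u Mbps, %s-duplex\n",
               port_id,
               link.link_status ? "up" : "down",
               link.link_speed,
               link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}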

Best,
Danushka

On Sat, Apr 10, 2021 at 4:15 AM Thomas Monjalon  wrote:

> +Cc Chelsio maintainer
>
> 09/04/2021 19:24, Danushka Menikkumbura:
> > Hello,
> >
> > When I run testpmd on a system with 2 two-port Chelsio T6 NICs, the link
> > status is down for all four ports. I use igb_uio as the kernel driver.
> > Below is my testpmd commandline and the startup log.
> >
> > sudo ./build/app/dpdk-testpmd -l 0,1,2,5 -b 81:00.0 -- -i
> >
> > EAL: Detected 20 lcore(s)
> > EAL: Detected 4 NUMA nodes
> > EAL: Detected static linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: No available 1048576 kB hugepages reported
> > EAL: Probing VFIO support...
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :05:00.4
> (socket 0)
> > rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> > EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :0b:00.4
> (socket 0)
> > rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> > Interactive-mode selected
> > testpmd: create a new mbuf pool : n=171456, size=2176,
> socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool : n=171456, size=2176,
> socket=2
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Configuring Port 0 (socket 0)
> > Port 0: 00:07:43:5D:4E:60
> > Configuring Port 1 (socket 0)
> > Port 1: 00:07:43:5D:4E:68
> > Configuring Port 2 (socket 0)
> > Port 2: 00:07:43:5D:51:00
> > Configuring Port 3 (socket 0)
> > Port 3: 00:07:43:5D:51:08
> > Checking link statuses...
> > Done
> > testpmd>
> >
> > Your help is very much appreciated.
>
> Please run the command "show port summary all"
>
>
>
>


Re: [dpdk-users] All links down with Chelsio T6 NICs

2021-04-10 Thread Thomas Monjalon
+Cc Chelsio maintainer

09/04/2021 19:24, Danushka Menikkumbura:
> Hello,
> 
> When I run testpmd on a system with 2 two-port Chelsio T6 NICs, the link
> status is down for all four ports. I use igb_uio as the kernel driver.
> Below is my testpmd commandline and the startup log.
> 
> sudo ./build/app/dpdk-testpmd -l 0,1,2,5 -b 81:00.0 -- -i
> 
> EAL: Detected 20 lcore(s)
> EAL: Detected 4 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available 1048576 kB hugepages reported
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :05:00.4 (socket 0)
> rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :0b:00.4 (socket 0)
> rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=2
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 00:07:43:5D:4E:60
> Configuring Port 1 (socket 0)
> Port 1: 00:07:43:5D:4E:68
> Configuring Port 2 (socket 0)
> Port 2: 00:07:43:5D:51:00
> Configuring Port 3 (socket 0)
> Port 3: 00:07:43:5D:51:08
> Checking link statuses...
> Done
> testpmd>
> 
> Your help is very much appreciated.

Please run the command "show port summary all"





Re: [dpdk-users] Running Pktgen with Mellanox CX5

2021-04-10 Thread Narcisa Ana Maria Vasile
On Fri, Apr 09, 2021 at 02:15:04PM +, Wiles, Keith wrote:
> 
> 
> > On Apr 8, 2021, at 7:08 PM, Narcisa Ana Maria Vasile 
> >  wrote:
> > 
> > Thanks for the quick reply! Just to clarify, the libmlx4 doesn't get 
> > installed when installing the latest Mellanox drivers.
> > Instead, the libmlx5 is installed, which I believe is the right one to use 
> > with Cx5s.
> > 
> > It looks like pktgen needs libmlx4 to run, so does this mean that it is 
> > only working with older Mellanox tools and NICs?
> > I could try installing libmlx4, but my understanding was that libmlx4 is
> > for Cx3s and libmlx5 is for Cx4s and Cx5s.
> 
> Pktgen has no requirement to work with any mlx library or hardware, for that
> matter. I do not know what this problem is, but I believe it is not a Pktgen
> problem. Did you try with testpmd or any of the DPDK examples? If they work
> with mlx then I can look at why Pktgen does not work in this case. If
> anything, the libmlx5 is referencing the mlx4 library for some reason, and
> when you added the mlx5 PMD it also needed the mlx4.
> 
  Sure, I understand it's not required to work with any of this hardware or
  these libraries. I thought I'd ask in case someone in the community has
  tested a similar setup and can confirm whether it works or not. If someone
  has it working, then it's probably a misconfiguration on my side;
  alternatively, if it's known that this is an unsupported setup, I can stop
  investigating.


> BTW, notice the error message states "EAL: libmlx4.so.1: cannot open shared
> object file: No such file or directory". EAL is DPDK initialization, not Pktgen.

  That's right, it looks like it's required by EAL. However, I am able to run
  testpmd successfully, and I've also tried l2fwd successfully. Still, this
  doesn't necessarily mean the issue is Pktgen-related; that's why I wanted to
  verify here.

> > 
> > -Original Message-
> > From: Wiles, Keith  
> > Sent: Thursday, April 8, 2021 2:12 PM
> > To: Narcisa Ana Maria Vasile 
> > Cc: users@dpdk.org; Kevin Daniel (WIPRO LIMITED) ; 
> > Omar Cardona 
> > Subject: [EXTERNAL] Re: Running Pktgen with Mellanox CX5
> > 
> > 
> > 
> >> On Apr 8, 2021, at 1:47 PM, Narcisa Ana Maria Vasile 
> >>  wrote:
> >> 
> >> Hi,
> >> 
> >> I’m trying to run pktgen (latest ‘master’ branch) with a Mellanox CX5 NIC 
> >> on Ubuntu 20.04.
> >> I’ve installed the latest Mellanox drivers 
> >> (MLNX_OFED_LINUX-5.3-1.0.0.1-ubuntu20.04-x86_64.iso).
> >> I’ve compiled and installed DPDK successfully (latest ‘main’ branch).
> >> 
> >> As you can see below, I’m getting an error message saying “libmlx4.so.1: 
> >> cannot open shared object file: No such file or directory”.
> >> I am able to run other DPDK applications such as ‘testpmd’.
> >> 
> >> Is pktgen supported with the latest Mellanox drivers on CX5? Thank you!
> >> 
> >> --
> >> pktgen -l 1,3,5 -a 04:00.0 -d librte_net_mlx5.so -- -P -m "[3:5].0" -T
> >> 
> >> Copyright(c) <2010-2021>, Intel Corporation. All rights reserved. Powered 
> >> by DPDK
> >> EAL: Detected 20 lcore(s)
> >> EAL: Detected 2 NUMA nodes
> >> EAL: Detected shared linkage of DPDK
> >> EAL: libmlx4.so.1: cannot open shared object file: No such file or 
> >> directory
> >> EAL: FATAL: Cannot init plugins
> >> EAL: Cannot init plugins
> > 
> > I have not built anything with mlx in a long time. My guess is that
> > libmlx4.so.1 is not located in a place the system can pick up. Maybe you
> > need to set LD_LIBRARY_PATH to the path where this library is located. I
> > see you included the DPDK PMD, but you still need to tell applications
> > where to locate the library. Another option is to add its directory to a
> > file under /etc/ld.so.conf.d/, or using pkg-config to locate the libs may
> > help too. In some cases external packages do not store the libs in a
> > standard place for ldconfig to locate, or the package does not provide
> > ldconfig configuration files.
> >> 
> >> Thank you,
> >> Narcisa V.
> > 
>