RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

2021-10-14 Thread Yan, Xiaoping (NSB - CN/Hangzhou)
Hi,

I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon 
Date:   Fri Nov 27 19:48:48 2020 +0100

version: 20.11.0

Best regards
Yan Xiaoping

From: Asaf Penso 
Sent: October 14, 2021 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) ; 
users@dpdk.org
Cc: Slava Ovsiienko ; Matan Azrad ; 
Raslan Darawsheh 
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets

Are you using the latest stable 20.11.3? If not, can you try?

Regards,
Asaf Penso

From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping@nokia-sbell.com>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <as...@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viachesl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>;
Raslan Darawsheh <rasl...@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets

Hi,

In the log below, we can clearly see that packets are dropped between the 
rx_unicast_packets and rx_good_packets counters, but no error/miss counter 
tells why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestions?

Counters in the test center (traffic generator):
  Tx count: 617496152
  Rx count: 617475672
  Drop: 20480

testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i 
--nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 617475727  RX-dropped: 0 RX-total: 617475727
  TX-packets: 617475727  TX-dropped: 0 TX-total: 617475727
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 617475727  RX-dropped: 0 RX-total: 617475727
  TX-packets: 617475727  TX-dropped: 0 TX-total: 617475727
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
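
For reference, the arithmetic on the counters above is consistent:
(rx_unicast_packets + rx_multicast_packets + rx_broadcast_packets) -
rx_good_packets = (617496152 + 3 + 56) - 617475731 = 20480, which matches the
generator's drop count exactly, yet no error or discard counter moves. Below
is a minimal sketch of watching that gap programmatically through the generic
xstats API (DPDK 20.11; the helper names and port id are mine, not from this
thread):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Fetch one extended statistic by name; returns 0 if the counter
 * is missing on this port/PMD. */
static uint64_t
xstat_by_name(uint16_t port, const char *name)
{
	uint64_t id, value = 0;

	if (rte_eth_xstats_get_id_by_name(port, name, &id) != 0)
		return 0;
	if (rte_eth_xstats_get_by_id(port, &id, &value, 1) != 1)
		return 0;
	return value;
}

/* Report the gap between what the NIC counted for the VF on the wire
 * side and what the PMD delivered as good packets. */
static void
report_rx_gap(uint16_t port)
{
	uint64_t uni  = xstat_by_name(port, "rx_unicast_packets");
	uint64_t good = xstat_by_name(port, "rx_good_packets");

	printf("port %u: rx_unicast=%" PRIu64 " rx_good=%" PRIu64
	       " gap=%" PRIu64 "\n",
	       port, uni, good, uni > good ? uni - good : 0);
}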


Best regards
Yan Xiaoping

From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 16:26
To: 'Asaf Penso' <as...@nvidia.com>
Cc: 'Slava Ovsiienko' <viachesl...@nvidia.com>; 'Matan Azrad' <ma...@nvidia.com>;
'Raslan Darawsheh' <rasl...@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou)
<meng-maggie...@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets

Hi,

We replaced the NIC as well (originally it was a ConnectX-4, now it is a 
ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and 
rx_good_packets while no error/miss counter increases?

And do you know the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by 
fastpath/testpmd.
Would that affect anything?

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27

[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903   -   - mlx5_health
904   -   - mlx5_page_alloc
907   -   - mlx5_cmd_:0
916   -   - mlx5_events
917   -   - mlx5_esw_wq
918   -   - mlx5_fw_tracer
919   -   - mlx5_hv_vhca
921   -   - mlx5_fc
924   -   - mlx5_health
925   -   - mlx5_page_alloc
927   -   - mlx5_cmd_:0
935   -   - mlx5_events
936   -   - mlx5_esw_wq
937   -   - mlx5_fw_tracer
938   -   - mlx5_hv_vhca
939   -   - mlx5_fc
941   -   - mlx5_health
942   -   - 

RE: Troubles using pdump to capture the packets

2021-10-14 Thread Pattan, Reshma


> -----Original Message-----
> From: 廖書華 


> Yes, I already set CONFIG_RTE_LIBRTE_PMD_PCAP=y and
> CONFIG_RTE_LIBRTE_PDUMP=y in the file
> "dpdk-19.11/config/common_base", then built DPDK.
> Also, the files "dpdk-19.11/x86_64-native-linuxapp-icc/.config" and
> "dpdk-19.11/x86_64-native-linuxapp-icc/.config.orig" show
> CONFIG_RTE_LIBRTE_PMD_PCAP=y and CONFIG_RTE_LIBRTE_PDUMP=y.
> 
> It seems that the pcap PMD of DPDK is already enabled.
> Do you have other suggestions?

Hi,

A few options you can double-check:
1) Make sure your primary application calls rte_pdump_init()/rte_pdump_uninit() 
to initialize/uninitialize the pdump library (a minimal sketch follows this 
list).
2) If you are using a shared library build, double-check that you are linking 
the pcap PMD properly in the primary build, as explained in the links below:
https://www.mail-archive.com/users@dpdk.org/msg05039.html
https://stackoverflow.com/questions/62795017/dpdk-pdump-failed-to-hotplug-add-device
3) If you are passing any PCI device to the primary using the EAL "-w" option, 
try to pass the same device to the secondary using "-w" as well.
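
For option 1, a minimal sketch (against the DPDK 19.11 API; port setup and
most error handling are elided, so treat it as an outline rather than a
complete application):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_pdump.h>

int main(int argc, char **argv)
{
	/* Standard EAL bring-up for the primary process. */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	/* Register the pdump framework so a secondary dpdk-pdump
	 * process can attach to this app and mirror its packets. */
	if (rte_pdump_init() < 0)
		fprintf(stderr, "pdump init failed, capture unavailable\n");

	/* ... port configuration and packet processing loop ... */

	rte_pdump_uninit();
	return 0;
}

With that in place, capturing from the secondary looks like, for example:

dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'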

If you still see the issue, please paste the full primary and secondary 
application run logs with the commands you are running.
Also mention what kind of build you are using.

Thanks,
Reshma
  


RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets

2021-10-14 Thread Yan, Xiaoping (NSB - CN/Hangzhou)
Hi,

Ok, I will try. (Probably some days later, as I'm busy with another task 
right now.)
Could you also share the commit ids for those fixes?
Thank you.
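
(For reference, one way to list the candidate fixes oneself, assuming a
clone of the dpdk-stable tree with both tags present, is:

git log --oneline v20.11..v20.11.3 -- drivers/net/mlx5

which shows every mlx5 driver commit between the two releases.)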

Best regards
Yan Xiaoping

From: Asaf Penso 
Sent: October 14, 2021 17:51
To: Yan, Xiaoping (NSB - CN/Hangzhou) ; 
users@dpdk.org
Cc: Slava Ovsiienko ; Matan Azrad ; 
Raslan Darawsheh 
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and 
rx_good_packets

Can you please try the latest LTS, 20.11.3?
We have some related fixes there, and we think the issue is already solved.

Regards,
Asaf Penso


DPDK 20.11/CentOS7.9, first run -> *FAILED*

2021-10-14 Thread Ruslan R. Laishev

Hello!

Since 
http://doc.dpdk.org/guides-16.04/linux_gsg/quick_start.html#linux-setup-script 
refers to the nonexistent SETUP.SH, I am walking through the docs and doing 
the steps to ensure that DPDK works at all after installation.


 So:


[root@sysman ~]# sudo modprobe uio_pci_generic
[ 1084.673269] Generic UIO driver for PCI 2.3 devices version: 0.01.0

[root@sysman ~]# sudo modprobe vfio-pci
[ 1118.429157] VFIO - User Level meta-driver version: 0.3

[root@sysman ~]# dpdk-devbind.py  -s

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
if=ens37 drv=e1000 unused=vfio-pci,uio_pci_generic
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
if=ens38 drv=e1000 unused=vfio-pci,uio_pci_generic


[root@sysman ~]# dpdk-devbind.py  --bind=vfio-pci  ens37
Error: bind failed for 0000:02:05.0 - Cannot bind to driver vfio-pci
Error: unbind failed for 0000:02:05.0 - Cannot open 
/sys/bus/pci/drivers//unbind


[ 1215.157980] vfio-pci: probe of 0000:02:05.0 failed with error -22
[ 1215.164317] vfio-pci: probe of 0000:02:05.0 failed with error -22


[root@sysman ~]# dpdk-devbind.py  -s

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
if=ens38 drv=e1000 unused=vfio-pci,uio_pci_generic


Other Network devices
=====================
0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' 
unused=e1000,vfio-pci,uio_pci_generic



[root@sysman ~]# dpdk-devbind.py  --bind=vfio-pci  0000:02:05.0
Error: bind failed for 0000:02:05.0 - Cannot bind to driver vfio-pci


[root@sysman ~]# dpdk-devbind.py  --bind=uio_pci_generic   0000:02:05.0
Error: bind failed for 0000:02:05.0 - Cannot bind to driver uio_pci_generic




So, can someone help me to get DPDK working?

TIA.


Re: DPDK 20.11/CentOS7.9, first run -> *FAILED*

2021-10-14 Thread David Marchand
On Thu, Oct 14, 2021 at 1:37 PM Ruslan R. Laishev  wrote:
>   Since
> http://doc.dpdk.org/guides-16.04/linux_gsg/quick_start.html#linux-setup-script

I am surprised to see you are using a 16.04 version.

I recommend using the latest LTS, i.e. 20.11.
For this version, the quick start guide is
https://doc.dpdk.org/guides-20.11/linux_gsg/index.html


> refers to the nonexistent SETUP.SH, I am walking through the docs and doing
> the steps to ensure that DPDK works at all after installation.
>
>   So:
>
>
> [root@sysman ~]# sudo modprobe uio_pci_generic
> [ 1084.673269] Generic UIO driver for PCI 2.3 devices version: 0.01.0
>
> [root@sysman ~]# sudo modprobe vfio-pci
> [ 1118.429157] VFIO - User Level meta-driver version: 0.3
>
> [root@sysman ~]# dpdk-devbind.py  -s
>
> Network devices using kernel driver
> ===
> 0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
> 0000:02:05.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> if=ens37 drv=e1000 unused=vfio-pci,uio_pci_generic
> 0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
> if=ens38 drv=e1000 unused=vfio-pci,uio_pci_generic
>
> [root@sysman ~]# dpdk-devbind.py  --bind=vfio-pci  ens37
> Error: bind failed for 0000:02:05.0 - Cannot bind to driver vfio-pci
> Error: unbind failed for 0000:02:05.0 - Cannot open
> /sys/bus/pci/drivers//unbind
>
> [ 1215.157980] vfio-pci: probe of 0000:02:05.0 failed with error -22
> [ 1215.164317] vfio-pci: probe of 0000:02:05.0 failed with error -22

If your server has an IOMMU, you probably did not configure it.
If you don't have such hardware, you need no-IOMMU support.

https://doc.dpdk.org/guides-20.11/linux_gsg/linux_drivers.html#troubleshooting-vfio
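
For reference, the no-IOMMU route from that guide boils down to the
following (a sketch; check the guide for your exact kernel and BIOS
settings):

[root@sysman ~]# modprobe vfio enable_unsafe_noiommu_mode=1
[root@sysman ~]# echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

If the machine does have an IOMMU, enable it in BIOS and add the
appropriate kernel command line option (e.g. intel_iommu=on for Intel
CPUs) instead.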


-- 
David Marchand



DPDK version 16.x -- Re: DPDK 20.11/CentOS7.9, first run -> *FAILED*

2021-10-14 Thread Gábor LENCSE

Dear David,

I also still use DPDK 16 (exactly 16.11.11-1+deb9u2, included in Debian 9), 
and I have been locked into this version because I cannot compile my project 
under DPDK 18 (included in Debian 10).

I would be very happy if you (or someone else) could advise me.

My problem description can be found here: 
http://mails.dpdk.org/archives/users/2021-July/005692.html


Best regards,

Gábor
