Re: DPDK NIC - Firmware Compatibility

2024-03-28 Thread Thomas Monjalon
28/03/2024 06:53, Pujar, Shyam:
> Hi all,
>   I have a question regarding DPDK NIC-firmware version
> compatibility: https://doc.dpdk.org/guides/nics/i40e.html
> I have highlighted the DPDK version in use. Does the matrix
> indicate the minimum firmware version, or the exact version?

Good question. Yuying, Hailin, can we upgrade the firmware version
while staying on an old DPDK and/or an old kernel module?

It would be nice to make this clear in the documentation.

Thank you




Re: when link librte_net_mlx5.so and use gdb, my program can't get environment variable

2024-01-08 Thread Thomas Monjalon
Hello,

We could dig in details what happens in constructors,
but first I don't understand why you want to LD_PRELOAD some drivers.
The DPDK drivers are already loaded dynamically with dlopen() from EAL.


04/12/2023 10:45, jiangheng (G):
> Environment:
> [root@localhost jh]# cat /etc/centos-release
> CentOS Linux release 8.5.2111
> [root@localhost jh]# uname -r
> 4.18.0-348.el8.x86_64
> [root@localhost jh]# rpm -qa gcc
> gcc-8.5.0-3.el8.x86_64
> [root@localhost jh]# rpm -qa dpdk
> dpdk-21.11-3.el8.x86_64
> 
> Reproduction Procedure:
> 1. We need to create test.c and init.c
> 
> [root@localhost jh]# cat init.c
> #include <stdio.h>
> #include <stdlib.h>
> 
> __attribute__ ((constructor)) void init(void)
> {
>     char *enval = getenv("LD_PRELOAD");
>     printf("enval LD_PRELOAD : %s\n", enval);
> }
> 
> [root@localhost jh]# cat test.c
> #include <stdio.h>
> 
> int main(int argc, char **argv)
> {
>     printf("Hello World\n");
>     return 0;
> }
> 
> 2. Build test.c
> gcc test.c -o test
> 
> 3. Build init.c as a shared object (.so) in three cases:
> 3.1 only link librte_eal.so
> gcc -shared -fPIC init.c -lrte_eal -o libinit.so
> 3.2 link librte_eal.so and librte_net_i40e.so
> gcc -shared -fPIC init.c -lrte_eal -lrte_net_i40e -o libinit_i40e.so
> 3.3 link librte_eal.so and librte_net_mlx5.so
> gcc -shared -fPIC init.c -lrte_eal -lrte_net_mlx5 -o libinit_mlx5.so
> 
> 4. run test using gdb and LD_PRELOAD 
> 4.1 : libinit.so links only librte_eal.so 
> LD_PRELOAD=./libinit.so   gdb  ./test
> (gdb) r
> enval LD_PRELOAD : ./libinit.so
> enval LD_PRELOAD : ./libinit.so   (I don't know why the constructor function
> is executed twice.)
> Hello World
> 
> 4.2 libinit_i40e.so links librte_eal.so and librte_net_i40e.so
> LD_PRELOAD=./libinit_i40e.so gdb ./test
> enval LD_PRELOAD : ./libinit_i40e.so
> enval LD_PRELOAD : ./libinit_i40e.so   (I don't know why the constructor
> function is executed twice.)
> Hello World
> 
> 4.3 libinit_mlx5.so links librte_eal.so and librte_net_mlx5.so
> LD_PRELOAD=./libinit_mlx5.so gdb  ./test
> enval LD_PRELOAD : (null)  // ? ? ? ?
> enval LD_PRELOAD : ./libinit_mlx5.so
> Hello World
> 
> 
> After libinit_mlx5.so is linked against librte_net_mlx5.so, the behaviour
> when the program is run under gdb differs from the preceding two cases: the
> value of LD_PRELOAD obtained the first time is NULL.
> Have you ever had the same problem?
> Looking forward to your favourable reply.
> 
> Thanks

Re: Direct Mem Pool vs Indirect mem pool creation

2023-09-01 Thread Thomas Monjalon
01/09/2023 14:47, omer yamac:
> Hello,
> 
> I need clarification while creating direct/indirect buffers for mbuf. I
> couldn't find exact documentation, and I just looked over the fragmentation
> test case and saw that two pools were created. One is a
> direct pool, and the other is an indirect pool. Here are the methods to
> create pools:
> direct_pool = rte_pktmbuf_pool_create("FRAG_D_MBUF_POOL",
>   NUM_MBUFS, BURST, 0,
>   RTE_MBUF_DEFAULT_BUF_SIZE,
>   SOCKET_ID_ANY);
> indirect_pool = rte_pktmbuf_pool_create("FRAG_I_MBUF_POOL",
> NUM_MBUFS, BURST, 0,
> 0, SOCKET_ID_ANY)
> 
> I couldn't see the exact difference. Just the "data_room_size" parameter is
> different. If this parameter is 0, then is the pool indirect?

A pool is neither direct nor indirect; it is just a pool of buffers
with a defined size shared by all buffers of the pool.
You are free to create any pool for your needs.

Now if you create a pool of buffers with size 0,
we can expect that you will store the data elsewhere,
using rte_pktmbuf_attach_extbuf() for instance.

More explanations can be found in the doc:
https://doc.dpdk.org/guides/prog_guide/mbuf_lib.html#direct-and-indirect-buffers




Re: help

2023-08-11 Thread Thomas Monjalon
Thanks for the info.
Do you think it should be documented on the vmxnet3 page of the DPDK documentation?
If yes, would you like to initiate a patch for review?


11/08/2023 10:43, Igor de Paula:
> Hi again,
> I got this resolved with VMWARE support so I thought to share it here.
> What I originally wanted was to use IOVA-VA on an AMD host, which didn't
> work. I have learned that the ESXi version that supports
> virtual IOMMU on AMD hosts (which is a prerequisite for IOVA-VA) is ESXi 7.0
> U1. After updating, it worked. On Intel hosts, ESXi 6.7 already supports it
> as far as I know.
> 
> 
> On Tue, Jul 25, 2023 at 6:19 PM Varghese, Vipin 
> wrote:
> 
> > [AMD Official Use Only - General]
> >
> > Like I said earlier, trying with the Intel host I have on VMWARE,
> > specifically  Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
> > With IOMMU enabled, VMXNET3 works with VA as well as PA.
> >
> > [VV] Since `enable_unsafe_iommu: not enabled` on the Intel platform, could
> > it be that the specific ESXi hypervisor version supports the HW IOMMU
> > specific to that platform? My suspicion is that, on the AMD platform, the
> > changes required to enable the HW IOMMU might not be available in the
> > specific ESXi (hypervisor OS) version used.
> >
> >
> >
> > I am not an expert on the virtio_user PMD, but I can check whether it will
> > work with PA and the deferred setting for the vmxnet3 PMD.
> >
> >
> >
> > *From:* Igor de Paula 
> > *Sent:* Tuesday, July 25, 2023 8:42 PM
> > *To:* Varghese, Vipin 
> > *Cc:* Yigit, Ferruh ; Jochen Behrens <
> > jbehr...@vmware.com>; Thomas Monjalon ;
> > users@dpdk.org; Gupta, Nipun ; Agarwal, Nikhil <
> > nikhil.agar...@amd.com>; Ronak Doshi ; Immanni, Venkat
> > ; Chenbo Xia 
> > *Subject:* Re: help
> >
> >
> >
> > *Caution:* This message originated from an External Source. Use proper
> > caution when opening attachments, clicking links, or responding.
> >
> >
> >
> > Well,
> > Like I said earlier, trying with the Intel host I have on VMWARE,
> > specifically  Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
> > With IOMMU enabled, VMXNET3 works with VA as well as PA.
> > Meaning, PA works regardless of whether the IOMMU is enabled, from my
> > experience anyway.
> > That's why I thought that:
> > virtio_user needs VA to work.
> > For some reason VMXNET3 does not work with VA (only on AMD host).
> >
> >
> >
> >
> >
> > On Tue, Jul 25, 2023 at 4:04 PM Varghese, Vipin 
> > wrote:
> >
> > [AMD Official Use Only - General]
> >
> >
> >
> > Thanks Igor,
> >
> >
> >
> > As suspected, vmxnet3 works with
> >
> >
> >
> >1. Iommu: disabled
> >2. enable_unsafe_iommu: enabled
> >3. dpdk eal iova mode: PA
> >
> >
> >
> > as pointed by you in logs, the virtio_user fails as it expects VA too.
> >
> >
> >
> > Will check and get back.
> >
> >
> >
> > *From:* Igor de Paula 
> > *Sent:* Tuesday, July 25, 2023 8:16 PM
> > *To:* Yigit, Ferruh 
> > *Cc:* Jochen Behrens ; Thomas Monjalon <
> > tho...@monjalon.net>; users@dpdk.org; Gupta, Nipun ;
> > Agarwal, Nikhil ; Ronak Doshi ;
> > Immanni, Venkat ; Varghese, Vipin <
> > vipin.vargh...@amd.com>; Chenbo Xia 
> > *Subject:* Re: help
> >
> >
> >
> >
> >
> >
> > Hi,
> > Attaching the logs of EAL when trying to run a configuration with
> > virtio_user port when IOMMU is
> > disabled and enable_unsafe_iommu is enabled. As you can see, it forces IOVA
> > as PA, but virtio_user needs IOVA as VA.
> > I am also attaching the output of dmesg. I am not sure which kernel logs
> > you wanted... if there is anything else please let me know..
> > Regarding the ESXI logs, they are HUGE so I will send to you on a separate
> > email.
> >
> >
> >
> > On Fri, Jul 21, 2023 at 1:14 PM Ferruh Yigit  wrote:
> >
> > On 7/21/2023 12:39 PM, Igor de Paula wrote:
> > > I am trying to use virtio_user for an interface with the
> > > kernel:
> > https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html <
> > https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html>
> > > I think this requires IOVA as VA.
> > >
> >
> > I am not sure if virtio-user has IOVA as VA 

Re: Supporting RSS with DPDK in a VM

2023-07-28 Thread Thomas Monjalon
Hello,

You can have packets distributed to multiple queues without RSS.

If you really want to enable the RSS algorithms,
it seems they are not supported for now with vhost-user.
It can be enabled with vhost running in the kernel:
https://qemu.readthedocs.io/en/latest/devel/ebpf_rss.html
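
For reference, multiqueue itself (without RSS) is negotiated through the vhost-user device options. The stanza below is an illustrative sketch only — the socket path, chardev id, and queue count are placeholders, not values taken from the reported setup; `vectors` conventionally needs to be 2*queues + 2 for all queue pairs to get MSI-X vectors:

```shell
# Illustrative QEMU fragment for a vhost-user port with 10 queue pairs
# (paths and ids are placeholders; vectors = 2*queues + 2 = 22):
qemu-system-x86_64 ... \
  -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=char0,queues=10 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=22
```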


26/07/2023 00:58, Matheus Stolet:
> Hello,
> 
> I am trying to run a DPDK application with RSS enabled so that I can 
> have multiple rx queues. This application is running inside a VM. This 
> VM is hosted by QEMU using KVM acceleration and OvS with DPDK and 
> vhost-user are used in the backend. So to clarify things there are two 
> DPDK portions to this. The first is the DPDK portion used by OvS that 
> bypasses the host operating system. This is working fine. The other is a
> DPDK application inside the virtual machine that will bypass the guest
> operating system. This is where I am having trouble.
> 
> When I set rte_eth_conf.rxmode.mq_mode = ETH_MQ_RX_RSS in my application 
> I get the following errors:
>Warning: NIC does not support all requested RSS hash functions.
>virtio_dev_configure(): RSS support requested but not supported by the 
> device
>Port0 dev_configure = -95
> 
> I setup my VM in QEMU to have mq=on and queues=10. I also set the number 
> of rx_queues when creating the vhost port using ovs to 10. Before 
> binding the interface to DPDK, I used ethtool to verify if the network 
> interface was actually setup to have multiple queues.
> 
> Running the 'ethtool -l enps02' command yields the following output:
>Pre-set maximums:
>RX: 0
>TX: 0
>Other:  0
>Combined:   10
>Current hardware settings:
>RX: 0
>TX: 0
>Other:  0
>Combined:   10
> 
>  From my understanding the combined values indicate that the interface 
> was properly setup to have multiple queues, so why am I getting the 
> unsupported RSS error? Are there other configuration steps that I have 
> to take to get this to work? Is RSS with DPDK not supported at all 
> inside a VM at the moment? Perhaps the "Port0 dev_configure() = -95" 
> error means something else? Without the receive side scaling turned on 
> my application is not able to achieve the desired throughput and won't 
> scale when I assign more cores to the application.
> 
> Versions:
> VM
> DPDK: 21.11.4
> Kernel: 5.4.0-148-generic
> Distribution: Ubuntu 20.04
> 
> Host
> DPDK: 21.11.4
> QEMU: 8.0.90
> OvS: 3.0.5
> Kernel: 5.15.111.1.amd64-smp
> Distribution: Debian 11
> 







Re: Enable RSS for virtio application ( dpdk version 21.11)

2023-07-28 Thread Thomas Monjalon
You may need vhost running in the Linux kernel with some BPF code.
There is a documentation about eBPF RSS:
https://qemu.readthedocs.io/en/latest/devel/ebpf_rss.html


26/07/2023 09:32, shiv chittora:
> Thanks Bing for quick response.
> 
> The virtio driver version 1.0.0 is included in the Linux kernel version 4.9
> that powers the VM.
> 
> ethtool -i eth1
> driver: virtio_net
> version: 1.0.0
> firmware-version:
> expansion-rom-version:
> bus-info: :00:04.0
> 
> The Nutanix documentation states: "Ensure the AHV UVM is running the latest
> Nutanix VirtIO driver package. Nutanix VirtIO 1.1.6 or higher is required
> for RSS support." Linux kernel 5.4 and later will have VirtIO
> 1.1.6.
> 
> Since the program is built on DPDK, the PMD driver will use the eth
> interface rather than the one that the kernel provides. I apologise if I'm
> mistaken. RSS is supported by the DPDK PMD version in use.
> 
> Because of the client-centric nature of this application, upgrading the
> kernel will be challenging.
> 
> Do you believe that the only option is to upgrade the vm kernel version?
> 
> Thanks ,
> Shiv
> 
> On Wed, Jul 26, 2023 at 12:33 PM Bing Zhao  wrote:
> 
> > IIRC, the “VIRTIO_NET_F_RSS” is some capability reported and decided
> > during the driver setup/communication stage. It is mostly like that your
> > libs/drivers running on the host for the VM does not support this feature.
> >
> > Have you tried to update the versions of VM or the package/lib of VirtIO
> > for this VM?
> >
> >
> >
> > *From:* shiv chittora 
> > *Sent:* Wednesday, July 26, 2023 1:05 PM
> > *To:* users@dpdk.org
> > *Subject:* Enable RSS for virtio application ( dpdk version 21.11)
> >
> >
> >
> > *External email: Use caution opening links or attachments*
> >
> >
> >
> > I'm using a Nutanix virtual machine to run a DPDK(Version 21.11)-based
> > application.
> > Application is failing during rte_eth_dev_configure . For our application,
> > RSS support is required.
> >
> > eth_config.rxmode.mq_mode = ETH_MQ_RX_RSS;
> > static uint8_t hashKey[] = {
> > 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
> > 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
> > 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
> > 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
> > 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
> > };
> >
> > eth_config.rx_adv_conf.rss_conf.rss_key = hashKey;
> > eth_config.rx_adv_conf.rss_conf.rss_key_len = sizeof(hashKey);
> > eth_config.rx_adv_conf.rss_conf.rss_hf = 260;
> >
> >
> >
> > With the aforementioned RSS configuration, the application is not coming
> > up. The same application runs without any issues on a VMware virtual
> > machine.
> >
> > When I set
> >
> > eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE
> > eth_config.rx_adv_conf.rss_conf.rss_hf = 0
> >
> > Application starts working fine. Since we need RSS support for our
> > application I cannot set eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE.
> >
> > I looked at the DPDK 21.11 release notes, and it mentions that virtio_net
> > supports RSS support.
> >
> >
> > In this application, traffic is tapped to a capture port. I have also created
> > two queues using ACLI commands.
> >
> >  vm.nic_create nutms1-ms type=kNetworkFunctionNic
> > network_function_nic_type=kTap queues=2
> >
> >  vm.nic_get testvm
> > xx:xx:xx:xx:xx:xx {
> >   mac_addr: "xx:xx:xx:xx:xx:xx"
> >   network_function_nic_type: "kTap"
> >   network_type: "kNativeNetwork"
> >   queues: 2
> >   type: "kNetworkFunctionNic"
> >   uuid: "9c26c704-bcb3-4483-bdaf-4b64bb9233ef"
> > }
> >
> >
> > Additionally, I've turned on DPDK logging. Please find below the DPDK log output.
> >
> > EAL: PCI device :00:05.0 on NUMA socket 0
> > EAL:   probe driver: 1af4:1000 net_virtio
> > EAL: Probe PCI driver: net_virtio (1af4:1000) device: :00:05.0 (socket
> > 0)
> > EAL:   PCI memory mapped at 0x94000
> > EAL:   PCI memory mapped at 0x940001000
> > virtio_read_caps(): [98] skipping non VNDR cap id: 11
> > virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: , len: 0
> > virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4096
> > virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
> > virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
> > virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: , len: 4096
> > virtio_read_caps(): found modern virtio pci device.
> > virtio_read_caps(): common cfg mapped at: 0x940001000
> > virtio_read_caps(): device cfg mapped at: 0x940003000
> > virtio_read_caps(): isr cfg mapped at: 0x940002000
> > virtio_read_caps(): notify base: 0x940004000, notify off multiplier: 4
> > vtpci_init(): modern virtio pci detected.
> > virtio_ethdev_negotiate_features(): guest_features before negotiate =
> > 805f10ef8028
> > virtio_ethdev_negotiate_features(): host_features before negotiate =
> > 130a7
> > virtio_ethdev_negotiate_features(): features after 

Re: help

2023-07-20 Thread Thomas Monjalon
+Cc some AMD maintainers, they may have an idea about the IOMMU settings.


20/07/2023 14:44, Igor de Paula:
> I have enabled it in the host and in the BIOS for AMD...
> In the Bios I changed to amd_iommu=on and in the host it's the same for
> either.
> 
> On Thu, Jul 20, 2023 at 1:31 PM Thomas Monjalon  wrote:
> 
> > 20/07/2023 11:35, Igor de Paula:
> > > The weird thing is that it only happens when I am using a host with an
> > AMD
> > > processor. It doesn't happen when I use a host with an Intel processor.
> >
> > So it's probably a matter of BIOS settings for the IOMMU?
> >
> >
> > > On Thu, Jul 20, 2023 at 10:32 AM Thomas Monjalon 
> > > wrote:
> > >
> > > > +Cc the vmxnet3 maintainer.
> > > >
> > > > Please Jochen, do you have an idea what's wrong below?
> > > >
> > > >
> > > > 20/07/2023 11:25, Igor de Paula:
> > > > > This is because it can't negotiate the IOMMU type with any port.
> > > > >
> > > > > On Thu, Jul 20, 2023 at 5:08 AM Thomas Monjalon  > >
> > > > wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > The first error is "Cause: Error: number of ports must be even"
> > > > > >
> > > > > >
> > > > > > 03/05/2023 18:13, Igor de Paula:
> > > > > > > I am running a VM inside a VMWARE server (vSphere).
> > > > > > > My goal is to set up DPDK with two HW ports, and set up a
> > > > virtio_user to
> > > > > > > interact with the kernel stack.
> > > > > > > In another app I have it working but instead of virtio_user I am
> > > > running
> > > > > > > KNI, it works in IOVA-PA mode.
> > > > > > > I am looking to replace the KNI.
> > > > > > >
> > > > > > > When I try to set up virtio_user port as in the doc:
> > > > > > >
> > > > > >
> > > >
> > https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html#virtio-user-as-exception-path
> > > > > > > I get an error that it can't run in PA mode.
> > > > > > >
> > > > > > >
> > > > > > > When I try to run as VA mode from a parameter, I get the
> > following
> > > > > > errors:
> > > > > > > EAL: lib.eal log level changed from info to debug
> > > > > > > EAL: Detected lcore 0 as core 0 on socket 0
> > > > > > > EAL: Detected lcore 1 as core 0 on socket 0
> > > > > > > EAL: Support maximum 128 logical core(s) by configuration.
> > > > > > > EAL: Detected 2 lcore(s)
> > > > > > > EAL: Detected 1 NUMA nodes
> > > > > > > EAL: Checking presence of .so 'librte_eal.so.21.3'
> > > > > > > EAL: Checking presence of .so 'librte_eal.so.21'
> > > > > > > EAL: Checking presence of .so 'librte_eal.so'
> > > > > > > EAL: Detected static linkage of DPDK
> > > > > > > EAL: Ask a virtual area of 0x7000 bytes
> > > > > > > EAL: Virtual area found at 0x1 (size = 0x7000)
> > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > EAL: DPAA Bus not present. Skipping.
> > > > > > > EAL: VFIO PCI modules not loaded
> > > > > > > EAL: Selected IOVA mode 'VA'
> > > > > > > EAL: Probing VFIO support...
> > > > > > > EAL: IOMMU type 1 (Type 1) is supported
> > > > > > > EAL: IOMMU type 7 (sPAPR) is not supported
> > > > > > > EAL: IOMMU type 8 (No-IOMMU) is supported
> > > > > > > EAL: VFIO support initialized
> > > > > > > EAL: Ask a virtual area of 0x5b000 bytes
> > > > > > > EAL: Virtual area found at 0x17000 (size = 0x5b000)
> > > > > > > EAL: Setting up physically contiguous memory...
> > > > > > > EAL: Setting maximum number of open files to 1048576
> > > > > > > EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
> > > > > > > EAL: Creating 2 segment lists: n_segs:128 socket_id:0
> > > > > > hugepage_sz:1073741824
> > > > > > > EAL: Ask a virtual area of 0x2000 bytes
> > > > > > > EAL: V

Re: help

2023-07-20 Thread Thomas Monjalon
20/07/2023 11:35, Igor de Paula:
> The weird thing is that it only happens when I am using a host with an AMD
> processor. It doesn't happen when I use a host with an Intel processor.

So it's probably a matter of BIOS settings for the IOMMU?


> On Thu, Jul 20, 2023 at 10:32 AM Thomas Monjalon 
> wrote:
> 
> > +Cc the vmxnet3 maintainer.
> >
> > Please Jochen, do you have an idea what's wrong below?
> >
> >
> > 20/07/2023 11:25, Igor de Paula:
> > > This is because it can't negotiate the IOMMU type with any port.
> > >
> > > On Thu, Jul 20, 2023 at 5:08 AM Thomas Monjalon 
> > wrote:
> > >
> > > > Hello,
> > > >
> > > > The first error is "Cause: Error: number of ports must be even"
> > > >
> > > >
> > > > 03/05/2023 18:13, Igor de Paula:
> > > > > I am running a VM inside a VMWARE server (vSphere).
> > > > > My goal is to set up DPDK with two HW ports, and set up a
> > virtio_user to
> > > > > interact with the kernel stack.
> > > > > In another app I have it working but instead of virtio_user I am
> > running
> > > > > KNI, it works in IOVA-PA mode.
> > > > > I am looking to replace the KNI.
> > > > >
> > > > > When I try to set up virtio_user port as in the doc:
> > > > >
> > > >
> > https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html#virtio-user-as-exception-path
> > > > > I get an error that it can't run in PA mode.
> > > > >
> > > > >
> > > > > When I try to run as VA mode from a parameter, I get the following
> > > > errors:
> > > > > EAL: lib.eal log level changed from info to debug
> > > > > EAL: Detected lcore 0 as core 0 on socket 0
> > > > > EAL: Detected lcore 1 as core 0 on socket 0
> > > > > EAL: Support maximum 128 logical core(s) by configuration.
> > > > > EAL: Detected 2 lcore(s)
> > > > > EAL: Detected 1 NUMA nodes
> > > > > EAL: Checking presence of .so 'librte_eal.so.21.3'
> > > > > EAL: Checking presence of .so 'librte_eal.so.21'
> > > > > EAL: Checking presence of .so 'librte_eal.so'
> > > > > EAL: Detected static linkage of DPDK
> > > > > EAL: Ask a virtual area of 0x7000 bytes
> > > > > EAL: Virtual area found at 0x1 (size = 0x7000)
> > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > EAL: DPAA Bus not present. Skipping.
> > > > > EAL: VFIO PCI modules not loaded
> > > > > EAL: Selected IOVA mode 'VA'
> > > > > EAL: Probing VFIO support...
> > > > > EAL: IOMMU type 1 (Type 1) is supported
> > > > > EAL: IOMMU type 7 (sPAPR) is not supported
> > > > > EAL: IOMMU type 8 (No-IOMMU) is supported
> > > > > EAL: VFIO support initialized
> > > > > EAL: Ask a virtual area of 0x5b000 bytes
> > > > > EAL: Virtual area found at 0x17000 (size = 0x5b000)
> > > > > EAL: Setting up physically contiguous memory...
> > > > > EAL: Setting maximum number of open files to 1048576
> > > > > EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
> > > > > EAL: Creating 2 segment lists: n_segs:128 socket_id:0
> > > > hugepage_sz:1073741824
> > > > > EAL: Ask a virtual area of 0x2000 bytes
> > > > > EAL: Virtual area found at 0x100062000 (size = 0x2000)
> > > > > EAL: Memseg list allocated at socket 0, page size 0x10kB
> > > > > EAL: Ask a virtual area of 0x20 bytes
> > > > > EAL: Virtual area found at 0x14000 (size = 0x20)
> > > > > EAL: VA reserved for memseg list at 0x14000, size 20
> > > > > EAL: Ask a virtual area of 0x2000 bytes
> > > > > EAL: Virtual area found at 0x214000 (size = 0x2000)
> > > > > EAL: Memseg list allocated at socket 0, page size 0x10kB
> > > > > EAL: Ask a virtual area of 0x20 bytes
> > > > > EAL: Virtual area found at 0x218000 (size = 0x20)
> > > > > EAL: VA reserved for memseg list at 0x218000, size 20
> > > > > EAL: TSC frequency is ~235 KHz
> > > > > EAL: Main lcore 0 is ready (tid=7f8ad790ec00;cpuset=[0])
> > > > > EAL: lcore 1 is ready (tid=7f8ad6907400;cpuset=[1])
> > > > > EAL:

Re: help

2023-07-20 Thread Thomas Monjalon
+Cc the vmxnet3 maintainer.

Please Jochen, do you have an idea what's wrong below?


20/07/2023 11:25, Igor de Paula:
> This is because it can't negotiate the IOMMU type with any port.
> 
> On Thu, Jul 20, 2023 at 5:08 AM Thomas Monjalon  wrote:
> 
> > Hello,
> >
> > The first error is "Cause: Error: number of ports must be even"
> >
> >
> > 03/05/2023 18:13, Igor de Paula:
> > > I am running a VM inside a VMWARE server (vSphere).
> > > My goal is to set up DPDK with two HW ports, and set up a virtio_user to
> > > interact with the kernel stack.
> > > In another app I have it working but instead of virtio_user I am running
> > > KNI, it works in IOVA-PA mode.
> > > I am looking to replace the KNI.
> > >
> > > When I try to set up virtio_user port as in the doc:
> > >
> > https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html#virtio-user-as-exception-path
> > > I get an error that it can't run in PA mode.
> > >
> > >
> > > When I try to run as VA mode from a parameter, I get the following
> > errors:
> > > EAL: lib.eal log level changed from info to debug
> > > EAL: Detected lcore 0 as core 0 on socket 0
> > > EAL: Detected lcore 1 as core 0 on socket 0
> > > EAL: Support maximum 128 logical core(s) by configuration.
> > > EAL: Detected 2 lcore(s)
> > > EAL: Detected 1 NUMA nodes
> > > EAL: Checking presence of .so 'librte_eal.so.21.3'
> > > EAL: Checking presence of .so 'librte_eal.so.21'
> > > EAL: Checking presence of .so 'librte_eal.so'
> > > EAL: Detected static linkage of DPDK
> > > EAL: Ask a virtual area of 0x7000 bytes
> > > EAL: Virtual area found at 0x1 (size = 0x7000)
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: DPAA Bus not present. Skipping.
> > > EAL: VFIO PCI modules not loaded
> > > EAL: Selected IOVA mode 'VA'
> > > EAL: Probing VFIO support...
> > > EAL: IOMMU type 1 (Type 1) is supported
> > > EAL: IOMMU type 7 (sPAPR) is not supported
> > > EAL: IOMMU type 8 (No-IOMMU) is supported
> > > EAL: VFIO support initialized
> > > EAL: Ask a virtual area of 0x5b000 bytes
> > > EAL: Virtual area found at 0x17000 (size = 0x5b000)
> > > EAL: Setting up physically contiguous memory...
> > > EAL: Setting maximum number of open files to 1048576
> > > EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
> > > EAL: Creating 2 segment lists: n_segs:128 socket_id:0
> > hugepage_sz:1073741824
> > > EAL: Ask a virtual area of 0x2000 bytes
> > > EAL: Virtual area found at 0x100062000 (size = 0x2000)
> > > EAL: Memseg list allocated at socket 0, page size 0x10kB
> > > EAL: Ask a virtual area of 0x20 bytes
> > > EAL: Virtual area found at 0x14000 (size = 0x20)
> > > EAL: VA reserved for memseg list at 0x14000, size 20
> > > EAL: Ask a virtual area of 0x2000 bytes
> > > EAL: Virtual area found at 0x214000 (size = 0x2000)
> > > EAL: Memseg list allocated at socket 0, page size 0x10kB
> > > EAL: Ask a virtual area of 0x20 bytes
> > > EAL: Virtual area found at 0x218000 (size = 0x20)
> > > EAL: VA reserved for memseg list at 0x218000, size 20
> > > EAL: TSC frequency is ~235 KHz
> > > EAL: Main lcore 0 is ready (tid=7f8ad790ec00;cpuset=[0])
> > > EAL: lcore 1 is ready (tid=7f8ad6907400;cpuset=[1])
> > > EAL: Trying to obtain current memory policy.
> > > EAL: Setting policy MPOL_PREFERRED for socket 0
> > > EAL: Restoring previous memory policy: 0
> > > EAL: request: mp_malloc_sync
> > > EAL: Heap on socket 0 was expanded by 1024MB
> > > EAL: PCI device :0b:00.0 on NUMA socket -1
> > > EAL:   probe driver: 15ad:7b0 net_vmxnet3
> > > EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not
> > initializing
> > > EAL: Requested device :0b:00.0 cannot be used
> > > EAL: PCI device :13:00.0 on NUMA socket -1
> > > EAL:   probe driver: 15ad:7b0 net_vmxnet3
> > > EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not
> > initializing
> > > EAL: Requested device :13:00.0 cannot be used
> > > EAL: Bus (pci) probe failed.
> > > EAL: lib.telemetry log level changed from disabled to warning
> > > EAL: Error - exiting with code: 1
> > >   Cause: Error: number of ports must be even
> > > EAL: request: mp_malloc_sync
> > > 

Re: help

2023-07-19 Thread Thomas Monjalon
Hello,

The first error is "Cause: Error: number of ports must be even"


03/05/2023 18:13, Igor de Paula:
> I am running a VM inside a VMWARE server (vSphere).
> My goal is to set up DPDK with two HW ports, and set up a virtio_user to
> interact with the kernel stack.
> In another app I have it working but instead of virtio_user I am running
> KNI, it works in IOVA-PA mode.
> I am looking to replace the KNI.
> 
> When I try to set up virtio_user port as in the doc:
> https://doc.dpdk.org/guides/howto/virtio_user_as_exception_path.html#virtio-user-as-exception-path
> I get an error that it can't run in PA mode.
> 
> 
> When I try to run as VA mode from a parameter, I get the following errors:
> EAL: lib.eal log level changed from info to debug
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 0 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 2 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Checking presence of .so 'librte_eal.so.21.3'
> EAL: Checking presence of .so 'librte_eal.so.21'
> EAL: Checking presence of .so 'librte_eal.so'
> EAL: Detected static linkage of DPDK
> EAL: Ask a virtual area of 0x7000 bytes
> EAL: Virtual area found at 0x1 (size = 0x7000)
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: DPAA Bus not present. Skipping.
> EAL: VFIO PCI modules not loaded
> EAL: Selected IOVA mode 'VA'
> EAL: Probing VFIO support...
> EAL: IOMMU type 1 (Type 1) is supported
> EAL: IOMMU type 7 (sPAPR) is not supported
> EAL: IOMMU type 8 (No-IOMMU) is supported
> EAL: VFIO support initialized
> EAL: Ask a virtual area of 0x5b000 bytes
> EAL: Virtual area found at 0x17000 (size = 0x5b000)
> EAL: Setting up physically contiguous memory...
> EAL: Setting maximum number of open files to 1048576
> EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
> EAL: Creating 2 segment lists: n_segs:128 socket_id:0 hugepage_sz:1073741824
> EAL: Ask a virtual area of 0x2000 bytes
> EAL: Virtual area found at 0x100062000 (size = 0x2000)
> EAL: Memseg list allocated at socket 0, page size 0x10kB
> EAL: Ask a virtual area of 0x20 bytes
> EAL: Virtual area found at 0x14000 (size = 0x20)
> EAL: VA reserved for memseg list at 0x14000, size 20
> EAL: Ask a virtual area of 0x2000 bytes
> EAL: Virtual area found at 0x214000 (size = 0x2000)
> EAL: Memseg list allocated at socket 0, page size 0x10kB
> EAL: Ask a virtual area of 0x20 bytes
> EAL: Virtual area found at 0x218000 (size = 0x20)
> EAL: VA reserved for memseg list at 0x218000, size 20
> EAL: TSC frequency is ~235 KHz
> EAL: Main lcore 0 is ready (tid=7f8ad790ec00;cpuset=[0])
> EAL: lcore 1 is ready (tid=7f8ad6907400;cpuset=[1])
> EAL: Trying to obtain current memory policy.
> EAL: Setting policy MPOL_PREFERRED for socket 0
> EAL: Restoring previous memory policy: 0
> EAL: request: mp_malloc_sync
> EAL: Heap on socket 0 was expanded by 1024MB
> EAL: PCI device :0b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 net_vmxnet3
> EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
> EAL: Requested device :0b:00.0 cannot be used
> EAL: PCI device :13:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 net_vmxnet3
> EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
> EAL: Requested device :13:00.0 cannot be used
> EAL: Bus (pci) probe failed.
> EAL: lib.telemetry log level changed from disabled to warning
> EAL: Error - exiting with code: 1
>   Cause: Error: number of ports must be even
> EAL: request: mp_malloc_sync
> EAL: Heap on socket 0 was shrunk by 1024MB
> 
> 
> 
> For some reason the HW ports won't set up. From what I understand,
> net_vmxnet3 should work with VA mode.
> I enabled IOMMU for the VM.
> The weird thing is that even when it is enabled, I still have the
> enable_unsafe_noiommu_mode flag on.
> And because it's on, this:
> 
> dev_iova_mode = pci_device_iova_mode(dr, dev);
> 
> returns PA mode, and it fails.
> 
> When I disable it by modifying
> /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, I get another error:
> it doesn't find a suitable IOMMU type.
> Here is just the relevant message:
> 
> 
> EAL: Heap on socket 0 was expanded by 1024MB
> EAL: PCI device :0b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 net_vmxnet3
> EAL: Set IOMMU type 1 (Type 1) failed, error 19 (No such device)
> EAL: Set IOMMU type 7 (sPAPR) failed, error 19 (No such device)
> EAL: Set IOMMU type 8 (No-IOMMU) failed, error 19 (No such device)
> EAL: :0b:00.0 failed to select IOMMU type
> EAL: Requested device :0b:00.0 cannot be used
> EAL: PCI device :13:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 net_vmxnet3
> EAL: Set IOMMU type 1 (Type 1) failed, error 19 (No such device)
> EAL: Set IOMMU type 7 (sPAPR) failed, error 19 (No such device)
> EAL: Set IOMMU type 8 (No-IOMMU) failed, error 19 (No 

Re: Generic flow string parser

2023-04-29 Thread Thomas Monjalon
This thread is an API suggestion; it should be discussed on
the developer mailing list (hence the Cc here).

29/04/2023 16:23, Cliff Burdick:
> > I would rather the flow parser was rewritten as well. An open-coded
> > parser is much more error-prone and harder to extend than writing the
> > parser in yacc/lex (i.e. bison/flex).
> 
> I agree, and that's kind of where the original suggestion of using testpmd
> came from. Writing a new parser is obviously the better choice and would
> have been great if testpmd started that way, but a significant amount of
> time was invested in that method. Since it works and is tested, it didn't
> seem like a bad request to build off that and bring that code into an rte_
> API. I'd imagine building a proper parser would not just require the parser
> piece, but also making sure all the tests now use that, and also the legacy
> testpmd was converted. It seemed unlikely all of this could be done in a
> reasonable amount of time and a lot of input from many people. Given the
> amount of debugging I (and others) have spent on figuring out why a flow
> spec didn't work properly, this could be a huge timesaver for new projects
> like Tom mentioned.
> 
> On Fri, Apr 28, 2023 at 5:04 PM Stephen Hemminger <
> step...@networkplumber.org> wrote:
> 
> > On Fri, 28 Apr 2023 12:13:26 -0700
> > Cliff Burdick  wrote:
> >
> > > Hi Stephen, it would definitely not be worthwhile to repeat everything
> > > that's already tested with testpmd. I was thinking that given that there
> > > already is a "flow_parse" function that does almost everything needed,
> > > something like that could be exposed. If we think of the testpmd flow
> > > string as a sort of "IR" for string flow specification, that would allow
> > > others to implement higher-level transform of a schema like JSON or YAML
> > > into the testpmd language. Due to the complexity of testpmd and how it's
> > > the source of truth for testing flows, I think it's too great of an ask to
> > > have testpmd support a new type of parsing. My only suggestion would be
> > > to take what already exists and expose it in a public API that is included
> > > in a DPDK install.

So the only things we need are two functions, if I understand correctly:

int rte_flow_to_text(const struct rte_flow*);
struct rte_flow *rte_flow_from_text(const char *);

Here I assume the output of rte_flow_from_text() would be a created flow,
meaning it calls rte_flow_create() under the hood.
Is that what you wish?
Or should it fill port ID, attributes, patterns and actions?
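The "IR" idea from the quoted discussion — lowering a higher-level schema such as JSON/YAML into the testpmd flow language, which the proposed rte_flow_from_text() would then consume — can be sketched as follows. The flow_to_testpmd helper and the dict schema are hypothetical, and the emitted syntax is illustrative rather than a validated testpmd grammar.

```python
def flow_to_testpmd(port, attr, patterns, actions):
    """Lower a dict-based flow spec to a testpmd-style 'flow create' line."""
    parts = ["flow", "create", str(port)]
    parts += [k for k, v in attr.items() if v]          # e.g. ingress/egress
    parts.append("pattern")
    for item in patterns:
        parts.append(item["type"])
        for field, value in item.get("spec", {}).items():
            parts += [field, "is", str(value)]
        parts.append("/")
    parts.append("end")
    parts.append("actions")
    for act in actions:
        parts.append(act["type"])
        for field, value in act.get("conf", {}).items():
            parts += [field, str(value)]
        parts.append("/")
    parts.append("end")
    return " ".join(parts)

# Example: an ingress rule matching IPv4 from 1.2.3.4, steered to queue 1.
spec = {
    "attr": {"ingress": True},
    "patterns": [{"type": "eth"},
                 {"type": "ipv4", "spec": {"src": "1.2.3.4"}}],
    "actions": [{"type": "queue", "conf": {"index": 1}}],
}
line = flow_to_testpmd(0, spec["attr"], spec["patterns"], spec["actions"])
print(line)
# flow create 0 ingress pattern eth / ipv4 src is 1.2.3.4 / end actions queue index 1 / end
```

With such a lowering step, any schema front-end only has to target the flow string, leaving validation to the parser behind rte_flow_from_text().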




Re: Generic flow string parser

2023-04-27 Thread Thomas Monjalon
26/04/2023 07:47, David Marchand:
> On Wed, Apr 26, 2023 at 6:47 AM Cliff Burdick  wrote:
> >
> > Does anyone know if a generic parser for flow strings exists anywhere? The 
> > one inside of testpmd is ideal, but unfortunately it's self-contained and 
> > not distributed as part of a normal DPDK install. This seems like something 
> > that is likely reinvented over and over and it would be useful if there was 
> > a single API to take in strings and generate flows.
> 
> I heard this same question in the past, but I don't remember the answer.
> Copying Thomas and Ori who might know.

I'm not sure how the testpmd code could help another application.
And in general, if your application has a CLI,
you need to integrate the flow commands in a broader context.




Re: Running testpmd as non-root user with uio_pci_generic

2023-03-28 Thread Thomas Monjalon
14/01/2023 23:17, Isaac Boukris:
> Hi,
> 
> I tried to run testpmd as a non-root user with uio_pci_generic (i.e.
> not vfio-pci) on a vmxnet3 interface by setting the
> 'cap_ipc_lock,cap_sys_admin' capabilities as described in the doc at:
> https://doc.dpdk.org/guides-21.11/linux_gsg/enable_func.html
> 
> But that didn't work and I was still getting the documented error:
> EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
> 
> I dug a little and found that I had to add the 'cap_dac_override' as
> well and then it worked, the hint was at (which also includes a small
> demo program): https://bugs.centos.org/view.php?id=17176
> 
> I thought it was worth sharing as I have seen it being asked a couple of 
> times.

Thank you for reporting.

The preferred solution is to use the capability DAC_READ_SEARCH.

The DPDK doc is updated:
https://git.dpdk.org/dpdk/commit/?id=50b567c66da268bcc
(so you become a DPDK contributor :)
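Whether a given capability is actually in effect can be checked by decoding the CapEff bitmask found in /proc/self/status. Below is a minimal sketch; the capability bit numbers follow linux/capability.h, and the decode_caps helper is illustrative, not part of any DPDK tooling.

```python
# Bit numbers from linux/capability.h for the capabilities discussed above.
CAPS = {
    1: "CAP_DAC_OVERRIDE",
    2: "CAP_DAC_READ_SEARCH",
    14: "CAP_IPC_LOCK",
    21: "CAP_SYS_ADMIN",
}

def decode_caps(capeff_hex):
    """Return the known capability names set in a CapEff hex mask."""
    mask = int(capeff_hex, 16)
    return sorted(name for bit, name in CAPS.items() if mask & (1 << bit))

# Mask with bits 1, 2, 14 and 21 set, i.e. what granting
# cap_dac_override, cap_dac_read_search, cap_ipc_lock and cap_sys_admin yields:
mask = hex((1 << 1) | (1 << 2) | (1 << 14) | (1 << 21))
print(decode_caps(mask))
```

On a live system the mask would come from the `CapEff:` line of /proc/self/status rather than being constructed by hand.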




Re: USB device support in DPDK

2022-12-05 Thread Thomas Monjalon
02/12/2022 14:01, John Bize:
> Will DPDK support USB-Ethernet devices?

If there is a need and an advantage for USB devices
to be supported in DPDK, why not?

In general, DPDK supports what is contributed and maintained.




Re: OpenWRT Related Question

2022-07-21 Thread Thomas Monjalon
21/07/2022 03:08, R T:
> On Wednesday, July 20, 2022 at 06:03:32 AM EDT, Thomas Monjalon 
>  wrote:  
> 
> > We have this howto page about OpenWRT:
> > https://doc.dpdk.org/guides/howto/openwrt.html
> > I am not sure how much it is up to date,
> > please do not hesitate to give feedback if anything can be improved.
> 
> One of the items that I'm not seeing, which the regular meson build env on
> Ubuntu was pretty clear on, is building any of the examples for testing on
> OpenWRT.
> How would I go about building and installing one or more of the DPDK examples 
> on the OpenWRT Image?

It should not be different from any other OS.
You can compile the examples either by adding -Dexamples=all to the meson command,
or by calling make in the example directory.




Re: OpenWRT Related Question

2022-07-20 Thread Thomas Monjalon
Hello,

20/07/2022 06:15, R T:
> Hello Folks, I've been writing a small application on Ubuntu 20.04 that uses 
> DPDK.  I started with some of the simpler examples such as l2fwd and 
> packet_ordering and added some other function I wanted to experiment with.  
> I'm getting good packet throughput and the library is working well.  Overall, 
> DPDK is feature rich, easy to compile, and working as advertised.

Great to read, thanks.

> As next steps, I would like to move my application to run on top of a very 
> small Linux image that can be loaded off of a very small flash or even over 
> the network. OpenWRT checks all the boxes for being small, having a lot of 
> network utilities, and supporting DPDK.
> Looking at the documents, the OpenWRT doc is pretty small and I don't see
> many discussions about it in the archives of this mailing list.
> I'm asking if people could provide some feedback on writing a DPDK app for 
> use on OpenWRT.
> Is the advertised integration working well? How easy is it to cross-compile an
> app to run on OpenWRT (x86 in my case)?
> I would welcome any comment on this topic

We have this howto page about OpenWRT:
https://doc.dpdk.org/guides/howto/openwrt.html
I am not sure how much it is up to date,
please do not hesitate to give feedback if anything can be improved.




Re: Fwd: QOS sample example.

2022-03-31 Thread Thomas Monjalon
+Cc QoS scheduler maintainers (see file MAINTAINERS)

31/03/2022 18:59, satish amara:
> Hi,
> I am trying to understand the QOS sample scheduler application code.
> Trying to understand what tc_period is in the config.
> 30. QoS Scheduler Sample Application — Data Plane Development Kit 21.05.0
> documentation (dpdk.org)
> Is tc_period the same as tb_period?
> tb_period (bytes): time period that should elapse since the last credit update
> in order for the bucket to be awarded tb_credits_per_period worth of
> credits.
> Regards,
> Satish Amara
> 
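The tb_period semantics quoted above can be illustrated with a minimal token-bucket sketch. This is only an illustration of the documented behaviour, not the librte_sched implementation, and it does not answer whether tc_period and tb_period are the same parameter.

```python
class TokenBucket:
    """Toy token bucket: credits are awarded only once tb_period time units
    have elapsed since the last update, tb_credits_per_period per period."""

    def __init__(self, tb_period, tb_credits_per_period, tb_size):
        self.tb_period = tb_period
        self.credits_per_period = tb_credits_per_period
        self.size = tb_size
        self.credits = 0
        self.last_update = 0

    def update(self, now):
        periods = (now - self.last_update) // self.tb_period
        if periods:
            # Award credits for each full period, capped at the bucket size.
            self.credits = min(self.size,
                               self.credits + periods * self.credits_per_period)
            self.last_update += periods * self.tb_period

tb = TokenBucket(tb_period=10, tb_credits_per_period=100, tb_size=1000)
tb.update(now=5)    # less than one tb_period elapsed: no credits awarded
print(tb.credits)   # 0
tb.update(now=25)   # two full periods elapsed since time 0
print(tb.credits)   # 200
```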







Re: ConnectX5 Setup with DPDK

2022-02-25 Thread Thomas Monjalon
25/02/2022 19:29, Aaron Lee:
> Hi Thomas,
> 
> I was doing some more testing and wanted to increase the RX queues for the
> CX5 but was wondering how I could do that. I see in the usage example in
> the docs, I could pass in --rxq=2 --txq=2 to set the queues to 2 each but I
> don't see that in my output when I run the command. Below is the output
> from running the command in
> https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
> that the MCX515A-CCAT I have can't support more than 1 queue or am I
> supposed to configure another setting?

I see nothing about the number of queues in your output.
You should try the command "show config rxtx".


> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
> 
> Configuring Port 0 (socket 1)
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> 
> Best,
> Aaron







Re: ConnectX5 Setup with DPDK

2022-02-21 Thread Thomas Monjalon
21/02/2022 21:10, Aaron Lee:
> Hi Thomas,
> 
> Actually I remembered in my previous setup I had run dpdk-devbind.py to
> bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> this and just wanted to confirm that this is correct.

Indeed, mlx5 PMD runs on top of mlx5 kernel driver.
We don't need UIO or VFIO drivers.
The kernel modules must remain loaded and can be used at the same time.
When DPDK is working, the traffic goes to the userspace PMD by default,
but it is possible to configure some flows to go directly to the kernel driver.
This behaviour is called "bifurcated model".


> On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee  wrote:
> 
> > Hi Thomas,
> >
> > I tried installing things from scratch two days ago and have gotten
> > things working! I think part of the problem was figuring out the correct
> > hugepage allocation for my system. If I recall correctly, I tried setting
> > up my system with default page size 1G but perhaps didn't have enough pages
> > allocated at the time. Currently have the following which gives me the
> > output you've shown previously.
> >
> > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > Node Pages Size Total
> > 0    16    1Gb    16Gb
> > 1    16    1Gb    16Gb
> >
> > root@yeti-04:~/dpdk-21.11# echo show port summary all |
> > build/app/dpdk-testpmd --in-memory -- -i
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: No free 2048 kB hugepages reported on node 0
> > EAL: No free 2048 kB hugepages reported on node 1
> > EAL: No available 2048 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> > TELEMETRY: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> >
> > Configuring Port 0 (socket 1)
> > Port 0: EC:0D:9A:68:21:A8
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address   Name Driver Status   Link
> 0    EC:0D:9A:68:21:A8 :af:00.0 mlx5_pci   up   100 Gbps
> >
> > Best,
> > Aaron
> >
> > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon 
> > wrote:
> >
> >> 21/02/2022 19:52, Thomas Monjalon:
> >> > 18/02/2022 22:12, Aaron Lee:
> >> > > Hello,
> >> > >
> >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> >> > > wondering if the card I have simply isn't compatible. I first noticed
> >> that
> >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> >> error
> >> > > logs when running dpdk-pdump.
> >> >
> >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >> >
> >> > > EAL: Detected CPU lcores: 80
> >> > > EAL: Detected NUMA nodes: 2
> >> > > EAL: Detected static linkage of DPDK
> >> > > EAL: Multi-process socket
> >> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> >> > > vdev_scan(): Failed to request vdev from primary
> >> > > EAL: Selected IOVA mode 'PA'
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> >> > > EAL: Cannot request default VFIO container fd
> >> > > EAL: VFIO support could not be initialized
> >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0
> >> (socket 1)
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> >

Re: ConnectX5 Setup with DPDK

2022-02-21 Thread Thomas Monjalon
21/02/2022 19:52, Thomas Monjalon:
> 18/02/2022 22:12, Aaron Lee:
> > Hello,
> > 
> > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > wondering if the card I have simply isn't compatible. I first noticed that
> > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> > logs when running dpdk-pdump.
> 
> When testing a NIC, it is more convenient to use dpdk-testpmd.
> 
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > vdev_scan(): Failed to request vdev from primary
> > EAL: Selected IOVA mode 'PA'
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > EAL: Cannot request default VFIO container fd
> > EAL: VFIO support could not be initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > mlx5_common: port 0 request to primary process failed
> > mlx5_net: probe of PCI device :af:00.0 aborted after encountering an
> > error: No such file or directory
> > mlx5_common: Failed to load driver mlx5_eth
> > EAL: Requested device :af:00.0 cannot be used
> > EAL: Error - exiting with code: 1
> >   Cause: No Ethernet ports - bye
> 
> From this log, we miss the previous steps before running the application.
> 
> Please check these simple steps:
> - install rdma-core
> - build dpdk (meson build && ninja -C build)
> - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> - run testpmd (echo show port summary all | build/app/dpdk-testpmd 
> --in-memory -- -i)
> 
> EAL: Detected CPU lcores: 10
> EAL: Detected NUMA nodes: 1
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: :08:00.0 (socket 0)
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=219456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 0C:42:A1:D6:E0:00
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address   Name Driver Status   Link
> 0    0C:42:A1:D6:E0:00 08:00.0  mlx5_pci   up   25 Gbps
> 
> > I noticed that the pci id of the card I was given is 15b3:1017 as below.
> > This sort of indicates to me that the PMD driver isn't supported on this
> > card.
> 
> This card is well supported and even officially tested with DPDK 21.11,
> as you can see in the release notes:
> https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> 
> > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> > [ConnectX-5] [15b3:1017]
> > 
> > I'd appreciate it if someone has gotten this card to work with DPDK to
> > point me in the right direction or if my suspicions were correct that this
> > card doesn't work with the PMD.

If you want to check which hardware is supported by a PMD,
you can use this command:

usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so  
PMD NAME: mlx5_eth
PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
PMD HW SUPPORT:
 Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All 
Subdevices)
 Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] 
(1014) (All Subdevices)
 Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All 
Subdevices)
 Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] 
(1016) (All Subdevices)
 Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All 
Subdevices)
 Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] 
(1018) (All Subdevices)
 Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All 
Subdevices)
 Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] 
(101a) (All Subdevices)
 Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 
network controller (a2d2) (All Subdevices)
 Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF 
(a2d3) (All Subdevices)
 Mellanox Technologies (15b3) : MT28908 Fami

Re: ConnectX5 Setup with DPDK

2022-02-21 Thread Thomas Monjalon
18/02/2022 22:12, Aaron Lee:
> Hello,
> 
> I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> wondering if the card I have simply isn't compatible. I first noticed that
> the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> logs when running dpdk-pdump.

When testing a NIC, it is more convenient to use dpdk-testpmd.

> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> vdev_scan(): Failed to request vdev from primary
> EAL: Selected IOVA mode 'PA'
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> EAL: Cannot request default VFIO container fd
> EAL: VFIO support could not be initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> mlx5_common: port 0 request to primary process failed
> mlx5_net: probe of PCI device :af:00.0 aborted after encountering an
> error: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device :af:00.0 cannot be used
> EAL: Error - exiting with code: 1
>   Cause: No Ethernet ports - bye

From this log, we miss the previous steps before running the application.

Please check these simple steps:
- install rdma-core
- build dpdk (meson build && ninja -C build)
- reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
- run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory 
-- -i)

EAL: Detected CPU lcores: 10
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: :08:00.0 (socket 0)
Interactive-mode selected
testpmd: create a new mbuf pool : n=219456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D6:E0:00
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address   Name Driver Status   Link
0    0C:42:A1:D6:E0:00 08:00.0  mlx5_pci   up   25 Gbps

> I noticed that the pci id of the card I was given is 15b3:1017 as below.
> This sort of indicates to me that the PMD driver isn't supported on this
> card.

This card is well supported and even officially tested with DPDK 21.11,
as you can see in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms

> af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> [ConnectX-5] [15b3:1017]
> 
> I'd appreciate it if someone has gotten this card to work with DPDK to
> point me in the right direction or if my suspicions were correct that this
> card doesn't work with the PMD.

Please tell me what drove you in the wrong direction,
because I really would like to improve the documentation & tools.




Re: Initial Setup and best practices

2022-02-13 Thread Thomas Monjalon
13/02/2022 18:37, Simon Brown:
> Hello,

Hello

> I'm new to DPDK and I'm trying to setup a simple project to count packets. 
> I'm 
> using MoonGen to generate the traffic on one machine and can receive the data 
> using traditional sockets on another machine. So I know that part works.
> 
> I've built dpdk 21.11 and I've tried to modify the example rxtx callbacks 
> application to count packets, but it doesn't see any traffic. So I presume 
> there's something wrong with my environment.

You should explain more what is your setup.

> Can you advise on how to verify that my environment is correct and what is 
> the 
> recommended setup for new projects? Should I be using the virtualisation 
> interface vfio-pci or the other interfaces? I have mlx5, i40e and ice NICs 
> available for test.

There is no best setup I think.
You should check your environment with dpdk-testpmd.

> For mlx5 dpdk-devbind suggests that vfio-pci is compatible

That's interesting; we should treat mlx4 and mlx5 as exceptions
in this script because they are bifurcated, i.e. no need for UIO or VFIO.
We should use the accurate info given by dpdk-pmdinfo.py.

> whereas mlx5_core 
> is a kernel driver, but trying to run with vfio-pci leads to:
> 
> mlx5_common: No Verbs device matches PCI device :01:00.0, are kernel 
> drivers loaded?
> 
> mlx5 seems to work correctly with MoonGen.

mlx5 is working with its own kernel driver.
The DPDK PMD is negotiating with the kernel driver to get the traffic.
The main benefit is that we can choose some flows to go directly in kernel,
and the rest being managed directly by DPDK, bypassing the kernel.





Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

2022-01-16 Thread Thomas Monjalon
+Cc mlx5 experts


14/01/2022 11:10, Rocio Dominguez:
> Hi,
> 
> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
> 
> I'm using:
> 
> OS SLES 15 SP2
> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
> Mellanox adapters firmware 12.28.2006 (corresponding to this MLNX_OFED 
> version)
> kernel 5.3.18-24.34-default
> 
> 
> This is my SRIOV configuration for DPDK capable PCI slots:
> 
> {
> "resourceName": "mlnx_sriov_netdevice",
> "resourcePrefix": "mellanox.com",
> "isRdma": true,
> "selectors": {
> "vendors": ["15b3"],
> "devices": ["1014"],
> "drivers": ["mlx5_core"],
> "pciAddresses": [":d8:00.2", ":d8:00.3", 
> ":d8:00.4", ":d8:00.5"],
> "isRdma": true
> }
> 
> The sriov device plugin starts without problems, the devices are correctly 
> allocated:
> 
> {
>   "cpu": "92",
>   "ephemeral-storage": "419533922385",
>   "hugepages-1Gi": "8Gi",
>   "hugepages-2Mi": "4Gi",
>   "intel.com/intel_sriov_dpdk": "0",
>   "intel.com/sriov_cre": "3",
>   "mellanox.com/mlnx_sriov_netdevice": "4",
>   "mellanox.com/sriov_dp": "0",
>   "memory": "183870336Ki",
>   "pods": "110"
> }
> 
> The Mellanox NICs are binded to the kernel driver mlx5_core:
> 
> pcgwpod009-c04:~ # dpdk-devbind --status
> 
> Network devices using kernel driver
> ===
> :18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe 
> unused=vfio-pci
> :18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe 
> unused=vfio-pci
> :19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe 
> unused=vfio-pci
> :19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe 
> unused=vfio-pci
> :3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core 
> unused=vfio-pci
> :3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core 
> unused=vfio-pci
> :5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 
> drv=ixgbe unused=vfio-pci
> :5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 
> drv=ixgbe unused=vfio-pci
> :5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= 
> drv=ixgbevf unused=vfio-pci
> :5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 
> drv=ixgbevf unused=vfio-pci
> :5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= 
> drv=ixgbevf unused=vfio-pci
> :5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 
> drv=ixgbevf unused=vfio-pci
> :af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 
> drv=ixgbe unused=vfio-pci
> :af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 
> drv=ixgbe unused=vfio-pci
> :d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core 
> unused=vfio-pci
> :d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core 
> unused=vfio-pci
> :d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' 
> if=enp216s0f2 drv=mlx5_core unused=vfio-pci
> :d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' 
> if=enp216s0f3 drv=mlx5_core unused=vfio-pci
> :d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' 
> if=enp216s0f4 drv=mlx5_core unused=vfio-pci
> :d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' 
> if=enp216s0f5 drv=mlx5_core unused=vfio-pci
> 
> The interfaces are up:
> 
> pcgwpod009-c04:~ # ibdev2netdev -v
> :3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 
> 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
> :3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 
> 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
> :d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 
> 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
> :d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 
> 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
> :d8:00.2 mlx5_4 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> 
> enp216s0f2 (Up)
> :d8:00.3 mlx5_5 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> 
> enp216s0f3 (Up)
> :d8:00.4 mlx5_6 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> 
> enp216s0f4 (Up)
> :d8:00.5 mlx5_7 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> 
> enp216s0f5 (Up)
> pcgwpod009-c04:~ #
> 
> 
> But when I run my application the Mellanox adapters are probed and I obtain 
> the following error:
> 
> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) 
> device: :d8:00.4 (socket 1)"}
> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
>  net_mlx5: unable to recognize 

release schedule change proposal

2021-11-15 Thread Thomas Monjalon
For the last 5 years, DPDK has done 4 releases per year,
in February, May, August and November (the LTS one):
.02   .05   .08   .11 (LTS)

This schedule has multiple issues:
- clash with China's Spring Festival
- too many rushes, impacting maintainers & testers
- not much buffer, impacting proposal period

I propose to switch to a new schedule with 3 releases per year:
.03  .07  .11 (LTS)

New LTS branch would start at the same time of the year as before.
There would be one less intermediate release during spring/summer:
.05 and .08 intermediate releases would become a single .07.
I think it has almost no impact for the users.
This change could be done starting next year.

In details, this is how we could extend some milestones:

ideal schedule so far (in 13 weeks):
proposal deadline: 4
rc1 - API freeze: 5
rc2 - PMD features freeze: 2
rc3 - app features freeze: 1
rc4 - last chance to fix: 1
release: 0

proposed schedule (in 17 weeks):
proposal deadline: 4
rc1 - API freeze: 7
rc2 - PMD features freeze: 3
rc3 - app features freeze: 1
rc4 - more fixes: 1
rc5 - last chance buffer: 1
release: 0
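The per-milestone week counts above can be tallied to confirm the totals of the two schedules:

```python
# Weeks allocated between consecutive milestones, as listed above.
current = {"proposal deadline": 4, "rc1": 5, "rc2": 2, "rc3": 1, "rc4": 1}
proposed = {"proposal deadline": 4, "rc1": 7, "rc2": 3, "rc3": 1,
            "rc4": 1, "rc5": 1}

print(sum(current.values()))    # 13 weeks per release (4 releases/year)
print(sum(proposed.values()))   # 17 weeks per release (3 releases/year)
```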

Opinions?




Re: Enable RX interrupts

2021-11-06 Thread Thomas Monjalon
01/11/2021 09:56, Jack Humphries:
> Hi Dmitry,
> 
> Thanks for the quick response! Yes, I was calling rte_eth_dev_rx_intr_enable 
> — I just tried re-arming interrupts before each call to epoll and my initial 
> experiment shows that fixes the issue.
> 
> You are right that l3fwd-power does the re-arming as well. If possible, it 
> would be helpful to have a comment in that example code about the re-arming 
> since the impression I got from quickly looking at the l3fwd-power code is 
> that interrupts are turned off because the app wants to poll for a threshold 
> of time before “giving up” and turning interrupts back on again. I’m happy to 
> open a pull request to add this comment, too, if you prefer.

The best is to submit a patch on the mailing list, Cc'ing the relevant
maintainers.
Thanks
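The effect described in this thread — missing wakeups unless interrupts are re-armed before each wait — can be illustrated with a toy one-shot interrupt source. OneShotNic is purely hypothetical and stands in for the device/epoll machinery; only the re-arming pattern mirrors what l3fwd-power does with rte_eth_dev_rx_intr_enable().

```python
class OneShotNic:
    """Hypothetical device whose interrupt fires once, then disarms itself."""

    def __init__(self):
        self.armed = False

    def rx_intr_enable(self):
        # Stand-in for rte_eth_dev_rx_intr_enable().
        self.armed = True

    def packet_arrives(self):
        """Return True if an interrupt fires; a one-shot source self-disarms."""
        if self.armed:
            self.armed = False
            return True
        return False

nic = OneShotNic()
nic.rx_intr_enable()
events_without_rearm = sum(nic.packet_arrives() for _ in range(3))
print(events_without_rearm)   # only the first packet raised an interrupt

events_with_rearm = 0
for _ in range(3):
    nic.rx_intr_enable()      # re-arm before each wait, as in l3fwd-power
    events_with_rearm += nic.packet_arrives()
print(events_with_rearm)      # every packet raised an interrupt
```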





Re: [dpdk-dev] Doubt regarding DPDK hash Library implementation

2021-11-04 Thread Thomas Monjalon
04/11/2021 17:39, Medvedkin, Vladimir:
> >> 01/11/2021 11:55, Syam Prasad N Pearson:
> >>> /** Number of items per bucket. */
> >>> *#define RTE_HASH_BUCKET_ENTRIES 8*
> >>>
> >>> defined inside:
> >>> dpdk-20.11.3/dpdk-stable-20.11.3/lib/librte_hash /rte_cuckoo_hash.h
> >>>
> >>> Why does the library take this value as *8*, is there any particular
> >>> reason for this? what if it is 16,32... etc.
> 
> Yes, RTE_HASH_BUCKET_ENTRIES can be any power of 2.
> The reason for choosing 8 is a tradeoff between performance and memory. 
> When it is equal to 8, sizeof(struct rte_hash_bucket) is equal to 
> RTE_CACHE_LINE_SIZE; thus, there are no gaps in memory between the hash 
> buckets due to their alignment.

That's a good comment to add to the code.
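The tradeoff can be checked with back-of-envelope arithmetic, assuming each bucket entry carries a 4-byte signature plus a 4-byte key index; the real struct rte_hash_bucket layout may differ across DPDK versions.

```python
RTE_CACHE_LINE_SIZE = 64
SIG_BYTES = 4       # assumed per-entry hash signature
KEY_IDX_BYTES = 4   # assumed per-entry key index

for entries in (4, 8, 16):
    bucket_bytes = entries * (SIG_BYTES + KEY_IDX_BYTES)
    cache_lines = bucket_bytes / RTE_CACHE_LINE_SIZE
    print(entries, bucket_bytes, cache_lines)
# Under these assumptions, 8 entries fill exactly one cache line;
# 16 entries would span two lines, and 4 would waste half an aligned line.
```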




Re: [dpdk-dev] Doubt regarding DPDK hash Library implementation

2021-11-04 Thread Thomas Monjalon
+Cc hash lib maintainers

01/11/2021 11:55, Syam Prasad N Pearson:
> Dear Sir/Madam,
> I am a developer trying to get familiar with the DPDK hash library. I tried
> to make and use a hash table successfully.
> During the development I came across a variable
> 
> /** Number of items per bucket. */
> *#define RTE_HASH_BUCKET_ENTRIES 8*
> 
> defined inside:
> dpdk-20.11.3/dpdk-stable-20.11.3/lib/librte_hash/rte_cuckoo_hash.h
> 
> Why does the library take this value as *8*, is there any particular reason
> for this? what if it is 16,32... etc.
> 
> I am using DPDK 20.11.3 LTS.
> 
> Please help.





Re: Troubles with building of DPDK.RPM and meson

2021-10-12 Thread Thomas Monjalon
12/10/2021 08:45, Ruslan R. Laishev:
> Hello All!
> 
>   I have a small study task to make DPDK.RPM with the latest DPDK from 
> the git.

You should look at what is done in Fedora for the latest DPDK.

>   I use DPDK.SPEC from the latest .SRC.RPM kit as a template.

Not sure what is your reference.





Re: I need DPDK MLX5 Probe error support

2021-10-06 Thread Thomas Monjalon
I don't even know which Linux distribution you are using.
Please send the Dockerfile.
If it compiles in Docker, it should run.


06/10/2021 14:27, Jaeeun Ham:
> Hi Thomas,
> 
> The cause is that I fail to load mlx5 driver using pci address on the docker 
> container.
> So, I tried to add rdma-core library to solve dependency issue you mentioned 
> as below.
> Docker image is built with these Dockerfiles.
> This docker image is built with DPDK20.11.
> How should I add rdma-core library?
> 
> I don't find any rdma related so files in the docker container.
> b273016e5be8:/usr/local/lib # ls *mlx*
> librte_common_mlx5.so  librte_common_mlx5.so.21  librte_common_mlx5.so.21.0  
> librte_net_mlx5.so  librte_net_mlx5.so.21  librte_net_mlx5.so.21.0
> b273016e5be8:/usr/local/lib # ls *rdma*
> ls: cannot access '*rdma*': No such file or directory
> 
> dpdk-20.11/doc/guides/rel_notes/release_20_11.rst
> 911:  * rdma-core:
> 913:* rdma-core-31.0-1 and above
> 
> 
> 
> < error log >
> f1d23550a947:/ # cat /tmp/logs/epp.log
> MIDHAUL_PCI_ADDR::12:01.0, BACKHAUL_PCI_ADDR::12:01.1 
> MIDHAUL_IP_ADDR:10.255.20.125, BACKHAUL_IP_ADDR:10.255.20.124
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
> 
> EAL: Requested device :12:01.0 cannot be used
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
> 
> EAL: Requested device :12:01.1 cannot be used
> EAL: Bus (pci) probe failed.
> FATAL: epp_init.c::copy_mac_addr:130: Call to 
> rte_eth_dev_get_port_by_name(src_dpdk_dev_name, _id) failed: -19 
> (Unknown error -19), rte_errno=0 (not set)
> 
> Caught signal 6
> Obtained 7 stack frames, tid=1377.
> tid=1377, /usr/local/bin/ericsson-packet-processor() [0x40a3c4] tid=1377, 
> /lib64/libpthread.so.0(+0x13f80) [0x7f56c4786f80] tid=1377, 
> /lib64/libc.so.6(gsignal+0x10b) [0x7f56c229018b] tid=1377, 
> /lib64/libc.so.6(abort+0x175) [0x7f56c2291585] tid=1377, 
> /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818] tid=1377, 
> /lib64/libc.so.6(__libc_start_main+0xed) [0x7f56c227b34d] tid=1377, 
> /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4090ea]
> 
> 
> 
> BR/Jaeeun
> 
> -Original Message-
> From: Thomas Monjalon 
> Sent: Wednesday, October 6, 2021 7:59 PM
> To: Jaeeun Ham 
> Cc: users@dpdk.org; alia...@nvidia.com; rasl...@nvidia.com; as...@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
> 
> Installing dependencies is not an issue.
> I don't understand which support you need.
> 
> 
> 06/10/2021 11:57, Jaeeun Ham:
> > Hi Thomas,
> >
> > Could you take a look at the attached file?
> > My engineer managed to compile DPDK 20.11 to support MLX5. Please find the 
> > output from dpdk-testpmd command in attached file. As you can see testpmd 
> > was able to probe mlx5_pci drivers and get MAC addresses.
> > The key issue in his case for enabling MLX5 support was to export rdma-core 
> > lib path to shared libs for meson/ninja commands as new build system 
> > automatically enables MLX5 support if needed dependencies are available.
> >
> > BR/Jaeeun
> >
> > -Original Message-
> > From: Thomas Monjalon mailto:tho...@monjalon.net>>
> > Sent: Sunday, October 3, 2021 4:51 PM
> > To: Jaeeun Ham mailto:jaeeun@ericsson.com>>
> > Cc: users@dpdk.org<mailto:users@dpdk.org>; 
> > alia...@nvidia.com<mailto:alia...@nvidia.com>; 
> > rasl...@nvidia.com<mailto:rasl...@nvidia.com>;
> > as...@nvidia.com<mailto:as...@nvidia.com>
> > Subject: Re: I need DPDK MLX5 Probe error support
> >
> > Hi,
> >
> > I think you need to read the documentation.
> > For DPDK install on Linux:
> > https://protect2.fireeye.com/v1/url?k=7925aba3-26be92c2-7925eb38-86d8a
> > 30ca42b-d871f122b4a0a61a=1=88eca0f4-aa71-4ba8-a332-179f08406da3=
> > https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Flinux_gsg%2Fbuild_dpdk.html%23co
> > mpiling-and-installing-dpdk-system-wide
> > For mlx5 specific dependencies, install rdma-core package:
> > https://protect2.fireeye.com/v1/url?k=9bce4984-c45570e5-9bce091f-86d8a
> > 30ca42b-25bd3d467b5f290d=1=88eca0f4-aa71-4ba8-a332-179f08406da3=
> > https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Fnics%2Fmlx5.html%23linux-prerequ
> > isites
> >
> >
> > 02/10/2021 12:57, Jaeeun Ham:
> > > Hi,
> > >
> > > Could you teach me how to install dpdk-testpmd?
> > > I have to run the application on the host server, not a development 
> > > server.

Re: I need DPDK MLX5 Probe error support

2021-10-06 Thread Thomas Monjalon
Installing dependencies is not an issue.
I don't understand which support you need.


06/10/2021 11:57, Jaeeun Ham:
> Hi Thomas,
> 
> Could you take a look at the attached file?
> My engineer managed to compile DPDK 20.11 to support MLX5. Please find the 
> output from dpdk-testpmd command in attached file. As you can see testpmd was 
> able to probe mlx5_pci drivers and get MAC addresses.
> The key issue in his case for enabling MLX5 support was to export rdma-core 
> lib path to shared libs for meson/ninja commands as new build system 
> automatically enables MLX5 support if needed dependencies are available.
> 
> BR/Jaeeun
> 
> -----Original Message-
> From: Thomas Monjalon  
> Sent: Sunday, October 3, 2021 4:51 PM
> To: Jaeeun Ham 
> Cc: users@dpdk.org; alia...@nvidia.com; rasl...@nvidia.com; as...@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
> 
> Hi,
> 
> I think you need to read the documentation.
> For DPDK install on Linux:
> https://protect2.fireeye.com/v1/url?k=7925aba3-26be92c2-7925eb38-86d8a30ca42b-d871f122b4a0a61a=1=88eca0f4-aa71-4ba8-a332-179f08406da3=https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Flinux_gsg%2Fbuild_dpdk.html%23compiling-and-installing-dpdk-system-wide
> For mlx5 specific dependencies, install rdma-core package:
> https://protect2.fireeye.com/v1/url?k=9bce4984-c45570e5-9bce091f-86d8a30ca42b-25bd3d467b5f290d=1=88eca0f4-aa71-4ba8-a332-179f08406da3=https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Fnics%2Fmlx5.html%23linux-prerequisites
> 
> 
> 02/10/2021 12:57, Jaeeun Ham:
> > Hi,
> > 
> > Could you teach me how to install dpdk-testpmd?
> > I have to run the application on the host server, not a development server.
> > So, I don't know how to get dpdk-testpmd.
> > 
> > By the way, testpmd run result is as below.
> > root@seroics05590:~/ejaeham# testpmd
> > EAL: Detected 64 lcore(s)
> > EAL: libmlx4.so.1: cannot open shared object file: No such file or 
> > directory
> > EAL: FATAL: Cannot init plugins
> > 
> > EAL: Cannot init plugins
> > 
> > PANIC in main():
> > Cannot init EAL
> > 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> > 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) 
> > [0x7f5e044a4bf7]]
> > 3: [testpmd(main+0x907) [0x55d301d98d07]]
> > 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) 
> > [0x7f5e04ca3cfd]]
> > 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) 
> > [0x7f5e04cac19e]] Aborted
> > 
> > 
> > I added option below when the process is starting in the docker.
> >  dv_flow_en=0 \
> >  --log-level=pmd,8 \
> > < MLX5 log >
> > 415a695ba348:/tmp/logs # cat epp.log
> > MIDHAUL_PCI_ADDR::12:01.0, BACKHAUL_PCI_ADDR::12:01.1 
> > MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device :12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device :12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1 Caught signal 15
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > FATAL: epp_init.c::copy_mac_addr:130: Call to 
> > rte_eth_dev_get_port_by_name(src_dpdk_dev_name, _id) failed: -19 
> > (Unknown error -19), rte_errno=0 (not set)
> > 
> > Caught signal 6
> > Obtained 7 stack frames, tid=713.
> > tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4] 
> > tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80] tid=713, 
> > /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b] tid=713, 
> > /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585] tid=713, 
> > /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818] 
> > tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d] 
> > tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) 
> > [0x4091ca]
> > 
> > < i40e log >
> > cat epp.log
> > MIDHAUL_PCI_ADDR::3b:0d.5, BACKHAUL_PCI_ADDR::3b:0d.4 
> > MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1

Re: I need DPDK MLX5 Probe error support

2021-10-05 Thread Thomas Monjalon
05/10/2021 03:17, Jaeeun Ham:
> Hi Thomas,
> 
> I attached the testpmd result which is gathered on the host sever.
> Could you please take a look at the mlx5_core PCI issue?

I see no real issue in the log.
For doing more tests, I recommend using the latest DPDK version.


> Thank you in advance.
> 
> BR/Jaeeun
> 
> -Original Message-----
> From: Thomas Monjalon  
> Sent: Sunday, October 3, 2021 4:51 PM
> To: Jaeeun Ham 
> Cc: users@dpdk.org; alia...@nvidia.com; rasl...@nvidia.com; as...@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
> 
> Hi,
> 
> I think you need to read the documentation.
> For DPDK install on Linux:
> https://protect2.fireeye.com/v1/url?k=7925aba3-26be92c2-7925eb38-86d8a30ca42b-d871f122b4a0a61a=1=88eca0f4-aa71-4ba8-a332-179f08406da3=https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Flinux_gsg%2Fbuild_dpdk.html%23compiling-and-installing-dpdk-system-wide
> For mlx5 specific dependencies, install rdma-core package:
> https://protect2.fireeye.com/v1/url?k=9bce4984-c45570e5-9bce091f-86d8a30ca42b-25bd3d467b5f290d=1=88eca0f4-aa71-4ba8-a332-179f08406da3=https%3A%2F%2Fdoc.dpdk.org%2Fguides%2Fnics%2Fmlx5.html%23linux-prerequisites
> 
> 
> 02/10/2021 12:57, Jaeeun Ham:
> > Hi,
> > 
> > Could you teach me how to install dpdk-testpmd?
> > I have to run the application on the host server, not a development server.
> > So, I don't know how to get dpdk-testpmd.
> > 
> > By the way, testpmd run result is as below.
> > root@seroics05590:~/ejaeham# testpmd
> > EAL: Detected 64 lcore(s)
> > EAL: libmlx4.so.1: cannot open shared object file: No such file or 
> > directory
> > EAL: FATAL: Cannot init plugins
> > 
> > EAL: Cannot init plugins
> > 
> > PANIC in main():
> > Cannot init EAL
> > 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> > 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) 
> > [0x7f5e044a4bf7]]
> > 3: [testpmd(main+0x907) [0x55d301d98d07]]
> > 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) 
> > [0x7f5e04ca3cfd]]
> > 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) 
> > [0x7f5e04cac19e]] Aborted
> > 
> > 
> > I added option below when the process is starting in the docker.
> >  dv_flow_en=0 \
> >  --log-level=pmd,8 \
> > < MLX5 log >
> > 415a695ba348:/tmp/logs # cat epp.log
> > MIDHAUL_PCI_ADDR::12:01.0, BACKHAUL_PCI_ADDR::12:01.1 
> > MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device :12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > 
> > EAL: Requested device :12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1 Caught signal 15
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > FATAL: epp_init.c::copy_mac_addr:130: Call to 
> > rte_eth_dev_get_port_by_name(src_dpdk_dev_name, _id) failed: -19 
> > (Unknown error -19), rte_errno=0 (not set)
> > 
> > Caught signal 6
> > Obtained 7 stack frames, tid=713.
> > tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4] 
> > tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80] tid=713, 
> > /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b] tid=713, 
> > /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585] tid=713, 
> > /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818] 
> > tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d] 
> > tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) 
> > [0x4091ca]
> > 
> > < i40e log >
> > cat epp.log
> > MIDHAUL_PCI_ADDR::3b:0d.5, BACKHAUL_PCI_ADDR::3b:0d.4 
> > MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15

Re: I need DPDK MLX5 Probe error support

2021-10-03 Thread Thomas Monjalon
03/10/2021 10:10, Jaeeun Ham:
> Hi Thomas,
> 
> Thank you so much for your sincere support.
> I will follow your suggestion and do my best to solve this issue.
> 
> By the way, is it okay to use mlx5_core driver by different applications 
> which have different DPDK versions?
> :12:01.0 (DPDK 20.11 - mlx5_pci: unable to recognize master/representors 
> on the multiple IB)
> :12:01.1 (DPDK 20.11 - mlx5_pci: unable to recognize master/representors 
> on the multiple IB)
> :12:01.2 (DPDK 18.11 - currently used)

I think it should be OK but it is not well tested.




Re: I need DPDK MLX5 Probe error support

2021-10-03 Thread Thomas Monjalon
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> Caught signal 10
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> 
> 
> process start option which is triggered by shell script is as below.
> 
> < start-epp.sh >
> exec /usr/local/bin/ericsson-packet-processor \
>   $(get_dpdk_core_list_parameter) \
>   $(get_dpdk_mem_parameter) \
>   $(get_dpdk_hugepage_parameters) \
>  -d /usr/local/lib/librte_mempool_ring.so \
>  -d /usr/local/lib/librte_mempool_stack.so \
>  -d /usr/local/lib/librte_net_pcap.so \
>  -d /usr/local/lib/librte_net_i40e.so \
>  -d /usr/local/lib/librte_net_mlx5.so \
>  -d /usr/local/lib/librte_event_dsw.so \
>  $DPDK_PCI_OPTIONS \
>  --vdev=event_dsw0 \
>  --vdev=eth_pcap0,iface=midhaul_edk \
>  --vdev=eth_pcap1,iface=backhaul_edk \
>  --file-prefix=container \
>  --log-level lib.eal:debug \
>  dv_flow_en=0 \
>  --log-level=pmd,8 \
>  -- \
>   $(get_epp_mempool_parameter) \
>  
> "--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0"
>  \
>  
> "--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
> 
> BR/Jaeeun
> 
> -Original Message-
> From: Thomas Monjalon  
> Sent: Wednesday, September 29, 2021 8:16 PM
> To: Jaeeun Ham 
> Cc: users@dpdk.org; alia...@nvidia.com; rasl...@nvidia.com; as...@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
> 
> 27/09/2021 02:18, Jaeeun Ham:
> > Hi,
> > 
> > I hope you are well.
> > My name is Jaeeun Ham and I have been working for Ericsson.
> > 
> > I am suffering from enabling MLX5 NIC, so could you take a look at how to 
> > run it?
> > There are two pci address for the SRIOV(vfio) mlx5 nic support but it 
> > doesn't run correctly. (12:01.0, 12:01.1)
> > 
> > I started one process which is running inside the docker process that is on 
> > the MLX5 NIC support host server.
> > The process started to run with following option.
> > -d /usr/local/lib/librte_net_mlx5.so And the docker process has 
> > mlx5 libraries as below.
> 
> Did you try on the host outside of any container?
> 
> Please could you try following commands (variables to be replaced)?
> 
> dpdk-hugepages.py --reserve 1G
> ip link set $netdev netns $container
> docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
>--device /dev/infiniband/ $image
> echo show port summary all | dpdk-testpmd --in-memory -- -i
> 
> 
> 
> > 706a37a35d29:/usr/local/lib # ls -1 | grep mlx librte_common_mlx5.so
> > librte_common_mlx5.so.21
> > librte_common_mlx5.so.21.0
> > librte_net_mlx5.so
> > librte_net_mlx5.so.21
> > librte_net_mlx5.so.21.0
> > 
> > But I failed to run the process with following error. 
> > (MIDHAUL_PCI_ADDR::12:01.0, BACKHAUL_PCI_ADDR::12:01.1)
> > 
> > ---
> > 
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > EAL: Requested device :12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB 
> > devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > EAL: Requested device :12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > 
> > ---
> > 
> > For the success case of pci address 12:01.2, it showed following messages.
> > 
> > ---
> > 
> > EAL: Detected 64 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> > EAL: Probing VFIO support...
> > EAL: VFIO support initialized
> > EAL: PCI device :12:01.2 on NUMA socket 0
> > EAL:   probe driver: 15b3:1016 net_mlx5
> > net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old 
> > OFED/rdma-core version or firmware configuration
> > net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger 
> > than a single mbuf (2048) and scattered mode has not been requested
> > USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 
> > 0
> > 
> > ---
> > 
> > BR/Jaeeun
> 







Re: [dpdk-users] Memory allocation limits

2021-09-29 Thread Thomas Monjalon
29/09/2021 14:43, Burakov, Anatoly:
> From: Thomas Monjalon 
> > 29/09/2021 12:14, Burakov, Anatoly:
> > > From: Thomas Monjalon 
> > > > 26/09/2021 17:52, Mohammad Masumi:
> > > > > I have HP server with 768GB memory 384GB in each Numa but I can't
> > > > > allocate more than 64GB by rte_malloc by changing some parameters
> > > > > in rte_config.h it increased to 128GB How to increase heap size?
> > >
> > > This is intentional. In order to increase the amount of contiguous 
> > > allocation
> > possible to perform in DPDK, you need to adjust the following values in
> > rte_config.h:
> > >
> > > #define RTE_MAX_MEMSEG_PER_LIST 8192
> > > #define RTE_MAX_MEM_MB_PER_LIST 32768
> > > #define RTE_MAX_MEMSEG_PER_TYPE 32768
> > > #define RTE_MAX_MEM_MB_PER_TYPE 65536
> > >
> > > I do not recommend arbitrarily changing them as this is untested, but
> > increasing them proportionally (e.g. multiply all of them by 2 or 4) should 
> > not
> > break anything.
> > 
> > It looks to be something to add in docs, right?
> 
> [[AB]] 
> Yes, which is why I already have 
> 
> http://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#memory-mapping-discovery-and-memory-reservation
> 
> There's a section on "Maximum amount of memory" there.

It says "Normally, these options do not need to be changed."
Would it be meaningful to add how we can increase (by multiplying all of them)?




Re: I need DPDK MLX5 Probe error support

2021-09-29 Thread Thomas Monjalon
27/09/2021 02:18, Jaeeun Ham:
> Hi,
> 
> I hope you are well.
> My name is Jaeeun Ham and I have been working for Ericsson.
> 
> I am suffering from enabling MLX5 NIC, so could you take a look at how to run 
> it?
> There are two pci address for the SRIOV(vfio) mlx5 nic support but it doesn't 
> run correctly. (12:01.0, 12:01.1)
> 
> I started one process which is running inside the docker process that is on 
> the MLX5 NIC support host server.
> The process started to run with following option.
> -d /usr/local/lib/librte_net_mlx5.so
> And the docker process has mlx5 libraries as below.

Did you try on the host outside of any container?

Please could you try following commands (variables to be replaced)?

dpdk-hugepages.py --reserve 1G
ip link set $netdev netns $container
docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
   --device /dev/infiniband/ $image
echo show port summary all | dpdk-testpmd --in-memory -- -i



> 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> librte_common_mlx5.so
> librte_common_mlx5.so.21
> librte_common_mlx5.so.21.0
> librte_net_mlx5.so
> librte_net_mlx5.so.21
> librte_net_mlx5.so.21.0
> 
> But I failed to run the process with following error. 
> (MIDHAUL_PCI_ADDR::12:01.0, BACKHAUL_PCI_ADDR::12:01.1)
> 
> ---
> 
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
> EAL: Requested device :12:01.0 cannot be used
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
> EAL: Requested device :12:01.1 cannot be used
> EAL: Bus (pci) probe failed.
> 
> ---
> 
> For the success case of pci address 12:01.2, it showed following messages.
> 
> ---
> 
> EAL: Detected 64 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device :12:01.2 on NUMA socket 0
> EAL:   probe driver: 15b3:1016 net_mlx5
> net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old 
> OFED/rdma-core version or firmware configuration
> net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger than a 
> single mbuf (2048) and scattered mode has not been requested
> USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 0
> 
> ---
> 
> BR/Jaeeun




Re: [dpdk-users] Memory allocation limits

2021-09-29 Thread Thomas Monjalon
29/09/2021 12:14, Burakov, Anatoly:
> From: Thomas Monjalon 
> > 26/09/2021 17:52, Mohammad Masumi:
> > > Hi
> > >
> > > I have HP server with 768GB memory 384GB in each Numa but I can't
> > > allocate more than 64GB by rte_malloc by changing some parameters in
> > > rte_config.h it increased to 128GB How to increase heap size?
> > 
> > adding people Cc to help
> > 
> 
> Hi,
> 
> This is intentional. In order to increase the amount of contiguous allocation 
> possible to perform in DPDK, you need to adjust the following values in 
> rte_config.h:
> 
> #define RTE_MAX_MEMSEG_PER_LIST 8192
> #define RTE_MAX_MEM_MB_PER_LIST 32768
> #define RTE_MAX_MEMSEG_PER_TYPE 32768
> #define RTE_MAX_MEM_MB_PER_TYPE 65536
> 
> I do not recommend arbitrarily changing them as this is untested, but 
> increasing them proportionally (e.g. multiply all of them by 2 or 4) should 
> not break anything.

It looks to be something to add in docs, right?




Re: [dpdk-users] can we reserve hugepage and not release

2021-09-29 Thread Thomas Monjalon
29/09/2021 12:16, Burakov, Anatoly:
> From: Thomas Monjalon 
> > 18/09/2021 04:37, Jiany Wu:
> > > Hello,
> > >
> > > I have a scenario where I need to start and stop the container many times
> > > while using hugepages. But after several container starts and stops,
> > > the hugepages can no longer be reserved.
> > > Hugepage size is 2MB, and HW only support 2MB, can't support 1GB.
> > > Is there anyway to make sure the hugepage is still kept continuous?
> > > Thanks indeed.
> > 
> > Interesting question.
> > I think we need to address it in the DPDK documentation.
> > 
> > Anatoly, Stephen, Bruce, any advice please?
> > 
> 
> Hi,
> 
> From description, I don't quite understand what's the issue here. Is the 
> problem about "contiguousness of memory", or is it about inability to reserve 
> more hugepages?

I think the issue is that sometimes some pages are not properly released,
so we cannot reserve them again.
That's something I experienced myself.
Any trick to reset hugepages state?

> How are hugepages assigned to your container?
> Have you tried using --in-memory mode?





Re: Issues Cross compiling DPDK

2021-09-29 Thread Thomas Monjalon
27/09/2021 17:45, Ginés García Avilés:
> Hi,
> I'm trying to cross-compile DPDK using the information provided
> here ("Installing DPDK Using the meson build system", Data Plane
> Development Kit 21.11.0-rc0 documentation) but, after
> running "meson build --cross-file config/defconfig_x86_64-native-linux-icc"
> I'm getting the following error:
> 
>   - configparser.MissingSectionHeaderError: File contains no section
> headers.
>   - dpdk/config/defconfig_x86_64-native-linux-icc', line: 6
> 'CONFIG_RTE_MACHINE="native"\n'
> 
> 1st workaround: Compiling directly at the destination machine with "make
> install T=x86_64-native-linux-icc" works.
> 
> Any ideas about how to solve it?

Looks like you are not using a recent DPDK version.




Re: [dpdk-users] what's the cache size of rte_mempool_create()?

2021-09-29 Thread Thomas Monjalon
+Cc mempool maintainers

08/09/2021 11:18, topperxin:
> Hi list,
>  A question about the cache_size value of the rte_mempool_create() 
> function; the definition of this function is as below:
> 
> 
> struct rte_mempool *
> 
> rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> 
>unsigned cache_size, unsigned private_data_size,
> 
>rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> 
>rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
> 
>int socket_id, unsigned flags);
> 
> 
> 
> 
> 
>  My question is: what does the cache_size value mean? What is the difference 
> between setting cache_size = 0 and cache_size = 512?
>  I found in the DPDK 20.11 documentation that setting the cache size to 0 
> can be useful to avoid losing objects in the cache. I can't understand this 
> point: does it mean
>  that if we set the cache size to non-zero, we run the risk that some 
> objects will be lost? Right?
> 
> 
>  Thanks for your tips.
> 
> 
>  BR.






Re: [dpdk-users] pktgen-dpdk failed to load LUA script

2021-09-29 Thread Thomas Monjalon
+Cc the maintainer

10/09/2021 09:34, Kevin Chen (陳奕儒):
> Dear DPDK Community,
> 
> I'm trying to execute the LUA script with the following command.
> # usr/local/bin/pktgen -l 0-2 -- -T -P -m 1.0,2.1 -f test/hello-world.lua
> 
> But the LUA script is not executed with this message.
> >>> User State for CLI not set for Lua
> 
> Is there anything I left to set?
> 
> 
> 
> -- Pktgen 21.03.1 (DPDK 21.11.0-rc0)  Powered by DPDK  (pid:17619) 
> 
> 

> 
> 
> ** Version: DPDK 21.11.0-rc0, Command Line Interface without timers
> Pktgen:/>
> Executing 'test/hello-world.lua'
> >>> User State for CLI not set for Lua
> 
> 
> Regards,
> Kevin
> 







Re: [dpdk-users] pktgen not showing any capture.

2021-09-29 Thread Thomas Monjalon
+Cc dpdk-pktgen maintainer

10/09/2021 12:02, Filip Janiszewski:
> Hi,
> 
> While attempting to capture with pktgen, I see the counter
> rx_steer_missed_packets increasing in ethtool and nothing being captured.
> 
> in pktgen 'page stats' is always empty and 'page xstats' shows something
> is received but i guess nothing is delivered to the queues.
> 
> How should pktgen be configured to steer packets properly?
> 
> Thanks






Re: [dpdk-users] MLX ConnectX-4 Discarding packets

2021-09-29 Thread Thomas Monjalon
Great, thanks for the update!


12/09/2021 11:32, Filip Janiszewski:
> Alright, nailed it down to a wrong preferred PCIe device in the BIOS
> configuration; it had not been changed after the NIC was moved to
> another PCIe slot.
> 
> Now the EPYC is going really great, getting 100Gbps rate easily.
> 
> Thank
> 
> On 9/11/21 4:34 PM, Filip Janiszewski wrote:
> > I wanted just to add, while running the same exact testpmd on the other
> > machine I won't get a single miss with the same patter traffic:
> > 
> > .
> > testpmd> stop
> > Telling cores to stop...
> > Waiting for lcores to finish...
> > 
> >   --- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0
> > ---
> >   RX-packets: 61711939   TX-packets: 0  TX-dropped: 0
> > 
> > 
> >   --- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
> > ---
> >   RX-packets: 62889424   TX-packets: 0  TX-dropped: 0
> > 
> > 
> >   --- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2
> > ---
> >   RX-packets: 61914199   TX-packets: 0  TX-dropped: 0
> > 
> > 
> >   --- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3
> > ---
> >   RX-packets: 63484438   TX-packets: 0  TX-dropped: 0
> > 
> > 
> >   -- Forward statistics for port 0
> > --
> >   RX-packets: 25000  RX-dropped: 0 RX-total: 25000
> >   TX-packets: 0  TX-dropped: 0 TX-total: 0
> > 
> > 
> > 
> >   +++ Accumulated forward statistics for all
> > ports+++
> >   RX-packets: 25000  RX-dropped: 0 RX-total: 25000
> >   TX-packets: 0  TX-dropped: 0 TX-total: 0
> > 
> > 
> > .
> > 
> > In the lab I've the EPYC connected directly to the Xeon using a 100GbE
> > link, both same RHL8.4 and same DPDK 21.02, running:
> > 
> > .
> > ./dpdk-testpmd -l 21-31 -n 8 -w 81:00.1  -- -i --rxq=4 --txq=4
> > --burst=64 --forward-mode=rxonly --rss-ip --total-num-mbufs=4194304
> > --nb-cores=4
> > .
> > 
> > and sending from the other end with pktgen, the EPYC loss tons of
> > packets (see my previous email), the Xeon don't loss anything.
> > 
> > *Confusion!*
> > 
> > On 9/11/21 4:19 PM, Filip Janiszewski wrote:
> >> Thanks,
> >>
> >> I knew that document and we've implemented many of those settings/rules,
> >> but perhaps there's one crucial I've forgot? Wonder which one.
> >>
> >> Anyway, increasing the number of queues hurts the performance; while
> >> sending 250M packets over a 100GbE link to an Intel E810-CQDA2 NIC
> >> mounted on the EPYC Milan server, I see:
> >>
> >> .
> >> 1 queue, 30Gbps, ~45Mpps, 64B frame = imiss: 54,590,111
> >> 2 queue, 30Gbps, ~45Mpps, 64B frame = imiss: 79,394,138
> >> 4 queue, 30Gbps, ~45Mpps, 64B frame = imiss: 87,414,030
> >> .
> >>
> >> With DPDK 21.02 on RHL8.4. I can't observe this situation while
> >> capturing from my Intel server where increasing the queues leads to
> >> better performance (while with the test input set I drop with one queue,
> >> I do not drop anymore with 2 on the Intel server.)
> >>
> >> A customer with a brand new EPYC Milan server in his lab observed
> >> this scenario as well, which is a bit of a worry, but again it might be some
> >> config/compilation issue we need to deal with?
> >>
> >> BTW, the same issue can be reproduced with testpmd, using 4 queues and
> >> the same input data set (250M 64-byte frames at 30Gbps):
> >>
> >> .
> >> testpmd> stop
> >> Telling cores to stop...
> >> Waiting for lcores to finish...
> >>
> >>   --- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0
> >> ---
> >>   RX-packets: 41762999   TX-packets: 0  TX-dropped: 0
> >>
> >>
> >>   --- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
> >> ---
> >>   RX-packets: 40152306   TX-packets: 0  TX-dropped: 0
> >>
> >>
> >>   --- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2
> >> ---
> >>   RX-packets: 41153402   TX-packets: 0  TX-dropped: 0
> >>
> >>
> >>   --- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3
> >> ---
> >>   RX-packets: 38341370   TX-packets: 0  TX-dropped: 0
> >>
> >>
> >>   -- Forward statistics for port 0
> >> --
> >>   RX-packets: 161410077  RX-dropped: 88589923  RX-total: 250000000
> >>   TX-packets: 0  TX-dropped: 0 TX-total: 0
> >>
> >> 
> >> .
> >>
> >> .
> >> testpmd> show port xstats 0
> >> ## NIC extended statistics for port 0
> >> rx_good_packets: 161410081
> >> tx_good_packets: 0
> >> rx_good_bytes: 9684605284
> 

Re: [dpdk-users] SW Turbo Poll Mode Driver

2021-09-29 Thread Thomas Monjalon
+Cc maintainer

14/09/2021 12:58, Ginés García Avilés:
> Hi all,
> After following the steps listed here (3. SW Turbo Poll Mode Driver — Data
> Plane Development Kit 21.08.0 documentation on dpdk.org), using the specific
> versions of
> DPDK and FlexRAN, I'm facing an error while trying to run one of the bbdev
> tests:
>   - command:
> > python2 test-bbdev.py
> -e="--vdev=baseband_turbo_sw,socket_id=0,max_nb_queues=8" -c validation -v
> turbo_dec_default.data
>   - Error:
> > "Device 0 (baseband_turbo_sw) does not support specified capabilities"
> 
> which I think is due to an incorrect linkage of DPDK and FlexRAN. I have
> checked all the environment variables pointing to the different
> components and everything seems to be correct.
> 
> Any suggestions about how to solve this problem?
> 
> Thanks a lot for your help,
> Ginés.





Re: [dpdk-users] EAL: Failed to attach device on primary process error in dpdk 21.08

2021-09-29 Thread Thomas Monjalon
15/09/2021 16:08, animesh tripathi:
> Hi Team,
> 
> I am migrating from dpdk 19.11 to dpdk 21.08 for my application and I am
> getting the following error while executing the binary:
> 
> *EAL: Failed to attach device on primary process*
> 
> Can anyone please help me out here in understanding the real reason for
> this error and how to resolve this issue.

We need more info: command line, other logs, etc.
Do you reproduce with testpmd application?
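For reference, a minimal testpmd invocation for such a check might look like the sketch below; the core list and PCI address are placeholders to adjust for the actual system:

```shell
# Placeholders: core list 0-1, memory channels 4, PCI address 0000:03:00.0.
# If testpmd attaches the device without the "Failed to attach" error,
# the problem is likely in the application's EAL arguments, not in DPDK.
dpdk-testpmd -l 0-1 -n 4 -a 0000:03:00.0 -- -i
```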




Re: [dpdk-users] [Broadcom BNXT] Link event not working

2021-09-29 Thread Thomas Monjalon
17/09/2021 14:39, Antoine POLLENUS:
> Hi,
> 
> I'm experiencing some issues with a Broadcom P225P and DPDK in 19.11.3 
> version.
> 
> We register a handler to take care of the link status as specified in the 
> DPDK documentation.
> 
> But with this specific board the event is never triggered. I tested by 
> plugging and unplugging the SFP28 cable.
> 
> The issue is kind of problematic because our code relies on this event.
> 
> I've tested with an intel xxv710-da2 and with that board no problem.
> 
> Is it normal that this link status event is not working on Broadcom ?
> 
> Is it fixed in an higher version ?

Possible, please try testpmd with a recent DPDK.
If it doesn't work on DPDK 21.08, then you can open a bug in bugs.dpdk.org.

Thanks




Re: [dpdk-users] DPDK CPU selection criteria?

2021-09-29 Thread Thomas Monjalon
17/09/2021 17:23, Jared Brown:
> Hello everybody!
> 
> Is there some canonical resource or at least a recommendation on how to 
> evaluate different CPUs for suitability for use with DPDK?

There are some performance reports with details:
https://core.dpdk.org/perf-reports/

It is difficult to give any generic info
because it depends on CPU architecture, use case, and application.
Please keep in mind there are hundreds of API functions so we cannot
have clues for all combinations.

> My use case is software routers (for example DANOS, 6WIND, TNSR), so I am 
> mainly interested in forwarding performance in terms of Mpps.
> 
> What I am looking for is to develop some kind of heuristics to evaluate CPUs 
> in terms of $/Mpps without having to purchase hundreds of SKUs and running 
> tests on them.
> 
> The official DPDK documentation[0] states thus:
> 
> "7.1. Hardware and Memory Requirements
> 
> For best performance use an Intel Xeon class server system such as Ivy 
> Bridge, Haswell or newer."
> 
> This is somewhat... vague.

This is only for Intel platforms.
DPDK is supported on AMD and Arm CPUs as well.

Are you interested only in Intel CPU?
All your questions below are interesting but are very hardware-specific.
+Cc few Intel engineers for specific questions.

> I suppose one could take [1] as a baseline, which states on page 2 that an 
> Ivy Bridge Xeon E3-1230 V2 is able to forward unidirectional flows at 
> linerate using 10G NICs at all frequencies above 1.6 GHz and bidirectional 
> flows at linerate using 10G NICs at 3.3 GHz.
> 
> This however pales compared with [2] that on page 23 shows that a 3rd 
> Generation Scalable Xeon 8380 manages to very nearly saturate a 100G NIC at 
> all packet sizes.
> 
> As there is almost a magnitude in difference in forwarding performance per 
> core, you can perhaps understand that I am somewhat at a loss when trying to 
> gauge the performance of a particular CPU model.
> 
> Reading [3] one learns that several aspects of the CPU affect the forwarding 
> performance, but very little light is shed on how much each feature on its 
> own contributes. On page 172 one learns that CPU frequency has a linear 
> impact on the performance. This is borne out by [1], but does not take into 
> consideration inter-generational gaps as witnessed by [2].
> 
> This begs the question, what are those inter-generational differences made of?
> 
> - L3 cache latency (p. 54) as an upper limit on Mpps. Do newer generations 
> have decidedly lower cache latencies and is this the defining performance 
> factor?
> 
> - Direct Data I/O (p. 69)? Is DDIO combined with lower L3 cache latency a 
> major force multipler? Or is prefetch sufficient to keep caches hot? This is 
> somewhat confusing, as [3] states on page 62 that DPDK can get one core to 
> handle up to 33 Mpps, on average. On one hand this is the performance [1] 
> demonstrated the better part of a decade earlier, but on the other hand [2] 
> demonstrates a magnitude larger performance per core.
> 
> - New instructions? On page 171 [3] notes that the AVX512 instruction can 
> move 64 bytes per cycle which [2] indicates has an almost 30% effect on Mpps 
> on page 22. How important is Transactional Synchronization Extensions (TSX) 
> support (see page 119 of [3]) for forwarding performance?
> 
> - Other factors are mentioned, such as memory frequency, memory size, memory 
> channels and cache sizes, but nothing is said how each of these affect 
> forwarding performance in terms of Mpps. The official documentation [0] only 
> states that: "Ensure that each memory channel has at least one memory DIMM 
> inserted, and that the memory size for each is at least 4GB. Note: this has 
> one of the most direct effects on performance."
> 
> - Turbo boost and hyperthreading? Are these supposed to be enabled or 
> disabled? I am getting conflicting information.  Results listed in [2] show 
> increased Mpps by enabling, but [1] notes that they were disabled due to them 
> introducing measurement artifacts. I recall some documentation recommending 
> disabling, since enabling increases latency and variance.
> 
> - Xeon D, W, E3, E5, E7 and Scalable. Are these different processor siblings 
> observably different from each other from the perspective of DPDK? Atoms 
> certainly are as [3] notes on page 57, because they only perform at 50% 
> compared to an equivalent Xeon core. A reason isn't given, but perhaps it is 
> due to the missing L3 cache?
> 
> - Something entirely else? Am I missing something completely obvious that 
> explains the inter-generational differences between CPUS in terms of 
> forwarding performance?
> 
> 
> So, given all this, how can I perform the mundane task of comparing for 
> example the Xeon W-1250P with the Xeon W-1350P?
> 
> The 1250 is older, but has a larger L2 cache and a higher frequency.
> 
> The 1350 is newer, uses faster memory, has a higher max memory bandwidth, 
> PCIe4.0, more PCI lanes and AVX-512.
> 
> Or 

Re: [dpdk-users] can we reserve hugepage and not release

2021-09-29 Thread Thomas Monjalon
18/09/2021 04:37, Jiany Wu:
> Hello,
> 
> I met a scenario that, need to start and stop the container many times for
> the hugepage. But after several times container start and stop, the
> hugepage is not able to reserve.
> Hugepage size is 2MB, and HW only support 2MB, can't support 1GB.
> Is there anyway to make sure the hugepage is still kept continuous? Thanks
> indeed.

Interesting question.
I think we need to address it in the DPDK documentation.

Anatoly, Stephen, Bruce, any advice please?




Re: [dpdk-users] TX/RX adapter running on the same core problem

2021-09-29 Thread Thomas Monjalon
+Cc eventdev maintainers

26/09/2021 07:21, Jaeeun Ham:
> Hi,
> 
> I hope you are well.
> 
> During the traffic test, TX adapter showed starvation due to Rx adapter 
> processing on the same dpdk-core 03 and dropped 41412 packets after 
> 154655 tx_retry.
> So, I expect I have to assign TX/RX adapter on each dpdk-core to prevent 
> starvation and packet drop.
> 
> Are those tx_retry and tx_dropped packet due to RX adapter packet processing?
> Is it possible to separate TX/RX adapter into different dpdk-core?
> 
> [ TX adapter stats ]
> tx_retry: 154655
> tx_packets: 438771
> tx_dropped: 41412
> 
> 
> < eventdev_dump.txt >
> [ Services ]
> qdispatch_service_id: 0
> txa_service_id  : 1
> rxa_service_id  : 2
> edk_service_id  : 3
> timer_service_id: 4
> timer_adapter_service_id: 5
> 
> Service Cores Summary
> dpdk-core 01: PDCP Fastpath Plugin control function thread
> 
> dpdk-core   qu-dispat   txadapter   rxadapter   edk         tmr         tmr-adapt
>    03               0   770574949   770574948           0           0           0
>    05       816421027           0           0   816421571   816421571   816421571
>    07       896307282           0           0   896307828   896307828           0
>    09       899213296           0           0   899213687   899213687           0
>    11       889323680           0           0   889323871   889323871           0
>    13       897300534           0           0   897300686   897300686           0
>    15       891124716           0           0   891124811   891124811           0
>    17       896336177           0           0   896336212   896336211           0
>    19       895461845           0           0   895461846   895461846           0
> 
> [ Event dev port 0 ~ 10 xstats ]
> - packet flow: f1u -> s1u one way
>   midhaul_ker -> midhaul_edk -> backhaul_edk -> backhaul_ker
>   port 09 Rx adapter -> port 07 -> port 06 -> port 08 -> port 09 Tx adapter
> 
> port 00: dpdk-core 05, worker_core // queue-dispatcher, edk, timer
> port 01: dpdk-core 07, worker_core // queue-dispatcher, edk, timer
> port 02: dpdk-core 09, worker_core // queue-dispatcher, edk, timer
> port 03: dpdk-core 11, worker_core // queue-dispatcher, edk, timer
> port 04: dpdk-core 13, worker_core // queue-dispatcher, edk, timer
> port 05: dpdk-core 15, worker_core // queue-dispatcher, edk, timer
> port 06: dpdk-core 17, worker_core // queue-dispatcher, edk, timer
> port 07: dpdk-core 19, worker_core // queue-dispatcher, edk, timer
> port 08: TX adapter, dpdk-core 03  // packet transmission
> port 09: RX adapter, dpdk-core 03  // packet receiving
> port 10: Event timer adapter   // tmr-adapter
> 
> BR/Jaeeun







Re: [dpdk-users] Memory allocation limits

2021-09-29 Thread Thomas Monjalon
26/09/2021 17:52, Mohammad Masumi:
> Hi
> 
> I have an HP server with 768GB memory (384GB on each NUMA node), but I can't
> allocate more than 64GB with rte_malloc. By changing some parameters in
> rte_config.h it increased to 128GB.
> How can I increase the heap size?

adding people Cc to help




Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex

2021-09-29 Thread Thomas Monjalon
29/09/2021 07:26, Anna A:
> Hi,
> 
> I'm trying to use rte_flow_action_type_rss to distribute packets all of the
> same flow type among multiple Rx queues on a single port. Mellanox
> ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It doesn't
> seem to work and all the packets are sent only to a single queue.

Adding mlx5 maintainers Cc.

> My queries are :
> 1. What am I missing or doing differently?
> 2. Should I be doing any other configurations in rte_eth_conf or
> rte_eth_rxmode?

Do you see any error log?
For info, you can change log level with --log-level.
Experiment options with '--log-level help' in recent DPDK.
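As a sketch (the application name and PCI address are placeholders, and component names can differ between versions — list them with '--log-level help'), raising the mlx5 PMD log level looks like:

```shell
# Raise the mlx5 PMD log level to debug for one run.
./my_app -l 0-3 -a 0000:03:00.0 --log-level pmd.net.mlx5:debug -- ...
# List the available log components and levels (recent DPDK).
./my_app --log-level help
```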

> My rte_flow configurations:
> 
> struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> struct rte_flow_attr attr;
> struct rte_flow_item_eth eth;
> struct rte_flow *flow = NULL;
> struct rte_flow_error error;
> int ret;
> int no_queues =2;
> uint16_t queues[2];
> struct rte_flow_action_rss rss;
> memset(&error, 0x22, sizeof(error));
> memset(&attr, 0, sizeof(attr));
> attr.egress = 0;
> attr.ingress = 1;
> 
> memset(&pattern, 0, sizeof(pattern));
> memset(&action, 0, sizeof(action));
> /* setting the eth to pass all packets */
> pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> pattern[0].spec = &eth;
> pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> 
> rss.types = ETH_RSS_IP;
> rss.level = 0;
> rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> rss.key_len =0;
> rss.key = NULL;
> rss.queue_num = no_queues;
> for (int i= 0; i < no_queues; i++){
> queues[i] = i;
> }
> rss.queue = queues;
> action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> action[0].conf = &rss;
> 
> action[1].type = RTE_FLOW_ACTION_TYPE_END;
> 
> ret = rte_flow_validate(portid, &attr, pattern, action, &error);
>  if (ret < 0) {
>   printf( "Flow validation failed %s\n", error.message);
> return;
> }
> flow = rte_flow_create(portid, &attr, pattern, action, &error);
> 
> if (flow == NULL)
> printf("Cannot create flow\n");
> 
> And Rx queues configuration:
> for (int j = 0; j < no_queues; j++) {
> 
>  int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> rte_eth_dev_socket_id(portid),
> NULL, mbuf_pool);
>  if (ret < 0) {
>   printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret, (unsigned)
> portid);
> exit(1);
>}
> }
> 
> Thanks
> Anna





Re: [dpdk-users] [DISCUSSION] code snippet documentation

2021-07-22 Thread Thomas Monjalon
15/07/2021 09:01, Asaf Penso:
> Hello DPDK community,
> 
> I would like to bring up a discussion about a way to have code snippets as an 
> example for proper usage.
> The DPDK tree is filled with great pieces of code that are well documented 
> and maintained in high quality.
> I feel we are a bit behind when we talk about usage examples.
> 
> One way, whenever we implement a new feature, is to extend one of the test-* 
> under the "app" folder.
> This, however, provides means to test but doesn't provide a good usage 
> example.
> 
> Another way is to check the content of the "example" folder and whenever we 
> have a BIG new feature it seems like a good place.
> This, however, doesn't provide a good option when we talk about small 
> features.
> If, for example, we extend rte_flow with an extra action then providing a 
> full-blown example application is somewhat an entry barrier.
> 
> A third option could be to document it in one of the .rst files we have.
> Obviously, this requires high maintenance and no option to assure it still 
> compiles.
> 
> I'd like to propose another approach that will address the main two issues: 
> remove the entry barrier and assure compilation.
> In this approach, inside the "examples" folder we'll create another folder 
> for "snippets".
> Inside "snippets" we'll have several files per category, for example, 
> rte_flow_snippets.c
> Each .c file will include a main function that calls the different use cases 
> we want to give as an example.
> The purpose is not to generate traffic nor read rx/tx packets from the DPDK 
> ports. 
> The purpose is to have a good example that compiles properly.
> 
> Taking the rte_flow_snippets.c as an example its main function would look 
> like this:
> 
> int
> main(int argc, char **argv)
> {
>   rte_flow_snippet_match_5tuple_and_drop();
>   rte_flow_snippet_match_geneve_ope_and_rss();
>   ...
>   Return 0;
> }

I think we need to have a policy or justification about which snippets
are worth having.
My thought is to avoid creating snippets which have no other value
than showing a function call.
I think there is a value if the context is not simple.

Please could you provide a more complex example?




[dpdk-users] new IRC channel

2021-07-16 Thread Thomas Monjalon
As agreed in a Technical Board meeting, the preferred IRC channel
for the DPDK project moved from freenode to Libera.Chat:
https://mails.dpdk.org/archives/dev/2021-July/214662.html

The website is updated:
https://core.dpdk.org/contribute/

There is a good guide to start with Libera.Chat:
https://libera.chat/guides/connect

Let's meet on Libera.Chat channel #DPDK
for quick questions, synchronization, or just to say hello!




Re: [dpdk-users] Question regarding DPDK on AWS

2021-07-01 Thread Thomas Monjalon
30/06/2021 16:40, Antonis Christodoulou:
> Hello all,

Hello,

> my name is Antonis Christodoulou, and I am a new user of DPDK. I am not sure 
> this is the right place to ask a usage question, so please feel free to 
> redirect me to someone more appropriate for such questions as needed.

Your question below looks related to the TCP stack in F-stack, not DPDK itself.
I would recommend to get in touch with the F-Stack project:
https://github.com/F-Stack/f-stack/issues


> I am working on AWS, and I have set up F-stack over DPDK, successfully 
> connecting to an address within my private VPC, using a client socket. On the 
> other side I am running a simple echo server with  ncat -l 2001 -k -c 'xargs 
> -n1 echo' -vvv. However, when I just change the address to some global IP, 
> like the one used by www.example.com, i.e.
> 93.184.216.34 (I used port 80), then I am not getting any socket connection.
> 
> Would you know why this is happening? I have not set up any veth0 interface 
> yet for the DPDK NIC, I am not sure this is needed for connectivity.
> 
> Regards,
> Antonis







Re: [dpdk-users] Unable to setup hugepages

2021-06-01 Thread Thomas Monjalon
31/05/2021 17:35, Gabriel Danjon:
> Hello,
> 
> After successfully installed the DPDK 20.11 on my Centos 8-Stream 
> (minimal), I am trying to configure the hugepages but encounters a lot 
> of difficulties.

There's some confusing info below.
Let's forget all the details and focus on simple things:
1/ use dpdk-hugepages.py
2/ choose one page size (2M or 1G)
3/ check which node requires memory with lstopo
4/ don't be confused with warnings about unused page size
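A sketch of steps 1/ and 2/ with the helper script shipped in DPDK's usertools/ directory (option spelling may vary slightly between releases; check dpdk-hugepages.py --help):

```shell
# Reserve and mount 4x 1G hugepages in one step (run as root).
dpdk-hugepages.py --pagesize 1G --setup 4G
# Show what is reserved per NUMA node and where hugetlbfs is mounted.
dpdk-hugepages.py --show
```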



> I am trying to reserve 4 hugepages of 1GB.
> 
> 
> Here the steps I have done following the documentation 
> (https://doc.dpdk.org/guides-20.11/linux_gsg/sys_reqs.html):
> 
> Additional information about meminfo :
> 
> cat /proc/meminfo
> MemTotal:   32619404 kB
> MemFree:27331024 kB
> MemAvailable:   27415524 kB
> Buffers:4220 kB
> Cached:   328628 kB
> SwapCached:0 kB
> Active:   194828 kB
> Inactive: 210156 kB
> Active(anon):   1744 kB
> Inactive(anon):83384 kB
> Active(file): 193084 kB
> Inactive(file):   126772 kB
> Unevictable:   0 kB
> Mlocked:   0 kB
> SwapTotal:  16474108 kB
> SwapFree:   16474108 kB
> Dirty: 0 kB
> Writeback: 0 kB
> AnonPages: 72136 kB
> Mapped:84016 kB
> Shmem: 12992 kB
> KReclaimable: 211956 kB
> Slab: 372852 kB
> SReclaimable: 211956 kB
> SUnreclaim:   160896 kB
> KernelStack:9120 kB
> PageTables: 6852 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit:30686656 kB
> Committed_AS: 270424 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:   0 kB
> VmallocChunk:  0 kB
> Percpu:28416 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 10240 kB
> ShmemHugePages:0 kB
> ShmemPmdMapped:0 kB
> FileHugePages: 0 kB
> FilePmdMapped: 0 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:1048576 kB
> Hugetlb: 4194304 kB
> DirectMap4k:  225272 kB
> DirectMap2M: 4919296 kB
> DirectMap1G:30408704 kB
> 
> 1 Step follow documentation
> 
> bash -c 'echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
> 
> As we're working on a NUMA machine we do this too. (We even do the 
> previous step because without it, it provides more errors)
> 
> bash -c 'echo 2048 > 
> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
> bash -c 'echo 2048 > 
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages'
> 
> mkdir /mnt/huge
> mount -t hugetlbfs pagesize=1GB /mnt/huge
> 
> bash -c 'echo nodev /mnt/huge hugetlbfs pagesize=1GB 0 0 >> /etc/fstab'
> 
> So here the result of my meminfo (cat /proc/meminfo | grep Huge) :
> 
> AnonHugePages: 10240 kB
> ShmemHugePages:0 kB
> FileHugePages: 0 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:1048576 kB
> Hugetlb: 4194304 kB
> 
> It looks strange that there is no total and free hugepages.
> 
> I tried the dpdk-testpmd using the DPDK documentation : dpdk-testpmd -l 
> 0-3 -n 4 -- -i --nb-cores=2
> 
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
> found for that size
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: FATAL: Cannot get hugepage information.
> EAL: Cannot get hugepage information.
> EAL: Error - exiting with code: 1
>Cause: Cannot init EAL: Permission denied
> 
> 
> So I checked in the /mnt/huge to look if files had been created (ls 
> /mnt/huge/ -la) : Empty folder
> 
> Then I checked if my folder was correctly mounted : mount | grep huge
> pagesize=1GB on /mnt/huge type hugetlbfs 
> (rw,relatime,seclabel,pagesize=1024M)
> 
> Then I tried the helloworld example (make clean && make && 
> ./build/helloworld):
> 
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
> found for that size
> EAL: No free 1048576 kB hugepages reported on node 0
> EAL: No free 1048576 kB hugepages reported on node 1
> EAL: No available 1048576 kB hugepages reported
> EAL: FATAL: Cannot get hugepage information.
> EAL: Cannot get hugepage information.
> PANIC in main():
> Cannot init EAL
> 5: [./build/helloworld() [0x40079e]]
> 4: 

Re: [dpdk-users] Enable to install DPDK on Centos 8-Stream

2021-05-27 Thread Thomas Monjalon
27/05/2021 18:10, Gabriel Danjon:
> Hello,
> 
> I am having difficulties to compile and install DPDK from sources on the 
> latest Centos 8-Stream.

Did you compile and install DPDK successfully?
Where is it installed?

> 
> After having installed the required drivers and libraries, following the 
> documentation and the DPDK build (meson build && cd build && ninja && 
> ninja install && ldconfig), I tried to compile the helloworld example 
> without success:
> 'Makefile:12: *** "no installation of DPDK found".  Stop.'
> 
> 
> Please find attached to this mail some logs.

The log is way too long to be read.
Please copy only what is relevant.

> Could you provide help please ?

It looks to be basic issue with library installation.
Did you read the doc?
http://doc.dpdk.org/guides/linux_gsg/build_dpdk.html

Especially this note:
"
On some linux distributions, such as Fedora or Redhat, paths in /usr/local are 
not in the default paths for the loader. Therefore, on these distributions, 
/usr/local/lib and /usr/local/lib64 should be added to a file in 
/etc/ld.so.conf.d/ before running ldconfig.
"





Re: [dpdk-users] All links down with Chelsio T6 NICs

2021-04-10 Thread Thomas Monjalon
+Cc Chelsio maintainer

09/04/2021 19:24, Danushka Menikkumbura:
> Hello,
> 
> When I run testpmd on a system with 2 two-port Chelsio T6 NICs, the link
> status is down for all four ports. I use igb_uio as the kernel driver.
> Below is my testpmd commandline and the startup log.
> 
> sudo ./build/app/dpdk-testpmd -l 0,1,2,5 -b 81:00.0 -- -i
> 
> EAL: Detected 20 lcore(s)
> EAL: Detected 4 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available 1048576 kB hugepages reported
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :05:00.4 (socket 0)
> rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> EAL: Probe PCI driver: net_cxgbe (1425:6408) device: :0b:00.4 (socket 0)
> rte_cxgbe_pmd: Maskless filter support disabled. Continuing
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=2
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 00:07:43:5D:4E:60
> Configuring Port 1 (socket 0)
> Port 1: 00:07:43:5D:4E:68
> Configuring Port 2 (socket 0)
> Port 2: 00:07:43:5D:51:00
> Configuring Port 3 (socket 0)
> Port 3: 00:07:43:5D:51:08
> Checking link statuses...
> Done
> testpmd>
> 
> Your help is very much appreciated.

Please run the command "show port summary all"





Re: [dpdk-users] [dpdk-ci] Integration of DPDK into Bazel

2021-03-11 Thread Thomas Monjalon
11/03/2021 01:17, Jinkun Geng:
> Do you mean all components must be built inside bazel?
> >> Sort of. We have a project that is built on bazel. Now, we need to use the 
> >> core functions of DPDK to replace the network primitives in our project, 
> >> so that we can improve the performance of our project.

You don't need to compile DPDK inside your project.
Just install the package or compile it externally.

> You are not able to call meson/ninja commands from bazel?
> >> I haven't found any examples using meson inside bazel. According to my 
> >> understanding, Bazel and meson are two parallel building systems and I 
> >> haven't seen anyone could use them together.
> 
> How do you do with other libraries? Does bazel usually reimplement what is 
> packaged with autotools?
> >> That is the reason why https://github.com/bazelment exists. If some 
> >> libraries are popular and do not support bazel, then these guys will help 
> >> generate some modified version of these libraries, so that developers can 
> >> integrate the library into bazel. It is not completely reimplementation, 
> >> but it indeed needs much extraction to adapt the previous library so that 
> >> it can be used in bazel.
> Unfortunately, these guys stop supporting DPDK atop bazel now.

That's crazy trying to replace the build system of projects.
Even if you find new volunteers, it will not be always up-to-date.

You should look for how to link an existing library from bazel.


> 
From: Thomas Monjalon 
> 11/03/2021 00:42, Jinkun Geng:
> > For any project using bazel, if we want to use DPDK, then we need to 
> > compile DPDK stuff into bazel by ourselves. It is not a trivial thing and 
> > the bazelment (https://github.com/bazelment/dpdk) guys have spent much 
> > effort extracting the core files in DPDK and writing the BUILD files for 
> > DPDK. But now it seems they have stopped maintaining that repo since DPDK 
> > 16.04. Even in that version, it has some runtime failure when we use DPDK 
> > in our bazel project.
> 
> Sorry I don't know bazel.
> Do you mean all components must be built inside bazel?
> You are not able to call meson/ninja commands from bazel?
> How do you do with other libraries?
> Does bazel usually reimplement what is packaged with autotools?
> 
> 
> > 
> From: Thomas Monjalon 
> > 09/03/2021 05:11, Jinkun Geng:
> > > Too bad. :<
> >
> > Why is it too bad?
> > How the choice of an internal build system
> > can affect other projects?
> >
> >
> > From: Stephen Hemminger 
> > > On Tue, 9 Mar 2021 01:32:16 +
> > > Jinkun Geng  wrote:
> > >
> > > > Hi, all.
> > > > Since bazel building system is becoming more and more popular, 
> > > > sometimes we need to integrate DPDK library into a bazel project. 
> > > > However, it seems there is not much support for bazel from DPDK 
> > > > community.
> >
> > Why the DPDK community would support building with Bazel?
> > What is the benefit?
> > Bazel projects cannot just link with DPDK using pkg-config?
> >
> >
> > > > The only support at https://github.com/bazelment/dpdk has been 
> > > > outdated. Based on our experience, it can only compile successfully 
> > > > with dpdk-16.04 (i.e. the bazel-16.04 branch). Now DPDK has developed 
> > > > to DPDK 21.02, but the bazel support fails to catch up.
> > > >
> > > > It would be great if the experts in DPDK community can provide some 
> > > > portable BUILD files to facilitate the integration of the newest DPDK 
> > > > into bazel project (just like bazelment). After all, writing the bazel 
> > > > files can be really challenging, especially if we do not have a very 
> > > > deep understanding of the whole DPDK codes.
> > > >
> > > > Jinkun
> > >
> > > DPDK is on meson now. The core team is unlikely to change build systems 
> > > again.
> >
> > DPDK supports library standards for compiling, installing and linking.
> > What else is needed?
> 
> 
> 
> 
> 
> 







Re: [dpdk-users] [dpdk-ci] Integration of DPDK into Bazel

2021-03-10 Thread Thomas Monjalon
11/03/2021 00:42, Jinkun Geng:
> For any project using bazel, if we want to use DPDK, then we need to compile 
> DPDK stuff into bazel by ourselves. It is not a trivial thing and the 
> bazelment (https://github.com/bazelment/dpdk) guys have spent much effort 
> extracting the core files in DPDK and writing the BUILD files for DPDK. But now 
> it seems they have stopped maintaining that repo since DPDK 16.04. Even in 
> that version, it has some runtime failure when we use DPDK in our bazel 
> project.

Sorry I don't know bazel.
Do you mean all components must be built inside bazel?
You are not able to call meson/ninja commands from bazel?
How do you do with other libraries?
Does bazel usually reimplement what is packaged with autotools?


> ________
From: Thomas Monjalon 
> 09/03/2021 05:11, Jinkun Geng:
> > Too bad. :<
> 
> Why is it too bad?
> How the choice of an internal build system
> can affect other projects?
> 
> 
> From: Stephen Hemminger 
> > On Tue, 9 Mar 2021 01:32:16 +
> > Jinkun Geng  wrote:
> >
> > > Hi, all.
> > > Since bazel building system is becoming more and more popular, sometimes 
> > > we need to integrate DPDK library into a bazel project. However, it seems 
> > > there is no much support for bazel from DPDK community.
> 
> Why would the DPDK community support building with Bazel?
> What is the benefit?
> Can Bazel projects not just link with DPDK using pkg-config?
> 
> 
> > > The only support at https://github.com/bazelment/dpdk has been outdated. 
> > > Based on our experience, it can only compile successfully with dpdk-16.04 
> > > (i.e. the bazel-16.04 branch). Now DPDK has developed to DPDK 21.02, but 
> > > the bazel support fails to catch up.
> > >
> > > It would be great if the experts in DPDK community can provide some 
> > > portable BUILD files to facilitate the integration of the newest DPDK 
> > > into bazel project (just like bazelment). After all, writing the bazel 
> > > files can be really challenging, especially if we do not have a very deep 
> > > understanding of the whole DPDK codes.
> > >
> > > Jinkun
> >
> > DPDK is on meson now. The core team is unlikely to change build systems 
> > again.
> 
> DPDK supports library standards for compiling, installing and linking.
> What else is needed?











Re: [dpdk-users] MLX5: Using packet send scheduling / packet pacing

2021-01-30 Thread Thomas Monjalon
30/01/2021 11:54, Carsten Andrich:
> Hi Slava,
> 
> thank you for the prompt response. I think the requirements for Packet 
> Pacing should be added to Table 34.2(?) of the MLX5 docs [1].

+1 for improving the doc




Re: [dpdk-users] MLX5: Using packet send scheduling / packet pacing

2021-01-29 Thread Thomas Monjalon
+Cc Slava

29/01/2021 17:30, Carsten Andrich:
> Hello everyone,
> 
> I'm trying to use packet send scheduling [1] with DPDK 20.11 and the 
> MLX5 PMD (NIC: ConnectX-5 MCX516A-CDAT). This patch contains some 
> additional information on this feature also know as packet pacing [2].
> 
> According to MLX5's docs, packet pacing requires the "tx_pp" parameter 
> [3, CTRL+F: "tx_pp"]. However, when firing up testpmd with that 
> parameter, it fails as follows:
> 
> > # dpdk-testpmd -a 81:00.0,tx_pp=500 -- -i
> > ...
> > EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: :81:00.0 (socket 0)
> > mlx5_pci: WQE rate mode is required for packet pacing
> > mlx5_pci: probe of PCI device :81:00.0 aborted after encountering an 
> > error: No such device
> > common_mlx5: Failed to load driver = mlx5_pci.
> >
> > EAL: Requested device :81:00.0 cannot be used
> The error message originates here [4] and is caused by what to me 
> appears to be a value read from the NIC [5]. Unfortunately, that leaves 
> me clueless on how to activate the required "WQE rate mode".  According 
> to the output of ibv_devinfo, my NIC does support packet pacing:
> 
> > # ibv_devinfo -v 81:00.0
> > ...
> > packet_pacing_caps:
> > qp_rate_limit_min:  1kbps
> > qp_rate_limit_max:  1kbps
> > supported_qp:
> > SUPPORT_RAW_PACKET
> I'd be grateful for any information on how to get packet pacing up and 
> running. Am I just missing another required option (which is not given 
> in the docs) or does my NIC lack packet pacing support?
> 
> Thank you very much in advance.
> 
> Best regards,
> Carsten
> 
> [1] 
> https://doc.dpdk.org/api/rte__ethdev_8h.html#a990d8351447a710628cbb24a28d3252d
> [2] https://patches.dpdk.org/patch/73742/
> [3] https://doc.dpdk.org/guides/nics/mlx5.html#run-time-configuration
> [4] 
> http://code.dpdk.org/dpdk/v20.11/source/drivers/net/mlx5/linux/mlx5_os.c#L1278
> [5] 
> http://code.dpdk.org/dpdk/v20.11/source/drivers/common/mlx5/mlx5_devx_cmds.c#L748
> 
> 







Re: [dpdk-users] DPDK: MPLS packet processing

2021-01-18 Thread Thomas Monjalon
18/01/2021 09:46, Raslan Darawsheh:
> From: raktim bhatt
> 
> > Hi All,
> > 
> > I am trying to build a multi-RX-queue dpdk program, using RSS to split the
> > incoming traffic into RX queues on a single port. Mellanox ConnectX-5 and
> > DPDK Version 19.11 is used for this purpose. It works fine when I use IP
> > over Ethernet packets as input. However when the packet contains IP over
> > MPLS over Ethernet, RSS does not seem to work. As a result, all packets
> > belonging to various flows (with different src & dst IPs, ports over MPLS)
> > are all sent into the same RX queue.
> > 
> > 
> > My queries are
> > 
> > 1. Is there any parameter/techniques in DPDK to distribute MPLS packets to
> > multiple RX queues?
> > 
> I've tried it over my setup with testpmd:
> ./build/app/dpdk-testpmd -n 4 -w :08:00.0 -- --mbcache=512 -i 
> --nb-cores=27 --rxq=4 --txq=4 --rss-ip
> testpmd> set verbose 1
> testpmd> start
> 
> then tried to send two MPLS packets with different src IP:
> packet1 =  Ether()/MPLS()/IP(src='1.1.1.1')
> packet2 =  Ether()/MPLS()/IP(src='1.1.1.2')
> 
> and I see that both packets are being spread over the queues, see the bellow 
> testpmd dump output:
> testpmd> port 0/queue 3: received 1 packets
>   src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 - 
> nb_segs=1 - RSS hash=0x43781943 - RSS queue=0x3 - hw ptype: L2_ETHER 
> L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER  - l2_len=14 - Receive 
> queue=0x3
>   ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD 
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> port 0/queue 1: received 1 packets
>   src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 - 
> nb_segs=1 - RSS hash=0xb8631e05 - RSS queue=0x1 - hw ptype: L2_ETHER 
> L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER  - l2_len=14 - Receive 
> queue=0x1
>   ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD 
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> first packet was received on queue 3 and the second one was received over 
> queue 1,
> by the way, this is with both 19.11.0 and v19.11.6 
> 
> > 2. Is there any way to strip off MPLS tags (between Eth and IP) in
> > hardware, something like hw_vlan_strip?
> > 
> For this I'm not sure we have such thing in dpdk maybe Thomas can confirm 
> this here?

Look for "POP_MPLS" in rte_flow.




Re: [dpdk-users] Does vmxnet3 PMD supports LSC=1 ?

2021-01-14 Thread Thomas Monjalon
+Cc Yong Wang, maintainer of this PMD.

14/01/2021 18:30, madhukar mythri:
> Hi,
> 
> Does vmxnet3 PMD support LSC=1(i.e with interrupt mode) for link changes ?
> 
> When I enable LSC=1, the functionality works fine, but when pumping traffic
> I'm seeing an increase in CPU load on some cores, which are running the
> "eal-intr-thread" epoll_wait() function for more CPU time.
> 
> Actually, interrupt should come only when Link changes, but, we are seeing
> interrupt for each incoming Rx-packet and also a lot of spurious interrupts.
> =
> ~ # cat /proc/interrupts |grep igb
>  58:1254293  0  0  0   PCI-MSI 1572864-edge
>  igb_uio
>  59:1278105  0  0  0   PCI-MSI 5767168-edge
>  igb_uio
> ~ # cat /proc/irq/58/spurious
> count 98035
> unhandled 0
> last_unhandled 0 ms
> ~ #
> ==
> 
> Does anyone tried LSC=1 in vmxnet3 PMD based apps and faced similar issues
> ? If so, please let me know.
> 
> Tried with DPDK-18.11, DPDK-19.11 and DPDK-20.05.
> 
> Thanks,
> Madhukar.




Re: [dpdk-users] DPDK 20.11 MLX5 testpmd tx_pp 'WQE index ignore feature is required for packet pacing'

2020-12-11 Thread Thomas Monjalon
11/12/2020 17:19, Slava Ovsiienko:
> From: Thomas Monjalon 
> > 09/12/2020 17:03, Alessandro Pagani:
> > > Hi all,
> > >
> > > I am trying to run dpdk testpmd with Mellanox ConnectX4 Lx (mlx5 driver).
> > >
> > > I am specifying the tx_pp parameter to provide the packet send
> > > scheduling on mbuf timestamps, but the testpmd fails with the following
> > error:
> > [...]
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: :3b:00.0
> 
> This is ConnectX-4LX (DevID is 1015), it does not support scheduling.
> Tx scheduling is supported since ConnectX-6DX. 
> 
> > > (socket 0)
> > > mlx5_pci: No available register for Sampler.
> > > mlx5_pci: WQE index ignore feature is required for packet pacing
> > > mlx5_pci: probe of PCI device :3b:00.0 aborted after encountering
> > > an
> > > error: No such device
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > >
> > > EAL: Requested device :3b:00.0 cannot be used
> > [...]
> > > The error messages suggest that "WQE index ignore feature is required
> > > for packet pacing".
> > >
> > > Anyone knows the reason of this error and how to solve it?
> > 
> > I think it means your device does not support this feature.
> > But I realize it is not documented here:
> 
> Yes, indeed. I'll provide the patch, thank you for noticing that.

I think we should also improve the error message to
something like "not supported on this device".




Re: [dpdk-users] Questions about public IP included in DPDK driver code

2020-12-11 Thread Thomas Monjalon
+Cc ark maintainers

07/12/2020 15:17, HuangLiming:
> I found that the driver contains public IP(169.254.10.240), which leads to my 
> binary
> with public IP too, which is not what I want.
> What is the function of this IP, and could the community delete it for 
> security reasons?
> 
> drivers/net/ark/ark_pktgen.c: {{"dst_ip"}, OTSTRING, .v.STR = 
> "169.254.10.240"},
> drivers/net/ark/ark_pktchkr.c:{{"dst_ip"}, OTSTRING, .v.STR = 
> "169.254.10.240"}





Re: [dpdk-users] DPDK 20.11 MLX5 testpmd tx_pp 'WQE index ignore feature is required for packet pacing'

2020-12-11 Thread Thomas Monjalon
09/12/2020 17:03, Alessandro Pagani:
> Hi all,
> 
> I am trying to run dpdk testpmd with Mellanox ConnectX4 Lx (mlx5 driver).
> 
> I am specifying the tx_pp parameter to provide the packet send scheduling
> on mbuf timestamps, but the testpmd fails with the following error:
[...]
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: :3b:00.0 (socket 0)
> mlx5_pci: No available register for Sampler.
> mlx5_pci: WQE index ignore feature is required for packet pacing
> mlx5_pci: probe of PCI device :3b:00.0 aborted after encountering an
> error: No such device
> common_mlx5: Failed to load driver = mlx5_pci.
> 
> EAL: Requested device :3b:00.0 cannot be used
[...]
> The error messages suggest that "WQE index ignore feature is required for
> packet pacing".
> 
> Anyone knows the reason of this error and how to solve it?

I think it means your device does not support this feature.
But I realize it is not documented here:
http://doc.dpdk.org/guides/nics/mlx5.html#supported-hardware-offloads





Re: [dpdk-users] Changes to DPDK kmod drivers / backward compatibility within LTS rel.

2020-10-29 Thread Thomas Monjalon
24/09/2020 19:12, Alexander Kotliarov:
> Hi there!
> I would like to find out what is the policy regarding changes to the DPDK's
> kmod drivers such as igb_uio.ko within a DPDK's LTS release. Are these
> changes backward compatible?
> For example, is there a guarantee that an application built
> against 19.11.5, where igb_uio.ko received changes,would run with this
> driver built from 19.11.1 version?
> 
> Does http://doc.dpdk.org/guides/contributing/stable.html section 8.4 apply
> to kmod drivers as well?

There is no such formal guarantee, but in my opinion,
it should be the case. Do you imagine a change in kmod
which could break a DPDK version?




Re: [dpdk-users] Error while building DPDK version 19.11 over latest kernel version

2020-10-02 Thread Thomas Monjalon
01/10/2020 19:04, Klei rama:
> Hei,
> 
> I am trying to build DPDK version 19.11 in my ubuntu machine (18.04) with
> the latest kernel version 5.9. It gives me an error while I try to build
> it. The error is when I try to build linux/kni module.
> The error looks something like this:
> 
> dpdk/kernel/linux/kni/kni_dev.h:104:30: error: passing argument 1 of
> ‘get_user_pages_remote’ from incompatible pointer type
> [-Werror=incompatible-pointer-types]
> 
> 
> Is there any workaround?

Yes, it will be fixed with the backport of this patch:
http://git.dpdk.org/dpdk/commit/?id=87efaea6376c8

> Do I need to disable that module or should I
> downgrade the kernel version? I wanted to debug my application and see if
> it needs the kni module, but I did not know how to disable it. I tried
> common_base and common_linux under the config directory but could not find
> the line which disables this module.

You can disable KNI in the config file:
CONFIG_RTE_KNI_KMOD=n





Re: [dpdk-users] Resolved Issues per release

2020-08-12 Thread Thomas Monjalon
12/08/2020 15:32, Jim Holland (jimholla):
> The release notes at https://doc.dpdk.org/guides-20.02/rel_notes/index.html 
> don't list the resolved issues/bugs. Is there a way to find what bugs are 
> resolved for each dpdk release?

When looking at the git history,
every commit fixing a bug should contain a line starting with "Fixes:"
identifying the root cause.

Catching such commits will give you all details of bugs fixed.




Re: [dpdk-users] meson: is there a mechanism for controlling compilation configuration

2020-07-29 Thread Thomas Monjalon
29/07/2020 09:20, Chengchang Tang:
> Hi,
> DPDK with 'make' will be deprecated in a future release. I have some
> questions about using meson to build DPDK.
> 
> When using the make, we can change the macros in config/common_base to
> control the compiling macros. For example, if i want to debug the mbuf,
> i can set CONFIG_RTE_LIBRTE_MBUF_DEBUG=y in config/common_base to change
> the compiling macros.
> 
> According to my understanding, the DPDK meson build does not generate
> rte_config.h based on common_base content during compilation. Is there
> any convenient way to modify the compiling macros in a meson build? If all
> the compilation macros need to be modified using environment variables,
> it is inconvenient.

Please look at the work done in this series:
https://patches.dpdk.org/project/dpdk/list/?series=10930=*




Re: [dpdk-users] Trouble Building DPDK with MLNX OFED(4.4) Works with 5.0

2020-06-06 Thread Thomas Monjalon
06/06/2020 22:54, Vineeth Thapeta:
> Hi Thomas,
> 
> We want to use TAS which is built on top of DPDK,

Do you have a link to TAS? I don't know it.

> I installed Mellanox OFED 5.0 and TAS and everything was good,
> until we had to install Moongen.

So your only issue is to install Moongen,
given you already have DPDK working.

> Please find my answer to your questions.
> Thanks for the prompt reply. I appreciate it.
> 
> We are using Ubuntu 18.04. I indeed had success in installing dpdk 19.11
> LTS with MLNX_OFED 5.0. To answer your questions about Moongen either
> requires 1) Complete OFED drivers or 2) libibverbs, libmlx5,
> mlnx-ofed-kernel packages,

No, Moongen requires only DPDK.
If I am wrong, please could you show where you found such info?

> These packages are installed with MLNX_ofed
> drivers if I am not mistaken.

Installing the Ubuntu package rdma-core should be enough.

> So I moved on to build Moongen, When building
> moongen with OFED 5.0 I get the error fatal error: infiniband/mlx5_hw.h: No
> such file or directory #include . I thought this
> might be related to the libmlx5 and I installed libmlx5.deb from the
> packages in the DEBS/ folder. testpmd stops working with this prompt
> ./testpmd: /usr/lib/libmlx5.so.1: no version information available
> (required by ./testpmd). So I tried rolling back the OFED version to 4.4,
> which has caused problems with building dpdk with version 19.11 LTS,18.11
> LTS and 17.11 LTS as I shared in the previous mail.

You are mixing different versions of drivers.
Choose either rdma-core packaged in Ubuntu,
or OFED from mellanox.com.


> I hope this answers your questions.
> 
> Vineeth
> 
> On Sat, Jun 6, 2020 at 3:13 PM Thomas Monjalon  wrote:
> 
> > Hi,
> >
> > 06/06/2020 21:43, Vineeth Thapeta:
> > > Hi guys,
> > >
> > > I had to roll back OFED version from 5.0 to 4.4 because another library
> > > Moongen has trouble building with OFED 5.0.
> >
> > How Moongen is related to OFED?
> > Why using OFED and not upstream packaging of rdma-core?
> > Which distribution are you using?
> >
> > > I rolled back the version and
> > > Moongen builds and when I try to build dpdk 18.11, 19.11,17.11. All the
> > > three builds fail with the following error (
> > https://pastebin.com/fiw7iz1Z
> > > Link of build error). Does anyone know why this happens.
> >
> > Any DPDK version should build with any version of OFED or rdma-core.
> > Note: DPDK 17.11 is not supported anymore.
> >
> > > I am exhausted
> > > with rollbacks; if anyone can help me with this I would be grateful. By
> > > the looks of it, it might mean there were problems with the OFED driver? I
> > > installed the MLNX OFED driver without any flags, i.e. ./mlnxofedinstall. If
> > > you want any further information, please let me know. Any suggestion would
> > > be appreciated, even if you don't think it might work. I have been trying
> > > to install these two for the past week, without much success. Are the above
> > > mentioned DPDK versions not compatible with said driver (4.4)?
> >
> > Let's focus on your goal and see what is the real issue you hit.
> > Which version of DPDK do you need?
> > Why not start with the latest LTS, which is 19.11?









Re: [dpdk-users] mlx5 PMD fails to receive certain icmpv6 multicast

2020-05-06 Thread Thomas Monjalon
26/03/2020 22:11, Thomas Monjalon:
> 06/03/2020 01:45, Liwu Liu:
> > Hi Team,
> > 
> > I am using the mlx5/100G in KVM guest. The host shows this PCI vfNIC is 
> > provisioned to the guest:
> >   "17:01.1 Ethernet controller: Mellanox Technologies MT27800 Family 
> > [ConnectX-5 Virtual Function]"
> > 
> > I am using DPDK 19.11 with kind of standard configurations, and when DPDK 
> > application runs I still have the kernel mlx5e net device present. I have 
> > both promiscuous and all-multicast turned on.
> > 
> > It works fine for IPV4, but for IPV6 it fails. It can receive packets 
> > destined to 33:33:00:00:00:02 (IPV6 Router solicitation), but cannot 
> > receive packets destined to 33:33:ff:00:00:01 (IPV6 neighbor solicitation 
> > for some address).
> > 
> > But if I avoid DPDK, directly use the OFED-4.6 based kernel driver, 
> > everything works fine as expected.
> > 
> > I am thinking there is some mismatch happened for MLX5 PMD. Please give 
> > some advice/hints.
> 
> Adding Mellanox engineers in Cc list for help.

Any update to share please?




Re: [dpdk-users] [mlx5 + DPDK 19.11] Flow insertion rate less than 4K per sec

2020-04-19 Thread Thomas Monjalon
+Cc Wisam

16/04/2020 17:32, Yan Lei:
> Hi Thomas,
> 
> 
> I tried the patch (68057 + 68058) on DPDK 19.11/20.02 + ofed 4.7.3.
> 
> 
> TL;DR
> 
> 
> 1. I was only able to generate 3K rules per second.
> 
> 2. The maximum number of distinct rules the NIC can support seems to be 65536.
> 
> 
> How can I increase the insertion rate? Any firmware/driver config I need to 
> tune? Also, is 65536 distinct flows truly a limit of the NIC? The patch 
> defaults to generate 4 million distinct flows though...
> 
> 
> Thanks in advance!
> 
> 
> 
> Initially, running
> 
> 
> ```
> 
> sudo ./flow_perf -l 3-7 -n 4 -w 02:00.0,dv_flow_en=1 -- --ingress --ether 
> --ipv4 --udp --queue --flows-count=100
> 
> ```
> 
> 
> failed after a few seconds and it gave
> 
> 
> ```
> Flow can't be created 1 message: hardware refuses to create flow
> EAL: Error - exiting with code: 1
>   Cause: error in creating flow
> ```
> 
> 
> Then I added a small debug patch (attached) and it showed that the error 
> happens when creating the 65536th flow rule.
> 
> 
> ```
> Flow can't be created 1 message: hardware refuses to create flow
> EAL: Error - exiting with code: 1
>   Cause: error in creating flow,flows generated: 65536
> ```
> 
> 
> My guess is that the NIC can only accept 65536 concurrent rules. Once I 
> changed the outer ip mask to 0x, the above command runs fine.
> 
> 
> To see how many rules I can generate per second. I ran (with the outer ip 
> mask 0x)
> 
> 
> ```
> 
> sudo ./flow_perf -l 3-7 -n 4 -w 02:00.0,dv_flow_en=1 -- --ingress --ether 
> --ipv4 --udp --queue --flows-count=65536
> 
> ```
> 
> 
> and it gives
> 
> 
> ```
> 
> :: Total flow insertion rate -> 3.015922 K/Sec
> :: The time for creating 65536 in flows 21.730005 seconds
> :: EAGAIN counter = 0
> ```
> So ~3K rules per sec, which is close to what I observed before.
> 
> ```
> sudo ./flow_perf -l 3-7 -n 4 -w 02:00.0,dv_flow_en=1 -- --ingress --ether 
> --ipv4 --udp --queue --flows-count=10
> ```
> gives
> 
> ```
> :: Total flow insertion rate -> 0.949381 K/Sec
> :: The time for creating 10 in flows 105.331842 seconds
> :: EAGAIN counter = 0
> ```
> Have no idea why it's only 1k/sec in this case...
> 
> Thanks and cheers,
> Lei
> 
> 
> 
> From: users  on behalf of Yan Lei 
> Sent: Tuesday, April 14, 2020 1:20 PM
> To: Thomas Monjalon
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] [mlx5 + DPDK 19.11] Flow insertion rate less than 
> 4K per sec
> 
> Hi Thomas,
> 
> Thanks! I will give it a try (using DPDK 19.11 + ofed 4.7.3).
> 
> Cheers,
> Lei
> 
> From: Thomas Monjalon 
> Sent: Tuesday, April 14, 2020 12:12:28 PM
> To: Yan Lei
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] [mlx5 + DPDK 19.11] Flow insertion rate less than 
> 4K per sec
> 
> Hi,
> 
> 10/04/2020 20:11, Yan Lei:
> > I am doing some study that requires inserting more than 1 million flow
> > rules per second to the NIC. And I runs DPDK 19.11 on a ConnectX-5 NIC.
> >
> > But I only managed to create around 3.3K rules per second.
> > Below is the code I used to measure the insertion rate:
> 
> Please could you review this new application designed for such measure?
> https://patches.dpdk.org/patch/68058/
> 
> Any feedback about the above patch is welcome. Feel free to try and review it.











Re: [dpdk-users] General Questions

2020-04-09 Thread Thomas Monjalon
09/04/2020 17:32, Stephen Hemminger:
> On Thu, 9 Apr 2020 20:57:19 +0530
> Shyam Shrivastav  wrote:
> 
> > From my experience as dpdk user
> > 
> > On Thu, Apr 9, 2020 at 11:41 AM Cristofer Martins <
> > cristofermart...@hotmail.com> wrote:  
> > 
> > > Well the reason i thought about using dpdk(together with a user space tcp
> > > stack) is because my tcp code spend so much time with syscalls that
> > > removing that would allow better throughput and latency. Is this a valid
> > > reason? My software runs in single core(and most of time in cheap vps) so 
> > > i
> > > want to extract the best i can from them.
> > >  
> > 
> > Yes using dpdk instead of getting packets from kernel stack increases
> > performance
> > 
> > 
> > 
> > > The other question is, can dpdk runs alongside with the linux network
> > > stack? I want to use dpdk in my special app but i still want to have ssh
> > > and apps working as expected without any modification.
> > >  
> > The interface used by dpdk is not available to the kernel; at least one
> > other interface is required for management/access & other network apps
> > 
> > 
> > 
> > >
> > > Thanks in advance.
> > >  
> 
> This might be a good use case for AF_XDP with or without DPDK

AF_XDP helps to use a device both with Linux stack and userland application.
This capability is what we call the bifurcated model.
The Mellanox drivers are also using a bifurcated model:
the same device can send some packet flows to the kernel interface,
and other (configured) packet flows to the DPDK interface.




Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-27 Thread Thomas Monjalon
27/03/2020 18:26, Benoit Ganne (bganne):
> > Unfortunately the reason was not documented.
> > I suggest we go with a patch from your understanding
> > and we'll test it in multiple conditions to validate nothing is broken.
> 
> Done: http://mails.dpdk.org/archives/dev/2020-March/161096.html

Very good, thanks.




Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-27 Thread Thomas Monjalon
27/03/2020 11:02, Benoit Ganne (bganne):
> > Second, as Benoit said, we should relax this requirement.
> > If the link speed is unknown, a second request can be tried, no more.
> > Benoit, feel free to submit a patch showing how you think it should
> > behave.
> > Otherwise, I guess a maintainer of mlx5 will try to arrange it later.
> > Note: a patch (even not perfect) is usually speeding up resolution.
> 
> I can do that, but I am not sure I understand the logic of this test to begin 
> with: looking at other PMDs (mlx4, i40e), mlx5 seems to be the only one 
> worrying about updating link state only when "ready", for some definition of 
> "ready" that is not clear to me.
> I tend to agree with the other PMDs here: if the syscalls did not fail, we 
> should just update with what we know.
> Why was this test introduced and what did it fix?

Unfortunately the reason was not documented.
I suggest we go with a patch from your understanding
and we'll test it in multiple conditions to validate nothing is broken.




Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Thomas Monjalon
On 3/26/2020 12:00, Benoit Ganne (bganne) wrote:
> Just removing the over-strict check in mlx5 PMD is enough for everything to 
> work fine:
> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
[...]
>  2) mlx5 PMD enforce that both link speed is defined and link is up to update 
> interface state

The original commit introducing this logic is:
http://git.dpdk.org/dpdk/commit/?id=cfee94752b8f8f

I would say that the first issue is a lack of comment in this code.

Second, as Benoit said, we should relax this requirement.
If the link speed is unknown, a second request can be tried, no more.

Benoit, feel free to submit a patch showing how you think it should behave.
Otherwise, I guess a maintainer of mlx5 will try to arrange it later.
Note: a patch (even not perfect) is usually speeding up resolution.

Thanks




Re: [dpdk-users] mlx5 PMD fails to receive certain icmpv6 multicast

2020-03-26 Thread Thomas Monjalon
06/03/2020 01:45, Liwu Liu:
> Hi Team,
> 
> I am using the mlx5/100G in KVM guest. The host shows this PCI vfNIC is 
> provisioned to the guest:
>   "17:01.1 Ethernet controller: Mellanox Technologies MT27800 Family 
> [ConnectX-5 Virtual Function]"
> 
> I am using DPDK 19.11 with kind of standard configurations, and when DPDK 
> application runs I still have the kernel mlx5e net device present. I have 
> both promiscuous and all-multicast turned on.
> 
> It works fine for IPV4, but for IPV6 it fails. It can receive packets 
> destined to 33:33:00:00:00:02 (IPV6 Router solicitation), but cannot receive 
> packets destined to 33:33:ff:00:00:01 (IPV6 neighbor solicitation for some 
> address).
> 
> But if I avoid DPDK, directly use the OFED-4.6 based kernel driver, 
> everything works fine as expected.
> 
> I am thinking there is some mismatch happened for MLX5 PMD. Please give some 
> advice/hints.

Adding Mellanox engineers in Cc list for help.




Re: [dpdk-users] rte_eth_stats_get: imiss is not set when using mlx4/mlx5 driver

2020-03-26 Thread Thomas Monjalon
Hi,

Sorry for the late answer.

22/10/2019 10:38, guyifan:
> DPDK version 18.11.2,imiss is always 0.
> And I could not find any code about 'imiss' in 
> 'dpdk-stable-18.11.2/drivers/net/mlx5/' or 
> 'dpdk-stable-18.11.2/drivers/net/mlx4/'.
> Is there any way to know how many packets have been dropped by a Mellanox NIC?

It is supported in DPDK 19.02:
http://git.dpdk.org/dpdk/commit/?id=ce9494d76c4783
and DPDK 18.11.3:
http://git.dpdk.org/dpdk-stable/commit/?h=81d0621264449ecc




Re: [dpdk-users] DPDK TX problems

2020-03-26 Thread Thomas Monjalon
Thanks for the interesting feedback.
It seems we should test this performance use case in our labs.


18/02/2020 09:36, Hrvoje Habjanic:
> On 08. 04. 2019. 11:52, Hrvoje Habjanić wrote:
> > On 29/03/2019 08:24, Hrvoje Habjanić wrote:
> >>> Hi.
> >>>
> >>> I did write an application using dpdk 17.11 (did try also with 18.11),
> >>> and when doing some performance testing, i'm seeing very odd behavior.
> >>> To verify that this is not because of my app, i did the same test with
> >>> l2fwd example app, and i'm still confused by results.
> >>>
> >>> In short, i'm trying to push a lot of L2 packets through dpdk engine -
> >>> packet processing is minimal. When testing, i'm starting with small
> >>> number of packets-per-second, and then gradually increase it to see
> >>> where is the limit. At some point, i do reach this limit - packets start
> >>> to get dropped. And this is when stuff becomes weird.
> >>>
> >>> When I reach the peak packet rate (at which packets start to get dropped),
> >>> I would expect that reducing the packet rate would remove the packet drops.
> >>> But this is not the case. For example, let's assume that the peak packet
> >>> rate is 3.5Mpps. At this point everything works ok. Increasing pps to
> >>> 4.0Mpps causes a lot of dropped packets. When reducing pps back to 3.5Mpps,
> >>> the app is still broken - packets are still dropped.
> >>>
> >>> At this point, I need to drastically reduce pps (to 1.4Mpps) to make the
> >>> dropped packets go away. Also, the app is unable to successfully forward
> >>> anything beyond this 1.4Mpps, despite the fact that in the beginning it
> >>> did forward 3.5Mpps! The only way to recover is to restart the app.
> >>>
> >>> Also, sometimes the app just stops forwarding any packets - packets are
> >>> received (as seen by counters), but the app is unable to send anything back.
> >>>
> >>> As I mentioned, I'm seeing the same behavior with the l2fwd example app. I
> >>> tested dpdk 17.11 and also dpdk 18.11 - the results are the same.
> >>>
> >>> My test environment is HP DL380G8, with 82599ES 10Gig (ixgbe) cards,
> >>> connected with Cisco nexus 9300 sw. On the other side is ixia test
> >>> appliance. Application is run in virtual machine (VM), using KVM
> >>> (openstack, with sriov enabled, and numa restrictions). I did check that
> >>> VM is using only cpu's from NUMA node on which network card is
> >>> connected, so there is no cross-numa traffic. Openstack is Queens,
> >>> Ubuntu is Bionic release. Virtual machine is also using ubuntu bionic
> >>> as OS.
> >>>
> >>> I do not know how to debug this? Does someone else have the same
> >>> observations?
> >>>
> >>> Regards,
> >>>
> >>> H.
> >> There are additional findings. It seems that when I reach the peak pps
> >> rate, the application is not fast enough, and I can see rx missed errors
> >> in the card statistics on the host. At the same time, the tx side starts
> >> to show problems (tx burst starts to report it did not send all packets).
> >> Shortly after that, tx falls apart completely and the top pps rate drops.
> >>
> >> Since I did not disable pause frames, I can see the "RX
> >> pause" frame counter increasing on the switch. On the other hand, if I
> >> disable pause frames (on the NIC of the server), the host driver (ixgbe)
> >> reports "TX unit hang" in dmesg and issues a card reset. Of course, after
> >> the reset none of the DPDK apps in the VMs on this host work.
> >>
> >> Is it possible that at the time of congestion DPDK does not release mbufs
> >> back to the pool, and the tx ring becomes "filled" with zombie packets
> >> (not sent by the card but still holding a reference count as if in use)?
> >>
> >> Is there a way to check the mempool or tx ring for "left-overs"? Is it
> >> possible to somehow "flush" the tx ring and/or mempool?
> >>
> >> H.
> > After a few more tests, things became even weirder - if I do not free the
> > mbufs which are not sent, but resend them instead, I can "survive" the
> > over-the-peak event! But then the peak rate starts to drop gradually ...
> >
> > I would ask if someone can try this on their platform and report back? I
> > would really like to know if this is a problem with my deployment, or
> > whether there is something wrong with dpdk?
> >
> > The test should be simple - use l2fwd or l3fwd, and determine max pps. Then
> > drive pps 30% over max, and then go back down and confirm that you can
> > still get max pps.
> >
> > Thanks in advance.
> >
> > H.
> >
> 
> I did receive a few mails from users facing this issue, asking how it was
> resolved.
> 
> Unfortunately, there is no real fix. It seems that this issue is related
> to the card and hardware used. I'm still not sure which is more to blame,
> but the combination I had is definitely problematic.
> 
> Anyhow, in the end, I concluded that the card driver has some issues
> when it is saturated with packets. My suspicion is that the driver/software
> does not properly free packets, and then the DPDK mempool becomes
> fragmented, which causes performance drops. Restarting the software
> releases the pools and restores proper functionality.
> 
> 
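The mempool question in the thread above can be probed with the pool accounting helpers. This is a sketch, not part of the thread; `mp` is assumed to be the application's mbuf pool, and `rte_mempool_avail_count()`/`rte_mempool_in_use_count()` exist since DPDK 17.05 (older releases use `rte_mempool_count()` instead):

```c
#include <stdio.h>

#include <rte_mempool.h>

/* Rough "leftover" check for the scenario above: if in_use never
 * returns to its idle baseline after traffic stops, mbufs are being
 * held somewhere (e.g. in a TX ring that was never completed). */
static void
report_mempool_usage(const struct rte_mempool *mp)
{
    unsigned int avail = rte_mempool_avail_count(mp);
    unsigned int in_use = rte_mempool_in_use_count(mp);

    printf("mempool %s: size=%u avail=%u in_use=%u\n",
           mp->name, mp->size, avail, in_use);
}
```

For the "flush" part of the question, `rte_eth_tx_done_cleanup()` asks the PMD to free already-transmitted mbufs from a TX queue, though not every driver implements it.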

Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Thomas Monjalon
26/03/2020 21:09, Mark Bloch:
> 
> On 3/26/2020 12:00, Benoit Ganne (bganne) wrote:
> >> Pasting back this important info:
> >> "
> >> Note that ethtool and '/sys/class/net//speed' also fails
> >> to report the link speed (but not the link status).
> >> "
> >>
> >> 26/03/2020 19:27, Benoit Ganne (bganne):
> >>> Yes everything is initialized correctly. The netdev itself is configured
> >> and usable from Linux (ping etc.). Just removing the over-strict check in
> >> mlx5 PMD is enough for everything to work fine:
> >> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
> >>> The link speed is unknown but this is not issue, and link state and
> >> other link info are correctly reported.
> >>> Thomas, any input regarding this behavior in mlx5 PMD?
> >>
> >> I am not aware about the lack of link speed info.
> >> It is probably not specific to ConnectX-4 Lx.
> >> I guess it happens only with Hyper-V?
> 
> Should be fixed by those 3 commits (last 1 one is just cosmetic):
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=dc392fc56f39a00a46d6db2d150571ccafe99734
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=c268ca6087f553bfc0e16ffec412b983ffe32fd4
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=2f5438ca0ee01a1b3a9c37e3f33d47c8122afe74

Thanks for the patches Mark.

> > For me there are 2 separate issues:
> >  1) Linux kernel driver does not report link speed in Azure for CX4-Lx in 
> > Ubuntu 18.04

1 looks to be addressed with patches above.

> >  2) mlx5 PMD enforce that both link speed is defined and link is up to 
> > update interface state

Yes we can look at this issue.

> > If (1) is fixed, (2) should work, but to me (2) is too strict
> > for no good reason: we do not really care about reported link speed,

I agree that link speed is less important than link status.

> > esp in a virtual environment it usually does not mean much,

Yes the link speed is shared between all VFs.

> > but we do care about link state.




Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

2019-09-18 Thread Thomas Monjalon
18/09/2019 09:02, Zhang, Xiao:
> 
> There is a hardware limitation, and RSS needs to be enabled to distribute
> packets for X710.

Is this limitation documented?




Re: [dpdk-users] devargs syntax to use MAC address for bond slaves

2019-09-04 Thread Thomas Monjalon
19/08/2019 17:56, Greg O'Rawe:
> Hi,
> 
> Thanks for the reply. Is such a patch to the bonding PMD feasible?

Probably yes.
You can try to do it, and work with Chas for help.





Re: [dpdk-users] devargs syntax to use MAC address for bond slaves

2019-08-10 Thread Thomas Monjalon
+Cc Chas, the maintainer of the bonding PMD

10/08/2019 18:42, Thomas Monjalon:
> Hi,
> 
> 08/08/2019 18:28, Greg O'Rawe:
> > Hi,
> > 
> > I'm using DPDK 17.11.5 to bond two interfaces via the vdev syntax, however 
> > the two interfaces are on the same NIC and share the same PCI address.
> > 
> > Is there a way to reference them using the devargs syntax by MAC address? 
> > It is discussed here https://patches.dpdk.org/patch/33808/
> > 
> > E.g.
> > class=eth,mac=00:11:22:33:44:55
> > 
> > I don't see this as an option in the bond PMD for slave interfaces even in 
> > the latest 19.05 release.
> 
> It is implemented at ethdev level as a port iterator:
>   http://git.dpdk.org/dpdk/commit/?id=8b9ea3b3ca
> 
> > I'm trying to set this using VPP e.g. normally:
> > vdev eth_bond0,mode=1,slave=PCI1,slave=PCI2
> 
> In order to use this syntax with bonding slaves,
> I think a patch is required.





Re: [dpdk-users] devargs syntax to use MAC address for bond slaves

2019-08-10 Thread Thomas Monjalon
Hi,

08/08/2019 18:28, Greg O'Rawe:
> Hi,
> 
> I'm using DPDK 17.11.5 to bond two interfaces via the vdev syntax, however 
> the two interfaces are on the same NIC and share the same PCI address.
> 
> Is there a way to reference them using the devargs syntax by MAC address? It 
> is discussed here https://patches.dpdk.org/patch/33808/
> 
> E.g.
> class=eth,mac=00:11:22:33:44:55
> 
> I don't see this as an option in the bond PMD for slave interfaces even in 
> the latest 19.05 release.

It is implemented at ethdev level as a port iterator:
http://git.dpdk.org/dpdk/commit/?id=8b9ea3b3ca

> I'm trying to set this using VPP e.g. normally:
> vdev eth_bond0,mode=1,slave=PCI1,slave=PCI2

In order to use this syntax with bonding slaves,
I think a patch is required.
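The ethdev-level iterator mentioned above can be used as follows. This is a sketch under the assumption of DPDK 18.11 or newer; it only locates ports by MAC, and wiring the result into bonding slave devargs would still need the patch discussed here:

```c
#include <stdio.h>

#include <rte_dev.h>
#include <rte_ethdev.h>

/* Find ethdev ports matching a MAC address via the devargs syntax
 * "class=eth,mac=...". Assumes EAL is already initialized. */
static void
list_ports_by_mac(const char *mac)
{
    char devargs[64];
    struct rte_dev_iterator iterator;
    uint16_t port_id;

    snprintf(devargs, sizeof(devargs), "class=eth,mac=%s", mac);
    RTE_ETH_FOREACH_MATCHING_DEV(port_id, devargs, &iterator)
        printf("MAC %s matches port %u\n", mac, port_id);
}
```

Usage would be e.g. `list_ports_by_mac("00:11:22:33:44:55")` after `rte_eal_init()`; each matching port id is printed.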




Re: [dpdk-users] DPDK MLX5 Probe error

2019-04-03 Thread Thomas Monjalon
Hi,

03/04/2019 20:32, Mora Gamboa, Luis Eduardo:
> I'm not able to use the mlx5 pmd driver with some Mellanox NICs I have 
> installed on my server. The error I'm receiving during EAL initialization is:
> 
> net_mlx5: no Verbs device matches PCI device :03:00.0, are kernel drivers 
> loaded?
> 
> The DPDK version I'm currently using is: DPDK-STABLE-18.11
> 
> I have installed the OFED latest version:
> 
> mlnx-en-4.5-1.0.1.0-ubuntu16.04-x86_64
> 
> I have performed modprobe of the ib_uverbs kernel module

There is more configuration to be done when using an old distro with OFED.
You will find the information in this chapter of the doc:
http://doc.dpdk.org/guides/nics/mlx5.html#quick-start-guide-on-ofed-en





Re: [dpdk-users] Suggestion: DPDK "latest" Download Links

2018-08-23 Thread Thomas Monjalon
22/08/2018 11:58, Ferruh Yigit:
> On 8/20/2018 8:03 PM, Justin Parus wrote:
> > Hi,
> > 
> > I would like to propose adding DPDK "latest" download links. For example 
> > dpdk-latest-major and dpdk-latest-stable, would automatically update and 
> > point to the most recent date major and stable releases respectively. This 
> > would be useful for our tests as they would always be testing the latest 
> > packages automatically.
> 
> +1, looks good idea

I am not sure how useful it is.
When you change from one major version to another one,
you often have to adapt your code, so it cannot be automatic.

I see more value in having a link for the latest version of each stable branch.





Re: [dpdk-users] Apply patches from the mailing list

2018-03-26 Thread Thomas Monjalon
25/03/2018 16:31, long...@viettel.com.vn:
> From: "shreyansh jain" 
> > From: long...@viettel.com.vn
> > > A very basic question, but how do I apply some of the patches that
> > > were put on the dev mailing list to try it out? I already looked at
> > > the next- subtrees but apparently even major patch set such as the new
> > > packet framework/ip_pipeline is not in there (yet).
> > 
> > This is what I do:
> > 
> > 1. Access http://dpdk.org/dev/patchwork/project/dpdk/list/ and search for
> > patches from the author. This has all the patches posted to Mailing List
> > - with their state (that is, for example, superseded if a series has been
> > superseded by another version)
> > 2. You have three options:
> >  a) Either select all patches (you will need to register/login) in a
> >  series and add to "bundle" and download that bundle as mbox
> >  b) Select an individual patch and look for the "download patch" or
> >  "download mbox" link and manually download them
> > OR, the one I use most frequently:
> >  b) Copy the link to the patch (for example,
> >  http://dpdk.org/dev/patchwork/patch/36473/) and append "mbox" to it
> >  (http://dpdk.org/dev/patchwork/patch/36473/mbox)
> > Then,
> > 
> > $ wget  -O - | git am
> > 
> > One can easily make a script which does the steps (1) to (2b) above based
> > on a given patch ID (the last integer in the link to the patch).


This script exists already. It is pwclient:
https://dpdk.org/dev/patchwork/help/pwclient/


> > Maybe there is a better and efficient way - this is just what I do. :)
> 
> After my email I figured out your 2a). But yeah having to register/login was 
> a nuisance 
> and not very intuitive especially for people like me who had had no 
> experience working
> with patchwork or mailing lists before.


The basic method is to have a decent email client which allows you to
download emails (i.e. patches). Then you can just apply them with git am.


> > > The contributor guideline only has sections for submitting patches to
> > > the mailing list, not pulling and applying patches for local testing.
> > > I know of dpdk patchwork but there are no instructions provided.
> > 
> > Maybe you can go ahead and send across a patch for a method you find best
> > and efficient. Others can add their way/suggestions and I am confident
> > Thomas would be happy to accept a documentation improvement patch.
> 
> Echoing this. A section in the contributor guideline just like the one I 
> followed when I
> pushed my first patch would be very helpful indeed.


Now that you have all the information, and since you are interested,
you are welcome to update the contributors guide and send a patch :)




Re: [dpdk-users] Using Mellanox ConnectX-3 for DPDK

2018-02-05 Thread Thomas Monjalon
Hi,

06/02/2018 00:01, Rohan Gandhi:
> Hello All,
> 
> 1) I am trying to use Mellanox ConnectX-3 to use DPDK on my server
> (Ubuntu 16.04 kernel 4.13.0-32-generic).
> 
> 2) Using dpdk/usertools/dpdk-setup.sh, I can see that the Mellanox
> interface is dpdk compatible.
> 
> Network devices using DPDK-compatible driver
> 
> :01:00.0 'MT27520 Family [ConnectX-3 Pro] 1007' drv=vfio-pci 
> unused=igb_uio

Mellanox drivers use neither UIO nor VFIO.


> 3) When I try to use Moongen to use this interface, it returned an
> error that it cannot detect the device.
> 
> [INFO]  Found 0 usable devices:
> [FATAL] Lua error in task master
> ...ace/dpdk/moongen/MoonGen/build/../libmoon/lua/device.lua:100: there
> are only 0 ports, tried to configure port id 1
> 
> 
> The same setup works with my other Intel NIC. I am not sure what I am
> doing wrong. Can you please help?

Please read the documentation:
http://dpdk.org/doc/guides/nics/mlx4.html#usage-example


Re: [dpdk-users] Does BONDING_MODE_8023AD is works in 17.11?

2018-01-18 Thread Thomas Monjalon
No reply after 2 months, adding maintainer Cc.

17/11/2017 23:24, Алексей Телятников:
> Greetings. I have an issue with link aggregation in DPDK. My HP A5820X 
> switch does not receive LACP frames from the DPDK application.
> 
> Has anyone worked with BONDING_MODE_8023AD?
> 
> I have: DPDK 17.11 HEAD & 11.1-RELEASE-p4 FreeBSD
> 
> Also I build examples/bond with little changes:
> 
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -226,7 +226,7 @@ bond_port_init(struct rte_mempool *mbuf_pool)
>  uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
>  uint16_t nb_txd = RTE_TX_DESC_DEFAULT;
> 
> -   retval = rte_eth_bond_create("bond0", BONDING_MODE_ALB,
> +   retval = rte_eth_bond_create("net_bonding0", BONDING_MODE_8023AD,
>  0 /*SOCKET_ID_ANY*/);
>  if (retval < 0)
>  rte_exit(EXIT_FAILURE,
> 
> Run it:
> 
> $ sudo build/bond_app
> EAL: Sysctl reports 16 cpus
> EAL: Detected 16 lcore(s)
> EAL: Contigmem driver has 1 buffers, each of size 32GB
> EAL: Mapped memory segment 1 @ 0x8023ff000: physaddr:0x8, len 
> 34359738368
> EAL: PCI device :03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL:   :03:00.0 not managed by UIO driver, skipping
> EAL: PCI device :03:00.3 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL:   :03:00.3 not managed by UIO driver, skipping
> EAL: PCI device :05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> EAL: PCI device :05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> User device list:
> PMD: ixgbe_dev_link_status_print():  Port 0: Link Down
> Port 0 MAC: 90:e2:ba:d0:a3:c4
> PMD: ixgbe_dev_link_status_print():  Port 1: Link Down
> Port 1 MAC: 90:e2:ba:d0:a3:c5
> EAL: Initializing pmd_bond for net_bonding0
> PMD: Using mode 4, it is necessary to do TX burst and RX burst at least 
> every 100ms.
> EAL: Create bonded device net_bonding0 on port 2 in mode 4 on socket 0.
> PMD: ixgbe_dev_link_status_print():  Port 0: Link Down
> PMD: ixgbe_dev_link_status_print():  Port 1: Link Down
> Port 2 MAC: 90:e2:ba:d0:a3:c4
> Starting lcore_main on core 1:0 Our IP:7.0.0.10
> bond6>show
> 90:e2:ba:d0:a3:c4
> 90:e2:ba:d0:a3:c5
> Active_slaves:0 packets received:Tot:0 Arp:0 IPv4:0
> 
> Active slaves always 0. I tried configure bond in testpmd. Same problem.
> 
> On the switch all ports is in up state but:
> 
> Received LACP Packets: 0 packet(s)
> 
> 
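The "Using mode 4 ..." log line above means the application must keep calling the burst functions on the bonded port at least every 100 ms, even when idle, otherwise LACPDUs are never exchanged and the slaves stay inactive. A minimal sketch of such a polling loop, assuming a started mode-4 bonded port and queue 0:

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Keep the 802.3ad state machine alive: in mode 4 the bonding PMD
 * sends/receives LACPDUs from inside the ordinary rx/tx burst calls. */
static void
lacp_keepalive_loop(uint16_t bond_port)
{
    struct rte_mbuf *pkts[32];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(bond_port, 0, pkts, 32);

        /* ... process the nb received packets here; this sketch
         * simply frees them ... */
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(pkts[i]);

        /* An empty TX burst still gives the bonding PMD a chance to
         * flush any pending LACPDUs. */
        rte_eth_tx_burst(bond_port, 0, NULL, 0);
    }
}
```

In the example above, with active slaves stuck at 0, the first thing to check is whether the main loop ever stalls for longer than the 100 ms window.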





Re: [dpdk-users] DPDK mlx4 PMD on Azure VM

2017-12-19 Thread Thomas Monjalon
19/12/2017 08:14, Hui Ling:
> I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
> 4.2-1.2.0.0 and installed up-stream libs with
> ./mlnxofedinstall --guest --dpdk --upstream-libs
> 
> The MLX4 PMD in DPDK doesn't seem to work with the libs from the Ubuntu
> repository, while installing OFED allows me to compile the DPDK mlx4 PMD
> without any compilation problem.
> 
> Then I tried to see if the mlx4 PMD works or not by running:
> 
> root@myVM:
> ./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -w 0004:00:02.0 --
> --rxq=2 --txq=2 -i
[...]
> Configuring Port 0 (socket 0)
> PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
> failure: Operation not permitted

[...]
> I also tried to run DPDK 17.11 on Ubuntu 17.10. It didn't work either.
> the testpmd hangs during "configuring Port 0" forever.

So you see 2 different errors on Ubuntu 16.04 and 17.10.
What are the Linux kernel versions?

> Can someone from MS or Mellanox help me figure out why? and how to
> make mlx4 PMD work on Azure VM?

Mellanox will support you.


Re: [dpdk-users] DPDK mlx4 PMD on Azure VM

2017-12-19 Thread Thomas Monjalon
Hi,

19/12/2017 08:14, Hui Ling:
> I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
> 4.2-1.2.0.0 and installed up-stream libs with
> ./mlnxofedinstall --guest --dpdk --upstream-libs
> 
> The MLX4 PMD in DPDK doesn't seem to work with the libs from the Ubuntu
> repository, while installing OFED allows me to compile the DPDK mlx4 PMD
> without any compilation problem.

The recommended setup is using Linux 4.14 with rdma-core v15, not OFED.
Please check this doc:

http://dpdk.org/doc/guides/nics/mlx4.html#current-rdma-core-package-and-linux-kernel-recommended


Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Thomas Monjalon
13/12/2017 22:10, Stephen Hemminger:
> On Wed, 13 Dec 2017 22:00:48 +0100
> Thomas Monjalon <tho...@monjalon.net> wrote:
> 
> > 13/12/2017 18:09, Stephen Hemminger:
> > > Many DPDK drivers require that setup and initialization be done by
> > > the primary process. This is mostly to avoid dealing with concurrency 
> > > since
> > > there can be multiple secondary processes.  
> > 
> > I think we should consider this limitation as a bug.
> > We must allow a secondary process to initialize a device.
> > The race in device creation must be fixed.
> > 
> 
> Secondary processes should be able to do setup.
> But it is up to the application not to do it concurrently from multiple
> processes.

Yes there can be synchronization between processes.
But I think it is safer to fix the device creation race in ethdev.
Note that I am not talking about configuration concurrency,
but just about the race in probing.


Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Thomas Monjalon
13/12/2017 18:09, Stephen Hemminger:
> Many DPDK drivers require that setup and initialization be done by
> the primary process. This is mostly to avoid dealing with concurrency since
> there can be multiple secondary processes.

I think we should consider this limitation as a bug.
We must allow a secondary process to initialize a device.
The race in device creation must be fixed.



Re: [dpdk-users] VF RSS availble in I350-T2?

2017-12-13 Thread Thomas Monjalon
12/12/2017 13:58, ..:
> I assume my message was ignored due to it not being related to dpdk
> software?

It is ignored because people have not read it or are not expert in
this hardware.
I am CC'ing the maintainer of igb/e1000.


> On 11 December 2017 at 10:14, ..  wrote:
> 
> > Hi,
> >
> > I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some
> > rx_dropped on the card when I start increasing traffic. (I have got more
> > with the same software out of an identical bare-metal system.)
> >
> > I am using the Intel igb driver on Centos 7.2 (downloaded from Intel, not
> > the driver installed with Centos), so the RSS parameters, amongst others,
> > are available to me.
> >
> > This then led me to investigate the interrupts on the tx/rx ring buffers
> > and I noticed that the interface (vfs enabled) only had one tx/rx queue.
> > This is on the KVM host:
> >
> >  CPU0   CPU1   CPU2   CPU3   CPU4
> > CPU5   CPU6   CPU7   CPU8
> >  100:  1 33137  0  0
> > 0  0  0  0 IR-PCI-MSI-edge  ens2f1
> >  101:   2224  0  0   6309 178807
> > 0  0  0  0 IR-PCI-MSI-edge  ens2f1-TxRx-0
> >
> > Looking at my standard nic ethernet ports I see 1 rx and 4 rx queues
> >
> > On the VM I only get one tx one rx queue ( I know all the interrupts are
> > only using CPU0) but that is defined in our builds.
> >
> > egrep "CPU|ens11" /proc/interrupts
> >CPU0   CPU1   CPU2   CPU3   CPU4
> > CPU5   CPU6   CPU7
> >  34:  715885552  0  0  0  0
> > 0  0  0  0   PCI-MSI-edge  ens11-tx-0
> >  35:  559402399  0  0  0  0
> > 0  0  0  0   PCI-MSI-edge  ens11-rx-0
> >
> > I activated RSS in my card, and can set it; however, if I use the param
> > max_vfs=n then it defaults back to 1 rx / 1 tx queue per nic port
> >
> > [  392.833410] igb :07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
> > [  393.035408] igb :07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
> >
> > I have been reading some of the older dpdk posts and see that VF RSS is
> > implemented in some cards; does anybody know if it's available in this card
> > (from reading, it seemed to be only the 10Gb cards)?
> >
> > One of my plans aside from trying to create more RSS per VM is to add more
> > CPUS to the VM that are not isolated so that the rx and tx queues can
> > distribute their load a bit to see if this helps.
> >
> > Also, is it worth investigating the VMDq options? However, I understand
> > these to be less useful than SR-IOV, which works well for me with KVM.
> >
> >
> > Thanks in advance,
> >
> > Rolando
> >
> 





Re: [dpdk-users] DPDK Performance tips

2017-12-13 Thread Thomas Monjalon
13/12/2017 09:14, Anand Prasad:
> Hi Dpdk team,
> Can anyone please share tips to get better DPDK performance? I have tried
> to run DPDK test applications on various PC configurations with different CPU
> speeds, RAM size/speed, and PCIe x16 2nd and 3rd generation connections...
> but I don't get very consistent results.
> The same test application does not behave similarly on 2 PCs with the same
> hardware configuration. But the general observation is that performance is
> better on an Intel motherboard with a 3rd generation PCIe slot. When tried on
> a Gigabyte motherboard (even with higher CPU and RAM speeds), performance
> was very poor.
> The performance issue I am facing is packet drop on the Rx side.
> Two PCs with exactly the same hardware configuration: one PC drops packets
> after a few hours, but on the other PC I don't observe packet drop.
> Highly appreciate a quick response.
> Regards, Anand Prasad

This is very complicated because it really depends on the hardware.
Managing performance requires a very good knowledge of the hardware.

You can find some basic advices in this guide for some Intel hardware:
http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
You will also find some information in the driver guides. Example:
http://dpdk.org/doc/guides/nics/mlx5.html#performance-tuning



Re: [dpdk-users] [vpp-dev] vpp-verify-master-opensuse build failure triage

2017-11-29 Thread Thomas Monjalon
Hi,

It is an error on DPDK side.
It seems the tarball for DPDK 17.08 has been overwritten by the stable release 
process.
I have restored the original tarball and we are checking what went wrong.

fast.dpdk.org is a CDN for static.dpdk.org.
It may take time to get the restored tarball on all CDN nodes.

Sorry for the noise


28/11/2017 12:31, Marco Varlese:
> Hi Gabriel,
> 
> I just submitted a patch (https://gerrit.fd.io/r/#/c/9597/) to VPP to fix
> this issue.
> I've added you to the review so you can take a look.
> 
> In order to verify that the new patch fixes the issue, people should first 
> run:
> 
> $ rpm -e vpp-dpdk-devel && rm dpdk/dpdk-17.08.tar.xz
> 
> to trigger a rebuild and reinstall of the dpdk package...
> 
> 
> Cheers,
> Marco
> 
> On Tue, 2017-11-28 at 10:34 +, Gabriel Ganne wrote:
> > Adding dpdk-user ML.
> > 
> > I had a look with an older dpdk archive I found.
> > The archived folder has been renamed from *dpdk-17.08* to
> > *dpdk-stable-17.08*.
> > This is the only difference, but it is enough to make the md5sum fail.
> > 
> > --
> > Gabriel Ganne
> > From: Gabriel Ganne
> > Sent: Tuesday, November 28, 2017 11:13:23 AM
> > To: Marco Varlese; Dave Wallace; Gonzalez Monroy, Sergio
> > Cc: vpp-...@lists.fd.io
> > Subject: Re: [vpp-dev] vpp-verify-master-opensuse build failure triage
> >  
> > Hi Marco,
> > 
> > I believe http://fast.dpdk.org/rel redirects to http://static.dpdk.org/rel/
> > 
> > I disagree on the md5 hashs.
> > I have the following (NOK on 17.08, and OK on 17.11) :
> > 
> > $ wget http://static.dpdk.org/rel/dpdk-17.08.tar.xz
> > $ openssl md5 dpdk-17.08.tar.xz # is 0641f59ea8ea98afefa7cfa2699f6241 in
> > dpdk/Makefile
> > MD5(dpdk-17.08.tar.xz)= 537ff038915fefd0f210905fafcadb4b 
> > 
> > $ wget http://static.dpdk.org/rel/dpdk-17.11.tar.xz
> > $ openssl md5 dpdk-17.11.tar.xz
> > MD5(dpdk-17.11.tar.xz)= 53ee9e054a8797c9e67ffa0eb5d0c701 
> > 
> > Though I agree that if the "recheck" button made the build pass, there must 
> > be
> > something wrong on my side.
> >  ... what did I miss ?
> > 
> > --
> > Gabriel Ganne
> > From: Marco Varlese 
> > Sent: Tuesday, November 28, 2017 10:55:49 AM
> > To: Gabriel Ganne; Dave Wallace; Gonzalez Monroy, Sergio
> > Cc: vpp-...@lists.fd.io
> > Subject: Re: [vpp-dev] vpp-verify-master-opensuse build failure triage
> >  
> > Hi Gabriel,
> > 
> > On Tue, 2017-11-28 at 09:19 +, Gabriel Ganne wrote:
> > > Hi,
> > > 
> > > I also have this issue on my machine, and I see on
> > > http://static.dpdk.org/rel/ that dpdk-17.08.tar.xz was written yesterday
> > > (27-Nov-2017 13:00).
> > > Wouldn't it be possible that the archive was overwritten ?
> > 
> > The DPDK tarball in VPP is downloaded from http://fast.dpdk.org/rel
> > According to http://dpdk.org/rel the MD5 used in VPP for the DPDK 17.08
> > release is correct.
> > > In which case, the hash would need to be updated.
> > 
> > Right, if the tarball was a newer and different one then the MD5 hash should
> > be updated in VPP for the the checksum performed...
> > However, in the case described by Dave below, a simple 'recheck' which
> > triggers a new build (with the same code/scripts/etc. hence the same MD5 
> > hash)
> > solved it.
> > 
> > > Also, this would probably not be seen by people who had the 
> > > dpdk-install-dev 
> > > package already installed.
> > > 
> > > Who should I ask to check this ?
> > 
> > I've added Sergio who might have further thoughts on this one.
> > 
> > > Best regards
> > > 
> > > --
> > > Gabriel Ganne
> > > From: vpp-dev-boun...@lists.fd.io  on behalf 
> > > of
> > > Marco Varlese 
> > > Sent: Tuesday, November 28, 2017 9:19:37 AM
> > > To: Dave Wallace
> > > Cc: vpp-...@lists.fd.io
> > > Subject: Re: [vpp-dev] vpp-verify-master-opensuse build failure triage
> > >  
> > > Dear Dave,
> > > 
> > > By the look of it is seemed to have been an hiccup with the download or
> > > that something spurious was left on the filesystem...
> > > ===
> > > 12:08:13 Bad Checksum! Please remove /w/workspace/vpp-verify-master-
> > > opensuse/dpdk/dpdk-17.08.tar.xz and retry
> > > 12:08:13 Makefile:267: recipe for target '/w/workspace/vpp-verify-
> > > master-opensuse/build-root/build-vpp-native/dpdk/.download.ok' failed
> > > 12:08:13 make[3]: *** [/w/workspace/vpp-verify-master-opensuse/build-
> > > root/build-vpp-native/dpdk/.download.ok] Error 1
> > > 12:08:13 make[3]: Leaving directory '/w/workspace/vpp-verify-master-
> > > opensuse/dpdk'
> > > 12:08:13 Makefile:460: recipe for target 'ebuild-build' failed
> > > 12:08:13 make[2]: *** [ebuild-build] Error 2
> > > 12:08:13 make[2]: Leaving directory '/w/workspace/vpp-verify-master-
> > > opensuse/dpdk'
> > > 12:08:13 Makefile:682: recipe for target 'dpdk-build' failed
> > > 12:08:13 make[1]: *** [dpdk-build] Error 2
> > > 12:08:13 make[1]: Leaving directory '/w/workspace/vpp-verify-master-
> > > opensuse/build-root'
> > > 12:08:13 Makefile:333: recipe 

Re: [dpdk-users] Building mlx4 drivers

2017-11-07 Thread Thomas Monjalon
Hi,

07/11/2017 21:49, Michael Sowka:
> Hello, I am experiencing some early difficulties in building the mlx4
> driver for a ConnectX-3 device.

Which version of DPDK?

[...]
> Again, I'm sticking with the docs and not installing anything outside of what
> MLNX_OFED provides in its packages, but where is that pesky mlx4dv.h header?

If you are compiling DPDK 17.11, the doc is not updated yet unfortunately.
You should try rdma-core and upstream kernel instead of MLNX_OFED.



