Hi Robert,

Did you set the registry key (DevxFsRules) in the Windows registry?
https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys

If not, can you try setting it to the value (0xFFFFFFFF) and see if the issue still occurs after an adapter restart?
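For reference, DevxFsRules is a per-adapter DWORD stored under the adapter's registry key (the linked page has the details). A sketch of setting it from an elevated command prompt, where {4d36e972-e325-11ce-bfc1-08002be10318} is the standard network adapter class GUID and <nnnn> is a placeholder for your adapter's instance index (e.g. 0001):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\<nnnn>" /v DevxFsRules /t REG_DWORD /d 0xFFFFFFFF

Afterwards, disable and re-enable the adapter (or use Restart-NetAdapter from PowerShell) so the driver picks up the new value.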
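As an aside, the "exits after a keypress" behavior you describe is expected: testpmd only enters interactive mode when started with -i after the EAL option separator. Without it, testpmd starts packet forwarding and waits for Enter, which matches the output you pasted. For example:

    C:\dpdk\build\app>dpdk-testpmd.exe -- -i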
Thanks,
Tal.

> Subject: Re: Windows examples failed to start using mellanox card
>
> External email: Use caution opening links or attachments
>
> Tal, Ophir, could you advise?
>
> 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > Hello,
> >
> > I am having trouble running DPDK on Windows. I am trying to use the
> > example programs and testpmd, but they fail with some errors (see the
> > outputs below). The testpmd program also does not go into interactive
> > mode and exits after a keypress.
> > I am using Windows Server 2022 with a Mellanox ConnectX-4 Lx card with
> > WinOF-2 2.90.50010 / SDK 2.90.25518. I am using the current DPDK build
> > (version 22.07-rc3).
> >
> > I followed the DPDK Windows guide, but currently I always get the
> > following error:
> >
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> >
> > Does anybody have an idea how to resolve this problem, or at least how
> > to get some more information on why it failed?
> >
> > Helloworld output:
> >
> > C:\dpdk\build\examples>dpdk-helloworld.exe
> > EAL: Detected CPU lcores: 24
> > EAL: Detected NUMA nodes: 2
> > EAL: Multi-process support is requested, but not available.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > hello from core 1
> > hello from core 2
> > hello from core 3
> > hello from core 4
> > hello from core 5
> > hello from core 6
> > hello from core 7
> > hello from core 8
> > hello from core 16
> > hello from core 22
> > hello from core 11
> > hello from core 12
> > hello from core 13
> > hello from core 14
> > hello from core 15
> > hello from core 9
> > hello from core 17
> > hello from core 18
> > hello from core 19
> > hello from core 20
> > hello from core 21
> > hello from core 23
> > hello from core 0
> > hello from core 10
> >
> > testpmd output:
> >
> > C:\dpdk\build\app>dpdk-testpmd.exe
> > EAL: Detected CPU lcores: 24
> > EAL: Detected NUMA nodes: 2
> > EAL: Multi-process support is requested, but not available.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Configuring Port 0 (socket 0)
> > mlx5_net: port 0 failed to set defaults flows
> > Fail to start port 0: Invalid argument
> > Configuring Port 1 (socket 0)
> > mlx5_net: port 1 failed to set defaults flows
> > Fail to start port 1: Invalid argument
> > Please stop the ports first
> > Done
> > No commandline core given, start packet forwarding
> > Not all ports were started
> > Press enter to exit
> >
> > Stopping port 0...
> > Stopping ports...
> > Done
> >
> > Stopping port 1...
> > Stopping ports...
> > Done
> >
> > Shutting down port 0...
> > Closing ports...
> > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > Port 0 is closed
> > Done
> >
> > Shutting down port 1...
> > Closing ports...
> > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > Port 1 is closed
> > Done
> >
> > Bye...
> >
> > Kind regards,
> > Robert
