Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Benoit Ganne (bganne)
Hi Stephen,

> Is this with netvsc PMD or failsafe PMD?

I am using the failsafe PMD via the vdev string "--vdev net_vdev_netvsc0,iface=eth1"
etc., as described here:
https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk
I checked with gdb what the underlying devices are, and there is 1 mlx5 and 1
tap instance, as expected.
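For reference, a typical invocation following that guide looks roughly like the
line below (core list, memory channels and interface names are placeholders and
will differ per VM):

./testpmd -l 0-3 -n 2 --vdev="net_vdev_netvsc0,iface=eth1" \
          --vdev="net_vdev_netvsc1,iface=eth2" -- -i --port-topology=chained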
To get the link state, the call stack is rte_eth_link_get_nowait() -> 
fs_link_update() -> mlx5_link_update() -> mlx5_link_update_unlocked_gs() which 
looks good to me.
The link state update fails in mlx5_link_update_unlocked_gs() because the link
speed retrieved from the Linux kernel driver is '0' (unknown). Note that
ethtool and '/sys/class/net//speed' also fail to report the link speed
(but not the link status).
At the end of the day, maybe the Linux kernel driver should report a link
speed; however, I think that should not prevent DPDK from updating the link state
(up/down).
Just removing the offending test makes everything work again, but I do not
think it is the correct solution: I do not know what this test was added for.
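For reference, the check is roughly of this shape (a paraphrase using the
struct rte_eth_link fields, not the exact mlx5 code), together with one possible
relaxation that keeps up/down reporting working when the speed stays unknown:

#include <rte_ethdev.h>   /* struct rte_eth_link, ETH_SPEED_NUM_NONE */

/* Rough paraphrase of the consistency test (not the exact mlx5 code):
 * reject the reading when speed and status disagree, which also
 * rejects the Azure case "link up, speed unknown (0)". */
static int
link_reading_ok_strict(const struct rte_eth_link *l)
{
	return !((l->link_speed && !l->link_status) ||
		 (!l->link_speed && l->link_status));
}

/* Possible relaxation: only reject "speed reported but link down";
 * accept "link up, speed == ETH_SPEED_NUM_NONE" so the up/down state
 * still propagates even when the kernel cannot report a speed. */
static int
link_reading_ok_relaxed(const struct rte_eth_link *l)
{
	return !(l->link_speed && !l->link_status);
}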

> You may be missing this patch, which is only in the current development branch.
> Since it is tagged for stable, it should end up in later LTS versions as
> well, 18.11.X and 19.11.X.

I tried the patch but it did not solve the issue. I think this is expected, as I
am using failsafe with tap and not netvsc, correct?

Best
ben


Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Stephen Hemminger
On Thu, 26 Mar 2020 14:26:56 +
"Benoit Ganne (bganne)"  wrote:

> Hi Stephen,
> 
> > Is this with netvsc PMD or failsafe PMD?  
> 
> I am using the failsafe PMD via the vdev string "--vdev net_vdev_netvsc0,iface=eth1"
> etc., as described here:
> https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk
> I checked with gdb what the underlying devices are, and there is 1 mlx5 and 1
> tap instance, as expected.
> To get the link state, the call stack is rte_eth_link_get_nowait() ->
> fs_link_update() -> mlx5_link_update() -> mlx5_link_update_unlocked_gs()
> which looks good to me.
> The link state update fails in mlx5_link_update_unlocked_gs() because the
> link speed retrieved from the Linux kernel driver is '0' (unknown). Note that
> ethtool and '/sys/class/net//speed' also fail to report the link
> speed (but not the link status).
> At the end of the day, maybe the Linux kernel driver should report a link
> speed; however, I think that should not prevent DPDK from updating the link state
> (up/down).
> Just removing the offending test makes everything work again, but I do not
> think it is the correct solution: I do not know what this test was added for.
>
> > You may be missing this patch, which is only in the current development branch.
> > Since it is tagged for stable, it should end up in later LTS versions as
> > well, 18.11.X and 19.11.X.
>
> I tried the patch but it did not solve the issue. I think this is expected, as I
> am using failsafe with tap and not netvsc, correct?
> 
> Best
> ben


Is the Mellanox device being brought up by the base kernel setup?
I find that for Mellanox the device has to be started from the kernel side (e.g.
with ip) and DPDK doesn't do it itself.



Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Benoit Ganne (bganne)
> Is the Mellanox device being brought up by the base kernel setup?
> I find that for Mellanox the device has to be started from the kernel side
> (e.g. with ip) and DPDK doesn't do it itself.

Yes everything is initialized correctly. The netdev itself is configured and 
usable from Linux (ping etc.). Just removing the over-strict check in mlx5 PMD 
is enough for everything to work fine: 
https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
The link speed is unknown but this is not an issue, and the link state and other
link info are correctly reported.
Thomas, any input regarding this behavior in mlx5 PMD?

Thx
ben


Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Thomas Monjalon
Pasting back this important info:
"
Note that ethtool and '/sys/class/net//speed' also fail
to report the link speed (but not the link status).
"

26/03/2020 19:27, Benoit Ganne (bganne):
> Yes everything is initialized correctly. The netdev itself is configured and 
> usable from Linux (ping etc.). Just removing the over-strict check in mlx5 
> PMD is enough for everything to work fine: 
> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
> The link speed is unknown but this is not an issue, and the link state and other
> link info are correctly reported.
> Thomas, any input regarding this behavior in mlx5 PMD?

I am not aware of the lack of link speed info.
It is probably not specific to ConnectX-4 Lx.
I guess it happens only with Hyper-V?

Cc mlx5 maintainers




Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Benoit Ganne (bganne)
> Pasting back this important info:
> "
> Note that ethtool and '/sys/class/net//speed' also fail
> to report the link speed (but not the link status).
> "
> 
> 26/03/2020 19:27, Benoit Ganne (bganne):
> > Yes everything is initialized correctly. The netdev itself is configured
> and usable from Linux (ping etc.). Just removing the over-strict check in
> mlx5 PMD is enough for everything to work fine:
> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
> > The link speed is unknown but this is not an issue, and the link state and
> other link info are correctly reported.
> > Thomas, any input regarding this behavior in mlx5 PMD?
> 
> I am not aware of the lack of link speed info.
> It is probably not specific to ConnectX-4 Lx.
> I guess it happens only with Hyper-V?

For me there are 2 separate issues:
 1) Linux kernel driver does not report link speed in Azure for CX4-Lx in 
Ubuntu 18.04
 2) the mlx5 PMD enforces that both the link speed is defined and the link is up
before updating the interface state

If (1) is fixed, (2) should work, but to me (2) is too strict for no good
reason: we do not really care about the reported link speed, especially in a virtual
environment where it usually does not mean much, but we do care about link state.
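To illustrate from the application side (a minimal sketch, port setup assumed
done elsewhere): all we need from the PMD is the up/down bit, and an unknown
speed should be perfectly acceptable:

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: the application only needs the up/down state;
 * ETH_SPEED_NUM_NONE (0) is a legitimate "speed unknown" value. */
static void
log_link(uint16_t port_id)
{
	struct rte_eth_link link = { 0 };

	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: link %s (speed %s)\n", (unsigned)port_id,
	       link.link_status ? "up" : "down",
	       link.link_speed == ETH_SPEED_NUM_NONE ? "unknown" : "reported");
}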

ben


Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Thomas Monjalon
26/03/2020 21:09, Mark Bloch:
> 
> On 3/26/2020 12:00, Benoit Ganne (bganne) wrote:
> >> Pasting back this important info:
> >> "
> >> Note that ethtool and '/sys/class/net//speed' also fail
> >> to report the link speed (but not the link status).
> >> "
> >>
> >> 26/03/2020 19:27, Benoit Ganne (bganne):
> >>> Yes everything is initialized correctly. The netdev itself is configured
> >> and usable from Linux (ping etc.). Just removing the over-strict check in
> >> mlx5 PMD is enough for everything to work fine:
> >> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
> >>> The link speed is unknown but this is not an issue, and the link state and
> >> other link info are correctly reported.
> >>> Thomas, any input regarding this behavior in mlx5 PMD?
> >>
> >> I am not aware of the lack of link speed info.
> >> It is probably not specific to ConnectX-4 Lx.
> >> I guess it happens only with Hyper-V?
> 
> Should be fixed by those 3 commits (the last one is just cosmetic):
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=dc392fc56f39a00a46d6db2d150571ccafe99734
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=c268ca6087f553bfc0e16ffec412b983ffe32fd4
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=2f5438ca0ee01a1b3a9c37e3f33d47c8122afe74

Thanks for the patches Mark.

> > For me there are 2 separate issues:
> >  1) Linux kernel driver does not report link speed in Azure for CX4-Lx in 
> > Ubuntu 18.04

(1) looks to be addressed by the patches above.

> >  2) the mlx5 PMD enforces that both the link speed is defined and the link is up
> > before updating the interface state

Yes we can look at this issue.

> > If (1) is fixed, (2) should work, but to me (2) is too strict
> > for no good reason: we do not really care about reported link speed,

I agree that link speed is less important than link status.

> > especially in a virtual environment where it usually does not mean much,

Yes the link speed is shared between all VFs.

> > but we do care about link state.




Re: [dpdk-users] DPDK TX problems

2020-03-26 Thread Thomas Monjalon
Thanks for the interesting feedback.
It seems we should test this performance use case in our labs.


18/02/2020 09:36, Hrvoje Habjanic:
> On 08. 04. 2019. 11:52, Hrvoje Habjanić wrote:
> > On 29/03/2019 08:24, Hrvoje Habjanić wrote:
> >>> Hi.
> >>>
> >>> I wrote an application using DPDK 17.11 (I also tried 18.11),
> >>> and when doing some performance testing, I'm seeing very odd behavior.
> >>> To verify that this is not because of my app, I did the same test with
> >>> the l2fwd example app, and I'm still confused by the results.
> >>>
> >>> In short, I'm trying to push a lot of L2 packets through the DPDK engine -
> >>> packet processing is minimal. When testing, I start with a small
> >>> number of packets per second and then gradually increase it to see
> >>> where the limit is. At some point I do reach this limit - packets start
> >>> to get dropped. And this is when things become weird.
> >>>
> >>> When I reach the peak packet rate (at which packets start to get dropped), I
> >>> would expect that reducing the packet rate would remove the packet drops. But
> >>> this is not the case. For example, let's assume that the peak packet rate is
> >>> 3.5Mpps. At this point everything works ok. Increasing the pps to 4.0Mpps
> >>> causes a lot of dropped packets. When reducing the pps back to 3.5Mpps, the app
> >>> is still broken - packets are still dropped.
> >>>
> >>> At this point, I need to drastically reduce the pps (to 1.4Mpps) to make
> >>> the dropped packets go away. Also, the app is unable to successfully forward
> >>> anything beyond this 1.4M, despite the fact that in the beginning it did
> >>> forward 3.5M! The only way to recover is to restart the app.
> >>>
> >>> Also, sometimes the app just stops forwarding any packets - packets are
> >>> received (as seen by the counters), but the app is unable to send anything back.
> >>>
> >>> As I mentioned, I'm seeing the same behavior with the l2fwd example app. I
> >>> tested DPDK 17.11 and also DPDK 18.11 - the results are the same.
> >>>
> >>> My test environment is an HP DL380G8 with 82599ES 10Gig (ixgbe) cards,
> >>> connected to a Cisco Nexus 9300 switch. On the other side is an Ixia test
> >>> appliance. The application runs in a virtual machine (VM) using KVM
> >>> (OpenStack, with SR-IOV enabled and NUMA restrictions). I checked that
> >>> the VM is using only CPUs from the NUMA node to which the network card is
> >>> connected, so there is no cross-NUMA traffic. OpenStack is Queens and
> >>> Ubuntu is the Bionic release. The virtual machine also uses Ubuntu Bionic
> >>> as its OS.
> >>>
> >>> I do not know how to debug this. Does someone else have the same
> >>> observations?
> >>>
> >>> Regards,
> >>>
> >>> H.
> >> There are additional findings. It seems that when I reach the peak pps
> >> rate, the application is not fast enough, and I can see rx missed errors
> >> in the card statistics on the host. At the same time, the tx side starts to
> >> show problems (tx burst starts to show it did not send all packets).
> >> Shortly after that, tx falls apart completely and the top pps rate drops.
> >>
> >> Since I did not disable pause frames, I can see on the switch that the "RX
> >> pause" frame counter is increasing. On the other hand, if I disable
> >> pause frames (on the NIC of the server), the host driver (ixgbe) reports "TX
> >> unit hang" in dmesg and issues a card reset. Of course, after the reset,
> >> none of the DPDK apps in the VMs on this host works.
> >>
> >> Is it possible that at the time of congestion DPDK does not release mbufs
> >> back to the pool, and the tx ring becomes "filled" with zombie packets
> >> (not sent by the card but still holding a reference count as if in use)?
> >>
> >> Is there a way to check the mempool or tx ring for "leftovers"? Is it
> >> possible to somehow "flush" the tx ring and/or mempool?
> >>
> >> H.
> > After a few more tests, things become even weirder - if I do not free the mbufs
> > which are not sent, but resend them again, I can "survive" the over-the-peak
> > event! But then the peak rate starts to drop gradually ...
> >
> > I would ask if someone can try this on their platform and report back? I
> > would really like to know whether this is a problem with my deployment, or
> > whether there is something wrong with DPDK.
> >
> > The test should be simple - use l2fwd or l3fwd and determine the max pps. Then
> > drive the pps 30% over max, then go back down and confirm that you can
> > still get the max pps.
> >
> > Thanks in advance.
> >
> > H.
> >
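For reference on the question above: rte_eth_tx_burst() returns how many packets
it accepted, and ownership of the rest stays with the caller, which must either
retry or free them. A minimal sketch of that pattern (retry bound picked
arbitrarily):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Minimal sketch of handling packets rte_eth_tx_burst() did not accept:
 * retry a bounded number of times, then free whatever is still unsent so
 * the mbufs go back to the pool instead of lingering as "zombies". */
static void
send_burst(uint16_t port, uint16_t queue, struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t sent = 0;
	int retries = 3;	/* arbitrary bound for the example */

	while (sent < n && retries-- > 0)
		sent += rte_eth_tx_burst(port, queue, pkts + sent, n - sent);

	/* Ownership of unsent mbufs stays with the caller: free them. */
	while (sent < n)
		rte_pktmbuf_free(pkts[sent++]);
}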
> 
> I did receive a few mails from users facing this issue, asking how it was
> resolved.
>
> Unfortunately, there is no real fix. It seems that this issue is related
> to the card and hardware used. I'm still not sure which is more to blame,
> but the combination I had is definitely problematic.
>
> Anyhow, in the end, I concluded that the card driver has some issues
> when it is saturated with packets. My suspicion is that the driver/software
> does not properly free packets, the DPDK mempool then becomes
> fragmented, and this causes performance drops. Restarting the software
> releases the pools and restores proper functionality.
> 
> Aft

Re: [dpdk-users] rte_eth_stats_get: imiss is not set when using mlx4/mlx5 driver

2020-03-26 Thread Thomas Monjalon
Hi,

Sorry for the late answer.

22/10/2019 10:38, guyifan:
> DPDK version 18.11.2, imiss is always 0.
> And I could not find any code about 'imiss' in 
> 'dpdk-stable-18.11.2/drivers/net/mlx5/' or 
> 'dpdk-stable-18.11.2/drivers/net/mlx4/'.
> Is there any way to know how many packets have been dropped by a Mellanox NIC?

It is supported in DPDK 19.02:
http://git.dpdk.org/dpdk/commit/?id=ce9494d76c4783
and DPDK 18.11.3:
http://git.dpdk.org/dpdk-stable/commit/?h=81d0621264449ecc
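For reference, once on a release with that support, the counter is read through
the standard stats API (the field is spelled "imissed" in struct rte_eth_stats);
a minimal sketch:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: read the per-port drop counter once the PMD supports it. */
static void
print_missed(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("port %u: imissed=%" PRIu64 "\n",
		       (unsigned)port_id, stats.imissed);
}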




Re: [dpdk-users] mlx5 PMD fails to receive certain icmpv6 multicast

2020-03-26 Thread Thomas Monjalon
06/03/2020 01:45, Liwu Liu:
> Hi Team,
> 
> I am using the mlx5/100G in a KVM guest. The host shows this PCI vfNIC is
> provisioned to the guest:
>   "17:01.1 Ethernet controller: Mellanox Technologies MT27800 Family 
> [ConnectX-5 Virtual Function]"
> 
> I am using DPDK 19.11 with a fairly standard configuration, and when the DPDK
> application runs I still have the kernel mlx5e net device present. I have
> both promiscuous and all-multicast turned on.
> 
> It works fine for IPv4, but for IPv6 it fails. It can receive packets
> destined to 33:33:00:00:00:02 (IPv6 router solicitation), but cannot receive
> packets destined to 33:33:ff:00:00:01 (IPv6 neighbor solicitation for some
> address).
> 
> But if I avoid DPDK and directly use the OFED-4.6-based kernel driver,
> everything works fine as expected.
> 
> I am thinking there is some mismatch happening in the mlx5 PMD. Please give some
> advice/hints.

Adding Mellanox engineers in Cc list for help.
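A possible thing to try while this is investigated (a sketch only, not verified
on this setup): register the solicited-node multicast MAC explicitly instead of
relying on all-multicast, in case the VF does not honour that setting. Note that
rte_eth_dev_set_mc_addr_list() replaces the whole multicast filter list.

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Sketch: explicitly add the IPv6 solicited-node multicast MAC
 * (33:33:ff:00:00:01 from the report above) to the port's filter list. */
static int
add_solicited_node_mac(uint16_t port_id)
{
	struct rte_ether_addr mc[1] = {
		{ .addr_bytes = { 0x33, 0x33, 0xff, 0x00, 0x00, 0x01 } },
	};

	return rte_eth_dev_set_mc_addr_list(port_id, mc, 1);
}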




Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Thomas Monjalon
On 3/26/2020 12:00, Benoit Ganne (bganne) wrote:
> Just removing the over-strict check in mlx5 PMD is enough for everything to 
> work fine:
> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
[...]
>  2) the mlx5 PMD enforces that both the link speed is defined and the link is up
> before updating the interface state

The original commit introducing this logic is:
http://git.dpdk.org/dpdk/commit/?id=cfee94752b8f8f

I would say that the first issue is a lack of comment in this code.

Second, as Benoit said, we should relax this requirement.
If the link speed is unknown, a second request can be tried, no more.

Benoit, feel free to submit a patch showing how you think it should behave.
Otherwise, I guess a maintainer of mlx5 will try to arrange it later.
Note: a patch (even an imperfect one) usually speeds up resolution.

Thanks




Re: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

2020-03-26 Thread Puneet Singh
Hi Everyone,

I am using the X722 NIC with DPDK 19.11, after a single-line patch for port
detection as was advised earlier.
The port gets detected properly.
The NIC stats via rte_eth_stats_get() report that packets are arriving at the
NIC. There are no packets dropped due to lack of mbufs.
But the calls to rte_eth_rx_burst() in the application do not
deliver any packets to user space.

Has anyone had successful packet I/O with the X722 NIC, and if yes, with which OS
and which DPDK release? If any tricks are needed, kindly advise. My entire
use case, which works normally with X520, VMXNET3 and virtio, is blocked with the
X722 NIC.

Regards
Puneet
From: Puneet Singh
Sent: 19 March 2020 13:35
To: 'Li, Xiaoyun' ; 'users@dpdk.org' ; 
'd...@dpdk.org' 
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel 
X722 Nic

Hi Everyone,

I am using X722 NIC with DPDK 19.11.

Testpmd works fine but my application does not (the port is detected but data
rx/tx is not working).

I have reconciled the exact configs that are passed to rte_eth_dev_configure, 
rte_eth_tx_queue_setup, rte_eth_rx_queue_setup between testpmd and my 
application

I do notice that in my application rte_eth_rx_burst() uses 1 as the maximum number
of packets to receive at a time, while testpmd uses MAX_PKT_BURST=512.
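For reference, a burst size of 1 is legal API-wise, just unusual; the common
receive pattern looks roughly like this (a burst size of 32 picked arbitrarily,
processing replaced by a free):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* arbitrary for the example; testpmd uses larger bursts */

/* Minimal sketch of the usual receive loop: ask for a burst, then
 * process (here simply free) whatever actually arrived. */
static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];

	for (;;) {
		uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);

		for (uint16_t i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]);	/* placeholder for real processing */
	}
}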

I changed MAX_PKT_BURST to 1 in testpmd and testpmd also runs into some
problems, e.g. I cannot issue the stop command.

Also I notice the following difference in logs while using testpmd with 
MAX_PKT_BURST=512 versus MAX_PKT_BURST=1

With MAX_PKT_BURST=1
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 41450, 
etype_used = 189, mac_etype_free = 0, etype_free = 0

With MAX_PKT_BURST=512
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 37994, 
etype_used = 189, mac_etype_free = 0, etype_free = 0

It should be noted that the MAX_PKT_BURST parameter also indirectly controls the
number of mbufs created in the packet pool, so how is that changing the above
parameters?

Further, in my application the number of mbufs is allocated independently;
there, the following log comes out:
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 0, 
etype_used = 1, mac_etype_free = 0, etype_free = 0


Regards
Puneet Singh

From: Puneet Singh
Sent: 18 March 2020 13:25
To: 'Li, Xiaoyun' <xiaoyun...@intel.com>; 'users@dpdk.org' <users@dpdk.org>;
'd...@dpdk.org' <d...@dpdk.org>
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel 
X722 Nic

Hi Team,

The only difference I could see:

TestPmd:  Ethertype filter: mac_etype_used = 37994, etype_used = 189

My Application:  Ethertype filter: mac_etype_used = 0, etype_used = 1

Can anyone tell what the significance of these fields is for the i40e NIC, how to
configure them correctly from an application, and what it is in testpmd that
triggers the settings of 37994 and 189 which are not showing up with my
application?



Thanks & Regards
Puneet Singh

From: Puneet Singh
Sent: 17 March 2020 12:30
To: Li, Xiaoyun <xiaoyun...@intel.com>; users@dpdk.org; d...@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel 
X722 Nic

Hi Xiaoyun Li

Following is the difference between the rte_eth_conf of testpmd and my application.
Please let us know if any parameter is critical.

testpmd (rte_eth_conf)

e = {link_speeds = 0, rxmode = {mq_mode = ETH_MQ_RX_NONE, max_rx_pkt_len = 
1518, max_lro_pkt_size = 0, split_hdr_size = 0, offloads = 0, reserved_64s = 
{0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = ETH_MQ_TX_NONE, offloads = 
65536, pvid = 0, hw_vlan_reject_tagged = 0 '\000', hw_vlan_reject_untagged = 0 
'\000',
hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs = 
{0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
  rss_key_len = 0 '\000', rss_hf = 0}, vmdq_dcb_conf = {nb_queue_pools = 
(unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
  nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} }, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf = {
  nb_tcs = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, 
vmdq_rx_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000',
  default_pool = 0 '\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 
'\000', rx_mode = 0, pool_map = {{vlan_id = 0, pools = 0} }}},
  tx_adv_conf = {vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = 
"\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0),
  dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools 
= (unknown: 0)}}, dcb_capability_en = 0, fdir_conf = {mode = RTE_FDIR_MODE_NONE,
pballoc = RTE_FDIR_PBALLOC_64K, status = RTE_FDIR_REPORT_STATUS, drop_queue 
= 127 '\177', mask = {vlan_tci_mask = 65519, ipv4_mask = {src_ip = 4294967295,
dst_ip = 4294967295, tos = 0 '\000', t
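For comparison, the baseline bring-up sequence both testpmd and an application
reduce to is the same; a bare-bones sketch (default configuration, a single
queue pair, the mbuf pool assumed created elsewhere):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Bare-bones port bring-up sketch: default config, one rx and one tx queue.
 * "pool" must be a valid pktmbuf pool created elsewhere. */
static int
port_init(uint16_t port_id, struct rte_mempool *pool)
{
	struct rte_eth_conf conf = { 0 };	/* defaults, as in the dumps above */
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL, pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port_id);
}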

Re: [dpdk-users] CX4-Lx VF link status in Azure

2020-03-26 Thread Mark Bloch



On 3/26/2020 12:00, Benoit Ganne (bganne) wrote:
>> Pasting back this important info:
>> "
>> Note that ethtool and '/sys/class/net//speed' also fail
>> to report the link speed (but not the link status).
>> "
>>
>> 26/03/2020 19:27, Benoit Ganne (bganne):
>>> Yes everything is initialized correctly. The netdev itself is configured
>> and usable from Linux (ping etc.). Just removing the over-strict check in
>> mlx5 PMD is enough for everything to work fine:
>> https://gerrit.fd.io/r/c/vpp/+/26152/1/build/external/patches/dpdk_20.02/0002-mlx5-azure-workaround.patch
>>> The link speed is unknown but this is not an issue, and the link state and
>> other link info are correctly reported.
>>> Thomas, any input regarding this behavior in mlx5 PMD?
>>
>> I am not aware of the lack of link speed info.
>> It is probably not specific to ConnectX-4 Lx.
>> I guess it happens only with Hyper-V?

Should be fixed by those 3 commits (the last one is just cosmetic):

https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=dc392fc56f39a00a46d6db2d150571ccafe99734
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=c268ca6087f553bfc0e16ffec412b983ffe32fd4
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=2f5438ca0ee01a1b3a9c37e3f33d47c8122afe74

Mark

> 
> For me there are 2 separate issues:
>  1) Linux kernel driver does not report link speed in Azure for CX4-Lx in 
> Ubuntu 18.04
>  2) the mlx5 PMD enforces that both the link speed is defined and the link is up
> before updating the interface state
>
> If (1) is fixed, (2) should work, but to me (2) is too strict for no good
> reason: we do not really care about the reported link speed, especially in a virtual
> environment where it usually does not mean much, but we do care about link state.
> 
> ben
>