The LLDP FW agent only “eats” LLDP packets, but that alone can confuse LACP, 
OVS, or other software bridges, since they expect LLDP information from the 
ToR switch. With LACP, the bonding software never receives anything from the 
ToR, so it may conclude the port is down. The FW agent also responds with the 
same MAC address on both ports, which can cause packets to be dropped or 
delivered out of order. These side effects can be hard to troubleshoot back 
to a root cause. The original design intent was to simplify DCB setup for 
FCoE and to provide ToR information out of band to lifecycle controllers via 
a BMC. Since the initial release we have been making updates and improvements 
to the X710 (700 Series) and significant changes to the implementation on the 
800 Series to address these issues.
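
For reference, the agent can also be disabled from the OS. The i40e (700 
Series) driver exposes a disable-fw-lldp private flag, and if I recall 
correctly the ice (800 Series) driver exposes an fw-lldp-agent flag:

ethtool --set-priv-flags <interface> disable-fw-lldp on   # 700 Series / i40e
ethtool --set-priv-flags <interface> fw-lldp-agent off    # 800 Series / ice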

I have brought this issue back up with our engineering and support teams so 
we can work with our OEM server and ISV partners to improve how the issue is 
communicated.

Brian Johnson
 Intel Corp 

> On Mar 14, 2022, at 5:02 PM, Laurent Dumont <laurentfdum...@gmail.com> wrote:
> 
> Wow! That is good to know. We had issues in the past with the hardware LLDP
> agent eating our OS LLDP packets.
> 
> But never eating the actual data packets!
> 
>> On Mon, Mar 14, 2022, 7:43 PM Matthew Weiner <mlwei...@lakelandschools.org>
>> wrote:
>> 
>> The issue was with the integrated LLDP daemon.  Disabling it according to
>> Jesse Brandeburg's excellent recommendation solved the issue that both
>> Citrix and Dell couldn't figure out.  Running the following on all NICs in
>> the LACP bond brought my performance from 5 megabit to full ten gigabit
>> line speed.
>> 
>> ethtool --set-priv-flags <interface> disable-fw-lldp on
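>> 
>> You can verify the flag took effect with ethtool --show-priv-flags
>> <interface>; it should now list disable-fw-lldp as on.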
>> 
>> Just a heads up for my fellow Dell owners: there is a setting to turn this
>> off in the UEFI setup, but the driver appears to be able to override it,
>> so even if you disabled it in the UEFI firmware options it can still be in
>> effect until you use ethtool to disable it.
>> 
>> Thanks again to Jesse Brandeburg for getting me out of this really
>> tough situation!
>> 
>> 
>> On Mon, Mar 14, 2022 at 5:47 PM Laurent Dumont <laurentfdum...@gmail.com>
>> wrote:
>> 
>>> That's really weird.
>>> 
>>>   - How are you measuring performance? iperf?
>>>   - Are you able to put a Ubuntu/other OS directly on the R740 and
>>>   validate the performance?
>>> 
>>> We have a lot of X710 with various firmware and I have never seen
>>> something like this.
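>>> 
>>> (If you do grab numbers, something like iperf3 -s on one side and
>>> iperf3 -c <server> -P 4 on the other, plus a -R run for the reverse
>>> direction, makes them easy to compare.)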
>>> 
>>> On Mon, Mar 14, 2022 at 2:49 PM Matthew Weiner <mlwei...@lakelandschools.org> wrote:
>>> 
>>>> I'll give that a try after my day ends, just in case it drops the
>>>> link.  I had a problem before where I changed some offload settings
>>>> using ethtool and the links went hard down, and the only way I could
>>>> bring them back up was a reboot.  I'll let you know the results after
>>>> I make the change.
>>>> 
>>>> My kernel is 4.19.0+1 - it's Citrix Hypervisor (formerly XenServer) 8.2
>>>> LTSR.
>>>> 
>>>> 
>>>> 
>>>> On Mon, Mar 14, 2022 at 1:21 PM Jesse Brandeburg <jesse.brandeb...@intel.com> wrote:
>>>> 
>>>>> On 3/14/2022 8:28 AM, Kevin Bowling wrote:
>>>>>> Fortville (700) has always been a bit of a disaster
>>>>>> (https://cdrdv2.intel.com/v1/dl/getContent/331430?explicitVersion=true);
>>>>>> I'd see if you can press your Intel reps into getting you X550s or
>>>>>> 800-series NICs and spare yourself the unnecessary trouble; the 800
>>>>>> series is a much nicer design.
>>>>>> 
>>>>>> It's surprising they are shipping new cards with firmware that old;
>>>>>> you should be on 8.50 for the driver you are running
>>>>>> (https://www.intel.com/content/www/us/en/download/18635/non-volatile-memory-nvm-update-utility-for-intel-ethernet-adapters-700-series-linux.html).
>>>>>> Doing the FW update is worth a shot but most issues I've seen have
>>>>>> been driver related and you are running a pretty recent driver.
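>>>>>> (If I remember right, the Linux tool in that package is nvmupdate64e;
>>>>>> run as root, it interactively walks you through updating the NVM on
>>>>>> each port.)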
>>>>>> 
>>>>>> Regards,
>>>>>> Kevin
>>>>>> 
>>>>>> On Mon, Mar 14, 2022 at 7:44 AM Matthew Weiner
>>>>>> <mlwei...@lakelandschools.org> wrote:
>>>>>>> 
>>>>>>> I'm at my wits' end with this. Citrix is stumped, Dell is stumped,
>>>>>>> and with the supply chain issues being what they are, we can't just
>>>>>>> yank these out in favor of X550s. The problem is we have a group of
>>>>>>> Dell R740s with X710 dual-port NICs, and the performance is, in a
>>>>>>> word, awful. Like 5-6 megabit upload and 250 megabit download awful.
>>>>>>> However, identical server hardware with any other card, be it a
>>>>>>> Broadcom or an Intel X550T, has no issues; we can get line rate all
>>>>>>> day long. The latest attempt was swapping the X710 for a newer
>>>>>>> X710-T2L, which performed maybe 5-10 percent better. We've tried
>>>>>>> three different driver revisions, firmware, BIOS, and all the
>>>>>>> available hypervisor updates, and it still performs the same.
>>>>>>> 
>>>>>>> The servers in question have X550s on the motherboard mezzanine
>>>>>>> card, which perform fine, and a single dual-port X710 in the PCIe
>>>>>>> riser. The X710 is set up as an LACP pair trunked with three VLANs
>>>>>>> tagged across it. In this pool we also have servers with X550s on
>>>>>>> PCIe cards, and Broadcoms. All of those with an identical
>>>>>>> configuration perform without issue; it's only the X710s that show
>>>>>>> this problem.
>>>>> 
>>>>> Hi Matt, sorry to hear about this problem. Let's poke a bit (please be
>>>>> patient with me) and see if we can help you.
>>>>> 
>>>>> Have you followed the steps like the ones located here:
>>>>> 
>>>>> https://www.thomas-krenn.com/en/wiki/Intel_Ethernet_700_Series_LACP_Configuration
>>>>> 
>>>>> There are definitely known problems with LACP mode and the driver's
>>>>> default settings.
>>>>> 
>>>>> You can try the above workaround and see if it helps. If it does,
>>>>> there are ways to make the settings get applied by ethtool as the
>>>>> system comes up.
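>>>>> 
>>>>> One way to do that (a rough sketch; the unit path and eth2/eth3 are
>>>>> example names you would adjust to your X710 ports) is a oneshot
>>>>> systemd unit:
>>>>> 
>>>>> # /etc/systemd/system/disable-fw-lldp.service (example path)
>>>>> [Unit]
>>>>> Description=Disable FW LLDP agent on X710 ports
>>>>> After=network-pre.target
>>>>> Before=network.target
>>>>> 
>>>>> [Service]
>>>>> Type=oneshot
>>>>> ExecStart=/usr/sbin/ethtool --set-priv-flags eth2 disable-fw-lldp on
>>>>> ExecStart=/usr/sbin/ethtool --set-priv-flags eth3 disable-fw-lldp on
>>>>> 
>>>>> [Install]
>>>>> WantedBy=multi-user.target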
>>>>> 
>>>>> Please let us know how it goes.
>>>>> 
>>>>> It would be helpful to know what kernel you're running, just for good
>>>>> measure.
>>>>> 
>>>>> PS. You may want to subscribe to e1000-devel, as it is currently
>>>>> holding your messages because you're not a subscriber and they have
>>>>> to be manually released.

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel Ethernet, visit 
https://forums.intel.com/s/topic/0TO0P00000018NbWAI/intel-ethernet
