Hi,

Update: CONFIG_DCB was missing in the kernel config. Enabling it
finally solved the problem.
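
For anyone hitting the same issue, a quick way to verify the fix in
the kernel build tree (assuming a standard in-tree build) is:

grep CONFIG_DCB= .config
# expected after the fix: CONFIG_DCB=y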

Greetings
Stefan Majer

On Mon, Mar 22, 2021 at 8:26 AM Stefan Majer <stefan.ma...@gmail.com> wrote:
>
> Hi,
>
> side note: this works on servers with the ixgbe and i40e drivers when
> they are built in.
>
> Greetings
> Stefan Majer
>
> On Sun, Mar 21, 2021 at 6:01 PM Stefan Majer <stefan.ma...@gmail.com> wrote:
> >
> > Hi,
> >
> > I have a system with two Intel 800 Series network cards:
> > 3b:00.0 Ethernet controller: Intel Corporation Device 1592 (rev 02)
> > d8:00.0 Ethernet controller: Intel Corporation Device 1592 (rev 02)
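> >
> > For reference, the numeric IDs can be resolved (assuming a reasonably
> > current PCI ID database) with:
> >
> > lspci -nn -d 8086:1592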
> >
> > In our use case we boot a kernel with all required modules built in
> > and start a small userland which is responsible for detecting the
> > hardware details and the connections to the attached switches via LLDP.
> > The kernel config can be seen here:
> > https://github.com/metal-stack/kernel/blob/master/config-mainline-x86_64
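> >
> > For reference, whether the ice driver is built in can be checked
> > against that file with:
> >
> > grep CONFIG_ICE= config-mainline-x86_64
> > # expected: CONFIG_ICE=y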
> >
> > The system is set to UEFI boot, and the ice driver reports the following:
> >
> > [   32.210982] ice: Intel(R) Ethernet Connection E800 Series Linux Driver
> > [   32.217514] ice: Copyright (c) 2018, Intel Corporation.
> > [   32.542456] ice 0000:3b:00.0: The DDP package was successfully
> > loaded: ICE OS Default Package version 1.3.20.0
> > [   32.661658] ice 0000:3b:00.0: 126.016 Gb/s available PCIe
> > bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:3a:00.0 (capable
> > of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
> > [   33.004696] ice 0000:d8:00.0: The DDP package was successfully
> > loaded: ICE OS Default Package version 1.3.20.0
> > [   33.124033] ice 0000:d8:00.0: 126.016 Gb/s available PCIe
> > bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:d7:00.0 (capable
> > of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
> > [   35.269298] ice 0000:3b:00.0 eth4: NIC Link is up 100 Gbps Full
> > Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg
> > Advertised: Off, Autoneg Negotiated: False, Flow Control: None
> > [   35.287379] 8021q: adding VLAN 0 to HW filter on device eth4
> > [   35.395128] ice 0000:d8:00.0 eth5: NIC Link is up 100 Gbps Full
> > Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg
> > Advertised: Off, Autoneg Negotiated: False, Flow Control: None
> > [   35.413184] 8021q: adding VLAN 0 to HW filter on device eth5
> > [   36.471407]      device=eth4, hwaddr=b4:96:91:af:72:c0,
> > ipaddr=10.255.255.39, mask=255.255.255.0, gw=10.255.255.1
> >
> > and ethtool shows:
> > ethtool --show-priv-flags eth4
> > Private flags for eth4:
> > link-down-on-close     : off
> > fw-lldp-agent          : off
> > vf-true-promisc-support: off
> > mdd-auto-reset-vf      : off
> > legacy-rx              : off
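> >
> > (With fw-lldp-agent off, the firmware LLDP agent should be disabled,
> > so LLDP frames should be delivered to the host stack.)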
> >
> > But I can't see any LLDP packets. The switch is an Edgecore
> > AS7712-32X running Cumulus Linux.
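> >
> > To rule out a capture-side problem, a quick check (assuming tcpdump
> > is available in the userland) is to listen for the LLDP ethertype
> > directly:
> >
> > tcpdump -i eth4 -vv ether proto 0x88cc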
> >
> > Toggling fw-lldp-agent on or off always leads to the following errors:
> > ethtool --set-priv-flags eth4 fw-lldp-agent on
> > [ 1270.971213] ice 0000:3b:00.0: Fail removing RX LLDP rule on VSI 3
> > error: ICE_ERR_DOES_NOT_EXIST
> > ethtool --set-priv-flags eth4 fw-lldp-agent off
> > [ 1281.001061] ice 0000:3b:00.0: Fail to init DCB
> >
> > No LLDP packets received:
> > lldptool stats -i eth4
> > Total Frames Transmitted        = 0
> > Total Discarded Frames Received = 0
> > Total Error Frames Received     = 0
> > Total Frames Received           = 0
> > Total Discarded TLVs            = 0
> > Total Unrecognized TLVs         = 0
> > Total Ageouts                   = 0
> >
> > Booting the same machine from an Ubuntu 20.10 live CD with a modular
> > kernel receives LLDP perfectly fine; dmesg also shows:
> >
> > FW LLDP is disabled, DCBx/LLDP in SW mode.
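> >
> > A quick way to compare the two boots is to grep the kernel log for
> > the relevant subsystems:
> >
> > dmesg | grep -iE 'dcb|lldp'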
> >
> > Is there any way to tell the ice driver on the command line to
> > behave the same way when it is built in?
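> >
> > (For built-in drivers, module parameters can normally still be passed
> > on the kernel command line as ice.<param>=<value>, e.g. a purely
> > hypothetical ice.fw_lldp=off, but I have not found a parameter that
> > controls this.)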
> >
> > I'd really appreciate any pointers.
> > Greetings
> > Stefan Majer
> >
> > --
> > Stefan Majer
>
>
>
> --
> Stefan Majer



-- 
Stefan Majer

