On 5/11/2023 9:54 PM, Igor Cicimov wrote:
> Hi,
>
> I have a problem with my 8086:1010 Intel Corporation 82546EB Gigabit
> Ethernet Controller (Copper) dual port ethernet card and Ubuntu 22.04.2 LTS
> using e1000 driver:
This card is from 2003! :-) Nice that it's still running!
....
Did you file a bug with Canonical against Ubuntu, or ask for help over
there yet?
> that I have configured in LACP bond0:
>
> # cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v5.15.0-69-generic
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2+3 (2)
> MII Status: down
> MII Polling Interval (ms): 100
> Up Delay (ms): 100
> Down Delay (ms): 100
> Peer Notification Delay (ms): 0
>
> 802.3ad info
> LACP active: on
> LACP rate: fast
> Min links: 0
> Aggregator selection policy (ad_select): stable
> System priority: 65535
> System MAC address: MAC_BOND0
> bond bond0 has no active aggregator
Did you try bonding without MII link monitoring? I'm wondering if you're
getting caught up in the ethtool transition to netlink for some reason.
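If you want to rule that out, here's a quick sketch of how I'd compare the
carrier state the kernel sees against what ethtool reports (using eth1/eth2
from your output; adjust as needed):

# cat /sys/class/net/eth1/carrier /sys/class/net/eth2/carrier
# ip -d link show eth1
# ethtool eth1 | grep -E 'Speed|Duplex|Port|Link detected'

If carrier reads 0 (or ip link shows NO-CARRIER) while ethtool still says
"Link detected: yes", the bond's MII status will stay down no matter what
the switch does, which would point at the driver/ethtool side rather than
the network.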
>
> Slave Interface: eth1
> MII Status: down
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: MAC_ETH1
> Slave queue ID: 0
> Aggregator ID: 1
> Actor Churn State: churned
> Partner Churn State: churned
> Actor Churned Count: 1
> Partner Churned Count: 1
> details actor lacp pdu:
> system priority: 65535
> system mac address: MAC_BOND0
> port key: 0
> port priority: 255
> port number: 1
> port state: 71
> details partner lacp pdu:
> system priority: 65535
> system mac address: 00:00:00:00:00:00
> oper key: 1
> port priority: 255
> port number: 1
> port state: 1
>
> Slave Interface: eth2
> MII Status: down
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: MAC_ETH2
> Slave queue ID: 0
> Aggregator ID: 2
> Actor Churn State: churned
> Partner Churn State: churned
> Actor Churned Count: 1
> Partner Churned Count: 1
> details actor lacp pdu:
> system priority: 65535
> system mac address: MAC_BOND0
> port key: 0
> port priority: 255
> port number: 2
> port state: 71
> details partner lacp pdu:
> system priority: 65535
> system mac address: 00:00:00:00:00:00
> oper key: 1
> port priority: 255
> port number: 1
> port state: 1
>
> that is in state down of course since both interfaces have MII Status:
> down. The dmesg shows:
>
> # dmesg | grep -E "bond0|eth[12]"
> [ 42.999281] e1000 0000:01:0a.0 eth1: (PCI:33MHz:32-bit) MAC_ETH1
> [ 42.999292] e1000 0000:01:0a.0 eth1: Intel(R) PRO/1000 Network Connection
> [ 43.323358] e1000 0000:01:0a.1 eth2: (PCI:33MHz:32-bit) MAC_ETH2
> [ 43.323366] e1000 0000:01:0a.1 eth2: Intel(R) PRO/1000 Network Connection
> [ 65.617020] bonding: bond0 is being created...
> [ 65.787883] 8021q: adding VLAN 0 to HW filter on device eth1
> [ 67.790638] 8021q: adding VLAN 0 to HW filter on device eth2
> [ 70.094511] 8021q: adding VLAN 0 to HW filter on device bond0
> [ 70.558364] 8021q: adding VLAN 0 to HW filter on device eth1
> [ 70.558675] bond0: (slave eth1): Enslaving as a backup interface with a
> down link
> [ 70.560050] 8021q: adding VLAN 0 to HW filter on device eth2
> [ 70.560354] bond0: (slave eth2): Enslaving as a backup interface with a
> down link
>
> So both eth1 and eth2 are UP and recognised, and ethtool says "Link
> detected: yes", but their links are DOWN. I also get a confusing port type
> of FIBRE reported by ethtool (lshw reports: capabilities: pm pcix msi
> cap_list rom ethernet physical fibre 1000bt-fd autonegotiation). It is
> weird and I suspect some hardware or firmware issue. Any ideas are
> welcome.
You didn't post your enabled bonding options or your bonding config file.
Did you try the use_carrier option? It defaults to 1, but you're not
setting it to 0, are you?
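Both are easy to check at runtime through the bonding sysfs files, e.g.
(a minimal sketch; bond0 as in your output):

# cat /sys/class/net/bond0/bonding/use_carrier
# cat /sys/class/net/bond0/bonding/miimon
# grep . /sys/class/net/bond0/bonding/* 2>/dev/null

The last one dumps every bonding option in one go; posting that output
here would help.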
>
> P.S.: It is not the switch or the switch ports, and it is not the cables;
> I have already tested that. The same setup (switch + cables + card) was
> working fine up to Ubuntu 18.04.
The "Supported ports: [ FIBRE ]" line is strange, but it really shouldn't
matter.
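If you want to rule it out anyway, ethtool can force the port type on
drivers that support it; purely a sketch, and whether e1000 honours this
on your card is an assumption on my part:

# ethtool -s eth1 port tp autoneg on
# ethtool -s eth2 port tp autoneg on

Then re-check "Link detected" and the slaves' MII status in
/proc/net/bonding/bond0.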