On Mon, May 15, 2023 at 12:57 PM Igor Cicimov <icici...@gmail.com> wrote:
>
> Hi Jesse, thanks for your reply.
>
> On Sat, May 13, 2023 at 2:41 AM Jesse Brandeburg
> <jesse.brandeb...@intel.com> wrote:
> >
> > On 5/11/2023 9:54 PM, Igor Cicimov wrote:
> > > Hi,
> > >
> > > I have a problem with my 8086:1010 Intel Corporation 82546EB Gigabit
> > > Ethernet Controller (Copper) dual-port Ethernet card on Ubuntu 22.04.2 LTS,
> > > using the e1000 driver:
> >
> > This card is from 2003! :-) Nice that it's still running!
> >
>
> Time flies :-)
>
> > ....
> >
> > Did you file a bug with Canonical against ubuntu or ask for help over
> > there yet?
> >
>
> No, not yet. To be honest, my assumption is that support for this card
> may have been removed from the driver or the kernel?
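
FWIW, a quick way to check whether the driver still claims this device
(the PCI address 01:0a.0 is taken from the dmesg further down; adjust as
needed) would be something along these lines:

# lspci -nnk -s 01:0a.0
# modinfo e1000 | grep 1010

The first command shows which kernel driver is actually bound to the port,
and the second greps the e1000 module's PCI aliases for the 8086:1010
device ID.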
>
> > > that I have configured in LACP bond0:
> > >
> > > # cat /proc/net/bonding/bond0
> > > Ethernet Channel Bonding Driver: v5.15.0-69-generic
> > >
> > > Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> > > Transmit Hash Policy: layer2+3 (2)
> > > MII Status: down
> > > MII Polling Interval (ms): 100
> > > Up Delay (ms): 100
> > > Down Delay (ms): 100
> > > Peer Notification Delay (ms): 0
> > >
> > > 802.3ad info
> > > LACP active: on
> > > LACP rate: fast
> > > Min links: 0
> > > Aggregator selection policy (ad_select): stable
> > > System priority: 65535
> > > System MAC address: MAC_BOND0
> > > bond bond0 has no active aggregator
> >
> > Did you try bonding without MII link monitoring? I'm wondering if you're
> > getting caught up in the ethtool transition to netlink for some reason.
> >
>
> No, but that's a good idea.
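
For anyone following along, MII monitoring can be switched off either with
bond-miimon 0 in the interfaces file or at runtime through the bonding
driver's sysfs knob, roughly like this (a sketch, assuming the bond is
named bond0; the second command just confirms the change took effect):

# echo 0 > /sys/class/net/bond0/bonding/miimon
# grep "MII Polling" /proc/net/bonding/bond0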

Here is the outcome with MII monitoring disabled:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-69-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: MAC_BOND0
bond bond0 has no active aggregator

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC_ETH1
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: MAC_BOND0
    port key: 9
    port priority: 255
    port number: 1
    port state: 71
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC_ETH2
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: MAC_BOND0
    port key: 9
    port priority: 255
    port number: 2
    port state: 71
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1

But the bond is still down, and no traffic is seen on the interfaces
towards the switch.
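
One more data point that might help: LACP uses the slow-protocols
ethertype 0x8809 (destination MAC 01:80:c2:00:00:02), so a capture on one
of the slaves should show whether LACPDUs are actually going out and
whether anything at all comes back from the switch, for example:

# tcpdump -eni eth1 ether proto 0x8809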

>
> >
> > >
> > > Slave Interface: eth1
> > > MII Status: down
> > > Speed: 1000 Mbps
> > > Duplex: full
> > > Link Failure Count: 0
> > > Permanent HW addr: MAC_ETH1
> > > Slave queue ID: 0
> > > Aggregator ID: 1
> > > Actor Churn State: churned
> > > Partner Churn State: churned
> > > Actor Churned Count: 1
> > > Partner Churned Count: 1
> > > details actor lacp pdu:
> > >     system priority: 65535
> > >     system mac address: MAC_BOND0
> > >     port key: 0
> > >     port priority: 255
> > >     port number: 1
> > >     port state: 71
> > > details partner lacp pdu:
> > >     system priority: 65535
> > >     system mac address: 00:00:00:00:00:00
> > >     oper key: 1
> > >     port priority: 255
> > >     port number: 1
> > >     port state: 1
> > >
> > > Slave Interface: eth2
> > > MII Status: down
> > > Speed: 1000 Mbps
> > > Duplex: full
> > > Link Failure Count: 0
> > > Permanent HW addr: MAC_ETH2
> > > Slave queue ID: 0
> > > Aggregator ID: 2
> > > Actor Churn State: churned
> > > Partner Churn State: churned
> > > Actor Churned Count: 1
> > > Partner Churned Count: 1
> > > details actor lacp pdu:
> > >     system priority: 65535
> > >     system mac address: MAC_BOND0
> > >     port key: 0
> > >     port priority: 255
> > >     port number: 2
> > >     port state: 71
> > > details partner lacp pdu:
> > >     system priority: 65535
> > >     system mac address: 00:00:00:00:00:00
> > >     oper key: 1
> > >     port priority: 255
> > >     port number: 1
> > >     port state: 1
> > >
> > > which is of course down, since both interfaces have MII Status:
> > > down. The dmesg shows:
> > >
> > > # dmesg | grep -E "bond0|eth[1|2]"
> > > [   42.999281] e1000 0000:01:0a.0 eth1: (PCI:33MHz:32-bit) MAC_ETH1
> > > [   42.999292] e1000 0000:01:0a.0 eth1: Intel(R) PRO/1000 Network Connection
> > > [   43.323358] e1000 0000:01:0a.1 eth2: (PCI:33MHz:32-bit) MAC_ETH2
> > > [   43.323366] e1000 0000:01:0a.1 eth2: Intel(R) PRO/1000 Network Connection
> > > [   65.617020] bonding: bond0 is being created...
> > > [   65.787883] 8021q: adding VLAN 0 to HW filter on device eth1
> > > [   67.790638] 8021q: adding VLAN 0 to HW filter on device eth2
> > > [   70.094511] 8021q: adding VLAN 0 to HW filter on device bond0
> > > [   70.558364] 8021q: adding VLAN 0 to HW filter on device eth1
> > > [   70.558675] bond0: (slave eth1): Enslaving as a backup interface with a down link
> > > [   70.560050] 8021q: adding VLAN 0 to HW filter on device eth2
> > > [   70.560354] bond0: (slave eth2): Enslaving as a backup interface with a down link
> > >
> > > So both eth1 and eth2 are up and recognised, and ethtool says "Link
> > > detected: yes", yet the bond reports their MII status as down. I also get
> > > a confusing port type of FIBRE from ethtool (lshw reports capabilities: pm
> > > pcix msi cap_list rom ethernet physical fibre 1000bt-fd autonegotiation).
> > > It is weird, and I suspect some hardware or firmware issue. Any ideas are
> > > welcome.
> >
> > You didn't post your bonding options enabled or bonding config file:
> >
>
> Sure, this is the content of /etc/network/interfaces that I've been
> dragging along since 12.04 or maybe even earlier. Many times I planned to
> migrate to systemd-networkd, or better yet netplan, but never got the time:
>
> auto eth1
> allow-hotplug eth1
> allow-bond0 eth1
> iface eth1 inet manual
>     bond-master bond0
>
> auto eth2
> allow-hotplug eth2
> allow-bond0 eth2
> iface eth2 inet manual
>     bond-master bond0
>
> auto bond0
> iface bond0 inet static
>     pre-up /sbin/ifconfig eth1 0.0.0.0 up || /bin/true && \
>            sleep 2 && /sbin/ifconfig eth2 0.0.0.0 up || /bin/true && sleep 2
>     post-up ifenslave bond0 eth1 eth2
>     pre-down ifenslave -d bond0 eth1 eth2
>     #there are several modes, this is also known as mode 4
>     bond-mode 802.3ad
>     bond-miimon 100
>     bond-lacp_rate fast
>     bond-xmit_hash_policy layer2+3
>     bond-downdelay 100
>     bond-updelay 100
>     bond-slaves none
>     address 172.128.1.129
>     netmask 255.255.255.0
>
> It might be outdated though, since there are some pre- and post-up commands
> I had to add to work around a bug in previous Ubuntu versions; not sure if
> that hurts or not.
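
For what it's worth, on recent releases I believe the ifenslave ifupdown
hooks bring the slaves up and enslave them on their own via the
bond-master/bond-slaves options, so the pre-up, post-up and pre-down lines
above shouldn't be needed any more. A trimmed-down sketch of the same bond0
stanza, untested and with the values simply copied from above, would be:

auto bond0
iface bond0 inet static
    address 172.128.1.129
    netmask 255.255.255.0
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp_rate fast
    bond-xmit_hash_policy layer2+3
    bond-downdelay 100
    bond-updelay 100

with the eth1/eth2 stanzas left as they are, bond-master bond0 doing the
enslaving.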
>
> > Did you try the use_carrier=1 option? It's the default, but you're not
> > setting it to zero, are you?
> >
>
> Nope, I haven't. As you say, that's the default, and I'm not
> setting it to zero.
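
In case it helps narrow down where the "down" state comes from, the
kernel's own view of the carrier can be compared with ethtool's, e.g.:

# cat /sys/class/net/eth1/carrier /sys/class/net/eth1/operstate
# ethtool eth1 | grep "Link detected"

With use_carrier=1 (the default) the bonding driver relies on the kernel
carrier state, i.e. the first of these, rather than on MII ioctls.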
>
> > >
> > > P.S.: It is not the switch, the switch ports, or the cables; I have
> > > already tested those. The same setup (switch + cables + card) was working
> > > fine up to Ubuntu 18.04.
> >
> > The
> >     Supported ports: [ FIBRE ]
> >
> > thing is strange, but it really shouldn't matter.
> >
> >
> >
> Cheers,
> Igor

