Ko, Stephen S <stephen.s.ko <at> intel.com> writes:
>
>
> > -----Original Message-----
> > From: bmack [mailto:bomack08 <at> gmail.com]
> > Sent: Tuesday, October 02, 2012 11:22 AM
> > To: e1000-devel <at> lists.sourceforge.net
> > Subject: Re: [E1000-devel] Bonding + ixgbe breaks with jumbo frames if
> > the MTU is not set on bond0 before adding slaves
> >
> >
> >
> > Nathan March <nathan <at> gt.net> writes:
> >
> > >
> > > Hi All,
> > >
> > > I think I've found a bug in the ixgbe driver when using bonding +
> > > jumbo frames. Adding slaves to the bond device and setting mtu 9000
> > > after enslaving, results in one of the slaves dropping traffic. The
> > > strange thing is putting bond0 into promiscuous mode (by running
> > > tcpdump) will solve the problem (until you close tcpdump).
> > >
> >
> > Hi,
> >
> > Seeing what looks to be the same issue with a bonded interface and jumbo
> > frames on the ixgbe driver for the 82599 chip. Can someone please point
> > me to the fix for this, and also a link to the history of fixes for ixgbe?
> >
> > Thanks!
> >
> >
> >
> > _______________________________________________
> > E1000-devel mailing list
> > E1000-devel <at> lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/e1000-devel
> > To learn more about Intel® Ethernet, visit
> > http://communities.intel.com/community/wired
>
> Hi,
>
> Fix is contained in 3.10.17:
> http://sourceforge.net/projects/e1000/files/ixgbe%20stable/3.10.17/
>
> Thanks,
> S
>
Hi All,

On our servers we are using the Intel 82599 chip, and the same issue occurs
even with the latest available ixgbe driver, 3.15.1.

This is how I reproduce the issue. I'm running OVM Server 3.0.3, kernel
2.6.32.21-45xen.
eth2 and eth3 make up the bonded interface bond0. Each interface is
connected to a different NEM/switch. eth2 is supposed to be the active
interface and eth3 the backup slave of my bond0; see the ifcfg-bond0 file:

BONDING_OPTS="mode=1 miimon=100 use_carrier=0 primary=eth2 primary_reselect=2"
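For context, a sketch of the full ifcfg-bond0 I would expect around that
line — only the BONDING_OPTS line is verbatim from above; the other lines
are standard sysconfig boilerplate I am assuming, not copied from my box:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- sketch; only BONDING_OPTS
# is verbatim from above, the rest is assumed sysconfig boilerplate.
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100 use_carrier=0 primary=eth2 primary_reselect=2"
```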
When I boot, the "Currently active slave" is eth2 as expected.
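(I read the active slave from the bonding proc file; the field name below
is the one printed by the standard bonding driver, not copied from my log:)

```shell
# Show which slave the bonding driver currently considers active
grep "Currently Active Slave" /proc/net/bonding/bond0
# e.g.  Currently Active Slave: eth2
```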
As soon as I change the MTU (ifconfig bond0 mtu 9000), the bond0
interface fails over to eth3. See messages:
May 31 11:33:27 cbdn-000-002-000-031 kernel: [ 216.421227] ixgbe 0000:b0:00.0: eth2: Setting MTU > 1500 will disable legacy VFs
May 31 11:33:27 cbdn-000-002-000-031 kernel: [ 216.421496] ixgbe 0000:b0:00.0: eth2: changing MTU from 1500 to 9000
May 31 11:33:29 cbdn-000-002-000-031 kernel: [ 218.664403] ixgbe 0000:b0:00.1: eth3: Setting MTU > 1500 will disable legacy VFs
May 31 11:33:29 cbdn-000-002-000-031 kernel: [ 218.664672] ixgbe 0000:b0:00.1: eth3: changing MTU from 1500 to 9000
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.956381] ixgbe 0000:b0:00.1: eth3: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.956535] ixgbe 0000:b0:00.0: eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.956567] bonding: bond0: link status definitely down for interface eth2, disabling it
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.956571] bonding: bond0: making interface eth3 the new active one.
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.956574] device eth2 left promiscuous mode
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 220.957204] device eth3 entered promiscuous mode
May 31 11:33:31 cbdn-000-002-000-031 kernel: [ 221.055298] bonding: bond0: link status definitely up for interface eth2.
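For what it's worth, two things reportedly avoid the drop according to
Nathan's original report at the top of this thread — sketched below from
that description, not verified as a real fix on my side:

```shell
# Workaround sketch based on the original report in this thread:
# set the MTU on bond0 *before* enslaving, rather than after.
ip link set bond0 mtu 9000
ifenslave bond0 eth2 eth3

# Alternatively, the original report notes that putting bond0 into
# promiscuous mode (e.g. by running tcpdump) masks the problem:
ip link set bond0 promisc on
```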
This thread says the bug has also been fixed for the 82599 chip, yet I
still encounter this issue with the latest driver. Could someone help?

Thanks,
JB