Re: [E1000-devel] Intel 82580 loopback test fail

2011-11-02 Thread Wyborny, Carolyn


>-Original Message-
>From: Zhaobing [mailto:soldier.zhaob...@huawei.com]
>Sent: Friday, October 28, 2011 1:18 AM
>To: e1000-devel@lists.sourceforge.net
>Cc: Wangshiwen
>Subject: [E1000-devel] Intel 82580 loopback test fail
>
>Hi,
> I am an engineer at Huawei Technologies Co., Ltd; my name is Zhaobing.
>My server's NIC is an Intel 82580 and the OS is SUSE 11 SP1. When I run the
>NIC self-test with the command "ethtool -t ethX", the loopback test fails
>with error code 13, as follows.
>
>linux-ming:~ # ethtool -t eth0
>The test result is FAIL
>The test extra info:
>Register test  (offline) 0
>Eeprom test(offline) 0
>Interrupt test (offline) 0
>Loopback test  (offline) 13
>Link test   (on/offline) 0
>
>My server’s info is as follows.
>
>linux-ming:~ # cat /etc/issue
>
>Welcome to SUSE Linux Enterprise Server 11 SP1  (x86_64) - Kernel \r
>(\l).
>linux-ming:~ # uname -a
>Linux linux-ming 2.6.32.45-0.3-default #1 SMP 2011-08-22 10:12:58 +0200
>x86_64 x86_64 x86_64 GNU/Linux
>
>linux-ming:~ # ethtool -i eth0
>driver: igb
>version: 2.1.0-k2
>firmware-version: 3.2-9
>bus-info: :02:00.0
>
>linux-ming:~ # modinfo igb
>filename:   /lib/modules/2.6.32.45-0.3-
>default/kernel/drivers/net/igb/igb.ko
>version:2.1.0-k2
>license:GPL
>description:Intel(R) Gigabit Ethernet Network Driver
>author: Intel Corporation, 
>srcversion: 9C2F91127B64F7027621F71
>alias:  pci:v8086d10D6sv*sd*bc*sc*i*
>alias:  pci:v8086d10A9sv*sd*bc*sc*i*
>alias:  pci:v8086d10A7sv*sd*bc*sc*i*
>alias:  pci:v8086d10E8sv*sd*bc*sc*i*
>alias:  pci:v8086d1526sv*sd*bc*sc*i*
>alias:  pci:v8086d150Dsv*sd*bc*sc*i*
>alias:  pci:v8086d10E7sv*sd*bc*sc*i*
>alias:  pci:v8086d10E6sv*sd*bc*sc*i*
>alias:  pci:v8086d1518sv*sd*bc*sc*i*
>alias:  pci:v8086d150Asv*sd*bc*sc*i*
>alias:  pci:v8086d10C9sv*sd*bc*sc*i*
>alias:  pci:v8086d1516sv*sd*bc*sc*i*
>alias:  pci:v8086d1511sv*sd*bc*sc*i*
>alias:  pci:v8086d1510sv*sd*bc*sc*i*
>alias:  pci:v8086d150Fsv*sd*bc*sc*i*
>alias:  pci:v8086d150Esv*sd*bc*sc*i*
>alias:  pci:v8086d1524sv*sd*bc*sc*i*
>alias:  pci:v8086d1523sv*sd*bc*sc*i*
>alias:  pci:v8086d1522sv*sd*bc*sc*i*
>alias:  pci:v8086d1521sv*sd*bc*sc*i*
>depends:dca
>supported:  yes
>vermagic:   2.6.32.45-0.3-default SMP mod_unload modversions
>parm:   entropy:Allow igb to populate the /dev/random entropy
>pool (int)
>parm:   max_vfs:Maximum number of virtual functions to allocate
>per physical function (uint)
>linux-ming:~ #
>
>My workmate ran the same test with an Intel 82576, and the loopback test
>passes. Can you help me explain why?
>
>Thanks
>Zhaobing
>
>
>--
>Zhao Bing
>Huawei Technologies Co., Ltd.
>Tel: 13715112045  0755-89651961

Hello, 

I believe we may have a problem with the loopback test on that part, which would 
explain why it passes on the 82576.  I will need to investigate further.  
In the meantime, have you tried our latest out-of-tree driver from the 
SourceForge site to see whether the problem occurs there as well?  A full 
lspci -vv printout and a full dmesg log from the failing test would also be 
helpful.  Finally, if you create a bug at the SourceForge site, it will make it 
easier to store and track data related to this problem.  

Thanks,

Carolyn

Carolyn Wyborny
Linux Development
LAN Access Division
Intel Corporation





Re: [E1000-devel] e1000e: does it do unicast filtering?

2011-11-02 Thread Allan, Bruce W
Yes.  At the time Jiri added the IFF_UNICAST_FLT flag to many drivers, the
e1000e driver did not have the code to configure unicast addresses; it was
somehow missed when the other drivers were similarly updated.  We currently
have a patch in test that adds this, and it will be pushed upstream after the
merge window closes.
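
As a minimal illustration of the kind of unicast-address programming involved
(this is not the e1000e patch referred to above; hw_write_rar() and the
rar_entries bookkeeping are hypothetical placeholders for the device-specific
filter-table write):

#include <linux/errno.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* Hypothetical stand-in for the device-specific receive-address-register
 * (MAC filter table) write; a real driver programs its own registers here. */
static void hw_write_rar(int index, const u8 *addr)
{
}

/* Called from the driver's ndo_set_rx_mode handler: program each secondary
 * unicast address into a free hardware filter slot.  Returns a negative errno
 * when the table is too small, in which case the caller would fall back to
 * unicast promiscuous mode. */
static int example_write_uc_addr_list(struct net_device *netdev, int rar_entries)
{
	struct netdev_hw_addr *ha;
	int slot = 1;	/* slot 0 normally holds the port's own MAC address */

	if (netdev_uc_count(netdev) > rar_entries - 1)
		return -ENOMEM;

	netdev_for_each_uc_addr(ha, netdev)
		hw_write_rar(slot++, ha->addr);

	return slot - 1;
}

A driver that supports this advertises it by setting IFF_UNICAST_FLT in
netdev->priv_flags at probe time, so the network stack hands it individual
unicast addresses instead of forcing the interface into promiscuous mode.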

Thanks,
Bruce.

>-Original Message-
>From: Stephen Hemminger [mailto:shemmin...@vyatta.com]
>Sent: Wednesday, November 02, 2011 3:42 PM
>To: Kirsher, Jeffrey T
>Cc: e1000-devel@lists.sourceforge.net
>Subject: [E1000-devel] e1000e: does it do unicast filtering?
>
>Looks like most of the other Intel drivers set the UNICAST filter
>flag. Why not e1000e? Looks like the same setup code from e1000
>would work on e1000e?
>
>



[E1000-devel] e1000e: does it do unicast filtering?

2011-11-02 Thread Stephen Hemminger
Looks like most of the other Intel drivers set the UNICAST filter
flag. Why not e1000e? Looks like the same setup code from e1000
would work on e1000e?




Re: [E1000-devel] igb driver throughput on Intel E31320 vs Intel X3450

2011-11-02 Thread Stephen Hemminger
On Fri, 29 Apr 2011 01:18:05 -0400
Ed Ravin  wrote:

> I'm comparing the performance of the Vyatta 6.2 distribution
> (Linux 2.6.35-1, 32-bit) on an Intel E31320 box (Supermicro X9SCL)
> and on an older Supermicro motherboard with a X3450 CPU.
> 
> Both boxes have a 2-port 82576 card installed ("Intel Corporation 82576
> Gigabit Network Connection (rev 01)" according to lspci -v).
> 
> One test I use is to have mz blast short UDP packets with randomized
> source addresses to the box under test, which receives them on
> one 82576 port and then forwards them out the other port.
> There are multiple destination IP addresses (a small subnet)
> and the traffic balances across the multiple queues as expected.
> 
> The igb driver is version 2.1.0-k2, which defaults on this platform
> to 8 combined RX/TX queues, and I see all the CPUs getting loaded
> down more or less equivalently during the test.  When I run the
> test past the capacity of the computer, all CPUs would show as close
> to 100% in softirq according to mpstat.

Vyatta puts a number of iptables rules in place (for use by firewall
commands). If you don't have any need for them it is possible to remove
all the chains and get a modest performance increase.



Re: [E1000-devel] BUG in ixgbe_clean_rx_irq()?

2011-11-02 Thread Tantilov, Emil S
>-Original Message-
>From: Benzi Weissman [mailto:benz...@gmail.com]
>Sent: Wednesday, November 02, 2011 1:45 PM
>To: Tantilov, Emil S
>Subject: RE: [E1000-devel] BUG in ixgbe_clean_rx_irq()?
>
>Also, the generic packet receive handling that you pointed to in the patch was
>introduced in 2.6.39, no? And why is it related to GRO?
>
>On Nov 2, 2011 10:39 PM, "Benzi Weissman"  wrote:
>
>
>   It is just my code review. I just can't see how it will work on 2.6.29
>up to 2.6.38, since nothing will update last_rx and the bonding ARP monitor
>code will fail. Or maybe I am missing something?

Starting with 2.6.29 updating of last_rx was moved to the bonding code:

http://git.kernel.org/?p=linux/kernel/git/davem/net-next.git;a=commitdiff;h=6cf3f41e6c08bca6641a695449791c38a25f35ff

Thanks,
Emil




Re: [E1000-devel] BUG in ixgbe_clean_rx_irq()?

2011-11-02 Thread Tantilov, Emil S
>-Original Message-
>From: Benzi Weissman [mailto:benz...@gmail.com]
>Sent: Wednesday, November 02, 2011 1:52 AM
>To: e1000-devel@lists.sourceforge.net
>Subject: [E1000-devel] BUG in ixgbe_clean_rx_irq()?
>
>Hi,
>
>In ixgbe, last_rx is updated only if NETIF_F_GRO is not defined, presumably on
>the assumption that last_rx is updated by GRO-related kernel code.
>However, this is not true. last_rx is updated in the Linux kernel by the new
>generic packet receive handler mechanism (bond_handle_frame), which is not
>related to GRO and was even introduced a few kernel versions later than
>GRO... which means your latest ixgbe won't work properly with bonding on
>kernel versions 2.6.29-2.6.38.

NETIF_F_GRO was first defined in kernel 2.6.29; prior to that, the ixgbe 
driver included in the kernel updated last_rx. This is the patch (in 2.6.29) 
that removed the updating of last_rx in all/most drivers, including ixgbe:

http://git.kernel.org/?p=linux/kernel/git/davem/net-next.git;a=commitdiff;h=babcda74e9d96bb58fd9c6c5112dbdbff169e695
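
In rough terms, that change amounts to the following sketch (illustrative only,
not verbatim kernel code; both function names are made up):

#include <linux/jiffies.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* pre-2.6.29 style: every driver timestamped last_rx in its own receive
 * path; these are the per-driver stores that the commit above removed */
static void old_style_driver_rx(struct net_device *dev, struct sk_buff *skb)
{
	dev->last_rx = jiffies;
	netif_receive_skb(skb);
}

/* 2.6.29 and later: only the bonding receive logic touches last_rx on the
 * slave device, since the bonding ARP monitor is the field's only consumer */
static void bonding_style_recv(struct net_device *slave_dev)
{
	slave_dev->last_rx = jiffies;
}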

From what I can tell, the check for NETIF_F_GRO in ixgbe is in line with the 
kernel changes. Or am I missing something?

Is the bug report based on code review, or an actual problem you are seeing 
with the driver?

Thanks,
Emil

>
>--
>Benzi.



Re: [E1000-devel] igb/i350 VF/PF mailbox locking race leading to VF link staying down

2011-11-02 Thread Rose, Gregory V
James,

I'm sorry to hear you're having problems with this and I really appreciate the 
extensive and detailed investigation you've done on the matter.

I'll do some further investigation on my side and we will review and test your 
patch.  Given that no one here at Intel has seen the problem it might be that 
the best we can do is incorporate the patch and then make sure that it doesn't 
break anything else.  If that is the case and the patch fixes your issue then I 
can't see any reason for us to not accept the patch.

Thanks,

- Greg

> -Original Message-
> From: James Bulpin [mailto:james.bul...@eu.citrix.com]
> Sent: Wednesday, November 02, 2011 11:31 AM
> To: e1000-devel@lists.sourceforge.net
> Subject: [E1000-devel] igb/i350 VF/PF mailbox locking race leading to VF
> link staying down
> 
> Hello,
> 
> Summary:
>  1. An observation/theory of a race condition in VF/PF mailbox protocol
> leading to VF showing link down
>  2. Belief that the race is exploited due to the VF not waiting on PF
> replies for some messages
>  3. Patch for the above
> 
> I've been trying to get i350 SR-IOV VFs working in my Xen environment
> (XenServer 6.0 [Xen 4.1], igb 3.2.10, HVM CentOS 5.x guest with igbvf
> 1.1.5, Dell T3500). We've had a lot of experience and success with SR-IOV
> with both the 82599 and 82576 ET2; this work was to extend our support to
> include the i350. The problem I observed was that around 9 times out of 10
> the VF link was showing as down, both after the initial module load and
> after an "ifconfig up". On the occasions the link did come up the VF worked
> perfectly. Adding a few printks showed the link was being declared down
> due to the VF/PF mailbox timeout having been hit (setting mbx->timeout to
> zero). Further instrumentation suggests that the timeout occurred when the
> VF was waiting for an ACK from the PF that never came, due to the original
> VF to PF message not being received by the PF. Based on limited logging
> the following sequence is what I believe happened:
> 
>  1. VF sends message (e.g. 0x00010003) to PF (get VFU lock; write buffer;
> release VFU lock/signal REQ), VF waits for ACK
>  2. PF igb_msg_task handler checks the VFICR and finds REQ set
>  3. PF igb_msg_task handler calls igb_rcv_msg_from_vf which reads message
> and ACKs (get PFU lock; read buffer; release PFU lock/signal ACK)
>  4. PF deals with message, e.g. calling igb_set_vf_multicasts
>  5. VF receives ACK
>  6. VF moves on to send next message (e.g. 0x0006) to PF (get VFU
> lock; write buffer; release VFU lock/signal REQ), VF waits for ACK
>  7. PF igb_rcv_msg_from_vf sends reply message (orig msg |
> E1000_VT_MSGTYPE_ACK|E1000_VT_MSGTYPE_CTS) from PF to VF (get PFU lock;
> clear REQ/ACK bits in VFICR ***clearing the REQ flag just set by VF***;
> write buffer ***overwriting VF's pending message***; release PFU
> lock/signal STS)
>  8. PF igb_msg_task handler runs but REQ flag is zero so message not
> handled
>  9. VF times out waiting for ACK to the second message
> 
> From inspecting the code in both drivers my understanding is that the
> VFU/PFU locking mechanism is only being used to protect the message buffer
> while it is being read or written, it is not protecting against the buffer
> being re-used by either driver before the existing message has been
> handled (the lock is released when setting REQ/STS, not on receiving the
> ACK as the protocol description in the 82576 datasheet suggests). Adding a
> 5000usec sleep after each message send from the VF makes the original
> link-down failure go away giving some confidence to the race condition
> theory.
> 
> I believe that the race should not be a problem if the VF and PF are
> exchanging messages in the expected order however in the above case the VF
> sent a E1000_VF_SET_MULTICAST message but did not wait for the reply
> message from the PF. Reviewing the driver code shows that of the five
> messages the PF igb_rcv_msg_from_vf would reply to the VF driver does not
> wait for replies for three (E1000_VF_SET_MULTICAST, E1000_VF_SET_LPE and
> E1000_VF_SET_VLAN). Patching the VF driver (see below) to perform dummy
> reads after sending each of these three messages makes the original link-
> down failure go away.
> 
> Have I misunderstood the locking strategy for the mailbox? As far as I can
> see nothing has changed in the newer igb and igbvf drivers that would
> explain why I'm seeing the VF link failure on the i350 but not on the
> other NICs (I don't see this with the older 82576 in the same system with
> the same drivers). I can only assume it's just very bad luck with timing
> in this particular system and configuration.
> 
> Regards,
> James Bulpin
> 
> Read (and ignore) replies to VF to PF messages currently unhandled
> 
> Signed-off-by: James Bulpin 
> 
> diff -rup igbvf-1.1.5.pristine/src/e1000_vf.c igbvf-
> 1.1.5.mboxreply/src/e1000_vf.c
> --- igbvf-1.1.5.pristine/src/e1000_vf.c 2011-08-16 18:38:11.0
> +0100
> +++ igbvf-1.1.5.m

[E1000-devel] igb/i350 VF/PF mailbox locking race leading to VF link staying down

2011-11-02 Thread James Bulpin
Hello,

Summary:
 1. An observation/theory of a race condition in VF/PF mailbox protocol leading 
to VF showing link down
 2. Belief that the race is exploited due to the VF not waiting on PF replies 
for some messages
 3. Patch for the above

I've been trying to get i350 SR-IOV VFs working in my Xen environment 
(XenServer 6.0 [Xen 4.1], igb 3.2.10, HVM CentOS 5.x guest with igbvf 1.1.5, 
Dell T3500). We've had a lot of experience and success with SR-IOV with both 
the 82599 and 82576 ET2; this work was to extend our support to include the 
i350. The problem I observed was that around 9 times out of 10 the VF link was 
showing as down, both after the initial module load and after an "ifconfig up". 
On the occasions the link did come up the VF worked perfectly. Adding a few 
printks showed the link was being declared down due to the VF/PF mailbox 
timeout having been hit (setting mbx->timeout to zero). Further instrumentation 
suggests that the timeout occurred when the VF was waiting for an ACK from the 
PF that never came, due to the original VF to PF message not being received by 
the PF. Based on limited logging the following sequence is what I believe 
happened:

 1. VF sends message (e.g. 0x00010003) to PF (get VFU lock; write buffer; 
release VFU lock/signal REQ), VF waits for ACK
 2. PF igb_msg_task handler checks the VFICR and finds REQ set
 3. PF igb_msg_task handler calls igb_rcv_msg_from_vf which reads message and 
ACKs (get PFU lock; read buffer; release PFU lock/signal ACK)
 4. PF deals with message, e.g. calling igb_set_vf_multicasts
 5. VF receives ACK
 6. VF moves on to send next message (e.g. 0x0006) to PF (get VFU lock; 
write buffer; release VFU lock/signal REQ), VF waits for ACK
 7. PF igb_rcv_msg_from_vf sends reply message (orig msg | 
E1000_VT_MSGTYPE_ACK|E1000_VT_MSGTYPE_CTS) from PF to VF (get PFU lock; clear 
REQ/ACK bits in VFICR ***clearing the REQ flag just set by VF***; write buffer 
***overwriting VF's pending message***; release PFU lock/signal STS)
 8. PF igb_msg_task handler runs but REQ flag is zero so message not handled
 9. VF times out waiting for ACK to the second message

From inspecting the code in both drivers, my understanding is that the VFU/PFU 
locking mechanism is only being used to protect the message buffer while it is 
being read or written; it is not protecting against the buffer being re-used 
by either driver before the existing message has been handled (the lock is 
released when setting REQ/STS, not on receiving the ACK, as the protocol 
description in the 82576 datasheet suggests). Adding a 5000 usec sleep after 
each message send from the VF makes the original link-down failure go away, 
giving some confidence to the race condition theory.

I believe that the race should not be a problem if the VF and PF are exchanging 
messages in the expected order however in the above case the VF sent a 
E1000_VF_SET_MULTICAST message but did not wait for the reply message from the 
PF. Reviewing the driver code shows that of the five messages the PF 
igb_rcv_msg_from_vf would reply to the VF driver does not wait for replies for 
three (E1000_VF_SET_MULTICAST, E1000_VF_SET_LPE and E1000_VF_SET_VLAN). 
Patching the VF driver (see below) to perform dummy reads after sending each of 
these three messages makes the original link-down failure go away.

Have I misunderstood the locking strategy for the mailbox? As far as I can see 
nothing has changed in the newer igb and igbvf drivers that would explain why 
I'm seeing the VF link failure on the i350 but not on the other NICs (I don't 
see this with the older 82576 in the same system with the same drivers). I can 
only assume it's just very bad luck with timing in this particular system and 
configuration.

Regards,
James Bulpin

Read (and ignore) replies to VF to PF messages currently unhandled

Signed-off-by: James Bulpin 

diff -rup igbvf-1.1.5.pristine/src/e1000_vf.c 
igbvf-1.1.5.mboxreply/src/e1000_vf.c
--- igbvf-1.1.5.pristine/src/e1000_vf.c 2011-08-16 18:38:11.0 +0100
+++ igbvf-1.1.5.mboxreply/src/e1000_vf.c2011-11-02 12:54:13.892369000 
+
@@ -269,6 +269,7 @@ void e1000_update_mc_addr_list_vf(struct
u16 *hash_list = (u16 *)&msgbuf[1];
u32 hash_value;
u32 i;
+   s32 ret_val;

DEBUGFUNC("e1000_update_mc_addr_list_vf");

@@ -298,7 +299,10 @@ void e1000_update_mc_addr_list_vf(struct
mc_addr_list += ETH_ADDR_LEN;
}

-   mbx->ops.write_posted(hw, msgbuf, E1000_VFMAILBOX_SIZE, 0);
+   ret_val = mbx->ops.write_posted(hw, msgbuf, E1000_VFMAILBOX_SIZE, 0);
+   if (!ret_val)
+   mbx->ops.read_posted(hw, msgbuf, E1000_VFMAILBOX_SIZE, 0);
+
 }

 /**
@@ -311,6 +315,7 @@ void e1000_vfta_set_vf(struct e1000_hw *
 {
struct e1000_mbx_info *mbx = &hw->mbx;
u32 msgbuf[2];
+   s32 ret_val;

msgbuf[0] = E1000_VF_SET_VLAN;
msgbuf[1] = vid;
@@ -318,7 +323,9 @@ void e1000_vfta_set_vf(struct 
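
The diff above is cut off by the archive. Purely as an illustration of the
pattern it applies (not a reconstruction of the missing hunks), the idea in
each affected send path is sketched below; the helper name is made up, the
visible hunks open-code the same two calls directly in
e1000_update_mc_addr_list_vf and e1000_vfta_set_vf, and the igbvf driver's own
e1000_hw/e1000_mbx_info definitions are assumed to be in scope:

/* Post the request as before, then immediately post a read and throw away
 * the PF's reply so the mailbox is idle before the VF's next message. */
static void e1000_vf_write_and_drain(struct e1000_hw *hw, u32 *msgbuf, u16 size)
{
	struct e1000_mbx_info *mbx = &hw->mbx;
	s32 ret_val;

	ret_val = mbx->ops.write_posted(hw, msgbuf, size, 0);
	if (!ret_val)
		mbx->ops.read_posted(hw, msgbuf, size, 0);	/* ignore reply */
}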



Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

2011-11-02 Thread Ronciak, John
I'll ask the Lanconf team about the numbers being half for the 82580 device.  
I'll also have the docs team look at the datasheet values you point to for 
RNBC.  It's not being explained correctly.

Cheers,
John

From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Wednesday, November 02, 2011 4:51 AM
To: Ronciak, John
Cc: ; e1000-de...@lists.sf.net
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hi John,

See my comments in-line below.

Thanks

Alex


Ronciak, John wrote:
I'll pass this along to the Lanconf team but I have some comments in-line below.

Cheers,
John

From: Alexandre Desnoyers [mailto:a...@qtec.com]
Sent: Tuesday, November 01, 2011 2:24 PM
To: Ronciak, John
Cc: ; 
e1000-de...@lists.sf.net
Subject: Re: [E1000-devel] Intel GigE NIC - Missing lots of packets

Hi John,

Regarding Lanconf, I have four issues/questions:

1) I want to point them to the bandwidth calculation/display for the 82580.  
For example, the 82580 reports ~1020 Mbps while the partner (LOM) reports 
~2046 Mbps.  It seems that all the bandwidth values are half of what they 
should be on the 82580.
It's reporting both send and receive numbers aggregated.  This is a 1 gigabit 
(1000Mbit) part so the only way above that would be an aggregated number.

Sorry, I may not have explained the problem properly...  That value was only an 
example. Here are the details:

On the 82580, running Lanconf:
[Transmit and Receive Bandwidth]
  Transmit   0511 Mbps   <--- Why??
  Receive    0511 Mbps   <--- Why??
  Total      1022 Mbps   <--- Total of the RX and TX, ok

On the LOM, running Lanconf:
[Transmit and Receive Bandwidth]
  Transmit   1022 Mbps
  Receive    1024 Mbps
  Total      2046 Mbps

Those two systems are linked together.  So why is the 82580 reporting half of 
the TX and RX bandwidth compared to the LOM?
BTW, doing the same test between the T60p and the LOM, both computers display 
the same values (1022 Mbps TX, 1024 Mbps RX, and 2046 Mbps total).



2) Is there a way to force a NIC to output a 100 Mbps IDLE pattern, even when 
there is no link partner?
According to Intel's PHY test compliance document, one required test equipment 
is:
"Second PC with Ethernet network interface card (NIC) that can be forced to 
transmit 100BASE-TX scrambled idle signals"
No link is needed to output the signal. Lanconf is not intended for this type 
of use.  More expensive Ethernet test equipment would be needed for this.


OK, thanks




How can I use Lanconf to provide such a pattern?  Do all Intel NICs support this?
A search in the Lanconf User Manual PDF for the keyword "idle" does not yield 
any results.


3) Can I expect zero packets dropped or missed (MPC/RNBC) with the following 
setup?
Two desktop computers, Intel Pentium 4 ~1.6 GHz+ (I don't have access to real 
server-class PCs)
Two "server"-class GigE NICs, connected to the PCIe x16 connector on each 
motherboard
~2 meter Cat5E cable between the PCs
No other PCIe cards in the PCs
RNBC is not a dropped-packet count.  This number only indicates to the user that 
a packet came close to being dropped.  The packets that show up in this count 
are actually received and processed by the stack, so please don't include 
them in error counts.  They really mean that your system isn't processing the 
packets fast enough for some reason.

OK, thanks for the clarification.

I was confused by the following from the datasheet:

Table 7-68. IEEE 802.3 Recommended Package Statistics
FramesLostDueToIntMACRcvError   82580 counter: RNBC

Table 7-71. RMON Statistics
etherStatsDropEvents   82580 counters: MPC + RNBC

Both of those statistic names make it sound as though packets counted by RNBC 
are actually dropped.




You should be able to get to this, but it all depends on what kind of traffic 
you are sending.  64-byte frames are harder to process than full-sized frames, 
and it's also harder to reach full wire bandwidth with small frames.  I'm 
guessing that if you rerun the tests using full-sized frames, your drops would 
all go away, even with your current setup.


---> My idea is to obtain a baseline with zero errors, but maybe that's not 
possible due to clock tolerance or other factors?
---> What operating system do they recommend for such a test?
Linux is fine, just use a really new kernel, maybe something like Ubuntu 11.04 
or Fedora 15.  You said you are using Debian 3.0?  Really?  That release is 
from 2002, so that can't be right; support for the 82580 device wasn't added 
until much later.  Please run 'uname -a' as root on the system, which will 
tell you what kernel you are running.


We're not using Debian 3.0; we're using one of the latest Debian releases 
(wheezy/sid).  The kernel version is 3.0.0-1-amd64.





4) Which OS is normally used to run Lanconf at Intel? EFI, DOS, Linux, 
Windows?
As I said, we don't use it for much, and definitely not for the kind of 
testing you are trying to do.


Ok




Thanks


Alex



Ronciak, John

[E1000-devel] BUG in ixgbe_clean_rx_irq()?

2011-11-02 Thread Benzi Weissman
Hi,

In ixgbe, last_rx is updated only if NETIF_F_GRO is not defined, presumably on
the assumption that last_rx is updated by GRO-related kernel code.
However, this is not true. last_rx is updated in the Linux kernel by the new
generic packet receive handler mechanism (bond_handle_frame), which is not
related to GRO and was even introduced a few kernel versions later than
GRO... which means your latest ixgbe won't work properly with bonding on
kernel versions 2.6.29-2.6.38.

-- 
Benzi.