I'd suggest running iperf with various settings to stress-test the NIC and find 
the breaking point.
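A sweep over parallel streams and TCP window sizes is one common way to do that; a rough sketch (the receiver hostname, stream counts, and window sizes below are placeholders, not anything from this thread):

```shell
# On a second host (placeholder name rx-host), start an iperf server:
#   iperf -s
# Then, from the host with the suspect NIC, sweep parallel stream counts (-P)
# and TCP window sizes (-w), 60 seconds per run, watching dmesg for
# "Out of MCCQ wrbs" errors while each run is in flight.
for p in 1 4 8 16; do
    for w in 64K 256K 1M; do
        iperf -c rx-host -P "$p" -w "$w" -t 60
    done
done
```

Running the sweep against each bonded slave individually, and then against the bond, may help show whether the driver or the bonding layer is the trigger.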

If you are able to replicate this issue, take a kdump and let the Red Hat 
developers analyze it.  I've had tons of issues with be2net drivers in the past 
and opened several BZs; on RHEL 5.8 they finally seemed to be addressed.
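For what it's worth, capturing that kdump on RHEL 5 roughly means reserving crash-kernel memory, enabling the kdump service, and forcing a panic once the hang reproduces; a sketch, where the crashkernel reservation value is only an example and should be sized for the machine:

```shell
# 1. Reserve memory for the crash kernel: append to the kernel line in
#    /boot/grub/grub.conf (value below is an example), then reboot:
#      crashkernel=128M@16M
# 2. Enable the kdump service so it arms on boot:
chkconfig kdump on
service kdump start
# 3. When the host wedges, force a panic so kdump captures a vmcore:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
# The vmcore ends up under /var/crash/ for Red Hat support to analyze.
```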



From: rhelv5-list-boun...@redhat.com [mailto:rhelv5-list-boun...@redhat.com] On 
Behalf Of Colin Coe
Sent: Tuesday, May 15, 2012 11:18 PM
To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list; Srija
Subject: Re: [rhelv5-list] Facing issue with be2net


Good luck with HP support.

Is bonding at the server level the right way?  What are you using as your 
interconnect module?  I had much more success when I let the interconnect 
module (Flex-10) handle the redundancy.

CC
On May 16, 2012 12:40 AM, "Srija" 
<swap_proj...@yahoo.com<mailto:swap_proj...@yahoo.com>> wrote:
Hello All,

Our server is an HP ProLiant 620C G7, running the RHEL 5.8 x86_64 Xen kernel.
We are using four ports; two ports are bonded with VLAN tagging so that we 
can build several guests on different VLANs.
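This is not the poster's actual configuration, but a typical RHEL 5 bond-plus-VLAN setup along those lines looks roughly like the following (the bonding mode, VLAN ID, and bridge name are all placeholders):

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
# (mode and miimon values are examples, not from this thread)
#   alias bond0 bonding
#   options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0.100
# VLAN 100 tagged on top of the bond, attached to a Xen bridge
# (VLAN ID and bridge name are placeholders):
#   DEVICE=bond0.100
#   VLAN=yes
#   ONBOOT=yes
#   BOOTPROTO=none
#   BRIDGE=xenbr100
```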

We are having an issue with the be2net driver, getting errors such as:

be2net 0000:04:00.1: Out of MCCQ wrbs
be2net 0000:04:00.1: Out of MCCQ wrbs
be2net 0000:04:00.1: Out of MCCQ wrbs

If we move guests onto this host, the guests become unresponsive after running 
for a few minutes, and the host does too.
We upgraded the firmware; it is now as follows:

version: 4.0.100r
firmware-version: 4.0.493.0
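(Those two lines look like `ethtool -i` output; for anyone wanting to compare their own versions, they can be read like this, where the interface name is a placeholder:)

```shell
# Driver name, driver version, and firmware-version for a given interface
# (eth0 is a placeholder for whichever port the be2net NIC is):
ethtool -i eth0
# The version the be2net module itself reports:
modinfo be2net | grep ^version
```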

HP has replaced the NIC card too, but the problem was not resolved. We are 
also working with HP support.

In the meantime, if someone can advise, it will be really helpful.

Thanks in advance





_______________________________________________
rhelv5-list mailing list
rhelv5-list@redhat.com<mailto:rhelv5-list@redhat.com>
https://www.redhat.com/mailman/listinfo/rhelv5-list
