Welcome, glad to have helped.
Jack
On Thu, Apr 22, 2010 at 11:06 AM, Stephen Sanders wrote:
> Adding "-P 2" to the iperf client got the rate up to what it should be.
> Also, running multiple tcpreplay instances pushed the rate up as well.
>
> Thanks again for the pointers.
Adding "-P 2 " to the iperf client got the rate up to what it should
be. Also, running multiple tcpreplay's pushed the rate up as well.
Thanks again for the pointers.
On 4/22/2010 12:39 PM, Jack Vogel wrote:
> Couple more things that come to mind:
>
> make sure you increase mbuf pool, nmbclusters
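A sketch of the commands behind the fix Stephen reports (addresses, interface, capture file, and durations are illustrative):

```shell
# On the receiving host: start an iperf server
iperf -s

# On the sending host: two parallel TCP streams (-P 2), which is what
# brought the rate up; 10.0.0.1 is an illustrative server address
iperf -c 10.0.0.1 -P 2 -t 30

# Likewise, several tcpreplay instances running in parallel
# (ix0 and capture.pcap are illustrative)
tcpreplay -i ix0 --topspeed capture.pcap &
tcpreplay -i ix0 --topspeed capture.pcap &
wait
```

A single TCP stream is often sender- or receiver-CPU bound at 10G rates, which is why parallel streams or replay instances raise the aggregate.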
Couple more things that come to mind:
Make sure you increase the mbuf pool, nmbclusters, up to at least 262144; the
driver uses 4K clusters if you go to jumbo frames (nmbjumbop). Some workloads
will benefit from increasing the various sendspace
and recvspace parameters; maxsockets and maxfiles are
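A sketch of the tuning Jack describes, assuming FreeBSD 8.x tunable and sysctl names; the values shown are illustrative starting points, not recommendations:

```shell
# /boot/loader.conf -- tunables set before boot
kern.ipc.nmbclusters="262144"    # mbuf cluster pool, per the suggestion above
kern.ipc.nmbjumbop="131072"      # page-size (4K) jumbo clusters for jumbo frames
kern.ipc.maxsockets="204800"     # illustrative value
kern.maxfiles="204800"           # illustrative value

# Runtime sysctls -- larger default socket buffers for bulk TCP
sysctl net.inet.tcp.sendspace=262144
sysctl net.inet.tcp.recvspace=262144
sysctl kern.ipc.maxsockbuf=16777216
```

The loader.conf entries take effect at the next boot; the sysctl lines can be applied live or placed in /etc/sysctl.conf.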
I believe that "pciconf -lvc" showed that the cards were in the correct
slot. I'm not sure what all of the output means, but I'm guessing
that "cap 10[a0] = PCI-Express 2 endpoint max data 128(256) link
x8(x8)" means that the card is an 8-lane card and is using all 8 lanes.
Setting kern.ip
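For reference, the link line Stephen quotes can be inspected and read like this (a sketch; the device and capability offsets will differ per system):

```shell
# Show PCI capability listings for all devices, keeping the PCIe link lines
pciconf -lvc | grep -B2 "PCI-Express"
# A line such as:
#   cap 10[a0] = PCI-Express 2 endpoint max data 128(256) link x8(x8)
# reads as: negotiated payload 128 bytes (device supports 256),
# negotiated link width x8 on an x8-capable device.
# A mismatch like x4(x8) would mean the slot is limiting the card.
```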
On Thu, Apr 22, 2010 at 5:34 AM, Stephen Sanders wrote:
> According to pciconf, the card is a "82598EB 10 Gigabit AF Dual Port
> Network Connection".
>
> It looks to me like the card is plugged into an x4 PCIe slot. I'm sure
> this means we're not going to make the 10Gbps but I would imagine that
According to pciconf, the card is a "82598EB 10 Gigabit AF Dual Port
Network Connection".
It looks to me like the card is plugged into an x4 PCIe slot. I'm sure
this means we're not going to make the full 10Gbps, but I would imagine
we should get north of 5 Gbps.
Is there a URL to pick the latest
Use my new driver and it will tell you, when it comes up, what the slot
speed is, and if it's substandard it will SQUAWK loudly at you :)
I think the S5000PAL only has Gen1 PCIe slots, which is going to limit you
somewhat. I would recommend a current generation (X58 or 5520 chipset)
system if you want t
I'd be most pleased to get near 9k.
I'm running FreeBSD 8.0 amd64 on both of the test hosts. I've reset
the configurations to system defaults, as I was getting nowhere with
sysctl and loader.conf settings.
The motherboards have been configured to do MSI interrupts. The
S5000PAL has a MSI to
When you get into the 10G world your performance will only be as good
as your weakest link; what I mean is, if you connect to something that has
less-than-stellar bus and/or memory performance, it is going to throttle
everything.
Running back to back with two good systems you should be able to get
On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders wrote:
> I am running speed tests on a pair of systems equipped with Intel 10Gbps
> cards and am getting poor performance.
>
> iperf and tcpdump testing indicates that the card is running at roughly
> 2.5Gbps max transmit/receive.
>
> My attempts at
I am running speed tests on a pair of systems equipped with Intel 10Gbps
cards and am getting poor performance.
iperf and tcpdump testing indicates that the card is running at roughly
2.5Gbps max transmit/receive.
My attempts at fiddling with netisr, polling, and varying the
buffer sizes
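The knobs Stephen mentions correspond roughly to the following on FreeBSD 8.x (a sketch; the interface name is illustrative, and whether each knob helps is workload-dependent):

```shell
# netisr: dispatch inbound packets directly in the interrupt context
# rather than queueing to the netisr thread
sysctl net.isr.direct=1

# Device polling requires "options DEVICE_POLLING" in the kernel config
# and per-interface enablement (em0 is illustrative; not all drivers
# support polling)
ifconfig em0 polling

# Per-connection socket buffer sizes can also be varied for the test
sysctl net.inet.tcp.sendspace=131072
sysctl net.inet.tcp.recvspace=131072
```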