Hi Bill,
> I see similar results on my test systems
Thanks for this report and for confirming our observations. Could you
please confirm that a single-port bidirectional UDP link runs at wire
speed? This helps to localize the problem to the TCP stack or interaction
of the TCP stack with the e1000 driver.
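A single-port bidirectional UDP run could look like the following nuttcp
invocation (a sketch only, modeled on Bill's TCP command further down the
thread; -u selects UDP, the address and window size are from his setup):

    nuttcp -u -Itx -w2m 192.168.5.79 & nuttcp -u -Irx -r -w2m 192.168.5.79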
Bill Fink wrote:
> a 2.6.15.4 kernel. The GigE NICs are Intel PRO/1000
> 82546EB_QUAD_COPPER,
> on a 64-bit/133-MHz PCI-X bus, using version 6.1.16-k2 of the e1000
> driver, and running with 9000-byte jumbo frames. The TCP congestion
> control is BIC.
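For anyone reproducing this, the reported configuration can be verified
with standard tools (the interface name here is a placeholder):

    ethtool -i eth2                          # driver name (e1000) and version
    ip link show eth2                        # MTU, should show mtu 9000
    sysctl net.ipv4.tcp_congestion_control   # should report bic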
Bill, FYI, there was a known issue with e1000
Hi Bruce,
On Thu, 31 Jan 2008, Bruce Allen wrote:
> > I see similar results on my test systems
>
> Thanks for this report and for confirming our observations. Could you
> please confirm that a single-port bidirectional UDP link runs at wire
> speed? This helps to localize the problem to the TCP stack or interaction
> of the TCP stack with the e1000 driver.
Hi David,
Could this be an issue with pause frames? At a previous job I remember
having issues with a similar configuration using two broadcom sb1250 3
gigE port devices. If I ran bidirectional tests on a single pair of
ports connected via crossover, it was slower than when I gave each
direction its own pair of ports.
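Pause frames are easy to rule in or out, since flow control can be
queried and toggled per interface with ethtool (interface name is a
placeholder):

    ethtool -a eth2                  # show autoneg/rx/tx pause settings
    ethtool -A eth2 rx off tx off    # disable pause frames for a test run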
Bill Fink wrote:
If the receive direction uses a different GigE NIC that's part of the
same quad-GigE, all is fine:
[EMAIL PROTECTED] ~]$ nuttcp -f-beta -Itx -w2m 192.168.6.79 & nuttcp -f-beta -Irx -r -w2m 192.168.5.79
tx: 1186.5051 MB / 10.05 sec = 990.2250 Mbps 12 %TX 13 %RX 0 retrans
rx:
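Presumably the problem case is the same pair of commands aimed at a
single interface, i.e. both directions over one port (a sketch derived
from the command above):

    nuttcp -f-beta -Itx -w2m 192.168.5.79 & nuttcp -f-beta -Irx -r -w2m 192.168.5.79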
On Wed, 30 Jan 2008, SANGTAE HA wrote:
> On Jan 30, 2008 5:25 PM, Bruce Allen <[EMAIL PROTECTED]> wrote:
> >
> > In our application (cluster computing) we use a very tightly coupled
> > high-speed low-latency network. There is no 'wide area traffic'. So it's
> > hard for me to understand why any
Hi Sangtae,
Thanks for joining this discussion -- it's good to have a CUBIC author and
expert here!
In our application (cluster computing) we use a very tightly coupled
high-speed low-latency network. There is no 'wide area traffic'. So
it's hard for me to understand why any networking components or software layers
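If the congestion control algorithm is suspected, it can be swapped
between test runs without rebuilding the kernel (sysctls available on
reasonably recent 2.6 kernels):

    sysctl net.ipv4.tcp_available_congestion_control   # list compiled-in algorithms
    sysctl -w net.ipv4.tcp_congestion_control=cubic    # e.g. switch from bic to cubic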
Hi Bruce,
On Jan 30, 2008 5:25 PM, Bruce Allen <[EMAIL PROTECTED]> wrote:
>
> In our application (cluster computing) we use a very tightly coupled
> high-speed low-latency network. There is no 'wide area traffic'. So it's
> hard for me to understand why any networking components or software layers
Hi Stephen,
Indeed, we are not asking to see 1000 Mb/s. We'd be happy to see 900
Mb/s.
Netperf is transmitting a large buffer in MTU-sized packets (min 1500
bytes). Since the acks are only about 60 bytes in size, they should be
around 4% of the total traffic. Hence we would not expect to see
throughput degraded by more than a few percent.
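Spelling out the arithmetic: one 60-byte ACK per 1500-byte frame is
60/1500 = 4% of the reverse path; with delayed ACKs (one ACK per two
frames) it is 60/3000 = 2%. Ethernet framing (preamble, headers,
inter-frame gap) already caps TCP goodput near 940-950 Mb/s at a
1500-byte MTU, so ACK traffic alone should only cost a few percent
beyond that.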
On Wed, 30 Jan 2008 16:25:12 -0600 (CST)
Bruce Allen <[EMAIL PROTECTED]> wrote:
> Hi Stephen,
>
> Thanks for your helpful reply and especially for the literature pointers.
>
> >> Indeed, we are not asking to see 1000 Mb/s. We'd be happy to see 900
> >> Mb/s.
> >>
> >> Netperf is transmitting a large
Hi Stephen,
Thanks for your helpful reply and especially for the literature pointers.
>> Indeed, we are not asking to see 1000 Mb/s. We'd be happy to see 900
>> Mb/s.
>>
>> Netperf is transmitting a large buffer in MTU-sized packets (min 1500
>> bytes). Since the acks are only about 60 bytes in size, they should be
>> around 4% of the total traffic.
On Wed, 30 Jan 2008 08:01:46 -0600 (CST)
Bruce Allen <[EMAIL PROTECTED]> wrote:
> Hi David,
>
> Thanks for your note.
>
> >> (The performance of a full duplex stream should be close to 1Gb/s in
> >> both directions.)
> >
> > This is not a reasonable expectation.
> >
> > ACKs take up space on the link in the opposite direction of the transfer.
From: Bruce Allen <[EMAIL PROTECTED]>
Date: Wed, 30 Jan 2008 07:38:56 -0600 (CST)
> Wilco. Just subscribing now.
You don't need to subscribe to any list at vger.kernel.org in order to
post a message to it.
Hi David,
Thanks for your note.
>> (The performance of a full duplex stream should be close to 1Gb/s in
>> both directions.)
> This is not a reasonable expectation.
> ACKs take up space on the link in the opposite direction of the
> transfer.
So the link usage in the opposite direction of the transfer
From: Bruce Allen <[EMAIL PROTECTED]>
Date: Wed, 30 Jan 2008 03:51:51 -0600 (CST)
[ [EMAIL PROTECTED] added to CC: list, that is where
kernel networking issues are discussed. ]
> (The performance of a full duplex stream should be close to 1Gb/s in
> both directions.)
This is not a reasonable expectation.
Hi Andi,
Thanks for the reply.
> You forgot to specify what user programs you used to get to the
> benchmark results. E.g. if the user space does not use large enough
> reads/writes then performance will not be optimal.
We used netperf (as stated in the first paragraph of the original post).
Tell
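For reference, the read/write size and socket buffer sizes can be pinned
down on the netperf command line rather than left to defaults
(illustrative values, not necessarily what we ran):

    netperf -H 192.168.5.79 -t TCP_STREAM -l 30 -- -m 65536 -s 262144 -S 262144

Here -m is the number of bytes handed to each send(), and -s/-S size the
local and remote socket buffers.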
Bruce Allen <[EMAIL PROTECTED]> writes:
> Dear LKML,
You forgot to specify what user programs you used to get to the
benchmark results. E.g. if the user space does not use large
enough reads/writes then performance will not be optimal.
Also best you repost your results with full information
on
Dear LKML,
We've connected a pair of modern high-performance boxes with integrated
copper Gb/s Intel NICS, with an ethernet crossover cable, and have run
some netperf full duplex TCP tests. The transfer rates are well below
wire speed. We're reporting this as a kernel bug, because we expect the
performance of a full duplex stream to be close to 1Gb/s in each
direction.
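The tests were of this general shape; as a sketch (placeholder address,
not necessarily the exact invocation we used), netperf's forward and
reverse stream tests can be run concurrently:

    netperf -H 192.168.5.79 -t TCP_STREAM -l 60 &    # local -> remote
    netperf -H 192.168.5.79 -t TCP_MAERTS -l 60      # remote -> local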