Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Jes Sorensen

> "Jeff" == Jeff V Merkey <[EMAIL PROTECTED]> writes:

Jeff> He said memory to memory transfers.

I also said data acquisition servers to data processing clients.

Jes
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Jeff V. Merkey


He said memory to memory transfers.  

Jeff

Alan Cox wrote:
> 
> > > I'd love to see a netware box sustain 110MB/sec (MB as in mega byte)
> > > memory to memory in two TCP streams between dual 400MHz P2 boxes.
> >
> > What the hell does a NUMA interconnect have to do with networking.  Who
> > would be braindead enough to waste processing cycles passing Network
> > data over a NUMA fabric anyway.  There are a lot more efficient ways to
> 
> I have bad news for you Jeff. That's not a NUMA fabric. That's GigE. That's what
> people plug larger than office sized servers into nowadays. Current NUMA fabrics
> are a good factor of 10 faster still.
> 
> Alan



Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Alan Cox

> > The equivalent to the netware specialist fast paths for file serving is Tux,
> > and Tux currently holds a world record. I'd love to see Manos beat Linux +
> > Tux at specweb. That would vindicate your arguments
> 
> Who wrote Tux?  USL while at Novell.  Enough said.

No, Ingo Molnar in Hungary with other open source community folks. I think you
have the wrong Tux.

Alan




Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Jes Sorensen

> "Jeff" == Jeff V Merkey <[EMAIL PROTECTED]> writes:

Jeff> Jes Sorensen wrote:
>>  I'd love to see a netware box sustain 110MB/sec (MB as in mega
>> byte) memory to memory in two TCP streams between dual 400MHz P2
>> boxes.

Jeff> What the hell does a NUMA interconnect have to do with
Jeff> networking.  Who would be braindead enough to waste processing
Jeff> cycles passing Network data over a NUMA fabric anyway.  There
Jeff> are a lot more efficient ways to connect two boxes with NUMA than
Jeff> using a TCPIP stack on a NUMA interconnect.

NUMA? What's NUMA got to do with this? I am talking Gigabit Ethernet
with jumbo frames on standard commodity hardware between two standard
440BX based PCs.

Jes



Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Alan Cox

> > I'd love to see a netware box sustain 110MB/sec (MB as in mega byte)
> > memory to memory in two TCP streams between dual 400MHz P2 boxes.
> 
> What the hell does a NUMA interconnect have to do with networking.  Who
> would be braindead enough to waste processing cycles passing Network
> data over a NUMA fabric anyway.  There are a lot more efficient ways to

I have bad news for you Jeff. That's not a NUMA fabric. That's GigE. That's what
people plug larger than office sized servers into nowadays. Current NUMA fabrics
are a good factor of 10 faster still.

Alan




Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Jeff V. Merkey



Alan Cox wrote:
> 
> > file system operation.  What I wrote is THREE TIMES FASTER THAN WHAT'S
> > IN LINUX.  Care to do a challenge.  Let's take my NetWare code and see
> > which is faster and lower latency on a Network.  Mine or Linux's.  I bet
> > you $100.00 it will beat the Linux code in every test.
> 
> At what. IPX - sure. How about at telnet serving.
> 
> The equivalent to the netware specialist fast paths for file serving is Tux,
> and Tux currently holds a world record. I'd love to see Manos beat Linux +
> Tux at specweb. That would vindicate your arguments

Who wrote Tux?  USL while at Novell.  Enough said.

Jeff



Re: [Fwd: zero-copy TCP]

2000-09-02 Thread Alan Cox

> file system operation.  What I wrote is THREE TIMES FASTER THAN WHAT'S
> IN LINUX.  Care to do a challenge.  Let's take my NetWare code and see
> which is faster and lower latency on a Network.  Mine or Linux's.  I bet
> you $100.00 it will beat the Linux code in every test.

At what. IPX - sure. How about at telnet serving.

The equivalent to the netware specialist fast paths for file serving is Tux,
and Tux currently holds a world record. I'd love to see Manos beat Linux +
Tux at specweb. That would vindicate your arguments





[Fwd: zero-copy TCP]

2000-09-02 Thread Jeff V. Merkey

Jes Sorensen wrote:
> 
> > "Jeff" == Jeff V Merkey <[EMAIL PROTECTED]> writes:
> 
> Jeff> Jes,
> 
> Jeff> I wrote the SMP ODI networking layer in NetWare that is used today
> Jeff> by over 90,000,000 NetWare users.  I also wrote the SMP LLC8022
> Jeff> Stack, the SMP IPX/SPX Stack, and the SMP OSPF TCPIP stack in
> Jeff> NetWare.  I think I know what the hell I'm doing here.  Most
> Jeff> Network protocols assume a primary/secondary relationship.  The
> Jeff> faster you can get requests in and out of a server, the faster the
> Jeff> response time on the client for remote file system operation.
> 
> You look at network file system issues and generalize that to generic
> networking. I am sorry but I do not think you know a whole lot about
> high speed networking. You forget that when talking about fast
> networking it depends on what you define as fast. Some people consider
> file serving performance, others are interested in fast memory to memory
> transfers (from data acquisition servers to client processing units for
> instance). For bulk data transfers on high speed networks (note I do not
> consider 100Mbit/sec Fast Ethernet as a fast network) the real issue is
> pipelining through socket buffers and large TCP windows and not latency.
> 
> Besides, the fact that there are 90M netware boxes around doesn't matter
> when most of them are running IPX - IPX is braindamage and has nothing
> to do with proper networking.
> 
> Jeff> What I wrote is THREE TIMES FASTER THAN WHAT'S IN LINUX.  Care to
> Jeff> do a challenge.  Let's take my NetWare code and see which is
> Jeff> faster and lower latency on a Network.  Mine or Linux's.  I bet
> Jeff> you $100.00 it will beat the Linux code in every test.
> 
> I'd love to see a netware box sustain 110MB/sec (MB as in mega byte)
> memory to memory in two TCP streams between dual 400MHz P2 boxes.

What the hell does a NUMA interconnect have to do with networking.  Who
would be braindead enough to waste processing cycles passing Network
data over a NUMA fabric anyway.  There are a lot more efficient ways to
connect two boxes with NUMA than using a TCPIP stack on a NUMA
interconnect.

Jeff

> 
> Jes




[Fwd: zero-copy TCP]

2000-09-02 Thread Jeff V. Merkey

Jes,

I wrote the SMP ODI networking layer in NetWare that is used today by over
90,000,000 NetWare users.  I also wrote the SMP LLC8022 Stack, the SMP
IPX/SPX Stack, and the SMP OSPF TCPIP stack in NetWare.  I think I know
what the hell I'm doing here.  Most Network protocols assume a
primary/secondary relationship.  The faster you can get requests in and
out of a server, the faster the response time on the client for remote
file system operation.  What I wrote is THREE TIMES FASTER THAN WHAT'S
IN LINUX.  Care to do a challenge.  Let's take my NetWare code and see
which is faster and lower latency on a Network.  Mine or Linux's.  I bet
you $100.00 it will beat the Linux code in every test.

Jeff

Jes Sorensen wrote:
> 
> > "Jeff" == Jeff V Merkey <[EMAIL PROTECTED]> writes:
> 
> Jeff, could you start by learning to quote email and not send a full
> copy of the entire email you reply to (read RFC 1855).
> 
> Jeff> The entire Linux Network subsystem needs an overhaul.  The code
> Jeff> copies data all over the place. I am at present pulling it apart
> Jeff> and porting it to MANOS, and what a mess indeed. In NetWare, the
> Jeff> only time data ever gets copied from incoming packets is:
> 
> Try and understand the code before you make such bold statements.
> 
> Jeff> 1.  A copy to userspace at a stream head.  2.  An incoming write
> Jeff> that gets copied into the file cache.
> 
> Jeff> Reads from cache are never copied.  In fact, the network server
> Jeff> locks a file cache page and sends it unaltered to the network
> Jeff> drivers and DMA's directly from it.  Since NetWare has WTD's
> Jeff> these I/O requests get processed at the highest possible
> Jeff> priority.  In networking, the enemy is LATENCY for fast
> Jeff> performance.  That's why NetWare can handle 5000 users and Linux
> Jeff> barfs on 100 in similiar tests.  Copying increases latency, and
> Jeff> the long code paths in the Linux Network layer.
> 
> You can't DMA directly from a file cache page unless you have a
> network card that does scatter/gather DMA and surprise surprise,
> 80-90% of the cards on the market don't support this. Besides that you
> need to do copy-on-write if you want to be able to do zero copy on
> write() from user space, marking data copy on write is *expensive* on
> x86 SMP boxes since you have to modify the tlb on all
> processors. On top of that you have to look at the packet size, for
> small packets a copy is often a lot cheaper than modifying the page
> tables, even on UP systems so you need a copy/break scheme here.
> 
> As for your statement on latency, it's nice to see that you don't
> know what you are talking about. Latency is one issue in fast
> networking; it's far from the only one. Latency is important for
> message passing type applications, however for bulk data transfers it's
> less relevant since you really want deep pipelining here and properly
> written applications. If your TCP window is too small, even zero latency
> will only buy you so much on a really fast network.
> 
> Jes