Ben LaHaise writes:
> and then use a map_mm_kiobuf (which is
> map_user_kiobuf but with an mm parameter) for the portion of the buffer
> that's currently being copied. That improves code reuse and gives us a
> few primitives that are quite useful elsewhere.
If it has roughly the same cost a
Alan Cox wrote:
>
> > The main problem is that map_user_kiobuf() locks pages into memory.
> > It's a bad idea for pipes. Either we must severely limit the maximum
>
> You only have to map them for the actual copy.
The current map_user_kiobuf() doesn't have an 'mm' parameter, so I can only
use it for the current process.
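[For context, a sketch only: in the stock 2.4 tree map_user_kiobuf() always walks current->mm, which is why it cannot be pointed at another task. The map_mm_kiobuf() prototype below is an assumption about the shape of the primitive Ben describes, not code from any posted patch.]

/* Existing 2.4 primitive (mm/memory.c); it pins pages of the calling
 * process only, because it implicitly uses current->mm: */
int map_user_kiobuf(int rw, struct kiobuf *iobuf, unsigned long va, size_t len);

/* Hypothetical variant with an explicit mm argument, as Ben suggests,
 * so the pipe code could map the other end's buffer as well: */
int map_mm_kiobuf(int rw, struct mm_struct *mm, struct kiobuf *iobuf,
                  unsigned long va, size_t len);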
> The main problem is that map_user_kiobuf() locks pages into memory.
> It's a bad idea for pipes. Either we must severely limit the maximum
You only have to map them for the actual copy.
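[A minimal sketch of that approach, assuming the stock 2.4 kiobuf calls; copy_one_chunk() and pipe_copy_to_reader() are hypothetical names, not part of Manfred's patch. The point is only that the pages stay pinned for the duration of one copy and no longer.]

#include <linux/fs.h>      /* READ/WRITE */
#include <linux/iobuf.h>   /* kiobuf API */

/* Hypothetical helper that moves the writer's data into the mapped pages. */
extern int pipe_copy_to_reader(struct kiobuf *iobuf, size_t len);

static int copy_one_chunk(unsigned long uaddr, size_t len)
{
	struct kiobuf *iobuf;
	int err;

	err = alloc_kiovec(1, &iobuf);
	if (err)
		return err;

	/* Pin the destination pages (READ: data flows into them, as with
	 * brw_kiovec), do the copy, then unpin immediately. */
	err = map_user_kiobuf(READ, iobuf, uaddr, len);
	if (!err) {
		err = pipe_copy_to_reader(iobuf, len);
		unmap_kiobuf(iobuf);
	}
	free_kiovec(1, &iobuf);
	return err;
}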
[Stuff about NetBSD pipes snipped]
I'm testing out Manfred's patch for zero copy pipes, and haven't
crashed it yet.
My hardware is an AMD K6-2 (stepping 1) on an ALi M1541 with 320 MB of RAM:
one quite slow 64 MB stick and one fast 256 MB stick.
The lmbench bw_pipe showed a performance improvement of
On Mon, 7 May 2001, Manfred Spraul wrote:
> The main problem is that map_user_kiobuf() locks pages into memory.
> It's a bad idea for pipes. Either we must severely limit the maximum
> amount of data in the direct-copy buffers, or we must add a swap file
> based backing store. If I understand the
From: "Ben LaHaise" <[EMAIL PROTECTED]>
>
> Any particular reason for not using davem's single copy kiobuf based
> code?
>
The main problem is that map_user_kiobuf() locks pages into memory.
It's a bad idea for pipes. Either we must severely limit the maximum
amount of data in the direct-copy buffers, or we must add a swap file
based backing store.
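[To make the first option concrete, a sketch of what "severely limit" could mean: cap how many pages one pipe may keep pinned and fall back to the ordinary copying path beyond that. The constant, the struct and the function are made-up names for illustration, not from any posted patch.]

#include <asm/page.h>   /* PAGE_SIZE, PAGE_SHIFT */

/* Assumed cap: at most 16 pages of a reader's buffer pinned per pipe. */
#define PIPE_DIRECT_MAX_PAGES	16

struct pipe_direct_state {		/* hypothetical per-pipe bookkeeping */
	unsigned int pinned_pages;
};

/* Returns 1 if this request may take the direct-copy path, 0 if it must
 * use the normal copy through the pipe buffer instead. */
static int can_pin_for_direct_copy(struct pipe_direct_state *s, size_t len)
{
	unsigned int pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;

	if (s->pinned_pages + pages > PIPE_DIRECT_MAX_PAGES)
		return 0;
	s->pinned_pages += pages;
	return 1;
}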
Manfred Spraul wrote:
>
> I'm now running with the patch for several hours, no problems.
>
> bw_pipe transfer rate has nearly doubled and the number of context
> switches for one bw_pipe run is down from 71500 to 5500.
>
> Please test it.
Any particular reason for not using davem's single copy kiobuf based code?
I'm now running with the patch for several hours, no problems.
bw_pipe transfer rate has nearly doubled and the number of context
switches for one bw_pipe run is down from 71500 to 5500.
Please test it.
--
Manfred
// $Header$
// Kernel Version:
// VERSION = 2
// PATCHLEVEL = 4
// SUBLEVEL =