On Tue, 5 Jun 2001, Mike Galbraith wrote:
Yes. If we start writing out sooner, we aren't stuck with pushing a
ton of IO all at once and can use prudent limits. Not only because of
potential allocation problems, but because our situation is changing
rapidly, so small corrections done often
On Mon, 26 Feb 2001, David S. Miller wrote:
> At gigapacket rates, it becomes an issue. This guy is talking about
> tinkering with new IP _options_, not just the header. So even if the
> IP header itself fits totally in a cache line, the options afterwards
> likely will not and thus require
On Mon, 26 Feb 2001, David S. Miller wrote:
> Not to my knowledge. Routers already change the time to live field,
> so I see no reason why they can't do smart things with special IP
> options either (besides efficiency concerns :-).
A number of ISPs patch the MSS value to 1492 due to the
On 24 Jan 2001, David Wragg wrote:
[EMAIL PROTECTED] (Eric W. Biederman) writes:
Why do you need such a large buffer?
ext2 doesn't guarantee sustained write bandwidth (in particular,
writing a page to an ext2 file can have a high latency due to reading
the block bitmap synchronously).
On Tue, 9 Jan 2001, Linus Torvalds wrote:
> The _lower-level_ stuff (ie TCP and the drivers) want the "array of
> tuples", and again, they do NOT want an array of pages, because if
> somebody does two sendfile() calls that fit in one packet, it really needs
> an array of tuples.
A kiobuf simply
On Tue, 9 Jan 2001, Ingo Molnar wrote:
this is why I meant that *right now* kiobufs are not suited for networking,
at least the way we do it. Maybe if kiobufs had the same kind of internal
structure as sk_frag (i.e. an array of (page,offset,size) triples, not an
array of pages), that would work out
On Tue, 9 Jan 2001, Ingo Molnar wrote:
On Tue, 9 Jan 2001, Stephen C. Tweedie wrote:
please study the networking portions of the zerocopy patch and you'll see
why this is not desirable. An alloc_kiovec()/free_kiovec() is exactly the
thing we cannot afford in a sendfile() operation.
On Wed, 11 Oct 2000, Linus Torvalds wrote:
On Thu, 12 Oct 2000, Benjamin C.R. LaHaise wrote:
Note the fragment above those portions of the patch where the
pte_xchg_clear is done on the page table: this results in a page fault
for any other cpu that looks at the pte while
On Wed, 11 Oct 2000, David S. Miller wrote:
> It's safe because of how x86 hardware works
>
> What about other platforms?
If atomic ops don't work, then software dirty bits are still an option
(read as: it shouldn't break RISC CPUs).
-ben
Hello Linus,
On Wed, 11 Oct 2000, Linus Torvalds wrote:
I much preferred the dirty fault version.
What does "quite noticeable" mean? Does it mean that you can see page
faults (no big deal), or does it mean that you can actually measure the
performance degradation objectively?
It's a factor
On Mon, 25 Sep 2000 [EMAIL PROTECTED] wrote:
On Mon, Sep 25, 2000 at 09:23:48PM +0100, Alan Cox wrote:
my prediction is that if you show me an example of
a DoS vulnerability, I can show you a fix that does not require bean counting.
Am I wrong?
I think so. Page tables are a good