On Fri, May 25, 2001 at 10:01:37PM -0400, Ben LaHaise wrote:
> On Sat, 26 May 2001, Andrea Arcangeli wrote:
>
> > On Fri, May 25, 2001 at 09:38:36PM -0400, Ben LaHaise wrote:
> > > You're missing a few subtle points:
> > >
> > > 1. reservations are against a specific zone
> >
> > A single zone
On Fri, May 25, 2001 at 09:38:36PM -0400, Ben LaHaise wrote:
> You're missing a few subtle points:
>
> 1. reservations are against a specific zone
A single zone is not used only for one thing, period. In my previous
email I explained the only conditions under which a reserved pool can
avoid
On Sat, 26 May 2001, Andrea Arcangeli wrote:
> Please merge this one in 2.4 for now (originally from Ingo, I only
> improved it), this is a real definitive fix
With the only minor detail being that it DOESN'T WORK.
You're not solving the problems of GFP_BUFFER allocators
looping forever in
On Fri, May 25, 2001 at 08:29:38PM -0400, Ben LaHaise wrote:
> amount of bounce buffers to guarantee progress while submitting I/O. The
> -ac kernels have a patch from Ingo that provides private pools for bounce
> buffers and buffer_heads. I went a step further and have a memory
> reservation patch
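The private-pool idea being discussed can be sketched in miniature (plain Python with hypothetical names, not the actual kernel code): a fixed reserve is set aside up front, and the allocator falls back to it only when the general allocator fails, so I/O submission can always make progress.

```python
# Illustrative sketch (not kernel code): a private reserve pool that
# guarantees forward progress for bounce-buffer allocation.

class ReservePool:
    def __init__(self, reserve_count):
        # Pre-allocate a fixed number of buffers while memory is still
        # available; these are never handed back to the general pool.
        self._reserve = [bytearray(4096) for _ in range(reserve_count)]

    def alloc(self, general_alloc):
        # Try the general-purpose allocator first.
        buf = general_alloc()
        if buf is not None:
            return buf
        # Fall back to the private reserve so in-flight I/O can complete
        # (and thereby free memory) instead of deadlocking.
        if self._reserve:
            return self._reserve.pop()
        return None  # caller must wait for a buffer to be freed

    def free(self, buf):
        # Returned buffers refill the reserve first.
        self._reserve.append(buf)
```

The point of the design is that the reserve is bounded and dedicated: exhausting the general allocator can stall new I/O, but it can never stall the completion path that frees memory.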
On Fri, 25 May 2001, Ben LaHaise wrote:
>
> Highmem systems currently manage to hang themselves quite completely upon
> running out of memory in the normal zone. One of the failure modes is
> looping in __alloc_pages from get_unused_buffer_head to map a dirty page.
> Another results in looping
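The failure mode Ben describes is a circular dependency, which a deliberately simplified sketch (hypothetical names, not kernel code) makes concrete: freeing normal-zone memory requires writing out dirty pages, but mapping a dirty page needs a buffer_head from the very zone that is exhausted, so the allocation loop never makes progress.

```python
# Illustrative sketch (not kernel code) of the circular dependency.

def alloc_buffer_head(normal_zone_free):
    # Analogue of get_unused_buffer_head: needs a page from the
    # normal zone.
    return "bh" if normal_zone_free > 0 else None

def map_dirty_page(normal_zone_free, max_retries=5):
    # Analogue of the __alloc_pages retry loop. With the normal zone
    # exhausted, every attempt fails and no dirty page is ever written
    # out; the real kernel would spin here forever, so this sketch
    # bounds the loop just to make the lack of progress observable.
    for _ in range(max_retries):
        if alloc_buffer_head(normal_zone_free) is not None:
            return True   # got a buffer_head; the dirty page can be written
    return False          # looped without progress: the deadlock
```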
On Fri, 25 May 2001, Rik van Riel wrote:
>
> The function do_try_to_free_pages() also gets called when we're
> only short on inactive pages, but we still have TONS of free
> memory. In that case, I don't think we'd actually want to steal
> free memory from anyone.
Well, kmem_cache_reap() doesn't
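The distinction Rik draws can be sketched as a simple guard (hypothetical names and thresholds, not the actual kernel logic): a shortage of inactive pages alone should not trigger stealing from caches when plenty of memory is outright free.

```python
# Illustrative sketch (hypothetical, not kernel code): separate "we need
# to balance the lists" from "we need to steal memory from caches".

def needs_balancing(free_pages, free_target, inactive_pages, inactive_target):
    # do_try_to_free_pages runs when either the free list or the
    # inactive list is below its target.
    return free_pages < free_target or inactive_pages < inactive_target

def should_reap_caches(free_pages, free_target):
    # Only steal memory from caches (slab, etc.) when free memory is
    # genuinely low; being short on inactive pages while free memory
    # is plentiful is not a real shortage.
    return free_pages < free_target
```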
On Fri, 25 May 2001, Linus Torvalds wrote:
> Oh, also: the logic behind the change of the kmem_cache_reap() - instead
> of making it conditional on the _reverse_ test of what it has historically
> been, why isn't it just completely unconditional? You've basically
> dismissed the only valid reason
On Sat, 26 May 2001, Alan Cox wrote:
>
> But Linus is right I think - VM changes often prove 'interesting'. Test it in
> -ac, get some figures for real-world use, then plan further
.. on the other hand, thinking more about this, I'd rather be called
"stupid" than "stodgy".
So I think I'll buy some
On Sat, 26 May 2001, Alan Cox wrote:
> But Linus is right I think - VM changes often prove
> 'interesting'. Test it in -ac, get some figures for real-world
> use, then plan further
Oh well. As long as he takes the patch to page_alloc.c, otherwise
everybody _will_ have to "experiment" with the -ac
On Fri, 25 May 2001, Rik van Riel wrote:
>
> Without the patch my workstation (with ~180MB RAM) usually has
> between 50 and 80MB of inode/dentry cache and real usable stuff
> gets swapped out.
All I want is more people giving feedback.
It's clear that neither my nor your machine is a good
On Fri, 25 May 2001, Rik van Riel wrote:
>
> Yeah, I guess the way Linux 2.2 balances things is way too
> experimental ;)
Ehh.. Take a look at the other differences between the VM's. Which may
make a 2.2.x approach completely bogus.
And take a look at how long the 2.2.x VM took to stabilize,
On Fri, 25 May 2001, Rik van Riel wrote:
>
> OK, shoot me. Here it is again, this time _with_ patch...
I'm not going to apply this as long as it plays experimental games with
"shrink_icache()" and friends. I haven't seen anybody comment on the
performance on this, and I can well imagine that
OK, shoot me. Here it is again, this time _with_ patch...
-- Forwarded message --
Date: Fri, 25 May 2001 16:53:38 -0300 (BRST)
From: Rik van Riel <[EMAIL PROTECTED]>
Hi Linus,
the following patch does:
1) Remove GFP_BUFFER and HIGHMEM related deadlocks, by letting
these