On Tue, 8 May 2001, David S. Miller wrote:
> So instead, you could test for the condition that prevents any
> possible forward progress, no?
	if (!order || free_shortage() > 0)
		goto try_again;
(which was the experimental patch I discussed with Marcelo)
regards,
Rik
--
Hi,
On Thu, May 10, 2001 at 03:22:57PM -0300, Marcelo Tosatti wrote:
> Initially I thought about __GFP_FAIL to be used by writeout routines which
> want to cluster pages until they can allocate memory without causing any
> pressure to the system. Something like this:
>
> while ((page =
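Marcelo's loop is cut off above, but the idea reads as: keep allocating with __GFP_FAIL until the allocator refuses rather than applies pressure, then write out whatever was gathered. A hedged userspace sketch of that clustering pattern (alloc_nonpressuring() is a hypothetical stand-in for a __GFP_FAIL | __GFP_WAIT allocation, not a kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CLUSTER_MAX 16

/*
 * Hypothetical stand-in for a __GFP_FAIL | __GFP_WAIT allocation:
 * returns NULL instead of blocking once the system is under pressure.
 */
static void *alloc_nonpressuring(size_t size, int under_pressure)
{
	if (under_pressure)
		return NULL;	/* __GFP_FAIL: fail instead of reclaiming */
	return malloc(size);
}

/* Cluster pages for writeout until allocating would cause pressure. */
static int collect_cluster(void *cluster[], int max, int under_pressure)
{
	int n = 0;
	void *page;

	while (n < max && (page = alloc_nonpressuring(4096, under_pressure)))
		cluster[n++] = page;
	return n;	/* caller writes out whatever was gathered */
}
```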
Hi,
On Thu, May 10, 2001 at 01:43:46PM -0300, Marcelo Tosatti wrote:
> No. __GFP_FAIL can try to reclaim pages from inactive clean.
>
> We just want to keep __GFP_FAIL allocations from going to
> try_to_free_pages().
Why? __GFP_FAIL is only useful as an indication that the caller has
some
On Wed, 9 May 2001, Marcelo Tosatti wrote:
> On Wed, 9 May 2001, Mark Hemment wrote:
> > Could introduce another allocation flag (__GFP_FAIL?) which is or'ed
> > with a __GFP_WAIT to limit the looping?
>
> __GFP_FAIL is in the -ac tree already and it is being used by the bounce
> buffer allocation
Hi,
On Thu, May 10, 2001 at 03:49:05PM -0300, Marcelo Tosatti wrote:
> Back to the main discussion --- I guess we could make __GFP_FAIL (with
> __GFP_WAIT set :)) allocations actually fail if try_to_free_pages() does
> not make any progress (ie returns zero). But maybe that's a bit too
> extreme.
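Marcelo's proposal amounts to one extra failure condition in the allocator's retry loop: a __GFP_FAIL caller gives up as soon as try_to_free_pages() reports zero progress. A small model of just that decision (the progress count is a plain parameter here, and the flag value is illustrative, not the one used in the -ac tree):

```c
#include <assert.h>

#define __GFP_FAIL 0x100	/* illustrative value, not the -ac tree's */

/*
 * Model of the proposed rule: a __GFP_FAIL allocation fails as soon
 * as try_to_free_pages() makes no progress (returns zero), instead
 * of looping; other allocations keep retrying.
 */
static int should_fail(unsigned int gfp_mask, int pages_freed)
{
	return (gfp_mask & __GFP_FAIL) && pages_freed == 0;
}
```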
On Tue, 8 May 2001, David S. Miller wrote:
> Actually, the change was made because it is illogical to try only
> once on multi-order pages. Especially because we depend upon order
> 1 pages so much (every task struct allocated). We depend upon them
> even more so on sparc64 (certain kinds of page
Marcelo Tosatti writes:
> On Tue, 8 May 2001, Mark Hemment wrote:
> > Does anyone know why the 2.4.3pre6 change was made?
>
> Because wakeup_bdflush(0) can wakeup bdflush _even_ if it does not have
> any job to do (ie less than 30% dirty buffers in the default config).
Actually, the
On Tue, May 08 2001, Marcelo Tosatti wrote:
> > The attached patch (against 2.4.5-pre1) fixes the looping symptom, by
> > adding a counter and looping only twice for non-zero order allocations.
>
> Looks good. (actually Rik had a patch similar to this which fixed a real
> case with cdda2wav
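The counter-based patch discussed above bounds the loop differently from Rik's shortage test: order-0 allocations may keep retrying, while non-zero orders loop back at most twice. A sketch of that predicate (a userspace model under the stated assumption about the patch's behavior):

```c
#include <assert.h>

/*
 * Model of the counter-limited retry from the patch: 'tries' counts
 * completed passes through the reclaim path. Order-0 requests always
 * loop back; non-zero orders give up after two passes.
 */
static int keep_looping(unsigned int order, int tries)
{
	return !order || tries < 2;
}
```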
On Tue, 8 May 2001, Mark Hemment wrote:
>
> In 2.4.3pre6, code in page_alloc.c:__alloc_pages(), changed from:
>
> 	try_to_free_pages(gfp_mask);
> 	wakeup_bdflush();
> 	if (!order)
> 		goto try_again;
>
> to
>
> 	try_to_free_pages(gfp_mask);
> 	wakeup_bdflush();
> 	goto try_again;
>
> The real fix is to measure fragmentation and the progress of kswapd, but
> that is too drastic for 2.4.x.
I suspect the real fix might, in general, be
a) to reduce use of kmalloc() etc. which gives
physically contiguous memory, where virtually
contiguous memory will do (and is,
In 2.4.3pre6, code in page_alloc.c:__alloc_pages(), changed from:

	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	if (!order)
		goto try_again;

to

	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	goto try_again;

This introduced