On Fri, Jan 21, 2005 at 08:08:21AM +0100, Andi Kleen wrote:
> So at least for GFP_DMA it seems to be definitely needed.
Indeed. Plus if you add pci32 zone, it'll be needed for it too on
x86-64, like for the normal zone on x86, since ptes will go in highmem
while pci32 allocations will not. So
On Fri, Jan 21, 2005 at 06:04:25PM +1100, Nick Piggin wrote:
> OK this is a fairly lame example... but the current code is more or
> less just lucky that ZONE_DMA doesn't usually fill up with pinned mem
> on machines that need explicit ZONE_DMA allocations.
Yep. For the DMA zone all slab cache
On Thu, Jan 20, 2005 at 11:00:16PM -0800, Andrew Morton wrote:
> Last time we discussed this you pointed out that reserving more lowmem from
> highmem-capable allocations may actually *help* things. (Tries to remember
> why) By reducing inode/dentry eviction rates? I asked Martin Bligh if he
>
On Thu, 2005-01-20 at 22:46 -0800, Andrew Morton wrote:
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> > It does turn on lowmem protection by default. We never reached
> > an agreement about doing this though, but Andrea has shown that
> > it fixes trivial OOM cases.
> >
> > I think it should be
Andrew Morton <[EMAIL PROTECTED]> writes:
> Just that it throws away a bunch of potentially usable memory. In three
> years I've seen zero reports of any problems which would have been solved
> by increasing the protection ratio.
We ran into a big problem with this on x86-64. The SUSE installer
On Thu, Jan 20, 2005 at 10:46:45PM -0800, Andrew Morton wrote:
> Thus empirically, it appears that the number of machines which need a
> non-zero protection ratio is exceedingly small. Why change the setting on
> all machines for the benefit of the tiny few? Seems weird. Especially
> when this
Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
>
> Anyway if you leave it off by default I don't mind; with my new code
> forward ported straight from 2.4 mainline, it's possible for the first
> time to set it from userspace without having to embed knowledge of the
> kernel min_kbytes settings at
On Fri, Jan 21, 2005 at 05:36:14PM +1100, Nick Piggin wrote:
> I think it should be turned on by default. I can't recall what
I think so too, since the number of people who can be bitten by this is
certainly higher than the number of people who know the VM internals
and for what kind of
From: Andrea Arcangeli <[EMAIL PROTECTED]>
Subject: keep balance between different classzones
This is the forward port to 2.6 of the lowmem_reserved algorithm I
invented in 2.4.1*, merged in 2.4.2x already and needed to fix workloads
like google (especially without swap) on x86 with >1G of ram,