On Mon, 21 May 2007, Peter Zijlstra wrote:
> > Yes, sure, if we do not have a context then no restrictions originating
> > there can be enforced. So you want to restrict the logic now to
> > interrupt allocs? I.e. GFP_ATOMIC?
>
> No, any kernel alloc.
Then we have the problem again.
> > Correct.
On Mon, 2007-05-21 at 13:32 -0700, Christoph Lameter wrote:
> On Mon, 21 May 2007, Peter Zijlstra wrote:
>
> > > This means we will disobey cpuset and memory policy constraints?
> >
> > From what I can make of it, yes. Although I'm a bit hazy on the
> > mempolicy code.
>
In an interrupt context we do not have a process context.
On Mon, 21 May 2007, Peter Zijlstra wrote:
> > This means we will disobey cpuset and memory policy constraints?
>
> From what I can make of it, yes. Although I'm a bit hazy on the
> mempolicy code.
In an interrupt context we do not have a process context. But there is
no exemption from memory policies
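For context: a mempolicy only exists as part of a task, so an allocation made from interrupt context has nothing to consult and falls back to the system default policy. A rough sketch of that decision (simplified, illustrative structure, not the literal mm/mempolicy.c code):

    /*
     * Sketch only: how a NUMA-aware allocation decides which policy to
     * apply.  In interrupt context 'current' is an arbitrary task, so
     * neither its mempolicy nor its cpuset can meaningfully be applied.
     */
    static struct mempolicy *policy_for_alloc(void)
    {
        struct mempolicy *pol = current->mempolicy;

        if (!pol || in_interrupt())
            pol = &default_policy;  /* system-wide default policy */

        return pol;
    }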
On Mon, 2007-05-21 at 12:43 -0700, Christoph Lameter wrote:
> On Mon, 21 May 2007, Peter Zijlstra wrote:
>
> > > So the original issue is still not fixed. A slab alloc may succeed without
> > > watermarks if that particular allocation is restricted to a different set
> > > of nodes. Then the reserve slab is dropped despite the memory scarcity on
> > > another set of nodes.
On Mon, 2007-05-21 at 09:45 -0700, Christoph Lameter wrote:
> On Sun, 20 May 2007, Peter Zijlstra wrote:
>
> > I care about kernel allocations only. In particular about those that
> > have PF_MEMALLOC semantics.
>
> Hmmm.. I wish I were more familiar with PF_MEMALLOC. Cc'ing Nick.
>
> > - set page->reserve nonzero for each page allocated with
> > ALLOC_NO_WATERMARKS;
On Mon, 21 May 2007, Peter Zijlstra wrote:
> > So the original issue is still not fixed. A slab alloc may succeed without
> > watermarks if that particular allocation is restricted to a different set
> > of nodes. Then the reserve slab is dropped despite the memory scarcity on
> > another set of nodes.
On Sun, 20 May 2007, Peter Zijlstra wrote:
> I care about kernel allocations only. In particular about those that
> have PF_MEMALLOC semantics.
Hmmm.. I wish I were more familiar with PF_MEMALLOC. Cc'ing Nick.
> - set page->reserve nonzero for each page allocated with
> ALLOC_NO_WATERMARKS; w
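For reference, the quoted bullet reads roughly like this as code. page->reserve is a field proposed in this thread, not part of mainline struct page, and buddy_alloc() stands in for the real page allocator call, so treat this as a sketch only:

    /*
     * Sketch of the proposal: tag each page that was handed out while
     * ignoring the watermarks, so later consumers (e.g. the slab
     * allocator) can tell "reserve" pages apart from ordinary ones.
     */
    static struct page *alloc_and_mark(gfp_t gfp_mask, unsigned int order,
                                       int alloc_flags)
    {
        struct page *page = buddy_alloc(gfp_mask, order, alloc_flags);

        if (page)
            page->reserve = !!(alloc_flags & ALLOC_NO_WATERMARKS);

        return page;
    }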
Ok, full reset.
I care about kernel allocations only. In particular about those that
have PF_MEMALLOC semantics.
The thing I need is that any memory allocated below
ALLOC_MIN|ALLOC_HIGH|ALLOC_HARDER
is only ever used by processes that have ALLOC_NO_WATERMARKS rights;
for the duration of the distress.
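Stated as code, the invariant is roughly the following. The helper name and the exact set of "privileged" conditions are assumptions for illustration; the real rules also involve TIF_MEMDIE and related cases:

    /*
     * May this caller consume memory that was originally obtained below
     * the ALLOC_MIN|ALLOC_HIGH|ALLOC_HARDER watermarks?  Only if the
     * caller itself has ALLOC_NO_WATERMARKS rights.
     */
    static bool may_use_reserve(struct page *page)
    {
        if (!page->reserve)
            return true;    /* ordinary memory, no restriction */

        /* PF_MEMALLOC tasks are the canonical "no watermarks" case. */
        return current->flags & PF_MEMALLOC;
    }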
Peter wrote:
> cpusets are ignored when in dire straits for a kernel alloc.
No - most kernel allocations never ignore cpusets.
The ones marked NOFAIL or ATOMIC can ignore cpusets in dire straits
and the ones off interrupts lack an applicable cpuset context.
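A simplified rendering of that rule, illustrative only; the real check (cpuset_zone_allowed()) has more cases, e.g. exclusive/hardwall handling and OOM victims, and the helper below is hypothetical:

    /*
     * Sketch: when may an allocation take memory from a zone outside the
     * task's cpuset?
     */
    static bool cpuset_allows(struct zone *z, gfp_t gfp_mask)
    {
        if (in_interrupt())
            return true;                /* no applicable cpuset context */

        if (zone_in_tasks_cpuset(z))    /* hypothetical helper */
            return true;

        /* "Dire straits": must-not-fail or atomic allocations may spill. */
        if ((gfp_mask & __GFP_NOFAIL) || !(gfp_mask & __GFP_WAIT))
            return true;

        return false;
    }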
On Fri, 18 May 2007, Peter Zijlstra wrote:
> On Thu, 2007-05-17 at 15:27 -0700, Christoph Lameter wrote:
> Isn't the zone mask the same for all allocations from a specific slab?
> If so, then the slab-wide ->reserve_slab will still do the right thing
> (barring cpusets).
All allocations from a single slab hav
On Thu, 2007-05-17 at 15:27 -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Peter Zijlstra wrote:
>
> > The way I read the cpuset page allocator, it will only respect the
> > cpuset if there is memory aplenty. Otherwise it will grab whatever. So
> > still, it will only ever use ALLOC_NO_WAT
On Thu, 17 May 2007, Peter Zijlstra wrote:
> The way I read the cpuset page allocator, it will only respect the
> cpuset if there is memory aplenty. Otherwise it will grab whatever. So
> still, it will only ever use ALLOC_NO_WATERMARKS if the whole system is
> in distress.
Sorry no. The purpose o
> The way I read the cpuset page allocator, it will only respect the
> cpuset if there is memory aplenty. Otherwise it will grab whatever. So
> still, it will only ever use ALLOC_NO_WATERMARKS if the whole system is
> in distress.
Wrong. Well, only a little right.
For allocations that can't fail
On Thu, 2007-05-17 at 12:24 -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Peter Zijlstra wrote:
>
> > The proposed patch doesn't change how the kernel functions at this
> > point; it just enforces an existing rule better.
>
> Well I'd say it controls the allocation failures. And that onl
On Thu, 17 May 2007, Peter Zijlstra wrote:
> The proposed patch doesn't change how the kernel functions at this
> point; it just enforces an existing rule better.
Well, I'd say it controls the allocation failures. And that only works if
one can consider the system as having a single zone.
Let's say
On Thu, 2007-05-17 at 11:02 -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Matt Mackall wrote:
>
> > Simply stated, the problem is sometimes it's impossible to free memory
> > without allocating more memory. Thus we must keep enough protected
> > reserve that we can guarantee progress. Thi
On Thu, 17 May 2007, Matt Mackall wrote:
> Simply stated, the problem is sometimes it's impossible to free memory
> without allocating more memory. Thus we must keep enough protected
> reserve that we can guarantee progress. This is what mempools are for
> in the regular I/O stack. Unfortunately,
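For readers unfamiliar with mempools, this is the usual pattern in the block/IO paths: pre-allocate a minimum number of objects so forward progress is possible even when the page allocator is under pressure. A stand-alone sketch, not code from this thread; the reserve depth and object size are arbitrary:

    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/mempool.h>

    #define IO_RESERVE_NR   16              /* arbitrary example depth */

    static mempool_t *io_req_pool;

    static int __init io_pool_init(void)
    {
        /* Pre-allocate IO_RESERVE_NR 256-byte elements up front. */
        io_req_pool = mempool_create_kmalloc_pool(IO_RESERVE_NR, 256);
        return io_req_pool ? 0 : -ENOMEM;
    }

    static void *get_io_request(void)
    {
        /*
         * Falls back to the pre-allocated reserve when the page allocator
         * cannot deliver; may sleep until an element is returned with
         * mempool_free().
         */
        return mempool_alloc(io_req_pool, GFP_NOIO);
    }

    static void put_io_request(void *req)
    {
        mempool_free(req, io_req_pool); /* refills the reserve first */
    }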
On Thu, 17 May 2007, Peter Zijlstra wrote:
> > I am weirdly confused by these patches. Among other things you told me
> > that the performance does not matter since it's never (or rarely) being
> > used (why do it then?).
>
> When we are very low on memory and do access the reserves by means of
On Thu, 17 May 2007, Peter Zijlstra wrote:
> > > It's about ensuring ALLOC_NO_WATERMARKS memory only reaches PF_MEMALLOC
> > > processes, not Joe Random's pi calculator.
> >
> > Watermarks are per zone?
>
> Yes, but the page allocator might address multiple zones in order to
> obtain a page.
And
On Thu, May 17, 2007 at 10:29:06AM -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Peter Zijlstra wrote:
>
> > I'm really not seeing why you're making such a fuss about it; normally
> > when you push the system this hard we're failing allocations left, right
> > and center too. It's just that
On Thu, 2007-05-17 at 10:30 -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Peter Zijlstra wrote:
>
> > > 2. It seems to be based on global ordering of allocations which is
> > >    not possible given large systems and the relativistic constraints
> > >    of physics. Ordering of events gets
On Thu, 2007-05-17 at 10:29 -0700, Christoph Lameter wrote:
> On Thu, 17 May 2007, Peter Zijlstra wrote:
>
> > I'm really not seeing why you're making such a fuss about it; normally
> > when you push the system this hard we're failing allocations left, right
> > and center too. It's just that the bl
On Thu, 17 May 2007, Peter Zijlstra wrote:
> > 2. It seems to be based on global ordering of allocations which is
> >    not possible given large systems and the relativistic constraints
> >    of physics. Ordering of events gets more expensive the bigger the
> >    system is.
> >
> >    How does
On Thu, 17 May 2007, Peter Zijlstra wrote:
> I'm really not seeing why you're making such a fuss about it; normally
> when you push the system this hard we're failing allocations left, right
> and center too. It's just that the block IO path has some mempools which
> allow it to write out some (swap
On Wed, 2007-05-16 at 14:42 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > Hmmm.. so we could simplify the scheme by storing the last rank
> > > somewhere.
> >
> > Not sure how that would help..
>
> One does not have a way of determining the current process's
On Wed, 2007-05-16 at 20:02 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> >
> > In the interest of creating a reserve-based allocator, we need to make the
> > slab allocator (*sigh*, all three) fair with respect to GFP flags.
> >
> > That is, we need to pr
On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> In the interest of creating a reserve-based allocator, we need to make the
> slab allocator (*sigh*, all three) fair with respect to GFP flags.
>
> That is, we need to protect memory from being used by easier gfp flags than it
> was allocated with.
On Wed, 16 May 2007, Peter Zijlstra wrote:
> > Hmmm.. so we could simplify the scheme by storing the last rank
> > somewhere.
>
> Not sure how that would help..
One does not have a way of determining the current process's
priority? Just need to do an alloc?
If we had the current process's "rank"
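The "rank" being discussed is essentially a total order over allocation severities, so a page and a would-be consumer can be compared. A sketch of the idea; values and helper names are invented here for illustration, the actual patch defines its own encoding in the page allocator:

    #define RANK_NO_WATERMARKS  0   /* PF_MEMALLOC-style emergency  */
    #define RANK_ATOMIC         1   /* ALLOC_HIGH / ALLOC_HARDER    */
    #define RANK_NORMAL         2   /* plain GFP_KERNEL             */

    static int alloc_flags_to_rank(int alloc_flags)
    {
        if (alloc_flags & ALLOC_NO_WATERMARKS)
            return RANK_NO_WATERMARKS;
        if (alloc_flags & (ALLOC_HIGH | ALLOC_HARDER))
            return RANK_ATOMIC;
        return RANK_NORMAL;
    }

    /*
     * A caller may take an object from a slab page only if it is at least
     * as "desperate" as the allocation that produced the page.
     */
    static bool rank_allows(int caller_rank, int page_rank)
    {
        return caller_rank <= page_rank;
    }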
On Wed, 2007-05-16 at 14:13 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
> > > How we know that we are out of trouble? Just try another alloc and see?
> > > If
> > > that is the case then we may be failing allocations after the memory
> > > situation has cleared u
On Wed, 16 May 2007, Peter Zijlstra wrote:
> > How do we know that we are out of trouble? Just try another alloc and see? If
> > that is the case then we may be failing allocations after the memory
> > situation has cleared up.
> No, no, for each regular allocation we retry to populate ->cpu_slab with a new slab.
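So every ordinary allocation doubles as a recovery probe: the moment a watermark-obeying slab allocation succeeds again, the reserve slab is retired. A sketch of that control flow; ->reserve_slab is the field mentioned elsewhere in the thread, everything else is illustrative and not the actual SLUB patch:

    static void *slab_alloc_probe(struct kmem_cache *s, gfp_t gfp)
    {
        struct page *page;

        if (likely(!s->reserve_slab))
            return fastpath_alloc(s, gfp);      /* hypothetical */

        /* Probe: try a normal, watermark-obeying slab allocation. */
        page = allocate_new_slab(s, gfp);       /* hypothetical */
        if (page) {
            /* Memory is back: retire the reserve slab. */
            free_reserve_slab(s);               /* hypothetical */
            return first_object(s, page);       /* hypothetical */
        }

        /* Still in distress: reserve objects only for privileged callers. */
        if (current->flags & PF_MEMALLOC)
            return first_object(s, s->reserve_slab);

        return NULL;
    }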
On Wed, 2007-05-16 at 13:59 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > I do not see any distinction between DMA and regular memory. If we need
> > > DMA memory to complete the transaction then this won't work?
> >
> > If network relies on slabs that are c
On Wed, 16 May 2007, Peter Zijlstra wrote:
> > I do not see any distinction between DMA and regular memory. If we need
> > DMA memory to complete the transaction then this won't work?
>
> If network relies on slabs that are cpuset constrained and the page
> allocator reserves do not match that, t
On Wed, 2007-05-16 at 13:44 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > How does all of this interact with
> > >
> > > 1. cpusets
> > >
> > > 2. dma allocations and highmem?
> > >
> > > 3. Containers?
> >
> > Much like the normal kmem_cache would do; I'
On Wed, 16 May 2007, Peter Zijlstra wrote:
> > How does all of this interact with
> >
> > 1. cpusets
> >
> > 2. dma allocations and highmem?
> >
> > 3. Containers?
>
> Much like the normal kmem_cache would do; I'm not changing any of the
> page allocation semantics.
So if we run out of memory
On Wed, 2007-05-16 at 13:27 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > So it's no use on NUMA?
> >
> > It is; it's just that we're swapping very heavily at that point, so a
> > bouncing cache-line will not significantly slow down the box compared to
> > waitin
On Wed, 16 May 2007, Peter Zijlstra wrote:
> > So it's no use on NUMA?
>
> It is; it's just that we're swapping very heavily at that point, so a
> bouncing cache-line will not significantly slow down the box compared to
> waiting for block IO, will it?
How does all of this interact with
1. cpusets
On Wed, 2007-05-16 at 12:53 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > If this 4k-CPU system ever gets to touch the new lock, it is in way
> > deeper trouble than a bouncing cache-line.
>
> So it's no use on NUMA?
It is; it's just that we're swapping very he
On Wed, 16 May 2007, Peter Zijlstra wrote:
> If this 4k-CPU system ever gets to touch the new lock, it is in way
> deeper trouble than a bouncing cache-line.
So it's no use on NUMA?
> Please look at it more carefully.
>
> We differentiate pages allocated at the level where GFP_ATOMIC starts to
>
On Wed, 2007-05-16 at 11:43 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > On Tue, 2007-05-15 at 15:02 -0700, Christoph Lameter wrote:
> > > On Tue, 15 May 2007, Peter Zijlstra wrote:
> > >
> > > > How about something like this; it seems to sustain a little str
On Wed, 16 May 2007, Peter Zijlstra wrote:
> On Tue, 2007-05-15 at 15:02 -0700, Christoph Lameter wrote:
> > On Tue, 15 May 2007, Peter Zijlstra wrote:
> >
> > > How about something like this; it seems to sustain a little stress.
> >
> > Argh again mods to kmem_cache.
>
> Hmm, I had not underst
On Tue, 2007-05-15 at 15:02 -0700, Christoph Lameter wrote:
> On Tue, 15 May 2007, Peter Zijlstra wrote:
>
> > How about something like this; it seems to sustain a little stress.
>
> Argh again mods to kmem_cache.
Hmm, I had not understood you minded that very much; I did stay away
from all the
On Tue, 15 May 2007, Peter Zijlstra wrote:
> How about something like this; it seems to sustain a little stress.
Argh again mods to kmem_cache.
Could we do this with a new slab page flag? E.g. SlabEmergPool.
In alloc_slab() do:

    if (is_emergency_pool_page(page)) {
        SetSlabDebug(page);
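Filled out a bit, the suggestion reads roughly as below. SetSlabDebug() is a real SLUB page-flag helper of that era; SlabEmergPool / is_emergency_pool_page() come from the message above, and the rest is an illustrative guess at how it would hang together, not the real SLUB code:

    /* In the slab-page constructor: force reserve pages onto the slow path. */
    static void mark_if_emergency(struct page *page)
    {
        if (is_emergency_pool_page(page)) {
            /*
             * With the debug bit set, the lockless fastpath skips this
             * page, so every object handed out from it goes through the
             * checked slow path, where a PF_MEMALLOC-style test (as in the
             * earlier sketches) can decide who may drain the emergency slab.
             */
            SetSlabDebug(page);
        }
    }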
On Mon, 2007-05-14 at 21:28 +0200, Peter Zijlstra wrote:
> One allocator is all I need; it would just be grand if all could be
> supported.
>
> So what you suggest is not placing the 'emergency' slab into the regular
> place so that normal allocations will not be able to find it. Then if an
> eme
On Mon, 14 May 2007, Peter Zijlstra wrote:
> > > The thing is: I don't need any speed; as long as the machine stays
> > > alive I'm good. However, others are planning to build a full reserve-based
> > > allocator to properly fix the places that now use __GFP_NOFAIL and
> > > situations such as in a
On Mon, 2007-05-14 at 13:06 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > > Hmmm.. Maybe we could do that. But what I had in mind was simply to
> > > set a page flag (DebugSlab()) if you know in alloc_slab that the slab
> > > should only be used for emerge
On Mon, 14 May 2007, Peter Zijlstra wrote:
> > Hmmm.. Maybe we could do that. But what I had in mind was simply to
> > set a page flag (DebugSlab()) if you know in alloc_slab that the slab
> > should only be used for emergency allocation. If DebugSlab is set then the
> > fastpath will not be
On Mon, 2007-05-14 at 12:44 -0700, Andrew Morton wrote:
> On Mon, 14 May 2007 11:12:24 -0500
> Matt Mackall <[EMAIL PROTECTED]> wrote:
>
> > If I understand this correctly:
> >
> > privileged thread                     unprivileged greedy process
> > kmem_cache_alloc(...)
> >     adds new slab pa
On Mon, 2007-05-14 at 12:56 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > > You can pull the big switch (only on a SLUB slab I fear) to switch
> > > off the fast path. Do SetSlabDebug() when allocating a precious
> > > allocation that should not be gobbled up
On Mon, May 14, 2007 at 12:44:51PM -0700, Andrew Morton wrote:
> On Mon, 14 May 2007 11:12:24 -0500
> Matt Mackall <[EMAIL PROTECTED]> wrote:
>
> > If I understand this correctly:
> >
> > privileged thread                     unprivileged greedy process
> > kmem_cache_alloc(...)
> >     adds new
On Mon, 14 May 2007, Peter Zijlstra wrote:
> > You can pull the big switch (only on a SLUB slab I fear) to switch
> > off the fast path. Do SetSlabDebug() when allocating a precious
> > allocation that should not be gobbled up by lower level processes.
> > Then you can do whatever you want in t
On Mon, 14 May 2007 11:12:24 -0500
Matt Mackall <[EMAIL PROTECTED]> wrote:
> If I understand this correctly:
>
> privileged thread                     unprivileged greedy process
> kmem_cache_alloc(...)
>     adds new slab page from lowmem pool
> do_io()
>k
On Mon, 2007-05-14 at 10:57 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > On Mon, 2007-05-14 at 09:29 -0700, Christoph Lameter wrote:
> > > On Mon, 14 May 2007, Matt Mackall wrote:
> > >
> > > > privileged thread                     unprivileged greedy proces
On Mon, 14 May 2007, Peter Zijlstra wrote:
> On Mon, 2007-05-14 at 09:29 -0700, Christoph Lameter wrote:
> > On Mon, 14 May 2007, Matt Mackall wrote:
> >
> > > privileged thread                     unprivileged greedy process
> > > kmem_cache_alloc(...)
> > >     adds new slab page from lowmem po
On Mon, 2007-05-14 at 09:29 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Matt Mackall wrote:
>
> > privileged thread                     unprivileged greedy process
> > kmem_cache_alloc(...)
> >     adds new slab page from lowmem pool
>
> Yes but it returns an object for the privileged
On Mon, 14 May 2007, Peter Zijlstra wrote:
> > Why does this have to be handled by the slab allocators at all? If you have
> > free pages in the page allocator then the slab allocators will be able to
> > use that reserve.
>
> Yes, too freely. GFP flags are only ever checked when you allocate a ne
On Mon, 14 May 2007, Matt Mackall wrote:
> privileged thread                     unprivileged greedy process
> kmem_cache_alloc(...)
>     adds new slab page from lowmem pool
Yes, but it returns an object for the privileged thread. Is that not
enough?
> do_io()
>
On Mon, May 14, 2007 at 08:53:21AM -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > In the interest of creating a reserve-based allocator, we need to make the
> > slab allocator (*sigh*, all three) fair with respect to GFP flags.
>
> I am not sure what the p
On Mon, 2007-05-14 at 08:53 -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > In the interest of creating a reserve-based allocator, we need to make the
> > slab allocator (*sigh*, all three) fair with respect to GFP flags.
>
> I am not sure what the point of
On Mon, 14 May 2007, Peter Zijlstra wrote:
> In the interest of creating a reserve-based allocator, we need to make the
> slab allocator (*sigh*, all three) fair with respect to GFP flags.
I am not sure what the point of all of this is.
> That is, we need to protect memory from being used by