Re: Very aggressive swapping after 2 hours rest

2000-09-22 Thread Rik van Riel

On Wed, 20 Sep 2000, Andrea Arcangeli wrote:
> On Wed, Sep 20, 2000 at 12:53:18PM -0300, Rik van Riel wrote:
> > On Tue, 19 Sep 2000, Andrea Arcangeli wrote:
> > > On Tue, Sep 19, 2000 at 05:29:29AM -0300, Rik van Riel wrote:
> > > > what I wanted to do in the new VM, except that I didn't
> > > > see why we would want to restrict it to swap pages only?
> > > 
> > > You _must_ do that _only_ for the swap_cache, that's a specific
> > > issue of the swap_cache during swapout (notenote: not during
> > > swapin!).
> > 
> > Which part of "why" did you not understand?
> > 
> > I see no reason why we should not do the same trick for
> > mmap()ed pages and maybe other memory too...
> 
> It's quite obvious we not talking about the same thing (and it's also quite
> obvious the problem isn't addressed in test9), I'll restart from scratch trying
> to be more clear.
> 
> There are two cases:
> 
> 1)pageins
> 2)pageouts
> 
> When we get a major page fault (pagein/swapin) and we create a
> swap cache page or a page-cache page, we must consider it a
> _more_ important page to keep in the working set.  So it's fine
> to put it at the head->next of the LRU and to age it properly.
> 
> So far so good.
> 
> Now there's a very special (subtle) case that I addressed in
> classzone and that is only related to the swapout of a swap
> cache (well, strictly speaking the pageout of shared pages could
> take advantage of it as well but I didn't wrote a mechamism
> generic enough to do that for MAP_SHARED as well yet and that's
> much less important because the dirty page cache is just in the
> LRU and it have less chances to be in the lru_cache->next
> position).

Well, my VM patch /does/ have this mechanism in a
more generic form, and uses it for:
- swapout
- unmapped mmap() pages
- drop-behind for readahead
- drop-behind for generic_file_write

Since it seems clear that you want the same thing
but just didn't read my email, I guess we should
end this thread ;)

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Very aggressive swapping after 2 hours rest

2000-09-20 Thread Andrea Arcangeli

On Wed, Sep 20, 2000 at 12:53:18PM -0300, Rik van Riel wrote:
> On Tue, 19 Sep 2000, Andrea Arcangeli wrote:
> > On Tue, Sep 19, 2000 at 05:29:29AM -0300, Rik van Riel wrote:
> > > what I wanted to do in the new VM, except that I didn't
> > > see why we would want to restrict it to swap pages only?
> > 
> > You _must_ do that _only_ for the swap_cache, that's a specific
> > issue of the swap_cache during swapout (notenote: not during
> > swapin!).
> 
> Which part of "why" did you not understand?
> 
> I see no reason why we should not do the same trick for
> mmap()ed pages and maybe other memory too...

It's quite obvious we're not talking about the same thing (and it's also quite
obvious the problem isn't addressed in test9), so I'll restart from scratch and
try to be clearer.

There are two cases:

1)  pageins
2)  pageouts

When we get a major page fault (pagein/swapin) and we create a swap cache page
or a page-cache page, we must consider it a _more_ important page to keep in
the working set.  So it's fine to put it at the head->next of the LRU and to
age it properly.

So far so good.

Now there's a very special (subtle) case that I addressed in classzone
and that is only related to the swapout of a swap cache page (well, strictly
speaking the pageout of shared pages could take advantage of it as well,
but I haven't yet written a mechanism generic enough to do that for MAP_SHARED,
and that's much less important because the dirty page cache is just in the LRU
and has less chance of being in the lru_cache->next position).

When we choose to swap out a page, it's the least interesting page we have
in the VM. If that weren't true, then the selection algorithm that chooses
which page to swap out would be flawed.

Ok, so now we have a page that we want to throw away by swapping it out.

What do we do now? We put it in the lru_cache->next position and we write
it to disk. Then we leave it in the lru_cache, waiting for shrink_mmap to
free it.

What happens now? Before shrink_mmap has a chance to free this page, it
will first have to throw away all the working set that we have in cache,
from lru_cache->next->next to lru_cache->prev.

This is an obvious design bug that we have had since 2.2.x and that
degenerated with the proper LRU in 2.4.x. The bug is that to free one page
with the swapout mechanism we first have to throw away all the working set
in cache.

In classzone I have a proper lru_swap_cache where those swapped-out pages
are put, and I always free those pages immediately, as soon as they're
unlocked. This avoids throwing away the working set for each single swapout.

One thing I will do to decrease the CPU usage in classzone is to make this
lru_swap_cache an IRQ-safe spinlock-protected LRU, so that I can move pages
into it only once the I/O has completed. As said, this is a further
optimization, not strictly necessary to save the kernel's working set.

Aggressive aging can indeed alleviate the problem.

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-20 Thread Rik van Riel

On Tue, 19 Sep 2000, Andrea Arcangeli wrote:
> On Tue, Sep 19, 2000 at 05:29:29AM -0300, Rik van Riel wrote:
> > what I wanted to do in the new VM, except that I didn't
> > see why we would want to restrict it to swap pages only?
> 
> You _must_ do that _only_ for the swap_cache, that's a specific
> issue of the swap_cache during swapout (notenote: not during
> swapin!).

Which part of "why" did you not understand?

I see no reason why we should not do the same trick for
mmap()ed pages and maybe other memory too...

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-19 Thread Andrea Arcangeli

On Tue, Sep 19, 2000 at 05:29:29AM -0300, Rik van Riel wrote:
> what I wanted to do in the new VM, except that I didn't
> see why we would want to restrict it to swap pages only?

You _must_ do that _only_ for the swap_cache; it's a specific issue of the
swap_cache during swapout (note: not during swapin!).

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-19 Thread Rik van Riel

On Mon, 18 Sep 2000, Byron Stanoszek wrote:

> I've finally had a chance to test out the new VM patch on my 32mb system.
> 
> It runs much, much better than the previous test8, and the
> pages->swap change is actually much smoother than I had expected
> it to be considering the recent talk about making it more
> gradual. I'm against having the swap more gradual because of the
> low amount of available memory and the high amount of memory
> actually taken up by processes required for normal operation.

> So, please take my opinions into consideration when/if you
> redesign the swap mechanism.

Oh, I will. The "smoother" swap code I am testing
right now concentrates on swapping from processes
that have been sleeping for longer than

cache size / inactive_target

seconds. I think this might actually help with the
performance of low-memory systems, but I need to
test this a bit before I know for sure ;)

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-19 Thread Helge Hafting

Byron Stanoszek wrote:
> 
> I've finally had a chance to test out the new VM patch on my 32mb system.
> 
> It runs much, much better than the previous test8, and the pages->swap change
> is actually much smoother than I had expected it to be considering the recent
> talk about making it more gradual. I'm against having the swap more gradual
> because of the low amount of available memory and the high amount of memory
> actually taken up by processes required for normal operation.
> 
> At the moment, there's only room for about 5-6 meg of cache. If a gradual swap
> goes into effect, then I'm afraid that the processes that actually 'need' to
> stay in memory will start swapping out and thrashing, even when there's 6 meg
> still available for use. This was precisely the problem with the old VM on my
> machine, only the system wanted to keep 16 meg free for cache (*gag*).

I hope we can have a swap mechanism that gradually writes stuff
into swap early on, but doesn't actually unmap the memory.
That way we get clean pages that are still available but can be
dropped quickly if needed.  This will smooth things out a bit
without degrading performance.

Some memory should be kept around for cache (if the processes do
any I/O at all) or you'll get I/O thrashing: the same stuff read
over and over from files instead of from cache, and a read
before every write that isn't a complete block.

Almost all processes have some pages they don't use much, such
as startup-initialization code.  Swapping those out doesn't hurt.

Helge Hafting



Re: Very aggressive swapping after 2 hours rest

2000-09-19 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:

> I don't know the internals of other OSes so I've no idea if that
> was a generic problem or not (guess why I started playing with
> linux: just to finally see the internals of an OS :) thus I
> don't know if this problem is generic or not.

Please take the time to work on your background
knowledge; it would be a good thing for you and
for Linux. You're a smart guy, and it's a shame to
see you waste your time reinventing the wheel.

> Just to make sure if we're talking about the same thing, do you
> remeber when I told about the bigger problem under swap in
> 2.4.x? I remeber I explained this to you, SCT and Alexey.

Remember when I explained to you that this was exactly
what I wanted to do in the new VM, except that I didn't
see why we would want to restrict it to swap pages only?

> If you fixed it I'm very happy and I thank you.

If you have the time, please read the source of the new
VM code. It would be nice to have your input on it too.

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-18 Thread Byron Stanoszek

I've finally had a chance to test out the new VM patch on my 32 MB system.

It runs much, much better than the previous test8, and the pages->swap change
is actually much smoother than I had expected it to be considering the recent
talk about making it more gradual. I'm against having the swap more gradual
because of the low amount of available memory and the high amount of memory
actually taken up by processes required for normal operation.

At the moment, there's only room for about 5-6 meg of cache. If a gradual swap
goes into effect, then I'm afraid that the processes that actually 'need' to
stay in memory will start swapping out and thrashing, even when there's 6 meg
still available for use. This was precisely the problem with the old VM on my
machine, only the system wanted to keep 16 meg free for cache (*gag*).

So, please take my opinions into consideration when/if you redesign the swap
mechanism.

Regards,
 Byron

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 04:36:34PM -0300, Rik van Riel wrote:
> On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> > On Sun, Sep 17, 2000 at 03:43:53PM -0300, Rik van Riel wrote:
> > > The new VM, as integrated in -test9-pre1, does the same thing,
> > 
> > Thanks for giving me some credit for my ideas.
> 
> Your ideas?  This is as much your idea as it is mine (which
> it isn't). This is ancient stuff other OSes have been doing
> for years ...

I don't know the internals of other OSes, so I've no idea whether this is a
generic problem or not (guess why I started playing with Linux: just to
finally see the internals of an OS :).

Just to make sure we're talking about the same thing: do you remember when I
told you about the bigger problem under swap in 2.4.x? I remember I explained
this to you, SCT and Alexey.

The problem I'm talking about is the way we free the last reference to the
swap cache after a swapout completes. It deals with subtle issues of the Linux
swap cache management (we didn't have this problem in 2.0.x) and it looks like
a very Linux-specific issue: an implementation bug.

The problem is/was that when swap_out in 2.1.x stopped using the free_after
logic to do the last free of the pages (when we started using shrink_mmap to
free the last reference in the swap cache), we introduced a big problem in the
memory balancing. This problem was one of the things addressed by classzone,
and as I told you, it was the first problem to address for 2.4.x before
anything else.

We have this problem in 2.2.x, and it was aggravated when I introduced a real
LRU into 2.3.1x.  With the real LRU, the least interesting page in the system
(the one we chose to swap out because it had the accessed bit clear) becomes
the one at the top of the LRU (no longer in a random position as in 2.2.x), so
to effectively swap one page out we had to throw away the whole working set of
clean cache. At least in 2.2.x, when we inserted a page into the swap cache to
free it, we had a chance not to be at exactly the other end of our round trip
around physical memory (making a comparison with 2.4.x, on average such a page
would be put in the middle of the LRU, not at the top).

When I was explaining this to you, you didn't even agree with it, and you
were telling me that the swap cache shouldn't be shrunk immediately as soon
as the swap-_out_ I/O is completed (even though that was the major problem
in the VM at that time).

If you fixed it, I'm very happy and I thank you.

If you didn't fix it (I'm not sure anymore, since you seem to talk
about things relative to other OSes as well), then ac22-class++ could
still run faster than the new VM.

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 03:42:08PM -0400, Byron Stanoszek wrote:
> Not to be a bother, but I would still like to see a value or at least someone
> tell me what calculations I would need to do with the values listed in
> /proc/meninfo in order to determine the number of pages actually in-use by
> processes (or in otherwords, the amount of memory I can allocate before I fill

You can do that with classzone via SysRq+M. There's a "map: " line that tells
you how many pages of page cache in the system are mapped into userspace
memory. This is done by accounting that information during page faults and
when dropping the page from the lru_cache (the information does not depend on
the aging heuristic).

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> On Sun, Sep 17, 2000 at 03:43:53PM -0300, Rik van Riel wrote:
> > The new VM, as integrated in -test9-pre1, does the same thing,
> 
> Thanks for giving me some credit for my ideas.

Your ideas?  This is as much your idea as it is mine (which
it isn't). This is ancient stuff other OSes have been doing
for years ...

I think it's wrong for any of us to claim ideas for stuff
that has already been done by other people. It's time to
put away the wheel-reinvention kit and LEARN FROM OTHER
SYSTEMS and even from *shudder* books ;)

It is time to stop playing around with ideas that might be
fun. It's time to discover which ideas work, which
don't, and why. And only when the problem is understood
should we make a kernel patch to fix it.

I admit I've done too much playing around without understanding
the issues involved over the last years as well, but it's time
to stop reinventing the (sometimes octagonal) wheel and learn
everything we can from history.

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> On Sun, Sep 17, 2000 at 03:42:08PM -0400, Byron Stanoszek wrote:
> > Not to be a bother, but I would still like to see a value or at least someone
> > tell me what calculations I would need to do with the values listed in
> > /proc/meninfo in order to determine the number of pages actually in-use by
> > processes (or in otherwords, the amount of memory I can allocate before I fill
> 
> You can do that with classzone with SYSRQ+M.

Sysrq-M displays the info for my VM too, but you'll have to
admit that it isn't as useful as vmstat ;)

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Byron Stanoszek

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:

> Some of the fields recently added to /proc/meminfo are very dependent on the
> internal of the memory management of the kernel, I don't think it's good idea
> to make them part of the user<->kernel API because there are alteratvive
> algorithms that won't generate those numbers and that can generate different
> ones. I understand the interest in collecting the new data for debugging
> the behaviour of the MM but a few line of perl are enough to do that.

Not to be a bother, but I would still like to see a value, or at least have
someone tell me what calculations I would need to do with the values listed in
/proc/meminfo, in order to determine the number of pages actually in use by
processes (or in other words, the amount of memory I can allocate before I
fill up the system RAM in its current state).

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 03:36:04PM -0300, Rik van Riel wrote:
> On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> > On Sun, Sep 17, 2000 at 02:33:48PM -0300, Rik van Riel wrote:
> > > If you have a better idea for memory management, I'd
> > > like to hear it ;)
> > 
> > You know 2.4.0-test1-ac22-class++ beat 2.4.0-test1-ac22-riel++
> > under a low-memory scenario, right?
> 
> You know that the -ac* VM code was very different from the
> VM code that has been submitted recently, right? ;)
> 
> The VM code in the -ac* kernels concentrated on things like
> deferred swapout, while the real issues like multi-queue VM
> weren't present yet.
> 
> The newer VM patches leave out the deferred swapout and other
> issues, and instead keep all the "low-level" code equivalent
> to the old shrink_mmap(), with the only difference being the
> way we /select/ which pages undergo the different procedures.
> 
> Please take a look at the new VM before saying anything about
> it...

Rik, could you tell me where I said anything about the new VM in test9?

I _very_ much hope the 2.4.0-test9-pre2 VM is much better than
2.4.0-test1-ac22-class++, otherwise there's no explaining why your code
has been integrated while my classzone work has only drawn complaints
from you and Linus.

I asked whether you knew that ac22-class++ was faster under low memory
because, from your previous question ending with a smiley, you seemed to
assume that nobody can do anything better than the current VM. If my
assumption was wrong I apologise; in that case, though, you should
explain the reason for the smiley.

If my assumption was right, I've given you proof that you shouldn't
assume that.

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 03:43:53PM -0300, Rik van Riel wrote:
> The new VM, as integrated in -test9-pre1, does the same thing,

Thanks for giving me some credit for my ideas.

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 05:37:13PM -0300, Rik van Riel wrote:
> Sysrq-M displays the info for my VM too, but you'll have to
> admit that it isn't as useful as vmstat ;)

I agree about that info.  That information isn't really relevant to the
internals of the memory-balancing algorithm; it concerns the state of the
page cache, and it can tell you how much RAM the system really needs in
order to perform well.

Having only a few MB of cache "unmapped" is one issue; having to
pagein/pageout all the time because a big cache had to be paged in and out
is a completely different thing. So I think it would be useful to export
to userspace the number of page-cache pages that shouldn't be considered
"free" (in your case, that's the number of Active pages).

To make the opposite example: in the new VM there's a field that you said
reports the net amount of allocations we get per second, averaged over
one minute. That looks more like an internal "memory" of the algorithm
than something useful for an administrator or developer to know. It's
certainly fine to export it via /proc/meminfo as you did, so we can
monitor it easily in the background, and a VM expert can examine it with
suitable scripts if something looks strange. But I'd prefer not to make
generic userspace programs like vmstat depend on it (tomorrow you may
choose to replace it with something more advanced, or just to average
over two minutes instead of one, or whatever else).

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> On Sun, Sep 17, 2000 at 02:33:48PM -0300, Rik van Riel wrote:
> > If you have a better idea for memory management, I'd
> > like to hear it ;)
> 
> You know 2.4.0-test1-ac22-class++ beat 2.4.0-test1-ac22-riel++
> under a low-memory scenario, right?

Btw, do you know /why/ this was the case?

I have a strong hunch this was because you moved some swap-out
pages out of the way of the rest of the pages on the LRU list.

The new VM, as integrated in -test9-pre1, does the same thing,
only for a lot more cases than your classzone patch did,
achieving the same effect, but more strongly and in more
situations.

Add to that page aging, and the drop-behind code that makes it
possible to do streaming IO without putting memory pressure on
the working set ...

(well, not in all cases ... drop_behind() and read-ahead don't
work for mmap() yet)

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> On Sun, Sep 17, 2000 at 02:33:48PM -0300, Rik van Riel wrote:
> > If you have a better idea for memory management, I'd
> > like to hear it ;)
> 
> You know 2.4.0-test1-ac22-class++ beat 2.4.0-test1-ac22-riel++
> under a low-memory scenario, right?

You know that the -ac* VM code was very different from the
VM code that has been submitted recently, right? ;)

The VM code in the -ac* kernels concentrated on things like
deferred swapout, while the real issues like multi-queue VM
weren't present yet.

The newer VM patches leave out the deferred swapout and other
issues, and instead keep all the "low-level" code equivalent
to the old shrink_mmap(), with the only difference being the
way we /select/ which pages undergo the different procedures.

Please take a look at the new VM before saying anything about
it...

> The only thing that could be a problem for an alternative VM
> is user<->kernel API differences related to the very internals
> of the memory management, so if possible I'd like that to be
> avoided.

Sure, let's get rid of /proc/meminfo ;)

But seriously, if /proc/meminfo isn't there to give information
about the internal memory use of the system, why do we have
it? I don't see /proc/meminfo doing anything other than that...

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 02:33:48PM -0300, Rik van Riel wrote:
> If you have a better idea for memory management, I'd
> like to hear it ;)

You know 2.4.0-test1-ac22-class++ beat 2.4.0-test1-ac22-riel++ under a
low-memory scenario, right?

The only thing that could be a problem for an alternative VM is
user<->kernel API differences related to the very internals of the memory
management, so if possible I'd like that to be avoided.

Andrea



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
> On Sun, Sep 17, 2000 at 12:11:40AM -0400, Albert D. Cahalan wrote:
> > Umm, OK. I suppose that BSD prints this stuff? If so,
> > somebody please send me an output sample and explanation
> > so that I know where to put this stuff.
> 
> Some of the fields recently added to /proc/meminfo are very
> dependent on the internals of the kernel's memory management;
> I don't think it's a good idea to make them part of the
> user<->kernel API, because there are alternative algorithms
> that won't generate those numbers and may generate different
> ones.

If you have a better idea for memory management, I'd
like to hear it ;)

cheers,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 12:11:40AM -0400, Albert D. Cahalan wrote:
> Umm, OK. I suppose that BSD prints this stuff? If so,
> somebody please send me an output sample and explanation
> so that I know where to put this stuff.

Some of the fields recently added to /proc/meminfo are very dependent on the
internals of the kernel's memory management; I don't think it's a good idea
to make them part of the user<->kernel API, because there are alternative
algorithms that won't generate those numbers and may generate different
ones. I understand the interest in collecting the new data for debugging
the behaviour of the MM, but a few lines of Perl are enough to do that.

Andrea
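[Editor's note: the "few lines of Perl" Andrea mentions would just scrape /proc/meminfo. A rough equivalent, sketched here in Python rather than Perl; the field names and sample values come from meminfo dumps quoted later in this thread, and the helper name is made up:]

```python
# Parse "Field:  value kB" lines from /proc/meminfo into a dict of kB counts.
def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, sep, rest = line.partition(':')
        fields = rest.split()
        if sep and fields and fields[0].isdigit():
            info[key.strip()] = int(fields[0])
    return info

# Sample taken from a /proc/meminfo dump quoted elsewhere in this thread.
sample = """\
MemTotal:   126516 kB
MemFree:     27792 kB
Buffers:     16640 kB
Cached:      35196 kB
Active:       3128 kB
Inact_dirty: 35704 kB
Inact_clean: 13004 kB
"""

info = parse_meminfo(sample)
# One derived number Byron is after: cache pages not in the active set.
print(info['Inact_dirty'] + info['Inact_clean'])  # 48708
```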



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Rik van Riel

On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
 On Sun, Sep 17, 2000 at 03:43:53PM -0300, Rik van Riel wrote:
  The new VM, as integrated in -test9-pre1, does the same thing,
 
 Thanks for giving me some credit for my ideas.

Your ideas?  This is as much your idea as it is mine (which
it isn't). This is ancient stuff other OSes have been doing
for years ...

I think it's wrong for any of us to claim credit for ideas
that other people have already implemented. It's time to
put away the wheel-reinvention kit and LEARN FROM OTHER
SYSTEMS and even from *shudder* books ;)

It is time to stop playing around with ideas that might be
fun. It's time to discover which ideas work, which don't,
and why. And only when the problem is understood should we
make a kernel patch to fix it.

I admit I've done too much playing around without understanding
the issues involved over the last few years as well, but it's
time to stop reinventing the (sometimes octangular) wheel and
learn everything from history that we can.

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Very aggressive swapping after 2 hours rest

2000-09-17 Thread Andrea Arcangeli

On Sun, Sep 17, 2000 at 03:42:08PM -0400, Byron Stanoszek wrote:
> Not to be a bother, but I would still like to see a value or at least someone
> tell me what calculations I would need to do with the values listed in
> /proc/meminfo in order to determine the number of pages actually in use by
> processes (or in other words, the amount of memory I can allocate before I fill

You can do that with classzone with SYSRQ+M. There's a "map: " line that
tells you how many pages of page cache in the system are mapped into
userspace memory. This is accounted at page-fault time and when the page
is dropped from the lru_cache (the information does not depend on the
aging heuristic).

Andrea
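[Editor's note: a minimal sketch of the accounting Andrea describes, in Python pseudocode rather than kernel C: a counter bumped when a page fault maps a page-cache page into userspace, and dropped when the page is unmapped. The class and method names are invented for illustration; the real classzone code works on struct page, not dicts:]

```python
# Toy model of classzone's "map:" counter: page-cache pages mapped by userspace.
class PageCacheStats:
    def __init__(self):
        self.mapped = 0  # pages currently mapped into userspace memory

    def on_page_fault(self, page):
        # A major fault maps the page; account it once, independent of aging.
        if not page.get('mapped'):
            page['mapped'] = True
            self.mapped += 1

    def on_unmap(self, page):
        # The page is no longer mapped; it goes back to being plain cache.
        if page.get('mapped'):
            page['mapped'] = False
            self.mapped -= 1

stats = PageCacheStats()
a, b = {}, {}
stats.on_page_fault(a)
stats.on_page_fault(b)
stats.on_page_fault(a)   # a second fault on the same page counts only once
stats.on_unmap(b)
print(stats.mapped)      # 1
```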



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Albert D. Cahalan

Rik van Riel writes:

> It would be better to put that in a userspace tool like
> vmstat.
>
> Oh, and now we're talking about vmstat, I guess that
> program also needs support for displaying the number
> of active/inactive_dirty/inactive_clean pages ... ;)
>
> (any volunteers?)

Umm, OK. I suppose that BSD prints this stuff? If so,
somebody please send me an output sample and explanation
so that I know where to put this stuff.





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> 
> > I'd like to be able to use that extra 16mb for process memory more than I would
> > for cache. When most of my programs get loaded up, it totals to around 24mb on
> > average, and medium-to-low disk access is used. I like the way it is now, where
> > it won't start swapping unless a process starts eating up memory quickly, or a
> > process starts to do a lot of disk access.
> 
> I do agree that processes that start up and never get 'touched'
> again should definitely get swapped out, but only when system
> RAM is nearing the low point. At that point, the processes that
> haven't used their memory for the longest should get swapped.

This is one of the things I'm thinking about at the moment.
Doing something like this should make the VM run a bit more
smoothly at the moment the system has just started swapping,
and the "kick in" of swap wouldn't be as sudden as it is now.

> Also, one other thing I noticed (in the old VM, haven't noticed
> it on my 32mb machine yet) is that when processes get swapped
> out, doing a 'ps -aux' prints the SWAP values correctly as '0'.
> But doing a consecutive 'ps' shows these processes as '4'. Is
> there something new in recent kernels that getting process
> states actually has to access a page of the process's RAM? Just
> curious.

Shared glibc pages which are still in active use by other
processes aren't swapped out. As long as somebody is still
using this page actively we won't swap it out anyway, so
we might as well keep it mapped in the page table of every
process which references it ...

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> On Sat, 16 Sep 2000, Rik van Riel wrote:
> 
> > MemFree:  memory on the freelist, contains no data
> > Buffers:  buffer cache memory
> > Cached:   page cache memory
> > 
> > Active:   buffer or page cache memory which is in active
> >   use (that is, page->age > 0 or many references)
> > Inact_dirty:  buffer or cache pages with page->age == 0 that
> >   /might/ be freeable
> >   (page->buffers is set or page->count == 2 when
> >   we add the page ... while holding a reference)
> > Inact_clean:  page cache pages with page->age == 0 of which
> >   we are /certain/ that they are freeable, these
> >   are counted almost as free pages
> > Inact_target: the net amount of allocations we get per second,
> >   averaged over one minute
> 
> I think I understand what those numbers mean, now. :)

Cool ;)

> But, I guess I'm still looking for a calculation that tells me
> exactly how many free (non-in-use) pages that I can allocate
> before running out of memory.

>              total       used       free     shared    buffers     cached
> Mem:        126516      34728      91788          0        264       7836
> -/+ buffers/cache:      26628      99888
> Swap:        32124        964      31160
> 
> 
> Here, the value 26628+964 is closer to what the 'actual' amount
> of RAM usage really is by processes (minus shared mem, buffers,
> and cache). But I was unable to find that without the
> allocation. So, my question is, is it possible to add a line to
> /proc/meminfo that tells us this information?

It would be better to put that in a userspace tool like
vmstat.

Oh, and now we're talking about vmstat, I guess that
program also needs support for displaying the number
of active/inactive_dirty/inactive_clean pages ... ;)

(any volunteers?)

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> MemFree:  memory on the freelist, contains no data
> Buffers:  buffer cache memory
> Cached:   page cache memory
> 
> Active:   buffer or page cache memory which is in active
>   use (that is, page->age > 0 or many references)
> Inact_dirty:  buffer or cache pages with page->age == 0 that
>   /might/ be freeable
>   (page->buffers is set or page->count == 2 when
>   we add the page ... while holding a reference)
> Inact_clean:  page cache pages with page->age == 0 of which
>   we are /certain/ that they are freeable, these
>   are counted almost as free pages
> Inact_target: the net amount of allocations we get per second,
>   averaged over one minute

I think I understand what those numbers mean, now. :)

But, I guess I'm still looking for a calculation that tells me exactly how many
free (non-in-use) pages I can allocate before running out of memory. In
other words, how many KB of memory processes are actually taking up, versus
buffer/cache space.

For example, this doesn't tell me how much memory I can allocate before I
get to the point where swapping is inevitable:

root:~/local/benchmarks> cat /proc/meminfo; free
 [stuff omitted]
Buffers: 16648 kB
Cached:  35276 kB
Active:   2036 kB
Inact_dirty: 37264 kB
Inact_clean: 12624 kB
Inact_target:4 kB
             total       used       free     shared    buffers     cached
Mem:        126516      98852      27664          0      16648      35276
-/+ buffers/cache:      46928      79588
Swap:        32124          0      32124


So I take a guess and allocate that much memory.

root:~/local/benchmarks> ./memspeed 88
Memory read  88M:  784.57 MB/s
Memory fill  88M:  347.12 MB/s
8-byte fill  88M:  344.73 MB/s
Memory copy  44M:  163.47 MB/s
8-byte copy  44M:  231.42 MB/s
Memory cmp   44M:  100.46 MB/s (Test OK)
Mem search   88M:  254.24 MB/s

Overall Memory Performance:  343.56 MB/s
root:~/local/benchmarks> echo $[27664+16648+35276]
79588
root:~/local/benchmarks> cat /proc/meminfo; free
 [stuff omitted]
Buffers:   264 kB
Cached:   7836 kB
Active:   5080 kB
Inact_dirty:   956 kB
Inact_clean:  2064 kB
Inact_target:0 kB
             total       used       free     shared    buffers     cached
Mem:        126516      34728      91788          0        264       7836
-/+ buffers/cache:      26628      99888
Swap:        32124        964      31160


Here, the value 26628+964 is closer to the 'actual' amount of RAM used by
processes (minus shared mem, buffers, and cache). But I was unable to find
that number without doing the allocation. So, my question is: is it possible
to add a line to /proc/meminfo that tells us this information? Or am I going
against the whole grain of the VM management system?

  -Byron

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> It would be better to put that in a userspace tool like
> vmstat.

Or modify 'free', which is what I was going to do. How would I find the number
of pages actually in use from those variables? I've tried adding and
subtracting several and can't seem to get the 26 MB number from the first
/proc/meminfo output that I showed in my last email:

Buffers: 16648 kB
Cached:  35276 kB
Active:   2036 kB
Inact_dirty: 37264 kB
Inact_clean: 12624 kB
Inact_target:4 kB

=>  -/+ buffers/cache:  26628  99888

Thanks,
 Byron

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]

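[Editor's note: the number Byron is trying to recover is what free(1) prints on its "-/+ buffers/cache" line, and it follows from the first four /proc/meminfo fields. A quick sketch of that arithmetic; note that the 26628 figure matches the values from the *second* `free` snapshot quoted in this thread (MemFree 91788, Buffers 264, Cached 7836), not the first:]

```python
# free(1)'s "-/+ buffers/cache" line derived from /proc/meminfo values, in kB.
# Values are from the second `free` snapshot quoted in this thread.
mem_total, mem_free = 126516, 91788
buffers, cached = 264, 7836

used = mem_total - mem_free                  # what `free` reports as "used"
used_minus_bc = used - buffers - cached      # memory actually held by processes
free_plus_bc = mem_free + buffers + cached   # allocatable before swapping

print(used, used_minus_bc, free_plus_bc)  # 34728 26628 99888
```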



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

> Active pages increased to 18688kB, and we see some inact_clean.
> Is this normal as intended?

Yes. There are 2 things going on during that find.

1) find touches buffers all over the place, those
   buffers will be added to the active list
2) kflushd will try to keep the number of inactive
   dirty pages smaller than or equal to the number 
   of inactive clean and free pages

In this case it means that memory slowly fills with the
buffers find loads, which go onto the active list. At the
same time, the amount of free memory decreases and kflushd
balances the number of inactive_dirty pages to a sane
value...

>         total:     used:     free:   shared:  buffers:   cached:
> Mem:  129552384 101093376  28459008         0  17039360  36040704
> Swap:  32894976         0  32894976
> MemTotal:   126516 kB
> MemFree: 27792 kB
> MemShared:   0 kB
> Buffers: 16640 kB
> Cached:  35196 kB

> Active:   3128 kB
> Inact_dirty: 35704 kB
> Inact_clean: 13004 kB
> Inact_target:   20 kB

> Number of active pages decreased to 3 meg. Ok.  So, what value
> should I use to determine what actually is in-use by processes.
> Obviously, 'free' doesn't give the correct results anymore. :)

OK, let me explain something about these numbers.
(I'll write better documentation around Linux Kongress)

MemFree:  memory on the freelist, contains no data
Buffers:  buffer cache memory
Cached:   page cache memory

Active:   buffer or page cache memory which is in active
  use (that is, page->age > 0 or many references)
Inact_dirty:  buffer or cache pages with page->age == 0 that
  /might/ be freeable
  (page->buffers is set or page->count == 2 when
  we add the page ... while holding a reference)
Inact_clean:  page cache pages with page->age == 0 of which
  we are /certain/ that they are freeable, these
  are counted almost as free pages
Inact_target: the net amount of allocations we get per second,
  averaged over one minute

We try to keep the following goals:

1) nr_free_pages() + nr_inactive_clean_pages()  >  freepages.high
2) nr_free_pages() + nr_inactive_clean_pages() + nr_inactive_dirty_pages
   >  freepages.high + inactive_target
3) nr_inactive_dirty_pages < nr_free_pages() + nr_inactive_clean_pages()

Goals #1 and #2 are kept by kswapd, while kflushd mostly takes
care of goal #3.

These 3 goals together make sure that VM performance is smooth
under most circumstances.
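
[Editorial note: the three goals above can be restated compactly. This is an illustrative sketch with made-up numbers, not kernel code; the names mirror the 2.4 VM quantities, and the real checks live in the kswapd/kflushd paths.]

```python
# Sketch of the three VM balancing goals described above (illustrative only;
# names mirror the kernel quantities, all figures in pages and made up).

def vm_goals(free, inact_clean, inact_dirty, freepages_high, inactive_target):
    goal1 = free + inact_clean > freepages_high                    # kept by kswapd
    goal2 = (free + inact_clean + inact_dirty
             > freepages_high + inactive_target)                   # kept by kswapd
    goal3 = inact_dirty < free + inact_clean                       # kept by kflushd
    return goal1, goal2, goal3

print(vm_goals(free=7000, inact_clean=3200, inact_dirty=9000,
               freepages_high=1024, inactive_target=100))  # (True, True, True)
```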

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

> I'd like to be able to use that extra 16mb for process memory more than I would
> for cache. When most of my programs get loaded up, it totals to around 24mb on
> average, and medium-to-low disk access is used. I like the way it is now, where
> it won't start swapping unless a process starts eating up memory quickly, or a
> process starts to do a lot of disk access.

I do agree that processes that start up and never get 'touched' again should
definitely get swapped out, but only when system ram is nearing the low point.
At this point, processes who haven't used memory the longest should get
swapped. For example, I have 24 /sbin/mingetty processes that listen on that
32mb system. Most of the time, except when I'm _really_ busy, I only use 6. :-)

Also, one other thing I noticed (in the old VM, haven't noticed it on my 32mb
machine yet) is that when processes get swapped out, doing a 'ps -aux' prints
the SWAP values correctly as '0'. But doing a consecutive 'ps' shows these
processes as '4'. Is there something new in recent kernels that getting process
states actually has to access a page of the process's RAM? Just curious.

 -Byron





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> OTOH, maybe we want to do /some/ background swapping of
> sleeping tasks, to smooth out the VM a bit at the point
> where we start to run into the situation where we need
> to swap something out...

This is something like what the previous VM did, and it was extremely annoying
when running on a system with only 32mb of ram. What would happen is that
processes started getting swapped out at the 16mb-free mark. While this is fine
for idle processes, it also began to blatantly swap out processes that were in
use up to 3 minutes ago. Typing 'w' would take a few seconds to come back up
as 'bash' had to get swapped back in. Yet, all the while there was 16mb of
space still taken up by cached files.

I'd like to be able to use that extra 16mb for process memory more than I would
for cache. When most of my programs get loaded up, it totals to around 24mb on
average, and medium-to-low disk access is used. I like the way it is now, where
it won't start swapping unless a process starts eating up memory quickly, or a
process starts to do a lot of disk access.

 -Byron





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> On Sat, 16 Sep 2000, Rik van Riel wrote:
> > On Sat, 16 Sep 2000, Dietmar Kling wrote:
> > 
> > > i thought i add a report to the new VM in 2.4.0pre9

> > > When I tried to restart my work after 2 hours,
> > > the machine started swapping madly.
> > 
> > Does this swapping storm get less (or even go
> > away?) when you apply my small patch to test9-pre1?
> > 
> > http://www.surriel.com/patches/2.4.0-t9-vmpatch
> 
> I think I might have a similar problem with 2.4.0-t8-vmpatch2,

> The size of the buffers increased to 16mb as expected, but also
> the amount of memory 'in use' also increased by 16mb! Free
> shows:

> I'm trying test9 to see if that behaves any better, then I'll try
> 2.4.0-t9-vmpatch.
> 
> Have you encountered this buffer problem before, Rik?

It's not a problem per se. The VM patch is more aggressive
against the cache, and as such less likely to eat into
the RSS of processes. 

OTOH, maybe we want to do /some/ background swapping of
sleeping tasks, to smooth out the VM a bit at the point
where we start to run into the situation where we need
to swap something out...

regards,

Rik




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

> I think I might have a similar problem with 2.4.0-t8-vmpatch2, related to
> caching. Without the vmpatch, my standard system 'used' would be near 28mb
> actual in use, the rest cached or in buffers. When I tried vmpatch2, standard
> usage eventually got up to 44mb when using the same programs and processes,
> with 1600kb of buffers and about 78mb of cache (with 2 days of uptime).
> 
> Then I tried a: find / -name *.pdf
> 
> The size of the buffers increased to 16mb as expected, but also the amount of
> memory 'in use' also increased by 16mb! Free shows:
> 
>  total   used   free sharedbuffers cached
> Mem:126516 123312   3204  0  16496  46084
> -/+ buffers/cache:  60732  65784
> Swap:32124  0  32124
> 
> That 60732 figure used to be around 44000 before the 'find'.

Here's a follow-up, using 2.4.0-test9 (without 2.4.0-t9-vmpatch).

I rebooted my system into 2.4.0-test9 and started my usual number of processes.
The memory map looks like this:

root:~> cat /proc/meminfo
total:used:free:  shared: buffers:  cached:
Mem:  129552384 69799936 59752448 0  1654784 34791424
Swap: 32894976 0 32894976
MemTotal:   126516 kB
MemFree: 58352 kB
MemShared:   0 kB
Buffers:  1616 kB
Cached:  33976 kB
Active:  21668 kB
Inact_dirty: 13924 kB
Inact_clean: 0 kB
Inact_target:  392 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 58352 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
root:~> free
 total   used   free sharedbuffers cached
Mem:126516  68172  58344  0   1616  33976
-/+ buffers/cache:  32580  93936
Swap:32124  0  32124


After about 1 minute of doing nothing, I tried it again, and it showed:

root:~> cat /proc/meminfo
total:used:free:  shared: buffers:  cached:
Mem:  129552384 69636096 59916288 0  1654784 34820096
Swap: 32894976 0 32894976
MemTotal:   126516 kB
MemFree: 58512 kB
MemShared:   0 kB
Buffers:  1616 kB
Cached:  34004 kB
Active:  11704 kB
Inact_dirty: 23916 kB
Inact_clean: 0 kB
Inact_target:  184 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 58512 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
root:~> free
 total   used   free sharedbuffers cached
Mem:126516  68028  58488  0   1616  34004
-/+ buffers/cache:  32408  94108
Swap:32124  0  32124

# Active pages decreased, # inactive_dirty pages increased.


Then I did the 'find'. This is the state of memory after the 'find' ended:

root:~> cat /proc/meminfo; free
total:used:free:  shared: buffers:  cached:
Mem:  129552384 99676160 29876224 0 17022976 34881536
Swap: 32894976 0 32894976
MemTotal:   126516 kB
MemFree: 29176 kB
MemShared:   0 kB
Buffers: 16624 kB
Cached:  34064 kB
Active:  18688 kB
Inact_dirty: 18892 kB
Inact_clean: 13108 kB
Inact_target:  236 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 29176 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
 total   used   free sharedbuffers cached
Mem:126516  97352  29164  0  16624  34064
-/+ buffers/cache:  46664  79852
Swap:32124  0  32124

Active pages increased to 18688kB, and we see some inact_clean. Is this normal
as intended? So after writing this, I began to think that the 'free' command
should look at the number of 'Active' pages to determine how much memory is
actually 'in use' by system processes. Then just for fun, I looked at
/proc/meminfo again and saw this:

root:~> cat /proc/meminfo; free
total:used:free:  shared: buffers:  cached:
Mem:  129552384 101093376 28459008 0 17039360 36040704
Swap: 32894976 0 32894976
MemTotal:   126516 kB
MemFree: 27792 kB
MemShared:   0 kB
Buffers: 16640 kB
Cached:  35196 kB
Active:   3128 kB
Inact_dirty: 35704 kB
Inact_clean: 13004 kB
Inact_target:   20 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 27792 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
 total   used   free sharedbuffers cached
Mem:126516  98732  27784  0  16640  35196
-/+ buffers/cache:  46896  79620
Swap:32124  0  32124

Number of active pages decreased to 3 meg. Ok.  So, what value should I use to
determine what actually is in-use by processes.

[Fwd: Very aggressive swapping after 2 hours rest]

2000-09-16 Thread safemode





Trever Adams wrote:

> Rik van Riel wrote:
> > > I believe this is a tuning issue, so
> > > i do not complain :)
> >
> > Indeed. Now that the testing base for the VM patch
> > has increased so much, there are a few as-of-yet
> > untested workloads popping up that don't perform
> > very well.
> >
> > I'm gathering details about those workloads and
> > trying to fix them as best as possible.
> >
> > regards,
> >
> > Rik
> > --
> > "What you're running that piece of shit Gnome?!?!"
> >-- Miguel de Icaza, UKUUG 2000
>
> Actually, I have been having problems with the latest Netscapes (provided by
> RedHat) since the early 2.4.0test series.  It seems it goes into memory eating
> mode every so often.  Closing netscape and restarting it makes things fine.  I
> don't know if it is the kernel's fault or netscape's, but it is only netscape
> that I have such problems with.  I almost wonder if it is a networking change
> that causes it.  I remember such a bug in the late 2.1.x series.
>
> Trever
> --
> The finest family and value oriented products are at http://www.daysofyore.com/
> Tired of high costs and games with domain names? http://domains.daysofyore.com/

i must say, Netscape 4.75 has been great for me ... especially with this
2.4.0-test8-vm3  kernel. No swap after opening and closing netscape numerous times
and browsing flash sites etc.  the first time i've been in swap was 13 hours after
boot ...and that was after extensive hdd and ram tests along with compiling
freeamp and loading other mem hogging programs such as gtk xemacs21 and Q2 .
I'm very happy with this VM patch ...it seems quite stable.  although i wont know
if it's any better than the old one until 2-3 days uptime ...which is when the old
kernel used to hit 100MB of swap and X would crash ...  so ..  we shall see
soon.   BTW..  the test# kernels are quite stable as i've had test5 up for well
over 30 days at a time between power outages caused by lightning.I have
Bonnie++ scores that are kind of discouraging when it comes to file creations and
deletion.  here are the scores.

Version 1.00c   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine  MB K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
psuedomode  240  4504  94 14034  20  4665   9  4796  97 14921  17  31.0   0
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 30   104  97 + +++  1526  99   104  98 + +++   521  89
psuedomode,240,4504,94,14034,20,4665,9,4796,97,14921,17,31.0,0,30,104,97,+,+++,1526,99,104,98,+,+++,521,89

this is on a pii 300 with 128MB sdram on a hdd with a 6GB ext2fs  partition on a
7200 rpm UDMA 2 maxtor eide drive.   ..netscape was not loaded but 9 Eterms, X,
gaim, xamixer, sawmill, apache, proftpd, and various other little apps were
loaded.





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> On Sat, 16 Sep 2000, Dietmar Kling wrote:
> 
> > i thought i add a report to the new VM in 2.4.0pre9
> > 
> > My Machine has 256 MB of memory 
> > I left it for two hours ( several Netscapes -Instances,
> > Mail and xmms running _nothing in swap_ )
> > 
> > When I tried to restart my work after 2 hours,
> > the machine started swapping madly.
> 
> Does this swapping storm get less (or even go
> away?) when you apply my small patch to test9-pre1?
> 
> http://www.surriel.com/patches/2.4.0-t9-vmpatch

I think I might have a similar problem with 2.4.0-t8-vmpatch2, related to
caching. Without the vmpatch, my standard system 'used' would be near 28mb
actual in use, the rest cached or in buffers. When I tried vmpatch2, standard
usage eventually got up to 44mb when using the same programs and processes,
with 1600kb of buffers and about 78mb of cache (with 2 days of uptime).

Then I tried a: find / -name *.pdf

The size of the buffers increased to 16mb as expected, but also the amount of
memory 'in use' also increased by 16mb! Free shows:

 total   used   free sharedbuffers cached
Mem:126516 123312   3204  0  16496  46084
-/+ buffers/cache:  60732  65784
Swap:32124  0  32124

That 60732 figure used to be around 44000 before the 'find'.

I'm trying test9 to see if that behaves any better, then I'll try
2.4.0-t9-vmpatch.

Have you encountered this buffer problem before, Rik?





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Trever Adams

Rik van Riel wrote:
> > I believe this is a tuning issue, so
> > i do not complain :)
> 
> Indeed. Now that the testing base for the VM patch
> has increased so much, there are a few as-of-yet
> untested workloads popping up that don't perform
> very well.
> 
> I'm gathering details about those workloads and
> trying to fix them as best as possible.
> 
> regards,
> 
> Rik
> --
> "What you're running that piece of shit Gnome?!?!"
>-- Miguel de Icaza, UKUUG 2000

Actually, I have been having problems with the latest Netscapes (provided by
RedHat) since the early 2.4.0test series.  It seems it goes into memory eating
mode every so often.  Closing netscape and restarting it makes things fine.  I
don't know if it is the kernel's fault or netscape's, but it is only netscape
that I have such problems with.  I almost wonder if it is a networking change
that causes it.  I remember such a bug in the late 2.1.x series.

Trever
-- 
The finest family and value oriented products are at http://www.daysofyore.com/
Tired of high costs and games with domain names? http://domains.daysofyore.com/



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Dietmar Kling wrote:

> i thought i add a report to the new VM in 2.4.0pre9
> 
> My Machine has 256 MB of memory 
> I left it for two hours ( several Netscapes -Instances,
> Mail and xmms running _nothing in swap_ )
> 
> When I tried to restart my work after 2 hours,
> the machine started swapping madly.

Does this swapping storm get less (or even go
away?) when you apply my small patch to test9-pre1?

http://www.surriel.com/patches/2.4.0-t9-vmpatch

> I believe this is a tuning issue, so 
> i do not complain :)

Indeed. Now that the testing base for the VM patch
has increased so much, there are a few as-of-yet
untested workloads popping up that don't perform
very well.

I'm gathering details about those workloads and
trying to fix them as best as possible.

regards,

Rik




Very aggressive swapping after 2 hours rest

2000-09-16 Thread Dietmar Kling

hi,


i thought i add a report to the new VM in 2.4.0pre9

My Machine has 256 MB of memory 
I left it for two hours ( several Netscapes -Instances,
Mail and xmms running _nothing in swap_ )

When I tried to restart my work after 2 hours,
the machine started swapping madly. I couldn't
run the Netscapes Instances anymore without
2 -3 second silences in xmms.  I waited for
5 minutes in the hope when i activate all
Netscapes again it would stabilize slowly,
but it didn't. After killing all Netscapes, 
the Machine silenced. I am
now typing this on the again _stable_
machine.

I believe this is a tuning issue, so 
i do not complain :)


Regards
Dietmar



[Fwd: Very aggressive swapping after 2 hours rest]

2000-09-16 Thread safemode





Trever Adams wrote:

 Rik van Riel wrote:
   I believe this is a tuning issue, so
   i do not complain :)
 
  Indeed. Now that the testing base for the VM patch
  has increased so much, there are a few as-of-yet
  untested workloads popping up that don't perform
  very well.
 
  I'm gathering details about those workloads and
  trying to fix them as best as possible.
 
  regards,
 
  Rik
  --
  "What you're running that piece of shit Gnome?!?!"
 -- Miguel de Icaza, UKUUG 2000

 Actually, I have been having problems with the latest Netscapes (provided by
 RedHat) since the early 2.4.0test series.  It seems it goes into memory eating
 mode every so often.  Closing netscape and restarting it makes things fine.  I
 don't know if it is the kernel's fault or netscape's, but it is only netscape
 that I have such problems with.  I almost wonder if it is a networking change
 that causes it.  I remember such a bug in the late 2.1.x series.

 Trever
 --
 The finest family and value oriented products are at http://www.daysofyore.com/
 Tired of high costs and games with domain names? http://domains.daysofyore.com/
 -
 To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
 the body of a message to [EMAIL PROTECTED]
 Please read the FAQ at http://www.tux.org/lkml/

i must say, Netscape 4.75 has been great for me ... especially with this
2.4.0-test8-vm3  kernel. No swap after opening and closing netscape numerous times
and browsing flash sites etc.  the first time i've been in swap was 13 hours after
boot ...and that was after extensive hdd and ram tests along with compiling
freeamp and loading other mem hogging programs such as gtk xemacs21 and Q2 .
I'm very happy with this VM patch ...it seems quite stable.  although i wont know
if it's any better than the old one until 2-3 days uptime ...which is when the old
kernel used to hit 100MB of swap and X would crash ...  so ..  we shall see
soon.   BTW..  the test# kernels are quite stable as i've had test5 up for well
over 30 days at a time between power outages caused by lightning.I have
Bonnie++ scores that are kind of discouraging when it comes to file creations and
deletion.  here are the scores.

Version 1.00c   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine  MB K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
psuedomode  240  4504  94 14034  20  4665   9  4796  97 14921  17  31.0   0
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 30   104  97 + +++  1526  99   104  98 + +++   521  89
psuedomode,240,4504,94,14034,20,4665,9,4796,97,14921,17,31.0,0,30,104,97,+,+++,1526,99,104,98,+,+++,521,89

this is on a pii 300 with 128MB sdram on a hdd with a 6GB ext2fs  partition on a
7200 rpm UDMA 2 maxtor eide drive.   ..netscape was not loaded but 9 Eterms, X,
gaim, xamixer, sawmill, apache, proftpd, and various other little apps were
loaded.





Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

 I think I might have a similar problem with 2.4.0-t8-vmpatch2, related to
 caching. Without the vmpatch, my standard system 'used' would be near 28mb
 actual in use, the rest cached or in buffers. When I tried vmpatch2, standard
 usage eventually got up to 44mb when using the same programs and processes,
 with 1600kb of buffers and about 78mb of cache (with 2 days of uptime).
 
 Then I tried a: find / -name *.pdf
 
 The size of the buffers increased to 16mb as expected, but also the amount of
 memory 'in use' also increased by 16mb! Free shows:
 
  total   used   free sharedbuffers cached
 Mem:126516 123312   3204  0  16496  46084
 -/+ buffers/cache:  60732  65784
 Swap:32124  0  32124
 
 That 60732 figure used to be around 44000 before the 'find'.

Here's a follow-up, using 2.4.0-test9 (without 2.4.0-t9-vmpatch).

I rebooted my system into 2.4.0-test9 and started my usual number of processes.
The memory map looks like this:

root:~ cat /proc/meminfo
total:used:free:  shared: buffers:  cached:
Mem:  129552384 69799936 597524480  1654784 34791424
Swap: 328949760 32894976
MemTotal:   126516 kB
MemFree: 58352 kB
MemShared:   0 kB
Buffers:  1616 kB
Cached:  33976 kB
Active:  21668 kB
Inact_dirty: 13924 kB
Inact_clean: 0 kB
Inact_target:  392 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 58352 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
root:~ free
 total   used   free sharedbuffers cached
Mem:126516  68172  58344  0   1616  33976
-/+ buffers/cache:  32580  93936
Swap:32124  0  32124


After about 1 minute of doing nothing, I tried it again, and it showed:

root:~ cat /proc/meminfo
total:used:free:  shared: buffers:  cached:
Mem:  129552384 69636096 599162880  1654784 34820096
Swap: 328949760 32894976
MemTotal:   126516 kB
MemFree: 58512 kB
MemShared:   0 kB
Buffers:  1616 kB
Cached:  34004 kB
Active:  11704 kB
Inact_dirty: 23916 kB
Inact_clean: 0 kB
Inact_target:  184 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 58512 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
root:~ free
 total   used   free sharedbuffers cached
Mem:126516  68028  58488  0   1616  34004
-/+ buffers/cache:  32408  94108
Swap:32124  0  32124

# Active pages decreased, # inactive_dirty pages increased.


Then I did the 'find'. This is the state of memory after the 'find' ended:

root:~ cat /proc/meminfo; free
total:used:free:  shared: buffers:  cached:
Mem:  129552384 99676160 298762240 17022976 34881536
Swap: 328949760 32894976
MemTotal:   126516 kB
MemFree: 29176 kB
MemShared:   0 kB
Buffers: 16624 kB
Cached:  34064 kB
Active:  18688 kB
Inact_dirty: 18892 kB
Inact_clean: 13108 kB
Inact_target:  236 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 29176 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
 total   used   free sharedbuffers cached
Mem:126516  97352  29164  0  16624  34064
-/+ buffers/cache:  46664  79852
Swap:32124  0  32124

Active pages increased to 18688kB, and we see some inact_clean. Is this normal
as intended? So after writing this, I began to think that the 'free' command
should look at the number of 'Active' pages to determine how much memory is
actually 'in use' by system processes. Then just for fun, I looked at
/proc/meminfo again and saw this:

root:~ cat /proc/meminfo; free
total:used:free:  shared: buffers:  cached:
Mem:  129552384 101093376 284590080 17039360 36040704
Swap: 328949760 32894976
MemTotal:   126516 kB
MemFree: 27792 kB
MemShared:   0 kB
Buffers: 16640 kB
Cached:  35196 kB
Active:   3128 kB
Inact_dirty: 35704 kB
Inact_clean: 13004 kB
Inact_target:   20 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   126516 kB
LowFree: 27792 kB
SwapTotal:   32124 kB
SwapFree:32124 kB
 total   used   free sharedbuffers cached
Mem:126516  98732  27784  0  16640  35196
-/+ buffers/cache:  46896  79620
Swap:32124  0  32124

Number of active pages decreased to 3 meg. Ok.  So, what value should I use to
determine what actually is in-use by processes. 

Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
 On Sat, 16 Sep 2000, Rik van Riel wrote:
  On Sat, 16 Sep 2000, Dietmar Kling wrote:
  
   i thought i add a report to the new VM in 2.4.0pre9

   When I tried to restart my work after 2 hours,
   the machine started swapping madly.
  
  Does this swapping storm get less (or even go
  away?) when you apply my small patch to test9-pre1?
  
  http://www.surriel.com/patches/2.4.0-t9-vmpatch
 
 I think I might have a similar problem with 2.4.0-t8-vmpatch2,

 The size of the buffers increased to 16mb as expected, but also
 the amount of memory 'in use' also increased by 16mb! Free
 shows:

 I'm trying test9 to see if that behaves any better, then I'll try
 2.4.0-t9-vmpatch.
 
 Have you encountered this buffer problem before, Rik?

It's not a problem per se. The VM patch is more agressive
against the cache, and as such less likely to eat into
the RSS of processes. 

OTOH, maybe we want to do /some/ background swapping of
sleeping tasks, to smooth out the VM a bit at the point
where we start to run into the situation where we need
to swap something out...

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

 I'd like to be able to use that extra 16mb for process memory more than I would
 for cache. When most of my programs get loaded up, it totals to around 24mb on
 average, and medium-to-low disk access is used. I like the way it is now, where
 it won't start swapping unless a process starts eating up memory quickly, or a
 process starts to do a lot of disk access.

I do agree that processes that start up and never get 'touched' again should
definitely get swapped out, but only when system ram is nearing the low point.
At this point, processes who haven't used memory the longest should get
swapped. For example, I have 24 /sbin/mingetty processes that listen on that
32mb system. Most of the time, except when I'm _really_ busy, I only use 6. :-)

Also, one other thing I noticed (in the old VM, haven't noticed it on my 32mb
machine yet) is that when processes get swapped out, doing a 'ps -aux' prints
the SWAP values correctly as '0'. But doing a consecutive 'ps' shows these
processes as '4'. Is there something new in recent kernels that getting process
states actually has to access a page of the process's RAM? Just curious.

 -Byron

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:

 Active pages increased to 18688kB, and we see some inact_clean.
 Is this normal as intended?

Yes. There are 2 things going on during that find.

1) find touches buffers all over the place, those
   buffers will be added to the active list
2) kflushd will try to keep the number of inactive
   dirty pages smaller than or equal to the number 
   of inactive clean and free pages

In this case it means that memory is slowly filled
with the buffers find loads into memory, into the
active list. At the same time, the amount of free
memory decreases and kflushd balances the amount of
inactive_dirty pages to a sane value...

 total:used:free:  shared: buffers:  cached:
 Mem:  129552384 101093376 284590080 17039360 36040704
 Swap: 328949760 32894976
 MemTotal:   126516 kB
 MemFree: 27792 kB
 MemShared:   0 kB
 Buffers: 16640 kB
 Cached:  35196 kB

 Active:   3128 kB
 Inact_dirty: 35704 kB
 Inact_clean: 13004 kB
 Inact_target:   20 kB

 Number of active pages decreased to 3 meg. Ok.  So, what value
 should I use to determine what actually is in-use by processes.
 Obviously, 'free' doesn't give the correct results anymore. :)

OK, let me explain something about these numbers.
(I'll write better documentation around Linux Kongress)

MemFree:  memory on the freelist, contains no data
Buffers:  buffer cache memory
Cached:   page cache memory

Active:   buffer or page cache memory which is in active
  use (that is, page-age  0 or many references)
Inact_dirty:  buffer or cache pages with page-age == 0 that
  /might/ be freeable
  (page-buffers is set or page-count == 2 when
  we add the page ... while holding a reference)
Inact_clean:  page cache pages with page-age == 0 of which
  we are /certain/ that they are freeable, these
  are counted almost as free pages
Inact_target: the net amount of allocations we get per second,
  averaged over one minute

We try to keep the following goals:

1) nr_free_pages() + nr_inactive_clean_pages() > freepages.high
2) nr_free_pages() + nr_inactive_clean_pages() + nr_inactive_dirty_pages
   > freepages.high + inactive_target
3) nr_inactive_dirty_pages < nr_free_pages() + nr_inactive_clean_pages()

Goals #1 and #2 are kept by kswapd, while kflushd mostly takes
care of goal #3.

These 3 goals together make sure that VM performance is smooth
under most circumstances.

regards,

Rik
--
"What you're running that piece of shit Gnome?!?!"
   -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/   http://www.surriel.com/




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> It would be better to put that in a userspace tool like
> vmstat.

Or modify 'free', which is what I was going to do. How would I find the number
of actual pages-in-use from those variables? I've tried adding and subtracting
several and can't seem to get the 26mb number from the first output of
/proc/meminfo that I showed in my last email:

Buffers: 16648 kB
Cached:  35276 kB
Active:   2036 kB
Inact_dirty: 37264 kB
Inact_clean: 12624 kB
Inact_target:4 kB

=>  -/+ buffers/cache:  26628  99888

Thanks,
 Byron

-- 
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer  Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [EMAIL PROTECTED]




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Byron Stanoszek

On Sat, 16 Sep 2000, Rik van Riel wrote:

> MemFree:  memory on the freelist, contains no data
> Buffers:  buffer cache memory
> Cached:   page cache memory
> 
> Active:   buffer or page cache memory which is in active
>   use (that is, page->age > 0 or many references)
> Inact_dirty:  buffer or cache pages with page->age == 0 that
>   /might/ be freeable
>   (page->buffers is set or page->count == 2 when
>   we add the page ... while holding a reference)
> Inact_clean:  page cache pages with page->age == 0 of which
>   we are /certain/ that they are freeable, these
>   are counted almost as free pages
> Inact_target: the net amount of allocations we get per second,
>   averaged over one minute

I think I understand what those numbers mean, now. :)

But, I guess I'm still looking for a calculation that tells me exactly how many
free (non-in-use) pages that I can allocate before running out of memory. In
other words, how many KB of memory processes are actually taking up, versus
buffer/cache space.

For example, this doesn't tell me how much memory I can allocate before I
get to the point where swapping is inevitable:

root:~/local/benchmarks> cat /proc/meminfo; free
 [stuff omitted]
Buffers: 16648 kB
Cached:  35276 kB
Active:   2036 kB
Inact_dirty: 37264 kB
Inact_clean: 12624 kB
Inact_target:4 kB
         total    used    free  shared buffers  cached
Mem:    126516   98852   27664       0   16648   35276
-/+ buffers/cache:   46928   79588
Swap:    32124       0   32124


So I take a guess and allocate that much memory.

root:~/local/benchmarks> ./memspeed 88
Memory read  88M:  784.57 MB/s
Memory fill  88M:  347.12 MB/s
8-byte fill  88M:  344.73 MB/s
Memory copy  44M:  163.47 MB/s
8-byte copy  44M:  231.42 MB/s
Memory cmp   44M:  100.46 MB/s (Test OK)
Mem search   88M:  254.24 MB/s

Overall Memory Performance:  343.56 MB/s
root:~/local/benchmarks> echo $[27664+16648+35276]
79588
root:~/local/benchmarks> cat /proc/meminfo; free
 [stuff omitted]
Buffers:   264 kB
Cached:   7836 kB
Active:   5080 kB
Inact_dirty:   956 kB
Inact_clean:  2064 kB
Inact_target:0 kB
         total    used    free  shared buffers  cached
Mem:    126516   34728   91788       0     264    7836
-/+ buffers/cache:   26628   99888
Swap:    32124     964   31160


Here, the value 26628+964 is closer to what the 'actual' amount of RAM usage
really is by processes (minus shared mem, buffers, and cache). But I was unable
to find that without the allocation. So, my question is, is it possible to add
a line to /proc/meminfo that tells us this information? Or am I going against
the whole grain of the VM management system?

  -Byron




Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> On Sat, 16 Sep 2000, Rik van Riel wrote:
> 
> > MemFree:  memory on the freelist, contains no data
> > Buffers:  buffer cache memory
> > Cached:   page cache memory
> > 
> > Active:   buffer or page cache memory which is in active
> >   use (that is, page->age > 0 or many references)
> > Inact_dirty:  buffer or cache pages with page->age == 0 that
> >   /might/ be freeable
> >   (page->buffers is set or page->count == 2 when
> >   we add the page ... while holding a reference)
> > Inact_clean:  page cache pages with page->age == 0 of which
> >   we are /certain/ that they are freeable, these
> >   are counted almost as free pages
> > Inact_target: the net amount of allocations we get per second,
> >   averaged over one minute
> 
> I think I understand what those numbers mean, now. :)

Cool ;)

> But, I guess I'm still looking for a calculation that tells me
> exactly how many free (non-in-use) pages that I can allocate
> before running out of memory.

>          total    used    free  shared buffers  cached
> Mem:    126516   34728   91788       0     264    7836
> -/+ buffers/cache:   26628   99888
> Swap:    32124     964   31160
> 
> 
> Here, the value 26628+964 is closer to what the 'actual' amount
> of RAM usage really is by processes (minus shared mem, buffers,
> and cache). But I was unable to find that without the
> allocation. So, my question is, is it possible to add a line to
> /proc/meminfo that tells us this information?

It would be better to put that in a userspace tool like
vmstat.

Oh, and now we're talking about vmstat, I guess that
program also needs support for displaying the number
of active/inactive_dirty/inactive_clean pages ... ;)

(any volunteers?)

regards,

Rik



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Rik van Riel

On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> On Sat, 16 Sep 2000, Byron Stanoszek wrote:
> 
> > I'd like to be able to use that extra 16mb for process memory more than I would
> > for cache. When most of my programs get loaded up, it totals to around 24mb on
> > average, and medium-to-low disk access is used. I like the way it is now, where
> > it won't start swapping unless a process starts eating up memory quickly, or a
> > process starts to do a lot of disk access.
> 
> I do agree that processes that start up and never get 'touched'
> again should definitely get swapped out, but only when system
> ram is nearing the low point. At this point, processes who
> haven't used memory the longest should get swapped.

This is one of the things I'm thinking about at the moment.
Doing something like this should make the VM run a bit more
smooth at the moment the system would have just started
swapping, and the "kick in" of swap wouldn't be as sudden
as it is now.

> Also, one other thing I noticed (in the old VM, haven't noticed
> it on my 32mb machine yet) is that when processes get swapped
> out, doing a 'ps -aux' prints the SWAP values correctly as '0'.
> But doing a consecutive 'ps' shows these processes as '4'. Is
> there something new in recent kernels that getting process
> states actually has to access a page of the process's RAM? Just
> curious.

Shared glibc pages which are still in active use by other
processes aren't swapped out. As long as somebody is still
using this page actively we won't swap it out anyway, so
we might as well keep it mapped in the page table of every
process which references it ...

regards,

Rik



Re: Very aggressive swapping after 2 hours rest

2000-09-16 Thread Albert D. Cahalan

Rik van Riel writes:

> It would be better to put that in a userspace tool like
> vmstat.
> 
> Oh, and now we're talking about vmstat, I guess that
> program also needs support for displaying the number
> of active/inactive_dirty/inactive_clean pages ... ;)
> 
> (any volunteers?)

Umm, OK. I suppose that BSD prints this stuff? If so,
somebody please send me an output sample and explanation
so that I know where to put this stuff.

