Re: * Re: Severe trashing in 2.4.4

2001-05-01 Thread Frank de Lange

On Tue, May 01, 2001 at 04:00:53PM -0700, David S. Miller wrote:
> 
> Frank, thanks for doing all the legwork to resolve the networking
> side of this problem.

No problem...

I just diff'd the 'old' and 'new' kernel trees. The one which produced the
ravenous skb_hungry kernels was for all intents and purposes identical to the
one which produced the (working, bug_free(tm)) kernel I'm currently running...

Must be the weather...

Cheers//Frank



Re: * Re: Severe trashing in 2.4.4

2001-05-01 Thread David S. Miller


Frank, thanks for doing all the legwork to resolve the networking
side of this problem.

Later,
David S. Miller
[EMAIL PROTECTED]



Re: * Re: Severe trashing in 2.4.4

2001-05-01 Thread Chris Mason



On Tuesday, May 01, 2001 03:11:58 PM -0700 David <[EMAIL PROTECTED]>
wrote:

> Can't say for a definite fact that it was reiserfs but I can say for a
> definite fact that something fishy happens sometimes.
> 
> If I have a text file open, something.html comes to mind. If I edit it
> and save it in one rxvt and open it in another rxvt, my changes may not
> be there.  If I save it *again* or exit the editing process, I will see
> the changes in the second term.  No, I'm not accidentally forgetting to
> save it; I know for a fact that I saved it: the editor in the first
> terminal shows the buffer as unmodified (i.e. saved) and containing the
> changes, while the second term shows the previous data.
> 
> Somewhere something is stuck in cache and what's on disk isn't what's in
> cache and a second process for some reason gets what is on disk and not
> what is in cache.
> 
> It happens infrequently but it -does- happen.

Does it happen with -o notail (i.e. with reiserfs tail packing disabled)?  Which editor?

-chris






Re: * Re: Severe trashing in 2.4.4

2001-05-01 Thread David

Can't say for a definite fact that it was reiserfs but I can say for a 
definite fact that something fishy happens sometimes.

If I have a text file open, something.html comes to mind. If I edit it
and save it in one rxvt and open it in another rxvt, my changes may not
be there.  If I save it *again* or exit the editing process, I will see
the changes in the second term.  No, I'm not accidentally forgetting to
save it; I know for a fact that I saved it: the editor in the first
terminal shows the buffer as unmodified (i.e. saved) and containing the
changes, while the second term shows the previous data.

Somewhere something is stuck in cache and what's on disk isn't what's in 
cache and a second process for some reason gets what is on disk and not 
what is in cache.

It happens infrequently but it -does- happen.

David

Frank de Lange wrote:

>Well,
>
>When a puzzled Alexey wondered whether the problems I was seeing with 2.4.4
>might be related to a failure to execute 'make clean' before compiling the
>kernel, I replied in the negative as I *always* clean up before compiling
>anything. Yet, for the sake of science and such I moved the kernel tree and
>started from scratch.
>
>The problems I was seeing are no more, 2.4.4 behaves like a good kernel should.
>
>Was it me? Was it reiserfs? Was it divine intervention? I will probably never
>find out, but for now this thread, and the accompanying scare, can requiescat
>in pace.
>
>Cheers//Frank
>





* Re: Severe trashing in 2.4.4

2001-05-01 Thread Frank de Lange

Well,

When a puzzled Alexey wondered whether the problems I was seeing with 2.4.4
might be related to a failure to execute 'make clean' before compiling the
kernel, I replied in the negative as I *always* clean up before compiling
anything. Yet, for the sake of science and such I moved the kernel tree and
started from scratch.

The problems I was seeing are no more, 2.4.4 behaves like a good kernel should.

Was it me? Was it reiserfs? Was it divine intervention? I will probably never
find out, but for now this thread, and the accompanying scare, can requiescat
in pace.

Cheers//Frank



Re: Severe trashing in 2.4.4

2001-04-30 Thread Mike Galbraith

On Sun, 29 Apr 2001, Frank de Lange wrote:

> On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> > Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> > in 1K--2K range. From your logs it looks like the thing never shrinks and
> > grows pretty fast...
>
> Same goes for buffer_head:
>
> buffer_head        44236  48520     96  1188  1213    1 :  252  126
>
> quite high I think. 2.4.3 shows this, after about the same time and activity:
>
> buffer_head          891   2880     96    72    72    1 :  252  126

hmm:  do_try_to_free_pages() doesn't call kmem_cache_reap() unless
there's no free page shortage.  If you've got a leak...

        if (free_shortage()) {
                shrink_dcache_memory(DEF_PRIORITY, gfp_mask);
                shrink_icache_memory(DEF_PRIORITY, gfp_mask);
        } else {
                /*
                 * Illogical, but true. At least for now.
                 *
                 * If we're _not_ under shortage any more, we
                 * reap the caches. Why? Because a noticeable
                 * part of the caches are the buffer-heads,
                 * which we'll want to keep if under shortage.
                 */
                kmem_cache_reap(gfp_mask);
        }

You might try calling it if free_shortage() + inactive_shortage() >
freepages.high or some such and then see what sticks out.  Or, for
troubleshooting the leak, just always call it.
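
[ Editorial sketch, untested: folded into the do_try_to_free_pages()
  branch above, the suggestion would look roughly like this. The
  threshold test is a guess; free_shortage(), inactive_shortage(),
  freepages.high and kmem_cache_reap() are the real 2.4 names. ]

        if (free_shortage()) {
                shrink_dcache_memory(DEF_PRIORITY, gfp_mask);
                shrink_icache_memory(DEF_PRIORITY, gfp_mask);
                /* also reap the slab caches while the shortage is
                 * severe, not only once it has gone away */
                if (free_shortage() + inactive_shortage() > freepages.high)
                        kmem_cache_reap(gfp_mask);
        } else {
                kmem_cache_reap(gfp_mask);
        }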

Printk says we fail to totally cure the shortage most of the time
once you start swapping.. likely the same for any sustained IO.

-Mike

(if you hoard IO until you can't avoid it, there're no cleanable pages
left in the laundry chute [bye-bye cache] except IO pages.. i digress;)




Re: Severe trashing in 2.4.4

2001-04-29 Thread Mike Galbraith

On Sun, 29 Apr 2001, Alexander Viro wrote:

> On Sun, 29 Apr 2001, Frank de Lange wrote:
>
> > On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> > > What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> > > has a bug in prune_icache() that makes it underestimate the amount of
> > > freeable inodes.
> >
> > Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
> > allocate skbuff_head_cache entries, not how to free them. Here's the last
> > /proc/slabinfo entry before I sysRQ'd the box:
>
> > skbuff_head_cache 341136 341136    160 14214 14214    1 :  252  126
> > size-2048          66338  66338   2048 33169 33169    1 :   60   30
>
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

If it turns out to be difficult to track down, holler and I'll expedite
updating my IKD tree to 2.4.4.

-Mike  (memleak maintenance weenie)




Re: Severe trashing in 2.4.4

2001-04-29 Thread David S. Miller


Frank de Lange writes:
 > Hm, 'twould be nice to know WHAT to look for (if only for educational
 > purposes), but ok:

We're looking to see if queue collapsing is occurring on
receive.
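
[ Editorial note: the counters in question live on the TcpExt lines of
  /proc/net/netstat; the receive-queue prune statistics (PruneCalled,
  RcvPruned, OfoPruned) are the ones to compare between samples --
  exactly which field moves is what this test is meant to show. A
  minimal reader, illustrative only: ]

        /* dump the TcpExt header+value lines of /proc/net/netstat */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                char line[4096];
                FILE *f = fopen("/proc/net/netstat", "r");

                if (!f) {
                        perror("/proc/net/netstat");
                        return 1;
                }
                while (fgets(line, sizeof(line), f))
                        if (strncmp(line, "TcpExt:", 7) == 0)
                                fputs(line, stdout);
                fclose(f);
                return 0;
        }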

Later,
David S. Miller
[EMAIL PROTECTED]



Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

On Sun, Apr 29, 2001 at 04:45:00PM -0700, David S. Miller wrote:
> 
> Frank de Lange writes:
>  > What do you want me to check for? /proc/net/netstat is a rather busy place...
> 
> Just show us the contents after you reproduce the problem.
> We just want to see if a certain event is being triggered.

Hm, 'twould be nice to know WHAT to look for (if only for educational
purposes), but ok:

 http://www.unternet.org/~frank/projects/linux2404/2404-meminfo/

it contains an extra set of files, named p_n_netstat.*. Same as before, the
.diff contains one-second interval diffs.

Cheers//Frank



Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

On Mon, Apr 30, 2001 at 12:06:52AM +0200, Manfred Spraul wrote:
> You could enable STATS in mm/slab.c, then the number of alloc and free
> calls would be printed in /proc/slabinfo.
> 
> > Yeah, those as well. I kinda guessed they were related...
> 
> Could you check /proc/sys/net/core/hot_list_length and skb_head_pool
> (not available in /proc, use gdb --core /proc/kcore)? I doubt that this
> causes your problems, but the skb_head code uses a special per-cpu
> linked list for even faster allocations.
> 
> Which network card do you use? Perhaps a bug in the zero-copy code of
> the driver?

I'll give it a go once I reboot into 2.4.4 again (now in 2.4.3 to get some
'work' done). Using the dreaded ne2k cards (two of them), which have caused me
more than one headache already...

I'll have a look at the driver for these cards.

Cheers//Frank




Re: Severe trashing in 2.4.4

2001-04-29 Thread Manfred Spraul

> On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> > Hmm... I'd say that you also have a leak in kmalloc()'ed stuff -
> > something in 1K--2K range. From your logs it looks like the
> > thing never shrinks and grows pretty fast...

You could enable STATS in mm/slab.c, then the number of alloc and free
calls would be printed in /proc/slabinfo.
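
[ Editorial note: in the 2.4 tree this is a one-line change near the
  top of mm/slab.c -- roughly the following; flip STATS to 1 and
  rebuild: ]

        #define DEBUG           0
        #define STATS           1       /* was 0: adds per-cache alloc/free
                                         * counters to /proc/slabinfo */
        #define FORCED_DEBUG    0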

> Yeah, those as well. I kinda guessed they were related...

Could you check /proc/sys/net/core/hot_list_length and skb_head_pool
(not available in /proc, use gdb --core /proc/kcore)? I doubt that this
causes your problems, but the skb_head code uses a special per-cpu
linked list for even faster allocations.
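
[ Editorial sketch of the per-cpu hot list Manfred describes --
  illustrative shape only, not the literal net/core/skbuff.c code,
  and the helper names are assumptions. The point: an skb head parked
  on the hot list has been freed by the network stack but still
  counts as an allocated object in /proc/slabinfo. ]

        static struct sk_buff *hot_list[NR_CPUS];  /* linked via skb->next */
        static int hot_len[NR_CPUS];
        extern int hot_list_length;   /* /proc/sys/net/core/hot_list_length */

        static struct sk_buff *skb_head_from_pool(int cpu)
        {
                struct sk_buff *skb = hot_list[cpu];

                if (skb) {                      /* fast path: reuse */
                        hot_list[cpu] = skb->next;
                        hot_len[cpu]--;
                        return skb;
                }
                return kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
        }

        static void skb_head_to_pool(int cpu, struct sk_buff *skb)
        {
                if (hot_len[cpu] < hot_list_length) {   /* park it */
                        skb->next = hot_list[cpu];
                        hot_list[cpu] = skb;
                        hot_len[cpu]++;
                        return;
                }
                kmem_cache_free(skbuff_head_cache, skb);
        }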

Which network card do you use? Perhaps a bug in the zero-copy code of
the driver?

--
Manfred




Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

Same goes for buffer_head:

buffer_head        44236  48520     96  1188  1213    1 :  252  126

quite high I think. 2.4.3 shows this, after about the same time and activity:

buffer_head          891   2880     96    72    72    1 :  252  126

Cheers//Frank




Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

Yeah, those as well. I kinda guessed they were related...

Cheers//Frank



Re: Severe trashing in 2.4.4

2001-04-29 Thread Alexander Viro



On Sun, 29 Apr 2001, Frank de Lange wrote:

> On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> > What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> > has a bug in prune_icache() that makes it underestimate the amount of
> > freeable inodes.
> 
> Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
> allocate skbuff_head_cache entries, not how to free them. Here's the last
> /proc/slabinfo entry before I sysRQ'd the box:
 
> skbuff_head_cache 341136 341136    160 14214 14214    1 :  252  126
> size-2048          66338  66338   2048 33169 33169    1 :   60   30

Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
in 1K--2K range. From your logs it looks like the thing never shrinks and
grows pretty fast...




Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> has a bug in prune_icache() that makes it underestimate the amount of
> freeable inodes.

Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
allocate skbuff_head_cache entries, not how to free them. Here's the last
/proc/slabinfo entry before I sysRQ'd the box:

slabinfo - version: 1.1 (SMP)
kmem_cache            68     68    232     4     4    1 :  252  126
nfs_read_data         10     10    384     1     1    1 :  124   62
nfs_write_data        10     10    384     1     1    1 :  124   62
nfs_page              40     40     96     1     1    1 :  252  126
urb_priv               1    113     32     1     1    1 :  252  126
uhci_desc           1074   1239     64    21    21    1 :  252  126
ip_mrt_cache           0      0     96     0     0    1 :  252  126
tcp_tw_bucket          0      0     96     0     0    1 :  252  126
tcp_bind_bucket       16    226     32     2     2    1 :  252  126
tcp_open_request       0      0     64     0     0    1 :  252  126
inet_peer_cache        1     59     64     1     1    1 :  252  126
ip_fib_hash           20    226     32     2     2    1 :  252  126
ip_dst_cache          13     48    160     2     2    1 :  252  126
arp_cache              2     60    128     2     2    1 :  252  126
blkdev_requests     1024   1040     96    26    26    1 :  252  126
dnotify cache          0      0     20     0     0    1 :  252  126
file lock cache      126    126     92     3     3    1 :  252  126
fasync cache           3    202     16     1     1    1 :  252  126
uid_cache              5    226     32     2     2    1 :  252  126
skbuff_head_cache 341136 341136    160 14214 14214    1 :  252  126
sock                 201    207    832    23    23    2 :  124   62
inode_cache          741   1640    480   205   205    1 :  124   62
bdev_cache             7     59     64     1     1    1 :  252  126
sigqueue              58     58    132     2     2    1 :  252  126
dentry_cache         790   3240    128   108   108    1 :  252  126
dquot                  0      0     96     0     0    1 :  252  126
filp                1825   1880     96    47    47    1 :  252  126
names_cache            9      9   4096     9     9    1 :   60   30
buffer_head          891   2880     96    72    72    1 :  252  126
mm_struct            180    180    128     6     6    1 :  252  126
vm_area_struct      4033   4248     64    72    72    1 :  252  126
fs_cache             207    236     64     4     4    1 :  252  126
files_cache          132    135    416    15    15    1 :  124   62
signal_act           108    111   1312    37    37    1 :   60   30
size-131072(DMA)       0      0 131072     0     0   32 :    0    0
size-131072            0      0 131072     0     0   32 :    0    0
size-65536(DMA)        0      0  65536     0     0   16 :    0    0
size-65536             0      0  65536     0     0   16 :    0    0
size-32768(DMA)        0      0  32768     0     0    8 :    0    0
size-32768             3      3  32768     3     3    8 :    0    0
size-16384(DMA)        0      0  16384     0     0    4 :    0    0
size-16384             8      9  16384     8     9    4 :    0    0
size-8192(DMA)         0      0   8192     0     0    2 :    0    0
size-8192              1      1   8192     1     1    2 :    0    0
size-4096(DMA)         0      0   4096     0     0    1 :   60   30
size-4096             73     73   4096    73    73    1 :   60   30
size-2048(DMA)         0      0   2048     0     0    1 :   60   30
size-2048          66338  66338   2048 33169 33169    1 :   60   30
size-1024(DMA)         0      0   1024     0     0    1 :  124   62
size-1024           6372   6372   1024  1593  1593    1 :  124   62
size-512(DMA)          0      0    512     0     0    1 :  124   62
size-512           22776  22776    512  2847  2847    1 :  124   62
size-256(DMA)          0      0    256     0     0    1 :  252  126
size-256           75300  75300    256  5020  5020    1 :  252  126
size-128(DMA)          0      0    128     0     0    1 :  252  126
size-128            1309   1410    128    47    47    1 :  252  126
size-64(DMA)           0      0     64     0     0    1 :  252  126
size-64             4838   4838     64    82    82    1 :  252  126
size-32(DMA)           0      0     32     0     0    1 :  252  126
size-32            33900  33900     32   300   300    1 :  252  126
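
[ Editorial sketch: runaway growth like the skbuff_head_cache row
  above is easy to watch with a trivial userspace sampler; the
  program and the default cache name are illustrative, not part of
  the original report. ]

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
                const char *name = argc > 1 ? argv[1] : "skbuff_head_cache";
                char line[512], cname[64];
                unsigned long active, total;

                for (;;) {
                        FILE *f = fopen("/proc/slabinfo", "r");

                        if (!f) {
                                perror("/proc/slabinfo");
                                return 1;
                        }
                        /* print the active/total object counts for one cache */
                        while (fgets(line, sizeof(line), f))
                                if (sscanf(line, "%63s %lu %lu",
                                           cname, &active, &total) == 3 &&
                                    strcmp(cname, name) == 0)
                                        printf("%s: %lu/%lu objects\n",
                                               cname, active, total);
                        fclose(f);
                        sleep(1);
                }
        }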

From the same moment, the contents of /proc/meminfo:

        total:    used:    free:  shared: buffers:  cached:
Mem:  262049792 260423680  1626112        0  1351680  6348800
Swap: 511926272 39727104 472199168
MemTotal:   255908 kB
MemFree:  1588 kB
MemShared:   0 kB
Buffers:  1320 kB
Cached:   6200 kB
Active:   2648 kB
Inact_dirty:  1260 kB
Inact_clean:  3604 kB
Inact_target: 3960 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255908 kB
LowFree:  1588 kB
SwapTotal:  499928 kB

Re: Severe trashing in 2.4.4

2001-04-29 Thread Alexander Viro



On Sun, 29 Apr 2001, Frank de Lange wrote:

> Running 'nget v0.7' (a command line nntp 'grabber') on 2.4.4 leads to massive
> amounts of memory disappearing into thin air. I'm currently running a single
> instance of this app, and I'm seeing the memory drain away. The system has 256
> MB of physical memory, and access to 500 MB of swap. Swap is not really being
> used now, but it soon will be. Have a look at the current /proc/meminfo:
> 
> [frank@behemoth mozilla]$ cat /proc/meminfo 
>         total:    used:    free:  shared: buffers:  cached:
> Mem:  262049792 259854336  2195456        0  1773568 31211520
> Swap: 511926272 4096 511922176
> MemTotal:   255908 kB
> MemFree:  2144 kB
> MemShared:   0 kB
> Buffers:  1732 kB
> Cached:  30480 kB
> Active:  26944 kB
> Inact_dirty:  2384 kB
> Inact_clean:  2884 kB
> Inact_target:  984 kB
> HighTotal:   0 kB
> HighFree:0 kB
> LowTotal:   255908 kB
> LowFree:  2144 kB
> SwapTotal:  499928 kB
> SwapFree:   499924 kB

What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
has a bug in prune_icache() that makes it underestimate the amount of
freeable inodes.




Re: Severe trashing in 2.4.4

2001-04-29 Thread Frank de Lange

OK,

I seem to have found the culprit, although I'm still in the dark as to the
'why' and 'how'.

First, some info:

2.4.4 with Maciej's IO-APIC patch
Abit BP-6, dual Celeron 466@466
256MB RAM

So, 'yes, SMP...'

Running 'nget v0.7' (a command line nntp 'grabber') on 2.4.4 leads to massive
amounts of memory disappearing into thin air. I'm currently running a single
instance of this app, and I'm seeing the memory drain away. The system has 256
MB of physical memory, and access to 500 MB of swap. Swap is not really being
used now, but it soon will be. Have a look at the current /proc/meminfo:

[frank@behemoth mozilla]$ cat /proc/meminfo 
        total:    used:    free:  shared: buffers:  cached:
Mem:  262049792 259854336  2195456        0  1773568 31211520
Swap: 511926272 4096 511922176
MemTotal:   255908 kB
MemFree:  2144 kB
MemShared:   0 kB
Buffers:  1732 kB
Cached:  30480 kB
Active:  26944 kB
Inact_dirty:  2384 kB
Inact_clean:  2884 kB
Inact_target:  984 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255908 kB
LowFree:  2144 kB
SwapTotal:  499928 kB
SwapFree:   499924 kB

Also look at the top 10 memory users:

[frank@behemoth mp3]$ ps -xao rss,vsz,pid,command|sort -rn|head 
6388 55320  1310 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
3604  8972  1438 gnome-terminal -t [EMAIL PROTECTED]
3116  8356  1405 panel --sm-config-prefix /panel.d/default-ZTNCVS/ --sm-client-i
3084  5484  1401 sawfish --sm-client-id 11c0a801059849521860010240115 --
2940  8388  1696 gnome-terminal --tclass=Remote -x ssh -v ostrogoth.localnet
2748  7536  1432 mini_commander_applet --activate-goad-server mini-commander_app
2692  7656  1413 tasklist_applet --activate-goad-server tasklist_applet --goad-f
2536  7588  1411 deskguide_applet --activate-goad-server deskguide_applet --goad
2320  7388  1383 /usr/bin/gnome-session
2232  7660  1421 multiload_applet --activate-goad-server multiload_applet --goad

(the rest is mostly small stuff, < 1 MB, total of 89 processes)

 [ swap is being hit at this moment... ]

Where did my memory go? 
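
 [ Rough accounting from the numbers above: 255908 kB total minus
   2144 kB free, 1732 kB buffers and 30480 kB cache leaves ~221552 kB
   in use. The ten largest processes sum to ~31660 kB RSS, and the
   other ~79 processes are below 1 MB each, so well over 100 MB is
   invisible to ps -- which points at kernel-side allocations. ]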

A few minutes later, with the same process load (minimal), a look at
/proc/meminfo:

[frank@behemoth mozilla]$ cat /proc/meminfo 
        total:    used:    free:  shared: buffers:  cached:
Mem:  262049792 260108288  1941504        0  1380352 11689984
Swap: 511926272 34279424 477646848
MemTotal:   255908 kB
MemFree:  1896 kB
MemShared:   0 kB
Buffers:  1348 kB
Cached:  11416 kB
Active:   9164 kB
Inact_dirty:  1240 kB
Inact_clean:  2360 kB
Inact_target:  996 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255908 kB
LowFree:  1896 kB
SwapTotal:  499928 kB
SwapFree:   466452 kB

Already 34MB in swap...

Start xmms, and this is the result:

[frank@behemoth mozilla]$ cat /proc/meminfo 
        total:    used:    free:  shared: buffers:  cached:
Mem:  262049792 260411392  1638400        0  1380352 10063872
Swap: 511926272 38449152 473477120
MemTotal:   255908 kB
MemFree:  1600 kB
MemShared:   0 kB
Buffers:  1348 kB
Cached:   9828 kB
Active:   6400 kB
Inact_dirty:  1236 kB
Inact_clean:  3540 kB
Inact_target: 2128 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255908 kB
LowFree:  1600 kB
SwapTotal:  499928 kB
SwapFree:   462380 kB

(top 10 memory users)

[frank@behemoth mp3]$ ps -xao rss,vsz,pid,command|sort -rn|head
2340 56604  1310 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
1592  5484  1401 sawfish --sm-client-id 11c0a801059849521860010240115 --
1452 33784  1780 xmms
1436 33784  1785 xmms
1436 33784  1782 xmms
1436 33784  1781 xmms
1296  9000  1438 gnome-terminal -t [EMAIL PROTECTED]
1184  2936  1790 ps -xao rss,vsz,pid,command
1060  7656  1413 tasklist_applet --activate-goad-server tasklist_applet --goad-f
1008  8388  1696 gnome-terminal --tclass=Remote -x ssh -v ostrogoth.localnet

The memory is used somewhere, but nowhere I can find or pinpoint. Not in
buffers, not cached, not by processes I can see in ps or top or /proc. And it
does not come back either. Shooting down the nget process and xmms frees up
some swap, but the disappeared memory stays that way, as can be seen from this
(final) /proc/meminfo / ps combo:

[frank@behemoth mozilla]$ cat /proc/meminfo 
        total:    used:    free:  shared: buffers:  cached:
Mem:  262049792 260411392  1638400        0  1388544  8568832
Swap: 511926272 36360192 475566080
MemTotal:   255908 kB
MemFree:  1600 kB
MemShared:   0 kB
Buffers:  1356 kB
Cached:   8368 kB
Active:   6284 kB
Inact_dirty:  1236 kB
Inact_clean:  2204 kB
Inact_target:  632 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:   255908 kB
LowFree:  1600 kB
SwapTotal:  499928 kB
S