On Fri, Mar 23, 2001 at 10:13:55AM -0500, Jeff Garzik wrote:
> Personally I think the OOM killer itself is fine. I think there are
> problems elsewhere which are triggering the OOM killer when it should
> not be triggered, ie. a leak like Doug Ledford was reporting.
>
> I definitely see heavier page/dcache usage in 2.4 -- but that is to be
> expected due to 2.4 changes! So it is incredibly difficult to quantify
> if something is wrong, and if so, where...
>
> My own impressions of 2.4 are that it "feels faster" for my own uses and
> it's stable. The downsides I find are that heavy fs activity seems to
> imply increased swapping, which jibes with a guess that the page/dcache
> is exceptionally reluctant to release pages under memory pressure.
>
> </unquantified vague ramble>
Like I said earlier, I should stop theorizing and write the code. Here
is a teeny little patch that adds a bit of pressure to the inode and
dentry slab caches during an inactive shortage.
On the 512MB desktop, without the change the inode+dentry slabs typically
used up about 300MB after running my normal day-to-day workload for about
24 hours. With the patch, the inode+dentry slabs are using only 90MB.
As there is more memory available for the buffer and page caches, kswapd
seems to have less trouble keeping up with my typical workload.
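
For reference, the "pressure" is just the priority argument passed to the
shrinkers. In stock 2.4 the icache shrinker boils down to roughly this
(a simplified sketch; the __GFP_IO deadlock-avoidance check at the top is
left out):

void shrink_icache_memory(int priority, int gfp_mask)
{
	int count = 0;

	/*
	 * A smaller priority value means more pruning per pass:
	 * DEF_PRIORITY (6) asks for ~1/6 of the unused inodes,
	 * priority 5 asks for ~1/5, and so on.
	 */
	if (priority)
		count = inodes_stat.nr_unused / priority;

	prune_icache(count);			/* free up to 'count' clean, unused inodes */
	kmem_cache_shrink(inode_cachep);	/* give empty slab pages back to the allocator */
}

The dcache side does the same with dentry_stat.nr_unused and prune_dcache(),
so calling both with priority 5 during a free shortage leans on the caches
noticeably harder than the DEF_PRIORITY (6) calls they get otherwise.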
Btw, there definitely is a network receive buffer leak somewhere in
either the 3c905C driver path or higher up in the network layers (2.4.0
or 2.4.1). The normal receive path does not leak anything.
I only saw it for a couple of days while there was a failing switch
that must have been randomly corrupting packets. The switch got replaced
and the leakage disappeared, so I went back to a non-ikd kernel and
stopped looking for the problem.
Jan
=================
--- linux/fs/inode.c.orig	Thu Mar 22 13:20:55 2001
+++ linux/fs/inode.c	Thu Mar 22 14:00:10 2001
@@ -270,19 +270,6 @@
 	spin_unlock(&inode_lock);
 }
 
-/*
- * Called with the spinlock already held..
- */
-static void sync_all_inodes(void)
-{
-	struct super_block * sb = sb_entry(super_blocks.next);
-	for (; sb != sb_entry(&super_blocks); sb = sb_entry(sb->s_list.next)) {
-		if (!sb->s_dev)
-			continue;
-		sync_list(&sb->s_dirty);
-	}
-}
-
 /**
  *	write_inode_now - write an inode to disk
  *	@inode: inode to write to disk
@@ -507,8 +494,6 @@
 	struct inode * inode;
 
 	spin_lock(&inode_lock);
-	/* go simple and safe syncing everything before starting */
-	sync_all_inodes();
 
 	entry = inode_unused.prev;
 	while (entry != &inode_unused)
@@ -554,6 +539,9 @@
 
 	if (priority)
 		count = inodes_stat.nr_unused / priority;
+
+	if (priority < 6)
+		sync_inodes(0);
 
 	prune_icache(count);
 	kmem_cache_shrink(inode_cachep);
--- linux/mm/vmscan.c.orig	Thu Mar 22 14:00:41 2001
+++ linux/mm/vmscan.c	Thu Mar 22 14:35:26 2001
@@ -845,9 +845,11 @@
 	 * reclaim unused slab cache if memory is low.
 	 */
 	if (free_shortage()) {
+		shrink_dcache_memory(5, gfp_mask);
+		shrink_icache_memory(5, gfp_mask);
+	} else {
 		shrink_dcache_memory(DEF_PRIORITY, gfp_mask);
 		shrink_icache_memory(DEF_PRIORITY, gfp_mask);
-	} else {
 		/*
 		 * Illogical, but true. At least for now.
 		 *
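
For clarity, with both fs/inode.c hunks applied shrink_icache_memory() ends
up looking roughly like this (a sketch; the __GFP_IO deadlock-avoidance
check is again not shown):

void shrink_icache_memory(int priority, int gfp_mask)
{
	int count = 0;

	if (priority)
		count = inodes_stat.nr_unused / priority;

	/*
	 * Write back dirty inodes only when we are under real pressure
	 * (priority below DEF_PRIORITY), instead of syncing everything
	 * on every prune_icache() call the way sync_all_inodes() used to.
	 * prune_icache() can only free inodes that are clean and unused.
	 */
	if (priority < 6)
		sync_inodes(0);

	prune_icache(count);
	kmem_cache_shrink(inode_cachep);
}

The vmscan.c hunk then makes kswapd use priority 5 for both shrinkers
whenever free_shortage() reports a shortage, and falls back to the old
DEF_PRIORITY calls otherwise.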