Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
On Mon, 2005-03-14 at 14:41, Andrew Morton wrote:
> Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> >
> > On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> > > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > > >
> > > > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > > > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > So, why are these slab caches not getting purged/shrunk even
> > > > > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > > > > can I do to keep the machine healthy?
> > > > >
> > > > > Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not
> > > > > be in 2.6.8 though.)
> > > >
> > > > Yep. This helped shrink the slabs, but we end up eating up lots of
> > > > the lowmem in Buffers. Is there a way to shrink buffers?
> > >
> > > It would require some patchwork.  Why is it a problem?  That memory is
> > > reclaimable.
> >
> > Well, the machine pauses for 5-30 seconds for each vi, cscope, write(), etc.
>
> Why?

Dunno. Trying to figure out what's happening here. Lowmem pressure was
at the top of our list - but nothing to prove it yet.

> > > How'd you get 1.8gig of lowmem?
> >
> > 2:2 split
>
> Does a normal kernel exhibit the pauses?

We haven't tried a 3:1 split on this machine for a while. This machine
starts to slow down over time. (It has been up for the last 70 days.)
We are trying to collect all the info and try everything possible to
understand the issues before we reboot.

Thanks,
Badari
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> >
> > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > > >
> > > > So, why are these slab caches not getting purged/shrunk even
> > > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > > can I do to keep the machine healthy?
> > >
> > > Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not be in
> > > 2.6.8 though.)
> >
> > Yep. This helped shrink the slabs, but we end up eating up lots of
> > the lowmem in Buffers. Is there a way to shrink buffers?
>
> It would require some patchwork.  Why is it a problem?  That memory is
> reclaimable.

Well, the machine pauses for 5-30 seconds for each vi, cscope, write(), etc.
There is 7.5 GB of highmem free, but only 6MB of lowmem. Just trying to
free as much "lowmem" as possible.

> > $ cat /proc/meminfo
> > MemTotal:     16377076 kB
> > MemFree:       7495824 kB
> > Buffers:       1081708 kB
> > Cached:        4162492 kB
> > SwapCached:          0 kB
> > Active:        3660756 kB
> > Inactive:      4473476 kB
> > HighTotal:    14548952 kB
> > HighFree:      7489600 kB
> > LowTotal:      1828124 kB
> > LowFree:          6224 kB
>
> How'd you get 1.8gig of lowmem?

2:2 split

- Badari
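The meminfo above shows the imbalance being described: free memory looks plentiful, but almost all of it is highmem, which slab caches and buffer_heads cannot use. A back-of-the-envelope check using the posted numbers (a sketch, not part of the original mail):

```shell
# LowFree vs HighFree from the /proc/meminfo quoted above (values in kB).
# Slabs, buffer_heads and other kernel allocations must come from lowmem,
# so a huge HighFree number does not help them at all.
lowfree=6224        # LowFree
highfree=7489600    # HighFree
echo "HighFree is $((highfree / lowfree))x LowFree"
```

With these numbers there is over a thousand times more free highmem than free lowmem, which is why the box can stall on kernel allocations while "free memory" looks huge.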
Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-03-14 at 14:11, Andrew Morton wrote:
> > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > >
> > > On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > > > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > So, why are these slab caches not getting purged/shrunk even
> > > > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > > > can I do to keep the machine healthy?
> > > >
> > > > Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not be in
> > > > 2.6.8 though.)
> > >
> > > Yep. This helped shrink the slabs, but we end up eating up lots of
> > > the lowmem in Buffers. Is there a way to shrink buffers?
> >
> > It would require some patchwork.  Why is it a problem?  That memory is
> > reclaimable.
>
> Well, the machine pauses for 5-30 seconds for each vi, cscope, write(), etc.

Why?

> > How'd you get 1.8gig of lowmem?
>
> 2:2 split

Does a normal kernel exhibit the pauses?
Re: inode cache, dentry cache, buffer heads usage
Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> > Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> > >
> > > So, why are these slab caches not getting purged/shrunk even
> > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > can I do to keep the machine healthy?
> >
> > Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not be in
> > 2.6.8 though.)
>
> Yep. This helped shrink the slabs, but we end up eating up lots of
> the lowmem in Buffers. Is there a way to shrink buffers?

It would require some patchwork.  Why is it a problem?  That memory is
reclaimable.

> $ cat /proc/meminfo
> MemTotal:     16377076 kB
> MemFree:       7495824 kB
> Buffers:       1081708 kB
> Cached:        4162492 kB
> SwapCached:          0 kB
> Active:        3660756 kB
> Inactive:      4473476 kB
> HighTotal:    14548952 kB
> HighFree:      7489600 kB
> LowTotal:      1828124 kB
> LowFree:          6224 kB

How'd you get 1.8gig of lowmem?
Re: inode cache, dentry cache, buffer heads usage
On Thu, 2005-03-10 at 17:47, Andrew Morton wrote:
> Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> >
> > So, why are these slab caches not getting purged/shrunk even
> > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > can I do to keep the machine healthy?
>
> Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not be in
> 2.6.8 though.)

Yep. This helped shrink the slabs, but we end up eating up lots of the
lowmem in Buffers. Is there a way to shrink buffers?

$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       7495824 kB
Buffers:       1081708 kB
Cached:        4162492 kB
SwapCached:          0 kB
Active:        3660756 kB
Inactive:      4473476 kB
HighTotal:    14548952 kB
HighFree:      7489600 kB
LowTotal:      1828124 kB
LowFree:          6224 kB
Re: inode cache, dentry cache, buffer heads usage
Badari Pulavarty <[EMAIL PROTECTED]> wrote:
> So, why are these slab caches not getting purged/shrunk even
> under memory pressure? (I have seen lowmem as low as 6MB.) What
> can I do to keep the machine healthy?

Tried increasing /proc/sys/vm/vfs_cache_pressure?  (That might not be in
2.6.8 though.)
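For reference, vfs_cache_pressure is a runtime tunable: the default is 100, and values above 100 make the VM reclaim dentries and inodes more aggressively relative to pagecache. A sketch of how one might raise it (assuming the file exists on the running kernel, which, as noted above, it may not on 2.6.8; the value 10000 is just an illustrative aggressive setting, not a recommendation from this thread):

```shell
# Raise vfs_cache_pressure to reclaim dcache/icache harder under pressure.
# Needs root; the file may be absent on older kernels, hence the guard.
if [ -w /proc/sys/vm/vfs_cache_pressure ]; then
    cat /proc/sys/vm/vfs_cache_pressure          # show the current value
    echo 10000 > /proc/sys/vm/vfs_cache_pressure # make reclaim much more aggressive
fi
```

This only biases the shrinkers; it does not force an immediate purge of the caches.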
Re: [Ext2-devel] Re: inode cache, dentry cache, buffer heads usage
On Thu, Mar 10, 2005 at 03:23:49AM +0530, Dipankar Sarma wrote:
> On Wed, Mar 09, 2005 at 01:29:23PM -0800, Badari Pulavarty wrote:
> > On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> > > On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > > > Hi,
> > > >
> > > > We have an 8-way P-III with 16GB RAM running 2.6.8-1. We use this as
> > > > our server to keep source code and cscopes, and to do the builds.
> > > > This machine seems to slow down over time. One thing we keep
> > > > noticing is that it keeps running out of lowmem. Most of the lowmem
> > > > is used for the ext3 inode cache + dentry cache + bufferheads +
> > > > Buffers. So we did a 2:2 split - it improved things, but we again
> > > > ran into the same issues.
> > > >
> > > > So, why are these slab caches not getting purged/shrunk even
> > > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > > can I do to keep the machine healthy?
> > >
> > > How does /proc/sys/fs/dentry-state look when you run low on lowmem?
> >
> > [EMAIL PROTECTED]:~$ cat /proc/sys/fs/dentry-state
> > 1434093 1348947 45 0 0 0
> > [EMAIL PROTECTED]:~$ grep dentry /proc/slabinfo
> > dentry_cache 1434094 1857519 144 27 1 : tunables 120 60 8 : slabdata 68797 68797 0
>
> Hmm.. so we are not shrinking the dcache despite a large number of
> unused dentries. That is where we need to look. Will dig a bit
> tomorrow.

Here's my really old patch where I saw some improvement for this
scenario...
I haven't tried this in a really long time, so I have no idea if it'll
work :-)

Sonny

--- fs/dcache.c.original	2004-08-02 15:43:42.629539312 -0500
+++ fs/dcache.c	2004-08-03 18:16:45.007809144 -0500
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include

 /* #define DCACHE_DEBUG 1 */

@@ -60,12 +61,61 @@
 static unsigned int d_hash_mask;
 static unsigned int d_hash_shift;
 static struct hlist_head *dentry_hashtable;
 static LIST_HEAD(dentry_unused);
+static struct rb_root dentry_tree = RB_ROOT;
+
+#define RB_NONE (2)
+#define ON_RB(node)	((node)->rb_color != RB_NONE)
+#define RB_CLEAR(node)	((node)->rb_color = RB_NONE)

 /* Statistics gathering. */
 struct dentry_stat_t dentry_stat = {
 	.age_limit = 45,
 };
+
+/* take a dentry safely off the rbtree */
+static void drb_delete(struct dentry *dentry)
+{
+	// printk("drb_delete: 0x%p (%s) proc %d\n", dentry, dentry->d_iname, smp_processor_id());
+	if (ON_RB(&dentry->d_rb)) {
+		rb_erase(&dentry->d_rb, &dentry_tree);
+		RB_CLEAR(&dentry->d_rb);
+	} else {
+		/* All allocated dentry objs should be in the tree */
+		BUG_ON(1);
+	}
+}
+
+static struct dentry *drb_insert(struct dentry *dentry)
+{
+	struct rb_node **p = &dentry_tree.rb_node;
+	struct rb_node *parent = NULL;
+	struct rb_node *node = &dentry->d_rb;
+	struct dentry *cur = NULL;
+
+	// printk("drb_insert: 0x%p (%s)\n", dentry, dentry->d_iname);
+
+	while (*p)
+	{
+		parent = *p;
+		cur = rb_entry(parent, struct dentry, d_rb);
+
+		if (dentry < cur)
+			p = &(*p)->rb_left;
+		else if (dentry > cur)
+			p = &(*p)->rb_right;
+		else {
+			return cur;
+		}
+	}
+
+	rb_link_node(node, parent, p);
+	rb_insert_color(node, &dentry_tree);
+	return NULL;
+}
+
 static void d_callback(struct rcu_head *head)
 {
 	struct dentry *dentry = container_of(head, struct dentry, d_rcu);
@@ -189,6 +239,7 @@ kill_it:
 		list_del(&dentry->d_child);
 		dentry_stat.nr_dentry--;	/* For d_free, below */
 		/* drops the locks, at that point nobody can reach this dentry */
+		drb_delete(dentry);
 		dentry_iput(dentry);
 		parent = dentry->d_parent;
 		d_free(dentry);
@@ -351,6 +402,7 @@ static inline void prune_one_dentry(stru
 	__d_drop(dentry);
 	list_del(&dentry->d_child);
 	dentry_stat.nr_dentry--;	/* For d_free, below */
+	drb_delete(dentry);
 	dentry_iput(dentry);
 	parent = dentry->d_parent;
 	d_free(dentry);
@@ -360,7 +412,7 @@
 }

 /**
- * prune_dcache - shrink the dcache
+ * prune_lru - shrink the lru list
  * @count: number of entries to try and free
  *
  * Shrink the dcache. This is done when we need
@@ -372,7 +424,7 @@ static inline void prune_one_dentry(stru
  * all the dentries are in use.
  */
-static void prune_dcache(int count)
+static void prune_lru(int count)
 {
 	spin_lock(&dcache_lock);
 	for (; count ; count--) {
@@ -410,6 +462,93 @@
 	spin_unlock(&dcache_lock);
 }

+/**
+ * prune_dcache - try and "intelligently" shrink the dcache
+ * @requeste
Re: inode cache, dentry cache, buffer heads usage
On Wed, Mar 09, 2005 at 01:29:23PM -0800, Badari Pulavarty wrote:
> On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> > On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > > Hi,
> > >
> > > We have an 8-way P-III with 16GB RAM running 2.6.8-1. We use this as
> > > our server to keep source code and cscopes, and to do the builds.
> > > This machine seems to slow down over time. One thing we keep
> > > noticing is that it keeps running out of lowmem. Most of the lowmem
> > > is used for the ext3 inode cache + dentry cache + bufferheads +
> > > Buffers. So we did a 2:2 split - it improved things, but we again
> > > ran into the same issues.
> > >
> > > So, why are these slab caches not getting purged/shrunk even
> > > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > > can I do to keep the machine healthy?
> >
> > How does /proc/sys/fs/dentry-state look when you run low on lowmem?
>
> [EMAIL PROTECTED]:~$ cat /proc/sys/fs/dentry-state
> 1434093 1348947 45 0 0 0
> [EMAIL PROTECTED]:~$ grep dentry /proc/slabinfo
> dentry_cache 1434094 1857519 144 27 1 : tunables 120 60 8 : slabdata 68797 68797 0

Hmm.. so we are not shrinking the dcache despite a large number of
unused dentries. That is where we need to look. Will dig a bit
tomorrow.

Thanks
Dipankar
Re: inode cache, dentry cache, buffer heads usage
On Wed, 2005-03-09 at 13:27, Dipankar Sarma wrote:
> On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> > Hi,
> >
> > We have an 8-way P-III with 16GB RAM running 2.6.8-1. We use this as
> > our server to keep source code and cscopes, and to do the builds.
> > This machine seems to slow down over time. One thing we keep
> > noticing is that it keeps running out of lowmem. Most of the lowmem
> > is used for the ext3 inode cache + dentry cache + bufferheads +
> > Buffers. So we did a 2:2 split - it improved things, but we again
> > ran into the same issues.
> >
> > So, why are these slab caches not getting purged/shrunk even
> > under memory pressure? (I have seen lowmem as low as 6MB.) What
> > can I do to keep the machine healthy?
>
> How does /proc/sys/fs/dentry-state look when you run low on lowmem?

[EMAIL PROTECTED]:~$ cat /proc/sys/fs/dentry-state
1434093 1348947 45 0 0 0
[EMAIL PROTECTED]:~$ grep dentry /proc/slabinfo
dentry_cache 1434094 1857519 144 27 1 : tunables 120 60 8 : slabdata 68797 68797 0
[EMAIL PROTECTED]:~$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       8343724 kB
Buffers:        579232 kB
Cached:        5051848 kB
SwapCached:          0 kB
Active:        2911084 kB
Inactive:      3878044 kB
HighTotal:    14548952 kB
HighFree:      8330944 kB
LowTotal:      1828124 kB
LowFree:         12780 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             216 kB
Writeback:           0 kB
Mapped:         301940 kB
Slab:          1225772 kB
Committed_AS:   771340 kB
PageTables:       5768 kB
VmallocTotal:   114680 kB
VmallocUsed:       312 kB
VmallocChunk:   114368 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB
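The dentry-state fields above are, in order: nr_dentry, nr_unused, age_limit, want_pages, and two unused fields. Plugging in the posted numbers shows that nearly the whole dcache is sitting on the unused (LRU) list, i.e. it is freeable in principle (a sketch using the values quoted above, not part of the original mail):

```shell
# /proc/sys/fs/dentry-state: nr_dentry nr_unused age_limit want_pages dummy dummy
nr_dentry=1434093   # total dentries in the cache
nr_unused=1348947   # dentries on the LRU, freeable in principle
echo "unused dentries: $((100 * nr_unused / nr_dentry))% of the cache"
```

That roughly 94% of the cache is "unused" yet not being reclaimed is exactly the anomaly Dipankar points at in his reply.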
Re: inode cache, dentry cache, buffer heads usage
On Wed, Mar 09, 2005 at 10:55:58AM -0800, Badari Pulavarty wrote:
> Hi,
>
> We have an 8-way P-III with 16GB RAM running 2.6.8-1. We use this as
> our server to keep source code and cscopes, and to do the builds.
> This machine seems to slow down over time. One thing we keep
> noticing is that it keeps running out of lowmem. Most of the lowmem
> is used for the ext3 inode cache + dentry cache + bufferheads +
> Buffers. So we did a 2:2 split - it improved things, but we again
> ran into the same issues.
>
> So, why are these slab caches not getting purged/shrunk even
> under memory pressure? (I have seen lowmem as low as 6MB.) What
> can I do to keep the machine healthy?

How does /proc/sys/fs/dentry-state look when you run low on lowmem?

Thanks
Dipankar
inode cache, dentry cache, buffer heads usage
Hi,

We have an 8-way P-III with 16GB RAM running 2.6.8-1. We use this as
our server to keep source code and cscopes, and to do the builds.
This machine seems to slow down over time. One thing we keep noticing
is that it keeps running out of lowmem. Most of the lowmem is used for
the ext3 inode cache + dentry cache + bufferheads + Buffers. So we did
a 2:2 split - it improved things, but we again ran into the same
issues.

So, why are these slab caches not getting purged/shrunk even under
memory pressure? (I have seen lowmem as low as 6MB.) What can I do to
keep the machine healthy?

Thanks,
Badari

Meminfo:
$ cat /proc/meminfo
MemTotal:     16377076 kB
MemFree:       9400604 kB
Buffers:        577368 kB
Cached:        4002012 kB
SwapCached:          0 kB
Active:        2152196 kB
Inactive:      3578624 kB
HighTotal:    14548952 kB
HighFree:      9387328 kB
LowTotal:      1828124 kB
LowFree:         13276 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
Mapped:         301432 kB
Slab:          1227268 kB
Committed_AS:   695920 kB
PageTables:       5684 kB
VmallocTotal:   114680 kB
VmallocUsed:       312 kB
VmallocChunk:   114368 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB

Slabinfo (top users):
=====================
ext3_inode_cache 1405201 1615312 480  8 1 : tunables  54 27 8 : slabdata 201914 201914 0
dentry_cache     1505485 1864917 144 27 1 : tunables 120 60 8 : slabdata  69071  69071 0
buffer_head      1099832 1755375  52 75 1 : tunables 120 60 8 : slabdata  23405  23405 0
radix_tree_node    99919  102522 276 14 1 : tunables  54 27 8 : slabdata   7323   7323 0
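On i386, slab pages all come from lowmem, so the slabdata columns above translate directly into lowmem consumption: lowmem used ≈ num_slabs × pagesperslab × page size. A sketch using the dentry_cache line above (69071 slabs, 1 page per slab, 4 KB pages; not part of the original mail):

```shell
# Approximate lowmem pinned by one slab cache, from /proc/slabinfo columns:
#   <name> <active> <num> <objsize> <objperslab> <pagesperslab> ... : slabdata <active_slabs> <num_slabs> ...
num_slabs=69071     # dentry_cache num_slabs from the slabinfo above
pagesperslab=1      # dentry_cache pages per slab
page_kb=4           # i386 page size in kB
echo "dentry_cache: $((num_slabs * pagesperslab * page_kb)) kB of lowmem"
```

That works out to roughly 270 MB of lowmem for the dentry cache alone; the same arithmetic on the ext3_inode_cache line (201914 slabs) gives close to 800 MB, which together accounts for most of the 1.8 GB lowmem zone.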