On Tue, 2004-07-06 at 08:36, John Goerzen wrote:
> On this (2.4.26 machine with Amanda), it looks like this during a
> backup:
> 
> jfs_mp               432    432     80    9    9    1 :  252  126
> jfs_ip             64080 145859    524 20837 20837    1 :  124   62
> 
> I don't know exactly what that means.
> 
> Interestingly, on my 2.6.x machine that is having the starvation
> problems with find, I observed this:
> 
> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <batchcount> <limit> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
> jfs_mp                41    100     76   50    1 : tunables  120   60    0 : slabdata      2      2      0
> jfs_ip            163017 163017    852    9    2 : tunables   54   27    0 : slabdata  18113  18113      0
> 
> In other words, the lightly-loaded workstation running find appears to
> have more objects than the much more active server.  

That makes sense.  find is only reading inodes and directories, so the
system doesn't need much memory for anything else.  The backup, by
contrast, is also reading all of the file data, so there is less memory
available for caching inodes.
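For scale, the quoted 2.6 numbers can be turned into a rough memory
footprint (a back-of-the-envelope sketch, assuming the usual 4 KiB page
size): 18113 slabs of 2 pages each is on the order of 142 MiB pinned by
the jfs_ip cache alone.

```shell
# Approximate memory held by the jfs_ip slab cache, using the figures
# from the 2.6 /proc/slabinfo output above:
#   num_slabs = 18113, pagesperslab = 2, page size = 4096 bytes (assumed)
awk 'BEGIN { printf "%.0f MiB\n", 18113 * 2 * 4096 / (1024 * 1024) }'
```

Cross-checking by object size gives a consistent figure: 163017 objects
at 852 bytes each is about 132 MiB of live inode data, plus slab overhead.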

> Also, those 18113
> numbers are many times larger than the closest second on the machine,
> and the 163017 number is also the largest on the box.  But I don't
> know enough about the VM system to know what this all means.

find requires a lot of inodes to be read, since it needs to stat each
file to see if it is a directory.  Since the VM isn't being asked to
do much else, it uses a lot of memory to cache the inodes.  It is not
necessarily behaving badly, but I think you can adjust swappiness to
tune it to be more friendly to other apps.  I don't claim to know much
about VM tuning, though.
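As a minimal sketch of the kind of tuning meant above (the values below
are illustrative, not recommendations; both knobs exist on 2.6 kernels):

```shell
# Show the current swappiness setting; the default is usually 60.
cat /proc/sys/vm/swappiness

# Lower it so the kernel is less eager to swap application pages
# out in favor of caches (requires root):
echo 20 > /proc/sys/vm/swappiness    # or: sysctl -w vm.swappiness=20

# A related 2.6 knob, vfs_cache_pressure, controls how aggressively
# the kernel reclaims dentry/inode caches specifically; values above
# the default of 100 shrink the inode cache more readily:
echo 150 > /proc/sys/vm/vfs_cache_pressure
```

To make either setting persistent, the usual place is /etc/sysctl.conf.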

-- 
David Kleikamp
IBM Linux Technology Center

_______________________________________________
Jfs-discussion mailing list
[EMAIL PROTECTED]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion