Re: [Jfs-discussion] performance probs - 2.4.28, jsf117, raid5

2004-12-09 Thread Per Jessen
On Mon, 6 Dec 2004 17:37:30 -0500, Sonny Rao wrote:

Right, so there's really only one thing I can think of, and it's not
much of a solution.  You can change the memory split so that the
kernel can use all 2GB.  I know there are some patches floating
around that convert the default 3GB/1GB user/kernel split into a
2GB/2GB split, or you can use one of the so-called 4GB/4GB kernels,
which keep the kernel in a totally different address space.
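
Either way, it's worth checking how the split looks on your box
first.  A rough sketch that reads /proc/meminfo - it assumes a
highmem-enabled kernel, which reports the LowTotal/LowFree/
HighTotal/HighFree fields:

  #!/usr/bin/env python
  # Show the low/high memory split.  Assumes a highmem-enabled kernel,
  # which reports LowTotal/LowFree/HighTotal/HighFree in /proc/meminfo.
  wanted = ("LowTotal", "LowFree", "HighTotal", "HighFree")
  values = {}
  for line in open("/proc/meminfo"):
      parts = line.split()
      if parts and parts[0].rstrip(":") in wanted:
          values[parts[0].rstrip(":")] = int(parts[1])   # value is in kB

  for name in wanted:
      if name in values:
          print("%-10s %8d kB" % (name, values[name]))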

This is separate from the option for high-mem support in the kernel?
OK, I've just found the posting from Ingo Molnar.

It's really only a solution if all of the inodes in your working set
fit into 2GB; otherwise you're just delaying the inevitable.
Ultimately, this is what 64-bit machines (with a lot of RAM) are good
for :-)

Yeah ...

thanks again.  I've managed to tweak our setup, which has helped - but
the problem is that, under heavy load, kswapd manages to bring the
machine to a halt while other processes appear to be spinning.  I'm now
contemplating offloading a lot of this IO to another box - fortunately,
32-bit boxes are cheap to come by these days.


/Per Jessen, Zurich

-- 
regards,
Per Jessen, Zurich
http://www.spamchek.com - let your spam stop here!




Re: [Jfs-discussion] performance probs - 2.4.28, jsf117, raid5

2004-12-06 Thread Sonny Rao
On Sun, Dec 05, 2004 at 08:41:35PM +0100, Per Jessen wrote:
 On Sun, 05 Dec 2004 18:40:58 +0100, Per Jessen wrote:
 
 I do a find in a directory that contains 500,000-600,000 files - which
 just about makes the box grind to a halt.  The machine is not heavily
 loaded as such, but does write 2 new files/sec to the same filesystem.
 Or tries to.

 I need to add - at the same time kswapd is very, very busy, despite
 only about 1GB of the 2GB of main memory being used/active.
 
 
 /Per

Yes, this is a consequence of the way memory is partitioned on IA32
machines (which I'm assuming you're using).  If you look at the amount
of memory being used by the kernel slab cache, I'd bet it's using much
of that 1GB for kernel data structures (inodes, dentries, etc.), and
whenever the kernel needs to allocate more memory it has to evict
some of those structures, which is a very expensive process.

Look at /proc/slabinfo and add up the total number of slabs; together
with the pages-per-slab column, that gives you the memory tied up in
the slab cache.
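
Something like this should do the arithmetic - a rough sketch,
assuming the 2.4-style slabinfo columns (name, active objs, total
objs, object size, active slabs, total slabs, pages per slab):

  #!/usr/bin/env python
  # Total up the memory held by the kernel slab cache, assuming the
  # 2.4-style /proc/slabinfo columns:
  #   name active-objs total-objs obj-size active-slabs total-slabs pages-per-slab
  PAGE_SIZE = 4096  # 4kB pages on IA32

  total_pages = 0
  for line in open("/proc/slabinfo"):
      fields = line.split()
      if len(fields) < 7 or not fields[1].isdigit():
          continue  # skip the "slabinfo - version" header line
      total_slabs = int(fields[5])
      pages_per_slab = int(fields[6])
      total_pages += total_slabs * pages_per_slab

  print("slab cache: %.1f MB" % (total_pages * PAGE_SIZE / 1048576.0))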

Sonny


Re: [Jfs-discussion] performance probs - 2.4.28, jsf117, raid5

2004-12-05 Thread Per Jessen
On Sun, 05 Dec 2004 18:40:58 +0100, Per Jessen wrote:

I do a find in a directory that contains 500,000-600,000 files - which
just about makes the box grind to a halt.  The machine is not heavily
loaded as such, but does write 2 new files/sec to the same filesystem.
Or tries to.

I need to add - at the same time kswapd is very, very busy, despite
only about 1GB of the 2GB of main memory being used/active.
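
For what it's worth, the pressure seems easy to reproduce with nothing
but stat() calls, since each file touched pulls an inode and dentry
into the kernel caches.  A rough sketch of the sweep find is doing
(the spool path is made up - substitute the real directory):

  #!/usr/bin/env python
  # Sweep a large directory with lstat(), the way find does - each call
  # pulls one inode (and dentry) into the kernel's low-memory caches.
  import os

  SPOOL = "/data/spool"  # hypothetical path; substitute the real directory

  count = 0
  for name in os.listdir(SPOOL):
      os.lstat(os.path.join(SPOOL, name))
      count += 1
  print("%d entries stat'ed" % count)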


/Per

-- 
regards,
Per Jessen, Zurich
http://www.spamchek.com - let your spam stop here!




Re: [Jfs-discussion] performance

2002-11-30 Thread Robert K.
One reason may be the location of the partitions.
A partition located at low sector numbers is faster than one that lies
at the end of the disk: sector numbering usually starts at the outer
edge of the platters, where the transfer rate is highest.
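
That difference is easy to measure; a crude probe along these lines
(the device node is only an example, it needs root, and the numbers
are only meaningful on an idle disk with a cold cache):

  #!/usr/bin/env python
  # Crude sequential-read throughput probe at both ends of a disk,
  # to show the outer-track vs inner-track difference.  The device
  # path is hypothetical; run as root on an otherwise idle disk, and
  # note the page cache can inflate the numbers on a repeat run.
  import os, time

  DEV = "/dev/hda"          # hypothetical device node
  CHUNK = 1024 * 1024       # read 1MB at a time
  TOTAL = 64 * CHUNK        # 64MB per measurement

  def throughput(offset):
      fd = os.open(DEV, os.O_RDONLY)
      try:
          os.lseek(fd, offset, os.SEEK_SET)
          start = time.time()
          done = 0
          while done < TOTAL:
              buf = os.read(fd, CHUNK)
              if not buf:
                  break
              done += len(buf)
          return done / (time.time() - start) / 1048576.0
      finally:
          os.close(fd)

  fd = os.open(DEV, os.O_RDONLY)
  size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
  os.close(fd)
  print("start of disk: %.1f MB/s" % throughput(0))
  print("end of disk:   %.1f MB/s" % throughput(size - TOTAL))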

Sean Neakums wrote:

Hi,

I've been playing with JFS this past day or so, and I am observing a
performance problem.  I am using a patched kernel (version 14a of Rik
van Riel's rmap VM), so this may be implicated somehow.

When I do a build of the LNX-BBC (http://www.lnx-bbc.org/) I get a
build time of about 75 minutes on an ext3 volume, and about 84 minutes
on a JFS volume.  I'm using stock ext3 and JFS version 1.10, on Linux
2.4.19 plus the rmap patch.

Here's some data, from time make install:

On ext3:

real    76m37.636s
user    38m59.370s
sys     18m34.210s

On JFS:

real    84m13.123s
user    39m3.020s
sys     18m49.810s

The machine in question is an SMP box with 1.13GHz P-III CPUs, 256MB
of RAM and IDE disks.  I use ccache (http://ccache.samba.org/) to do
these builds, and see almost identical hit/miss statistics for each
run.  I believe that, due to ccache, the build becomes fairly
I/O-bound.  Judging by the fact that the wall-clock time shows the
only big variation, I'm guessing (wildly, with no proof) that this may
have something to do with how JFS schedules I/O.

If there is any other information you'd like me to gather, please holler.






Re: [Jfs-discussion] performance

2002-11-30 Thread Sean Neakums
commence Robert K. quotation:

 One reason may be the location of the partitions.
 A low sector located partition is faster than one that lies
 at the end of the disk. Disks usually start counting outside

This occurred to me too, and so today I created an ext3 filesystem on
the same volume, and redid the build.  I got the same time as I did
for previous ext3 builds, using the same chunk of the disk as with the
on-JFS build.

-- 
[|] Sean Neakums      |  Questions are a burden to others;
[|] [EMAIL PROTECTED] |  answers a prison for oneself.