On Sat, 2006-03-25 at 21:38 -0800, dean gaudet wrote:
On Sat, 25 Mar 2006, Alex Izvorski wrote:
http://linuxraid.pastebin.com/621363 - oprofile annotated assembly
it looks to me like a lot of time is spent in __find_stripe() ... i wonder
if the hash is working properly.
in raid5.c
Hello,
I profiled some raid5 reads using oprofile to try to track down the
suspiciously high cpu load I see. This uses the same 8-disk SATA setup
as I had described earlier. One of the runs is on a 1MB chunk raid5, the
other on a 32MB chunk raid5. As Neil suggested, memcpy is a big part of
the cpu
On Fri, 2006-03-24 at 15:38 +1100, Neil Brown wrote:
On Thursday March 23, [EMAIL PROTECTED] wrote:
Neil - Thank you very much for the response.
In my tests with identically configured raid0 and raid5 arrays, raid5
initially had much lower throughput during reads. I had assumed that
On Fri, 2006-03-24 at 09:19 -0800, dean gaudet wrote:
On Thu, 23 Mar 2006, Alex Izvorski wrote:
Also the cpu load is measured with Andrew Morton's cyclesoak
tool which I believe to be quite accurate.
there's something cyclesoak does which i'm not sure i agree with:
cyclesoak process
Neil - Thank you very much for the response.
In my tests with identically configured raid0 and raid5 arrays, raid5
initially had much lower throughput during reads. I had assumed that
was because raid5 did parity-checking all the time. It turns out that
raid5 throughput can get fairly close
Paul M. paul at gpmidi.net writes:
Couple questions related to some raid using some firewire disks:
Is it possible to set the stripe size to 16+MB?
Is there a way to force the raid driver to only access one disk at a time?
Thanks,
Paul
Yes, it's possible - just change the #define
Hello,
I have a question: I'd like to have a raid5 array which writes parity data but
does not check it during reads while the array is ok. I would trust each disk
to detect errors itself and cause the array to be degraded if necessary, in
which case that disk would drop out and the parity data