Re: mismatch_cnt != 0

2008-02-23 Thread Carlos Carvalho
Justin Piszcz ([EMAIL PROTECTED]) wrote on 23 February 2008 10:44:
 
 
 On Sat, 23 Feb 2008, Justin Piszcz wrote:
 
 
 
  On Sat, 23 Feb 2008, Michael Tokarev wrote:
 
  Justin Piszcz wrote:
  Should I be worried?
  
  Fri Feb 22 20:00:05 EST 2008: Executing RAID health check for /dev/md3...
  Fri Feb 22 21:00:06 EST 2008: cat /sys/block/md3/md/mismatch_cnt
  Fri Feb 22 21:00:06 EST 2008: 936
  Fri Feb 22 21:00:09 EST 2008: Executing repair on /dev/md3
  Fri Feb 22 22:00:10 EST 2008: cat /sys/block/md3/md/mismatch_cnt
  Fri Feb 22 22:00:10 EST 2008: 936
  
  Your /dev/md3 is a swap, right?
  If it's swap, it's quite common to see mismatches here.  I don't know
  why, and I don't think it's correct (there should be a bug somewhere).
  If it's not swap, there should be no mismatches, UNLESS you initially
  built your array with --assume-clean.
  In any case it's good to understand where those mismatches come from
  in the first place.
  
  As for the difference (or, rather, lack thereof) in the mismatch count
  after check and repair - that's exactly what is expected.  Check found
  936 mismatches, and repair corrected exactly that many.  I.e., if you
  run a check again after the repair, you should see 0 mismatches.
  
  /mjt
  
 
  My /dev/md3 is my main RAID 5 partition.  Even after the repair it still
  showed 936; I will re-run the repair.  Also, I did not build my array with
  --assume-clean, and I run a check of the array once a week.
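 
  For reference, a minimal sketch of how such a weekly check can be scheduled
  via cron and sysfs (the device name, schedule, and file path are assumptions,
  not taken from the thread):
 
      # /etc/cron.d/md-check (hypothetical): start a scrub of md3 every
      # Friday at 20:00; md re-reads all copies/parity and counts mismatches
      0 20 * * 5  root  echo check > /sys/block/md3/md/sync_action
 
  Debian-based systems ship a similar helper, /usr/share/mdadm/checkarray,
  normally driven from the mdadm cron job.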
 

The only situation in which a clean array should show mismatches is if it
was created with --assume-clean. After a repair, a check should report zero
mismatches, without a reboot.
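
A minimal sketch of that repair-then-recheck sequence through sysfs (md3 as
in the log above; the polling interval is arbitrary):

    # start a repair pass and wait for it to finish
    echo repair > /sys/block/md3/md/sync_action
    while [ "$(cat /sys/block/md3/md/sync_action)" != "idle" ]; do sleep 60; done

    # run a fresh check; mismatch_cnt is reset when the new pass starts
    echo check > /sys/block/md3/md/sync_action
    while [ "$(cat /sys/block/md3/md/sync_action)" != "idle" ]; do sleep 60; done

    # on a healthy array this should now read 0
    cat /sys/block/md3/md/mismatch_cnt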

Of course I'm supposing your hardware is working without glitches...

 After a reboot and another check, it is back to 0; interesting.

Looks like a bug... Which kernel version?
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: RAID10 far (f2) read throughput on random and sequential / read-ahead

2008-02-23 Thread Keld Jørn Simonsen
I made a reference to your work in the wiki howto on performance.
Thanks!

Keld

On Fri, Feb 22, 2008 at 04:14:05AM +, Nat Makarevitch wrote:
 'md' performs wonderfully. Thanks to every contributor!
 
 I pitted it against a 3ware 9650 and 'md' won on nearly every count
 (although for sequential I/O on RAID5 the 3ware wins by a wide margin):
 http://www.makarevitch.org/rant/raid/#3wmd
 
 On RAID10 f2 a small read-ahead reduces the throughput on sequential read, but
 even a low value (768 for the whole 'md' block device, 0 for the underlying
 spindles) enables very good sequential read performance (300 MB/s on 6 low-end
 Hitachi 500 GB spindles).
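 
 A rough sketch of the read-ahead settings described above, using blockdev
 (device names are placeholders; values are in 512-byte sectors):
 
     # small read-ahead on the array itself, none on the member disks
     blockdev --setra 768 /dev/md0
     for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
         blockdev --setra 0 "$d"
     done
     blockdev --getra /dev/md0    # verify the current setting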
 
 What baffles me is that, on a 1.4 TB array served by a box with 12 GB of RAM
 (hence a low cache-hit ratio), the random-access performance remains stable
 and high (450 IOPS with 48 threads, 20% writes, 10% fsync'ed), even with a
 fairly high read-ahead (16k). How come?!
 
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html