Gordon Henderson wrote:
On Wed, 21 Dec 2005, Sebastian Kuzminsky wrote:
But how does the performance for read and write compare?
Good question! I'll post some performance numbers of the RAID-6
configuration when I have it up and running.
Post your hardware config too if you
Andy Smith [EMAIL PROTECTED] wrote:
On Wed, Dec 21, 2005 at 12:55:47PM +1100, Christopher Smith wrote:
Why would you use RAID6 and not RAID10 with four disks ?
I was wondering the same thing. It's true that RAID6 is guaranteed
to still run degraded after losing 2 devices, whereas a RAID10
Sebastian Kuzminsky [EMAIL PROTECTED] wrote:
Question 1: Why didn't the RAID sync I/O show up with vmstat?
Question 2: Why was it limited to 17 MB/s? The maximum was left at
the default, 200 MB/s; the min was also at the default, 1 MB/s.
I get 60 MB/s per disk with hdparm -tT
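The min and max referred to here are the kernel's md resync speed
limits, expressed in KB/s. A minimal sketch of inspecting and raising
them, assuming the usual /proc/sys/dev/raid/ paths and root access:

```shell
# Defaults are typically 1000 (1 MB/s) min and 200000 (200 MB/s) max
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor to push the resync harder (value is in KB/s)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```

Since the observed 17 MB/s sits well inside the 1-200 MB/s window,
raising the floor like this is one way to test whether the limiter or
the access pattern is the bottleneck.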
I just created a RAID array (4-disk RAID-6). When mdadm -C returned,
/proc/mdstat showed it syncing the new array at about 17 MB/s. vmstat 1
showed hardly any blocks in or out, and an almost completely idle CPU.
Question 1: Why didn't the RAID sync I/O show up with vmstat?
I switched to iostat
Andrew Burgess [EMAIL PROTECTED] wrote:
Question 1: Why didn't the RAID sync I/O show up with vmstat?
I switched to iostat because of similar observations with vmstat. iostat
at least shows you which devices it is looking at and it agrees with
/proc/mdstat's numbers in my experience.
Right.
I'll just call it sync access pattern overhead then.
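For anyone wanting to reproduce the comparison above, a sketch using
sysstat's iostat (the member device names sda-sdd are assumptions, not
from the post):

```shell
# Per-device throughput in KB/s, refreshed every second; md sync
# traffic shows up on the member disks here even when vmstat misses it
iostat -d -k 1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Compare against the rate md itself reports
cat /proc/mdstat
```

If the iostat numbers agree with /proc/mdstat, as reported here, the
discrepancy is in what vmstat counts rather than in the array itself.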
As another data point, I've been adding more and more
drives to a RAID-1 array. Yesterday I just added a fourth
disk which is still syncing.
mdadm --grow /dev/md0 -n4
mdadm --manage /dev/md0 --add /dev/sde
md0 : active raid1
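To follow a resync like this one, /proc/mdstat and mdadm --detail both
report progress. A sketch, reusing the /dev/md0 name from the commands
above:

```shell
# Resync/recovery progress bar, refreshed every second
watch -n1 cat /proc/mdstat

# Or ask mdadm directly for the array state and rebuild status
mdadm --detail /dev/md0
```

Note that the two steps above can also be run in the opposite order
(--add first, then --grow), in which case md rebuilds onto the new
disk as soon as the device count is raised.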