In my s/w raid5 over h/w raid0 testing, I had just completed
s/w raid5 over 4 h/w raid0's (via a Mylex DAC1164P, 5 drives each)
and recorded the bonnie results. After using Mylex's config utility
(from a DOS reboot) to turn the 20 drives into 10 raid0's of
2 drives each, I did the following:
- killed partition tables with
"dd if=/dev/zero of=<drive> bs=512 count=100" for all 10 drives
- for i in 0 1 2 3 4 5 6 7 8 9; do
echo -e "n\np\n1\n\n\nt\n1\nfd\nw"|fdisk /dev/rd/c0d$i
done
- updated my raidtab to 10 raid-disks and added the 6 new devices
- ran mkraid --really-force /dev/md0
It apparently worked, but it looks like something left over in the
last 4kB of the drives caused some weird output:
[root@rts-test /root]# mkraid --really-force /dev/md0
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/rd/c0d0p1, 88866800kB, raid superblock at 88866688kB
disk 1: /dev/rd/c0d1p1, 35547120kB, raid superblock at 35547008kB
disk 2: /dev/rd/c0d2p1, 35547120kB, raid superblock at 35547008kB
disk 3: /dev/rd/c0d3p1, 35547120kB, raid superblock at 35547008kB
disk 4: /dev/rd/c0d4p1, 35547120kB, raid superblock at 35547008kB
disk 5: /dev/rd/c0d5p1, 88866800kB, raid superblock at 88866688kB
disk 6: /dev/rd/c0d6p1, 35547120kB, raid superblock at 35547008kB
disk 7: /dev/rd/c0d7p1, 35547120kB, raid superblock at 35547008kB
disk 8: /dev/rd/c0d8p1, 35547120kB, raid superblock at 35547008kB
disk 9: /dev/rd/c0d9p1, 35547120kB, raid superblock at 35547008kB
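If I remember the md driver's superblock placement right, the
superblock goes 64kB below the device size rounded down to a 64kB
boundary, which at least matches the offsets mkraid printed (a quick
sanity check with shell arithmetic; the formula is my assumption):

```shell
# md puts its superblock 64kB below the device size rounded down to
# a 64kB boundary; check against the sizes in the mkraid output.
sb_offset() {
    size_kb=$1
    echo $(( (size_kb / 64) * 64 - 64 ))
}
sb_offset 88866800   # -> 88866688, matching disks 0 and 5
sb_offset 35547120   # -> 35547008, matching the other eight
```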
But all 10 raid0 drives are (a) approximately 35GB in size and
(b) the same size as each other:
[root@rts-test /root]# fdisk -l /dev/rd/c0d[0123456789]|grep Disk
Disk /dev/rd/c0d0: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d1: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d2: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d3: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d4: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d5: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d6: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d7: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d8: 128 heads, 32 sectors, 17357 cylinders
Disk /dev/rd/c0d9: 128 heads, 32 sectors, 17357 cylinders
(all 10 "Units" lines said cylinders of 4096 * 512 bytes)
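Those geometry numbers are consistent with the ~35GB figure: at
128 heads * 32 sectors/track * 512 bytes/sector, each cylinder is
2MB, so:

```shell
# 128 heads * 32 sectors/track * 512 bytes/sector = 2048 kB/cylinder;
# times 17357 cylinders gives the whole-disk size in kB.
echo $(( 128 * 32 * 512 / 1024 * 17357 ))   # -> 35547136
```

which is the 35547120 kB partition size plus one 16kB track reserved
at the front of the disk.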
Subsequent mke2fs and bonnie runs seemed fine, so this is most likely
safe to ignore.
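If it does matter next time, the leftover data could presumably be
avoided by zeroing the end of each drive as well as the start, since
the 100-sector dd only touches the front. A sketch (not run here; the
sector count is taken from the fdisk geometry above, and I'm only
echoing the commands):

```shell
# Sketch: also zero the last 128kB of each drive, where a stale raid
# superblock would survive a wipe of the first 100 sectors.
# 128 heads * 32 sectors * 17357 cylinders, from the fdisk output:
SECTORS=$((128 * 32 * 17357))
for i in 0 1 2 3 4 5 6 7 8 9; do
    # echo only -- drop the echo to actually run the wipe
    echo dd if=/dev/zero of=/dev/rd/c0d$i bs=512 \
        seek=$((SECTORS - 256)) count=256
done
```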
Here are the results for s/w raid5 on top of 4 h/w raid0's (20 drives):
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
2047 17893 99.0 21009 20.4 10397 27.9 15686 62.6 30820 48.7 841.7 6.9
I'm sure these will get better (and maybe MUCH BETTER :) when KNI works.
Out of curiosity, does anyone know whether bonnie uses a %5d format
somewhere, such that rates over 100MB/sec would get cut off?
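For what it's worth, a printf field width (in C or the shell) is a
minimum, not a maximum, so a %5d wouldn't cut anything off -- a rate
over 99999 K/sec would just push the columns out of alignment. I
haven't checked bonnie's source to confirm it actually uses %5d,
though. A quick demonstration:

```shell
# A printf field width pads short values but never truncates long ones.
printf '[%5d]\n' 999      # prints "[  999]" -- padded to 5
printf '[%5d]\n' 123456   # prints "[123456]" -- field expands, no cut-off
```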
James
--
Miscellaneous Engineer --- IBM Netfinity Performance Development