Re: About Hardware Raid ...
Andy Poling [EMAIL PROTECTED] writes: If you'll be doing a lot of writing, the controller can use its memory to coalesce the writes and optimize the parity calculations for writing. I believe the Linux buffer cache will try to do the same. -- Osma Ahvenlampi
Hot Swap
Hello RAID group, I have a software RAID-1 system here with two IDE drives. The md1 device contains a complete mirror of a RedHat 5.2 root file system. I boot the system with a bootdisk created while md1 was mounted as /. The kernel I use is 2.2.3. While the system is running I disconnect one of the IDE cables; oops... When I put the IDE cable back in its place and try to raidhotadd the "failed" disk, I get a "device is busy" error. After a reboot of the system it's no problem to get the disk back into the array. Is there a way of skipping the "reboot the system" part? - Alex
Re: Hot Swap
I doubt it, unless the controller is hotswap-capable and you can reload the IDE driver. I don't know of any hotswap-capable IDE controllers (not to say that I wouldn't be interested if anyone else on the list does!). On Thu, 22 Apr 1999, a.loots wrote: [...] Is there a way of skipping the "reboot the system" part? - Alex James ([EMAIL PROTECTED]) It doesn't run on an open source platform, therefore it, by definition, does not matter.
Re: Hot Swap
I doubt it, unless the controller is hotswap-capable and you can reload the IDE driver. I don't know of any hotswap-capable IDE controllers (not to say that I wouldn't be interested if anyone else on the list does!). I too would be interested in a hot-swappable IDE controller. -sv
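For the software-RAID half of the procedure (the IDE-driver reload is the missing piece, as the replies note), the raidtools sequence would look roughly like this. Device names are purely illustrative:

```shell
# Remove the failed disk from the array (the kernel has usually
# already marked it faulty after the cable was pulled)
raidhotremove /dev/md1 /dev/hdc1

# ... physically reseat the drive; without a hot-swap-capable IDE
# controller the kernel still holds stale state here, hence the
# "device is busy" error and the need for a reboot ...

raidhotadd /dev/md1 /dev/hdc1   # kicks off reconstruction onto the disk
```

With SCSI one can often get away with removing and re-adding the device at the driver level; plain IDE of this era offers no such hook.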
Benchmarks/Performance.
Hi, I set up a RAID box a while ago and so far it's performed flawlessly... unfortunately the group I'm in is outgrowing it. So I'm putting together a new box, and I have time to test and benchmark it before putting it into service. The machine is a PPro with 64MB RAM, a vanilla SuSE 6.0 box. I downloaded 2.2.6, the patches, and the raidtools, recompiled the kernel, wrote a raidtab, ran mkraid, and it all seems to work. (See the attached raidtab: md0 is the squid cache, md1 is /home, md2 is the main RAID-5 array.)

/dev/md2 is RAID-5 across 4 WDC AC313000R's (I can only work with what I have in the office). In the raidtab I gave it a chunk size of 128 and I used the following mke2fs command:

  mke2fs -b 4096 -R stride=32 -m0 /dev/md2

which, from what I've read, should be pretty alright... Basically, I suppose what I'm asking is "Am I on the right track?" I'd really appreciate some feedback, because once this is put into service it'll be our server for the next 12 months at least.

Bonnie -s 256 on /dev/md1 (2 Seagate ST34371N's on an AHA2940):

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  Machine  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          256  3367 92.5  8991 32.0  3232 17.7  3764 90.3  8410 17.1  77.7  3.5

I haven't run bonnie on /dev/md2 yet 'cause it's still synching the array. Thanks for your time and patience... and keep up the good work. -- John Ronan [EMAIL PROTECTED], Telecommunications Software Systems Group - WIT, +353-51-302411, http://www-tssg.wit.ie You've had too much coffee when ... you walk 20 miles on your treadmill before you realise it's not plugged in raidtab
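The stride value in that mke2fs command follows directly from the chunk size and the filesystem block size. A quick sanity check of the arithmetic, using the values from the post (128 KB chunks, 4 KB ext2 blocks):

```shell
#!/bin/sh
# stride = RAID chunk size / ext2 block size, both in the same units
CHUNK_KB=128   # chunk-size from the raidtab, in KB
BLOCK_KB=4     # ext2 block size chosen via -b 4096, in KB
STRIDE=$((CHUNK_KB / BLOCK_KB))
echo "stride=${STRIDE}"
# The resulting command for the array:
echo "mke2fs -b 4096 -R stride=${STRIDE} -m0 /dev/md2"
```

So a 128 KB chunk with 4 KB blocks does give stride=32, matching the command above; change either value and the stride must be recomputed.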
Re: mkraid dies with aborted error
Running RedHat Starbuck 5.9.7, kernel 2.2.6, mkraid version 0.90.0 from the raidtools-0.90-3 RPM package. This looks like the FAQ:

Q) Why does raidtools-0.90 fail on my standard 2.2.[56] kernel?
A) raidtools-0.90 requires the raid145 patches to be applied to the kernel.

"mkraid: aborted" means "see /var/log/messages". I followed the directions right out of the HOWTO that is included in the RPM file, as far as I can tell. Did it say to patch the kernel?
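For reference, the stock 2.2.x kernels ship the old 0.36 md driver, which the 0.90 raidtools cannot talk to. Applying the raid0145 patch before building the kernel would look something like this; the patch filename is illustrative and depends on which snapshot you downloaded:

```shell
cd /usr/src/linux
# Apply the RAID 0.90 patch matching your kernel version (example filename)
patch -p1 < ../raid0145-19990421-2.2.6

# Rebuild with RAID-1/4/5 support enabled, then install the new kernel
make menuconfig
make dep bzImage modules modules_install
```

After booting the patched kernel, /proc/mdstat should report the new personalities and mkraid should stop aborting.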
RE: Is Raid as fraught as it looks?
Sure, with RAID-1 one can cheat by going degraded 1 -> degraded 5, and the net result is a larger partition, but he's starting from a raw partition... What about lower-end start-ups? Say I have one drive, and I just managed to get a second drive, so I'd like to start mirroring (RAID-1). There doesn't seem to be any way to make a two-disk mirrored bootable md0 without a third drive... is there? Yes, that is what the degraded RAID mode is for. In your example, you would create a degraded RAID-1 (with just one disk), copy the files from your original disk, restart the system with the new /dev/md0 replacing your original disk, then hot-add the original to the RAID set. [EMAIL PROTECTED]
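As a sketch, the degraded-start raidtab for that answer would mark the not-yet-available disk with failed-disk. Device names here are only an example:

```shell
# /etc/raidtab -- two-disk RAID-1 started in degraded mode (sketch)
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            4
    device                /dev/hdc1    # the new, empty disk
    raid-disk             0
    device                /dev/hda1    # original disk, joins the mirror later
    failed-disk           1
```

After mkraid /dev/md0 you would mke2fs the array, copy the root filesystem over, boot with /dev/md0 as root, and finally raidhotadd /dev/md0 /dev/hda1 to bring the original disk into the mirror.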
Mirror problems with 2.0.36
[I'm sorry if this is a repeat...] Hi all. I've been playing around with 2.0.36 and raid0145-19990309-2.0.36, and I think I've screwed it up. It was working fine (I believe) until I added a PCI EIDE controller. I disabled /dev/md0 from mounting in the fstab until I could update /etc/raidtab to reflect the new device names, but when I did, it appears to be using only one disk in the mirror:

  (read) hdb1's sb offset: 6353088 [events: 001a]
  autorun ...
  considering hdb1 ...
  adding hdb1 ...
  created md0
  bind<hdb1,1>
  running: <hdb1>
  now!
  hdb1's event counter: 001a
  md0: max total readahead window set to 128k
  md0: 1 data-disks, max readahead per data-disk: 128k
  raid1: device hdb1 operational as mirror 0
  raid1: md0, not all disks are operational -- trying to recover array
  raid1: raid set md0 active with 1 out of 2 mirrors
  md: updating md0 RAID superblock on device hdb1
  [events: 001b] (write) hdb1's sb offset: 6353088
  md: recovery thread got woken up ...
  md0: no spare disk to reconstruct array! -- continuing in degraded mode
  md: recovery thread finished ...

The following is the contents of /proc/mdstat:

  Personalities : [raid1]
  read_ahead 1024 sectors
  md0 : active raid1 hdb1[0] 6353088 blocks [2/1] [U_]
  unused devices: <none>

Now that I've removed the PCI controller, updated all necessary files, and am using the onboard controller again, what can be done to rectify this situation? Is it possible to resync the mirrors? What should I have done to correctly move a mirror from hdb/hdc to hde/hdf? Hmm... Also, can I expect a significant performance improvement using a PCI EIDE controller versus my onboard IDE controller? Thanks, Dave Wreski
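If the second half of the mirror is physically fine, a resync can usually be triggered by hot-adding it back into the array. The [U_] in /proc/mdstat shows only mirror 0 (hdb1) is up; hdc1 as the missing half is an assumption based on the hdb/hdc pairing mentioned in the question:

```shell
raidhotadd /dev/md0 /dev/hdc1   # re-add the missing mirror half
cat /proc/mdstat                # should now show a resync in progress
```

Once reconstruction finishes, the status line should read [2/2] [UU].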
Re: Benchmarks/Performance.
On Thu, 22 Apr 1999, Carlos Carvalho wrote: John Ronan ([EMAIL PROTECTED]) wrote on 22 April 1999 16:03: /dev/md2 is RAID-5 across 4 WDC AC313000R's (I can only work with what I have in the office). In the raidtab I gave it a chunk size of 128 and I used the following mke2fs command: mke2fs -b 4096 -R stride=32 -m0 /dev/md2 I'd like to have a way to measure the performance, but I don't know how. Doug Ledford recommends using bonnie on a just-created array to check performance with various chunk sizes. What bothers me most is that it seems the best settings depend on your particular files and usage... I tried this with RAID-0, and if bonnie is any guide, the optimal configuration is a 64k chunk size with a 4k e2fs block size. -- Paul Jakma [EMAIL PROTECTED] http://hibernia.clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt --- Fortune: I haven't lost my mind -- it's backed up on tape somewhere.
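Ledford's suggestion can be scripted as a sweep over chunk sizes. This is only a sketch: it destroys the contents of the array on each pass, and the chunk sizes, mount point, and raidtab template name are all illustrative:

```shell
#!/bin/sh
# WARNING: re-creates /dev/md2 (and destroys its contents) on each pass
for CHUNK in 16 32 64 128; do
    # substitute the chunk size into a raidtab template (template name is an example)
    sed "s/@CHUNK@/$CHUNK/" /etc/raidtab.tmpl > /etc/raidtab
    mkraid --really-force /dev/md2
    # keep stride consistent with the chunk size (4 KB ext2 blocks assumed)
    mke2fs -b 4096 -R stride=$((CHUNK / 4)) /dev/md2
    mount /dev/md2 /mnt
    bonnie -s 256 -d /mnt > bonnie-chunk$CHUNK.log
    umount /dev/md2
done
```

Comparing the per-chunk bonnie logs then shows which chunk size suits your workload; as Carlos notes, the winner depends on your particular files and usage, so benchmarking with a workload-sized file is more telling than any rule of thumb.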