Re: Recommendations for supported 4-port SATA PCI card ?
Mark Hahn wrote:
>> I have 4x500GB Maxtor SATA drives and I want to attach these to a
>> 4-port SATA PCI card and RAID5 them using md. Could anybody recommend
>> a card that will have out-of-box support on a Fedora system?
>
> which FC release? I believe FC4 would have decent support for Promise
> TX4's (there are at least two - the most recent might not work OOB).
> SiI 3114's ought to work as well.

Hi,

Currently FC4, but I could update to FC5 on these machines if necessary.
I should add that performance is not really an issue - it's mostly a
read-only data store.

I don't really want to fork out 250 UK pounds per SATA card (i.e. a 3ware
8506) since I need neither the capabilities (hardware RAID) nor the
performance (but I suppose if I'm forking out 750 UKP for disks, what's
another 250?).

Mike Hardy recommended the Addonics ADST114 (SiI 3114), which has an entry
on this page:

http://linuxmafia.com/faq/Hardware/sata.html

A little bit outdated, though. (Amazing how many RAID cards are fakeraid.)

Thanks for the help!

Ian

> 3ware. Period. If you're going to use md, get the 8506-4 series rather
> than either of the 9xxx series cards.

before the 9550, I never found 3ware attractive in price/performance:
expensive as hell and a lot slower than MD. but the 9550 is really quite
impressive...

regards, mark hahn

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
Ian Thurlbeck                          http://www.stams.strath.ac.uk/
Statistics and Modelling Science, University of Strathclyde
Livingstone Tower, 26 Richmond Street, Glasgow, UK, G1 1XH
Tel: +44 (0)141 548 3667           Fax: +44 (0)141 552 2079
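For readers following along, the md setup Ian describes can be sketched as below. This is a sketch only: the device names /dev/sd[abcd]1 are assumptions (check dmesg for what your controller actually registers), and the commands need root and real hardware to run.

```shell
# Create a 4-disk RAID5 from the drives on the SATA card (names assumed).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial parity sync, then put a filesystem on the array.
cat /proc/mdstat
mkfs.ext3 /dev/md0
```

With 4x500GB drives this yields roughly 1.5TB of usable space, one drive's worth going to parity.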
Re[2]: Recommendations for supported 4-port SATA PCI card ?
Hello Joshua,

JBL> That's exactly why I recommended them. 3w- has been in the kernel a
JBL> *long* time and is extremely stable. Sure, it's expensive for just a
JBL> SATA controller, but not for a solid one that doesn't fall over at
JBL> random times.

Just for my 2 cents: we have a number of different generations of 3ware
cards (IDE and SATA) in our campus network, mostly with Western Digital
drives. It seems quite beneficial to use their RAID Edition series of
drives instead of desktop ones. Besides the longer warranty and, according
to the WD site, better manufacturing and better resistance to vibration,
these drives also report errors (if any) to the host adapter more quickly.
This results in errors being handled gracefully by the RAID hardware,
instead of the whole drive being considered timed out.

In practice, we had some fun months with a RAID5 set made of desktop
Caviars rebuilding once in a while for no apparent reason, with each disk
working well for a long time. And we have had no such problem with RE
disks for over a year now.

Hope this helps some list readers make their choice, and please don't
consider this an advertisement of certain brands ;) If any other
manufacturer offers capabilities similar to WD RE (especially the error
timeouts), please take a closer look if you are considering a hardware
RAID controller.

--
Best regards,
Jim Klimov                          mailto:[EMAIL PROTECTED]
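The error-timeout behavior Jim describes is nowadays exposed as SCT Error Recovery Control. As an aside beyond this thread: on later smartmontools releases (5.41+, well after this exchange) and drives that support SCT ERC, the timers can be inspected and capped from the host — a sketch, assuming such a drive at /dev/sda:

```shell
# Show the drive's current read/write error-recovery timers (if supported).
smartctl -l scterc /dev/sda

# Cap both timers at 7.0 seconds (units are 100 ms), RAID-friendly behavior
# similar to what the RE drives ship with.
smartctl -l scterc,70,70 /dev/sda
```

Desktop drives without ERC support will simply report that the feature is unavailable.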
Re: Recommendations for supported 4-port SATA PCI card ?
I've got several systems with pairs of Promise Technology, Inc. PDC20318
(SATA150 TX4) cards and no problems other than lack of support in the
install kernels back when we got them (about a year and a half ago).

--
 Jon Lewis                   |  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }
On Friday 31 March 2006 03:57, David Greaves wrote:
> Yes. There is a *definite* chance that there is a hardware fault and you
> need to try hard to confirm that it's not. I ended up spending over $100
> on a new (unneeded) PSU - just in case...

You mean that it may (possibly) simply be a hardware fault, not the kernel
issue possibly related to FUA (whatever)? I think not, as I have 4 servers
and have seen this on 2 separate servers while using the VIA vt8237
controller.

I would guess that your vt8237 is controlling ata2 and ata1 (if you have
no other PCI-card SATA controllers in the machine...). Please can you
check if that is true (boot messages in dmesg - however these are probably
already overwritten by those kernel errors).

Secondly - I see errors through yesterday. Are you saying that you still
have problems AFTER you take all the steps of updating to 2.6.16 etc? So
what is the point of the update? Does it happen on the Silicon Image
controller too? Is it simply a problem with the VIA controller chipset and
the drivers? Is this definitely only a SATA issue and not PATA as well?

thanks
Mitchell

> Could you give me more of your experience?

Honestly, it's pretty well documented on linux-ide. The only thing I
haven't always mentioned is when the array went down. I always got it back
through an assemble/force - although once I had 2 of 3 kicked so quickly
that they had the same (lower) event count, and I used the 'faulty'
devices to reconstruct the array.
David

grep ata messages.log:

Mar 26 15:46:47 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 26 15:46:47 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 26 15:47:33 haze kernel: ata2: status=0x59 { DriveReady SeekComplete DataRequest Error }
Mar 26 15:47:33 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 26 15:47:33 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 26 16:20:22 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 26 16:20:22 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 26 19:50:12 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 26 19:50:12 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:03 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:03 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:04 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:04 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:06 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:06 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:08 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:08 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:17 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:17 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:29 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:29 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:33 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:33 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:38 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:38 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:45 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:45 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:52 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:52 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:56 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:56 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:25:59 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:25:59 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:26:03 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:26:03 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:26:14 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:26:14 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:26:20 haze kernel: ata2: no sense translation for op=0x28 cmd=0x25 status: 0x51
Mar 27 06:26:20 haze kernel: ata2: status=0x51 { DriveReady SeekComplete Error }
Mar 27 06:27:03 haze kernel: ata2: no
Re: Recommendations for supported 4-port SATA PCI card ?
> http://linuxmafia.com/faq/Hardware/sata.html
> A little bit outdated though. (Amazing how many RAID cards are fakeraid)

well, the page is useful in that it discusses particular products, but the
fakeraid stuff is just silly name-calling. cards are either real (hardware)
RAID cards or plain multiport cards. fakeraid just means that there is code
in the BIOS to do software RAID 0/1 - not a bad thing in any possible
sense, though also of zero value to many people.

the prevalence of fakeraid simply shows that to be worth doing, hw RAID
requires a pretty hefty piece of hardware, like the 3ware 9550. older
boards that had, for instance, a single PC100 DIMM on them were the kiss
of death: just seeing that, you knew they couldn't move more than 10-20
MB/s, much less than a single disk's worth.
[PATCH] mdadm: monitor event argument passing
I've been looking at the mdadm monitor, and thought it might be useful if
it allowed extra context information (in the form of command line
arguments) to be sent to the event program, so instead of just:

# mdadm -F /dev/md0 -p md_event

you could do something like:

# mdadm -F /dev/md0 -p md_event -i some_info

and the "-i some_info" will be passed on the command line to the event
program. Of course you can usually figure out what the extra context
should be in the event program itself, but it may take more work. Here's a
short patch (against mdadm 2.4) that does this.

Thanks,
Paul

Signed-Off-By: Paul Clements [EMAIL PROTECTED]

 Monitor.c |   23 ++-
 1 files changed, 22 insertions(+), 1 deletion(-)

--- mdadm-2.4/Monitor.c	2006-03-28 17:59:42.000000000 -0500
+++ mdadm-2.4-event-args/Monitor.c	2006-03-31 13:19:40.000000000 -0500
@@ -464,6 +464,27 @@ static void alert(char *event, char *dev
 		   int dosyslog)
 {
 	int priority;
+	int cnt = 0;
+	char path[PATH_MAX];
+	char *space, *ptr;
+	char *args[256];
+
+	if (cmd) {
+		strcpy(path, cmd);
+		ptr = path;
+		do {
+			space = strchr(ptr, ' ');
+			if (!space)
+				break;
+			args[cnt++] = ptr;
+			*space = 0;
+			ptr = space+1;
+		} while (1);
+		args[cnt++] = ptr;
+		args[cnt++] = event;
+		args[cnt++] = dev;
+		args[cnt++] = disc;
+	}

 	if (!cmd && !mailaddr) {
 		time_t now = time(0);
@@ -479,7 +500,7 @@ static void alert(char *event, char *dev
 	case -1:
 		break;
 	case 0:
-		execl(cmd, cmd, event, dev, disc, NULL);
+		execv(path, args); //execl(cmd, cmd, event, dev, disc, NULL);
 		exit(2);
 	}
 }
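A handler compatible with the patched argument order might look like this minimal sketch. The script name and the "some_info" string are illustrative only (not part of the patch); the point is that with the patch applied, any extra words in the -p command string arrive before mdadm's usual event / md-device / component-device arguments.

```shell
# Write a tiny event handler (hypothetical name md_event.sh).
cat > md_event.sh <<'EOF'
#!/bin/sh
# Patched argument order: <extra context> <event> <md device> [component]
extra="$1"; event="$2"; dev="$3"; disc="$4"
echo "[$extra] $event on $dev${disc:+ (component $disc)}"
EOF
chmod +x md_event.sh

# Simulate the execv() the patched alert() would perform on a Fail event:
./md_event.sh some_info Fail /dev/md0 /dev/sdb1
# prints: [some_info] Fail on /dev/md0 (component /dev/sdb1)
```

The handler itself stays trivial; the context string spares it from re-deriving (say, which site or rack an array belongs to) at event time.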
mdadm and large RAID
Hello,

I've been trying to create a very large RAID and have not been successful.
I thought this might be a good place to ask. Here's the story.

We purchased a box with ~8TB of space. It has 2 fiber connections, so, we
thought, to maximize throughput into the device we could build 2 RAIDs in
hardware on the box, export one out of each fiber channel, and then create
a striped RAID of the two devices on the host. Due to the way we created
the 2 RAIDs, each export is ~3TB.

What we've tried: we made a RAID 0 with mdadm over the two devices. An
entry appears in /proc/mdstat showing the RAID. Then, when we try to put
XFS on it, the machine hard-locks. We tried formatting each device
separately and there were no problems. The machine only hard-locks when I
try to put the filesystem on the RAID.

Then I read somewhere that mdadm might not like having the individual
parts that make up the RAID be larger than 2TB. We broke each of the ~3TB
hardware partitions in half at the hardware level, so now we have four
~1.5TB devices. When we try to build a RAID 0 over these with mdadm we get
the same result: the machine hard-locks when we try to format it.

There are no errors in the logs - the machine freezes up too quickly.

Any help would be greatly appreciated.

-Cesar
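For reference, the striping step Cesar describes would normally look like the sketch below. The device names are assumptions (whatever the fibre HBA registers the two LUNs as), and the chunk size is just a common choice, not something from the original message.

```shell
# Stripe the two ~3TB exports into one md device (LUN names assumed).
mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=2 \
      /dev/sdb /dev/sdc

# Confirm the array exists, then create the filesystem --
# this is the step that hard-locks Cesar's machine.
cat /proc/mdstat
mkfs.xfs /dev/md0
```

A hard lock at mkfs time with no logged errors usually points below md: the HBA driver, the block layer's large-device support, or the kernel build, which is why the follow-up asks for those details.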
Re: mdadm and large RAID
On Fri, 31 Mar 2006 at 2:02pm, Cesar Delgado wrote

> Here's the story. We purchased a box with ~8TB space. It has 2 fiber
> connections so, we thought, to maximize throughput into the device we
> could build 2 RAIDs in hardware, on the box, and export each one out
> each fiber channel and then create a striped raid of the two devices.
> Due to the way we created the 2 RAIDs each export is ~3TB.
>
> What we've tried. We tried making a RAID 0 with mdadm over the two
> devices. An entry appears in /proc/mdstat showing the RAID. Then we try
> to put XFS on it, the machine hard locks. We tried formatting each
> separate device and there were no problems. The machine just hard-locks
> when I try to put the filesystem on the RAID.

You're missing a lot of vital information: what distribution are you
using, what version of mdadm, what fiber adapter, etc.

> There's no errors in the logs. The machine freezes up too quickly.

Have you tried a serial console and/or remote syslog?

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
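The serial-console suggestion amounts to a boot-loader config fragment like the one below (paths and the root device are placeholders, not from the thread). With console= set, the kernel mirrors its messages to the serial port, so the oops that never reaches disk can still be captured on a second machine.

```shell
# Hypothetical /boot/grub/menu.lst kernel line -- the console= parameters
# are the relevant part; everything else is a placeholder:
#
#   kernel /vmlinuz ro root=/dev/sda2 console=tty0 console=ttyS0,115200n8
#
# On a second machine connected via null-modem cable, capture the output:
#   screen /dev/ttyS0 115200
# or point syslogd at a remote host (-r on the receiver, @host in
# /etc/syslog.conf on the sender) for the remote-syslog variant.
```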
Re: addendum: was Re: recovering data on a failed raid-0 installation
Jeff,

I would if I could, but I live on a rather limited income here. :( Also,
my chances for employment (other than self-contracted services) in
Phoenix are slim and none (being disabled in a market with a very soft
tech sector can lead to that).

Now, I don't mean to be abrasive, but, so far, I haven't seen much other
than:
1. it can't be done, or
2. hire a paid consultant.

As for 1, if a pro can do it, it stands to reason that someone, somewhere
(not a pro) found the way and made his millions on it. As for 2, that's
not going to be possible on $600.00 a month.

On Friday 31 March 2006 15:10, Jeff Breidenbach wrote:
> Technomage, I recommend hiring a paid consultant; your attitude is a
> little too abrasive for a communal support channel.
Re: addendum: was Re: recovering data on a failed raid-0 installation
mike,

yeah, coaxing the FS into trying to recover seems to be the sticky bit. :(
I have tried all that I know, which is not much considering that this is
not my specialty (I am a unix security admin, unemployed, on disability).

We still have the original drives, and we have a drive-imaging device
arriving (it should have been here today). I can only hope that it will be
able to overcome the problems I have seen with that particular laptop and
its flaky IDE subsystem.

The only problem with the backups: there were none (no spare drives, and
the person that did the setup hadn't realized until too late that there
was an incipient problem with the hardware). :(

If the drive imager is successful in recovering the entire contents of the
drive (he is a forensics specialist and I am retraining to be one), then
we are in business. If not, call it a write-off and move on, I guess. :(

Thanks for responding though.

On Friday 31 March 2006 15:55, you wrote:
> You were in the right spot. I think raid-0 is just a data-lossy format,
> and my first impression of your post was "well, don't pick raid-0, duh"
> - not in a rude way, just that you got the defined behavior of the
> system: data loss on any failure.
>
> I can't imagine how to coax a filesystem to work when it's missing half
> its contents, but maybe a combination of forcing a start on the raid and
> read-only FS mounts could make it hobble along. I'd restore from backup
> and be done with it though.
>
> -Mike
>
> Technomage wrote:
>> well? are you guys tapped out on this or should I be looking elsewhere?
>> This *was* the recommended place to seek out help. still waiting
[PATCH] mdadm 2.4: fix write mostly for add and re-add
The following patch makes it possible to tag a device as write-mostly on
--add and --re-add with a non-persistent-superblock array. Previously,
this was not working.

Thanks,
Paul

Signed-Off-By: Paul Clements [EMAIL PROTECTED]

 Manage.c |    2 ++
 1 files changed, 2 insertions(+)

--- mdadm-2.4/Manage.c	2006-03-28 01:11:11.000000000 -0500
+++ mdadm-2.4-fix-write-mostly/Manage.c	2006-03-31 21:56:37.000000000 -0500
@@ -341,6 +341,8 @@ int Manage_subdevs(char *devname, int fd
 				break;
 			}
 		}
+		if (dv->writemostly)
+			disc.state |= (1 << MD_DISK_WRITEMOSTLY);
 		if (ioctl(fd, ADD_NEW_DISK, &disc)) {
 			fprintf(stderr, Name ": add new device failed for %s as %d: %s\n",
 				dv->devname, j, strerror(errno));
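With the fix applied, the usage this enables is the standard mdadm invocation below. The device names are hypothetical; --write-mostly marks the devices that follow it, which is useful when one mirror half sits on a slower link (e.g. an nbd device).

```shell
# Add a (hypothetically slower) device to a mirror, tagged write-mostly
# so md avoids sending reads to it:
mdadm /dev/md0 --add --write-mostly /dev/sdc1

# The tagged device shows up with a (W) marker:
grep -A 2 '^md0' /proc/mdstat
```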
Re: addendum: was Re: recovering data on a failed raid-0 installation
ok, seems I made a mistake in how this silly mail client is configured.
so, in that light, I do apologize for having this show up on the list
(doh!). this was meant to be private and should have remained that way.

On Friday 31 March 2006 19:01, Technomage wrote:
> Jeff,
> I would if I could. but I live on a rather limited income here. :( also,
> my chances for employment (other than self-contracted services) in
> phoenix are slim and none (being disabled in a market with a very soft
> tech sector can lead to that).
>
> now, I don't mean to be abrasive, but, so far, I haven't seen much other
> than:
> 1. it can't be done or
> 2. hire a paid consultant.
>
> as for 1, if a pro can do it, it stands to reason that someone,
> somewhere (not a pro) found the way and made his millions on it. as for
> 2, not going to be possible on $600.00 a month.
>
> On Friday 31 March 2006 15:10, Jeff Breidenbach wrote:
>> Technomage, I recommend hiring a paid consultant; your attitude is a
>> little too abrasive for a communal support channel.
Re: addendum: was Re: recovering data on a failed raid-0 installation
mike,

given the problem, I have a request.

On Friday 31 March 2006 15:55, Mike Hardy wrote:
> I can't imagine how to coax a filesystem to work when it's missing half
> its contents, but maybe a combination of forcing a start on the raid and
> read-only FS mounts could make it hobble along.

we will test any well-laid-out plan. lay out for us (from beginning to
end) all the steps required in your test. do not be afraid to detail the
obvious. it is better that we be in good communication than to be working
on assumptions. it will save you a lot of frustration trying to correct
for our assumptions, if there are none.

tmh
Re: addendum: was Re: recovering data on a failed raid-0 installation
Well, honestly I'm not really sure. I've never done this, as I only use
the redundant raid levels, and when they're gone things are a complete
hash and there's no hope. In fact, with raid-0 (striping, right? not
linear/append?) I believe you are in the same boat. Each large file will
have half its contents on the disk that died. So really, there's very
little hope. Anyway, I'll try to give you pointers to what I would try,
with as much detail as I can.

First, you just need to get the raid device up. It sounds like you are
actually already doing that, but who knows. If you have one drive but not
the other, you could make a sparse file that is the same size as the disk
you lost. I know this is possible, but haven't done it, so you'll have to
see for yourself - I think there are examples in the linux-raid archives
in reference to testing very large raid arrays. Loopback-mount the file
as a device (losetup is the command to use here) and now you have a
virtual device of the same size as the drive you lost.

Recreate the raid array using the drive you have and the new virtual
drive in place of the one you lost. It's probably best to do this with
non-persistent superblocks and just generally as read-only as possible,
for data preservation on the drive you have. So now you have a raid
array.

For the filesystem, well, I don't know. That's a mess. I assume it's
possible to mount the filesystem with some degree of force (probably
literally a -force argument) as well as read-only. You may need to point
at a different superblock, who knows? You just want to get the filesystem
to mount somehow, any way you need to, but hopefully in a read-only mode.
I would not even attempt to fsck it.

At this point, you have a mostly busted filesystem on a fairly broken
raid setup, but it might be possible to pull some data out of it, who
knows?
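The sparse-file stand-in just described can be sketched as follows. Everything here is hypothetical: the 120G size, the device names, and the chunk size all have to match the lost disk and the original array, and the commands need root.

```shell
# 1. Sparse file the same size as the dead disk (size is a placeholder).
#    Writing one byte at the 120G offset allocates almost no real space.
dd if=/dev/zero of=/tmp/ghost.img bs=1 count=1 seek=120G

# 2. Turn it into a virtual block device.
losetup /dev/loop0 /tmp/ghost.img

# 3. Rebuild the raid-0 with the survivor plus the stand-in.
#    --build uses no persistent superblock, so nothing is written to the
#    surviving disk; device order and chunk size MUST match the original.
mdadm --build /dev/md0 --level=0 --chunk=64 --raid-devices=2 \
      /dev/hda2 /dev/loop0

# 4. Try a read-only mount; expect the filesystem to be badly damaged.
mount -o ro /dev/md0 /mnt/rescue
```

Reads that land on the stand-in half will return zeros, so anything recovered needs checking before it is trusted.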
You could pull out what looks like data but is instead garbage, though -
if you don't have md5sums of the files you get (if you get any), it'll be
hard to tell without checking them all.

Honestly, that's as much as I can think of. I know I'm just repeating
myself when I say this, but raid is no replacement for backups. They have
different purposes, and backups are no less necessary. I was sorry to
hear you didn't have any, because that probably seals the coffin on your
data.

With regard to people recommending you get a pro: in this field (data
recovery) there are software guys (most of the people on this list) who
can do a lot while the platters are spinning, and there are hardware guys
(the pros I think most people are talking about). They have physical
tools that can get data out of platters that wouldn't spin otherwise.
There's nothing the folks on the list can really do other than recommend
seeing (or shipping the drive to) one of those dudes. When you get the
replacement drive back from them with your data on it, then we're back in
software land and you may have half a chance.

That said, it sounded like you had already tried to fsck the filesystem
on this thing, so you may have hashed the remaining drive. It's hard to
say. Truly bleak though...

-Mike

Technomage wrote:
> mike.
> given the problem, I have a request.
>
> On Friday 31 March 2006 15:55, Mike Hardy wrote:
>> I can't imagine how to coax a filesystem to work when it's missing half
>> its contents, but maybe a combination of forcing a start on the raid
>> and read-only FS mounts could make it hobble along.
>
> we will test any well laid out plan. lay out for us (from beginning to
> end) all the steps required, in your test. do not be afraid to detail
> the obvious. it is better that we be in good communication than to be
> working on assumptions. it will save you a lot of frustration trying to
> correct for our assumptions, if there are none.
>
> tmh