RE: System / libata IDE controller woes (long)
Well, after more mucking about, and after copying data off of the production LVM onto the backup LVM, I noticed that no matter where I put this one Seagate drive, it caused dma_timer_expiry errors. Once I replaced that drive, everything settled down again and has been running normally.

So it's not that the old IDE driver code can't handle that many controllers; it can. A similar configuration is also no problem for libata. Both work, and work well.

I have to admit that it took me a long time to figure out that the drive was the problem. I guess that sort of check should move higher in the diagnosis decision tree. You live, you learn.

Thanks for everyone's patience and help in this. It helped me keep my sanity through all of it.

Cheers,
    Erik.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
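For anyone hitting similar symptoms, a quick way to see whether one drive dominates the error log is to tally the timeout messages per device. A minimal sketch (the log path and message patterns are assumptions based on the extracts earlier in this thread):

```shell
#!/bin/sh
# tally_ide_errors: count IDE timeout/error messages per hdX device in a
# syslog-style file, most frequent device first. The grep patterns match
# the dma_timer_expiry / DMA timeout / irq timeout / lost interrupt lines
# seen in this thread; extend them for other drivers' message formats.
tally_ide_errors() {
    grep -E 'hd[a-z]: (dma_timer_expiry|DMA timeout|irq timeout|lost interrupt)' "$1" \
      | sed -E 's/.*(hd[a-z]):.*/\1/' \
      | sort | uniq -c | sort -rn
}
```

Run as `tally_ide_errors /var/log/messages`. If a single drive tops the list even after being moved to a different controller position, that is a strong hint the drive itself, not the controller, is at fault.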
RE: System / libata IDE controller woes (long)
Hi there!

Yeah, I thought that it might be power-related as well, so I moved half of the drives from the 500 Watt power supply onto a separate one, and it did not change any of the symptoms. So I think that's been ruled out.

Thanks,
    Erik.

> Hello,
>
> Erik Ohrnberger wrote:
> > Earlier this year, when I started putting it together, I gathered my
> > hardware. A decent 2 GHz Athlon system with 512 MB RAM, DVD drive, a
> > 40 GB system drive, and a 500 Watt power supply. Then I started
> > adding hard disks. To date, I've got 5 80 GB PATA, 2 200 GB PATA, and 1 60 GB PATA.
>
> That's 9 hard drives. How did you hook up your power supply?
> My dual-rail 450 W PS has a lot of problems driving 9 drives
> no matter how I hook it up, while my 350 W power supply can
> happily handle the load. I suspect it's because of how the
> separate 12 V rails are configured in the PS.
>
> It's nothing concrete, but I want to rule out a PS issue first. If
> you've got an extra power supply (buy a cheap 350 W one if you
> don't have one), hook half of the drives to it and see what
> happens. Using a PS without a motherboard is easy; just ask Google.
>
> Happy holidays.
>
> --
> tejun
System / libata IDE controller woes (long)
First off, Merry Christmas, Season's Greetings, and Happy Holidays!

Hang on, this is a bit of a long story, but I think you'll need the information and background.

I want what amounts to a NAS, which I'd like to build on Gentoo Linux. I'm familiar with Gentoo and the use of EVMS, so I think I'm pretty well prepared from that perspective.

Earlier this year, when I started putting it together, I gathered my hardware: a decent 2 GHz Athlon system with 512 MB RAM, a DVD drive, a 40 GB system drive, and a 500 Watt power supply. Then I started adding hard disks. To date, I've got 5 80 GB PATA, 2 200 GB PATA, and 1 60 GB PATA.

I mounted the drives on a set of aluminum rails that I had a friend make for me. They run vertically and have slots through which screws are tightened into the drives' normal mounting holes. All the communication cables are 80-conductor cables and run pretty much straight to the controller cards, while the power pigtails fan out on the side of the 'tower'. With all these hard drives, I also got 3 Promise 20269 IDE controllers.

After I put it all together, I created 2 logical volumes: one linked EVMS LV, and one RAID5 across the 5 80 GB drives. To support this configuration, I connected the drives in the following manner (using /dev/hdX notation):

ide0: /dev/hda = system boot disk (motherboard)   /dev/hdb = DVD-ROM
ide1: /dev/hdc = nothing                          /dev/hdd = nothing
ide2: /dev/hde = raid disk (first Promise card)   /dev/hdf = lvm disk
ide3: /dev/hdg = raid disk                        /dev/hdh = lvm disk
ide4: /dev/hdi = raid disk (second Promise card)  /dev/hdj = lvm disk
ide5: /dev/hdk = raid disk                        /dev/hdl = nothing
ide6: /dev/hdm = raid disk (third Promise card)   /dev/hdn = nothing
ide7: /dev/hdo = nothing                          /dev/hdp = nothing

From what I understood, this is how you want to connect a set of raid drives so that no one controller is overloaded with IO. But I had to use the other ports to connect the LVM.
I started to get 'dma_expiry' errors (see messages file extract below):

Dec 22 21:29:33 livecd hdg: dma_timer_expiry: dma status == 0x21
Dec 22 21:29:43 livecd hdg: DMA timeout error
Dec 22 21:29:43 livecd hdg: dma timeout error: status=0x50 { DriveReady SeekComplete }
Dec 22 21:29:43 livecd ide: failed opcode was: unknown
Dec 22 21:29:43 livecd hdg: task_in_intr: status=0x51 { DriveReady SeekComplete Error }
Dec 22 21:29:43 livecd hdg: task_in_intr: error=0x04 { DriveStatusError }
Dec 22 21:29:43 livecd ide: failed opcode was: unknown
Dec 22 21:29:43 livecd hdg: task_in_intr: status=0x51 { DriveReady SeekComplete Error }
Dec 22 21:29:43 livecd hdg: task_in_intr: error=0x04 { DriveStatusError }
Dec 22 21:29:43 livecd ide: failed opcode was: unknown
Dec 22 21:29:43 livecd hdg: task_in_intr: status=0x51 { DriveReady SeekComplete Error }
Dec 22 21:29:43 livecd hdg: task_in_intr: error=0x04 { DriveStatusError }
Dec 22 21:29:43 livecd ide: failed opcode was: unknown
Dec 22 21:29:43 livecd hdg: task_in_intr: status=0x51 { DriveReady SeekComplete Error }
Dec 22 21:29:43 livecd hdg: task_in_intr: error=0x04 { DriveStatusError }
Dec 22 21:29:43 livecd ide: failed opcode was: unknown
Dec 22 21:29:43 livecd PDC202XX: Secondary channel reset.
Dec 22 21:29:43 livecd ide3: reset: success
Dec 22 21:30:03 livecd hdg: dma_timer_expiry: dma status == 0x21
Dec 22 21:30:15 livecd hdg: DMA timeout error
Dec 22 21:30:15 livecd hdg: dma timeout error: status=0x80 { Busy }
Dec 22 21:30:15 livecd ide: failed opcode was: unknown
Dec 22 21:30:15 livecd hdg: DMA disabled
Dec 22 21:30:15 livecd PDC202XX: Secondary channel reset.
Dec 22 21:30:20 livecd ide3: reset: success
Dec 22 21:36:58 livecd hdg: irq timeout: status=0x80 { Busy }
Dec 22 21:36:58 livecd ide: failed opcode was: unknown
Dec 22 21:36:58 livecd PDC202XX: Secondary channel reset.
Dec 22 21:37:33 livecd ide3: reset timed-out, status=0x80
Dec 22 21:37:33 livecd hdg: status timeout: status=0x80 { Busy }
Dec 22 21:37:33 livecd ide: failed opcode was: unknown
Dec 22 21:37:33 livecd PDC202XX: Secondary channel reset.
Dec 22 21:37:33 livecd hdg: drive not ready for command
Dec 22 21:37:48 livecd ide3: reset: success
Dec 22 21:37:58 livecd hdg: lost interrupt

These errors caused the raid array to crash repeatedly, so I gave up on that, changed the raid to an EVMS drive-linked logical volume, and changed the connections to the following:

ide0: /dev/hda = system boot disk (motherboard)   /dev/hdb = DVD-ROM
ide1: /dev/hdc = nothing                          /dev/hdd = nothing
ide2: /dev/hde = lvm1 (first Promise card)        /dev/hdf = lvm1
ide3: /dev/hdg = lvm1                             /dev/hdh = lvm1
ide4: /dev/hdi = lvm1 (second Promise card)       /dev/hdj = nothing
ide5: /dev/hdk = lvm2                             /dev/hdl = lvm2
ide6: /dev/hdm = lvm2 (third Promise card)        /dev/hdn = nothing
ide7: /dev/hdo = nothing                          /dev/hdp =
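One isolation step that can shorten a hunt like this is to test the suspect drive directly before rewiring anything. A command sketch (device name taken from the log above; the hdparm/smartctl flags here are the common ones from that era, so verify against your installed man pages):

```shell
# Drop the suspect drive to PIO; if the timeouts stop, the drive or its
# cable cannot sustain DMA at the negotiated speed.
hdparm -d0 /dev/hdg

# Re-enable DMA at a slower UDMA mode to see where it becomes stable.
hdparm -d1 -X udma2 /dev/hdg

# Check SMART attributes for reallocated sectors and UDMA CRC errors,
# which point at a failing drive or a bad cable respectively.
smartctl -a /dev/hdg
```

If the errors then follow the drive to a different channel or controller, the drive itself is the prime suspect.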
Odd system lock up
OK, I've got the 2.6.19 kernel installed and running, with full libata handling of the existing IDE controllers and hard disks. I'm experiencing some odd, random, periodic system lockups without any sort of debugging information being captured in the system message log. Perhaps it's a hard disk that's causing the trouble? Is there a way to capture which drive might be causing the issue in the message log?
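When a hard lockup leaves nothing on the local disk, one option is to ship kernel messages to another machine with netconsole, so any oops text survives the hang. A configuration sketch (the IPs, MAC address, and interface below are placeholders, and the NIC driver on the locked-up box must support netpoll):

```shell
# On the machine that locks up -- either as a kernel boot parameter:
#   netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55
#   (local-port@local-ip/interface,remote-port@remote-ip/remote-mac)
# or loaded as a module at runtime:
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55

# On the receiving machine, capture the UDP stream:
nc -l -u -p 6666 | tee lockup.log
```

If the lockup prints nothing even over netconsole, booting with `nmi_watchdog=1` can sometimes make the kernel emit a trace when it detects the hang.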
RE: libata update?
> -----Original Message-----
> From: Robert Hancock [mailto:[EMAIL PROTECTED]
> Sent: Sunday, December 03, 2006 10:52 AM
> To: [EMAIL PROTECTED]
> Cc: linux-kernel@vger.kernel.org
> Subject: Re: libata update?
>
> Erik Ohrnberger wrote:
> > It's been a number of months since I built a custom kernel with
> > libata incorporated to deal with multiple Promise EIDE controller
> > cards' 'DMA Expiry' errors.
> >
> > I'd like to rebuild basically the same thing with an updated kernel
> > and updated libata. Could someone point me in the right direction
> > to achieve this?
> >
> > Thanks in advance,
> >     Erik.
>
> All of the PATA stuff was merged in 2.6.19; if you want the
> very latest and greatest, you can try the -mm tree.

Not to sound too dense or anything, but is there a HOWTO for enabling the libata code in this kernel? I've read through the Documentation/kernel-parameters.txt file, and it seems like all I have to do is add libata.atapi_enabled=1 combined_mode=libata to the kernel line in grub.conf to enable it. But when I boot that kernel with those options, I get an error message, 'Block device /dev/sda1 is not a valid root device', and a 'boot() ::' prompt.

Thanks in advance,
    Erik.
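For what it's worth, that "not a valid root device" error usually means one of two things: the driver for the controller hosting the root disk isn't built into the kernel (or into the initrd), or the disk came up under a different name, since libata exposes PATA disks as /dev/sdX in probe order rather than by fixed /dev/hdX positions. A grub.conf sketch under those assumptions (paths and device names are examples, not a known-good config for this box):

```shell
# With nine PATA disks, which one becomes sda depends on controller probe
# order -- check the kernel boot messages, and update /etc/fstab to match.
title Gentoo Linux 2.6.19 (libata)
root (hd0,0)
kernel /boot/vmlinuz-2.6.19 root=/dev/sda1 libata.atapi_enabled=1
```

If the root= device is right and it still fails, the missing piece is almost certainly the controller driver: it must be compiled in (=y) or present in the initrd.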
libata update?
It's been a number of months since I built a custom kernel with libata incorporated to deal with multiple Promise EIDE controller cards' 'DMA Expiry' errors. I'd like to rebuild basically the same thing with an updated kernel and updated libata. Could someone point me in the right direction to achieve this?

Thanks in advance,
    Erik.
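A rough starting point for the kernel configuration side (a sketch only: the option names below are from the PATA driver set merged around 2.6.19, and the claim that the PDC20269 is handled by the pdc2027x driver is my assumption; verify both in menuconfig under Device Drivers -> Serial ATA and Parallel ATA drivers):

```shell
# .config fragment (sketch)
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y       # libata disks appear as SCSI disks (/dev/sdX)
CONFIG_ATA=y              # the libata core
CONFIG_PATA_PDC2027X=y    # Promise PDC20268..20277 family (assumed to cover 20269)
# Leave the old CONFIG_IDE driver disabled so the two drivers don't both
# try to claim the same controllers.
```

After rebuilding, the drives move from /dev/hdX to /dev/sdX names, so the root= parameter and /etc/fstab need to be updated to match.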