Re: problems with Hitachi 1TB SATA drives
* Jeremy Chadwick ([EMAIL PROTECTED]) wrote:

> * I'm left questioning why a disk manufacturer would process drives
> (by this I mean the manufacturing process) differently based on their
> transport type. It would cost a *huge* amount of money to have
> separate fabs for SCSI, SAS, and SATA/PATA.

On one side you've got consumer drives, where price and capacity are king; you want cheap, mass-produced, large, fairly slow disks, made for modest duty cycles. On the other you've got drives that live in servers in their thousands, running 24/7 on IO-heavy workloads, where performance and reliability are king and price is far less important; you end up with smaller, sturdier platters, more powerful actuators and motors, and more extensive testing, not to mention slower growth in capacity.

> * All this leads me to the topic of backups. Hard disks are growing
> in capacity at a rate which the backup industry cannot follow. It's
> getting to the point where you have to buy hard drives to back up the
> data on other hard drives, but anyone with half a brain knows RAID is
> not a replacement for backups.

So you back up one disk to another using proper backup tools and not a RAID system. Have some disks off-site, some offline, and you end up with something that's "good enough" for most people. It'll do me until we get memory diamond, anyway ;)

> * SCSI is outrageously expensive even in 2007. I have yet to see any
> shred of justification for why SCSI costs so much *even today*. It
> costs only a smidgen less than it did 15 years ago.

They don't look that expensive to me; sure, when you compare capacity with SATA it's expensive, but the platters are several times smaller, they spin faster, they have better testing, fancier materials, smarter firmware... for a server, why wouldn't you spend a few times more for something with 4x faster seeks and 4x lower failure rates? I have servers with 28 disks and 12-disk RAID-0s.
I'm happy to pay extra so I'm not replacing and rebuilding every week :)

> * SCSI is on its way out. Seagate recently announced that
> they'll no longer be supporting SCSI products, possibly by the end of
> next year:
>
> "Seagate has announced that by next year they will no longer be
> supporting SCSI product and will be moving customers to the SATA
> interface."
> http://www.horizontechnology.com/news/market/market_perspective_storage_04-11-2007.php
>
> I'm willing to bet others will follow suit.

They almost certainly only mean U320; I severely doubt SAS is going anywhere. I really hope so, since Seagate are currently the only company making 15kRPM 2.5" SAS disks (afaik). I do find it surprising they're doing this so soon though. I would have thought they'd have long term support contracts for various server vendors who've only very recently started moving over to SAS. I guess they're expecting their stockpiles to keep people in replacements for the next few years.

-- Thomas 'Freaky' Hurst http://hur.st/

___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: problems with Hitachi 1TB SATA drives
Jeremy Chadwick wrote:

> * Hard disks are growing in capacity, but are not growing in physical
> size. We're pushing 1TB in a 3.5" form factor. And the same applies to
> laptop (2.5") drives. The margin of error continues to increase as we
> try to cram more and more data in such a small medium. I personally
> would *love* to see drives go back to using a 5.25" form factor,
> especially for large capacity disks, since chances are it means higher
> reliability (read: less chance of error).

As far as reliability goes, I agree. However, the problem is, you cannot make 5.25" disks spin at 10 or 15 krpm. Well, maybe you can, but it's a hell of an engineering problem. Even 7200 rpm isn't trivial to do for such large discs. And who wants to buy a slow 3600 rpm 5.25" drive? Apart from that, the larger radius also means slower end-to-end movement for the heads.

> * All this leads me to the topic of backups. Hard disks are growing in
> capacity at a rate which the backup industry cannot follow. It's
> getting to the point where you have to buy hard drives to back up the
> data on other hard drives, but anyone with half a brain knows RAID is
> not a replacement for backups.

Correct, RAID and backups are completely different. But you can use disk drives for both. I solved my backup problem by putting a hot-swap ATA frame into my home server (they're pretty cheap nowadays), and using a bunch of ATA disks as removable media. It's just like tape backups, but much cheaper, faster and easier to use. It beats every tape technology hands down.

> going to sit around once a week backing up a terabyte of data to ~120
> dual-layer 8.5GB DVDs?

I wouldn't even start thinking about considering that.

> The closest thing out there right now is
> a product from IOMega called REV, which (at most) offers 70GB of storage
> per disk, or 140GB with compression.
>
> A new IOMega REV (which includes one 70GB disk) costs US$600 MSRP. You
> read that right.

Ugh.
For US$600 you get four 400 GB disk drives, including four trays and one frame (hot-swap capable). That's 1.6 TB of backup capacity. Compare that to 70 GB. I also guess that that "REV" thing is much slower than an ATA disk.

Best regards Oliver

-- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M. Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung: secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht München, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd "With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead." -- RFC 1925
Re: problems with Hitachi 1TB SATA drives
At 08:44 PM 7/23/2007, Bill Swingle wrote:

> After several tries I was able to get both disks newfs'd and mounted
> but they quickly fell down with DMA timeouts. On one occasion the
> machine actually panic'd too:

Hi,

What options do you have set in the BIOS for the SATA controller? Do you have anything like "compatibility" mode? If so, try turning it off and put it in the "advanced" or "native" mode. Also I found with some older ICH4 controllers (don't know about ICH5) that if you disabled the secondary IDE controller in the BIOS, "bad things" happen. If you have it off, try turning it back on.

---Mike
Re: problems with Hitachi 1TB SATA drives
Josh Paetzel wrote:

> On Tuesday 24 July 2007, Jeremy Chadwick wrote:
>
>> * SCSI is outrageously expensive even in 2007. I have yet to see
>> any shred of justification for why SCSI costs so much *even today*.
>> It costs only a smidgen less than it did 15 years ago.
>>
>> * SCSI is on its way out. Seagate recently announced that
>> they'll no longer be supporting SCSI products, possibly by the end
>> of next year:
>>
>> "Seagate has announced that by next year they will no longer be
>> supporting SCSI product and will be moving customers to the SATA
>> interface."
>> http://www.horizontechnology.com/news/market/market_perspective_storage_04-11-2007.php
>>
>> I'm willing to bet others will follow suit.
>
> It's more than just an interface. SCSI drives are manufactured with
> completely different components than IDE/SATA drives. The platters
> have different materials on them, the heads are different, the
> actuators are different. The higher spindle speeds present different
> engineering challenges; if you know anything about physics you'll
> realize the difference between spinning something at 7200rpm and
> 15,000rpm is not linear in terms of the forces involved.

Actually, the reliability/component quality argument really isn't true anymore. This was especially the case with the IBM DDYS Ultrastar line, and I've heard many rumors since then of the trend continuing. It may not be a universal truth, but it's not the easy distinction that it used to be.

> You're really paying for two things when you buy SCSI/SAS.
>
> reliability under 100% duty cycle

See above.

> seek times

Yes, very true, both from a spindle speed perspective and from a queue depth perspective. All that said, I'd love to be able to afford SAS for all of my computers. For real workloads, it's far superior to SATA.

Scott
Re: problems with Hitachi 1TB SATA drives
Wilko Bulte wrote:

> On Tue, Jul 24, 2007 at 11:26:04AM -0700, Jeremy Chadwick wrote..
>> On Tue, Jul 24, 2007 at 12:30:49PM -0500, Josh Paetzel wrote:
>>> I don't have any experience with the Hitachi 1TB SATA drives, but I
>>> know an outfit that was trying out the Seagate 1TB drives and had 8
>>> out of 12 fail their burn-in (a 3 day torture test) My luck with
>>> consumer SATA drives has been incredibly dismal, with ~40 of them in
>>> service I see multiple failures a year, including drives being DOA
>>> and dying after a few weeks of service. I wouldn't be at all
>>> surprised if one or both of the drives was bad right out of the box.
>
>> makes backing up 300GB+ of data easy. Everything that's capable of
>> doing this is in the tens of thousands of US dollars, if not more. Am I
>> going to sit around once a week backing up a terabyte of data to ~120
>> dual-layer 8.5GB DVDs? Nope. The closest thing out there right now is
>
> Which are only available in write-once in dual-layer so you would soon have
> a landfill worth of DVDs.
>
>> A new IOMega REV (which includes one 70GB disk) costs US$600 MSRP. You
>> read that right.
>
> Pff. Find a pre-owned SuperDLT or LTO drive? Not the cheapest I guess,
> but dual-layer DVDs are not a solution IMHO.
>
> Or get a Blu-ray disk? Also still $$
>
> I'm using an LTO2 drive myself.
>
>> * SCSI is outrageously expensive even in 2007. I have yet to see any
>> shred of justification for why SCSI costs so much *even today*. It
>> costs only a smidgen less than it did 15 years ago.

For non-silly benchmarks, SCSI/SAS/FC is still far superior to SATA, and that is what you as the consumer are paying for. But without those high margins, you as the consumer won't have SATA either, at least not in the current business model. How do you think that R&D gets funded at drive companies? It's definitely not from the razor-thin margins that SATA has.
Companies like WD make it work by having a very diverse business to fund SATA, but that's really no different than having SCSI/SAS/FC to fund SATA (though maybe less volatile).

>> * SCSI is on its way out. Seagate recently announced that
>> they'll no longer be supporting SCSI products, possibly by the end of
>> next year:

Yes, Seagate might be saying this, and I won't comment on the wisdom of it, other than to say that SAS/FC is not dead despite what Seagate wishes to do. In the future flash storage might ultimately overtake and replace platter storage, but that future has not yet arrived.

>> "Seagate has announced that by next year they will no longer be
>> supporting SCSI product and will be moving customers to the SATA
>> interface."
>> http://www.horizontechnology.com/news/market/market_perspective_storage_04-11-2007.php
>
> I imagine this is meant to read as: parallel SCSI, as opposed to SAS.
> SAS is very much alive.

If Seagate is plotting a course to be a SATA-only company, I'm not terribly surprised. I would be saddened, though, and I feel sorry for my friends and neighbors who are currently Seagate employees.

Scott
Re: problems with Hitachi 1TB SATA drives
Hi, all!

> To add fuel to the fire, Seagate *again* sent me a refurbished drive
> (and I used their advanced replacement program), which has sat in a box
> unopened since received. I ended up buying two WD 500GB drives to
> replace the single Seagate; one of the drives is used for nothing other
> than doing incremental dump(8)s of the other (and the main OS drive).
> If either of those 500GB drives fail, I'll be able to recover in some
> way somewhat painlessly.

We mirror all servers now, given the low prices of S-ATA disks and gmirror(8). Of course this is not backup, but it can be a foundation for backup ...

> * All this leads me to the topic of backups. Hard disks are growing in
> capacity at a rate which the backup industry cannot follow. It's
> getting to the point where you have to buy hard drives to back up the
> data on other hard drives, but anyone with half a brain knows RAID is
> not a replacement for backups.

RAID or mirroring gives you availability and resiliency for single disk failures (at least). Backup should give you an archive of file versions reaching back a certain amount of time to restore slowly corrupted data. The backup should be on a server and/or media different than the backed up one. So we are using a system with two 2 TB RAID 5 volumes that serves as an Amanda backup server. Backups are stored in vtapes on the disks (read: directories) managed by Amanda. Since there is no way that a slow filesystem corruption on one of the clients could affect the backup server, I'd call this a reasonable solution to the capacity problem. If the backup server goes up in smoke, it is highly unlikely that we have a severe need of a restore at the very same instant. We cannot carry the tape magazines around anymore, so if the data centre goes up in flames, we are out of luck. Eventually a second data centre, second backup server, fast connection and rsync(1) will do the trick.
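[Editorial aside: a vtape setup like the one described above comes down to a few lines of amanda.conf. This is a from-memory sketch with hypothetical paths, so check the Amanda documentation before relying on it:]

```
# amanda.conf excerpt -- hypothetical paths, syntax from memory
tpchanger "chg-disk:/amanda/vtapes"   # directories stand in for tapes
tapedev   "file:/amanda/vtapes"
tapecycle 15                          # number of vtapes in the rotation
```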
Which leads us to private backups:

> There is presently nothing __affordable on the consumer market__ which
> makes backing up 300GB+ of data easy.

RAID1 USB2 case for S-ATA disk: USD 400
2 WD RE2 500 GB drives: USD 600
Regular incremental backups: priceless ;-))

Again I'm just doing versioned backups to redundant disks. When I'm not doing backups, I'm not connecting the volume to my Mac, so software failure, worms, whatever, ... cannot by accident destroy the backups. Now, long term archiving ... that's another interesting topic.

Kind regards, Patrick

-- punkt.de GmbH * Vorholzstr. 25 * 76137 Karlsruhe Tel. 0721 9109 0 * Fax 0721 9109 100 [EMAIL PROTECTED] http://www.punkt.de Gf: Jürgen Egeling AG Mannheim 108285
Re: problems with Hitachi 1TB SATA drives
On Tuesday 24 July 2007, Jeremy Chadwick wrote:

> * SCSI is outrageously expensive even in 2007. I have yet to see
> any shred of justification for why SCSI costs so much *even today*.
> It costs only a smidgen less than it did 15 years ago.
>
> * SCSI is on its way out. Seagate recently announced that
> they'll no longer be supporting SCSI products, possibly by the end
> of next year:
>
> "Seagate has announced that by next year they will no longer be
> supporting SCSI product and will be moving customers to the SATA
> interface."
> http://www.horizontechnology.com/news/market/market_perspective_storage_04-11-2007.php
>
> I'm willing to bet others will follow suit.

It's more than just an interface. SCSI drives are manufactured with completely different components than IDE/SATA drives. The platters have different materials on them, the heads are different, the actuators are different. The higher spindle speeds present different engineering challenges; if you know anything about physics you'll realize the difference between spinning something at 7200rpm and 15,000rpm is not linear in terms of the forces involved.

You're really paying for two things when you buy SCSI/SAS: reliability under 100% duty cycle, and seek times.

As far as that article goes, I wonder if they are including SAS in the SATA category or the SCSI category. It's perfectly reasonable to phase out U320 SCSI. I can't see SAS going away any time soon.

-- Thanks, Josh Paetzel
Re: problems with Hitachi 1TB SATA drives
On Tue, Jul 24, 2007 at 11:26:04AM -0700, Jeremy Chadwick wrote..

> On Tue, Jul 24, 2007 at 12:30:49PM -0500, Josh Paetzel wrote:
> > I don't have any experience with the Hitachi 1TB SATA drives, but I
> > know an outfit that was trying out the Seagate 1TB drives and had 8
> > out of 12 fail their burn-in (a 3 day torture test) My luck with
> > consumer SATA drives has been incredibly dismal, with ~40 of them in
> > service I see multiple failures a year, including drives being DOA
> > and dying after a few weeks of service. I wouldn't be at all
> > surprised if one or both of the drives was bad right out of the box.
>
> makes backing up 300GB+ of data easy. Everything that's capable of
> doing this is in the tens of thousands of US dollars, if not more. Am I
> going to sit around once a week backing up a terabyte of data to ~120
> dual-layer 8.5GB DVDs? Nope. The closest thing out there right now is

Which are only available in write-once in dual-layer so you would soon have a landfill worth of DVDs.

> A new IOMega REV (which includes one 70GB disk) costs US$600 MSRP. You
> read that right.

Pff. Find a pre-owned SuperDLT or LTO drive? Not the cheapest I guess, but dual-layer DVDs are not a solution IMHO.

Or get a Blu-ray disk? Also still $$

I'm using an LTO2 drive myself.

> * SCSI is outrageously expensive even in 2007. I have yet to see any
> shred of justification for why SCSI costs so much *even today*. It
> costs only a smidgen less than it did 15 years ago.
>
> * SCSI is on its way out. Seagate recently announced that
> they'll no longer be supporting SCSI products, possibly by the end of
> next year:
>
> "Seagate has announced that by next year they will no longer be
> supporting SCSI product and will be moving customers to the SATA
> interface."
> http://www.horizontechnology.com/news/market/market_perspective_storage_04-11-2007.php

I imagine this is meant to read as: parallel SCSI, as opposed to SAS. SAS is very much alive.
-- Wilko Bulte [EMAIL PROTECTED]
Re: problems with Hitachi 1TB SATA drives
On Tue, Jul 24, 2007 at 12:30:49PM -0500, Josh Paetzel wrote:

> I don't have any experience with the Hitachi 1TB SATA drives, but I
> know an outfit that was trying out the Seagate 1TB drives and had 8
> out of 12 fail their burn-in (a 3 day torture test) My luck with
> consumer SATA drives has been incredibly dismal, with ~40 of them in
> service I see multiple failures a year, including drives being DOA
> and dying after a few weeks of service. I wouldn't be at all
> surprised if one or both of the drives was bad right out of the box.
> It could be something else of course, but don't discount the fact
> that they could be bad from your troubleshooting just because they
> are new.

This is good advice, and I considered including it in my (long) previous Email. I removed it at the last minute, however, because the only evidence shown in the mail was ad4 behaving oddly.

Some off-topic-of-thread facts worth pointing out, as well as some experiences I've had with disks failing out-of-the-box, and a recent failure that cost me quite a lot of data (some of which financial):

* If both of those drives were bought at the same time from the same place, chances are both came from the same fab and were manufactured at the same time. Disk fabs have historically been proven to have "batches" of bad disks (if you want sources I can likely dig up articles discussing it, most of which are confirmed by the manus stating they had a manufacturing process that was questionable from X date to Y date). So, there is a much higher chance of both of those drives being bad if they were bought at the same time, vs. if you bought each drive separately at different times of the year.

* Hard disks are growing in capacity, but are not growing in physical size. We're pushing 1TB in a 3.5" form factor. And the same applies to laptop (2.5") drives. The margin of error continues to increase as we try to cram more and more data in such a small medium. I personally would *love* to see drives go back to using a 5.25" form factor, especially for large capacity disks, since chances are it means higher reliability (read: less chance of error).

* There's a lot of common Internet talk about SATA/PATA drives being less reliable than SCSI. My own opinion (based on years of experience with both workstations and servers) is identical. I've run "old" 36GB SCSI drives for years with, at most, 1 grown defect; while in comparison, I have replaced more PATA drives than I can count. SATA drives fall somewhere in-between (fewer overall failures). Example: Three months ago I bought a new 500GB Seagate SATA disk and had it fail during the initial newfs. Seagate's own tools determined the disk did indeed have bad blocks. I RMA'd it. The refurbished replacement I received (thanks for not sending me a new drive!) died about 3 months later, and I lost almost 300GB of data, and of that about ~100GB was irreplaceable. It's my own fault for not doing backups [see below]. Another RMA. To add fuel to the fire, Seagate *again* sent me a refurbished drive (and I used their advanced replacement program), which has sat in a box unopened since received. I ended up buying two WD 500GB drives to replace the single Seagate; one of the drives is used for nothing other than doing incremental dump(8)s of the other (and the main OS drive). If either of those 500GB drives fail, I'll be able to recover in some way somewhat painlessly.

* I'm left questioning why a disk manufacturer would process drives (by this I mean the manufacturing process) differently based on their transport type. It would cost a *huge* amount of money to have separate fabs for SCSI, SAS, and SATA/PATA. It also would make no sense to have employees/workers handling/building SCSI disks "more carefully" than SATA or PATA. I would assume they're all handled in the same way. But I've never worked in a HD fab, so this is speculative.

* All this leads me to the topic of backups. Hard disks are growing in capacity at a rate which the backup industry cannot follow. It's getting to the point where you have to buy hard drives to back up the data on other hard drives, but anyone with half a brain knows RAID is not a replacement for backups. There is presently nothing __affordable on the consumer market__ which makes backing up 300GB+ of data easy. Everything that's capable of doing this is in the tens of thousands of US dollars, if not more. Am I going to sit around once a week backing up a terabyte of data to ~120 dual-layer 8.5GB DVDs? Nope. The closest thing out there right now is a product from IOMega called REV, which (at most) offers 70GB of storage per disk, or 140GB with compression. A new IOMega REV (which includes one 70GB disk) costs US$600 MSRP. You read that right.

* SCSI is outrageously expensive even in 2007. I have yet to see any shred of justification for why SCSI costs so much *even today*. It costs only a smidgen less than it did 15 years ago.

* SCSI is on
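[Editorial aside: the "~120 dual-layer DVDs per terabyte" figure above holds up as a back-of-envelope calculation, assuming decimal (as-marketed) capacities; real filesystem overhead would push the count a bit higher:]

```python
import math

# Sanity check of the "~120 dual-layer DVDs per terabyte" figure.
terabyte = 1000 * 10**9     # 1 TB in bytes (decimal, as marketed)
dvd_dl = int(8.5 * 10**9)   # dual-layer DVD capacity in bytes

discs = math.ceil(terabyte / dvd_dl)
print(discs)  # 118 discs before overhead, i.e. "~120" in round numbers
```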
Re: problems with Hitachi 1TB SATA drives
On Tuesday 24 July 2007, Daniel O'Connor wrote:

> On Tue, 24 Jul 2007, Jeremy Chadwick wrote:
> > On Mon, Jul 23, 2007 at 07:40:21PM -0700, Bill Swingle wrote:
> > > Doh, I knew I forgot something in my original email.
> > > Here's the full dmesg: http://dub.net/rum.dub.net.dmesg
> >
> > Actually you did include this in your original Email. I think
> > Daniel overlooked it. :-)
>
> Oops, maybe it was an attachment I forgot to read.
>
> As you say later - it would be good to know what mode the chipset
> is in.
>
> Might be worth trying AHCI mode if you have it (although maybe ICH5
> is too old for that?)

I don't have any experience with the Hitachi 1TB SATA drives, but I know an outfit that was trying out the Seagate 1TB drives and had 8 out of 12 fail their burn-in (a 3 day torture test). My luck with consumer SATA drives has been incredibly dismal, with ~40 of them in service I see multiple failures a year, including drives being DOA and dying after a few weeks of service. I wouldn't be at all surprised if one or both of the drives was bad right out of the box. It could be something else of course, but don't discount the fact that they could be bad from your troubleshooting just because they are new.

-- Thanks, Josh Paetzel
RE: problems with Hitachi 1TB SATA drives
Bill Swingle wrote:

> I have a fileserver that currently has a 4-disc raid connected to an
> IDE 3ware card. I had hoped to replace this dying system with a pair
> of synchronized 1TB SATA drives. When trying to newfs them both
> eventually failed with DMA READ or WRITE timeouts. Here's some info:

A few things to check:

* Is your PSU powerful enough? (Test with the old RAID-5 set disconnected or with a different PSU.)
* Do all fans in the box spin easily? (If the bearings of the old drives and fans are giving out they can draw a lot more power.)
* If you disconnect ad4, does ad6 work fine by itself?
* Does the problem go away if you replace the SATA cables?
* Install smartmontools from ports and run a long selftest on both drives. (Or get the drive tools from Hitachi and run a full test from DOS.)
* Is ACPI enabled and does 'vmstat -i' show any IRQ sharing? Maybe try to move some PCI cards around.

/Daniel Eriksson
Re: problems with Hitachi 1TB SATA drives
On Tue, 24 Jul 2007, Jeremy Chadwick wrote:

> On Mon, Jul 23, 2007 at 07:40:21PM -0700, Bill Swingle wrote:
> > Doh, I knew I forgot something in my original email.
> > Here's the full dmesg: http://dub.net/rum.dub.net.dmesg
>
> Actually you did include this in your original Email. I think Daniel
> overlooked it. :-)

Oops, maybe it was an attachment I forgot to read.

As you say later - it would be good to know what mode the chipset is in.

Might be worth trying AHCI mode if you have it (although maybe ICH5 is too old for that?)

-- Daniel O'Connor software and network engineer for Genesis Software - http://www.gsoft.com.au "The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
Re: problems with Hitachi 1TB SATA drives
On Mon, Jul 23, 2007 at 07:40:21PM -0700, Bill Swingle wrote:

> Doh, I knew I forgot something in my original email.
> Here's the full dmesg: http://dub.net/rum.dub.net.dmesg

Actually you did include this in your original Email. I think Daniel overlooked it. :-)

After looking at your dmesg and your claim, I got confused because your initial statement included the use of a 3ware card. A verbose description of your configuration:

* ad0: 43979MB at ata0-master UDMA100
-- hooked to:
atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xffa0-0xffaf at device 31.1 on pci0
ata0: on atapci0
ata1: on atapci0

* ad4: 953869MB at ata2-master SATA150
* ad6: 953869MB at ata3-master SATA150
-- both hooked to:
atapci1: port 0xec00-0xec07,0xe800-0xe803,0xe400-0xe407,0xe000-0xe003,0xdc00-0xdc0f irq 18 at device 31.2 on pci0
ata2: on atapci1
ata3: on atapci1

* twed0: on twe0
twed0: 583440MB (1194885120 sectors)
-- hooked to:
twe0: <3ware Storage Controller. Driver version 1.50.01.002> port 0xb800-0xb80f mem 0xfeaffc00-0xfeaffc0f,0xfe00-0xfe7f irq 17 at device 2.0 on pci3
twe0: [GIANT-LOCKED]
twe0: 4 ports, Firmware FE7X 1.05.00.063, BIOS BE7X 1.08.00.048

I have to assume that atapci0 is actually using IRQ 14 even though it's not shown (weird...).
Additionally your ICH5 SATA controller is sharing an IRQ with a couple other devices on the PCI bus; this isn't bad, but I'm noting it here in case this turns out to be some weird interrupt problem:

em0: port 0xac00-0xac1f mem 0xfd9e-0xfd9f irq 18 at device 1.0 on pci2
uhci2: port 0xd400-0xd41f irq 18 at device 29.2 on pci0

On to this:

> Jul 21 00:21:45 rum kernel: ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=54194911
> Jul 21 00:22:20 rum kernel: ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=107260543
> Jul 21 00:22:57 rum kernel: ad4: FAILURE - device detached
> Jul 21 00:22:57 rum kernel: subdisk4: detached
> Jul 21 00:22:57 rum kernel: ad4: detached
> Jul 21 00:24:19 rum kernel: ad6: FAILURE - device detached
> Jul 21 00:24:19 rum kernel: subdisk6: detached
> Jul 21 00:24:19 rum kernel: ad6: detached
>
> ad4: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=1456106111
> ad4: TIMEOUT - WRITE_DMA48 retrying (0 retries left) LBA=1456106111
> ad4: FAILURE - WRITE_DMA48 timed out LBA=1456106111
> ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=54194911
> ad4: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=461407775
> ad4: TIMEOUT - WRITE_DMA48 retrying (0 retries left) LBA=461407775
> ad4: FAILURE - WRITE_DMA48 timed out LBA=461407775

But then:

> When trying to newfs them both eventually failed with DMA READ or
> WRITE timeouts.

Now I'm confused. :-) I only see evidence of a failure on ad4. The ad6 disk disconnecting from the bus could be caused by the controller getting wedged while waiting for certain transactions sent to ad4 (which are failing). I've seen this scenario happen many times. The panic you got is probably also induced by the same issue. Does the WRITE_DMA/DMA48 problem happen for you when newfs'ing a slice on ad6?

> I've read that bad SATA cables could cause this, the cables I'm using
> are brand new but are probably pretty cheap.

For testing purposes swap them out with some other cables.
It may not be the cables at all, so keep the originals around. Also might try using some of that canned air to blow out any dust around the SATA connector ends on the cables, drives, and motherboard.

Remaining questions I have:

Q: Is your ICH5 controller actually ICH5R and you've turned on some Intel RAID option in the BIOS? Maybe turning it on but leaving the disks in a JBOD fashion (not defining an array)? The reason I ask is that you said you're going to use the Hitachi drives as "a pair of 1TB synchronised drives", which implies RAID-1, yet I don't see use of gmirror or ccd or anything else. :-)

Q: What motherboard and model is this? Looks like an Intel.

Q: If an Intel, have you gone looking at Intel's site for BIOS updates for that board? Intel is the one company who is thorough about documenting BIOS changes in their Release Notes. It would not surprise me if this turned out to be some kind of weird BIOS bug.

Q: Some motherboards let you toggle certain "compatibility" mode stuff for the SATA controller in the BIOS. You might want to flip that to see what happens (if compatibility, try the opposite. And vice-versa of course).

Q: Have you searched Google for issues others have reported (such as in Linux) with the HDS721010KLA330 or similar (differently-sized) models?

-- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB |
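[Editorial aside: if the intended RAID-1 is software-side, gmirror(8) is the usual FreeBSD route. A sketch only; the device names are the ones from this thread, and labeling destroys any existing data on the disks:]

```shell
# Hypothetical gmirror(8) RAID-1 setup for ad4/ad6 -- wipes both disks!
gmirror load                            # or geom_mirror_load="YES" in loader.conf
gmirror label -v gm0 /dev/ad4 /dev/ad6  # create mirror "gm0" from both drives
newfs /dev/mirror/gm0                   # filesystem on the mirror device
mount /dev/mirror/gm0 /data
```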
Re: problems with Hitachi 1TB SATA drives
Doh, I knew I forgot something in my original email. Here's the full dmesg: http://dub.net/rum.dub.net.dmesg

Here's the controller info:

atapci1: port 0xec00-0xec07,0xe800-0xe803,0xe400-0xe407,0xe000-0xe003,0xdc00-0xdc0f irq 18 at device 31.2 on pci0
ata2: on atapci1
ata3: on atapci1

-Bill

Daniel O'Connor wrote:

> On Tue, 24 Jul 2007, Bill Swingle wrote:
> > I've read that bad SATA cables could cause this, the cables I'm using
> > are brand new but are probably pretty cheap.
>
> Unlikely they're both faulty too..
>
> You need to post your dmesg otherwise we have no idea what controller
> you're using..

-- -=| Bill Swingle - [EMAIL PROTECTED]
Re: problems with Hitachi 1TB SATA drives
On Tue, 24 Jul 2007, Bill Swingle wrote:
> I've read that bad SATA cables could cause this, the cables I'm using
> are brand new but are probably pretty cheap.

Unlikely they're both faulty too..

You need to post your dmesg otherwise we have no idea what controller you're using..

--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
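[For reference, one way to collect the details the list is asking for on a FreeBSD 6.x box might be the sketch below. atacontrol(8) is in the base system, but its exact subcommand syntax has varied a little between releases, so check the man page on your release.]

```shell
# Sketch: gather controller/disk details for a mailing-list report.
dmesg > /tmp/dmesg.txt      # full kernel boot messages to attach or link
atacontrol list             # ATA channels and the devices attached to them
atacontrol cap ad4          # per-disk capabilities, where supported
```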
problems with Hitachi 1TB SATA drives
Hello all, I've run across a problem that I hope someone can aid me with. I have a fileserver that currently has a four-disk RAID connected to an IDE 3ware card. I had hoped to replace this dying system with a pair of synchronized 1TB SATA drives. When I tried to newfs them, both eventually failed with DMA READ or WRITE timeouts. Here's some info:

FreeBSD rum.dub.net 6.2-STABLE FreeBSD 6.2-STABLE #2: Sat Jul 21 09:05:25 PDT 2007 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC i386

ad0: 43979MB at ata0-master UDMA100 <-- system disk
ad4: 953869MB at ata2-master SATA150
ad6: 953869MB at ata3-master SATA150
twed0: on twe0
twed0: 583440MB (1194885120 sectors)

A complete dmesg is at http://dub.net/rum.dub.net.dmesg

Initially the attempted newfs would cause this:

Jul 21 00:21:45 rum kernel: ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=54194911
Jul 21 00:22:20 rum kernel: ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=107260543
Jul 21 00:22:57 rum kernel: ad4: FAILURE - device detached
Jul 21 00:22:57 rum kernel: subdisk4: detached
Jul 21 00:22:57 rum kernel: ad4: detached
Jul 21 00:24:19 rum kernel: ad6: FAILURE - device detached
Jul 21 00:24:19 rum kernel: subdisk6: detached
Jul 21 00:24:19 rum kernel: ad6: detached

After several tries I was able to get both disks newfs'd and mounted, but they quickly fell down with DMA timeouts.
On one occasion the machine actually panic'd too:

ad4: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=1456106111
ad4: TIMEOUT - WRITE_DMA48 retrying (0 retries left) LBA=1456106111
ad4: FAILURE - WRITE_DMA48 timed out LBA=1456106111
ad4: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=54194911
ad4: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=461407775
ad4: TIMEOUT - WRITE_DMA48 retrying (0 retries left) LBA=461407775
ad4: FAILURE - WRITE_DMA48 timed out LBA=461407775

Fatal trap 12: page fault while in kernel mode
fault virtual address = 0x66
fault code = supervisor read, page not present
instruction pointer = 0x20:0xc07253c3
stack pointer = 0x28:0xd9724b9c
frame pointer = 0x28:0xd9724ba4
code segment = base 0x0, limit 0xf, type 0x1b
  = DPL 0, pres 1, def32 1, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 779 (mdnsd)
trap number = 12
panic: page fault

I've read that bad SATA cables could cause this; the cables I'm using are brand new but are probably pretty cheap. Help freebsd-stable, you're my only hope! :)

-Bill
--
-=| Bill Swingle - [EMAIL PROTECTED]
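[As an aside, when timeouts like the above are scattered through /var/log/messages, a one-liner can summarize how many TIMEOUT/FAILURE events each ata device has logged. This is just a rough sketch; the log path and message format are assumed to match the excerpts above.]

```shell
# Count ad* TIMEOUT/FAILURE events per device in a syslog-format file.
# Field 6 is the "ad4:" device tag in lines like:
#   Jul 21 00:21:45 rum kernel: ad4: TIMEOUT - WRITE_DMA retrying ...
grep -E 'ad[0-9]+: (TIMEOUT|FAILURE)' /var/log/messages \
  | awk '{ sub(":", "", $6); count[$6]++ }
         END { for (d in count) print d, count[d] }'
```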