Re: Drive Failure or User Error?
On Sun, Aug 20, 2006 at 03:22:45PM -0400, Lowell Gilbert wrote:
> Jason Morgan [EMAIL PROTECTED] writes:
> > I was setting up a new server (6.1 i386 STABLE) - more specifically, I
> > was mirroring the functioning server drive - when I suddenly got this:
> >
> >   ad0: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=611703808
> >   GEOM_MIRROR: Request failed (error=5). ad0[READ(offset=313192349696, length=131072)]
> >
> > along with several more errors, which were very similar. At this point,
> > the server pretty much froze and would repeat the error at reboot, and
> > as gmirror began resyncing the drive, the server would crash. I've
> > tried disabling the mirror, fscking (multiple times), and removing
> > disks, and I just got done reinstalling (which went just fine) and
> > resyncing. I still get the error and the system becomes unusable.
> >
> > So, my question is - and I suspect this is the case - is this a drive
> > failure or some issue with the mirroring process?
>
> It *is* a drive failure, but I don't understand all of what's happening
> there. It is possible that this is not a FATAL drive failure, but it's
> hard to be certain from this information. If you can figure out which
> file contains the bad sector, you can rewrite that file and the drive
> may be able to recover.

Thanks for your reply. After messing with it some more, I decided to just
send the drive back and see if I have better luck with the replacement.
The sector that was damaged was on an almost-empty portion of the disk,
which was a bit strange to me. *shrugs*

Thanks again,
Jason Morgan

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
Re: Drive Failure or User Error?
Jason Morgan [EMAIL PROTECTED] writes:
> I was setting up a new server (6.1 i386 STABLE) - more specifically, I
> was mirroring the functioning server drive - when I suddenly got this:
>
>   ad0: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=611703808
>   GEOM_MIRROR: Request failed (error=5). ad0[READ(offset=313192349696, length=131072)]
>
> along with several more errors, which were very similar. At this point,
> the server pretty much froze and would repeat the error at reboot, and
> as gmirror began resyncing the drive, the server would crash. I've tried
> disabling the mirror, fscking (multiple times), and removing disks, and
> I just got done reinstalling (which went just fine) and resyncing. I
> still get the error and the system becomes unusable.
>
> So, my question is - and I suspect this is the case - is this a drive
> failure or some issue with the mirroring process?

It *is* a drive failure, but I don't understand all of what's happening
there. It is possible that this is not a FATAL drive failure, but it's
hard to be certain from this information. If you can figure out which
file contains the bad sector, you can rewrite that file and the drive may
be able to recover.
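As a sanity check on the numbers in the error above: the ATA layer reports the failing sector as an LBA, while GEOM reports a byte offset, and the two agree at 512 bytes per sector. A minimal sketch of the "rewrite the bad sector" idea follows; the dd command and the device name are assumptions (not from the thread), and overwriting the sector destroys whatever data it held, so it is only a last resort on a disk you have already given up on:

```shell
# LBA 611703808 at 512 bytes/sector is exactly the byte offset
# 313192349696 that GEOM_MIRROR printed in the error message.
lba=611703808
sector_size=512
offset=$((lba * sector_size))
echo "byte offset: $offset"

# Hypothetical recovery sketch (DESTRUCTIVE - overwrites that sector).
# Rewriting a pending bad sector can let the drive remap it to a spare:
# dd if=/dev/zero of=/dev/ad0 bs=512 seek="$lba" count=1
```

If the rewrite succeeds, the drive's SMART reallocated-sector count typically increases; repeated growth there is a sign the drive should be replaced anyway, which matches the outcome in this thread.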
Drive Failure or User Error?
I was setting up a new server (6.1 i386 STABLE) - more specifically, I was
mirroring the functioning server drive - when I suddenly got this:

  ad0: FAILURE - READ_DMA48 status=51<READY,DSC,ERROR> error=40<UNCORRECTABLE> LBA=611703808
  GEOM_MIRROR: Request failed (error=5). ad0[READ(offset=313192349696, length=131072)]

along with several more errors, which were very similar. At this point,
the server pretty much froze and would repeat the error at reboot, and as
gmirror began resyncing the drive, the server would crash. I've tried
disabling the mirror, fscking (multiple times), and removing disks, and I
just got done reinstalling (which went just fine) and resyncing. I still
get the error and the system becomes unusable.

So, my question is - and I suspect this is the case - is this a drive
failure or some issue with the mirroring process? I followed the ONLamp
instructions here:

http://www.onlamp.com/pub/a/bsd/2005/11/10/FreeBSD_Basics.html

which I've used in the past with success.

Thanks in advance for your replies.

Jason Morgan
Stored hard drive failure?
Hey folks,

I thought I saw a thread on something like this but I can't seem to find
it, so I figure I might as well ask and see what turns up.

The scenario: I use a hard drive to mirror my main hard drive. I then
pull the alternate hard drive off the system and store it for later use
should the primary drive fail, or the system as a whole fail.

How long can the hard drive sit on the shelf before some sort of natural
cause prevents it from spinning up properly? To prevent the above type of
failure, should the hard drive be spun up to test its integrity? If the
drive is to be spun up, how often should something like this be done?

Any other ideas that might shed light on hard drives failing once put
into storage would be great. I do remember one user responded that on
occasion the HD needed a sort of tap at or near the drive spindle to
jiggle it loose should it become stuck for some reason.

Thanks.

~Mr Anderson
Re: Stored hard drive failure?
On Oct 5, 2005, at 1:13 AM, K Anderson wrote:
> Hey folks, I thought I saw a thread on something like this but I can't
> seem to find it, so I figure I might as well ask and see what turns up.
> The scenario: I use a hard drive to mirror my main hard drive. I then
> pull the alternate hard drive off the system and store it for later use
> should the primary drive fail, or the system as a whole fail. How long
> can the hard drive sit on the shelf before some sort of natural cause
> prevents it from spinning up properly?

How long are you storing them for? I would think that the data on the
disk would quickly become out of date and stale before any physical
issues would arise.

---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
[EMAIL PROTECTED]
Re: Stored hard drive failure?
----- Original Message -----
From: Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED]
To: K Anderson [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Wednesday, October 05, 2005 12:20 AM
Subject: Re: Stored hard drive failure?

> On Oct 5, 2005, at 1:13 AM, K Anderson wrote:
> > The scenario: I use a hard drive to mirror my main hard drive. I then
> > pull the alternate hard drive off the system and store it for later
> > use should the primary drive fail, or the system as a whole fail. How
> > long can the hard drive sit on the shelf before some sort of natural
> > cause prevents it from spinning up properly?
>
> How long are you storing them for? I would think that the data on the
> disk would quickly become out of date and stale before any physical
> issues would arise.

Thanks for your response,

Not sure how long I'm storing them (see the above question where I asked
"How long can the HD sit on the shelf..."; the other questions seem to
have been edited out). But you're right, the info could become out of
date - unless, whenever I did patch management, I pulled the stored HD
off the shelf (hoping it hadn't failed from non-use), re-mirrored the
main drive, and then put the secondary back on the shelf. But that really
doesn't address the other two questions that were edited out. Perhaps
somebody has experience with the very scenario I thought of.

I know HDs can be touchy, but how touchy can they get if they are just
sitting on the shelf waiting for reuse, with me going, "darn, that HD is
bad now that it sat on the shelf for X number of
[days|weeks|months|years]"?

~Mr. Anderson
Re: Stored hard drive failure?
On Wed, 5 Oct 2005 00:44:36 -0700, K Anderson [EMAIL PROTECTED] said:
> I know HDs can be touchy, but how touchy can they get if they are just
> sitting on the shelf waiting for reuse, with me going, "darn, that HD
> is bad now that it sat on the shelf for X number of
> [days|weeks|months|years]"?

See the thread:

http://www.freebsd.org/cgi/getmsg.cgi?fetch=642921+0+/usr/local/www/db/text/2005/freebsd-questions/20050911.freebsd-questions

I have definitely noticed a higher failure rate among drives that have
been stored for a number of months. I can't give you any hard numbers,
nor should you really believe them even if I did, because this depends on
the age of the drive, model, design, etc.

If you are serious about data redundancy, why not simply set up RAID 1
volumes? They will provide much better redundancy, at a minimal extra
cost, and with less work required on your part to maintain the mirrors.

Sandy
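For reference, a FreeBSD RAID 1 setup along the lines Sandy suggests can be sketched with gmirror(8). This is only a sketch under assumptions: the device names ad0/ad1 and the balance algorithm are invented for illustration, labeling overwrites metadata in the disk's last sector, and the exact steps vary by FreeBSD version, so check the man page before running anything:

```shell
# Sketch of a two-disk gmirror RAID 1 (device names are assumptions;
# destructive to the disks involved - back up first).
gmirror load                                  # load the geom_mirror module
gmirror label -v -b round-robin gm0 /dev/ad0  # create mirror gm0 from ad0
gmirror insert gm0 /dev/ad1                   # add the second disk; it resyncs
gmirror status                                # watch rebuild progress
echo 'geom_mirror_load="YES"' >> /boot/loader.conf   # load the module at boot
```

The filesystems then live on /dev/mirror/gm0 partitions rather than the raw disks, so both drives stay in sync automatically instead of requiring a manual re-mirror after each patch cycle.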
Re: Stored hard drive failure?
On Wed, 5 Oct 2005, K Anderson wrote:
> The scenario: I use a hard drive to mirror my main hard drive. I then
> pull the alternate hard drive off the system and store it for later use
> should the primary drive fail, or the system as a whole fail. How long
> can the hard drive sit on the shelf before some sort of natural cause
> prevents it from spinning up properly?

Two weeks ago I found an old hd on my shelf, which still booted
4.7-RELEASE properly. But I wonder if some kind of RAID 1 solution
wouldn't be more suitable for your situation: you use two hd's anyway,
and they would be kept in sync automatically.

Regards,

Uli.

*
* Peter Ulrich Kruppa - Wuppertal - Germany
*
Re: Stored hard drive failure?
----- Original Message -----
From: Sandy Rutherford [EMAIL PROTECTED]
To: freebsd-questions@freebsd.org
Cc: K Anderson [EMAIL PROTECTED]
Sent: Wednesday, October 05, 2005 1:01 AM
Subject: Re: Stored hard drive failure?

> See the thread:
>
> http://www.freebsd.org/cgi/getmsg.cgi?fetch=642921+0+/usr/local/www/db/text/2005/freebsd-questions/20050911.freebsd-questions
>
> I have definitely noticed a higher failure rate among drives that have
> been stored for a number of months. I can't give you any hard numbers,
> nor should you really believe them even if I did, because this depends
> on the age of the drive, model, design, etc. If you are serious about
> data redundancy, why not simply set up RAID 1 volumes? They will
> provide much better redundancy, at a minimal extra cost, and with less
> work required on your part to maintain the mirrors.

Sandy,

Thanks, that's the thread I was looking for. The reason I asked the
question was to find out about powered-down hard drives. Got a friend who
does a scheme with a drive and leaves it powered down, but I didn't want
to sound like a loon when I asked him how often he spins the drive up to
test its integrity. Right about the RAID 1 thing, though.

Thanks again, Sandy.

~Mr. Anderson
Re: Stored hard drive failure?
At Wed, 5 Oct 2005, it looks like K Anderson composed:
> The scenario: I use a hard drive to mirror my main hard drive. I then
> pull the alternate hard drive off the system and store it for later use
> should the primary drive fail, or the system as a whole fail. How long
> can the hard drive sit on the shelf before some sort of natural cause
> prevents it from spinning up properly?

Well, this may or may not be of any help, but are these stored drives
kept in a hermetic seal? I just bought a de-humidifier for my room (not
for computer reasons, but now I'm glad for that reason) and I was SHOCKED
to see that after 24 hours it had collected 1/2 gallon of water out of
the air - and it was just an average day out here in San Francisco, no
rain nor fog. I would imagine that it would affect the drives I have
stored in boxes in my house too.

--
Bill Schoolcraft
PO Box 210076
San Francisco, CA 94121
http://billschoolcraft.com

~ You do best what you like most.
Re: Stored hard drive failure?
On Oct 5, 2005, at 1:44 AM, K Anderson wrote:
> I know HDs can be touchy, but how touchy can they get if they are just
> sitting on the shelf waiting for reuse, with me going, "darn, that HD
> is bad now that it sat on the shelf for X number of
> [days|weeks|months|years]"?

I somewhat regularly retrieve used HDs off the shelf for use in some test
or project or another, and I have never had a problem with a relatively
modern HD (like, built in the last 5 years) not working, even after
sitting on a shelf for 1-2 years. Is your data going to be good after 1-2
years? If you are talking weeks or months sitting there, that should not
be an issue with modern HD mechanisms.

Chad

---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
[EMAIL PROTECTED]
Re: Stored hard drive failure?
If you're really serious (to borrow a phrase), you'll back up to several
different media, and maybe different formats. With RAID or backup to an
always-powered second HDD, you can lose all of your disks if the case
power supply or MB fails in certain ways. (I know someone who lost a disk
when the MB failed.) Or if someone steals your computer, or in a fire.
With a removable HDD, you risk physical damage either from lack of use or
from shock.

FYI, I kept a 45 GB IBM and an 80 GB Seagate drive in an outside storage
shed which got hot, cold, and damp for 10 months, and they work fine. I
guess I've been lucky, because I've had only one failure from about 15
lightly-used disks, and I have occasionally reused 5- to 10-year-old disks
for short durations after years on the shelf.
Re: Stored hard drive failure?
----- Original Message -----
From: Gary W. Swearingen [EMAIL PROTECTED]
To: K Anderson [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Sent: Wednesday, October 05, 2005 9:18 AM
Subject: Re: Stored hard drive failure?

> If you're really serious (to borrow a phrase), you'll back up to
> several different media, and maybe different formats. With RAID or
> backup to an always-powered second HDD, you can lose all of your disks
> if the case power supply or MB fails in certain ways. (I know someone
> who lost a disk when the MB failed.) Or if someone steals your
> computer, or in a fire. With a removable HDD, you risk physical damage
> either from lack of use or from shock.

Good feedback, thanks. Yep, best-laid plans can go off the beaten path.

~Mr. Anderson
Re: Drive Failure
> > I dont know what this option means - does this matter?
> >
> >   -B   Install the `boot0' boot manager. This option causes MBR code
> >        to be replaced, without affecting the embedded slice table.
>
> Sounds like just what you said you wanted.

Hi Jerry,

What does "without affecting the embedded slice table" mean? What is the
embedded slice table? And should I make sure that I don't affect it - or
should I affect it?

Also, is there any way to see, after replacing the MBR on the new drive
with boot0cfg, that it is properly bootable?

- noah

> By the way, I notice that you have quit CC-ing the questions list. You
> should keep that in so it gets in the archives and so I don't become
> the sole counselor - which you don't want, for sure!
>
> jerry

To Unsubscribe: send mail to [EMAIL PROTECTED] with
"unsubscribe freebsd-questions" in the body of the message
Re: Drive Failure
> > I dont know what this option means - does this matter?
> >
> >   -B   Install the `boot0' boot manager. This option causes MBR code
> >        to be replaced, without affecting the embedded slice table.
>
> What does "without affecting the embedded slice table" mean? What is
> the embedded slice table? And should I make sure that I don't affect
> it - or should I affect it?

It is the table that specifies the drive's slices (1-4), what it thinks
is in them, and a flag which says which one is active, i.e. to be booted
from. It is what gets printed when you do "fdisk -s ad2" (or whatever
drive).

> Also, is there any way to see, after replacing the MBR on the new drive
> with boot0cfg, that it is properly bootable?

The above will tell you which slice is bootable, which in this case would
only be slice 1, since you are only making one slice. The other possible
slice identifiers (2-4) are not being used. But it can't tell you if it
is proper. Only trying to boot it will tell you that. So, do a little
smoke testing. Jump. Dive in. If you are doing all this without backing
things up, you're courting disaster anyway.

jerry
Re: Drive Failure
> Hi, my current system drive is having difficulties - some files that
> were in great shape are now EBADF. This drive is known as ad0 (yup,
> IDE). So I installed a new drive and have moved all files over to it.
> The new drive is known as ad2. Is my fdisk usage here proper? Is this
> the proper response from fdisk? What else should I be doing? What might
> I be doing wrong?
>
> root@typhoon# fdisk -B -b ./boot0 ad2

Presuming you want the whole new drive for FreeBSD and want it bootable,
you are on the right track. You might want to use the -I switch, and the
typical location for boot0 is /boot/boot0, though ./boot0 works if you
are in the right directory. So try:

    fdisk -I -B -b /boot/boot0 ad2

Looks like that will give you a 73 GB FreeBSD slice OK - probably a
nominally 80 GB drive.

jerry

> [/mnt/ad2-root/boot]
> *** Working on device /dev/ad2 ***
> parameters extracted from in-core disklabel are:
> cylinders=148945 heads=16 sectors/track=63 (1008 blks/cyl)
> Figures below won't work with BIOS for partitions not in cyl 1
> parameters to be used for BIOS calculations are:
> cylinders=148945 heads=16 sectors/track=63 (1008 blks/cyl)
> Media sector size is 512
> Warning: BIOS sector numbering starts with sector 1
> Information from DOS bootblock is:
> The data for partition 1 is:
> sysid 165,(FreeBSD/NetBSD/386BSD)
>     start 63, size 150136497 (73308 Meg), flag 80 (active)
>     beg: cyl 0/ head 1/ sector 1;
>     end: cyl 464/ head 15/ sector 63
> The data for partition 2 is: <UNUSED>
> The data for partition 3 is: <UNUSED>
> The data for partition 4 is: <UNUSED>
> Do you want to change the boot code? [n]

---
TCB'n,

#Noah
San Francisco, California --- USA

There is a light, that shines beyond all things on earth, beyond us all,
beyond the highest heavens. This is the light that shines in our hearts.
- The Buddha
Re: Drive Failure
> So Jerry, this is the proper output then?
>
> typhoon# fdisk -I -B -b /mnt/ad2-root/boot/boot0 ad2
> *** Working on device /dev/ad2 ***
> typhoon#
>
> some mount details seem relevant [below]

I think you want -b /boot/boot0 there. You are trying to initialize ad2,
not read a file from it. Presumably, since it is a new disk and you are
'I' initializing it, there is nothing on it to read. That -b /boot/boot0
tells it where to get the file that it will put into the boot sector.
Actually, /boot/boot0 is the default, I believe, but I tend to want to be
explicit in these things. You don't want to specify any other file here
unless you are trying to put in some special home-brew or third-party
boot sector. If you can't read it from /boot/boot0 because the disk is
corrupt, then you will have to get a boot CD or a set of boot floppies
for FreeBSD to do it with.

By the way, the 'I' switch presumes you are using FreeBSD 4.x-something,
as it wasn't available in earlier versions of fdisk.

Try doing a:

    fdisk -v ad2

after it is done and see if what it puts out looks good. Then you will
have to start on disklabel, which will be followed by a newfs for each
partition you create with disklabel.

jerry

> typhoon# mount
> /dev/ad0s1a on / (ufs, local)
> /dev/ad0s1f on /usr (ufs, local, soft-updates)
> /dev/ad0s1e on /var (ufs, local, soft-updates)
> procfs on /proc (procfs, local)
> /dev/ad2s1a on /mnt/ad2-root (ufs, local)
> /dev/ad2s1f on /mnt/ad2-root/usr (ufs, local)
> /dev/ad2s1e on /mnt/ad2-root/var (ufs, local)
> typhoon#
>
> - Noah
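Jerry's "fdisk, then disklabel, then newfs" advice amounts to a three-stage procedure; a condensed sketch is below. The partition letters are assumptions loosely matching noah's mount output (a=root, e=var, f=usr), the tools are the FreeBSD 4.x-era ones discussed in the thread, and every step is destructive to ad2:

```shell
# Slice, label, and newfs a fresh disk (FreeBSD 4.x-era tools; the
# device name ad2 is from the thread, partition layout is an assumption).
fdisk -I -B -b /boot/boot0 ad2   # one FreeBSD slice + boot0 in the MBR
fdisk -v ad2                     # verify the slice table looks sane
disklabel -w -r ad2s1 auto       # write a default label on slice 1
disklabel -e ad2s1               # edit partitions (a=root, b=swap, e=var, f=usr)
newfs /dev/ad2s1a                # then newfs each partition you created
newfs /dev/ad2s1e
newfs /dev/ad2s1f
```

Only after all three stages can the slice's partitions be mounted and populated, which is the state noah's mount listing above shows.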
Re: Drive Failure
> > Try mounting and reading some of the stuff from that disk and see if
> > you can get to it. Then try boot0cfg(8).
>
> I dont know what this option means - does this matter?
>
>   -B   Install the `boot0' boot manager. This option causes MBR code
>        to be replaced, without affecting the embedded slice table.

Sounds like just what you said you wanted.

By the way, I notice that you have quit CC-ing the questions list. You
should keep that in so it gets in the archives and so I don't become the
sole counselor - which you don't want, for sure!

jerry

> *** Working on device /dev/ad2 ***
> parameters extracted from in-core disklabel are:
> cylinders=148945 heads=16 sectors/track=63 (1008 blks/cyl)
> Figures below won't work with BIOS for partitions not in cyl 1
> parameters to be used for BIOS calculations are:
> cylinders=148945 heads=16 sectors/track=63 (1008 blks/cyl)
> Media sector size is 512
> Warning: BIOS sector numbering starts with sector 1
> Information from DOS bootblock is:
> The data for partition 1 is:
> sysid 165,(FreeBSD/NetBSD/386BSD)
>     start 63, size 150136497 (73308 Meg), flag 80 (active)
>     beg: cyl 0/ head 1/ sector 1;
>     end: cyl 464/ head 15/ sector 63
> The data for partition 2 is: <UNUSED>
> The data for partition 3 is: <UNUSED>
> The data for partition 4 is: <UNUSED>
Re: Drive Failure
> also, how do I ID the second drive? Am I doing this correctly?

I don't know what you mean by ID-ing the second drive.

jerry

> typhoon# fdisk -sv ad2
> /dev/ad2: 148945 cyl 16 hd 63 sec
> Part        Start        Size Type Flags
>    1:          63   150136497 0xa5 0x80
>
> typhoon# fdisk -sv ad0
> /dev/ad0: 9729 cyl 255 hd 63 sec
> Part        Start        Size Type Flags
>    1:          63   156296322 0xa5 0x80
>
> typhoon# boot0cfg -v -b ./boot/boot0 /dev/ad2
> #   flag     start chs   type       end chs       offset         size
> 1   0x80      0: 1: 1    0xa5    464: 15:63           63    150136497
> version=1.0  drive=0x80  mask=0xf  ticks=182
> options=nopacket,update,nosetdrv
> default_selection=F1 (Slice 1)
>
> typhoon# pwd
> /mnt/ad2-root/boot
> typhoon#