I just wanted to thank all of you who helped me with this problem.
dd_rescue was the ticket. I used Knoppix and dd_rescue to copy the
entire /dev/hdf drive to a brand new drive. Took almost 36 hours to
copy 250GB. After that, I replaced hdf with the new drive and
rebooted the machine back up.
OK, so 952 errors (about 450k) and 25+ hours later, I have a copy of
the hdf drive on a brand new 250GB drive thanks to dd_rescue.
I haven't tried swapping it into the array yet. That's the next step.
I imagine I'll be able to mdadm --assemble --force and have it take
the 4 drives into the array.
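For anyone finding this thread later, the two steps described above (sector-level copy of the failing disk, then forced reassembly) look roughly like the dry-run sketch below. The device and partition names are guesses based on this thread, not confirmed by the poster, and the `echo` prefixes only print each command instead of running it; remove them to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of the copy-then-reassemble flow described above.
# Device names are illustrative -- substitute your own before use.
SRC=/dev/hdf        # the failing member disk
DST=/dev/hdg        # brand-new disk, at least as large (hypothetical name)

# dd_rescue keeps copying past read errors and logs progress to a file:
echo dd_rescue -v -l rescue.log "$SRC" "$DST"

# After physically swapping the new disk in, force-assemble the degraded
# array from the surviving members (partition names are examples only):
echo mdadm --assemble --force /dev/md0 /dev/hde1 /dev/hdg1 /dev/hdh1 /dev/hdi1
```

The log file lets dd_rescue resume an interrupted copy instead of starting over, which matters on a run that takes 25+ hours.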
On Sun, 2006-04-23 at 17:17 -0700, Tim Bostrom wrote:
> I bought two extra 250GB drives - I'll try using dd_rescue as
> recommended and see if I can get a "good" copy of hdf online.
You might want to use dd_rhelp:
http://www.kalysto.org/utilities/dd_rhelp/index.en.html
-Arthur
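As a sketch of what the recommendation above amounts to: dd_rhelp is a wrapper around dd_rescue that skips over bad areas first and returns to them later. The invocation below is an assumption based on its usual source/destination form; check the linked page for the real syntax. The `echo` prefix only prints the command.

```shell
#!/bin/sh
# Dry-run: dd_rhelp wraps dd_rescue, deferring bad areas until the
# easy sectors are copied (names are examples; echo only prints).
SRC=/dev/hdf
IMG=/mnt/spare/hdf.img
echo dd_rhelp "$SRC" "$IMG"
```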
First let me say - thank you for responding. I'm still trying to
figure out this problem.
On Apr 21, 2006, at 8:54 PM, Molle Bestefich wrote:
> Tim Bostrom wrote:
>> It appears that /dev/hdf1 failed this past week and /dev/hdh1
>> failed back in February.
> An obvious question would be, how much have you been altering the
> contents of the array since February?
Carlos Carvalho wrote:
> What you can also do is dd the disk to another one and try to rebuild
> the array with the new disk so that you won't get errors during the
> reconstruction.
Right, neat hack.
> Some people prefer to use ddrescue instead of dd; I've never tried it.
I can definitely recommend it.
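For reference, the practical difference between the two approaches mentioned above: plain dd needs conv=noerror,sync to continue past bad sectors (zero-padding the failed reads), while dd_rescue does that by default and drops to small block sizes around bad areas. A dry-run comparison, with example device names and `echo` prefixes that only print the commands:

```shell
#!/bin/sh
# Dry-run comparison of the two copy approaches (echo only prints;
# device names are examples, not from the poster's system).
BAD=/dev/hdf
NEW=/dev/hdg

# Plain dd: conv=noerror,sync skips unreadable blocks and zero-pads
# them, but a large bs means losing a whole block per read error:
echo dd if="$BAD" of="$NEW" bs=4096 conv=noerror,sync

# dd_rescue: continues past errors by default and logs progress so an
# interrupted copy can be resumed:
echo dd_rescue -v -l rescue.log "$BAD" "$NEW"
```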
Molle Bestefich ([EMAIL PROTECTED]) wrote on 22 April 2006 05:54:
>Tim Bostrom wrote:
>> raid5: Disk failure on hdf1, disabling device.
>
>MD doesn't like to find errors when it's rebuilding.
>It will kick that disk off the array, which will cause MD to return
>crap (instead of stopping the array).
Tim Bostrom wrote:
> It appears that /dev/hdf1 failed this past week and /dev/hdh1 failed back in
> February.
An obvious question would be, how much have you been altering the
contents of the array since February?
> I tried a mdadm --assemble --force and was able to get the following:
> ===
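Before forcing assembly, it is worth inspecting each member's superblock: mdadm --examine prints the event counter and update time, which shows how stale the February failure really is relative to the others. A dry-run sketch, with illustrative partition names and an `echo` that only prints the command:

```shell
#!/bin/sh
# Dry-run: inspect each member's superblock before a forced assemble
# (partition names are examples; echo only prints the command).
MEMBERS="/dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1"
for part in $MEMBERS; do
    # --examine prints the member superblock: event count, update
    # time, and the array state it last recorded
    echo mdadm --examine "$part"
done
```

A member whose event count is far behind the rest has stale data, and forcing it back in risks silent corruption of anything written since it dropped out.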
Good day,
I'm running FC4 kernel 2.6.11-1.1369 with a 5 disk RAID5 array.
This past weekend, after a reboot of my machine, /dev/md0 will no
longer mount, and Fedora aborts booting the system and forces me to
fix the filesystem. Upon further investigation, it looks like I lost
two drives.
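For others hitting the same symptom, a first-pass diagnostic usually starts with the read-only commands sketched below. /dev/md0 is the array name from this thread; the `echo` prefixes only print each command.

```shell
#!/bin/sh
# Read-only first diagnostics for a RAID array that won't mount
# (echo only prints each command; /dev/md0 is from this thread).
ARRAY=/dev/md0
echo cat /proc/mdstat          # which arrays exist, which members are up
echo mdadm --detail "$ARRAY"   # array state, failed vs. active members
echo dmesg                     # kernel log: per-disk I/O errors, md messages
```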