> -----Original Message-----
> From: Aaron Mason <simplersolut...@gmail.com>
> Sent: Monday, October 14, 2019 7:13 PM
> To: Steven Surdock <ssurd...@engineered-net.com>
> Cc: misc@openbsd.org
> Subject: Re: Softraid data recovery
> 
> On Tue, Oct 15, 2019 at 7:34 AM Steven Surdock
> <ssurdock@engineered-net.com> wrote:
> >
> > I have a simple RAID1 configuration on wd0, wd1.  I was in the
> > process of performing a rebuild on wd1, as it failed during some
> > heavy reads.  During the rebuild wd0 went into a failure state.
> > After some troubleshooting I decided to reboot and now my RAID
> > disk, sd1, is unavailable.  Disks wd0 and wd1 don't show any
> > errors, but I have a replacement disk.  I have backups for the
> > critical data and I'd like to try and recover as much recent data
> > as possible.  My thought was to create a disk image of the
> > "/home/public" data and mount it using vnconfig, but I seem to be
> > having issues with the appropriate 'dd' command to do that.
> >
> > How can I recover as much data as possible off the failed RAID
> > array?  If I recreate the array with
> > "bioctl -c 1 -l /dev/wd0d,/dev/wd1d softraid0", will the existing
> > data be preserved?
> >
> > root@host# disklabel wd0
> > # /dev/rwd0c:
> > type: ESDI
> > disk: ESDI/IDE disk
> > label: WDC WD4001FAEX-0
> > duid: acce36f25df51c8c
> > flags:
> > bytes/sector: 512
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 486401
> > total sectors: 7814037168
> > boundstart: 64
> > boundend: 4294961685
> > drivedata: 0
> >
> > 16 partitions:
> > #                size           offset  fstype [fsize bsize   cpg]
> >   c:       7814037168                0  unused
> >   d:       7814037104               64    RAID
> >
> > root@host# more /var/backups/disklabel.sd1.backup
> > # /dev/rsd1c:
> > type: SCSI
> > disk: SCSI disk
> > label: SR RAID 1
> > duid: 8ec2330eabf7cd26
> > flags:
> > bytes/sector: 512
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 486401
> > total sectors: 7814036576
> > boundstart: 64
> > boundend: 7814036576
> > drivedata: 0
> >
> > 16 partitions:
> > #                size           offset  fstype [fsize bsize   cpg]
> >   a:       2147488704               64  4.2BSD   8192 65536     1 # /home/public/
> >   c:       7814036576                0  unused
> >   d:       5666547712       2147488768  4.2BSD   8192 65536     1 # /home/Backups/
> >
> 
> I think at this point you're far better off restoring from backup.
> You do have a backup, right?
> 
> As for the disks, ddrescue would be a better option than dd - it'll keep
> trying if it encounters another URE whereas dd will up and quit.
> Expect it to take several days on disks that big - it's designed to be
> gentle to dying disks.

I believe the disks are mostly healthy.  In fact I've made several
attempts at dd'ing the data off wd0 without any read errors, though it
takes about 12 hours to read 1TB.  I suspect I'm not lining the image
up with the start of the filesystem, which is why it isn't readable.
I've tried making an image of /home/public (which is _mostly_ backed
up), but fsck doesn't see a reasonable filesystem after I vnconfig the
image.  So, if anyone has some insight on the right 'dd if=/dev/wd0d
of=public.img bs=512 count=5666547712 skip=xx' invocation, that would
be great.
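
For what it's worth, comparing the two disklabels above: wd0d is
7814037104 sectors and sd1 was 7814036576, a difference of 528
sectors, which I'm assuming is the softraid metadata at the front of
the chunk.  If that's right, the 'a' partition (/home/public) should
start 528 + 64 = 592 sectors into wd0d, so my best guess (untested)
is something along these lines:

  # 528 sectors of softraid metadata + 64-sector offset of partition 'a' = 592
  dd if=/dev/wd0d of=public.img bs=512 skip=592 count=2147488704

  # then attach the image and check it without writing anything
  vnconfig vnd0 public.img
  fsck_ffs -n /dev/rvnd0c

If fsck likes that, mounting /dev/vnd0c read-only should be enough to
copy off whatever changed since the last backup.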
