Folks,

I currently use OpenSolaris on a Toshiba M10 laptop.

One morning the system wouldn't boot OpenSolaris 2009.06 (it was simply unable to 
progress to the second-stage GRUB). On further investigation I discovered that the 
HDD partition slice holding rpool appeared to have bad sectors.

Faced with either a rebuild or an attempt at recovery, I opted to try recovering 
the slice before rebuilding.

The c7t0d0 HDD (p0) was divided into p1 (NTFS, 24GB), p2 (OpenSolaris, 24GB), p3 
(OpenSolaris ZFS data pool, 160GB) and p4 (50GB extended, containing 32GB pcfs, 
12GB Linux and Linux swap) partitions (or something close to that). On the 
first Solaris partition (p2), slice 0 held the OpenSolaris rpool zpool.

Incidentally, *only* the c7t0d0s0 slice appeared to have bad sectors (I do 
wonder what the significance of that is).

To recover, I booted the OpenSolaris 2009.06 live CD, imported the ZFS pool 
configured on p3, and ran 'dd if=/dev/rdsk/c7t0d0s0 bs=512 conv=sync,noerror 
of=/p0/s0image.dd'.

This took longer than my maintenance window allowed (because of the sector read 
errors) and I ended up aborting the attempt with a significant number of sectors 
already captured. At the next opportunity I ran the command again with the skip 
operand to capture the balance of slice 0 (rpool). The result was two files 
comprising the good c7t0d0s0 sectors (s0image_start.dd and s0image_end.dd), 
as sketched below.
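
For clarity, the two passes looked roughly like this (N is just a placeholder 
here for the sector count the first pass had reached before I aborted it):

# pass 1: image from the start of the slice, padding unreadable sectors with zeros
dd if=/dev/rdsk/c7t0d0s0 of=/p0/s0image_start.dd bs=512 conv=sync,noerror

# pass 2: resume at sector N and capture the remainder of the slice
dd if=/dev/rdsk/c7t0d0s0 of=/p0/s0image_end.dd bs=512 skip=N conv=sync,noerror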

At this stage I was able to run 'zdb -l s0image_start.dd' and see the first two 
vdev labels, and 'zdb -l s0image_end.dd' and see the last two vdev labels.

I then combined the two files (I tried various approaches, e.g. cat, and dd with 
the append flag), however only the first two vdev labels appear to be 
readable in the resulting s0image_s0.dd. The resulting file size, which I 
expect is largely good sectors plus padding for the bad ones, matches the 
prtvtoc sector count for s0 multiplied by 512.
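
(For reference, that size check was roughly the following; exact sector counts 
omitted here:)

prtvtoc /dev/rdsk/c7t0d0s0    # note the sector count reported for slice 0
ls -l /p0/s0image_s0.dd       # size should equal <slice 0 sector count> * 512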

Can anyone advise why I am unable to read the third and fourth vdev labels 
once the start and end files are combined?

Is there another approach that may prove more fruitful?
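
One idea I was considering, in case it helps: as I understand it, ZFS keeps 
four 256KB vdev labels, two at the very start of the vdev and two in the last 
512KB, so the end labels will only be found if the tail of the image sits at 
exactly the right byte offset. Rather than appending, I could write the tail 
image into place with an output seek (using the same placeholder N as above 
for the sector where the second dd pass resumed); that way the tail lands at 
the correct offset even if the start image is not exactly N sectors long, 
since dd just leaves a hole over the missing span:

cp /p0/s0image_start.dd /p0/s0image_s0.dd
# write the tail at its proper offset; conv=notrunc keeps dd from truncating
# the existing start image
dd if=/p0/s0image_end.dd of=/p0/s0image_s0.dd bs=512 seek=N conv=notrunc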

Once I have the file (with the labels in the correct places) I was intending 
to attempt to import the pool as rpool2, or to run repairs (as far as is 
possible anyway), to see what data could be recovered (besides, it was an 
opportunity to get another close look at ZFS).
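
For the record, the import I had in mind was roughly this (I believe 
'zpool import -d' will scan a directory for file vdevs; the altroot and the 
new pool name are just what I was planning to use):

# scan /p0 for pools among the files there, force the import, and bring
# rpool in as rpool2 under an alternate root so it doesn't clash with the live CD
zpool import -d /p0 -f -R /mnt rpool rpool2

# then see how much actually survives
zpool scrub rpool2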
-- 
This message posted from opensolaris.org