--As of December 31, 2011 1:40:59 PM -0800, Drew Tomlinson is alleged to have said:

Thus it appears I am missing ad16 that I used to have.  My data zpool was
the bulk of my system with over 600 gig of files and things I'd like to
have back.  I thought that by creating a raidz1 I could avoid having to
back up the huge drive and avoid this grief.  However it appears I have
lost 2 disks at the same time.  :(

Any thoughts before I just give up on recovering my data pool?

Ouch. All I can really say is 'Redundancy is not backup', but that's a bit trite...

The one thing you haven't mentioned that might be worth the attempt is trying the recovery from a 9.0 disc. There has been work done on ZFS, and it's possible that something might work. But that's mostly just to be thorough...
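
If you do try it from the 9.0 disc, the rough sequence would be something along these lines (just a sketch; I'm assuming the pool is named 'data' and mounting it under /mnt, so adjust to taste):

    # From the 9.0 live CD shell:
    zpool import                                  # list pools the system can see, without importing
    zpool import -f -o readonly=on -R /mnt data   # read-only import under /mnt, so nothing gets written
    # If that refuses, -F asks ZFS to discard the last few transactions
    # and roll back to the most recent consistent state:
    zpool import -f -F -o readonly=on -R /mnt data

Read-only is the safer way to go while you're still deciding whether the disks are trustworthy.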

As for what it was telling you: It was just saying it couldn't open the drives. ;) Which does bring up one other option: If you've got a different drive controller, you might try plugging the drives into it. (In the hope that it's the *controller* and not the drives that have gone bad. Unlikely, but it *does* happen.)
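
If you do move them, a quick way to check whether the drives even show up on the other controller (a sketch, assuming a stock FreeBSD kernel):

    dmesg | grep -E '^ad[0-9]'   # disks attached through the legacy ata(4) driver
    camcontrol devlist           # disks attached through CAM (ahci/da and friends)

If they don't appear in either list after the swap, the controller theory is probably dead.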

(Depending on the value of the data pool, a good data recovery service might be able to do something as well. But they'd have to be a very good service, and know what they were working with.)

And regarding my root pool, my system can't mount root and start.  What
do I need to do to boot from my degraded root pool.  Here's the current
status:

# zpool status
  pool: root
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        root                                            DEGRADED     0     0     0
          mirror                                        DEGRADED     0     0     0
            gptid/5b623854-6c46-11de-ae82-001b21361de7  ONLINE       0     0     0
            12032653780322685599                        UNAVAIL      0     0     0  was /dev/ad6p3

Do I just need to do a 'zpool detach root /dev/ad6p3' to remove it from
the pool and get it to boot?  And then once I replace the disk a 'zpool
attach root <new partition>' to fix?

Thanks for your time.

Personally, I'd do a 'zpool replace root /dev/ad6p3 /dev/$NEWDRIVE', but the above should work as well. What's odd, though, is that you can't boot from it as is: a degraded pool should still be functional, and it should let you boot. You mentioned updating the zpool to v15. Did you update the boot block at the same time? (Just checking the basics.) It'd need to be able to read the updated zpool.
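
For completeness, the replacement would look roughly like this once the new disk is in. Everything here is a sketch: I'm assuming the new disk shows up as ad8, gets the same GPT layout as the surviving disk, and has its freebsd-boot partition at index 1.

    # Partition the new disk the same way as the old one, then write the boot blocks:
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad8

    # Resilver the mirror onto the new partition and watch the progress:
    zpool replace root /dev/ad6p3 /dev/ad8p3
    zpool status root

And if the boot blocks on the surviving disk predate the v15 upgrade, rewriting them with the same 'gpart bootcode' line (pointed at that disk) is the usual fix.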

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------