Eugene Gladchenko wrote:
> Hi,
>
> I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I
> encountered a FreeBSD problem (PR kern/128083) and decided to update the
> motherboard BIOS. The update appeared to go fine, but afterwards I was
> shocked to see my ZFS pool destroyed! Rolling the BIOS back did not help.
>
> Now it looks like this:
>
> # zpool status
> pool: tank
> state: UNAVAIL
> status: One or more devices could not be used because the label is missing
> or invalid. There are insufficient replicas for the pool to continue
> functioning.
> action: Destroy and re-create the pool from a backup source.
> see: http://www.sun.com/msg/ZFS-8000-5E
> scrub: none requested
> config:
>
> NAME        STATE     READ WRITE CKSUM
> tank        UNAVAIL      0     0     0  insufficient replicas
>   ad4       UNAVAIL      0     0     0  corrupted data
> # zdb -l /dev/ad4
> --------------------------------------------
> LABEL 0
> --------------------------------------------
> version=6
> name='tank'
> state=0
> txg=4
> pool_guid=12069359268725642778
> hostid=2719189110
> hostname='home.gladchenko.ru'
> top_guid=5515037892630596686
> guid=5515037892630596686
> vdev_tree
>     type='disk'
>     id=0
>     guid=5515037892630596686
>     path='/dev/ad4'
>     devid='ad:5QM0WF9G'
>     whole_disk=0
>     metaslab_array=14
>     metaslab_shift=32
>     ashift=9
>     asize=500103118848
> --------------------------------------------
> LABEL 1
> --------------------------------------------
> version=6
> name='tank'
> state=0
> txg=4
> pool_guid=12069359268725642778
> hostid=2719189110
> hostname='home.gladchenko.ru'
> top_guid=5515037892630596686
> guid=5515037892630596686
> vdev_tree
>     type='disk'
>     id=0
>     guid=5515037892630596686
>     path='/dev/ad4'
>     devid='ad:5QM0WF9G'
>     whole_disk=0
>     metaslab_array=14
>     metaslab_shift=32
>     ashift=9
>     asize=500103118848
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to unpack label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to unpack label 3
>
This would occur if the beginning of the partition is intact
but the end is not. Causes for the latter include:
1. partition table changes (or the VTOC, for SMI labels)
2. something overwrote the data at the end of the device
If the cause is #1, then restoring the partition table should work. If
the cause is #2, then the data may be gone.
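One quick way to distinguish the two causes is to compare the size the disk reports now against the asize recorded in the surviving labels. A minimal sketch, assuming a FreeBSD host; the suggested `diskinfo` field position in the comment reflects its default tab-separated output and should be verified locally:

```shell
# Sketch: tell cause #1 (partition/geometry change) apart from cause #2
# (overwritten data). If the disk now reports fewer bytes than the asize
# recorded in the surviving labels, the tail of the pool lies past the
# visible end of the device -- consistent with cause #1.
# check_size ASIZE MEDIASIZE  (both in bytes)
check_size() {
    if [ "$2" -lt "$1" ]; then
        echo "device shrank: labels 2/3 lie past the visible end of the disk"
    else
        echo "device size unchanged: data at the end was likely overwritten"
    fi
}
# On the live FreeBSD box one might feed in the real media size, e.g.:
#   check_size 500103118848 "$(diskinfo /dev/ad4 | awk '{print $3}')"
```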
Note: ZFS can import a pool with one working label, but if
more of the data is actually unavailable or overwritten, it
may not be able to reach a consistent state.
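For reference, the label layout behind those "failed to unpack" messages can be sketched as follows. Each vdev carries four 256 KiB labels: L0 and L1 in the first 512 KiB, L2 and L3 in the last 512 KiB; the offsets below are computed from the asize shown in the zdb output above:

```shell
# Sketch: locate labels 2 and 3 from the asize in the surviving labels.
# "failed to unpack label 2/3" means the last 512 KiB of the device is
# unreadable or overwritten, while the front half still decodes.
ASIZE=500103118848            # asize from the zdb -l output above
LABEL=$((256 * 1024))         # each ZFS label is 256 KiB
L2=$((ASIZE - 2 * LABEL))
L3=$((ASIZE - LABEL))
echo "label 2 at byte $L2, label 3 at byte $L3"
# Read-only peek at label 2 (does not modify the disk):
#   dd if=/dev/ad4 of=/tmp/label2 bs=512 skip=$((L2 / 512)) count=512
```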
-- richard
> #
>
> I've tried to import the problem pool into OpenSolaris 2008.05 with no
> success:
>
> # zpool import
> pool: tank
> id: 12069359268725642778
> state: UNAVAIL
> status: The pool was last accessed by another system.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://www.sun.com/msg/ZFS-8000-EY
> config:
>
> tank          UNAVAIL      0     0     0  insufficient replicas
>   c3d0s2      UNAVAIL      0     0     0  corrupted data
> #
>
> Is there a way to recover my files from this broken pool? Maybe at least some
> of them? The drive was 4/5 full. :(
>
> I would appreciate any help.
>
> p.s. I already bought another drive of the same size yesterday. My next ZFS
> experience definitely will be a mirrored one.
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>