I had a single spare 500GB HDD, so I installed a FreeBSD file server
on it for learning purposes and moved almost all of my data to it.
Yesterday, naturally after I no longer had backups of the data on the
server, the controller failed (a SiS 180, oh, the quality) and the HDD
was treated as unplugged. When I noticed a few checksum failures in
`zpool status` (including two on metadata, shown as small hex
numbers), I assumed it was ordinary data corruption and ran `zpool
scrub tank`, at which point the box locked up. I had also upgraded the
pool to version 14 a few days earlier, so FreeBSD's v13 tools couldn't
do anything to help.
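
(Side note for anyone checking for the same mismatch: `zpool upgrade
-v` prints the pool version the running tools support, along with what
each version added, so it shows at a glance when a pool on disk is
newer than the tools.)

# zpool upgrade -v    # first line reports the highest pool version these tools understand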

Today I downloaded the OpenSolaris build 134 snapshot image and booted
it to try to rescue the pool, but:

# zpool status
no pools available

So I couldn't run a clear, an export, or a destroy followed by a
re-import with -D; `zpool status` only lists pools that are already
imported.
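
For reference, these are the commands I mean (clear, export, and
destroy all require an imported pool, and -D only finds pools that
were explicitly destroyed):

# zpool clear tank     # reset error counters and reopen the vdevs
# zpool export tank    # cleanly release the pool so another host can import it
# zpool destroy tank   # mark the pool destroyed...
# zpool import -D      # ...then list destroyed pools eligible for re-import
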
I tried to run a regular import:

# zpool import
  pool: tank
    id: 6157028625215863355
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank        FAULTED  corrupted data
          c5d0p1    UNAVAIL  corrupted data

No important data had been written in the previous two days or so, so
rolling back to an older uberblock wouldn't be a problem, and I tried
the new recovery option:

# mkdir -p /mnt/tank && zpool import -fF -R /mnt/tank tank
cannot import 'tank': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.
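
From what I can tell from the b134 man page, -F also has a dry-run
companion: -n reports whether a rewind would make the pool importable
again without actually writing anything, which might at least show
whether the rewind itself is what's failing:

# zpool import -nfF tank    # dry run: report the rewind outcome, modify nothing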

I tried googling for other people with similar issues, but almost all
of them involved RAID setups and other complex configurations, and
weren't really related to this problem. After seeing that in some
cases the labels were corrupted, I ran zdb -l on mine:

# zdb -l /dev/dsk/c5d0p1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 14
    name: 'tank'
    state: 0
    txg: 11420324
    pool_guid: 6157028625215863355
    hostid: 2563111091
    hostname: ''
    top_guid: 1987270273092463401
    guid: 1987270273092463401
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 1987270273092463401
        path: '/dev/ad6s1d'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 32
        ashift: 9
        asize: 497955373056
        is_log: 0
        DTL: 111
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 14
    name: 'tank'
    state: 0
    txg: 11420324
    pool_guid: 6157028625215863355
    hostid: 2563111091
    hostname: ''
    top_guid: 1987270273092463401
    guid: 1987270273092463401
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 1987270273092463401
        path: '/dev/ad6s1d'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 32
        ashift: 9
        asize: 497955373056
        is_log: 0
        DTL: 111
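
Labels 0 and 1 live at the front of the vdev and labels 2 and 3 at the
end, and the surviving labels still record the old FreeBSD path
/dev/ad6s1d, so I'm not sure whether the front labels are really gone
or whether c5d0p1 simply doesn't start where ad6s1d did. Assuming p0
is the whole-disk node on Solaris x86, this seems worth checking too:

# zdb -l /dev/dsk/c5d0p0    # read the label offsets relative to the raw disk instead of the partition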

I'm looking for pointers on how to fix this situation, since the disk
clearly still has readable metadata.