I have a problem that could either be easily solved or could have me
royally screwed.
I had a FreeBSD 8.0 system crash on me, and I lost some binaries,
including the zfs tools. I tried fixing it with Fixit but had no luck, so
I rebuilt world and kernel on a fresh hard drive. The old system had a
raidz zpool containing da0 and da1. Because of the crash I never got to
export this pool, and when I try to import it now I get this:
# zpool import tank
cannot import 'tank': one or more devices is currently unavailable
If I take a look at the list, I only see this:
# zpool import
  pool: tank
    id: 4433502968625883981
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        tank        UNAVAIL  insufficient replicas
          da1       ONLINE
If I list destroyed pools:
# zpool import -D
  pool: tank
    id: 12367720188787195607
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          da0       ONLINE
TADA! There's the missing drive. So what happened?
If I dump the labels on each drive with zdb...
The bad drive:
# zdb -l /dev/da0
--------------------------------------------
LABEL 0
--------------------------------------------
version=13
name='tank'
state=2
txg=50
pool_guid=12367720188787195607
hostid=2180312168
hostname='proj.bullseye.tv'
top_guid=6830294387039432583
guid=6830294387039432583
vdev_tree
type='disk'
id=0
guid=6830294387039432583
path='/dev/da0'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=6998387326976
is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
version=13
name='tank'
state=2
txg=50
pool_guid=12367720188787195607
hostid=2180312168
hostname='proj.bullseye.tv'
top_guid=6830294387039432583
guid=6830294387039432583
vdev_tree
type='disk'
id=0
guid=6830294387039432583
path='/dev/da0'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=6998387326976
is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
version=13
name='tank'
state=2
txg=50
pool_guid=12367720188787195607
hostid=2180312168
hostname='proj.bullseye.tv'
top_guid=6830294387039432583
guid=6830294387039432583
vdev_tree
type='disk'
id=0
guid=6830294387039432583
path='/dev/da0'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=6998387326976
is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
version=13
name='tank'
state=2
txg=50
pool_guid=12367720188787195607
hostid=2180312168
hostname='proj.bullseye.tv'
top_guid=6830294387039432583
guid=6830294387039432583
vdev_tree
type='disk'
id=0
guid=6830294387039432583
path='/dev/da0'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=6998387326976
is_log=0
And the good drive:
# zdb -l /dev/da1
--------------------------------------------
LABEL 0
--------------------------------------------
version=13
name='tank'
state=0
txg=4
pool_guid=4433502968625883981
hostid=2180312168
hostname='zproj.bullseye.tv'
top_guid=11718615808151907516
guid=11718615808151907516
vdev_tree
type='disk'
id=1
guid=11718615808151907516
path='/dev/da1'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=7001602260992
is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
version=13
name='tank'
state=0
txg=4
pool_guid=4433502968625883981
hostid=2180312168
hostname='zproj.bullseye.tv'
top_guid=11718615808151907516
guid=11718615808151907516
vdev_tree
type='disk'
id=1
guid=11718615808151907516
path='/dev/da1'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=7001602260992
is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
version=13
name='tank'
state=0
txg=4
pool_guid=4433502968625883981
hostid=2180312168
hostname='zproj.bullseye.tv'
top_guid=11718615808151907516
guid=11718615808151907516
vdev_tree
type='disk'
id=1
guid=11718615808151907516
path='/dev/da1'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=7001602260992
is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
version=13
name='tank'
state=0
txg=4
pool_guid=4433502968625883981
hostid=2180312168
hostname='zproj.bullseye.tv'
top_guid=11718615808151907516
guid=11718615808151907516
vdev_tree
type='disk'
id=1
guid=11718615808151907516
path='/dev/da1'
whole_disk=0
metaslab_array=23
metaslab_shift=36
ashift=9
asize=7001602260992
is_log=0
One thing that stands out to me is that they have different hostnames.
da0 has the box's current hostname "proj", while da1 has the previous
box's hostname "zproj"... could this be the issue? If so, how do I
change it in this state?
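Another thing: it's not just the hostnames. The pool_guid fields in the
two label dumps differ as well, and they match the two different pool ids
that zpool import printed above. A throwaway sanity check, just comparing
the numbers pasted from the dumps:

```shell
# pool_guid values copied straight from the zdb -l output above
da0_pool_guid=12367720188787195607
da1_pool_guid=4433502968625883981

# If these matched, the disks would at least agree on which pool
# they belong to; they don't.
if [ "$da0_pool_guid" = "$da1_pool_guid" ]; then
    echo "disks agree on the pool"
else
    echo "disks claim different pools"
fi
```

So the two disks now claim two entirely different pools that both happen
to be named 'tank', which I assume is why the import by name falls apart.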
But wait, a look at dmesg reveals:
da0: 6674186MB (13668734464 512 byte sectors: 255H 63S/T 850839C)
GEOM: da0: corrupt or invalid GPT detected.
GEOM: da0: GPT rejected -- may not be recoverable.
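One thing that gives me a little hope: the raw capacity dmesg reports for
da0 still covers the asize recorded in its ZFS label. Quick arithmetic
with the numbers pasted from above (just a sanity check on my part,
nothing authoritative):

```shell
# Numbers copied from the dmesg line and the zdb -l dump above
sectors=13668734464          # da0 sector count per dmesg
asize=6998387326976          # da0 asize per zdb -l

bytes=$((sectors * 512))     # raw device size in bytes
echo "raw disk:  $bytes bytes"
echo "zfs asize: $asize bytes"
echo "slack:     $((bytes - asize)) bytes"
```

The slack comes out to about 4.5 MB, which I assume is just label and
rounding overhead rather than a truncated disk, so the data region itself
may still be intact even though GEOM is rejecting the GPT.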
Could the drive just be completely effed? If so, then I think I am too.
Please help!
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"