OK, I played around with the physical configuration and placed the drives back
on the original controller, and zdb -l is now able to unpack LABEL 0, 1, 2 and 3
for all drives in the pool. I also changed the hostname in OpenSolaris to
"freenas.local", as that is what was listed in the zdb -l output (although I
doubt this matters).
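For reference, this is roughly how I checked the labels on each member (same
device names as in the zpool import output below; the grep is just a quick
count of how many of the four labels unpack, so I expect 4 per drive):

for d in c5t0d0p0 c5t1d0p0 c5t2d0s2 c5t3d0p0 c5t4d0p0 c5t5d0p0; do
  echo "=== $d ==="
  zdb -l /dev/dsk/$d | grep -c LABEL
done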
The new setup looks like this:
@freenas:~/dskp0s# zpool import
pool: Raidz
id: 14119036174566039103
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-72
config:
Raidz FAULTED corrupted data
raidz1-0 FAULTED corrupted data
c5t0d0p0 ONLINE
c5t1d0p0 ONLINE
c5t2d0s2 ONLINE
c5t3d0p0 ONLINE
c5t4d0p0 ONLINE
c5t5d0p0 ONLINE
@freenas:~/dskp0s# ls -l /dev/dsk/c5*
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t0d0p0 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@0,0:q
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t1d0p0 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@1,0:q
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t2d0s2 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@2,0:c
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t3d0p0 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@3,0:q
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t4d0p0 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@4,0:q
lrwxrwxrwx 1 root root 62 Jul 23 08:05 /dev/dsk/c5t5d0p0 ->
../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@5,0:q
The output of the above command was edited to show only the devices listed in
the pool.
I then made symlinks to the devices directly, as follows, in a directory called ~/dskp0s:
@freenas:~/dskp0s# ls -l
total 17
lrwxrwxrwx 1 root root 57 Jul 23 08:40 aacdu0 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@0,0:q
lrwxrwxrwx 1 root root 57 Jul 23 08:40 aacdu1 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@1,0:q
lrwxrwxrwx 1 root root 57 Jul 23 08:40 aacdu2 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@2,0:q
lrwxrwxrwx 1 root root 57 Jul 23 08:40 aacdu3 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@3,0:q
lrwxrwxrwx 1 root root 57 Jul 23 08:41 aacdu4 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@4,0:q
lrwxrwxrwx 1 root root 57 Jul 23 08:41 aacdu5 ->
/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@5,0:q
-rw-r--r-- 1 root root 1992 Jul 26 07:30 zpool.cache
Note: the aacdu2 symlink was pointed at d...@2,0:q instead of d...@2,0:c,
because under FreeNAS the disks should all be identical (maybe this is part of
the problem?). zdb -l completes with either symlink.
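For completeness, the links were created along these lines (the /devices paths
are abbreviated here exactly as in the ls output above; the real commands used
the full pci8086,.../pci9005,... path):

cd ~/dskp0s
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@0,0:q aacdu0
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@1,0:q aacdu1
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@2,0:q aacdu2   # :q rather than :c, as noted above
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@3,0:q aacdu3
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@4,0:q aacdu4
ln -s /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@5,0:q aacdu5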
I then ran these commands from the ~/dskp0s directory:
@freenas:~/dskp0s# zpool import Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
@freenas:~/dskp0s# zpool import -d . Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
@freenas:~/dskp0s# zpool import -f Raidz
cannot import 'Raidz': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.
@freenas:~/dskp0s# zpool import -d . -f Raidz
cannot import 'Raidz': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.
@freenas:~/dskp0s# zpool import -F Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
@freenas:~/dskp0s# zpool import -d . -F Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
@freenas:~/dskp0s# zdb -l aacdu0
--------------------------------------------
LABEL 0
--------------------------------------------
version: 6
name: 'Raidz'
state: 0
txg: 11730350
pool_guid: 14119036174566039103
hostid: 0
hostname: 'freenas.local'
top_guid: 16879648846521942561
guid: 6543046729241888600
vdev_tree:
type: 'raidz'
id: 0
guid: 16879648846521942561
nparity: 1
metaslab_array: 14
metaslab_shift: 32
ashift: 9
asize: 6000992059392
children[0]:
type: 'disk'
id: 0
guid: 6543046729241888600
path: '/dev/aacdu0'
whole_disk: 0
children[1]:
type: 'disk'
id: 1
guid: 14313209149820231630
path: '/dev/aacdu1'
whole_disk: 0
children[2]:
type: 'disk'
id: 2
guid: 5383435113781649515
path: '/dev/aacdu2'
whole_disk: 0
children[3]:
type: 'disk'
id: 3
guid: 9586044621389086913
path: '/dev/aacdu3'
whole_disk: 0
DTL: 1372
children[4]:
type: 'disk'
id: 4
guid: 10401318729908601665
path: '/dev/aacdu4'
whole_disk: 0
children[5]:
type: 'disk'
id: 5
guid: 6282477769796963197
path: '/dev/aacdu5'
whole_disk: 0
--------------------------------------------
LABEL 1
--------------------------------------------
version: 6
name: 'Raidz'
state: 0
txg: 11730350
pool_guid: 14119036174566039103
hostid: 0
hostname: 'freenas.local'
top_guid: 16879648846521942561
guid: 6543046729241888600
vdev_tree:
type: 'raidz'
id: 0
guid: 16879648846521942561
nparity: 1
metaslab_array: 14
metaslab_shift: 32
ashift: 9
asize: 6000992059392
children[0]:
type: 'disk'
id: 0
guid: 6543046729241888600
path: '/dev/aacdu0'
whole_disk: 0
children[1]:
type: 'disk'
id: 1
guid: 14313209149820231630
path: '/dev/aacdu1'
whole_disk: 0
children[2]:
type: 'disk'
id: 2
guid: 5383435113781649515
path: '/dev/aacdu2'
whole_disk: 0
children[3]:
type: 'disk'
id: 3
guid: 9586044621389086913
path: '/dev/aacdu3'
whole_disk: 0
DTL: 1372
children[4]:
type: 'disk'
id: 4
guid: 10401318729908601665
path: '/dev/aacdu4'
whole_disk: 0
children[5]:
type: 'disk'
id: 5
guid: 6282477769796963197
path: '/dev/aacdu5'
whole_disk: 0
--------------------------------------------
LABEL 2
--------------------------------------------
version: 6
name: 'Raidz'
state: 0
txg: 11730350
pool_guid: 14119036174566039103
hostid: 0
hostname: 'freenas.local'
top_guid: 16879648846521942561
guid: 6543046729241888600
vdev_tree:
type: 'raidz'
id: 0
guid: 16879648846521942561
nparity: 1
metaslab_array: 14
metaslab_shift: 32
ashift: 9
asize: 6000992059392
children[0]:
type: 'disk'
id: 0
guid: 6543046729241888600
path: '/dev/aacdu0'
whole_disk: 0
children[1]:
type: 'disk'
id: 1
guid: 14313209149820231630
path: '/dev/aacdu1'
whole_disk: 0
children[2]:
type: 'disk'
id: 2
guid: 5383435113781649515
path: '/dev/aacdu2'
whole_disk: 0
children[3]:
type: 'disk'
id: 3
guid: 9586044621389086913
path: '/dev/aacdu3'
whole_disk: 0
DTL: 1372
children[4]:
type: 'disk'
id: 4
guid: 10401318729908601665
path: '/dev/aacdu4'
whole_disk: 0
children[5]:
type: 'disk'
id: 5
guid: 6282477769796963197
path: '/dev/aacdu5'
whole_disk: 0
--------------------------------------------
LABEL 3
--------------------------------------------
version: 6
name: 'Raidz'
state: 0
txg: 11730350
pool_guid: 14119036174566039103
hostid: 0
hostname: 'freenas.local'
top_guid: 16879648846521942561
guid: 6543046729241888600
vdev_tree:
type: 'raidz'
id: 0
guid: 16879648846521942561
nparity: 1
metaslab_array: 14
metaslab_shift: 32
ashift: 9
asize: 6000992059392
children[0]:
type: 'disk'
id: 0
guid: 6543046729241888600
path: '/dev/aacdu0'
whole_disk: 0
children[1]:
type: 'disk'
id: 1
guid: 14313209149820231630
path: '/dev/aacdu1'
whole_disk: 0
children[2]:
type: 'disk'
id: 2
guid: 5383435113781649515
path: '/dev/aacdu2'
whole_disk: 0
children[3]:
type: 'disk'
id: 3
guid: 9586044621389086913
path: '/dev/aacdu3'
whole_disk: 0
DTL: 1372
children[4]:
type: 'disk'
id: 4
guid: 10401318729908601665
path: '/dev/aacdu4'
whole_disk: 0
children[5]:
type: 'disk'
id: 5
guid: 6282477769796963197
path: '/dev/aacdu5'
whole_disk: 0
The zpool.cache file in the listing is from the original machine; I copied it
over to try "zpool import -c", but that didn't work either.
zpool import -f doesn't work even if I specify a directory with -d, and states
that one or more devices is unavailable (am I symlinking incorrectly?).
zpool import -F states that the pool may be in use from another system and to
use '-f' to import anyway (which doesn't work, as shown above). I also tried
placing the drives back on the original FreeNAS setup and exporting, but
nothing happens: "zpool export Raidz" does not return any output.
I have not tried any other commands, as I am afraid of causing more damage. Any
suggestions would be appreciated.