Re: [zfs-discuss] zfs fails to import zpool

2010-07-26 Thread Jorge Montes IV
OK, I played around with the physical configuration and placed the drives back on 
the original controller, and zdb -l is now able to unpack labels 0, 1, 2, and 3 for 
all drives in the pool.  I also changed the hostname in OpenSolaris to 
freenas.local, as that is what was listed in the zdb -l output (although I doubt 
this matters).
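
For the record, this is roughly what that involved (the device path is abbreviated 
here just as in the listings below; editing /etc/nodename is the usual way to set 
the node name on OpenSolaris):

# check which hostname the labels were written with
zdb -l /dev/dsk/c5t0d0p0 | grep hostname

# set the OpenSolaris node name to match, then reboot
echo freenas.local > /etc/nodename
reboot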

The new setup looks like this:
 
@freenas:~/dskp0s# zpool import
  pool: Raidz
id: 14119036174566039103
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        Raidz         FAULTED  corrupted data
          raidz1-0    FAULTED  corrupted data
            c5t0d0p0  ONLINE
            c5t1d0p0  ONLINE
            c5t2d0s2  ONLINE
            c5t3d0p0  ONLINE
            c5t4d0p0  ONLINE
            c5t5d0p0  ONLINE

@freenas:~/dskp0s# ls -l /dev/dsk/c5*
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t0d0p0 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@0,0:q
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t1d0p0 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@1,0:q
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t2d0s2 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@2,0:c
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t3d0p0 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@3,0:q
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t4d0p0 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@4,0:q
lrwxrwxrwx   1 root root  62 Jul 23 08:05 /dev/dsk/c5t5d0p0 -> ../../devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@5,0:q

The output of the above command was edited to show only the devices listed in 
the pool.

I then made symlinks directly to the devices, as follows, in a directory called /dskp0s:

@freenas:~/dskp0s# ls -l
total 17
lrwxrwxrwx   1 root root  57 Jul 23 08:40 aacdu0 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@0,0:q
lrwxrwxrwx   1 root root  57 Jul 23 08:40 aacdu1 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@1,0:q
lrwxrwxrwx   1 root root  57 Jul 23 08:40 aacdu2 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@2,0:q
lrwxrwxrwx   1 root root  57 Jul 23 08:40 aacdu3 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@3,0:q
lrwxrwxrwx   1 root root  57 Jul 23 08:41 aacdu4 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@4,0:q
lrwxrwxrwx   1 root root  57 Jul 23 08:41 aacdu5 -> /devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@5,0:q
-rw-r--r--   1 root root1992 Jul 26 07:30 zpool.cache

Note: the aacdu2 symlink was linked to d...@2,0:q instead of d...@2,0:c because in 
FreeNAS the disks should all be identical (maybe this is part of the problem?).  
zdb -l completes with either symlink.
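
For completeness, this is essentially how the /dskp0s directory was built (a sketch 
from memory; the long /devices/... controller path is abbreviated here exactly as 
in the listings above):

mkdir /dskp0s && cd /dskp0s
# one symlink per disk, pointing the FreeNAS-style name at the raw device node
i=0
while [ $i -le 5 ]; do
    ln -s "/devices/p...@0,0/pci8086,2...@1e/pci9005,2...@2/d...@$i,0:q" "aacdu$i"
    i=$((i + 1))
done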

Then I ran these commands from the /dskp0s directory:

@freenas:~/dskp0s# zpool import Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway

@freenas:~/dskp0s# zpool import -d . Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway

@freenas:~/dskp0s# zpool import -f Raidz
cannot import 'Raidz': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.

@freenas:~/dskp0s# zpool import -d . -f Raidz
cannot import 'Raidz': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.

@freenas:~/dskp0s# zpool import -F Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway

@freenas:~/dskp0s# zpool import -d . -F Raidz
cannot import 'Raidz': pool may be in use from other system
use '-f' to import anyway
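
For what it's worth, the options can also be combined; assuming this zpool build 
accepts -F together with -f, the next thing to try would be a forced import in 
recovery mode from the symlink directory:

# force the import (-f), request recovery mode (-F), and search only
# the current directory of symlinks (-d .)
zpool import -d . -f -F Raidz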

@freenas:~/dskp0s# zdb -l aacdu0

LABEL 0

    version: 6
    name: 'Raidz'
    state: 0
    txg: 11730350
    pool_guid: 14119036174566039103
    hostid: 0
    hostname: 'freenas.local'
    top_guid: 16879648846521942561
    guid: 6543046729241888600
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16879648846521942561
        nparity: 1
        metaslab_array: 14
        metaslab_shift: 32
        ashift: 9
        asize: 6000992059392
        children[0]:
            type: 'disk'
            id: 0
            guid: 6543046729241888600
            path: '/dev/aacdu0'
            whole_disk: 0
        children[1]:
            type: 'disk'
            id: 1
            guid: 14313209149820231630
            path: '/dev/aacdu1'
            whole_disk: 0
        children[2]:
            type: 'disk'
            id: 2
            guid: 5383435113781649515
            path: '/dev/aacdu2'
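
To sanity-check that each symlink really points at the disk its label expects, a 
quick loop over the guid/path pairs reported by zdb should be enough, e.g.:

# print the guid and path recorded in the label of each symlinked device
for d in aacdu0 aacdu1 aacdu2 aacdu3 aacdu4 aacdu5; do
    echo "== $d =="
    zdb -l $d | egrep "guid|path"
done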

Re: [zfs-discuss] zfs fails to import zpool

2010-07-21 Thread Jorge Montes IV
I think this may be my problem (at least I hope it is):

http://opensolaris.org/jive/thread.jspa?messageID=489905#489905



But I am not sure what this means:

Everything in /dev/dsk and /dev/rdsk is a symlink or directory,
so you can fake them out with a temporary directory and
clever use of the zpool import -d command. Examples are
in the archives. (from Richard Elling's post)

Where are these archives located?


Re: [zfs-discuss] zfs fails to import zpool

2010-07-21 Thread Erik Trimble

On 7/21/2010 1:36 AM, Jorge Montes IV wrote:

I think this may be my problem (at least I hope it is):

http://opensolaris.org/jive/thread.jspa?messageID=489905#489905



But I am not sure what this means

Everything in /dev/dsk and /dev/rdsk is a symlink or directory,
so you can fake them out with a temporary directory and
clever use of the zpool import -d command. Examples are
in the archives. (from Richard Elling's post)

Where are these archives located?
   

http://mail.opensolaris.org/pipermail/zfs-discuss/

use Google to search them.
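
In short, the trick is to build a temporary directory of symlinks to the device 
nodes and point zpool at it.  A minimal sketch (hypothetical device names, 
substitute your own):

mkdir /tmp/mydevs
# one symlink per disk (named here to match the paths stored in the labels,
# as in the original FreeNAS setup)
ln -s /dev/dsk/c5t0d0p0 /tmp/mydevs/aacdu0
ln -s /dev/dsk/c5t1d0p0 /tmp/mydevs/aacdu1
# ...repeat for the remaining disks...
zpool import -d /tmp/mydevs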

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] zfs fails to import zpool

2010-07-20 Thread Jorge Montes IV
Last week my FreeNAS server began to beep constantly, so I rebooted it through the 
web GUI.  When the machine finished booting I logged back in to the web GUI and 
noticed that my zpool (Raidz) was faulted.  Most of the data on this pool is 
replaceable, but it also held some pictures that were not backed up and that I 
would really like to recover.  At the time the machine started to beep I was 
verifying a torrent and streaming a movie.

Here is a little more info about my setup:

FreeNAS 0.7.1 Shere (revision 4997)
Intel Pentium 4
Tyan S5161
2 GB RAM
Adaptec AAR-21610SA
1x 250 GB Maxtor boot drive/storage
6x 1 TB WD drives in raidz (pool name Raidz)
ZFS filesystem version 6
ZFS storage pool version 6


Commands I tried and their output:

freenas:~# zpool import
no pools available to import

freenas:~# zpool status -v
pool: Raidz
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-CS
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Raidz       FAULTED      0     0     6  corrupted data
          raidz1    FAULTED      0     0     6  corrupted data
            aacdu0  ONLINE       0     0     0
            aacdu1  ONLINE       0     0     0
            aacdu2  ONLINE       0     0     0
            aacdu3  ONLINE       0     0     0
            aacdu4  ONLINE       0     0     0
            aacdu5  ONLINE       0     0     0

freenas:~# zpool import -f
no pools available to import

freenas:~# zpool import -f Raidz
cannot import 'Raidz': no such pool available
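
For reference, a pool the system still knows about (i.e. one that shows up in 
zpool status) is not listed by zpool import at all, so exporting it first should 
force a fresh scan of the disks.  A minimal sketch of that approach:

# the pool still shows in 'zpool status', so 'zpool import' will not list it;
# exporting it makes the next 'zpool import' scan the disks again
zpool export Raidz
zpool import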


I transferred the drives to another motherboard (ASUS, Core 2 Duo, 4 GB RAM) and 
booted FreeBSD 8.1-RC1 with ZFS v14.  I got the following output with zpool 
status -v:

===

pool: Raidz
id: 14119036174566039103
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

        Raidz       ONLINE
          raidz1    ONLINE
            ada3    ONLINE
            ada5    ONLINE
            ada4    ONLINE
            ada1    ONLINE
            ada2    ONLINE
            ada0    ONLINE
=

but when I ran 'zpool import Raidz' it told me to use the -f flag.  Doing so gave 
me a fatal trap 12 error.

The command 'zdb -l /dev/ada0' produced:

LABEL 0

    version=6
    name='Raidz'
    state=0
    txg=11730350
    pool_guid=14119036174566039103
    hostid=0
    hostname='freenas.local'
    top_guid=16879648846521942561
    guid=6282477769796963197
    vdev_tree
        type='raidz'
        id=0
        guid=16879648846521942561
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6000992059392
        children[0]
            type='disk'
            id=0
            guid=6543046729241888600
            path='/dev/aacdu0'
            whole_disk=0
        children[1]
            type='disk'
            id=1
            guid=14313209149820231630
            path='/dev/aacdu1'
            whole_disk=0
        children[2]
            type='disk'
            id=2
            guid=5383435113781649515
            path='/dev/aacdu2'
            whole_disk=0
        children[3]
            type='disk'
            id=3
            guid=9586044621389086913
            path='/dev/aacdu3'
            whole_disk=0
            DTL=1372
        children[4]
            type='disk'
            id=4
            guid=10401318729908601665
            path='/dev/aacdu4'
            whole_disk=0
        children[5]
            type='disk'
            id=5
            guid=6282477769796963197
            path='/dev/aacdu5'
            whole_disk=0

LABEL 1

    version=6
    name='Raidz'
    state=0
    txg=11730350
    pool_guid=14119036174566039103
    hostid=0
    hostname='freenas.local'
    top_guid=16879648846521942561
    guid=6282477769796963197
    vdev_tree
        type='raidz'
        id=0
        guid=16879648846521942561
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6000992059392
        children[0]
            type='disk'
            id=0
            guid=6543046729241888600
            path='/dev/aacdu0'
            whole_disk=0
        children[1]
            type='disk'
            id=1
            guid=14313209149820231630
            path='/dev/aacdu1'
            whole_disk=0
        children[2]
            type='disk'
            id=2
            guid=5383435113781649515
            path='/dev/aacdu2'
            whole_disk=0
        children[3]
            type='disk'
            id=3
            guid=9586044621389086913
            path='/dev/aacdu3'
            whole_disk=0
            DTL=1372
        children[4]
            type='disk'
            id=4
            guid=10401318729908601665
            path='/dev/aacdu4'
            whole_disk=0
        children[5]
            type='disk'
            id=5
            guid=6282477769796963197
            path='/dev/aacdu5'
            whole_disk=0

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3
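
If I understand the label layout correctly, labels 2 and 3 are stored in the last 
512 KB of the device, so only they failing to unpack may indicate that the disks 
are presenting a slightly different size than when the pool was created.  On 
FreeBSD the reported size can be checked with something like:

# show the sector size and total media size the OS sees for one of the disks
diskinfo -v /dev/ada0 | egrep "mediasize|sectorsize"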


After this I installed the controller and used different ports and cabling setups 
to get the guids and device paths to match, and I was able to unpack all labels on 
all drives.  I am not sure what any of this means, and unfortunately I didn't 
record the outputs.  I can redo this if necessary.




I then booted OpenSolaris 2009.06, and 4 out of the 6 disks showed corrupted data 
when I ran zpool import.  When I tried to import the pool it failed, but I can't 
remember the output.  I have never used OpenSolaris and could not ssh into it to 
record the output.
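
For next time, the SSH server on OpenSolaris is managed by SMF, so it should be 
possible to enable it and capture the output from another machine, roughly:

# enable the SSH service, then confirm it is online
pfexec svcadm enable network/ssh
svcs network/ssh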