Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
Thank you for the reply, Mark.
I flashed my SATA card and it's now compatible with OpenSolaris: I can see all the remaining good drives.

j...@opensolaris:~# zpool import
  pool: files
    id: 3459234681059189202
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        files          UNAVAIL  insufficient replicas
          raidz1       UNAVAIL  insufficient replicas
            c9d1s8     UNAVAIL  corrupted data
            c9d0p0     ONLINE
            /dev/ad16  OFFLINE
            c10d1s8    UNAVAIL  corrupted data
            c7d1p0     ONLINE
            c10d0p0    ONLINE
j...@opensolaris:~# zpool import files
cannot import 'files': pool may be in use from other system
use '-f' to import anyway
j...@opensolaris:~# zpool import -f files
internal error: Value too large for defined data type
Abort (core dumped)
j...@opensolaris:~# zpool import -d /dev

...shows nothing after 20 minutes

Tim


Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Mark J Musante

On Thu, 15 Jul 2010, Tim Castle wrote:


j...@opensolaris:~# zpool import -d /dev

...shows nothing after 20 minutes


OK, then one other thing to try is to create a new directory, e.g. /mydev, and create symbolic links in it to only those drives that are part of your pool.


Based on your label output, I see:

path='/dev/ad6'
path='/dev/ad4'
path='/dev/ad16'
path='/dev/ad18'
path='/dev/ad8'
path='/dev/ad10'

I'm guessing /dev has many more entries in it, and the zpool import command is hanging in its attempt to open each one of them.


So try doing:

# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10

This way, you can issue zpool import -d /mydev and the import code 
should *only* see the devices that are part of the pool.
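
Spelled out in full, the sequence would look roughly like this (the six paths are taken from the label output above, and this assumes /mydev doesn't already exist):

# mkdir /mydev
# ln -s /dev/ad6  /mydev/ad6
# ln -s /dev/ad4  /mydev/ad4
# ln -s /dev/ad16 /mydev/ad16
# ln -s /dev/ad18 /mydev/ad18
# ln -s /dev/ad8  /mydev/ad8
# ln -s /dev/ad10 /mydev/ad10
# zpool import -d /mydev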



Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
Alright, I created the links 

# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10

and ran 'zpool import -d /mydev'
Nothing - the links in /mydev are all broken.
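
(Presumably because OpenSolaris doesn't create FreeBSD-style /dev/ad* device nodes, so the link targets simply aren't there. Easy to check:

# ls -l /mydev      # shows where each link points
# ls -L /mydev      # -L follows the links; it errors out on the dangling ones
)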


Thanks again,
Tim


Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Mark J Musante


What does 'zpool import -d /dev' show?

On Wed, 14 Jul 2010, Tim Castle wrote:


My raidz1 pool (ZFS version 6) had a power failure and a disk failure. Now:


j...@opensolaris:~# zpool import
  pool: files
    id: 3459234681059189202
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        files          UNAVAIL  insufficient replicas
          raidz1       UNAVAIL  insufficient replicas
            c8d1s8     UNAVAIL  corrupted data
            c9d0p0     ONLINE
            /dev/ad16  OFFLINE
            c9d1s8     UNAVAIL  corrupted data
            /dev/ad8   UNAVAIL  corrupted data
            c8d0p0     ONLINE
j...@opensolaris:~# zpool import files
cannot import 'files': pool may be in use from other system
use '-f' to import anyway
j...@opensolaris:~# zpool import -f files
cannot import 'files': invalid vdev configuration


ad16 is the dead drive.
ad8 is fine but disconnected: I can only connect 4 SATA drives to the OpenSolaris box, because my PCI SATA card isn't compatible.
I created and used the pool with FreeNAS, which gives me the same error when all 5 drives are connected.

So why do c8d1s8 and c9d1s8 show up as slices? c9d0p0, c8d0p0, and ad8 (when connected) show up as partitions.

zdb -l returns the same thing for all 5 drives: labels 0 and 1 are fine, while labels 2 and 3 fail to unpack.


j...@opensolaris:~# zdb -l /dev/dsk/c8d1s8

LABEL 0

   version=6
   name='files'
   state=0
   txg=2123835
   pool_guid=3459234681059189202
   hostid=0
   hostname='freenas.local'
   top_guid=18367164273662411813
   guid=7276810192259058351
   vdev_tree
        type='raidz'
        id=0
        guid=18367164273662411813
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6001199677440
        children[0]
                type='disk'
                id=0
                guid=7276810192259058351
                path='/dev/ad6'
                devid='ad:STF602MR3GHBZP'
                whole_disk=0
                DTL=1012
        children[1]
                type='disk'
                id=1
                guid=5425645052930513342
                path='/dev/ad4'
                devid='ad:STF602MR3EZ0WP'
                whole_disk=0
                DTL=1011
        children[2]
                type='disk'
                id=2
                guid=4766543340687449042
                path='/dev/ad16'
                devid='ad:GTA000PAG7PGGA'
                whole_disk=0
                DTL=1010
                offline=1
        children[3]
                type='disk'
                id=3
                guid=16172918065436695818
                path='/dev/ad18'
                devid='ad:WD-WCAU42121120'
                whole_disk=0
                DTL=1009
        children[4]
                type='disk'
                id=4
                guid=3693181954889803829
                path='/dev/ad8'
                devid='ad:STF602MR3EYWJP'
                whole_disk=0
                DTL=1008
        children[5]
                type='disk'
                id=5
                guid=5419080715831351987
                path='/dev/ad10'
                devid='ad:STF602MR3ESPYP'
                whole_disk=0
                DTL=1007

LABEL 1

   version=6
   name='files'
   state=0
   txg=2123835
   pool_guid=3459234681059189202
   hostid=0
   hostname='freenas.local'
   top_guid=18367164273662411813
   guid=7276810192259058351
   vdev_tree
        type='raidz'
        id=0
        guid=18367164273662411813
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=6001199677440
        children[0]
                type='disk'
                id=0
                guid=7276810192259058351
                path='/dev/ad6'
                devid='ad:STF602MR3GHBZP'
                whole_disk=0
                DTL=1012
        children[1]
                type='disk'
                id=1
                guid=5425645052930513342
                path='/dev/ad4'
                devid='ad:STF602MR3EZ0WP'
                whole_disk=0
                DTL=1011
        children[2]
                type='disk'
                id=2
                guid=4766543340687449042
                path='/dev/ad16'
                devid='ad:GTA000PAG7PGGA'
                whole_disk=0
                DTL=1010
                offline=1
        children[3]
                type='disk'
                id=3
                guid=16172918065436695818
                path='/dev/ad18'
                devid='ad:WD-WCAU42121120'
                whole_disk=0
                DTL=1009
        children[4]
                type='disk'
                id=4