Re: [zfs-discuss] invalid vdev configuration after power failure

2010-10-05 Thread diyanc

Kyle Kakligian smallart at gmail.com writes:

 I'm not sure why `zpool import` choked on this [typical?] error case,
 but it's easy to fix with a very careful dd. I took a different and
 very roundabout approach to recover my data, however, since I'm not
 confident in my 'careful' skills. (after all, where's my backup?)
 Instead, on a linux workstation where I am more cozy, I compiled
 zfs-fuse from the source with a slight modification to ignore labels 2
 and 3. fusermount worked great and I recovered my data without issue.

Hi,

waking up this old thread: would you mind sharing how you edited
zfs-fuse to ignore labels 2 and 3?

thanks,

regards,

diyanc



Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
Thank you for the reply, Mark.
I flashed my SATA card and it's now compatible with OpenSolaris: I can see all 
the remaining good drives. 

j...@opensolaris:~# zpool import
  pool: files
id: 3459234681059189202
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

files  UNAVAIL  insufficient replicas
  raidz1   UNAVAIL  insufficient replicas
c9d1s8 UNAVAIL  corrupted data
c9d0p0 ONLINE
/dev/ad16  OFFLINE
c10d1s8UNAVAIL  corrupted data
c7d1p0 ONLINE
c10d0p0ONLINE
j...@opensolaris:~# zpool import files
cannot import 'files': pool may be in use from other system
use '-f' to import anyway
j...@opensolaris:~# zpool import -f files
internal error: Value too large for defined data type
Abort (core dumped)
j...@opensolaris:~# zpool import -d /dev

...shows nothing after 20 minutes

Tim


Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Mark J Musante

On Thu, 15 Jul 2010, Tim Castle wrote:


j...@opensolaris:~# zpool import -d /dev

...shows nothing after 20 minutes


OK, then one other thing to try is to create a new directory, e.g. /mydev, 
and create in it symbolic links to only those drives that are part of your 
pool.


Based on your label output, I see:

path='/dev/ad6'
path='/dev/ad4'
path='/dev/ad16'
path='/dev/ad18'
path='/dev/ad8'
path='/dev/ad10'

I'm guessing /dev has many more entries in it, and the zpool import command 
is hanging in its attempt to open each one of them.


So try doing:

# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10

This way, you can issue zpool import -d /mydev and the import code 
should *only* see the devices that are part of the pool.
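
(Or, spelled out as one loop over the six paths from your label output; 
'ln -s /dev/adN' with no link name just drops a link called adN in the 
current directory:)

# mkdir /mydev
# cd /mydev
# for d in ad6 ad4 ad16 ad18 ad8 ad10 ; do ln -s /dev/$d ; done
# zpool import -d /mydev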



Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
Alright, I created the links 

# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10

and ran 'zpool import -d /mydev'.
Nothing; the links in /mydev are all broken.


Thanks again,
Tim


[zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Tim Castle
My raidz1 pool (ZFS v6) had a power failure and a disk failure. Now:


j...@opensolaris:~# zpool import
  pool: files
id: 3459234681059189202
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

files  UNAVAIL  insufficient replicas
  raidz1   UNAVAIL  insufficient replicas
c8d1s8 UNAVAIL  corrupted data
c9d0p0 ONLINE
/dev/ad16  OFFLINE
c9d1s8 UNAVAIL  corrupted data
/dev/ad8   UNAVAIL  corrupted data
c8d0p0 ONLINE
j...@opensolaris:~# zpool import files
cannot import 'files': pool may be in use from other system
use '-f' to import anyway
j...@opensolaris:~# zpool import -f files
cannot import 'files': invalid vdev configuration


ad16 is the dead drive. 
ad8 is fine but disconnected. I can only connect 4 SATA drives to OpenSolaris: 
my PCI SATA card isn't compatible. 
I created and used the pool with FreeNAS, which gives me the same error when 
all 5 drives are connected. 

So why do c8d1s8 and c9d1s8 show up as slices, while c9d0p0, c8d0p0, and ad8 
(when connected) show up as partitions? 

zdb -l returns the same thing for all 5 drives. Labels 0 and 1 are fine. 2 and 
3 fail to unpack.


j...@opensolaris:~# zdb -l /dev/dsk/c8d1s8

LABEL 0

version=6
name='files'
state=0
txg=2123835
pool_guid=3459234681059189202
hostid=0
hostname='freenas.local'
top_guid=18367164273662411813
guid=7276810192259058351
vdev_tree
type='raidz'
id=0
guid=18367164273662411813
nparity=1
metaslab_array=14
metaslab_shift=32
ashift=9
asize=6001199677440
children[0]
type='disk'
id=0
guid=7276810192259058351
path='/dev/ad6'
devid='ad:STF602MR3GHBZP'
whole_disk=0
DTL=1012
children[1]
type='disk'
id=1
guid=5425645052930513342
path='/dev/ad4'
devid='ad:STF602MR3EZ0WP'
whole_disk=0
DTL=1011
children[2]
type='disk'
id=2
guid=4766543340687449042
path='/dev/ad16'
devid='ad:GTA000PAG7PGGA'
whole_disk=0
DTL=1010
offline=1
children[3]
type='disk'
id=3
guid=16172918065436695818
path='/dev/ad18'
devid='ad:WD-WCAU42121120'
whole_disk=0
DTL=1009
children[4]
type='disk'
id=4
guid=3693181954889803829
path='/dev/ad8'
devid='ad:STF602MR3EYWJP'
whole_disk=0
DTL=1008
children[5]
type='disk'
id=5
guid=5419080715831351987
path='/dev/ad10'
devid='ad:STF602MR3ESPYP'
whole_disk=0
DTL=1007

LABEL 1

version=6
name='files'
state=0
txg=2123835
pool_guid=3459234681059189202
hostid=0
hostname='freenas.local'
top_guid=18367164273662411813
guid=7276810192259058351
vdev_tree
type='raidz'
id=0
guid=18367164273662411813
nparity=1
metaslab_array=14
metaslab_shift=32
ashift=9
asize=6001199677440
children[0]
type='disk'
id=0
guid=7276810192259058351
path='/dev/ad6'
devid='ad:STF602MR3GHBZP'
whole_disk=0
DTL=1012
children[1]
type='disk'
id=1
guid=5425645052930513342
path='/dev/ad4'
devid='ad:STF602MR3EZ0WP'
whole_disk=0
DTL=1011
children[2]
type='disk'
id=2
guid=4766543340687449042
path='/dev/ad16'
devid='ad:GTA000PAG7PGGA'
whole_disk=0
DTL=1010
offline=1
children[3]
type='disk'
id=3
guid=16172918065436695818
path='/dev/ad18'
devid='ad:WD-WCAU42121120'
whole_disk=0
DTL=1009
children[4]
type='disk'
 

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Mark J Musante


What does 'zpool import -d /dev' show?

On Wed, 14 Jul 2010, Tim Castle wrote:


My raidz1 (ZFSv6) had a power failure, and a disk failure. Now:


j...@opensolaris:~# zpool import
 pool: files
   id: 3459234681059189202
state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
  see: http://www.sun.com/msg/ZFS-8000-5E
config:

files  UNAVAIL  insufficient replicas
  raidz1   UNAVAIL  insufficient replicas
c8d1s8 UNAVAIL  corrupted data
c9d0p0 ONLINE
/dev/ad16  OFFLINE
c9d1s8 UNAVAIL  corrupted data
/dev/ad8   UNAVAIL  corrupted data
c8d0p0 ONLINE
j...@opensolaris:~# zpool import files
cannot import 'files': pool may be in use from other system
use '-f' to import anyway
j...@opensolaris:~# zpool import -f files
cannot import 'files': invalid vdev configuration


ad16 is the dead drive.
ad8 is fine but disconnected. I can only connect 4 sata drives to open solaris: 
my pci sata card isn't compatible.
I created and used the pool with FreeNAS, which gives me the same error when 
all 5 drives are connected.

So why do c8d1s8 c9d1s8 show up as slices? c9d0p0, c8d0p0, and ad8 when 
connected, show up as partitions.

zdb -l returns the same thing for all 5 drives. Labels 0 and 1 are fine. 2 and 
3 fail to unpack.


j...@opensolaris:~# zdb -l /dev/dsk/c8d1s8

LABEL 0

   version=6
   name='files'
   state=0
   txg=2123835
   pool_guid=3459234681059189202
   hostid=0
   hostname='freenas.local'
   top_guid=18367164273662411813
   guid=7276810192259058351
   vdev_tree
   type='raidz'
   id=0
   guid=18367164273662411813
   nparity=1
   metaslab_array=14
   metaslab_shift=32
   ashift=9
   asize=6001199677440
   children[0]
   type='disk'
   id=0
   guid=7276810192259058351
   path='/dev/ad6'
   devid='ad:STF602MR3GHBZP'
   whole_disk=0
   DTL=1012
   children[1]
   type='disk'
   id=1
   guid=5425645052930513342
   path='/dev/ad4'
   devid='ad:STF602MR3EZ0WP'
   whole_disk=0
   DTL=1011
   children[2]
   type='disk'
   id=2
   guid=4766543340687449042
   path='/dev/ad16'
   devid='ad:GTA000PAG7PGGA'
   whole_disk=0
   DTL=1010
   offline=1
   children[3]
   type='disk'
   id=3
   guid=16172918065436695818
   path='/dev/ad18'
   devid='ad:WD-WCAU42121120'
   whole_disk=0
   DTL=1009
   children[4]
   type='disk'
   id=4
   guid=3693181954889803829
   path='/dev/ad8'
   devid='ad:STF602MR3EYWJP'
   whole_disk=0
   DTL=1008
   children[5]
   type='disk'
   id=5
   guid=5419080715831351987
   path='/dev/ad10'
   devid='ad:STF602MR3ESPYP'
   whole_disk=0
   DTL=1007

LABEL 1

   version=6
   name='files'
   state=0
   txg=2123835
   pool_guid=3459234681059189202
   hostid=0
   hostname='freenas.local'
   top_guid=18367164273662411813
   guid=7276810192259058351
   vdev_tree
   type='raidz'
   id=0
   guid=18367164273662411813
   nparity=1
   metaslab_array=14
   metaslab_shift=32
   ashift=9
   asize=6001199677440
   children[0]
   type='disk'
   id=0
   guid=7276810192259058351
   path='/dev/ad6'
   devid='ad:STF602MR3GHBZP'
   whole_disk=0
   DTL=1012
   children[1]
   type='disk'
   id=1
   guid=5425645052930513342
   path='/dev/ad4'
   devid='ad:STF602MR3EZ0WP'
   whole_disk=0
   DTL=1011
   children[2]
   type='disk'
   id=2
   guid=4766543340687449042
   path='/dev/ad16'
   devid='ad:GTA000PAG7PGGA'
   whole_disk=0
   DTL=1010
   offline=1
   children[3]
   type='disk'
   id=3
   guid=16172918065436695818
   path='/dev/ad18'
   devid='ad:WD-WCAU42121120'
   whole_disk=0
   DTL=1009
   children[4]
   type='disk'
   id=4
   

Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
   zpool export vault
   zpool import vault

which will clear the old entries out of the zpool.cache and look for
the new devices.

More below...

Brian Leonard wrote:

I had a machine die the other day and take one of its zfs pools with it. I booted the new machine, 
with the same disks but a different SATA controller, and the rpool was mounted but another pool 
vault was not.  If I try to import it I get invalid vdev configuration.  
fmdump shows zfs.vdev.bad_label, and checking the label with zdb I find labels 2 and 3 missing.  
How can I get my pool back?  Thanks.

snv_98

zpool import
  pool: vault
id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

vault   UNAVAIL  insufficient replicas
  mirrorUNAVAIL  corrupted data
c6d1p0  ONLINE
c7d1p0  ONLINE


fmdump -eV
Jun 04 2009 07:43:47.165169453 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
class = ereport.fs.zfs.vdev.bad_label
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
vdev = 0xaa3f2fd35788620b
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
vdev_guid = 0xaa3f2fd35788620b
vdev_type = mirror
parent_guid = 0x2bb202be54c462e
parent_type = root
prev_state = 0x7
__ttl = 0x1
__tod = 0x4a27c183 0x9d8492d

Jun 04 2009 07:43:47.165169794 ereport.fs.zfs.zpool
nvlist version: 0
class = ereport.fs.zfs.zpool
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
__ttl = 0x1
__tod = 0x4a27c183 0x9d84a82


zdb -l /dev/rdsk/c6d1p0
  


It is unusual to have a vdev on a partition (c6d1p0).  It is
more common to have a vdev on a slice in the partition
(e.g. c6d1s0).  The partition and slice views of a device
may overlap, but not completely. For example,
on one of my machines:
   c0t0d0p0 is physical blocks 0-976735935
   c0t0d0s0 is physical blocks 16065-308512259

If your system has the same starting block but different sizes
for c6d1p0 and c6d1s0, then ZFS may not be able to see
the labels at the end (labels 2 and 3).

Above, I used slice 0 as an example; your system may use a
different slice.  But you can run zdb -l on all of them to find
the proper, complete slice.
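
(For example, a quick way to see which device node carries a complete
label set; the globs are only a guess at your device names, and the grep
works because each label that unpacks prints exactly one 'version=' line,
so a count of 4 means all four labels are readable:)

 for d in /dev/rdsk/c6d1* /dev/rdsk/c7d1* ; do
   printf '%s: ' $d
   zdb -l $d 2>/dev/null | grep -c 'version='
 done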
-- richard



LABEL 0

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 1

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
   

Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 Since you did not export the pool, it may be looking for the wrong
 devices.  Try this:
 zpool export vault
 zpool import vault

That was the first thing I tried, with no luck.

 Above, I used slice 0 as an example, your system may use a
 different slice.  But you can run zdb -l on all of them to find

Aha, zdb found complete label sets for the vault pool on /dev/rdsk/c6d1 and 
c7d1.  The incomplete labels were c6d1p0 and c7d1p0.  Could I just zpool 
replace c6d1p0 with c6d1 and c7d1p0 with c7d0?


Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault



That was the first thing I tried, with no luck.

  

Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find



Aha, zdb found complete label sets for the vault pool on /dev/rdsk/c6d1 and 
c7d1.  The incomplete labels were c6d1p0 and c7d1p0.  Could I just zpool replace c6d1p0 
with c6d1 and c7d1p0 with c7d0?
  


h... export the pool again.  Then try simply zpool import and
it should show the way it sees vault.  Reply with that output.
-- richard



Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 h... export the pool again.  Then try simply zpool import 
 and it should show the way it sees vault.  Reply with that output.

zpool export vault
cannot open 'vault': no such pool


zpool import
  pool: vault
id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

vault   UNAVAIL  insufficient replicas
  mirrorUNAVAIL  corrupted data
c6d1p0  ONLINE
c7d1p0  ONLINE


Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Victor Latushkin

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault


That was the first thing I tried, with no luck.


Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find


Aha, zdb found complete label sets for the vault pool on
/dev/rdsk/c6d1 and c7d1.  The incomplete labels were c6d1p0 and
c7d1p0.  Could I just zpool replace c6d1p0 with c6d1 and c7d1p0 with
c7d0?


You cannot import the pool, so you cannot do any replacements with 'zpool 
replace'.


Check the contents of /dev/dsk and /dev/rdsk to see if there are any 
missing links for the devices in question. You may want to run


devfsadm -c disk -sv
devfsadm -c disk -Csv

and see if it reports anything.

Try to move c6d1p0 and c7d1p0 out of /dev/dsk and /dev/rdsk and see if 
you can import the pool.
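
(Concretely, something along these lines; the scratch location is
arbitrary and the links can be moved back afterwards:)

 mkdir -p /var/tmp/hidden/dsk /var/tmp/hidden/rdsk
 mv /dev/dsk/c6d1p0 /dev/dsk/c7d1p0 /var/tmp/hidden/dsk/
 mv /dev/rdsk/c6d1p0 /dev/rdsk/c7d1p0 /var/tmp/hidden/rdsk/
 zpool import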


victor


Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Victor took the words right out of my fingers :-) more below...

Victor Latushkin wrote:

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault


That was the first thing I tried, with no luck.


Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find


Aha, zdb found complete label sets for the vault pool on
/dev/rdsk/c6d1 and c7d1.  The incomplete labels were c6d1p0 and
c7d1p0.  Could I just zpool replace c6d1p0 with c6d1 and c7d1p0 with
c7d0?


You cannot import pool, so you cannot do any replacements with 'zpool 
replace'.


Check contents of /dev/dsk and /dev/rdsk to see if there are some 
missing links there for devices in question. You may want to run


devfsadm -c disk -sv
devfsadm -c disk -Csv

and see if it reports anything.

Try to move c6d1p0 and c7d1p0 out of /dev/dsk and /dev/rdsk and see if 
you can import the pool.


Another way to do this is to create a new directory and symlink
only the slices (actually, /dev/* is just a directory of symlinks).
Then you can tell zpool to look only at that directory and not /dev.
Something like:

 mkdir /mytmpdev
 cd /mytmpdev
 for i in /dev/rdsk/c[67]d*s* ; do
   ln -s $i
 done
 zpool import -d /mytmpdev

This should show the proper slices for vault.
-- richard



Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 Check contents of /dev/dsk and /dev/rdsk to see if
 there are some 
 missing links there for devices in question. You may
 want to run
 
 devfsadm -c disk -sv
 devfsadm -c disk -Csv
 
 and see if it reports anything.

There were quite a few links it removed, all on c0.
 
 Try to move c6d1p0 and c7d1p0 out of /dev/dsk and
 /dev/rdsk and see if 
 you can import the pool.

That worked! It was able to import the pool on c6d1 and c7d1.  Clearly I have a
little more reading to do regarding how Solaris manages disks.  Thanks!


[zfs-discuss] invalid vdev configuration

2009-06-03 Thread Brian Leonard
I had a machine die the other day and take one of its ZFS pools with it. I 
booted the new machine, with the same disks but a different SATA controller, 
and the rpool was mounted but another pool, vault, was not.  If I try to import 
it I get 'invalid vdev configuration'.  fmdump shows zfs.vdev.bad_label, and 
checking the labels with zdb I find labels 2 and 3 missing.  How can I get my 
pool back?  Thanks.

snv_98

zpool import
  pool: vault
id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

vault   UNAVAIL  insufficient replicas
  mirrorUNAVAIL  corrupted data
c6d1p0  ONLINE
c7d1p0  ONLINE


fmdump -eV
Jun 04 2009 07:43:47.165169453 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
class = ereport.fs.zfs.vdev.bad_label
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
vdev = 0xaa3f2fd35788620b
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
vdev_guid = 0xaa3f2fd35788620b
vdev_type = mirror
parent_guid = 0x2bb202be54c462e
parent_type = root
prev_state = 0x7
__ttl = 0x1
__tod = 0x4a27c183 0x9d8492d

Jun 04 2009 07:43:47.165169794 ereport.fs.zfs.zpool
nvlist version: 0
class = ereport.fs.zfs.zpool
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
__ttl = 0x1
__tod = 0x4a27c183 0x9d84a82


zdb -l /dev/rdsk/c6d1p0

LABEL 0

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 1

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3


Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-06 Thread Joe S
On Thu, Mar 5, 2009 at 1:09 PM, Kyle Kakligian kaklig...@google.com wrote:
 On Wed, Mar 4, 2009 at 7:59 PM, Richard Elling richard.ell...@gmail.com 
 wrote:
 additional comment below...

 Kyle Kakligian wrote:

 On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:


 that link suggests that this is a problem with a dirty export:


 Yes, a loss of power should mean there was no clean export.


Hmmm. I was under the impression that a power loss wouldn't pose a
threat to my ZFS filesystems...


Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-06 Thread Kyle Kakligian
SOLVED

According to `zdb -l /dev/rdsk/vdev`, one of my drives was missing
two of its four redundant labels (#2 and #3). These two are next to
each other at the end of the device, so it makes some sense that they
could both be garbled.
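
(For the archives: the 'careful dd' fix would look roughly like the sketch
below. It is untested, the device name is a placeholder, and it assumes the
vdev spans the whole device so that the damaged labels really do sit in the
last 512 KiB. Each label is 256 KiB; labels 0 and 1 live at offsets 0 and
256 KiB, labels 2 and 3 in the last 512 KiB, and all four copies are written
with the same contents.)

 DEV=/dev/sdX                            # placeholder for the damaged member
 SZ=$(blockdev --getsize64 $DEV)         # device size in bytes (Linux)
 # save the regions being touched, plus a known-good copy of label 0
 dd if=$DEV of=/tmp/labels23.bak bs=256k skip=$(( SZ / 262144 - 2 )) count=2
 dd if=$DEV of=/tmp/label0.bak bs=256k count=1
 # write label 0 over the label 2 and label 3 slots
 dd if=/tmp/label0.bak of=$DEV bs=256k seek=$(( SZ / 262144 - 2 )) conv=notrunc
 dd if=/tmp/label0.bak of=$DEV bs=256k seek=$(( SZ / 262144 - 1 )) conv=notrunc

After that, `zdb -l` on the device should report all four labels again.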

I'm not sure why `zpool import` choked on this [typical?] error case,
but it's easy to fix with a very careful dd. I took a different and
very roundabout approach to recover my data, however, since I'm not
confident in my 'careful' skills. (after all, where's my backup?)
Instead, on a linux workstation where I am more cozy, I compiled
zfs-fuse from the source with a slight modification to ignore labels 2
and 3. fusermount worked great and I recovered my data without issue.

Thanks to everyone for all the help. I'm impressed by the amount of
community support here on the list!

On Fri, Mar 6, 2009 at 12:17 AM, Joe S js.li...@gmail.com wrote:
 On Thu, Mar 5, 2009 at 1:09 PM, Kyle Kakligian kaklig...@google.com wrote:
 On Wed, Mar 4, 2009 at 7:59 PM, Richard Elling richard.ell...@gmail.com 
 wrote:
 additional comment below...

 Kyle Kakligian wrote:

 On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:


 that link suggests that this is a problem with a dirty export:


 Yes, a loss of power should mean there was no clean export.


 Hmmm. I was under the impression that a power loss wouldn't pose a
 threat to my ZFS filesystems..



Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-05 Thread Kyle Kakligian
On Wed, Mar 4, 2009 at 7:59 PM, Richard Elling richard.ell...@gmail.com wrote:
 additional comment below...

 Kyle Kakligian wrote:

 On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:


 that link suggests that this is a problem with a dirty export:


 Yes, a loss of power should mean there was no clean export.

 On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:


 maybe try importing on system A again, doing a 'zpool export', waiting
 for completion, then moving to system B to import?


 I tried that first, but system A will have none of it. It fails with
 the cannot import 'pool0': invalid vdev configuration error just
 like system B above.

 On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
 victor.latush...@sun.com wrote:


 What OpenSolaris build are you running?


 snv_101b

 On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
 victor.latush...@sun.com wrote:


 Could you please provide output of the following commands:
 zdb -u pool0
 zdb -bcsv pool0

 Add '-e' if you are running it on your test system.


 In both cases, the output is zdb: can't open pool0: Invalid argument


 I've been meaning to do add some basic can't import the pool
 troubleshooting
 tips to the ZFS Troubleshooting Guide, but am a little bit behind right now.
 For reference, the guide is at:
 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

 If the pool cannot be imported, then trying to run zdb [options] poolname
 may (will?) not work.  A good first step to troubleshoot this to ensure that
 all of the labels can be read from each vdev.  As a reminder, there are 4
 labels per vdev.  To read them, try
   zdb -l /dev/rdsk/vdev
 where vdev is the physical device name, usually something like c0t0d0s0.
 If you cannot read all 4 labels from all of the vdevs, then you should try
 to
 solve that problem first, before moving onto further troubleshooting.
 -- richard


I will look into that doc, as it appears that my label 2 and label 3
copies are not parseable.


Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-04 Thread Kyle Kakligian
On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
 that link suggests that this is a problem with a dirty export:
Yes, a loss of power should mean there was no clean export.

On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
 maybe try importing on system A again, doing a 'zpool export', waiting
 for completion, then moving to system B to import?
I tried that first, but system A will have none of it. It fails with
the cannot import 'pool0': invalid vdev configuration error just
like system B above.

On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
victor.latush...@sun.com wrote:
 What OpenSolaris build are you running?
snv_101b

On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
victor.latush...@sun.com wrote:
 Could you please provide output of the following commands:
 zdb -u pool0
 zdb -bcsv pool0

 Add '-e' if you are running it on your test system.
In both cases, the output is zdb: can't open pool0: Invalid argument


Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-04 Thread Richard Elling

additional comment below...

Kyle Kakligian wrote:

On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
  

that link suggests that this is a problem with a dirty export:


Yes, a loss of power should mean there was no clean export.

On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
  

maybe try importing on system A again, doing a 'zpool export', waiting
for completion, then moving to system B to import?


I tried that first, but system A will have none of it. It fails with
the cannot import 'pool0': invalid vdev configuration error just
like system B above.

On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
victor.latush...@sun.com wrote:
  

What OpenSolaris build are you running?


snv_101b

On Mon, Mar 2, 2009 at 3:57 AM, Victor Latushkin
victor.latush...@sun.com wrote:
  

Could you please provide output of the following commands:
zdb -u pool0
zdb -bcsv pool0

Add '-e' if you are running it on your test system.


In both cases, the output is zdb: can't open pool0: Invalid argument
  


I've been meaning to add some basic "can't import the pool" troubleshooting
tips to the ZFS Troubleshooting Guide, but am a little bit behind right now.
For reference, the guide is at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

If the pool cannot be imported, then trying to run zdb [options] poolname
may (will?) not work.  A good first step to troubleshoot this is to ensure that
all of the labels can be read from each vdev.  As a reminder, there are 4
labels per vdev.  To read them, try
   zdb -l /dev/rdsk/vdev
where vdev is the physical device name, usually something like c0t0d0s0.
If you cannot read all 4 labels from all of the vdevs, then you should try to
solve that problem first, before moving on to further troubleshooting.
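
(Applied to this pool, that check boils down to a loop like the one below;
the device names are the ones from the zpool import listing, and the egrep
just shows which of the four labels fail to unpack on each member:)

 for d in c5d1p0 c4d0p0 c4d1p0 c6d0p0 c5d0p0 ; do
   echo "== $d"
   zdb -l /dev/rdsk/$d | egrep 'LABEL|failed to unpack'
 done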
-- richard



Re: [zfs-discuss] invalid vdev configuration after power failure

2009-03-02 Thread Blake
that link suggests that this is a problem with a dirty export:

http://www.sun.com/msg/ZFS-8000-EY

maybe try importing on system A again, doing a 'zpool export', waiting
for completion, then moving to system B to import?

On Sun, Mar 1, 2009 at 2:29 PM, Kyle Kakligian small...@gmail.com wrote:
 What does it mean for a vdev to have an invalid configuration and how
 can it be fixed or reset? As you can see, the following pool can no
 longer be imported: (Note that the last accessed by another system
 warning is because I moved these drives to my test workstation.)

 ~$ zpool import -f pool0
 cannot import 'pool0': invalid vdev configuration

 ~$ zpool import
  pool: pool0
    id: 5915552147942272438
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
 config:

        pool0       UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            c5d1p0  ONLINE
            c4d0p0  ONLINE
            c4d1p0  ONLINE
            c6d0p0  ONLINE
            c5d0p0  ONLINE
