On Wed, Dec 06, 2006 at 12:35:58PM -0800, Jim Hranicky wrote:
> > If those are the original path ids, and you didn't
> > move the disks on the bus, why is the is_spare flag set?
>
> Well, I'm not sure, but these drives were set as spares in another pool
> I deleted -- should I have done something to the drives (fdisk?) before
> rearranging it?
Hold fire on the re-init until one of the devs chips in, maybe I'm barking up
the wrong tree ;)
--a
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-di
> If those are the original path ids, and you didn't
> move the disks on the bus, why is the is_spare flag set?
Well, I'm not sure, but these drives were set as spares in another pool
I deleted -- should I have done something to the drives (fdisk?) before
rearranging it?
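For what it's worth, the usual way to make sure a reused disk carries no stale ZFS label is to wipe the label regions before the disk goes into a new pool. A sketch only -- the device name c1t3d0s0 is a stand-in for your own, and it assumes your ZFS build has the zpool labelclear subcommand (newer releases do; on older builds, dd over the label regions works):

```shell
# WARNING: both commands are destructive -- they erase the ZFS label
# on the named disk. c1t3d0s0 is a hypothetical device; substitute yours.

# On builds that have it, labelclear removes all four label copies:
zpool labelclear -f c1t3d0s0

# On older builds: labels 0 and 1 occupy the first 512 KB of the slice,
# labels 2 and 3 the last 512 KB. Zeroing the front half is usually
# enough for zpool to stop recognizing the old label:
dd if=/dev/zero of=/dev/rdsk/c1t3d0s0 bs=512k count=1
```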
Hi Jim,
That looks interesting, though. I'm not a ZFS expert by any means, but look at
some of the properties of the child elements of the mirror:
    version=3
    name='zmir'
    state=0
    txg=770
    pool_guid=5904723747772934703
    vdev_tree
        type='root'
        id=0
        guid=5904723747772934703
        children[0]
                type='mirror'
                id=0
Here's the output of zdb:

zmir
    version=3
    name='zmir'
    state=0
    txg=770
    pool_guid=5904723747772934703
    vdev_tree
        type='root'
        id=0
        guid=5904723747772934703
        children[0]
                type='mirror'
                id=0
                guid=1506718
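(For anyone following along, nvlists like the one above can be read back from the disks themselves; a sketch, with a hypothetical device path to adjust for your system:)

```shell
# Dump the four on-disk label copies for one side of the mirror; each
# contains the config nvlist shown above (version, pool_guid, vdev_tree,
# and per-child flags such as is_spare):
zdb -l /dev/rdsk/c1t3d0s0

# Or dump the cached configuration for the whole pool by name:
zdb -C zmir
```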
> I think the pool is busted. Even the message printed in your
> previous email is bad:
>
>   DATASET  OBJECT  RANGE
>        15       0  lvl=4294967295 blkid=0
>
> as level is way out of range.
I think this could be from dmu_objset_open_impl().
It sets object to 0 and level to -1 (= 4294967295 when the value is
printed as an unsigned 32-bit integer).