Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-10 Thread Steve Goldthorpe
There's definitely something strange going on, as these are the only uberblocks 
I can find by scanning /dev/dsk/c0t0d0s7 - nothing to conflict with my theory 
so far:

TXG: 106052 TIME: 2009-01-04:11:06:12 BLK: 0e29000 (14848000) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 106052 TIME: 2009-01-04:11:06:12 BLK: 0e69000 (15110144) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 106053 TIME: 2009-01-04:11:06:42 BLK: 0e29400 (14849024) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 106053 TIME: 2009-01-04:11:06:42 BLK: 0e69400 (15111168) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
skipped 248 blocks...
TXG: 114710 TIME: 2009-01-07:11:14:10 BLK: 0e1d800 (14800896) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 114710 TIME: 2009-01-07:11:14:10 BLK: 0e5d800 (15063040) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 114715 TIME: 2009-01-07:11:15:41 BLK: 0e1ec00 (14806016) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)
TXG: 114715 TIME: 2009-01-07:11:15:41 BLK: 0e5ec00 (15068160) VER: 10 GUID_SUM: 
9f8d9ef301489223 (11497020190282519075)

TXG: 1830158 TIME: 2008-11-20:08:46:41 BLK: 0023800 (145408) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1830158 TIME: 2008-11-20:08:46:41 BLK: 0063800 (407552) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1830382 TIME: 2008-11-20:09:05:20 BLK: 003b800 (243712) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1830382 TIME: 2008-11-20:09:05:20 BLK: 007b800 (505856) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
skipped 248 blocks...
TXG: 1832026 TIME: 2008-11-20:11:22:18 BLK: 0036800 (223232) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1832026 TIME: 2008-11-20:11:22:18 BLK: 0076800 (485376) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1832027 TIME: 2008-11-20:11:22:19 BLK: 0036c00 (224256) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
TXG: 1832027 TIME: 2008-11-20:11:22:19 BLK: 0076c00 (486400) VER: 4 GUID_SUM: 
9ab0d28ccc7d2e94 (11146640579909987988)
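
Incidentally, assuming GUID_SUM is just the 64-bit sum of the pool GUID and the
vdev GUID, the version-4 value above is exactly what the labels below imply,
while the version-10 value isn't.  A quick check in Python (label values taken
from the zdb -l output below; the interpretation of GUID_SUM is my assumption,
not something zdb prints):

pool_guid = 17419375665629462002          # pool_guid from the label
vdev_guid = 12174008987990077602          # guid / top_guid from the label
guid_sum = (pool_guid + vdev_guid) % 2**64
print("%d  %x" % (guid_sum, guid_sum))    # 11146640579909987988  9ab0d28ccc7d2e94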

# zdb -l /dev/dsk/c0t0d0s7

LABEL 0

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 1

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 2


LABEL 3


-Steve

 After having a think I've come up with the following
 hypothesis:
 
 1) When I was on Solaris 10u4 things were working
 fine.
 2) When I re-installed with Solaris 10u6 and imported
 the zpool (with zpool import -f), it created a
 zpool.cache file and didn't update the on disk data
 structures for some reason.
 3) When I re-installed Solaris 10u6, I lost the
 zpool.cache file and now zfs looks at the data
 structures on the disk and they are inconsistent.
 
 Could the above have actually happened?  It would
 explain what I'm seeing.
 
 -Steve
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-10 Thread JZ
Hi Gold,
9987988 sounds factual to me...

IMHO,
z

- Original Message - 
From: Steve Goldthorpe s...@waistcoat.org.uk
To: zfs-discuss@opensolaris.org
Sent: Saturday, January 10, 2009 6:59 PM
Subject: Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6





Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-09 Thread Steve Goldthorpe
After having a think I've come up with the following hypothesis:

1) When I was on Solaris 10u4 things were working fine.
2) When I re-installed with Solaris 10u6 and imported the zpool (with zpool 
import -f), it created a zpool.cache file and didn't update the on disk data 
structures for some reason.
3) When I re-installed Solaris 10u6, I lost the zpool.cache file and now zfs 
looks at the data structures on the disk and they are inconsistent.

Could the above have actually happened?  It would explain what I'm seeing.

-Steve
-- 
This message posted from opensolaris.org


[zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-08 Thread Steve Goldthorpe
Here's what I did:
* had a T1000 with a zpool under /dev/dsk/c0t0d0s7 on Solaris 10u4
* re-installed with Solaris 10u6 (disk layout unchanged)
* imported the zpool with zpool import -f (I'm forever forgetting to export them 
first) - this was OK
* re-installed with Solaris 10u6 and more up-to-date patches (again forgetting 
to export it)

When I do zpool import I get the following:
# zpool import 
  pool: zpool
id: 17419375665629462002
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

zpool   FAULTED  corrupted data
  c0t0d0s7  ONLINE

So I thought I'd done something wrong; however, I checked the partition layout and 
it hasn't changed.  After doing a bit of poking about, I've found some weird 
stuff: what zdb -l is showing and what's actually on the disk don't seem to 
tally.  I can't find the transaction ID reported by zdb anywhere, and there 
seems to be a mixture of version 4 and version 10 uberblocks on disk (all with 
bigger transaction IDs than zdb is showing).

Am I missing something?

-Steve

# zdb -l /dev/dsk/c0t0d0s7

LABEL 0

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 1

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 2


LABEL 3


-- (sample output from a little script I knocked up)

Uberblock Offset: 0020000 (131072)
Uber version: 4
Transaction group: 1831936
Timestamp: 2008-11-20:11:14:49
GUID_SUM: 9ab0d28ccc7d2e94

Uberblock Offset: 0020400 (132096)
Uber version: 4
Transaction group: 1831937
Timestamp: 2008-11-20:11:14:54
GUID_SUM: 9ab0d28ccc7d2e94
...
Uber version: 10
Transaction group: 114560
Timestamp: 2009-01-07:09:59:11
GUID_SUM: 9f8d9ef301489223

Uberblock Offset: 0e18400 (14779392)
Uber version: 10
Transaction group: 114561
Timestamp: 2009-01-07:09:59:41
GUID_SUM: 9f8d9ef301489223
...
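
For reference, a rough sketch of that kind of scanner in Python (not the actual 
script used above, and assuming the usual uberblock layout: magic 0x00bab10c, 
then version, txg, guid_sum and timestamp as 64-bit words, one 1 KiB slot per 
uberblock):

import struct, sys, time

UB_MAGIC = 0x00bab10c            # uberblock magic number
STEP = 1024                      # uberblock slots are 1 KiB apart (ashift=9)

def scan(path):
    dev = open(path, 'rb')
    offset = 0
    while True:
        buf = dev.read(STEP)
        if len(buf) < 40:
            break
        for endian in ('<', '>'):                 # try both byte orders
            magic, ver, txg, gsum, ts = struct.unpack(endian + '5Q', buf[:40])
            if magic == UB_MAGIC:
                stamp = time.strftime('%Y-%m-%d:%H:%M:%S', time.localtime(ts))
                print("TXG: %d TIME: %s BLK: %07x (%d) VER: %d GUID_SUM: %x"
                      % (txg, stamp, offset, offset, ver, gsum))
                break
        offset += STEP
    dev.close()

scan(sys.argv[1])                # e.g. /dev/dsk/c0t0d0s7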
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss