[osol-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-09 Thread Chris Mosetick
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.

A couple of questions.  First, I have a physical host (call him bob) that was
freshly installed with b134 a few days ago.  I upgraded it to b145 yesterday
using the instructions on the Illumos wiki.  The pool has been upgraded (to
version 27) and the zfs file systems have been upgraded (to version 5).

ch...@bob:~# zpool upgrade rpool
This system is currently running ZFS pool version 27.
Pool 'rpool' is already formatted using the current version.

ch...@bob:~# zfs upgrade rpool
7 file systems upgraded

The file systems show as upgraded according to "zfs get version rpool":
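For reference, that check looks something like this (output columns sketched
from memory, so the layout may differ slightly):

ch...@bob:~# zfs get version rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  version   5        -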

Looks ok to me.

However, I now get an error when I run zdb -D.  I can't remember exactly
when I turned dedup on, but I moved some data on rpool, and "zpool list"
shows a 1.74x dedup ratio.

ch...@bob:~# zdb -D rpool
zdb: can't open 'rpool': No such file or directory
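
In case it is useful, the dedup ratio can also be read directly as a pool
property, without zdb.  A sketch, assuming the standard dedupratio property
(column layout from memory):

ch...@bob:~# zpool get dedupratio rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  dedupratio  1.74x  -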

Also, running zdb by itself returns the expected output, but it still says my
rpool is version 22.  Is that expected?
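
For comparison, zpool itself should report the upgraded version; a hedged
sketch of what that looks like here:

ch...@bob:~# zpool get version rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  version   27       default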

I never ran zdb before the upgrade, since this was a clean install from the
b134 iso that went straight to b145.  One thing I will mention is that the
hostname of the machine was changed too (using these
instructions); bob used to be eric.  I don't know if that matters, but I can
no longer open "Users and Groups" from Gnome (*"unable to su"*), so something
is still not right there.

Moving on, I have another fresh install of b134 from iso inside a VirtualBox
virtual machine, on a totally different physical machine.  This machine is
named weston and was upgraded to b145 using the same Illumos wiki
instructions.  His name has never changed.  When I run the same zdb -D
command I get the expected output:

ch...@weston:~# zdb -D rpool
DDT-sha256-zap-unique: 11 entries, size 558 on disk, 744 in core
dedup = 1.00, compress = 7.51, copies = 1.00, dedup * compress / copies = 7.51

However, after the zpool and zfs upgrades *on both machines*, zdb still says
the rpool is version 22.  Is that expected/correct?  I added a new virtual
disk to the vm weston to see what would happen if I made a new pool on the
new disk.

ch...@weston:~# zpool create test c5t1d0

Well, the new "test" pool shows version 27, but rpool is still listed at 22
by zdb.  Is this expected/correct behavior?  See the output below for the
rpool and test pool version numbers according to zdb on the host weston.


Can anyone provide any insight into what I'm seeing?  Do I need to delete my
b134 boot environments for rpool to show as version 27 in zdb?  Why does
"zdb -D rpool" say "can't open" on the host bob?

Thank you in advance,

-Chris

ch...@weston:~# zdb
rpool:
    version: 22
    name: 'rpool'
    state: 0
    txg: 7254
    pool_guid: 17616386148370290153
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 17616386148370290153
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14826633751084073618
            path: '/dev/dsk/c5t0d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvbf6ff53d9-49330fdb/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@0,0:a'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 28
            ashift: 9
            asize: 32172408832
            is_log: 0
            create_txg: 4
test:
    version: 27
    name: 'test'
    state: 0
    txg: 26
    pool_guid: 13455895622924169480
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13455895622924169480
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 7436238939623596891
            path: '/dev/dsk/c5t1d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvba371da65-169e72ea/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@1,0:a'
            whole_disk: 1
            metaslab_array: 30
            metaslab_shift: 24
            ashift: 9
            asize: 3207856128
            is_log: 0
            create_txg: 4

Re: [osol-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-10 Thread Mike DeMarco
I just ran an upgrade on a pool from version 18 to version 22.

# zpool upgrade euclid
This system is currently running ZFS pool version 22.

Successfully upgraded 'euclid' from version 18 to version 22

And then looked at zdb:

# zdb
euclid:
    version: 18
    name: 'euclid'
    state: 0
    txg: 234922
    pool_guid: 4786614771504599496
    hostid: 4343974
    hostname: 'euclid-clevo'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4786614771504599496
        children[0]:
            type: 'disk'
            id: 0
            guid: 2080840568716594129
            path: '/dev/dsk/c4t0d0p2'
            devid: 'id1,s...@sata_st9320423as_5vj0tb3s/s'
            phys_path: '/p...@0,0/pci1558,8...@1f,2/d...@0,0:s'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 31
            ashift: 9
            asize: 256048103424
            is_log: 0
    rewind_txg_ts: 1283472786
    seconds_of_rewind: 0
    verify_data_errors: 0

So I would say that the upgrade is not changing the pool header information. 
This is probably a bug.

zpool get version does report the proper pool version.


Re: [osol-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-10 Thread Cindy Swearingen

Hi Mike,

I think you are seeing this bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538600

"pool version is not updated in the label config after zpool upgrade and before reboot"
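
Since the bug is about the label config, the stale number should be visible
in the on-disk label itself.  A hedged check, reusing the disk path from the
weston output above (zdb -l prints all four labels, so the line repeats):

ch...@weston:~# zdb -l /dev/dsk/c5t0d0s0 | grep version
    version: 22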


I saw strange zdb behavior after a pool upgrade on b146, and an
export/import of the pool cleared the problem.
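
For a non-root pool like the "test" pool above, that is just an
export/import cycle (the root pool can't be exported while the system is
booted from it):

ch...@weston:~# zpool export test
ch...@weston:~# zpool import test

after which zdb should show the pool at its new version.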

Thanks,

Cindy

On 09/10/10 06:56, Mike DeMarco wrote:

> I just ran an upgrade on a pool from version 18 to version 22.
> [...]
> So I would say that the upgrade is not changing the pool header
> information. This is probably a bug.
>
> zpool get version does report the proper pool version.
