[osol-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-09 Thread Chris Mosetick
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.

A couple of questions.  First, I have a physical host (call him bob) that was
just installed with b134 a few days ago.  I upgraded to b145 using the
instructions on the Illumos wiki yesterday.  The pool has been upgraded (to
version 27) and the ZFS file systems have been upgraded (to version 5).

ch...@bob:~# zpool upgrade rpool
This system is currently running ZFS pool version 27.
Pool 'rpool' is already formatted using the current version.

ch...@bob:~# zfs upgrade rpool
7 file systems upgraded

The file systems have been upgraded, according to "zfs get version rpool".

Looks ok to me.

However, I now get an error when I run zdb -D.  I can't remember exactly
when I turned dedup on, but I moved some data onto rpool, and "zpool list"
shows a 1.74x ratio.
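
For what it's worth, something like this should show the ratio directly (just
a sketch; dedupratio is the pool property that feeds the DEDUP column, if I
understand it right):

zpool get dedupratio rpool
zpool list rpool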

ch...@bob:~# zdb -D rpool
zdb: can't open 'rpool': No such file or directory

Also, running zdb by itself returns the expected output, but it still says my
rpool is version 22.  Is that expected?
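
For reference, the comparison I'm describing boils down to something like this
(commands only, output omitted):

# what the running system reports
zpool get version rpool
zfs get version rpool

# what zdb reports from its view of the pool
zdb | grep -i version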

I never ran zdb before the upgrade, since it was a clean install from the
b134 ISO to go straight to b145.  One thing I will mention is that the
hostname of the machine was changed too (using these instructions:
http://wiki.genunix.org/wiki/index.php/Change_hostname_HOWTO).
bob used to be eric.  I don't know if that matters, but I can't open up
"Users and Groups" from Gnome anymore ("unable to su"), so something is
still not right there.

Moving on, I have another fresh install of b134 from ISO inside a VirtualBox
virtual machine, on a totally different physical machine.  This machine is
named weston and was upgraded to b145 using the same Illumos wiki
instructions.  Its hostname has never changed.  When I run the same zdb -D
command I get the expected output.

ch...@weston:~# zdb -D rpool
DDT-sha256-zap-unique: 11 entries, size 558 on disk, 744 in core
dedup = 1.00, compress = 7.51, copies = 1.00, dedup * compress / copies = 7.51

However, after the zpool and zfs upgrades *on both machines*, zdb still says
the rpool is version 22.  Is that expected/correct?  I added a new virtual
disk to the VM weston to see what would happen if I made a new pool on the
new disk.

ch...@weston:~# zpool create test c5t1d0

Well, the new "test" pool shows version 27, but rpool is still listed at 22
by zdb.  Is this expected/correct behavior?  See the output below for the
rpool and test pool version numbers according to zdb on the host weston.


Can anyone provide any insight into what I'm seeing?  Do I need to delete my
b134 boot environments for rpool to show as version 27 in zdb?  Why does
"zdb -D rpool" give me "can't open" on the host bob?
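
In case it helps anyone reproduce this, the pool label can also be dumped
straight off the vdev; a sketch of what I mean, assuming the c5t0d0s0 device
that shows up in the zdb output below (substitute whatever disk your rpool
lives on):

zdb -l /dev/dsk/c5t0d0s0 | grep -i version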

Thank you in advance,

-Chris

ch...@weston:~# zdb
rpool:
    version: 22
    name: 'rpool'
    state: 0
    txg: 7254
    pool_guid: 17616386148370290153
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 17616386148370290153
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14826633751084073618
            path: '/dev/dsk/c5t0d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvbf6ff53d9-49330fdb/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@0,0:a'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 28
            ashift: 9
            asize: 32172408832
            is_log: 0
            create_txg: 4
test:
    version: 27
    name: 'test'
    state: 0
    txg: 26
    pool_guid: 13455895622924169480
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13455895622924169480
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 7436238939623596891
            path: '/dev/dsk/c5t1d0s0'
            devid: 'id1,s...@sata_vbox_harddiskvba371da65-169e72ea/a'
            phys_path: '/p...@0,0/pci8086,2...@d/d...@1,0:a'
            whole_disk: 1
            metaslab_array: 30
            metaslab_shift: 24
            ashift: 9
            asize: 3207856128
            is_log: 0
            create_txg: 4

Re: [osol-discuss] Anonymous NFS file permissions

2010-09-15 Thread Chris Mosetick
You need to set permissions on the file system properly.  If you just want
everyone to have access over NFS:

zfs set sharenfs=rw,anon=0 tank/export/chris

For CIFS:
zfs set sharesmb=name=chris,guestok=true tank/export/chris
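
Afterwards you can sanity-check what actually got shared; a minimal sketch,
assuming the same dataset name:

zfs get sharenfs,sharesmb tank/export/chris
share    # with no arguments, lists the active NFS shares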

You should also look at idmap for CIFS.  Read the manual first:

man idmap
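
The general form of a name-based mapping, as I read the man page, is roughly
this (the user, group, and domain names here are only placeholders):

idmap add winname:someuser@yourdomain.com unixuser:someuser
idmap add 'wingroup:Some Group@yourdomain.com' unixgroup:somegroup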

My server is joined to Active Directory using smbadm.  Here are the two
mappings that I'm currently using in my environment:

idmap add winname:administra...@mydomain.com unixuser:admin

idmap add 'wingroup:Domain Admins@mydomain.com' unixgroup:sysadmin

Those are the only two mappings I have had to set so far.  All the domain
users seem to work themselves out as long as I set permissions on the shares
with the Domain Administrator as the owner.

idmap list

will display all your current mappings.

When you mess up a whole bunch, "idmap remove -a" is nice.


On Wed, Sep 15, 2010 at 7:44 PM, valrh...@gmail.com wrote:

> I am running NexentaStor 3.0.3-1 on a fileserver, and have it set up for
> CIFS and NFS access. I have three clients, running Win7, Ubuntu 10.04 LTS
> and OSol B134 (will upgrade soon to OpenIndiana!). For the NFS machines, if
> I create a folder or file with Ubuntu (using just the standard NFS mounting
> into Linux), I can't open the folder or read its files with the OSol
> machine. And vice versa: the Ubuntu machine can't read folders created over
> NFS from the OSol machine. In both cases, the folder is identified as being
> owned by "anonymous NFS user." I can go in and manually change the folder
> permission, but that's a tad tedious when you have a large number of
> folders. How can I fix this? This is an internal server that I just want
> everyone to be able to access. I already asked on the Nexenta forum, but
> there seems to be relatively little activity, and I haven't heard back. Many
> thanks in advance!!

Re: [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145

2010-09-27 Thread Chris Mosetick
The strange behavior that I witnessed on the machine that had its hostname
renamed was never resolved or investigated further.  Luckily it was an
experiment/test machine.  As for the zpool headers not getting updated after
a zpool upgrade, I filed a bug on the Illumos bug tracker:

http://illumos.org/issues/217

Somehow my formatting on the bug entry got mangled. :)

Luckily this bug appears to affect only zdb.  The pools and file systems are
in fact upgraded after you initiate the upgrade.  FYI, I have witnessed the
same behavior when upgrading pools created on a clean OpenSolaris b134 machine
to OpenIndiana b147, zpool version 28.
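
If anyone wants to compare what their running bits support against what a pool
reports, the stock commands should be enough (a sketch, nothing specific to my
setup):

zpool upgrade -v          # versions supported by the running software
zpool get version rpool   # version the pool itself reports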

I'm under the impression that this bug would not be difficult to fix, but
since zdb does not seem to be well documented, maybe it would in fact be hard
to track down?



2010/9/27 Réfi Richárd 

> Hi,
>
> Did you solve your issue?
>
> RR

Re: [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145

2010-09-29 Thread Chris Mosetick
Hi Cindy,

I did see your first email pointing to that bug.  Apologies for not
addressing it earlier.  It is my opinion that the behavior Mike and I (or
anyone else upgrading pools right now) are seeing is an entirely new and
different bug.  The bug you point to, originally submitted in 2007, says it
manifests itself before a reboot.  You also say that exporting and importing
clears the problem.  Here, after several reboots, zdb still shows the older
pool version, which means either that this is a new bug or that the bug you
are referencing does not describe the behavior clearly and completely.

Suppose an export and import can update the pool label config on a large
storage pool; great.  But how would someone go about exporting the rpool the
operating system is running from?  As far as I know, it's impossible to
export the zpool that hosts the running operating system.  I don't think it
can be done, but I'm new, so maybe I'm missing something.

One option I have not explored that might work: booting to a live CD that has
the same or a higher pool version present, then doing "zpool import && zpool
import -f rpool && zpool export rpool", and then rebooting into the operating
system (spelled out below).  Perhaps this might be an option that "works" to
update the label config / zdb output for rpool, but I think fixing the root
problem would be much more beneficial for everyone in the long run.  Being
that zdb is a troubleshooting/debugging tool, I would think it needs to be
aware of the proper pool version in order to work properly and so admins know
what's really going on with their pools.  The bottom line here is that if zdb
is going to be part of zfs, it needs to display what is currently on disk,
including the label config.  If I were an admin thinking about trusting
hundreds of GB of data to zfs, I would want the debugger to show me what's
really on the disks.
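
Spelled out, the live CD idea I have in mind is roughly this (untested on my
side, and it assumes the default root pool name of rpool):

# booted from a live CD whose ZFS bits are at or above the pool's version
zpool import            # scan for importable pools
zpool import -f rpool   # force-import the root pool under the live environment
zpool export rpool      # export it again, which should rewrite the label config
# then reboot into the installed operating system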

Additionally, even though zpool and zfs "get version" display the true and
updated versions, I'm not convinced that the problem is in zdb itself, as the
label config is almost certainly written by the zpool and/or zfs commands.
Somewhere, something that is supposed to happen when initiating a zpool
upgrade is not happening, but since I know virtually nothing about the
internals of zfs, I do not know where.

Sincerely,

-Chris

Re: [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145

2010-09-29 Thread Chris Mosetick
Well, strangely enough, I just logged into an OpenSolaris b145 machine.  Its
rpool is not mirrored, just a single disk.  I know that zdb reported zpool
version 22 after at least the first three reboots following the rpool
upgrade, so I stopped checking.  zdb now reports version 27.  This machine
has probably been rebooted about five or six times since the pool version
upgrade.  One should not have to reboot six times!  More mystery to this pool
upgrade behavior!

-Chris