Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Robert Thurlow

Harry Putnam wrote:

Robert Thurlow  writes:



This makes a lot more sense.  NFSv4 should have worked for
you if you had the client and server both set to the same
NFSv4 domain - if you care to work on this, we can.



Thanks for the offer.  Is there something NFSv4 offers that would make
it worth doing?


Probably the most tangible difference is the fact that a
modern NFSv4 client will permit you to see child mounts,
which we have sometimes called mirror mounts.  With V3,
the only way you can properly handle nested shares on a
server is to list them all in an automounter map; with
V4, they just show up properly.  Linux clients and newer
Nevada clients (post snv_77) will do this.

And when I mention NFSv4 domain, it is not necessarily
related to a DNS or NIS domain.  They can be different,
and sometimes just are different.  This is about how a
client or server converts a UID to a string like
"thur...@sun.com" as V4 needs it, and how the other end
converts it back to a numeric UID.  If the domains
don't match, you see "nobody".  I don't recall how to
set or see the NFSv4 domain on Linux, but on Solaris you
can see it with "cat /var/run/nfs4_domain" and set it in
/etc/default/nfs.
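For reference, the two settings that have to agree look roughly like this (the domain name is illustrative; on Linux the usual place is /etc/idmapd.conf on most distributions):

```
# Solaris / OpenSolaris server: /etc/default/nfs
NFSMAPID_DOMAIN=local.lan

# Linux client: /etc/idmapd.conf
[General]
Domain = local.lan
```

After changing either side, restarting the mapping daemon (svcadm restart svc:/network/nfs/mapid on Solaris; rpc.idmapd on Linux) makes the change take effect.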

Rob T
___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Harry Putnam
Matthias Pfützner writes:

> Hey, Harry, no problem! Sometimes we all can't see the forest for the
> trees... We all assumed, it must have been something like that.
>
> Glad it worked out finally!

You put quite a bit into it; thanks for your patience and time.


Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Harry Putnam
Robert Thurlow  writes:

> Harry Putnam wrote:
>
>> Man, I'm really sorry to the list for all my huffing and puffing when
>> I'm pretty sure I had been claiming I had the right settings in
>> /etc/default/nfs (but didn't).
>
> This makes a lot more sense.  NFSv4 should have worked for
> you if you had the client and server both set to the same
> NFSv4 domain - if you care to work on this, we can.
>
> Rob T

Thanks for the offer.  Is there something NFSv4 offers that would make
it worth doing?

This is a home lan so only light usage.

My domain is a fake homeboy domain that will not resolve on any
nameserver (I don't run a local DNS server).

  `HOST.local.lan'
   192.168.0.NNN

However, it's there in /etc/hosts, so for internal work the resolver
looks there first, and it seems to fly.
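For anyone reproducing this, the corresponding /etc/hosts entries would look something like the following (names and addresses here are illustrative, patterned on the ones above):

```
# /etc/hosts on both server and client
192.168.0.29   zfs.local.lan      zfs
192.168.0.2    client.local.lan   client
```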



Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Robert Thurlow

Harry Putnam wrote:


Man, I'm really sorry to the list for all my huffing and puffing when
I'm pretty sure I had been claiming I had the right settings in
/etc/default/nfs (but didn't).


This makes a lot more sense.  NFSv4 should have worked for
you if you had the client and server both set to the same
NFSv4 domain - if you care to work on this, we can.

Rob T


Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Matthias Pfützner
Hey, Harry, no problem! Sometimes we all can't see the forest for the
trees... We all assumed, it must have been something like that.

Glad it worked out finally!

 Matthias

You (Harry Putnam) wrote:
> Harry Putnam  writes:
> 
> > That info from your linux client `mount' cmd may tell what's wrong
> > here.  Mine has one extra item in there: 
> >
> >   OSOL_SERVER:/pub on /pub type
> >   nfs (rw,users,addr=192.168.0.29,vers=4,clientaddr=192.168.0.2)
> >
> > Notice the `vers=4': apparently my linux client is taking a share
> > that is being offered as NFS vers=3, but trying to mount it as vers=4.
> >
> > Matthias P. asked me that very question, and I wasn't sure how to find
> > out.  I think this may at least show it's happening.
> 
> Haaa, I may have solved my problem.  Turns out to be some very sloppy
> work on my part.
> 
> All this time I've been setting:
> 
> /etc/default/nfs:
>   NFS_CLIENT_VERSMAX=3
> 
> When I guess I should have been setting:
> 
>   NFS_SERVER_VERSMAX=3
> (and that was commented out)
> 
> I think I may have gotten a bit confused there. Now I'm telling the
> server to offer only version 3.  And guess what?  The client users on
> the Linux host can now read/write with their own uid:gid.
> 
> Man, I'm really sorry to the list for all my huffing and puffing when
> I'm pretty sure I had been claiming I had the right settings in
> /etc/default/nfs (but didn't).
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER  | And no matter what hard-
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | ware you have, it's really
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  | hard to learn to play 
Germany  | http://www.pfuetzner.de/matthias/ | piano. R. Needleman, Byte


Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Harry Putnam
Harry Putnam  writes:

> That info from your linux client `mount' cmd may tell what's wrong
> here.  Mine has one extra item in there: 
>
>   OSOL_SERVER:/pub on /pub type
>   nfs (rw,users,addr=192.168.0.29,vers=4,clientaddr=192.168.0.2)
>
> Notice the `vers=4': apparently my linux client is taking a share
> that is being offered as NFS vers=3, but trying to mount it as vers=4.
>
> Matthias P. asked me that very question, and I wasn't sure how to find
> out.  I think this may at least show it's happening.

Haaa, I may have solved my problem.  Turns out to be some very sloppy
work on my part.

All this time I've been setting:

/etc/default/nfs:
  NFS_CLIENT_VERSMAX=3

When I guess I should have been setting:

  NFS_SERVER_VERSMAX=3
(and that was commented out)

I think I may have gotten a bit confused there. Now I'm telling the
server to offer only version 3.  And guess what?  The client users on
the Linux host can now read/write with their own uid:gid.
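So the whole fix, as I understand it, boils down to this on the OpenSolaris server (a sketch; the svcadm restart is the usual way to make nfsd re-read the file):

```
# /etc/default/nfs on the server -- cap what the SERVER offers:
NFS_SERVER_VERSMAX=3

# then restart the NFS server service so it re-reads the file:
# svcadm restart svc:/network/nfs/server
```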

Man, I'm really sorry to the list for all my huffing and puffing when
I'm pretty sure I had been claiming I had the right settings in
/etc/default/nfs (but didn't).



Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Harry Putnam
Chris  writes:

> Hi Harry, I get files created with UID & GID set by client.  See
> below (some names have been altered to protect the innocent, any
> inconsistencies are due to that editing)

[...]

Thanks... nothing like some actual data to see how it ends up.

So something is definitely going amiss on my setup.

It wasn't clear how your osol nfs share is set up on the server.

Is it just done with `zfs set' like:
 `zfs set sharenfs=on pool/path'

Or something more complex?

> from mount list: 192.168.0.110:/darkstar/nebulae on
> /home/chris/osolnfsmount type nfs (rw,nolock,addr=192.168.0.110)

That info from your linux client `mount' cmd may tell what's wrong
here.  Mine has one extra item in there: 

  OSOL_SERVER:/pub on /pub type
  nfs (rw,users,addr=192.168.0.29,vers=4,clientaddr=192.168.0.2)

Notice the `vers=4': apparently my linux client is taking a share
that is being offered as NFS vers=3, but trying to mount it as vers=4.

Matthias P. asked me that very question, and I wasn't sure how to find
out.  I think this may at least show it's happening.

I'm curious what you're doing here:

 chris_rem...@plato-gent /home/chris $ chmod g-s osolnfsmount/test

Is that turning off set-gid?

Oh, and one more thing to pester you with... can I see the details of
how the client is mounting the nfs share?

I know you've shown the output of `mount', but is there a notation
somewhere that does the job?

And your client is OSX then?... not linux?



Re: [osol-discuss] about zfs exported on nfs

2010-03-15 Thread Chris
Hi Harry, I get files created with the UID & GID set by the client.  See below
(some names have been altered to protect the innocent; any inconsistencies are
due to that editing)

from mount list:
192.168.0.110:/darkstar/nebulae on /home/chris/osolnfsmount type nfs 
(rw,nolock,addr=192.168.0.110)

ch...@plato-gent ~ $ ll osolnfsmount
total 21
drwxr-sr-x 2 root dialout  2 Jan 13 13:41 archive
drwxr-sr-x 3 chris_remote dialout 14 Feb 26 12:55 downloads
drwxr-sr-x 2 root dialout  2 Jan  6 14:31 projects
ch...@plato-gent ~ $ touch osolnfsmount/test.file
ch...@plato-gent ~ $ ll osolnfsmount
total 21
drwxr-sr-x 2 root dialout  2 Jan 13 13:41 archive
drwxr-sr-x 3 chris_remote dialout 14 Feb 26 12:55 downloads
drwxr-sr-x 2 root dialout  2 Jan  6 14:31 projects
-rw-r--r-- 1 chris        dialout  0 Mar 15  2010 test.file

ch...@plato-gent ~ $ sudo su chris_remote
Password: 

chris_rem...@plato-gent /home/chris $ touch osolnfsmount/test.2.file
chris_rem...@plato-gent /home/chris $ ls -l osolnfsmount
total 22
drwxr-sr-x 2 root dialout  2 Jan 13 13:41 archive
drwxr-sr-x 3 chris_remote dialout 14 Feb 26 12:55 downloads
drwxr-sr-x 2 root dialout  2 Jan  6 14:31 projects
-rw-r--r-- 1 chris_remote dialout  0 Mar 15  2010 test.2.file
-rw-r--r-- 1 chris        dialout  0 Mar 15  2010 test.file

chris_rem...@plato-gent /home/chris $ umask 002
chris_rem...@plato-gent /home/chris $ mkdir osolnfsmount/test
chris_rem...@plato-gent /home/chris $ chmod g-s osolnfsmount/test
chris_rem...@plato-gent /home/chris $ touch osolnfsmount/test/test.file
chris_rem...@plato-gent /home/chris $ ls -l osolnfsmount/test
total 1
-rw-rw-r-- 1 chris_remote chris_remote 0 Mar 15  2010 test.file  

dialout group is GID 20, which happens to be "staff" on OSX.
-- 
This message posted from opensolaris.org


Re: [osol-discuss] about zfs exported on nfs

2010-03-14 Thread Matthias Pfützner
You (Harry Putnam) wrote:
> Matthias Pfützner writes:
> > So, if user "willi" on the Linux client has UID 4711, then on the SERVER you
> > should see "4711" as the owner of a file that had been created by "willi" on
> > a filesystem that had been mounted from the server. The server does not need
> > to know that 4711 is "willi"; it will still create the file with the ID
> > 4711. That's the basic rule... as long as the UID is NOT 0 or root...
> 
> Do you have any linux clients to osol nfs server?

No, sadly not...

> If not Matthias, then is there anyone else here who has an osol NFS
> server with a linux client, where you can show a simple `ls -l' (on
> the client) in a directory created on the mounted nfs share, by a
> linux user.

Matthias
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER  | Die interessantesten
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | Interaktionen finden im 
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  | Kopf statt, nicht in der 
Germany  | http://www.pfuetzner.de/matthias/ | Maus.  (Brian Eno)


Re: [osol-discuss] about zfs exported on nfs

2010-03-14 Thread Matthias Pfützner
Edward,

You (Edward Ned Harvey) wrote:
> > Do you have any linux clients to osol nfs server?
> > 
> > If not Matthias, then is there anyone else here who has an osol NFS
> > server with a linux client, where you can show a simple `ls -l' (on
> > the client) in a directory created on the mounted nfs share, by a
> > linux user.
> 
> I still think my original answer is the answer for you.  The only difference 
> between your system and mine is that my NFS server is Solaris, and yours is 
> OpenSolaris.  Since you said you were confused by my post, let me try to 
> break it down:
> 
> On my solaris NFS server, I don't want to export my filesystem to every 
> single IP address on the LAN.  I want only specific clients to be able to 
> access the server.  In the example below, I'm allowing "someclient" to access 
> the server.  These clients must have matching forward and reverse DNS.  You 
> could probably set up for exporting NFS to a subnet if you wanted to, but 
> you'd have to read the "man share" command to figure that out.  I have 
> exported my NFS filesystem using the following command:
> share -F nfs -o 
> rw=someclient.domain.com,root=someclient.domain.com,anon=4294967294 /export

I'm not sure the "anon=???" option is needed... That's a bizarre UID to use,
and it might be specific to your setup... And the "rw" option is only needed,
as you state, to LIMIT write access to that specific host...

So, the "standard"

zfs set sharenfs=ro...@192.168.2 rpool/export

should really be sufficient...

> To make the "share" command persistent across reboots, just stick it into 
> /etc/dfs/dfstab after you've run it manually once.

Not needed with ZFS, as that is a property of the ZFS filesystem and is
persistent, not only across reboots, but even across "transports" to
different servers, if you unplug the disks and plug them into a different
server. Beauty of ZFS!
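A quick way to see that for yourself (command sketch; the dataset name rpool/export is illustrative):

```shell
# set once; stored as a dataset property, not in /etc/dfs/dfstab
zfs set sharenfs=on rpool/export

# verify -- the property survives reboots and even a
# zpool export / zpool import on a different server
zfs get sharenfs rpool/export
share              # the active share list should now include it
```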

> If you like, instead of using "share" and "dfstab" as described above, you 
> can set the sharenfs property on your ZFS filesystem.  It has the same 
> effect, according to "man zfs"
> 
> If you have a RHEL/Centos 5 client, you have autofs5 installed by default.  
> You can configure automount to automatically mount the NFS server upon 
> attempted access.  Here's how:
> 
> Edit /etc/auto.master and stick this line in:
> /-  /etc/auto.direct --timeout=1200
> 
> Also edit /etc/auto.direct and stick this line in:
> /path/to/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix 
> server.domain.com:/export
> 
> If you have a RHEL/Centos 4 client, you have autofs4 installed by default, 
> and autofs4 doesn't support "direct" automount as described above.  "Direct" 
> automount is much better than what they could do in autofs4, so it's worth 
> while to remove autofs4 and install autofs5 instead.  Then, you can use the 
> same "auto.master" and "auto.direct" as described above.

Sadly, as mentioned, I don't have a Linux client to test, so, Harry, sorry...

Still, the above commands for the Linux side look plausible to me...

   Matthias
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER  | Die interessantesten
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | Interaktionen finden im 
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  | Kopf statt, nicht in der 
Germany  | http://www.pfuetzner.de/matthias/ | Maus.  (Brian Eno)


Re: [osol-discuss] about zfs exported on nfs

2010-03-14 Thread Casper . Dik


>I've had bad experience setting NFS mounts in /etc/fstab.  The problem is:
>If the filesystem can't mount for any reason, then the machine doesn't come
>up.  Unless you set it as a "soft" mount, in which case, the slightest
>little network glitch causes clients to lose their minds.

There is also a "bg" mount option: the mount will continue in the 
background when it fails; however, if an NFS mount is always needed, I 
suggest creating it in /etc/auto_direct.  The mount isn't performed at 
boot then, but is delayed until the mountpoint is accessed.
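On Solaris that would look something like this (paths are illustrative):

```
# /etc/auto_master -- add a direct-map line if it isn't there:
/-      auto_direct

# /etc/auto_direct -- mounted on first access rather than at boot:
/mnt/pub   -rw,hard,intr   server:/export/pub
```

A "svcadm restart svc:/system/filesystem/autofs" afterwards picks up the new map.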

>What I wrote in the previous email, about using automount and hard
>interruptable NFS mounts was very well thought out and based on years of
>commercial deployment of NFS systems.  Like I said, it's rock solid if
>configured as I described.  It's resilient against network failure during
>boot, or during operation, yet it's force-interruptable by root if
>necessary, which is extremely rare.

I agree.  automount and not /etc/vfstab.

Casper



Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Harry Putnam
Edward Ned Harvey writes:
>> Do you have any linux clients to osol nfs server?
>> 
>> If not Matthias, then is there anyone else here who has an osol NFS
>> server with a linux client, where you can show a simple `ls -l' (on
>> the client) in a directory created on the mounted nfs share, by a
>> linux user.
>
> I still think my original answer is the answer for you.  The only
> difference between your system and mine is that my NFS server is
> Solaris, and yours is OpenSolaris.  Since you said you were confused
> by my post, let me try to break it down:

And yet you were unwilling to show the output of `ls -l' from the linux
client on a mounted NFS directory, as I requested, preferring instead to
go into lots of detail about automounting.

It's not the mounting that is a problem for me, and in the light-usage
scenario of a single-user machine, automounting is not a high
requirement... it just isn't an issue.

It's what I see with unix permissions on the created files.

When your linux users create files on a mounted nfs share, do those
files have that users uid gid, or something else?

Can you show an `ls -l' (from the linux client) on a mounted NFS share?



Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Edward Ned Harvey
> Do you have any linux clients to osol nfs server?
> 
> If not Matthias, then is there anyone else here who has an osol NFS
> server with a linux client, where you can show a simple `ls -l' (on
> the client) in a directory created on the mounted nfs share, by a
> linux user.

I still think my original answer is the answer for you.  The only difference 
between your system and mine is that my NFS server is Solaris, and yours is 
OpenSolaris.  Since you said you were confused by my post, let me try to break 
it down:

On my solaris NFS server, I don't want to export my filesystem to every single 
IP address on the LAN.  I want only specific clients to be able to access the 
server.  In the example below, I'm allowing "someclient" to access the server.  
These clients must have matching forward and reverse DNS.  You could probably 
set up for exporting NFS to a subnet if you wanted to, but you'd have to read 
the "man share" command to figure that out.  I have exported my NFS filesystem 
using the following command:
share -F nfs -o 
rw=someclient.domain.com,root=someclient.domain.com,anon=4294967294 /export

To make the "share" command persistent across reboots, just stick it into 
/etc/dfs/dfstab after you've run it manually once.

If you like, instead of using "share" and "dfstab" as described above, you can 
set the sharenfs property on your ZFS filesystem.  It has the same effect, 
according to "man zfs"

If you have a RHEL/Centos 5 client, you have autofs5 installed by default.  You 
can configure automount to automatically mount the NFS server upon attempted 
access.  Here's how:

Edit /etc/auto.master and stick this line in:
/-  /etc/auto.direct --timeout=1200

Also edit /etc/auto.direct and stick this line in:
/path/to/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix 
server.domain.com:/export

If you have a RHEL/Centos 4 client, you have autofs4 installed by default, and 
autofs4 doesn't support "direct" automount as described above.  "Direct" 
automount is much better than what they could do in autofs4, so it's worth 
while to remove autofs4 and install autofs5 instead.  Then, you can use the 
same "auto.master" and "auto.direct" as described above.
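Once the maps are edited, reloading autofs and touching the path is enough to test it (a sketch, using the illustrative mountpoint from above):

```shell
# RHEL/CentOS 5
service autofs reload
ls /path/to/mountpoint     # first access triggers the automount
mount | grep nfs           # the mount should now appear with its options
```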



Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Edward Ned Harvey
> The Linux host needs to be able to MOUNT the NFS-exported files.
> 
> The /etc/auto.master file is using a later "extension" to the NFS
> system, named "automount". This only mounts directories when they are
> accessed, hence "auto-mount".
> 
> You could also add the to-be-mounted directories into /etc/fstab, so
> that they are mounted ALWAYS.

I've had bad experience setting NFS mounts in /etc/fstab.  The problem is:
If the filesystem can't mount for any reason, then the machine doesn't come
up.  Unless you set it as a "soft" mount, in which case, the slightest
little network glitch causes clients to lose their minds.

What I wrote in the previous email, about using automount and hard
interruptable NFS mounts was very well thought out and based on years of
commercial deployment of NFS systems.  Like I said, it's rock solid if
configured as I described.  It's resilient against network failure during
boot, or during operation, yet it's force-interruptable by root if
necessary, which is extremely rare.



Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Harry Putnam
Matthias Pfützner writes:
> So, if user "willi" on the Linux client has UID 4711, then on the SERVER you
> should see "4711" as the owner of a file that had been created by "willi" on
> a filesystem that had been mounted from the server. The server does not need
> to know that 4711 is "willi"; it will still create the file with the ID
> 4711. That's the basic rule... as long as the UID is NOT 0 or root...

Do you have any linux clients to osol nfs server?

If not Matthias, then is there anyone else here who has an osol NFS
server with a linux client, where you can show a simple `ls -l' (on
the client) in a directory created on the mounted nfs share, by a
linux user.



Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Matthias Pfützner
You (Harry Putnam) wrote:
> Matthias Pfützner writes:
> 
> > Harry,
> >
> > all very bizarre... ;-)
> 
> Probably due to ill-informed bumbling on my part.
> 
> [...]
> 
> >> > What's the output of:
> >> >
> >> > ls -ld  /export/home/reader
> >> >
> >> > Does that resolve and list the user-name and group-name?
> >> 
> >> yes, well no.  
> >> 
> >> I've found what may explain some of this... it appears somewhere in
> >> the last few updates on client... my carefully hand edited gid has
> >> disappeared.
> >
> > Strange! I don't know of any automated tool, that would edit either
> > /etc/passwd or /etc/group...
> 
> I should have said `reinstall' instead of `upgrade' (on client). There
> was a full `from scratch' reinstall, so basic stuff was reset to
> default and haven't gotten it all back together yet.  Still finding
> things I had setup but forgotten about.
> 
> Somewhere on these opensolaris lists (months ago), I was told that I
> needed a user with the same uid:gid on both server and client for NFS to
> work well, but I'm not seeing that anywhere in share_nfs.

You need the SAME entries on client and server in "/etc/passwd" and
"/etc/group" so that it works "correctly" (meaning, the way you want
it). Still, EVEN without the entries on the server, you should at least
see the correct UID and GID when doing an ls -al on the SERVER. No need
to have the "strings" in /etc/passwd or /etc/group; the NFS server does
not need to KNOW about those...

share_nfs does not need to and will not show these user ids...

So, if user "willi" on the Linux client has UID 4711, then on the SERVER you
should see "4711" as the owner of a file that had been created by "willi" on
a filesystem that had been mounted from the server. The server does not need
to know that 4711 is "willi"; it will still create the file with the ID
4711. That's the basic rule... as long as the UID is NOT 0 or root...
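The rule is easy to check directly. A sketch, with the user name, UID, and paths purely illustrative:

```shell
# on the Linux client, as user willi (UID 4711):
touch /mnt/share/testfile

# on the server, which has no "willi" entry in /etc/passwd:
ls -ln /export/share/testfile   # owner column shows 4711
ls -l  /export/share/testfile   # no name to resolve, so the numeric
                                # UID is printed instead
```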

> [...]
> 
> > Back to NFS v4 vs. v3. I've no idea how Linux defines the
> > version; you mentioned it's been done in a config file, and that
> > you set that to be v3.
> 
> Thanks again for your patience.

You're welcome!

> I'm not sure either (about nfs on linux)  but am checking into that
> now.

Matthias
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER  | Wenn die Materie nichts
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | ist, dann sind wir
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  | Materialisten.
Germany  | http://www.pfuetzner.de/matthias/ | Pier Paolo Pasolini


Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Harry Putnam
Matthias Pfützner writes:

> Harry,
>
> all very bizarre... ;-)

Probably due to ill-informed bumbling on my part.

[...]

>> > What's the output of:
>> >
>> > ls -ld  /export/home/reader
>> >
>> > Does that resolve and list the user-name and group-name?
>> 
>> yes, well no.  
>> 
>> I've found what may explain some of this... it appears somewhere in
>> the last few updates on client... my carefully hand edited gid has
>> disappeared.
>
> Strange! I don't know of any automated tool, that would edit either
> /etc/passwd or /etc/group...

I should have said `reinstall' instead of `upgrade' (on client). There
was a full `from scratch' reinstall, so basic stuff was reset to
default and haven't gotten it all back together yet.  Still finding
things I had setup but forgotten about.

Somewhere on these opensolaris lists (months ago), I was told that I
needed a user with the same uid:gid on both server and client for NFS to
work well, but I'm not seeing that anywhere in share_nfs.

[...]

> Back to NVS v4 vs. v3. I've no idea, on how Linux does define the
> version, you mentioned, it's been done in a config file, and that
> you did set that to be v3.

Thanks again for your patience.

I'm not sure either (about nfs on linux)  but am checking into that
now.


Re: [osol-discuss] about zfs exported on nfs

2010-03-13 Thread Matthias Pfützner
Harry,

all very bizarre... ;-)

You (Harry Putnam) wrote:
> Matthias Pfützner writes:
> 
> > Harry,
> >
> > sorry for my first answer, now that you rephrased some of the
> > original post, I now remember what your initial problem really
> > was... More inline below...
> 
> It may not have been or still may not be much of a clear exposition on
> my part either...
> 
> [...]
> 
> > OK, so that ROOT can change things, you need to add options to the
> > EXPORT side (aka, the ZFS server)
> 
> [...]
> 
> >  zfs set sharenfs=ro...@192.168.2 pfuetz/smb-share
> 
> Thanks for that.

You're welcome!

> > So, with that knowledge, we need to consult "man share_nfs":
> >
> > The things interesting to you might be the following options:
> >
> >  anon=uidSet uid to be the effective user  ID
> >  of   unknown   users.   By  default,
> >  unknown users are given  the  effec-
> >  tive  user  ID UID_NOBODY. If uid is
> >  set to -1, access is denied.
> >
> >  root=access_listOnly  root  users  from  the   hosts
> >  specified  in  access_list have root
> >  access. See  access_list  below.  By
> >  default, no host has root access, so
> >  root  users   are   mapped   to   an
> >  anonymous  user ID (see the anon=uid
> >  option described  above).  Netgroups
> >  can  be  used  if  the  file  system
> >  shared is using UNIX  authentication
> >  ( AUTH_SYS).
> 
> Not really.  After more careful reading of share_nfs, mainly as an aid
> to working with your posts and info, I'm thinking the defaults should
> do just about all I need.

Agreed! Except, possibly, if needed, the "root" option might be handy...

> I brought up root's inability on the client to change uid:gid only
> because the defaults appear not to be working.  For example:
> 
>  nosuid
> 
>=>  By default, clients are allowed to create  files  on
>=>  the  shared  file  system  with the setuid or setgid
>=>  mode enabled. Specifying nosuid  causes  the  server
>file system to silently ignore any attempt to enable
>the setuid or setgid mode bits.
> 
> With emphasis on the arrowed lines (not on `nosuid').

Right!

> It's very possible I'm reading that wrong, but it appears to be saying
> the client machine users will be able to write files with their own
> uid:gid.

Right!

> That does not occur here.  All files are written nobody:nobody

Strange!

> [...]
> 
> >> And again, the USER:GID exists on both server and client with
> >> the same numeric uid:gid:
> >> 
> >>  osol (server) host:
> >> 
> >>  uname -a
> >>   SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris
> >> 
> >>   root # ls -lnd /export/home/reader
> >>   drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader
> 
> On closer checking I see the above info is erroneous.

Even stranger... ;-)

> > Which "ls" command? /usr/gnu/bin/ls, or /usr/bin/ls? Still: Here we see, 
> > that
> > the file had been created by a user with User-ID: 1000 and Group-ID: 10.
> 
> A default osol install leaves gnu tools first in the path so it was
> gnu `ls' I posted.

Right! I only asked because, if you were also using "sharesmb", files
created via Windows will be nobody:nobody regardless, all "access info"
is captured in ACLs, and gnu ls simply does not show these ACLs... You
need the Solaris native "ls" to see them...

> > What's the output of:
> >
> > ls -ld  /export/home/reader
> >
> > Does that resolve and list the user-name and group-name?
> 
> yes, well no.  
> 
> I've found what may explain some of this... it appears somewhere in
> the last few updates on client... my carefully hand edited gid has
> disappeared.

Strange! I don't know of any automated tool, that would edit either
/etc/passwd or /etc/group...

> There was a point whereas neither users default gid matched but each
> belonged to group `wheel', which had been hand edited to match.
> 
> So there was a shared gid on both ends.  That is no longer the case.
> Not sure why.
> 
> But even then the share_nfs man page seems to indicate that the client
> users will write files with their own gid (there is no mention of
> matching gids on both ends)

You're right again!

Back to NFS v4 vs. v3. I've no idea how Linux defines the version; you
mentioned it's been done in a config file, and that you set that to be
v3.

Are you sure that the Linux box does HONOUR that setting? Or might it be the
case that it is ignored (perhaps root or daemon or ... can't read it at
boot time)?

On Solaris, you would force that in /etc/default/nfs (see "man nfs"); still,
that file needs to be readable at boot.

Re: [osol-discuss] about zfs exported on nfs

2010-03-12 Thread Harry Putnam
Matthias Pfützner writes:

> Harry,
>
> sorry for my first answer, now that you rephrased some of the
> original post, I now remember what your initial problem really
> was... More inline below...

It may not have been or still may not be much of a clear exposition on
my part either...

[...]

> OK, so that ROOT can change things, you need to add options to the
> EXPORT side (aka, the ZFS server)

[...]

>  zfs set sharenfs=ro...@192.168.2 pfuetz/smb-share

Thanks for that.

[...]

> You state, that you have the users and groups entries on BOTH
> (client + server) done exactly same by hand, so that should be good,
> still, I would verify that. And also please check on the ZFS-Server
> the file: /etc/nsswitch.conf for the entries for:
>
> passwd
> group
>
> They should look like:
>
> passwd: files
> group:  files
>

nsswitch.conf is set up as you've shown

> So, with that knowledge, we need to consult "man share_nfs":
>
> The things interesting to you might be the following options:
>
>  anon=uidSet uid to be the effective user  ID
>  of   unknown   users.   By  default,
>  unknown users are given  the  effec-
>  tive  user  ID UID_NOBODY. If uid is
>  set to -1, access is denied.
>
>  root=access_listOnly  root  users  from  the   hosts
>  specified  in  access_list have root
>  access. See  access_list  below.  By
>  default, no host has root access, so
>  root  users   are   mapped   to   an
>  anonymous  user ID (see the anon=uid
>  option described  above).  Netgroups
>  can  be  used  if  the  file  system
>  shared is using UNIX  authentication
>  ( AUTH_SYS).

Not really.  After more careful reading of share_nfs, mainly as an aid
to working with your posts and info, I'm thinking the defaults should
do just about all I need.

I brought up root's inability, on the client, to change uid:gid only
because the defaults appear not to be working.  For example:

 nosuid

   =>  By default, clients are allowed to create  files  on
   =>  the  shared  file  system  with the setuid or setgid
   =>  mode enabled. Specifying nosuid  causes  the  server
   file system to silently ignore any attempt to enable
   the setuid or setgid mode bits.

With emphasis on the arrowed lines (not on `nosuid').

It's very possible I'm reading that wrong, but it appears to be saying
the client machine users will be able to write files with their own
uid:gid.

That does not occur here.  All files are written nobody:nobody

[...]

>> And again, the USER:GID exists on both server and client with
>> the same numeric uid:gid:
>> 
>>  osol (server) host:
>> 
>>  uname -a
>>   SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris
>> 
>>   root # ls -lnd /export/home/reader
>>   drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader

On closer checking I see the above info is erroneous.

> Which "ls" command? /usr/gnu/bin/ls, or /usr/bin/ls? Still: Here we see, that
> the file had been created by a user with User-ID: 1000 and Group-ID: 10.

A default osol install leaves gnu tools first in the path so it was
gnu `ls' I posted.
>
> What's the output of:
>
> ls -ld  /export/home/reader
>
> Does that resolve and list the user-name and group-name?

yes, well no.  

I've found what may explain some of this... it appears somewhere in
the last few updates on client... my carefully hand edited gid has
disappeared.

There was a point where neither user's default gid matched, but each
belonged to group `wheel', which had been hand edited to match.

So there was a shared gid on both ends.  That is no longer the case.
Not sure why.

But even then the share_nfs man page seems to indicate that the client
users will write files with their own gid (there is no mention of
matching gids on both ends).


___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org

Re: [osol-discuss] about zfs exported on nfs

2010-03-12 Thread Harry Putnam
Mike Gerdts  writes:

>>
>>  Any files/directories created by the linux user end up with
>>  nobody:nobody uid:gid and any attempt to change that from the client
>>  host fails, even if done as root.
>
> It looks to me like you are using NFSv4 and the NFS mapping domains do
> not match.  See /etc/default/nfs on OpenSolaris and I don't know what
> on Linux.

No, that has always been set with:
  grep '^[^#].*VERSMAX'  /etc/default/nfs
  NFS_CLIENT_VERSMAX=3
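
(One thing worth checking, as an editor's aside not from the thread:
NFS_CLIENT_VERSMAX only caps mounts made by the Solaris host when it acts
as a client; what the Linux client actually negotiated shows up in its own
mount table.)

```shell
# On the Linux client: look for vers=3 or vers=4 in the
# options column of the NFS entry
grep ' nfs' /proc/mounts
```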

___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org

Re: [osol-discuss] about zfs exported on nfs

2010-03-12 Thread Mike Gerdts
On Mon, Mar 8, 2010 at 10:54 PM, Harry Putnam  wrote:
> summary:
>
>  A zfs fs set with smb and nfs on, and set chmod g-s (set-gid) with
>  a local users uid:gid is being mounted by a remote linux host (and
>  windows hosts, but not discussing that here).
>
>  The remote user is the same as the local user in both numeric UID
>  and numeric GID
>
>  The zfs nfs/cifs share is mounted like this on a linux client:
>  mount -t nfs -o users,exec,dev,suid
>
>>  Any files/directories created by the linux user end up with
>  nobody:nobody uid:gid and any attempt to change that from the client
>  host fails, even if done as root.

It looks to me like you are using NFSv4 and the NFS mapping domains do
not match.  See /etc/default/nfs on OpenSolaris and I don't know what
on Linux.
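
(For reference: on most Linux distributions the NFSv4 mapping domain is
the `Domain` setting in /etc/idmapd.conf, while on OpenSolaris it is
NFSMAPID_DOMAIN in /etc/default/nfs. A hedged sketch; the domain value is
only an example:)

```shell
# Linux client, /etc/idmapd.conf:
#     [General]
#     Domain = home.example

# OpenSolaris server, /etc/default/nfs:
#     NFSMAPID_DOMAIN=home.example

# current effective domain on the Solaris side:
cat /var/run/nfs4_domain
```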

>
> Details:
>
> I'm not sure when this trouble started... it's been a while, long
> enough to have existed over a couple of builds (b129 b133). But was
> not always a problem.
>
> I jumped from 129 to 133 so don't know about builds in between.
>
> I have a zfs_fs .. /projects on zpool z3
>
> this is a hierarchy that is fairly deep but only the top level is zfs.
> (Aside:  That is something I intend to change soon)
>
> That is, the whole thing, of course, is zfs, but the lower levels have
> been created by whatever remote host was working there.
>
> z3/projects has these two settings:
>  z3/projects  sharenfs               on
>  z3/projects  sharesmb               name=projects
>
> So both cifs and nfs are turned on making the zfs host both a zfs and
> nfs server.
>
> Also when  z3/projects was created, it was set:
>  chmod g-s (set gid) right away.
>
> The remote linux user in this discussion has the same numeric UID and
> GID as the local zfs user who is owner of /projects
>
> Later, and more than once by now, I've run this command from the zfs
> host:
>  /bin/chmod -R A=everyone@:full_set:fd:allow /projects
>
> to get read/write to work when working from windows hosts.
>
> The filesystem is primarily accessed as an nfs mounted filesystem on a
> linux (gentoo linux) host.  But is also used over cifs by a couple of
> windows hosts.
>
> On the linux client host, `/projects' gets mounted like this:
>  mount -t nfs -o users,exec,dev,suid
>
> That has been the case both before having the problem and now.
>
> The trouble I see is that all files get created with:
>   nobody:nobody
>
> as UID:GID, even though /projects is set as normal USER:GROUP of a user
> on the zfs/nfs server.
>
> From the remote (we only deal with the linux remote here) any attempt
> to change uid:gid fails, even if done by root on the remote.
>
>
> ___
> opensolaris-discuss mailing list
> opensolaris-discuss@opensolaris.org
>



-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


Re: [osol-discuss] about zfs exported on nfs

2010-03-12 Thread Matthias Pfützner
Harry,

sorry for my first answer; now that you rephrased some of the original post, I
now remember what your initial problem really was... More inline below...

You (Harry Putnam) wrote:
> And made these settings:

[...]

>   zfs get sharenfs z3/projects
>   NAME PROPERTY  VALUE SOURCE
>   z3/projects  sharenfs  onlocal
> 
> The problem is that when mounted on linux client with this command:
>  (one example of 3 shares)
> 
>mount -t nfs -o users,exec,dev,suid zfs:/projects /projects
> 
> Where `zfs' is the name of the server host.
> 
> All those details were spelled out with some care in the OP. 
> 
> The problem... also detailed in OP and slightly rewritten here:
> 
>   The trouble I see is that all files get created with: 
>  nobody:nobody
>  as UID:GID
>   even though /projects is set as normal USER:GROUP of a user
>   on the zfs/nfs server.
>   (that would be USER=reader GROUP=wheel)
> 
>   From the remote (linux client),  any attempt
>   to change uid:gid fails, even if done by root on the remote.

OK, so that ROOT can change things, you need to add options on the EXPORT
side (aka, the ZFS server). NFS was made a bit more secure in later versions
(root on the client should not automatically have root privileges on the
server side, therefore that needs to be specified explicitly!). And you might
need even more options, more on that below; the example is just one small one
from my setup at home:

zfs set sharenfs=ro...@192.168.2 pfuetz/smb-share

"man zfs" states at the sharenfs section:

 sharenfs=on | off | opts

 Controls whether the file system is shared via NFS,  and
 what  options  are  used. A file system with a"sharenfs"
 property of "off" is managed through  traditional  tools
 such  as  share(1M),  unshare(1M), and dfstab(4). Other-
 wise,  the  file  system  is  automatically  shared  and
 unshared  with  the  "zfs  share" and "zfs unshare" com-
 mands. If the property is set  to  "on",  the  share(1M)
 command  is  invoked  with  no  options.  Otherwise, the
 share(1M) command is invoked with options equivalent  to
 the contents of this property.

 When the "sharenfs" property is changed for  a  dataset,
 the dataset and any children inheriting the property are
 re-shared with the new options, only if the property was
 previously "off", or if they were shared before the pro-
 perty was changed. If the new  property  is  "off",  the
 file systems are unshared.

You state, that you have the users and groups entries on BOTH (client +
server) done exactly same by hand, so that should be good, still, I would
verify that. And also please check on the ZFS-Server the file:
/etc/nsswitch.conf for the entries for:

passwd
group

They should look like:

passwd: files
group:  files
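
Since everything hinges on the numeric IDs matching, a quick sanity check
is to compare the uid:gid fields of the passwd entries on both ends. A
minimal sketch (the entries below are illustrative, using the uid 1000 /
gid 10 values seen later in this thread; on live hosts you would paste in
the output of `getent passwd reader` from each box):

```shell
# Compare uid:gid between two passwd(4)-style entries
server_entry='reader:x:1000:10::/export/home/reader:/bin/bash'
client_entry='reader:x:1000:10::/home/reader:/bin/bash'

# fields 3 and 4 are the numeric uid and gid
s_ids=$(printf '%s' "$server_entry" | awk -F: '{print $3 ":" $4}')
c_ids=$(printf '%s' "$client_entry" | awk -F: '{print $3 ":" $4}')

if [ "$s_ids" = "$c_ids" ]; then
    echo "uid:gid match: $s_ids"
else
    echo "MISMATCH: server=$s_ids client=$c_ids"
fi
```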

So, with that knowledge, we need to consult "man share_nfs":

The things interesting to you might be the following options:

 anon=uidSet uid to be the effective user  ID
 of   unknown   users.   By  default,
 unknown users are given  the  effec-
 tive  user  ID UID_NOBODY. If uid is
 set to -1, access is denied.

 root=access_listOnly  root  users  from  the   hosts
 specified  in  access_list have root
 access. See  access_list  below.  By
 default, no host has root access, so
 root  users   are   mapped   to   an
 anonymous  user ID (see the anon=uid
 option described  above).  Netgroups
 can  be  used  if  the  file  system
 shared is using UNIX  authentication
 ( AUTH_SYS).
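
Put together, granting a trusted network read/write plus root access on
the share in question would look something like this on the server. A
sketch only: the subnet is an example, and `z3/projects` is the dataset
from this thread.

```shell
# allow hosts on the 192.168.2 network read/write, and let root on
# those hosts stay root instead of being mapped to nobody
zfs set sharenfs='rw=@192.168.2,root=@192.168.2' z3/projects

# confirm what share(1M) is now invoked with
zfs get sharenfs z3/projects
```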

Sadly I don't know, if the Linux host (I don't actively manage Linux hosts, so
I have limited knowhow here!) might also need some additional option, when
mounting the NFS FileSystems...

> So certain things cannot be done.  For example a sandbox setup where I
> test procmail recipes will not accept the .procmailrc file since its 
> set:
>   nobody:nobody
> Instead of that USER:GID
> 
> And again, the USER:GID exists on both server and client with
> the same numeric uid:gid:
> 
>  osol (server) host:
> 
>  uname -a
>   SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris
> 
>   root # ls -lnd /export/home/reader
>   drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader

Which "ls" command? /usr/gnu/bin/ls, or /usr/bin/ls? Still: Here we see, that
the file had been created by a user with User-ID: 1000 and Group-ID: 10.

What's the output of:

ls -ld  /export/home/reader

Does that resolve

Re: [osol-discuss] about zfs exported on nfs

2010-03-11 Thread Harry Putnam
Matthias Pfützner 
writes:


[...]

> You need TWO things:
>
> You need to START the NFS server:
>
> svcadm enable svc:/network/nfs/server:default
>
> and then you need to SHARE some directories. If these are located on
> a ZFS pool, you can easily share that zpool by:
>
> zfs set sharenfs=on zpoolname
>

OK, Edward H.'s post confused me a bit, but as it turns out I have
done the TWO things all along.  So I had it right.

I did do some research on this matter a ways back.

And made these settings:

  svcs -a|grep nfs
  disabled   11:53:11 svc:/network/nfs/cbd:default
  online 11:53:36 svc:/network/nfs/status:default
  online 11:53:37 svc:/network/nfs/nlockmgr:default
  online 11:53:37 svc:/network/nfs/mapid:default
  online 11:53:40 svc:/network/nfs/rquota:default
  online 11:53:40 svc:/network/nfs/client:default
  online 11:53:42 svc:/network/nfs/server:default

  zfs get sharenfs z3/projects
  NAME PROPERTY  VALUE SOURCE
  z3/projects  sharenfs  onlocal

The problem is that when mounted on linux client with this command:
 (one example of 3 shares)

   mount -t nfs -o users,exec,dev,suid zfs:/projects /projects

Where `zfs' is the name of the server host.

All those details were spelled out with some care in the OP. 

The problem... also detailed in OP and slightly rewritten here:

  The trouble I see is that all files get created with: 
 nobody:nobody
 as UID:GID
  even though /projects is set as normal USER:GROUP of a user
  on the zfs/nfs server.
  (that would be USER=reader GROUP=wheel)

  From the remote (linux client),  any attempt
  to change uid:gid fails, even if done by root on the remote.

So certain things cannot be done.  For example a sandbox setup where I
test procmail recipes will not accept the .procmailrc file since its 
set:
  nobody:nobody
Instead of that USER:GID

And again, the USER:GID exists on both server and client with
the same numeric uid:gid:

 osol (server) host:

 uname -a
  SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris

  root # ls -lnd /export/home/reader
  drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader

----   ---=---   -   
linux (client) host:

  root # uname -a
  Linux reader 2.6.33-gentoo #2 SMP Sun Feb 28 22:43:57 CST 2010 
  i686 Intel(R) Celeron(R) CPU 3.06GHz GenuineIntel GNU/Linux  

  root # ls -lnd /home/reader/no_bak/procmail_ex
  drwxr-xr-x 2 1000 10 48 Mar 11 14:02 /home/reader/no_bak/procmail_ex


> The Linux host needs to be able to MOUNT the NFS-exported files.

> The /etc/auto.master file is using a later "extension" to the NFS
> system, named "automount". This only mounts directories when they
> are accessed, therefore "auto-mount".
>
> You could also add the to-be-mounted diretories into /etc/fstab, so
> that they are mounted ALWAYS.

I do it in the init scripts, with the same result.

> But, it seems, you might need to dig around a bit and get some
> introductory
> infos on NFS AUTOMOUNT

Well, that is no doubt true...

I didn't use automounting on the linux host but am not having a
problem mounting the shares, with the command shown above.
I do the mount in initscript `local.start' so the shares are always
mounted.

But I am having the problem described above.  Even though, far as I
know, I haven't made any changes on either end regarding exporting the
shares or mounting them.  But the problems began somewhere a month or
two ago.

I have made at least 2 upgrades with these settings in place on the
server end... and the linux end.  Now at b133 on the solaris end.  

I've probably made some change and forgot it.. or something similar
but having trouble tracking down the problem.

The settings on both ends are now as shown above.  But the problem
with all files being created nobody:nobody persists.

___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org

Re: [osol-discuss] about zfs exported on nfs

2010-03-11 Thread Matthias Pfützner
You (Harry Putnam) wrote:
> Edward Ned Harvey 
> writes:
> > I don't know if there is something deeper going on here, I'll just start by
> > saying I'm doing the same thing (but the server is solaris) and I don't have
> > any problems.  This has been in production for quite some time, and used
> > heavily by many users and various nfs clients.  It's rock solid, and
> > everyone loves it.  Very tried and tested.  ;-)  Here is how I am set up:
> >
> > Filesystem exported by the following line in /etc/dfs/dfstab:
> > share -F nfs -o
> > rw=someclient.domain.com,root=someclient.domain.com,anon=4294967294 /export
> 
> Is `share' literal... or the name of a zfs_fs like:
> /some/shared_zfs_fs?  What does the number `4294967294' signify?

If you have ZFS, you don't NEED to put those things into /etc/dfs/dfstab. The
nice thing about ZFS is, it keeps its own "export" list handy.

You need TWO things:

You need to START the NFS server:

svcadm enable svc:/network/nfs/server:default

and then you need to SHARE some directories. If these are located on a ZFS
pool, you can easily share that zpool by:

zfs set sharenfs=on zpoolname

where zpoolname is the name of a zpool...

More details here:

http://blogs.pfuetzner.de/matthias/?p=268

To your questions:

Yes, share is literal, because it's the command.
For the "number", see "man share_nfs":

 anon=uidSet uid to be the effective user  ID
 of   unknown   users.   By  default,
 unknown users are given  the  effec-
 tive  user  ID UID_NOBODY. If uid is
 set to -1, access is denied.
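
(As an aside on Edward's magic number: 4294967294 is just -2 viewed as an
unsigned 32-bit value, traditionally used as a numeric "nobody"-style uid.
A quick check:)

```shell
# 2^32 - 2 == 4294967294, i.e. the bit pattern of -2 in 32 bits
echo $(( (1 << 32) - 2 ))
```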

So, it should be sufficient to put 

share -F nfs /export

into /etc/dfs/dfstab, but only, if that /export is NOT on a zpool!
Otherwise it should be

zfs set sharenfs=on export

assuming that you called the zpool on which /export lives, export...

> > Filesystem mounted by Linux (RHEL/Centos 4 and 5) clients:
> > RHEL/Centos 4 machines upgraded to autofs5.
> >
> > Following line in /etc/auto.master:
> > /-  /etc/auto.direct --timeout=1200
> >
> > Following line in /etc/auto.direct:
> > /path/to/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix
> > server.domain.com:/export
> 
> Egad... I don't even recognize about 90% of that..
> So you didn't set anything with `zfs set' regarding sharenfs?

See above, YES, when using ZFS you should do it via zfs set sharenfs. The
wonderful and nice thing is: It's a property of the zpool, so EVEN after a
"zpool export" and an "zpool import" on a different machine, that config is
maintained!

> But I'm not a well trained system admin... just a homeboy with a home
> zfs/nfs/cifs server.
> 
> And on the linux hosts, there is no /etc/exports involved?  Or does
> your /etc/auto.master and /etc/auto.direct do the job /etc/exports
> traditionally has done?

The Linux host needs to be able to MOUNT the NFS-exported files.

The /etc/auto.master file is using a later "extension" to the NFS system, named
"automount". This only mounts directories when they are accessed, therefore
"auto-mount".

You could also add the to-be-mounted diretories into /etc/fstab, so that they
are mounted ALWAYS.
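
For the always-mounted route, the /etc/fstab entry on the Linux client
might look like this (a sketch; server name `zfs` and mount point
`/projects` are from this thread, the remaining options are illustrative):

```shell
# /etc/fstab on the Linux client
# device         mountpoint  type  options              dump fsck
zfs:/projects    /projects   nfs   vers=3,rw,hard,intr  0    0
```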

But, it seems, you might need to dig around a bit and get some introductory
infos on

NFS
AUTOMOUNT
and
ZFS

HTH,
Matthias
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER  | Der Schlaflose ist zu faul
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | zum Träumen.
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  |
Germany  | http://www.pfuetzner.de/matthias/ |  Günter Eichberger
___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


Re: [osol-discuss] about zfs exported on nfs

2010-03-10 Thread Harry Putnam
Edward Ned Harvey 
writes:
> I don't know if there is something deeper going on here, I'll just start by
> saying I'm doing the same thing (but the server is solaris) and I don't have
> any problems.  This has been in production for quite some time, and used
> heavily by many users and various nfs clients.  It's rock solid, and
> everyone loves it.  Very tried and tested.  ;-)  Here is how I am set up:
>
> Filesystem exported by the following line in /etc/dfs/dfstab:
> share -F nfs -o
> rw=someclient.domain.com,root=someclient.domain.com,anon=4294967294 /export

Is `share' literal... or the name of a zfs_fs like:
/some/shared_zfs_fs?  What does the number `4294967294' signify?

> Filesystem mounted by Linux (RHEL/Centos 4 and 5) clients:
> RHEL/Centos 4 machines upgraded to autofs5.
>
> Following line in /etc/auto.master:
> /-  /etc/auto.direct --timeout=1200
>
> Following line in /etc/auto.direct:
> /path/to/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix
> server.domain.com:/export

Egad... I don't even recognize about 90% of that..
So you didn't set anything with `zfs set' regarding sharenfs?

But I'm not a well trained system admin... just a homeboy with a home
zfs/nfs/cifs server.

And on the linux hosts, there is no /etc/exports involved?  Or does
your /etc/auto.master and /etc/auto.direct do the job /etc/exports
traditionally has done?

___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


Re: [osol-discuss] about zfs exported on nfs

2010-03-10 Thread Edward Ned Harvey
>   A zfs fs set with smb and nfs on, and set chmod g-s (set-gid) with
>   a local users uid:gid is being mounted by a remote linux host (and
>   windows hosts, but not discussing that here).
> 
>   The remote user is the same as the local user in both numeric UID
>   and numeric GID
> 
>   The zfs nfs/cifs share is mounted like this on a linux client:
>   mount -t nfs -o users,exec,dev,suid
> 
>   Any files/directories created by the linux user end up with
>   nobody:nobody uid:gid and any attempt to change that from the client
>   host fails, even if done as root.

I don't know if there is something deeper going on here, I'll just start by
saying I'm doing the same thing (but the server is solaris) and I don't have
any problems.  This has been in production for quite some time, and used
heavily by many users and various nfs clients.  It's rock solid, and
everyone loves it.  Very tried and tested.  ;-)  Here is how I am set up:

Filesystem exported by the following line in /etc/dfs/dfstab:
share -F nfs -o
rw=someclient.domain.com,root=someclient.domain.com,anon=4294967294 /export

Filesystem mounted by Linux (RHEL/Centos 4 and 5) clients:
RHEL/Centos 4 machines upgraded to autofs5.

Following line in /etc/auto.master:
/-  /etc/auto.direct --timeout=1200

Following line in /etc/auto.direct:
/path/to/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix
server.domain.com:/export

___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


[osol-discuss] about zfs exported on nfs

2010-03-09 Thread Harry Putnam
summary:

  A zfs fs set with smb and nfs on, and set chmod g-s (set-gid) with
  a local users uid:gid is being mounted by a remote linux host (and
  windows hosts, but not discussing that here).

  The remote user is the same as the local user in both numeric UID
  and numeric GID
  
  The zfs nfs/cifs share is mounted like this on a linux client:
  mount -t nfs -o users,exec,dev,suid

  Any files/directories created by the linux user end up with
  nobody:nobody uid:gid and any attempt to change that from the client
  host fails, even if done as root.

Details:

I'm not sure when this trouble started... it's been a while, long
enough to have existed over a couple of builds (b129 b133). But was
not always a problem.

I jumped from 129 to 133 so don't know about builds in between.

I have a zfs_fs .. /projects on zpool z3

This is a hierarchy that is fairly deep, but only the top level is zfs.
(Aside:  That is something I intend to change soon)

That is, the whole thing, of course, is zfs, but the lower levels have
been created by whatever remote host was working there.

z3/projects has these two settings:
  z3/projects  sharenfs   on
  z3/projects  sharesmb   name=projects

So both cifs and nfs are turned on making the zfs host both a zfs and
nfs server.

Also when  z3/projects was created, it was set:
  chmod g-s (set gid) right away.

The remote linux user in this discussion has the same numeric UID and
GID as the local zfs user who is owner of /projects
 
Later, and more than once by now, I've run this command from the zfs
host:
  /bin/chmod -R A=everyone@:full_set:fd:allow /projects

to get read/write to work when working from windows hosts.

The filesystem is primarily accessed as an nfs mounted filesystem on a
linux (gentoo linux) host.  But is also used over cifs by a couple of
windows hosts.

On the linux client host, `/projects' gets mounted like this:
  mount -t nfs -o users,exec,dev,suid

That has been the case both before having the problem and now.

The trouble I see is that all files get created with: 
   nobody:nobody

as UID:GID, even though /projects is set as normal USER:GROUP of a user
on the zfs/nfs server.

From the remote (we only deal with the linux remote here) any attempt
to change uid:gid fails, even if done by root on the remote.


___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org