Matthias Pfützner <matth...@pfuetzner.de>
writes:

[...]

> You need TWO things:
>
> You need to START the NFS server:
>
> svcadm enable svc:/network/nfs/server:default
>
> and then you need to SHARE some directories. If these are located on
> a ZFS pool, you can easily share that zpool by:
>
>         zfs set sharenfs=on zpoolname
>

OK, Edward H.'s post confused me a bit, but as it turns out I have
done the TWO things all along.  So I had it right.

I did do some research on this matter a ways back.

And made these settings:

  svcs -a|grep nfs
  disabled       11:53:11 svc:/network/nfs/cbd:default
  online         11:53:36 svc:/network/nfs/status:default
  online         11:53:37 svc:/network/nfs/nlockmgr:default
  online         11:53:37 svc:/network/nfs/mapid:default
  online         11:53:40 svc:/network/nfs/rquota:default
  online         11:53:40 svc:/network/nfs/client:default
  online         11:53:42 svc:/network/nfs/server:default

  zfs get sharenfs z3/projects
  NAME         PROPERTY  VALUE     SOURCE
  z3/projects  sharenfs  on        local
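
For completeness, the active exports can be double-checked from both
ends (exact paths depend on the dataset's mountpoint, so take these as
a sketch of my setup rather than verbatim output):

  on the server:        share
  from the linux side:  showmount -e zfs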

The problem shows up when the shares are mounted on the linux client
with this command (one example of 3 shares):

   mount -t nfs -o users,exec,dev,suid zfs:/projects /projects

Where `zfs' is the name of the server host.
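
(As a side experiment I could pin the protocol version on the mount;
with NFSv3 the numeric uid/gid go over the wire directly, so if the
nobody:nobody mapping disappears it would point at NFSv4 id mapping.
Untested on my end:)

   mount -t nfs -o vers=3,users,exec,dev,suid zfs:/projects /projects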

All those details were spelled out with some care in the OP.

The problem, also detailed in the OP and slightly rewritten here:

  The trouble I see is that all files get created with
     nobody:nobody
     as UID:GID
  even though /projects is owned by the normal USER:GROUP of a user
  on the zfs/nfs server
  (that would be USER=reader GROUP=wheel).

  From the remote (linux client),  any attempt
  to change uid:gid fails, even if done by root on the remote.

So certain things cannot be done.  For example, a sandbox setup where I
test procmail recipes will not accept the .procmailrc file, since it is
set to:
  nobody:nobody
instead of the expected USER:GROUP.

And again, the USER:GROUP exists on both server and client with
the same numeric uid:gid:

 osol (server) host:

     uname -a
  SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris

  root # ls -lnd /export/home/reader
  drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader

-------        ---------       ---=---       ---------      -------- 
linux (client) host:

  root # uname -a
  Linux reader 2.6.33-gentoo #2 SMP Sun Feb 28 22:43:57 CST 2010 
  i686 Intel(R) Celeron(R) CPU 3.06GHz GenuineIntel GNU/Linux  

  root # ls -lnd /home/reader/no_bak/procmail_ex
  drwxr-xr-x 2 1000 10 48 Mar 11 14:02 /home/reader/no_bak/procmail_ex
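
Since the numeric ids match on both ends, the usual remaining suspect
(assuming the mount actually negotiates NFSv4) is the id-mapping domain
disagreeing between client and server.  I still need to verify something
like this -- commands from memory, not yet run here:

  linux client:  grep -i '^ *Domain' /etc/idmapd.conf
  osol server:   grep NFSMAPID_DOMAIN /etc/default/nfs
                 (or, if available: sharectl get nfs)

If the domains differ, setting them to the same value and restarting
rpc.idmapd on the client plus

  svcadm restart svc:/network/nfs/mapid

on the server would be the obvious next step.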


> The Linux host needs to be able to MOUNT the NFS-exported files.

> The /etc/auto.master file is using a later "extension" to the NFS
> system, named "automount".  This only mounts directories when they
> are accessed, hence "auto-mount".
>
> You could also add the to-be-mounted directories into /etc/fstab, so
> that they are mounted ALWAYS.

I do it in the init scripts, with the same result.
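
For reference, the /etc/fstab equivalent of what my init script does
would be something like (same options as the mount command above):

  zfs:/projects   /projects   nfs   users,exec,dev,suid   0 0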

> But, it seems, you might need to dig around a bit, and get some
> introductory info on NFS AUTOMOUNT.

Well, that is no doubt true...

I didn't use automounting on the linux host, but I'm not having any
problem mounting the shares with the command shown above.
I do the mount in the initscript `local.start' so the shares are always
mounted.
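
Once mounted, the client can also report what was actually negotiated
(protocol version, etc.), which should show whether NFSv4 and its id
mapping are in play:

  nfsstat -m
  (or: grep ' nfs' /proc/mounts)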

But I am having the problem described above, even though, as far as I
know, I haven't made any changes on either end regarding exporting or
mounting the shares.  The problems began somewhere around a month or
two ago.

I have made at least 2 upgrades with these settings in place on the
server end... and the linux end.  Now at b133 on the solaris end.  

I've probably made some change and forgotten it... or something similar,
but I'm having trouble tracking down the problem.

The settings on both ends are now as shown above.  But the problem
with all files being created nobody:nobody persists.
