[ ... ]

> The server is set up as follows:
> mkdir /export/orabak
> mount /dev/mapper/orabakp1 /export/orabak
> cat /etc/exports
> /export 192.168.10.0/24(rw,no_root_squash,fsid=0)
> /export/orabak 192.168.10.0/24(rw,no_root_squash)
> exportfs -v
> /export/orabak 
> 192.168.10.0/24(rw,wdelay,no_root_squash,no_subtree_check,anonuid=65534,anongid=65534)
> /export 
> 192.168.10.0/24(rw,wdelay,no_root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)
[ ... ]
> So the nfs4 daemon is serving up the underlying mountpoint, not the 
> filesystem mounted on it.
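
(For reference: since /export is exported with fsid=0, it acts as
the NFSv4 pseudo-root, so I assume the clients mount with paths
relative to that root, something like the sketch below; the server
address and the mount point are placeholders:)

  # NFSv4 paths are relative to the fsid=0 root, so the client
  # mounts "/orabak", not "/export/orabak"
  mkdir -p /mnt/orabak
  mount -t nfs4 192.168.10.1:/orabak /mnt/orabak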

I have done a couple of experiments here (with 5.3), and it
looks like this:

* If you export a directory before mounting a block device on it,
  and a client then mounts the export, the client will of course
  see the original contents of the directory.

* If you then mount a block device on the server over an exported
  directory, any *subsequent* mounts from a client will see the
  newly mounted block device, but clients that had already mounted
  the export will continue to see the original contents of the
  directory.

* If the client unmounts and remounts the exported directory, it
  will now see the newly mounted content (a sketch of this
  sequence follows the list).
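
Roughly the sequence I tried, sketched below; the scratch
directory, device, and host names are made up for illustration,
and the client mounts are plain NFSv3-style mounts:

  # server: export a plain directory
  mkdir -p /export/test
  exportfs -o rw,no_root_squash 192.168.10.0/24:/export/test

  # client A: mounts now, so it sees the original directory contents
  mount server:/export/test /mnt/test

  # server: mount a block device over the exported directory
  mount /dev/sdb1 /export/test

  # client B: mounts *after* the server-side mount, sees /dev/sdb1
  mount server:/export/test /mnt/test

  # client A: keeps seeing the old contents until it remounts
  umount /mnt/test
  mount server:/export/test /mnt/test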

In other words, when a client mounts an exported directory, it
gets a "reference" to that directory's inode, and that inode
stays associated with that mount. Presumably the server caches
a pointer to that inode in its table of exports, and the cache
is keyed not by the declared export, but by the export instance.
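
One way to see what the server-side mount does to the export
point (a sketch; the device/inode numbers below are invented) is
to compare the device/inode pair before and after the mount:

  # server, before mounting the block device
  stat -c 'dev=%d ino=%i' /export/orabak
  # -> dev=2049 ino=123456  (the directory on the parent filesystem)

  # server, after mounting /dev/mapper/orabakp1 on it
  stat -c 'dev=%d ino=%i' /export/orabak
  # -> dev=64768 ino=2      (the root inode of the mounted filesystem)

A client that mounted earlier stays pinned to the first pair,
which would explain why it keeps seeing the old contents.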

Now, the sequence of operations you have listed above makes it
look as if you mounted on the client *after* mounting on the
server, but I wonder whether that's true. If it is, perhaps some
patch in 5.4 has introduced slightly different semantics in the
NFS server as to when the export's inode is cached.

As an experiment, after mounting '/dev/mapper/orabakp1', restart
the NFS service.
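
Something along these lines; on RHEL 5 the init script is called
"nfs", and "exportfs -f" (which flushes the kernel's export
table) may be worth trying first as a less disruptive variant:

  mount /dev/mapper/orabakp1 /export/orabak
  exportfs -f          # flush the kernel's export table, or:
  service nfs restart  # restart the whole NFS service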
