Travis,

In the past I have used NFS in a cluster, and in general its performance was very poor compared to SMB. This was a few years ago (i.e. not using mapcache), but we ran into a few issues with NFS:

1. NFS CPU overhead was significantly higher
2. connection recovery was poor if a server was pulled from the cluster and added back again (i.e. recovery after an error condition)
3. the protocol seems to carry a lot more overhead

-Steve W

On 11/21/2011 9:39 AM, thomas bonfort wrote:
What kind of performance issues? The current locking code uses only
the presence or absence of a file for its locking, and does
not rely on flock/fcntl.
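
For anyone curious, a minimal sketch of that style of locking (not the
actual mapcache source; the function names and lockfile path are just
illustrative):

    /* Presence/absence locking: O_CREAT|O_EXCL lets exactly one
       process create the file, so whoever succeeds holds the lock.
       No flock()/fcntl() involved. */
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int try_lock(const char *lockfile)
    {
        int fd = open(lockfile, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0)
            return (errno == EEXIST) ? 0 : -1;  /* 0 = lock held elsewhere */
        close(fd);
        return 1;                               /* lock acquired */
    }

    static void release_lock(const char *lockfile)
    {
        unlink(lockfile);  /* the lock is released once the file is gone */
    }

A process that fails to take the lock can simply poll until the file
disappears. One caveat: O_CREAT|O_EXCL was historically not atomic over
NFSv2, although NFSv3 and later handle it correctly (see open(2)).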

--
thomas

On Mon, Nov 21, 2011 at 15:16, Travis Kirstine <traviskirst...@gmail.com> wrote:
Thomas,

We have been running into some performance issues with mapcache and NFS.
We feel the issue may be related to how NFS locks files/directories
compared to SMB.  We are trying a few things on our end (disabling
locking, NFSv4, etc.).  Do you have any ideas?

Regards

On 20 October 2011 12:19, thomas bonfort <thomas.bonf...@gmail.com> wrote:
So, this discussion inspired me to completely rework the locking
mechanism in mapcache, so that it no longer relies on file locks, which
have their quirks on network filesystems.
I tried multiple apache instances configured to use an SMB-mounted
lock directory and hammered both instances on the same unseeded
area to force locking, and ended up with absolutely no duplicate
WMS requests or failed requests for the clients.
The code is committed in trunk. Thanks for bringing this up; it
allowed me to really simplify the locking code and remove a lot of
unneeded stuff :)

--
thomas

On Thu, Oct 20, 2011 at 17:08, Travis Kirstine <traviskirst...@gmail.com> wrote:
Andreas and Thomas

Thanks for your responses.  I have discussed this with some of our IT
staff, and they had a similar solution to Andreas's, using GFS.  Their
comments are below:

"I suspect this scheme is not reliable over NFS. The problem is the
directory updates are not synchronized across multiple nodes. I had a
similar issue with the IMAP E-mail protocol. Our workaround currently
is to force each user to leverage a single server.

Ref:
http://wiki.dovecot.org/NFS

There seem to be some tweaks to disable directory attribute caching,
but these can slow performance.
The only other workaround is to use GFS, which I found to have its own issues."
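
For reference, the attribute-caching tweaks alluded to above are
standard NFS mount options (see nfs(5)); for example, with placeholder
server and paths:

    mount -t nfs -o noac server:/export/cache /mnt/cache
    mount -t nfs -o actimeo=0 server:/export/cache /mnt/cache

noac disables attribute caching entirely (and also forces synchronous
writes), while actimeo=0 just sets the cache timeouts to zero; both
trade extra GETATTR round trips for coherence across nodes.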

Regards



On 20 October 2011 05:32, Eichner, Andreas - SID-NLKM
<andreas.eich...@sid.sachsen.de>  wrote:

We use TileCache.py on two servers with the cache on an OCFS2 filesystem
on a shared LUN in the SAN, with no known issues so far. Note: spurious
stale lock files have occurred even on a single machine; there seemed to
be issues under heavy request load with a very slow upstream server. I
used a cron job to delete lock files older than 5 minutes or so (a
sketch follows below).
As Thomas noted, if the lock files are created on a shared filesystem
and you make sure the filesystem you use is able to lock files properly
(read the docs carefully!), there's no reason why it should not work.
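
For what it's worth, such a cleanup job can be a one-line crontab
entry; the lock directory and .lck suffix below are made-up examples:

    # every 5 minutes, remove lock files not modified for > 5 minutes
    */5 * * * * find /var/cache/tiles/locks -name '*.lck' -mmin +5 -delete

-mmin and -delete are GNU find options; the age threshold should
comfortably exceed the slowest expected upstream response.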

_______________________________________________
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


