We're using the following with our NetApp device:

-rw,soft,intr,noexec,nosuid,nodev,users,rsize=65535,wsize=65535,proto=tcp

Whilst 'soft' is generally not recommended, I've found that using 'hard'
causes the broker to lock up immediately.
The settings we have in place above were chosen by our storage team.
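For context, a mount of that shape would look something like the following in /etc/fstab. The server name `netapp01`, export path `/vol/broker`, and mount point are placeholders, not values from this thread:

```
# Illustrative /etc/fstab entry; 'netapp01:/vol/broker' and '/mnt/broker'
# are hypothetical placeholders for your own NetApp export and mount point.
netapp01:/vol/broker  /mnt/broker  nfs  rw,soft,intr,noexec,nosuid,nodev,users,rsize=65535,wsize=65535,proto=tcp  0  0
```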

Thanks,
Paul

On Fri, Mar 18, 2016 at 3:57 PM, James A. Robinson <jim.robin...@gmail.com>
wrote:

> Lowering timeo on the client side doesn't appear to do jack to help with
> this situation.  I'm reluctant to switch from "hard" to "soft" because of
> the warning that it can easily lead to corruption issues, but I do see
> some discussions where people say flipping to that mode helped them detect
> lost locks.
>
> Anyone who has an HA configuration using NFS that they know works for
> failover care to share exactly what mount settings they are using?
>
>
> On Fri, Mar 18, 2016 at 8:51 AM, James A. Robinson <jim.robin...@gmail.com>
> wrote:
>
> > Yes indeed there was a problem w/ the underlying NFS connection, logged
> > at the OS level.  It's funny, this service wasn't even under load when
> > the timeout happened, so NFS is living up to my expectations already.
> >
> > So I could either lower the client side timeouts to fit within the 30
> > second lease, or I could raise the server side lease time to match the
> > 180 seconds the client will try for.
> >
>
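The 30-second-lease vs. 180-second-retry mismatch described in the quoted message can be sketched numerically. This is a rough model only: the `timeo=600` (60 s per attempt) and `retrans=2` values below are assumed common NFS-over-TCP defaults, not settings confirmed anywhere in this thread.

```python
def nfs_retry_window_seconds(timeo_deciseconds: int, retrans: int) -> float:
    """Approximate total time a hard-mounted NFS-over-TCP client keeps
    retrying an RPC before reporting a major timeout: the initial attempt
    plus 'retrans' retransmissions, each waiting 'timeo' deciseconds."""
    attempts = retrans + 1
    return (timeo_deciseconds / 10.0) * attempts

# Assumed defaults for NFS over TCP: timeo=600 (i.e. 60 s), retrans=2.
client_window = nfs_retry_window_seconds(600, 2)
server_lease = 30.0  # the 30-second server-side lease mentioned above

print(client_window)                 # 180.0
print(client_window > server_lease)  # True -- the client can outlive the lease
```

Under those assumptions the client keeps retrying for roughly six lease periods, which is the mismatch the original poster proposes to close from either side (shorter client timeouts or a longer server lease).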
