Hi all,

We're getting ready to roll out Gluster using standard NFS from the clients,
with CTDB and round-robin DNS (RRDNS) to facilitate HA. I thought we were
good to go, but we recently had an issue where one of the gluster nodes in a
test cluster ran out of memory, and the OOM killer took out the NFS daemon
process. Since there was still IP traffic between nodes and the
gluster-backed local CTDB mount for the lock file was intact, CTDB didn't
kick in and initiate failover, so all clients connected to the node where
NFS was killed lost their connections. We'll obviously fix the lack of
memory, but going forward, how can we protect clients from being
disconnected if the NFS daemon is stopped for any reason?
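From what I've read, CTDB's health checks are driven by the event scripts in
/etc/ctdb/events.d/, and the stock 60.nfs script (enabled with
CTDB_MANAGES_NFS=yes in /etc/sysconfig/ctdb) is supposed to catch a dead
nfsd during the "monitor" event. Is a monitor hook along these lines the
right way to make CTDB notice this, or should the stock script already
handle it? (The script name and the rpcinfo probe below are just my sketch,
not something I've deployed.)

#!/bin/sh
# Hypothetical /etc/ctdb/events.d/61.nfscheck -- if the "monitor"
# event exits non-zero, CTDB marks this node unhealthy and fails
# its public IPs over to a healthy node.

case "$1" in
    monitor)
        # Ask the local portmapper whether NFS v3 (over UDP) is
        # still registered and answering.
        if ! rpcinfo -u localhost nfs 3 >/dev/null 2>&1 ; then
            echo "nfsd is not responding on this node"
            exit 1
        fi
        ;;
esac

exit 0

Only the monitor case returns non-zero, so the other events (takeip,
releaseip, etc.) pass through untouched.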

Our cluster has 3 nodes: one is a silent witness node to help avoid split
brain, and the other two host the volumes, with one brick per node and 1x2
replication.
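For reference, the volume setup was roughly as follows (hostnames, brick
paths, and the quorum option are illustrative, not our exact config):

# From gnode1: add the second storage node and the witness to the
# trusted pool (the witness holds no bricks, it only adds a vote).
gluster peer probe gnode2
gluster peer probe gwitness

# One brick per storage node, 1x2 replication.
gluster volume create gvol0 replica 2 \
    gnode1:/export/brick1 gnode2:/export/brick1
gluster volume start gvol0

# Server-side quorum so a node that loses the majority stops serving.
gluster volume set gvol0 cluster.server-quorum-type server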

Is there something incorrect about my setup, or is this a known drawback of
using standard NFS mounts with gluster?

Thanks,
Kris
