[Gluster-users] geo-replication appears to be working, but "failures" are logged for each file transferred.

2016-02-02 Thread Kris Laib
I recently set up geo-replication between two sites, and everything appears to be working; however, when I run "gluster volume geo-replication SourceVol DestVol status detail", the "failures" column seems to increment for each file it transfers. I've looked at the logs in /var/
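A quick way to correlate that failures counter with actual errors is to watch the status output while following the geo-replication log on the master. A minimal sketch, with placeholder volume/slave names and the usual master-side log layout (the exact session subdirectory name varies, so the path is an assumption):

    # Watch the failures column (MASTERVOL and SLAVEHOST::SLAVEVOL are placeholders)
    watch -n 5 'gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail'

    # In another shell, follow the master-side session logs
    # (assumed layout; the subdirectory is named after the session)
    tail -f /var/log/glusterfs/geo-replication/MASTERVOL/*.log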

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-28 Thread Kris Laib
seconds or less. I didn’t actually need to set CTDB_MANAGES_NFS; just adding the new event monitor in /etc/ctdb/events.d did the trick. Thanks!
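For anyone landing here later, a minimal sketch of such an events.d monitor, assuming CTDB's legacy event-script layout; the script name and the rpcinfo health check are illustrative, not the exact script from this thread:

    #!/bin/sh
    # /etc/ctdb/events.d/60.glusternfs (hypothetical name; file must be executable)
    # CTDB invokes event scripts with the event name as the first argument.
    case "$1" in
    monitor)
        # Fail the monitor event if NFSv3 stops answering; CTDB then marks
        # this node UNHEALTHY and fails its public IP over to a healthy node.
        rpcinfo -t localhost nfs 3 >/dev/null 2>&1 || {
            echo "NFS v3 service not responding"
            exit 1
        }
        ;;
    esac
    exit 0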

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Kris Laib
t seem to get speeds higher than 30 MB/s using the Gluster FUSE client (I posted more details on that earlier today to this group as well, looking for advice there). -Kris

[Gluster-users] Write speed issues with 16MB files and using Gluster fuse mount vs NFS

2016-01-27 Thread Kris Laib
Hi all, We were initially planning to use NFS mounts for our gluster deployment to reduce the amount of client-side changes needed to swap out our existing NFS solution. I ran into an HA issue with NFS (see my other post from this morning), so I started looking into the Fuse client as
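To reproduce the comparison, a rough throughput test over each mount type; the mount points and volume name below are placeholders, and conv=fsync makes dd flush before reporting a rate, so the numbers reflect the mount rather than the local page cache:

    # FUSE mount vs NFSv3 mount of the same volume (paths are placeholders)
    mount -t glusterfs server1:/testvol /mnt/fuse
    mount -t nfs -o vers=3 server1:/testvol /mnt/nfs

    # Write a stream of 16MB blocks through each mount and compare rates
    dd if=/dev/zero of=/mnt/fuse/bench.bin bs=16M count=64 conv=fsync
    dd if=/dev/zero of=/mnt/nfs/bench.bin  bs=16M count=64 conv=fsync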

[Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Kris Laib
Hi all, We're getting ready to roll out Gluster using standard NFS from the clients, with CTDB and RRDNS to help facilitate HA. I thought we were good to go, but recently had an issue where there wasn't enough memory on one of the gluster nodes in a test cluster, and the OOM killer took out the
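When a node's NFS daemon dies (OOM kill or otherwise), the resulting failover is visible from CTDB itself; these are standard ctdb CLI commands, useful while testing:

    # Show node health; a node whose monitor event fails shows UNHEALTHY
    ctdb status

    # Show which node currently hosts each CTDB public IP
    ctdb ip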