Hello,
I have a gluster volume set up with geo-replication to two slaves;
however, I'm seeing inconsistent status output on the slave nodes.
Here is the status reported by gluster volume geo-replication status on
each node:
[root@foo-gluster-srv3 ~]# gluster volume geo-replication status
MASTER
I was able to get this working by deleting the geo-replication session
and recreating it. I'm not sure why it broke in the first place, but it
is working now.
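For reference, the delete-and-recreate sequence was roughly the
following (mastervol, slavehost and slavevol are placeholders, not the
actual names from my setup):

```shell
# Stop and delete the existing geo-replication session
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol delete

# Recreate the session; push-pem redistributes the SSH keys to the slave
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force

# Start it again and verify
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status
```

These commands need a running gluster cluster, so treat this as a sketch
rather than a tested script.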
On 04/28/2017 03:08 PM, Michael Watters wrote:
> I've just upgraded my gluster hosts from gluster 3.8 to 3.10 and it
> appears th
I've just upgraded my gluster hosts from gluster 3.8 to 3.10 and it
appears that geo-replication on my volume is now broken. Here are the
log entries from the master. I've tried restarting the geo-replication
process several times, which did not help. Is there any way to resolve this?
Thanks. Luckily this system isn't in production yet, so it shouldn't be
a big deal to take gluster offline for a bit.
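In case it helps, this is roughly what I ran to restart the session and
look for errors (volume and host names are placeholders, and the log
path may differ depending on your distribution):

```shell
# Force-stop and restart the geo-replication worker
gluster volume geo-replication mastervol slavehost::slavevol stop force
gluster volume geo-replication mastervol slavehost::slavevol start

# Check per-worker status and watch the master-side geo-rep log
gluster volume geo-replication mastervol slavehost::slavevol status detail
tail -f /var/log/glusterfs/geo-replication/mastervol/*.log
```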
On 01/06/2017 12:28 AM, Atin Mukherjee wrote:
> On Fri, Jan 6, 2017 at 4:14 AM, Michael Watters <watte...@watters.ws
> <mailto:watte...@watters.ws>> wrote:
I've set up a small gluster cluster running three nodes and I would like
to rename one of the hosts. What is the proper procedure for changing
the host name on a node? Do I simply stop the gluster service, detach
the peer, and then run sed on the files under /var/lib/gluster to use
the new name?
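To make the question concrete, this is the sort of sequence I have in
mind -- completely unverified, with oldname/newname as placeholders
(note the state directory is actually /var/lib/glusterd):

```shell
# On another node in the cluster, detach the peer first
gluster peer detach oldname

# On the node being renamed: stop glusterd, rename the host,
# and rewrite any remaining references in the state directory
systemctl stop glusterd
hostnamectl set-hostname newname
grep -rl oldname /var/lib/glusterd | xargs sed -i 's/oldname/newname/g'
systemctl start glusterd

# Re-probe the renamed node from another peer
gluster peer probe newname
```

Is this safe, or is there a supported procedure I'm missing?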
Have you done comparisons against Lustre? From what I've seen, Lustre
is roughly 2x faster than a replicated gluster volume.
On 1/4/17 5:43 PM, Lindsay Mathieson wrote:
> Hi all, just wanted to mention that since I had sole use of our
> cluster over the holidays and a complete set of