On 02/04/2017 06:20 AM, ML Wong wrote:
Thanks so much for your prompt response, Soumya.
That helps clear up one of my questions. I am trying to figure out
why the NFS service did not fail over / pick up the NFS clients last time
one of our cluster nodes failed.
Though I could see, in coros
Okay, so a specific permutation of the replace-brick command does seem
to do what I was looking for. In this case server1 is being replaced by
server2. The command executes successfully, and I then see the data on
server2.
gluster volume replace-brick gv0 server1:/data/glusterfs/gv0/brick1/bric
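For reference, the general form that permutation takes should be roughly the
following; the brick paths here are placeholders, not the real ones:

gluster volume replace-brick gv0 server1:<old-brick-path> \
    server2:<new-brick-path> commit force

As far as I know, recent 3.x releases only accept the "commit force" variant,
and the self-heal that follows is what copies the data onto the new brick.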
All:
I am quite new to Gluster, so this is likely my lack of knowledge. Here's my
scenario: I have a distributed-replicate volume with a replica count of 3.
Each brick is on a different server, and the total number of bricks in the
volume is 3.
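For what it's worth, a layout like that would typically be created with
something along these lines (the volume name and brick paths are just
examples):

gluster volume create gv0 replica 3 \
    server1:/data/glusterfs/gv0/brick1 \
    server2:/data/glusterfs/gv0/brick1 \
    server3:/data/glusterfs/gv0/brick1

With three bricks and a replica count of 3 there is only one replica set, so
every file ends up on all three servers.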
Now let's say one server goes bad or down. Now I want to
All:
My understanding is that in Gluster 3.3, a check was added to see if a
directory (or any of its ancestors) is already part of a volume. So far so
good. However, I believe this check may be getting applied inappropriately
in my case.
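If it helps, my understanding is that the check looks for the
trusted.glusterfs.volume-id extended attribute that glusterd sets on a brick
directory when it becomes part of a volume. One way to see whether a directory
(or one of its parents) carries that marker, with the path below only as an
example:

getfattr -d -m . -e hex /data/glusterfs/gv0/brick1

If trusted.glusterfs.volume-id shows up on the directory or an ancestor,
glusterd will treat it as already belonging to a volume.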
Here's the scenario:
1. In a three-node Gluster cluster,