On 5/28/2017 9:24 PM, Ravishankar N wrote:
I think you should try to find out whether there were self-heals pending
to gluster1 before you brought gluster2 down; otherwise the VMs should
not have paused.
yes, if I watch for and then force outstanding heals (if the self-heal
hasn't kicked in) prior to shutting down gluster2, the VMs shouldn't pause.
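For reference, this is roughly the check-and-force sequence I mean (the
volume name gv0 is just a placeholder for whatever the volume is called):

    # list files/shards that still need healing, per brick
    gluster volume heal gv0 info

    # kick off an index heal of everything pending
    gluster volume heal gv0

    # or force a full heal if the index looks incomplete
    gluster volume heal gv0 full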
On 05/29/2017 10:45 AM, wk wrote:
OK, can I assume SOME pause is expected when Gluster first sees
gluster2 go down, and that it unpauses after a timeout period? I have
seen that behaviour as well.
Yes, when you power off/shutdown/reboot a node, the mount hangs for a
bit because it doesn't receive the disconnect notification from that
node; I/O resumes once network.ping-timeout (42 seconds by default)
expires.
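The knob controlling that window is network.ping-timeout. Something like
this shows it and, if desired, shortens it (gv0 again a placeholder
volume name; a shorter timeout means more spurious disconnects on a
flaky network):

    # show the current client ping timeout for the volume
    gluster volume get gv0 network.ping-timeout

    # shorten the hang window, trading off false disconnects
    gluster volume set gv0 network.ping-timeout 10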
On 5/28/2017 9:24 PM, Ravishankar N wrote:
Just to elaborate further: if all nodes were up to begin with and
there were zero self-heals pending, and you brought down only
gluster2, writes must still be allowed. I guess in your case there
must be some pending heals from gluster2 to gluster1.
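The reason writes stay allowed is client quorum: on a replica 3 /
arbiter volume, cluster.quorum-type is auto, so writes go through as
long as 2 of the 3 bricks are reachable. A quick way to confirm both
the quorum setting and where pending heals are sitting (gv0 is a
placeholder name):

    # confirm client quorum is 'auto' (2-of-3 bricks required)
    gluster volume get gv0 cluster.quorum-type

    # per-brick count of entries still needing heal
    gluster volume heal gv0 statistics heal-count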
On 05/29/2017 03:36 AM, W Kern wrote:
So I have a testbed composed of a simple 2+1 replica 3 with arbiter setup:
gluster1, gluster2 and gluster-arb (with shards).
My testing involves some libvirt VMs running continuous write fops on a
localhost FUSE mount on gluster1.
Works great when all the pieces are there. Once I figured out the
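For anyone wanting to reproduce this layout, a replica 3 arbiter 1
volume with sharding can be created along these lines (the volume name
and brick paths are illustrative, not the ones from this testbed):

    gluster volume create gv0 replica 3 arbiter 1 \
        gluster1:/bricks/brick0 \
        gluster2:/bricks/brick0 \
        gluster-arb:/bricks/brick0
    gluster volume set gv0 features.shard on
    gluster volume start gv0

    # fuse-mount locally on gluster1 for the VM images
    mount -t glusterfs localhost:/gv0 /mnt/gv0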