Hi all,
I still need help with this. After adding another set of bricks to the
volume, the original problem went away and healing completed.
Now, after an instance was terminated and replaced, the replacement node is
exhibiting the same issue.
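In case it helps, what I've been checking is the heal state reported by the
standard CLI; something along these lines (using my volume name from the
original message below) is what shows the pending entries:

    # list pending self-heal entries per brick
    gluster volume heal marketplace_nfs info

    # list any entries gluster considers to be in split-brain
    gluster volume heal marketplace_nfs info split-brain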
I turned on debug logging for the bricks on the volume.
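For completeness, I enabled it through the volume diagnostics options,
roughly like this (option name is from the diagnostics group; adjust if your
release differs):

    # raise brick-side log verbosity so the brick logs show heal detail
    gluster volume set marketplace_nfs diagnostics.brick-log-level DEBUG

    # revert to the default level once the logs are captured
    gluster volume set marketplace_nfs diagnostics.brick-log-level INFO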
Hi all,
I'm running a 3-node gluster cluster in AWS (m4.xlarge, 4 x st1 500G disks)
and am just starting out.
glusterfs 3.7.11 built on Apr 18 2016 12:57:30 -- servers
glusterfs 3.7.13 built on Jul 8 2016 15:46:50 -- fuse clients
Volume Name: marketplace_nfs
Type: Distributed-Replicate
Volume ID: 26