I think I should provide some additional info
To be more explicit, the volumes are replicated volumes created with the
following command:

    gluster volume create $VOL replica 2 \
        dc1strg001x:/zfspool/glusterfs/$VOL/data \
        dc1strg002x:/zfspool/glusterfs/$VOL/data
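For what it's worth, here is a minimal sketch of preparing a brick directory on each node before running that create command. VOL and BRICK_ROOT are placeholders (BRICK_ROOT stands in for the /zfspool mount point and defaults to a temp dir so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch only: prepare a brick directory before "gluster volume create".
VOL=${VOL:-testvol}                     # hypothetical volume name
BRICK_ROOT=${BRICK_ROOT:-$(mktemp -d)}  # stands in for the ZFS mount point

# Use a subdirectory of the mount point (not the mount point itself) as
# the brick, so an unmounted filesystem is distinguishable from an
# empty brick.
brick="$BRICK_ROOT/glusterfs/$VOL/data"
mkdir -p "$brick"
echo "brick path: $brick"
```

The same directory layout would then be created on both dc1strg001x and dc1strg002x before running the create command above.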
I also decided to use "real" file for the
So I have now followed your procedure, and I have the feeling that it worked,
because I find the same number of files on the geo-rep slave node as on the
master node. But there were quite a lot of "Operation not permitted" and "No
such file or directory" warnings in the geo-rep log file on the slave node.
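In case it helps, this is roughly the sanity check I mean by "same number of files" (the mount-point paths are placeholders and default to empty temp dirs so the sketch runs anywhere; matching counts says nothing about file contents):

```shell
#!/bin/sh
# Hypothetical mount points; substitute the real master and slave mounts.
MASTER_MNT=${MASTER_MNT:-$(mktemp -d)}
SLAVE_MNT=${SLAVE_MNT:-$(mktemp -d)}

# Count regular files on each side.
master_count=$(find "$MASTER_MNT" -type f | wc -l)
slave_count=$(find "$SLAVE_MNT" -type f | wc -l)
echo "master=$master_count slave=$slave_count"

if [ "$master_count" -eq "$slave_count" ]; then
    echo "file counts match"
else
    echo "file counts differ"
fi

# The slave-side geo-rep log could then be scanned for the warnings
# mentioned above; the log path is an assumption and varies by session:
#   grep -cE "Operation not permitted|No such file or directory" \
#       /var/log/glusterfs/geo-replication-slaves/<session>.log
```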
Both the client and the server are running Ubuntu 14.04 with GlusterFS
3.7 from the Ubuntu PPA.
I am going to use Gluster to create a simple replicated NFS server. I
was hoping to use the native FUSE client to also get seamless failover,
but am running into performance issues that are going to
On 02/26/2016 01:01 AM, Joe Julian wrote:
>
> On February 25, 2016 8:32:44 PM PST, Kyle Maas
> wrote:
>> On 02/25/2016 08:20 PM, Ravishankar N wrote:
>>> On 02/25/2016 11:36 PM, Kyle Maas wrote:
How can I tell what AFR version a cluster is using for self-heal?