Recently we lost a brick in a 4-node distribute + replica 2 volume. The host
itself was fine, so we simply fixed the hardware failure, recreated the zpool
and ZFS dataset, set the correct trusted.glusterfs.volume-id xattr on the new
brick path, restarted the gluster daemons on the host, and the heal got to
work. The version running is
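For reference, the recovery steps described above would look roughly like the following. This is a sketch, not the poster's exact commands: the pool name (tank), dataset/brick path (/tank/brick1), device names, and volume name (gv0) are all hypothetical placeholders.

```shell
# 1. Recreate the pool and dataset on the replaced hardware
#    (device and pool names are placeholders)
zpool create tank mirror sdb sdc
zfs create tank/brick1

# 2. On a surviving replica node, read the volume-id xattr from a
#    healthy brick; it prints a hex value like 0x1234...
getfattr -n trusted.glusterfs.volume-id -e hex /tank/brick1

# 3. Stamp the new, empty brick directory with that same volume-id
#    so glusterfsd will accept it as the old brick
setfattr -n trusted.glusterfs.volume-id -v 0x<id-from-step-2> /tank/brick1

# 4. Restart the gluster daemons so the brick process comes back up
systemctl restart glusterd

# 5. Kick off a full self-heal to copy the replica's data back
gluster volume heal gv0 full
```

The key detail is step 3: without the matching trusted.glusterfs.volume-id xattr, the brick process refuses to start on the freshly created (empty) directory.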
Just wondering if we can expect 3.6.3 to make it to Launchpad anytime soon?
Thanks,
-t
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Hi, all:
We had a two-node gluster cluster (replicated, 2 replicas) to which we
recently added two more nodes/bricks and then ran a rebalance, making it a
distributed-replicate volume. Since doing so, every NFS access, read or
write, fails with a "Remote I/O error" whenever
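The expansion described above would typically have been done with commands along these lines; the volume name (gv0) and the new brick paths are assumed, not taken from the message.

```shell
# Add two new bricks, keeping replica count 2, so the volume becomes
# distributed-replicate (2 x 2)
gluster volume add-brick gv0 replica 2 \
    node3:/export/brick1 node4:/export/brick1

# Move existing files onto the new bricks according to the new layout
gluster volume rebalance gv0 start

# Watch progress; NFS clients should keep working while this runs
gluster volume rebalance gv0 status
```

If NFS started returning errors only after this sequence, the rebalance/layout change is the natural first suspect, which is presumably where the (truncated) question was heading.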