On 11/28/2015 12:06 AM, Tom Pepper wrote:
Recently, we lost a brick in a 4-node distribute + replica 2 volume. The host
itself was fine, so we simply fixed the hardware failure, recreated the zpool
and zfs filesystem, set the correct trusted.glusterfs.volume-id xattr,
restarted the gluster daemons on the host, and the heal got to work. The
version running is 3
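For reference, the restore steps described above can be sketched roughly as
follows. This is a hedged illustration, not the poster's exact commands: the
brick path /data/brick1 and volume name myvol are hypothetical placeholders,
and the xattr is read from a surviving copy of the volume info file.

```shell
# Hypothetical brick path and volume name -- substitute your own.
BRICK=/data/brick1
VOL=myvol

# Recover the volume-id from glusterd's info file for this volume,
# strip the dashes, and set it as the trusted.glusterfs.volume-id
# xattr on the freshly recreated brick directory.
VOLID=$(grep volume-id "/var/lib/glusterd/vols/$VOL/info" \
        | cut -d= -f2 | tr -d -)
setfattr -n trusted.glusterfs.volume-id -v "0x$VOLID" "$BRICK"

# Restart the gluster daemons so the brick is picked up again,
# then kick off a full self-heal across the volume.
systemctl restart glusterd
gluster volume heal "$VOL" full
```

These commands assume a running glusterd on the node and an intact
/var/lib/glusterd tree; on older init systems, `service glusterd restart`
takes the place of the systemctl call.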