[Gluster-users] Directory ctime/mtime not synced on node being healed

2015-11-27 Thread Tom Pepper
Recently, we lost a brick in a 4-node distribute + replica 2 volume. The host itself was fine, so we simply fixed the hardware failure, recreated the zpool and zfs, set the correct trusted.glusterfs.volume-id xattr on the new brick, restarted the Gluster daemons on the host, and the heal got to work. The version running is
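
For anyone following the same recovery path, the steps above correspond roughly to the commands below (a sketch only; the pool, brick path, and volume names are placeholders, and the volume-id value must be copied from a surviving replica):

    # Recreate the pool and dataset on the replacement disk (names are placeholders)
    zpool create tank /dev/sdb
    zfs create tank/brick1

    # Read the volume ID off a healthy brick on another node, then stamp it
    # onto the new one so glusterd will accept the empty directory as a brick
    getfattr -n trusted.glusterfs.volume-id -e hex /path/to/healthy/brick
    setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id-hex> /tank/brick1

    # Restart the daemons and trigger a full self-heal
    service glusterfs-server restart
    gluster volume heal <volname> full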

[Gluster-users] 3.6.3 Ubuntu PPA

2015-05-23 Thread Tom Pepper
Just wondering if we can expect 3.6.3 to make it to Launchpad anytime soon? Thanks, -t

[Gluster-users] NFS I/O errors after replicated - distributed+replicate add-brick

2015-02-26 Thread Tom Pepper
Hi, all: We had a two-node Gluster cluster (replicated, 2 replicas) to which we recently added two more nodes/bricks and then ran a rebalance, making it a distributed-replicate volume. Since doing so, any NFS access, read or write, returns a “Remote I/O error” whenever
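
For context, the expansion described above would have been done with something like the following (hostnames, brick paths, and the volume name are placeholders):

    # Grow the 1x2 volume to 2x2: add one new replica pair
    gluster volume add-brick <volname> replica 2 node3:/data/brick node4:/data/brick

    # Redistribute existing files across the new distribute subvolume
    gluster volume rebalance <volname> start
    gluster volume rebalance <volname> status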