to reproduce the problem.
Regards,
Michael Ward.

From: Joe Julian [mailto:j...@julianfamily.org]
Sent: Wednesday, 4 January 2017 4:32 PM
To: Ravishankar N <ravishan...@redhat.com>; Michael Ward <michael.w...@melbourneit.com.au>; gluster-users@gluster.org
Subject: Re: [Gluster-users] GFID Mismatch - Automatic Correction ?

Shouldn
On 01/04/2017 09:31 AM, Michael Ward wrote:
Hey,
To give some more context around the initial incident: these systems are hosted in AWS. The gluster heal info split-brain output reports no entries:

>> Brick gluster03.fqdn.com:/export/glus_brick0/brick
>>
>> Status: Connected
>>
>> Number of entries in split-brain: 0
>>
>> Clients show this in the gluster.log:
>>
>> [2017-01-04 03:13:40.863695] W [MSGID: 108008]
>> [afr-self-heal-name.c:354:afr_selfheal_name_gfid_
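
For reference, a GFID mismatch like the one in that log line can be confirmed by hand by comparing the trusted.gfid extended attribute of the affected path on every brick. A minimal sketch, assuming a placeholder volume name "glusvol" and an illustrative file path (neither is taken from the thread); run as root on each server:

    # Re-check split-brain status from any server node:
    gluster volume heal glusvol info split-brain

    # Compare the GFID xattr of the affected file directly on each brick
    # (run on both data nodes and the arbiter):
    getfattr -n trusted.gfid -e hex /export/glus_brick0/brick/path/to/file

    # A mismatch shows up as different trusted.gfid values for the same
    # path on different bricks.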
Thank you very much for your time,
Michael Ward

From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Wednesday, 4 January 2017 12:21 PM
To: Michael Ward <michael.w...@melbourneit.com.au>; gluster-users@gluster.org
Subject: Re: [Gluster-users] GFID Mismatch - Automatic Correction ?

export-glus_brick0-brick.log file.
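
If it helps to dig further, the message ID from the client log can be searched for in both the brick log named above and the client mount log. A rough sketch, assuming the default /var/log/glusterfs layout (the exact paths on these hosts are an assumption):

    # On each server, the brick log is named after the brick path:
    grep -i "gfid mismatch" /var/log/glusterfs/bricks/export-glus_brick0-brick.log

    # On the clients, the same message ID seen in gluster.log:
    grep "MSGID: 108008" /var/log/glusterfs/gluster.log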
On 01/04/2017 06:27 AM, Michael Ward wrote:
Hi,
We have a replicate gluster volume with 2 data nodes plus 1 arbiter node, running gluster 3.8.5. Clients are also using 3.8.5.
One of the data nodes failed the other night, and whilst it was down, several files were replaced on the second data node / arbiter (and thus the filesystem path was
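
As far as I'm aware, gluster 3.8 will not pick a winner for a GFID mismatch on its own; the usual fix is to remove the stale copy (and its .glusterfs hard link) from the brick that holds it and let self-heal replicate the good copy back. A minimal sketch, assuming the node that was down holds the stale file; the volume name, file path, and GFID value are all placeholders:

    # 1. On the brick with the stale copy, record its GFID:
    getfattr -n trusted.gfid -e hex /export/glus_brick0/brick/path/to/file
    # trusted.gfid=0xd0e5a0f4... (example value; the first two pairs of
    # hex digits form the .glusterfs directory prefix used below)

    # 2. Remove the stale file and its .glusterfs hard link:
    rm /export/glus_brick0/brick/path/to/file
    rm /export/glus_brick0/brick/.glusterfs/d0/e5/d0e5a0f4-...

    # 3. Trigger a heal so the good copy is replicated back:
    gluster volume heal glusvol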