2017-07-19 11:22 GMT+02:00 yayo (j) <jag...@gmail.com>:
> running the "gluster volume heal engine" don't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster from: 2 (full repliacated) + 1
> arbiter to 3 full replicated cluster bu
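
For anyone following along, the heal state can be inspected before and after running that command. A minimal sketch, using the "engine" volume name from this thread:

  # entries still pending heal, listed per brick
  gluster volume heal engine info
  # entries in split-brain, if any
  gluster volume heal engine info split-brain

If "heal info" keeps listing the same gfids across runs, the heal is not actually completing.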
2017-07-25 11:31 GMT+02:00 Sahina Bose:
>
>> Other errors on unsynced gluster elements still remain... This is a
>> production env, so is there any chance to subscribe to RH support?
>>
>
> The unsynced entries - did you check for disconnect messages in the mount
> log as well?
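
A quick way to check for those disconnects is to grep the fuse mount log on the hypervisors. A sketch, assuming the usual oVirt log location; the exact file name depends on your mount point:

  # connection drops in the storage domain's mount log
  grep -iE "disconnected|connection refused" \
      /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log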
2017-07-25 7:42 GMT+02:00 Kasturi Narra:
> These errors are because glusternw is not assigned to the correct
> interface. Once you attach that, these errors should go away. This has
> nothing to do with the problem you are seeing.
>
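
One way to verify which network the bricks are actually registered on, sketched with the names used in this thread (gdnode01-03 stand in for the real gdnode* hosts):

  # the brick endpoints as gluster knows them
  gluster volume info engine | grep -i brick
  # what the dedicated gluster names resolve to, on each node
  getent hosts gdnode01 gdnode02 gdnode03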
Hi,
Are you talking about errors like…
>
>> All these IPs are pingable and the hosts are resolvable across all 3
>> nodes, but only the 10.10.10.0 network is the dedicated network for
>> gluster (resolved using the gdnode* host names)... Do you think that
>> removing the other entries can fix the problem? If so, sorry, but how can
>> I remove the other entries?
>>
>
> Thanks
> kasturi
>
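
On the question of which entries can safely be removed, a reasonable first step is to check how the peers are currently addressed. A sketch:

  # lists each peer's primary hostname plus any "Other names" it is known by
  gluster peer status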
> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N <ravishan...@redhat.com>
> wrote:
>
>>
>> On 07/21/2017 11:41 PM, yayo (j) wrote:
>>
>> Hi,
>>
>> Sorry to follow up again, but checking the oVirt interface…
…: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
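
To double-check options like these on a running volume, something along these lines should work ("engine" again being the volume from this thread):

  # all options explicitly set on the volume
  gluster volume info engine
  # a single option, including its default value if unset
  gluster volume get engine network.ping-timeout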
2017-07-20 14:48 GMT+02:00 Ravishankar N:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is that there is an intermittent connection problem…
>
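
The getfattr output referred to above is the usual way to read the AFR heal metadata directly from the bricks. A sketch, with a hypothetical brick path:

  # dump all extended attributes of a file as stored on a brick
  getfattr -d -m . -e hex /gluster_bricks/engine/brick/<path-to-file>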
2017-07-20 11:34 GMT+02:00 Ravishankar N:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`.
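
To see whether the shd is connected to all three bricks, grepping its log is usually enough. A sketch; the exact message wording varies between gluster releases:

  # connect/disconnect events for each brick, as seen by the self-heal daemon
  grep -iE "connected to|disconnected from" \
      /var/log/glusterfs/glustershd.log | tail -n 20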
Hi,
Thank you for the answer and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N:
> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files?
>
No, glustershd.log is clean; there are no extra log entries after running the command.
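
For reference, a simple way to correlate the two is to watch the log while triggering the heal. A sketch:

  # terminal 1: follow the self-heal daemon log
  tail -f /var/log/glusterfs/glustershd.log
  # terminal 2: trigger an index heal on the volume
  gluster volume heal engine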