Just move it away (to be on the safe side) and trigger a full heal.
Best Regards,
Strahil Nikolov
On Wednesday, March 10, 2021 at 13:01:21 GMT+2, Maria Souvalioti wrote:
Should I delete the file and restart glusterd on the ov-no1 server?
Thank you very much
It seems that the affected file can be moved away on ov-no1.ariadne-t.local, as
the other two bricks "blame" the entry on ov-no1.ariadne-t.local.
After that, you will need to run "gluster volume heal engine full" to
trigger the heal.
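Concretely, the move-aside-then-heal procedure could look like the sketch below. This is a sketch only: the function name, the quarantine directory, and the relative path are placeholders (the real path was elided in this thread), and per the Gluster docs the matching gfid hard link under the brick's .glusterfs/ directory may also need removing.

```shell
# Sketch: quarantine the bad copy on the brick, then trigger a full heal.
# All names here are placeholders, not taken verbatim from this thread.
quarantine_and_heal() {
    vol=$1; brick=$2; relpath=$3; dest=$4
    mkdir -p "$dest"
    # Keep the bad copy as a backup instead of deleting it outright.
    mv "$brick/$relpath" "$dest/"
    # NB: the gfid hard link under "$brick/.glusterfs/" should also be
    # removed for a clean heal (see the Gluster split-brain docs).
    gluster volume heal "$vol" full
    gluster volume heal "$vol" info
}

# Example call (placeholder relative path):
# quarantine_and_heal engine /gluster_bricks/engine/engine \
#     "images/<image-id>/<leaf-file>" /root/quarantined
```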
Best Regards,
Strahil Nikolov
On Wednesday, March 10, 2021 at 12:58:10
Should I delete the file and restart glusterd on the ov-no1 server?
Thank you very much
On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
> It seems to me that ov-no1 didn't update the file properly.
>
> What was the output of the gluster volume heal command ?
>
> Best Regards,
> Strahil Nikolov
The "gluster volume heal engine" command didn't output anything in the CLI.
The "gluster volume heal engine info" command gives:
# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/
It seems to me that ov-no1 didn't update the file properly.
What was the output of the gluster volume heal command ?
Best Regards,
Strahil Nikolov
The output of the getfattr command on the nodes was the following:
Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
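For reference, each trusted.afr.* value printed by getfattr packs three big-endian 32-bit counters (pending data, metadata, and entry operations), so a non-zero first group means pending data heals against that brick. A small decoder, as a sketch (the layout follows the Gluster split-brain documentation; decode_afr is a made-up helper name):

```shell
# Sketch: split a trusted.afr.<vol>-client-N hex value into its three
# big-endian 32-bit counters (data / metadata / entry pending ops).
decode_afr() {
    hex=${1#0x}
    data=$(printf '%d' "0x$(printf '%s' "$hex" | cut -c1-8)")
    meta=$(printf '%d' "0x$(printf '%s' "$hex" | cut -c9-16)")
    entry=$(printf '%d' "0x$(printf '%s' "$hex" | cut -c17-24)")
    echo "data=$data metadata=$meta entry=$entry"
}

decode_afr 0x000000010000000000000000   # data=1 metadata=0 entry=0
```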
Sorry, I ran the getfattr command incorrectly.
I ran it again as
getfattr -d -m . -e hex
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
on each node, and I got different results for the following attributes:
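To compare the attributes side by side, the same getfattr call can be collected from every node in one loop. A sketch, assuming passwordless root ssh between the nodes; collect_xattrs is a made-up name, and the host list is whatever your cluster actually uses:

```shell
# Sketch: run the same getfattr on each node and label the output,
# so differing trusted.afr.* values stand out when diffed.
collect_xattrs() {
    path=$1; shift
    for host in "$@"; do
        echo "== $host =="
        ssh "root@$host" getfattr -d -m . -e hex "$path"
    done
}

# collect_xattrs /gluster_bricks/engine/engine/<relative-path> \
#     ov-no1.ariadne-t.local ov-no2.ariadne-t.local
```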
The output of the command 'getfattr -d -m . -e hex file' seems quite weird. Is
it the same on all nodes?
Best Regards,
Strahil Nikolov
On Tue, Mar 9, 2021 at 15:36, Maria Souvalioti
wrote:
The command getfattr -n replica.split-brain-status gives the
following:
[root@ov-no1 ~]# getfattr -n replica.split-brain-status
/rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local\:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-
Also check the status of the file on each brick with the getfattr command ( see
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ ) and
provide the output.
Best Regards,
Strahil Nikolov
Thank you for your reply.
I'm trying that right now and I see it triggered the self-healing process.
I will come back with an update.
Best regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement
Thank you.
I have tried that, and it didn't work, as the system sees that the file is not in
split-brain.
I have also tried force heal and full heal, and still nothing. I always end up
with the entry being stuck in an unsynced state.
If it's a VM image, just use dd to read the whole file:
dd if=VM_image of=/dev/null bs=10M status=progress
Best Regards,
Strahil Nikolov
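Looping that dd over every file on the mounted volume would force a client-side read of each image, which makes the client notice and heal each file it touches. A sketch (read_all_images is a made-up name, and the mount path in the comment is only illustrative):

```shell
# Sketch: read every file under the Gluster mount so the client
# triggers self-heal on each one it touches.
read_all_images() {
    mnt=$1
    find "$mnt" -type f | while IFS= read -r img; do
        # Read-only pass; nothing is written anywhere but /dev/null.
        dd if="$img" of=/dev/null bs=10M status=progress
    done
}

# read_all_images "/rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local:_engine"
```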
On Fri, Mar 5, 2021 at 15:48, Alex K wrote:
On Thu, Mar 4, 2021 at 8:59 PM wrote:
Hello again,
I've tried to heal the brick with latest-mtime, but I get the following:
gluster volume heal engine split-brain latest-mtime
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
Healing
/80f6e393-9718-4738-a14a-64cf4
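Besides latest-mtime, the CLI exposes bigger-file and source-brick policies for per-file split-brain resolution (per the Gluster split-brain documentation). A wrapper sketch for reference; resolve_split_brain is a made-up name, and volume/paths are placeholders:

```shell
# Sketch: the per-file split-brain resolution policies of the gluster CLI.
resolve_split_brain() {
    vol=$1; shift
    # Remaining arguments are one of:
    #   latest-mtime  <file>
    #   bigger-file   <file>
    #   source-brick  <host:/brick/path> <file>
    gluster volume heal "$vol" split-brain "$@"
}

# resolve_split_brain engine latest-mtime /<relative-file-path>
# resolve_split_brain engine source-brick \
#     ov-no2.ariadne-t.local:/gluster_bricks/engine/engine /<relative-file-path>
```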
I tried only the simple heal because I wasn't sure if I'd mess up the Gluster
volume more than it already is.
I will try latest-mtime in a couple of hours, because this is a production
system and I have to do it after office hours. I will come back with an update.
Thank you very much for your help
On Wed, Mar 3, 2021, 19:13 wrote:
Hello,
Thank you very much for your reply.
I get the following from the below gluster commands:
[root@ov-no1 ~]# gluster volume heal engine info split-brain
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number
On Mon, Mar 1, 2021, 15:20 wrote:
+Gobinda Das , +Satheesaran Sundaramoorthi
maybe you can help here
On Mon, Mar 1, 2021 at 14:20, wrote:
Hello again,
I am back with a brief description of the situation I am in, and questions
about the recovery.
oVirt environment: 4.3.5.2 Hyperconverged
GlusterFS: Replica 2 + Arbiter 1
GlusterFS volumes: data, engine, vmstore
The current situation is the following:
- The Cluster is in Global Maintenance mode