As I mentioned in Slack, the safest approach is to:
1. Reduce the volume to replica 1 (there is no need to keep the arbiter until resynchronization):
gluster volume remove-brick VOLUME replica 1  
beclovkvma02.bec.net:/data/brick2/brick2   
beclovkvma03.bec.net:/data/brick1/brick2 
beclovkvma02.bec.net:/data/brick3/brick3 
beclovkvma03.bec.net:/data/brick1/brick3 
beclovkvma02.bec.net:/data/brick4/brick4 
beclovkvma03.bec.net:/data/brick1/brick4 
beclovkvma02.bec.net:/data/brick5/brick5 
beclovkvma03.bec.net:/data/brick1/brick5 
beclovkvma02.bec.net:/data/brick6/brick6 
beclovkvma03.bec.net:/data/brick1/brick6 
beclovkvma02.bec.net:/data/brick7/brick7 
beclovkvma03.bec.net:/data/brick1/brick7 
beclovkvma02.bec.net:/data/brick8/brick8 
beclovkvma03.bec.net:/data/brick1/brick8 force
Note: I might have missed a brick, so verify that you are selecting all bricks 
for the arbiter and beclovkvma02
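For example, you can pull the exact brick list from the volume itself (VOLUME is a placeholder for your volume name) and cross-check it against the remove-brick command:
gluster volume info VOLUME | grep -E '^Brick[0-9]+:'
Every beclovkvma02 data brick and every beclovkvma03 arbiter brick should appear in the remove-brick list.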
2. Remove the broken node: gluster peer detach beclovkvma02.bec.net force
3. Add the freshly installed host: gluster peer probe beclovkvma04.bec.net
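You can verify both operations with 'gluster peer status' on one of the remaining nodes: beclovkvma02.bec.net should be gone, and beclovkvma04.bec.net should show up as 'Peer in Cluster (Connected)' before you continue.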
4. Unmount all bricks on the arbiter. Then reformat them:
mkfs.xfs -f -i size=512 /path/to/each/arbiter/brick/LV
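A minimal sketch (the VG/LV path is a placeholder - use findmnt or lsblk -f to map each brick mount point to its LV):
findmnt -t xfs                          # identify the arbiter brick mounts
umount /data/brick1                     # brick mount point taken from the list above
mkfs.xfs -f -i size=512 /dev/VG/LV      # the LV backing that mount point
Repeat for every brick filesystem the arbiter was using for the removed bricks.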
5. Check if fstab is using UUIDs and, if yes, update the entries with the /dev/VG/LV paths or with the new UUIDs (blkid should help)
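The reformat changes the filesystem UUID, so an old 'UUID=...' line will no longer mount. A rough example of the updated entry (mount point and VG/LV names are placeholders):
blkid /dev/VG/LV                              # shows the new UUID if you prefer to keep UUIDs in fstab
/dev/VG/LV  /data/brick1  xfs  defaults  0 0  # or switch the fstab entry to the stable device path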
6. Mount all bricks on the arbiter - no errors should be reported: mount -a
7. Unmount, reformat, and remount all bricks on beclovkvma04.bec.net. Don't forget to check the fstab. 'mount -a' is your first friend.
8. Re-add the bricks to the volume. Order is important (first 04, then the arbiter, then 04, then the arbiter, and so on):
gluster volume add-brick VOLUME replica 3 arbiter 1 
beclovkvma04.bec.net:/data/brick2/brick2   
beclovkvma03.bec.net:/data/brick1/brick2 
beclovkvma04.bec.net:/data/brick3/brick3 
beclovkvma03.bec.net:/data/brick1/brick3 
beclovkvma04.bec.net:/data/brick4/brick4 
beclovkvma03.bec.net:/data/brick1/brick4 
beclovkvma04.bec.net:/data/brick5/brick5 
beclovkvma03.bec.net:/data/brick1/brick5 
beclovkvma04.bec.net:/data/brick6/brick6 
beclovkvma03.bec.net:/data/brick1/brick6 
beclovkvma04.bec.net:/data/brick7/brick7 
beclovkvma03.bec.net:/data/brick1/brick7 
beclovkvma04.bec.net:/data/brick8/brick8 
beclovkvma03.bec.net:/data/brick1/brick8
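Before healing, it is worth confirming that all re-added bricks actually came up:
gluster volume status VOLUME
Every brick - the new 04 bricks and the arbiter bricks in particular - should show 'Y' in the Online column.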
9. Trigger the full heal: gluster volume heal VOLUME full
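You can follow the healing progress with (on reasonably recent Gluster versions):
gluster volume heal VOLUME info summary
or 'gluster volume heal VOLUME info' for the per-entry view, until the pending counts reach 0.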
10. If your bricks are highly performant and you need to speed up the healing, you can increase these volume settings:
- cluster.shd-max-threads
- cluster.shd-wait-qlength
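For example (the values are only illustrative - tune them to your hardware):
gluster volume set VOLUME cluster.shd-max-threads 4
gluster volume set VOLUME cluster.shd-wait-qlength 10000
Current values can be checked with 'gluster volume get VOLUME all | grep shd'.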

Best Regards,
Strahil Nikolov
 
On Fri, Nov 12, 2021 at 8:21, dhanaraj.ramesh--- via Users <users@ovirt.org> wrote:
I wanted to remove beclovkvma02.bec.net as the node was dead. Now I have reinstalled this node and am trying to add it as a 4th node - beclovkvma04.bec.net - however, since the system UUID is the same, I'm not able to add the node in oVirt Gluster.