As I mentioned in Slack, the safest approach is to:
1. Reduce the volume to replica 1 (there is no need to keep the arbiter until resynchronization).
gluster volume remove-brick VOLUME replica 1
beclovkvma02.bec.net:/data/brick2/brick2
beclovkvma03.bec.net:/data/brick1/brick2
beclovkvma02.
I wanted to remove beclovkvma02.bec.net as the node was dead. Now I have reinstalled this node and am trying to add it as a 4th node, beclovkvma04.bec.net; however, since the system UUID is the same, I'm not able to add the node in oVirt Gluster.
Hi Strahil Nikolov
Volume Name: datastore1
Type: Distributed-Replicate
Volume ID: bc362259-14d4-4357-96bd-8db6492dc788
Status: Started
Snapshot Count: 0
Number of Bricks: 7 x (2 + 1) = 21
Transport-type: tcp
Bricks:
Brick1: beclovkvma01.bec.net:/data/brick2/brick2
Brick2: beclovkvma02.bec.net:/da
Please provide 'gluster volume info datastore1' and specify which bricks you
want to remove.
Best Regards,
Strahil Nikolov
On Thu, Nov 11, 2021 at 6:13, dhanaraj.ramesh--- via Users
wrote: Hi Strahil Nikolov
Thank you for the suggestion but it does not help...
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1
beclovkvma02.bec.net:/data/brick2/brick2
beclovkvma02.bec.net:/data/brick3/brick3
beclovkvma02.bec.net:/data/brick4/brick4
beclovkvma02
You have to specify the volume type. When you remove 1 brick from a replica 3 volume, you are actually converting it to replica 2. As you have 2 data bricks + 1 arbiter, just remove the arbiter brick and the missing node's brick:
gluster volume remove-brick VOL replica 1 node2:/brick node3:/b
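The truncated one-liner above can be sketched in full. This is only an illustrative pattern, not the poster's exact command: the hostnames node2/node3 and the brick paths are hypothetical placeholders, and the command is echoed for review rather than executed against a live cluster.

```shell
# Sketch only: with one replica 2+1 subvolume, removing the missing
# node's data brick and the arbiter brick converts it to replica 1.
# "node2", "node3" and the brick paths are placeholders, not this
# cluster's real layout.
CMD="gluster volume remove-brick VOL replica 1 node2:/gluster_bricks/data/brick node3:/gluster_bricks/arbiter/brick force"
# Print the command for review instead of running it.
echo "$CMD"
```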
The volume is configured as a Distributed-Replicate volume (7 x (2 + 1) = 21 bricks); when I try from the GUI I get the error below.
Error while executing action Remove Gluster Volume Bricks: Volume remove brick
force failed: rc=-1 out=() err=['Remove arbiter brick(s) only when converting
from arbiter to replica 2
When I try to remove all of node 2's bricks, I get the error below:
volume remove-brick commit force: failed: Bricks not from same subvol for
replica
When I try to remove just one of node 2's bricks, I get the error below:
volume remove-brick commit force: failed: Remove brick incorrect brick count of
1 for r
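Both errors follow from the 7 x (2 + 1) layout in the volume info above: a replica-count change must name exactly one brick from each of the seven subvolumes in a single command, so neither a partial set nor a single brick is accepted. A hedged sketch (hostname and brick paths are placeholders, not the real layout) that builds such a seven-brick list and prints the resulting command:

```shell
# Sketch: removing the arbiter from every subvolume of a 7 x (2 + 1)
# volume means passing 7 bricks (one per subvolume) at once, together
# with the new replica count "replica 2".
VOL=datastore1
BRICKS=""
for i in 1 2 3 4 5 6 7; do
    # Placeholder brick path for subvolume $i.
    BRICKS="$BRICKS arbiterhost:/data/arb$i/arb$i"
done
# Print the command for review instead of running it.
echo gluster volume remove-brick "$VOL" replica 2 $BRICKS force
```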
In order to remove a dead host you will need to:
- Remove all bricks (originating from that host) in all volumes of the TSP:
gluster volume remove-brick engine host3:/gluster_bricks/brick1/ force
- Remove the host from the TSP:
gluster peer detach host3
- Next remove the host from oVirt. In some case
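The steps above can be sketched as a short script. Host host3, the engine volume, and the brick path are the examples from the message itself; the commands are echoed rather than executed so they can be reviewed first:

```shell
# Step 1: remove the dead host's bricks from every volume in the
# trusted storage pool (repeat per volume and brick as needed).
STEP1="gluster volume remove-brick engine host3:/gluster_bricks/brick1/ force"
# Step 2: detach the dead host from the trusted storage pool.
STEP2="gluster peer detach host3"
echo "$STEP1"
echo "$STEP2"
# Step 3 is performed in the oVirt administration UI, not on the CLI.
```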
Hello
There is an Ansible playbook [2] for replacing a failed host in a Gluster-enabled cluster; do check it out [1] and see if that would work for you.
[1]
https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L57
[2]https://github.com/gluster/gluster-