This is an effect that had me puzzled for a long time, too: to my 
understanding, the gluster volume commands should only ever show peers that 
contribute bricks to a volume, not peers in general.
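
For reference, this is how the two views differ on the CLI (the volume name 
"engine" is just an example):

  # Every peer in the trusted storage pool, bricks or not:
  gluster pool list
  # Only the hosts actually contributing bricks to one volume:
  gluster volume info engine | grep '^Brick'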

Now perhaps an exception needs to be made for hosts that have been enabled to 
run the Management Engine (Hosted Engine), as those might require insight into 
Gluster.

But I have also seen this with hosts that I only added temporarily to oVirt as 
compute nodes, and I believe I have even seen really ill effects:

Given G1+G2+G3 holding the bricks (replica 3, or replica 2 plus arbiter) for 
the typical oVirt volumes engine/data/vmstore, I had added C1+C2+C3 as mere 
compute hosts.
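
For anyone trying to reproduce this, these are the quorum-related settings I 
would check first (again with "engine" as an example volume; whether oVirt 
sets server quorum on its volumes by default is something I'd want confirmed):

  # Per-volume server-side quorum enforcement (oVirt HCI setups
  # enable this, if I remember correctly):
  gluster volume get engine cluster.server-quorum-type
  # Pool-wide quorum ratio, a global option (default: >50%):
  gluster volume get all cluster.server-quorum-ratio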

When I then rebooted G3 while C1+C2 were also down, all of a sudden G1+G2 
would shut down their bricks (and refuse to bring them back up) for lack of 
quorum... I had to bring C1+C2 back up to regain quorum, and then delete C1+C2 
from oVirt to take them out of the quorum for a volume to which they 
contributed no bricks.
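
My working theory, pending confirmation from the developers: server quorum 
seems to be computed over all peers in the trusted pool, not per volume. The 
arithmetic would then match exactly what I saw:

  Peers in pool:  G1 G2 G3 C1 C2 C3  -> 6
  Peers still up: G1 G2 C3           -> 3
  # 3 of 6 is exactly 50%, which does not exceed the default
  # cluster.server-quorum-ratio of >50%, so glusterd stops the
  # bricks on the surviving nodes.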

And then often enough, I had to actually detach them as peers via the gluster 
CLI, because the GUI didn't finish the job.

Now of course, that only works when G1+G2+G3 are actually all up, too, because 
otherwise peers can't be detached...
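
For the record, the manual cleanup looked roughly like this (host names are 
placeholders, run from one of the G nodes):

  # Verify what glusterd still considers part of the pool:
  gluster peer status
  # Detach the compute-only nodes; this only works while the
  # remaining peers are reachable:
  gluster peer detach C1
  gluster peer detach C2
  # If a node is already gone for good, detaching must be forced:
  gluster peer detach C2 force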

I posted a query on this issue just yesterday: hopefully someone from the 
development team will shed some light on the logic, so we can test better and 
potentially open an issue to get it fixed.