My understanding is that in an HCI environment the storage nodes should be 
rather static, but the pure compute nodes can be much more dynamic or 
opportunistic: ideally they could even be switched off and restarted as part 
of oVirt's resource optimization.

The 'pure compute' nodes fall into two major categories: a) those that can 
run the management engine, and b) those that can't.

What I don't quite understand is why both types seem to be made 'gluster 
peers', when they don't contribute storage bricks: shouldn't they just be able 
to mount Gluster volumes?
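
For comparison, what I would have expected a pure compute node to need is just 
a FUSE mount of the volumes, roughly like this; 'storage1' and 'storage2' are 
placeholder names for my storage nodes and /mnt/engine is just an example 
mount point, not what oVirt actually uses:

    mount -t glusterfs -o backup-volfile-servers=storage2 storage1:/engine /mnt/engine

As far as I know that works without the client ever joining the trusted pool, 
which is why the peering surprises me.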

The reason for my concern is that I actually want to manage these compute 
hosts with much more freedom: I may have them join and take on workloads, or 
not, and I may want to shut them down at will.

To my irritation I sometimes even see these 'missing' hosts being considered 
for quorum decisions, or being listed when I do e.g. a 'gluster volume status 
engine': I find hosts there that definitely are not contributing bricks to 
'engine' (or any other volume).
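
In case it matters, this is roughly how I've been checking which hosts Gluster 
knows about versus which ones actually hold bricks (volume name 'engine' as an 
example):

    gluster pool list              # every peer in the trusted pool
    gluster peer status            # same peers, with connection state
    gluster volume info engine     # the bricks that back 'engine'
    gluster volume status engine   # and their runtime status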

I'm also not sure I get consistent behavior when I remove hosts from oVirt: 
I'd swear that quite a few remain Gluster peers, even though they are 
completely invisible in the oVirt GUI (while hosted-engine --vm-status will 
still list them).
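
If it is just leftover peering, I assume I could clean it up by hand with 
something like the following ('compute5' being a placeholder for such a 
removed host), but I'd rather understand whether oVirt is supposed to do this 
itself on host removal:

    gluster peer status            # look for peers that no longer exist in oVirt
    gluster peer detach compute5   # drop the stale peer (refused if it still holds bricks)

and presumably 'hosted-engine --clean-metadata' for the stale --vm-status 
entries, though I'm not sure that's the intended use.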

So what's the theory here, and would it be a bug if removed hosts remain 
Gluster peers?
