Sorry if this is a duplicate: I got an error when posting... And yes, I posted 
it on the Gluster Slack first, but I am only using Gluster because the 
marketing on oVirt HCI worked so well...

I first got 3 recycled servers for an oVirt test environment and set them up 
as a 3-node HCI cluster using mostly defaults: replica 2 + arbiter 1, with 
'engine', 'vmstore' and 'data' volumes and a single brick per node for each 
volume. I call these group A.
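
If the exact layout matters: as far as I understand it, each of those 
wizard-created volumes is roughly equivalent to something like this (hostnames 
and brick paths are placeholders, not my real ones):

    # Rough equivalent of one wizard-created replica 2 + arbiter volume;
    # hostA1-3 and the brick paths are made up, the arbiter brick sits on hostA3.
    gluster volume create engine replica 3 arbiter 1 \
        hostA1:/gluster_bricks/engine/engine \
        hostA2:/gluster_bricks/engine/engine \
        hostA3:/gluster_bricks/engine/engine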

Then I got another set of five machines, let's call them group B, with 
somewhat different hardware characteristics from group A, but nicely similar 
to each other. I wanted to add these to the farm as compute nodes, but also to 
use their storage as general GlusterFS storage for wider use.

The group B machines were added as hosts and set up to run the hosted engine, 
but they do not contribute bricks to the standard oVirt volumes 'engine', 
'vmstore' or 'data'. With some Ansible trickery I managed to set up two 
dispersed volumes (4 data + 1 redundancy), 'scratch' and 'tape', on group B, 
mostly for external GlusterFS use. oVirt picked them up automagically, so I 
guess they could also be used for VMs.
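
In case it matters, what my Ansible role boils down to is roughly the 
following per volume (hostnames and brick paths are again placeholders):

    # One dispersed volume over the five group B nodes, 4 data + 1 redundancy;
    # hostB1-5 and the paths are made up.
    gluster volume create scratch disperse-data 4 redundancy 1 \
        hostB1:/gluster_bricks/scratch/scratch \
        hostB2:/gluster_bricks/scratch/scratch \
        hostB3:/gluster_bricks/scratch/scratch \
        hostB4:/gluster_bricks/scratch/scratch \
        hostB5:/gluster_bricks/scratch/scratch
    gluster volume start scratch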

I expect to get more machines, and being able to add them one by one to 
dispersed volumes with a fine balance between capacity and redundancy is what 
made me so enthusiastic about oVirt HCI in the first place...

After some weeks of smooth operation I had to restart one of the group B 
machines for maintenance. When it came back up, glusterd refused to come 
online because it doesn't have "quorum for volumes 'engine', 'vmstore' and 
'data'"...
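
My guess is that the relevant knob is the server-side quorum that the HCI 
deployment enables; is something like this the right place to look on the 
rebooted node?

    # My guess at where to look; I haven't changed any of these from
    # whatever the HCI deployment set them to.
    gluster volume get all cluster.server-quorum-ratio
    gluster volume get engine cluster.server-quorum-type
    gluster volume get scratch cluster.server-quorum-type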

That it doesn't *have* quorum is no surprise; the bigger surprise is that it 
*asks* for quorum on volumes to which it contributes no bricks. What's worse 
is that it then refuses to serve its bricks for 'scratch' and 'tape', which 
are now drifting apart with no chance of healing.
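
I assume that once the bricks are serving again I can check and trigger 
healing with something like the commands below, but that obviously doesn't 
help while glusterd refuses to start the brick processes at all:

    # What I expect to run once glusterd brings the bricks back up.
    gluster volume heal scratch info
    gluster volume heal tape info
    gluster volume heal scratch
    gluster volume heal tape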

How do I fix this?

Is this a bug (my interpretation), or do I fundamentally misunderstand how 
Gluster, as a hyper-scale-out file system, is supposed to work with 
potentially thousands of hosts each contributing dozens of bricks to each of 
hundreds of volumes in a single namespace?