So also on my engine storage domain. Shouldn't we see the mount options in
mount -l output? It appears fault tolerance worked (sort of - see more
below) during my test
[root@appovirtp01 ~]# grep -i mnt_options
/etc/ovirt-hosted-engine/hosted-engine.conf
mnt_options=backup-volfile-servers=n2:n3
[ro
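If I understand the mount.glusterfs helper right, backup-volfile-servers is
consumed at mount time and passed to the glusterfs client process as extra
--volfile-server arguments, so it would never show up in mount -l. A way to
check instead (assuming that pass-through behaviour; host names as in my
config above):

  ps ax | grep '[g]lusterfs'
  # look for one --volfile-server=... argument per server, e.g.
  # --volfile-server=n2 --volfile-server=n3 alongside the primary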
Hey All -
So I haven't tested this yet, but what I do know is that I did set up the
backupvol option when I added the data gluster volume; however, the mount
options in mount -l output do not show it as being used
n1:/data on /rhev/data-center/mnt/glusterSD/n1:_data type fuse.glusterfs
(rw,relatime,user_id=0,group
I had the very same impression. It doesn't look like it works, then.
So for a fully redundant setup where you can lose a complete host, you must
have at least 3 nodes then?
Fernando
On 01/09/2017 12:53, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to full replica, then? I was
Hi charles,
The right option is backup-volfile-servers, not 'backupvolfile-server'.
So can you please use the first one and test?
Thanks
kasturi
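For example, a manual mount with the correct option would look something
like this (replace the host names and paths with yours):

  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/data /mnt/data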
On Sat, Sep 2, 2017 at 5:23 AM, Charles Kozler wrote:
> Jim -
>
> result of this test...engine crashed but all VM's on the gluster domain
Hi Jim,
I looked at the gluster volume info and that looks fine to me. The
recommended config is arbiter for data and vmstore, and for engine it should
be replica 3, since we would want HE to be available always.
If I understand right, the problem you are facing is when you shut down one
o
Jim -
result of this test...engine crashed but all VM's on the gluster domain
(backed by the same physical nodes/hardware/gluster process/etc) stayed up
fine
I guess there is some functional difference between 'backupvolfile-server'
and 'backup-volfile-servers'?
Perhaps try the latter and see what happens
Jim -
One thing I noticed is that, by accident, I used
'backupvolfile-server=node2:node3', which is apparently a supported setting.
It would appear, by reading the man page of mount.glusterfs, that the syntax
is slightly different. Not sure if my setting being different has different
impacts
hosted-eng
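If I'm reading the mount.glusterfs man page right, the older spelling takes
a single fallback server while the newer one takes a colon-separated list,
roughly (host names illustrative):

  # older spelling, one fallback server only:
  mount -t glusterfs -o backupvolfile-server=node2 node1:/data /mnt/data
  # newer spelling, colon-separated list of fallbacks:
  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/data /mnt/data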
Jim -
here is my test:
- All VM's on node2: hosted engine and 1 test VM
- Test VM on gluster storage domain (with mount options set)
- hosted engine is on gluster as well, with settings persisted to
hosted-engine.conf for backupvol
All VM's stayed up. Nothing in dmesg of the test vm indicating a
I can confirm that I did set it up manually, and I did specify backupvol,
and in the "manage domain" storage settings, I do have under mount
options: backup-volfile-servers=192.168.8.12:192.168.8.13 (and this was
done at initial install time).
The "use managed gluster" checkbox is NOT checked, a
@ Jim - here is my setup, which I will test in a few (brand new cluster) and
report back what I find in my tests
- 3x servers direct connected via 10Gb
- 2 of those 3 setup in ovirt as hosts
- Hosted engine
- Gluster replica 3 (no arbiter) for all volumes
- 1x engine volume gluster replica 3 manua
So, after reading the first document twice and the second link thoroughly
once, I believe that the arbiter volume should be sufficient and count
for replica / split brain. E.g., if any one full replica is down, and the
arbiter and the other replica are up, then it should have quorum and all
should
Thank you!
I created my cluster following these instructions:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
(I built it about 10 months ago)
I used their recipe for automated gluster node creation. Originally I
thought I had 3 replicas, then I started re
On 9/1/2017 8:53 AM, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to full replica, then? I was
misinformed when I created this setup. I thought the arbiter held
enough metadata that it could validate or repudiate any one replica
(kinda like the parity drive for a RAID-4 array).
These can get a little confusing but this explains it best:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes
Basically, in the first paragraph they are explaining why you can't have HA
with quorum for 2 nodes. Here is another
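If it helps, creating a volume with the arbiter layout those docs describe
should look roughly like this (brick paths made up):

  gluster volume create data replica 3 arbiter 1 \
      node1:/bricks/data/brick node2:/bricks/data/brick node3:/bricks/data/brick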
I'm now also confused as to what the point of an arbiter is / what it does
/ why one would use it.
On Fri, Sep 1, 2017 at 11:44 AM, Jim Kusznir wrote:
> Thanks for the help!
>
> Here's my gluster volume info for the data export/brick (I have 3: data,
> engine, and iso, but they're all configured
Thanks for the help!
Here's my gluster volume info for the data export/brick (I have 3: data,
engine, and iso, but they're all configured the same):
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
@Kasturi - Looks good now. Cluster showed down for a moment but VM's stayed
up in their appropriate places. Thanks!
< Anyone on this list please feel free to correct my response to Jim if it's
wrong >
@ Jim - If you can share your gluster volume info / status I can confirm
(to the best of my knowledge)
Speaking of the "use managed gluster" checkbox: I created this gluster setup
under oVirt 4.0, when that option wasn't there. I've gone into my settings
and checked the box and saved it at least twice, but when I go back into the
storage settings, it's not checked again.
The "about" box in the GUI reports that I'm
Huh... OK, how do I convert the arbiter to full replica, then? I was
misinformed when I created this setup. I thought the arbiter held
enough metadata that it could validate or repudiate any one replica (kinda
like the parity drive for a RAID-4 array). I was also under the impression
that o
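(One way this conversion is typically done - untested here, brick paths
hypothetical - is to drop the arbiter brick and re-add a full one on the
same node, letting self-heal repopulate it:

  gluster volume remove-brick data replica 2 node3:/bricks/data/arbiter force
  gluster volume add-brick data replica 3 node3:/bricks/data/brick
  gluster volume heal data info   # watch the new brick fill up
)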
Yes, that is the same option I was asking about. Apologies that I had
mentioned a different name.
So, oVirt will automatically detect it if you select the option 'use
managed gluster volume'. While adding a storage domain, after specifying the
host, you could just select the checkbox and that will
Are you referring to "Mount Options"? -> http://i.imgur.com/bYfbyzz.png
Then no, but that would explain why it wasn't working :-). I guess I had a
silly assumption that oVirt would have detected it and automatically taken
up the redundancy that was configured inside the replica set / brick
detection
Hi Charles,
One question, while configuring a storage domain you are saying
"host to use: " node1, then in the connection details you say node1:/data.
What about the backup-volfile-servers option in the UI while configuring
storage domain? Are you specifying that too?
Thanks
kasturi
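(That is, in the Mount Options field of the storage domain dialog you would
enter something like - host names illustrative:

  backup-volfile-servers=node2:node3
)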
@ Jim - you have only two data bricks and lost quorum. The arbiter only
stores metadata, no actual files. So yes, you were running in degraded mode,
so some operations were hindered.
@ Sahina - Yes, this actually worked fine for me once I did that. However,
the issue I am still facing, is when I go
To the OP question: when you set up a gluster storage domain, you need to
specify backup-volfile-servers=<server2>:<server3>, where server2 and
server3 also have bricks running. When server1 is down and the volume is
mounted again, server2 or server3 are queried to get the gluster volfiles.
@Jim, if this does not
If gluster drops in quorum so that it has fewer votes than it should, it
will stop file operations until quorum is back to normal. If I remember
right, you need two bricks up for writes for quorum to be met, and the
arbiter is only a vote to avoid split brain.
Basically what you have is a RAID-5 solution
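For reference, the quorum behaviour described there is controlled by volume
options along these lines - the usual recommendations for replica volumes,
apply with care:

  gluster volume set data cluster.quorum-type auto
  gluster volume set data cluster.server-quorum-type server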
Hi all:
Sorry to hijack the thread, but I was about to start essentially the same
thread.
I have a 3 node cluster; all three are hosts and gluster nodes (replica 2 +
arbiter). I DO have the mnt_options=backup-volfile-servers= set:
storage=192.168.8.11:/engine
mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13
Typo..."Set it up and then failed that **HOST**"
And upon that host going down, the storage domain went down. I only have the
hosted storage domain and this new one - is this why the DC went down and
no SPM could be elected?
I don't recall it working this way in early 4.0 or 3.6.
On Thu, Aug 31, 2017
So I've tested this today and I failed a node. Specifically, I set up a
glusterfs domain and selected "host to use: node1". Set it up and then
failed that VM
However, this did not work and the datacenter went down. My engine stayed
up; however, it seems configuring a domain to pin to a host to use
Yes, right. What you can do is edit the hosted-engine.conf file; there is
a parameter, as shown below [1] - replace h2 and h3 with your second
and third storage servers. Then you will need to restart the ovirt-ha-agent
and ovirt-ha-broker services on all the nodes.
[1] 'mnt_options=backup-volfile-servers=h2:h3'
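Putting that together, the change would look something like this (h2/h3 are
placeholders for your servers):

  # /etc/ovirt-hosted-engine/hosted-engine.conf
  mnt_options=backup-volfile-servers=h2:h3

  # then, on every node:
  systemctl restart ovirt-ha-broker ovirt-ha-agent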
Hi Kasturi -
Thanks for the feedback
> If the cockpit+gdeploy plugin had been used, it would have automatically
detected the glusterfs replica 3 volume created during Hosted Engine
deployment and this question would not have been asked
Actually, doing hosted-engine --deploy also auto
Hi,
During Hosted Engine setup, the question about the glusterfs volume is
asked because you set up the volumes yourself. If the cockpit+gdeploy
plugin had been used, it would have automatically detected the glusterfs
replica 3 volume created during Hosted Engine deployment and this
question would not have been asked
Hello -
I have successfully created a hyperconverged hosted engine setup consisting
of 3 nodes - 2 for VMs and the third purely for storage. I manually
configured it all; I did not use oVirt Node or anything. I built the gluster
volumes myself.
However, I noticed that when setting up the hosted engine
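(For anyone reproducing this: manual creation of such replica 3 volumes
typically looks something like the following - volume names and brick paths
illustrative, not the OP's exact commands:

  for vol in engine data iso; do
      gluster volume create $vol replica 3 \
          node1:/gluster/$vol/brick node2:/gluster/$vol/brick node3:/gluster/$vol/brick
      gluster volume start $vol
  done
)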