Jim -
result of this test...engine crashed, but all VMs on the gluster domain
(backed by the same physical nodes/hardware/gluster process/etc.) stayed up
fine.
I guess there is some functional difference between 'backupvolfile-server'
and 'backup-volfile-servers'?
Perhaps try the latter and see what
Jim -
One thing I noticed is that, by accident, I used
'backupvolfile-server=node2:node3', which is apparently a supported setting.
Reading the man page of mount.glusterfs, it would appear the syntax is
slightly different; I'm not sure whether my setting being different has
different impacts.
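For what it's worth, here is how I read the two spellings against the
mount.glusterfs man page; the host names below are just placeholders, and
older releases may only document the singular form:

  # current, plural form: colon-separated list of fallback volfile servers
  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/data /mnt/data

  # older, singular form: documented to take a single server
  mount -t glusterfs -o backupvolfile-server=node2 node1:/data /mnt/data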
Jim -
here is my test:
- All VMs on node2: hosted engine and 1 test VM
- Test VM on gluster storage domain (with mount options set)
- hosted engine is on gluster as well, with settings persisted to
hosted-engine.conf for backupvol
All VMs stayed up. Nothing in dmesg of the test VM indicating
I can confirm that I did set it up manually, and I did specify backupvol;
in the "manage domain" storage settings, under mount options, I do have
backup-volfile-servers=192.168.8.12:192.168.8.13 (and this was done at
initial install time).
The "use managed gluster" checkbox is NOT checked,
@ Jim - here is my setup, which I will test in a few (brand new cluster) and
report back what I find (rough volume-create commands follow the list):
- 3x servers direct connected via 10Gb
- 2 of those 3 setup in ovirt as hosts
- Hosted engine
- Gluster replica 3 (no arbiter) for all volumes
- 1x engine volume gluster replica 3
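The volumes were created along these lines; brick paths here are made up,
so substitute your own:

  # one full copy of the data on each of the three servers, no arbiter
  gluster volume create engine replica 3 \
      node1:/gluster/engine/brick \
      node2:/gluster/engine/brick \
      node3:/gluster/engine/brick
  gluster volume start engine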
Good day all.
Just playing with oVirt. New to it, but it seems quite good.
Single instance / NFS share / CentOS 7 / oVirt 4.1.
Had a power outage, and this error message is in my logs whilst trying to
activate a downed host. The snippet below is from engine.log.
2017-09-01 13:32:03,092-07 INFO
So, after reading the first document twice and the second link thoroughly
once, I believe that the arbiter volume should be sufficient and count
for replica / split-brain purposes. E.g., if any one full replica is down,
and the arbiter and the other replica are up, then it should have quorum and all
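To put numbers on that: a replica 2 + arbiter volume carries 3 votes (one
per brick) and needs a majority, i.e. 2, to keep writing. One full replica
up plus the arbiter up = 2 of 3 votes, so quorum holds; with both full
replicas down, the arbiter's lone vote (1 of 3) is not enough and the
volume blocks writes.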
Thank you!
I created my cluster following these instructions:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
(I built it about 10 months ago)
I used their recipe for automated gluster node creation. Originally I
thought I had 3 replicas, then I started
On 9/1/2017 8:53 AM, Jim Kusznir wrote:
Huh... OK, how do I convert the arbiter to a full replica, then? I was
misinformed when I created this setup. I thought the arbiter held
enough metadata that it could validate or repudiate any one replica
(kinda like the parity drive for a RAID-4
These can get a little confusing but this explains it best:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes
Basically, in the first paragraph they explain why you can't have HA
with quorum for 2 nodes. Here is another
I'm now also confused as to what the point of an arbiter is / what it does
/ why one would use it.
On Fri, Sep 1, 2017 at 11:44 AM, Jim Kusznir wrote:
> Thanks for the help!
>
> Here's my gluster volume info for the data export/brick (I have 3: data,
> engine, and iso, but
Thanks for the help!
Here's my gluster volume info for the data export/brick (I have 3: data,
engine, and iso, but they're all configured the same):
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
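Reading that brick line: 1 distribute subvolume x (2 data bricks + 1
arbiter brick), i.e. replica 2 plus an arbiter, not a full replica 3. On
reasonably recent gluster builds I believe the arbiter brick is also tagged
in the info output, so this should show it:

  gluster volume info data | grep -i arbiter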
Hi everyone,
I installed 2 hosts on a new cluster, and the servers take a really long
time to boot up (about 8 minutes).
When a host crashes or is powered off, the ovirt-manager starts it via power
management; since the servers take all that time to boot up, the
ovirt-manager thinks it failed to start
On host info I can see:
Cluster compatibility level: 3.6,4.0,4.1
could this be the problem?
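If the slow POST is the root cause, one thing to try (assuming I have the
key name right; ServerRebootTimeout is in seconds) is to give the engine a
longer reboot window:

  engine-config -s ServerRebootTimeout=600
  systemctl restart ovirt-engine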
On 30/08/2017 16:32, Stefano Danzi wrote:
above the logs.
PS cluster compatibility level is 4.1
engine:
2017-08-30 16:26:07,928+02 INFO
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default
It seems I'm still having problems with metrics-related packages after the
upgrade to 4.1.5.
Similar to this one at 4.1 time:
http://lists.ovirt.org/pipermail/users/2017-February/079670.html
A cluster with hosts at 4.1.1 has been updated to 4.1.5
Hosts are CentOS 7.3
The yum update method on the host was
@Kasturi - Looks good now. The cluster showed down for a moment, but VMs
stayed up in their appropriate places. Thanks!
< Anyone on this list, please feel free to correct my response to Jim if
it's wrong >
@ Jim - If you can share your gluster volume info / status I can confirm
(to the best of my
Hi There,
Our 3-node data center is up and running, and I am populating it with the
required VMs.
It's a test lab, so I want to provide prefabricated environments for
different users.
I have already set up a CentOS box for nested virtualisation, which works
quite well, but I would really like to
Speaking of "use managed gluster": I created this gluster setup under
oVirt 4.0, when that option wasn't there. I've gone into my settings,
checked the box, and saved it at least twice, but when I go back into the
storage settings it's not checked again.
The "about" box in the GUI reports that I'm
Huh... OK, how do I convert the arbiter to a full replica, then? I was
misinformed when I created this setup. I thought the arbiter held
enough metadata that it could validate or repudiate any one replica (kinda
like the parity drive for a RAID-4 array). I was also under the impression
that
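Not authoritative, but the conversion I have seen described is to drop the
arbiter brick down to replica 2 and then add a full brick back as replica 3;
brick paths below are placeholders:

  # remove the arbiter brick, leaving a plain replica 2 volume
  gluster volume remove-brick data replica 2 node3:/gluster/arbiter/data force
  # add a full data brick in its place, making a real replica 3
  gluster volume add-brick data replica 3 node3:/gluster/data/brick
  # then watch the new brick heal up
  gluster volume heal data info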
Thank you!
I created all the VMs using the sparse allocation method. I wanted a
method that would create disks that did not immediately occupy their full
declared size (e.g., allow overcommit of disk space, as most VM hard drives
are 30-50% empty for their entire life).
I kinda figured that it
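One general thing worth checking on thin disks (qcow2 behavior, not
oVirt-specific): space freed inside the guest is only returned to storage
if discard is enabled on the virtual disk and the guest actually trims,
e.g.:

  # inside the guest: trim every mounted filesystem that supports it
  sudo fstrim -av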
Yes, that is the same option I was asking about. Apologies that I had
mentioned a different name.
So, oVirt will automatically detect it if you select the option 'use
managed gluster volume'. While adding a storage domain, after specifying
the host, you could just select the checkbox and that
Are you referring to "Mount Options"? -> http://i.imgur.com/bYfbyzz.png
Then no, but that would explain why it wasn't working :-). I guess I had a
silly assumption that oVirt would have detected it and automatically taken
up the redundancy that was configured inside the replica set / brick
Hi Charles,
One question: while configuring a storage domain, you are saying
"host to use:" node1, then in the connection details you say node1:/data.
What about the backup-volfile-servers option in the UI while configuring
the storage domain? Are you specifying that too?
Thanks
kasturi
On Fri, Sep 1, 2017 at 8:41 AM, Jim Kusznir wrote:
> Hi all:
>
> I have several VMs, all thin provisioned, on my small storage (self-hosted
> gluster / hyperconverged cluster). I'm now noticing that some of my VMs
> (especially my only Windows VM) are using even MORE disk
@ Jim - you have only two data bricks and lost quorum. The arbiter only
stores metadata, no actual files. So yes, you were running in degraded
mode, so some operations were hindered.
@ Sahina - Yes, this actually worked fine for me once I did that. However,
the issue I am still facing is when I
Hi everyone,
I have a question regarding hosts in oVirt. Is it possible to add a
host which is registered in a different data center?
--
Khoi Thinh
To the OP question: when you set up a gluster storage domain, you need to
specify backup-volfile-servers=server2:server3, where server2 and
server3 also have bricks running. When server1 is down and the volume is
mounted again, server2 or server3 is queried to get the gluster volfiles.
@Jim, if this does not
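Concretely, in the storage domain dialog that works out to something like
this (host names are placeholders):

  Path:          server1:/data
  Mount Options: backup-volfile-servers=server2:server3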
If gluster drops in quorum so that it has fewer votes than it should, it
will stop file operations until quorum is back to normal. If I remember it
right, you need two bricks up to write for quorum to be met, and the
arbiter is only a vote to avoid split-brain.
Basically what you have is a RAID 5
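For completeness, these are the volume options that govern that behavior;
defaults vary by version, so treat the values as illustrative:

  # client-side quorum: 'auto' requires a majority of bricks to allow writes
  gluster volume set data cluster.quorum-type auto
  # server-side quorum across the trusted pool
  gluster volume set data cluster.server-quorum-type server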