Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
Jim - result of this test... the engine crashed, but all VMs on the gluster domain (backed by the same physical nodes/hardware/gluster process/etc.) stayed up fine. I guess there is some functional difference between 'backupvolfile-server' and 'backup-volfile-servers'? Perhaps try the latter and see what

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
Jim - One thing I noticed is that, by accident, I used 'backupvolfile-server=node2:node3', which is apparently a supported setting. It would appear, from reading the man page of mount.glusterfs, that the syntax is slightly different. Not sure if my setting being different has different impacts
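
For reference, per the mount.glusterfs man page the plural option takes a colon-separated server list, while the older singular option expects a single server. A minimal sketch of both forms (the node names and the /mnt/engine mountpoint are placeholders from this thread, nothing verified):

    # older, single-server form
    mount -t glusterfs -o backupvolfile-server=node2 node1:/engine /mnt/engine
    # current form: colon-separated list of fallback volfile servers
    mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/engine /mnt/engine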

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
Jim - here is my test:
- All VMs on node2: hosted engine and 1 test VM
- Test VM on gluster storage domain (with mount options set)
- Hosted engine is on gluster as well, with settings persisted to hosted-engine.conf for backupvol
All VMs stayed up. Nothing in dmesg of the test VM indicating

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
I can confirm that I did set it up manually, and I did specify backupvol: in the "manage domain" storage settings, under mount options, I have backup-volfile-servers=192.168.8.12:192.168.8.13 (and this was done at initial install time). The "use managed gluster" checkbox is NOT checked,

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
@ Jim - here is my setup, which I will test in a few (brand new cluster) and report back what I find in my tests:
- 3x servers directly connected via 10Gb
- 2 of those 3 set up in oVirt as hosts
- Hosted engine
- Gluster replica 3 (no arbiter) for all volumes
- 1x engine volume, gluster replica 3

[ovirt-users] ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem

2017-09-01 Thread Gary Balliet
Good day all. Just playing with oVirt. New to it, but it seems quite good. Single instance / NFS share / CentOS 7 / oVirt 4.1. Had a power outage, and this error message is in my logs whilst trying to activate a downed host. The snippet below is from engine.log: 2017-09-01 13:32:03,092-07 INFO

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
So, after reading the first document twice and the 2nd link thoroughly once, I believe that the arbiter volume should be sufficient and count for replica / split-brain purposes. E.g., if any one full replica is down, and the arbiter and the other replica are up, then it should have quorum and all

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
Thank you! I created my cluster following these instructions: https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/ (I built it about 10 months ago). I used their recipe for automated gluster node creation. Originally I thought I had 3 replicas, then I started

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread WK
On 9/1/2017 8:53 AM, Jim Kusznir wrote: Huh... OK, how do I convert the arbiter to a full replica, then? I was misinformed when I created this setup. I thought the arbiter held enough metadata that it could validate or repudiate any one replica (kinda like the parity drive for a RAID-4

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
These can get a little confusing, but this explains it best: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes Basically, in the first paragraph they are explaining why you can't have HA with quorum for 2 nodes. Here is another

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
I'm now also confused as to what the point of an arbiter is / what it does / why one would use it. On Fri, Sep 1, 2017 at 11:44 AM, Jim Kusznir wrote: > Thanks for the help! > > Here's my gluster volume info for the data export/brick (I have 3: data, > engine, and iso, but

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
Thanks for the help! Here's my gluster volume info for the data export/brick (I have 3: data, engine, and iso, but they're all configured the same):
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
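
The preview cuts the output off right at the brick count; the part that actually shows whether a brick is an arbiter is the brick list further down. On a replica 2+1 volume, the remainder of `gluster volume info` output looks roughly like this (brick hosts and paths here are hypothetical, not from the thread):

    Transport-type: tcp
    Bricks:
    Brick1: node1:/gluster/data/brick
    Brick2: node2:/gluster/data/brick
    Brick3: node3:/gluster/data/brick (arbiter)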

[ovirt-users] Slow booting host - restart loop

2017-09-01 Thread Bernardo Juanicó
Hi everyone, I installed 2 hosts on a new cluster, and the servers take a really long time to boot up (about 8 minutes). When a host crashes or is powered off, the ovirt-manager starts it via power management; since the servers take all that time to boot up, the ovirt-manager thinks it failed to start
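
If the engine's power management gives up before the 8-minute boot finishes, one engine-side knob worth checking is the host reboot timeout. A hedged sketch only: ServerRebootTimeout is the config key I believe applies here (value in seconds); confirm it exists on your version with engine-config -l before relying on it:

    # on the engine machine
    engine-config -g ServerRebootTimeout       # show the current value
    engine-config -s ServerRebootTimeout=600   # allow ~10 minutes for boot
    systemctl restart ovirt-engine             # restart the engine to apply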

Re: [ovirt-users] Native Access on gluster storage domain

2017-09-01 Thread Stefano Danzi
On host info I can see: Cluster compatibility level: 3.6,4.0,4.1. Could this be the problem? On 30/08/2017 16:32, Stefano Danzi wrote: Here are the logs. PS: cluster compatibility level is 4.1. engine: 2017-08-30 16:26:07,928+02 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (default

[ovirt-users] Still messages about metrics related packages update after 4.1.5

2017-09-01 Thread Gianluca Cecchi
It seems I'm still having problems with metrics-related packages after the upgrade to 4.1.5, similar to this one at 4.1 time: http://lists.ovirt.org/pipermail/users/2017-February/079670.html A cluster with hosts at 4.1.1 has been updated to 4.1.5. Hosts are CentOS 7.3. The yum update method on the host was

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
@Kasturi - Looks good now. The cluster showed down for a moment, but VMs stayed up in their appropriate places. Thanks! < Anyone on this list, please feel free to correct my response to Jim if it's wrong > @ Jim - If you can share your gluster volume info / status, I can confirm (to the best of my

[ovirt-users] vm grouping

2017-09-01 Thread david caughey
Hi there, Our 3-node data center is up and running and I am populating it with the required VMs. It's a test lab, so I want to provide prefabricated environments for different users. I have already set up a CentOS box for nested virtualisation, which works quite well, but I would really like to

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
Speaking of the "use managed gluster", I created this gluster setup under ovirt 4.0 when that wasn't there. I've gone into my settings and checked the box and saved it at least twice, but when I go back into the storage settings, its not checked again. The "about" box in the gui reports that I'm

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Jim Kusznir
Huh... OK, how do I convert the arbiter to a full replica, then? I was misinformed when I created this setup. I thought the arbiter held enough metadata that it could validate or repudiate any one replica (kinda like the parity drive for a RAID-4 array). I was also under the impression that
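
For what it's worth, the usual Gluster-level procedure is to drop the arbiter brick and re-add a full data brick, letting self-heal populate it. A sketch only, assuming the volume name data and node3 brick paths from this thread; check heal status and the Gluster docs before running anything like this:

    # drop the arbiter brick, leaving a plain replica 2 volume
    gluster volume remove-brick data replica 2 node3:/gluster/data/brick force
    # add a clean brick back as a full (data-carrying) third replica
    gluster volume add-brick data replica 3 node3:/gluster/data/brick-new
    # kick off and monitor the self-heal that fills the new brick
    gluster volume heal data full
    gluster volume heal data info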

Re: [ovirt-users] Storage slowly expanding

2017-09-01 Thread Jim Kusznir
Thank you! I created all the VMs using the sparse allocation method. I wanted a method that would create disks that did not immediately occupy their full declared size (e.g., allow overcommit of disk space, as most VM hard drives are 30-50% empty for their entire life). I kinda figured that it
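
A hedged aside: if the disks are exposed to the guests with discard support (e.g. a virtio-scsi disk with discard enabled, which oVirt can do), trimming inside the guest hands freed blocks back to the sparse image instead of letting it only ever grow. A sketch for a Linux guest:

    # one-off: trim all mounted filesystems that support it
    sudo fstrim -av
    # or do it on a schedule via the stock systemd timer
    sudo systemctl enable --now fstrim.timer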

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Kasturi Narra
Yes, that is the same option I was asking about. Apologies that I had mentioned a different name. So, oVirt will automatically detect it if you select the option 'use managed gluster volume'. While adding a storage domain, after specifying the host, you could just select the checkbox and that

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
Are you referring to "Mount Options"? -> http://i.imgur.com/bYfbyzz.png Then no, but that would explain why it wasn't working :-). I guess I had a silly assumption that oVirt would have detected it and automatically taken up the redundancy that was configured inside the replica set / brick

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Kasturi Narra
Hi Charles, One question: while configuring a storage domain, you are setting "host to use" to node1, and then in the connection details you say node1:/data. What about the backup-volfile-servers option in the UI while configuring the storage domain? Are you specifying that too? Thanks, Kasturi

Re: [ovirt-users] Storage slowly expanding

2017-09-01 Thread Yaniv Kaul
On Fri, Sep 1, 2017 at 8:41 AM, Jim Kusznir wrote: > Hi all: > > I have several VMs, all thin provisioned, on my small storage (self-hosted > gluster / hyperconverged cluster). I'm now noticing that some of my VMs > (especially my only Windows VM) are using even MORE disk

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Charles Kozler
@ Jim - you have only two data bricks and lost quorum. The arbiter only stores metadata, no actual files. So yes, you were running in degraded mode, so some operations were hindered. @ Sahina - Yes, this actually worked fine for me once I did that. However, the issue I am still facing is when I

[ovirt-users] Install failed when adding host in Ovirt

2017-09-01 Thread Khoi Thinh
Hi everyone, I have a question regarding hosts in oVirt. Is it possible to add a host which is registered in a different data center? -- Khoi Thinh

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Sahina Bose
To the OP's question: when you set up a gluster storage domain, you need to specify backup-volfile-servers=<server2>:<server3>, where server2 and server3 also have bricks running. When server1 is down and the volume is mounted again, server2 or server3 is queried to get the gluster volfiles. @Jim, if this does not

Re: [ovirt-users] hyperconverged question

2017-09-01 Thread Johan Bernhardsson
If gluster drops in quorum so that it has fewer votes than it should, it will stop file operations until quorum is back to normal. If I remember it right, you need two bricks writable for quorum to be met, and the arbiter is only a vote to avoid split brain. Basically what you have is a RAID-5
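
That matches Gluster's client-quorum behaviour for replica volumes with an arbiter. If you want to inspect or pin it, the relevant knob is cluster.quorum-type; a sketch, with the volume name data assumed from the thread:

    gluster volume get data cluster.quorum-type   # 'auto' = a majority of bricks must be up
    gluster volume set data cluster.quorum-type auto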