[ovirt-users] Re: Parent checkpoint ID does not match the actual leaf checkpoint

2020-07-19 Thread Nir Soffer
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński 
wrote:

> Hello,
> Thanks to previous answers, I was able to make backups. Unfortunately, we
> had some infrastructure issues and after the host reboots new problems
> appeared. I am not able to do any backup using the commands that worked
> yesterday. I looked through the logs and there is something like this:
>
> 2020-07-17 15:06:30,644+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
> [944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup
> operation 'StartVmBackup': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error =
> Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id':
> 'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
> '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent checkpoint ID
> does not match the actual leaf checkpoint*'}, code = 1610 (Failed with
> error unexpected and code 16)
>
>
It looks like the engine sent:

parent_checkpoint_id: None

This issue was fixed in the engine a few weeks ago.

Which engine and vdsm versions are you testing?
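
For reference, once the fixed engine/vdsm are in place, a full backup (no parent
checkpoint) can be requested again over the REST API. A minimal sketch, with
placeholder values for the engine FQDN, admin password, VM id and disk id;
omitting <from_checkpoint_id> should request a full backup and start a fresh
checkpoint chain:

  curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
       --user admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' \
       -H 'Accept: application/xml' \
       -d '<backup><disks><disk id="DISK-UUID"/></disks></backup>' \
       'https://engine.example.com/ovirt-engine/api/vms/VM-UUID/backups'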


> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
>
>
> And the last error is:
>
> 2020-07-17 15:13:45,835+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
> [f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup
> operation 'GetVmBackupInfo': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error
> = No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260',
> 'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': '*VM
> backup not exists: Domain backup job id not found: no domain backup job
> present'*}, code = 1601 (Failed with error unexpected and code 16)
>
>
This is likely a result of the first error. If starting the backup failed, the
backup entity is deleted.


> (these errors are from full backup)
>
> Like I said this is very strange because everything was working correctly.
>
>
> Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacin...@storware.eu

[ovirt-users] Re: oVirt install questions

2020-07-19 Thread Jayme
You would set up three servers first as hyperconverged, using either replica
3 or replica 3 arbiter 1, then add your fourth host afterward as a compute-only
host that can run VMs but does not participate in GlusterFS storage.
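
For illustration, the initial HCI volume creation boils down to something like
this at the Gluster level (the deployment wizard does it for you; host names and
brick paths below are placeholders):

  gluster volume create vmstore replica 3 arbiter 1 \
      host1:/gluster_bricks/vmstore/vmstore \
      host2:/gluster_bricks/vmstore/vmstore \
      host3:/gluster_bricks/vmstore/vmstore
  gluster volume start vmstore

With 'replica 3 arbiter 1' the first two bricks carry full data and the third
only metadata; with plain 'replica 3' all three bricks carry full data.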

On Sun, Jul 19, 2020 at 3:12 PM David White via Users 
wrote:

> Thank you.
> So to make sure I understand what you're saying, it sounds like if I need
> 4 nodes (or more), I should NOT do a "hyperconverged" installation, but
> should instead prepare Gluster separately from the oVirt Manager
> installation. Do I understand this correctly?
>
> If that is the case, can I still use some of the servers for dual purposes
> (Gluster + oVirt Manager)? I'm most likely going to need more servers for
> the storage than I will need for the RAM & CPU, which is a little bit
> opposite of what you wrote (using 3 servers for Gluster and adding
> additional nodes for RAM & CPU).
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> > Hi David,
> >
>
> > it's a little bit different.
> >
>
> > Ovirt supports 'replica 3' (3 directories host the same content) or
> 'replica 3 arbiter 1' (2 directories host the same data, the third directory
> contains metadata to prevent split-brain situations) volumes.
> >
>
> > If you have 'replica 3' it is smart to keep the data on separate hosts,
> although you can keep it on the same host (but then you should use no
> replica and oVirt's Single node setup).
> >
>
> > When you extend, you need to add bricks (a fancy name for a directory)
> in the x3 count (i.e. in multiples of 3).
> >
>
> > If you want to use 5 nodes, you can go with a 'replica 3
> arbiter 1' volume, where ServerA & ServerB host data and ServerC hosts only
> metadata (arbiter). Then you can extend, and for example ServerC can again
> host metadata while ServerD & ServerE host data for the second replica set.
> >
>
> > You can even use only 3 servers for Gluster, while many more systems serve
> as oVirt nodes (CPU & RAM) to host VMs.
> > In the case of a 4-node setup, 3 hosts have the Gluster data and the 4th
> is not part of the Gluster cluster, just hosting VMs.
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > On 19 July 2020 at 15:25:10 GMT+03:00, David White via Users
> users@ovirt.org wrote:
> >
>
> > > Thanks again for explaining all of this to me.
> > > Much appreciated.
> > > Regarding the hyperconverged environment,
> > > reviewing
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> ,
> > > it appears to state that you need, exactly, 3 physical servers.
> > > Is it possible to run a hyperconverged environment with more than 3
> > > physical servers?
> > > Because of the way that the gluster triple-redundancy works, I knew
> > > that I would need to size all 3 physical servers' SSD drives to store
> > > 100% of the data, but there's a possibility that 1 particular (future)
> > > customer is going to need about 10TB of disk space.
> > > For that reason, I'm thinking about what it would look like to have 4
> > > or even 5 physical servers in order to increase the total amount of
> > > disk space made available to oVirt as a whole. And then from there, I
> > > would of course setup a number of virtual disks that I would attach
> > > back to that customer's VM.
> > > So to recap, if I were to have a 5-node Gluster Hyperconverged
> > > environment, I'm hoping that the data would still only be required to
> > > replicate across 3 nodes. Does this make sense? Is this how data
> > > replication works? Almost like a RAID -- add more drives, and the RAID
> > > gets expanded?
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐ Original Message ‐‐‐
> > > On Tuesday, June 23, 2020 4:41 PM, Jayme jay...@gmail.com wrote:
> > >
>
> > > > Yes this is the point of hyperconverged. You only need three hosts to
> > > > setup a proper hci cluster. I would recommend ssds for gluster
> storage.
> > > > You could get away with non raid to save money since you can do
> replica
> > > > three with gluster meaning your data is fully replicated across all
> > > > three hosts.
> > >
>
> > > > On Tue, Jun 23, 2020 at 5:17 PM David White via Users
> > > > users@ovirt.org wrote:
> > >
>
> > > > > Thanks.
> > > > > I've only been considering SSD drives for storage, as that is what
> > > > > I currently have in the cloud.
> > > >
>
> > > > >
>
> > >
>
> > > > > I think I've seen some things in the documents about oVirt and
> > > > > gluster hyperconverged.
> > > >
>
> > > > > Is it possible to run oVirt and Gluster together on the same
> > > > > hardware? So 3 physical hosts would run CentOS or something, and I
> > > > > would install oVirt Node + Gluster onto the same base host OS? If
> so,
> > > > > then I could probably make that fit into my budget.
> > > >
>
> > > > >
>
> > >
>
> > > > > Sent with ProtonMail Secure Email.
> > >
>
> > > > > ‐‐‐ Original Message ‐‐‐
> > > > > On 

[ovirt-users] Re: oVirt install questions

2020-07-19 Thread David White via Users
Thank you.
So to make sure I understand what you're saying, it sounds like if I need 4 
nodes (or more), I should NOT do a "hyperconverged" installation, but should 
instead prepare Gluster separately from the oVirt Manager installation. Do I 
understand this correctly?

If that is the case, can I still use some of the servers for dual purposes 
(Gluster + oVirt Manager)? I'm most likely going to need more servers for the 
storage than I will need for the RAM & CPU, which is a little bit opposite of 
what you wrote (using 3 servers for Gluster and adding additional nodes for RAM 
& CPU).


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users  
wrote:

> Hi David,
> 

> it's a little bit different.
> 

> Ovirt supports 'replica 3' (3 directories host the same content) or 'replica
> 3 arbiter 1' (2 directories host the same data, the third directory contains
> metadata to prevent split-brain situations) volumes.
> 

> If you have 'replica 3' it is smart to keep the data on separate hosts, 
> although you can keep it on the same host (but then you should use no replica 
> and oVirt's Single node setup).
> 

> When you extend, you need to add bricks (a fancy name for a directory) in the
> x3 count (i.e. in multiples of 3).
> 

> If you want to use 5 nodes, you can go with a 'replica 3 arbiter
> 1' volume, where ServerA & ServerB host data and ServerC hosts only metadata
> (arbiter). Then you can extend, and for example ServerC can again host
> metadata while ServerD & ServerE host data for the second replica set.
> 

> You can even use only 3 servers for Gluster, while many more systems serve as
> oVirt nodes (CPU & RAM) to host VMs.
> In the case of a 4-node setup, 3 hosts have the Gluster data and the 4th is
> not part of the Gluster cluster, just hosting VMs.
> 

> Best Regards,
> Strahil Nikolov
> 

> On 19 July 2020 at 15:25:10 GMT+03:00, David White via Users users@ovirt.org
> wrote:
> 

> > Thanks again for explaining all of this to me.
> > Much appreciated.
> > Regarding the hyperconverged environment,
> > reviewing 
> > https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html,
> > it appears to state that you need, exactly, 3 physical servers.
> > Is it possible to run a hyperconverged environment with more than 3
> > physical servers?
> > Because of the way that the gluster triple-redundancy works, I knew
> > that I would need to size all 3 physical servers' SSD drives to store
> > 100% of the data, but there's a possibility that 1 particular (future)
> > customer is going to need about 10TB of disk space.
> > For that reason, I'm thinking about what it would look like to have 4
> > or even 5 physical servers in order to increase the total amount of
> > disk space made available to oVirt as a whole. And then from there, I
> > would of course setup a number of virtual disks that I would attach
> > back to that customer's VM.
> > So to recap, if I were to have a 5-node Gluster Hyperconverged
> > environment, I'm hoping that the data would still only be required to
> > replicate across 3 nodes. Does this make sense? Is this how data
> > replication works? Almost like a RAID -- add more drives, and the RAID
> > gets expanded?
> > Sent with ProtonMail Secure Email.
> > ‐‐‐ Original Message ‐‐‐
> > On Tuesday, June 23, 2020 4:41 PM, Jayme jay...@gmail.com wrote:
> > 

> > > Yes this is the point of hyperconverged. You only need three hosts to
> > > setup a proper hci cluster. I would recommend ssds for gluster storage.
> > > You could get away with non raid to save money since you can do replica
> > > three with gluster meaning your data is fully replicated across all
> > > three hosts.
> > 

> > > On Tue, Jun 23, 2020 at 5:17 PM David White via Users
> > > users@ovirt.org wrote:
> > 

> > > > Thanks.
> > > > I've only been considering SSD drives for storage, as that is what
> > > > I currently have in the cloud.
> > > 

> > > > 

> > 

> > > > I think I've seen some things in the documents about oVirt and
> > > > gluster hyperconverged.
> > > 

> > > > Is it possible to run oVirt and Gluster together on the same
> > > > hardware? So 3 physical hosts would run CentOS or something, and I
> > > > would install oVirt Node + Gluster onto the same base host OS? If so,
> > > > then I could probably make that fit into my budget.
> > > 

> > > > 

> > 

> > > > Sent with ProtonMail Secure Email.
> > 

> > > > ‐‐‐ Original Message ‐‐‐
> > > > On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users
> > > > users@ovirt.org wrote:
> > > 

> > > > 

> > 

> > > > > Hey David,
> > 

> > > > > keep in mind that you need some big NICs.
> > > > > I started my oVirt lab with 1 Gbit NIC and later added 4
> > > > > dual-port 1 Gbit NICs and I had to create multiple gluster volumes and
> > > > > multiple storage domains.
> > > 

> > > > > Yet, windows VMs cannot use software raid for boot devices, thus
> > > > > it's a 

[ovirt-users] Re: Deploying new Ovirt Node on new 4.4 cluster. ansible code incorrect

2020-07-19 Thread Strahil Nikolov via Users
There is a bug already opened for that behaviour:

https://bugzilla.redhat.com/show_bug.cgi?id=1858234

Best Regards,
Strahil Nikolov
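
A few quick sanity checks on an EL8 host before retrying the deploy (nothing
oVirt-specific, just the interpreter/package-manager facts the linked bug is
about):

  python3 --version        # EL8 ships Python 3; there is no unversioned /usr/bin/python by default
  ls -l /usr/bin/python*   # see which interpreters actually exist on the node
  dnf --version            # on EL8 "yum" is only a compatibility wrapper around dnf
  rpm -q ansible           # on the engine VM: the Ansible version used for host deploy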

On 19 July 2020 at 13:26:01 GMT+03:00, erin.s...@bookit.com wrote:
>Hi guys, we attempted to deploy a new oVirt cluster two weeks ago (4.4.1
>and 4.4.0). Once we got the engine up we tried to add a new node and
>Ansible began to fail on the deploy scripts. I searched the logs and
>discovered the Ansible script was not evaluating the right version of
>Python (python3). Also, the yum module should have been dnf. Once I changed
>those in many places it worked great. I am just trying to figure out
>what went wrong with our deploy so this doesn't happen again. We are
>using a hosted engine on a CentOS 8 VM. Please let me know what logs I
>can grab or apps to run for more details.
>details:
>
>
>engine:
>[root@ovirt ~]# uname -a 
>Linux ovirt.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10
>11:09:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>[root@ovirt ~]# cat /etc/centos-release
>CentOS Linux release 8.2.2004 (Core) 
>[root@ovirt ~]# uname -a 
>Linux ovirt.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10
>11:09:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>[root@ovirt ~]# cat /etc/os-release 
>NAME="CentOS Linux"
>VERSION="8 (Core)"
>ID="centos"
>ID_LIKE="rhel fedora"
>VERSION_ID="8"
>PLATFORM_ID="platform:el8"
>PRETTY_NAME="CentOS Linux 8 (Core)"
>ANSI_COLOR="0;31"
>CPE_NAME="cpe:/o:centos:centos:8"
>HOME_URL="https://www.centos.org/;
>BUG_REPORT_URL="https://bugs.centos.org/;
>
>
>node2:
>
>CENTOS_MANTISBT_PROJECT="CentOS-8"
>CENTOS_MANTISBT_PROJECT_VERSION="8"
>REDHAT_SUPPORT_PRODUCT="centos"
>REDHAT_SUPPORT_PRODUCT_VERSION="8"
>[root@web2 ~]# cat /etc/centos-release
>CentOS Linux release 8.2.2004 (Core) 
>[root@web2 ~]#  cat /etc/os-release 
>NAME="CentOS Linux"
>VERSION="8 (Core)"
>ID="centos"
>ID_LIKE="rhel fedora"
>VERSION_ID="8"
>VARIANT="oVirt Node 4.4.1.1"
>VARIANT_ID="ovirt-node"
>PRETTY_NAME="oVirt Node 4.4.1.1"
>ANSI_COLOR="0;31"
>CPE_NAME="cpe:/o:centos:centos:8"
>HOME_URL="https://www.ovirt.org/;
>BUG_REPORT_URL="https://bugzilla.redhat.com/;
>PLATFORM_ID="platform:el8"
>[root@web2 ~]# uname -a 
>Linux web2.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10
>11:09:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>[root@web2 ~]#  cat /etc/os-release 
>NAME="CentOS Linux"
>VERSION="8 (Core)"
>ID="centos"
>ID_LIKE="rhel fedora"
>VERSION_ID="8"
>VARIANT="oVirt Node 4.4.1.1"
>VARIANT_ID="ovirt-node"
>PRETTY_NAME="oVirt Node 4.4.1.1"
>ANSI_COLOR="0;31"
>CPE_NAME="cpe:/o:centos:centos:8"
>HOME_URL="https://www.ovirt.org/;
>BUG_REPORT_URL="https://bugzilla.redhat.com/;
>PLATFORM_ID="platform:el8"
>[root@web2 ~]# cat /etc/centos-release
>CentOS Linux release 8.2.2004 (Core) 
>
>node1:
>[root@web1 ~]# uname -a
>Linux web1.ism.ld 4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9 13:49:54
>UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>[root@web1 ~]# dnf update
>CentOS-8 - Gluster 7   
>0.0  B/s |   0  B 00:00
>Failed to download metadata for repo 'ovirt-4.4-centos-gluster7'
>Error: Failed to download metadata for repo 'ovirt-4.4-centos-gluster7'
>[root@web1 ~]# cat /etc/centos-release
>CentOS Linux release 8.1.1911 (Core)
>
>[root@web1 ~]#  cat /etc/os-release 
>NAME="CentOS Linux"
>VERSION="8 (Core)"
>ID="centos"
>ID_LIKE="rhel fedora"
>VERSION_ID="8"
>VARIANT="oVirt Node 4.4.0"
>VARIANT_ID="ovirt-node"
>PRETTY_NAME="oVirt Node 4.4.0"
>ANSI_COLOR="0;31"
>CPE_NAME="cpe:/o:centos:centos:8"
>HOME_URL="https://www.ovirt.org/;
>BUG_REPORT_URL="https://bugzilla.redhat.com/;
>PLATFORM_ID="platform:el8"
>
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/XEZHEHWNWJW4THUSHXFUDG4TGEYVAEDP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2MX3SZB7YT4W2GFCMTKJD6PJZZGXCMGI/


[ovirt-users] Re: Info on transporting a gluster storage domain

2020-07-19 Thread Gianluca Cecchi
On Fri, Jul 17, 2020 at 2:36 PM Gianluca Cecchi 
wrote:
[snip]

> I want to scratch env1 but preserve and then import the vmstore and big2
> storage domains in env2
>

[snip]


> and then?
> What should I do at the gluster command level to "import" the already set up
> gluster bricks/volumes, and then on the oVirt side to import the corresponding
> storage domain?
> BTW: can I import the previous storage domain named "vmstore" giving it
> another name such as vmstore2, so as not to conflict with the already
> existing "vmstore" storage domain, or is the name hard-coded when importing,
> creating a possible conflict?
>
> Thanks,
> Gianluca
>

It seems that a backup of /var/lib/glusterd is needed, but I don't have
it...
Can I somehow restart my original volume from just the pre-existing
brick files/directories?
Thanks,
Gianluca
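
For reference, the bricks themselves carry the volume identity in extended
attributes, so at least the current state can be inspected. A small sketch,
with a placeholder brick path:

  getfattr -d -m . -e hex /gluster_bricks/vmstore/vmstore 2>/dev/null | head
  # trusted.glusterfs.volume-id=0x...   <- UUID of the volume the brick belonged to
  # trusted.gfid=0x...

Whether a volume can safely be recreated on top of bricks that still carry these
xattrs, without the old /var/lib/glusterd metadata, is exactly the open question
here, so treat this only as a way to inspect what survived.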
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HNZVG236MXSURZ2YAUXIRYHXB4A4IGH3/


[ovirt-users] Re: oVirt install questions

2020-07-19 Thread Strahil Nikolov via Users
Hi David,

it's a little bit different.

oVirt supports 'replica 3' (3 directories host the same content) or
'replica 3 arbiter 1' (2 directories host the same data, the third directory contains
metadata to prevent split-brain situations) volumes.

If you have 'replica  3' it is smart to keep the data on separate hosts, 
although you can keep it on the same host (but then you should use no replica 
and oVirt's Single node setup).

When you extend, you need to add bricks (a fancy name for a directory) in the
x3 count (i.e. in multiples of 3).

If you want to use 5 nodes, you can go with a 'replica 3 arbiter
1' volume, where ServerA & ServerB host data and ServerC hosts only metadata
(arbiter). Then you can extend, and for example ServerC can again host metadata
while ServerD & ServerE host data for the second replica set.

You can even use only 3 servers for Gluster, while many more systems serve as oVirt
nodes (CPU & RAM) to host VMs.
In the case of a 4-node setup, 3 hosts have the Gluster data and the 4th is not
part of the Gluster cluster, just hosting VMs.
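
To make the "x3 count" extension concrete, a sketch with placeholder host names
and brick paths, extending an existing plain 'replica 3' volume by one more
replica set of three bricks:

  gluster volume add-brick vmstore \
      host4:/gluster_bricks/vmstore/vmstore \
      host5:/gluster_bricks/vmstore/vmstore \
      host6:/gluster_bricks/vmstore/vmstore
  gluster volume rebalance vmstore start   # spread existing data over the new set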


Best  Regards,
Strahil Nikolov

On 19 July 2020 at 15:25:10 GMT+03:00, David White via Users
wrote:
>Thanks again for explaining all of this to me.
>Much appreciated.
>
>Regarding the hyperconverged environment,
>reviewing 
>https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html,
>it appears to state that you need, exactly, 3 physical servers.
>
>Is it possible to run a hyperconverged environment with more than 3
>physical servers?
>Because of the way that the gluster triple-redundancy works, I knew
>that I would need to size all 3 physical servers' SSD drives to store
>100% of the data, but there's a possibility that 1 particular (future)
>customer is going to need about 10TB of disk space.
>
>For that reason, I'm thinking about what it would look like to have 4
>or even 5 physical servers in order to increase the total amount of
>disk space made available to oVirt as a whole. And then from there, I
>would of course setup a number of virtual disks that I would attach
>back to that customer's VM.
>
>So to recap, if I were to have a 5-node Gluster Hyperconverged
>environment, I'm hoping that the data would still only be required to
>replicate across 3 nodes. Does this make sense? Is this how data
>replication works? Almost like a RAID -- add more drives, and the RAID
>gets expanded?
>
>Sent with ProtonMail Secure Email.
>
>‐‐‐ Original Message ‐‐‐
>On Tuesday, June 23, 2020 4:41 PM, Jayme  wrote:
>
>> Yes this is the point of hyperconverged. You only need three hosts to
>setup a proper hci cluster. I would recommend ssds for gluster storage.
>You could get away with non raid to save money since you can do replica
>three with gluster meaning your data is fully replicated across all
>three hosts.
>> 
>
>> On Tue, Jun 23, 2020 at 5:17 PM David White via Users
> wrote:
>> 
>
>> > Thanks.
>> > I've only been considering SSD drives for storage, as that is what
>I currently have in the cloud.
>> > 
>
>> > I think I've seen some things in the documents about oVirt and
>gluster hyperconverged.
>> > Is it possible to run oVirt and Gluster together on the same
>hardware? So 3 physical hosts would run CentOS or something, and I
>would install oVirt Node + Gluster onto the same base host OS? If so,
>then I could probably make that fit into my budget.
>> > 
>
>> > Sent with ProtonMail Secure Email.
>> > 
>
>> > ‐‐‐ Original Message ‐‐‐
>> > On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users
> wrote:
>> > 
>
>> > > Hey David,
>> > >
>> > 
>
>> > > keep in mind that you need some big NICs.
>> > > I started my oVirt lab with 1 Gbit NIC and later added 4
>dual-port 1 Gbit NICs and I had to create multiple gluster volumes and
>multiple storage domains.
>> > > Yet, windows VMs cannot use software raid for boot devices, thus
>it's a pain in the @$$.
>> > > I think that optimal is to have several 10Gbit NICs (at least 1
>for gluster and 1 for oVirt live migration).
>> > > Also, NVMEs can be used as lvm cache for spinning disks.
>> > >
>> > 
>
>> > > Best Regards,
>> > > Strahil Nikolov
>> > >
>> > 
>
> > > On 22 June 2020 at 18:50:01 GMT+03:00, David White
> >dmwhite...@protonmail.com wrote:
>> > >
>> > 
>
>> > > > > For migration between hosts you need a shared storage. SAN,
>Gluster,
>> > > > > CEPH, NFS, iSCSI are among the ones already supported (CEPH
>is a little
>> > > > > bit experimental).
>> > > >
>> > 
>
>> > > > Sounds like I'll be using NFS or Gluster after all.
>> > > > Thank you.
>> > > >
>> > 
>
>> > > > > The engine is just a management layer. KVM/qemu has that
>option a
>> > > > > long time ago, yet it's some manual work to do it.
>> > > > > Yeah, this environment that I'm building is expected to grow
>over time
>> > > > > (although that growth could go slowly), so I'm trying to
>architect
>> > > > > things properly now to make future growth easier to deal
>with. I'm also
>> > > > > trying to balance 

[ovirt-users] Parent checkpoint ID does not match the actual leaf checkpoint

2020-07-19 Thread Łukasz Kołaciński
Hello,
Thanks to previous answers, I was able to make backups. Unfortunately, we had
some infrastructure issues and, after the host reboots, new problems appeared. I
am not able to do any backup using the commands that worked yesterday. I looked
through the logs and there is something like this:

2020-07-17 15:06:30,644+02 ERROR 
[org.ovirt.engine.core.bll.StartVmBackupCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54) 
[944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup operation 
'StartVmBackup': {}: org.ovirt.engine.core.common.errors.EngineException: 
EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error = 
Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id': 
'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id': 
'116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': 'Parent checkpoint ID does 
not match the actual leaf checkpoint'}, code = 1610 (Failed with error 
unexpected and code 16)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at 
org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
at 
org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at 
org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)

And the last error is:

2020-07-17 15:13:45,835+02 ERROR 
[org.ovirt.engine.core.bll.StartVmBackupCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14) 
[f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup operation 
'GetVmBackupInfo': {}: org.ovirt.engine.core.common.errors.EngineException: 
EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error = 
No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 
'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': 'VM backup not 
exists: Domain backup job id not found: no domain backup job present'}, code = 
1601 (Failed with error unexpected and code 16)
(these errors are from full backup)

Like I said this is very strange because everything was working correctly.



Regards

Łukasz Kołaciński

Junior Java Developer

e-mail: l.kolacin...@storware.eu





ul. Leszno 8/44
01-192 Warszawa
www.storware.eu

[ovirt-users] Deploying new Ovirt Node on new 4.4 cluster. ansible code incorrect

2020-07-19 Thread erin . sims
Hi guys, we attempted to deploy a new oVirt cluster two weeks ago (4.4.1 and
4.4.0). Once we got the engine up we tried to add a new node and Ansible began to
fail on the deploy scripts. I searched the logs and discovered the Ansible script
was not evaluating the right version of Python (python3). Also, the yum module
should have been dnf. Once I changed those in many places it worked great. I am
just trying to figure out what went wrong with our deploy so this doesn't
happen again. We are using a hosted engine on a CentOS 8 VM. Please let me
know what logs I can grab or apps to run for more details.
details:


engine:
[root@ovirt ~]# uname -a 
Linux ovirt.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core) 
[root@ovirt ~]# uname -a 
Linux ovirt.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt ~]# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/;
BUG_REPORT_URL="https://bugs.centos.org/;


node2:

CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
[root@web2 ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core) 
[root@web2 ~]#  cat /etc/os-release 
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
VARIANT="oVirt Node 4.4.1.1"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.4.1.1"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/;
BUG_REPORT_URL="https://bugzilla.redhat.com/;
PLATFORM_ID="platform:el8"
[root@web2 ~]# uname -a 
Linux web2.ism.ld 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux
[root@web2 ~]#  cat /etc/os-release 
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
VARIANT="oVirt Node 4.4.1.1"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.4.1.1"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/;
BUG_REPORT_URL="https://bugzilla.redhat.com/;
PLATFORM_ID="platform:el8"
[root@web2 ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core) 

node1:
[root@web1 ~]# uname -a
Linux web1.ism.ld 4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9 13:49:54 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux
[root@web1 ~]# dnf update
CentOS-8 - Gluster 7
   
0.0  B/s |   0  B 00:00
Failed to download metadata for repo 'ovirt-4.4-centos-gluster7'
Error: Failed to download metadata for repo 'ovirt-4.4-centos-gluster7'
[root@web1 ~]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)

[root@web1 ~]#  cat /etc/os-release 
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
VARIANT="oVirt Node 4.4.0"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.4.0"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/;
BUG_REPORT_URL="https://bugzilla.redhat.com/;
PLATFORM_ID="platform:el8"

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XEZHEHWNWJW4THUSHXFUDG4TGEYVAEDP/


[ovirt-users] PKI Problem

2020-07-19 Thread ramon
Hi

I did a fresh installation of version 4.4.0.3. After the engine setup I 
replaced the apache certificate with a custom certificate. I used this article 
to do it: https://myhomelab.gr/linux/2020/01/20/replacing_ovirt_ssl.html

To summarize, I replaced those files with my own authority and the signed 
custom certificate

/etc/pki/ovirt-engine/keys/apache.key.nopass
/etc/pki/ovirt-engine/certs/apache.cer
/etc/pki/ovirt-engine/apache-ca.pem
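
A quick way to see which certificate each endpoint actually presents (host names
below are placeholders); port 443 on the engine is Apache, and 54322 is the
imageio daemon endpoint that shows up in the daemon log below, typically on the
host performing the transfer:

  echo | openssl s_client -connect engine.example.com:443 2>/dev/null \
      | openssl x509 -noout -issuer -subject
  echo | openssl s_client -connect host.example.com:54322 2>/dev/null \
      | openssl x509 -noout -issuer -subject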

That worked so far: Apache now uses my certificate and login is possible. To set up
a new machine, I need to upload an ISO image, which failed. I found this error
in /var/log/ovirt-imageio/daemon.log:

2020-07-08 20:43:23,750 INFO(Thread-10) [http] OPEN client=192.168.1.228
2020-07-08 20:43:23,767 INFO(Thread-10) [backends.http] Open backend 
netloc='the_secret_hostname:54322' 
path='/images/ef60404c-dc69-4a3d-bfaa-8571f675f3e1' 
cafile='/etc/pki/ovirt-engine/apache-ca.pem' secure=True
2020-07-08 20:43:23,770 ERROR   (Thread-10) [http] Server error
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
line 699, in __call__
self.dispatch(req, resp)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", 
line 744, in dispatch
return method(req, resp, *match.groups())
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", 
line 84, in wrapper
return func(self, req, resp, *args)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", 
line 66, in put
backends.get(req, ticket, self.config),
  File 
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py",
 line 53, in get
cafile=config.tls.ca_file)
  File 
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", 
line 48, in open
secure=options.get("secure", True))
  File 
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", 
line 63, in __init__
options = self._options()
  File 
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", 
line 364, in _options
self._con.request("OPTIONS", self.url.path)
  File "/usr/lib64/python3.6/http/client.py", line 1254, in request
self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1300, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 974, in send
self.connect()
  File "/usr/lib64/python3.6/http/client.py", line 1422, in connect
server_hostname=server_hostname)
  File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
_context=self, _session=session)
  File "/usr/lib64/python3.6/ssl.py", line 776, in __init__
self.do_handshake()
  File "/usr/lib64/python3.6/ssl.py", line 1036, in do_handshake
self._sslobj.do_handshake()
  File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
(_ssl.c:897)
2020-07-08 20:43:23,770 INFO(Thread-10) [http] CLOSE client=192.168.1.228 
[connection 1 ops, 0.019775 s] [dispatch 1 ops, 0.003114 s]

I'm a Python developer, so I had no problem reading the traceback.

The SSL handshake fails when image-io tries to connect to what I think is 
called an ovn-provider. But it is using my new authority certificate 
cafile='/etc/pki/ovirt-engine/apache-ca.pem' which does not validate the 
certificate generated by the ovirt engine setup, which the ovn-provider 
probably uses.

I didn't know exactly where the parameter for the validation CA file is.
Probably it is the ca_file parameter in
/etc/ovirt-imageio/conf.d/50-engine.conf. But that needs to be set to my own
authority's CA file.

I modified the Python file to set the ca_file parameter to the engine setup's
CA file directly:

/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/__init__.py

So the function call around line 50 looks like this:

backend = module.open(
ticket.url,
mode,
sparse=ticket.sparse,
dirty=ticket.dirty,
cafile='/etc/pki/ovirt-engine/ca.pem' #config.tls.ca_file
)

Now the image upload works, but obviously this is not the way to fix things. Is
there another way to make imageio accept the certificate from the engine
setup while using my custom certificate? I don't want to replace the
certificates of all oVirt components with custom certificates. I only need the
web login with my custom certificate.

Regards
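
A less invasive alternative worth trying instead of patching the backend module:
drop an override into ovirt-imageio's conf.d so that the TLS ca_file points back
at the engine CA, and leave the Apache certificate alone. This is a sketch, not
verified here; it assumes the daemon merges conf.d files in lexical order so a
99-* file overrides 50-engine.conf:

  cat > /etc/ovirt-imageio/conf.d/99-local.conf <<'EOF'
  [tls]
  ca_file = /etc/pki/ovirt-engine/ca.pem
  EOF
  systemctl restart ovirt-imageio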
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 

[ovirt-users] about ovirt fence

2020-07-19 Thread 崔涛的个人邮箱
If I configure fencing for oVirt hosts, is the fence system used to fence the host
or to fence the VMs running on the host?

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RRUWVGBITO7AQMBVFEJQ4MLVOL4VIGBU/


[ovirt-users] Upgrade from 4.3 to 4.4 fails with db or user ovirt_engine_history already exists

2020-07-19 Thread None via Users
Currently, our upgrade to 4.4 fails with error:
FATAL: Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' 
found and temporary ones created

We have upgraded the running 4.3 installation to the latest version and also 
use the latest packages for the upgrade on the new CentOS 8.2 installation. The 
back-up is made following the Hosted Engine upgrade steps in the manual, using: 
`engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log`

The upgrade is performed after copying the backup.bck file to the new server 
and using `hosted-engine --deploy --restore-from-file=backup.bck`

After creating the Engine VM, the installation process hangs when the backup is 
restored. We tried it several times, using a complete or a partial back-up.

Old/current oVirt version: 4.3.10.4-1.el7
New version: 4.4.1.8
ovirt-ansible-hosted-engine-setup: 1.1.6

Did anyone get the same error while upgrading an existing installation?
Thanks!

Error log Ansible on Host:

2020-07-15 12:34:09,361+0200 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Run 
engine-backup]
2020-07-15 12:35:28,778+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 
{'msg': 'non-zero return code', 'cmd': 'engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions', 
'stdout': "Start of engine-backup with mode 'restore'\nscope: all\narchive 
file: /root/engine_backup\nlog file: 
/var/log/ovirt-engine/setup/restore-backup-20200715103410.log\nPreparing to 
restore:\n- Unpacking file '/root/engine_backup'\nRestoring:\n- 
Files\n--\nPlease
 note:\n\nOperating system is different from the one used during 
backup.\nCurrent operating system: centos8\nOperating system at backup: 
centos7\n\nApache httpd configuration will not be restored.\nYou will be asked 
about it on the next engine-setup 
run.\n--
 \nProvisioning PostgreSQL users/databases:\n- user 
'engine', database 'engine'\n- extra user 'ovirt_engine_history' having grants 
on database engine, created with a random password\n- user 
'ovirt_engine_history', database 'ovirt_engine_history'", 'stderr': "FATAL: 
Existing database 'ovirt_engine_history' or user 'ovirt_engine_history' found 
and temporary ones created - Please clean up everything and try again", 'rc': 
1, 'start': '2020-07-15 12:34:10.824630', 'end': '2020-07-15 12:35:28.488261', 
'delta': '0:01:17.663631', 'changed': True, 'invocation': {'module_args': 
{'_raw_params': 'engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions', 
'_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines'
 : ["Start of engine-backup with mode 'restore'", 'scope: all', 'archive file: 
/root/engine_backup', 'log file: 
/var/log/ovirt-engine/setup/restore-backup-20200715103410.log', 'Preparing to 
restore:', "- Unpacking file '/root/engine_backup'", 'Restoring:', '- Files', 
'--',
 'Please note:', '', 'Operating system is different from the one used during 
backup.', 'Current operating system: centos8', 'Operating system at backup: 
centos7', '', 'Apache httpd configuration will not be restored.', 'You will be 
asked about it on the next engine-setup run.', 
'--',
 'Provisioning PostgreSQL users/databases:', "- user 'engine', database 
'engine'", "- extra user 'ovirt_engine_history' having grants on database 
engine, created with a random password", "- user 'ovirt_engine_history', 
database 'ovirt_engine_history'"], 'stderr_lines': ["FATAL: Existing d
 atabase 'ovirt_engine_history' or user 'ovirt_engine_history' found and 
temporary ones created - Please clean up everything and try again"], 
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_host': 
'ovirt-management.dc1.triplon', 'ansible_port': None, 'ansible_user': 'root'}}
2020-07-15 12:35:28,879+0200 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 
fatal: [localhost -> ovirt-management.dc1.triplon]: FAILED! => {"changed": 
true, "cmd": "engine-backup --mode=restore 
--log=/var/log/ovirt-engine/setup/restore-backup-$(date -u +%Y%m%d%H%M%S).log 
--file=/root/engine_backup --provision-all-databases --restore-permissions", 
"delta": "0:01:17.663631", "end": "2020-07-15 12:35:28.488261", "msg": 
"non-zero return code", "rc": 1, 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
Sorry, I only replied to the question. In addition to removing the
image from the images table, you may also need to set the parent as
the active image and remove the snapshot referenced by this image from
the database. Can you provide the output of:
$ psql -U engine -d engine -c "select * from images where
image_group_id = ";

As well as
$ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
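
Before any manual DELETE it is worth taking at least a DB-scoped engine backup
(file paths below are placeholders):

  engine-backup --scope=db --mode=backup \
      --file=/root/engine-db-before-cleanup.bck \
      --log=/root/engine-db-before-cleanup.log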

On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik  wrote:
>
> It can be done by deleting from the images table:
> $ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
> '6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
>
> of course the database should be backed up before doing this
>
>
>
> On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
> >
> > On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
> >  wrote:
> >
> > > It looks like the Pivot completed successfully, see attached vdsm.log.
> > > Is there a way to recover that VM?
> > > Or would it be better to recover the VM from Backup?
> >
> > This what we see in the log:
> >
> > 1. Merge request received
> >
> > 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> > merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> > u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> > u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> > u'0002-0002-0002-0002-0289'},
> > baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> > topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> > from=:::10.34.38.31,39226,
> > flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> > vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
> >
> > To track this job, we can use the jobUUID: 
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> > and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
> >
> > 2. Starting the merge
> >
> > 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> > jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> > chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> > top=None, bandwidth=0, flags=12 (vm:5945)
> >
> > We see the original chain:
> > 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
> >
> > 3. The merge was completed, ready for pivot
> >
> > 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > is ready (vm:5847)
> >
> > At this point parent volume contains all the data in top volume and we can 
> > pivot
> > to the parent volume.
> >
> > 4. Vdsm detect that the merge is ready, and start the clean thread
> > that will complete the merge
> >
> > 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> > for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
> >
> > 5. Requesting pivot to parent volume:
> >
> > 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> > complete active layer commit (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
> >
> > 6. Pivot was successful
> >
> > 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> > for drive 
> > /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> > has completed (vm:5838)
> >
> > 7. Vdsm wait until libvirt updates the xml:
> >
> > 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> > (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> > 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
> >
> > 8. Syncronizing vdsm metadata
> >
> > 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> > imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> > imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
> > volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
> > newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
> > task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)
> >
> > We can see the new chain:
> > ['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']
> >
> > 2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
> > Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)
> >
> > The old chain:
> > 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> > 

[ovirt-users] Re: oVirt install questions

2020-07-19 Thread David White via Users
Thanks again for explaining all of this to me.
Much appreciated.

Regarding the hyperconverged environment, reviewing 
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html,
 it appears to state that you need, exactly, 3 physical servers.

Is it possible to run a hyperconverged environment with more than 3 physical 
servers?
Because of the way that the gluster triple-redundancy works, I knew that I 
would need to size all 3 physical servers' SSD drives to store 100% of the 
data, but there's a possibility that 1 particular (future) customer is going to 
need about 10TB of disk space.

For that reason, I'm thinking about what it would look like to have 4 or even 5 
physical servers in order to increase the total amount of disk space made 
available to oVirt as a whole. And then from there, I would of course setup a 
number of virtual disks that I would attach back to that customer's VM.

So to recap, if I were to have a 5-node Gluster Hyperconverged environment, I'm 
hoping that the data would still only be required to replicate across 3 nodes. 
Does this make sense? Is this how data replication works? Almost like a RAID -- 
add more drives, and the RAID gets expanded?

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Tuesday, June 23, 2020 4:41 PM, Jayme  wrote:

> Yes this is the point of hyperconverged. You only need three hosts to setup a 
> proper hci cluster. I would recommend ssds for gluster storage. You could get 
> away with non raid to save money since you can do replica three with gluster 
> meaning your data is fully replicated across all three hosts.
> 

> On Tue, Jun 23, 2020 at 5:17 PM David White via Users  wrote:
> 

> > Thanks.
> > I've only been considering SSD drives for storage, as that is what I 
> > currently have in the cloud.
> > 

> > I think I've seen some things in the documents about oVirt and gluster 
> > hyperconverged.
> > Is it possible to run oVirt and Gluster together on the same hardware? So 3 
> > physical hosts would run CentOS or something, and I would install oVirt 
> > Node + Gluster onto the same base host OS? If so, then I could probably 
> > make that fit into my budget.
> > 

> > Sent with ProtonMail Secure Email.
> > 

> > ‐‐‐ Original Message ‐‐‐
> > On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users 
> >  wrote:
> > 

> > > Hey David,
> > >
> > 

> > > keep in mind that you need some big NICs.
> > > I started my oVirt lab with 1 Gbit NIC and later added 4 dual-port 1 Gbit 
> > > NICs and I had to create multiple gluster volumes and multiple storage 
> > > domains.
> > > Yet, windows VMs cannot use software raid for boot devices, thus it's a 
> > > pain in the @$$.
> > > I think that optimal is to have several 10Gbit NICs (at least 1 for 
> > > gluster and 1 for oVirt live migration).
> > > Also, NVMEs can be used as lvm cache for spinning disks.
> > >
> > 

> > > Best Regards,
> > > Strahil Nikolov
> > >
> > 

> > > На 22 юни 2020 г. 18:50:01 GMT+03:00, David White 
> > > dmwhite...@protonmail.com написа:
> > >
> > 

> > > > > For migration between hosts you need a shared storage. SAN, Gluster,
> > > > > CEPH, NFS, iSCSI are among the ones already supported (CEPH is a 
> > > > > little
> > > > > bit experimental).
> > > >
> > 

> > > > Sounds like I'll be using NFS or Gluster after all.
> > > > Thank you.
> > > >
> > 

> > > > > The engine is just a management layer. KVM/qemu has that option a
> > > > > long time ago, yet it's some manual work to do it.
> > > > > Yeah, this environment that I'm building is expected to grow over time
> > > > > (although that growth could go slowly), so I'm trying to architect
> > > > > things properly now to make future growth easier to deal with. I'm 
> > > > > also
> > > > > trying to balance availability concerns with budget constraints
> > > > > starting out.
> > > >
> > 

> > > > Given that NFS would also be a single point of failure, I'll probably
> > > > go with Gluster, as long as I can fit the storage requirements into the
> > > > overall budget.
> > > > Sent with ProtonMail Secure Email.
> > > > ‐‐‐ Original Message ‐‐‐
> > > > On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> > > > users@ovirt.org wrote:
> > > >
> > 

> > > > > На 22 юни 2020 г. 11:06:16 GMT+03:00, David White via
> > > > > usersus...@ovirt.org написа:
> > > >
> > 

> > > > > > Thank you and Strahil for your responses.
> > > > > > They were both very helpful.
> > > >
> > 

> > > > > > > I think a hosted engine installation VM wants 16GB RAM configured
> > > > > > > though I've built older versions with 8GB RAM.
> > > > > > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > > > > > CentOS7 was OK with 1, CentOS6 maybe 512K.
> > > > > > > The tendency is always increasing with updated OS versions.
> > > >
> > 

> > > > > > Ok, so to clarify my question a little bit, I'm trying to figure
> > > > > > out
> > > > >
> > 

> > > > > > 

[ovirt-users] Re: Strange SD problem

2020-07-19 Thread Arsène Gschwind
Hi,

Any hint on this?
I'm wondering whether the LUN serial is read from somewhere or computed
somehow.
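
For what it's worth, the serial stored in the luns table looks like the
vendor/product/unit-serial string reported by the host for the device, so
comparing what the hosts currently report against the DB may show whether the
array-side serial really changed. A sketch, with placeholder device names:

  multipath -ll | grep -B1 -A4 HUAWEI      # WWID and paths per multipath device
  /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/WWID
  sg_inq --page=0x80 /dev/mapper/WWID      # unit serial number VPD page (sg3_utils)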


On Fri, 2020-07-17 at 06:33 +, Arsène Gschwind wrote:
Hi,

I have some running VMs on it, so this isn't easy...
If I remove that LUN from the SD, what will happen to the disks defined in that
SD? Will they get lost?


On Thu, 2020-07-16 at 17:58 +0300, Ahmad Khiet wrote:
Hi,
if its the same LUN, then why not remove and import back?


On Thu, Jul 16, 2020 at 3:21 PM Arsène Gschwind 
mailto:arsene.gschw...@unibas.ch>> wrote:
Hi,

We compared engine backups and found some differences in the LUNs.

"public"."luns"  (restored db from 2020.04.09)

  physical_volume_id: wEx3tY-OELy-gOtD-CFDp-az4D-EyYO-1SAAqd
  lun_id:             repl_HanaDB_osd_01
  volume_group_id:    a1q5Jr-Bd7h-wEVJ-9b0C-Ggnr-M1JI-kyXeDV
  serial:             SHUAWEI_XSG1_2102350RMG10HC210053
  lun_mapping:        6
  vendor_id:          HUAWEI
  product_id:         XSG1
  device_size:        4096
  discard_max_size:   268435456

  physical_volume_id: DPUtaW-Q5zp-aZos-HriP-5Z0v-hiWO-w7rmwG
  lun_id:             repl_HanaLogs_osd_01
  volume_group_id:    4TCXZ7-R1l1-xkdU-u0vx-S3n4-JWcE-qksPd1
  serial:             SHUAWEI_XSG1_2102350RMG10HC200035
  lun_mapping:        7
  vendor_id:          HUAWEI
  product_id:         XSG1
  device_size:        2048
  discard_max_size:   268435456

"public"."luns"  (current db)

  physical_volume_id: wEx3tY-OELy-gOtD-CFDp-az4D-EyYO-1SAAqd
  lun_id:             repl_HanaDB_osd_01
  volume_group_id:    a1q5Jr-Bd7h-wEVJ-9b0C-Ggnr-M1JI-kyXeDV
  serial:             SHUAWEI_XSG1_2102350RMG10HC210053
  lun_mapping:        6
  vendor_id:          HUAWEI
  product_id:         XSG1
  device_size:        4096
  discard_max_size:   268435456

  physical_volume_id: (empty)
  lun_id:             repl_HanaLogs_osd_01
  volume_group_id:    (empty)
  serial:             SHUAWEI_XSG1_2102350RMG10HC210054
  lun_mapping:        7
  vendor_id:          HUAWEI
  product_id:         XSG1
  device_size:        2548
  discard_max_size:   268435456

We observed that the physical_volume_id and volume_group_id are missing for the
corrupt SD.
We also observed that the serial has changed on the corrupted SD/LUN.
Is the serial calculated or read from somewhere?
Would it be possible to inject the missing values into the engine DB to recover
to a consistent state?

Thanks for any help.
Arsene


On Wed, 2020-07-15 at 13:24 +0300, Ahmad Khiet wrote:
Hi Arsène,

can you please send which version are you referring to?

As shown in the log: "Storage domains with IDs
[6b82f31b-fa2a-406b-832d-64d9666e1bcc] could not be synchronized. To
synchronize them, please move them to maintenance and then activate."
Can you put the domain into maintenance and then activate it again so it gets
synced?
I guess it is out of sync, and that's why the "Add" button appears for LUNs that
are already added.
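
If doing this outside the admin portal is preferred, roughly the same
maintenance/activate cycle can be driven through the REST API. This is only a
sketch: the engine FQDN, the admin password and the data-center ID below are
placeholders, while the SD ID is the one from the warning above.

  SD=6b82f31b-fa2a-406b-832d-64d9666e1bcc
  DC=<data-center-id>
  API=https://engine.example.com/ovirt-engine/api

  # Move the attached storage domain to maintenance ...
  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
       -X POST -d '<action/>' "$API/datacenters/$DC/storagedomains/$SD/deactivate"

  # ... and activate it again once it reports Maintenance
  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
       -X POST -d '<action/>' "$API/datacenters/$DC/storagedomains/$SD/activate"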



On Tue, Jul 14, 2020 at 4:58 PM Arsène Gschwind  wrote:
Hi Ahmad,

I did the following:

1. Storage -> Storage Domains
2. Click the existing Storage Domain and click "Manage Domain"
Then I see the "Add" button next to the LUN which is already part of the SD.

I do not want to click "Add" since it may destroy the existing SD or the content
of the LUNs.
In the engine log I see the following:


2020-07-14 09:57:45,131+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-20) [277145f2] EVENT_ID: 
STORAGE_DOMAINS_COULD_NOT_BE_SYNCED(1,046), Storage domains with IDs 
[6b82f31b-fa2a-406b-832d-64d9666e1bcc] could not be synchronized. To 
synchronize them, please move them to maintenance and then activate.

Thanks a lot

On Tue, 2020-07-14 at 16:07 +0300, Ahmad Khiet wrote:
Hi Arsène Gschwind,

It's really strange that you see "Add" for a LUN that has already been added to
the database.
To verify, these are the steps you did at first:
1- Storage -> Storage Domains
2- New Domain - [ select iSCSI ]
3- Click "+" on the iSCSI target, then you see that the "Add" button is available
4- After clicking "Add" and "OK", this error is shown in the logs
Is that right?

Can you also attach the vdsm log?
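
In case it helps, on a default installation the vdsm log is on the host at
/var/log/vdsm/vdsm.log, and the SD UUID from the warning above is a convenient
grep key (a sketch; adjust paths as needed):

  grep 6b82f31b-fa2a-406b-832d-64d9666e1bcc /var/log/vdsm/vdsm.log | tail -n 50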




On Tue, Jul 14, 2020 at 1:15 PM Arsène Gschwind  wrote:
Hello all,

I've checked all my multipath configuration and everything seems correct.
Is there a way to fix this, maybe in the DB?

I really need some help, thanks a lot.
Arsène

On Tue, 2020-07-14 at 00:29 +, Arsène Gschwind wrote:
Hi,

I'm seeing strange behavior with an SD. When trying to manage the SD I see the
"Add" button for the LUN which should already be the one used for that SD.
In the Logs I see the following:

2020-07-13 17:48:07,292+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.BatchProcedureExecutionConnectionCallback] 
(EE-ManagedThreadFactory-engine-Thread-95) [51091853] Can't execute batch: 
Batch entry 0 select * from 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-19 Thread Benny Zlotnik
It can be done by deleting the row from the images table:
$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';"

Of course, the database should be backed up before doing this.
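
A slightly more defensive way to do it (just a sketch, assuming this is run on the
engine machine and that engine-backup and the default engine/engine DB credentials
are available): take a DB backup first, then run the delete inside an explicit
transaction so the reported row count can be checked before committing.

  # 1. Back up the engine database
  engine-backup --scope=db --mode=backup \
      --file=/root/engine-db-before-delete.backup --log=/root/engine-backup.log

Then, inside psql (psql -U engine -d engine):

  BEGIN;
  DELETE FROM images WHERE image_guid = '6197b30d-0732-4cc7-aef0-12f9f6e9565b';
  -- expect "DELETE 1"; COMMIT if that is what psql reports, otherwise ROLLBACK
  COMMIT;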



On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer  wrote:
>
> On Thu, Jul 16, 2020 at 11:33 AM Arsène Gschwind
>  wrote:
>
> > It looks like the Pivot completed successfully, see attached vdsm.log.
> > Is there a way to recover that VM?
> > Or would it be better to recover the VM from Backup?
>
> This is what we see in the log:
>
> 1. Merge request received
>
> 2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START
> merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e',
> u'volumeID': u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID':
> u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID':
> u'0002-0002-0002-0002-0289'},
> baseVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8',
> topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0',
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97')
> from=:::10.34.38.31,39226,
> flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227,
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:48)
>
> To track this job, we can use the jobUUID: 
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> and the top volume UUID: 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 2. Starting the merge
>
> 2020-07-13 11:18:30,690+0200 INFO  (jsonrpc/7) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting merge with
> jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97', original
> chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top), disk='sda', base='sda[1]',
> top=None, bandwidth=0, flags=12 (vm:5945)
>
> We see the original chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 3. The merge was completed, ready for pivot
>
> 2020-07-13 11:19:00,992+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> is ready (vm:5847)
>
> At this point the parent volume contains all the data in the top volume and we
> can pivot to the parent volume.
>
> 4. Vdsm detects that the merge is ready, and starts the cleanup thread
> that will complete the merge
>
> 2020-07-13 11:19:06,166+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Starting cleanup thread
> for job: 720410c3-f1a0-4b25-bf26-cf40aa6b1f97 (vm:5809)
>
> 5. Requesting pivot to parent volume:
>
> 2020-07-13 11:19:06,717+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Requesting pivot to
> complete active layer commit (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6205)
>
> 6. Pivot was successful
>
> 2020-07-13 11:19:06,734+0200 INFO  (libvirt/events) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Block job ACTIVE_COMMIT
> for drive 
> /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/6197b30d-0732-4cc7-aef0-12f9f6e9565b
> has completed (vm:5838)
>
> 7. Vdsm waits until libvirt updates the XML:
>
> 2020-07-13 11:19:06,756+0200 INFO  (merge/720410c3) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Pivot completed (job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97) (vm:6219)
>
> 8. Synchronizing vdsm metadata
>
> 2020-07-13 11:19:06,776+0200 INFO  (merge/720410c3) [vdsm.api] START
> imageSyncVolumeChain(sdUUID='33777993-a3a5-4aad-a24c-dfe5e473faca',
> imgUUID='d7bd480d-2c51-4141-a386-113abf75219e',
> volUUID='6197b30d-0732-4cc7-aef0-12f9f6e9565b',
> newChain=['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']) from=internal,
> task_id=b8f605bd-8549-4983-8fc5-f2ebbe6c4666 (api:48)
>
> We can see the new chain:
> ['8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8']
>
> 2020-07-13 11:19:07,005+0200 INFO  (merge/720410c3) [storage.Image]
> Current chain=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)  (image:1221)
>
> The old chain:
> 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 <
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b (top)
>
> 2020-07-13 11:19:07,006+0200 INFO  (merge/720410c3) [storage.Image]
> Unlinking subchain: ['6197b30d-0732-4cc7-aef0-12f9f6e9565b']
> (image:1231)
> 2020-07-13 11:19:07,017+0200 INFO  (merge/720410c3) [storage.Image]
> Leaf volume 6197b30d-0732-4cc7-aef0-12f9f6e9565b is being removed from
> the chain. Marking it ILLEGAL to prevent data corruption (image:1239)
>
> This matches what we see on storage.
>
> 9. Merge job is untracked
>
> 2020-07-13 11:19:21,134+0200 INFO  (periodic/1) [virt.vm]
> (vmId='b5534254-660f-44b1-bc83-d616c98ba0ba') Cleanup thread
> 
> successfully completed, untracking job
> 720410c3-f1a0-4b25-bf26-cf40aa6b1f97
> (base=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8,
> top=6197b30d-0732-4cc7-aef0-12f9f6e9565b) (vm:5752)
>
> This was a successful 
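
For completeness, one way to double-check the resulting on-disk chain after such a
pivot (a sketch only; it assumes the base volume is activated and accessible on the
host, and that its path follows the /rhev/data-center layout quoted above):

  qemu-img info --backing-chain \
    /rhev/data-center/mnt/blockSD/33777993-a3a5-4aad-a24c-dfe5e473faca/images/d7bd480d-2c51-4141-a386-113abf75219e/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

After a clean pivot the output should show a single image with no backing file.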

[ovirt-users] Ovirt Hosted Engine Setup

2020-07-19 Thread Vijay Sachdeva via Users
Hi All,

 

I am trying to deploy the hosted engine setup, and it has been stuck at the step
below for hours:

[screenshot not preserved in this archive]

Is this a known bug?

 

Thanks

Vijay Sachdeva

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P262DKO5PXV7LJIWMM4TMZH6IHKPLFFX/