[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Stephen Panicho
I used the cockpit-based hyperconverged (HC) setup, and "option rpc-auth-allow-insecure" is
absent from /etc/glusterfs/glusterd.vol.

I'm going to redo the cluster this week and report back. Thanks for the tip!
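
For reference, the suggested change boils down to something like the following on each Gluster host (the volume name "vmstore" is taken from the engine log further down the thread; the option has to go inside the existing "volume management ... end-volume" block, and exact behaviour may differ per Gluster version):

# check whether the option is already present
grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol

# if not, edit /etc/glusterfs/glusterd.vol and add, inside "volume management":
#     option rpc-auth-allow-insecure on
# then restart the management daemon
systemctl restart glusterd

# and allow unprivileged client ports on the volume itself
gluster volume set vmstore server.allow-insecure on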

On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic wrote:

> The hosts will still mount the volume via FUSE, but you might double check
> you set the storage up as Gluster and not NFS.
>
> Then gluster used to need some config in glusterd.vol to set
>
> option rpc-auth-allow-insecure on
>
> I’m not sure if that got added to a hyperconverged setup or not, but I’d
> check it.
>
> On Feb 10, 2020, at 4:41 PM, Stephen Panicho  wrote:
>
> No, this was a relatively new cluster-- only a couple days old. Just a
> handful of VMs including the engine.
>
> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>
>> Curious do the vms have active snapshots?
>>
>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>
>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running
>>> on CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
>>> about libgfapi, so I thought I'd give it a shot. Looking through the
>>> documentation, followed by lots of threads and BZ reports, I've done the
>>> following to enable it:
>>>
>>> First, I shut down all VMs except the engine. Then...
>>>
>>> On the hosts:
>>> 1. setsebool -P virt_use_glusterfs on
>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>
>>> On the engine VM:
>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>> 2. systemctl restart ovirt-engine
>>>
>>> VMs now fail to launch. Am I doing this correctly? I should also note
>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>
>>> Here's a relevant bit from engine.log:
>>>
>>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>>> Could not read qcow2 header: Invalid argument.
>>>
>>> The full engine.log from one of the attempts:
>>>
>>> 2020-02-06 16:38:24,909Z INFO
>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-12) [] add VM
>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>>> 2020-02-06 16:38:25,010Z ERROR
>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
>>> (ForkJoinPool-1-worker-12) [] Rerun VM
>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
>>> node2.ovirt.trashnet.xyz'
>>> 2020-02-06 16:38:25,091Z WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
>>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
>>> node2.ovirt.trashnet.xyz.
>>> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
>>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
>>> sharedLocks=''}'
>>> 2020-02-06 16:38:25,179Z INFO
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>>> IsVmDuringInitiatingVDSCommand(
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>>> log id: 2107f52a
>>> 2020-02-06 16:38:25,181Z INFO
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
>>> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
>>> RunVmCommand internal: false. Entities affected :  ID:
>>> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
>>> type USER
>>> 2020-02-06 16:38:25,313Z INFO
>>> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
>>> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
>>> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
>>> 2020-02-06 16:38:25,382Z INFO
>>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>>> UpdateVmDynamicDataVDSCommand(
>>> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
>>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b',
>>> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
>>> log id: 4a83911f
>>> 2020-02-06 16:38:25,417Z INFO
>>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>>> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
>>> 2020-02-06 16:38:25,418Z INFO
>>> 

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Darrell Budic
The hosts will still mount the volume via FUSE, but you might double check you 
set the storage up as Gluster and not NFS.

Then gluster used to need some config in glusterd.vol to set 

option rpc-auth-allow-insecure on

I’m not sure if that got added to a hyperconverged setup or not, but I’d check 
it.

> On Feb 10, 2020, at 4:41 PM, Stephen Panicho  wrote:
> 
> No, this was a relatively new cluster-- only a couple days old. Just a 
> handful of VMs including the engine.
> 
> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
> Curious do the vms have active snapshots?
> 
> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on 
> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard 
> about libgfapi, so I thought I'd give it a shot. Looking through the 
> documentation, followed by lots of threads and BZ reports, I've done the 
> following to enable it:
> 
> First, I shut down all VMs except the engine. Then...
> 
> On the hosts:
> 1. setsebool -P virt_use_glusterfs on
> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
> 
> On the engine VM:
> 1. engine-config -s LibgfApiSupported=true --cver=4.3
> 2. systemctl restart ovirt-engine
> 
> VMs now fail to launch. Am I doing this correctly? I should also note that 
> the hosts still have the Gluster domain mounted via FUSE.
> 
> Here's a relevant bit from engine.log:
> 
> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive 
> file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
> Could not read qcow2 header: Invalid argument.
> 
> The full engine.log from one of the attempts:
> 
> 2020-02-06 16:38:24,909Z INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (ForkJoinPool-1-worker-12) [] add VM 
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
> 2020-02-06 16:38:25,010Z ERROR 
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
> (ForkJoinPool-1-worker-12) [] Rerun VM 
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS 
> 'node2.ovirt.trashnet.xyz'
> 2020-02-06 16:38:25,091Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: 
> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host 
> node2.ovirt.trashnet.xyz.
> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object 
> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', 
> sharedLocks=''}'
> 2020-02-06 16:38:25,179Z INFO  
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] START, 
> IsVmDuringInitiatingVDSCommand( 
> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>  log id: 2107f52a
> 2020-02-06 16:38:25,181Z INFO  
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command: RunVmCommand 
> internal: false. Entities affected :  ID: 
> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role 
> type USER
> 2020-02-06 16:38:25,313Z INFO  
> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine 
> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for 
> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
> 2020-02-06 16:38:25,382Z INFO  
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
> (EE-ManagedThreadFactory-engine-Thread-216) [] START, 
> UpdateVmDynamicDataVDSCommand( 
> UpdateVmDynamicDataVDSCommandParameters:{hostId='null', 
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', 
> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
>  log id: 4a83911f
> 2020-02-06 16:38:25,417Z INFO  
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
> (EE-ManagedThreadFactory-engine

[ovirt-users] Re: I wrote an article on using Ansible to backup oVirt VMs

2020-02-10 Thread Torsten Stolpmann

Thanks Jayme, much appreciated!

On 10.02.2020 16:59, Jayme wrote:
I've been part of this mailing list for a while now and have received a 
lot of great advice and help on various subjects. I read the list daily 
and one thing I've noticed is that many users are curious about backup 
options for oVirt (myself included). I wanted to share with the 
community a solution I've come up with to easily backup multiple running 
oVirt VMs to OVA format using some basic Ansible playbooks. I've put 
together a blog post detailing the process which also includes links to 
a Github repo containing the playbooks here: 
https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43


Any feedback, suggestions or questions are welcome. I hope this 
information is helpful.


Thanks!

- Jayme



[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Stephen Panicho
No, this was a relatively new cluster-- only a couple days old. Just a
handful of VMs including the engine.

On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:

> Curious do the vms have active snapshots?
>
> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>
>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on
>> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
>> about libgfapi, so I thought I'd give it a shot. Looking through the
>> documentation, followed by lots of threads and BZ reports, I've done the
>> following to enable it:
>>
>> First, I shut down all VMs except the engine. Then...
>>
>> On the hosts:
>> 1. setsebool -P virt_use_glusterfs on
>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>
>> On the engine VM:
>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>> 2. systemctl restart ovirt-engine
>>
>> VMs now fail to launch. Am I doing this correctly? I should also note
>> that the hosts still have the Gluster domain mounted via FUSE.
>>
>> Here's a relevant bit from engine.log:
>>
>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>> Could not read qcow2 header: Invalid argument.
>>
>> The full engine.log from one of the attempts:
>>
>> 2020-02-06 16:38:24,909Z INFO
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>> (ForkJoinPool-1-worker-12) [] add VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>> 2020-02-06 16:38:25,010Z ERROR
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
>> (ForkJoinPool-1-worker-12) [] Rerun VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
>> node2.ovirt.trashnet.xyz'
>> 2020-02-06 16:38:25,091Z WARN
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
>> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
>> node2.ovirt.trashnet.xyz.
>> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
>> sharedLocks=''}'
>> 2020-02-06 16:38:25,179Z INFO
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> IsVmDuringInitiatingVDSCommand(
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
>> log id: 2107f52a
>> 2020-02-06 16:38:25,181Z INFO
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
>> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
>> RunVmCommand internal: false. Entities affected :  ID:
>> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
>> type USER
>> 2020-02-06 16:38:25,313Z INFO
>> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
>> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
>> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
>> 2020-02-06 16:38:25,382Z INFO
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> UpdateVmDynamicDataVDSCommand(
>> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b',
>> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
>> log id: 4a83911f
>> 2020-02-06 16:38:25,417Z INFO
>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
>> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
>> 2020-02-06 16:38:25,418Z INFO
>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand(
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
>> 5e07ba66
>> 2020-02-06 16:38:25,420Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
>> CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz,
>> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
>> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
>> 1bfa03c4
>> 2020-02-06 16:38:25,424Z INFO
>> [

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread Jayme
Curious do the vms have active snapshots?

On Mon, Feb 10, 2020 at 5:59 PM  wrote:

> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on
> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
> about libgfapi, so I thought I'd give it a shot. Looking through the
> documentation, followed by lots of threads and BZ reports, I've done the
> following to enable it:
>
> First, I shut down all VMs except the engine. Then...
>
> On the hosts:
> 1. setsebool -P virt_use_glusterfs on
> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>
> On the engine VM:
> 1. engine-config -s LibgfApiSupported=true --cver=4.3
> 2. systemctl restart ovirt-engine
>
> VMs now fail to launch. Am I doing this correctly? I should also note that
> the hosts still have the Gluster domain mounted via FUSE.
>
> Here's a relevant bit from engine.log:
>
> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
> Could not read qcow2 header: Invalid argument.
>
> The full engine.log from one of the attempts:
>
> 2020-02-06 16:38:24,909Z INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-12) [] add VM
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
> 2020-02-06 16:38:25,010Z ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
> (ForkJoinPool-1-worker-12) [] Rerun VM
> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
> node2.ovirt.trashnet.xyz'
> 2020-02-06 16:38:25,091Z WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID:
> USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host
> node2.ovirt.trashnet.xyz.
> 2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]',
> sharedLocks=''}'
> 2020-02-06 16:38:25,179Z INFO
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> IsVmDuringInitiatingVDSCommand(
> IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
> log id: 2107f52a
> 2020-02-06 16:38:25,181Z INFO
> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
> IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
> 2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Running command:
> RunVmCommand internal: false. Entities affected :  ID:
> df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role
> type USER
> 2020-02-06 16:38:25,313Z INFO
> [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine
> 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for
> 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
> 2020-02-06 16:38:25,382Z INFO
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> UpdateVmDynamicDataVDSCommand(
> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b',
> vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}),
> log id: 4a83911f
> 2020-02-06 16:38:25,417Z INFO
> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH,
> UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
> 2020-02-06 16:38:25,418Z INFO
> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand(
> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
> 5e07ba66
> 2020-02-06 16:38:25,420Z INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-216) [] START,
> CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz,
> CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452',
> vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id:
> 1bfa03c4
> 2020-02-06 16:38:25,424Z INFO
> [org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils]
> (EE-ManagedThreadFactory-engine-Thread-216) [] Kernel FIPS - Guid:
> c3465ca2-395e-4c0c-b72e-b5b7153df452 fips: false
> 2020-02-06 16:38:25,435Z INFO
> [org.ovirt.engine.core.vdsbroker.vd

[ovirt-users] Re: I wrote an article on using Ansible to backup oVirt VMs

2020-02-10 Thread s . panicho
This is excellent! Thanks for sharing.


[ovirt-users] Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-10 Thread s . panicho
Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on 
CentOS 7.7 hosts. I was investigating poor Gluster performance and heard about 
libgfapi, so I thought I'd give it a shot. Looking through the documentation, 
followed by lots of threads and BZ reports, I've done the following to enable 
it:

First, I shut down all VMs except the engine. Then...

On the hosts:
1. setsebool -P virt_use_glusterfs on
2. dynamic_ownership=0 in /etc/libvirt/qemu.conf

On the engine VM:
1. engine-config -s LibgfApiSupported=true --cver=4.3
2. systemctl restart ovirt-engine

VMs now fail to launch. Am I doing this correctly? I should also note that the 
hosts still have the Gluster domain mounted via FUSE.
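
For completeness, this is how I've been double-checking that the settings took effect (the VM name below is just an example):

# on each host
getsebool virt_use_glusterfs
grep '^dynamic_ownership' /etc/libvirt/qemu.conf

# on the engine VM
engine-config -g LibgfApiSupported

# on the host a VM was started on, to see whether the disk was handed to qemu
# as a gluster:// network disk rather than a FUSE path
virsh -r dumpxml yumcache | grep -A2 'disk type'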

Here's a relevant bit from engine.log:

2020-02-06T16:38:32.573511Z qemu-kvm: -drive 
file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
 Could not read qcow2 header: Invalid argument.
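
One way to check whether gfapi itself can read that image, independently of oVirt/vdsm, is to point qemu-img at the same gluster:// URL from the host (assuming the local qemu build includes the gluster block driver, which the qemu-kvm-ev packages normally do):

qemu-img info gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78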

The full engine.log from one of the attempts:

2020-02-06 16:38:24,909Z INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-12) [] add VM 
'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
2020-02-06 16:38:25,010Z ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
(ForkJoinPool-1-worker-12) [] Rerun VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. 
Called from VDS 'node2.ovirt.trashnet.xyz'
2020-02-06 16:38:25,091Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: 
USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host 
node2.ovirt.trashnet.xyz.
2020-02-06 16:38:25,166Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', 
sharedLocks=''}'
2020-02-06 16:38:25,179Z INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}),
 log id: 2107f52a
2020-02-06 16:38:25,181Z INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
2020-02-06 16:38:25,298Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] Running command: RunVmCommand 
internal: false. Entities affected :  ID: df9dbac4-35c0-40ee-acd4-a1cfc959aa8b 
Type: VMAction group RUN_VM with role type USER
2020-02-06 16:38:25,313Z INFO  
[org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] 
(EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine 
'pc-q35-rhel7.6.0' which is different than that of the cluster is set for 
'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
2020-02-06 16:38:25,382Z INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] START, 
UpdateVmDynamicDataVDSCommand( 
UpdateVmDynamicDataVDSCommandParameters:{hostId='null', 
vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', 
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}), 
log id: 4a83911f
2020-02-06 16:38:25,417Z INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, 
UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
2020-02-06 16:38:25,418Z INFO  
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand( 
CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', 
vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 
5e07ba66
2020-02-06 16:38:25,420Z INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] START, 
CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz, 
CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', 
vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 
1bfa03c4
2020-02-06 16:38:25,424Z INFO  
[org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils] 
(EE-ManagedThreadFactory-engine-Thread-216) [] Kernel FIPS - Guid: 
c3465ca2-395e-4c0c-b72e-b5b7153df452 fips: false
2020-02-06 16:38:25,435Z INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-216) [] VM <domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
  <name>yumcache</name>
  <uuid>df9dbac4-35c0-40ee-acd4-a1cf

[ovirt-users] I wrote an article on using Ansible to backup oVirt VMs

2020-02-10 Thread Jayme
I've been part of this mailing list for a while now and have received a lot
of great advice and help on various subjects. I read the list daily and one
thing I've noticed is that many users are curious about backup options for
oVirt (myself included). I wanted to share with the community a solution
I've come up with to easily backup multiple running oVirt VMs to OVA format
using some basic Ansible playbooks. I've put together a blog post detailing
the process which also includes links to a Github repo containing the
playbooks here:
https://blog.silverorange.com/backing-up-ovirt-vms-with-ansible-4c2fca8b3b43

Any feedback, suggestions or questions are welcome. I hope this information
is helpful.

Thanks!

- Jayme


[ovirt-users] Re: Update hosted engine to 4.3.8 from 4.3.7 questions

2020-02-10 Thread d03
Update successful.  Thanks.  :)


[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
On Mon, Feb 10, 2020 at 4:13 PM Jorick Astrego  wrote:

>
> On 2/10/20 1:27 PM, Jorick Astrego wrote:
>
> Hmm, I didn't notice that.
>
> I did a check on the NFS server and I found the
> "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path
> (/data/exportdom).
>
> This was an old NFS export domain that has been deleted for a while now. I
> remember finding somewhere an issue with old domains still being active
> after removal but I cannot find it now.
>
> I unexported the directory on the NFS server and now I have the correct
> mount and it activates fine.
>
> Thanks!
>
> Still weird that it picks another NFS mount path that was removed from
> engine months ago.
>
This is because vdsm scans the storage for domains, i.e. it looks under
/rhev/data-center/mnt/* in the case of NFS domains [1]


> It's not listed in the database on engine:
>
The table lists the valid domains known to engine; removals and additions of
storage domains update this table.

If you removed the old NFS domain but the NFS storage was not available at
the time (i.e. not mounted), then the storage format could fail silently [2]
and yet this table would still be updated for the SD removal [3].

I haven't tested this out, and it may require unmounting at a very specific
moment to hit the failure in [2], but looking around on the engine side with
the kind assistance of +Benny Zlotnik makes this assumption seem possible.

[1]
https://github.com/oVirt/vdsm/blob/821afbbc238ba379c12666922fc1ac80482ee383/lib/vdsm/storage/fileSD.py#L888
[2]
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/fileSD.py#L628
[3]
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/domain/RemoveStorageDomainCommand.java#L77
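
In practice a leftover directory like that can be spotted by comparing what sits under the NFS mount with what engine still knows about, roughly like this (the psql invocation is only one way to reach the engine DB, adjust to your setup):

# on the host: list the storage-domain UUID directories vdsm will scan
ls -1 /rhev/data-center/mnt/*:_data_ovirt/

# on the engine machine: list the domains engine considers valid
sudo -u postgres psql engine -c "select id, storage_name, storage_domain_type from storage_domain_static;"

Any UUID present on the storage but missing from the table (like the old export domain here) is a candidate for cleanup.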

> engine=# select * from storage_domain_static ;
>
>  id | storage | storage_name | storage_domain_type | storage_type | storage_domain_format_type | _create_date | _update_date | recoverable | last_time_used_as_master | storage_description | storage_comment | wipe_after_delete | warning_low_space_indicator | critical_space_action_blocker | first_metadata_device | vg_metadata_device | discard_after_delete | backup | warning_low_confirmed_space_indicator | block_size
>  782a61af-a520-44c4-8845-74bf92888552 | 640ab34d-aa5d-478b-97be-e3f810558628 | ISO_DOMAIN | 2 | 1 | 0 | 2017-11-16 09:49:49.225478+01 |  | t | 0 | ISO_DOMAIN |  | f |  |  |  |  | f | f |  | 512
>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | 4 | 8 | 0 | 2016-10-14 20:40:44.700381+02 | 2018-04-06 14:03:31.201898+02 | t | 0 | Public Glance repository for oVirt |  | f |  |  |  |  | f | f |  | 512
>  b30bab9d-9a66-44ce-ad17-2eb4ee858d8f | 40d191b0-b7f8-48f9-bf6f-327275f51fef | ssd-6 | 1 | 7 | 4 | 2017-06-25 12:45:24.52974+02 | 2019-01-24 15:35:57.013832+01 | t | 1498461838176 |  |  | f | 10 | 5 |  |  | f | f |  | 512
>  95b4e5d2-2974-4d5f-91e4-351f75a15435 | f11fed97-513a-4a10-b85c-2afe68f42608 | ssd-3 | 1 | 7 | 4 | 2019-01-10 12:15:55.20347+01 | 2019-01-24 15:35:57.013832+01 | t | 0 |  |  | f | 10 | 5 |  |  | f | f | 10 | 512
>  f5d2f7c6-093f-46d6-a844-224d92db5ef9 | b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa | backupnfs | 1 | 1 | 4 | 2018-01-19 13:31:25.899738+01 | 2019-02-14 14:36:22.3171

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Jorick Astrego

On 2/10/20 1:27 PM, Jorick Astrego wrote:
>
> Hmm, I didn't notice that.
>
> I did a check on the NFS server and I found the
> "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path
> (/data/exportdom).
>
> This was an old NFS export domain that has been deleted for a while
> now. I remember finding somewhere an issue with old domains still
> being active after removal but I cannot find it now.
>
> I unexported the directory on the NFS server and now I have the correct
> mount and it activates fine.
>
> Thanks!
>
Still weird that it picks another NFS mount path that was removed from
engine months ago.

It's not listed in the database on engine:

engine=# select * from storage_domain_static ;

 id | storage | storage_name | storage_domain_type | storage_type | storage_domain_format_type | _create_date | _update_date | recoverable | last_time_used_as_master | storage_description | storage_comment | wipe_after_delete | warning_low_space_indicator | critical_space_action_blocker | first_metadata_device | vg_metadata_device | discard_after_delete | backup | warning_low_confirmed_space_indicator | block_size
 782a61af-a520-44c4-8845-74bf92888552 | 640ab34d-aa5d-478b-97be-e3f810558628 | ISO_DOMAIN | 2 | 1 | 0 | 2017-11-16 09:49:49.225478+01 |  | t | 0 | ISO_DOMAIN |  | f |  |  |  |  | f | f |  | 512
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | 4 | 8 | 0 | 2016-10-14 20:40:44.700381+02 | 2018-04-06 14:03:31.201898+02 | t | 0 | Public Glance repository for oVirt |  | f |  |  |  |  | f | f |  | 512
 b30bab9d-9a66-44ce-ad17-2eb4ee858d8f | 40d191b0-b7f8-48f9-bf6f-327275f51fef | ssd-6 | 1 | 7 | 4 | 2017-06-25 12:45:24.52974+02 | 2019-01-24 15:35:57.013832+01 | t | 1498461838176 |  |  | f | 10 | 5 |  |  | f | f |  | 512
 95b4e5d2-2974-4d5f-91e4-351f75a15435 | f11fed97-513a-4a10-b85c-2afe68f42608 | ssd-3 | 1 | 7 | 4 | 2019-01-10 12:15:55.20347+01 | 2019-01-24 15:35:57.013832+01 | t | 0 |  |  | f | 10 | 5 |  |  | f | f | 10 | 512
 f5d2f7c6-093f-46d6-a844-224d92db5ef9 | b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa | backupnfs | 1 | 1 | 4 | 2018-01-19 13:31:25.899738+01 | 2019-02-14 14:36:22.3171+01 | t | 1530772724454 |  |  | f | 10 | 5 |  |  | f | f | 0 | 512
 33f1ba00-6a16-4e58-b4c5-94426f1c4482 | 6b6b7899-c82b-4417-b453-0b3b0ac11deb | ssd-4 | 1 | 7 | 4 | 2017-06-25 12:43:49.339884+02 | 2019-02-27 21:30:23.35823

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
On Mon, Feb 10, 2020 at 2:27 PM Jorick Astrego  wrote:

>
> On 2/10/20 11:09 AM, Amit Bawer wrote:
>
> Compared it with a host having a working NFS domain.
>
> On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego 
> wrote:
>
>>
>> On 2/9/20 10:27 AM, Amit Bawer wrote:
>>
>>
>>
>> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego 
>> wrote:
>>
>>> Hi,
>>>
>>> Something weird is going on with our ovirt node 4.3.8 install mounting a
>>> nfs share.
>>>
>>> We have a NFS domain for a couple of backup disks and we have a couple
>>> of 4.2 nodes connected to it.
>>>
>>> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
>>> doesn't work.
>>>
>>> (annoying you cannot copy the text from the events view)
>>>
>>> The domain is up and working
>>>
>>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>>> Size: 10238 GiB
>>> Available:2491 GiB
>>> Used:7747 GiB
>>> Allocated: 3302 GiB
>>> Over Allocation Ratio:37%
>>> Images:7
>>> Path:*.*.*.*:/data/ovirt
>>> NFS Version: AUTO
>>> Warning Low Space Indicator:10% (1023 GiB)
>>> Critical Space Action Blocker:5 GiB
>>>
>>> But somehow the node appears to think it's an LVM volume? It tries
>>> to find the VGs volume group but fails... which is not so strange as it is
>>> an NFS volume:
>>>
>>> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
>>> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>>> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
>>> found', '  Cannot process volume group
>>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>>> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
>>> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
>>> (monitor:330)
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 327, in _setupLoop
>>> self._setupMonitor()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 349, in _setupMonitor
>>> self._produceDomain()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
>>> wrapper
>>> value = meth(self, *a, **kw)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>>> 367, in _produceDomain
>>> self.domain = sdCache.produce(self.sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
>>> in produce
>>> domain.getRealDomain()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
>>> in getRealDomain
>>> return self._cache._realProduce(self._sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
>>> in _realProduce
>>> domain = self._findDomain(sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
>>> in _findDomain
>>> return findMethod(sdUUID)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
>>> in _findUnfetchedDomain
>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>> StorageDomainDoesNotExist: Storage domain does not exist:
>>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>>
>>> The volume is actually mounted fine on the node:
>>>
>>> On NFS server
>>>
>>> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request
>>> from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>>
>>> On the host
>>>
>>> mount|grep nfs
>>>
>>> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type
>>> nfs
>>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>>
>>> And I can see the files:
>>>
>>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>>> total 4
>>> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
>>> 1ed0a635-67ee-4255-aad9-b70822350706
>>>
>>>
>> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
>>
>> ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
>> total 4
>> drwxr-xr-x. 2 vdsm kvm93 Oct 26  2016 dom_md
>> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016 .
>> drwxr-xr-x. 4 vdsm kvm40 Oct 26  2016 master
>> drwxr-xr-x. 5 vdsm kvm  4096 Oct 26  2016 images
>> drwxrwxrwx. 3 root root   86 Feb  5 14:37 ..
>>
> On a working nfs domain host we have following storage hierarchy,
> feece142-9e8d-42dc-9873-d154f60d0aac is the nfs domain in my case
>
> /rhev/data-center/
> ├── edefe626-3ada-11ea-9877-525400b37767
> ...
> │   ├── feece142-9e8d-42dc-9873-d154f60d0aac ->
> /rhev/data-center/mnt/10.35.18.45:_exports_data/feece142-9e8d-42dc-9873-d154f60d0aac
> │   └── mastersd ->
> /rhev/data-center/mnt/blockSD/a6a14714-6eaa-4054-9503-0ea3fcc38531
> └── mnt
> ├── 10.35.18.45:_exports_data
> │   └── feece142-9e8d-42dc-9873-d154f60d0aac
> │   ├── dom_md
> │   │   ├── ids
> │   │   ├── inbox
> │   │   ├── leases
> │   │   ├── metadata
> │   │   ├── outbox
> │   │   └── xleases
> 

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Jorick Astrego

On 2/10/20 11:09 AM, Amit Bawer wrote:
> Compared it with a host having a working NFS domain.
>
> On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego  > wrote:
>
>
> On 2/9/20 10:27 AM, Amit Bawer wrote:
>>
>>
>> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego
>> <jor...@netbulae.eu> wrote:
>>
>> Hi,
>>
>> Something weird is going on with our ovirt node 4.3.8 install
>> mounting a nfs share.
>>
>> We have a NFS domain for a couple of backup disks and we have
>> a couple of 4.2 nodes connected to it.
>>
>> Now I'm adding a fresh cluster of 4.3.8 nodes and the
>> backupnfs mount doesn't work.
>>
>> (annoying you cannot copy the text from the events view)
>>
>> The domain is up and working
>>
>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>> Size:10238 GiB
>> Available:2491 GiB
>> Used:7747 GiB
>> Allocated:3302 GiB
>> Over Allocation Ratio:37%
>> Images:7
>> Path:*.*.*.*:/data/ovirt
>> NFS Version:AUTO
>> Warning Low Space Indicator:10% (1023 GiB)
>> Critical Space Action Blocker:5 GiB
>>
>> But somehow the node appears to think it's an LVM
>> volume? It tries to find the VGs volume group but fails...
>> which is not so strange as it is an NFS volume:
>>
>> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c)
>> [storage.LVM] Reloading VGs failed
>> (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>> out=[] err=['  Volume group
>> "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not found', ' 
>> Cannot process volume group
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c)
>> [storage.Monitor] Setting up monitor for
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed (monitor:330)
>> Traceback (most recent call last):
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 327, in _setupLoop
>>     self._setupMonitor()
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 349, in _setupMonitor
>>     self._produceDomain()
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py",
>> line 159, in wrapper
>>     value = meth(self, *a, **kw)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 367, in _produceDomain
>>     self.domain = sdCache.produce(self.sdUUID)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 110, in produce
>>     domain.getRealDomain()
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 51, in getRealDomain
>>     return self._cache._realProduce(self._sdUUID)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 134, in _realProduce
>>     domain = self._findDomain(sdUUID)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 151, in _findDomain
>>     return findMethod(sdUUID)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 176, in _findUnfetchedDomain
>>     raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>
>> The volume is actually mounted fine on the node:
>>
>> On NFS server
>>
>> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated
>> mount request from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>
>> On the host
>>
>> mount|grep nfs
>>
>> *.*.*.*:/data/ovirt on
>> /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
>> 
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>
>> And I can see the files:
>>
>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>> total 4
>> drwxr-xr-x. 5 vdsm kvm    61 Oct 26  2016
>> 1ed0a635-67ee-4255-aad9-b70822350706
>>
>>
>> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
>
> ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
> total 4
> drwxr-xr-x. 2 vdsm kvm    93 Oct 26  2016 dom_md
> drwxr-xr-x. 5 vdsm kvm

[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Amit Bawer
Compared it with a host having a working NFS domain.

On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego  wrote:

>
> On 2/9/20 10:27 AM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego  wrote:
>
>> Hi,
>>
>> Something weird is going on with our ovirt node 4.3.8 install mounting a
>> nfs share.
>>
>> We have a NFS domain for a couple of backup disks and we have a couple of
>> 4.2 nodes connected to it.
>>
>> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount
>> doesn't work.
>>
>> (annoying you cannot copy the text from the events view)
>>
>> The domain is up and working
>>
>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>> Size: 10238 GiB
>> Available:2491 GiB
>> Used:7747 GiB
>> Allocated: 3302 GiB
>> Over Allocation Ratio:37%
>> Images:7
>> Path:*.*.*.*:/data/ovirt
>> NFS Version: AUTO
>> Warning Low Space Indicator:10% (1023 GiB)
>> Critical Space Action Blocker:5 GiB
>>
>> But somehow the node appears to think it's an LVM volume? It tries
>> to find the VGs volume group but fails... which is not so strange as it is
>> an NFS volume:
>>
>> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c) [storage.LVM]
>> Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>> out=[] err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not
>> found', '  Cannot process volume group
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c) [storage.Monitor]
>> Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed
>> (monitor:330)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 327, in _setupLoop
>> self._setupMonitor()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 349, in _setupMonitor
>> self._produceDomain()
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in
>> wrapper
>> value = meth(self, *a, **kw)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
>> 367, in _produceDomain
>> self.domain = sdCache.produce(self.sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110,
>> in produce
>> domain.getRealDomain()
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51,
>> in getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134,
>> in _realProduce
>> domain = self._findDomain(sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151,
>> in _findDomain
>> return findMethod(sdUUID)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176,
>> in _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>
>> The volume is actually mounted fine on the node:
>>
>> On NFS server
>>
>> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request
>> from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>
>> On the host
>>
>> mount|grep nfs
>>
>> *.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>
>> And I can see the files:
>>
>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>> total 4
>> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016
>> 1ed0a635-67ee-4255-aad9-b70822350706
>>
>>
> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
>
> ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
> total 4
> drwxr-xr-x. 2 vdsm kvm93 Oct 26  2016 dom_md
> drwxr-xr-x. 5 vdsm kvm61 Oct 26  2016 .
> drwxr-xr-x. 4 vdsm kvm40 Oct 26  2016 master
> drwxr-xr-x. 5 vdsm kvm  4096 Oct 26  2016 images
> drwxrwxrwx. 3 root root   86 Feb  5 14:37 ..
>
On a working nfs domain host we have following storage hierarchy,
feece142-9e8d-42dc-9873-d154f60d0aac is the nfs domain in my case

/rhev/data-center/
├── edefe626-3ada-11ea-9877-525400b37767
...
│   ├── feece142-9e8d-42dc-9873-d154f60d0aac ->
/rhev/data-center/mnt/10.35.18.45:_exports_data/feece142-9e8d-42dc-9873-d154f60d0aac
│   └── mastersd ->
/rhev/data-center/mnt/blockSD/a6a14714-6eaa-4054-9503-0ea3fcc38531
└── mnt
├── 10.35.18.45:_exports_data
│   └── feece142-9e8d-42dc-9873-d154f60d0aac
│   ├── dom_md
│   │   ├── ids
│   │   ├── inbox
│   │   ├── leases
│   │   ├── metadata
│   │   ├── outbox
│   │   └── xleases
│   └── images
│   ├── 915e6f45-ea13-428c-aab2-fb27798668e5
│   │   ├── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2
│   │   ├── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2.lease
│   │   └── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2.meta
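
A side-by-side look at a working host and the failing one should make the difference obvious; something along these lines is usually enough (the UUID is the backupnfs domain from this thread):

# show what vdsm has linked under the data center
find /rhev/data-center/ -maxdepth 3 -ls

# or just check whether the problem domain is linked into the pool directory
ls -l /rhev/data-center/*/ | grep f5d2f7c6-093f-46d6-a844-224d92db5ef9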

[ovirt-users] Re: Can't install virtio-win with EL7.7/Ovirt-4.3.8 -- rpm error

2020-02-10 Thread Dominic Coulombe
Hello,

I've got the same behavior on oVirt 4.3.8.2-1.el7 running on CentOS
7.7.1908.

Thanks.
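
For what it's worth, this looks like a payload-compression mismatch rather than an oVirt problem: the virtio-win-0.1.173 build uses a zstd-compressed payload, which the EL7 rpm cannot unpack. A rough way to confirm it and work around it (the exact older version is only an example, check what the repo actually offers):

# EL7's rpm lists the payload formats it understands; PayloadIsZstd won't be there
rpm --showrc | grep -i payload

# pick an older virtio-win build that still uses an xz payload
yum --showduplicates list virtio-win
yum install virtio-win-0.1.171-1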



On Thu, Feb 6, 2020 at 1:01 PM Derek Atkins  wrote:

> Hi,
>
> I was trying to install the virtio-win package, but it gives an error:
>
> ERROR You need to update rpm to handle:
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by virtio-win-0.1.173-6.noarch
>
> Is this a known problem with current 4.3.x and EL7.7?
>
> -derek
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant


[ovirt-users] Re: Failed to add vm as host (CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION)

2020-02-10 Thread Ritesh Chikatwar
I ran cat /proc/cpuinfo on the VM:

https://pastebin.com/gcFceysR
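
For reference, the bits that usually matter for this error can be pulled from inside the nested host VM like so (virsh is used read-only here); in similar nested setups the usual fix is to run the VM that acts as a host with a passthrough/host CPU so it exposes a model the cluster recognizes:

# confirm the virtualization extensions are visible inside the VM
grep -c -E 'vmx|svm' /proc/cpuinfo

# see which CPU model libvirt detects on the nested host
virsh -r capabilities | grep -A3 '<cpu>'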

On Mon, Feb 10, 2020 at 2:27 PM Erick Perez - Quadrian Enterprises <
epe...@quadrianweb.com> wrote:

> Please check this bug report.
> Are you running an unsupported CPU? Can you post /proc/cpuinfo
>  https://bugzilla.redhat.com/show_bug.cgi?id=1670152
>
> On Mon, Feb 10, 2020 at 3:18 AM Ritesh Chikatwar 
> wrote:
>
>> Hello,
>>
>> error is Host host moved to Non-Operational state as host CPU type is not
>> supported in this cluster compatibility version or is not supported at all.
>>
>> Vm is running centos 8:
>> enabled nested virtualization by this way:
>> cat /sys/module/kvm_intel/parameters/nested
>>   vi /etc/modprobe.d/kvm-nested.conf
>>   ---> Contains
>>   options kvm-intel nested=1
>>   options kvm-intel enable_shadow_vmcs=1
>>   options kvm-intel enable_apicv=1
>>   options kvm-intel ept=1
>>   modprobe -r kvm_intel
>>   modprobe -a kvm_intel
>>   cat /sys/module/kvm_intel/parameters/nested
>> --> 1
>>
>> Engine Log:
>> https://pastebin.com/hgrwzRhE
>>
>> Ovirt-engine Cluster Version is 4.4
>> I am running ovirt engine on fedora 30.
>> Cluster CPU Type Intel Nehalem Family.
>>
>> Any thoughts will be appreciated.
>>
>> Rit
>>
>
>
> --
>
> -
> Erick Perez
> Quadrian Enterprises S.A. - Panama, Republica de Panama
> Skype chat: eaperezh
> WhatsApp IM: +507-6675-5083
> -


[ovirt-users] Re: issue connecting 4.3.8 node to nfs domain

2020-02-10 Thread Jorick Astrego

On 2/9/20 10:27 AM, Amit Bawer wrote:
>
>
> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego  > wrote:
>
> Hi,
>
> Something weird is going on with our ovirt node 4.3.8 install
> mounting a nfs share.
>
> We have a NFS domain for a couple of backup disks and we have a
> couple of 4.2 nodes connected to it.
>
> Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs
> mount doesn't work.
>
> (annoying you cannot copy the text from the events view)
>
> The domain is up and working
>
> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
> Size:10238 GiB
> Available:2491 GiB
> Used:7747 GiB
> Allocated:3302 GiB
> Over Allocation Ratio:37%
> Images:7
> Path:*.*.*.*:/data/ovirt
> NFS Version:AUTO
> Warning Low Space Indicator:10% (1023 GiB)
> Critical Space Action Blocker:5 GiB
>
> But somehow the node appears to think it's an LVM volume? It
> tries to find the VGs volume group but fails... which is not so
> strange as it is an NFS volume:
>
> 2020-02-05 14:17:54,190+ WARN  (monitor/f5d2f7c)
> [storage.LVM] Reloading VGs failed
> (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5 out=[]
> err=['  Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9"
> not found', '  Cannot process volume group
> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
> 2020-02-05 14:17:54,201+ ERROR (monitor/f5d2f7c)
> [storage.Monitor] Setting up monitor for
> f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed (monitor:330)
> Traceback (most recent call last):
>   File
> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 327, in _setupLoop
>     self._setupMonitor()
>   File
> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 349, in _setupMonitor
>     self._produceDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> 159, in wrapper
>     value = meth(self, *a, **kw)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 367, in _produceDomain
>     self.domain = sdCache.produce(self.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
> line 110, in produce
>     domain.getRealDomain()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
> line 51, in getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
> line 134, in _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
> line 151, in _findDomain
>     return findMethod(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
> line 176, in _findUnfetchedDomain
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>
> The volume is actually mounted fine on the node:
>
> On NFS server
>
> Feb  5 15:47:09 back1en rpc.mountd[4899]: authenticated mount
> request from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>
> On the host
>
> mount|grep nfs
>
> *.*.*.*:/data/ovirt on
> /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
> 
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>
> And I can see the files:
>
> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
> total 4
> drwxr-xr-x. 5 vdsm kvm    61 Oct 26  2016
> 1ed0a635-67ee-4255-aad9-b70822350706
>
>
> What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?

ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
total 4
drwxr-xr-x. 2 vdsm kvm    93 Oct 26  2016 dom_md
drwxr-xr-x. 5 vdsm kvm    61 Oct 26  2016 .
drwxr-xr-x. 4 vdsm kvm    40 Oct 26  2016 master
drwxr-xr-x. 5 vdsm kvm  4096 Oct 26  2016 images
drwxrwxrwx. 3 root root   86 Feb  5 14:37 ..

Regards,

Jorick Astrego





Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts
Tel: 053 20 30 270 | i...@netbulae.eu | Staalsteden 4-3A, 7547 TA Enschede
Fax: 053 20 30 271 | www.netbulae.eu | KvK 08198180 | BTW NL821234584B01




[ovirt-users] Re: Failed to add vm as host (CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION)

2020-02-10 Thread Erick Perez - Quadrian Enterprises
Please check this bug report.
Are you running an unsupported CPU? Can you post /proc/cpuinfo
 https://bugzilla.redhat.com/show_bug.cgi?id=1670152

On Mon, Feb 10, 2020 at 3:18 AM Ritesh Chikatwar 
wrote:

> Hello,
>
> error is Host host moved to Non-Operational state as host CPU type is not
> supported in this cluster compatibility version or is not supported at all.
>
> Vm is running centos 8:
> enabled nested virtualization by this way:
> cat /sys/module/kvm_intel/parameters/nested
>   vi /etc/modprobe.d/kvm-nested.conf
>   ---> Contains
>   options kvm-intel nested=1
>   options kvm-intel enable_shadow_vmcs=1
>   options kvm-intel enable_apicv=1
>   options kvm-intel ept=1
>   modprobe -r kvm_intel
>   modprobe -a kvm_intel
>   cat /sys/module/kvm_intel/parameters/nested
> --> 1
>
> Engine Log:
> https://pastebin.com/hgrwzRhE
>
> Ovirt-engine Cluster Version is 4.4
> I am running ovirt engine on fedora 30.
> Cluster CPU Type Intel Nehalem Family.
>
> Any thoughts will be appreciated.
>
> Rit
>


-- 

-
Erick Perez
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083
-


[ovirt-users] Failed to add vm as host (CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION)

2020-02-10 Thread Ritesh Chikatwar
Hello,

The error is: Host host moved to Non-Operational state as host CPU type is not
supported in this cluster compatibility version or is not supported at all.

The VM is running CentOS 8.
I enabled nested virtualization this way:
cat /sys/module/kvm_intel/parameters/nested
  vi /etc/modprobe.d/kvm-nested.conf
  ---> Contains
  options kvm-intel nested=1
  options kvm-intel enable_shadow_vmcs=1
  options kvm-intel enable_apicv=1
  options kvm-intel ept=1
  modprobe -r kvm_intel
  modprobe -a kvm_intel
  cat /sys/module/kvm_intel/parameters/nested
--> 1

Engine Log:
https://pastebin.com/hgrwzRhE

Ovirt-engine Cluster Version is 4.4
I am running ovirt engine on fedora 30.
Cluster CPU Type Intel Nehalem Family.

Any thoughts will be appreciated.

Rit