[ovirt-users] Re: VMs import over slow 1gig interface instead of fast 10gig interface?

2019-01-24 Thread kulshereglobalsoft
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7UTW3RNPVWNSFYCBBFWG75ERPD275YRP/


[ovirt-users] Re: Unable to get the proper console of vm

2019-01-24 Thread Shikhar Verma
Yes, the ISO image is alright.

Shikhar Verma

On Thu, 24 Jan 2019, 13:59 Michal Skrivanek wrote:
>
> > On 21 Jan 2019, at 15:54, Shikhar Verma  wrote:
> >
> > Hi,
> >
> > I have created the virtual machine from the oVirt manager, but when I
> try to get the console of the VM to do the installation, it only shows
> two lines. I have even tried Run Once with CD-ROM selected as first boot
> priority and the CentOS 7 ISO attached.
>
> Is the ISO alright? Does it boot elsewhere? Does your VM have enough RAM?
>
> >
> > SeaBIOS (version 1.11.0-2.el7)
> > Machine UUID ---
> >
> > Also, from manager, newly launched vm is showing green..
> >
> > And from the host machine, it is showing this error
> >
> > Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+: 12800:
> error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
> guest agent is not connected
>
> Because it’s not booted yet. Irrelevant.
>
> >
> > I am using the latest version of ovirt-engine and of the host as well.
> >
> > Please respond.
> >
> > Thanks
> > Shikhar
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/52HAFOXSXLJRI47DB3JBM7HY3VXGC6CM/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NLTWS2X6RVAD7TIEFF7K42AWKGNWVVTO/


[ovirt-users] Nvidia Grid K2 and Ovirt GPU Passtrough

2019-01-24 Thread jarheadx
Hello,

I have tried every document I could find to get GPU passthrough with an
Nvidia Grid K2 working on oVirt, but I failed.

I am confident that it should work with my hardware, but I am out of ideas.
Maybe the community can help me.

Currently my VMs (Win7 and Win10) crash or hang on startup.

I have done these steps:

1. lspci -nnk
.
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] 
[10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] 
[10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
.

2. /etc/default/grub
.
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb 
quiet pci-stub.ids=10de:11bf rdblacklist=nouveau amd_iommu=on"


3. Added the line

 "options vfio-pci ids=10de:11bf"

to /etc/modprobe.d/vfio.conf

dmesg | grep -i vfio ->

[   11.202767] VFIO - User Level meta-driver version: 0.3
[   11.315368] vfio_pci: add [10de:11bf[:]] class 0x00/
[ 1032.582778] vfio_ecap_init: :07:00.0 hiding ecap 0x19@0x900
[ 1046.232009] vfio-pci :07:00.0: irq 61 for MSI/MSI-X

After assigning the GPU to the VM, the OS hangs on startup.
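A small, hedged sanity check on the `lspci -nnk` output above: the sketch below extracts the "Kernel driver in use" value per PCI device. Note that the output in this mail reports pci-stub rather than vfio-pci for both GPU functions, possibly because the kernel command line pins the IDs to pci-stub while vfio.conf claims the same IDs. The sample text is the output quoted in this mail, re-joined onto single lines.

```python
import re

# lspci -nnk output as quoted in the mail above; both GPU functions are
# bound to pci-stub, not to vfio-pci.
LSPCI = """\
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
	Subsystem: NVIDIA Corporation Device [10de:100a]
	Kernel driver in use: pci-stub
	Kernel modules: nouveau
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
	Subsystem: NVIDIA Corporation Device [10de:100a]
	Kernel driver in use: pci-stub
	Kernel modules: nouveau
"""

def drivers_in_use(lspci_output: str) -> dict:
    """Map each PCI address to the kernel driver currently bound to it."""
    result, current = {}, None
    for line in lspci_output.splitlines():
        m = re.match(r"^([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s", line)
        if m:
            current = m.group(1)  # start of a new device entry
        elif current and "Kernel driver in use:" in line:
            result[current] = line.split(":", 1)[1].strip()
    return result

# Both functions report pci-stub; for VFIO passthrough one would expect
# to see vfio-pci here instead.
print(drivers_in_use(LSPCI))
```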

Any ideas? 

Best Regards
Reza
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TW2DY3CSA35Y3LJTEACY3IRIUH57422/


[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Markus Schaufler
No, all the logs in that folder were attached to the previous mail.


From: Simone Tiraboschi
Sent: Thursday, 24 January 2019 15:16:52
To: Markus Schaufler
Cc: Dominik Holler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout



On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler
<markus.schauf...@digit-all.at> wrote:

The hosted engine is not running and cannot be started.


Do you have on your first host a directory like 
/var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z with logs 
from the engine VM?




From: Simone Tiraboschi <stira...@redhat.com>
Sent: Thursday, 24 January 2019 14:45:59
To: Markus Schaufler
Cc: Dominik Holler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout



On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler
<markus.schauf...@digit-all.at> wrote:

Hi,


thanks for the replies.


I updated to 4.2.8 and tried again:


[ INFO ] TASK [Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, 
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.165316", 
"end": "2019-01-24 14:12:06.899564", "rc": 0, "start": "2019-01-24 
14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
{\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049 (Thu 
Jan 24 14:11:59 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu 
Jan 24 14:11:59 
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1, 
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": 
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": 
false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\", 
\"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\": 
false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, 
\"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049 (Thu 
Jan 24 14:11:59 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu 
Jan 24 14:11:59 
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1, 
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": 
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": 
false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\", 
\"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\": 
false}"]}


It's still the same issue: the host fails to properly check the status of the
engine via its dedicated health page.

You should connect to ovirt-hci.res01.ads.ooe.local and check the status of 
ovirt-engine service and /var/log/ovirt-engine/engine.log there.
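For reference, the payload the failing health check is looking at is just the JSON printed by `hosted-engine --vm-status --json`. A minimal sketch of pulling the engine health out of that output, using the exact fields quoted in the error above (reduced for readability), could look like this:

```python
import json

# Sample stdout from `hosted-engine --vm-status --json`, reduced to the
# fields quoted in the mail above (host 1, engine liveliness check failing).
status_json = '''
{"1": {"conf_on_shared_storage": true,
       "live-data": true,
       "hostname": "HCI01.res01.ads.ooe.local",
       "host-id": 1,
       "engine-status": {"reason": "failed liveliness check",
                         "health": "bad", "vm": "up", "detail": "Up"},
       "score": 3400, "stopped": false, "maintenance": false,
       "crc32": "0c1a3ddb",
       "local_conf_timestamp": 3049, "host-ts": 3049},
 "global_maintenance": false}
'''

def engine_health(vm_status: str) -> dict:
    """Return {host_id: engine-status dict} for every host entry."""
    data = json.loads(vm_status)
    return {k: v["engine-status"] for k, v in data.items()
            if isinstance(v, dict) and "engine-status" in v}

health = engine_health(status_json)
print(health["1"]["health"])  # -> bad
```

The VM itself is "up" while "health" is "bad", which matches the deployment's complaint: the VM runs but the engine's health page never answers.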



[ INFO ] TASK [Check VM status at virt level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Fail if engine VM is not running]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Get VDSM's target engine VM stats]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Convert stats to JSON format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Fail if the Engine has no IP address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved IP]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: 'dict object' has no attribute 
'stdout_lines'\n\nThe error appears to have been in 
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 518, 
column 5, but may\nbe elsewhere in the file depending on the exact syntax 
problem.\n\nThe offending line appears to be:\n\n # 
https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
 - name: Reconfigure OVN central address\n ^ here\n"}



attached you'll find the setup logs.


best regards,

Markus Schaufler


From: Simone Tiraboschi <stira...@redhat.com>
Sent: Thursday, 24 January 2019 11:56:50
To: Dominik Holler
Cc: Markus Schaufler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout



On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler
<dhol...@redhat.com> wrote:
On Tue, 22 Jan 2019 

[ovirt-users] Re: How to replace vMware infrastructure with oVirt

2019-01-24 Thread Greg Sheremeta
On Thu, Jan 24, 2019 at 1:12 PM Mannish Kumar wrote:

> Hi,
>
> I have two Esxi hosts managed by VMware vCenter Server. I want to create a
> similar infrastructure with oVirt. I know that oVirt is similar to VMware
> vCenter Server but not sure what to replace the Esxi hosts with in oVirt
> Environment.
>

Either EL hosts or oVirt nodes:
https://ovirt.org/download/#download-ovirt-node-or-setup-hosts

Also:
https://www.ovirt.org/documentation/vmm-guide/chap-Administrative_Tasks.html#importing-a-virtual-machine-from-a-vmware-provider
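If you go the import route, oVirt hands the VMware provider details to virt-v2v as a `vpx://` URL of the form `vpx://user@vcenter/Datacenter/Cluster/esxi-host?no_verify=1`. Below is a hedged sketch of assembling one; every hostname and path component is a placeholder, not a value from this thread:

```python
from urllib.parse import quote

def build_vpx_url(vcenter: str, datacenter: str, cluster: str,
                  esxi_host: str,
                  username: str = "administrator@vsphere.local",
                  no_verify: bool = True) -> str:
    """Build the vpx:// URL used when importing VMs from a vCenter provider.

    All names here (vcenter.example.com, DC1, ...) are illustrative
    placeholders; substitute your own vCenter inventory path.
    """
    # Percent-encode each path component; '@' in the username becomes %40.
    path = "/".join(quote(p) for p in (datacenter, cluster, esxi_host))
    url = f"vpx://{quote(username)}@{vcenter}/{path}"
    if no_verify:
        url += "?no_verify=1"  # skip TLS certificate verification
    return url

print(build_vpx_url("vcenter.example.com", "DC1", "Cluster1",
                    "esxi1.example.com"))
# -> vpx://administrator%40vsphere.local@vcenter.example.com/DC1/Cluster1/esxi1.example.com?no_verify=1
```

The datacenter/cluster/host path must match the inventory path shown in vCenter, which is the part people most often get wrong when the import fails to connect.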

Let us know if you have further questions.

Best wishes,
Greg


>
> I am looking to build oVirt with a Self-Hosted Engine. It would be of great
> help if someone could help me build this.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZF7SYHL2QDRGZV7NFFJNQ6COBZEKJXXN/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZJSSAE6OSHKT4KRWP5ONTKI7PJWWI55/


[ovirt-users] How to replace vMware infrastructure with oVirt

2019-01-24 Thread Mannish Kumar
Hi,

I have two ESXi hosts managed by VMware vCenter Server. I want to create a
similar infrastructure with oVirt. I know that oVirt is similar to VMware
vCenter Server, but I am not sure what to replace the ESXi hosts with in an
oVirt environment.

I am looking to build oVirt with a Self-Hosted Engine. It would be of great
help if someone could help me build this.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZF7SYHL2QDRGZV7NFFJNQ6COBZEKJXXN/


[ovirt-users] Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-24 Thread Strahil Nikolov
Hello Community,
As I'm still experimenting with my oVirt lab, I have somehow managed to remove
my gluster volume ('gluster volume list' confirms it) without detaching the
storage domain.
This sounds like a bug to me, am I right?
Steps to reproduce:
1. Create a replica 3 arbiter 1 gluster volume.
2. Create a storage domain on it.
3. Go to Volumes and select the name of the volume.
4. Press remove and confirm. The task fails, but the volume is now gone in
gluster.
I guess I have to do some cleanup in the DB in order to fix that.
Best regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIU7OGRQU5IJ2JJLSRYS7DJXB3DNQSLQ/


[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Markus Schaufler
The hosted engine is not running and cannot be started.




From: Simone Tiraboschi
Sent: Thursday, 24 January 2019 14:45:59
To: Markus Schaufler
Cc: Dominik Holler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout



On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler
<markus.schauf...@digit-all.at> wrote:

Hi,


thanks for the replies.


I updated to 4.2.8 and tried again:


[ INFO ] TASK [Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, 
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.165316", 
"end": "2019-01-24 14:12:06.899564", "rc": 0, "start": "2019-01-24 
14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
{\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049 (Thu 
Jan 24 14:11:59 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu 
Jan 24 14:11:59 
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1, 
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": 
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": 
false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\", 
\"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\": 
false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, 
\"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049 (Thu 
Jan 24 14:11:59 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu 
Jan 24 14:11:59 
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1, 
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\": 
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": 
false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\", 
\"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\": 
false}"]}


It's still the same issue: the host fails to properly check the status of the
engine via its dedicated health page.

You should connect to ovirt-hci.res01.ads.ooe.local and check the status of 
ovirt-engine service and /var/log/ovirt-engine/engine.log there.



[ INFO ] TASK [Check VM status at virt level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Fail if engine VM is not running]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Get VDSM's target engine VM stats]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Convert stats to JSON format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Fail if the Engine has no IP address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved IP]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Get target engine VM IPv4 address]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: 'dict object' has no attribute 
'stdout_lines'\n\nThe error appears to have been in 
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 518, 
column 5, but may\nbe elsewhere in the file depending on the exact syntax 
problem.\n\nThe offending line appears to be:\n\n # 
https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
 - name: Reconfigure OVN central address\n ^ here\n"}



attached you'll find the setup logs.


best regards,

Markus Schaufler


From: Simone Tiraboschi <stira...@redhat.com>
Sent: Thursday, 24 January 2019 11:56:50
To: Dominik Holler
Cc: Markus Schaufler; users@ovirt.org
Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout



On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler
<dhol...@redhat.com> wrote:
On Tue, 22 Jan 2019 11:15:12 +
Markus Schaufler <markus.schauf...@digit-all.at> wrote:

> Thanks for your reply,
>
> getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> 10.1.31.20
>
> attached you'll find the logs.
>

Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.

Markus, would you be so kind as to check whether ovirt-4.2.8 works for you?


OK, understood: the real error was just a few lines before what Dominik pointed 
out:

"stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, 

[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:20 PM Markus Schaufler <
markus.schauf...@digit-all.at> wrote:

> No, all the logs in that folder were attached to the previous mail.
>
>

OK, unfortunately in this case I can only suggest retrying and, when it
reaches
[ INFO ] TASK [Check engine VM health]

trying to connect to the engine VM via SSH and checking what's happening
there with ovirt-engine.



> --
> *From:* Simone Tiraboschi
> *Sent:* Thursday, 24 January 2019 15:16:52
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users@ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
> markus.schauf...@digit-all.at> wrote:
>
> The hosted engine is not running and cannot be started.
>
>
>
> Do you have on your first host a directory
> like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
> with logs from the engine VM?
>
>
>
> --
> *From:* Simone Tiraboschi
> *Sent:* Thursday, 24 January 2019 14:45:59
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users@ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schauf...@digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine via its dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> --
> *Von:* Simone 

[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
markus.schauf...@digit-all.at> wrote:

> The hosted engine is not running and cannot be started.
>
>
>
Do you have on your first host a directory
like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
with logs from the engine VM?


>
> --
> *From:* Simone Tiraboschi
> *Sent:* Thursday, 24 January 2019 14:45:59
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users@ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schauf...@digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine via its dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> --
> *From:* Simone Tiraboschi
> *Sent:* Thursday, 24 January 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users@ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler  wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +
> Markus Schaufler  wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if 

[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Simone Tiraboschi
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
markus.schauf...@digit-all.at> wrote:

> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>


It's still the same issue: the host fails to properly check the status of
the engine via its dedicated health page.

You should connect to ovirt-hci.res01.ads.ooe.local and check the status of
ovirt-engine service and /var/log/ovirt-engine/engine.log there.



> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> --
> *From:* Simone Tiraboschi
> *Sent:* Thursday, 24 January 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users@ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler  wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +
> Markus Schaufler  wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", 

[ovirt-users] lvm problem

2019-01-24 Thread Nyika Csaba
Hi all,

 

I have an oVirt 4.2.8 cluster. The nodes are 4.2 oVirt nodes and the volumes (5) 
are attached to the nodes by FC.

2 weeks ago I made a small VM (CentOS 7 based) for myself to test (name A). After 
the test I dropped the VM.
The next day I made another VM (named B) for the developers and tried to add a new 
disk to that VM (B). Then the original volume group (VG) of VM (B) went missing and 
I got back the VG of the VM (A) I had dropped the day before!

I tried to restart the VM, but it never started again.

I dropped this VM (B) too, and tried to add a new disk to an older running VM (C) 
as well, but its volume group changed to the VG of the VM I had dropped before (B).

 

I looked into this "error" and can reproduce it when I delete or move disks from 
the end of the FC volume.

 

Has somebody ever seen an error like this?

 

Thanks,
  csaba

Ps: I manage 120 production VMs in this cluster, so….

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDXBPDJRN5NYWR6AUF5WKIWS2KDRN4AO/


[ovirt-users] [ANN] oVirt 4.3.0 Third Release Candidate is now available for testing

2019-01-24 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Third
Release Candidate of oVirt 4.3.0, as of January 24th, 2019

This is pre-release software. This pre-release should not be used in
production.

Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.

All issues or bugs should be reported via oVirt Bugzilla[2].

This update is the third release candidate of the 4.3.0 version.

This release brings more than 130 enhancements and more than 450 bug fixes
on top of oVirt 4.2 series.

What's new in oVirt 4.3.0?

* Q35 chipset, support booting using UEFI and Secure Boot

* Skylake-server and AMD EPYC support

* New smbus driver in windows guest tools

* Improved support for v2v

* OVA export / import of Templates

* Full support for live migration of High Performance VMs

* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks

* Hundreds of bug fixes on top of oVirt 4.2 series

* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)

* New Cluster upgrade UI

* OVN security groups

* IPv6 (static host addresses)

* Support of Neutron from RDO OpenStack 13 as external network provider

* Support of using Skydive from RDO OpenStack 14 as Tech Preview

* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed

* Now using PostgreSQL 10

* New metrics support using rsyslog instead of fluentd


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.6 or later

* CentOS Linux (or similar) 7.6 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 7.6 or later

* CentOS Linux (or similar) 7.6 or later

* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:

- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).

- oVirt Node NG is already available for CentOS 7

- oVirt Node NG for Fedora 28 (tech preview) is being delayed due to build
issues with the build system.

Additional Resources:

* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] https://www.ovirt.org/community/

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt

[3] http://www.ovirt.org/release/4.3.0/

[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFX4K7K6WTVVVQJHP2XAAZQYSNMOFXYI/



[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Simone Tiraboschi
On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler  wrote:

> On Tue, 22 Jan 2019 11:15:12 +
> Markus Schaufler  wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>


OK, understood: the real error was just a few lines before what Dominik
pointed out:

"stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
(Mon Jan 21 13:57:45
2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
13:57:45
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
\"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"ba303717\",
\"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
false}",
"stdout_lines": [
"{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
\"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
(Mon Jan 21 13:57:45
2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
13:57:45
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
\"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
\"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
\"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
false, \"maintenance\": false, \"crc32\": \"ba303717\",
\"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
false}"
]
}"
2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
\'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
{"conf_on_shared_storage": true, "live-data": true, "extra":
"metadata_parse_version=1nmetadata_feature_version=1ntimestamp=5792
(Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}

and in particular it's here:
for some reason we got  \"engine-status\": {\"reason\": \"failed liveliness
check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}
over 120 attempts: we have to check engine.log (it got collected as well
from the engine VM) to understand why the engine was failing to start.
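The engine-status data the deployment loop waits on can also be inspected outside the setup run via `hosted-engine --vm-status --json`. A minimal parsing sketch, with the JSON shape modeled on the stdout captured above (field names may vary between versions):

```python
import json

def engine_health(vm_status_json):
    """Map each HA host to its (health, reason) pair from the JSON
    emitted by `hosted-engine --vm-status --json`."""
    status = json.loads(vm_status_json)
    result = {}
    for key, host in status.items():
        if key == "global_maintenance":  # boolean flag, not a host entry
            continue
        es = host.get("engine-status", {})
        result[host["hostname"]] = (es.get("health"), es.get("reason"))
    return result

# Sample modeled on the output captured above
sample = ('{"1": {"hostname": "HCI01.res01.ads.ooe.local", '
          '"engine-status": {"reason": "failed liveliness check", '
          '"health": "bad", "vm": "up", "detail": "Up"}}, '
          '"global_maintenance": false}')
print(engine_health(sample))
# {'HCI01.res01.ads.ooe.local': ('bad', 'failed liveliness check')}
```

A "bad" health with "failed liveliness check" while the VM itself is "up", as here, points at the engine service inside the VM, which is why engine.log is the next place to look.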



>
> > 
> > From: Dominik Holler 
> > Sent: Monday, 21 January 2019 17:52:35
> > To: Markus Schaufler
> > Cc: users@ovirt.org; Simone Tiraboschi
> > Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -
> > "Markus Schaufler"  wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: 

[ovirt-users] Re: latest pycurl 7.43 breaks ovirtsdk4

2019-01-24 Thread Ondra Machacek

Can you please open issue on AWX: https://github.com/ansible/awx/issues ?

On 1/23/19 5:18 PM, Nathanaël Blanchet wrote:
And the AWX embedded pycurl 7.43 also breaks the ovirt4.py dynamic 
inventory!


  [WARNING]: Unable to parse /opt/awx/embedded/lib/python2.7/site-
packages/awx/plugins/inventory/ovirt4.py as an inventory source

On 23/01/2019 at 11:55, Nathanaël Blanchet wrote:



On 23/01/2019 at 09:27, Ondra Machacek wrote:

On 1/22/19 5:54 PM, Nathanaël Blanchet wrote:

Hi all,

If anyone uses latest pycurl 7.43 provided by pip or ansible 
tower/awx, any ovirtsdk4 calling will issue with the log:


The full traceback is:
WARNING: The below traceback may *not* be related to the actual 
failure.
   File "/tmp/ansible_ovirt_auth_payload_L1HK9E/__main__.py", line 202, in <module>
 import ovirtsdk4 as sdk
   File "/opt/awx/embedded/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 22, in <module>
 import pycurl

fatal: [localhost]: FAILED! => {
 "changed": false,
 "invocation": {
 "module_args": {
 "ca_file": null,
 "compress": true,
 "headers": null,
 "hostname": null,
 "insecure": true,
 "kerberos": false,
 "ovirt_auth": null,
 "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
 "state": "present",
 "timeout": 0,
 "token": null,
 "url": "https://acore.v100.abes.fr/ovirt-engine/api",
 "username": "admin@internal"
 }
 },
 "msg": "ovirtsdk4 version 4.2.4 or higher is required for this 
module"

}

The only way is to set the version of pycurl with

pip install -U "pycurl == 7.19.0"

(Before this, in tower/awx, you should  create venv)


What's the version of AWX, where pycurl 7.43 is provided? I use latest
and I have 7.19. But anyway, I've tried to update to 7.43, and this 
worked for me with nss:


AWX 2.1.2
/opt/awx/embedded/lib64/python2.7/site-packages/pycurl-7.43.0.1.dist-info



$ source venv/awx/bin/activate
$ export PYCURL_SSL_LIBRARY=nss; pip install pycurl --compile 
--no-cache-dir

$ python -c 'import pycurl; print pycurl.version'
PycURL/7.43.0.2 libcurl/7.29.0 NSS/3.36 zlib/1.2.7 libidn/1.28 
libssh2/1.4.3


Yes, I've tried your trick and 7.43 works with the nss support like 
you say, but...


  * how can anyone guess that they need the nss library?
  * it doesn't work out of the box with the AWX embedded pycurl, so we
    must use a venv

So it would be good to compile the embedded AWX pycurl with native nss 
support, outside of a venv.
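One way to make the requirement visible early is to inspect pycurl's version string, which names the SSL backend it was linked against. A small illustrative sketch (the helper name and the fallback string are mine, not part of any oVirt or AWX tooling):

```python
def curl_ssl_backend(version_string):
    """Extract the SSL backend (nss, openssl, gnutls, ...) from a
    pycurl/libcurl version string such as pycurl.version."""
    for token in version_string.split():
        name = token.split("/")[0].lower()
        if name in ("nss", "openssl", "gnutls", "mbedtls", "boringssl", "libressl"):
            return name
    return "unknown"

if __name__ == "__main__":
    try:
        import pycurl  # the real check, when pycurl is importable
        print(curl_ssl_backend(pycurl.version))
    except ImportError:
        # fall back to the string format shown earlier in the thread
        print(curl_ssl_backend("PycURL/7.43.0.2 libcurl/7.29.0 NSS/3.36 zlib/1.2.7"))
```

If this prints something other than "nss" on a host where libcurl was built against NSS, the `PYCURL_SSL_LIBRARY=nss pip install pycurl --compile --no-cache-dir` rebuild shown above is the fix.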






--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PMOHDZADCP3R6GKYFUHSDH5NRAZJGNOM/ 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHMXO3BPP2ZM5W4LM57TC5462TEKEWCC/


[ovirt-users] Re: Ovirt snapshot issues

2019-01-24 Thread Elad Ben Aharon
Thanks!

+Fred Rolland  seems like the same issue as reported
in https://bugzilla.redhat.com/show_bug.cgi?id=1555116

2019-01-24 10:12:08,240+02 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default
task-544) [416c625f-e57b-46b8-bf74-5b774191fada] Error during
ValidateFailure.: java.lang.NullPointerException
   at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.getTotalSizeForMerge(StorageDomainValidator.java:205) [bll.jar:]
   at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.hasSpaceForMerge(StorageDomainValidator.java:241) [bll.jar:]
   at org.ovirt.engine.core.bll.validator.storage.MultipleStorageDomainsValidator.lambda$allDomainsHaveSpaceForMerge$6(MultipleStorageDomainsValidator.java:122) [bll.jar:]
   at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) [rt.jar:1.8.0_191]



On Thu, Jan 24, 2019 at 10:25 AM Alex K  wrote:

> When I get the error the engine.log  logs the attached engine-partial.log.
> At vdsm.log at SPM host I don't see any error generated.
> Full logs also attached.
>
> Thanx,
> Alex
>
>
>
>
> On Wed, Jan 23, 2019 at 5:53 PM Elad Ben Aharon 
> wrote:
>
>> Hi,
>>
>> Can you please provide engine.log and vdsm.log?
>>
>> On Wed, Jan 23, 2019 at 5:41 PM Alex K  wrote:
>>
>>> Hi all,
>>>
>>> I have ovirt 4.2.7, self-hosted on top gluster, with two servers.
>>> I have a specific VM which has encountered some snapshot issues.
>>> The engine lists 4 snapshots and when trying to delete one of them I get
>>> "General command validation failure".
>>>
>>> The VM was being backed up periodically by a python script which was
>>> creating a snapshot -> clone -> export -> delete clone -> delete snapshot.
>>> There were times when the VM complained about illegal snapshots
>>> following such backup procedures, and I had to delete the illegal snapshot
>>> references from the engine DB (following some steps found online),
>>> otherwise I would not have been able to start the VM once it was shut down.
>>> It seems though that this is not a clean process and leaves the underlying
>>> image of the VM in an inconsistent state with regard to its snapshots:
>>> when checking the backing chain of the image file I get:
>>> *b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM)* ->*
>>> *b7673dca-6e10-4a0f-9885-1c91b86616af ->*
>>> *4f636d91-a66c-4d68-8720-d2736a3765df ->*
>>> 6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
>>> 61eea475-1135-42f4-b8d1-da6112946bac ->
>>> *604d84c3-8d5f-4bb6-a2b5-0aea79104e43 ->*
>>> 1e75898c-9790-4163-ad41-847cfe84db40 ->
>>> *cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43 ->*
>>> 3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)
>>>
>>> The bold ones are the ones shown in the engine GUI. The VM runs normally
>>> without issues.
>>> I was wondering if I could use qemu-img commit to consolidate and remove
>>> the snapshots that are no longer referenced by the engine. Any ideas from
>>> your side?
>>>
>>> Thanx,
>>> Alex
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DDZXH5UG6QEH76A5EO4STZ4YV7RIQQ2I/
>>>
>>
>>
>> --
>>
>> Elad Ben Aharon
>>
>> ASSOCIATE MANAGER, RHV storage QE
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IJLQCVUHR6ZDNEMHL52PF7H54UADRWT/
>


-- 

Elad Ben Aharon

ASSOCIATE MANAGER, RHV storage QE
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTTMXFXNATZR7YQREBBUO24RLDYVGAQI/
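Before touching the image with qemu-img, a first sanity step for the situation above is to diff the volume IDs in the qemu-img backing chain against the snapshot volumes the engine still references: anything in the chain but unknown to the engine is a leftover from the incomplete DB cleanup. A minimal sketch of that diff (not an oVirt tool; IDs shortened from the chain quoted in the thread):

```python
def orphaned_volumes(backing_chain, engine_known):
    """Return volumes that appear in the qcow2 backing chain but are no
    longer referenced by any snapshot the engine knows about."""
    known = set(engine_known)
    return [vol for vol in backing_chain if vol not in known]

# IDs shortened from the chain quoted above; the raw base image is left out
# because it must never be consolidated away.
chain = ["b46d8efe", "b7673dca", "4f636d91", "6826cb76",
         "61eea475", "604d84c3", "1e75898c", "cf8707f2"]
engine = ["b46d8efe", "b7673dca", "4f636d91", "604d84c3", "cf8707f2"]
print(orphaned_volumes(chain, engine))
# ['6826cb76', '61eea475', '1e75898c']
```

Any actual consolidation of such volumes would need the VM down and a backup of the image first, since the engine metadata and the on-disk chain must end up consistent.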


[ovirt-users] Re: How to connect to a guest with vGPU ?

2019-01-24 Thread Josep Manel Andrés Moscardó

Hi Michael,
Thanks for the info. I am using an NVIDIA M60 right now. So what would you 
do to gain remote GUI access? VNC + something else to get 3D acceleration?


Thanks.

On 24/1/19 9:27, Michal Skrivanek wrote:



On 22 Jan 2019, at 10:39, Josep Manel Andrés Moscardó 
mailto:josep.mosca...@embl.de>> wrote:


I am reading about VirGL but I am not sure how to set it up, can 
anyone point me to some documentation?


It’s been in development for quite some time, but it's not yet completely 
ready. Feel free to reach out on spice-list regarding the current status




Cheers.



On 17/1/19 17:20, Josep Manel Andrés Moscardó wrote:

Hi,
I got vGPU through mdev working but I am wondering how I would 
connect to the client and make use of the GPU. So far I try to access 
the console through SPICE and at some point in the boot process it 
switches to GPU and I cannot see anything else.


it doesn’t have SPICE support
it recently gained VNC support which is not yet integrated into oVirt, 
but it will likely happen reasonably soon


btw which card is it? nvidia or intel?

Thanks,
michal


Thanks.


--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394

___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 


Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UFB2OFQO2KJ2DLU3MDQ6FBPKFRUQ2VZE/




--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394



smime.p7s
Description: S/MIME Cryptographic Signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SX7HEFCUMCY5G7ZQKGSI2FGGHRSTD723/


[ovirt-users] Re: ovirt 4.2 HCI rollout

2019-01-24 Thread Dominik Holler
On Tue, 22 Jan 2019 11:15:12 +
Markus Schaufler  wrote:

> Thanks for your reply,
> 
> getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> 10.1.31.20
> 
> attached you'll find the logs.
> 

Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.

Markus, would you be so kind to check if ovirt-4.2.8 is working for you? 

> 
> From: Dominik Holler 
> Sent: Monday, 21 January 2019 17:52:35
> To: Markus Schaufler
> Cc: users@ovirt.org; Simone Tiraboschi
> Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> 
> Would you please share the related ovirt-host-deploy-ansible-*.log
> stored on the host in /var/log/ovirt-hosted-engine-setup ?
> 
> Would you please also share the output of
> getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> if executed on this host?
> 
> 
> On Mon, 21 Jan 2019 13:37:53 -
> "Markus Schaufler"  wrote:
> 
> > Hi,
> >
> > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > following
> > https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> > gluster deployment was successful but at HE deployment "stage 5" I
> > got following error:
> >
> > [ INFO ] TASK [Reconfigure OVN central address]
> > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > an option with an undefined variable. The error was: 'dict object'
> > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > in
> > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > line 522, column 5, but may\nbe elsewhere in the file depending on
> > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > #
> > https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-provider-ovn-driver/tasks/main.yml\n
> > - name: Reconfigure OVN central address\n ^ here\n"}
> >
> >
> > /var/log/messages:
> > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > name=periodic/4 waiting task#=141>, <Worker name=periodic/1 running
> > timeout=7.5, duration=7>, <Worker name=periodic/3 waiting>, <Worker
> > name=periodic/5 waiting>, <Worker name=periodic/2 waiting>])
> > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered …

[ovirt-users] Re: Unable to get the proper console of vm

2019-01-24 Thread Michal Skrivanek


> On 21 Jan 2019, at 15:54, Shikhar Verma  wrote:
> 
> Hi,
> 
> I have created the virtual machine from the oVirt manager, but when I try to 
> get the console of the VM to do the installation, it only shows these two 
> lines, even though I have tried Run Once with CD-ROM as the first boot 
> priority and the CentOS 7 ISO attached:

is the iso alright? does it boot elsewhere? does your vm have enough ram?

> 
> SeaBIOS (version 1.11.0-2.el7)
> Machine UUID ---
> 
> Also, from manager, newly launched vm is showing green..
> 
> And from the host machine, it is showing this error
> 
> Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+: 12800: error 
> : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest 
> agent is not connected

because it’s not booted yet. irrelevant.

> 
> I am using the latest version of ovirt-engine & host as well.
> 
> Please respond.
> 
> Thanks
> Shikhar
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/52HAFOXSXLJRI47DB3JBM7HY3VXGC6CM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXZVBGPWEMZFEBHMJLSNLESB2Y76B7EV/


[ovirt-users] Re: How to connect to a guest with vGPU ?

2019-01-24 Thread Michal Skrivanek


> On 22 Jan 2019, at 10:39, Josep Manel Andrés Moscardó 
>  wrote:
> 
> I am reading about VirGL but I am not sure how to set it up, can anyone point 
> me to some documentation?

It’s been in development for quite some time, but it's not yet completely ready. 
Feel free to reach out on spice-list regarding the current status

> 
> Cheers.
> 
> 
> 
> On 17/1/19 17:20, Josep Manel Andrés Moscardó wrote:
>> Hi,
>> I got vGPU through mdev working but I am wondering how I would connect to 
>> the client and make use of the GPU. So far I try to access the console 
>> through SPICE and at some point in the boot process it switches to GPU and I 
>> cannot see anything else.

it doesn’t have SPICE support
it recently gained VNC support which is not yet integrated into oVirt, but it 
will likely happen reasonably soon

btw which card is it? nvidia or intel?

Thanks,
michal

>> Thanks.
> 
> -- 
> Josep Manel Andrés Moscardó
> Systems Engineer, IT Operations
> EMBL Heidelberg
> T +49 6221 387-8394
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UFB2OFQO2KJ2DLU3MDQ6FBPKFRUQ2VZE/
>  
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FB2IYTGEQYIZVSOZST3OHJH6KGO4UQWW/


[ovirt-users] Re: oVirt 4.2.8 CPU Compatibility

2019-01-24 Thread Michal Skrivanek


> On 23 Jan 2019, at 18:36, Vinícius Ferrão  wrote:
> 
> Is there any way to just put the CPUs in a legacy mode to simply use the 
> servers, even with low performance? We have a machine with an Opteron 6380 
> and it appears to be the same case as Uwe reported. Our plan was to add this 
> machine to the datacenter in an isolated cluster.

you could always just add it back to the db; editing ServerCPUList in vdc_options is 
somewhat ugly, but you can copy the entry from 4.2.
Obviously not supported, but AFAIK it will work just fine. There are no real 
dependencies on the CPU in oVirt itself, only compatibility issues with the most 
recent Windows versions.

Thanks,
michal
> 
> Thanks,
> 
>> On 23 Jan 2019, at 13:49, Lucie Leistnerova  wrote:
>> 
>> Hello Uwe,
>> 
>> On 1/23/19 9:21 AM, Uwe Laverenz wrote:
>>> Hi,
>>> 
>>> Am Dienstag, den 22.01.2019, 15:46 +0100 schrieb Lucie Leistnerova:
>>> 
 Yes, it should be supported also in 4.2.8. According to Release
 notes for 4.2.7 this warning is related to 4.3 version.
 
 https://www.ovirt.org/release/4.2.7/
 
 BZ 1623259 Mark clusters with deprecated CPU type
 In the current release, for compatibility versions 4.2 and 4.3, a
 warning in the Cluster screen indicates that the CPU types currently
 used are not supported in 4.3. The warning enables the user to change
 the cluster CPU type to a supported CPU type.
>>> Does this mean that I would not be able to install OVirt 4.3 on
>>> machines with Opteron 6174 cpu? Or would I just get a warning?
>>> 
>>> I was thinking of recycling our old DL385 machines for an OVirt/Gluster
>>> testing lab. :)
>> 
>> As I understand it, you won't be able to add such a host to a cluster with 
>> compatibility version 4.3.
>> So after the engine upgrade to 4.3, the old cluster and hosts will still work, 
>> until you need to update the cluster compatibility version to 4.3 as well.
>> 
>>> cu,
>>> Uwe
>>> 
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5OCE7Y53RZ2WEOCCU77PBV7DZTTHZTR/
>> Best regards,
>> 
>> -- 
>> Lucie Leistnerova
>> Quality Engineer, QE Cloud, RHVM
>> Red Hat EMEA
>> 
>> IRC: lleistne @ #rhev-qe
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSLZRQGOAGN5TK4NZFQAA5YGINRBW23T/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F65QRWSOLRVPUO52XJCMLDILS7A3ZUM4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5G57LNOBE7RZRT2YDFAE3IDHRWVHAIHA/