[ovirt-users] Spice Client Connection Issues Using aSpice

2018-02-18 Thread Jeremy Tourville
Hello,

I am having trouble connecting to my guest vm (Kali Linux) which is running 
spice. My engine is running version: 4.2.1.7-1.el7.centos.

I am using oVirt Node as my host running version: 4.2.1.1.


I have taken the following steps to try and get everything running properly.

  1.  Download the root CA certificate 
https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
  2.  Edit the vm and define the graphical console entries.  Video type is set 
to QXL, Graphics protocol is spice, USB support is enabled.
  3.  Install the guest agent in Debian per the instructions here - 
https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/
  It is my understanding that installing the guest agent will also install the 
virt IO device drivers.
  4.  Install the spice-vdagent per the instructions here - 
https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/
  5.  On the aSpice client I have imported the CA certificate from step 1 
above.  I defined the connection using the IP of my Node and TLS port 5901.

To troubleshoot my connection issues, I confirmed which port the display is listening on:
virsh # domdisplay Kali
spice://172.30.42.12?tls-port=5901

I see the following when attempting to connect.
tail -f /var/log/libvirt/qemu/Kali.log

140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert 
internal error:s3_pkt.c:1493:SSL alert number 80
((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: 
SSL_accept failed, error=1
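
The handshake can also be tested from any machine with openssl (a minimal
sketch, assuming the host/port from the domdisplay output above and the CA
certificate from step 1 saved as ca.pem):

# test the TLS handshake against the SPICE TLS port
openssl s_client -connect 172.30.42.12:5901 -CAfile ca.pem < /dev/null
# check the "Verify return code" line and the certificate subject; a failure
# here reproduces the SSL_accept error outside of aSpice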

I came across some documentation that states, in the caveat section, 
"Certificate of spice SSL should be separate certificate."
https://www.ovirt.org/develop/release-management/features/infra/pki/

Is this still the case for version 4?  The document references versions 3.2 and 
3.3.  If so, how do I generate a new certificate for use with spice?  Please 
let me know if you require further info to troubleshoot; I am happy to provide 
it.  Many thanks in advance.
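
For reference, libvirt reads the SPICE TLS material from a directory set in
/etc/libvirt/qemu.conf; the values below are what vdsm typically configures
on a node (treat the paths as assumptions and verify them on your host):

# /etc/libvirt/qemu.conf - SPICE TLS settings as usually written by vdsm
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/vdsm/libvirt-spice"
# the directory is expected to contain ca-cert.pem, server-cert.pem and
# server-key.pem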



Re: [ovirt-users] Unable to connect to the graphic server

2018-02-18 Thread Yedidyah Bar David
On Mon, Feb 19, 2018 at 12:46 AM, Alex Bartonek  wrote:
>
>
>  Original Message 
>  On February 18, 2018 12:32 AM, Yedidyah Bar David  wrote:
>
> >On Fri, Feb 16, 2018 at 5:50 AM, Alex Bartonek a...@unix1337.com wrote:
> >> Original Message 
> >> On February 15, 2018 12:52 AM, Yedidyah Bar David d...@redhat.com wrote:
> >>>On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek a...@unix1337.com wrote:
>  Original Message 
>  On February 14, 2018 2:23 AM, Yedidyah Bar David d...@redhat.com wrote:
> >On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek a...@unix1337.com wrote:
> >>I've built and rebuilt about 4 oVirt servers.  Consider myself pretty 
> >>good
> >> at this.  LOL.
> >> So I am setting up a oVirt server for a friend on his r710.  CentOS 7, 
> >> ovirt
> >> 4.2.   /etc/hosts has the correct IP and FQDN setup.
> >> When I build a VM and try to open a console session via  SPICE I am 
> >> unable
> >> to connect to the graphic server.  I'm connecting from a Windows 10 
> >> box.
> >> Using virt-manager to connect.
> >> What happens when you try?
> >> Unable to connect to the graphic console is what the error says.  Here 
> >> is the .vv file other than the cert stuff in it:
> >> [virt-viewer]
> >> type=spice
> >> host=192.168.1.83
> >> port=-1
> >> password=
> >> Password is valid for 120 seconds.
> >>
> >delete-this-file=1
>  fullscreen=0
>  title=Win_7_32bit:%d
>  toggle-fullscreen=shift+f11
>  release-cursor=shift+f12
>  tls-port=5900
>  enable-smartcard=0
>  enable-usb-autoshare=1
>  usb-filter=-1,-1,-1,-1,0
>  tls-ciphers=DEFAULT
> host-subject=O=williams.com,CN=randb.williams.com
>  Port 5900 is listening by IP on the server, so that looks correct.  I 
>  shut the firewall off just in case it was the issue..no go.
> Did you verify that you can connect there manually (e.g. with telnet)?
> >>> Can you run a sniffer on both sides to make sure traffic passes correctly?
> >>> Can you check vdsm/libvirt logs on the host side?
> >>>Ok.. I must have tanked it on install with the firewall.  The firewall is 
> >>>blocking port 5900.  This is on CentOS 7.  If I flush the rules, it works.
> >>
> >
> > Thanks for the report.
> >
> > Did you choose to have firewall configured automatically, or did you
> > configure it yourself?
>
>
> I did configure the host to manage the firewall.  Just to make sure, I 
> deleted the host, recreated it, and still had the issue.  I ended up adding 
> the firewall rule manually, which took care of it.  Never had to do that before.

Can you please share the relevant logs? On the engine:
/var/log/ovirt-engine/host-deploy and
/var/log/ovirt-engine/engine.log. Thanks!

Also adding Ondra.

Best regards,
-- 
Didi


Re: [ovirt-users] Unable to connect to the graphic server

2018-02-18 Thread Alex Bartonek

 Original Message 
 On February 18, 2018 12:32 AM, Yedidyah Bar David  wrote:

>On Fri, Feb 16, 2018 at 5:50 AM, Alex Bartonek a...@unix1337.com wrote:
>> Original Message 
>> On February 15, 2018 12:52 AM, Yedidyah Bar David d...@redhat.com wrote:
>>>On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek a...@unix1337.com wrote:
 Original Message 
 On February 14, 2018 2:23 AM, Yedidyah Bar David d...@redhat.com wrote:
>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek a...@unix1337.com wrote:
>>I've built and rebuilt about 4 oVirt servers.  Consider myself pretty good
>> at this.  LOL.
>> So I am setting up a oVirt server for a friend on his r710.  CentOS 7, 
>> ovirt
>> 4.2.   /etc/hosts has the correct IP and FQDN setup.
>> When I build a VM and try to open a console session via  SPICE I am 
>> unable
>> to connect to the graphic server.  I'm connecting from a Windows 10 box.
>> Using virt-manager to connect.
>> What happens when you try?
>> Unable to connect to the graphic console is what the error says.  Here 
>> is the .vv file other than the cert stuff in it:
>> [virt-viewer]
>> type=spice
>> host=192.168.1.83
>> port=-1
>> password=
>> Password is valid for 120 seconds.
>>
>delete-this-file=1
 fullscreen=0
 title=Win_7_32bit:%d
 toggle-fullscreen=shift+f11
 release-cursor=shift+f12
 tls-port=5900
 enable-smartcard=0
 enable-usb-autoshare=1
 usb-filter=-1,-1,-1,-1,0
 tls-ciphers=DEFAULT
host-subject=O=williams.com,CN=randb.williams.com
 Port 5900 is listening by IP on the server, so that looks correct.  I shut 
 the firewall off just in case it was the issue..no go.
Did you verify that you can connect there manually (e.g. with telnet)?
>>> Can you run a sniffer on both sides to make sure traffic passes correctly?
>>> Can you check vdsm/libvirt logs on the host side?
>>>Ok.. I must have tanked it on install with the firewall.  The firewall is 
>>>blocking port 5900.  This is on CentOS 7.  If I flush the rules, it works.
>>
>
> Thanks for the report.
>
> Did you choose to have firewall configured automatically, or did you
> configure it yourself?


I did configure the host to manage the firewall.  Just to make sure, I deleted 
the host, recreated it, and still had the issue.  I ended up adding the firewall 
rule manually, which took care of it.  Never had to do that before.
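
For anyone hitting the same thing, the manual rule looks something like this
(a sketch for CentOS 7 with firewalld; 5900-6923/tcp is the range oVirt
normally opens for consoles - adjust the range and zone as needed):

# permanently open the console port range, then reload firewalld
firewall-cmd --permanent --add-port=5900-6923/tcp
firewall-cmd --reload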

-Alex


Re: [ovirt-users] qcow2 images corruption

2018-02-18 Thread Nir Soffer
On Wed, Feb 7, 2018 at 7:09 PM Nicolas Ecarnot  wrote:

> Hello,
>
> TL; DR : qcow2 images keep getting corrupted. Any workaround?
>
> Long version:
> I have already raised this discussion on the oVirt and on the
> qemu-block mailing lists, under similar circumstances, but I have learned
> more in the months since; here is some information:
>
> - We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS
> 7.{2,3} hosts
> - Hosts :
>- CentOS 7.2 1511 :
>  - Kernel = 3.10.0 327
>  - KVM : 2.3.0-31
>  - libvirt : 1.2.17
>  - vdsm : 4.17.32-1
>- CentOS 7.3 1611 :
>  - Kernel 3.10.0 514
>  - KVM : 2.3.0-31
>  - libvirt 2.0.0-10
>  - vdsm : 4.17.32-1
> - Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated
> network
>

With 3.6 and iSCSI storage you have the issue of the lvmetad service
activating oVirt volumes by default, and also activating guest LVs inside
oVirt raw volumes.
This can lead to data corruption if an LV was activated before it was
extended on another host, so that the LV size on the host does not reflect
the actual LV size.
We had many bugs related to this; check this tracker for related bugs:
https://bugzilla.redhat.com/1374545

To avoid this issue, you need to do the following (a combined sketch follows
the list):

1. edit /etc/lvm/lvm.conf global/use_lvmetad to:

use_lvmetad = 0

2. disable and mask these services:

- lvm2-lvmetad.socket
- lvm2-lvmetad.service
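
Combined, a sketch of both steps (assuming the stock lvm.conf layout on
CentOS 7):

# 1. turn off lvmetad usage in lvm.conf
sed -i 's/^\( *use_lvmetad *=\).*/\1 0/' /etc/lvm/lvm.conf
# 2. stop, disable and mask the lvmetad units
systemctl stop lvm2-lvmetad.socket lvm2-lvmetad.service
systemctl disable lvm2-lvmetad.socket lvm2-lvmetad.service
systemctl mask lvm2-lvmetad.socket lvm2-lvmetad.service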

Note that this may cause warnings from systemd during boot; the warnings
are harmless:
https://bugzilla.redhat.com/1462792

For extra safety and better performance, you should also set up an lvm filter
on all hosts.

See this post for an example of how it is done in 4.x:
https://www.ovirt.org/blog/2017/12/lvm-configuration-the-easy-way/

Since you run 3.6, you will have to set up the filter manually in the same
way.
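
The manual version boils down to a filter in the devices section of
/etc/lvm/lvm.conf that accepts only the devices the host itself needs and
rejects everything else, for example (a sketch - the device path differs
per host):

# /etc/lvm/lvm.conf - accept only the host's own OS device
filter = [ "a|^/dev/sda2$|", "r|.*|" ]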

Nir


> - It depends on the week, but all in all there are around 32 hosts, 8 storage
> domains and, for various reasons, very few VMs (fewer than 200).
> - One peculiar point is that most of our VMs are provided an additional
> dedicated network interface that is iSCSI-connected to some volumes of
> our SAN - these volumes not being part of the oVirt setup. That could
> lead to a lot of additional iSCSI traffic.
>
>  From time to time, a random VM appears paused by oVirt.
> Digging into the oVirt engine logs, then into the host vdsm logs, it
> appears that the host considers the qcow2 image as corrupted.
> In what I consider conservative behavior, vdsm stops any
> interaction with this image and marks it as paused.
> Any attempt to unpause it leads to the same conservative pause.
>
> After having found (https://access.redhat.com/solutions/1173623) the
> right logical volume hosting the qcow2 image, I can run qemu-img check
> on it.
> - On 80% of my VMs, I find no errors.
> - On 15% of them, I find leaked-cluster errors that I can correct using
> "qemu-img check -r all"
> - On 5% of them, I find leaked-cluster errors and further fatal errors,
> which cannot be corrected with qemu-img.
> In rare cases, qemu-img can correct them, but it destroys large parts of
> the image (it becomes unusable), and in other cases it cannot correct them
> at all.
>
> Months ago, I already sent a similar message, but then the error message
> was about "No space left on device"
> (https://www.mail-archive.com/qemu-block@gnu.org/msg00110.html).
>
> This time, I don't have this message about space, but only corruption.
>
> I kept reading and found a similar discussion in the Proxmox group :
> https://lists.ovirt.org/pipermail/users/2018-February/086750.html
>
>
> https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heavy-disk-i-o.32865/page-2
>
> What I read that is similar to my case is:
> - usage of qcow2
> - heavy disk I/O
> - using the virtio-blk driver
>
> In the proxmox thread, they tend to say that using virtio-scsi is the
> solution. I asked this question of the oVirt experts
> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but
> it's not clear the driver is to blame.
>
> I agree with the answer Yaniv Kaul gave me, saying I have to properly
> report the issue, so I'm longing to know which particular information I
> can give you now.
>
> As you can imagine, all this setup is in production, and for most of the
> VMs, I cannot "play" with them. Moreover, we launched a campaign of
> nightly stopping every VM, running qemu-img check on each one, then booting
> it again, so it might take some time before I find another corrupted image
> (which I'll carefully store for debugging).
>
> Other information: we very rarely take snapshots, but I can well imagine
> that automated migrations of VMs could trigger similar behavior on qcow2
> images.
>
> Last point about the versions we use: yes, they're old; yes, we're
> planning to upgrade, but we don't know when.
>
> Regards,
>
> --
> Nicolas ECARNOT

Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-02-18 Thread Nir Soffer
On Sun, Feb 18, 2018 at 8:04 PM Alex K  wrote:

> Are there any examples on using ovirt-imageio to backup a VM or where I
> could find details of RESTAPI for this functionality?
> I might attempt to write a python script for this purpose.
>

Here:
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py
-
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk_snapshots.py

You probably need to add the vm configuration to complete the backup.
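
A rough sketch of the first two steps of that flow with curl (the engine
address, credentials and VM id are placeholders; the disk download itself is
what the two example scripts above implement):

# 1. take a snapshot
curl -s --cacert ca.pem -u admin@internal:password \
    -H 'Content-Type: application/xml' \
    -d '<snapshot><description>backup</description></snapshot>' \
    https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots

# 2. fetch the vm configuration (OVF); in 4.2 listing the snapshots with
#    all_content=true should include it
curl -s --cacert ca.pem -u admin@internal:password \
    'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots?all_content=true'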


>
> Thanx,
> Alex
>
> On Tue, Feb 13, 2018 at 8:59 PM, Alex K  wrote:
>
>> Thank you Nir for the below.
>>
>> I am putting some comments inline in blue.
>>
>>
>> On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer  wrote:
>>
>>> On Wed, Jan 24, 2018 at 3:19 PM Alex K  wrote:
>>>
 Hi all,

 I have a cluster with 3 nodes, using ovirt 4.1 in a self-hosted setup
 on top of glusterfs.
 On some VMs (especially one Windows Server 2016 64bit VM with 500 GB of
 disk) I almost always observe that during the backup the VM is rendered
 unresponsive (the dashboard shows a question mark at the VM status and the
 VM does not respond to ping or to anything). Guest agents are installed on
 the VMs.

 For scheduled backups I use:

 https://github.com/wefixit-AT/oVirtBackup

 The script does the following:

 1. snapshot VM (this is done ok without any failure)

>>>
>>> This is a very cheap operation
>>>
>>>
 2. Clone snapshot (this steps renders the VM unresponsive)

>>>
>>> This copies 500g of data. In the gluster case, it copies 1500g of data,
>>> since in glusterfs the client is doing the replication.
>>>
>>> Maybe your network or gluster server is too slow? Can you describe the
>>> network topology?
>>>
>>> Please also attach the volume info for the gluster volume; maybe it is
>>> not configured in the best way?
>>>
>>
>> The network is 1Gbit. The hosts (3 of them) are decent, new hardware,
>> each having 32GB RAM, 16 CPU cores and 2 TB of storage in RAID10.
>> The 7 hosted VMs exhibit high performance. The VMs are Windows 2016
>> and Windows 10.
>> The network topology is: two networks are defined in ovirt: ovirtmgmt is
>> the management and access network, and "storage" is a separate network,
>> where each server is connected with two network cables to a managed switch
>> with mode 6 load balancing. This storage network is used for gluster
>> traffic. The volume configuration is attached.
>> Attached the volume configuration.
>>
>>> 3. Export Clone

>>>
>>> This copies 500g to the export domain. If the export domain is on
>>> glusterfs as well, you now copy another 1500g of data.
>>>
>>>
>> The export domain is a Synology NAS with an NFS share.  If the cloning
>> succeeds, then the export completes OK.
>>
>>> 4. Delete clone

 5. Delete snapshot

>>>
>>> It's not clear why you need to clone the vm before you export it; you
>>> could save half of the data copies.
>>>
>> Because I cannot export the VM while it is running. It does not provide
>> such an option.
>>
>>>
>>> If you run 4.2, you can back up the vm *while the vm is running* by:
>>> - Take a snapshot
>>> - Get the vm ovf from the engine api
>>> - Download the vm disks using ovirt-imageio and store the snapshots in
>>>   your backup storage
>>> - Delete a snapshot
>>>
>>> In this flow, you would copy 500g.
>>>
>>> I was not aware of this option. Checking quickly at the site, it seems
>> that it is still half implemented? Is there any script that I may use to
>> test this? I am interested in having these backups scheduled.
>>
>>
>>> Daniel, please correct me if I'm wrong regarding doing this online.
>>>
>>> Regardless, a vm should not become non-responsive while cloning. Please
>>> file a bug
>>> for this and attach engine, vdsm, and glusterfs logs.
>>>
>>>
>> Nir
>>>
>>> Do you have any similar experience? Any suggestions to address this?

 I have never seen such an issue with hosted Linux VMs.

 The cluster has enough storage to accommodate the clone.


 Thanx,

 Alex





Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-02-18 Thread Alex K
Hi all,

Are there any examples on using ovirt-imageio to backup a VM or where I
could find details of RESTAPI for this functionality?
I might attempt to write a python script for this purpose.

Thanx,
Alex

On Tue, Feb 13, 2018 at 8:59 PM, Alex K  wrote:

> Thank you Nir for the below.
>
> I am putting some comments inline in blue.
>
>
> On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer  wrote:
>
>> On Wed, Jan 24, 2018 at 3:19 PM Alex K  wrote:
>>
>>> Hi all,
>>>
>>> I have a cluster with 3 nodes, using ovirt 4.1 in a self-hosted setup on
>>> top of glusterfs.
>>> On some VMs (especially one Windows Server 2016 64bit VM with 500 GB of
>>> disk) I almost always observe that during the backup the VM is rendered
>>> unresponsive (the dashboard shows a question mark at the VM status and the
>>> VM does not respond to ping or to anything). Guest agents are installed on
>>> the VMs.
>>>
>>> For scheduled backups I use:
>>>
>>> https://github.com/wefixit-AT/oVirtBackup
>>>
>>> The script does the following:
>>>
>>> 1. snapshot VM (this is done ok without any failure)
>>>
>>
>> This is a very cheap operation
>>
>>
>>> 2. Clone snapshot (this steps renders the VM unresponsive)
>>>
>>
>> This copies 500g of data. In the gluster case, it copies 1500g of data,
>> since in glusterfs the client is doing the replication.
>>
>> Maybe your network or gluster server is too slow? Can you describe the
>> network topology?
>>
>> Please also attach the volume info for the gluster volume; maybe it is
>> not configured in the best way?
>>
>
> The network is 1Gbit. The hosts (3 of them) are decent, new hardware,
> each having 32GB RAM, 16 CPU cores and 2 TB of storage in RAID10.
> The 7 hosted VMs exhibit high performance. The VMs are Windows 2016
> and Windows 10.
> The network topology is: two networks are defined in ovirt: ovirtmgmt is
> the management and access network, and "storage" is a separate network,
> where each server is connected with two network cables to a managed switch
> with mode 6 load balancing. This storage network is used for gluster
> traffic. The volume configuration is attached.
>
>> 3. Export Clone
>>>
>>
>> This copies 500g to the export domain. If the export domain is on glusterfs
>> as well, you now copy another 1500g of data.
>>
>>
> The export domain is a Synology NAS with an NFS share.  If the cloning
> succeeds, then the export completes OK.
>
>> 4. Delete clone
>>>
>>> 5. Delete snapshot
>>>
>>
>> It's not clear why you need to clone the vm before you export it; you
>> could save half of the data copies.
>>
> Because I cannot export the VM while it is running. It does not provide
> such an option.
>
>>
>> If you run 4.2, you can back up the vm *while the vm is running* by:
>> - Take a snapshot
>> - Get the vm ovf from the engine api
>> - Download the vm disks using ovirt-imageio and store the snapshots in
>>   your backup storage
>> - Delete a snapshot
>>
>> In this flow, you would copy 500g.
>>
>> I was not aware of this option. Checking quickly at the site, it seems
> that it is still half implemented? Is there any script that I may use to
> test this? I am interested in having these backups scheduled.
>
>
>> Daniel, please correct me if I'm wrong regarding doing this online.
>>
>> Regardless, a vm should not become non-responsive while cloning. Please
>> file a bug
>> for this and attach engine, vdsm, and glusterfs logs.
>>
>>
> Nir
>>
>> Do you have any similar experience? Any suggestions to address this?
>>>
>>> I have never seen such an issue with hosted Linux VMs.
>>>
>>> The cluster has enough storage to accommodate the clone.
>>>
>>>
>>> Thanx,
>>>
>>> Alex
>>>
>>>
>>>


Re: [ovirt-users] Failing live migration with SPICE

2018-02-18 Thread Alex K
On Sun, Feb 18, 2018 at 4:25 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 18 Feb 2018, at 13:09, Alex K  wrote:
>
> I see that the latest guest tools for 4.1 are dated 27-04-2017.
>
> http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/
>
> http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-
> toolsSetup/4.2-1.el7.centos/
>
> Can I use the tools from 4.2 and install them on Windows VMs running on
> top of 4.1?
>
>
> Yes you can; the tools are compatible, and it almost always makes sense to
> run the latest regardless of your ovirt cluster version.
>
> Great! I will try those.

>
> Thanx,
> Alex
>
> On Sun, Feb 18, 2018 at 1:53 PM, Alex K  wrote:
>
>> Seems that this is due to:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1446147
>>
>> I will check if I can find newer guest agents.
>>
>> On Sun, Feb 18, 2018 at 1:46 PM, Alex K  wrote:
>>
>>> Hi all,
>>>
>>> I am running a 3 node ovirt 4.1 self-hosted setup.
>>> I have consistently observed that Windows 10 VMs with a SPICE console fail
>>> to live migrate. Other VMs (Windows Server 2016) migrate normally.
>>>
>>> VDSM log indicates:
>>>
>>> internal error: unable to execute QEMU command 'migrate': qxl: guest
>>> bug: command not in ram bar (migration:287)
>>> 2018-02-18 11:41:59,586+ ERROR (migsrc/2cf3a254) [virt.vm]
>>> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
>>> (migration:429)
>>> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
>>> dom=self)
>>> libvirtError: internal error: unable to execute QEMU command 'migrate':
>>> qxl: guest bug: command not in ram bar
>>>
>>> Seems like a guest agent bug for Windows 10? Is there any fix?
>>>
>>> Thanx,
>>> Alex
>>>
>>
>>


Re: [ovirt-users] Failing live migration with SPICE

2018-02-18 Thread Michal Skrivanek


> On 18 Feb 2018, at 13:09, Alex K  wrote:
> 
> I see that the latest guest tools for 4.1 are dated 27-04-2017. 
> 
> http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/ 
> 
> 
> http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/
>  
> 
> 
> Can I use the tools from 4.2 and install them on Windows VMs running on 
> top of 4.1? 

Yes you can; the tools are compatible, and it almost always makes sense to run 
the latest regardless of your ovirt cluster version.

> 
> Thanx, 
> Alex
> 
> On Sun, Feb 18, 2018 at 1:53 PM, Alex K wrote:
> Seems that this is due to: 
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1446147 
> 
> 
> I will check if I can find newer guest agents. 
> 
> On Sun, Feb 18, 2018 at 1:46 PM, Alex K wrote:
> Hi all, 
> 
> I am running a 3 node ovirt 4.1 self-hosted setup. 
> I have consistently observed that Windows 10 VMs with a SPICE console fail to 
> live migrate. Other VMs (Windows Server 2016) migrate normally. 
> 
> VDSM log indicates: 
> 
> internal error: unable to execute QEMU command 'migrate': qxl: guest bug: 
> command not in ram bar (migration:287)
> 2018-02-18 11:41:59,586+ ERROR (migsrc/2cf3a254) [virt.vm] 
> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate 
> (migration:429)
> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
> dom=self)
> libvirtError: internal error: unable to execute QEMU command 'migrate': qxl: 
> guest bug: command not in ram bar
> 
> Seems like a guest agent bug for Windows 10? Is there any fix?
> 
> Thanx, 
> Alex
> 
> 



Re: [ovirt-users] How to protect SHE VM from being deleted in following setup

2018-02-18 Thread Michal Skrivanek


> On 17 Feb 2018, at 08:22, Vrgotic, Marko  wrote:
> 
> Dear oVirt community,
>  
> I have SHE on the Gluster (not managed by SHE).
> Due to limitations of the VM Portal, I have given a couple of trusted users 
> trimmed-down admin access, so that they can create VMs.
>  
> However, this does make me a bit worried, since the SHE VM could get deleted 
> like any other VM in the pool.

Why do you give them permissions on the HE VM? You should be able to let them 
create VMs, but not let them delete VMs they do not own.

>  
> The SHE VM has its own storage pool, but it’s part of the same hypervisor cluster 
> (limitations of available HW); therefore my users can see it and could accidentally 
> delete it – it can happen!
>  
> QUESTION: Any advice that could help me protect the SHE VM from being deleted?


There’s a “Delete Protection” property on every VM that prevents people from 
accidentally deleting it. That might be enough; messing with permissions might be 
tricky.
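
The same flag can also be set through the REST API if you want to script it
over existing VMs (a sketch - engine address, credentials and the VM id are
placeholders):

curl -s --cacert ca.pem -u admin@internal:password -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<vm><delete_protected>true</delete_protected></vm>' \
    https://engine.example.com/ovirt-engine/api/vms/VM_ID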

Thanks,
michal
>  
> Any suggestions, ideas are highly welcome.
>  
> Thank you.
>  
> Best regards,
> Marko Vrgotic


Re: [ovirt-users] Failing live migration with SPICE

2018-02-18 Thread Alex K
I see that the latest guest tools for 4.1 are dated 27-04-2017.

http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/

http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/

Can I use the tools from 4.2 and install them on Windows VMs running on
top of 4.1?

Thanx,
Alex

On Sun, Feb 18, 2018 at 1:53 PM, Alex K  wrote:

> Seems that this is due to:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1446147
>
> I will check if I can find newer guest agents.
>
> On Sun, Feb 18, 2018 at 1:46 PM, Alex K  wrote:
>
>> Hi all,
>>
>> I am running a 3 node ovirt 4.1 self-hosted setup.
>> I have consistently observed that Windows 10 VMs with a SPICE console fail
>> to live migrate. Other VMs (Windows Server 2016) migrate normally.
>>
>> VDSM log indicates:
>>
>> internal error: unable to execute QEMU command 'migrate': qxl: guest bug:
>> command not in ram bar (migration:287)
>> 2018-02-18 11:41:59,586+ ERROR (migsrc/2cf3a254) [virt.vm]
>> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
>> (migration:429)
>> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
>> dom=self)
>> libvirtError: internal error: unable to execute QEMU command 'migrate':
>> qxl: guest bug: command not in ram bar
>>
>> Seems like a guest agent bug for Windows 10? Is there any fix?
>>
>> Thanx,
>> Alex
>>
>
>


Re: [ovirt-users] Failing live migration with SPICE

2018-02-18 Thread Alex K
Seems that this is due to:

https://bugzilla.redhat.com/show_bug.cgi?id=1446147

I will check if I can find newer guest agents.

On Sun, Feb 18, 2018 at 1:46 PM, Alex K  wrote:

> Hi all,
>
> I am running a 3 node ovirt 4.1 self-hosted setup.
> I have consistently observed that Windows 10 VMs with a SPICE console fail
> to live migrate. Other VMs (Windows Server 2016) migrate normally.
>
> VDSM log indicates:
>
> internal error: unable to execute QEMU command 'migrate': qxl: guest bug:
> command not in ram bar (migration:287)
> 2018-02-18 11:41:59,586+ ERROR (migsrc/2cf3a254) [virt.vm]
> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
> (migration:429)
> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
> dom=self)
> libvirtError: internal error: unable to execute QEMU command 'migrate':
> qxl: guest bug: command not in ram bar
>
> Seems like a guest agent bug for Windows 10? Is there any fix?
>
> Thanx,
> Alex
>


[ovirt-users] Failing live migration with SPICE

2018-02-18 Thread Alex K
Hi all,

I am running a 3 node ovirt 4.1 self-hosted setup.
I have consistently observed that Windows 10 VMs with a SPICE console fail to
live migrate. Other VMs (Windows Server 2016) migrate normally.

VDSM log indicates:

internal error: unable to execute QEMU command 'migrate': qxl: guest bug:
command not in ram bar (migration:287)
2018-02-18 11:41:59,586+ ERROR (migsrc/2cf3a254) [virt.vm]
(vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
(migration:429)
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: internal error: unable to execute QEMU command 'migrate':
qxl: guest bug: command not in ram bar

Seems like a guest agent bug for Windows 10? Is there any fix?

Thanx,
Alex


[ovirt-users] ovirt change of email alert

2018-02-18 Thread Alex K
Hi all,

I had set a specific email alert address during the deploy and then wanted to
change it.
I did the following:

At one of the hosts I ran:

hosted-engine --set-shared-config destination-emails ale...@domain.com
--type=broker

systemctl restart ovirt-ha-broker.service

I had to do the above since changing the email from the GUI did not have any
effect.

After the above, the emails are received at the new email address, but the
cluster seems to have some issue recognizing the state of the engine. I am
flooded with emails saying "EngineMaybeAway-EngineUnexpectedlyDown".

I have also restarted ovirt-ha-agent.service on each host, put the cluster
into global maintenance, and then disabled global maintenance.

In the host agent logs I have:

MainThread::ERROR::2018-02-18
11:12:20,751::hosted_engine::720::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
cannot get lock on host id 1: host already holds lock on a different host id

Another host logs:
MainThread::INFO::2018-02-18
11:20:23,692::states::682::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to unexpected vm shutdown at Sun Feb 18 11:15:13 2018
MainThread::INFO::2018-02-18
11:20:23,692::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUnexpectedlyDown (score: 0)

The engine status on 3 hosts is:
hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : v0
Host ID: 1
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : cfd15dac
local_conf_timestamp   : 4721144
Host timestamp : 4721144
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4721144 (Sun Feb 18 11:20:33 2018)
host-id=1
score=0
vm_conf_refresh_time=4721144 (Sun Feb 18 11:20:33 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Tue Feb 24 15:29:44 1970


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : v1
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 5cbcef4c
local_conf_timestamp   : 2499416
Host timestamp : 2499416
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2499416 (Sun Feb 18 11:20:46 2018)
host-id=2
score=0
vm_conf_refresh_time=2499416 (Sun Feb 18 11:20:46 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 29 22:18:42 1970


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : v2
Host ID: 3
Engine status  : unknown stale-data
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : f064d529
local_conf_timestamp   : 2920612
Host timestamp : 2920611
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2920611 (Sun Feb 18 10:47:31 2018)
host-id=3
score=3400
vm_conf_refresh_time=2920612 (Sun Feb 18 10:47:32 2018)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


Putting each host into maintenance and then activating it again does not
resolve the issue. It seems I have to avoid defining an email address during
deploy and set it only later in the GUI.

How can one recover from this situation?
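
One sequence that might help (a sketch only; hosted-engine
--reinitialize-lockspace targets exactly the "cannot get lock on host id"
error above and should be run from one host with the engine VM down):

hosted-engine --set-maintenance --mode=global
# on every host, restart the HA services so they re-read the shared config
systemctl restart ovirt-ha-broker ovirt-ha-agent
# on one host only, with the engine VM down
hosted-engine --reinitialize-lockspace
hosted-engine --set-maintenance --mode=none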


Thanx,
Alex


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-18 Thread Maor Lipchuk
Ala,

IIUC you mentioned that a locked snapshot can still be removed.
Can you please guide how to do that?
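
In case it helps, oVirt ships unlock_entity.sh next to taskcleaner.sh in the
dbutils directory for exactly this purpose; a sketch (the exact flags vary
between versions - check ./unlock_entity.sh -h first, and the ids here are
placeholders):

cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -t snapshot <snapshot_id>
./unlock_entity.sh -t disk <disk_id>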

Regards,
Maor

On Fri, Feb 16, 2018 at 10:50 AM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> After rebooting the engine virtual machine the task disappears, but the
> virtual disk is still locked.
> Any ideas how to remove that lock?
> Thanks again.
> Enrico
>
>
> On 16/02/2018 09:45, Enrico Becchetti wrote:
>
> Dear All,
> Are there tools to remove this task (attached)?
>
> taskcleaner.sh seems not to work:
>
> [root@ovirt-new dbutils]# ./taskcleaner.sh -v -r
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
> SELECT DeleteAllCommands();
>  6
> [root@ovirt-new dbutils]# ./taskcleaner.sh -v -R
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
>  This will remove all async_tasks table content!!!
> Caution, this operation should be used with care. Please contact support
> prior to running this command
> Are you sure you want to proceed? [y/n]
> y
> TRUNCATE TABLE async_tasks cascade;
> TRUNCATE TABLE
>
> After that I see the same running tasks. Does that make sense?
>
> Thanks
> Best Regards
> Enrico
>
>
> On 14/02/2018 15:53, Enrico Becchetti wrote:
>
> Dear All,
> old snapshots seem to be the problem. In fact, domain DATA_FC, running in
> 3.5, had some lvm snapshot volumes. Before deactivating DATA_FC I didn't
> remove these snapshots, so when I attached this volume to the new ovirt 4.2
> and imported all the vms at the same time, I also imported all the
> snapshots. But now how can I remove them? Through the ovirt web interface
> the running remove tasks still hang. Are there any other methods?
> Thanks for following this case.
> Best Regards
> Enrico
>
> On 14/02/2018 14:34, Maor Lipchuk wrote:
>
> Seems like all the engine logs are full of the same error.
> From vdsm.log.16.xz I can see an error which might explain this failure:
>
> 2018-02-12 07:51:16,161+0100 INFO  (ioprocess communication (40573))
> [IOProcess] Starting ioprocess (__init__:447)
> 2018-02-12 07:51:16,201+0100 INFO  (jsonrpc/3) [vdsm.api] FINISH
> mergeSnapshots return=None from=:::10.0.0.46,57032,
> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, 
> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0
> (api:52)
> 2018-02-12 07:51:16,275+0100 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573)
> 2018-02-12 07:51:16,276+0100 INFO  (tasks/1) [storage.ThreadPool.WorkerThread]
> START task 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd= Task.commit of >,
> args=None) (threadPool:208)
> 2018-02-12 07:51:16,543+0100 INFO  (tasks/1) [storage.Image]
> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID=
> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 
> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825
> successor=8f595e80-1013-4c14-a2f5-252bce9526fd postZero=False
> discard=False (image:1240)
> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) [storage.TaskManager.Task]
> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
> in run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853,
> in mergeSnapshots
> discard)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line
> 1251, in merge
> srcVol = vols[successor]
> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd'
>
> Ala, maybe you know if there is any known issue with mergeSnapshots?
> The use case here is VMs from oVirt 3.5 which got registered to oVirt 4.2.
>
> Regards,
> Maor
>
>
> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>>   Hi,
>> you can also download them through these
>> links:
>>
>> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
>> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>>
>> Thanks again 
>>
>> Best Regards
>> Enrico
>>
>> On 13/02/2018 14:52, Maor Lipchuk wrote:
>>
>>
>>
>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk 
>> wrote:
>>
>>>
>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
>>> enrico.becche...@pg.infn.it> wrote:
>>>
 see the attached files please ... thanks for your attention !!!

>>>
>>>
>>> Seems like the engine logs do not contain the entire process; can you
>>> please share older logs, going back to the import operation?
>>>
>>
>> And VDSM logs as well from your host
>>
>>
>>>
>>>
 Best Regards
 Enrico


On 13/02/2018 14:09, 

Re: [ovirt-users] database restoration

2018-02-18 Thread Fabrice Bacchella


> On 18 Feb 2018, at 08:05, Yedidyah Bar David wrote:
> 
> On Fri, Feb 16, 2018 at 1:04 PM, Fabrice Bacchella wrote:
>> I'm running a restoration test and getting the following log generated by 
>> engine-backup --mode=restore:
> 
> Which version?

9.2, distribution package.

> 
> Did you also get any error on stdout/stderr, or only in the log?

Only in the logs.

> TL;DR no need to worry, can be ignored.
> ...

Thanks, looks good to me.
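
For context, a typical restore invocation has this shape (the file name is a
placeholder; the flags are listed by engine-backup --help):

engine-backup --mode=restore --file=engine-backup.tar.bz2 \
    --provision-db --restore-permissions --log=restore.log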


