[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-14 Thread Leo David
Hi,
Thank you Alex, I was looking for some optimisation settings as well, since
I am pretty much in the same boat, using SSD-based distributed-replicated
volumes across 12 hosts.
Could anyone else (maybe even from the oVirt or RHEV team) validate these
settings or add some other tweaks as well, so we can use them as a standard?
Thank you very much again!

On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote:

> On 2019-04-14 20:27, Jim Kusznir wrote:
>
> Hi all:
>
> I've had I/O performance problems pretty much since the beginning of using
> oVirt.  I've applied several upgrades as time went on, but strangely, none
> of them have alleviated the problem.  VM disk I/O is still very slow to the
> point that running VMs is often painful; it notably affects nearly all my
> VMs, and makes me leery of starting any more.  I'm currently running 12 VMs
> and the hosted engine on the stack.
>
> My configuration started out with 1Gbps networking and hyperconverged
> gluster running on a single SSD on each node.  It worked, but I/O was
> painfully slow.  I also started running out of space, so I added an SSHD on
> each node, created another gluster volume, and moved VMs over to it.  I
> also ran that on a dedicated 1Gbps network.  I had recurring disk failures
> (seems that disks only lasted about 3-6 months; I warrantied all three at
> least once, and some twice before giving up).  I suspect the Dell PERC 6/i
> was partly to blame; the raid card refused to see/acknowledge the disk, but
> plugging it into a normal PC showed no signs of problems.  In any case,
> performance on that storage was notably bad, even though the gig-e
> interface was rarely taxed.
>
> I put in 10Gbps ethernet and moved all the storage onto that nonetheless,
> as several people here said that 1Gbps just wasn't fast enough.  Some
> aspects improved a bit, but disk I/O is still slow.  And I was still having
> problems with the SSHD data gluster volume eating disks, so I bought a
> dedicated NAS server (supermicro 12 disk dedicated FreeNAS NFS storage
> system on 10Gbps ethernet).  Set that up.  I found that it was actually
> FASTER than the SSD-based gluster volume, but still slow.  Lately it's been
> getting slower, too... Don't know why.  The FreeNAS server reports network
> loads around 4MB/s on its 10GbE interface, so it's not network constrained.
> At 4MB/s, I'd sure hope the 12 spindle SAS interface wasn't constrained
> either.  (and disk I/O operations on the NAS itself complete much
> faster).
>
> So, running a test on my NAS against an ISO file I haven't accessed in
> months:
>
>  # dd
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
> of=/dev/null bs=1024k count=500
>
> 500+0 records in
> 500+0 records out
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)
>
> Running it on one of my hosts:
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k
> count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s
>
> (I don't know if this is a true apples to apples comparison, as I don't
> have a large file inside this VM's image).  Even this is faster than I
> often see.
>
> I have a VoIP Phone server running as a VM.  Voicemail and other
> recordings usually fail due to IO issues opening and writing the files.
> Often, the first 4 or so seconds of the recording is missed; sometimes the
> entire thing just fails.  I didn't use to have this problem, but it's
> definitely been getting worse.  I finally bit the bullet and ordered a
> physical server dedicated for my VoIP System...But I still want to figure
> out why I'm having all these IO problems.  I read on the list of people
> running 30+ VMs...I feel that my IO can't take any more VMs with any
> semblance of reliability.  We have a Quickbooks server on here too
> (windows), and the performance is abysmal; my CPA is charging me extra
> because of all the lost staff time waiting on the system to respond and
> generate reports.
>
> I'm at my wits' end... I started with gluster on SSD with 1Gbps network,
> migrated to 10Gbps network, and now to dedicated high performance NAS box
> over NFS, and still have performance issues. I don't know how to
> troubleshoot the issue any further, but I've never had these kinds of
> issues when I was playing with other VM technologies.  I'd like to get to
> the point where I can resell virtual servers to customers, but I can't do
> so with my current performance levels.
>
> I'd greatly appreciate help troubleshooting this further.
>
> --Jim
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-14 Thread Alex McWhirter
On 2019-04-14 20:27, Jim Kusznir wrote:

> Hi all:
> 
> I've had I/O performance problems pretty much since the beginning of using 
> oVirt.  I've applied several upgrades as time went on, but strangely, none of 
> them have alleviated the problem.  VM disk I/O is still very slow to the 
> point that running VMs is often painful; it notably affects nearly all my 
> VMs, and makes me leery of starting any more.  I'm currently running 12 VMs
> and the hosted engine on the stack. 
> 
> My configuration started out with 1Gbps networking and hyperconverged gluster 
> running on a single SSD on each node.  It worked, but I/O was painfully slow. 
>  I also started running out of space, so I added an SSHD on each node, 
> created another gluster volume, and moved VMs over to it.  I also ran that on 
> a dedicated 1Gbps network.  I had recurring disk failures (seems that disks 
> only lasted about 3-6 months; I warrantied all three at least once, and some 
> twice before giving up).  I suspect the Dell PERC 6/i was partly to blame; 
> the raid card refused to see/acknowledge the disk, but plugging it into a 
> normal PC showed no signs of problems.  In any case, performance on that 
> storage was notably bad, even though the gig-e interface was rarely taxed. 
> 
> I put in 10Gbps ethernet and moved all the storage onto that nonetheless, as
> several people here said that 1Gbps just wasn't fast enough.  Some aspects 
> improved a bit, but disk I/O is still slow.  And I was still having problems 
> with the SSHD data gluster volume eating disks, so I bought a dedicated NAS 
> server (supermicro 12 disk dedicated FreeNAS NFS storage system on 10Gbps 
> ethernet).  Set that up.  I found that it was actually FASTER than the 
> SSD-based gluster volume, but still slow.  Lately it's been getting slower,
> too... Don't know why.  The FreeNAS server reports network loads around 4MB/s
> on its 10GbE interface, so it's not network constrained.  At 4MB/s, I'd sure
> hope the 12 spindle SAS interface wasn't constrained either.  (and disk 
> I/O operations on the NAS itself complete much faster). 
> 
> So, running a test on my NAS against an ISO file I haven't accessed in 
> months: 
> 
> # dd 
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>  of=/dev/null bs=1024k count=500  
> 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec) 
> 
> Running it on one of my hosts: 
> 
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k count=500 
> 500+0 records in 
> 500+0 records out 
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s 
> 
> (I don't know if this is a true apples to apples comparison, as I don't have 
> a large file inside this VM's image).  Even this is faster than I often see. 
> 
> I have a VoIP Phone server running as a VM.  Voicemail and other recordings 
> usually fail due to IO issues opening and writing the files.  Often, the 
> first 4 or so seconds of the recording is missed; sometimes the entire thing 
> just fails.  I didn't use to have this problem, but it's definitely been
> getting worse.  I finally bit the bullet and ordered a physical server 
> dedicated for my VoIP System...But I still want to figure out why I'm having 
> all these IO problems.  I read on the list of people running 30+ VMs...I feel 
> that my IO can't take any more VMs with any semblance of reliability.  We 
> have a Quickbooks server on here too (windows), and the performance is 
> abysmal; my CPA is charging me extra because of all the lost staff time 
> waiting on the system to respond and generate reports. 
> 
> I'm at my wits' end... I started with gluster on SSD with 1Gbps network,
> migrated to 10Gbps network, and now to dedicated high performance NAS box 
> over NFS, and still have performance issues. I don't know how to
> troubleshoot the issue any further, but I've never had these kinds of issues 
> when I was playing with other VM technologies.  I'd like to get to the point 
> where I can resell virtual servers to customers, but I can't do so with my 
> current performance levels. 
> 
> I'd greatly appreciate help troubleshooting this further. 
> 
> --Jim 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZR64VABNT2SGKLNP3XNTHCGFZXSOJAQF/

Been working on optimizing the same. This is where I'm at currently.

Gluster volume settings. 

diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.write-behind-window-size: 64MB
performance.flush-behind: on
performance.stat-prefetch: 
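
(The settings list above is cut off in the archive. For reference, a minimal
sketch of how such options are applied, assuming a volume named "data" - the
volume name is a placeholder:)

# Hypothetical example - substitute the actual volume name
gluster volume set data diagnostics.latency-measurement on
gluster volume set data diagnostics.count-fop-hits on
gluster volume set data performance.write-behind-window-size 64MB
gluster volume set data performance.flush-behind on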

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
On 2019-04-14 17:07, Strahil Nikolov wrote:

> Some kernels do not like values below 5%, thus I prefer to use  
> vm.dirty_bytes & vm.dirty_background_bytes. 
> Try the following ones (comment out the vdsm.conf values ): 
> 
> vm.dirty_background_bytes = 2 
> vm.dirty_bytes = 45000 
> It's more like shooting in the dark, but it might help.
> 
> Best Regards, 
> Strahil Nikolov 
> 
> On Sunday, 14 April 2019 at 19:06:07 GMT+3, Alex McWhirter
> wrote:
> 
> On 2019-04-13 03:15, Strahil wrote:
>> Hi,
>> 
>> What is your dirty  cache settings on the gluster servers  ?
>> 
>> Best Regards,
>> Strahil Nikolov
>>
>> On Apr 13, 2019 00:44, Alex McWhirter wrote:
>>> 
>>> I have 8 machines acting as gluster servers. They each have 12 drives
>>> raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
>>> one).
>>> 
>>> They connect to the compute hosts and to each other over lacp'd 10GB
>>> connections split across two Cisco Nexus switches with VPC.
>>> 
>>> Gluster has the following set.
>>> 
>>> performance.write-behind-window-size: 4MB
>>> performance.flush-behind: on
>>> performance.stat-prefetch: on
>>> server.event-threads: 4
>>> client.event-threads: 8
>>> performance.io-thread-count: 32
>>> network.ping-timeout: 30
>>> cluster.granular-entry-heal: enable
>>> performance.strict-o-direct: on
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> features.shard: on
>>> cluster.shd-wait-qlength: 1
>>> cluster.shd-max-threads: 8
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> performance.low-prio-threads: 32
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> auth.allow: *
>>> user.cifs: off
>>> transport.address-family: inet
>>> nfs.disable: off
>>> performance.client-io-threads: on
>>> 
>>> 
>>> I have the following sysctl values on gluster client and servers, 
>>> using
>>> libgfapi, MTU 9K
>>> 
>>> net.core.rmem_max = 134217728
>>> net.core.wmem_max = 134217728
>>> net.ipv4.tcp_rmem = 4096 87380 134217728
>>> net.ipv4.tcp_wmem = 4096 65536 134217728
>>> net.core.netdev_max_backlog = 30
>>> net.ipv4.tcp_moderate_rcvbuf =1
>>> net.ipv4.tcp_no_metrics_save = 1
>>> net.ipv4.tcp_congestion_control=htcp
>>> 
>>> reads with this setup are perfect, benchmarked in VM to be about 
>>> 770MB/s
>>> sequential with disk access times of < 1ms. Writes on the other hand 
>>> are
>>> all over the place. They peak around 320MB/s sequential write, which 
>>> is
>>> what i expect but it seems as if there is some blocking going on.
>>> 
>>> During the write test i will hit 320MB/s briefly, then 0MB/s as disk
>>> access times shoot to over 3000ms, then back to 320MB/s. It averages
>>> out
>>> to about 110MB/s afterwards.
>>> 
>>> Gluster version is 3.12.15 ovirt is 4.2.7.5
>>> 
>>> Any ideas on what i could tune to eliminate or minimize that blocking?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/
>>>  
> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/
> 
> Just the vdsm defaults
> 
> vm.dirty_ratio = 5
> vm.dirty_background_ratio = 2
> 
> these boxes only have 8gb of ram as well, so those percentages should be 
> super small. 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5U6QGARQSLFXMPP2EB57DSEACZ3H5SBY/

I will try this.

I went in and disabled TCP offload on all the NICs - huge performance
boost. Went from 110MB/s to 240MB/s seq writes; reads lost a bit of
performance, going down to 680MB/s, but that's a decent trade-off.
Latency is still really high though; need to work on that. I think some
more TCP tuning might help.
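
(For reference, a minimal sketch of disabling TCP offload features with
ethtool, assuming a placeholder interface name eth0; this change does not
persist across reboots:)

# Hypothetical example - replace eth0 with the gluster-facing interface
ethtool -K eth0 tso off gso off gro off
# Verify the resulting offload settings
ethtool -k eth0 | grep -E 'segmentation|offload'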

[ovirt-users] Poor I/O Performance (again...)

2019-04-14 Thread Jim Kusznir
Hi all:

I've had I/O performance problems pretty much since the beginning of using
oVirt.  I've applied several upgrades as time went on, but strangely, none
of them have alleviated the problem.  VM disk I/O is still very slow to the
point that running VMs is often painful; it notably affects nearly all my
VMs, and makes me leery of starting any more.  I'm currently running 12 VMs
and the hosted engine on the stack.

My configuration started out with 1Gbps networking and hyperconverged
gluster running on a single SSD on each node.  It worked, but I/O was
painfully slow.  I also started running out of space, so I added an SSHD on
each node, created another gluster volume, and moved VMs over to it.  I
also ran that on a dedicated 1Gbps network.  I had recurring disk failures
(seems that disks only lasted about 3-6 months; I warrantied all three at
least once, and some twice before giving up).  I suspect the Dell PERC 6/i
was partly to blame; the raid card refused to see/acknowledge the disk, but
plugging it into a normal PC showed no signs of problems.  In any case,
performance on that storage was notably bad, even though the gig-e
interface was rarely taxed.

I put in 10Gbps ethernet and moved all the storage onto that nonetheless,
as several people here said that 1Gbps just wasn't fast enough.  Some
aspects improved a bit, but disk I/O is still slow.  And I was still having
problems with the SSHD data gluster volume eating disks, so I bought a
dedicated NAS server (supermicro 12 disk dedicated FreeNAS NFS storage
system on 10Gbps ethernet).  Set that up.  I found that it was actually
FASTER than the SSD-based gluster volume, but still slow.  Lately it's been
getting slower, too... Don't know why.  The FreeNAS server reports network
loads around 4MB/s on its 10GbE interface, so it's not network constrained.
At 4MB/s, I'd sure hope the 12 spindle SAS interface wasn't constrained
either.  (and disk I/O operations on the NAS itself complete much
faster).

So, running a test on my NAS against an ISO file I haven't accessed in
months:

 # dd
if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
of=/dev/null bs=1024k count=500

500+0 records in
500+0 records out
524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)

Running it on one of my hosts:
root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k
count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s

(I don't know if this is a true apples to apples comparison, as I don't
have a large file inside this VM's image).  Even this is faster than I
often see.
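
(Both dd runs above go through the page cache, so caching and readahead can
skew the numbers. A minimal sketch of a more comparable test, assuming GNU dd
and fio are available; file names are placeholders:)

# Hypothetical example - bypass the page cache for a fairer read test
dd if=/path/to/large.iso of=/dev/null bs=1M count=500 iflag=direct
# Or a sequential read test with fio
fio --name=seqread --filename=/path/to/testfile --rw=read --bs=1M --size=1G --direct=1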

I have a VoIP Phone server running as a VM.  Voicemail and other recordings
usually fail due to IO issues opening and writing the files.  Often, the
first 4 or so seconds of the recording is missed; sometimes the entire
thing just fails.  I didn't use to have this problem, but it's definitely
been getting worse.  I finally bit the bullet and ordered a physical server
dedicated for my VoIP System...But I still want to figure out why I'm
having all these IO problems.  I read on the list of people running 30+
VMs...I feel that my IO can't take any more VMs with any semblance of
reliability.  We have a Quickbooks server on here too (windows), and the
performance is abysmal; my CPA is charging me extra because of all the lost
staff time waiting on the system to respond and generate reports.

I'm at my wits' end... I started with gluster on SSD with 1Gbps network,
migrated to 10Gbps network, and now to dedicated high performance NAS box
over NFS, and still have performance issues. I don't know how to
troubleshoot the issue any further, but I've never had these kinds of
issues when I was playing with other VM technologies.  I'd like to get to
the point where I can resell virtual servers to customers, but I can't do
so with my current performance levels.

I'd greatly appreciate help troubleshooting this further.

--Jim
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZR64VABNT2SGKLNP3XNTHCGFZXSOJAQF/


[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Strahil Nikolov
Some kernels do not like values below 5%, thus I prefer to use vm.dirty_bytes
& vm.dirty_background_bytes.
Try the following ones (comment out the vdsm.conf values):

vm.dirty_background_bytes = 2
vm.dirty_bytes = 45000

It's more like shooting in the dark, but it might help.
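
(The byte values above appear truncated in the archive. A minimal sketch of
how such settings are typically made persistent, with placeholder values:)

# Hypothetical example - /etc/sysctl.d/90-dirty.conf, values are placeholders
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 268435456
# Apply without rebooting:
# sysctl -p /etc/sysctl.d/90-dirty.conf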
Best Regards,
Strahil Nikolov

On Sunday, 14 April 2019 at 19:06:07 GMT+3, Alex McWhirter wrote:
 
 On 2019-04-13 03:15, Strahil wrote:
> Hi,
> 
> What is your dirty  cache settings on the gluster servers  ?
> 
> Best Regards,
> Strahil Nikolov
>
> On Apr 13, 2019 00:44, Alex McWhirter wrote:
>> 
>> I have 8 machines acting as gluster servers. They each have 12 drives
>> raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
>> one).
>> 
>> They connect to the compute hosts and to each other over lacp'd 10GB
>> connections split across two Cisco Nexus switches with VPC.
>> 
>> Gluster has the following set.
>> 
>> performance.write-behind-window-size: 4MB
>> performance.flush-behind: on
>> performance.stat-prefetch: on
>> server.event-threads: 4
>> client.event-threads: 8
>> performance.io-thread-count: 32
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> features.shard: on
>> cluster.shd-wait-qlength: 1
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> auth.allow: *
>> user.cifs: off
>> transport.address-family: inet
>> nfs.disable: off
>> performance.client-io-threads: on
>> 
>> 
>> I have the following sysctl values on gluster client and servers, 
>> using
>> libgfapi, MTU 9K
>> 
>> net.core.rmem_max = 134217728
>> net.core.wmem_max = 134217728
>> net.ipv4.tcp_rmem = 4096 87380 134217728
>> net.ipv4.tcp_wmem = 4096 65536 134217728
>> net.core.netdev_max_backlog = 30
>> net.ipv4.tcp_moderate_rcvbuf =1
>> net.ipv4.tcp_no_metrics_save = 1
>> net.ipv4.tcp_congestion_control=htcp
>> 
>> reads with this setup are perfect, benchmarked in VM to be about 
>> 770MB/s
>> sequential with disk access times of < 1ms. Writes on the other hand 
>> are
>> all over the place. They peak around 320MB/s sequential write, which 
>> is
>> what i expect but it seems as if there is some blocking going on.
>> 
>> During the write test i will hit 320MB/s briefly, then 0MB/s as disk
>> access times shoot to over 3000ms, then back to 320MB/s. It averages
>> out
>> to about 110MB/s afterwards.
>> 
>> Gluster version is 3.12.15 ovirt is 4.2.7.5
>> 
>> Any ideas on what i could tune to eliminate or minimize that blocking?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/

Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should be 
super small.

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5U6QGARQSLFXMPP2EB57DSEACZ3H5SBY/


[ovirt-users] oVirt 4.3.2 missing/wrong status of VM

2019-04-14 Thread Strahil Nikolov
As I couldn't find the exact mail thread, I'm attaching my 
/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py which fixes the 
missing/wrong status of VMs.
You will need to restart vdsmd (I'm not sure how safe that is with running
guests) in order for it to start working.
Best Regards,
Strahil Nikolov
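
(For reference, a minimal sketch of restarting VDSM after placing the patched
file, subject to the caveat above about running guests:)

# Restart VDSM on the host (briefly interrupts engine-host communication)
systemctl restart vdsmd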

guestagent.py
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KK4NVC3U37HPKCO4KPO4YRBFCKYPDRGE/


[ovirt-users] Re: Failed to add storage domain

2019-04-14 Thread thunderlight1
Update: I checked the log-file /var/log/vdsm/vdsm.log and found this message:

2019-04-14 10:40:40,323+0200 INFO  (jsonrpc/0) [storage.StorageDomain] 
sdUUID=f68db244-c11c-4dd1-9001-26a05071b4da (fileSD:540)
2019-04-14 10:40:40,326+0200 INFO  (jsonrpc/0) [storage.StoragePool] Creating 
pool directory '/rhev/data-center/1de9e54a-5e57-11e9-ba26-00163e16c73a' (sp:634)
2019-04-14 10:40:40,327+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating 
directory: /rhev/data-center/1de9e54a-5e57-11e9-ba26-00163e16c73a mode: None 
(fileUtils:199)
2019-04-14 10:40:40,327+0200 INFO  (jsonrpc/0) [storage.SANLock] Acquiring host 
id for domain f68db244-c11c-4dd1-9001-26a05071b4da (id=250, async=False) 
(clusterlock:294)
2019-04-14 10:40:41,292+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=61946b04-7df3-410e-8f05-ad8a17a13915
(api:48)
2019-04-14 10:40:41,292+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=61946b04-7df3-410e-8f05-ad8a17a13915 (api:54)
2019-04-14 10:40:41,292+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:709)
2019-04-14 10:40:41,329+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH 
createStoragePool error=Cannot acquire host id: 
(u'f68db244-c11c-4dd1-9001-26a05071b4da', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
from=:::192.168.122.168,33246, flow_id=74a96844, 
task_id=78af21d4-d8df-463f-b7fe-fe295d0e25ed (api:52)
2019-04-14 10:40:41,329+0200 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
(Task='78af21d4-d8df-463f-b7fe-fe295d0e25ed') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in createStoragePool
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1003, in 
createStoragePool
leaseParams)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 636, in 
create
self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 567, in 
_acquireTemporaryClusterLock
msd.acquireHostId(self.id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 889, in 
acquireHostId
self._manifest.acquireHostId(hostId, async)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 476, in 
acquireHostId
self._domainLock.acquireHostId(hostId, async)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 
325, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: 
(u'f68db244-c11c-4dd1-9001-26a05071b4da', SanlockException(19, 'Sanlock 
lockspace add failure', 'No such device'))
2019-04-14 10:40:41,330+0200 INFO  (jsonrpc/0) [storage.TaskManager.Task] 
(Task='78af21d4-d8df-463f-b7fe-fe295d0e25ed') aborting: Task is aborted: 
"Cannot acquire host id: (u'f68db244-c11c-4dd1-9001-26a05071b4da', 
SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))" - 
code 661 (task:1181)
2019-04-14 10:40:41,330+0200 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH 
createStoragePool error=Cannot acquire host id: 
(u'f68db244-c11c-4dd1-9001-26a05071b4da', SanlockException(19, 'Sanlock 
lockspace add failure', 'No such device')) (dispatcher:83)
2019-04-14 10:40:41,330+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.create failed (error 661) in 1.01 seconds (__init__:312)
2019-04-14 10:40:42,882+0200 INFO  (jsonrpc/2) [api.host] START getStats() 
from=:::192.168.122.168,33246 (api:48)
2019-04-14 10:40:42,894+0200 INFO  (jsonrpc/2) [vdsm.api] START 
repoStats(domains=()) from=:::192.168.122.168,33246, 
task_id=889a11ff-b596-4111-bb96-33427c26a6c5 (api:48)


Looks like it has some issue when IPv6 is disabled, or am I misunderstanding the
log message?
Is there some way I can fix that without enabling IPv6 in that case?
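
(The 'Sanlock lockspace add failure ... No such device' error usually points
at sanlock not being able to reach the storage domain's ids file, rather than
at IPv6 itself. A hedged sketch of checks one might run; the mount path is a
placeholder:)

# Check that sanlock is running and look for recent errors
systemctl status sanlock
journalctl -u sanlock --since "1 hour ago"
# Verify the lockspace file exists and is typically owned by vdsm:kvm (36:36); path is a placeholder
ls -l /rhev/data-center/mnt/<nfs-server:_export>/f68db244-c11c-4dd1-9001-26a05071b4da/dom_md/ids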
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZF7BNGEVT7BJOUEXCKFILHAZTPZ3CZU/


[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-14 13:05, Alex McWhirter wrote:

On 2019-04-14 12:07, Alex McWhirter wrote:

On 2019-04-13 03:15, Strahil wrote:

Hi,

What is your dirty  cache settings on the gluster servers  ?

Best Regards,
Strahil Nikolov

On Apr 13, 2019 00:44, Alex McWhirter wrote:


I have 8 machines acting as gluster servers. They each have 12 
drives

raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over lacp'd 10GB
connections split across two Cisco Nexus switches with VPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

reads with this setup are perfect, benchmarked in VM to be about 
770MB/s
sequential with disk access times of < 1ms. Writes on the other hand 
are
all over the place. They peak around 320MB/s sequential write, which 
is

what i expect but it seems as if there is some blocking going on.

During the write test i will hit 320MB/s briefly, then 0MB/s as disk
access times shoot to over 3000ms, then back to 320MB/s. It averages
out

to about 110MB/s afterwards.

Gluster version is 3.12.15 ovirt is 4.2.7.5

Any ideas on what i could tune to eliminate or minimize that 
blocking?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should
be super small.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


Doing a gluster profile, my bricks give me some odd numbers.

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00     131.00 us     131.00 us     131.00 us              1       FSTAT
      0.01     104.50 us      77.00 us     118.00 us             14      STATFS
      0.01      95.38 us      45.00 us     130.00 us             16        STAT
      0.10     252.39 us     124.00 us     329.00 us             61      LOOKUP
      0.22      55.68 us      16.00 us     180.00 us            635    FINODELK
      0.43     543.41 us      50.00 us    1760.00 us            125       FSYNC
      1.52     573.75 us      76.00 us    5463.00 us            422    FXATTROP
     97.72    7443.50 us     184.00 us   34917.00 us           2092       WRITE


 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             70      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1792     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          23422  RELEASEDIR
      0.01     126.20
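
(For reference, a sketch of how per-brick profile data like the above is
typically gathered; the volume name is a placeholder:)

# Hypothetical example - substitute the actual volume name
gluster volume profile data start
gluster volume profile data info
gluster volume profile data stop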

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-14 12:07, Alex McWhirter wrote:

On 2019-04-13 03:15, Strahil wrote:

Hi,

What is your dirty  cache settings on the gluster servers  ?

Best Regards,
Strahil Nikolov

On Apr 13, 2019 00:44, Alex McWhirter wrote:


I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over lacp'd 10GB
connections split across two Cisco Nexus switches with VPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

reads with this setup are perfect, benchmarked in VM to be about 
770MB/s
sequential with disk access times of < 1ms. Writes on the other hand 
are
all over the place. They peak around 320MB/s sequential write, which 
is

what i expect but it seems as if there is some blocking going on.

During the write test i will hit 320MB/s briefly, then 0MB/s as disk
access times shoot to over 3000ms, then back to 320MB/s. It averages
out

to about 110MB/s afterwards.

Gluster version is 3.12.15 ovirt is 4.2.7.5

Any ideas on what i could tune to eliminate or minimize that 
blocking?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should
be super small.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


Doing a gluster profile, my bricks give me some odd numbers.

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00     131.00 us     131.00 us     131.00 us              1       FSTAT
      0.01     104.50 us      77.00 us     118.00 us             14      STATFS
      0.01      95.38 us      45.00 us     130.00 us             16        STAT
      0.10     252.39 us     124.00 us     329.00 us             61      LOOKUP
      0.22      55.68 us      16.00 us     180.00 us            635    FINODELK
      0.43     543.41 us      50.00 us    1760.00 us            125       FSYNC
      1.52     573.75 us      76.00 us    5463.00 us            422    FXATTROP
     97.72    7443.50 us     184.00 us   34917.00 us           2092       WRITE


 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             70      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1792     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          23422  RELEASEDIR
      0.01     126.20 us      80.00 us     210.00 us             20

[ovirt-users] VDSM command CreateStoragePoolVDS failed

2019-04-14 Thread fangjian
Hi, I tried to setup Ovirt in my environment but encountered some problems in 
data domain creation.
oVirt Manager Version 4.3.2.1-1.el7
oVirt Node Version 4.3.2
1. Create new data center & cluster;   successful
2. Create new host 'centos-node-01' in cluster;   successful
3. Prepare the NFS storage;  successful
4. Tried to create a new data domain and attach NFS storage to host
'centos-node-01' but failed.
Message: VDSM centos-node-01 command CreateStoragePoolVDS failed: Cannot 
acquire host id: (u'ec225640-f05e-4a9d-bdc4-5219065704ec', SanlockException(2, 
'Sanlock lockspace add failure', 'No such file or directory'))

How can I fix the problem and attach the NFS storage to the new data domain?
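
(A 'Sanlock lockspace add failure ... No such file or directory' during
CreateStoragePoolVDS usually means sanlock cannot see the domain's ids file on
the NFS export. A hedged sketch of things to check; paths and export names are
placeholders:)

# On the NFS server: the export should be writable by vdsm:kvm (36:36), e.g. a
# hypothetical /etc/exports line:
#   /exports/data  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
# On the host: confirm the domain directory is visible under the mount
ls -l /rhev/data-center/mnt/<nfs-server:_exports_data>/ec225640-f05e-4a9d-bdc4-5219065704ec/dom_md/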
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5LH6BLSDOHRIXPSPENHHSAZUYZFMOJB/


[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter

On 2019-04-13 03:15, Strahil wrote:

Hi,

What is your dirty  cache settings on the gluster servers  ?

Best Regards,
Strahil Nikolov

On Apr 13, 2019 00:44, Alex McWhirter wrote:


I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).

They connect to the compute hosts and to each other over lacp'd 10GB
connections split across two Cisco Nexus switches with VPC.

Gluster has the following set.

performance.write-behind-window-size: 4MB
performance.flush-behind: on
performance.stat-prefetch: on
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: on


I have the following sysctl values on gluster client and servers, 
using

libgfapi, MTU 9K

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 30
net.ipv4.tcp_moderate_rcvbuf =1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control=htcp

reads with this setup are perfect, benchmarked in VM to be about 
770MB/s
sequential with disk access times of < 1ms. Writes on the other hand 
are
all over the place. They peak around 320MB/s sequential write, which 
is

what i expect but it seems as if there is some blocking going on.

During the write test i will hit 320MB/s briefly, then 0MB/s as disk
access times shoot to over 3000ms, then back to 320MB/s. It averages
out

to about 110MB/s afterwards.

Gluster version is 3.12.15 ovirt is 4.2.7.5

Any ideas on what i could tune to eliminate or minimize that blocking?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/


Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should be 
super small.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4XWDEHYKD2MQUR45QLMMSK6FBX44KIG/


[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-14 Thread Nardus Geldenhuys
Also getting this after installing a new oVirt node. It stops after about 20 minutes.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGJUPKTPCVJ32NXIA5ULP6BMBGOWICIS/


[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-04-14 Thread adrianquintero
OK, I can try it out in the next couple of weeks; however, I still need to
understand the gluster volume/brick layout and whether this scale-out falls
outside the hyperconverged requirements.



thanks
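
(For reference, a sketch of commands that show the current volume and brick
layout before scaling out; the volume name is a placeholder:)

# Hypothetical example - substitute the actual volume name
gluster volume info data
gluster volume status data
gluster pool list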
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQIAUKXDRFDDXQVMIUVUJ2UYIQEL2L56/


[ovirt-users] Re: Host reinstall: ModuleNotFoundError: No module named 'rpmUtils'

2019-04-14 Thread Yedidyah Bar David
On Thu, Apr 11, 2019 at 6:20 PM John Florian  wrote:
>
> On 4/10/19 3:06 AM, Yedidyah Bar David wrote:
> > On Mon, Apr 8, 2019 at 1:06 AM John Florian  wrote:
> >> After mucking around trying to use jumbo MTU for my iSCSI storage nets 
> >> (which apparently I can't do because my Cisco 3560 switch only supports 
> >> 1500 max for its vlan interfaces) I got one of my Hosts screwed up.  I 
> >> likely could rebuild it from scratch but I suspect that's overkill.  I 
> >> simply tried to do a reinstall via the GUI.  That fails.  Looking at the 
> >> ovirt-host-deploy log I see several tracebacks with $SUBJECT.
> > Can you please share these logs?
> Here's an example.  Please read ahead before digging into the log though.
>
> https://paste.fedoraproject.org/paste/956Bvf2UXzSwCSjxkD0OEQ/deactivate/vyLHHPInqQ2kz2Xr5KL55V2q2deoVEgmD1hNXtjcTtQQmljFU4gms2QoydmCTTvJ
>
> >
> >>  Since Python pays my bills I figure this is an easy fix.  Except ... I 
> >> see this on the host:
> >>
> >> $ rpm -qf /usr/lib/python2.7/site-packages/rpmUtils/
> >> yum-3.4.3-161.el7.centos.noarch
> >> $ python
> >> Python 2.7.5 (default, Oct 30 2018, 23:45:53)
> >> [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
> >> Type "help", "copyright", "credits" or "license" for more information.
> >> Tab completion has been enabled.
> > import rpmUtils
> >
> >> I'm guessing this must mean the tracebacks are from Python 3
> > Probably. Do you have it installed?
> Yes, the host has both python34-3.4.9-3.el7.x86_64 and
> python36-3.6.6-5.el7.x86_64.  These are required for some of my local
> packages.

OK

> >
> > In 4.3 we default to python3 if found. This is currently broken on
> > EL7, and we decided to not fix. See also:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1688811
> >
> > This one is specifically about python34, and causes a different
> > backtrace than yours.
> Yup, but very informative just the same.  For now, I'll just remove the
> extra stuff so that the host deploy can finish.  I assume it's OK to
> have Python 3 installed for my things once the deploy is done.  Is that
> a reasonable assumption?  I mean, everything seemed to be running fine
> until I made the mess and tried using the host reinstall as a handy cleanup.

Yes, this should work. Alternatively, you can try this, on your hosts, before
adding them:

mkdir /etc/otopi.env.d
echo 'OTOPI_PYTHON=/usr/bin/python' > /etc/otopi.env.d/my-python.env

(Directory and extension are mandatory - we source *.env from there).

> >
> > Now that 4.3 is out, I don't mind reverting this decision (of
> > defaulting to python3) if it's considered premature, considering that
> > most developers probably use master branches (4.4) by now (and that
> > python3 support is still not finished :-(, although should work for
> > host-deploy on fedora).
> >
> >> since I can clearly see the module doesn't exist for either Python 3.4 or 
> >> 3.6.  So this smells like a packaging bug somehow related to upgrading 
> >> from 4.2.  I mean, I can't imagine a brand new install fails this 
> >> blatantly.  Either that or this import error has nothing to do with my 
> >> reinstall failure.
> > It's not a packaging bug. The way 'Add host' works is:
> >
> > 1. The engine creates a tarfile containing otopi + all needed
> > modules/plugins (including host-deploy) and python libraries. This is
> > cached, and you can check it if you want, at:
> > /var/cache/ovirt-engine/ovirt-host-deploy.tar .
> >
> > 2. The engine ssh'es (is that a verb?)
> I think it should be.  I use it all the time though it always sounds
> awkward.  Text became a verb.   Google became a verb.  Why's this one so
> tough?  :-\  Maybe its because it sounds like we trying to hush a crying
> baby.
> >  to the host, copies there the
> > tar file, opens it, and runs it. Then, the code in it runs. You can
> > find in engine.log the (long) command line it runs on the host via
> > ssh.
> >
> > At this point, the code that runs there still can't do anything about
> > packaging. In particular, it can't Require: any specific versions of
> > anything, etc., because it's not installed by rpm but copied from the
> > engine.
> Good to know this!  Just to make sure I read that right, you're saying
> that "host deploy code" that runs on the host is not rpm packaged, but
> when that code runs it is installing rpms.  So once it's done,
> everything that makes a host a host is via rpm, just not the "how" it
> got there.  Am I right?

Exactly.

> >
> > But this is not really relevant. If you think this is a real bug,
> > please (re)open one, and we'll think what we can do. Opinions/ideas
> > are obviously welcome :-)
> Well, it doesn't sound like a bug as much as an expectation.

You are still welcome to file a bug, then.

>  I guess
> when I color outside the lines by adding my own local packages all bets
> are off.  Still, I'm a little surprised how this one manifests since
> this kind of thing doesn't usually matter.  I'm mostly a victim of my
> age 

[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Jonathan Baecker

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker > wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had a running live-merge
operation.

Can you please submit a bug and attach the logs?


Yes, I can do that - but do you really think this is a bug? Because at that
time I had only one host running, so this was the SPM. And the
time in the log is exactly the time when the host was restarting.
But the merge jobs and snapshot deletion started ~20 hours
before.

We should investigate and see if there is a bug or not.
I reviewed the logs and saw some NPEs that might suggest that there may
be a bug here.
Please attach all the logs including the beginning of the snapshot 
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are also in full length.
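
(While the bug is investigated, the snapshot chain can usually be inspected
directly with qemu-img; a hedged sketch using the image path from the error
above:)

# Inspect the volume and its backing chain (read-only)
qemu-img info --backing-chain /rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788
# Check the same qcow2 volume for internal consistency
qemu-img check <same path as above>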





On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log in a zip included.

Here are both again, but I delete some lines to get it smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
mailto:jonba...@gmail.com>> wrote:

Hello,

I make automatic backups of my VMs, and last night
some new ones were made. But somehow oVirt could not delete the
snapshots anymore;
the log shows that it tried the whole day to delete
them but they
had to wait until the merge command was done.

In the evening the host was totally crashed and started
again. Now I can
not delete the snapshots manually and I can also not
start the VMs
anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path permission is correct and there
are also files in it.

Is there any ways to fix that? Or to prevent this issue
in the future?

In the attachment I send also the engine.log


Regards

Jonathan




___
Users mailing list -- users@ovirt.org

To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/



-- 
Regards,

Eyal Shenitzky





-- 
Regards,

Eyal Shenitzky





--
Regards,
Eyal Shenitzky



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIQBUFHJOGJ3GXZLZSBOXIO2IBWL7UU4/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Eyal Shenitzky
On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker  wrote:

> On 14.04.2019 at 12:13, Eyal Shenitzky wrote:
>
> Seems like your SPM went down while you had a running live-merge operation.
>
> Can you please submit a bug and attach the logs?
>
> Yes, I can do that - but do you really think this is a bug? Because at that time I
> had only one host running, so this was the SPM. And the time in the log is
> exactly the time when the host was restarting. But the merge jobs and
> snapshot deletion started ~20 hours before.
>
We should investigate and see if there is a bug or not.
I reviewed the logs and saw some NPEs that might suggest that there may be a
bug here.
Please attach all the logs including the beginning of the snapshot deletion.

Thanks

>
> On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker 
> wrote:
>
>> On 14.04.2019 at 07:05, Eyal Shenitzky wrote:
>>
>> Hi Jonathan,
>>
>> Can you please add the engine and VDSM logs?
>>
>> Thanks,
>>
>> Hi Eyal,
>>
>> my last message had the engine.log in a zip included.
>>
>> Here are both again, but I delete some lines to get it smaller.
>>
>>
>>
>> On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker 
>> wrote:
>>
>>> Hello,
>>>
>>> I make automatic backups of my VMs, and last night some new ones were made.
>>> But somehow oVirt could not delete the snapshots anymore;
>>> the log shows that it tried the whole day to delete them but they
>>> had to wait until the merge command was done.
>>>
>>> In the evening the host was totally crashed and started again. Now I can
>>> not delete the snapshots manually and I can also not start the VMs
>>> anymore. In the web interface I get the message:
>>>
>>> VM timetrack is down with error. Exit message: Bad volume specification
>>> {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target':
>>> '0', 'unit': '0'}, 'serial': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
>>> 'index': 0, 'iface': 'scsi', 'apparentsize': '1572864', 'specParams':
>>> {}, 'cache': 'none', 'imageID': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
>>> 'truesize': '229888', 'type': 'disk', 'domainID':
>>> '9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0', 'format': 'cow',
>>> 'poolID': '59ef3a18-002f-02d1-0220-0124', 'device': 'disk',
>>> 'path':
>>> '/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',
>>>
>>> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
>>> '47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file', 'alias':
>>> 'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.
>>>
>>> When I check the path permission is correct and there are also files in
>>> it.
>>>
>>> Is there any ways to fix that? Or to prevent this issue in the future?
>>>
>>> In the attachment I send also the engine.log
>>>
>>>
>>> Regards
>>>
>>> Jonathan
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/
>>>
>>
>>
>> --
>> Regards,
>> Eyal Shenitzky
>>
>>
>>
>
> --
> Regards,
> Eyal Shenitzky
>
>
>

-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/37Y5FXX42VJCMWVU3NPWFQBL4HH3HG6O/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Jonathan Baecker

On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had a running live-merge operation.

Can you please submit a bug and attach the logs?

Yes, I can do that - but do you really think this is a bug? Because at that time
I had only one host running, so this was the SPM. And the time in the
log is exactly the time when the host was restarting. But the merge
jobs and snapshot deletion started ~20 hours before.



On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker > wrote:


On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log included as a zip.

Here are both again, but I deleted some lines to make them smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
<jonba...@gmail.com> wrote:

Hello,

I make automatic backups of my VMs, and last night some new ones
were created. But somehow oVirt could not delete the snapshots
anymore; the log shows that it tried the whole day to delete them,
but they had to wait until the merge command was done.

In the evening the host crashed completely and restarted. Now I
cannot delete the snapshots manually, and I can also no longer
start the VMs. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad volume
specification
{'address': {'bus': '0', 'controller': '0', 'type': 'drive',
'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124', 'device':
'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1',
'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file',
'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.

When I check the path, the permissions are correct and there
are also files in it.

Is there any way to fix that, or to prevent this issue in
the future?

I am also sending the engine.log as an attachment.


Regards

Jonathan




___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/



-- 
Regards,

Eyal Shenitzky





--
Regards,
Eyal Shenitzky



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CXCNVDHJG4YUQFEIUJHVEPTWSJLZKQ5/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-14 Thread Eyal Shenitzky
It seems like your SPM went down while a Live Merge operation was running.

Can you please submit a bug and attach the logs?
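As an additional check after an interrupted merge, the on-disk qcow2 chain of
the volume from the "Bad volume specification" error can be compared with what
the engine expects. A small sketch (to be run on a host that has the storage
domain mounted; the path is copied verbatim from the error quoted below, and
qemu-img is assumed to be available on the host):

#!/usr/bin/env python
# Sketch: print the qcow2 backing chain of the volume named in the
# "Bad volume specification" error, to spot a broken link in the chain.
import subprocess

# Path copied verbatim from the error message quoted below.
VOLUME = ('/rhev/data-center/59ef3a18-002f-02d1-0220-0124/'
          '9c3f06cf-7475-448e-819b-f4f52fa7d782/images/'
          'fd3b80fd-49ad-44ac-9efd-1328300582cd/'
          '47c0f42e-8bda-4e3f-8337-870899238788')

# --backing-chain makes qemu-img walk every backing file, so a backing
# volume removed by a half-finished merge is reported here.
subprocess.check_call(['qemu-img', 'info', '--backing-chain', VOLUME])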

On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker  wrote:

> On 14.04.2019 at 07:05, Eyal Shenitzky wrote:
>
> Hi Jonathan,
>
> Can you please add the engine and VDSM logs?
>
> Thanks,
>
> Hi Eyal,
>
> my last message had the engine.log included as a zip.
>
> Here are both again, but I deleted some lines to make them smaller.
>
>
>
> On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker 
> wrote:
>
>> Hello,
>>
>> I make automatic backups of my VMs, and last night some new ones were
>> created. But somehow oVirt could not delete the snapshots anymore; the
>> log shows that it tried the whole day to delete them, but they had to
>> wait until the merge command was done.
>>
>> In the evening the host crashed completely and restarted. Now I cannot
>> delete the snapshots manually, and I can also no longer start the VMs.
>> In the web interface I get the message:
>>
>> VM timetrack is down with error. Exit message: Bad volume specification
>> {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target':
>> '0', 'unit': '0'}, 'serial': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
>> 'index': 0, 'iface': 'scsi', 'apparentsize': '1572864', 'specParams':
>> {}, 'cache': 'none', 'imageID': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
>> 'truesize': '229888', 'type': 'disk', 'domainID':
>> '9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0', 'format': 'cow',
>> 'poolID': '59ef3a18-002f-02d1-0220-0124', 'device': 'disk',
>> 'path':
>> '/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',
>>
>> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
>> '47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file', 'alias':
>> 'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.
>>
>> When I check the path, the permissions are correct and there are also
>> files in it.
>>
>> Is there any way to fix that, or to prevent this issue in the future?
>>
>> I am also sending the engine.log as an attachment.
>>
>>
>> Regards
>>
>> Jonathan
>>
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/
>>
>
>
> --
> Regards,
> Eyal Shenitzky
>
>
>

-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBSH2VG3QPDLW624FWISM55CHUGIGSWF/


[ovirt-users] Re: Bug 1666795 - Related? - VM's don't start after shutdown on FCP

2019-04-14 Thread Nardus Geldenhuys
This is fixed. It was a table in the DB that had been truncated; we fixed it 
by restoring a backup.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KFDWSRJGBRJGXKY6RT2HEHTCZWMOTYYC/


[ovirt-users] Failed to add storage domain

2019-04-14 Thread thunderlight1
Hi!
I have installed oVirt using the ISO 
ovirt-node-ng-installer-4.3.2-2019031908.el7. I then ran the Hosted Engine 
deployment through Cockpit.
I got an error when it tried to create the storage domain. It successfully 
mounted the NFS share on the host. Below is the error I got:

2019-04-14 10:40:38,967+0200 INFO ansible skipped {'status': 'SKIPPED', 
'ansible_task': u'Check storage domain free space', 'ansible_host': 
u'localhost', 'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_type': 'task'}
2019-04-14 10:40:38,967+0200 DEBUG ansible on_any args 
 kwargs
2019-04-14 10:40:39,516+0200 INFO ansible task start {'status': 'OK', 
'ansible_task': u'ovirt.hosted_engine_setup : Activate storage domain', 
'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_type': 'task'}
2019-04-14 10:40:39,516+0200 DEBUG ansible on_any args TASK: 
ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2019-04-14 10:40:41,923+0200 DEBUG var changed: host "localhost" var 
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 664, in 
main\nstorage_domains_module.post_create_check(sd_id)\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 526, in 
post_create_check\nid=storage_domain.id,\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in 
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n  
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in 
_internal_add\nreturn future.wait() if wait else future\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n 
   return self._code(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in 
callback\nself._check_fault(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in 
_check_fault\nself._raise_error(response
 , body)\n  File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", 
line 118, in _raise_error\nraise error\nError: Fault reason is \"Operation 
Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP 
response code is 400."
}"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var 
"ansible_play_hosts" type "" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var 
"play_hosts" type "" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var 
"ansible_play_batch" type "" value: "[]"
2019-04-14 10:40:41,924+0200 ERROR ansible failed {'status': 'FAILED', 
'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 
'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True, 
u\'exception\': u\'Traceback (most recent call last):\\n  File 
"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 664, in 
main\\nstorage_domains_module.post_create_check(sd_id)\\n  File 
"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 526', 
'task_duration': 2, 'ansible_host': u'localhost', 'ansible_playbook': 
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-14 10:40:41,924+0200 DEBUG ansible on_any args 
 kwargs 
ignore_errors:None
2019-04-14 10:40:41,928+0200 INFO ansible stats {
"ansible_playbook": 
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:37 Minutes",
"ansible_result": "type: \nstr: {u'localhost': {'unreachable': 
0, 'skipped': 6, 'ok': 23, 'changed': 1, 'failures': 1}}",
"ansible_type": "finish",
"status": "FAILED"
}
2019-04-14 10:40:41,928+0200 INFO SUMMARY:
Duration    Task Name

[ < 1 sec ] Execute just a specific set of steps
[  00:01  ] Force facts gathering
[  00:01  ] Check local VM dir stat
[  00:01  ] Obtain SSO token using username/password credentials
[  00:01  ] Fetch host facts
[ < 1 sec ] Fetch cluster ID
[  00:01  ] Fetch cluster facts
[  00:01  ] Fetch Datacenter facts
[ < 1 sec ] Fetch Datacenter ID
[ < 1 sec ] Fetch Datacenter name
[  00:02  ] Add NFS storage domain
[  00:01  ] Get storage domain details
[  00:01  ] Find the appliance OVF
[  00:01  ] Parse OVF
[ < 1 sec ] Get required size
[ FAILED  ] Activate storage domain
2019-04-14 10:40:41,928+0200 DEBUG ansible on_any args 
 kwargs


Any suggestions on how to fix this?
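
For reference, the step that fails is the final "Activate storage domain"
task, which the engine answers with HTTP 400. A minimal sketch with the Python
ovirtsdk4 SDK - the engine URL, credentials, CA path, and the 'hosted_storage'
and 'Default' names are placeholders for this setup - can be used to read back
the status of the new domain and retry the activation against the data center,
to narrow down whether the domain ended up unattached, locked, or in
maintenance:

#!/usr/bin/env python
# Minimal diagnostic sketch using the oVirt Python SDK (ovirtsdk4).
# Engine URL, credentials, CA file, and the domain/DC names are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    system = connection.system_service()

    # Overall status of the storage domain as the engine sees it.
    sds_service = system.storage_domains_service()
    sd = sds_service.list(search='name=hosted_storage')[0]
    print('domain status: %s' % sd.status)

    # Status of the same domain as attached to the data center, plus a
    # retry of the activation step that the playbook failed on.
    dcs_service = system.data_centers_service()
    dc = dcs_service.list(search='name=Default')[0]
    attached_sds = dcs_service.data_center_service(dc.id).storage_domains_service()
    attached_sd = attached_sds.storage_domain_service(sd.id)
    print('attached status: %s' % attached_sd.get().status)
    attached_sd.activate()
finally:
    connection.close()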
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: