[ovirt-users] Problem with multiple IP addresses and AWX

2020-04-29 Thread Bernhard Dick

Hi,

I've recently started to use AWX and I'd like to have an inventory based 
on their RHEV/oVirt plugin. Basically this works fine; however, on 
machines that have multiple network interfaces or internal networks (like 
Docker) the first address in the VM's IP list (which AWX uses as 
ansible_host) is not an address that is reachable from outside the 
machine, so those host entries are failing in my jobs.
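
One thing I'm experimenting with is overriding ansible_host in the 
inventory source itself. A minimal sketch, assuming the newer ovirt 
inventory plugin can be used directly; the "devices" attribute in the 
compose expression is an assumption about what the plugin exposes per 
NIC and needs to be verified:

$ cat ovirt-inventory.yml
plugin: ovirt.ovirt.ovirt
ovirt_url: https://engine.example.com/ovirt-engine/api
ovirt_username: admin@internal
ovirt_password: secret
compose:
  # assumption: pick the address of a known-good NIC instead of the
  # first entry of the flat IP list
  ansible_host: devices["eth0"][0]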

I found an issue at AWX about this, which was initially closed as won't fix:
https://github.com/ansible/awx/issues/1191
It also seems that there were related issues on the oVirt side:
First this one, with a request for setting a primary IP:
https://bugzilla.redhat.com/show_bug.cgi?id=1437145

And this one mentioning a way to ignore interfaces in ovirt-guest-agent, 
which however no longer works because qemu-guest-agent is used now:

https://bugzilla.redhat.com/show_bug.cgi?id=1437145

Do you have any ideas how to work around this, or is some improvement 
already being planned?


  Best regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZJ52RNFT2BQOKX67FNH2JJWFMOEYNZE/


[ovirt-users] Re: Windows server std 2019 64 bit support

2019-09-06 Thread Bernhard Dick

Hi Alex,

On 05.09.2019 at 14:23, Alex K wrote:

[...]
Have you tried to load Windows Server std 2019 64 bit successfully on 
ovirt 3.4?

I think you are talking about version 4.3, as you also mention RHEV 4.3?
I have three Windows Server 2019 64bit installations running in an oVirt 
4.3 environment (two in Core mode, one with Desktop extensions 
available) and they're running fine.


  Regards
Bernhard


Thank you.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BU4HO5QZAO3GMYFXSPDI7SWPMEHKSNZZ/


[ovirt-users] Re: Windows Server 2019 Drivers

2019-05-28 Thread Bernhard Dick

Hi Rick,

On 22.05.2019 at 18:09, racev...@lenovo.com wrote:

We have updated to oVirt 4.3.3.1 and I'm trying to create a Windows Server 2019 
VM. [...]
I've eventually tried every directory manually in those ISOs.  Can someone 
link proper documentation on how to get Windows Server 2019 to work or point to 
the correct drivers?
That sounds strange. I'm using a recent oVirt version (4.3.3.7-1.el7) 
and the installation of multiple Windows 2019 VMs works just fine.

I just tested it with the following configuration:
Disk type: virtio-scsi
BIOS-Type: Q35 chipset with UEFI
ovirt-iso: oVirt-toolsSetup_4.3-2.el7.iso

I use the driver that is located at 
"[CDROMLETTER]:\virtio\vioscsi\2k16\amd64\". That driver is listed as 
compatible and finds the disk.


  Best regards
Bernhard


Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DROBUEYCYFUZNAAWKCZIS4VKQBAFYX7U/




[ovirt-users] Re: Wrong disk size in UI after expanding iscsi direct LUN

2019-05-21 Thread Bernhard Dick

Hi Nir,

On 18.05.2019 at 20:48, Nir Soffer wrote:
On Thu, May 16, 2019 at 6:10 PM Bernhard Dick <bernh...@bdick.de> wrote:


Hi,

I've extended the size of one of my direct iSCSI LUNs. The VM sees
the new size, but the web interface still reports the old size.
Is there a way to update this information? I already took a
look into the list archives, but there are only reports about updating the
size the VM sees.


Sounds like you hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1651939


The description mentions a workaround using the REST API.

thanks, the workaround using the REST API helped.
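
For anyone finding this later, the call is along the lines of the sketch 
below. The refreshlun action name and payload are assumptions taken from 
the bug discussion and should be verified against the REST API reference 
for your version before use; the UUIDs and credentials are placeholders.

$ curl -k -u admin@internal:PASSWORD -X POST \
    -H 'Content-Type: application/xml' \
    -d '<action><host id="HOST-UUID"/></action>' \
    https://engine.example.com/ovirt-engine/api/disks/DISK-UUID/refreshlun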

  Bernhard


Nir


    Best regards
      Bernhard


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S74IJSVBYB3RVS7RBXB75XQHLPMMUPIC/


[ovirt-users] Re: Wrong disk size in UI after expanding iscsi direct LUN

2019-05-21 Thread Bernhard Dick

Hi,

On 17.05.2019 at 19:25, Scott Dickerson wrote:



On Thu, May 16, 2019 at 11:11 AM Bernhard Dick <bernh...@bdick.de> wrote:


Hi,

I've extended the size of one of my direct iSCSI LUNs. The VM sees
the new size, but the web interface still reports the old size.
Is there a way to update this information? I already took a
look into the list archives, but there are only reports about updating the
size the VM sees.


What oVirt version? Which web interface and view are you checking, the Admin 
Portal or the VM Portal?

I'm using the Admin Portal. And the version is 4.3.3.7-1.el7.

  Best Regards
Bernhard



    Best regards
      Bernhard



--
Scott Dickerson
Senior Software Engineer
RHV-M Engineering - UX Team
Red Hat, Inc

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MCY2SDB2N4RNALDMJUMSAU7GHL7MISEX/


[ovirt-users] Wrong disk size in UI after expanding iscsi direct LUN

2019-05-16 Thread Bernhard Dick

Hi,

I've extended the size of one of my direct iSCSI LUNs. The VM sees 
the new size, but the web interface still reports the old size. 
Is there a way to update this information? I already took a 
look into the list archives, but there are only reports about updating the 
size the VM sees.


  Best regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/54YHISUA66227IAMI2UVPZRIXV54BAKA/


[ovirt-users] Migrating Hosted Engine environment from untagged to LACP bonded interfaces with tagged ovirtmgmt network

2019-01-23 Thread Bernhard Dick

Hi,

I have an oVirt 4.2 environment with the engine deployed as a hosted 
engine. We are now going to change our whole setup, migrating to 
switch-side redundancy and using a different VLAN for the oVirt 
management traffic.


Currently all host and management traffic runs untagged on one 
network interface on each of our hosts, and we want to change this to be 
VLAN-tagged inside LACP bonds (each bond containing two network 
interfaces) on all hosts. While changing the configuration for the VM 
networks should be straightforward, as I can shut down all VMs during the 
migration, I'm asking how to handle this for the hosted engine VM and its 
configuration. Is there any information on how to do such a change?


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IXE5PEZFSS3KBXJK55UUWGBWCM7BRHSG/


[ovirt-users] Re: iSCSI multipath with Dell Compellent

2018-10-02 Thread Bernhard Dick

Hi,

On 02.10.2018 at 16:59, Christopher Cox wrote:
You usually have to specify that the LUN has to be accessible to 
multiple systems (e.g. like a clustered filesystem would).  It's not 
unusual for a system to default to allowing only one initiator to connect.
The problem is not that multiple initiators are unable to connect 
(that works well), but that I see the LUNs only on one of the two 
controller ports, as the storage decided that the top controller is 
currently the active one. I can log in to the second controller, but it 
does not present any LUNs to any of my servers.


Maybe I should have written active controller instead of active system.

  Regards
Bernhard


On 10/02/2018 05:51 AM, Bernhard Dick wrote:


Hi,

I'm trying to achieve iSCSI multipathing with a Dell Compellent 
SC4020 array. As the Dell array does not work as an ALUA system, it 
presents available LUNs only on the currently active controller (here is a 
good description that I found: 
https://niktips.wordpress.com/2016/05/16/dell-compellent-is-not-an-alua-storage-array/ 
). As a result I cannot add the currently inactive controller to an 
iSCSI bond (as it does not present the LUNs to oVirt), and so the path 
to the second controller will not come up. Is there any way to solve this?


   Regards
 Bernhard

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GUPFXMI2I5JJJN6JU7QLXTY6G6X25QVL/


[ovirt-users] iSCSI multipath with Dell Compellent

2018-10-02 Thread Bernhard Dick


Hi,

I'm trying to achieve iSCSI multipathing with a Dell Compellent SC4020 
array. As the Dell array does not work as an ALUA system, it presents 
available LUNs only on the currently active controller (here is a good 
description that I found: 
https://niktips.wordpress.com/2016/05/16/dell-compellent-is-not-an-alua-storage-array/ 
). As a result I cannot add the currently inactive controller to an 
iSCSI bond (as it does not present the LUNs to oVirt), and so the path to 
the second controller will not come up. Is there any way to solve this?


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTE3KM4L2762KHBMN2XJ5ZFU32M236OY/


[ovirt-users] Re: Change network interface for hosted engine iSCSI connection

2018-10-02 Thread Bernhard Dick

Hi,

On 01.10.2018 at 12:08, Simone Tiraboschi wrote:



On Mon, Oct 1, 2018 at 10:40 AM Bernhard Dick <bernh...@bdick.de> wrote:


Hi,

after I changed the network configuration of one of my hosted engine
hosts in oVirt I am no longer able to run the ha-daemon on the host.
When I run hosted-engine --connect-storage on this host I get the
following error:
vdsm.client.TimeoutError: Request StoragePool.connectStorageServer with
args {'connectionParams': [{'netIfaceName': 'ens1f1', 'port': '3260',
'connection': '192.168.1.1', 'iqn': 'iqn.2002-03.com.compellent:5000
d...', 'user': '', 'tpgt': '0', 'ifaceName': 'ens1f1', 'password': '',
'id': 'cd5fa13f-fbb7-4cc6-a094-4b280dc7b514'}], 'storagepoolID':
'----', 'domainType': 3} timed out
   after 60 seconds

ens1f1 is the network card that has been used earlier for iSCSI, now it
should connect via ens1f0. Is there a way to update this configuration
for the host?


If you create an iSCSI bond on engine side with ens1f0, the engine will 
configure the host for that and ovirt-ha-agent will honor that 
configuration.

that worked. Thank you!




    Regards
      Bernhard

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CTCQB3EWXSYR6LQMP7Z2BUZ3ZHW5ROZT/


[ovirt-users] Change network interface for hosted engine iSCSI connection

2018-10-01 Thread Bernhard Dick

Hi,

after I changed the network configuration of one of my hosted engine 
hosts in oVirt I am no longer able to run the ha-daemon on the host. 
When I run hosted-engine --connect-storage on this host I get the 
following error:
vdsm.client.TimeoutError: Request StoragePool.connectStorageServer with 
args {'connectionParams': [{'netIfaceName': 'ens1f1', 'port': '3260', 
'connection': '192.168.1.1', 'iqn': 'iqn.2002-03.com.compellent:5000
d...', 'user': '', 'tpgt': '0', 'ifaceName': 'ens1f1', 'password': '', 
'id': 'cd5fa13f-fbb7-4cc6-a094-4b280dc7b514'}], 'storagepoolID': 
'----', 'domainType': 3} timed out

 after 60 seconds

ens1f1 is the network card that has been used earlier for iSCSI, now it 
should connect via ens1f0. Is there a way to update this configuration 
for the host?


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CA2VBLGK6MORJJ3EF3Z2EGAPDIP7JL7H/


[ovirt-users] Re: Slow vm transfer speed from vmware esxi 5

2018-09-20 Thread Bernhard Dick



On 17.09.2018 at 09:53, Richard W.M. Jones wrote:

On Sun, Sep 16, 2018 at 07:30:09PM +0300, Nir Soffer wrote:

I used to disable the limit enforcing "sparse" in libguestfs upstream
source, but lately the simple check at the python plugin level was moved
to the ocaml code, and I did not have time to understand it yet.

If you want to remove the limit, try to look here:
https://github.com/libguestfs/libguestfs/blob/51a9c874d3f0a9c4780f2cd3ee7072180446e685/v2v/output_rhv_upload.ml#L163

On RHEL, there is no such limit, and you can import vms to any kind of
storage.

Richard, can we remove the limit on sparse format? I don't see how this
limit helps anyone.


We already remove it downstream in all RHEL and LP builds.  Here is
the commit which does that:

https://github.com/libguestfs/libguestfs/commit/aa5608a922bd35db28f555e53aea2308361991dd
Thanks for pointing to the commit. I was lazy and recompiled virt-v2v 
from the rhel branch now and it works fine.


Regards
  Bernhard


We could remove it upstream, but AIUI it causes conversions to break
with no easy way for users to understand what -oa modes are supported
by what backends.  To fix it properly we need a way for oVirt /
imageio / whatever to describe what modes are possible for the current
backend.


oVirt supports several combinations:

file:
- raw sparse
- raw preallocated
- qcow2 sparse (unsupported in v2v)

block:
- raw preallocated
- qcow2 sparse (unsupported in v2v)

It seems that the oVirt SDK does not have a good way to select the format
yet, so virt-v2v cannot select the format for the user. This means the
user needs to select the format.


Right.

There are two open bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1600547
https://bugzilla.redhat.com/show_bug.cgi?id=1574734

Rich.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JBF2DBOW2A3S4RRWETIWI7WKQ2ILHUFU/


[ovirt-users] Re: Proxmox - oVirt Migration

2018-09-18 Thread Bernhard Dick

Hi Leo,

On 17.09.2018 at 14:30, Leo David wrote:

Hello everyone,
I have this situation where I need to migrate about 20 VMs from Proxmox 
to oVirt.

In this case, it's about qcow2 images running on Proxmox.
Is there a recommended way and procedure for doing this?
I'm not sure if this is really recommended, but I used the 
import-to-ovirt.pl script that can be found at 
http://git.annexia.org/?p=import-to-ovirt.git to import LVM-based VMs 
from Proxmox to oVirt. Maybe that helps.
It also seems that the oVirt export domain uses qcow2 files. As you 
already have such files you might also be able to build an export 
storage domain without converting the files.
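
In case conversion is needed anyway, a rough sketch of the kind of 
invocation I mean follows; the paths are placeholders and the 
import-to-ovirt.pl arguments should be checked against the script's own 
usage text before relying on them:

$ qemu-img convert -f qcow2 -O raw vm-100-disk-1.qcow2 vm-100-disk-1.img
$ ./import-to-ovirt.pl vm-100-disk-1.img /mnt/ovirt-export-domain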


  Regards
Bernhard


Thank you very much !

--
Best regards, Leo David




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDTHCFKNQL5UENSQKPGBKP26ZFBK7AU2/


[ovirt-users] Re: Slow vm transfer speed from vmware esxi 5

2018-09-14 Thread Bernhard Dick

Hi,

It took me some time to answer due to some other stuff, but now I've had 
the time to look into it.


On 21.08.2018 at 17:02, Michal Skrivanek wrote:

[...]

Hi Bernhard,

With the latest version of the ovirt-imageio and the v2v we are 
performing quite nicely, and without specifying


the difference is that with the integrated v2v you don’t use any of 
that. It’s going through the vCenter server which is the major slowdown.
With 10MB/s I do not expect the bottleneck is on our side in any way. 
After all the integrated v2v is writing locally directly to the target 
prepared volume so it’s probably even faster than imageio.


the “new” virt-v2v -o rhv-upload method is not integrated in GUI, but 
supports VDDK and SSH methods of access which both should be faster

you could try to use that, but you’d need to use it on cmdline
I first tried the SSH way, which already improved the speed. Afterwards I 
did some more experiments and ended up using vmfs-tools to mount the 
VMware datastore directly, and I now see transfer speeds of ~50-60 MB/sec 
when transferring to an oVirt export domain. This seems to be the maximum 
the system in use can handle with the fuse-vmfs approach. That would be 
fast enough in my case (and is a huge improvement).
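
Roughly what that looks like, with placeholder device, paths and NFS 
export (the VMFS partition and the .vmx location obviously differ per 
setup, so treat this as a sketch rather than a recipe):

# mount the VMware datastore with vmfs-tools on the conversion host
$ vmfs-fuse /dev/sdX1 /mnt/vmfs
# convert straight from the mounted datastore into an export storage domain
$ virt-v2v -i vmx /mnt/vmfs/myvm/myvm.vmx \
    -o rhv -os nfs.example.com:/export/ovirt-export -of raw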


However, I cannot use the rhv-upload method because my storage domain is 
iSCSI and I get the error that sparse file types are not allowed (as 
described at https://bugzilla.redhat.com/show_bug.cgi?id=1600547 
). The workaround from the bug does not help either, because then I 
immediately get the error message that I'd need to use -oa sparse when 
using rhv-upload. This happens with the development version 1.39.9 of 
libguestfs and with the git master branch. Do you have some advice how 
to fix this or which version to use?


  Regards
Bernhard

https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help 
to use it a bit more nicely


Thanks,
michal

number I can tell you that weakest link is the read rate from the 
vmware data store. In our lab
I can say that we roughly peak at ~40 MiB/sec reading a single vm and the 
rest of our components(after the read from vmds)
have no problem dealing with that - i.e buffering -> converting -> 
writing to imageio -> writing to storage


So, in short, examine the read-rate from vm datastore, let us know, 
and please specify the versions you are using.







--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de

jabber: bernh...@jabber.bdick.de

Tel : +49.2812068620
Mobil : +49.1747607927
FAX : +49.2812068621
USt-IdNr.: DE274728845
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CSIMWXZL744WEMIBPRWNZHLQLYCYCMHZ/


[ovirt-users] Re: Slow vm transfer speed from vmware esxi 5

2018-08-22 Thread Bernhard Dick

Hi,

On 21.08.2018 at 17:02, Michal Skrivanek wrote:

[...]

Hi Bernhard,

With the latest version of the ovirt-imageio and the v2v we are 
performing quite nicely, and without specifying


the difference is that with the integrated v2v you don’t use any of 
that. It’s going through the vCenter server which is the major slowdown.
With 10MB/s I do not expect the bottleneck is on our side in any way. 
After all the integrated v2v is writing locally directly to the target 
prepared volume so it’s probably even faster than imageio.

VMware as the bottleneck was also my idea. I just had the time to take a 
look into it: locally the oVirt and the VMware hosts can access their 
storage at about 100 MB/sec. I also see that on the receiving side the 
Linux I/O writes in 100 MB/sec bursts and then waits some time before a 
new write happens.


the “new” virt-v2v -o rhv-upload method is not integrated in GUI, but 
supports VDDK and SSH methods of access which both should be faster

you could try to use that, but you’d need to use it on cmdline
https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help 
to use it a bit more nicely

Thanks, I will take a look into it.

  Regards
Bernhard


Thanks,
michal

number I can tell you that weakest link is the read rate from the 
vmware data store. In our lab
I can say that we roughly peak at ~40 MiB/sec reading a single vm and the 
rest of our components(after the read from vmds)
have no problem dealing with that - i.e buffering -> converting -> 
writing to imageio -> writing to storage


So, in short, examine the read-rate from vm datastore, let us know, 
and please specify the versions you are using.





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNPHYCN37NFT3YOS5CAHWYIV3QSTQ363/


[ovirt-users] Slow vm transfer speed from vmware esxi 5

2018-08-18 Thread Bernhard Dick

Hi,

Currently I'm trying to move VMs from our vSphere 5 environment to 
oVirt. While the I/O performance on oVirt and on the ESXi platform is 
quite good (about 100 MByte/sec on a 1 GBit storage link), the transfer 
speed using the integrated v2v feature is very slow (only 10 MByte/sec). 
That would result in a transfer time of >24h for some machines.

Do you have any ideas how I can improve the transfer speed?

  Regards
    Bernhard Dick
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQZPRX4M7V74FSYIY5LRUPC46CCJ2DCR/


[ovirt-users] Re: Cant create ISCSI data domain

2018-07-20 Thread Bernhard Dick

Hi Robert,

On 20.07.2018 at 16:40, teh...@take3.ro wrote:

[...]
Discovery of the iSCSI target is working, but after logging in to the target no LUNs 
are found or displayed, so I can't finish the procedure.

On the SPM I see an iSCSI session, and LUN 1 is attached as /dev/sdc.

The iSCSI system is a FreeNAS installation and I'm running oVirt 4.2.
Are you running a hyperconverged Gluster setup as the base for your oVirt 
installation? Maybe you're running into the same problem I had last 
month, which can be found in the archive here:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/45IIDJUVOCNLBUEKH5LOHM2DK6BYD44D/#45IIDJUVOCNLBUEKH5LOHM2DK6BYD44D

That was due to a multipath configuration blacklisting everything.

  Regards
Bernhard


Any suggestion how to solve this problem?

Regards, Robert

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ELWLMF6ZROHZRRMY3YU7P7V6FXIAZBK3/


[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-25 Thread Bernhard Dick

Hi,

On 22.06.2018 at 18:12, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick <bernh...@bdick.de> wrote:


On 22.06.2018 at 17:38, Nir Soffer wrote:
 > On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernh...@bdick.de> wrote:
[...]

Is sdc your LUN?

here sdc is from the storage, sdd is from the linux based target.

 > multipath -ll
No Output


You don't have any multipath devices. oVirt block storage is using
only multipath devices. This means that you will not see any devices
on the engine side.

 > cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5


You are mixing several versions here. Is this a 1.3 or a 1.5 file?
Hm, I didn't touch the file. Maybe something went weird during update 
procedures.




# This file is managed by vdsm.
# [...]
defaults {
      # [...]
      polling_interval            5
      # [...]
      no_path_retry               4


According to this this is a 1.5 version.

      # [...]
      user_friendly_names         no
      # [...]
      flush_on_last_del           yes
      # [...]
      fast_io_fail_tmo            5
      # [...]
      dev_loss_tmo                30
      # [...]
      max_fds                     4096
}
# Remove devices entries when overrides section is available.
devices {
      device {
          # [...]
          all_devs                yes
          no_path_retry           4
      }
}
# [...]
# inserted by blacklist_all_disks.sh

blacklist {
          devnode "*"
}


This is your issue - why do you blacklist all devices?

By the lsblk output I think you are running a hyperconverged setup, which
wrongly disabled multipath for all devices instead of only for the local
devices used by gluster.

To fix this:

1. Remove the wrong multipath blacklist
2. Find the WWID of the local devices used by gluster
    these are /dev/sda and /dev/sdb
3. Add blacklist for these specific devices using

blacklist {
     wwid XXX-YYY
     wwid YYY-ZZZ
}

With this you should be able to access all LUNs from the storage
server (assuming you configured the storage so the host can see them).
Finally, it is recommended to use a drop-in configuration file for
local changes, and *never* touch /etc/multipath.conf, so vdsm is
able to manage this file.

This is done by putting your changes in:
/etc/multipath/conf.d/local.conf

Example:

$ cat /etc/multipath/conf.d/local.conf
# Local multipath configuration for host XXX
# blacklist boot device and device used for gluster storage.
blacklist {
     wwid XXX-YYY
     wwid YYY-ZZZ
}
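
To find those WWIDs, something like this should work on each host (the 
scsi_id path is the usual EL7 location; the output is of course host 
specific):

$ /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
$ /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb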

You probably want to backup these files and have a script to
deploy them to the hosts if you need to restore the setup.

Once you have a proper drop-in configuration, you can use
the standard vdsm multipath configuration by removing the line

# VDSM PRIVATE

And running:

     vdsm-tool configure --force --module multipath

That solved it. Blacklisting the local drives, however, does not really 
seem to work. I assume that is because the local drives are virtio 
storage drives in my case (as it is a testing environment based on 
virtual hosts) and they have type 0x80 WWIDs of the form "0QEMU 
QEMU HARDDISK   drive-scsi1".
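
A variant I still want to try (untested on my side; the vendor/product 
strings below are what these QEMU disks report in the WWID above and 
should be double-checked with "multipathd show config" on the host) is 
blacklisting them by vendor/product instead of by WWID in the drop-in file:

$ cat /etc/multipath/conf.d/local.conf
# blacklist the local QEMU disks used for the OS and the gluster bricks
blacklist {
    device {
        vendor "QEMU"
        product "QEMU HARDDISK"
    }
}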


Thanks for your help!

  Regards
Bernhard


In EL7.6 we expect to have a fix for this issue, automatically
blacklisting local devices.
See https://bugzilla.redhat.com/1593459


 > vdsm-client Host getDeviceList
[]


Expected in this configuration.



 > Nir
 >
 >     When I logon to the ovirt hosts I see that they are connected with the
 >     target LUNs (dmesg is telling that there are iscsi devices being found
 >     and they are getting assigned to devices in /dev/sdX). Writing and
 >     reading from the devices (also across hosts) works. Do you have some
 >     advice how to troubleshoot this?
 >
 >         Regards
 >           Bernhard


-- 
Dipl.-Inf. Bernhard Dick

Auf dem Ang

[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Bernhard Dick

On 22.06.2018 at 17:38, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernh...@bdick.de> wrote:

I've a problem creating an iSCSI storage domain. My hosts are running
the current ovirt 4.2 engine-ng 



What is engine-ng?

sorry, I mixed it up. It is ovirt node-ng.



version. I can detect and login to the
iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
That happens with our storage and with a linux based iSCSI target which
I created for testing purposes.


A Linux based iSCSI target works fine; we use it a lot in our testing
environment.

Can you share the output of these commands on the the host connected
to the storage server?

lsblk
NAME MAJ:MIN RM  SIZE RO 
TYPE MOUNTPOINT
sda8:00   64G  0 
disk
sda1 8:101G  0 
part /boot

sda2 8:20   63G  0 part
  onn-pool00_tmeta 253:001G  0 
lvm

   onn-pool00-tpool   253:20   44G  0 lvm
 onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:30   17G  0 
lvm  /

 onn-pool00   253:12   0   44G  0 lvm
 onn-var_log_audit253:13   02G  0 
lvm  /var/log/audit
 onn-var_log  253:14   08G  0 
lvm  /var/log
 onn-var  253:15   0   15G  0 
lvm  /var
 onn-tmp  253:16   01G  0 
lvm  /tmp
 onn-home 253:17   01G  0 
lvm  /home

 onn-root 253:18   0   17G  0 lvm
 onn-ovirt--node--ng--4.2.2--0.20180430.0+1   253:19   0   17G  0 lvm
 onn-var_crash253:20   0   10G  0 lvm
  onn-pool00_tdata 253:10   44G  0 
lvm

   onn-pool00-tpool   253:20   44G  0 lvm
 onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:30   17G  0 
lvm  /

 onn-pool00   253:12   0   44G  0 lvm
 onn-var_log_audit253:13   02G  0 
lvm  /var/log/audit
 onn-var_log  253:14   08G  0 
lvm  /var/log
 onn-var  253:15   0   15G  0 
lvm  /var
 onn-tmp  253:16   01G  0 
lvm  /tmp
 onn-home 253:17   01G  0 
lvm  /home

 onn-root 253:18   0   17G  0 lvm
 onn-ovirt--node--ng--4.2.2--0.20180430.0+1   253:19   0   17G  0 lvm
 onn-var_crash253:20   0   10G  0 lvm
  onn-swap 253:40  6.4G  0 
lvm  [SWAP]
sdb8:16   0  256G  0 
disk

gluster_vg_sdb-gluster_thinpool_sdb_tmeta  253:501G  0 lvm
 gluster_vg_sdb-gluster_thinpool_sdb-tpool253:70  129G  0 lvm
   gluster_vg_sdb-gluster_thinpool_sdb253:80  129G  0 lvm
   gluster_vg_sdb-gluster_lv_data 253:10   0   64G  0 
lvm  /gluster_bricks/data
   gluster_vg_sdb-gluster_lv_vmstore  253:11   0   64G  0 
lvm  /gluster_bricks/vmstore

gluster_vg_sdb-gluster_thinpool_sdb_tdata  253:60  129G  0 lvm
 gluster_vg_sdb-gluster_thinpool_sdb-tpool253:70  129G  0 lvm
   gluster_vg_sdb-gluster_thinpool_sdb253:80  129G  0 lvm
   gluster_vg_sdb-gluster_lv_data 253:10   0   64G  0 
lvm  /gluster_bricks/data
   gluster_vg_sdb-gluster_lv_vmstore  253:11   0   64G  0 
lvm  /gluster_bricks/vmstore
gluster_vg_sdb-gluster_lv_engine   253:90  100G  0 
lvm  /gluster_bricks/engine
sdc8:32   0  500G  0 
disk
sdd8:48   01G  0 
disk
sr0   11:01  1.1G  0 
rom


here sdc is from the storage, sdd is from the linux based target.


multipath -ll

No Output

cat /etc/multipath.conf

# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5

# This file is managed by vdsm.
# [...]
defaults {
# [...]
polling_interval5
# [...]
no_path_retry   4
# [...]
user_friendly_names no
# [...]
flush_on_last_del   yes
# [...]
fast_io_fail_tmo5
# [...]
dev_loss_tmo30
# [...]
max_fds 4096
}
# Remove devices entries when overrides section is available.
devices {
device {
# [...]
all_devs   

[ovirt-users] unable to create iSCSI storage domain

2018-06-22 Thread Bernhard Dick

Hi,

I've a problem creating an iSCSI storage domain. My hosts are running 
the current ovirt 4.2 engine-ng version. I can detect and log in to the 
iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
That happens with our storage and with a Linux based iSCSI target which 
I created for testing purposes.
When I log on to the oVirt hosts I see that they are connected to the 
target LUNs (dmesg reports iSCSI devices being found and they get 
assigned to devices in /dev/sdX). Writing to and reading from the 
devices (also across hosts) works. Do you have some advice how to 
troubleshoot this?


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45IIDJUVOCNLBUEKH5LOHM2DK6BYD44D/


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-06 Thread Bernhard Dick

Hi,

On 05.06.2018 at 22:11, Nir Soffer wrote:
On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi <stefa...@gmail.com> wrote:


On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
 > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick <bernh...@bdick.de> wrote:
 >
 > > Hi,
 > >
 > > I found the reason for my timeout problems: It is the version
of librbd1
 > > (which is 0.94.5) in conjunction with my CEPH test-environment
which is
 > > running the luminous release.
 > > When I install the librbd1 (and librados2) packages from the
 > > centos-ceph-luminous repository (version 12.2.5) I'm able to
start and
 > > migrate VMs inbetween the hosts.
 > >
 >
 > vdsm does not require librbd since qemu brings this dependency,
and vdsm
 > does not access ceph directly yet.
 >
 > Maybe qemu should require newer version of librbd?

Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.

Have you filed a bug against Ceph on the distro you are using?


Thanks for clearing this up Stefan.

Bernhard, can you give more info about your Linux version and
installed packages (e.g. qemu-*)?
Sure. I have two test systems. The first is running a stock oVirt Node 
4.3 which states "CentOS Linux release 7.5.1804 (Core)" as its version 
string. The qemu and ceph packages are:

Name: qemu-img-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: qemu-kvm-common-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: qemu-kvm-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: librados2
Arch: x86_64
Epoch   : 1
Version : 0.94.5
Release : 2.el7

Name: librbd1
Arch: x86_64
Epoch   : 1
Version : 0.94.5
Release : 2.el7

The CentOS 7 system is a CentOS minimal installation with the following 
repos enabled:

CentOS-7 - Base
CentOS-7 - Updates
CentOS-7 - Extras
ovirt-4.2-epel
ovirt-4.2-centos-gluster123
ovirt-4.2-virtio-win-latest
ovirt-4.2-centos-qemu-ev
ovirt-4.2-centos-opstools
centos-sclo-rh-release
ovirt-4.2-centos-ovirt42
ovirt-4.2

The version numbers for the qemu packages are the same as above as 
they're from the ovirt-4.2-centos-qemu-ev repository. Also the version 
numbers for librados2 and librbd1 match, while they're from the 
centos-base (instead of ovirt-base) repository.


When I activate the centos-ceph-luminous repository, librbd1 and 
librados2 get upgraded to the following versions (leaving the qemu 
packages untouched, which is as expected):

Name: librados2
Arch: x86_64
Epoch   : 2
Version : 12.2.5
Release : 0.el7

Name: librbd1
Arch: x86_64
Epoch   : 2
Version : 12.2.5
Release : 0.el7
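
For reference, activating that repository on CentOS 7 should just be the 
following (the release package name is the one from CentOS extras; treat 
this as a sketch of the steps rather than a verified recipe):

$ yum install centos-release-ceph-luminous
$ yum update librbd1 librados2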

So from my perspective, on the oVirt side some thought should be given to 
shipping a more recent Ceph library version in the oVirt Node image, as 
adding extra repositories there is not the most common task (and I'm not 
sure whether that might break the image-based upgrade path).


I will go for CentOS based hosts in my case, as I'm a bit more flexible 
then, so at least for me there is no real need to get the above 
implemented quickly :-)


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HOP7SLMVCTXNQCACKIOYYENRTLOVNFQ3/


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-06 Thread Bernhard Dick

Hi,

On 01.06.2018 at 14:54, Stefan Hajnoczi wrote:

On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:

On Thu, May 31, 2018 at 1:55 AM Bernhard Dick  wrote:


Hi,

I found the reason for my timeout problems: It is the version of librbd1
(which is 0.94.5) in conjunction with my CEPH test-environment which is
running the luminous release.
When I install the librbd1 (and librados2) packages from the
centos-ceph-luminous repository (version 12.2.5) I'm able to start and
migrate VMs inbetween the hosts.



vdsm does not require librbd since qemu brings this dependency, and vdsm
does not access ceph directly yet.

Maybe qemu should require newer version of librbd?


Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.

Have you filed a bug against Ceph on the distro you are using?
At least I didn't file a bug, as I'm not sure whether this is even 
desired, and whether supplying a newer version of librbd within the base 
repository leads to problems with older clusters. The 0.94.5 version is 
from the base repository (on CentOS 7 and oVirt Node 4.3).


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42SSHPXBXIWVRQGLIBSC3BI3DCP63O45/


[ovirt-users] Re: Debugging ceph access

2018-05-30 Thread Bernhard Dick

Hi,

I found the reason for my timeout problems: it is the version of librbd1 
(which is 0.94.5) in conjunction with my Ceph test environment, which is 
running the Luminous release.
When I install the librbd1 (and librados2) packages from the 
centos-ceph-luminous repository (version 12.2.5) I'm able to start and 
migrate VMs between the hosts.


  Regards
Bernhard

On 25.05.2018 at 17:08, Bernhard Dick wrote:

Hi,

as you might already know I try to use ceph with openstack in an oVirt 
test environment. I'm able to create and remove volumes. But if I try to 
run a VM which contains a ceph volume it is in the "Wait for launch" 
state for a very long time. Then it gets into "down" state again. The 
qemu log states


2018-05-25T15:03:41.100401Z qemu-kvm: -drive 
file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads: 
error connecting: Connection timed out


2018-05-25 15:03:41.109+: shutting down, reason=failed

On the monitor hosts I see traffic with the ceph-mon-port, but not on 
other ports (the osds for example). In the ceph logs however I don't 
really see what happens.

Do you have some tips how to debug this problem?

   Regards
     Bernhard

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6ODADRIIYRJPSSX23ITWLNQLX3ER3Q4/


[ovirt-users] Debugging ceph access

2018-05-25 Thread Bernhard Dick

Hi,

as you might already know, I'm trying to use Ceph with OpenStack in an 
oVirt test environment. I'm able to create and remove volumes. But if I 
try to run a VM which contains a Ceph volume, it stays in the "Wait for 
launch" state for a very long time. Then it goes into the "Down" state 
again. The qemu log states:


2018-05-25T15:03:41.100401Z qemu-kvm: -drive 
file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads: 
error connecting: Connection timed out


2018-05-25 15:03:41.109+: shutting down, reason=failed

On the monitor hosts I see traffic on the ceph-mon port, but not on 
other ports (the OSDs for example). In the Ceph logs, however, I don't 
really see what happens.

Do you have some tips how to debug this problem?
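
For completeness, the kind of checks I can run directly on a host are 
sketched below; the cinder id, keyring path and monitor address are 
placeholders matching the setup above and may differ in other 
environments:

$ rpm -q librbd1 librados2 qemu-kvm-ev
$ ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring -m mon0:6789
$ rbd ls rbd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring -m mon0:6789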

  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Support for openstack keystone v3 API?

2018-05-24 Thread Bernhard Dick



On 24.05.2018 at 16:15, Nir Soffer wrote:
[...]
Basically Ceph support via Cinder is not fully supported. Even if you get
the API to work, we don't support a lot of operations, like moving disks
between Ceph and other storage types. This should be improved in 4.3, but
we don't have concrete plans yet.
OK. It also seems that it is not possible to create a data center that 
only contains Ceph storage, as this is of type "Volume" instead of 
"Data", am I correct?


You can also try other ways to integrate Ceph, like cephfs or Ceph iSCSI 
gateway.
These options are fully supported as they use the existing file and 
block based capabilities.
Using the iSCSI gateway from Ceph also seemed an interesting alternative 
(as it also removes the need for an OpenStack environment just for 
adding the Ceph based storage part in my case). However, at least to me 
it seemed that the OpenStack integration had been pushed more as "the 
way to go".


  Regards
Bernhard


Nir

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Support for openstack keystone v3 API?

2018-05-24 Thread Bernhard Dick

Hi,

I wanted to try Ceph with oVirt, so I've installed an OpenStack 
Queens (current stable) environment containing Cinder and Keystone. But 
when I tried to add the storage provider I ended up with 404 errors in 
the engine log files.
They occur because oVirt tries to use the v2.0 API, and 
http://HOST:5000/v2.0/tokens does not exist.
If I understand the OpenStack release notes correctly, the section 
https://blueprints.launchpad.net/keystone/+spec/removed-as-of-queens 
covers the removal of the whole v2.0 API from Keystone. So I'm asking 
whether support for the v3 API is in sight?
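
A quick way to see what the engine is hitting (a sketch; HOST is a 
placeholder): the unversioned Keystone root lists the available API 
versions, and on Queens only v3 shows up there:

$ curl -si http://HOST:5000/v2.0/tokens | head -1   # returns 404 on Queens
$ curl -s  http://HOST:5000/ | python -m json.tool  # lists only the v3 API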


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ovirt engine frequently rebooting/changing host

2018-05-17 Thread Bernhard Dick

Hi,

On 17.05.2018 at 07:30, Yedidyah Bar David wrote:

On Wed, May 16, 2018 at 5:38 PM, Bernhard Dick <bernh...@bdick.de> wrote:

Hi,

On 07.05.2018 at 11:23, Yedidyah Bar David wrote:


[...]


It seems to work quite well, but after some hours I get many status
update
mails from the ovirt engine which are either going to EngineStop or
EngineForceStop. Sometimes the host where the engine runs is switched.
After some of those reboots there is silence for some hours before it is
starting over. Can you tell me where I should look at to fix that
problem?



You can check, on all hosts, /var/log/ovirt-hosted-engine-ha/* .


thanks, that helped. Our gateway does not always respond to ping requests, so
I changed the penalty score accordingly.


How? In the code?
I changed the value for "gateway-score-penalty" in 
/etc/ovirt-hosted-engine-ha/agent.conf.
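
For reference, the change amounts to a single line in that file; the 
key=value form and the value shown here are just how I understand the 
file, so check the existing entry (and its default) before editing:

$ grep gateway-score-penalty /etc/ovirt-hosted-engine-ha/agent.conf
gateway-score-penalty=400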


  Regards
Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ovirt engine frequently rebooting/changing host

2018-05-16 Thread Bernhard Dick

Hi,

On 07.05.2018 at 11:23, Yedidyah Bar David wrote:

[...]

It seems to work quite well, but after some hours I get many status update
mails from the ovirt engine which are either going to EngineStop or
EngineForceStop. Sometimes the host where the engine runs is switched.
After some of those reboots there is silence for some hours before it is
starting over. Can you tell me where I should look at to fix that problem?


You can check, on all hosts, /var/log/ovirt-hosted-engine-ha/* .
thanks, that helped. Our gateway does not always respond to 
ping requests, so I changed the penalty score accordingly. It has now been 
running stable for almost one week.


  Regards
Bernhard


Good luck,


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] ovirt engine frequently rebooting/changing host

2018-05-07 Thread Bernhard Dick

Hi,

currently I'm evaluating oVirt and I have three hosts installed within 
nested KVM. They're sharing a Gluster environment which has been 
configured using the oVirt Node wizards.
It seems to work quite well, but after some hours I get many status 
update mails from the oVirt engine which are either going to EngineStop 
or EngineForceStop. Sometimes the host where the engine runs is 
switched. After some of those reboots there is silence for some hours 
before it starts over. Can you tell me where I should look to 
fix that problem?


  Regards
Bernhard Dick
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users