[ovirt-users] Re: oVirt 4.2.4: Enable only strong ciphers/Disable TLS versions < 1.2

2018-06-27 Thread Nir Soffer
On Tue, Jun 26, 2018 at 5:52 PM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> i decided to update my test environment (4.2.2) today and noticed oVirt
> 4.2.4 is out ;-)
>
> i have some dumb questions concerning
> - BZ 1582527 Enable only strong ciphers from engine to VDSM
> communication for hosts in cluster level >= 4.2
> - BZ 1577593 Disable TLS versions < 1.2 for hosts with cluster level >= 4.1
>
> Is simply updating a host from 4.2.2 to 4.2.4 enough to apply the
> changes mentioned above?
>

Updating is enough, no reinstall is needed.

Piotr, do we need any additional configuration on the host?

Nir
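
As a side note, one quick way to verify from another machine that a host no
longer accepts the old TLS versions is to probe the vdsm port (54321) with
openssl. This is only a sketch and the host name is a placeholder:

  # expected to fail once TLS < 1.2 is disabled on the host
  openssl s_client -connect myhost.example.com:54321 -tls1_1 < /dev/null
  # expected to still complete a handshake
  openssl s_client -connect myhost.example.com:54321 -tls1_2 < /dev/null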


> Or do i have to reinstall hosts in addition to upgrading? Before or
> after the upgrade?
>
> My cluster was on cluster level 4.2 when i started.
> My hosts are type: Enterprise Linux (CentOS)
>
> thx
> matthias
>


[ovirt-users] Re: oVirt 4.2 max limits

2018-06-27 Thread Nir Soffer
On Tue, Jun 26, 2018 at 8:45 PM Simon Coter  wrote:

> I’m looking for documentation reference on oVirt max-limits, like max
> number of guest on single compute-node, max number of LUNs into a storage
> domain or others.
>

There is no limit to the number of LUNs you can add to a storage domain,
but I don't think it is a good idea, or very useful, to have many LUNs in a
storage domain, since the number of volumes in a storage domain is limited
to about 1947.

The number of LUNs in a system is limited by the kernel; I think the limit
is around 16,000.

Why do you care about the maximum number of LUNs?

Nir
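
As a rough reference, you can see how close a block storage domain already is
to the ~1947 volume limit by counting the LVs in its volume group. A sketch
only; the VG name equals the storage domain UUID and is a placeholder here:

  # count logical volumes (oVirt volumes plus a few internal LVs) in the domain VG
  lvs --noheadings <storage-domain-uuid> | wc -l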


> Is there a reference on oVirt documentation ?
> Actually the only thing I found is related to RHEV and memory (
> https://access.redhat.com/articles/906543)
> Thanks for the help.
>
> Simon


[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-27 Thread Nir Soffer
On Wed, Jun 27, 2018 at 4:21 PM Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> Hi all,
> after update vdsm I run this command:
>
> *[root@infm-vm04 ~]# vdsm-tool -v check-volume-leases*
> *WARNING: Make sure there are no running storage operations.*
>
> *Do you want to check volume leases? [yes,NO] yes*
>
> *Checking active storage domains. This can take several minutes, please
> wait.*
>
> After that I saw some volumes with issue:
>
> *The following volume leases need repair:*
>
> *- domain: 47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5*
>
> *  - image: 4b2a6552-847c-43f4-a180-b037d0b93a30*
> *- volume: 6eb8caf0-b120-40e5-86b5-405f15d1245a*
> *  - image: 871196b2-9d8b-422f-9a3e-be54e100dc5c*
> *- volume: 861ff7dd-a01c-47f7-8a01-95bc766c2607*
> *  - image: 267b8b8c-da09-44ef-81b3-065dfa2e7085*
> *- volume: d5f3158a-87ac-4c02-84ba-fcb86b8688a0*
> *  - image: c5611862-6504-445e-a6c8-f1e1a95b5df7*
> *- volume: e156ac2e-09ac-4e1e-a139-17fa374a96d4*
> *  - image: e765c9c4-2ef9-4c8f-a573-8cd2b0a0a3a2*
> *- volume: 47d81cbe-598a-402a-9795-1d046c45b7b1*
> *  - image: ab88a08c-910d-44dd-bdf8-8242001ba527*
> *- volume: 86c72239-525f-4b0b-9aa6-763fc71340bc*
> *  - image: 5e8c4620-b6a5-4fc6-a5bb-f209173d186c*
> *- volume: 0f52831b-ec35-4140-9a8c-fa24c2647f17*
> *  - image: a67809fc-b830-4ea3-af66-b5b9285b4924*
> *- volume: 26c6bdd7-1382-4e8e-addc-dcd3898b317f*
>
> *Do you want to repair the leases? [yes,NO] *
>
> what happens if I try to repair them? Is there any impact on
> my running VMs?
>

This is safe.

Each of these volumes has a certain area on storage, reserved
for the sanlock lease for this volume.

The name of the volume, e.g. "6eb8caf0-b120-40e5-86b5-405f15d1245a",
must appear in that storage area. Sanlock will not allow acquiring
the lease if the name on storage does not match the name asked for by
the caller.

Repairing the leases will correct the name of the lease, and future
operations that acquire the volume lease (e.g. merging snapshots)
will succeed.

Nir
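
For illustration, the lease area can also be inspected directly with sanlock.
A minimal sketch for a file-based storage domain, where each volume has a
<volume-id>.lease file next to it; all path components are placeholders:

  # dump the on-disk lease; the resource name shown should match the volume UUID
  sanlock direct dump /rhev/data-center/mnt/<server:_export>/<sd-id>/images/<img-id>/<vol-id>.lease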


>
> Thanks a lot !!!
> Best Regards
> Enrico
>
>
>
>
>
> On 26/06/2018 15:32, Ala Hino wrote:
>
> You are running vdsm-4.20.17, and the tool introduced in vdsm-4.20.24.
> You will have to upgrade vdsm to be able to use the tool.
>
> On Tue, Jun 26, 2018 at 4:29 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> Hi,
>> I run this command from my SPM , Centos 7.4.1708:
>>
>> [root@infn-vm05 vdsm]# rpm -qa | grep -i vdsm
>> vdsm-hook-vmfex-dev-4.20.17-1.el7.centos.noarch
>> vdsm-python-4.20.17-1.el7.centos.noarch
>> vdsm-hook-fcoe-4.20.17-1.el7.centos.noarch
>> vdsm-common-4.20.17-1.el7.centos.noarch
>> vdsm-jsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-hook-ethtool-options-4.20.17-1.el7.centos.noarch
>> vdsm-hook-openstacknet-4.20.17-1.el7.centos.noarch
>> vdsm-http-4.20.17-1.el7.centos.noarch
>> vdsm-client-4.20.17-1.el7.centos.noarch
>> vdsm-gluster-4.20.17-1.el7.centos.noarch
>> vdsm-hook-vfio-mdev-4.20.17-1.el7.centos.noarch
>> vdsm-api-4.20.17-1.el7.centos.noarch
>> vdsm-network-4.20.17-1.el7.centos.x86_64
>> vdsm-yajsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-4.20.17-1.el7.centos.x86_64
>> vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch
>>
>> Thanks !!!
>> Enrico
>>
>>
>> On 26/06/2018 15:21, Ala Hino wrote:
>>
>> Hi Enrico,
>>
>> What's the vdsm version that you are using?
>>
>> The tool introduced in vdsm 4.20.24.
>>
>> On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>> Dear Ala,
>>> if you have a few minutes for me I'd like to ask you to read my issue.
>>> It's a strange problem because my VM works fine but I can't delete its
>>> snapshot.
>>> Thanks a lot
>>> Best Regards
>>> Enrico
>>>
>>>
>>>  Forwarded Message 
>>> Subject: [ovirt-users] Re: Cannot acquire Lock  snapshot error
>>> Date: Mon, 25 Jun 2018 14:20:21 +0200
>>> From: Enrico Becchetti 
>>> 
>>> To: Nir Soffer  
>>> CC: users  
>>>
>>>
>>>  Dear Friends ,
>>> to fix my problem I've tried the vdsm-tool command but it seems there is an error:
>>>
>>> [root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
>>> Usage: /usr/bin/vdsm-tool [options]  [arguments]
>>> Valid options:
>>> ..
>>>
>>> as you can see there is no such option, and my oVirt engine is
>>> already at 4.2.
>>> Any other ideas ?
>>> Thanks a lot !
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> On 22/06/2018 17:46, Nir Soffer wrote:
>>>
>>> On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti <
>>> enrico.becche...@pg.infn.it> wrote:
>>>
  Dear All,
 my ovirt 4.2.1.7-1.el7.centos has three hypervisors, lvm storage and
 a virtual machine with
 ovirt-engine. All works fine but with one vm when I try to remove its
 snapshot I have
 this error:

 2018-06-22 07:35:48,155+0200 INFO  (jsonrpc/5) [vdsm.api] START
 prepareMerge(spUUID=u'18d57688-6ed4-43b8-bd7c-0665b55950b7',
 subchainInfo={u'img_id': u'c5611862-6504-445e-a6c8-f1e1a95b5df7', u'sd_id':
 

[ovirt-users] Re: Unable to do Live Migration

2018-06-27 Thread Nir Soffer
On Tue, Jun 19, 2018 at 11:39 AM  wrote:

> Hello,
>
> I'm using a 4-node oVirt 4.1 cluster. After updating the Engine to 4.2.3.8
> (and CentOS 7.5), I tried to update the nodes too.
> Unfortunately, live migration is not working (any more). Maybe it's
> related to following error found in engine.log
>
> 2018-06-19 09:32:53,469+02 WARN
> [org.ovirt.engine.core.bll.provider.network.openstack.CustomizedRESTEasyConnector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-7) [4d6a7dc4] Cannot
> register external providers trust store: java.io.IOException: Keystore was
> tampered with, or password was incorrect
> 2018-06-19 09:32:53,475+02 ERROR
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-7) [4d6a7dc4] Command
> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
> failed: EngineException: (Failed with error unable to find valid
> certification path to requested target and code 5050)
>
>
Didi, do you have an idea what the issue could be?
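
(For reference, the keystore error above can sometimes be narrowed down by
listing the external providers truststore with keytool. This is only a sketch,
and the truststore path is an assumption that may differ between setups:

  # list the certificates in the engine's external providers truststore
  keytool -list -keystore /var/lib/ovirt-engine/external_truststore

If the file is corrupt, or the stored password no longer matches, keytool
itself fails with the same "Keystore was tampered with, or password was
incorrect" message.)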


> I'm using the selfsigned CA provided with oVirt.
>
> Nodes (source and destination)
> OS-Version:RHEL - 7 - 4.1708.el7.centos
> Kernelversion: 3.10.0 - 693.21.1.el7.x86_64
> KVM-Version:2.9.0 - 16.el7_4.14.1
> LIBVIRT-Version:libvirt-3.2.0-14.el7_4.9
> VDSM-Version:vdsm-4.19.45-1.el7.centos
> SPICE-Version:0.12.8 - 2.el7.1
> CEPH-Version:librbd1-0.94.5-2.el7
> Kernel Features:PTI: 1, IBPB: 0, IBRS: 0
>
> engine.log for the migration:
> 2018-06-14 13:13:30,300+02 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-246968) [] EVENT_ID:
> VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM:
> vm_to_migrate, Source: SOURCE, Destination: DESTINATION, Reason: Host
> preparing for maintenance).
> 2018-06-14 13:13:32,886+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Fetched 8 VMs from
> VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'
> 2018-06-14 13:13:32,887+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly
> detected as 'MigratingTo' on VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:13:32,887+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the
> refresh until migration is done
> 2018-06-14 13:13:47,939+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly
> detected as 'MigratingTo' on VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:13:47,939+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the
> refresh until migration is done
> 2018-06-14 13:14:02,993+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly
> detected as 'MigratingTo' on VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:14:02,993+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the
> refresh until migration is done
> 2018-06-14 13:14:18,042+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly
> detected as 'MigratingTo' on VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:14:18,042+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] VM
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the
> refresh until migration is done
> 2018-06-14 13:14:33,090+02 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] VM
> 

[ovirt-users] Re: Snapshot error with Cinder/Ceph disk

2018-06-27 Thread Nir Soffer
On Wed, Jun 27, 2018 at 6:47 PM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> i'm having problems with snapshotting Cinder/Ceph disks since upgrading
> to 4.2. Observed behavior has changed between 4.2.2 and 4.2.4.
>
> With oVirt 4.2.2 and Cinder 11.1.0
> - oVirt snapshot fails (according to oVirt), but is listed in GUI
> - disk snapshot are visible in oVirt storage domain tab and Cinder CLI
> - first try to remove the oVirt snapshot fails (according to oVirt), but
> disk snapshots are removed from oVirt storage domain tab and Cinder CLI
> - second try to remove oVirt snapshot succeeds
>
> With oVirt 4.2.4 and Cinder 11.1.1
> - oVirt snapshot fails "completely"
> - in Cinder logs i can see that disk snapshots are created and
> immediately deleted
>
> oVirt error log message is the same in both cases: "Failed in
> 'SnapshotVDS' method"
>
> I'm attaching logs from oVirt engine from the latter case.
>
> thx for any advice
> matthias
>

Can you file an ovirt-engine bug?

Nir


[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread fso...@systea.fr
Hi again,
In fact, the hour in the file is exactly 2 hours before; I guess a timezone problem (in the process of install?), as the file itself is correctly timed at 11:17am (the correct hour here in France). So the messages are in sync.

 Original message 
Subject: Re: [ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed
From: Simone Tiraboschi
To: fso...@systea.fr
Cc: users

Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages starts only at Jun 26 11:02:32.
Can you please reattach it for the relevant time frame?

On Wed, Jun 27, 2018 at 10:54 AM fsoyer wrote:
Hi Simone,
here are the relevant parts of messages and the engine install log (there was only this file in /var/log/libvirt/qemu).
Thanks for your time.
Frank

On Tuesday, June 26, 2018 11:43 CEST, Simone Tiraboschi wrote:

On Tue, Jun 26, 2018 at 11:39 AM fsoyer wrote:
Well,
unfortunately, it was a "false positive". This morning I tried again, with the idea that at the moment the deploy asks for the final destination for the engine, I will restart bond0 + gluster + the engine volume at that moment.
Re-launching the deploy on the second "fresh" host (the first one, with all the errors from yesterday, was left in a doubtful state) with em2 and gluster+bond0 off:

# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
169.254.0.0/16 dev em1 scope link metric 1002

... does NOT work this morning:

[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end": "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

I'm sure that the network was the same yesterday when my attempt finally passed the "Get local VM IP" task.
Why not today?

After the error, the network was (lo, em1-em4 and bond0 unchanged from above, plus):

7: virbr0:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
9: vnet0:  mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe01:c632/64 scope link
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
169.254.0.0/16 dev em1 scope link metric 1002
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

So, finally, I

[ovirt-users] Snapshot error with Cinder/Ceph disk

2018-06-27 Thread Matthias Leopold

Hi,

i'm having problems with snapshotting Cinder/Ceph disks since upgrading 
to 4.2. Observed behavior has changed between 4.2.2 and 4.2.4.


With oVirt 4.2.2 and Cinder 11.1.0
- oVirt snapshot fails (according to oVirt), but is listed in GUI
- disk snapshot are visible in oVirt storage domain tab and Cinder CLI
- first try to remove the oVirt snapshot fails (according to oVirt), but 
disk snapshots are removed from oVirt storage domain tab and Cinder CLI

- second try to remove oVirt snapshot succeeds

With oVirt 4.2.4 and Cinder 11.1.1
- oVirt snapshot fails "completely"
- in Cinder logs i can see that disk snapshots are created and 
immediately deleted


oVirt error log message is the same in both cases: "Failed in 
'SnapshotVDS' method"


I'm attaching logs from oVirt engine from the latter case.

thx for any advice
matthias
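
For what it's worth, one way to cross-check what actually happens on the
Cinder side while reproducing this is to list the snapshots there. A sketch,
assuming the OpenStack client environment is already set up for the project
oVirt uses:

  # compare the snapshot list before and after taking the oVirt snapshot
  cinder snapshot-list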


2018-06-27 16:19:18,550+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (default 
task-3) [73adb039-22cc-47b5-9d0f-3620a12df43f] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]', 
sharedLocks=''}'
2018-06-27 16:19:19,186+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateSnapshotForVmCommand internal: false. Entities affected :  ID: 
4a8c9902-f9ab-490f-b1dd-82d9aee63b5f Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2018-06-27 16:19:19,208+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] EVENT_ID: FREEZE_VM_INITIATED(10,766), 
Freeze of guest filesystems on VM ovirt-test01.srv was initiated.
2018-06-27 16:19:19,209+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] START, FreezeVDSCommand(HostName = 
ov-test-04-01, 
VdsAndVmIDVDSParametersBase:{hostId='d8794e95-3f89-4b1a-9bec-12ccf6db0cb1', 
vmId='4a8c9902-f9ab-490f-b1dd-82d9aee63b5f'}), log id: 2d55f627
2018-06-27 16:19:19,259+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] FINISH, FreezeVDSCommand, log id: 
2d55f627
2018-06-27 16:19:19,262+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] EVENT_ID: FREEZE_VM_SUCCESS(10,767), 
Guest filesystems on VM ovirt-test01.srv have been frozen successfully.
2018-06-27 16:19:19,292+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateSnapshotDiskCommand internal: true. Entities affected :  ID: 
4a8c9902-f9ab-490f-b1dd-82d9aee63b5f Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2018-06-27 16:19:19,359+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommand] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-10) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateCinderSnapshotCommand internal: true. Entities affected :  ID: 
e97009e5-c712-4199-9664-572eaba268dc Type: StorageAction group 
CONFIGURE_VM_STORAGE with role type USER
2018-06-27 16:19:20,228+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) [] EVENT_ID: 
USER_CREATE_SNAPSHOT(45), Snapshot 'disk1_snap' creation for VM 
'ovirt-test01.srv' was initiated by admin@internal-authz.
2018-06-27 16:19:22,322+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateCinderSnapshot' id: 
'e4561612-d000-47f3-980e-1c05ed813f88' child commands '[]' executions were 
completed, status 'SUCCEEDED'
2018-06-27 16:19:22,322+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateCinderSnapshot' id: 
'e4561612-d000-47f3-980e-1c05ed813f88' Updating status to 'SUCCEEDED', The 
command end method logic will be executed by one of its parent commands.
2018-06-27 16:19:22,332+02 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateSnapshotDisk' id: 
'378e2ffc-352b-4318-b34b-6c46a7fc15d8' child commands 
'[e4561612-d000-47f3-980e-1c05ed813f88]' executions were completed, status 
'SUCCEEDED'
2018-06-27 16:19:22,332+02 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 

[ovirt-users] Host unable to run VMs after upgrading to 4.2 release

2018-06-27 Thread Michael Watters
After upgrading our ovirt hosts to the 4.2.4 release two nodes are
unable to run VMs.  The vdsm.log shows a failure which appears to be
related to firewalld.

2018-06-27 10:58:08,340-0400 ERROR (vm/5a42e1ed) [virt.vm] 
(vmId='5a42e1ed-7b9f-42f2-b1c3-403276cf1cd9') The vm start process failed 
(vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in 
_startUnderlyingVm
self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2876, in _run
dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 130, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in 
wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in 
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', 
dom=self)
libvirtError: The name org.fedoraproject.FirewallD1 was not provided by any 
.service files
2018-06-27 10:58:08,342-0400 INFO  (vm/5a42e1ed) [virt.vm] 
(vmId='5a42e1ed-7b9f-42f2-b1c3-403276cf1cd9') Changed state to Down: The name 
org.fedoraproject.FirewallD1 was not provided by any .service files (code=1) 
(vm:1683)

The cluster this host is a member of is configured to use *iptables*,
not firewalld.  Is there a way to resolve this?  What package provides
the .service file that vdsm is looking for?
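
(For reference, that D-Bus name is normally provided by the firewalld package.
A quick way to confirm on CentOS 7, assuming the usual location of D-Bus
activation files:

  # shows which package owns the activation file for org.fedoraproject.FirewallD1
  rpm -qf /usr/share/dbus-1/system-services/org.fedoraproject.FirewallD1.service

If firewalld was removed or stopped after libvirtd started, restarting
libvirtd and vdsmd on the host sometimes clears the error, but treat that as a
guess rather than a confirmed fix.)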




[ovirt-users] Re: Unable to do Live Migration

2018-06-27 Thread Michael Watters
Having the same issue after upgrading from ovirt 4.1 to 4.2.4. 
Attempting to migrate any VM results in an immediate failure.  The
engine log shows an error as follows.

2018-06-27 10:43:59,815-04 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-3) [] Migration of VM 'ASAP' to host
'ovirt-node-production1' failed: VM destroyed during the startup.
2018-06-27 10:43:59,819-04 INFO 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-10) [] VM
'91be7864-3793-4583-accb-31b441af7b6a'(ASAP) moved from 'MigratingFrom'
--> 'Up'


On 06/19/2018 04:29 AM, r...@chef.net wrote:
> Hello,
>
> I'm using a 4-node oVirt 4.1 cluster. After updating the Engine to 4.2.3.8
> (and CentOS 7.5), I tried to update the nodes too.
> Unfortunately, live migration is not working (any more). Maybe it's
> related to following error found in engine.log
>
> 2018-06-19 09:32:53,469+02 WARN  
> [org.ovirt.engine.core.bll.provider.network.openstack.CustomizedRESTEasyConnector]
>  (EE-ManagedThreadFactory-engineScheduled-Thread-7) [4d6a7dc4] Cannot 
> register external providers trust store: java.io.IOException: Keystore was 
> tampered with, or password was incorrect
> 2018-06-19 09:32:53,475+02 ERROR 
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-7) [4d6a7dc4] Command 
> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand' 
> failed: EngineException: (Failed with error unable to find valid 
> certification path to requested target and code 5050)
>
> I'm using the selfsigned CA provided with oVirt.
>
> Nodes (source and destination)
> OS-Version:RHEL - 7 - 4.1708.el7.centos
> Kernelversion: 3.10.0 - 693.21.1.el7.x86_64
> KVM-Version:2.9.0 - 16.el7_4.14.1
> LIBVIRT-Version:libvirt-3.2.0-14.el7_4.9
> VDSM-Version:vdsm-4.19.45-1.el7.centos
> SPICE-Version:0.12.8 - 2.el7.1
> CEPH-Version:librbd1-0.94.5-2.el7
> Kernel Features:PTI: 1, IBPB: 0, IBRS: 0
>
> engine.log for the migration:
> 2018-06-14 13:13:30,300+02 INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-246968) [] EVENT_ID: 
> VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: 
> vm_to_migrate, Source: SOURCE, Destination: DESTINATION, Reason: Host 
> preparing for maintenance).
> 2018-06-14 13:13:32,886+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Fetched 8 VMs from VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'
> 2018-06-14 13:13:32,887+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly 
> detected as 'MigratingTo' on VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:13:32,887+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the 
> refresh until migration is done
> 2018-06-14 13:13:47,939+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly 
> detected as 'MigratingTo' on VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:13:47,939+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the 
> refresh until migration is done
> 2018-06-14 13:14:02,993+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly 
> detected as 'MigratingTo' on VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 
> 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
> 2018-06-14 13:14:02,993+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 
> 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the 
> refresh until migration is done
> 2018-06-14 13:14:18,042+02 INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] VM 
> '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly 
> detected as 'MigratingTo' on VDS 
> 

[ovirt-users] Re: [ovirt-devel] Re: oVirt HCI point-to-point interconnection

2018-06-27 Thread Yaniv Kaul
On Wed, Jun 27, 2018 at 11:26 AM, Stefano Zappa 
wrote:

> The final purpose is strictly targeted to your HCI solution with 3-node
> gluster replication.
>

Can you explain to me what the benefit is? You need a switch anyway (for
uplink), so the 'cost saving' is that you can use a 4 port (?) switch and
do not need 6 ports?
Y.


>
>
>
>
> Stefano Zappa
> IT Specialist CAE - TT-3
> Technical Department
>
> Industrie Saleri Italo S.p.A.
> Phone:+39 0308250480
> Fax:+39 0308250466
>
> This message contains confidential information and is intended only for
> sab...@redhat.com, sbona...@redhat.com, stira...@redhat.com,
> users@ovirt.org, de...@ovirt.org. If you are not sab...@redhat.com,
> sbona...@redhat.com, stira...@redhat.com, users@ovirt.org, de...@ovirt.org
> you should not disseminate, distribute or copy this e-mail. Please notify
> stefano.za...@saleri.it immediately by e-mail if you have received this
> e-mail by mistake and delete this e-mail from your system. E-mail
> transmission cannot be guaranteed to be secure or error-free as information
> could be intercepted, corrupted, lost, destroyed, arrive late or
> incomplete, or contain viruses. Stefano Zappa therefore does not accept
> liability for any errors or omissions in the contents of this message,
> which arise as a result of e-mail transmission. If verification is required
> please request a hard-copy version.
> 
> From: Sahina Bose 
> Sent: Wednesday, June 27, 2018 10:15
> To: Sandro Bonazzola
> Cc: Stefano Zappa; Simone Tiraboschi; users@ovirt.org; de...@ovirt.org
> Subject: Re: [ovirt-users] oVirt HCI point-to-point interconnection
>
> The point-to-point interconnect is something we have not explored - I
> think it would prevent the solution from scaling out to more nodes.
>
> On Wed, Jun 27, 2018 at 1:22 PM, Sandro Bonazzola  > wrote:
> Simone, Sahina, can you please have a look?
>
> 2018-06-07 9:59 GMT+02:00 Stefano Zappa  stefano.za...@saleri.it>>:
>
> Good morning,
> I would like to kindly ask you a question about the feasibility of
> defining a point-to-point interconnection between three ovirt nodes.
>
> Initially with the idea of optimizing the direct communications between
> the nodes and especially the gluster communications, and so it would seem
> quite easy, then evaluating a more complex configuration, assuming to
> create an overlay L2 network on the three L3 point-to-point, using
> techniques like geneve, of which at the moment I have no mastery.
>
> If the direct routing of three nodes to interconnect the public network
> with the private overlay network was not easily doable, we could leave the
> private overlay network isolated from the outside world and connect the VM
> hosted engine directly to the two networks with two adapters.
>
> This layout with direct interconnection of the nodes without switches and
> a shared L2 overlay network between the nodes may in future be contemplated
> in future releases of your HCI solution?
>
> Thank you for your attention, have a nice day!
>
> Stefano Zappa.
>
>
>
>
>
>
> Stefano Zappa
>
> IT Specialist CAE - TT-3
>
>
>
> Industrie Saleri Italo S.p.A.
>
> Phone:  +39 0308250480
> Fax:+39 0308250466
>
>
>
> This message contains confidential information and is intended only for
> users@ovirt.org, in...@ovirt.org ovirt.org>, de...@ovirt.org. If you are not
> users@ovirt.org, in...@ovirt.org ovirt.org>, de...@ovirt.org you should not
> disseminate, distribute or copy this e-mail. Please notify
> stefano.za...@saleri.it immediately by
> e-mail if you have received this e-mail by mistake and delete this e-mail
> from your system. E-mail transmission cannot be guaranteed to be secure or
> error-free as information could be intercepted, corrupted, lost, destroyed,
> arrive late or incomplete, or contain viruses. Stefano Zappa therefore does
> not accept liability for any errors or omissions in the contents of this
> message, which arise as a result of e-mail transmission. If verification is
> required please request a hard-copy version.
>
> PRIVACY INFORMATION ART. 13 EU REG. 2016/679
> We inform you that the personal data contained in the present and
> subsequent electronic communications will be processed in compliance with
> the EU Regulation 2016/679, for the purpose of and only for the time
> necessary to allow the sender to carry out the activities connected to the
> existing commercial relationships.
> The provision of personal data is not mandatory. However, failure to
> provide them determines the impossibility of achieving the aforementioned
> purpose.
> With regard to these data, is allowed the exercise of the rights set out
> in art. 13 and from the articles from 15 to 22 of EU Regulation 2016/679
> and in 

[ovirt-users] Re: Ovirt and L2 Gateway

2018-06-27 Thread Marcin Mirecki
Hi Carl,

What you want is probably to use the l2gateway type logical switch port in
OVN.
Please refer to the following doc for the description (not very detailed
unfortunately):
http://www.openvswitch.org//support/dist-docs/ovn-nb.5.txt
Look at the Logical_Switch_Port Table type field, along with some of the
options keys.

Unfortunately we do not support this in ovirt, nor is this supported in
ovirt-provider-ovn.
To use this ovn feature, you will have to manually add an l2gateway port to
your environment.
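
For illustration, a minimal sketch of what such a manual setup could look like
with ovn-nbctl; the switch, port and network names are placeholders and the
exact options depend on the OVN version in use:

  # create an l2gateway port on an existing OVN logical switch
  ovn-nbctl lsp-add my_switch my_l2gw_port
  ovn-nbctl lsp-set-type my_l2gw_port l2gateway
  ovn-nbctl lsp-set-addresses my_l2gw_port unknown
  # bind it to a physical network name and pin it to a gateway chassis
  ovn-nbctl lsp-set-options my_l2gw_port network_name=physnet l2gateway-chassis=<chassis-id>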

Marcin



On Sun, Jun 24, 2018 at 11:18 PM,  wrote:

> I have installed oVirt 4.2.3 and everything seems to be working fine: I can
> create virtual (Geneve overlay) networks for communication between virtual
> machines via the external provider ovirt-provider-ovn by using the OVS
> switch type on the cluster. Live migrations and everything else within the
> virtual environment works perfectly :-)
>
> For connections from virtual machines to physical VLAN's in a switch, I
> can also create a logical network which is created using the external
> provider ovirt-provider-ovn by specifying a connection to a physical VLAN
> network created as a separate data center network. This method requires
> that all ovirt-nodes (hosts) in the cluster have access to the physical
> network though.
>
> What I am looking for is a way to implement an L2 Gateway where not all
> ovirt nodes (hosts) need to have direct access to the physical
> network. What I am looking for is a way where virtual machines can
> communicate with the L2 Gateway via virtual (Geneve overlay) networks. On
> the L2 Gateway the virtual network shall then be bridged to the physical
> VLAN on a dedicated network interface. My goal is that the virtual network
> and the physical network becomes one big broadcast domain.
>
> This concept has been described by different people on the Internet such
> as these articles:
> - https://weiti.org/ovn/2018/01/03/ovn-l2-breakout-options
> - https://wiki.openstack.org/wiki/Neutron/L2-GW
>
> How can I accomplish something similar in an ovirt-environment?
>
> Thanks in advance,
>
> Carl Grundholm


[ovirt-users] Re: Hang tasks delete snapshot

2018-06-27 Thread Marcelo Leandro
Hello,

The task no longer shows in the GUI, but engine.log shows this message:

2018-06-27 10:01:54,705-03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler4) [4172f065-06a4-4f09-954e-0dcfceb61cda] Command
'RemoveSnapshot' (id: '8639a3dc-0064-44b8-84b7-5f733c3fd9b3') waiting on
child command id: '94607c69-77ce-4005-8ed9-a8b7bd40c496'
type:'RemoveSnapshotSingleDiskLive' to complete
2018-06-27 10:01:54,712-03 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler4) [4172f065-06a4-4f09-954e-0dcfceb61cda] Failed
invoking callback end method 'onFailed' for command
'94607c69-77ce-4005-8ed9-a8b7bd40c496' with exception 'null', the callback
is marked for end method retries
2018-06-27 10:02:04,730-03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [4172f065-06a4-4f09-954e-0dcfceb61cda] Command
'RemoveSnapshot' (id: '8639a3dc-0064-44b8-84b7-5f733c3fd9b3') waiting on
child command id: '94607c69-77ce-4005-8ed9-a8b7bd40c496'
type:'RemoveSnapshotSingleDiskLive' to complete
2018-06-27 10:02:04,737-03 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler2) [4172f065-06a4-4f09-954e-0dcfceb61cda] Failed
invoking callback end method 'onFailed' for command
'94607c69-77ce-4005-8ed9-a8b7bd40c496' with exception 'null', the callback
is marked for end method retries


2018-06-26 10:14 GMT-03:00 Marcelo Leandro :

> Many thanks, it works for me.
>
> 2018-06-26 9:53 GMT-03:00 Nathanaël Blanchet :
>
>>
>>
>> On 26/06/2018 at 13:29, Marcelo Leandro wrote:
>>
>> Hello ,
>>
>> Nathanael, thanks for the reply. If possible, can you describe these steps and
>> what this command does?
>>
>> I would like to understand, so I can solve future problems myself.
>>
>> Many thanks.
>>
>> Marcelo Leandro
>>
>> 2018-06-26 6:50 GMT-03:00 Nathanaël Blanchet :
>>
>>> PGPASSWORD=X /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
>>> -q -t snapshot -u engine
>>>
>> you can find your PGPASSWORD here : 
>> /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
>>
>>
>>> 296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>>>
>> This command returns a list of locked processes of the chosen type (-t
>> TYPE   - The object type {all | vm | template | disk | snapshot})
>> First item is the vm id, second one the locked snapshot id.
>>
>>
>>> REMOVE
>>>
>>> PGPASSWORD=X /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
>>> -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>>>
>> Now use the locked snapshot id to unlock
>>
>>
>>
>>>
>>> On 25/06/2018 at 19:42, Marcelo Leandro wrote:
>>>
>>> Hello,
>>> For a few days I have been trying to delete a snapshot, but the task has not concluded yet. How
>>> can I stop this task? I already tried taskcleaner.sh but didn't have success.
>>>
>>> attached the ovirt and vdsm-spm logs.
>>>
>>> ovirt version. 4.1.9
>>>
>>> Thanks.
>>>
>>>
>>>
>>>
>>> --
>>> Nathanaël Blanchet
>>>
>>> Supervision réseau
>>> Pôle Infrastrutures Informatiques227 avenue Professeur-Jean-Louis-Viala 
>>> 
>>> 34193 MONTPELLIER CEDEX 5   
>>> Tél. 33 (0)4 67 54 84 55
>>> Fax  33 (0)4 67 54 84 14blanc...@abes.fr
>>>
>>>
>>
>> --
>> Nathanaël Blanchet
>>
>> Supervision réseau
>> Pôle Infrastrutures Informatiques227 avenue Professeur-Jean-Louis-Viala 
>> 
>> 34193 MONTPELLIER CEDEX 5
>> Tél. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14blanc...@abes.fr
>>
>>
>


[ovirt-users] NAT+IP masquerading with oVirt 4.2

2018-06-27 Thread julius . schwartzenberg
Hi,

I'm trying to set up NAT+IP masquerading with oVirt 4.2. I have enabled the 
libvirt default network on virbr0 and added a network with the same name in 
oVirt. I have also installed vdsm-hook-extnet and set extnet to 'default' for
this network.

When I try to start my VM with this network assigned to it, I get this error:
The host ovirthost did not satisfy internal filter Network because network(s) 
virbr0 are missing.

What should I do to solve this?

Best regards,
Julius
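
(One thing worth verifying first, as a sketch using plain virsh, is that the
libvirt default network really exists and is active on the host:

  # list all libvirt networks and their state
  virsh net-list --all
  # start and autostart the default network if it is inactive
  virsh net-start default
  virsh net-autostart default

The scheduling error itself suggests the engine expects a logical network
named virbr0 to be attached to a NIC on the host, which is a separate check in
the host's network setup.)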


[ovirt-users] Re: Unable to do Live Migration

2018-06-27 Thread rni
Hi,
does nobody have an idea how to solve this issue?


[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-27 Thread Ala Hino
The listed leases are broken; there should be no effect on the VM.

On Wed, Jun 27, 2018, 4:19 PM Enrico Becchetti 
wrote:

> Hi all,
> after update vdsm I run this command:
>
> *[root@infm-vm04 ~]# vdsm-tool -v check-volume-leases*
> *WARNING: Make sure there are no running storage operations.*
>
> *Do you want to check volume leases? [yes,NO] yes*
>
> *Checking active storage domains. This can take several minutes, please
> wait.*
>
> After that I saw some volumes with issue:
>
> *The following volume leases need repair:*
>
> *- domain: 47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5*
>
> *  - image: 4b2a6552-847c-43f4-a180-b037d0b93a30*
> *- volume: 6eb8caf0-b120-40e5-86b5-405f15d1245a*
> *  - image: 871196b2-9d8b-422f-9a3e-be54e100dc5c*
> *- volume: 861ff7dd-a01c-47f7-8a01-95bc766c2607*
> *  - image: 267b8b8c-da09-44ef-81b3-065dfa2e7085*
> *- volume: d5f3158a-87ac-4c02-84ba-fcb86b8688a0*
> *  - image: c5611862-6504-445e-a6c8-f1e1a95b5df7*
> *- volume: e156ac2e-09ac-4e1e-a139-17fa374a96d4*
> *  - image: e765c9c4-2ef9-4c8f-a573-8cd2b0a0a3a2*
> *- volume: 47d81cbe-598a-402a-9795-1d046c45b7b1*
> *  - image: ab88a08c-910d-44dd-bdf8-8242001ba527*
> *- volume: 86c72239-525f-4b0b-9aa6-763fc71340bc*
> *  - image: 5e8c4620-b6a5-4fc6-a5bb-f209173d186c*
> *- volume: 0f52831b-ec35-4140-9a8c-fa24c2647f17*
> *  - image: a67809fc-b830-4ea3-af66-b5b9285b4924*
> *- volume: 26c6bdd7-1382-4e8e-addc-dcd3898b317f*
>
> *Do you want to repair the leases? [yes,NO] *
>
> what happens if I try to repair them? Is there any impact on
> my running VMs?
>
> Thanks a lot !!!
> Best Regards
> Enrico
>
>
>
>
> On 26/06/2018 15:32, Ala Hino wrote:
>
> You are running vdsm-4.20.17, and the tool introduced in vdsm-4.20.24.
> You will have to upgrade vdsm to be able to use the tool.
>
> On Tue, Jun 26, 2018 at 4:29 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> Hi,
>> I run this command from my SPM , Centos 7.4.1708:
>>
>> [root@infn-vm05 vdsm]# rpm -qa | grep -i vdsm
>> vdsm-hook-vmfex-dev-4.20.17-1.el7.centos.noarch
>> vdsm-python-4.20.17-1.el7.centos.noarch
>> vdsm-hook-fcoe-4.20.17-1.el7.centos.noarch
>> vdsm-common-4.20.17-1.el7.centos.noarch
>> vdsm-jsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-hook-ethtool-options-4.20.17-1.el7.centos.noarch
>> vdsm-hook-openstacknet-4.20.17-1.el7.centos.noarch
>> vdsm-http-4.20.17-1.el7.centos.noarch
>> vdsm-client-4.20.17-1.el7.centos.noarch
>> vdsm-gluster-4.20.17-1.el7.centos.noarch
>> vdsm-hook-vfio-mdev-4.20.17-1.el7.centos.noarch
>> vdsm-api-4.20.17-1.el7.centos.noarch
>> vdsm-network-4.20.17-1.el7.centos.x86_64
>> vdsm-yajsonrpc-4.20.17-1.el7.centos.noarch
>> vdsm-4.20.17-1.el7.centos.x86_64
>> vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch
>>
>> Thanks !!!
>> Enrico
>>
>>
>> On 26/06/2018 15:21, Ala Hino wrote:
>>
>> Hi Enrico,
>>
>> What's the vdsm version that you are using?
>>
>> The tool introduced in vdsm 4.20.24.
>>
>> On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>> Dear Ala,
>>> if you have a few minutes for me I'd like to ask you to read my issue.
>>> It's a strange problem because my VM works fine but I can't delete its
>>> snapshot.
>>> Thanks a lot
>>> Best Regards
>>> Enrico
>>>
>>>
>>>  Forwarded Message 
>>> Subject: [ovirt-users] Re: Cannot acquire Lock  snapshot error
>>> Date: Mon, 25 Jun 2018 14:20:21 +0200
>>> From: Enrico Becchetti 
>>> 
>>> To: Nir Soffer  
>>> CC: users  
>>>
>>>
>>>  Dear Friends ,
>>> to fix my problem I've tried the vdsm-tool command but it seems there is an error:
>>>
>>> [root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
>>> Usage: /usr/bin/vdsm-tool [options]  [arguments]
>>> Valid options:
>>> ..
>>>
>>> as you can see there is no such option, and my oVirt engine is
>>> already at 4.2.
>>> Any other ideas ?
>>> Thanks a lot !
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> On 22/06/2018 17:46, Nir Soffer wrote:
>>>
>>> On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti <
>>> enrico.becche...@pg.infn.it> wrote:
>>>
  Dear All,
 my ovirt 4.2.1.7-1.el7.centos has three hypervisors, lvm storage and
 a virtual machine with
 ovirt-engine. All works fine but with one vm when I try to remove its
 snapshot I have
 this error:

 2018-06-22 07:35:48,155+0200 INFO  (jsonrpc/5) [vdsm.api] START
 prepareMerge(spUUID=u'18d57688-6ed4-43b8-bd7c-0665b55950b7',
 subchainInfo={u'img_id': u'c5611862-6504-445e-a6c8-f1e1a95b5df7', u'sd_id':
 u'47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5', u'top_id':
 u'0e6f7512-871d-4645-b9c6-320ba7e3bee7', u'base_id':
 u'e156ac2e-09ac-4e1e-a139-17fa374a96d4'}) from=:::10.0.0.46,53304,
 flow_id=07011450-2296-4a13-a9ed-5d5d2b91be98,
 task_id=87f95d85-cc3d-4f29-9883-a4dbb3808f88 (api:46)
 2018-06-22 07:35:48,406+0200 INFO  (tasks/3) [storage.merge] Preparing
 subchain >>> img_id=c5611862-6504-445e-a6c8-f1e1a95b5df7,
 

[ovirt-users] Re: Cannot acquire Lock .... snapshot error

2018-06-27 Thread Enrico Becchetti

Hi all,
after update vdsm I run this command:

[root@infm-vm04 ~]# vdsm-tool -v check-volume-leases
WARNING: Make sure there are no running storage operations.

Do you want to check volume leases? [yes,NO] yes

Checking active storage domains. This can take several minutes, please
wait.

After that I saw some volumes with issue:

The following volume leases need repair:

- domain: 47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5

  - image: 4b2a6552-847c-43f4-a180-b037d0b93a30
    - volume: 6eb8caf0-b120-40e5-86b5-405f15d1245a
  - image: 871196b2-9d8b-422f-9a3e-be54e100dc5c
    - volume: 861ff7dd-a01c-47f7-8a01-95bc766c2607
  - image: 267b8b8c-da09-44ef-81b3-065dfa2e7085
    - volume: d5f3158a-87ac-4c02-84ba-fcb86b8688a0
  - image: c5611862-6504-445e-a6c8-f1e1a95b5df7
    - volume: e156ac2e-09ac-4e1e-a139-17fa374a96d4
  - image: e765c9c4-2ef9-4c8f-a573-8cd2b0a0a3a2
    - volume: 47d81cbe-598a-402a-9795-1d046c45b7b1
  - image: ab88a08c-910d-44dd-bdf8-8242001ba527
    - volume: 86c72239-525f-4b0b-9aa6-763fc71340bc
  - image: 5e8c4620-b6a5-4fc6-a5bb-f209173d186c
    - volume: 0f52831b-ec35-4140-9a8c-fa24c2647f17
  - image: a67809fc-b830-4ea3-af66-b5b9285b4924
    - volume: 26c6bdd7-1382-4e8e-addc-dcd3898b317f

Do you want to repair the leases? [yes,NO]

what happens if I try to repair them? Is there any impact on
my running VMs?

Thanks a lot !!!
Best Regards
Enrico




On 26/06/2018 15:32, Ala Hino wrote:

You are running vdsm-4.20.17, and the tool introduced in vdsm-4.20.24.
You will have to upgrade vdsm to be able to use the tool.

On Tue, Jun 26, 2018 at 4:29 PM, Enrico Becchetti 
mailto:enrico.becche...@pg.infn.it>> wrote:


Hi,
I run this command from my SPM , Centos 7.4.1708:

[root@infn-vm05 vdsm]# rpm -qa | grep -i vdsm
vdsm-hook-vmfex-dev-4.20.17-1.el7.centos.noarch
vdsm-python-4.20.17-1.el7.centos.noarch
vdsm-hook-fcoe-4.20.17-1.el7.centos.noarch
vdsm-common-4.20.17-1.el7.centos.noarch
vdsm-jsonrpc-4.20.17-1.el7.centos.noarch
vdsm-hook-ethtool-options-4.20.17-1.el7.centos.noarch
vdsm-hook-openstacknet-4.20.17-1.el7.centos.noarch
vdsm-http-4.20.17-1.el7.centos.noarch
vdsm-client-4.20.17-1.el7.centos.noarch
vdsm-gluster-4.20.17-1.el7.centos.noarch
vdsm-hook-vfio-mdev-4.20.17-1.el7.centos.noarch
vdsm-api-4.20.17-1.el7.centos.noarch
vdsm-network-4.20.17-1.el7.centos.x86_64
vdsm-yajsonrpc-4.20.17-1.el7.centos.noarch
vdsm-4.20.17-1.el7.centos.x86_64
vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch

Thanks !!!
Enrico


On 26/06/2018 15:21, Ala Hino wrote:

Hi Enrico,

What's the vdsm version that you are using?

The tool introduced in vdsm 4.20.24.

On Tue, Jun 26, 2018 at 3:51 PM, Enrico Becchetti
mailto:enrico.becche...@pg.infn.it>> wrote:

Dear Ala,
if you have a few minutes for me I'd like to ask you to read
my issue.
It's a strange problem because my VM works fine but I can't
delete its snapshot.
Thanks a lot
Best Regards
Enrico


 Forwarded Message 
Subject:    [ovirt-users] Re: Cannot acquire Lock  snapshot
error
Date:   Mon, 25 Jun 2018 14:20:21 +0200
From:   Enrico Becchetti 

To:  Nir Soffer  
CC: users  



 Dear Friends ,
to fix my problem I've tried the vdsm-tool command but it seems there is an
error:

[root@infn-vm05 vdsm]# vdsm-tool check-volume-lease
Usage: /usr/bin/vdsm-tool [options]  [arguments]
Valid options:
..

as you can see there is no such option, and my oVirt
engine is already at 4.2.
Any other ideas ?
Thanks a lot !
Best Regards
Enrico



On 22/06/2018 17:46, Nir Soffer wrote:

On Fri, Jun 22, 2018 at 3:13 PM Enrico Becchetti
mailto:enrico.becche...@pg.infn.it>> wrote:

 Dear All,
my ovirt 4.2.1.7-1.el7.centos has three hypervisors, lvm
storage and a virtual machine with
ovirt-engine. All works fine but with one vm when I try
to remove its snapshot I have
this error:

2018-06-22 07:35:48,155+0200 INFO  (jsonrpc/5)
[vdsm.api] START
prepareMerge(spUUID=u'18d57688-6ed4-43b8-bd7c-0665b55950b7',
subchainInfo={u'img_id':
u'c5611862-6504-445e-a6c8-f1e1a95b5df7', u'sd_id':
u'47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5', u'top_id':
u'0e6f7512-871d-4645-b9c6-320ba7e3bee7', u'base_id':
u'e156ac2e-09ac-4e1e-a139-17fa374a96d4'})
from=:::10.0.0.46,53304,
flow_id=07011450-2296-4a13-a9ed-5d5d2b91be98,

[ovirt-users] Problems import vm/uploading disk

2018-06-27 Thread Alan G
Hi,

I'm trying to import a KVM VM into oVirt. First I tried the GUI VM import
functionality and this failed with the error below. However, other VMs from the
same source host were imported fine.

Thread-32893::ERROR::2018-06-27 09:43:48,703::v2v::679::root::(_run) Job u'1a5fe287-d2dd-429c-87b5-6f240b59c17f' failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 674, in _run
    self._import()
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 691, in _import
    with self._command.execute() as self._proc:
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 597, in execute
    yield self._start_helper()
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 374, in _start_helper
    env=self._environment())
  File "/usr/lib/python2.7/site-packages/vdsm/commands.py", line 71, in execCmd
    deathSignal=deathSignal)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 63, in __init__
    **kw)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 83, in _execute_child_v276
    _to_close=to_close
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 118, in _execute_child_v275
    restore_sigpipe
OSError: [Errno 0] Error
Thread-32893::ERROR::2018-06-27 09:43:48,704::v2v::686::root::(_run) Job u'1a5fe287-d2dd-429c-87b5-6f240b59c17f', error trying to abort: AttributeError("'NoneType' object has no attribute 'returncode'",)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 683, in _run
    self._abort()
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 743, in _abort
    if self._proc.returncode is None:
AttributeError: 'NoneType' object has no attribute 'returncode'

Second, I tried using the disk upload feature in the GUI, but this created the
target disk and then failed to upload the content - I've ensured that the
relevant CAs are loaded into the browser.

Third, I successfully imported the VM into an oVirt 4.2 instance in the lab, but
can find no way of then importing it into oVirt 4.0, as the storage domain
format seems not to be backwards compatible.

Finally, I used the upload_disk.py example script from the Python SDK. This
creates the disk and starts to upload the content but always fails at 2% with
"socket.error: [Errno 32] Broken pipe".

Any ideas on how I can get this qcow2 image loaded into oVirt 4.0?

Thanks,

Alan
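
(One thing that might be worth ruling out before retrying the upload: an image
produced by a newer toolchain may be qcow2 compat 1.1, while a 4.0-era storage
domain expects compat 0.10. A rough check and conversion sketch - the file name
here is only an example:)

# show the format and compat level of the source image
qemu-img info vm-disk.qcow2

# verify the image is internally consistent
qemu-img check vm-disk.qcow2

# rewrite it as qcow2 compat 0.10 before uploading
qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 vm-disk.qcow2 vm-disk-compat010.qcow2

___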
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A7RVNCGYKBHC7NV6HUJ2WAJ2WDJORTIU/


[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread Simone Tiraboschi
Hi,
HostedEngineLocal was started at 2018-06-26 09:17:26 but /var/log/messages
starts only at Jun 26 11:02:32.
Can you please reattach it for the relevant time frame?
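
(If trimming the file by hand is a pain, something like this on the host should
extract just that window - assuming the journal covers it and the clock matches
the timestamps above:)

# dump the system log for the time window of the failed deploy
journalctl --since "2018-06-26 09:00:00" --until "2018-06-26 12:00:00" > messages-deploy.txt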

On Wed, Jun 27, 2018 at 10:54 AM fsoyer  wrote:

> Hi Simone,
> here are the relevant parts of /var/log/messages and the engine install log (that
> was the only file in /var/log/libvirt/qemu).
>
> Thanks for your time.
>
> Frank
>
>
> On Tuesday, June 26, 2018 at 11:43 CEST, Simone Tiraboschi
> wrote:
>
>
>
> On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:
>
>> Well,
>> unfortunately, it was a "false-positive". This morning I tried again, with
>> the idea that at the moment the deploy asks for the final destination
>> for the engine, I will restart bond0+gluster+volume engine at that moment.
>> Re-launching the deploy on the second "fresh" host (the first one with
>> all errors yesterday was left in a doubtful state) with em2 and gluster+bond0
>> off :
>>
>> # ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: em1:  mtu 1500 qdisc mq state UP
>> group default qlen 1000
>> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>>valid_lft forever preferred_lft forever
>> 3: em2:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
>> 4: em3:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
>> 5: em4:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
>> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
>> group default qlen 1000
>> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>>
>> # ip r
>> default via 10.0.1.254 dev em1
>> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
>> 169.254.0.0/16 dev em1 scope link metric 1002
>>
>> ... does NOT work this morning
>>
>> [ INFO  ] TASK [Get local VM IP]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32
>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end":
>> "2018-06-26 11:26:07.581706", "rc": 0, "start": "2018-06-26
>> 11:26:07.498119", "stderr": "", "stderr_lines": [], "stdout": "",
>> "stdout_lines": []}
>>
>> I'm sure that the network was the same yesterday when my attempt finally
>> passed the "get local vm ip". Why not today ?
>> After the error, the network was :
>>
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: em1:  mtu 1500 qdisc mq state UP
>> group default qlen 1000
>> link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link
>>valid_lft forever preferred_lft forever
>> 3: em2:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
>> 4: em3:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
>> 5: em4:  mtu 1500 qdisc mq state DOWN group default
>> qlen 1000
>> link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
>> 6: bond0:  mtu 9000 qdisc noqueue state DOWN
>> group default qlen 1000
>> link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
>> 7: virbr0:  mtu 1500 qdisc noqueue state
>> UP group default qlen 1000
>> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
>> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>>valid_lft forever preferred_lft forever
>> 8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master
>> virbr0 state DOWN group default qlen 1000
>> link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
>> 9: vnet0:  mtu 1500 qdisc pfifo_fast
>> master virbr0 state UNKNOWN group default qlen 1000
>> link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc16:3eff:fe01:c632/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> # ip r
>> default via 10.0.1.254 dev em1
>> 10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227
>> 169.254.0.0/16 dev em1 scope link metric 1002
>> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
>>
>>
>>
>> So, finally, I have no idea why this happens :(((
>
>
> Can you please attach /var/log/messages and /var/log/libvirt/qemu/* ?
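
(If it fails again, it may also be worth checking on the host whether dnsmasq
ever handed out a lease for that MAC at all - a rough check; the exact file
names under the dnsmasq state directory depend on the libvirt version:)

# the live view libvirt itself uses for the default network
virsh -r net-dhcp-leases default

# and the raw dnsmasq state files for the same network
grep -i 00:16:3e:01:c6:32 /var/lib/libvirt/dnsmasq/* 2>/dev/null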

[ovirt-users] Re: RHEL5 guests frequently hang when migrating host.

2018-06-27 Thread Jiří Sléžka
Hi,

I had a similar experience with some VMs (a really small number of them).
They hang, CPU at 100%, console unresponsive, and powering the VM off
and on solves the problem. But one of these VMs was CentOS 6, not 5 (in
fact it was our mirror.slu.cz, which also mirrors the oVirt project, so sorry
for the unplanned outages ;-)

I am not sure it is tied to migrations; I had this happen even
without migrating the VM (as long as it was not migrated automatically).

This didn't happen after I upgraded the hosts to CentOS 7.5.1804 (I am always
running the latest oVirt available at the time here).

CPU family is AMD Opteron(tm) Processor 6172 (yes, it is a slightly
older cluster :-)

Cheers,

Jiri Slezka


On 06/27/2018 08:59 AM, Eduardo Mayoral wrote:
> Hi,
> 
>     I am experiencing that my RHEL5 guests frequently "hang" when
> migrating host. Console is blank, CPU after migration is 100% and as far
> as oVirt is concerned, the VM is OK.
> 
>     oVirt is 4.2.3.8-1.el7, on CentOS 7. Hosts are CentOS 7 as well.
> Cluster is in "Intel Westmere family" CPU type.
> 
>     Guest is RHEL5, fully patched, kernel 4.2.3.8-1.el7 with
> ovirt-guest-agent installed from EPEL.
> 
>     I do not see anything out of place in the ovirt-engine and vdsm
> logs, and the guest logs are simply not there; they stop right before the
> migration, as if the machine had "frozen".
> 
>     Powering the VM off and starting it starts the VM correctly. This
> does not happen 100% of the time. If I try to migrate the VM when it is
> freshly started the migration is faster (maybe 5 seconds), and the guest
> OS does not hang.
> 
>     Anybody else experiencing something similar? Maybe something
> (timeouts?) that I should tune on the guest OS for RHEL5?
> 
>     Thanks!
> 
> --
> 
> Eduardo Mayoral.
> 
> 
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/42D5IQBRX7ABZFAXA3O4SHABRPLDCMZA/
> 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JDNYDKUGP3LJQY4DMBEY3PIPOZFFPOII/


[ovirt-users] Re: RHEL5 guests frequently hang when migrating host.

2018-06-27 Thread Eduardo Mayoral
Yes, this definitely looks like it. Thank you very much!



On 27/06/18 10:51, Arnaud Lauriou wrote:
> Hi,
>
> Yes, I've got quite the same issues with RHEL5 guests on oVirt 4.2.3 :
> random guest freeze and oVirt host crash while rebooting a guest !
> It seems to be a kernel bug introduce with 7.5 :
> https://bugzilla.redhat.com/show_bug.cgi?id=1584775
>
> https://access.redhat.com/solutions/3496461
>
> While waiting for this kernel patch, I use an older kernel version on
> the oVirt host : 3.10.0-693.21.1.el7 works fine with oVirt 4.2.3 and
> RHEL5 guest.
>
> Regards,
>
> Arnaud Lauriou
>
> On 06/27/2018 08:59 AM, Eduardo Mayoral wrote:
>>
>> Hi,
>>
>>     I am experiencing that my RHEL5 guests frequently "hang" when
>> migrating host. Console is blank, CPU after migration is 100% and as
>> far as oVirt is concerned, the VM is OK.
>>
>>     oVirt is 4.2.3.8-1.el7, on CentOS 7. Hosts are CentOS 7 as well.
>> Cluster is in "Intel Westmere family" CPU type.
>>
>>     Guest is RHEL5, fully patched, kernel 4.2.3.8-1.el7 with
>> ovirt-guest-agent installed from EPEL.
>>
>>     I do not see anything out of place in the ovirt-engine and vdsm
>> logs, and the guest logs are simply not there; they stop right before
>> the migration, as if the machine had "frozen".
>>
>>     Powering the VM off and starting it starts the VM correctly. This
>> does not happen 100% of the time. If I try to migrate the VM when it
>> is freshly started the migration is faster (maybe 5 seconds), and the
>> guest OS does not hang.
>>
>>     Anybody else experiencing something similar? Maybe something
>> (timeouts?) that I should tune on the guest OS for RHEL5?
>>
>>     Thanks!
>>
>> --
>>
>> Eduardo Mayoral.
>>
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/42D5IQBRX7ABZFAXA3O4SHABRPLDCMZA/
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6G647XAI2JDTAMBVUFVM63VOCMYWAHA/


[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-06-27 Thread fsoyer

Hi Simone,
here are the revelant part of messages and the engine install log (there were 
only this file in /var/log/libvirt/qemu) .

Thanks for your time.

Frank
 On Tuesday, June 26, 2018 at 11:43 CEST, Simone Tiraboschi wrote:
  On Tue, Jun 26, 2018 at 11:39 AM fsoyer  wrote:

Well, unfortunately, it was a "false-positive". This morning I tried again, with the
idea that at the moment the deploy asks for the final destination for the
engine, I will restart bond0+gluster+volume engine at that moment.
Re-launching the deploy on the second "fresh" host (the first one with all
errors yesterday was left in a doubtful state) with em2 and gluster+bond0 off :
# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002

... does NOT work this morning
[ INFO  ] TASK [Get local VM IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:01:c6:32 | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.083587", "end": "2018-06-26
11:26:07.581706", "rc": 0, "start": "2018-06-26 11:26:07.498119", "stderr": "",
"stderr_lines": [], "stdout": "", "stdout_lines": []}

I'm sure that the network was the same yesterday when my attempt finally passed
the "get local vm ip". Why not today ?
After the error, the network was :
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
    link/ether e0:db:55:15:f0:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.227/8 brd 10.255.255.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::e2db:55ff:fe15:f0f0/64 scope link 
       valid_lft forever preferred_lft forever
3: em2:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f1 brd ff:ff:ff:ff:ff:ff
4: em3:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f2 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 
1000
    link/ether e0:db:55:15:f0:f3 brd ff:ff:ff:ff:ff:ff
6: bond0:  mtu 9000 qdisc noqueue state DOWN group 
default qlen 1000
    link/ether 3a:ab:a2:f2:38:5c brd ff:ff:ff:ff:ff:ff
7: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 
state DOWN group default qlen 1000
    link/ether 52:54:00:ae:8d:93 brd ff:ff:ff:ff:ff:ff
9: vnet0:  mtu 1500 qdisc pfifo_fast master 
virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:01:c6:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe01:c632/64 scope link 
       valid_lft forever preferred_lft forever

# ip r
default via 10.0.1.254 dev em1 
10.0.0.0/8 dev em1 proto kernel scope link src 10.0.0.227 
169.254.0.0/16 dev em1 scope link metric 1002 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 
 
 So, finally, I have no idea why this happens :(((

Can you please attach /var/log/messages and /var/log/libvirt/qemu/* ?

On Tuesday, June 26, 2018 at 09:21 CEST, Simone Tiraboschi wrote:
  On Mon, Jun 25, 2018 at 6:32 PM fsoyer  wrote:

Well, answering to myself with more information.
Thinking that the network was part of the problem, I tried to stop the gluster
volumes, stop gluster on the host, and stop bond0.
So, the host now had just em1 with one IP.
And... The winner is... Yes : the install passed the "[Get local VM IP]" and 
continued !!

I hit ctrl-c, restart the bond0, 

[ovirt-users] Re: RHEL5 guests frequently hang when migrating host.

2018-06-27 Thread Arnaud Lauriou

Hi,

Yes, I've got much the same issues with RHEL5 guests on oVirt 4.2.3:
random guest freezes and oVirt host crashes while rebooting a guest!

It seems to be a kernel bug introduced with 7.5:
https://bugzilla.redhat.com/show_bug.cgi?id=1584775

https://access.redhat.com/solutions/3496461

While waiting for the kernel patch, I use an older kernel version on
the oVirt hosts: 3.10.0-693.21.1.el7 works fine with oVirt 4.2.3 and
RHEL5 guests.
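
(In case it helps anyone doing the same: assuming the 693.21.1 kernel is still
installed and the host uses the stock EL7 grub2 tools, switching back to it can
be done roughly like this:)

# confirm the older kernel package is still present
rpm -q kernel

# make it the default boot entry and verify before rebooting the host
grubby --set-default=/boot/vmlinuz-3.10.0-693.21.1.el7.x86_64
grubby --default-kernel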


Regards,

Arnaud Lauriou

On 06/27/2018 08:59 AM, Eduardo Mayoral wrote:


Hi,

    I am experiencing that my RHEL5 guests frequently "hang" when 
migrating host. Console is blank, CPU after migration is 100% and as 
far as oVirt is concerned, the VM is OK.


    oVirt is 4.2.3.8-1.el7, on CentOS 7. Hosts are CentOS 7 as well. 
Cluster is in "Intel Westmere family" CPU type.


    Guest is RHEL5, fully patched, kernel 4.2.3.8-1.el7 with 
ovirt-guest-agent installed from EPEL.


    I do not see anything out of place in the ovirt-engine and vdsm 
logs, and the guest logs are simply not there; they stop right before
the migration, as if the machine had "frozen".


    Powering the VM off and starting it starts the VM correctly. This 
does not happen 100% of the time. If I try to migrate the VM when it 
is freshly started the migration is faster (maybe 5 seconds), and the 
guest OS does not hang.


    Anybody else experiencing something similar? Maybe something 
(timeouts?) that I should tune on the guest OS for RHEL5?


    Thanks!

--

Eduardo Mayoral.




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42D5IQBRX7ABZFAXA3O4SHABRPLDCMZA/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQQMCBYRRNDX2DCULLRIDQTGALCXVDEZ/


[ovirt-users] Re: Internal Server Error 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'

2018-06-27 Thread Sahina Bose
Adding Denis.
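
(For what it's worth, the OSError at the bottom of the trace below - "No such
file or directory: vdo" - just means supervdsm cannot find a vdo executable on
the node. A rough check, assuming the usual EL7 package names:)

# is the VDO userspace tool (and its kernel module package) installed?
rpm -q vdo kmod-kvdo

# if not, installing them should at least make this particular traceback go away
yum install -y vdo kmod-kvdo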

On Wed, Jun 27, 2018 at 1:12 PM, Sandro Bonazzola 
wrote:

> Sahina can you please have a look?
>
> 2018-06-26 15:21 GMT+02:00 Hesham Ahmed :
>
>> With upgrade to oVirt 4.2.4 (both engine and nodes) the error is
>> replaced with the following similar error:
>> Jun 26 16:16:28 vhost03.somedomain.com vdsm[6465]: ERROR Internal server
>> error
>>Traceback (most
>> recent call last):
>>  File
>> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
>> _handle_request
>>res =
>> method(**params)
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in
>> _dynamicMethod
>>result =
>> fn(*methodArgs)
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line
>> 91, in vdoVolumeList
>>return
>> self._gluster.vdoVolumeList()
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in
>> wrapper
>>rv =
>> func(*args, **kwargs)
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 818, in
>> vdoVolumeList
>>status =
>> self.svdsmProxy.glusterVdoVolumeList()
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55,
>> in __call__
>>return callMethod()
>>  File
>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53,
>> in 
>>**kwargs)
>>  File "",
>> line 2, in glusterVdoVolumeList
>>  File
>> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
>> _callmethod
>>raise
>> convert_to_error(kind, result)
>>OSError: [Errno 2]
>> No such file or directory: vdo
>>
>> On Mon, Jun 25, 2018 at 6:09 AM Hesham Ahmed  wrote:
>> >
>> > I am receiving the following error in journal repeatedly every few
>> minutes on all 3 nodes of a hyperconverged oVirt 4.2.3 setup running oVirt
>> Nodes:
>> >
>> > Jun 25 06:03:26 vhost01.somedomain.com vdsm[45222]: ERROR Internal
>> server error
>> > Traceback (most
>> recent call last):
>> >   File
>> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
>> _handle_request
>> > res =
>> method(**params)
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in
>> _dynamicMethod
>> > result =
>> fn(*methodArgs)
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85,
>> in logicalVolumeList
>> > return
>> self._gluster.logicalVolumeList()
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in
>> wrapper
>> > rv =
>> func(*args, **kwargs)
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in
>> logicalVolumeList
>> > status =
>> self.svdsmProxy.glusterLogicalVolumeList()
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in
>> __call__
>> > return
>> callMethod()
>> >   File
>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in
>> 
>> >
>>  getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
>> > AttributeError:
>> 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'
>> >
>> > And in /var/log/vdsm/vdsm.log
>> >
>> > 2018-06-25 06:03:24,118+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer]
>> RPC call Host.getCapabilities succeeded in 0.79 

[ovirt-users] Re: Problem download disk

2018-06-27 Thread Shani Leviim
Hi,
Can you please provide your API request as well?
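
(A 503 on the range request often points at the image I/O services rather than
at the request itself, so it may also be worth confirming they are running -
service names here assume a 4.1/4.2-era setup:)

# on the engine machine
systemctl status ovirt-imageio-proxy

# on the host performing the transfer
systemctl status ovirt-imageio-daemon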


*Regards,*

*Shani Leviim*

On Tue, Jun 26, 2018 at 10:26 PM, Marcelo Leandro 
wrote:

> Hello,
>
> I am using the Java API to download a disk, but it did not complete; I get this
> error:
>
> The server response was 503 in the range request
>
>
> Do I need to change anything, such as the timeout or the download size ?
>
> Very Thanks.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/IM2ILYHDBKT5ZZQWIMQMQTATUQL6ASHP/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4Q5XACEZ7RZIBT2F2QMHUKMTAKE2BJ4/


[ovirt-users] Re: Internal Server Error 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'

2018-06-27 Thread Sandro Bonazzola
Sahina can you please have a look?

2018-06-26 15:21 GMT+02:00 Hesham Ahmed :

> With upgrade to oVirt 4.2.4 (both engine and nodes) the error is
> replaced with the following similar error:
> Jun 26 16:16:28 vhost03.somedomain.com vdsm[6465]: ERROR Internal server
> error
>Traceback (most
> recent call last):
>  File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
> _handle_request
>res =
> method(**params)
>  File
> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in
> _dynamicMethod
>result =
> fn(*methodArgs)
>  File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line
> 91, in vdoVolumeList
>return
> self._gluster.vdoVolumeList()
>  File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in
> wrapper
>rv =
> func(*args, **kwargs)
>  File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 818, in
> vdoVolumeList
>status =
> self.svdsmProxy.glusterVdoVolumeList()
>  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55,
> in __call__
>return callMethod()
>  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53,
> in 
>**kwargs)
>  File "",
> line 2, in glusterVdoVolumeList
>  File
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
>raise
> convert_to_error(kind, result)
>OSError: [Errno 2]
> No such file or directory: vdo
>
> On Mon, Jun 25, 2018 at 6:09 AM Hesham Ahmed  wrote:
> >
> > I am receiving the following error in journal repeatedly every few
> minutes on all 3 nodes of a hyperconverged oVirt 4.2.3 setup running oVirt
> Nodes:
> >
> > Jun 25 06:03:26 vhost01.somedomain.com vdsm[45222]: ERROR Internal
> server error
> > Traceback (most
> recent call last):
> >   File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
> _handle_request
> > res =
> method(**params)
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in
> _dynamicMethod
> > result =
> fn(*methodArgs)
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85,
> in logicalVolumeList
> > return
> self._gluster.logicalVolumeList()
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in
> wrapper
> > rv = func(*args,
> **kwargs)
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in
> logicalVolumeList
> > status =
> self.svdsmProxy.glusterLogicalVolumeList()
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in
> __call__
> > return
> callMethod()
> >   File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in
> 
> >
>  getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> > AttributeError:
> 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'
> >
> > And in /var/log/vdsm/vdsm.log
> >
> > 2018-06-25 06:03:24,118+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer]
> RPC call Host.getCapabilities succeeded in 0.79 seconds (__init__:573)
> > 2018-06-25 06:03:26,106+0300 ERROR (jsonrpc/0) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:611)
> > Traceback (most recent call last):
> >  

[ovirt-users] RHEL5 guests frequently hang when migrating host.

2018-06-27 Thread Eduardo Mayoral
Hi,

    I am experiencing that my RHEL5 guests frequently "hang" when
migrating hosts. The console is blank, CPU after migration is 100%, and as far
as oVirt is concerned, the VM is OK.

    oVirt is 4.2.3.8-1.el7, on CentOS 7. Hosts are CentOS 7 as well.
Cluster is in "Intel Westmere family" CPU type.

    Guest is RHEL5, fully patched, kernel 4.2.3.8-1.el7 with
ovirt-guest-agent installed from EPEL.

    I do not see anything out of place in the ovirt-engine and vdsm
logs, and the guest logs are simply not there; they stop right before the
migration, as if the machine had "frozen".

    Powering the VM off and starting it starts the VM correctly. This
does not happen 100% of the time. If I try to migrate the VM when it is
freshly started the migration is faster (maybe 5 seconds), and the guest
OS does not hang.

    Anybody else experiencing something similar? Maybe something
(timeouts?) that I should tune on the guest OS for RHEL5?
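
(Before tuning anything inside the guest, it might help to see where the vCPUs
are spinning the next time one hangs; a rough host-side check, with the VM name
as a placeholder:)

# is libvirt still reporting the domain as running?
virsh -r list --all

# per-vCPU counters; run it twice a few seconds apart to see whether they still move
virsh -r domstats --vcpu myrhel5vm

# find the qemu process and watch its threads burning CPU
pgrep -af qemu-kvm
top -H -p <qemu-pid>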

    Thanks!

--

Eduardo Mayoral.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42D5IQBRX7ABZFAXA3O4SHABRPLDCMZA/


[ovirt-users] Re: trunked ports

2018-06-27 Thread Edward Haas
On Mon, Jun 25, 2018 at 4:35 PM, Michael Watters 
wrote:

> You should be able to use bonded interfaces with an IP on each VLAN
> Interface for the ovirt hosts and the engine.  For example, here is the IP
> configuration for one of our VLANs.
>
> 10: bond3:  mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21b:21ff:fe5c:8039/64 scope link
>valid_lft forever preferred_lft forever
> 24: bond3.311@bond3:  mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether 00:1b:21:5c:80:39 brd ff:ff:ff:ff:ff:ff
> inet 192.168.111.201/24 brd 192.168.111.255 scope global bond3.311
>valid_lft forever preferred_lft forever
> inet6 fe80::21b:21ff:fe5c:8039/64 scope link
>valid_lft forever preferred_lft forever
>
> bond3.311 configuration is managed in the 
> /etc/sysconfig/network-scripts/ifcfg-bond3.311
> file.
> IMO setting up bonded NICs with VLAN tagging is one area where ovirt falls
> short.  You essentially have to configure your networks twice.  First using
> the /etc/sysconfig/network-interface files and then inside of the engine
> itself.
>

I am not familiar with this problem; a vlan network over a bond is
supported. I even recall several fixes related to it in 4.2.
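
(To Michael's point: on the OS side the vlan piece of a vlan-over-bond setup is
just a small ifcfg on top of the existing bond, roughly like the sketch below,
and in current 4.2 the same attachment can be created entirely from the
engine's Setup Networks dialog - device name and addressing here are purely
illustrative:)

# /etc/sysconfig/network-scripts/ifcfg-bond3.311  (illustrative)
DEVICE=bond3.311
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.111.201
PREFIX=24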


> VDSM may also need to be configured to use ifcfg persistence in the config
> file.
>
> cat files/vdsm/vdsm.conf
> [vars]
> ssl = true
> net_persistence = ifcfg
>
The ifcfg persistence mode is planned for deprecation in 4.3, and has not been
well tested/supported for some time now.
Do not use it unless you have a very (very) good reason to do so.

>
> [addresses]
> management_port = 54321
>
>
> Your switch ports also need to be configured to support 802.1q networking.
>
>
>
> On 06/23/2018 09:56 AM, william.doss...@gmail.com wrote:
>
> Hi,
>
>
>
> I setup oVirt a few years back…  now that the HCI is real, I am revisiting
> it.  I have deployed with Gluster, and am now moving on to networking.
>
>
>
> I come from a VMware shop and normally we trunk all the network ports
> exposing all VLANs to the hosts and place VMs in Portgoups that are tagged
> with VLANs.
>
>
>
> I did manage to do this years back but I am struggling to get this to work
> today.  I had pretty limited hardware back then and I thought I installed
> using vlan tagging and trunked ports, but I don’t see any  option to do
> this using the glusterfs and hosted engine setup.
>
>
>
> Each host has a dual port 10Gb NICs  I use one for storage that is
> connected to my storage network and one for ovirtmgmt.  (I need to add
> another of these for redundancy down the road, but no money for that at the
> moment)
>
>
>
> The hosts also have 4 x 1Gb ports. So in lieu of being able to configure
> vlan tagging to trunked ports on hosted engine deploy, I am considering
> cabling up a 1 Gb port on each in my management services VLAN and when it
> is all up and running create another logical network (or several of them as
> I think these equate to what is a vlan tagged port group in VMware) with
> the 10Gb NIC backing for VMs.
>
>
>
> Does that sound reasonable?  Or if anyone can point me to any docs that
> describe how to deploy to a specific VLAN with trunk ports, that would be
> nice as well as I won’t have to actually go to the office and run
> additional cables.
>
>
As mentioned by Michael, you can use vlan networks and attach several of
them on the same bond/nic in the setup networks window.
You also have the option to use a non-vlan network, and define the vlans
inside the VM. The Linux bridge that connects the vnics to the bond ignores
tagging information; it just forwards frames in a flat manner, leaving it
to the vnic in the VM to strip the tag.
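
(For the second option, inside the guest it boils down to something like this -
assuming the vNIC shows up as eth0 and the tag is 311; names and addresses are
only an example:)

# create a tagged sub-interface on the vNIC and give it an address
ip link add link eth0 name eth0.311 type vlan id 311
ip addr add 192.168.111.10/24 dev eth0.311
ip link set eth0.311 up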


>
>
> Appreciate any advice
>
>
>
> Bill
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TDFK2BNP4DMBB22SBNEXQLTOK5SOW42O/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/4NVTFK2I5LOWCS4XLSCP5BEERS7NXXNR/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: