Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Yaniv Kaul
On Dec 4, 2016 10:57 PM, "Eyal Edri"  wrote:

Tests are back to stable, but at the cost of not testing the 4.1 CL.


DC level.
But we are not explicitly using any of the 4.1 features for the time being.
(I'd like to believe we implicitly did; qcow2v3, for example.)


Let's hope we get CentOS 7.3 soon.


Indeed, but we also need a virt-builder image for it.
Y.


On Dec 4, 2016 22:41, "Yaniv Kaul"  wrote:

>
>
> On Dec 4, 2016 6:42 PM, "Arik Hadas"  wrote:
>
> Yaniv will try to lower the cluster level used in the system-tests to 4.0
> - this is supposed to solve the issue.
>
>
> Done.
> Y.
>
> If it won't help (we will know it in about an hour), we'll add a db-script
> that changes the rng device of the blank template only.
>
> On Sun, Dec 4, 2016 at 3:34 PM, Eyal Edri  wrote:
>
>> FYI,
>>
>> I opened a bug [1] to track this issue since I don't see any attempts to
>> resolve the issue on the thread, hopefully a bug will get more attention.
>> Opened on VDSM since we see the libvirt error there, feel free to move
>> product/team.
>>
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303
>>
>> On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:
>>
>>> Not sure if relevant, but Juan posted a fix for SDK4 last time it
>>> happened ( but different failure on log-collector ):
>>>
>>> https://gerrit.ovirt.org/#/c/67213/
>>>
>>> * Added `urandom` to the `RngSource` enumerated type.
>>>
>>> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>>>
 And its still failing from Friday,
 Since we don't have official Centos 7.3 repos yet ( hopefully we'll
 have it this week, but as of this moment its not published yet ) , we have
 to either revert the offending patch
 or send a quick fix.

 Right now all experimental flows for master are not working and nightly
 rpms are not refreshed with new RPMs.



 On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:

>
>
> On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:
>
> Hello Martin.
>
> By "outdated", do you mean the old libvirt? If so, is that the libvirt
> available in CentOS 7.2? There is no 7.3 yet.
>
>
> Right, this is the issue.
> Y.
>
>
> Anton.
>
> On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
> wrote:
>
>> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>>
>>> Hello All.
>>>
>>> Engine log can be viewed here:
>>>
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>>> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
>>> rt-engine/engine.log
>>>
>>> I see the following exception there:
>>>
>>> 2016-12-02 04:29:24,030-05 DEBUG
>>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>>> (ResponseWorker) [83b6b5d] Message received: {"jsonrpc": "2.0", "id":
>>> "ec254aad-441b-47e7-a644-aebddcc1d62c", "result": true}
>>> 2016-12-02 04:29:24,030-05 ERROR
>>> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker)
>>> [83b6b5d] Not able to update response for
>>> "ec254aad-441b-47e7-a644-aebddcc1d62c"
>>> 2016-12-02 04:29:24,041-05 DEBUG
>>> [org.ovirt.engine.core.utils.timer.FixedDelayJobListener]
>>> (DefaultQuartzScheduler3) [47a31d72] Rescheduling
>>> DEFAULT.org.ovirt.engine.core.bll.gluster.GlusterSyncJob.ref
>>> reshLightWeightData#-9223372036854775775
>>> as there is no unfired trigger.
>>> 2016-12-02 04:29:24,024-05 DEBUG
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
>>> task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Exception:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
>>> VDSGenericException: VDSNetworkException: Timeout during xml-rpc call
>>> at org.ovirt.engine.core.vdsbroke
>>> r.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
>>> [vdsbroker.jar:]
>>>
>>> ...
>>>
>>> 2016-12-02 04:29:24,042-05 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
>>> task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Timeout waiting for
>>> VDSM response: Internal timeout occured
>>> 2016-12-02 04:29:24,044-05 DEBUG
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVD
>>> SCommand]
>>> (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] START,
>>> GetCapabilitiesVDSCommand(HostName = lago-basic-suite-master-host0,
>>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
>>> hostId='5eb7019e-28a3-4f93-9188-685b6c64a2f5',
>>> vds='Host[lago-basic-suite-master-host0,5eb7019e-28a3-4f93-9
>>> 188-685b6c64a2f5]'}),
>>> log id: 58f448b8
>>> 2016-12-02 04:29:24,044-05 DEBUG
>>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (default
>>> task-12) [d932871a-af4f-4fc9-9

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Eyal Edri
Tests are back to stable, but at the cost of not testing the 4.1 CL.

Let's hope we get CentOS 7.3 soon.

On Dec 4, 2016 22:41, "Yaniv Kaul"  wrote:

>
>
> On Dec 4, 2016 6:42 PM, "Arik Hadas"  wrote:
>
> Yaniv will try to lower the cluster level used in the system-tests to 4.0
> - this is supposed to solve the issue.
>
>
> Done.
> Y.
>
> If it won't help (we will know it in about an hour), we'll add a db-script
> that changes the rng device of the blank template only.
>
> On Sun, Dec 4, 2016 at 3:34 PM, Eyal Edri  wrote:
>
>> FYI,
>>
>> I opened a bug [1] to track this issue since I don't see any attempts to
>> resolve the issue on the thread, hopefully a bug will get more attention.
>> Opened on VDSM since we see the libvirt error there, feel free to move
>> product/team.
>>
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303
>>
>> On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:
>>
>>> Not sure if relevant, but Juan posted a fix for SDK4 last time it
>>> happened ( but different failure on log-collector ):
>>>
>>> https://gerrit.ovirt.org/#/c/67213/
>>>
>>> * Added `urandom` to the `RngSource` enumerated type.
>>>
>>> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>>>
 And its still failing from Friday,
 Since we don't have official Centos 7.3 repos yet ( hopefully we'll
 have it this week, but as of this moment its not published yet ) , we have
 to either revert the offending patch
 or send a quick fix.

 Right now all experimental flows for master are not working and nightly
 rpms are not refreshed with new RPMs.



 On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:

>
>
> On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:
>
> Hello Martin.
>
> Do by outdated you mean the old libvirt? If so that is that livirt
> available in CentOS 7.2? There is no 7.3 yet.
>
>
> Right, this is the issue.
> Y.
>
>
> Anton.
>
> On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
> wrote:
>
>> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>>
>>> Hello All.
>>>
>>> Engine log can be viewed here:
>>>
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>>> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
>>> rt-engine/engine.log
>>>
>>> Please note that this runs on localhost with local bridge. So it is
>>> not
>>> likely t

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Yaniv Kaul
On Dec 4, 2016 6:42 PM, "Arik Hadas"  wrote:

Yaniv will try to lower the cluster level used in the system-tests to 4.0 -
this is supposed to solve the issue.


Done.
Y.

If it won't help (we will know it in about an hour), we'll add a db-script
that changes the rng device of the blank template only.

On Sun, Dec 4, 2016 at 3:34 PM, Eyal Edri  wrote:

> FYI,
>
> I opened a bug [1] to track this issue since I don't see any attempts to
> resolve the issue on the thread, hopefully a bug will get more attention.
> Opened on VDSM since we see the libvirt error there, feel free to move
> product/team.
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303
>
> On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:
>
>> Not sure if relevant, but Juan posted a fix for SDK4 last time it
>> happened ( but different failure on log-collector ):
>>
>> https://gerrit.ovirt.org/#/c/67213/
>>
>> * Added `urandom` to the `RngSource` enumerated type.
>>
>> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>>
>>> And its still failing from Friday,
>>> Since we don't have official Centos 7.3 repos yet ( hopefully we'll have
>>> it this week, but as of this moment its not published yet ) , we have to
>>> either revert the offending patch
>>> or send a quick fix.
>>>
>>> Right now all experimental flows for master are not working and nightly
>>> rpms are not refreshed with new RPMs.
>>>
>>>
>>>
>>> On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:
>>>


 On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:

 Hello Martin.

 Do by outdated you mean the old libvirt? If so that is that livirt
 available in CentOS 7.2? There is no 7.3 yet.


 Right, this is the issue.
 Y.


 Anton.

 On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
 wrote:

> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>
>> Hello All.
>>
>> Engine log can be viewed here:
>>
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
>> rt-engine/engine.log
>>
>>
>>
>> Please note that this runs on localhost with local bridge. So it is
>> not
>> likely to be network itself.
>>
>
> The main issue I see is that the VM run command has actually failed
> due to libvirt no accepting /dev/urandom as RNG source[1]. This was
> done as engine patch and according to git log, posted around Mon Nov
> 28. Also adding Jakub - this shoul

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Arik Hadas
Yaniv will try to lower the cluster level used in the system tests to 4.0;
this is supposed to solve the issue.
If that doesn't help (we'll know in about an hour), we'll add a DB script
that changes the RNG device of the Blank template only.
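A hypothetical sketch of what such a DB script could look like. This is not the actual script: the `vm_device` table, the all-zeros GUID convention for the Blank template, and the predicate are all assumptions for illustration only.

```sql
-- HYPOTHETICAL sketch of the proposed workaround, not the real script.
-- Remove the newly added RNG device from the Blank template only, so new
-- VMs no longer inherit a urandom source that the host's libvirt rejects.
-- Assumes the engine's vm_device table and the well-known all-zeros GUID
-- of the Blank template.
DELETE FROM vm_device
 WHERE vm_id = '00000000-0000-0000-0000-000000000000'
   AND type = 'rng';
```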

On Sun, Dec 4, 2016 at 3:34 PM, Eyal Edri  wrote:

> FYI,
>
> I opened a bug [1] to track this issue since I don't see any attempts to
> resolve the issue on the thread, hopefully a bug will get more attention.
> Opened on VDSM since we see the libvirt error there, feel free to move
> product/team.
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303
>
> On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:
>
>> Not sure if relevant, but Juan posted a fix for SDK4 last time it
>> happened ( but different failure on log-collector ):
>>
>> https://gerrit.ovirt.org/#/c/67213/
>>
>> * Added `urandom` to the `RngSource` enumerated type.
>>
>> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>>
>>> And its still failing from Friday,
>>> Since we don't have official Centos 7.3 repos yet ( hopefully we'll have
>>> it this week, but as of this moment its not published yet ) , we have to
>>> either revert the offending patch
>>> or send a quick fix.
>>>
>>> Right now all experimental flows for master are not working and nightly
>>> rpms are not refreshed with new RPMs.
>>>
>>>
>>>
>>> On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:
>>>


 On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:

 Hello Martin.

 Do by outdated you mean the old libvirt? If so that is that livirt
 available in CentOS 7.2? There is no 7.3 yet.


 Right, this is the issue.
 Y.


 Anton.

 On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
 wrote:

> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>
>> Hello All.
>>
>> Engine log can be viewed here:
>>
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
>> rt-engine/engine.log
>>
>>
>>
>> Please note that this runs on localhost with local bridge. So it is
>> not
>> likely to be network itself.
>>
>
> The main issue I see is that the VM run command has actually failed
> due to libvirt no accepting /dev/urandom as RNG source[1]. This was
> done as engine patch and according to git log, posted around Mon Nov
> 28. Also adding Jakub - this should either not happen from engine's
> point of view or t

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Eyal Edri
Some more info on the changes that were merged on Friday:

From the looks of it, [1] is the culprit. I'd like to avoid another DB
upgrade issue with a revert; can anyone from dev please handle this?

[1] https://gerrit.ovirt.org/#/c/67470/

core: New VM has RND device by default

Script adds urandom rng device to Blank template and all predefined
instance types. This causes that new VMs will inherit such RNG device.
Custom instance types are not changed. The assumption is that if they
were created without a RNG device, it was an intentional decision.

Change-Id: I93a51b67c0e8bff06152d9fe7a4315efd509774d
Bug-Url: https://bugzilla.redhat.com/1337101
Signed-off-by: Jakub Niedermertl 
* 4616f4d - core: New VM has RND device by default (3 days ago) Jakub Niedermertl
* c95365a - core: Fix of NPE when creating new instance type (3 days ago) Jakub Niedermertl
* 2ff5c4d - restapi: Reflecting template RNG settings to new VM (3 days ago) Jakub Niedermertl
* f8bdfa0 - frontend: use authz name instead of profile name for sysprep (3 days ago) Ondra Machacek


vdsm changelog:

* a149cb7 - Adding simple client for sending gauge metrics to statsd port using udp (3 days ago) Yaniv Bronhaim
* d5c00b9 - API: move vm parameters fixup in a method (3 days ago) Francesco Romani
* 8180dfb - hostdev: add test for massive number of devices (3 days ago) Martin Polednik
* f904734 - Remove the usage of clientIF from GlusterApi (3 days ago) Ramesh Nachimuthu
* 36c0ce6 - rename method wrapApiMethod to _wrap_api_method (3 days ago) Ramesh Nachimuthu
* 3383158 - vmfakecon: optimize HostDeviceStub (3 days ago) Martin Polednik
* 316893c - hostdev: use *c*ElementTree (3 days ago) Martin Polednik <mpoled...@redhat.com>
* c56619e - client: document ConnectionError exception (3 days ago) Irit Goihman
* f5d605e - py3: take Queue from six.moves (3 days ago) Dan Kenigsberg <dan...@redhat.com>



On Sun, Dec 4, 2016 at 3:34 PM, Eyal Edri  wrote:

> FYI,
>
> I opened a bug [1] to track this issue since I don't see any attempts to
> resolve the issue on the thread, hopefully a bug will get more attention.
> Opened on VDSM since we see the libvirt error there, feel free to move
> product/team.
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303
>
> On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:
>
>> Not sure if relevant, but Juan posted a fix for SDK4 last time it
>> happened ( but different failure on log-collector ):
>>
>> https://gerrit.ovirt.org/#/c/67213/
>>
>> * Added `urandom` to the `RngSource` enumerated type.
>>
>> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>>
>>> And its still failing from Friday,
>>> Since we don't have official Centos 7.3 repos yet ( hopefully we'll have
>>> it this week, but as of this moment its not published yet ) , we have to
>>> either revert the offending patch
>>> or send a quick fix.
>>>
>>> Right now all experimental flows for master are not working and nightly
>>> rpms are not refreshed with new RPMs.
>>>
>>>
>>>
>>> On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:
>>>


 On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:

 Hello Martin.

 Do by outdated you mean the old libvirt? If so that is that livirt
 available in CentOS 7.2? There is no 7.3 yet.


 Right, this is the issue.
 Y.


 Anton.

 On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
 wrote:

> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>
>> Hello All.
>>
>> Engine log can be viewed here:
>>
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
>> rt-engine/engine.log
>>

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Eyal Edri
FYI,

I opened a bug [1] to track this issue, since I don't see any attempts to
resolve it on this thread; hopefully a bug will get more attention. I opened
it on VDSM, since that's where we see the libvirt error; feel free to move it
to another product/team.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1401303

On Sun, Dec 4, 2016 at 1:23 PM, Eyal Edri  wrote:

> Not sure if relevant, but Juan posted a fix for SDK4 last time it happened
> ( but different failure on log-collector ):
>
> https://gerrit.ovirt.org/#/c/67213/
>
> * Added `urandom` to the `RngSource` enumerated type.
>
> On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:
>
>> And its still failing from Friday,
>> Since we don't have official Centos 7.3 repos yet ( hopefully we'll have
>> it this week, but as of this moment its not published yet ) , we have to
>> either revert the offending patch
>> or send a quick fix.
>>
>> Right now all experimental flows for master are not working and nightly
>> rpms are not refreshed with new RPMs.
>>
>>
>>
>> On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:
>>>
>>> Hello Martin.
>>>
>>> Do by outdated you mean the old libvirt? If so that is that livirt
>>> available in CentOS 7.2? There is no 7.3 yet.
>>>
>>>
>>> Right, this is the issue.
>>> Y.
>>>
>>>
>>> Anton.
>>>
>>> On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
>>> wrote:
>>>
 On 02/12/16 10:55 +0100, Anton Marchukov wrote:

> Hello All.
>
> Engine log can be viewed here:
>
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
> el7/exported-artifacts/test_logs/basic-suite-master/post-004
> _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
> rt-engine/engine.log
>
>
> Please note that this runs on localhost with local bridge. So it is not
> likely to be network itself.
>

 The main issue I see is that the VM run command has actually failed
 due to libvirt no accepting /dev/urandom as RNG source[1]. This was
 done as engine patch and according to git log, posted around Mon Nov
 28. Also adding Jakub - this should either not happen from engine's
 point of view or the lago host is outdated.

 [1]
 http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
 ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
 el7/exported-artifacts/test_logs/basic-suite-master/post-004
 _basic_sanity.py/lago-basic-suite-master-host0/_var_log_vdsm/vdsm.log
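For context on the failure Martin describes: the engine renders the VM's RNG device into the libvirt domain XML, and per this thread the libvirt shipped with CentOS 7.2 rejects a /dev/urandom backend there. A hypothetical sketch of the element involved (illustrative, not taken from the failing log):

```xml
<!-- Sketch of the RNG device element in the libvirt domain XML.
     Per this thread, the libvirt in CentOS 7.2 rejects /dev/urandom
     as a backend; the newer libvirt in 7.3 is expected to accept it. -->
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>
```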


 Anton.
>
> On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov 
> wrote:
>
> FYI. Exper

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-04 Thread Eyal Edri
Not sure if it's relevant, but Juan posted a fix for SDK4 the last time this
happened (a different failure, in log-collector):

https://gerrit.ovirt.org/#/c/67213/

* Added `urandom` to the `RngSource` enumerated type.
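In SDK terms, that fix amounts to one new member of the enumeration. A minimal, self-contained sketch of the shape of the change (illustrative Python; the real `RngSource` type lives in the oVirt SDK4, and the member set here is an assumption):

```python
from enum import Enum


class RngSource(Enum):
    """Illustrative mirror of the SDK4 RngSource enumerated type."""
    HWRNG = "hwrng"
    RANDOM = "random"
    URANDOM = "urandom"  # the member added by the fix in gerrit change 67213


# A client can then express urandom as a VM's RNG backend:
source = RngSource.URANDOM
print(source.value)  # prints "urandom"
```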

On Sun, Dec 4, 2016 at 9:17 AM, Eyal Edri  wrote:

> And its still failing from Friday,
> Since we don't have official Centos 7.3 repos yet ( hopefully we'll have
> it this week, but as of this moment its not published yet ) , we have to
> either revert the offending patch
> or send a quick fix.
>
> Right now all experimental flows for master are not working and nightly
> rpms are not refreshed with new RPMs.
>
>
>
> On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:
>>
>> Hello Martin.
>>
>> Do by outdated you mean the old libvirt? If so that is that livirt
>> available in CentOS 7.2? There is no 7.3 yet.
>>
>>
>> Right, this is the issue.
>> Y.
>>
>>
>> Anton.
>>
>> On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
>> wrote:
>>
>>> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>>>
 Hello All.

 Engine log can be viewed here:

 http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
 ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
 el7/exported-artifacts/test_logs/basic-suite-master/post-004
 _basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovi
 rt-engine/engine.log

 I see the following exception there:

 [engine log snipped]


 Please note that this runs on localhost with a local bridge, so it is not
 likely to be the network itself.

>>>
>>> The main issue I see is that the VM run command has actually failed
>>> due to libvirt not accepting /dev/urandom as an RNG source [1]. This
>>> change was made as an engine patch and, according to the git log, was
>>> posted around Mon Nov 28. Also adding Jakub - this should either not
>>> happen from the engine's point of view, or the lago host is outdated.
>>>
>>> [1]
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
>>> ster/3838/artifact/exported-artifacts/basic_suite_master.sh-
>>> el7/exported-artifacts/test_logs/basic-suite-master/post-004
>>> _basic_sanity.py/lago-basic-suite-master-host0/_var_log_vdsm/vdsm.log
>>>
>>>
>>> Anton.
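For readers following along: the device the engine asks for ends up in the libvirt domain XML as an `<rng>` element. The sketch below is illustrative only — the helper and its defaults are not engine code, and the set of sources the old libvirt accepts is exactly the assumption under discussion (the CentOS 7.2 libvirt rejects /dev/urandom as a backend):

```python
import xml.etree.ElementTree as ET

# Backends the older EL7 libvirt is assumed to accept; /dev/urandom
# support only exists in newer libvirt releases.
LEGACY_OK = {'/dev/random', '/dev/hwrng'}

def rng_device_xml(source='/dev/urandom'):
    """Build the <rng> device element as it would appear in the domain XML."""
    rng = ET.Element('rng', model='virtio')
    backend = ET.SubElement(rng, 'backend', model='random')
    backend.text = source
    return ET.tostring(rng, encoding='unicode')

xml = rng_device_xml('/dev/urandom')
# An old libvirt fails createXML() on a domain containing this element with:
# "XML error: file '/dev/urandom' is not a supported random source"
print(xml)
```

This is why the failure only shows up on hosts still running the CentOS 7.2 libvirt: the XML itself is well-formed, but the backend path is rejected at define/create time.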

 On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov  wrote:

> [original failure report snipped]

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-03 Thread Eyal Edri
And it's still failing from Friday.
Since we don't have official CentOS 7.3 repos yet (hopefully we'll have
them this week, but as of this moment they are not published), we have to
either revert the offending patch or send a quick fix.

Right now all experimental flows for master are not working, and the
nightly repos are not refreshed with new RPMs.



On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul  wrote:

>
>
> On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:
>
> Hello Martin.
>
> By outdated, do you mean the old libvirt? If so, is that the libvirt
> available in CentOS 7.2? There is no 7.3 yet.
>
>
> Right, this is the issue.
> Y.
>
>
> Anton.
>
> On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
> wrote:
>
>> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>>
>> [quoted engine log and analysis snipped]
>>
>>> On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov 
>>> wrote:
>>>
>>> FYI. Experimental flow for master currently fails to run a VM. The tests
 times out while waiting for 180 seconds:

 http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
 master/3838/testReport/(root)/004_basic_sanity/vm_run/

 This is reproducible over 23 runs that happened tonight, which sounds
 like a regression to me:

 http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/

 I will update here with additional information once I find it.

 Last successful run was with this patch:

 https://gerrit.ovirt.org/#/c/66416/ (vdsm: API: move vm parameters
 fixup
 in a method)

 Known to start failing around this patch:

 https://gerrit.ovirt.org/#/c/67647/ (vdsmapi: fix a typo in string
 formatting)

 Please note that we do not have gating implemented yet, so anything
 merged between those patches might have caused this (not necessarily in
 the vdsm project).
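Without gating, every change merged between the last good run and the first bad run is a suspect. When that list is long, bisecting over merge order finds the culprit in O(log n) test runs instead of n. A sketch with hypothetical change IDs — only the two gerrit numbers quoted above are real, and the monotonic-badness assumption must hold for bisection to work:

```python
def first_bad(changes, is_bad):
    """Binary-search an ordered list of merged changes for the first bad
    one, assuming badness is monotonic: every change from the culprit
    onward produces a failing run."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changes[mid]):
            hi = mid          # culprit is at mid or earlier
        else:
            lo = mid + 1      # culprit is strictly after mid
    return changes[lo]

# Merge order between the known-good and known-bad runs; 'A'..'C' are
# stand-ins for whatever was merged in between (not real changes).
merged = ['66416', 'A', 'B', 'C', '67647']
is_bad = lambda c: merged.index(c) >= merged.index('B')  # pretend 'B' broke it
print(first_bad(merged, is_bad))  # -> B
```

Each `is_bad` probe here stands in for one full experimental-flow run against that change.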

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Yaniv Kaul
On Dec 2, 2016 2:11 PM, "Anton Marchukov"  wrote:

Hello Martin.

By outdated, do you mean the old libvirt? If so, is that the libvirt
available in CentOS 7.2? There is no 7.3 yet.


Right, this is the issue.
Y.


Anton.

On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
wrote:

> [quoted engine log, analysis, and earlier replies snipped]

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Anton Marchukov
Hello Martin.

By outdated, do you mean the old libvirt? If so, is that the libvirt
available in CentOS 7.2? There is no 7.3 yet.

Anton.

On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik 
wrote:

> [quoted engine log, analysis, and earlier replies snipped]

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/m

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Eyal Edri
The fix for that was a new python SDK build for V4, if it's the same issue
where log-collector fails.
Adding Juan.
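For reference, the SDK side of such a fix boils down to the generated enumerated type accepting the new value. This is only a sketch — the real `RngSource` type is generated from the API model, so the class below is an illustrative stand-in, not SDK code:

```python
from enum import Enum

class RngSource(Enum):
    """Illustrative stand-in for the SDK's generated RngSource type."""
    RANDOM = 'random'
    HWRNG = 'hwrng'
    URANDOM = 'urandom'  # the member the fix adds

# An SDK build without the URANDOM member would raise ValueError when it
# tries to map the value coming back from the API:
print(RngSource('urandom').name)
```

So a client built before the fix breaks as soon as the engine starts reporting the new value, which matches the "rebuild the SDK" remedy above.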

On Fri, Dec 2, 2016 at 2:07 PM, Martin Polednik 
wrote:

> [quoted engine log, analysis, and earlier replies snipped]


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virt

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Martin Polednik

On 02/12/16 10:55 +0100, Anton Marchukov wrote:

Hello All.

Engine log can be viewed here:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/engine.log

[engine log snipped]


The main issue I see is that the VM run command has actually failed
due to libvirt not accepting /dev/urandom as an RNG source [1]. This
change was made as an engine patch and, according to the git log, was
posted around Mon Nov 28. Also adding Jakub - this should either not
happen from the engine's point of view, or the lago host is outdated.

[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host0/_var_log_vdsm/vdsm.log


[remainder of quoted thread snipped]


Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Piotr Kliczewski
Anton,

I see the following event in the log:

2016-12-02 04:31:12,527-05 DEBUG
[org.ovirt.engine.core.vdsbroker.monitoring.EventVmStatsRefresher]
(ForkJoinPool-1-worker-4) [83b6b5d] processing event for host
lago-basic-suite-master-host0 data:
39710f89-9fa2-423e-9fa8-1448ca51f166:
status = Down
timeOffset = 0
exitReason = 1
exitMessage = XML error: file '/dev/urandom' is not a supported random source
exitCode = 1

and here is the vdsm log:

2016-12-02 04:31:10,618 ERROR (vm/39710f89) [virt.vm]
(vmId='39710f89-9fa2-423e-9fa8-1448ca51f166') The vm start process
failed (vm:613)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 549, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1980, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 128, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 936, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: XML error: file '/dev/urandom' is not a supported random source

@Martin, is this a known issue?

Thanks,
Piotr
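A guard on the engine/VDSM side could avoid this hard createXML() failure by checking the requested source against what the host actually supports before building the domain XML. Purely a sketch — the function name, the fallback policy, and the capability-set format here are hypothetical, not actual VDSM code:

```python
def choose_rng_source(requested, host_supported, fallback='/dev/random'):
    """Return the requested RNG backend if the host's libvirt supports it,
    otherwise fall back to a source every EL7 libvirt accepts."""
    return requested if requested in host_supported else fallback

# Hypothetical capability sets for the two host generations in this thread:
el72_host = {'/dev/random', '/dev/hwrng'}                  # CentOS 7.2 libvirt
el73_host = {'/dev/random', '/dev/hwrng', '/dev/urandom'}  # newer libvirt

print(choose_rng_source('/dev/urandom', el72_host))  # -> /dev/random
print(choose_rng_source('/dev/urandom', el73_host))  # -> /dev/urandom
```

With a guard like this the VM would start with a degraded-but-working RNG device on old hosts instead of dying with the "not a supported random source" error above.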

On Fri, Dec 2, 2016 at 10:55 AM, Anton Marchukov  wrote:
> Hello All.
>
> [engine log snipped]
>
> 2016-12-02 04:29:24,024-05 DEBUG
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-12)
> [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> VDSGenericException: VDSNetworkException: Timeout during xml-rpc call
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
> [vdsbroker.jar:]
>

This issue may occur during setupNetworks due to the nature of the
operation. I need to update the message because it is not correct.

> [remainder of quoted thread snipped]

Re: [ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Anton Marchukov
Hello All.

Engine log can be viewed here:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/engine.log

I see the following exception there:

2016-12-02 04:29:24,030-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
(ResponseWorker) [83b6b5d] Message received: {"jsonrpc": "2.0", "id":
"ec254aad-441b-47e7-a644-aebddcc1d62c", "result": true}
2016-12-02 04:29:24,030-05 ERROR
[org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker)
[83b6b5d] Not able to update response for
"ec254aad-441b-47e7-a644-aebddcc1d62c"
2016-12-02 04:29:24,041-05 DEBUG
[org.ovirt.engine.core.utils.timer.FixedDelayJobListener]
(DefaultQuartzScheduler3) [47a31d72] Rescheduling
DEFAULT.org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData#-9223372036854775775
as there is no unfired trigger.
2016-12-02 04:29:24,024-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Timeout during xml-rpc call
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
[vdsbroker.jar:]



2016-12-02 04:29:24,042-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Timeout waiting for
VDSM response: Internal timeout occured
2016-12-02 04:29:24,044-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] START,
GetCapabilitiesVDSCommand(HostName = lago-basic-suite-master-host0,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='5eb7019e-28a3-4f93-9188-685b6c64a2f5',
vds='Host[lago-basic-suite-master-host0,5eb7019e-28a3-4f93-9188-685b6c64a2f5]'}),
log id: 58f448b8
2016-12-02 04:29:24,044-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (default
task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:105


Please note that this runs on localhost with a local bridge, so it is not
likely to be the network itself.
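One plausible reading of the two errors above: the engine's poll times out first ("Timeout during xml-rpc call") and the pending call is dropped, so when the reply for that id does arrive there is nothing left to attach it to, hence "Not able to update response". A minimal sketch of that id-correlation logic, assuming a pending-request map keyed by JSON-RPC id (this is illustrative only, not the engine's actual JsonRpcClient code; all names here are made up):

```python
import json

class RpcCorrelator:
    """Toy model of request/response correlation in a JSON-RPC client."""

    def __init__(self):
        self._pending = {}  # request id -> response placeholder

    def send(self, request_id, method):
        # Register the call before the request frame goes out on the wire.
        self._pending[request_id] = None
        return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": method})

    def on_timeout(self, request_id):
        # The caller gives up (VDSNetworkException) and forgets the call.
        self._pending.pop(request_id, None)

    def on_message(self, raw):
        # A reply arriving after the timeout finds no pending entry;
        # this is the condition that would produce the engine.log error.
        msg = json.loads(raw)
        if msg["id"] not in self._pending:
            return 'Not able to update response for "%s"' % msg["id"]
        self._pending[msg["id"]] = msg["result"]
        return None
```

Under that assumption, the "Message received" line followed immediately by "Not able to update response" is consistent with a reply that simply arrived after the 180-second window, rather than a malformed response.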

Anton.

On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov 
wrote:

> FYI. Experimental flow for master currently fails to run a VM. The test
> times out after waiting for 180 seconds:
>
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/3838/testReport/(root)/004_basic_sanity/vm_run/
>
> This is reproducible: over 23 runs of it happened tonight, which sounds
> like a regression to me:
>
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/
>
> I will update here with additional information once I find it.
>
> Last successful run was with this patch:
>
> https://gerrit.ovirt.org/#/c/66416/ (vdsm: API: move vm parameters fixup
> in a method)
>
> Known to start failing around this patch:
>
> https://gerrit.ovirt.org/#/c/67647/ (vdsmapi: fix a typo in string
> formatting)
>
> Please note that we do not have gating implemented yet, so everything
> that was merged in between those patches might have caused this (not
> necessarily in the vdsm project).
>
> Anton.
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
>
>


-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Experimental Flow for Master Fails to Run a VM

2016-12-02 Thread Anton Marchukov
FYI. Experimental flow for master currently fails to run a VM. The test
times out after waiting for 180 seconds:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/testReport/(root)/004_basic_sanity/vm_run/

This is reproducible: over 23 runs of it happened tonight, which sounds like
a regression to me:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/

I will update here with additional information once I find it.

Last successful run was with this patch:

https://gerrit.ovirt.org/#/c/66416/ (vdsm: API: move vm parameters fixup in
a method)

Known to start failing around this patch:

https://gerrit.ovirt.org/#/c/67647/ (vdsmapi: fix a typo in string
formatting)

Please note that we do not have gating implemented yet, so everything that
was merged in between those patches might have caused this (not necessarily
in the vdsm project).

Anton.
-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat