[JIRA] (OVIRT-3062) ci job stuck

2020-11-23 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-3062:
-

 Summary: ci job stuck
 Key: OVIRT-3062
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3062
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: Dominik Holler
Assignee: infra


The ci job
https://jenkins.ovirt.org/job/ovirt-provider-ovn_standard-on-merge/442/
seems to be stuck.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100151)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3V3IZWFH7TP7OZVI7J4NAT5BS46AK3M6/


[JIRA] (OVIRT-3029) git is slow today

2020-10-01 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-3029:
-

 Summary: git is slow today
 Key: OVIRT-3029
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3029
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Dominik Holler
Assignee: infra


Accessing gerrit.ovirt.org via git (ssh and http) is slow today.
The slowness is if I work with git local, but also if CI wants to use git.
This results into timeouts on CI, e.g. in
https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/8246/console#L354
:
06:53:16 > git fetch --tags --progress https://gerrit.ovirt.org/ovirt-engine
+refs/changes/65/111065/30:myhead # timeout=20

07:13:16 ERROR: Timeout after 20 minutes

07:13:55 ERROR: Error fetching remote repo 'origin'

07:13:55 hudson.plugins.git.GitException: Failed to fetch from
https://gerrit.ovirt.org/ovirt-engine

07:13:55 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:915)

07:13:55 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1141)

07:13:55 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1177)



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100147)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BLI2SHV5CX2EZBCCBGO6VV4SS7GILNJT/


[JIRA] (OVIRT-3025) Grant Eitan maintainer permission on OST

2020-09-28 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-3025:
-

 Summary: Grant Eitan maintainer permission on OST
 Key: OVIRT-3025
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3025
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: oVirt Infra
Reporter: Dominik Holler
Assignee: infra


Please give the code review +2 and merging permissions to 
[~accountid:557058:60c84466-2fd8-4225-a3b1-dca60032ce48]

Related thread on ovirt-devel is
https://lists.ovirt.org/archives/list/de...@ovirt.org/thread/KALQTOFWNYIU3EVLU2DOPAY4EROAFAIF/



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100147)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/5AAUPL75E3XLLJSHYCIKSLRNAOPJY74F/


[JIRA] (OVIRT-2969) new repo request

2020-07-08 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-2969:
-

 Summary: new repo request
 Key: OVIRT-2969
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2969
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: Gerrit/git
Reporter: Dominik Holler
Assignee: infra


Hello,
can you please create a new empty repo
ovirt-openvswitch
on gerrit.ovirt.org 
and mirror read-only to github.com/ovirt? 
Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100133)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/TFLFYVQ27AUPIOTUSQFXT263YX4QFTVA/


[JIRA] (OVIRT-2917) Vagrant VM container

2020-04-22 Thread Dominik Holler (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40364#comment-40364
 ] 

Dominik Holler commented on OVIRT-2917:
---

[~accountid:557058:5fc78873-359e-47c9-aa0b-4845b0da8143]
Would you add the quay.io/ovirt/vdsm-test-func-network-centos-8 to the withe 
list  until we have another way to run a privileged container?


> Vagrant VM container
> 
>
> Key: OVIRT-2917
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2917
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Ales Musil
>Assignee: infra
>
> Hi,
> is there any documentation on how to use the new container backend with the
> CI container that spawns VM for privileged operations?
> Thank you.
> Regards,
> Ales
> -- 
> Ales Musil
> Software Engineer - RHV Network
> Red Hat EMEA <https://www.redhat.com>
> amu...@redhat.comIM: amusil
> <https://red.ht/sig>



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100125)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/G3YIAPYI4M5ZOYFE3ISKRUBDW7TKXIC5/


Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 346 - Still Failing!

2020-02-10 Thread Dominik Holler
On Mon, Feb 10, 2020 at 10:32 AM Dominik Holler  wrote:

> Hello,
> is this issue reproducible local for someone, or does this happen only on
> jenkins?
>
>
For me it seems to happen only on jenkins.
I don't know enough about the environment on jenkins.
Might a "yum update" fix the problem?
I guess that there are incompatible versions of libvirt and firewalld
installed.


> On Mon, Feb 10, 2020 at 9:46 AM Galit Rosenthal 
> wrote:
>
>> Checking this
>> Once I have more info I will update.
>>
>> On Mon, Feb 10, 2020 at 9:16 AM Parth Dhanjal  wrote:
>>
>>> Hey!
>>>
>>> hc_basic_suite_4.3 is failing with the same error as well
>>>
>>> Project:
>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/
>>> Build:
>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/328/consoleFul
>>> <https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/328/consoleFull>
>>> l
>>>
>>> *07:36:24* @ Start Prefix: *07:36:24*   # Start nets: *07:36:24* * 
>>> Create network lago-hc-basic-suite-4-3-net-management: *07:36:30* * 
>>> Create network lago-hc-basic-suite-4-3-net-management: ERROR (in 
>>> 0:00:05)*07:36:30*   # Start nets: ERROR (in 0:00:05)*07:36:30* @ Start 
>>> Prefix: ERROR (in 0:00:05)*07:36:30* Error occured, aborting*07:36:30* 
>>> Traceback (most recent call last):*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in main*07:36:30* 
>>> cli_plugins[args.verb].do_run(args)*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
>>> do_run*07:36:30* self._do_run(**vars(args))*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in 
>>> wrapper*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in 
>>> wrapper*07:36:30* return func(*args, prefix=prefix, **kwargs)*07:36:30* 
>>>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in 
>>> do_start*07:36:30* prefix.start(vm_names=vm_names)*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in 
>>> wrapped*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in 
>>> start*07:36:30* self.virt_env.start(vm_names=vm_names)*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in 
>>> start*07:36:30* net.start()*07:36:30*   File 
>>> "/usr/lib/python2.7/site-packages/lago/providers/libvirt/network.py", line 
>>> 115, in start*07:36:30* net = 
>>> self.libvirt_con.networkCreateXML(self._libvirt_xml())*07:36:30*   File 
>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in 
>>> networkCreateXML*07:36:30* if ret is None:raise 
>>> libvirtError('virNetworkCreateXML() failed', conn=self)*07:36:30* 
>>> libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid backend or 
>>> is unavailable*07:36:30* + on_exit*07:36:30* + [[ 1 -ne 0 ]]*07:36:30* + 
>>> logger.error 'on_exit: Exiting with a non-zero status'*07:36:30* + 
>>> logger.log ERROR 'on_exit: Exiting with a non-zero status'*07:36:30* + set 
>>> +x*07:36:30* 2020-02-10 02:06:30.296966051+ 
>>> run_suite.sh::on_exit::ERROR:: on_exit: Exiting with a non-zero status
>>>
>>>
>>>
>>> On Mon, Feb 10, 2020 at 12:08 PM Yedidyah Bar David 
>>> wrote:
>>>
>>>> On Mon, Feb 10, 2020 at 4:13 AM  wrote:
>>>> >
>>>> > Project:
>>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/
>>>> > Build:
>>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/346/
>>>>
>>>> lago.log has:
>>>>
>>>> 1. some repo issue:
>>>>
>>>> 2020-02-10 02:11:13,681::ERROR::repoman.common.parser::No artifacts
>>>> found for source /var/lib/lago/ovirt-appliance-4.3-el7:only-missing
>>>>
>>>> Galit - any idea?
>>>>
>>>> 2. IPv6 issue - after a long series of tracebacks:
>>>>
>>>> libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv

Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 346 - Still Failing!

2020-02-10 Thread Dominik Holler
Hello,
is this issue reproducible local for someone, or does this happen only on
jenkins?

On Mon, Feb 10, 2020 at 9:46 AM Galit Rosenthal  wrote:

> Checking this
> Once I have more info I will update.
>
> On Mon, Feb 10, 2020 at 9:16 AM Parth Dhanjal  wrote:
>
>> Hey!
>>
>> hc_basic_suite_4.3 is failing with the same error as well
>>
>> Project:
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/
>> Build:
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/328/consoleFul
>> 
>> l
>>
>> *07:36:24* @ Start Prefix: *07:36:24*   # Start nets: *07:36:24* * 
>> Create network lago-hc-basic-suite-4-3-net-management: *07:36:30* * 
>> Create network lago-hc-basic-suite-4-3-net-management: ERROR (in 
>> 0:00:05)*07:36:30*   # Start nets: ERROR (in 0:00:05)*07:36:30* @ Start 
>> Prefix: ERROR (in 0:00:05)*07:36:30* Error occured, aborting*07:36:30* 
>> Traceback (most recent call last):*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in main*07:36:30*  
>>cli_plugins[args.verb].do_run(args)*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
>> do_run*07:36:30* self._do_run(**vars(args))*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in 
>> wrapper*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in 
>> wrapper*07:36:30* return func(*args, prefix=prefix, **kwargs)*07:36:30*  
>>  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in 
>> do_start*07:36:30* prefix.start(vm_names=vm_names)*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in 
>> wrapped*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in 
>> start*07:36:30* self.virt_env.start(vm_names=vm_names)*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in 
>> start*07:36:30* net.start()*07:36:30*   File 
>> "/usr/lib/python2.7/site-packages/lago/providers/libvirt/network.py", line 
>> 115, in start*07:36:30* net = 
>> self.libvirt_con.networkCreateXML(self._libvirt_xml())*07:36:30*   File 
>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in 
>> networkCreateXML*07:36:30* if ret is None:raise 
>> libvirtError('virNetworkCreateXML() failed', conn=self)*07:36:30* 
>> libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid backend or 
>> is unavailable*07:36:30* + on_exit*07:36:30* + [[ 1 -ne 0 ]]*07:36:30* + 
>> logger.error 'on_exit: Exiting with a non-zero status'*07:36:30* + 
>> logger.log ERROR 'on_exit: Exiting with a non-zero status'*07:36:30* + set 
>> +x*07:36:30* 2020-02-10 02:06:30.296966051+ 
>> run_suite.sh::on_exit::ERROR:: on_exit: Exiting with a non-zero status
>>
>>
>>
>> On Mon, Feb 10, 2020 at 12:08 PM Yedidyah Bar David 
>> wrote:
>>
>>> On Mon, Feb 10, 2020 at 4:13 AM  wrote:
>>> >
>>> > Project:
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/
>>> > Build:
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.3/346/
>>>
>>> lago.log has:
>>>
>>> 1. some repo issue:
>>>
>>> 2020-02-10 02:11:13,681::ERROR::repoman.common.parser::No artifacts
>>> found for source /var/lib/lago/ovirt-appliance-4.3-el7:only-missing
>>>
>>> Galit - any idea?
>>>
>>> 2. IPv6 issue - after a long series of tracebacks:
>>>
>>> libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid
>>> backend or is unavailable
>>>
>>> Dominik, any idea? Not sure if this is an infra issue or a recent
>>> change to OST (or lago, or libvirt...).
>>>
>>> Thanks and best regards,
>>> --
>>> Didi
>>> ___
>>> Infra mailing list -- infra@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/URERUP2VUCW25RFUIOLVEE3DMBMHBF6C/
>>>
>>
>
> --
>
> GALIT ROSENTHAL
>
> SOFTWARE ENGINEER
>
> Red Hat
>
> 
>
> ga...@redhat.comT: 972-9-7692230
> 
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PGA5MU5ZSX5GSJSQ6QJ64SYMYZW4HPFH/


[JIRA] (OVIRT-2767) https://github.com/mmirecki/ovirt-provider-mock -> https://github.com/ovirt/ovirt-provider-mock

2019-07-31 Thread Dominik Holler (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39605#comment-39605
 ] 

Dominik Holler commented on OVIRT-2767:
---

Thanks Eyal!
Do mmirecki and I have maintainer rights on this project?

> https://github.com/mmirecki/ovirt-provider-mock -> 
> https://github.com/ovirt/ovirt-provider-mock
> ---
>
> Key: OVIRT-2767
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2767
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Dominik Holler
>Assignee: infra
>
> Hello,
> I would like to ovirt-provider-mock  [1] to be included under oVirt with 
> mmirecki and dholler as maintainers.
> Thanks
> [1]  https://github.com/mmirecki/ovirt-provider-mock



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100106)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GIUDLBPZ3LYHTDHSLLJPEAJ75UIFDTLJ/


[JIRA] (OVIRT-2767) https://github.com/mmirecki/ovirt-provider-mock -> https://github.com/ovirt/ovirt-provider-mock

2019-07-31 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-2767:
-

 Summary: https://github.com/mmirecki/ovirt-provider-mock -> 
https://github.com/ovirt/ovirt-provider-mock
 Key: OVIRT-2767
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2767
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: GitHub
Reporter: Dominik Holler
Assignee: infra


Hello,
I would like to ovirt-provider-mock  [1] to be included under oVirt with 
mmirecki and dholler as maintainers.
Thanks

[1]  https://github.com/mmirecki/ovirt-provider-mock



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100106)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3S5K6AVAJGVHWL3OK6RP65ID3AXKNIUW/


Re: [ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-04-02 Thread Dominik Holler
On Tue, 2 Apr 2019 12:24:45 +0300
Galit Rosenthal  wrote:

> Hi
> 
> I had a failure on my laptop when running in mock on hotplug cpu test.
> (when testing a change, got the same error on the jenkins)
> Dominik requested me to make a video of the vm0 dmesg command.
> 
> 
> https://drive.google.com/file/d/1Kr6r4SMhnVsWBvWD6E6JIZI4ddivo2pv/view?usp=sharing
> 

Galit, thank you very much for the video!
The video shows that the guest is in serious trouble and a dropbear
process is killed by the oom killer.
Gallit currently checks if it is possible to give the guest VM more
memory.



> 
> Regards,
> Galit
> 
> 
> On Wed, Mar 27, 2019 at 2:39 PM Sandro Bonazzola 
> wrote:
> 
> >
> >
> > Il giorno mer 27 mar 2019 alle ore 09:54 Dominik Holler <
> > dhol...@redhat.com> ha scritto:
> >
> >> On Wed, 27 Mar 2019 10:07:16 +0200
> >> Eyal Edri  wrote:
> >>
> >> > On Wed, Mar 27, 2019 at 3:06 AM Ryan Barry  wrote:
> >> >
> >> > > On Tue, Mar 26, 2019 at 4:07 PM Dominik Holler 
> >> wrote:
> >> > > >
> >> > > > I added in
> >> > > > https://gerrit.ovirt.org/#/c/98925/
> >> > > > a ping directly before the ssh.
> >> > > > The ping succeeds, but the ssh fails.
> >> > > >
> >> > > >
> >> > > > On Tue, 26 Mar 2019 17:07:45 +0100
> >> > > > Sandro Bonazzola  wrote:
> >> > > >
> >> > > > > Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry <
> >> rba...@redhat.com>
> >> > > ha
> >> > > > > scritto:
> >> > > > >
> >> > > > > > +1 from me
> >> > > > > >
> >> > > > >
> >> > > > > Merged. I have 2 patches constantly failing on it, rebased them,
> >> you
> >> > > can
> >> > > > > follow on:
> >> > > > > https://gerrit.ovirt.org/#/c/98863/ and
> >> https://gerrit.ovirt.org/98862
> >> > > > >
> >> > > >
> >> > > > still failing on jenkins, but at least one succeeds locally for me
> >> > >
> >> > > Succeeds locally for me also.
> >> > >
> >> > > Dafna, are we sure there's not an infra issue?
> >> > >
> >> >
> >> > I think since its a race ( and we've seen failures on this test in the
> >> > past, also a race I think ), its probably hard to reproduce locally.
> >> > Also, we probably need to make sure the same Libvirt version is used.
> >> > The upstream servers are quite old, it can also be local run ends up
> >> being
> >> > faster and not hitting the same issues ( as we've seen in the past )
> >> >
> >> > Could it be a bug in the ssh client ( paramiko? )
> >> >
> >>
> >>
> >> Probably wrong idea, but worth to ask:
> >> Any ideas which  ssh_timeout is used or how to modify?
> >>
> >> If 100 tries including a time.sleep(1) takes 100 seconds,
> >> either the timeout is not the expected 10 seconds, or the guest refuses
> >> the connection.
> >>
> >>
> > I'm looking into a similar failure and found this on host1 logs at the
> > time of the ssh failure:
> > https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/3905/artifact/check-patch.basic_suite_master.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/messages
> >
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered blocking state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered disabled state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: device vnet1
> > entered promiscuous mode
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered blocking state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 kernel: on65322a61b5f14:
> > port 2(vnet1) entered forwarding state
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 NetworkManager[2667]:
> >   [1553672120.9133] manager: (vnet1): new Tun device
> > (/org/freedesktop/NetworkManager/Devices/44)
> > Mar 27 03:35:20 lago-basic-suite-master-host-1 lldpad: recvfrom(Event
> > interface): No buffer space available
> > Mar 27 03:35:20 lago-basic-suite-mast

Re: [ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-27 Thread Dominik Holler
On Wed, 27 Mar 2019 10:07:16 +0200
Eyal Edri  wrote:

> On Wed, Mar 27, 2019 at 3:06 AM Ryan Barry  wrote:
> 
> > On Tue, Mar 26, 2019 at 4:07 PM Dominik Holler  wrote:
> > >
> > > I added in
> > > https://gerrit.ovirt.org/#/c/98925/
> > > a ping directly before the ssh.
> > > The ping succeeds, but the ssh fails.
> > >
> > >
> > > On Tue, 26 Mar 2019 17:07:45 +0100
> > > Sandro Bonazzola  wrote:
> > >
> > > > Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry 
> > ha
> > > > scritto:
> > > >
> > > > > +1 from me
> > > > >
> > > >
> > > > Merged. I have 2 patches constantly failing on it, rebased them, you
> > can
> > > > follow on:
> > > > https://gerrit.ovirt.org/#/c/98863/ and https://gerrit.ovirt.org/98862
> > > >
> > >
> > > still failing on jenkins, but at least one succeeds locally for me
> >
> > Succeeds locally for me also.
> >
> > Dafna, are we sure there's not an infra issue?
> >
> 
> I think since its a race ( and we've seen failures on this test in the
> past, also a race I think ), its probably hard to reproduce locally.
> Also, we probably need to make sure the same Libvirt version is used.
> The upstream servers are quite old, it can also be local run ends up being
> faster and not hitting the same issues ( as we've seen in the past )
> 
> Could it be a bug in the ssh client ( paramiko? )
> 


Probably wrong idea, but worth to ask:
Any ideas which  ssh_timeout is used or how to modify?

If 100 tries including a time.sleep(1) takes 100 seconds,
either the timeout is not the expected 10 seconds, or the guest refuses
the connection.


> Barak,Gal,Galit, Evgheni - any thoughts on something on infra that can
> cause this? ( other than slow servers )
> 
> 
> >
> > >
> > > >
> > > >
> > > > >
> > > > > On Tue, Mar 26, 2019 at 11:13 AM Dominik Holler 
> > > > > wrote:
> > > > > >
> > > > > > On Tue, 26 Mar 2019 12:31:36 +0100
> > > > > > Dominik Holler  wrote:
> > > > > >
> > > > > > > On Tue, 26 Mar 2019 10:58:22 +
> > > > > > > Dafna Ron  wrote:
> > > > > > >
> > > > > > > > This is still failing randomly
> > > > > > > >
> > > > > > >
> > > > > > > I created https://gerrit.ovirt.org/#/c/98906/ to help to
> > understand
> > > > > > > which action is crashing the guest.
> > > > > > >
> > > > > >
> > > > > > I was not able to reproduce the failure with the change above.
> > > > > > We could merge the change to have better information on the next
> > > > > > failure.
> > > > > >
> > > > > >
> > > > > > > >
> > > > > > > > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler <
> > dhol...@redhat.com>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > > > > > > > Ryan Barry  wrote:
> > > > > > > > >
> > > > > > > > > > It may be virt, but I'm looking...
> > > > > > > > > >
> > > > > > > > > > I'm very suspicious of this happening immediately after
> > > > > hotplugging a
> > > > > > > > > NIC,
> > > > > > > > > > especially since the bug attached to
> > > > > https://gerrit.ovirt.org/#/c/98765/
> > > > > > > > > > talks about dropping packets. Dominik, did anything else
> > change
> > > > > here?
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > No, nothing I am aware of.
> > > > > > > > >
> > > > > > > > > Is there already a pattern in the failed runs detected, or
> > does it
> > > > > fail
> > > > > > > > > randomly?
> > > > > > > > >
> > > > > > > > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov <
> > > > > amarc...@redhat.com>
> > > > > > > > > > wro

Re: [ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
I added in 
https://gerrit.ovirt.org/#/c/98925/
a ping directly before the ssh.
The ping succeeds, but the ssh fails.


On Tue, 26 Mar 2019 17:07:45 +0100
Sandro Bonazzola  wrote:

> Il giorno mar 26 mar 2019 alle ore 16:48 Ryan Barry  ha
> scritto:
> 
> > +1 from me
> >
> 
> Merged. I have 2 patches constantly failing on it, rebased them, you can
> follow on:
> https://gerrit.ovirt.org/#/c/98863/ and https://gerrit.ovirt.org/98862
> 

still failing on jenkins, but at least one succeeds locally for me

> 
> 
> >
> > On Tue, Mar 26, 2019 at 11:13 AM Dominik Holler 
> > wrote:
> > >
> > > On Tue, 26 Mar 2019 12:31:36 +0100
> > > Dominik Holler  wrote:
> > >
> > > > On Tue, 26 Mar 2019 10:58:22 +
> > > > Dafna Ron  wrote:
> > > >
> > > > > This is still failing randomly
> > > > >
> > > >
> > > > I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
> > > > which action is crashing the guest.
> > > >
> > >
> > > I was not able to reproduce the failure with the change above.
> > > We could merge the change to have better information on the next
> > > failure.
> > >
> > >
> > > > >
> > > > > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler 
> > wrote:
> > > > >
> > > > > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > > > > Ryan Barry  wrote:
> > > > > >
> > > > > > > It may be virt, but I'm looking...
> > > > > > >
> > > > > > > I'm very suspicious of this happening immediately after
> > hotplugging a
> > > > > > NIC,
> > > > > > > especially since the bug attached to
> > https://gerrit.ovirt.org/#/c/98765/
> > > > > > > talks about dropping packets. Dominik, did anything else change
> > here?
> > > > > > >
> > > > > >
> > > > > > No, nothing I am aware of.
> > > > > >
> > > > > > Is there already a pattern in the failed runs detected, or does it
> > fail
> > > > > > randomly?
> > > > > >
> > > > > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov <
> > amarc...@redhat.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Which team is it? Is it Virt? Just checking who should open a
> > bug in
> > > > > > > > libvirt as suggested.
> > > > > > > >
> > > > > > > > > On 22 Mar 2019, at 20:52, Nir Soffer 
> > wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron 
> > wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > We are failing ovirt-engine master on test
> > > > > > 004_basic_sanity.hotplug_cpu
> > > > > > > > > looking at the logs, we can see that the for some reason,
> > libvirt
> > > > > > > > reports a vm as none responsive which fails the test.
> > > > > > > > >
> > > > > > > > > CQ first failure was for patch:
> > > > > > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add
> > display="on" for
> > > > > > mdevs,
> > > > > > > > use nodisplay to override
> > > > > > > > > But I do not think this is the cause of failure.
> > > > > > > > >
> > > > > > > > > Adding Marcin, Milan and Dan as well as I think it may be
> > netwrok
> > > > > > > > related.
> > > > > > > > >
> > > > > > > > > You can see the libvirt log here:
> > > > > > > > >
> > > > > > > >
> > > > > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > > > > > >
> > > > > > > > > you can see the full logs here:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > >

Re: [ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Tue, 26 Mar 2019 12:31:36 +0100
Dominik Holler  wrote:

> On Tue, 26 Mar 2019 10:58:22 +
> Dafna Ron  wrote:
> 
> > This is still failing randomly
> > 
> 
> I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
> which action is crashing the guest.
> 

I was not able to reproduce the failure with the change above.
We could merge the change to have better information on the next
failure.


> > 
> > On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler  wrote:
> > 
> > > On Mon, 25 Mar 2019 17:30:53 -0400
> > > Ryan Barry  wrote:
> > >
> > > > It may be virt, but I'm looking...
> > > >
> > > > I'm very suspicious of this happening immediately after hotplugging a
> > > NIC,
> > > > especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> > > > talks about dropping packets. Dominik, did anything else change here?
> > > >
> > >
> > > No, nothing I am aware of.
> > >
> > > Is there already a pattern in the failed runs detected, or does it fail
> > > randomly?
> > >
> > > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> > > > wrote:
> > > >
> > > > > Which team is it? Is it Virt? Just checking who should open a bug in
> > > > > libvirt as suggested.
> > > > >
> > > > > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > > > > >
> > > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > > > > Hi,
> > > > > >
> > > > > > We are failing ovirt-engine master on test
> > > 004_basic_sanity.hotplug_cpu
> > > > > > looking at the logs, we can see that the for some reason, libvirt
> > > > > reports a vm as none responsive which fails the test.
> > > > > >
> > > > > > CQ first failure was for patch:
> > > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for
> > > mdevs,
> > > > > use nodisplay to override
> > > > > > But I do not think this is the cause of failure.
> > > > > >
> > > > > > Adding Marcin, Milan and Dan as well as I think it may be netwrok
> > > > > related.
> > > > > >
> > > > > > You can see the libvirt log here:
> > > > > >
> > > > >
> > > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > > >
> > > > > > you can see the full logs here:
> > > > > >
> > > > > >
> > > > >
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > > > > >
> > > > > > Evgheni and I confirmed this is not an infra issue and the problem 
> > > > > > is
> > > > > ssh connection to the internal vm
> > > > > >
> > > > > > Thanks,
> > > > > > Dafna
> > > > > >
> > > > > >
> > > > > > error:
> > > > > > 2019-03-22 15:08:22.658+: 22068: warning :
> > > qemuDomainObjTaint:7521 :
> > > > > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > > > > tainted: hook-script
> > > > > >
> > > > > > Why our vm is tainted?
> > > > > >
> > > > > > 2019-03-22 15:08:22.693+: 22068: error :
> > > > > virProcessRunInMountNamespace:1159 : internal error: child reported:
> > > unable
> > > > > to set security context 'system_u:object_r:virt_content_t:s0' on
> > > > >
> > > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > > > > No such file or directory
> > > > > >
> > > > > > This should not happen, libvirt is not adding labels to files in
> > > > > /rhev/data-center. It is using using its own mount
> > > > > > namespace and adding there the devices used by the VM. Since libvirt
> > > > > create the devices in its namespace
&g

Re: [ovirt-devel] Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Tue, 26 Mar 2019 10:58:22 +
Dafna Ron  wrote:

> This is still failing randomly
> 

I created https://gerrit.ovirt.org/#/c/98906/ to help to understand
which action is crashing the guest.

> 
> On Tue, Mar 26, 2019 at 8:15 AM Dominik Holler  wrote:
> 
> > On Mon, 25 Mar 2019 17:30:53 -0400
> > Ryan Barry  wrote:
> >
> > > It may be virt, but I'm looking...
> > >
> > > I'm very suspicious of this happening immediately after hotplugging a
> > NIC,
> > > especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> > > talks about dropping packets. Dominik, did anything else change here?
> > >
> >
> > No, nothing I am aware of.
> >
> > Is there already a pattern in the failed runs detected, or does it fail
> > randomly?
> >
> > > On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> > > wrote:
> > >
> > > > Which team is it? Is it Virt? Just checking who should open a bug in
> > > > libvirt as suggested.
> > > >
> > > > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > > > >
> > > > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > > > Hi,
> > > > >
> > > > > We are failing ovirt-engine master on test
> > 004_basic_sanity.hotplug_cpu
> > > > > looking at the logs, we can see that the for some reason, libvirt
> > > > reports a vm as none responsive which fails the test.
> > > > >
> > > > > CQ first failure was for patch:
> > > > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for
> > mdevs,
> > > > use nodisplay to override
> > > > > But I do not think this is the cause of failure.
> > > > >
> > > > > Adding Marcin, Milan and Dan as well as I think it may be netwrok
> > > > related.
> > > > >
> > > > > You can see the libvirt log here:
> > > > >
> > > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > > > >
> > > > > you can see the full logs here:
> > > > >
> > > > >
> > > >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > > > >
> > > > > Evgheni and I confirmed this is not an infra issue and the problem is
> > > > ssh connection to the internal vm
> > > > >
> > > > > Thanks,
> > > > > Dafna
> > > > >
> > > > >
> > > > > error:
> > > > > 2019-03-22 15:08:22.658+: 22068: warning :
> > qemuDomainObjTaint:7521 :
> > > > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > > > tainted: hook-script
> > > > >
> > > > > Why our vm is tainted?
> > > > >
> > > > > 2019-03-22 15:08:22.693+: 22068: error :
> > > > virProcessRunInMountNamespace:1159 : internal error: child reported:
> > unable
> > > > to set security context 'system_u:object_r:virt_content_t:s0' on
> > > >
> > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > > > No such file or directory
> > > > >
> > > > > This should not happen, libvirt is not adding labels to files in
> > > > /rhev/data-center. It is using using its own mount
> > > > > namespace and adding there the devices used by the VM. Since libvirt
> > > > create the devices in its namespace
> > > > > it should not complain about missing paths in /rhev/data-center.
> > > > >
> > > > > I think we should file a libvirt bug for this.
> > > > >
> > > > > 2019-03-22 15:08:28.168+: 22070: error :
> > > > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
> > guest
> > > > agent is not connected
> > > > > 2019-03-22 15:08:58.193+: 22070: error :
> > > > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
> > guest
> > > > agent is not connected
> > > > > 2019-03-22 15:13:58.17

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]

2019-03-26 Thread Dominik Holler
On Mon, 25 Mar 2019 17:30:53 -0400
Ryan Barry  wrote:

> It may be virt, but I'm looking...
> 
> I'm very suspicious of this happening immediately after hotplugging a NIC,
> especially since the bug attached to https://gerrit.ovirt.org/#/c/98765/
> talks about dropping packets. Dominik, did anything else change here?
> 

No, nothing I am aware of.

Is there already a pattern in the failed runs detected, or does it fail
randomly?

> On Mon, Mar 25, 2019 at 12:42 PM Anton Marchukov 
> wrote:
> 
> > Which team is it? Is it Virt? Just checking who should open a bug in
> > libvirt as suggested.
> >
> > > On 22 Mar 2019, at 20:52, Nir Soffer  wrote:
> > >
> > > On Fri, Mar 22, 2019 at 7:12 PM Dafna Ron  wrote:
> > > Hi,
> > >
> > > We are failing ovirt-engine master on test 004_basic_sanity.hotplug_cpu
> > > looking at the logs, we can see that the for some reason, libvirt
> > reports a vm as none responsive which fails the test.
> > >
> > > CQ first failure was for patch:
> > > https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for mdevs,
> > use nodisplay to override
> > > But I do not think this is the cause of failure.
> > >
> > > Adding Marcin, Milan and Dan as well as I think it may be netwrok
> > related.
> > >
> > > You can see the libvirt log here:
> > >
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/libvirt.log
> > >
> > > you can see the full logs here:
> > >
> > >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> > >
> > > Evgheni and I confirmed this is not an infra issue and the problem is
> > ssh connection to the internal vm
> > >
> > > Thanks,
> > > Dafna
> > >
> > >
> > > error:
> > > 2019-03-22 15:08:22.658+: 22068: warning : qemuDomainObjTaint:7521 :
> > Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is
> > tainted: hook-script
> > >
> > > Why our vm is tainted?
> > >
> > > 2019-03-22 15:08:22.693+: 22068: error :
> > virProcessRunInMountNamespace:1159 : internal error: child reported: unable
> > to set security context 'system_u:object_r:virt_content_t:s0' on
> > '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510':
> > No such file or directory
> > >
> > > This should not happen, libvirt is not adding labels to files in
> > /rhev/data-center. It is using using its own mount
> > > namespace and adding there the devices used by the VM. Since libvirt
> > create the devices in its namespace
> > > it should not complain about missing paths in /rhev/data-center.
> > >
> > > I think we should file a libvirt bug for this.
> > >
> > > 2019-03-22 15:08:28.168+: 22070: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > > 2019-03-22 15:08:58.193+: 22070: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > > 2019-03-22 15:13:58.179+: 22071: error :
> > qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest
> > agent is not connected
> > >
> > > Do we have guest agent in the test VMs?
> > >
> > > Nir
> >
> > --
> > Anton Marchukov
> > Associate Manager - RHV DevOps - Red Hat
> >
> >
> >
> >
> >
> > ___
> > Infra mailing list -- infra@ovirt.org
> > To unsubscribe send an email to infra-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B44Q3AZA7JUPMW4IDWZAS3RYMAFQ56VG/
> >
> 
> 
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/7XYIPXZLPHRRI53QDC24TY6J2ZL2JWSH/


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-provider-ovn) ] [ 18-01-2019 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2019-01-18 Thread Dominik Holler
On Fri, 18 Jan 2019 11:13:25 +
Dafna Ron  wrote:

> Hi,
> 
> We have a failure in ovn tests in branch 4.2. Marcin/Miguel, can you please
> take a look?
> 

https://gerrit.ovirt.org/#/c/97072/ is ready to be merged.

> Jira opened: https://ovirt-jira.atlassian.net/browse/OVIRT-2655
> 
> Link and headline of suspected patches:
> 
> https://gerrit.ovirt.org/#/c/96926/ - ip_version is mandatory on POSTs
> 
> Link to Job:
> 
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/
> 
> Link to all logs:
> 
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3742/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-098_ovirt_provider_ovn.py/
> 
> (Relevant) error snippet from the log:
> 
> 
> 
> 2019-01-18 00:14:30,591 root Starting server
> 2019-01-18 00:14:30,592 root Version: 1.2.19-0.20190117180529.gite1d4195
> 2019-01-18 00:14:30,592 root Build date: 20190117180529
> 2019-01-18 00:14:30,592 root Githash: e1d4195
> 2019-01-18 00:20:39,394 ovsdbapp.backend.ovs_idl.vlog ssl:127.0.0.1:6641:
> no response to inactivity probe after 5.01 seconds, disconnecting
> 2019-01-18 00:45:01,435 root From: :::192.168.200.1:49008 Request: POST
> /v2.0/subnets/
> 2019-01-18 00:45:01,435 root Request body:
> {"subnet": {"network_id": "99c260ec-dad4-40b9-8732-df32dd54bd00",
> "dns_nameservers": ["8.8.8.8"], "cidr": "1.1.1.0/24", "gateway_ip":
> "1.1.1.1", "name": "subnet_1"}}
> 2019-01-18 00:45:01,435 root Missing 'ip_version' attribute
> Traceback (most recent call last):
>   File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134,
> in _handle_request
> method, path_parts, content
>   File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
> 175, in handle_request
> return self.call_response_handler(handler, content, parameters)
>   File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, in
> call_response_handler
> return response_handler(ovn_north, content, parameters)
>   File "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", line
> 154, in post_subnets
> subnet = nb_db.add_subnet(received_subnet)
>   File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line
> 74, in wrapper
> validate_rest_input(rest_data)
>   File "/usr/share/ovirt-provider-ovn/neutron/neutron_api_mappers.py", line
> 596, in validate_add_rest_input
> raise BadRequestError('Missing \'ip_version\' attribute')
> BadRequestError: Missing 'ip_version' attribute
> 
> 
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XCOOAPHNOYLMPMBE4UK2QLPGZHLHUBNM/


[JIRA] (OVIRT-2653) Enable IPv6 for glance.ovirt.org

2019-01-17 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-2653:
-

 Summary: Enable IPv6 for glance.ovirt.org
 Key: OVIRT-2653
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2653
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
  Components: oVirt Infra
Reporter: Dominik Holler
Assignee: infra
Priority: Low


 glance.ovirt.org should be reachable from IPv6, because it is included in the 
default installation of oVirt.
oVirt 4.3.0 supports IPv6-only scenarios.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100097)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DJTMXG5CHXXRLZRO42OSWF2RXU6B4NON/


identities on gerrit.ovirt.org

2018-11-27 Thread Dominik Holler
Hi,
can you please remove the identities with the email address
thegreenkep...@hollyhome.ath.cx
and
dominik.hol...@gmail.com
from gerrit.ovirt.org.
But please do not delete dhol...@redhat.com.
Thanks,
Dominik
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/C7OLABFOL6YXVNEPUV6DKESO2EJIQEP3/


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Wed, 14 Nov 2018 11:24:10 +0100
Michal Skrivanek  wrote:

> > On 14 Nov 2018, at 10:50, Dominik Holler  wrote:
> > 
> > On Wed, 14 Nov 2018 09:27:39 +0100
> > Dominik Holler  wrote:
> >   
> >> On Tue, 13 Nov 2018 13:01:09 +0100
> >> Martin Perina  wrote:
> >>   
> >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> >>> wrote:
> >>>   
> >>>> 
> >>>> 
> >>>> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> >>>> 
> >>>> On Tue, 13 Nov 2018 11:56:37 +0100
> >>>> Martin Perina  wrote:
> >>>> 
> >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> >>>> 
> >>>> Martin? can you please look at the patch that Dominik sent?
> >>>> We need to resolve this as we have not had an engine build for the last 
> >>>> 11
> >>>> days
> >>>> 
> >>>> 
> >>>> Yesterday I've merged Dominik's revert patch
> >>>> https://gerrit.ovirt.org/95377
> >>>> which should switch cluster level back to 4.2. Below mentioned change
> >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am 
> >>>> I
> >>>> right Michal?
> >>>> 
> >>>> The build mentioned
> >>>> 
> >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> >>>> is from yesterday. Are we sure that it was executed only after #95377 was
> >>>> merged? I'd like to see the results from latest
> >>>> 
> >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> >>>> but unfortunately it already waits more than an hour for available hosts
> >>>> ...
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> 
> >>>> https://gerrit.ovirt.org/#/c/95283/ results in
> >>>> 
> >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> >>>> which is used in
> >>>> 
> >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> >>>> results in run_vms succeeding.
> >>>> 
> >>>> The next merged change
> >>>> https://gerrit.ovirt.org/#/c/95310/ results in
> >>>> 
> >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> >>>> which is used in
> >>>> 
> >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> >>>> results in run_vms failing with
> >>>> 2018-11-12 17:35:10,109-05 INFO
> >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> >>>> internal: false. Entities affected :  ID:
> >>>> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with 
> >>>> role
> >>>> type USER
> >>>> 2018-11-12 17:35:10,113-05 DEBUG
> >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> >>>> getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], 
> >>>> timeElapsed:
> >>>> 4ms
> >>>> 2018-11-12 17:35:10,128-05 DEBUG
> >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> >>>> getAllForClusterWithStatus, params: 
> >>>> [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> >>>> Up], timeElapsed: 7ms
> >>>> 2018-11-12 17:35:10,129-05 INFO
> >>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> >>>> 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> >>>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> >>>> (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> >>>> 2018-11-12 17:35:10,129-05 INFO
> >>>> [org.ovirt.engine.core.b

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Wed, 14 Nov 2018 09:27:39 +0100
Dominik Holler  wrote:

> On Tue, 13 Nov 2018 13:01:09 +0100
> Martin Perina  wrote:
> 
> > On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> > wrote:
> >   
> > >
> > >
> > > On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> > >
> > > On Tue, 13 Nov 2018 11:56:37 +0100
> > > Martin Perina  wrote:
> > >
> > > On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> > >
> > > Martin? can you please look at the patch that Dominik sent?
> > > We need to resolve this as we have not had an engine build for the last 11
> > > days
> > >
> > >
> > > Yesterday I've merged Dominik's revert patch
> > > https://gerrit.ovirt.org/95377
> > > which should switch cluster level back to 4.2. Below mentioned change
> > > https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> > > right Michal?
> > >
> > > The build mentioned
> > >
> > > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > > is from yesterday. Are we sure that it was executed only after #95377 was
> > > merged? I'd like to see the results from latest
> > >
> > > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > > but unfortunately it already waits more than an hour for available hosts
> > > ...
> > >
> > >
> > >
> > >
> > >
> > > https://gerrit.ovirt.org/#/c/95283/ results in
> > >
> > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > > which is used in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > > results in run_vms succeeding.
> > >
> > > The next merged change
> > > https://gerrit.ovirt.org/#/c/95310/ results in
> > >
> > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > > which is used in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > > results in run_vms failing with
> > > 2018-11-12 17:35:10,109-05 INFO
> > >  [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> > > internal: false. Entities affected :  ID:
> > > d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role
> > > type USER
> > > 2018-11-12 17:35:10,113-05 DEBUG
> > > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed:
> > > 4ms
> > > 2018-11-12 17:35:10,128-05 DEBUG
> > > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> > > Up], timeElapsed: 7ms
> > > 2018-11-12 17:35:10,129-05 INFO
> > >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > > 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> > > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > > 2018-11-12 17:35:10,129-05 INFO
> > >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > > 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8')
> > > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > > 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> > > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to
> > > run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not
> > > be run.
> > > in
> > >
> > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-art

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dominik Holler
On Tue, 13 Nov 2018 13:01:09 +0100
Martin Perina  wrote:

> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> wrote:
> 
> >
> >
> > On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> >
> > On Tue, 13 Nov 2018 11:56:37 +0100
> > Martin Perina  wrote:
> >
> > On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> >
> > Martin? can you please look at the patch that Dominik sent?
> > We need to resolve this as we have not had an engine build for the last 11
> > days
> >
> >
> > Yesterday I've merged Dominik's revert patch
> > https://gerrit.ovirt.org/95377
> > which should switch cluster level back to 4.2. Below mentioned change
> > https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> > right Michal?
> >
> > The build mentioned
> >
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > is from yesterday. Are we sure that it was executed only after #95377 was
> > merged? I'd like to see the results from latest
> >
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > but unfortunately it already waits more than an hour for available hosts
> > ...
> >
> >
> >
> >
> >
> > https://gerrit.ovirt.org/#/c/95283/ results in
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > which is used in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > results in run_vms succeeding.
> >
> > The next merged change
> > https://gerrit.ovirt.org/#/c/95310/ results in
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > which is used in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > results in run_vms failing with
> > 2018-11-12 17:35:10,109-05 INFO
> >  [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> > internal: false. Entities affected :  ID:
> > d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role
> > type USER
> > 2018-11-12 17:35:10,113-05 DEBUG
> > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed:
> > 4ms
> > 2018-11-12 17:35:10,128-05 DEBUG
> > [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> > Up], timeElapsed: 7ms
> > 2018-11-12 17:35:10,129-05 INFO
> >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > 2018-11-12 17:35:10,129-05 INFO
> >  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> > [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> > 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8')
> > was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> > 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> > (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to
> > run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not
> > be run.
> > in
> >
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/
> >
> > Is this helpful for you?
> >
> >
> >
> > actually, there ire two issues
> > 1) cluster is still 4.3 even after Martin’s revert.
> >  
> 
> https://gerrit.ovirt.org/#/c/95409/ should align cluster level with dc level
> 

This change aligns the cluster level, but
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Dominik Holler
On Tue, 13 Nov 2018 11:56:37 +0100
Martin Perina  wrote:

> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> 
> > Martin? can you please look at the patch that Dominik sent?
> > We need to resolve this as we have not had an engine build for the last 11
> > days
> >  
> 
> Yesterday I've merged Dominik's revert patch https://gerrit.ovirt.org/95377
> which should switch cluster level back to 4.2. Below mentioned change
> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> right Michal?
> 
> The build mentioned
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> is from yesterday. Are we sure that it was executed only after #95377 was
> merged? I'd like to see the results from latest
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> but unfortunately it already waits more than an hour for available hosts ...
> 




https://gerrit.ovirt.org/#/c/95283/ results in 
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
which is used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
results in run_vms succeeding.

The next merged change
https://gerrit.ovirt.org/#/c/95310/ results in
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
which is used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
results in run_vms failing with
2018-11-12 17:35:10,109-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
RunVmOnceCommand internal: false. Entities affected :  ID: 
d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role type 
USER
2018-11-12 17:35:10,113-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getVmManager, params: 
[d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed: 4ms
2018-11-12 17:35:10,128-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: 
getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260, Up], 
timeElapsed: 7ms
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to run 
the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not be run.
in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/

Is this helpful for you?

> 
> > On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
> >  
> >> On Mon, 12 Nov 2018 13:45:54 +0100
> >> Martin Perina  wrote:
> >>  
> >> > On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler   
> >> wrote:  
> >> >  
> >> > > On Mon, 12 Nov 2018 12:29:17 +0100
> >> > > Martin Perina  wrote:
> >> > >  
> >> > > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> >> > > >  
> >> > > > > There are currently two issues failing ovirt-engine on CQ ovirt  
> >> master:  
> >> > > > >
> >> > > > > 1. edit vm pool is causing failure in different tests. it has a  
> >> patch  
> >> > > *waiting  
> >> > > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> >> > > > >  
> >> > > >
> >> > > > Merged
> >> > > >  
> >> > > > >
> >> > > > > 2. we have a failure in upgrade suite as well to run vm but this  
> >> seems  

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Mon, 12 Nov 2018 13:45:54 +0100
Martin Perina  wrote:

> On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler  wrote:
> 
> > On Mon, 12 Nov 2018 12:29:17 +0100
> > Martin Perina  wrote:
> >  
> > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> > >  
> > > > There are currently two issues failing ovirt-engine on CQ ovirt master:
> > > >
> > > > 1. edit vm pool is causing failure in different tests. it has a patch  
> > *waiting  
> > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> > > >  
> > >
> > > Merged
> > >  
> > > >
> > > > 2. we have a failure in upgrade suite as well to run vm but this seems  
> > to  
> > > > be related to the tests as well:
> > > > 2018-11-12 05:41:07,831-05 WARN
> > > > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator] (default  
> > task-1)  
> > > > [] Random number source URANDOM is not supported in cluster  
> > 'test-cluster'  
> > > > compatibility version 4.0.
> > > >
> > > > here is the full error from the upgrade suite failure in run vm:
> > > > https://pastebin.com/XLHtWGGx
> > > >
> > > > Here is the latest failure:
> > > >  
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
> >   
> > > >  
> > >
> > > I will try to take a look later today
> > >  
> >
> > I have the idea that this might be related to
> > https://gerrit.ovirt.org/#/c/95377/ , and I check in
> > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
> > , but I have to stop now, if not solved I can go on later today.
> >  
> 
> OK, both CI and above manual OST job went fine, so I've just merged the
> revert patch. I will take a look at it later in detail, we should really be
> testing 4.3 on master and not 4.2
> 

Ack.

Now
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
is failing on
File 
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
 line 698, in run_vms
api.vms.get(VM0_NAME).start(start_params)
status: 400
reason: Bad Request

2018-11-12 10:06:30,722-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) 
[b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host 
'lago-basic-suite-master-host-1' ('dbfe1b0c-f940-4dba-8fb1-0cfe5ca7ddfc') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
2018-11-12 10:06:30,722-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) 
[b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host 
'lago-basic-suite-master-host-0' ('e83a63ca-381e-40db-acb2-65a3e7953e11') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
2018-11-12 10:06:30,723-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-3) [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Can't find VDS to run 
the VM '57a66eff-8cbf-4643-b045-43d4dda80c66' on, so this VM will not be run.

Is this related to
https://gerrit.ovirt.org/#/c/95310/
?



> >  
> > > >
> > > >
> > > > Thanks,
> > > > Dafna
> > > >
> > > >
> > > >
> > > >
> > > > On Mon, Nov 12, 2018 at 9:23 AM Dominik Holler   
> > wrote:  
> > > >  
> > > >> On Sun, 11 Nov 2018 19:04:40 +0200
> > > >> Dan Kenigsberg  wrote:
> > > >>  
> > > >> > On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri   
> > wrote:  
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri   
> > wrote:  
> > > >> > >>
> > > >> > >>
> > > >> > >>
> > > >> > >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg <  
> > dan...@redhat.com>  
> > > >> wrote:  
> > > >> > >>>
> > > >> > >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  
> > > >> > >>>  
> >  
> > > >> wrote:  
> > > >> > >>> >
> > > >> > >>> > 

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Mon, 12 Nov 2018 12:29:17 +0100
Martin Perina  wrote:

> On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> 
> > There are currently two issues failing ovirt-engine on CQ ovirt master:
> >
> > 1. edit vm pool is causing failure in different tests. it has a patch 
> > *waiting
> > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> >  
> 
> Merged
> 
> >
> > 2. we have a failure in upgrade suite as well to run vm but this seems to
> > be related to the tests as well:
> > 2018-11-12 05:41:07,831-05 WARN
> > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator] (default task-1)
> > [] Random number source URANDOM is not supported in cluster 'test-cluster'
> > compatibility version 4.0.
> >
> > here is the full error from the upgrade suite failure in run vm:
> > https://pastebin.com/XLHtWGGx
> >
> > Here is the latest failure:
> > https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
> >  
> 
> I will try to take a look later today
> 

I have the idea that this might be related to 
https://gerrit.ovirt.org/#/c/95377/ , and I check in 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
 , but I have to stop now, if not solved I can go on later today.

> >
> >
> > Thanks,
> > Dafna
> >
> >
> >
> >
> > On Mon, Nov 12, 2018 at 9:23 AM Dominik Holler  wrote:
> >  
> >> On Sun, 11 Nov 2018 19:04:40 +0200
> >> Dan Kenigsberg  wrote:
> >>  
> >> > On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri  wrote:  
> >> > >
> >> > >
> >> > >
> >> > > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri  wrote:  
> >> > >>
> >> > >>
> >> > >>
> >> > >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg   
> >> wrote:  
> >> > >>>
> >> > >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi   
> >> wrote:  
> >> > >>> >
> >> > >>> > Hey,
> >> > >>> > I've seen that CQ Master is not passing ovirt-engine for 10 days  
> >> and fails on test suite called restore_vm0_networking  
> >> > >>> > here's a snap error regarding it:
> >> > >>> >
> >> > >>> > https://pastebin.com/7msEYqKT
> >> > >>> >
> >> > >>> > Link to a sample job with the error:
> >> > >>> >
> >> > >>> >  
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml
> >>  
> >> > >>>
> >> > >>> I cannot follow this link because I'm 4 minutes too late
> >> > >>>
> >> > >>> jenkins.ovirt.org uses an invalid security certificate. The
> >> > >>> certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
> >> > >>> current time is November 11, 2018, 5:17 PM.  
> >> > >>
> >> > >>
> >> > >> Yes, we're looking into that issue now.  
> >> > >
> >> > >
> >> > > Fixed, you should be able to access it now.  
> >> >
> >> > OST fails during restore_vm0_networking in line 101 of
> >> > 004_basic_sanity.py while comparing
> >> > vm_service.get().status == state
> >> >
> >> > It seems that instead of reporting back the VM status, Engine set  
> >> garbage  
> >> > "The response content type 'text/html; charset=iso-8859-1' isn't the
> >> > expected XML"
> >> >  
> >>
> >> The relevant line in
> >>
> >> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/httpd/ssl_access_log/*view*/
> >> seems to be
> >> 192.168.201.1 - - [11/Nov/2018:04:27:43 -0500] "GET
> >> /ovirt-engine/api/vms/26088164-d1a0-4254-a377-5d3c242c8105 HTTP/1.1" 503 
> >> 299
> >> and I guess the 503 error message is sent in HTML instead of XML.
> >>
> >> If I run manually
> >> https://gerrit.ovirt.org/#/c/95354/
> >> with latest build of engine-master
> >>
> >> http://jenkins.ovirt.org/job/ovirt-engine_master_b

Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-12 Thread Dominik Holler
On Sun, 11 Nov 2018 19:04:40 +0200
Dan Kenigsberg  wrote:

> On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri  wrote:
> >
> >
> >
> > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri  wrote:  
> >>
> >>
> >>
> >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg  wrote:  
> >>>
> >>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  wrote:  
> >>> >
> >>> > Hey,
> >>> > I've seen that CQ Master is not passing ovirt-engine for 10 days and 
> >>> > fails on test suite called restore_vm0_networking
> >>> > here's a snap error regarding it:
> >>> >
> >>> > https://pastebin.com/7msEYqKT
> >>> >
> >>> > Link to a sample job with the error:
> >>> >
> >>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml
> >>> >   
> >>>
> >>> I cannot follow this link because I'm 4 minutes too late
> >>>
> >>> jenkins.ovirt.org uses an invalid security certificate. The
> >>> certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
> >>> current time is November 11, 2018, 5:17 PM.  
> >>
> >>
> >> Yes, we're looking into that issue now.  
> >
> >
> > Fixed, you should be able to access it now.  
> 
> OST fails during restore_vm0_networking in line 101 of
> 004_basic_sanity.py while comparing
> vm_service.get().status == state
> 
> It seems that instead of reporting back the VM status, Engine set garbage
> "The response content type 'text/html; charset=iso-8859-1' isn't the
> expected XML"
> 

The relevant line in
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/httpd/ssl_access_log/*view*/
seems to be
192.168.201.1 - - [11/Nov/2018:04:27:43 -0500] "GET 
/ovirt-engine/api/vms/26088164-d1a0-4254-a377-5d3c242c8105 HTTP/1.1" 503 299
and I guess the 503 error message is sent in HTML instead of XML.

If I run manually
https://gerrit.ovirt.org/#/c/95354/
with latest build of engine-master
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8074/
basic suite seems to be happy:
https://jenkins.ovirt.org/view/oVirt system 
tests/job/ovirt-system-tests_manual/3484/


> I do not know what could cause that, and engine.log does not mention
> it. But it seems like a problem in engine API hence +Martin Perina and
> +Ondra Machacek .
> 
> 
> 
> >  
> >>
> >>
> >>  
> >>>
> >>>  
> >>> >
> >>> > Can some1 have a look at it and help to resolve the issue?
> >>> >
> >>> >
> >>> > ___
> >>> > Infra mailing list -- infra@ovirt.org
> >>> > To unsubscribe send an email to infra-le...@ovirt.org
> >>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> > oVirt Code of Conduct: 
> >>> > https://www.ovirt.org/community/about/community-guidelines/
> >>> > List Archives: 
> >>> > https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZQAYWTLZJKGPJ25F33E6ICVDXQDYSKSQ/
> >>> >   
> >>> ___
> >>> Devel mailing list -- de...@ovirt.org
> >>> To unsubscribe send an email to devel-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> oVirt Code of Conduct: 
> >>> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives: 
> >>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/R5LOJH73XCLLFOUTKPM5GUCS6PNNKGTE/
> >>>   
> >>
> >>
> >>
> >> --
> >>
> >> Eyal edri
> >>
> >>
> >> MANAGER
> >>
> >> RHV/CNV DevOps
> >>
> >> EMEA VIRTUALIZATION R&D
> >>
> >>
> >> Red Hat EMEA
> >>
> >> TRIED. TESTED. TRUSTED.
> >> phone: +972-9-7692018
> >> irc: eedri (on #tlv #rhev-dev #rhev-integ)  
> >
> >
> >
> > --
> >
> > Eyal edri
> >
> >
> > MANAGER
> >
> > RHV/CNV DevOps
> >
> > EMEA VIRTUALIZATION R&D
> >
> >
> > Red Hat EMEA
> >
> > TRIED. TESTED. TRUSTED.
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)  
> ___
> Devel mailing list -- de...@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/DA6Q5RE5JO3FYIKN2QLKLWMCUBQA2HBX/
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IKTOVHLDB2DH6BM7QF4VM6HI4KCWHPUZ/


Re: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 08-11-2018 ] [ verify_suspend_resume_vm0 ]

2018-11-10 Thread Dominik Holler
On Fri, 9 Nov 2018 11:14:56 +
Dafna Ron  wrote:

> Hi,
> 
> We have a regression in ovirt-engine which is causing a failure in test
> verify_suspend_resume_vm0 in basic suite.
> 
> Since the regression was introduced along side a second regression CQ was
> unable to point on the faulty patch, only on the first failed patch since
> the new regression.
> 
> Dominik debugged the issue and found the problematic change:
> 
> https://gerrit.ovirt.org/#/c/95222/ - engine : Updating template of VM Pool
> leaves tasks stuck after VMs shutdown
> 
> Jira: https://ovirt-jira.atlassian.net/browse/OVIRT-2571
> 
> error from engine log:
> 
> 2018-11-08 04:26:42,081-05 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-1) [] Operation Failed: [Cannot remove VM-Pool. VM is
> being updated.]
> 


I created https://gerrit.ovirt.org/#/c/95354/ .
According to
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3482/
this change seems to unbreak.
Please check if this is the way to go.

> 
> Thanks,
> Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LITXKWETNQ5J5OUWWP2MKQRKCNJNXNVP/


Re: [ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 01-10-2018 ] [ 002_bootstrap.download_engine_certs ]

2018-10-01 Thread Dominik Holler
On Mon, 1 Oct 2018 08:50:34 +0100
Dafna Ron  wrote:

> Hi,
> 
> We are failing project ovirt-engine on master branch.
> The issue seems to be related to the reported patch
> Dominik, can you please take a look?
> 

Thanks,
https://gerrit.ovirt.org/#/c/94585/ will be this fix, but is not yet enough 
verified.



> https://gerrit.ovirt.org/#/c/94582/ - packaging: Add MAC Pool range only if
> MAC Pool exists
> 
> full logs can be found here:
> 
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10442/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
> 
> Error:
> 
>  [...]  
> 2018-09-29 12:37:58,209-04 INFO
> [org.ovirt.engine.core.bll.network.macpool.MacPoolUsingRanges]
> (ServerService Thread Pool -- 43) [] Initializing
> MacPoolUsingRanges:{id='58ca604b-017d-0374-0220-014e'}
> 2018-09-29 12:37:58,220-04 ERROR
> [org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster]
> (ServerService Thread Pool -- 43) [] Error initializing: EngineException:
> MAC_POOL_INITIALIZATION_FAILED (Failed with error
> MAC_POOL_INITIALIZATION_FAILED and code 5010)
> 2018-09-29 12:37:58,237-04 ERROR [org.ovirt.engine.core.bll.Backend]
> (ServerService Thread Pool -- 43) [] Error during initialization:
> javax.ejb.EJBException: java.lang.IllegalStateException: WFLYEE0042: Failed
> to construct component instance
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:246)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.required(CMTTxInterceptor.java:362)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:144)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:72)
> [weld-ejb-3.0.4.Final.jar:3.0.4.Final]
> at
> org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.as.ejb3.component.singleton.ContainerManagedConcurrencyInterceptor.processInvocation(ContainerManagedConcurrencyInterceptor.java:106)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
> at
> org.wildfly.security.ma

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 27-09-20118 ] [ initialize_engine ]

2018-09-28 Thread Dominik Holler
https://gerrit.ovirt.org/#/c/94582/ fixes the issue

On Fri, 28 Sep 2018 20:04:21 +0100
Dafna Ron  wrote:

> Thanks
> Please note that ovirt-engine on 4.2 is broken and we had 2 more changes
> fail on this issue.
> 
> thanks,
> Dafna
> 
> 
> On Fri, Sep 28, 2018 at 3:10 PM Dominik Holler  wrote:
> 
> > On Thu, 27 Sep 2018 15:28:02 +0100
> > Dafna Ron  wrote:
> >  
>  [...]  
> > pool  
>  [...]  
> > https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3234/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_prevrelease_suite_el7_x86_64___test_initialize_engine/
> >   
>  [...]  
> > main  
>  [...]  
> > execute  
>  [...]  
> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> >   
>  [...]  
> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> >   
>  [...]  
> > "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/database.py",
> >   
>  [...]  
> >
> > I will have a look.
> >  
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZGJD377CQKOXSR2MSI36O7RRM2COTCJT/


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 27-09-20118 ] [ initialize_engine ]

2018-09-28 Thread Dominik Holler
On Thu, 27 Sep 2018 15:28:02 +0100
Dafna Ron  wrote:

> Hi,
> 
> we are failing on ovirt-engine 4.1 on the upgrade suite.
> 
> The issue seems to be related to this change:
> https://gerrit.ovirt.org/#/c/94551/ - packaging: Generate random MAC pool
> instead of hardcoded one
> 
> Can you please have a look and issue a fix?
> 
> Build log:
> 
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/3234/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_prevrelease_suite_el7_x86_64___test_initialize_engine/
> 
> error:
> 
> [ INFO  ] Yum Verify: 100/100: ovirt-engine-tools.noarch
> 0:4.1.9.1-1.el7.centos - ud
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Upgrading CA
> [ INFO  ] Installing PostgreSQL uuid-ossp extension into database
> [ INFO  ] Creating/refreshing DWH database schema
> [ INFO  ] Configuring WebSocket Proxy
> [ INFO  ] Creating/refreshing Engine database schema
> [ INFO  ] Creating/refreshing Engine 'internal' domain database schema
>   Unregistering existing client registration info.
> [ INFO  ] Creating default mac pool
> [ ERROR ] Failed to execute stage 'Misc configuration': insert or
> update on table "mac_pool_ranges" violates foreign key constraint
> "mac_pool_ranges_mac_pool_id_fkey"
>  DETAIL:  Key
> (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is not present in
> table "mac_pools".
>  CONTEXT:  SQL statement "INSERT INTO mac_pool_ranges (
>  mac_pool_id,
>  from_mac,
>  to_mac
>  )
>  VALUES (
>  v_mac_pool_id,
>  v_from_mac,
>  v_to_mac
>  )"
>  PL/pgSQL function insertmacpoolrange(uuid,character
> varying,character varying) line 3 at SQL statement
> 
> [ INFO  ] Rolling back to the previous PostgreSQL instance (postgresql).
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180927090017-97fd5u.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20180927090149-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
> ('FATAL Internal error (main): insert or update on table
> "mac_pool_ranges" violates foreign key constraint
> "mac_pool_ranges_mac_pool_id_fkey"\nDETAIL:  Key
> (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is not present in
> table "mac_pools".\nCONTEXT:  SQL statement "INSERT INTO
> mac_pool_ranges (\nmac_pool_id,\nfrom_mac,\n
> to_mac\n)\nVALUES (\nv_mac_pool_id,\n
> v_from_mac,\nv_to_mac\n)"\nPL/pgSQL function
> insertmacpoolrange(uuid,character varying,character varying) line 3 at
> SQL statement\n',)
> 
> lago.ssh: DEBUG: Command 483aadd2 on
> lago-upgrade-from-prevrelease-suite-4-2-engine  errors:
>  Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/__main__.py", line 88, in main
> installer.execute()
>   File "/usr/lib/python2.7/site-packages/otopi/main.py", line 157, in execute
> self.context.runSequence()
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 771,
> in runSequence
> util.raiseExceptionInformation(infos[0])
>   File "/usr/lib/python2.7/site-packages/otopi/util.py", line 81, in
> raiseExceptionInformation
> exec('raise info[1], None, info[2]')
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
> in _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> line 98, in _misc_db_entries
> self._create_new_mac_pool_range(range_prefix)
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/macpools.py",
> line 73, in _create_new_mac_pool_range
> to_mac=range_prefix + ':ff:ff',
>   File 
> "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/database.py",
> line 266, in execute
> args,
> IntegrityError: insert or update on table "mac_pool_ranges" violates
> foreign key constraint "mac_pool_ranges_mac_pool_id_fkey"
> DETAIL:  Key (mac_pool_id)=(58ca604b-017d-0374-0220-014e) is
> not present in table "mac_pools".
> CONTEXT:  SQL statement "INSERT INTO mac_pool_ranges (
> mac_pool_id,
> from_mac,
> to_mac
> )
> VALUES (
> v_mac_pool_id,
> v_from_mac,
> v_to_mac
> )"
> PL/pgSQL function insertmacpoolrange(uuid,character varying,character
> varying) line 3 at SQL statement
> 
> 
> Thanks,
> 
> Dafna

I will have a look.
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lis

Re: [CQ]: 94493,2 (vdsm) failed "ovirt-4.2" system tests

2018-09-27 Thread Dominik Holler
Adding Francesco

On Thu, 27 Sep 2018 15:49:54 +0100
Dafna Ron  wrote:

> we are failing master build-artifacts for the same issue now:
> 
> *14:34:18* [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/python2-2.7.15-3.fc28.s390x.rpm'
> 
> 
> Can someone in vdsm help or remove the job from testing until we
> figure this out? it is blocking new changes from running in CQ (i.e no
> new vdsm packages will be updated in tested repo until this is
> resolved)
> 
> 
> On Thu, Sep 27, 2018 at 2:22 PM Dafna Ron  wrote:
> 
> >
> >
> > On Thu, Sep 27, 2018 at 1:56 PM Sandro Bonazzola 
> > wrote:
> >  
>  [...]  
>  [...]  
>  [...]  
> >
> > I am seeing this a few lines above - can it be what is deleting it?
> >
> > + cp
> > /home/ovirt/workspace/vdsm_4.2_build-artifacts-fc28-s390x/vdsm/lib/vdsm/api/vdsm-api.html
> > /home/ovirt/workspace/vdsm_4.2_build-artifacts-fc28-s390x/vdsm/exported-artifacts
> >
> >
> > +
> > yum-builddep ./vdsm.spec
> >
> > Failed
> > to synchronize cache for repo 'virt-preview', disabling.
> >
> > Failed
> > to synchronize cache for repo 'vdo', disabling.
> >
> >
> >
> >  
>  [...]  
>  [...]  
>  [...]  
> >  
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4TDVZ5VY5HH5AZBOYBSE5EOSLXQWMM4X/


Re: [ovirt-devel] failure in ost test - Invalid argument - help in debugging issue

2018-09-10 Thread Dominik Holler
Looks like the problem is network related, we will have a deeper look.

On Mon, 10 Sep 2018 10:01:28 +0100
Dafna Ron  wrote:

> Hi,
> 
> can someone please have a look at this ost failure?
> it is not related to the change that failed and I think its probably a
> race.
> 
> you can find the logs here:
> 
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/10175/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> 
> The error I can see is this:
> 
> https://pastebin.com/pm6x0W62
> 
> Thanks,
> Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3OQQYVK4YHEHKTFEO74CHJ3A5VHTGUPM/


Re: [ OST Failure Report ] [ oVirt Master (ALL) ] [ 27-07-2018 ] [ 002_bootstrap.list_glance_images ]

2018-07-27 Thread Dominik Holler
On Fri, 27 Jul 2018 15:37:23 +0100
Dafna Ron  wrote:

> There are two issues here:
> 1. OST is exiting with wring error due to the local function not
> working which Gal has a patch for
> 2. there is an actual code regression which we suspects comes from
> the SDK
> 
> We suspect the issue is a new sdk package build yesterday
> https://cbs.centos.org/koji/buildinfo?buildID=23581
> 
> Here is a link to the fist failure's logs:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8800/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> 
> Dominic is currently looking at the issue.
> 

One reason was an incompatibility on bytecode level of the
openstack-java-sdk used at compile time and runtime.
Fix posted on https://gerrit.ovirt.org/93352

> 
> 
> 2018-07-26 13:47:37,745-04 DEBUG
> [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [3283d2df]
> Got: ***L:INFO Yum install: 217/529: libosinfo-1.0.0-1.el7.x86_64
> 2018-07-26 13:47:37,745-04 DEBUG
> [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [3283d2df]
> nextEvent: Log INFO Yum install: 217/529: libosinfo-1.0.0-1.el7.x86_64
> 2018-07-26 13:47:37,754-04 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-2) [b4915769-1cf0-4526-9214-e932d078cf07] method:
> runAction, params: [TestProviderConnectivity,
> ProviderParameters:{comma
> ndId='d29721ff-3dd7-4932-b43d-eee819f1afee', user='null',
> commandType='Unknown'}], timeElapsed: 33ms 2018-07-26 13:47:37,766-04
> INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [3283d2df] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-basic-suite-master-host-1. Yum install: 217/529:
> l ibosinfo-1.0.0-1.el7.x86_64. 2018-07-26 13:47:37,782-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-2) [] Operation Failed: WFLYEJB0442: Unexpected Error
> 2018-07-26 13:47:37,782-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-2) [] Exception: javax.ejb.EJBException: WFLYEJB0442:
> Unexpected Error at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:218)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:418)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:148)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at
> org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
> at
> org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
> [wildfly-ejb3-13.0.0.Final.jar:13.0.0.Final]
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at
> org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
>   

[JIRA] (OVIRT-2268) Enable IPv6 for resources.ovirt.org

2018-06-29 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-2268:
-

 Summary: Enable IPv6 for resources.ovirt.org
 Key: OVIRT-2268
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2268
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
  Components: oVirt Infra
Reporter: Dominik Holler
Assignee: infra


To allow access to resources.ovirt.org in IPv6 only setups, IPv6 should be 
enabled for resources.ovirt.org.
This would allow users in IPv6 only setups to install oVirt.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100088)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SQTIA4VFA5GQY6YWSWE6ZZUY46TAAHY2/


[JIRA] (OVIRT-2150) missing packages on check-patch

2018-06-07 Thread Dominik Holler (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=36915#comment-36915
 ] 

Dominik Holler commented on OVIRT-2150:
---

{quote}I think Dominic already sent a patch for it to replace it with
python2-netaddr{quote}

Yes, rebasing on current master including my change resolves this issue.

> missing packages on check-patch
> ---
>
> Key: OVIRT-2150
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2150
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Dafna Ron
>Assignee: infra
>  Labels: ost_failures, ost_infra, ost_infra_packages
>
> http://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/1170/consoleFull
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64] Error: Package: 
> python2-ovsdbapp-0.6.0-1.el7.noarch (alocalsync)
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64]Requires: 
> python-netaddr
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64] Error: Package: 
> ovirt-engine-4.3.0-0.0.master.20180604134534.git3e394fa.el7.noarch 
> (alocalsync)
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64]Requires: 
> pyOpenSSL
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64] Error: Package: 
> ovirt-provider-ovn-1.2.11-1.el7.noarch (alocalsync)
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64]Requires: 
> python-netaddr
> 10:18:14 [check-patch.basic_suite_master.el7.x86_64] 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100087)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/M42BKO65PDSY5MD6ECJBOE67Q3XHI4IZ/


Re: [ OST Failure Report ] [ oVirt Master (otopi+imgbased) ] [ 11-05-2018 ] [ 001_initialize_engine.test_initialize_engine ]

2018-05-11 Thread Dominik Holler
On Fri, 11 May 2018 13:05:54 +0300
Dafna Ron  wrote:

> Hi,
> 
> We are failing in 001_initialize_engine.test_initialize_engine in the
> upgrade suite.
> the issue seems to be related to ovn configuration.
> 
> The changes reported by CQ are not the cause of this failure and I
> may be mistaken but I suspect it may be related to one of the below
> changes.
> 
> *Link and headline of suspected patches: *
> 
> 
> 
> *https://gerrit.ovirt.org/#/c/90784/
> 
> - network: default ovn provider client is returned by
> fixturehttps://gerrit.ovirt.org/#/c/90327/
>  - backend, packing: Add default
> MTU for tunnelled networksLink to Job:*
> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7492/
> *
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7488/
> 
> 
> 
> 
> *Link to all
> logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/7492/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/
> (Relevant)
> error snippet from the log: *
> 
> 2018-05-11 04:14:34,940-0400 DEBUG otopi.context
> context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
> in _executeMethod
> method['method']()
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py",
> line 779, in _customization self._query_install_ovn()
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/network/ovirtproviderovn.py",
> line 399, in _query_install_ovn default=True
>   File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/dialog.py",
> line 47, in queryBoolean
> default=true if default else false,
>   File "/usr/share/otopi/plugins/otopi/dialog/human.py", line 211, in
> queryString
> value = self._readline(hidden=hidden)
>   File "/usr/lib/python2.7/site-packages/otopi/dialog.py", line 248,
> in _readline
> raise IOError(_('End of file'))
> IOError: End of file
> 2018-05-11 04:14:34,942-0400 ERROR otopi.context
> context._executeMethod:152 Failed to execute stage 'Environment
> customization': End of file
> 2018-05-11 04:14:34,972-0400 DEBUG
> otopi.plugins.otopi.debug.debug_failure.debug_failure
> debug_failure._notification:100 tcp connections:
> id uid local foreign state pid exe
> 0: 0 0.0.0.0:111 0.0.0.0:0 LISTEN 1829 /usr/sbin/rpcbind
> 1: 29 0.0.0.0:662 0.0.0.0:0 LISTEN 1868 /usr/sbin/rpc.statd
> 2: 0 0.0.0.0:22 0.0.0.0:0 LISTEN 970 /usr/sbin/sshd
> 3: 0 192.168.201.2:3260 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 4: 0 192.168.200.2:3260 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 5: 0 0.0.0.0:892 0.0.0.0:0 LISTEN 1874 /usr/sbin/rpc.mountd
> 6: 0 0.0.0.0:2049 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 7: 0 0.0.0.0:32803 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
> 8: 0 192.168.201.2:22 192.168.201.1:8 ESTABLISHED
> 5544 /usr/sbin/sshd 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
> 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
> 2018-05-11 04:14:34,973-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[( 'exceptions.IOError'>, IOError('End of file',),  0x239ab90>)]'  
> 2018-05-11 04:14:34,974-0400 DEBUG otopi.context
> context.dumpEnvironment:873 ENVIRONMENT DUMP - END
> 2018-05-11 04:14:34,975-0400 INFO otopi.context
> context.runSequence:741 Stage: Clean up
> 2018-05-11 04:14:34,975-0400 DEBUG otopi.context
> context.runSequence:745 STAGE cleanup
> 2018-05-11 04:14:34,976-0400 DEBUG otopi.context
> context._executeMethod:128 Stage cleanup METHOD
> otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file
> 2018-05-11 04:14:34,977-0400 DEBUG otopi.context
> context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
> 2018-05-11 04:14:34,977-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI
> answer file, generated by human dialog
> [environment:default]
> '
> 
> 
> 
> 
> *Thanks, Dafna*


The upgrade suite looks like the change of the initial version of
oVirt from 4.1 to 4.2 is not yet completed.
https://gerrit.ovirt.org/#/c/91172/ fixes this issue, but the next one
seems to be
[ INFO  ] Configuring WebSocket Proxy\n
[ INFO  ] Backing up database localhost:engine to
\'/var/lib/ovirt-engine/backups/engine-20180511123046.d7PUoD.dump\'.\n
[ INFO  ] Creating/refreshing Engine database schema\n
[ ERROR ] schema.sh: FATAL:
Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbs

Re: OST Network suite is failing on "OSError: [Errno 28] No space left on device"

2018-03-19 Thread Dominik Holler
Thanks Gal, I expect the problem is fixed until something eats
all space in /dev/shm.
But the usage of /dev/shm is logged in the output, so we would be able
to detect the problem next time instantly.

>From my point of view it would be good to know why /dev/shm was full,
to prevent this situation in future.


 On Mon, 19 Mar 2018 18:44:54
+0200 Gal Ben Haim  wrote:

> I see that this failure happens a lot on "ovirt-srv19.phx.ovirt.org
> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org>", and by
> different projects that uses ansible.
> Not sure it relates, but I've found (and removed) a stale lago
> environment in "/dev/shm" that were created by
> ovirt-system-tests_he-basic-iscsi-suite -master
> <http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_he-basic-iscsi-suite-master/>
> .
> The stale environment caused the suite to not run in "/dev/shm".
> The maximum number of semaphore on both  ovirt-srv19.phx.ovirt.org
> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org> and
> ovirt-srv23.phx.ovirt.org
> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org> (which
> run the ansible suite with success) is 128.
> 
> On Mon, Mar 19, 2018 at 3:37 PM, Yedidyah Bar David 
> wrote:
> 
> > Failed also here:
> >
> > http://jenkins.ovirt.org/job/ovirt-system-tests_master_
> > check-patch-el7-x86_64/4540/
> >
> > The patch trigerring this affects many suites, and the job failed
> > during ansible-suite-master .
> >
> > On Mon, Mar 19, 2018 at 3:10 PM, Eyal Edri  wrote:
> >  
> >> Gal and Daniel are looking into it, strange its not affecting all
> >> suites.
> >>
> >> On Mon, Mar 19, 2018 at 2:11 PM, Dominik Holler
> >>  wrote:
> >>  
> >>> Looks like /dev/shm is run out of space.
> >>>
> >>> On Mon, 19 Mar 2018 13:33:28 +0200
> >>> Leon Goldberg  wrote:
> >>>  
> >>> > Hey, any updates?
> >>> >
> >>> > On Sun, Mar 18, 2018 at 10:44 AM, Edward Haas 
> >>> > wrote:
> >>> >  
> >>> > > We are doing nothing special there, just executing ansible
> >>> > > through their API.
> >>> > >
> >>> > > On Sun, Mar 18, 2018 at 10:42 AM, Daniel Belenky
> >>> > >  wrote:
> >>> > >  
> >>> > >> It's not a space issue. Other suites ran on that slave after
> >>> > >> your suite successfully.
> >>> > >> I think that the problem is the setting for max semaphores,
> >>> > >> though I don't know what you're doing to reach that limit.
> >>> > >>
> >>> > >> [dbelenky@ovirt-srv18 ~]$ ipcs -ls
> >>> > >>
> >>> > >> -- Semaphore Limits 
> >>> > >> max number of arrays = 128
> >>> > >> max semaphores per array = 250
> >>> > >> max semaphores system wide = 32000
> >>> > >> max ops per semop call = 32
> >>> > >> semaphore max value = 32767
> >>> > >>
> >>> > >>
> >>> > >> On Sun, Mar 18, 2018 at 10:31 AM, Edward Haas
> >>> > >>  wrote:  
> >>> > >>> http://jenkins.ovirt.org/job/ovirt-system-tests_network-suit  
> >>> e-master/  
> >>> > >>>
> >>> > >>> On Sun, Mar 18, 2018 at 10:24 AM, Daniel Belenky
> >>> > >>>  wrote:
> >>> > >>>  
> >>> > >>>> Hi Edi,
> >>> > >>>>
> >>> > >>>> Are there any logs? where you're running the suite? may I
> >>> > >>>> have a link?
> >>> > >>>>
> >>> > >>>> On Sun, Mar 18, 2018 at 8:20 AM, Edward Haas
> >>> > >>>>  wrote:  
> >>> > >>>>> Good morning,
> >>> > >>>>>
> >>> > >>>>> We are running in the OST network suite a test module with
> >>> > >>>>> Ansible and it started failing during the weekend on
> >>> > >>>>> "OSError: [Errno 28] No space left on device" when
> >>> > >>>>> attempting to take a lock in the mutiprocessing python
> >>> > >>>>> module.
> >>> > >>

Re: OST Network suite is failing on "OSError: [Errno 28] No space left on device"

2018-03-19 Thread Dominik Holler
Looks like /dev/shm is run out of space.

On Mon, 19 Mar 2018 13:33:28 +0200
Leon Goldberg  wrote:

> Hey, any updates?
> 
> On Sun, Mar 18, 2018 at 10:44 AM, Edward Haas 
> wrote:
> 
> > We are doing nothing special there, just executing ansible through
> > their API.
> >
> > On Sun, Mar 18, 2018 at 10:42 AM, Daniel Belenky
> >  wrote:
> >  
> >> It's not a space issue. Other suites ran on that slave after your
> >> suite successfully.
> >> I think that the problem is the setting for max semaphores, though
> >> I don't know what you're doing to reach that limit.
> >>
> >> [dbelenky@ovirt-srv18 ~]$ ipcs -ls
> >>
> >> -- Semaphore Limits 
> >> max number of arrays = 128
> >> max semaphores per array = 250
> >> max semaphores system wide = 32000
> >> max ops per semop call = 32
> >> semaphore max value = 32767
> >>
> >>
> >> On Sun, Mar 18, 2018 at 10:31 AM, Edward Haas 
> >> wrote: 
> >>> http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/
> >>>
> >>> On Sun, Mar 18, 2018 at 10:24 AM, Daniel Belenky
> >>>  wrote:
> >>>  
>  Hi Edi,
> 
>  Are there any logs? where you're running the suite? may I have a
>  link?
> 
>  On Sun, Mar 18, 2018 at 8:20 AM, Edward Haas 
>  wrote: 
> > Good morning,
> >
> > We are running in the OST network suite a test module with
> > Ansible and it started failing during the weekend on "OSError:
> > [Errno 28] No space left on device" when attempting to take a
> > lock in the mutiprocessing python module.
> >
> > It smells like a slave resource problem, could someone help
> > investigate this?
> >
> > Thanks,
> > Edy.
> >
> > === FAILURES
> > === __
> > test_ovn_provider_create_scenario ___
> >
> > os_client_config = None
> >
> > def test_ovn_provider_create_scenario(os_client_config):  
> > >   _test_ovn_provider('create_scenario.yml')  
> >
> > network-suite-master/tests/test_ovn_provider.py:68:
> > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> > _ _ _ _ _ _ _ _
> > network-suite-master/tests/test_ovn_provider.py:78: in
> > _test_ovn_provider playbook.run()
> > network-suite-master/lib/ansiblelib.py:127: in run
> > self._run_playbook_executor()
> > network-suite-master/lib/ansiblelib.py:138: in
> > _run_playbook_executor pbex =
> > PlaybookExecutor(**self._pbex_args) 
> > /usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py:60:
> > in __init__ self._tqm = TaskQueueManager(inventory=inventory,
> > variable_manager=variable_manager, loader=loader,
> > options=options,
> > passwords=self.passwords) 
> > /usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py:104:
> > in __init__ self._final_q =
> > multiprocessing.Queue() 
> > /usr/lib64/python2.7/multiprocessing/__init__.py:218:
> > in Queue return
> > Queue(maxsize) /usr/lib64/python2.7/multiprocessing/queues.py:63:
> > in __init__ self._rlock =
> > Lock() /usr/lib64/python2.7/multiprocessing/synchronize.py:147:
> > in __init__ SemLock.__init__(self, SEMAPHORE, 1, 1) _ _ _ _ _ _
> > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> > _ _
> >
> > self = , kind = 1, value = 1, maxvalue = 1
> >
> > def __init__(self, kind, value, maxvalue):  
> > >   sl = self._semlock = _multiprocessing.SemLock(kind,
> > > value, maxvalue)  
> > E   OSError: [Errno 28] No space left on device
> >
> > /usr/lib64/python2.7/multiprocessing/synchronize.py:75: OSError
> >
> >  
> 
> 
>  --
> 
>  DANIEL BELENKY
> 
>  RHV DEVOPS
>   
> >>>
> >>>  
> >>
> >>
> >> --
> >>
> >> DANIEL BELENKY
> >>
> >> RHV DEVOPS
> >>  
> >
> >  

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (cockpit-ovirt) ] [ 15-03-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2018-03-15 Thread Dominik Holler
On Thu, 15 Mar 2018 16:24:10 +
Dafna Ron  wrote:

> Hi,
> 
> We have a failure on master for test
> 098_ovirt_provider_ovn.use_ovn_provider in project cockpit-ovirt.
> This seems to be a race because object is locked. also, the actual
> failure is logged as WARN and not ERROR.
> 
> I don't think the patch is actually related to the failure but I
> think the test should be fixed.
> can you please review to make sure we do not have an actual
> regression and let me know if we need to open a bz to fix the test?
> 
> 
> *Link and headline of suspected patches: *
> *https://gerrit.ovirt.org/#/c/89020/2
>  - *
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *wizard: Enable scroll on start page for low-res screensLink to
> Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374
> Link
> to all
> logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6374/artifacts
> (Relevant)
> error snippet from the log: 2018-03-15 10:05:00,160-04 DEBUG
> [org.ovirt.engine.core.sso.servlets.OAuthTokenInfoServlet] (default
> task-10) [] Sending json response2018-03-15 10:05:00,160-04 DEBUG
> [org.ovirt.engine.core.sso.utils.TokenCleanupUtility] (default
> task-10) [] Not cleaning up expired tokens2018-03-15 10:05:00,169-04
> INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23] Lock
> Acquired to object
> 'EngineLock:{exclusiveLocks='[c38a67ec-0b48-4e6f-be85-70c700df5483=PROVIDER]',
> sharedLocks=''}'2018-03-15 10:05:00,184-04 INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-90) [789edb23]
> Running command: SyncNetworkProviderCommand internal: true.2018-03-15
> 10:05:00,228-04 DEBUG
> [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall]
> (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Compiled
> stored procedure. Call string is [{call
> getdcidbyexternalnetworkid(?)}]2018-03-15 10:05:00,228-04 DEBUG
> [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall]
> (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] SqlCall for
> procedure [GetDcIdByExternalNetworkId] compiled2018-03-15
> 10:05:00,229-04 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] method:
> runQuery, params: [GetAllExternalNetworksOnProvider,
> IdQueryParameters:{refresh='false', filtered='false'}], timeElapsed:
> 353ms2018-03-15 10:05:00,239-04 INFO
> [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default
> task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Failed to Acquire
> Lock to object 'EngineLock:{exclusiveLocks='[network_1=NETWORK,
> c38a67ec-0b48-4e6f-be85-70c700df5483=PROVIDER]',
> sharedLocks=''}'2018-03-15 10:05:00,239-04 WARN
> [org.ovirt.engine.core.bll.network.dc.AddNetworkCommand] (default
> task-13) [e1328379-17b7-49f8-beb2-cf8331784828] Validation of action
> 'AddNetwork' failed for user admin@internal-authz. Reasons:
> VAR__TYPE__NETWORK,VAR__ACTION__ADD,ACTION_TYPE_FAILED_PROVIDER_LOCKED,$providerId
> c38a67ec-0b48-4e6f-be85-70c700df54832018-03-15 10:05:00,240-04 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-13) [e1328379-17b7-49f8-beb2-cf8331784828] method:
> runAction, params: [AddNetwork,
> AddNetworkStoragePoolParameters:{commandId='61b365ec-27c1-49af-ad72-f907df8befcd',
> user='null', commandType='Unknown'}], timeElapsed: 10ms2018-03-15
> 10:05:00,250-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-13) [] Operation Failed: [Cannot add Network. Related
> operation on provider with the id
> c38a67ec-0b48-4e6f-be85-70c700df5483 is currently in progress. Please
> try again later.]2018-03-15 10:05:00,254-04 DEBUG
> [org.ovirt.engine.core.utils.servlet.LocaleFilter] (default task-14)
> [] Incoming locale 'en-US'. Filter determined locale to be
> 'en-US'2018-03-15 10:05:00,254-04 DEBUG
> [org.ovirt.engine.core.sso.servlets.OAuthTokenServlet] (default
> task-14) [] Entered OAuthTokenServlet Query String: null,
> Parameters : password = ***, grant_type = password, scope =
> ovirt-app-api ovirt-ext=token-info:validate, username =
> admin@internal, *

I will care about this.
The problem is that SyncNetworkProviderCommand is running in the
background and locking the provider, which blocks the lock for the
tested AddNetworkCommand.
The related changes are
core: Add locking for Add and RemoveNetworkCommand
https://gerrit.ovirt.org/#/c/85480/
and
core: Add SyncNetworkProviderCommand
https://gerrit.ovirt.org/#/c/85134/


___
Infra mailing list
I

ovirt-engine_4.2_check-patch-fc27-x86_64 fails

2018-03-12 Thread Dominik Holler
Hi,
dnf in CI jobs ovirt-engine_4.2_check-patch-fc27-x86_64 [1] is failing
for me with:
ImportError: libelf.so.1: cannot open shared object file: No such file
or directory
Can you please have a look?
Thanks
Dominik

[1]
  http://jenkins.ovirt.org/job/ovirt-engine_4.2_check-patch-fc27-x86_64/411



12:48:21 Start: dnf install
12:48:22 ERROR: Command failed: 
12:48:22  # /usr/bin/dnf
--installroot 
/var/lib/mock/fedora-27-x86_64-27659b1e362bd3ee3ef0da8cd3f0c97c-26540/root/
--releasever 27 --disableplugin=local --setopt=deltarpm=False install
@buildsys-build libcrypt-nss autoconf git java-1.8.0-openjdk-devel make
maven net-tools otopi postgresql-jdbc postgresql-server
postgresql-contrib pyflakes python2-mock python2-psycopg2
python2-pytest python-devel python-isort python-pep8 yum-utils
--setopt=tsflags=nocontexts 12:48:22 Traceback (most recent call last):
12:48:22   File "/usr/bin/dnf", line 57, in  12:48:22 from
dnf.cli import main 12:48:22   File
"/usr/lib/python3.6/site-packages/dnf/__init__.py", line 31, in
 12:48:22 import dnf.base 12:48:22   File
"/usr/lib/python3.6/site-packages/dnf/base.py", line 29, in 
12:48:22 from dnf.yum import history 12:48:22   File
"/usr/lib/python3.6/site-packages/dnf/yum/history.py", line 22, in
 12:48:22 import hawkey 12:48:22   File
"/usr/lib64/python3.6/site-packages/hawkey/__init__.py", line 24, in
 12:48:22 from . import _hawkey 12:48:22 ImportError:
libelf.so.1: cannot open shared object file: No such file or directory
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [ OST Failure Report ] [ oVirtMaster (otopi) ] [ 01-02-2018 ] [ 001_initialize_engine.initialize_engine/001_upgrade_engine.test_initialize_engine ]

2018-02-02 Thread Dominik Holler
On Thu, 1 Feb 2018 15:57:46 +
Dafna Ron  wrote:

> Hi,
> 
> We are failing initialize engine on both basic and upgrade suites.
> 
> Can you please check?
> 
> *Link and headline of suspected patches:
> https://gerrit.ovirt.org/#/c/86679/
>  - *
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *core: Check Sequence before/afterLink to
> Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/
> Link
> to all
> logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5187/artifact/
> (Relevant)
> error snippet from the log: 2018-02-01 10:38:27,057-0500 DEBUG
> otopi.plugins.otopi.dialog.human dialog.__logString:204
> DIALOG:SEND Version: otopi-1.7.7_master
> (otopi-1.7.7-0.0.master.20180201063428.git81ce9b7.el7.centos)2018-02-01
> 10:38:27,058-0500 ERROR otopi.context context.check:833 "before"
> parameter of method
> otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_provider
> is a string, should probably be a tuple. Perhaps a missing
> comma?2018-02-01 10:38:27,058-0500 DEBUG
> otopi.plugins.otopi.dialog.human dialog.__logString:204
> DIALOG:SEND methodinfo: {'priority': 5000, 'name':
> None, 'before': 'osetup.ovn.provider.service.restart', 'after':
> ('osetup.pki.ca .available',
> 'osetup.ovn.services.restart'), 'method':  method ?._misc_configure_provider of
>  object at 0x2edf6d0>>, 'condition':  of
>  object at 0x2edf6d0>>, 'stage': 11}2018-02-01 10:38:27,059-0500 DEBUG
> otopi.context context._executeMethod:143 method exceptionTraceback
> (most recent call last):  File
> "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
> _executeMethodmethod['method']()  File
> "/usr/share/otopi/plugins/otopi/core/misc.py", line 61, in _setup
> self.context.checkSequence()  File
> "/usr/lib/python2.7/site-packages/otopi/context.py", line 844, in
> checkSequenceraise RuntimeError(_('Found bad "before" or "after"
> parameters'))RuntimeError: Found bad "before" or "after"
> parameters2018-02-01 10:38:27,059-0500 ERROR otopi.context
> context._executeMethod:152 Failed to execute stage 'Environment
> setup': Found bad "before" or "after" parameters*

Seems like the new introduced check of
https://gerrit.ovirt.org/#/c/86679/ works.
I posted https://gerrit.ovirt.org/#/c/87045/ to fix this. Locally it
works for me, but I still have test this change in OST on jenkins.

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1246) Gerrit: 500 Internal server error

2017-03-10 Thread Dominik Holler (oVirt JIRA)
Dominik Holler created OVIRT-1246:
-

 Summary: Gerrit: 500 Internal server error
 Key: OVIRT-1246
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1246
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Dominik Holler
Assignee: infra


Hello,
I get a "500 Internal server error" during accessing
https://gerrit.ovirt.org/#/c/67563/ .
Git gives me a "(change 67563 missing revisions)" error.
Last action I triggered was to delete a draft revision.

I abandoned this change, and created a new one to continue, but maybe
you are interested in this fault.

Dominik



--
This message was sent by Atlassian JIRA
(v1000.815.1#100035)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [OST Failure Report] [oVirt master] [09.02.2017] [test-repo_ovirt_experimental_master]

2017-02-21 Thread Dominik Holler
A deep analysis of the logfiles gives details about the
unexpected behavior, but I regret to not provide the fault causing the
unexpected behavior.

To get this fault, the help of someone familiar with
org.ovirt.vdsm.jsonrpc.client.JsonRpcClient is needed.

In the failing test "assign_labeled_network" a (labeld) network is
assigned to the cluster. For this reason the network has to be added to
the hosts. After that, the test "assign_labeled_network" checks, if the
engine acknowledges that hosts are in the labeld network. This
execution of the test failed, because this acknowledge of the engine
is missing after 180 seconds [3].

There are two hosts lago-basic-suite-master-host0 and
lago-basic-suite-master-host1 in the scenario.
lago-basic-suite-master-host1 fails and  
lago-basic-suite-master-host0 succeeds, so only 
lago-basic-suite-master-host1 is analyzed below.

Please find here the most relevant steps causing this error:
1. The engine sends Host.setupNetworks to the hosts in 
   line 40279 - 40295 in [1] with
   "id":"02298344-165f-47e4-9ea4-7c17a55d37f8".
2. The host executes the Host.setupNetworks RPC call successfully in
   line 1286 in [2].
3. The engine receives the acknowledgment of the successful execution
   in line 40716 and 40717 in [1].
4. The error occurs in line 40718: 
   '[org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) []
   Not able to update response for
   "02298344-165f-47e4-9ea4-7c17a55d37f8"'. This means the engine can
   not process the acknowledgment of the successful execution.
5. The command HostSetupNetworksVDS is aborted.
   So Host.getCapabilities is skipped and the engine database is not
   updated with the new network configuration of the host.
6. Since the test script relays on the information from database
   about host network configuration, it does not see that
   Host.setupNetworks is successfully executed and stops with the
   error "False != True after 180 seconds" [3]

So the fault happens before or in step 4 and is around the jsonrpc
communication.

It is an open action item to precise the location of the fault.



[1]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log

[2]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/lago-basic-suite-master-host1/_var_log/vdsm/vdsm.log

[3]
  
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/testReport/junit/(root)/005_network_by_label/assign_labeled_network/



On Thu, 9 Feb 2017 14:52:52 +0200
Shlomo Ben David  wrote:

> Hi,
> 
> 
> *Test failed:* [test-repo_ovirt_experimental_master]
> 
> *Link to suspected patches:* n/a
> 
> *Link to Job:*
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217
> 
> *Link to all logs:*
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5217/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-005_network_by_label.py/
> 
> *Error snippet from the log: *
> 
> 
> 
> ifup/VLAN100_Network::ERROR::2017-02-09
> 06:21:15,236::concurrent::189::root::(run) FINISH thread
>  failed
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line
> 185, in run ret = func(*args, **kwargs)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
> line 949, in _exec_ifup _exec_ifup_by_name(iface.name, cgroup)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
> line 935, in _exec_ifup_by_name raise
> ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
> ConfigNetworkError: (29, 'Determining IPv6 information for
> VLAN100_Network... failed.')
> 
> 
> 
> Best Regards,
> 
> Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
> RHCSA | RHCVA | RHCE
> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
> 
> OPEN SOURCE - 1 4 011 && 011 4 1

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: SQLException on http://artifactory.ovirt.org/

2016-12-14 Thread Dominik Holler
http://artifactory.ovirt.org works again, thank you.

On Wed, 14 Dec 2016 15:59:18 +0200
Gil Shinar  wrote:

> Can you please retry?
> 
> Thanks
> Gil
> 
> On Tue, Dec 13, 2016 at 2:27 PM, Dominik Holler 
> wrote:
> 
> > Hi,
> > on access to
> > http://artifactory.ovirt.org/artifactory/ovirt-mirror/org/
> > apache/maven/surefire/surefire-junit4/2.7.2/surefire-junit4-2.7.2.pom
> > the following error message is responded:
> > {
> >   "errors" : [ {
> > "status" : 500,
> > "message" : "Could not process download request:
> > java.sql.SQLException: An SQL data change is not permitted for a
> > read-only connection, user or database." } ] }
> >
> > This seems to be critical for ovirt-4.0 CI build.
> >
> > Who can fix this?
> >
> > Thanks, Dominik
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.phx.ovirt.org/mailman/listinfo/infra
> >  

___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/infra


SQLException on http://artifactory.ovirt.org/

2016-12-13 Thread Dominik Holler
Hi,
on access to
http://artifactory.ovirt.org/artifactory/ovirt-mirror/org/apache/maven/surefire/surefire-junit4/2.7.2/surefire-junit4-2.7.2.pom
the following error message is responded:
{
  "errors" : [ {
"status" : 500,
"message" : "Could not process download request:
java.sql.SQLException: An SQL data change is not permitted for a
read-only connection, user or database." } ] }

This seems to be critical for ovirt-4.0 CI build.

Who can fix this?

Thanks, Dominik
___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/infra


Permission to (re-)trigger gerrit builds in jenkins

2016-10-21 Thread Dominik Holler
Hi,
I like to have the permission to retrigger failed gerrit builds on the
jenkins build system.
Who can enable me to do so?
Thanks,
Dominik
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra