[JIRA] (OVIRT-3101) Add "ci system-test" command

2021-06-24 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=41030#comment-41030
 ] 

Marcin Sobczyk commented on OVIRT-3101:
---

On 6/23/21 5:44 PM, Nir Soffer wrote:
> Similar to "ci build", "ci test", "ci merge" add a new command that
> triggers OST run.
>
> Running OST is tied now in vdsm (and engine?) to Code-Review: +2.
> This causes trouble and does not allow non-maintainers to use the convenient 
> OST
> infrastructure.
>
> Expected flow:
>
> 1. User adds a comment with "ci system-test"
"ci system-test" is sooo long, I vote for "ci ost".

Regards, Marcin

> 2. The OST flow is triggered, building and running OST
> ___
> Devel mailing list -- de...@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/2FCJZLFJJ2SB3KVQ3YREZBVEYXPBQRUN/

> Add "ci system-test" command
> 
>
> Key: OVIRT-3101
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3101
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Similar to "ci build", "ci test", "ci merge" add a new command that
> triggers OST run.
> Running OST is tied now in vdsm (and engine?) to Code-Review: +2.
> This causes trouble and does not allow non-maintainers to use the convenient 
> OST
> infrastructure.
> Expected flow:
> 1. User add a comment with "ci system-test"
> 2. OST flow building and running OST triggered
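A minimal sketch (not the actual STDCI/Jenkins implementation) of how such a trigger
comment could be matched, assuming both the full "ci system-test" spelling and the
shorter "ci ost" suggested above are accepted; everything else is illustrative:

    import re

    # Hypothetical matcher for the proposed trigger comment.
    OST_TRIGGER = re.compile(r"^\s*ci (system-test|ost)\s*$", re.MULTILINE)

    def should_trigger_ost(gerrit_comment):
        """Return True if the Gerrit comment asks for an OST run."""
        return bool(OST_TRIGGER.search(gerrit_comment))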



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100166)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/JQTHIJEUGUP4SKNJX6N25U4JGDUAS2ZX/


[JIRA] (OVIRT-2958) Please create new project called 'ost-images' on gerrit

2020-06-11 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-2958:
-

 Summary: Please create new project called 'ost-images' on gerrit
 Key: OVIRT-2958
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2958
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Marcin Sobczyk
Assignee: infra


Please create a project called 'ost-images' on gerrit. The project should be 
initialized with an empty commit and should use STDCI pipelines.
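For illustration only, the request boils down to something like the following call to
Gerrit's standard create-project ssh command; the exact host, port and required
permissions are assumptions of this sketch, not taken from the ticket:

    import subprocess

    # Hypothetical invocation; host and ssh port are assumed.
    subprocess.run(
        ["ssh", "-p", "29418", "gerrit.ovirt.org",
         "gerrit", "create-project", "ost-images", "--empty-commit"],
        check=True,
    )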



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100128)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WF6773K6RBDFPZCYURQQYVVFVHCZNCZN/


[JIRA] (OVIRT-2964) qemu-kvm fails in el8 mock

2020-06-19 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-2964:
-

 Summary: qemu-kvm fails in el8 mock
 Key: OVIRT-2964
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2964
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: Marcin Sobczyk
Assignee: infra


Since today (19.6.2020), virt-install is [failing in el8 
mock|https://jenkins.ovirt.org/job/standard-manual-runner/1291/console#L1,600] 
with:

{{13:27:15 qemu-kvm: error: failed to set MSR 0x48e to 0x401e1720401e172
13:27:15 qemu-kvm: /builddir/build/BUILD/qemu-2.12.0/target/i386/kvm.c:2119: 
kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.}}

This is causing failures while trying to build OST images. It's possible that 
packages in mock are now coming from CentOS 8.2 and refuse to work with the 
el7 stuff we have underneath.
Could someone look into this, please?
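A quick, hypothetical way to check the theory above (that the chroot is CentOS 8.2
while the host underneath still runs an el7-era kernel); the heuristic is only a
sketch, not a diagnosis:

    import platform

    # el7 hosts ship 3.10.0-* kernels; a newer CentOS 8.x userspace in the chroot
    # may rely on features such a kernel cannot provide.
    def looks_like_el7_host():
        return platform.release().startswith("3.10.0")

    if __name__ == "__main__":
        print("host kernel:", platform.release(),
              "(el7-era)" if looks_like_el7_host() else "")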



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100130)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NKAXJJR7CU47EXYJENYJDAD66DLQTRS7/


[JIRA] (OVIRT-3022) poll-upstream-sources pipelines don't work on el8 nodes

2020-09-23 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-3022:
-

 Summary: poll-upstream-sources pipelines don't work on el8 nodes
 Key: OVIRT-3022
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3022
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: Marcin Sobczyk
Assignee: infra


Trying to run the `poll-upstream-sources` pipeline on an el8 node ends with an 
[error|https://jenkins.ovirt.org/blue/organizations/jenkins/ost-images_master_standard-poll-upstream-sources/detail/ost-images_master_standard-poll-upstream-sources/99/pipeline]:

{noformat}
[2020-09-23T02:07:48.761Z] + 
/home/jenkins/workspace/ost-images_master_standard-poll-upstream-sources/jenkins/stdci_tools/usrc.py
 --log -d get
[2020-09-23T02:07:48.761Z] /usr/bin/env: ‘python’: No such file or directory
{noformat}

After attempting to resolve the issue by installing python2 and pointing the 
unversioned python command to python2, we fail on dependencies:

{noformat}
[2020-09-23T09:39:13.566Z] + 
/home/jenkins/workspace/ost-images_master_standard-poll-upstream-sources/jenkins/stdci_tools/usrc.py
 --log -d get
[2020-09-23T09:39:13.566Z] Traceback (most recent call last):
[2020-09-23T09:39:13.566Z]   File 
"/home/jenkins/workspace/ost-images_master_standard-poll-upstream-sources/jenkins/stdci_tools/usrc.py",
 line 11, in <module>
[2020-09-23T09:39:13.566Z] import yaml
[2020-09-23T09:39:13.566Z] ImportError: No module named yaml
{noformat}

A quick look at the `usrc.py` script shows we need at least 3 deps:
* python2-six
* python2-pyyaml
* python2-pyxdg (or simply pyxdg)
The last package is not available at all in el8 repos.

Please provide a solution to this problem, either by moving the script to py3 
or by some other means.
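If the py3 route is taken, a hypothetical prologue for the script could look like the
sketch below; only the dependencies named above are assumed, and the el8 package names
in the comments are guesses, not verified against the repos:

    #!/usr/bin/env python3
    # Hypothetical py3 prologue for a script like usrc.py.
    import six   # python3-six
    import yaml  # python3-pyyaml
    import xdg   # python3-pyxdg (pyxdg) - availability in el8 repos would need checking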




--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100146)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CYNXAN7IHXQKXZQL3MZJKGS6ZIMUPSDV/


[JIRA] (OVIRT-3068) Please increase the amount of RAM for templates.ovirt.org VM to 6-8GBs

2020-12-02 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-3068:
-

 Summary: Please increase the amount of RAM for templates.ovirt.org 
VM to 6-8GBs
 Key: OVIRT-3068
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3068
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Marcin Sobczyk
Assignee: infra


We're planning to merge [some 
changes|https://gerrit.ovirt.org/#/c/ost-images/+/112448/] to ost-images for 
building HE images and reducing the overall size of the qcows and RPMs. These 
changes will require more RAM to be available on the machine - at least 6GB 
(but if 8GB is possible then I would go for it). AFAICS templates.ovirt.org 
currently has only 4GB of RAM - please bump it. Thanks!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100152)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SLFIL4FGTM7RKMO26RAOIC2SJ4DT5H6B/


[JIRA] (OVIRT-3074) Long filenames chopped in directory index

2021-01-07 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40991#comment-40991
 ] 

Marcin Sobczyk commented on OVIRT-3074:
---



On 1/5/21 9:36 AM, Shlomi Zidmi (oVirt JIRA) wrote:
>   [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>  ]
>
> Shlomi Zidmi updated OVIRT-3074:
> 
>Assignee:   (was: )
>  Resolution: Fixed
>  Status: Done  (was: To Do)
>
> Hi,
> As requested, long filenames are no longer being chopped by httpd.
Thanks for this! Never thought of that - a 1000x increase in usability.

>
>> Long filenames chopped in directory index
>> -
>>
>>  Key: OVIRT-3074
>>  URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3074
>>  Project: oVirt - virtualization made easy
>>   Issue Type: By-EMAIL
>> Reporter: Yedidyah Bar David
>> Assignee: Shlomi Zidmi
>>
>> Hi all,
>> Can you please configure stuff so that [1] will show full file names?
>> If that's apache httpd, should be doable by adding to conf (e.g.
>> .htaccess or somewhere in /etc/httpd):
>>  IndexOptions NameWidth=*
>> Thanks and best regards,
>> [1] https://templates.ovirt.org/yum/
>> -- 
>> Didi
>
>
> --
> This message was sent by Atlassian Jira
> (v1001.0.0-SNAPSHOT#100153)
>

> Long filenames chopped in directory index
> -
>
> Key: OVIRT-3074
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3074
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yedidyah Bar David
>Assignee: Shlomi Zidmi
>
> Hi all,
> Can you please configure stuff so that [1] will show full file names?
> If that's apache httpd, should be doable by adding to conf (e.g.
> .htaccess or somewhere in /etc/httpd):
> IndexOptions NameWidth=*
> Thanks and best regards,
> [1] https://templates.ovirt.org/yum/
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100154)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4EDZGKOJGYFYEZTDUIMOCNXMZJPXCW2T/


[JIRA] (OVIRT-3083) [lago] Fork 'lago' project to 'lago-ost'

2021-01-26 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-3083:
-

 Summary: [lago] Fork 'lago' project to 'lago-ost'
 Key: OVIRT-3083
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3083
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Marcin Sobczyk
Assignee: infra


The [github repository|https://github.com/lago-project/lago] we currently use 
for Lago development has serious CI problems and doesn't use STDCI. Since 
moving the project to STDCI is hard and Lago is not maintained as a 
general-purpose tool anymore, let's:
* fork the project to gerrit as 'lago-ost'
* make the new project use STDCI and d/s PSI (the throw-away-VMs 
implementation, not the static agents)
* mark the current project on github as EOLed



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100154)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/JHPMV6VXLUPNXRGXSOCQRQ7ZCW3E6UNE/


[JIRA] (OVIRT-3084) [lago-ost] Work on STDCI for the fork of lago in gerrit

2021-01-26 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-3084:
-

 Summary: [lago-ost] Work on STDCI for the fork of lago in gerrit
 Key: OVIRT-3084
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3084
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Marcin Sobczyk
Assignee: infra






--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100154)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QZFILIH3NWEF3JYEHYCIORNCJFCBQ6YU/


[JIRA] (OVIRT-3085) ovirt-system-tests_manual job config

2021-02-03 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=41002#comment-41002
 ] 

Marcin Sobczyk commented on OVIRT-3085:
---

[~accountid:557058:42767862-fc29-47f5-aaab-17e88f11d751] it’s a bit 
complicated, but doable I think. I [filed a task for 
this|https://issues.redhat.com/browse/RHV-40941]. Please see the description 
there for why this happens.

> ovirt-system-tests_manual job config
> 
>
> Key: OVIRT-3085
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3085
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: OST Manual job
>Reporter: Vojtech Juranek
>Assignee: infra
>
> To run the ovirt-system-tests_manual job with a custom repo, one needs
> to provide the full path to the repo, e.g.
> https://jenkins.ovirt.org/job/standard-manual-runner/1680/artifact/build-artifacts.build-py3.el8.x86_64/
> Until recently, it was sufficient to provide just a link to the job which
> built the repo, e.g.
> https://jenkins.ovirt.org/job/standard-manual-runner/1680/
> Would it be possible to move back to the old config so that only the job link
> is sufficient? This change breaks our automation - the ovirt-ci tool doesn't
> work with it, see
> https://github.com/nirs/ovirt-ci/issues/50
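For context, the convenience the old behaviour provided can be sketched like this,
based only on the two example URLs above; the artifact sub-path is the one from the
example, not a general rule:

    def repo_url_from_job(job_url,
                          artifact="artifact/build-artifacts.build-py3.el8.x86_64/"):
        """Derive the full repo path from a bare Jenkins job link (illustrative only)."""
        return job_url.rstrip("/") + "/" + artifact

    print(repo_url_from_job("https://jenkins.ovirt.org/job/standard-manual-runner/1680/"))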



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100154)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/UUKKS3WDOMNH7C2SQBBPVQKCLBHNDVN6/


[JIRA] (OVIRT-3085) ovirt-system-tests_manual job config

2021-02-03 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=41002#comment-41002
 ] 

Marcin Sobczyk edited comment on OVIRT-3085 at 2/3/21 10:32 AM:


[~accountid:557058:42767862-fc29-47f5-aaab-17e88f11d751] it’s a bit 
complicated, but doable I think. I [filed a task for 
this|https://issues.redhat.com/browse/RHV-40941]. Please see the description 
there for why this happens. Since we have a separate board for non-CI OST 
tasks, I’d close this one.


was (Author: msobczyk):
[~accountid:557058:42767862-fc29-47f5-aaab-17e88f11d751] it’s a bit 
complicated, but doable I think. I [filed a task for 
this|https://issues.redhat.com/browse/RHV-40941]. Please see the description on 
why this happens.

> ovirt-system-tests_manual job config
> 
>
> Key: OVIRT-3085
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3085
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: OST Manual job
>Reporter: Vojtech Juranek
>Assignee: infra
>
> To run the ovirt-system-tests_manual job with a custom repo, one needs
> to provide the full path to the repo, e.g.
> https://jenkins.ovirt.org/job/standard-manual-runner/1680/artifact/build-artifacts.build-py3.el8.x86_64/
> Until recently, it was sufficient to provide just a link to the job which
> built the repo, e.g.
> https://jenkins.ovirt.org/job/standard-manual-runner/1680/
> Would it be possible to move back to the old config so that only the job link
> is sufficient? This change breaks our automation - the ovirt-ci tool doesn't
> work with it, see
> https://github.com/nirs/ovirt-ci/issues/50



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100154)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HQB2FAPHVNYDAXOYLQLDNKNMNSDPYVQR/


[JIRA] (OVIRT-2736) Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago directory missing

2019-05-30 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39364#comment-39364
 ] 

Marcin Sobczyk commented on OVIRT-2736:
---

Hi,

On 5/30/19 2:43 PM, Nir Soffer wrote:
>
> On Thu, May 30, 2019 at 3:41 PM Nir Soffer  > wrote:
>
> Here a failed build:
> 
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/302/pipeline
>
>
> Marcin, can it be related to the latest py3 changes?
I don't really think so - please note that only s390x agents are touched.
The only thing that changed recently was adding a 'build-artifacts' job 
that uses Python 3.6 to build artifacts on s390x machines.
I've recently reported a problem where the agents ran out of space; it 
was fixed by Ehud:

https://ovirt-jira.atlassian.net/browse/OVIRT-2734

Ehud, did you notice anything weird occupying disk space on these agents?

>
>
> The second failure today.
>
> I hope we can fix this quickly.
>
> If not we need to disable fedora builds for now.
>

> Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago 
> directory missing
> --
>
> Key: OVIRT-2736
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2736
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Here a failed build:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/302/pipeline
> The second failure today.
> I hope we can fix this quickly.
> If not we need to disable fedora builds for now.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/JMDS4MJHTXAXAD2JE2X4VDWNMG5IMWSD/


[JIRA] (OVIRT-2783) CI fails - tox error

2019-08-26 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39706#comment-39706
 ] 

Marcin Sobczyk commented on OVIRT-2783:
---

Hi,

it's a known issue:

https://stackoverflow.com/questions/54648246/invalid-syntax-in-more-itertools-when-running-pytest
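For context, the traceback below shows python2.7 choking on py3-only syntax
(keyword-only arguments) in more-itertools. A minimal, hypothetical guard for such an
environment; the pinning advice is an assumption based on the linked discussion, not a
statement about the actual fix:

    import sys

    # Under Python 2, newer more-itertools releases fail at import time with a
    # SyntaxError, which can be caught around the import statement.
    if sys.version_info[0] == 2:
        try:
            import more_itertools  # noqa: F401
        except SyntaxError:
            raise SystemExit(
                "more-itertools is too new for Python 2; "
                "pin an older, py2-compatible release")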

Posted a patch for this:

https://gerrit.ovirt.org/#/c/102849/

Regards, Marcin

On 8/25/19 8:04 PM, Nir Soffer wrote:
>
>
> On Sun, Aug 25, 2019, 15:19 Pavel Bar  > wrote:
>
> Hi,
> We experience problems with failed CI on VDSM patches.
>
>
> Does it work locally and in travis?
>
> Examples:
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10633/
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10634/
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10635/
>
> This is what we see in logs:
>
> out=`tox --version`; \
> if [ $? -ne 0 ]; then \
> echo "Error: cannot run tox, please install tox \
> 2.9.1 or later"; \
> exit 1; \
> fi; \
> version=`echo $out | cut -d' ' -f1`; \
> if python2.7 build-aux/vercmp $version 2.9.1; then \
> echo "Error: tox is too old, please install tox \
> 2.9.1 or later"; \
> exit 1; \
> fi
> Traceback (most recent call last):
>   File "/usr/bin/tox", line 7, in <module>
>     from tox import cmdline
>   File "/usr/lib/python2.7/site-packages/tox/__init__.py", line 4, in <module>
>     from .hookspecs import hookimpl
>   File "/usr/lib/python2.7/site-packages/tox/hookspecs.py", line 4, in <module>
>     from pluggy import HookimplMarker
>   File "/usr/lib/python2.7/site-packages/pluggy/__init__.py", line 16, in <module>
>     from .manager import PluginManager, PluginValidationError
>   File "/usr/lib/python2.7/site-packages/pluggy/manager.py", line 6, in <module>
>     import importlib_metadata
>   File "/usr/lib/python2.7/site-packages/importlib_metadata/__init__.py", line 9, in <module>
>     import zipp
>   File "/usr/lib/python2.7/site-packages/zipp.py", line 12, in <module>
>     import more_itertools
>   File "/usr/lib/python2.7/site-packages/more_itertools/__init__.py", line 1, in <module>
>     from more_itertools.more import *  # noqa
>   File "/usr/lib/python2.7/site-packages/more_itertools/more.py", line 340
>     def _collate(*iterables, key=lambda a: a, reverse=False):
>                                ^
> SyntaxError: invalid syntax
> Error: cannot run tox, please install tox 2.9.1 or later
> make: *** [tox] Error 1
> + teardown
> + res=2
> + '[' 2 -ne 0 ']'
> + echo '*** err: 2'
> *** err: 2
> + collect_logs
> + tar --directory /var/log --exclude 'journal/*' -czf
> 
> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/exported-artifacts/mock_varlogs.tar.gz
> .
> + tar --directory /var/host_log --exclude 'journal/*' -czf
> 
> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/exported-artifacts/host_varlogs.tar.gz
> .
> + teardown_storage
> + python2 tests/storage/userstorage.py teardown
>
> Can someone look at it?
>
> Thank you in advance!
>
> Pavel
>
> ___
> Devel mailing list -- de...@ovirt.org 
> To unsubscribe send an email to devel-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/VADZM35IRVN4EY24MOM3JIHLWQKASLRG/
>
>
> ___
> Devel mailing list -- de...@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/AVD6QSBEPRTDYI7W2XNWSEPD4OT35LIW/

> CI fails - tox error
> 
>
> Key: OVIRT-2783
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2783
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Pavel Bar
>Assignee: infra
>
> Hi,
> We experience problems with failed CI on VDSM patches.
> Examples:
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10633/
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10634/
> http://jenkins.ovirt.org/job/vdsm_standard-check-patch/10635/
> This is what we see in logs:
> out=`tox --version`; \
> if [ $? -ne 0 ]; then \
> echo "Error: cannot run tox, please install tox \
> 2.9.1 or later"; \
> exit 1; \
> fi; \
> version=`echo $o

[JIRA] (OVIRT-2813) EL8 builds fail to mount loop device - kernel too old?

2019-10-09 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39899#comment-39899
 ] 

Marcin Sobczyk commented on OVIRT-2813:
---

Hi,

On 10/10/19 6:14 AM, Yuval Turgeman wrote:
> You may be running out of loop devices; if that's the case, you need 
> to manually mknod them, see 
> https://github.com/oVirt/ovirt-node-ng-image/blob/56a2797b5ef84bd56ab95fdb0cbfb908b4bc8ec1/automation/build-artifacts.sh#L24
>
> On Thursday, October 10, 2019, Nir Soffer  > wrote:
>
> On Wed, Oct 9, 2019 at 11:56 PM Nir Soffer  > wrote:
>
> I'm trying to run imageio tests on el8 mock, and the tests
> fail early when trying to create storage
> for testing:
>
> [userstorage] INFOCreating filesystem 
> /var/tmp/imageio-storage/file-512-ext4-mount
> Suggestion: Use Linux kernel >= 3.18 for improved stability of the 
> metadata and journal checksum features.
> [userstorage] INFOCreating file 
> /var/tmp/imageio-storage/file-512-ext4-mount/file
> [userstorage] INFOCreating backing file 
> /var/tmp/imageio-storage/file-512-xfs-backing
> [userstorage] INFOCreating loop device 
> /var/tmp/imageio-storage/file-512-xfs-loop
> [userstorage] INFOCreating filesystem 
> /var/tmp/imageio-storage/file-512-xfs-mount
> mount: /var/tmp/imageio-storage/file-512-xfs-mount: wrong fs type, 
> bad option, bad superblock on /dev/loop4, missing codepage or helper program, 
> or other error.
> Traceback (most recent call last):
>File "/usr/local/bin/userstorage", line 10, in 
>  sys.exit(main())
>File 
> "/usr/local/lib/python3.6/site-packages/userstorage/__main__.py", line 42, in 
> main
>  create(cfg)
>File 
> "/usr/local/lib/python3.6/site-packages/userstorage/__main__.py", line 52, in 
> create
>  b.create()
>File "/usr/local/lib/python3.6/site-packages/userstorage/file.py", 
> line 47, in create
>  self._mount.create()
>File 
> "/usr/local/lib/python3.6/site-packages/userstorage/mount.py", line 53, in 
> create
>  self._mount_loop()
>File 
> "/usr/local/lib/python3.6/site-packages/userstorage/mount.py", line 94, in 
> _mount_loop
>  ["sudo", "mount", "-t", self.fstype, self._loop.path, self.path])
>File "/usr/lib64/python3.6/subprocess.py", line 311, in check_call
>  raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['sudo', 'mount', '-t', 
> 'xfs', '/var/tmp/imageio-storage/file-512-xfs-loop', 
> '/var/tmp/imageio-storage/file-512-xfs-mount']' returned non-zero exit status 
> 32.
>
> 
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593//artifact/check-patch.el8.ppc64le/mock_logs/script/stdout_stderr.log
> 
> 
>
>
> Same code runs fine in Travis:
> https://travis-ci.org/nirs/ovirt-imageio/jobs/595794863
> 
>
>
> And also locally on Fedora 29:
> $ ../jenkins/mock_configs/mock_runner.sh -C
> ../jenkins/mock_configs -p el8
> ...
> ## Wed Oct  9 23:37:22 IDT 2019 Finished env: el8:epel-8-x86_64
> ##      took 85 seconds
> ##      rc = 0
>
>
> My guess is that we run el8 jobs on el7 hosts with old kernels
> (Suggestion: Use Linux kernel >= 3.18 for improved stability
> of the metadata and journal checksum features.)
>
>
> Here is info from failed builds:
>
> DEBUG buildroot.py:503: kernel version == 3.10.0-693.11.6.el7.ppc64le
> 
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593/artifact/check-patch.el8.ppc64le/mock_logs/init/root.log
> 
> 
>
> DEBUG buildroot.py:503:  kernel version == 3.10.0-957.12.1.el7.x86_64
> 
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/1593/artifact/check-patch.el8.x86_64/mock_logs/init/root.log
> 
> 
> and successful builds:
>
> DEBUG buildroot.py:503:  kernel version == 5.1.18-200.fc29.x86_64
> My laptop
>
> Runtime kernel version: 4.15.0-1032-gcp
> https://travis-ci.org/nirs/ovirt-imageio/jobs/595794863
> 
>
>
> This issue will affect vdsm, using similar code to create
> storage for testing.

[JIRA] (OVIRT-2818) Manual OST's basic suite runs are broken on libvirt ipv6 issue

2019-10-22 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-2818:
-

 Summary: Manual OST's basic suite runs are broken on libvirt ipv6 
issue
 Key: OVIRT-2818
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2818
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Marcin Sobczyk
Assignee: infra


Hi,

trying to run OST's basic suite fails with an error:

10:51:19 @ Start Prefix:
10:51:19   # Start nets:
10:51:19     * Create network lago-basic-suite-master-net-storage:
10:51:34     * Create network lago-basic-suite-master-net-storage: ERROR (in 0:00:15)
10:51:34   # Start nets: ERROR (in 0:00:15)
10:51:34 @ Start Prefix: ERROR (in 0:00:15)
10:51:34 Error occured, aborting
10:51:34 Traceback (most recent call last):
10:51:34   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in main
10:51:34     cli_plugins[args.verb].do_run(args)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
10:51:34     self._do_run(**vars(args))
10:51:34   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in wrapper
10:51:34     return func(*args, **kwargs)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in wrapper
10:51:34     return func(*args, prefix=prefix, **kwargs)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in do_start
10:51:34     prefix.start(vm_names=vm_names)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in wrapped
10:51:34     return func(*args, **kwargs)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in start
10:51:34     self.virt_env.start(vm_names=vm_names)
10:51:34   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in start
10:51:34     net.start()
10:51:34   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/network.py", line 115, in start
10:51:34     net = self.libvirt_con.networkCreateXML(self._libvirt_xml())
10:51:34   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in networkCreateXML
10:51:34     if ret is None:raise libvirtError('virNetworkCreateXML() failed', conn=self)
10:51:34 libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid backend or is unavailable

Example run:
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5833/consoleFull

Regards, Marcin



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100113)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LV2LIP4RGUL7ZEUBWJYLZHZFZ7DSGZRW/


[JIRA] (OVIRT-2824) el8 pipelines failing on dnf module-related error

2019-11-06 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-2824:
-

 Summary: el8 pipelines failing on dnf module-related error
 Key: OVIRT-2824
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2824
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Marcin Sobczyk
Assignee: infra


Hi,

our el8 pipelines are failing with:

[2019-11-06T09:50:58.119Z]  # /usr/bin/dnf --installroot
    /var/lib/mock/epel-8-x86_64-07665569fc3ce1a11bd8aaf80a58999f-18386/root/
    -y --releasever 8 --setopt=deltarpm=False --allowerasing
    --disableplugin=local --disableplugin=spacewalk module enable
    javapackages-tools python36 --setopt=tsflags=nocontexts
[2019-11-06T09:50:58.119Z] Error: Problems in request:
[2019-11-06T09:50:58.119Z] missing groups or modules: javapackages-tools, python36

I did a quick test locally on a CentOS 8 container image and these are
installable simply as packages (without any module tinkering):

[root@f43b0f10a3c8 /]# dnf install python36 javapackages-tools
Failed to set locale, defaulting to C
CentOS-8 - AppStream                             1.7 MB/s | 6.3 MB  00:03
CentOS-8 - Base                                  1.9 MB/s | 7.9 MB  00:04
CentOS-8 - Extras                                689  B/s | 2.1 kB  00:03
Dependencies resolved.
================================================================================
 Package                      Arch    Version                             Repository  Size
================================================================================
Installing:
 javapackages-tools           noarch  5.3.0-1.module_el8.0.0+11+5b8c10bd  AppStream   44 k
 python36                     x86_64  3.6.8-2.module_el8.0.0+33+0a10c0e1  AppStream   19 k
Installing dependencies:
 copy-jdk-configs             noarch  3.7-1.el8                           AppStream   27 k
 java-1.8.0-openjdk-headless  x86_64  1:1.8.0.232.b09-0.el8_0             AppStream   32 M
 javapackages-filesystem      noarch  5.3.0-1.module_el8.0.0+11+5b8c10bd  AppStream   30 k
 libjpeg-turbo                x86_64  1.5.3-7.el8                         AppStream  155 k
 lua                          x86_64  5.3.4-10.el8                        AppStream  192 k
 nspr                         x86_64  4.21.0-2.el8_0                      AppStream  143 k
 nss                          x86_64  3.44.0-7.el8_0                      AppStream  722 k
 nss-softokn                  x86_64  3.44.0-7.el8_0                      AppStream  470 k
 nss-softokn-freebl           x86_64  3.44.0-7.el8_0                      AppStream  274 k
 nss-sysinit                  x86_64  3.44.0-7.el8_0                      AppStream   69 k
 nss-util                     x86_64  3.44.0-7.el8_0                      AppStream  134 k
 python3-pip                  noarch  9.0.3-13.el8                        AppStream   18 k
 tzdata-java                  noarch  2019a-1.el8                         AppStream  188 k
 avahi-libs                   x86_64  0.7-19.el8                          BaseOS      62 k
 cups-libs                    x86_64  1:2.2.6-25.el8                      BaseOS     432 k
 freetype                     x86_64  2.9.1-4.el8                         BaseOS     393 k
 libpng                       x86_64  2:1.6.34-5.el8                      BaseOS     126 k
 lksctp-tools                 x86_64  1.0.18-3.el8                        BaseOS     100 k
 python3-setuptools           noarch  39.2.0-4.el8                        BaseOS     162 k
 which                        x86_64  2.21-10.el8                         BaseOS      49 k
Enabling module streams:
 javapackages-runtime                 201801
 python36                             3.6

Transaction Summary
================================================================================
Install  22 Packages

Tot

[JIRA] (OVIRT-2837) OST fails for collecting artifacts

2019-11-18 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39986#comment-39986
 ] 

Marcin Sobczyk commented on OVIRT-2837:
---

Hi,

I thought it had been removed already, but it seems it hasn't.
I'm the author of the newer, currently used implementation of 
artifact collection in lago,
and from my experiments I've learned that the extraction of wildcard 
paths never worked.
Here's an email I wrote to Galit about it some time ago:

=

Hi Galit,

aaah yes - wildcard collection doesn't work - it never worked, even 
before my changes.

TL; DR - we just need to remove wildcard stuff from "LagoInitFile.in" 
files ("/tmp/otopi*", "/tmp/ovirt*").


If you're curious what really happens... :)

The old algorithm uses "SCPClient" from the "scp" library to copy files.
The "scp.get" function accepts, among others, two arguments - "remote_path" 
and "local_path".
What we do is change the slashes in "remote_path" to underscores and 
pass the result as "local_path":

https://github.com/lago-project/lago/blob/7024fb7cabbd4ebc87f3ad12e35e0f5832af7d56/lago/plugins/vm.py#L651

The effect is that we tell "scp.get" to retrieve "/tmp/otopi*" and save 
it as "_tmp_otopi*"... which of course makes no sense and doesn't 
work.


The new implementation *could* work with wildcards because the 
collection is divided into two stages:

https://github.com/lago-project/lago/blob/9803eeacd41b3f91cd6661a110aa0285aaf4b957/lago/plugins/vm.py#L313

First we do the "tar -> copy tar with ssh -> untar to tmpdir" step and 
*only then* use "shutil.move" to rename the files to the underscored 
version.
We could use the "glob" module to iterate over patterns like 
"/tmp/otopi*" and rename the files appropriately (see the sketch below).
However, we maintain two parallel implementations of artifact 
collection - the old one being a "plan B" in case there's no "tar" or 
"gzip" on the target machine.
This is why we have to keep both implementations identical in 
behavior, to avoid confusion. BTW, the new implementation could drop the 
underscore-renaming process completely - I think the only reason we do 
the renaming in the old algorithm is that "scp" won't create 
intermediate directories for you... untarring handles that case 
well, but that's a backwards-compatibility-breaking change :)

=
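A minimal sketch of the glob-based renaming mentioned above, assuming the new
tar-based path has already unpacked the remote files into a temporary directory;
function and variable names are illustrative, not lago's actual API:

    import glob
    import os
    import shutil

    def rename_extracted(tmp_dir, remote_path, local_root):
        """Expand a wildcard remote path (e.g. '/tmp/otopi*') over the files
        untarred into tmp_dir and move them to the underscored local names."""
        os.makedirs(local_root, exist_ok=True)
        pattern = os.path.join(tmp_dir, remote_path.lstrip("/"))
        for src in glob.glob(pattern):
            rel = os.path.relpath(src, tmp_dir)
            dst = os.path.join(local_root, "_" + rel.replace(os.sep, "_"))
            shutil.move(src, dst)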

I will post a patch that removes this.

Regards, Marcin

On 11/18/19 2:45 PM, Amit Bawer wrote:
> Happens for several runs, full log can be seen at
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log
> 2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR::  
> - [Thread-42] lago-basic-suite-master-engine:  [31mERROR [0m (in 0:00:08)
> 2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::  
> File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in 
> _collect_artifacts
>  vm.collect_artifacts(path, ignore_nopath)
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in 
> collect_artifacts
>  ignore_nopath=ignore_nopath
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in 
> extract_paths
>  return self.provider.extract_paths(paths, *args, **kwargs)
>File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py", line 
> 398, in extract_paths
>  ignore_nopath=ignore_nopath,
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 253, in 
> extract_paths
>  self._extract_paths_tar_gz(paths, ignore_nopath)
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 102, in 
> wrapper
>  return func(self, *args, **kwargs)
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 341, in 
> _extract_paths_tar_gz
>  raise ExtractPathNoPathError(remote_path)
>
> 2019-11-18 
> 12:28:12,731::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error while 
> running thread Thread-42
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in 
> _ret_via_queue
>  queue.put({'return': func()})
>File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in 
> _collect_artifacts
>  vm.collect_artifacts(path, ignore_nopath)
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in 
> collect_artifacts
>  ignore_nopath=ignore_nopath
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in 
> extract_paths
>  return self.provider.extract_paths(paths, *args, **kwargs)
>File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py", line 
> 398, in extract_paths
>  ignore_nopath=ignore_nopath,
>File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 253, in 
> extract_paths
>  self._extr

[JIRA] (OVIRT-2837) OST fails for collecting artifacts

2019-11-18 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39988#comment-39988
 ] 

Marcin Sobczyk commented on OVIRT-2837:
---

Posted a patch for this issue: 
[https://gerrit.ovirt.org/#/c/104789/|https://gerrit.ovirt.org/#/c/104789/]

> OST fails for collecting artifacts
> --
>
> Key: OVIRT-2837
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2837
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Amit Bawer
>Assignee: infra
>
> Happens for several runs, full log can be seen at
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log
> 2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR::
>  - [Thread-42] lago-basic-suite-master-engine:  [31mERROR [0m (in
> 0:00:08)
> 2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::
>  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in
> _collect_artifacts
> vm.collect_artifacts(path, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 748, in collect_artifacts
> ignore_nopath=ignore_nopath
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 468, in extract_paths
> return self.provider.extract_paths(paths, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
> line 398, in extract_paths
> ignore_nopath=ignore_nopath,
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 253, in extract_paths
> self._extract_paths_tar_gz(paths, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 102, in wrapper
> return func(self, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 341, in _extract_paths_tar_gz
> raise ExtractPathNoPathError(remote_path)
> 2019-11-18 
> 12:28:12,731::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error
> while running thread Thread-42
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> _ret_via_queue
> queue.put({'return': func()})
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526,
> in _collect_artifacts
> vm.collect_artifacts(path, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 748, in collect_artifacts
> ignore_nopath=ignore_nopath
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 468, in extract_paths
> return self.provider.extract_paths(paths, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
> line 398, in extract_paths
> ignore_nopath=ignore_nopath,
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 253, in extract_paths
> self._extract_paths_tar_gz(paths, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 102, in wrapper
> return func(self, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 341, in _extract_paths_tar_gz
> raise ExtractPathNoPathError(remote_path)
> ExtractPathNoPathError: Failed to extract files: /tmp/otopi*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RFVJWB3RS6XLOCJQQYXR2I3FZ7H5QKAH/


[JIRA] (OVIRT-2869) Move 'lago' and 'lago-ost-plugin' projects to use stdci v2 pipelines

2020-03-05 Thread Marcin Sobczyk (oVirt JIRA)
Marcin Sobczyk created OVIRT-2869:
-

 Summary: Move 'lago' and 'lago-ost-plugin' projects to use stdci 
v2 pipelines
 Key: OVIRT-2869
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2869
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: GitHub
Reporter: Marcin Sobczyk
Assignee: infra


Please move the aforementioned projects to stdci v2 pipelines - I will add 
'stdci.yml' files accordingly.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100121)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ASUBN74ZK3NFXWBXOE5ZRDHEOJKBIRSR/


[JIRA] (OVIRT-2897) ppc64le build-artifacts/check-merged job failing in "mock init"

2020-04-06 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40277#comment-40277
 ] 

Marcin Sobczyk commented on OVIRT-2897:
---

Same thing happens for vdsm merge jobs. Can someone take a look at this?

[https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-on-merge/detail/vdsm_standard-on-merge/2512/|https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-on-merge/detail/vdsm_standard-on-merge/2512/]

> ppc64le build-artifacts/check-merged job failing in "mock init"
> ---
>
> Key: OVIRT-2897
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2897
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> The ppc64le build artifacts jobs fail now in "mock init". Looks like
> an environmental issue.
> Here are few failing builds:
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2574/
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2573/
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-on-merge/573/
> This seems to be the last successful build, 4 days ago:
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-on-merge/563/
> 
> [2020-04-04T19:28:16.332Z] + ../jenkins/mock_configs/mock_runner.sh
> --execute-script automation/build-artifacts.py3.sh --mock-confs-dir
> ../jenkins/mock_configs --secrets-file
> /home/jenkins/workspace/ovirt-imageio_standard-check-patch/std_ci_secrets.yaml
> --try-proxy --timeout-duration 10800 --try-mirrors
> http://mirrors.phx.ovirt.org/repos/yum/all_latest.json 'el8.*ppc64le'
> [2020-04-04T19:28:16.332Z]
> ##
> [2020-04-04T19:28:16.332Z]
> ##
> [2020-04-04T19:28:16.332Z] ## Sat Apr  4 19:28:16 UTC 2020 Running
> env: el8:epel-8-ppc64le
> [2020-04-04T19:28:16.332Z]
> ##
> [2020-04-04T19:28:16.332Z]
> @@
> [2020-04-04T19:28:16.332Z] @@ Sat Apr  4 19:28:16 UTC 2020 Running
> chroot for script: automation/build-artifacts.py3.sh
> [2020-04-04T19:28:16.332Z]
> @@
> [2020-04-04T19:28:16.599Z] Using base mock conf
> ../jenkins/mock_configs/epel-8-ppc64le.cfg
> [2020-04-04T19:28:16.599Z] WARN: Unable to find req file
> automation/build-artifacts.py3.req or
> automation/build-artifacts.py3.req.el8, skipping req
> [2020-04-04T19:28:16.599Z] Using proxified config
> ../jenkins/mock_configs/epel-8-ppc64le_proxied.cfg
> [2020-04-04T19:28:16.599Z] Generating temporary mock conf
> /home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio/mocker-epel-8-ppc64le.el8
> [2020-04-04T19:28:16.599Z] Skipping mount points
> [2020-04-04T19:28:16.599Z] WARN: Unable to find repos file
> automation/build-artifacts.py3.repos or
> automation/build-artifacts.py3.repos.el8, skipping repos
> [2020-04-04T19:28:16.599Z] Using chroot cache =
> /var/cache/mock/epel-8-ppc64le-ef003ab0662b9b04a2143d179b949705
> [2020-04-04T19:28:16.599Z] Using chroot dir =
> /var/lib/mock/epel-8-ppc64le-ef003ab0662b9b04a2143d179b949705-15351
> [2020-04-04T19:28:16.599Z] Skipping environment variables
> [2020-04-04T19:28:16.599Z] == Initializing chroot
> [2020-04-04T19:28:16.599Z] mock \
> [2020-04-04T19:28:16.599Z] --old-chroot \
> [2020-04-04T19:28:16.599Z]
> --configdir="/home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio"
> \
> [2020-04-04T19:28:16.599Z] --root="mocker-epel-8-ppc64le.el8" \
> [2020-04-04T19:28:16.599Z] --resultdir="/tmp/mock_logs.FrjZTOEI/init" 
> \
> [2020-04-04T19:28:16.599Z] --init
> [2020-04-04T19:28:17.186Z] WARNING: Could not find required logging
> config file: 
> /home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio/logging.ini.
> Using default...
> [2020-04-04T19:28:17.186Z] INFO: mock.py version 1.4.21 starting
> (python version = 3.6.8)...
> [2020-04-04T19:28:17.186Z] Start(bootstrap): init plugins
> [2020-04-04T19:28:17.186Z] INFO: selinux enabled
> [2020-04-04T19:28:17.845Z] Finish(bootstrap): init plugins
> [2020-04-04T19:28:17.845Z] Start: init plugins
> [2020-04-04T19:28:17.845Z] INFO: selinux enabled
> [2020-04-04T19:28:17.845Z] Finish: init plugins
> [2020-04-04T19:28:17.845Z] INFO: Signal handler active
> [2020-04-04T19:28:17.845Z] Start: run
> [2020-04-04T19:28:17.845Z] Start: clean chroot
> [2020-04-04T19:28:17.845Z] Finish: clean chroot
> [2020-04-04T19:28:17.845Z] Start(bootstrap): chroot init
> [2020-04-04T19:28:17.845Z] INFO: calling preinit hooks
> [2020-04-04T19:28:17.845Z] INFO: enabled root cache
> [2020-04-04T19:28:17.846Z] INFO

[JIRA] (OVIRT-2909) CI is Not Triggered and Success Does Not result in +1

2020-04-17 Thread Marcin Sobczyk (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40341#comment-40341
 ] 

Marcin Sobczyk commented on OVIRT-2909:
---

Is this really a bug? AFAIK “ci please build“ never altered CI marks in gerrit 
- it runs the “build-artifacts” stage, which should not run tests. “ci please 
test“, OTOH, should alter the CI mark.

> CI is Not Triggered and Success Does Not result in +1
> -
>
> Key: OVIRT-2909
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2909
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Anton Marchukov
>Assignee: infra
>
> https://gerrit.ovirt.org/#/c/108365/ - here CI was not triggered, and after
> "ci please build" and a successful CI finish it did not get a +1 set.
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R6TV7GYGOOZNTSNLDHXEWVKB3RWX2E4M/