Jenkins build is back to normal : system-sync_mirrors-fedora-updates-fc25-x86_64 #649

2017-09-18 Thread jenkins
See 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 41 - Still Failing!

2017-09-18 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/ 
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/41/
Build Number: 41
Build Status:  Still Failing
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #34
[Eyal Edri] drop obselete ovirt-4.0 virt repo from reposync

[Sandro Bonazzola] ovirt-image-uploader: add master builders

[Daniel Belenky] Add meaningful prefix to new/old xml dirs

[Sandro Bonazzola] ovirt-optimizer: drop from master / 4.2


Changes for Build #35
[Yaniv Kaul] Fix reposync for master


Changes for Build #36
[Yaniv Kaul] Fix reposync for master

[Milan Zamazal] vdsm: Add Fedora 26 build


Changes for Build #37
[Yaniv Kaul] Fix reposync for master


Changes for Build #38
[Yaniv Kaul] Disable the firewall on the storage server

[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs


Changes for Build #39
[Yaniv Kaul] Disable the firewall on the storage server


Changes for Build #40
[Yaniv Kaul] Disable the firewall on the storage server


Changes for Build #41
[Ales Musil] 003_basic_networking: Add test for passthrough vnic profile




-
Failed Tests:
-
1 tests failed.
FAILED:  002_bootstrap.wait_engine

Error Message:
None != True after 600 seconds

Stack Trace:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_he-basic-suite-master/ovirt-system-tests/he-basic-suite-master/test-scenarios/002_bootstrap.py", line 714, in wait_engine
    testlib.assert_true_within(_engine_is_up, timeout=10 * 60)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 263, in assert_true_within
    assert_equals_within(func, True, timeout, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 237, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
AssertionError: None != True after 600 seconds
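For context on the failure mode: `assert_true_within` polls a predicate until it returns True or the timeout expires, so "None != True after 600 seconds" means `_engine_is_up` never succeeded within 10 minutes. A minimal sketch of such a polling assertion (a hypothetical reimplementation, not ovirtlago's actual code; the `interval` parameter is my own addition):

```python
import time


def assert_equals_within(func, value, timeout, allowed_exceptions=None,
                         interval=3):
    """Poll func() until it returns `value` or `timeout` seconds elapse.

    Sketch only; the real ovirtlago.testlib implementation may differ.
    """
    start = time.time()
    res = None
    while time.time() - start < timeout:
        try:
            res = func()
            if res == value:
                return
        except Exception as exc:
            # Re-raise anything not explicitly whitelisted by the caller.
            if not isinstance(exc, tuple(allowed_exceptions or ())):
                raise
        time.sleep(interval)
    # Mirrors the message format seen in the trace above.
    raise AssertionError('%s != %s after %s seconds' % (res, value, timeout))


def assert_true_within(func, timeout, allowed_exceptions=None, interval=3):
    assert_equals_within(func, True, timeout, allowed_exceptions, interval)
```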


[oVirt Jenkins] ovirt-system-tests_hc-basic-suite-master - Build # 34 - Still Failing!

2017-09-18 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/ 
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/34/
Build Number: 34
Build Status:  Still Failing
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #15
[Ondra Machacek] Download latest module_utils ovirt.py file

[Barak Korren] Make ovirt-ansible jobs not deploy to tested


Changes for Build #16
[Ondra Machacek] Download latest module_utils ovirt.py file


Changes for Build #17
[Ondra Machacek] Download latest module_utils ovirt.py file


Changes for Build #18
[Ondra Machacek] Download latest module_utils ovirt.py file

[Daniel Belenky] Run infra-docs_check-patch and check-merged on el7

[Daniel Belenky] try to build container if related files were changed

[Gil Shinar] Suppress git ls-remote output to stderr

[Yuval Turgeman] Add 4.2 jobs for imgbased, node-ng and appliance

[Daniel Belenky] Fix bugs in mock_runner secrets binding

[Daniel Belenky] Prevent post merge Jobs from voting in Gerrit

[Gil Shinar] suppress git fetch output in upstream source collector

[Daniel Belenky] Add env inject to ost template

[Daniel Belenky] Move gerrit-trigger-ci-label inject to be first


Changes for Build #19
[Barak Korren] Ensure glusterfs-rdma package is synced by OST

[Sandro Bonazzola] ovirt-release: add 4.2 branch


Changes for Build #20
[Simone Tiraboschi] el7: add scl-rh repo on x86_64


Changes for Build #21
[Your Name] Change migration network IP address range.

[Daniel Belenky] Fix race condition in docker cleanup

[Barak Korren] Remove old ovirt-ansible jobs


Changes for Build #22
[Dominik Holler] Ping vm0

[Barak Korren] Added pipeline-std-ci jobs for GitHub PRs

[Barak Korren] Add support for building on demand from GitHub PRs

[Sandro Bonazzola] cockpit-ovirt: add 4.2 branch


Changes for Build #23
[Dominik Holler] Ping vm0

[Barak Korren] Keep CQ builds for 14 days


Changes for Build #24
[Your Name] 004_basic_sanity: add vm network vnic to vm


Changes for Build #25
[Dominik Holler] Parallelize booting of VMs


Changes for Build #26
[Yaniv Kaul] Add APIv4 based test to add an iSCSI based storage domain


Changes for Build #27
[Gal Ben Haim] Remove template files from suites that use Jinja2

[Daniel Belenky] Remove find bugs jobs


Changes for Build #28
[Gal Ben Haim] Remove template files from suites that use Jinja2


Changes for Build #29
[Eyal Edri] add kvm_common repo to .repos files

[Yedidyah Bar David] Add ovirt-host 3.6

[Sandro Bonazzola] ovirt-live: drop from master

[Sandro Bonazzola] ovirt-engine-sdk-java: add fc25 for master consumption


Changes for Build #30
[Yaniv Kaul] Fix reposync for master

[Sandro Bonazzola] ovirt-image-uploader: add master builders

[Daniel Belenky] Add meaningful prefix to new/old xml dirs

[Sandro Bonazzola] ovirt-optimizer: drop from master / 4.2


Changes for Build #31
[Yaniv Kaul] Fix reposync for master

[Milan Zamazal] vdsm: Add Fedora 26 build


Changes for Build #32
[Yaniv Kaul] Fix reposync for master


Changes for Build #33
[Yaniv Kaul] Disable the firewall on the storage server

[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs


Changes for Build #34
[Ales Musil] 003_basic_networking: Add test for passthrough vnic profile




-
Failed Tests:
-
No tests ran.


oVirt infra daily report - unstable production jobs - 448

2017-09-18 Thread jenkins
Good morning!

Attached is the HTML page with the jenkins status report. You can see it also 
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/448//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins
 
 
 
 RHEVM CI Jenkins Daily Report - 18/09/2017
 
00 Unstable Critical
 
   
   ovirt-system-tests_ansible-suite-master

   This job is automatically updated by jenkins job builder, any manual
change will be lost in the next update. If you want to make permanent
changes, check out the jenkins repo.

   ovirt-system-tests_hc-basic-suite-master

   ovirt-system-tests_he-basic-suite-master

   system-sync_mirrors-fedora-updates-fc25-x86_64


Build failed in Jenkins: system-sync_mirrors-fedora-updates-fc25-x86_64 #648

2017-09-18 Thread jenkins
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/changes/13/75913/5:patch --prune
 > git rev-parse origin/patch^{commit} # timeout=10
 > git rev-parse patch^{commit} # timeout=10
Checking out Revision 4b0fe3e0c9fba26cdbaafe2b29fddd3411225d6f (patch)
Commit message: "Exclude big packages from mirrors"
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4b0fe3e0c9fba26cdbaafe2b29fddd3411225d6f
 > git rev-list 4b0fe3e0c9fba26cdbaafe2b29fddd3411225d6f # timeout=10
[system-sync_mirrors-fedora-updates-fc25-x86_64] $ /bin/bash -xe /tmp/jenkins7863686872464999301.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror fedora-updates-fc25 x86_64 jenkins/data/mirrors-reposync.conf
Checking if mirror needs a resync
Resyncing repo: fedora-updates-fc25
Syncing yum repo fedora-updates-fc25 for arch: x86_64
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in <module>
    main()
  File "/usr/bin/reposync", line 175, in main
    my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in doRepoSetup
    return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in _getRepos
    self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
    self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in retrieveAllMD
    dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1478, in _commonLoadRepoXML
    self._revertOldRepoXML()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1323, in _revertOldRepoXML
    os.rename(old_data['old_local'], old_data['local'])
OSError: [Errno 2] No such file or directory
Build step 'Execute shell' marked build as failure


Fwd: [rhevm-staff] oVirt website hackathon

2017-09-18 Thread Yaniv Kaul
Is there a tool we can periodically run that finds dead links on ovirt.org?
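No specific tool is named in this thread; existing tools such as `linkchecker` or `wget --spider -r` can do this. As an illustration only (all names and structure below are hypothetical, not an ovirt.org tool), a dead-link scan can be sketched with the Python standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


class LinkExtractor(HTMLParser):
    """Collects the href targets of anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Return the page's links resolved against base_url."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]


def is_dead(url, timeout=10):
    """True if the URL cannot be fetched (performs a network HEAD request)."""
    try:
        urlopen(Request(url, method='HEAD'), timeout=timeout)
        return False
    except (HTTPError, URLError, OSError):
        return True
```

A periodic Jenkins job could fetch each page, run `extract_links`, and report every URL for which `is_dead` returns True.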
-- Forwarded message --
From: "Sandro Bonazzola" 
Date: Sep 18, 2017 5:56 PM
Subject: [rhevm-staff] oVirt website hackathon
To: "rhevm-staff" 
Cc:

Just got the following experience on #ovirt

(15:37:35) investigator: is there anyone around today that can help with a
first time install?
(16:13:40) sbonazzo: investigator: hi, how may I help? did you read
installation guide before starting the installation?
(16:14:01) investigator: sbonazzo: hi. thanks. yes, I read it
(16:14:17) sbonazzo: investigator: that's a good start :-)
(16:14:51) investigator: sbonazzo: I'm trying to understand several things
before I get going
(16:15:24) investigator: From what I've read, I think I want to do a
self-hosted engine
(16:15:46) investigator: So that means I need at least two machines to
start, right?
(16:16:06) investigator: And EL7 is the recommended base, true?
(16:16:08) sbonazzo: investigator: yes. you may want to use 3 machines if
you want to run in a hyperconverged environment
(16:16:32) sbonazzo: yes, EL7 is practically the only supported environment
(16:16:45) investigator: great
(16:17:06) investigator: then I need to have a storage server up and ready
before I start
(16:17:21) investigator: That's my current snag
(16:18:29) sbonazzo: investigator: you can avoid the storage server if you
ran hyperconverged
(16:18:34) investigator: My idea is to partition my two self-hosting
machines with minimum diskspace and use the remaining disk as iSCSI targets
for a storage solution
(16:18:47) investigator: what is hyperconverged?
(16:19:00) sbonazzo: investigator: running storage and virt on the same
machines
(16:19:32) sbonazzo: investigator: we have an integrated solution using
glusterfs on 3 hosts as storage, running virtualization on top of that
(16:19:57) investigator: ok. sounds like what I was trying to do
(16:20:10) investigator: except for the glusterfs
(16:20:11) sbonazzo: investigator: here's the architecture
https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-
hyperconvergence/
(16:20:38) sbonazzo: investigator: here's a guide to get there
https://www.ovirt.org/documentation/gluster-hyperconverged/Gluster_
Hyperconverged_Guide/
(16:20:59) investigator: sbonazzo: thank you much
(16:21:18) sbonazzo: investigator: you're welcome
(16:21:55) investigator: Don't run away. I'm sure I will have lots more
questions after I read a bit
(16:23:29) investigator: The first link on that guide is
https://www.ovirt.org/documentation/gluster-hyperconverged/gluster-
hyperconverged/chap-Introduction#documentation%20gluster%20hyperconverged%
20gluster%20hyperconverged%20chap%20Introduction
(16:23:33) investigator: which is dead
(16:23:36) sbonazzo: investigator: I'm probably not the best one for
helping you on this. I would suggest to ask on use...@ovirt.org mailing
list, since today is already a bit late. Sahina Bose from gluster team may
help you more than I can
(16:23:51) sbonazzo: investigator: doesn't look like a good start, let me
see
(16:24:10) investigator: what time zone is best for this channel?
(16:24:59) investigator: I have tried it over a 10 hour span, but seem to
be off by about 6 hours
(16:25:49) investigator: sbonazzo: All the links in that guide are dead
(16:26:15) sbonazzo: investigator: https://www.ovirt.org/
documentation/gluster-hyperconverged/
(16:26:26) sbonazzo: investigator: just broken link, but content still there
(16:26:30) sbonazzo: investigator: pushing a fix
(16:28:17) investigator: ok. first link says to start with a hosted engine
deployment
(16:28:37) investigator: so I'm back where I started
(16:28:43) sbonazzo: investigator: here's the fix https://github.com/oVirt/
ovirt-site/pull/1220 thanks for reporting
(16:29:01) sbonazzo: investigator: https://www.ovirt.org/
documentation/gluster-hyperconverged/chap-Introduction/
(16:29:21) sbonazzo: investigator: then https://www.ovirt.org/
documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged/
(16:29:28) investigator: sbonazzo: :) that's still broken
(16:29:50) investigator: the navigation links on that page have the same
problems
(16:30:40) sbonazzo: investigator: :-( I'm very sorry for this experience,
not sure how come all those links got broken


I think we need to spend some time fixing the website. We may release the
best release ever, but we'll lose users just because they get lost in broken
links.


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 


[oVirt Jenkins] ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64 - Build # 10 - Failure!

2017-09-18 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/10/
Build Number: 10
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #10
[Sandro Bonazzola] packaging: build: move from fc24 to fc25




-
Failed Tests:
-
No tests ran.


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE master ] [ 17/09/17 ] [ engine-setup ]

2017-09-18 Thread Simone Tiraboschi
On Mon, Sep 18, 2017 at 4:33 PM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Sep 18, 2017 at 12:09 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Sun, Sep 17, 2017 at 11:14 AM, Eyal Edri  wrote:
>>
>>>
>>>
>>> On Sun, Sep 17, 2017 at 11:50 AM, Yaniv Kaul  wrote:
>>>


 On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri  wrote:

> Hi,
>
> It looks like HE suite ( both 'master' and '4.1' ) is failing
> constantly, most likely due to 7.4 updates.
>

>> I'm investigating the issue on master.
>> In my case I chose to configure the engine VM with a static IP address
>> and engine-setup failed on the engine VM since it wasn't able to check
>> available OVN related packages.
>>
>> So we have two distinct issues here:
>> 1. we are executing engine-setup with --offline cli option but the OVN
>> plugins are ignoring it.
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1492702
> https://bugzilla.redhat.com/show_bug.cgi?id=1492706
>
>
>>
>> 2. the engine VM has no connectivity.
>> I dug into it a bit and found that the default gateway wasn't configured on
>> the engine VM although it's correctly set in the cloud-init meta-data file.
>> So it seems that on 7.4 cloud-init is failing to set the default gateway:
>>
>>
> https://bugzilla.redhat.com/show_bug.cgi?id=1492726
>

I pushed a working patch with an ugly hack:
https://gerrit.ovirt.org/#/c/81934/

With that, I was able to successfully deploy hosted-engine on NFS using
Centos 7.4 on the host and on the engine VM.


>
>
>
>> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
>> connection.gateway-ping-timeout:0
>> ipv4.gateway:   --
>> ipv6.gateway:   --
>> IP4.GATEWAY:--
>> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
>> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
>> Error: value for 'ipv4.gateway' is missing.
>> [root@enginevm ~]#
>> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
>> connection.gateway-ping-timeout:0
>> ipv4.gateway:   --
>> ipv6.gateway:   --
>> IP4.GATEWAY:--
>> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
>> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
>> 192.168.1.1
>> [root@enginevm ~]# nmcli con reload "System eth0"
>> [root@enginevm ~]# nmcli con up "System eth0"
>> Connection successfully activated (D-Bus active path:
>> /org/freedesktop/NetworkManager/ActiveConnection/3)
>> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
>> connection.gateway-ping-timeout:0
>> ipv4.gateway:   192.168.1.1
>> ipv6.gateway:   --
>> IP4.GATEWAY:192.168.1.1
>> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
>> [root@enginevm ~]# mount /dev/sr0 /mnt/
>> mount: /dev/sr0 is write-protected, mounting read-only
>> [root@enginevm ~]# cat /mnt/meta-data
>> instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
>> local-hostname: enginevm.localdomain
>> network-interfaces: |
>>   auto eth0
>>   iface eth0 inet static
>> address 192.168.1.204
>> network 192.168.1.0
>> netmask 255.255.255.0
>> broadcast 192.168.1.255
>> gateway 192.168.1.1
>>
>>
>>
>>
>>> So there is no suspected patch from oVirt side that might have caused it.
>

 It's the firewall. I've fixed it[1] and specifically[2] but probably
 not completely.

>>>
>>> Great! Wasn't aware your patch address that, I've replied on the patch
>>> itself, but I think we need to split the fix into two separate patches.
>>>
>>>

 Perhaps we should try to take[2] separately.
 Y.

 [1] https://gerrit.ovirt.org/#/c/81766/
 [2] https://gerrit.ovirt.org/#/c/81766/3/common/deploy-scrip
 ts/setup_storage_unified_he_extra_el7.sh




> It is probably also the reason why HC suites are failing, since they
> are using also HE for deployments.
>
> I think this issue should BLOCK the Alpha release tomorrow, or at the
> minimum, we need to verify it's an OST issue and not a real regression.
>
> Links to relevant failures:
> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
> te-master/37/consoleFull
> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
> te-4.1/33/console
>
> Error snippet:
>
> 03:01:38
> 03:01:38   --== STORAGE CONFIGURATION ==--
> 03:01:38
> 03:02:47 [ ERROR ] Error while mounting specified storage path:
> mount.nfs: No route to host
> 03:02:58 [WARNING] Cannot unmount /tmp/tmp2gkFwJ
> 03:02:58 [ ERROR ] Failed to execute stage 'Environment
> customization': mount.nfs: No route to host
>
>
> --
>
> 

Fwd: ** PROBLEM Service Alert: ovirt-mirrorchecker/www.gtlib.gatech.edu/pub/oVirt/pub mirror site last sync is CRITICAL **

2017-09-18 Thread Dafna Ron
Hello,

We are getting alerts that the mirror below has a sync problem.

I tried to reach the mirror over HTTP and was unable to reach the repo.

can you please check the issue?

Many thanks,

Dafna



 Forwarded Message 
Subject:** PROBLEM Service Alert:
ovirt-mirrorchecker/www.gtlib.gatech.edu/pub/oVirt/pub mirror site last
sync is CRITICAL **
Date:   Mon, 18 Sep 2017 14:57:20 +
From:   icinga 
To: d...@redhat.com



* Icinga *

Notification Type: PROBLEM

Service: www.gtlib.gatech.edu/pub/oVirt/pub mirror site last sync
Host: ovirt-mirrorchecker
Address: 66.187.230.105
State: CRITICAL

Date/Time: Mon Sept 18 14:57:20 UTC 2017

Additional Info:

CRITICAL - 562911 seconds since last sync, which are 156.3642 hours.
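The alert's unit conversion is consistent: 562911 seconds is about 156.36 hours, i.e. roughly six and a half days since the last successful sync:

```python
# Verify the icinga alert's seconds-to-hours conversion.
seconds_since_sync = 562911
hours = seconds_since_sync / 3600.0
days = seconds_since_sync / 86400.0
print(round(hours, 4))  # 156.3642, matching the alert text
print(round(days, 1))   # about 6.5 days without a sync
```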



[JIRA] (OVIRT-1652) [gerrit hooks] add change-restored hook

2017-09-18 Thread Shlomo Ben David (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shlomo Ben David reassigned OVIRT-1652:
---

Assignee: Shlomo Ben David  (was: infra)

> [gerrit hooks] add change-restored hook
> ---
>
> Key: OVIRT-1652
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1652
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Shlomo Ben David
>Assignee: Shlomo Ben David
>
> Currently, when a developer abandons a patch the update tracker hook
> updates the external tracker status to ABANDONED.
> When the patch is restored, the external tracker doesn't change and its
> status remains ABANDONED.



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


[JIRA] (OVIRT-1652) [gerrit hooks] add change-restored hook

2017-09-18 Thread Shlomo Ben David (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shlomo Ben David updated OVIRT-1652:

Epic Link: OVIRT-411

> [gerrit hooks] add change-restored hook
> ---
>
> Key: OVIRT-1652
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1652
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Shlomo Ben David
>Assignee: infra
>
> Currently, when a developer abandons a patch the update tracker hook
> updates the external tracker status to ABANDONED.
> When the patch is restored, the external tracker doesn't change and its
> status remains ABANDONED.



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


[JIRA] (OVIRT-1652) [gerrit hooks] add change-restored hook

2017-09-18 Thread Shlomo Ben David (oVirt JIRA)
Shlomo Ben David created OVIRT-1652:
---

 Summary: [gerrit hooks] add change-restored hook
 Key: OVIRT-1652
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1652
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Shlomo Ben David
Assignee: infra


Currently, when a developer abandons a patch the update tracker hook
updates the external tracker status to ABANDONED.
When the patch is restored, the external tracker doesn't change and its
status remains ABANDONED.
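A change-restored hook would mirror the existing change-abandoned behavior in reverse. As a sketch only (the argument flags follow Gerrit's hook-parameter convention; the POST status value and all function names are hypothetical, not the actual oVirt hook code):

```python
"""Sketch of a change-restored Gerrit hook.

Hypothetical illustration: it parses the parameters Gerrit passes to
hooks and decides what status the external tracker should get back
after a restore.
"""
import argparse


def parse_hook_args(argv):
    # Gerrit invokes hooks with "--flag value" pairs.
    parser = argparse.ArgumentParser()
    for flag in ('--change', '--change-url', '--project', '--branch',
                 '--restorer', '--commit', '--reason'):
        parser.add_argument(flag)
    args, _unknown = parser.parse_known_args(argv)
    return args


def tracker_status_for_restore():
    # Hypothetical mapping: a restored patch goes back to POST.
    return 'POST'


def main(argv):
    args = parse_hook_args(argv)
    # A real hook would call the external tracker's API here.
    return 'set tracker for change %s to %s' % (
        args.change, tracker_status_for_restore())
```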



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE master ] [ 17/09/17 ] [ engine-setup ]

2017-09-18 Thread Simone Tiraboschi
On Mon, Sep 18, 2017 at 12:09 PM, Simone Tiraboschi 
wrote:

>
>
> On Sun, Sep 17, 2017 at 11:14 AM, Eyal Edri  wrote:
>
>>
>>
>> On Sun, Sep 17, 2017 at 11:50 AM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri  wrote:
>>>
 Hi,

 It looks like HE suite ( both 'master' and '4.1' ) is failing
 constantly, most likely due to 7.4 updates.

>>>
> I'm investigating the issue on master.
> In my case I chose to configure the engine VM with a static IP address
> and engine-setup failed on the engine VM since it wasn't able to check
> available OVN related packages.
>
> So we have two distinct issues here:
> 1. we are executing engine-setup with --offline cli option but the OVN
> plugins are ignoring it.
>

https://bugzilla.redhat.com/show_bug.cgi?id=1492702
https://bugzilla.redhat.com/show_bug.cgi?id=1492706


>
> 2. the engine VM has no connectivity.
> I dug into it a bit and found that the default gateway wasn't configured on
> the engine VM although it's correctly set in the cloud-init meta-data file.
> So it seems that on 7.4 cloud-init is failing to set the default gateway:
>
>
https://bugzilla.redhat.com/show_bug.cgi?id=1492726


> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
> Error: value for 'ipv4.gateway' is missing.
> [root@enginevm ~]#
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway 192.168.1.1
> [root@enginevm ~]# nmcli con reload "System eth0"
> [root@enginevm ~]# nmcli con up "System eth0"
> Connection successfully activated (D-Bus active path: /org/freedesktop/
> NetworkManager/ActiveConnection/3)
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   192.168.1.1
> ipv6.gateway:   --
> IP4.GATEWAY:192.168.1.1
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# mount /dev/sr0 /mnt/
> mount: /dev/sr0 is write-protected, mounting read-only
> [root@enginevm ~]# cat /mnt/meta-data
> instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
> local-hostname: enginevm.localdomain
> network-interfaces: |
>   auto eth0
>   iface eth0 inet static
> address 192.168.1.204
> network 192.168.1.0
> netmask 255.255.255.0
> broadcast 192.168.1.255
> gateway 192.168.1.1
>
>
>
>
>> So there is no suspected patch from oVirt side that might have caused it.

>>>
>>> It's the firewall. I've fixed it[1] and specifically[2] but probably not
>>> completely.
>>>
>>
>> Great! Wasn't aware your patch address that, I've replied on the patch
>> itself, but I think we need to split the fix into two separate patches.
>>
>>
>>>
>>> Perhaps we should try to take[2] separately.
>>> Y.
>>>
>>> [1] https://gerrit.ovirt.org/#/c/81766/
>>> [2] https://gerrit.ovirt.org/#/c/81766/3/common/deploy-scrip
>>> ts/setup_storage_unified_he_extra_el7.sh
>>>
>>>
>>>
>>>
 It is probably also the reason why HC suites are failing, since they
 are using also HE for deployments.

 I think this issue should BLOCK the Alpha release tomorrow, or at the
 minimum, we need to verify it's an OST issue and not a real regression.

 Links to relevant failures:
 http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
 te-master/37/consoleFull
 http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
 te-4.1/33/console

 Error snippet:

 03:01:38
 03:01:38   --== STORAGE CONFIGURATION ==--
 03:01:38
 03:02:47 [ ERROR ] Error while mounting specified storage path:
 mount.nfs: No route to host
 03:02:58 [WARNING] Cannot unmount /tmp/tmp2gkFwJ
 03:02:58 [ ERROR ] Failed to execute stage 'Environment customization':
 mount.nfs: No route to host


 --

 Eyal edri


 ASSOCIATE MANAGER

 RHV DevOps

 EMEA VIRTUALIZATION R&D


 Red Hat EMEA 
  TRIED. TESTED. TRUSTED.
 
 phone: +972-9-7692018 <+972%209-769-2018>
 irc: eedri (on #tlv #rhev-dev #rhev-integ)

 ___
 Devel mailing list
 de...@ovirt.org
 

[oVirt Jenkins] ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64 - Build # 275 - Failure!

2017-09-18 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/275/
Build Number: 275
Build Status:  Failure
Triggered By: Started by user Sandro Bonazzola

-
Changes Since Last Success:
-
Changes for Build #275
[Yuval Turgeman] automation: adding loop devices




-
Failed Tests:
-
No tests ran.


[oVirt Jenkins] ovirt-node-ng_ovirt-4.2_build-artifacts-el7-x86_64 - Build # 19 - Still Failing!

2017-09-18 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2_build-artifacts-el7-x86_64/19/
Build Number: 19
Build Status:  Still Failing
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #17
[Yuval Turgeman] Adding loop devices to check-merged


Changes for Build #18
No changes

Changes for Build #19
[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs

[Yuval Turgeman] Adding loop devices to check-merged




-
Failed Tests:
-
No tests ran.


[oVirt Jenkins] ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64 - Build # 272 - Still Failing!

2017-09-18 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/272/
Build Number: 272
Build Status:  Still Failing
Triggered By: Started by user Sandro Bonazzola

-
Changes Since Last Success:
-
Changes for Build #271
[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs

[Yuval Turgeman] automation: adding loop devices


Changes for Build #272
[Yuval Turgeman] automation: adding loop devices




-
Failed Tests:
-
No tests ran.


Jenkins build is back to normal : system-sync_mirrors-fedora-updates-fc25-x86_64 #647

2017-09-18 Thread jenkins
See 




Re: [CQ]: 81775,1 (ovirt-engine) failed "ovirt-4.1" system tests

2017-09-18 Thread Eyal Edri
Evgheni,
Can you check how much swap we have on the slaves and add some if its
missing?

On Mon, Sep 18, 2017 at 2:52 PM, Greg Sheremeta  wrote:

> Thanks. Apparently 137 = 128 + 9 (SIGKILL), i.e. the process was killed for running out of memory.
> I've never seen it that way locally. I assume the builders don't have swap?
>
> So, this is another shortage of memory, but in a different place.
>
> Greg
>
>
> On Mon, Sep 18, 2017 at 3:45 AM, Eyal Edri  wrote:
>
>>
>>
>> On Mon, Sep 18, 2017 at 3:53 AM, Greg Sheremeta 
>> wrote:
>>
>>> This looks different, because iirc that previous issue was in uicommon.
>>>
>>> Can you share more log above?
>>>
>>
>> The full log should be in the job link provided:
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-artifact
>> s-el7-x86_64/928/console
>>
>>
>>>
>>> Greg
>>>
>>>
>>> On Sep 17, 2017 5:29 AM, "Eyal Edri"  wrote:
>>>
>>> So it looks like the fix we did last week on not running findbugs on
>>> fedora didn't fix all the failures of engine failing to build [1].
>>>
>>> Any thoughts on what else can we do to fix it?
>>>
>>> 07:42:27 [INFO] WebAdmin ..
>>> FAILURE [5:36.003s]
>>> 07:42:27 [INFO] UserPortal 
>>> SKIPPED
>>> 07:42:27 [INFO] oVirt Server EAR ..
>>> SKIPPED
>>> 07:42:27 [INFO] ovirt-engine maven make ...
>>> SKIPPED
>>> 07:42:27 [INFO] --
>>> --
>>> 07:42:27 [INFO] BUILD FAILURE
>>> 07:42:27 [INFO] --
>>> --
>>> 07:42:27 [INFO] Total time: 13:12.947s
>>> 07:42:27 [INFO] Finished at: Sun Sep 17 07:42:27 GMT 2017
>>> 07:42:29 [INFO] Final Memory: 547M/1317M
>>> 07:42:29 [INFO] --
>>> --
>>> 07:42:29 [ERROR] Failed to execute goal 
>>> org.codehaus.mojo:gwt-maven-plugin:2.6.1:compile
>>> (gwtcompile) on project webadmin: Command [[
>>> 07:42:29 [ERROR] /bin/sh -c /usr/lib/jvm/java-1.8.0-openjd
>>> k-1.8.0.144-0.b01.el7_4.x86_64/jre/bin/java
>>> -javaagent:/root/.m2/repository/org/aspectj/aspectjweaver/1.8.2/aspectjweaver-1.8.2.jar
>>> -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory
>>> \
>>> 07:42:29 [ERROR] -Dgwt.jjs.maxThreads=4 \
>>> 07:42:29 [ERROR] -Djava.io.tmpdir="/home/jenkin
>>> s/workspace/ovirt-engine_4.1_build-artifacts-el7-x86_64/ovir
>>> t-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/frontend/webadm
>>> in/modules/webadmin/target/tmp" \
>>> 07:42:29 [ERROR] -Djava.util.prefs.systemRoot="
>>> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
>>> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/fro
>>> ntend/webadmin/modules/webadmin/target/tmp" \
>>> 07:42:29 [ERROR] -Djava.util.prefs.userRoot="/h
>>> ome/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7-x
>>> 86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/front
>>> end/webadmin/modules/webadmin/target/tmp" \
>>> 07:42:29 [ERROR] -Djava.util.logging.config.cla
>>> ss=org.ovirt.engine.ui.gwtextension.JavaLoggingConfig \
>>> 07:42:29 [ERROR] -Xms1G -Xmx4G  '-Dgwt.dontPrune=org\.ovirt\.e
>>> ngine\.core\.(common|compat)\..*' -classpath
>>> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
>>> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/fro
>>> ntend/webadmin/modules/webadmin/target/webadmin-4.1.6.3-SNAP
>>> SHOT/WEB-INF/classes:/home/jenkins/workspace/ovirt-engine
>>> _4.1_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/
>>> ovirt-engine-4.1.6.3/frontend/webadmin/modules/webadmin/src/
>>> main/java:/home/jenkins/workspace/ovirt-
>>>
>>>
>>> [1] http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-artifact
>>> s-el7-x86_64/928/console
>>>
>>>
>>>
>>> On Sun, Sep 17, 2017 at 10:49 AM, oVirt Jenkins 
>>> wrote:
>>>
 Change 81775,1 (ovirt-engine) is probably the reason behind recent
 system test
 failures in the "ovirt-4.1" change queue and needs to be fixed.

 This change had been removed from the testing queue. Artifacts build
 from this
 change will not be released until it is fixed.

 For further details about the change see:
 https://gerrit.ovirt.org/#/c/81775/1

 For failed test results see:
 http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1007/
 ___
 Infra mailing list
 Infra@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/infra

>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> ASSOCIATE MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R
>>>
>>>
>>> Red Hat EMEA 
>>>  TRIED. TESTED. TRUSTED.
>>> 
>>> phone: +972-9-7692018 <+972%209-769-2018>
>>> irc: eedri (on #tlv #rhev-dev 

Re: [CQ]: 81775,1 (ovirt-engine) failed "ovirt-4.1" system tests

2017-09-18 Thread Greg Sheremeta
Thanks. Apparently 137 = 128 + 9 (SIGKILL) because the process ran out of memory.
I've never seen it that way locally. I assume the builders don't have swap?

So, this is another shortage of memory, but in a different place.

Greg
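The 128-plus-signal-number convention mentioned above can be checked with a quick, illustrative shell sketch:

```shell
# A process killed by SIGKILL (signal 9) reports exit status 128 + 9 = 137,
# which is what a build step killed by the OOM killer looks like in the log.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
echo "exit status: $?"   # prints "exit status: 137"
```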


On Mon, Sep 18, 2017 at 3:45 AM, Eyal Edri  wrote:

>
>
> On Mon, Sep 18, 2017 at 3:53 AM, Greg Sheremeta 
> wrote:
>
>> This looks different, because iirc that previous issue was in uicommon.
>>
>> Can you share more log above?
>>
>
> The full log should be in the job link provided:
>
> http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-
> artifacts-el7-x86_64/928/console
>
>
>>
>> Greg
>>
>>
>> On Sep 17, 2017 5:29 AM, "Eyal Edri"  wrote:
>>
>> So it looks like the fix we did last week on not running findbugs on
>> fedora didn't fix all the failures of engine failing to build [1].
>>
>> Any thoughts on what else can we do to fix it?
>>
>> 07:42:27 [INFO] WebAdmin ..
>> FAILURE [5:36.003s]
>> 07:42:27 [INFO] UserPortal 
>> SKIPPED
>> 07:42:27 [INFO] oVirt Server EAR ..
>> SKIPPED
>> 07:42:27 [INFO] ovirt-engine maven make ...
>> SKIPPED
>> 07:42:27 [INFO] --
>> --
>> 07:42:27 [INFO] BUILD FAILURE
>> 07:42:27 [INFO] --
>> --
>> 07:42:27 [INFO] Total time: 13:12.947s
>> 07:42:27 [INFO] Finished at: Sun Sep 17 07:42:27 GMT 2017
>> 07:42:29 [INFO] Final Memory: 547M/1317M
>> 07:42:29 [INFO] --
>> --
>> 07:42:29 [ERROR] Failed to execute goal 
>> org.codehaus.mojo:gwt-maven-plugin:2.6.1:compile
>> (gwtcompile) on project webadmin: Command [[
>> 07:42:29 [ERROR] /bin/sh -c /usr/lib/jvm/java-1.8.0-openjd
>> k-1.8.0.144-0.b01.el7_4.x86_64/jre/bin/java
>> -javaagent:/root/.m2/repository/org/aspectj/aspectjweaver/1.8.2/aspectjweaver-1.8.2.jar
>> -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory
>> \
>> 07:42:29 [ERROR] -Dgwt.jjs.maxThreads=4 \
>> 07:42:29 [ERROR] -Djava.io.tmpdir="/home/jenkin
>> s/workspace/ovirt-engine_4.1_build-artifacts-el7-x86_64/ovir
>> t-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/frontend/webadm
>> in/modules/webadmin/target/tmp" \
>> 07:42:29 [ERROR] -Djava.util.prefs.systemRoot="
>> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
>> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/fro
>> ntend/webadmin/modules/webadmin/target/tmp" \
>> 07:42:29 [ERROR] -Djava.util.prefs.userRoot="/h
>> ome/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7-x
>> 86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/front
>> end/webadmin/modules/webadmin/target/tmp" \
>> 07:42:29 [ERROR] -Djava.util.logging.config.cla
>> ss=org.ovirt.engine.ui.gwtextension.JavaLoggingConfig \
>> 07:42:29 [ERROR] -Xms1G -Xmx4G  '-Dgwt.dontPrune=org\.ovirt\.e
>> ngine\.core\.(common|compat)\..*' -classpath
>> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
>> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/fro
>> ntend/webadmin/modules/webadmin/target/webadmin-4.1.6.3-
>> SNAPSHOT/WEB-INF/classes:/home/jenkins/workspace/ovirt-engin
>> e_4.1_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/
>> BUILD/ovirt-engine-4.1.6.3/frontend/webadmin/modules/
>> webadmin/src/main/java:/home/jenkins/workspace/ovirt-
>>
>>
>> [1] http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-artifact
>> s-el7-x86_64/928/console
>>
>>
>>
>> On Sun, Sep 17, 2017 at 10:49 AM, oVirt Jenkins 
>> wrote:
>>
>>> Change 81775,1 (ovirt-engine) is probably the reason behind recent
>>> system test
>>> failures in the "ovirt-4.1" change queue and needs to be fixed.
>>>
>>> This change had been removed from the testing queue. Artifacts build
>>> from this
>>> change will not be released until it is fixed.
>>>
>>> For further details about the change see:
>>> https://gerrit.ovirt.org/#/c/81775/1
>>>
>>> For failed test results see:
>>> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1007/
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> ASSOCIATE MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE master ] [ 17/09/17 ] [ engine-setup ]

2017-09-18 Thread Dafna Ron
On 09/18/2017 11:12 AM, Simone Tiraboschi wrote:
>
>
> On Mon, Sep 18, 2017 at 12:09 PM, Simone Tiraboschi
> > wrote:
>
>
>
> On Sun, Sep 17, 2017 at 11:14 AM, Eyal Edri  > wrote:
>
>
>
> On Sun, Sep 17, 2017 at 11:50 AM, Yaniv Kaul  > wrote:
>
>
>
> On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri
> > wrote:
>
> Hi,
>
> It looks like HE suite ( both 'master' and '4.1' ) is
> failing constantly, most likely due to 7.4 updates.
>
>
> I'm investigating the issue on master.
> In my case I chose to configure the engine VM with a static IP
> address and engine-setup failed on the engine VM since it wasn't
> able to check available OVN related packages.
>
> So we have two distinct issues here:
> 1. we are executing engine-setup with --offline cli option but the
> OVN plugins are ignoring it.
>
> 2. the engine VM has no connectivity.
> I dug into it a bit and found that the default gateway wasn't
> configured on the engine VM although it's correctly set in the
> cloud-init meta-data file.
> So it seems that on 7.4 cloud-init is failing to set the default
> gateway:
>
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
> Error: value for 'ipv4.gateway' is missing.
> [root@enginevm ~]# 
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
> 192.168.1.1
> [root@enginevm ~]# nmcli con reload "System eth0"
> [root@enginevm ~]# nmcli con up "System eth0"
> Connection successfully activated (D-Bus active path:
> /org/freedesktop/NetworkManager/ActiveConnection/3)
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   192.168.1.1
> ipv6.gateway:   --
> IP4.GATEWAY:192.168.1.1
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# mount /dev/sr0 /mnt/
> mount: /dev/sr0 is write-protected, mounting read-only
> [root@enginevm ~]# cat /mnt/meta-data 
> instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
> local-hostname: enginevm.localdomain
> network-interfaces: |
>   auto eth0
>   iface eth0 inet static
> address 192.168.1.204
> network 192.168.1.0
> netmask 255.255.255.0
> broadcast 192.168.1.255
> gateway 192.168.1.1
>
>
> An upstream user also reported that after updating his host and his
> engine VM to CentOS 7.4, the engine VM failed to reboot, with the 7.4
> kernel hanging at
> "Probing EDD (edd=off to disable)...ok". He manually forced the old
> 7.3 kernel via the grub menu and his engine VM booted correctly.
> I wasn't able to reproduce it here.


This is a firmware issue with enhanced disk drive (EDD) probing.
To work around it we can add edd=off to the kernel boot parameters.
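A sketch of how that workaround is usually persisted on an EL7 guest (shown as a config fragment under assumed paths, not a tested recipe):

```shell
# One-off, for every installed kernel, using grubby (shipped with EL7):
grubby --update-kernel=ALL --args="edd=off"

# Or permanently via /etc/default/grub: append edd=off to
#   GRUB_CMDLINE_LINUX="... edd=off"
# and then regenerate the grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
```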
>  
>
>
>  
>
> So there is no suspected patch from oVirt side that
> might have caused it.
>
>
> It's the firewall. I've fixed it[1] and specifically[2]
> but probably not completely.
>
>
> Great! Wasn't aware your patch addresses that, I've replied on
> the patch itself, but I think we need to split the fix into 2
> separate patches.
>  
>
>
> Perhaps we should try to take[2] separately.
> Y.
>
> [1] https://gerrit.ovirt.org/#/c/81766/
> 
> [2] 
> https://gerrit.ovirt.org/#/c/81766/3/common/deploy-scripts/setup_storage_unified_he_extra_el7.sh
> 
> 
>
>
>
>
> It is probably also the reason why HC suites are
> failing, since they are using also HE for deployments.
>
> I think this issue should BLOCK the Alpha release
> tomorrow, or at the minimum, we need to verify its an
> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE master ] [ 17/09/17 ] [ engine-setup ]

2017-09-18 Thread Simone Tiraboschi
On Mon, Sep 18, 2017 at 12:09 PM, Simone Tiraboschi 
wrote:

>
>
> On Sun, Sep 17, 2017 at 11:14 AM, Eyal Edri  wrote:
>
>>
>>
>> On Sun, Sep 17, 2017 at 11:50 AM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri  wrote:
>>>
 Hi,

 It looks like HE suite ( both 'master' and '4.1' ) is failing
 constantly, most likely due to 7.4 updates.

>>>
> I'm investigating the issue on master.
> In my case I chose to configure the engine VM with a static IP address
> and engine-setup failed on the engine VM since it wasn't able to check
> available OVN related packages.
>
> So we have two distinct issues here:
> 1. we are executing engine-setup with --offline cli option but the OVN
> plugins are ignoring it.
>
> 2. the engine VM has no connectivity.
> I dug into it a bit and found that the default gateway wasn't configured on
> the engine VM although it's correctly set in the cloud-init meta-data file.
> So it seems that on 7.4 cloud-init is failing to set the default gateway:
>
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway
> Error: value for 'ipv4.gateway' is missing.
> [root@enginevm ~]#
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   --
> ipv6.gateway:   --
> IP4.GATEWAY:--
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# nmcli con modify "System eth0" ipv4.gateway 192.168.1.1
> [root@enginevm ~]# nmcli con reload "System eth0"
> [root@enginevm ~]# nmcli con up "System eth0"
> Connection successfully activated (D-Bus active path: /org/freedesktop/
> NetworkManager/ActiveConnection/3)
> [root@enginevm ~]# nmcli con show "System eth0" | grep -i GATEWAY
> connection.gateway-ping-timeout:0
> ipv4.gateway:   192.168.1.1
> ipv6.gateway:   --
> IP4.GATEWAY:192.168.1.1
> IP6.GATEWAY:fe80::c4ee:3eff:fed5:fad9
> [root@enginevm ~]# mount /dev/sr0 /mnt/
> mount: /dev/sr0 is write-protected, mounting read-only
> [root@enginevm ~]# cat /mnt/meta-data
> instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
> local-hostname: enginevm.localdomain
> network-interfaces: |
>   auto eth0
>   iface eth0 inet static
> address 192.168.1.204
> network 192.168.1.0
> netmask 255.255.255.0
> broadcast 192.168.1.255
> gateway 192.168.1.1
>
>
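For reference, a NoCloud seed like the one mounted above is assembled from plain `meta-data`/`user-data` files; a minimal sketch mirroring the thread's values (the directory name and the commented ISO-building step are assumptions):

```shell
# Recreate the meta-data shown in the thread (ENI-style static network).
workdir=/tmp/nocloud-demo
mkdir -p "$workdir"
cat > "$workdir/meta-data" <<'EOF'
instance-id: d8b22f43-1565-44e2-916f-f211c7e07f13
local-hostname: enginevm.localdomain
network-interfaces: |
  auto eth0
  iface eth0 inet static
    address 192.168.1.204
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
EOF
: > "$workdir/user-data"   # an (even empty) user-data file is also expected
# The seed ISO would then typically be built with something like:
#   genisoimage -output seed.iso -volid cidata -joliet -rock \
#       "$workdir/user-data" "$workdir/meta-data"
grep -c 'gateway' "$workdir/meta-data"   # prints 1
```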
An upstream user also reported that after updating his host and his engine
VM to CentOS 7.4, the engine VM failed to reboot, with the 7.4 kernel
hanging at
"Probing EDD (edd=off to disable)...ok". He manually forced the old 7.3
kernel via the grub menu and his engine VM booted correctly.
I wasn't able to reproduce it here.


>
>
>
>> So there is no suspected patch from oVirt side that might have caused it.

>>>
>>> It's the firewall. I've fixed it[1] and specifically[2] but probably not
>>> completely.
>>>
>>
>> Great! Wasn't aware your patch addresses that, I've replied on the patch
>> itself, but I think we need to split the fix into 2 separate patches.
>>
>>
>>>
>>> Perhaps we should try to take[2] separately.
>>> Y.
>>>
>>> [1] https://gerrit.ovirt.org/#/c/81766/
>>> [2] https://gerrit.ovirt.org/#/c/81766/3/common/deploy-scrip
>>> ts/setup_storage_unified_he_extra_el7.sh
>>>
>>>
>>>
>>>
 It is probably also the reason why HC suites are failing, since they
 are using also HE for deployments.

 I think this issue should BLOCK the Alpha release tomorrow, or at the
 minimum, we need to verify its an OST issue and not a real regression.

 Links to relevant failures:
 http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
 te-master/37/consoleFull
 http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-sui
 te-4.1/33/console

 Error snippet:

 03:01:38
 03:01:38   --== STORAGE CONFIGURATION ==--
 03:01:38
 03:02:47 [ ERROR ] Error while mounting specified storage path:
 mount.nfs: No route to host
 03:02:58 [WARNING] Cannot unmount /tmp/tmp2gkFwJ
 03:02:58 [ ERROR ] Failed to execute stage 'Environment customization':
 mount.nfs: No route to host


 --

 Eyal edri


 ASSOCIATE MANAGER

 RHV DevOps

 EMEA VIRTUALIZATION R


 Red Hat EMEA 
  TRIED. TESTED. TRUSTED.
 
 phone: +972-9-7692018 

Re: CI error on 4.1 builds?

2017-09-18 Thread Ala Hino
Got conflicts when I did the backport and don't have time now to look into
them.

On Mon, Sep 18, 2017 at 12:33 PM, Nir Soffer  wrote:

> On Mon, Sep 18, 2017 at 10:40 AM Milan Zamazal 
> wrote:
>
>> Nir Soffer  writes:
>>
>> > On Mon, Sep 18, 2017 at 10:13 AM Ala Hino  wrote:
>> >
>> >> Getting following error from CI, only on 4.1 branch, CI for same patch
>> on
>> >> master succeeded:
>> >>
>> >> 20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
>> >> /usr/bin/yum-deprecated --installroot
>> >> /var/lib/mock/epel-7-x86_64-11370f2637703a06ca4541539ddee7
>> 29-1963/root/
>> >> --releasever 7 install @buildsys-build autoconf automake dbus-python
>> gdb
>> >> git libguestfs-tools-c m2crypto make mom openvswitch
>> ovirt-imageio-common
>> >> policycoreutils-python PyYAML python-blivet python-coverage
>> python-dateutil
>> >> python-decorator python-devel python-inotify python-ioprocess
>> python-mock
>> >> python-magic python-netaddr python-pthreading python-setuptools
>> python-six
>> >> python-requests rpm-build sanlock-python sudo yum yum-utils
>> >> --setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml
>> from
>> >> centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try.
>> 20:33:28
>> >> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.
>> 0/repodata/repomd.xml:
>> >> [Errno 14] HTTP Error 404 - Not Found
>> >>
>> >
>> > Milan, is this the same error you reported last week, fixed in OST?
>>
>> Yes, it looks the same as what Dan has fixed in commit d2baae2 in Vdsm
>> master and what has been fixed in OST in https://gerrit.ovirt.org/81729.
>>
>
> Ala, maybe you can backport Dan fix to 4.1?
>
>
>>
>> >> Link to build:
>> >> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/
>>
>>


Re: CI error on 4.1 builds?

2017-09-18 Thread Nir Soffer
On Mon, Sep 18, 2017 at 10:40 AM Milan Zamazal  wrote:

> Nir Soffer  writes:
>
> > On Mon, Sep 18, 2017 at 10:13 AM Ala Hino  wrote:
> >
> >> Getting following error from CI, only on 4.1 branch, CI for same patch
> on
> >> master succeded:
> >>
> >> 20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
> >> /usr/bin/yum-deprecated --installroot
> >> /var/lib/mock/epel-7-x86_64-11370f2637703a06ca4541539ddee729-1963/root/
> >> --releasever 7 install @buildsys-build autoconf automake dbus-python gdb
> >> git libguestfs-tools-c m2crypto make mom openvswitch
> ovirt-imageio-common
> >> policycoreutils-python PyYAML python-blivet python-coverage
> python-dateutil
> >> python-decorator python-devel python-inotify python-ioprocess
> python-mock
> >> python-magic python-netaddr python-pthreading python-setuptools
> python-six
> >> python-requests rpm-build sanlock-python sudo yum yum-utils
> >> --setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml
> from
> >> centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try.
> 20:33:28
> >>
> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml
> :
> >> [Errno 14] HTTP Error 404 - Not Found
> >>
> >
> > Milan, is this the same error you reported last week, fixed in OST?
>
> Yes, it looks the same as what Dan has fixed in commit d2baae2 in Vdsm
> master and what has been fixed in OST in https://gerrit.ovirt.org/81729.
>

Ala, maybe you can backport Dan fix to 4.1?
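For context, such a backport is usually a `git cherry-pick -x` of the master commit onto the stable branch. A self-contained local sketch — the repo, branch, and file names here are made up; the only real detail from the thread is that commit d2baae2 exists on Vdsm master:

```shell
set -e
rm -rf /tmp/cpdemo && mkdir /tmp/cpdemo && cd /tmp/cpdemo
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt && git commit -qm 'base'
git branch stable-4.1               # stable branch forks at "base"
echo fix >> file.txt
git add file.txt && git commit -qm 'fix reposync'
fix_sha=$(git rev-parse HEAD)
git checkout -q stable-4.1
git cherry-pick -x "$fix_sha"       # -x records the original commit id
grep -q fix file.txt && echo 'backport applied'
```

In Gerrit terms, the resulting commit would then be pushed for review with something like `git push origin HEAD:refs/for/ovirt-4.1`.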


>
> >> Link to build:
> >> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/
>
>


[oVirt Jenkins] ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64 - Build # 7 - Still Failing!

2017-09-18 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/
Build: http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/7/
Build Number: 7
Build Status:  Still Failing
Triggered By: Started by user Sandro Bonazzola

-
Changes Since Last Success:
-
Changes for Build #6
[Milan Zamazal] vdsm: Add Fedora 26 build

[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs

[Sandro Bonazzola] packaging: build: move from fc24 to fc25


Changes for Build #7
[Sandro Bonazzola] packaging: build: move from fc24 to fc25




-
Failed Tests:
-
No tests ran.


[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread eyal edri (oVirt JIRA)

[ https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34945#comment-34945 ]

eyal edri commented on OVIRT-1651:
--

I see, so maybe this line: [[ -d /etc/dnf ]] && dnf -y reinstall dnf-conf can
change to something using the new available ENV variables we have,
to check if you're on EL or Fedora?

e.g.:
if [[ "$STD_CI_DISTRO" != "el7" ]]; then   # or match fc* ?
    dnf -y reinstall dnf-conf
fi
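An alternative sketch that needs no extra variables at all: probe for the `dnf` binary itself instead of `/etc/dnf`, which (as this issue shows) can exist on EL7 hosts that have no dnf installed. The variable name is made up:

```shell
# Pick the package manager by checking whether dnf is actually runnable,
# rather than inferring it from the presence of the /etc/dnf directory.
if command -v dnf >/dev/null 2>&1; then
    pkg_mgr=dnf     # Fedora, or EL with dnf really installed
else
    pkg_mgr=yum     # plain EL7
fi
echo "package manager: $pkg_mgr"
```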


> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Lowest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


Jenkins build is back to normal : system-sync_mirrors-centos-updates-el7-x86_64 #793

2017-09-18 Thread jenkins
See 




[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread Barak Korren (oVirt JIRA)

[ https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34944#comment-34944 ]

Barak Korren commented on OVIRT-1651:
-

[~eedri] that is not the error that caused the job failure, this is:

{code}
05:55:12   centos-release.x86_64 0:7-4.1708.el7.centos yum.noarch 
0:3.4.3-154.el7.centos
05:55:12 
05:55:12 + [[ -d /etc/dnf ]]
05:55:12 + dnf -y reinstall dnf-conf
05:55:12 ./automation/check-patch.sh: line 27: dnf: command not found
05:55:12 Took 14 seconds
{code}

> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Lowest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread eyal edri (oVirt JIRA)

[ https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34943#comment-34943 ]

eyal edri commented on OVIRT-1651:
--

From the job it looks like a cleanup error issue:

05:52:58 error: Failed to destroy domain f0e6d58d-53f1-4f07-9b30-082380a09ece
05:52:58 error: Requested operation is not valid: domain is not running
05:52:58 
05:53:00 error: Storage volume 
'hda'(/home/jenkins/workspace/ovirt-node-ng_ovirt-4.2_build-artifacts-el7-x86_64/ovirt-node-ng/build/diskGZCMNL.img)
 is not managed by libvirt. Remove it manually.
05:53:00 
05:53:00 error: Storage volume 
'hdb'(/home/jenkins/workspace/ovirt-node-ng_ovirt-4.2_build-artifacts-el7-x86_64/ovirt-node-ng/boot.iso)
 is not managed by libvirt. Remove it manually.
05:53:00 
05:53:00 Domain f0e6d58d-53f1-4f07-9b30-082380a09ece ha

> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Lowest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.



--
This message was sent by Atlassian {0}
(v1001.0.0-SNAPSHOT#100060)


[oVirt Jenkins] ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64 - Build # 6 - Failure!

2017-09-18 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/
Build: http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/6/
Build Number: 6
Build Status:  Failure
Triggered By: Started by user Sandro Bonazzola

-
Changes Since Last Success:
-
Changes for Build #6
[Milan Zamazal] vdsm: Add Fedora 26 build

[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs

[Sandro Bonazzola] packaging: build: move from fc24 to fc25




-
Failed Tests:
-
No tests ran.


Re: [CQ]: 81775,1 (ovirt-engine) failed "ovirt-4.1" system tests

2017-09-18 Thread Eyal Edri
On Mon, Sep 18, 2017 at 3:53 AM, Greg Sheremeta  wrote:

> This looks different, because iirc that previous issue was in uicommon.
>
> Can you share more log above?
>

The full log should be in the job link provided:

http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-artifacts-el7-x86_64/928/console


>
> Greg
>
>
> On Sep 17, 2017 5:29 AM, "Eyal Edri"  wrote:
>
> So it looks like the fix we did last week on not running findbugs on
> fedora didn't fix all the failures of engine failing to build [1].
>
> Any thoughts on what else can we do to fix it?
>
> 07:42:27 [INFO] WebAdmin ..
> FAILURE [5:36.003s]
> 07:42:27 [INFO] UserPortal 
> SKIPPED
> 07:42:27 [INFO] oVirt Server EAR ..
> SKIPPED
> 07:42:27 [INFO] ovirt-engine maven make ... SKIPPED
> 07:42:27 [INFO] --
> --
> 07:42:27 [INFO] BUILD FAILURE
> 07:42:27 [INFO] --
> --
> 07:42:27 [INFO] Total time: 13:12.947s
> 07:42:27 [INFO] Finished at: Sun Sep 17 07:42:27 GMT 2017
> 07:42:29 [INFO] Final Memory: 547M/1317M
> 07:42:29 [INFO] --
> --
> 07:42:29 [ERROR] Failed to execute goal 
> org.codehaus.mojo:gwt-maven-plugin:2.6.1:compile
> (gwtcompile) on project webadmin: Command [[
> 07:42:29 [ERROR] /bin/sh -c /usr/lib/jvm/java-1.8.0-openjd
> k-1.8.0.144-0.b01.el7_4.x86_64/jre/bin/java -javaagent:/root/.m2/repositor
> y/org/aspectj/aspectjweaver/1.8.2/aspectjweaver-1.8.2.jar
> -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory
> \
> 07:42:29 [ERROR] -Dgwt.jjs.maxThreads=4 \
> 07:42:29 [ERROR] -Djava.io.tmpdir="/home/jenkin
> s/workspace/ovirt-engine_4.1_build-artifacts-el7-x86_64/
> ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/frontend/
> webadmin/modules/webadmin/target/tmp" \
> 07:42:29 [ERROR] -Djava.util.prefs.systemRoot="
> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/
> frontend/webadmin/modules/webadmin/target/tmp" \
> 07:42:29 [ERROR] -Djava.util.prefs.userRoot="/h
> ome/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7-
> x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/
> frontend/webadmin/modules/webadmin/target/tmp" \
> 07:42:29 [ERROR] -Djava.util.logging.config.cla
> ss=org.ovirt.engine.ui.gwtextension.JavaLoggingConfig \
> 07:42:29 [ERROR] -Xms1G -Xmx4G  '-Dgwt.dontPrune=org\.ovirt\.e
> ngine\.core\.(common|compat)\..*' -classpath
> /home/jenkins/workspace/ovirt-engine_4.1_build-artifacts-el7
> -x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.1.6.3/
> frontend/webadmin/modules/webadmin/target/webadmin-4.1.
> 6.3-SNAPSHOT/WEB-INF/classes:/home/jenkins/workspace/ovirt-
> engine_4.1_build-artifacts-el7-x86_64/ovirt-engine/
> rpmbuild/BUILD/ovirt-engine-4.1.6.3/frontend/webadmin/
> modules/webadmin/src/main/java:/home/jenkins/workspace/ovirt-
>
>
> [1] http://jenkins.ovirt.org/job/ovirt-engine_4.1_build-artifacts-el7-x86_64/928/console
>
>
>
> On Sun, Sep 17, 2017 at 10:49 AM, oVirt Jenkins  wrote:
>
>> Change 81775,1 (ovirt-engine) is probably the reason behind recent system
>> test
>> failures in the "ovirt-4.1" change queue and needs to be fixed.
>>
>> This change has been removed from the testing queue. Artifacts built from
>> this change will not be released until it is fixed.
>>
>> For further details about the change see:
>> https://gerrit.ovirt.org/#/c/81775/1
>>
>> For failed test results see:
>> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1007/
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>
>
>


-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA
TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI error on 4.1 builds?

2017-09-18 Thread Milan Zamazal
Nir Soffer  writes:

> On Mon, Sep 18, 2017 at 10:13 AM Ala Hino  wrote:
>
>> Getting the following error from CI, only on the 4.1 branch; CI for the same
>> patch on master succeeded:
>>
>> 20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
>> /usr/bin/yum-deprecated --installroot
>> /var/lib/mock/epel-7-x86_64-11370f2637703a06ca4541539ddee729-1963/root/
>> --releasever 7 install @buildsys-build autoconf automake dbus-python gdb
>> git libguestfs-tools-c m2crypto make mom openvswitch ovirt-imageio-common
>> policycoreutils-python PyYAML python-blivet python-coverage python-dateutil
>> python-decorator python-devel python-inotify python-ioprocess python-mock
>> python-magic python-netaddr python-pthreading python-setuptools python-six
>> python-requests rpm-build sanlock-python sudo yum yum-utils
>> --setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml from
>> centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try. 20:33:28
>> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml:
>> [Errno 14] HTTP Error 404 - Not Found
>>
>
> Milan, is this the same error you reported last week, fixed in OST?

Yes, it looks like the same issue Dan fixed in commit d2baae2 in Vdsm
master and that was fixed in OST in https://gerrit.ovirt.org/81729.

>> Link to build:
>> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/



Re: CI error on 4.1 builds?

2017-09-18 Thread Eyal Edri
On Mon, Sep 18, 2017 at 10:13 AM, Ala Hino  wrote:

> Getting the following error from CI, only on the 4.1 branch; CI for the same
> patch on master succeeded:
>
> 20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
> /usr/bin/yum-deprecated --installroot /var/lib/mock/epel-7-x86_64-
> 11370f2637703a06ca4541539ddee729-1963/root/ --releasever 7 install
> @buildsys-build autoconf automake dbus-python gdb git libguestfs-tools-c
> m2crypto make mom openvswitch ovirt-imageio-common policycoreutils-python
> PyYAML python-blivet python-coverage python-dateutil python-decorator
> python-devel python-inotify python-ioprocess python-mock python-magic
> python-netaddr python-pthreading python-setuptools python-six
> python-requests rpm-build sanlock-python sudo yum yum-utils
> --setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml from
> centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try. 20:33:28
> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml:
> [Errno 14] HTTP Error 404 - Not Found
>

The repo entries in the .repos files under the automation dir are outdated and
need to be fixed by the vdsm maintainers.
A similar patch was sent to master last week [1].


[1] https://gerrit.ovirt.org/#/c/81774/
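For anyone triaging this before the maintainers push the fix, the stale
entries can be located with a quick grep; a minimal sketch, assuming a vdsm
checkout with the usual automation/*.repos layout (paths are illustrative):

```shell
# Hypothetical sketch: flag any automation/*.repos entries that still point
# at the removed CentOS ovirt-4.0 virt tree (the source of the 404 above).
for f in automation/*.repos; do
  [ -e "$f" ] || continue            # skip when the glob matched nothing
  grep -Hn 'ovirt-4\.0' "$f" || true # print file:line for each stale entry
done
```

Any line it prints is a repo definition that needs updating or dropping,
as was done on master in the patch above.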


>
>
> Link to build:
> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/
>
>
>


-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA
TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: CI error on 4.1 builds?

2017-09-18 Thread Nir Soffer
On Mon, Sep 18, 2017 at 10:13 AM Ala Hino  wrote:

> Getting the following error from CI, only on the 4.1 branch; CI for the same
> patch on master succeeded:
>
> 20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
> /usr/bin/yum-deprecated --installroot
> /var/lib/mock/epel-7-x86_64-11370f2637703a06ca4541539ddee729-1963/root/
> --releasever 7 install @buildsys-build autoconf automake dbus-python gdb
> git libguestfs-tools-c m2crypto make mom openvswitch ovirt-imageio-common
> policycoreutils-python PyYAML python-blivet python-coverage python-dateutil
> python-decorator python-devel python-inotify python-ioprocess python-mock
> python-magic python-netaddr python-pthreading python-setuptools python-six
> python-requests rpm-build sanlock-python sudo yum yum-utils
> --setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml from
> centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try. 20:33:28
> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml:
> [Errno 14] HTTP Error 404 - Not Found
>

Milan, is this the same error you reported last week, fixed in OST?

>
>
> Link to build:
> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/
>


CI error on 4.1 builds?

2017-09-18 Thread Ala Hino
Getting the following error from CI, only on the 4.1 branch; CI for the same
patch on master succeeded:

20:33:26 Start: yum install 20:33:28 ERROR: Command failed: 20:33:28 #
/usr/bin/yum-deprecated --installroot
/var/lib/mock/epel-7-x86_64-11370f2637703a06ca4541539ddee729-1963/root/
--releasever 7 install @buildsys-build autoconf automake dbus-python gdb
git libguestfs-tools-c m2crypto make mom openvswitch ovirt-imageio-common
policycoreutils-python PyYAML python-blivet python-coverage python-dateutil
python-decorator python-devel python-inotify python-ioprocess python-mock
python-magic python-netaddr python-pthreading python-setuptools python-six
python-requests rpm-build sanlock-python sudo yum yum-utils
--setopt=tsflags=nocontexts ... 20:33:28 failure: repodata/repomd.xml from
centos-ovirt40-release-x86_64: [Errno 256] No more mirrors to try. 20:33:28
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/repodata/repomd.xml:
[Errno 14] HTTP Error 404 - Not Found

Link to build:
http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc24-x86_64/892/


[oVirt Jenkins] ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64 - Build # 271 - Failure!

2017-09-18 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/271/
Build Number: 271
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #271
[dfodor] update python-paramiko to python2-paramiko

[Daniel Belenky] add support to inject runtime env vars to mock

[Barak Korren] Add (partial) STD_CI support for 'automation.yaml'

[Barak Korren] Add retries to GitHub notifications

[Barak Korren] Fix pipeline stdci trigger detection code

[Barak Korren] Add production STD-CI pipeline jobs

[Yuval Turgeman] automation: adding loop devices




-
Failed Tests:
-
No tests ran.


[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1651:

Priority: Lowest  (was: Highest)

Reducing priority since I do not regard this as a real system issue.

> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Lowest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100060)


[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34941#comment-34941
 ] 

Barak Korren edited comment on OVIRT-1651 at 9/18/17 6:25 AM:
--

The job never sees the slave itself. It's running inside mock, so it only sees
what is in the mock environment.

You're seeing '{{/etc/dnf*}}' because we generate it inside mock for Fedora
compatibility. This has been like this for years.

This does not mean DNF itself is installed, nor does it mean you're seeing a
Fedora userspace. To check whether DNF is installed, please check for the
existence of the DNF binary or the DNF rpm package.
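That check can be done from inside the chroot in one short script; a minimal
sketch (not part of Barak's reply), assuming a standard shell environment:

```shell
# Hypothetical check: the existence of /etc/dnf alone does not prove DNF is
# installed. Look for the actual binary and the rpm package instead.
if command -v dnf >/dev/null 2>&1; then
  echo "dnf binary: present"
else
  echo "dnf binary: absent"
fi
if rpm -q dnf >/dev/null 2>&1; then
  echo "dnf package: installed"
else
  echo "dnf package: not installed"
fi
```

On an EL7 mock chroot this would be expected to report the binary as absent
even though /etc/dnf exists.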





was (Author: bkor...@redhat.com):
The job never sees the slave itself. It's running inside mock, so it only sees
what is in the mock environment.

You're seeing '{{/etc/dnf*}}' because we generate it inside mock for Fedora
compatibility. This does not mean DNF itself is installed, nor does it mean
you're seeing a Fedora userspace. To check if DNF is installed, please check
for the existence of the DNF binary or the DNF rpm package.



> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Highest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.





[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34941#comment-34941
 ] 

Barak Korren commented on OVIRT-1651:
-

The job never sees the slave itself. It's running inside mock, so it only sees
what is in the mock environment.

You're seeing '{{/etc/dnf*}}' because we generate it inside mock for Fedora
compatibility. This does not mean DNF itself is installed, nor does it mean
you're seeing a Fedora userspace. To check if DNF is installed, please check
for the existence of the DNF binary or the DNF rpm package.



> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Highest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.





[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-1651:
---

 Summary: Jenkins EL7 slaves configured as Fedora slaves
 Key: OVIRT-1651
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
 Project: oVirt - virtualization made easy
  Issue Type: Bug
  Components: oVirt CI
Reporter: sbonazzo
Assignee: infra
Priority: Highest


This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
slaves because of the existence of /etc/dnf directory on EL7 slaves:

Example 
job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console

Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org

println "ls -l /etc/dnf".execute().text
println "cat /etc/os-release".execute().text

Result

total 8
-rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
-rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Please fix slaves configuration.






[JIRA] (OVIRT-1651) Jenkins EL7 slaves configured as Fedora slaves

2017-09-18 Thread sbonazzo (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sbonazzo updated OVIRT-1651:

Epic Link: OVIRT-400

> Jenkins EL7 slaves configured as Fedora slaves
> --
>
> Key: OVIRT-1651
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1651
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: oVirt CI
>Reporter: sbonazzo
>Assignee: infra
>Priority: Highest
>
> This morning I'm seeing jobs failing trying to use DNF instead of YUM on EL7 
> slaves because of the existence of /etc/dnf directory on EL7 slaves:
> Example 
> job:http://jenkins.ovirt.org/job/ovirt-release_4.1_check-patch-el7-x86_64/100/console
> Example slave: http://jenkins.ovirt.org/computer/vm0085.workers-phx.ovirt.org
> println "ls -l /etc/dnf".execute().text
> println "cat /etc/os-release".execute().text
> Result
> total 8
> -rw-r--r--. 1 root root 1562 Sep 16 05:43 dnf.conf
> -rw-r--r--. 1 root root   72 Mar  4  2015 dnf.conf.old
> drwxr-xr-x. 2 root root6 Jul  7  2015 plugins
> drwxr-xr-x. 2 root root   21 Sep 14 21:57 protected.d
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
> Please fix slaves configuration.


