Migration of Jenkins VM to new Cluster failed

2016-06-23 Thread Nadav Goldin
Hi Evgheni,
Unfortunately migrating the Jenkins VM failed; luckily it's back
running in the old Production cluster. So we can track this, I am
listing again the steps taken today:

1. Around 18:00 TLV time, I triggered a snapshot of the VM. This not
only failed but also left the Jenkins VM non-responsive for a few
minutes. More disturbing is that although the 'Events' tab in the
engine announced a failure, under 'Snapshots' the new snapshot was
listed with status 'ok'. This also caused a few CI failures (which were
re-triggered).

2. As snapshotting seems like a non-option, I created a new VM in the
production cluster, jenkins-2.phx.ovirt.org, and downloaded the latest
backup from backup.phx.ovirt.org, so that in case of a failure we could
change the DNS and use it (keep in mind this backup does not have any
builds, only logs/configs).

3. I shut down the VM from the engine - it hung for a few
minutes in 'shutting down' and then announced 'shutdown failed', which
caused it to appear again in the 'up' state, but it was non-responsive.
virsh -r --list also stated it was up.

4. I triggered another shutdown, which succeeded. As I didn't want to
risk it any more, I let it boot in the same cluster, which was also
successful.

I've attached some parts of engine.log; from a quick look at vdsm.log
I didn't see anything, but it could help if someone else had a look (this
is ovirt-srv02). The relevant log times for the shutdown failure are
from '2016-06-23 16:15'.

Either way, until we find the problem I'm not sure we should risk it
before we have a proper recovery plan. One brute-force option is using
rsync from jenkins.phx.ovirt.org:/var/lib/data/jenkins to jenkins-2,
with the jenkins daemon itself shut down on jenkins-2. Then we could
schedule a downtime on jenkins.phx.ovirt.org, wait until everything is
synced, stop jenkins (and puppet), then start the jenkins daemon on
jenkins-2 and change the DNS CNAME of jenkins.ovirt.org to point to
it. If everything goes smoothly it should run fine, and if not, we still
have jenkins.phx.ovirt.org running.
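
Roughly, a minimal sketch of that cutover (assuming the Jenkins home is
/var/lib/data/jenkins on both hosts, jenkins-2 already has the jenkins
service installed but stopped, and systemd is the service manager; the
rsync flags are my assumption, not something we've tested):

  # first pass, while jenkins.phx.ovirt.org is still serving
  rsync -aHAX --delete /var/lib/data/jenkins/ jenkins-2.phx.ovirt.org:/var/lib/data/jenkins/

  # inside the downtime window, on jenkins.phx.ovirt.org:
  systemctl stop jenkins puppet
  # second pass only copies what changed since the first one
  rsync -aHAX --delete /var/lib/data/jenkins/ jenkins-2.phx.ovirt.org:/var/lib/data/jenkins/

  # on jenkins-2: start the daemon, then repoint the jenkins.ovirt.org CNAME
  systemctl start jenkins

The first pass does the heavy copying ahead of time, so the second pass
inside the downtime window should be short.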

Another option is to unmount /var/lib/data/ and mount it back on
jenkins-2, though then we might be in trouble if something goes wrong
along the way.
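
For example (a hedged sketch, assuming /var/lib/data is its own filesystem
on a dedicated data disk; the device name below is an assumption):

  # on jenkins.phx.ovirt.org, with the jenkins daemon stopped
  umount /var/lib/data
  # detach the disk from the old VM and attach it to jenkins-2 via the engine,
  # then on jenkins-2:
  mount /dev/vdb1 /var/lib/data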


Nadav.
engine.log
snapshot event
2016-06-23 09:06:49,592 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-44) VM jenkins-phx-ovirt-org 
e7a7b735-0310-4f88-9ed9-4fed85835a01 moved from Up --> Paused
, Custom Event ID: -1, Message: Failed to create live snapshot 
'ngoldin_before_cluster_move' for VM 'jenkins-phx-ovirt-org'. VM restart is 
recommended. Note that using the created snapshot might cause data 
inconsistency.
, Custom Event ID: -1, Message: Failed to complete snapshot 
'ngoldin_before_cluster_move' creation for VM 'jenkins-phx-ovirt-org'.
2016-06-23 09:17:29,020 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-69) VM jenkins-phx-ovirt-org 
e7a7b735-0310-4f88-9ed9-4fed85835a01 moved from Paused --> Up

failed shutdown
2016-06-23 15:59:20,348 INFO  [org.ovirt.engine.core.bll.ShutdownVmCommand] 
(org.ovirt.thread.pool-8-thread-25) [52b9dd27] Entered (VM 
jenkins-phx-ovirt-org).
2016-06-23 15:59:20,349 INFO  [org.ovirt.engine.core.bll.ShutdownVmCommand] 
(org.ovirt.thread.pool-8-thread-25) [52b9dd27] Sending shutdown command for VM 
jenkins-phx-ovirt-org.
2016-06-23 15:59:20,446 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-25) [52b9dd27] Correlation ID: 52b9dd27, Job 
ID: f1f0d78e-ae68-465e-a3c1-e46d146fc2e7, Call Stack: null, Custom Event ID: 
-1, Message: VM shutdown initiated by admin on VM jenkins-phx-ovirt-org (Host: 
ovirt-srv02) (Reason: Not Specified).
2016-06-23 16:04:20,556 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-20) [2d2d1b3a] VM jenkins-phx-ovirt-org 
e7a7b735-0310-4f88-9ed9-4fed85835a01 moved from PoweringDown --> Up
2016-06-23 16:04:20,628 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-20) [2d2d1b3a] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: Shutdown of VM jenkins-phx-ovirt-org failed.


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [Attention] Jenkins maintenance today(24/06/2016 01:00 AM TLV)

2016-06-23 Thread Nadav Goldin
Jenkins is back up and running normally.



On Fri, Jun 24, 2016 at 12:07 AM, Nadav Goldin  wrote:
> Hi,
> As part of an infrastructure upgrade, in approximately one hour at
> 01:00 AM TLV, http://jenkins.ovirt.org will be shut down for
> maintenance, expected downtime is 15 minutes.
> Patches sent during the downtime will be checked afterwards; patches
> sent around 40 minutes prior to the downtime might not get checked.
>
> If patches you sent did not trigger CI, you can login after the
> downtime and re-trigger them manually.
>
> Thanks,
>
> Nadav.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-608) [URGENT] Half of the Jenkins slaves are offline

2016-06-23 Thread Nadav Goldin (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509#comment-17509
 ] 

Nadav Goldin commented on OVIRT-608:


Quick check - all disconnected VMs were disconnected on purpose to reduce load.

22 VMs were disconnected in order to reduce load, most of them by [~dcaroest]
last week; I'm not sure how the number was calculated.
2 BM slaves are offline, most likely because they lost their IP due to a DHCP
problem.


> [URGENT] Half of the Jenkins slaves are offline
> ---
>
> Key: OVIRT-608
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> Please check what happened, 24 jenkins slaves are down right now.
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-608) [URGENT] Half of the Jenkins slaves are offline

2016-06-23 Thread Nadav Goldin (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508#comment-17508
 ] 

Nadav Goldin commented on OVIRT-608:


During the evening I tried creating a snapshot of the Jenkins VM, which
surprisingly caused the entire storage domain to slow down; the snapshot
failed and halted the Jenkins VM for a few minutes. I'll check whether this
might have disconnected more slaves than we intended.


> [URGENT] Half of the Jenkins slaves are offline
> ---
>
> Key: OVIRT-608
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> Please check what happened, 24 jenkins slaves are down right now.
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[Attention] Jenkins maintenance today(24/06/2016 01:00 AM TLV)

2016-06-23 Thread Nadav Goldin
Hi,
As part of an infrastructure upgrade, in approximately one hour at
01:00 AM TLV, http://jenkins.ovirt.org will be shut down for
maintenance, expected downtime is 15 minutes.
Patches sent during the downtime will be checked afterwards; patches
sent around 40 minutes prior to the downtime might not get checked.

If patches you sent did not trigger CI, you can login after the
downtime and re-trigger them manually.

Thanks,

Nadav.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-608) [URGENT] Half of the Jenkins slaves are offline

2016-06-23 Thread eyal edri [Administrator] (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17507#comment-17507
 ] 

eyal edri [Administrator] commented on OVIRT-608:
-

Some of the slaves are offline for a reason. We have storage overload that
can risk the stability of the entire DC, and until we move to
local disk storage we can't keep all the slaves running all the time.
Having said that, if there is something critical, we can start a few more
VMs to unlock a critical fix.
On Jun 23, 2016 9:49 PM, "sbonazzo (oVirt JIRA)" <
j...@ovirt-jira.atlassian.net> wrote:

sbonazzo created OVIRT-608:
--

 Summary: [URGENT] Half of the Jenkins slaves are offline
 Key: OVIRT-608
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra


Please check what happened, 24 jenkins slaves are down right now.

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)


> [URGENT] Half of the Jenkins slaves are offline
> ---
>
> Key: OVIRT-608
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> Please check what happened, 24 jenkins slaves are down right now.
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-608) [URGENT] Half of the Jenkins slaves are offline

2016-06-23 Thread Eyal Edri
Some of the slaves are offline for a reason. We have storage overload that
can risk the stability of the entire DC, and until we move to
local disk storage we can't keep all the slaves running all the time.
Having said that, if there is something critical, we can start a few more
VMs to unlock a critical fix.
On Jun 23, 2016 9:49 PM, "sbonazzo (oVirt JIRA)" <
j...@ovirt-jira.atlassian.net> wrote:

sbonazzo created OVIRT-608:
--

 Summary: [URGENT] Half of the Jenkins slaves are offline
 Key: OVIRT-608
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra


Please check what happened, 24 jenkins slaves are down right now.

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-608) [URGENT] Half of the Jenkins slaves are offline

2016-06-23 Thread sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-608:
--

 Summary: [URGENT] Half of the Jenkins slaves are offline
 Key: OVIRT-608
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-608
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra


Please check what happened, 24 jenkins slaves are down right now.

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


planned restart of ci-templates-repo.phx.ovirt.org and glance.phx.ovirt.org

2016-06-23 Thread Evgheni Dereveanchin
Hello everyone,

I'll restart these two VMs to move them to the new oVirt cluster:
 ci-templates-repo.phx.ovirt.org
 glance.phx.ovirt.org

The first VM is used by lago. If any jobs fail
due to template unavailability, please restart them.
If you find any persistent issues, please let me know.

Regards, 
Evgheni Dereveanchin 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[ANN] oVirt 4.0.0 Final Release is now available

2016-06-23 Thread Sandro Bonazzola
The oVirt Project is pleased to announce today the general availability of
oVirt 4.0.0.

This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0

Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].

See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO will be available soon. [4]
* A new oVirt Next Generation Node will be available soon [4].
* A new oVirt Engine Appliance will be available soon.
* A new oVirt Guest Tools ISO is already available [4].
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.0 release highlights:
http://www.ovirt.org/release/4.0.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.0/
[4] http://resources.ovirt.org/pub/ovirt-4.0/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-607) sort out jobs which didn't move to std ci

2016-06-23 Thread eyal edri [Administrator] (oVirt JIRA)
eyal edri [Administrator] created OVIRT-607:
---

 Summary: sort out jobs which didn't move to std ci
 Key: OVIRT-607
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-607
 Project: oVirt - virtualization made easy
  Issue Type: New Feature
Reporter: eyal edri [Administrator]
Assignee: infra


http://pastebin.test.redhat.com/386625

The filtered list, which doesn't include projects that don't need to move
(those using a 'rel-eng' kind of build):

http://pastebin.test.redhat.com/386628



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-wgt[master]: Correct path to QEMU GA MSI files

2016-06-23 Thread Vinzenz Feenstra

> On Jun 23, 2016, at 9:30 AM, Barak Korren  wrote:
> 
>> 
>> I now retriggered the job and it finished successfully.
>> 
> Probably some other package got updated on resources in the meantime,
> which caused the repomd to be rebuilt.
> 
>> Last night I downloaded the files from resources.ovirt.org and from jenkins
>> and compared them.
>> 
>> cmp -l on them was very long.
>> 
>> opened them with rpm2cpio, and the cpio files were very similar - didn't
>> check the diff, likely timestamp or something like that.
>> 
>> No idea how signatures/checksum etc work in rpm.
> 
> This is still concerning; perhaps we had better rebuild the package to
> ensure we have the right version on resources.

It actually works again, so it must have been a temporary hiccup.

> 
> 
> -- 
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ** PROBLEM Service Alert: PHX Ovirt-Srv 06/Total Processes is WARNING **

2016-06-23 Thread Evgheni Dereveanchin
I started getting Nagios alerts as the number of processes
on ovirt-srv06 was fluctuating around 450, which was the warning
threshold. The same story is happening on ovirt-srv05.
I've increased the threshold to 500 on these two hosts, as
450 looks like a totally normal number of running processes on EL7.
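
For reference, the alert below looks like output of the standard check_procs
plugin, so the new threshold amounts to something like the following (the
plugin path and the critical value are assumptions, not copied from our
Icinga config):

  /usr/lib64/nagios/plugins/check_procs -w 500 -c 600
  # WARNING above 500 processes, CRITICAL above 600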

Regards, 
Evgheni Dereveanchin 

- Original Message -
From: "icinga" 
To: edere...@redhat.com
Sent: Thursday, 23 June, 2016 7:05:37 AM
Subject: ** PROBLEM Service Alert: PHX Ovirt-Srv 06/Total Processes is WARNING 
**

* Icinga *

Notification Type: PROBLEM

Service: Total Processes
Host: PHX Ovirt-Srv 06
Address: 66.187.230.8
State: WARNING

Date/Time: Thu Jun 23 05:05:37 UTC 2016

Additional Info:

PROCS WARNING: 454 processes
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-wgt[master]: Correct path to QEMU GA MSI files

2016-06-23 Thread Barak Korren
>
> I now retriggered the job and it finished successfully.
>
Probably some other package got updated on resources in the meantime,
which caused the repomd to be rebuilt.

> Last night I downloaded the files from resources.ovirt.org and from jenkins
> and compared them.
>
> cmp -l on them was very long.
>
> opened them with rpm2cpio, and the cpio files were very similar - didn't
> check the diff, likely timestamp or something like that.
>
> No idea how signatures/checksum etc work in rpm.

This is still concerning; perhaps we had better rebuild the package to
ensure we have the right version on resources.


-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-wgt[master]: Correct path to QEMU GA MSI files

2016-06-23 Thread Yedidyah Bar David
On Thu, Jun 23, 2016 at 9:38 AM, Barak Korren  wrote:
> On 22 June 2016 at 23:01, Yedidyah Bar David  wrote:
>> On Wed, Jun 22, 2016 at 10:33 PM, Jenkins CI  
>> wrote:
>>> Jenkins CI has posted comments on this change.
>>>
>>> Change subject: Correct path to QEMU GA MSI files
>>> ..
>>>
>>>
>>> Patch Set 3:
>>>
>>> Build Failed
>>>
>>> http://jenkins.ovirt.org/job/ovirt-wgt_master_create-rpms-fc23-x86_64_created/32/
>>>  : FAILURE
>>
>> root.log has:
>>
>> DEBUG util.py:417:
>> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc23/noarch/nsis-simple-service-plugin-1.30-1.noarch.rpm:
>> [Errno -1] Package does not match intended download. Suggestion: run
>> yum --enablerepo=ovirt-master-snapshot clean metadata
>>
>> No idea what above means, so tried to manually install it on f23:
>>
>> [MIRROR] nsis-simple-service-plugin-1.30-1.noarch.rpm: Downloading
>> successful, but checksum doesn't match. Calculated:
>> 24a4a7e8aa59fe5f4ec59c868cad6aa5f499ee4147282d567bfe3544fba01075(sha256)
>>  Expected: 
>> f31f377d70f49f09862cd75552645de9ff4969b48fcb7ab226d4fe3930220744(sha256)
>>
>> Any idea?
>
> It seems the 'nsis-simple-service-plugin-1.30-1.noarch.rpm' file is
> different from what the yum metadata says it should be. Was it
> tampered with?
> Did anyone try to update it manually on resources.ovirt.org without
> re-running createrepo?

I now retriggered the job and it finished successfully.

Last night I downloaded the files from resources.ovirt.org and from jenkins
and compared them.

cmp -l on them was very long.

opened them with rpm2cpio, and the cpio files were very similar - didn't
check the diff, likely timestamp or something like that.

No idea how signatures/checksum etc work in rpm.
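
For what it's worth, a hedged way to poke at that: the checksum yum compares
against repodata is just the sha256 of the whole package file, while rpm can
verify the digests and any GPG signature embedded in the package itself:

  rpm -Kv nsis-simple-service-plugin-1.30-1.noarch.rpm
  sha256sum nsis-simple-service-plugin-1.30-1.noarch.rpm  # should match the repodata entry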
-- 
Didi
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-wgt[master]: Correct path to QEMU GA MSI files

2016-06-23 Thread Barak Korren
On 22 June 2016 at 23:01, Yedidyah Bar David  wrote:
> On Wed, Jun 22, 2016 at 10:33 PM, Jenkins CI  wrote:
>> Jenkins CI has posted comments on this change.
>>
>> Change subject: Correct path to QEMU GA MSI files
>> ..
>>
>>
>> Patch Set 3:
>>
>> Build Failed
>>
>> http://jenkins.ovirt.org/job/ovirt-wgt_master_create-rpms-fc23-x86_64_created/32/
>>  : FAILURE
>
> root.log has:
>
> DEBUG util.py:417:
> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc23/noarch/nsis-simple-service-plugin-1.30-1.noarch.rpm:
> [Errno -1] Package does not match intended download. Suggestion: run
> yum --enablerepo=ovirt-master-snapshot clean metadata
>
> No idea what above means, so tried to manually install it on f23:
>
> [MIRROR] nsis-simple-service-plugin-1.30-1.noarch.rpm: Downloading
> successful, but checksum doesn't match. Calculated:
> 24a4a7e8aa59fe5f4ec59c868cad6aa5f499ee4147282d567bfe3544fba01075(sha256)
>  Expected: 
> f31f377d70f49f09862cd75552645de9ff4969b48fcb7ab226d4fe3930220744(sha256)
>
> Any idea?

It seems the 'nsis-simple-service-plugin-1.30-1.noarch.rpm' file is
different from what the yum metadata says it should be. Was it
tampered with?
Did anyone try to update it manually on resources.ovirt.org without
re-running createrepo?
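
If that is what happened, a minimal sketch of the fix, run on
resources.ovirt.org (the local repo path is an assumption based on the
public URL, not a path I've verified):

  cd /path/to/pub/ovirt-master-snapshot/rpm/fc23
  sha256sum noarch/nsis-simple-service-plugin-1.30-1.noarch.rpm
  # compare against the sha256 recorded in repodata/*-primary.xml.gz;
  # if they disagree, regenerate the metadata:
  createrepo --update .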

-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-606) Trigger Permissions for user evilissimo @jenkins

2016-06-23 Thread Nadav Goldin (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nadav Goldin reassigned OVIRT-606:
--

Assignee: Nadav Goldin  (was: infra)

> Trigger Permissions for user evilissimo @jenkins
> 
>
> Key: OVIRT-606
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-606
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins
>Reporter: vfeenstr
>Assignee: Nadav Goldin
>
> Hi,
> I'd like to ask for permissions to trigger jenkins jobs for the user 
> evilissimo on jenkins.ovirt.org
> thanks.



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-605) VDSM check merged failing looks infra related)

2016-06-23 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] reassigned OVIRT-605:
---

Assignee: Anton Marchukov  (was: infra)

Anton, please have a look.
It might be related to the reposync bug we've found.
If it uses reposync, let's try to disable it and see if it goes away.
You can talk with [~ybron...@redhat.com], who is familiar with this job.

> VDSM check merged failing looks infra related)
> --
>
> Key: OVIRT-605
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-605
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: Anton Marchukov
>
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/381/
> current session does not belong to lago group.
> @ Deploy oVirt environment:
>   # ovirt-role metadata entry will be soon deprecated, instead you
> should use the vm-provider entry in the domain definiton and set it no one
> of: ovirt-node, ovirt-engine, ovirt-host
>   # Deploy environment:
> * [Thread-2] Deploy VM vdsm_functional_tests_host-fc23:
>   - STDERR
> Failed to synchronize cache for repo 'localsync' from '
> http://192.168.200.1:8585/fc23/': Cannot download repomd.xml: Cannot
> download repodata/repomd.xml: All mirrors were tried, disabling.
> Failed to synchronize cache for repo 'localsync' from '
> http://192.168.200.1:8585/fc23/': Cannot download repomd.xml: Cannot
> download repodata/repomd.xml: All mirrors were tried, disabling.
> You are using pip version 7.1.0, however version 8.1.2 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> Error:  ServiceOperationError: _systemctlReload failed
> Job for multipathd.service invalid.
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.98.4#14)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra