[JIRA] (OVIRT-646) bad job [webadmin: remove tooltip underline from table column headers]

2016-07-20 Thread Greg Sheremeta (oVirt JIRA)
Greg Sheremeta created OVIRT-646:


 Summary: bad job [webadmin: remove tooltip underline from table 
column headers]
 Key: OVIRT-646
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-646
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Greg Sheremeta
Assignee: infra


This job seems to be broken.

http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/638/console
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/638/


"""


[ INFO  ] Checking for an update for Setup...
  An update for the Setup packages ovirt-engine-setup
ovirt-engine-setup-plugin-websocket-proxy was found. Please update
that package by running:
  "yum update ovirt-engine-setup
ovirt-engine-setup-plugin-websocket-proxy"
  and then execute Setup again.
[ ERROR ] Failed to execute stage 'Environment customization': Please
update the Setup packages
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20160721011950-b2qajq.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20160721012010-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
+ echo '## SETUP_FAILED'
## SETUP_FAILED
+ return 1
+ show_error
+ [[ SETUP::INSTALLING_ENGINE == \F\I\N\I\S\H\E\D ]]
+ echo 'FAILED::SETUP::INSTALLING_ENGINE:: Unrecoverable failure, exitting'
FAILED::SETUP::INSTALLING_ENGINE:: Unrecoverable failure, exitting
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -x
echo "shell-scripts/ovirt-engine_upgrade-engine.cleanup.sh"
#
# Parameters:
#
# version
#
#   version to upgrade to
#

"""


-- Forwarded message --
From: Jenkins CI 
Date: Wed, Jul 20, 2016 at 9:27 PM
Subject: Change in ovirt-engine[master]: webadmin: remove tooltip underline
from table column headers
To: Greg Sheremeta , Scott Dickerson <
sdick...@redhat.com>


Jenkins CI has posted comments on this change.

Change subject: webadmin: remove tooltip underline from table column headers
..


Patch Set 3:

Build Failed

http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/638/
: FAILURE

http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/1078/
: SUCCESS

http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/1096/
: SUCCESS

http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-fc24-x86_64/181/
: SUCCESS

http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_merged/268/ :
SUCCESS

--
To view, visit https://gerrit.ovirt.org/61000
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ic1f393ec5f63580f928b8b9eacbe32ad2b04cf10
Gerrit-PatchSet: 3
Gerrit-Project: ovirt-engine
Gerrit-Branch: master
Gerrit-Owner: Scott Dickerson 
Gerrit-Reviewer: Alexander Wels 
Gerrit-Reviewer: Greg Sheremeta 
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Oved Ourfali 
Gerrit-Reviewer: Scott Dickerson 
Gerrit-Reviewer: Vojtech Szocs 
Gerrit-Reviewer: gerrit-hooks 
Gerrit-HasComments: No



-- 
Greg Sheremeta, MBA
Red Hat, Inc.
Sr. Software Engineer
gsher...@redhat.com



--
This message was sent by Atlassian JIRA
(v1000.148.3#15)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-4.0_el7_merged - Build # 638 - Failure!

2016-07-20 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/638/
Build Number: 638
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/61000

-
Changes Since Last Success:
-
Changes for Build #638
[Scott J Dickerson] webadmin: remove tooltip underline from table column headers




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


RHEVM CI Jenkins daily report - 20

2016-07-20 Thread jenkins
Good morning!

Attached is the HTML page with the jenkins status report. You can also see it
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/20//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins


upstream_report.html
Description: Binary data
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Let's Consider Having Three Separated Environments in Infra

2016-07-20 Thread Anton Marchukov
On Wed, Jul 20, 2016 at 9:43 PM, Yedidyah Bar David  wrote:
>
> Not sure it's my business, but whatever:
>

It is. I think it is up to everybody who is interested in making oVirt
infra more reliable and easier to maintain.


> 1. Do you intend also separate data centers? Such that if one (or two) of
> them looses connectivity/power/etc, we are still up?
>

I would like to. But so far we have only one physical data center. The only
mitigation we can apply now is offsite backups and offsite mirrors. We have
some of that in place and are working on improving the rest.


>
> 2. If so, does it mean that recreating it means copying from one of the
> others many GBs of data? And if so, is that also the plan for recovering
> from bad tests?
>

If they are completely removed, then yes, it is. But that should not be a
problem unless new data arrives faster than the old data can be synced. That
is not the case for our infra, so eventually it will catch up.

> 3. If so, it probably means we'll not do that very happily, because
> "undo" will take a lot of time and bandwidth.
>

The good thing about having 3 instances is that you can give one instance
even days to resync its data if needed, while the whole construction stays in
a reliable state. So I am not sure about "happily", but with such a
configuration I would call it fairly worry-free. Also, the only way to get
good at something is, well, to do it. So if it is not done happily yet, we
should make it so.


> 4. If we still want to go that route, perhaps consider having per-site
> backup, which allows syncing from the others the changes done on them
> since X (where X is "death of power/connectivity", "Start of test", etc).
> Some time ago I looked at backup tools, and found out that while there are
> several "field tested" tools, such as bacula and amanda, they are
> considered
> old-fashioned, but there are several different contenders for the future
> "perfect" tool. For an overview of some of them see [1]. For my own uses
> I chose 'bup', which isn't perfect, but seemed good and stable enough.
>

We are considering both onsite and offsite backups. Backups are really a
separate concern, because any "replicating" system will happily replicate
every error you make to all the instances, and a good system will do it
very fast. So you essentially need both.

My proposal is also based on reliability at the service level. Some things,
like "resources.ovirt.org", are quite easy to make reliable, at least for
reads: you just start several instances, and the only problem is that
mutations have to be applied to all of them. There are multiple ways to do
that, but I doubt we can find one solution for all the services we have.
All of them, however, need the underlying infra to be ready: if we store all
copies on a single storage domain and that domain goes down, all copies
obviously go down with it, which is less reliable than keeping the copies
separate.


> 5. This way we still can, perhaps need to, sync over the Internet many
> GBs of data if the local-site backup died too, but if it didn't, and we
> did everything right, we only need to sync diffs, which hopefully be much
> smaller.
>

This is indeed what should happen in a properly designed service, although I
doubt it is possible for every service we use. But if we choose the
per-service approach, it can be decided individually on a per-service basis.

Anton.

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Let's Consider Having Three Separated Environments in Infra

2016-07-20 Thread Yedidyah Bar David
On Wed, Jul 20, 2016 at 7:45 PM, Anton Marchukov  wrote:
> Hello All.
>
> This is a follow up to the meeting minutes. Just to record my thoughts to
> consider during the actual design.
>
> To make it straight. I think we need to target creation of 3 (yes, three)
> independent and completely similar setups with as less shared parts as
> possible.
>
> If we choose to go with reliability on service level than we do need 3
> because:
>
> 1. If we mess up with one environment (e,g, storage will be completely dead
> there) we will have 2 left working that gives us a reliability still because
> one of them can fail. So it will move us out of crunch mode into the regular
> work mode.
>
> 2. All consensus based algorithms generally require at least 2N+1 instances
> unless they utilize some special mode. The lowest is N=1 that is 3 and it
> would make sense to distribute them into different environments.
>
> I know the concern for having even 2 envs was that we will spend more effort
> to maintain them. But I think the opposite is true. Having 3 is actually
> less effort to maintain if we make them similar because of:
>
> 1. We can do gradual canary update, Same as with failure. You can test
> update on 1 instance leaving 2 left running that still provides reliability.
> So upgrade is no longer time constrained and safe.
>
> 2. If environments are similar then once we establish the correct playbook
> for one we can just apply it for second and later for third. So this
> overhead is not tripled in fact and if automated than it is no additional
> effort at all.
>
> 3. We are more open to test and play with one. We can even destroy it
> recreate from scratch, etc. Indirectly this will reduce our effort.
>
> I think the only real problem with it is the initial step when we should
> design an ideal hardware and network layout for that. But once it is done it
> will be easier to go with 3 environments. Also it may be possible to design
> the plan the way that we start with just one and later convert it into
> three.

Not sure it's my business, but whatever:

1. Do you also intend separate data centers, such that if one (or two) of
them loses connectivity/power/etc., we are still up?

2. If so, does it mean that recreating it means copying from one of the
others many GBs of data? And if so, is that also the plan for recovering
from bad tests?

3. If so, it probably means we'll not do that very happily, because
"undo" will take a lot of time and bandwidth.

4. If we still want to go that route, perhaps consider having per-site
backup, which allows syncing from the others the changes done on them
since X (where X is "death of power/connectivity", "Start of test", etc).
Some time ago I looked at backup tools, and found that while there are
several "field tested" tools, such as bacula and amanda, they are considered
old-fashioned, and that there are several different contenders for the future
"perfect" tool. For an overview of some of them see [1]. For my own uses
I chose 'bup', which isn't perfect, but seemed good and stable enough.
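
Just to illustrate, a minimal per-site sketch (assuming /srv/data is the tree
to protect and backup.example.org is a reachable bup remote; both names are
made up):

  # one-time repository initialisation on the client
  bup init
  # index the tree, then save a named snapshot locally
  bup index -ux /srv/data
  bup save -n infra-data /srv/data
  # the same snapshot can also be pushed to a remote bup repository
  bup save -r backup.example.org: -n infra-data /srv/data

After the first full save, subsequent index/save runs only ship the diffs,
which is the point of item 5 below.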

5. This way we still can, perhaps need to, sync over the Internet many
GBs of data if the local-site backup died too, but if it didn't, and we
did everything right, we only need to sync diffs, which hopefully be much
smaller.

[1] 
http://changelog.complete.org/archives/9353-roundup-of-remote-encrypted-deduplicated-backups-in-linux

Best,

>
> Anton.
>
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>



-- 
Didi
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Base Squid Configuration for Repo Mirroring

2016-07-20 Thread Anton Marchukov
Hello All.

I tried to look at some available solutions for transparent (read:
low-maintenance) repository caching, and the best thing I have found is the
one from Ubuntu, available here:

https://wiki.ubuntu.com/SquidDebProxy

Basically it is fully based on Squid, with some auto-configuration and
additional service discovery capabilities. We do not need to use it in full,
but we can use the config available there as a base point to review and
start with:

http://bazaar.launchpad.net/~mvo/+junk/squid-deb-proxy/view/head:/squid-deb-proxy.conf

I also think we do not need ACLs limiting it to specific domains, since we
use several repos and they are added and removed along the way. So we can try
with simple file-type-based ACLs. Then, if we set up our stats collection
wisely, we should be able to assess how it performs and, if any problems are
identified, address them individually.
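
As a concrete base point, a file-type approach could look roughly like this
(my own sketch, not the squid-deb-proxy defaults; the drop-in path, cache
size and patterns are placeholders to tune):

  # sketch only: write a minimal file-type caching config for review
  cat > /etc/squid/repo-cache.conf <<'EOF'
  # cache package payloads and repo metadata regardless of the repo domain
  acl repo_files urlpath_regex -i \.(rpm|drpm|iso|img)$
  acl repo_meta  urlpath_regex -i /repodata/
  cache deny !repo_files !repo_meta
  maximum_object_size 1 GB
  cache_dir ufs /var/spool/squid 50000 16 256
  # keep immutable packages for a long time, revalidate metadata often
  refresh_pattern -i \.(rpm|drpm)$ 129600 100% 129600
  refresh_pattern -i /repodata/ 0 0% 60 refresh-ims
  EOF

The file would then be included from squid.conf and reviewed against the
squid-deb-proxy config linked above.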

Those are my 2 cents.

Anton.

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Building for fc23 fails on yum package install

2016-07-20 Thread Vojtech Szocs


- Original Message -
> From: "Eyal Edri" 
> To: "Vojtech Szocs" , "Evgheni Dereveanchin" 
> 
> Cc: "infra" 
> Sent: Wednesday, July 20, 2016 8:09:07 PM
> Subject: Re: Building for fc23 fails on yum package install
> 
> I see its working now, maybe its the proxy issue again,
> Evgheni - can you check if the failure is related to the proxy?

It's solved, sorry, this was a problem on our end (Dashboard project).

We fixed it by updating *.repos: https://gerrit.ovirt.org/#/c/61149/

Thanks!

Vojtech

> 
> On Wed, Jul 20, 2016 at 7:37 PM, Vojtech Szocs  wrote:
> 
> > Hi,
> >
> > we're trying to build Dashboard 1.0.0-1 and for fc23 the build fails:
> >
> > 16:26:03 INFO: installing package(s): ovirt-engine-nodejs
> > ovirt-engine-nodejs-modules
> > 16:26:03 ERROR: Command failed. See logs for output.
> > 16:26:03  # /usr/bin/yum-deprecated --installroot
> > /var/lib/mock/fedora-23-x86_64-84590ba0ae0d50bf8fb4605dac9e1a22-7835/root/
> > --releasever 23 install ovirt-engine-nodejs ovirt-engine-nodejs-modules
> > --setopt=tsflags=nocontexts
> > 16:26:03 Install packages took 2 seconds
> >
> >
> > http://jenkins.ovirt.org/job/ovirt-engine-dashboard_4.0_check-patch-fc23-x86_64/10/console
> >
> > Is this an infra issue? I'll try to retrigger the build in the meantime.
> >
> > Thanks,
> > Vojtech
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >
> 
> 
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Building for fc23 fails on yum package install

2016-07-20 Thread Eyal Edri
I see it's working now; maybe it's the proxy issue again.
Evgheni, can you check if the failure is related to the proxy?

On Wed, Jul 20, 2016 at 7:37 PM, Vojtech Szocs  wrote:

> Hi,
>
> we're trying to build Dashboard 1.0.0-1 and for fc23 the build fails:
>
> 16:26:03 INFO: installing package(s): ovirt-engine-nodejs
> ovirt-engine-nodejs-modules
> 16:26:03 ERROR: Command failed. See logs for output.
> 16:26:03  # /usr/bin/yum-deprecated --installroot
> /var/lib/mock/fedora-23-x86_64-84590ba0ae0d50bf8fb4605dac9e1a22-7835/root/
> --releasever 23 install ovirt-engine-nodejs ovirt-engine-nodejs-modules
> --setopt=tsflags=nocontexts
> 16:26:03 Install packages took 2 seconds
>
>
> http://jenkins.ovirt.org/job/ovirt-engine-dashboard_4.0_check-patch-fc23-x86_64/10/console
>
> Is this an infra issue? I'll try to retrigger the build in the meantime.
>
> Thanks,
> Vojtech
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Let's Consider Having Three Separated Environments in Infra

2016-07-20 Thread Anton Marchukov
Hello All.

This is a follow up to the meeting minutes. Just to record my thoughts to
consider during the actual design.

To put it plainly: I think we need to target the creation of 3 (yes, three)
independent and completely similar setups, with as few shared parts as
possible.

If we choose to go with reliability at the service level, then we do need 3,
because:

1. If we mess up one environment (e.g. its storage dies completely), we still
have 2 left working, which still gives us reliability, because one of those
can also fail. So it moves us out of crunch mode into regular work mode.

2. Consensus-based algorithms generally require at least 2N+1 instances to
tolerate N failures, unless they use some special mode. The lowest useful
value is N=1, which means 3 instances, and it would make sense to distribute
them across different environments.

I know the concern about having even 2 environments was that we would spend
more effort maintaining them. But I think the opposite is true: having 3 is
actually less effort to maintain, if we make them similar, because:

1. We can do gradual canary updates, same as with failures: you can test an
update on 1 instance while leaving 2 running, which still provides
reliability. So an upgrade is no longer time-constrained, and it is safe.

2. If the environments are similar, then once we establish the correct
playbook for one, we can simply apply it to the second and later to the
third. So the overhead is in fact not tripled, and if automated there is no
additional effort at all.

3. We are freer to test and experiment with one. We can even destroy it and
recreate it from scratch. Indirectly this will reduce our effort.

I think the only real problem is the initial step, where we need to design an
ideal hardware and network layout for this. But once that is done, it will be
easier to go with 3 environments. It may also be possible to design the plan
so that we start with just one environment and later convert it into three.

Anton.

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Building for fc23 fails on yum package install

2016-07-20 Thread Vojtech Szocs
Hi,

we're trying to build Dashboard 1.0.0-1 and for fc23 the build fails:

16:26:03 INFO: installing package(s): ovirt-engine-nodejs 
ovirt-engine-nodejs-modules
16:26:03 ERROR: Command failed. See logs for output.
16:26:03  # /usr/bin/yum-deprecated --installroot 
/var/lib/mock/fedora-23-x86_64-84590ba0ae0d50bf8fb4605dac9e1a22-7835/root/ 
--releasever 23 install ovirt-engine-nodejs ovirt-engine-nodejs-modules 
--setopt=tsflags=nocontexts
16:26:03 Install packages took 2 seconds

http://jenkins.ovirt.org/job/ovirt-engine-dashboard_4.0_check-patch-fc23-x86_64/10/console

Is this an infra issue? I'll try to retrigger the build in the meantime.

Thanks,
Vojtech
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


A patch with wrong target milestone was not blocked in 3.6 stable branch

2016-07-20 Thread Tal Nisan
This patch in 3.6 branch contains a 4.0.2 bug in the Bug-Url:

https://gerrit.ovirt.org/#/c/60920

For some reason the Gerrit hooks approved it and did not block it
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : ovirt_4.0_he-system-tests #39

2016-07-20 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-4.0_el7_created - Build # 51 - Failure!

2016-07-20 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_created/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_created/51/
Build Number: 51
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/60967

-
Changes Since Last Success:
-


-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_created - Build # 51 - Failure!

2016-07-20 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_created/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_created/51/
Build Number: 51
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/60967

-
Changes Since Last Success:
-


-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-4.0_el7_created - Build # 27 - Failure!

2016-07-20 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_created/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_created/27/
Build Number: 27
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/45649

-
Changes Since Last Success:
-
Changes for Build #27
[Martin Mucha] core: fluent builder for ValidationResult




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_created - Build # 27 - Failure!

2016-07-20 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_created/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_created/27/
Build Number: 27
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/45649

-
Changes Since Last Success:
-
Changes for Build #27
[Martin Mucha] core: fluent builder for ValidationResult




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Automation moving bug to modified when it is not

2016-07-20 Thread Eyal Edri
On Tue, Jul 19, 2016 at 4:33 PM, Michal Skrivanek 
wrote:

>
> On 19 Jul 2016, at 14:25, Eyal Edri  wrote:
>
>
>
> On Tue, Jul 19, 2016 at 1:37 PM, Michal Skrivanek 
> wrote:
>
>>
>> On 19 Jul 2016, at 12:15, Eyal Edri  wrote:
>>
>> This happened because you used the same bug for master and 4.0.
>>
>>
>> right, but that’s the regular state of things because upstream there is
>> no separate bug for master vs current version
>>
>> The Gerrit hook doesn't verify status between major versions, only inside
>> a single version (for e.g, it would not move to modified if you needed to
>> backport to 4.0.1 and the target milestone was 4.0.1).
>> I'm not sure how we can tackle this, because master has no meaning in
>> bugzilla, it doesn't correlate to a version.
>>
>> One think I can think of, is NOT to move bugs to MODIFIED is a patch was
>> merged on master branch... , will that help?
>>
>>
>> I’m not sure if it’s better because before 4.1 is branched the master
>> development is for 4.1 bugs.
>> It would make sense to differentiate based on whether a branch for that
>> TM version exists or not, so in your above example since the bug has TM
>> 4.0.x and there is a 4.0 branch it would wait for a backport
>>
>
> I can't compare it to 4.0 because master is a moving target, so this hook
> will misbehave once master change versions, I need a solid logic that will
> work all the time for bugs on master.
> either not move them to MODIFIED if the bug is on target milestone !=
> master (which is probably 100% of the times) or some regex we can use... I
> don't have any other creative ideas…
>
>
> I guess if we have TM as x.y.z and the projects have x.y branch we can
> check for that, right? if the branch is not there then master is the final
> branch; if TM x.y.z matches some ovirt-x.y branch the backport is needed.
>

We already do that; that's why it works if you have a 4.0.1 branch and merge
a patch to ovirt-engine. But I think master was left out of this logic, since
it doesn't have an -x.y.z suffix.
I think this [1] should solve it; please review.

[1] https://gerrit.ovirt.org/#/c/61073/1
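
Roughly, the check I have in mind looks like this (a bash sketch of the rule
only, not the actual hook code; the TM value and the branch-naming assumption
are mine):

  #!/bin/bash
  # tm: bug target milestone, e.g. "ovirt-4.0.2"
  # branch: the branch the patch was merged to, e.g. "master"
  tm="$1"
  branch="$2"
  # derive the stable branch name from the x.y part of the TM, e.g. ovirt-engine-4.0
  xy=$(grep -oE '[0-9]+\.[0-9]+' <<< "$tm" | head -n1)
  stable="ovirt-engine-${xy}"
  if [[ "$branch" == "master" ]] && git ls-remote --heads origin "$stable" | grep -q .; then
      echo "stable branch $stable exists, a backport is still needed: not moving bug to MODIFIED"
  else
      echo "no matching stable branch: the merge is final, moving bug to MODIFIED"
  fi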


>
> You can look at the code if you want at [1] and see if you have an idea.
>
> [1]
> https://gerrit.ovirt.org/gitweb?p=gerrit-admin.git;a=blob;f=hooks/custom_hooks/change-merged.set_MODIFIED;h=678806dc35a372dadab5a5a392d25409db5c8275;hb=refs/heads/master
>
>
>>
>> Thanks,
>> michal
>>
>>
>>
>> On Tue, Jul 19, 2016 at 8:07 AM, Michal Skrivanek 
>> wrote:
>>
>>> Example in bug https://bugzilla.redhat.com/show_bug.cgi?id=1357440
>>> It doesn't take into account branches
>>>
>>> Thanks,
>>> michal
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHEV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Build failed in Jenkins: ovirt_4.0_he-system-tests #38

2016-07-20 Thread jenkins
See 

--
[...truncated 82 lines...]
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1113)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
POST BUILD TASK : FAILURE
END OF POST BUILD TASK : 0
ESCALATE FAILED POST BUILD TASK TO JOB STATUS
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo "shell-scripts/mock_cleanup.sh"

shopt -s nullglob


WORKSPACE="$PWD"

# Make clear this is the cleanup, helps reading the jenkins logs
cat : error=2, No 
such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at hudson.Proc$LocalProc.(Proc.java:244)
at hudson.Proc$LocalProc.(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:815)
at hudson.Launcher$ProcStarter.start(Launcher.java:381)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1148)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1113)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ovirt-srv26.phx.ovirt.org(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at hudson.Launcher$RemoteLauncher.launch(Launcher.java:928)
at hudson.Launcher$ProcStarter.start(Launcher.java:381)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:95)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:64)
at 
hudson.plugins.postbuildtask.PostbuildTask.perform(PostbuildTask.java:123)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:723)
at hudson.model.Build$BuildExecution.post2(Build.java:185)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:668)
at hudson.model.Run.execute(Run.java:1763)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
at hudson.Proc$LocalProc.(Proc.java:244)
at hudson.Proc$LocalProc.(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:815)
at hudson.Launcher$ProcStarter.start(Launcher.java:381)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1148)
at hudson.Launcher$RemoteLaunchCallable.call(Launcher.java:1113)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
POST BUILD TASK : FAILURE
END OF POST BUILD TASK : 1
Recording test results
ERROR: 

Re: Vdsm 4.0 fc23 build fails with "nothing provides ovirt-imageio-common"

2016-07-20 Thread Eyal Edri
It might be due to the proxy issues Sandro already reported.
I see the recent jobs are OK, but we'll continue to investigate if we need
to fix something in the proxy.

On Tue, Jul 19, 2016 at 7:11 PM, Nir Soffer  wrote:

> More info - this is a random failure - other patches in same topic are
> fine.
>
> So it seems that some slaves have wrong repositories, maybe cache issue?
>
> On Tue, Jul 19, 2016 at 7:09 PM, Nir Soffer  wrote:
> > Hi all,
> >
> > Seems that builds on 4.0 are failing now with:
> > 15:19:02 Error: nothing provides ovirt-imageio-common needed by
> > vdsm-4.18.6-13.git3aaee18.fc23.x86_64.
> >
> > See
> http://jenkins.ovirt.org/job/vdsm_4.0_check-patch-fc23-x86_64/27/console
> >
> > ovirt-imageio-* packages are built in jenkins, and provided in ovirt
> > repositories.
> >
> > Can someone take a look?
> >
> > Nir
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [VDSM] vdsm_master_verify-error-codes_created running on master - why?

2016-07-20 Thread Dan Kenigsberg
On Wed, Jul 20, 2016 at 01:51:51AM +0300, Nir Soffer wrote:
> On Tue, Jul 19, 2016 at 6:20 PM, Eyal Edri  wrote:
> > And also, feel free to move it to check-patch.sh code as well.
> 
> Including this in vdsm seem like the best option.
> 
> Can you point me to the source of this job?
> 
> >
> > On Tue, Jul 19, 2016 at 6:19 PM, Eyal Edri  wrote:
> >>
> >> This isn't new, it was running for a few years, just on old jenkins,
> >> Maybe you just noticed it.
> >>
> >> Allon & Dan are familiar with that job and it already found in the past
> >> real issues.
> >> If you want to remove/disable it, I have no problem - just make sure
> >> you're synced with all VDSM people that requested this job in the first
> >> place.
> >>
> >> On Tue, Jul 19, 2016 at 6:02 PM, Nir Soffer  wrote:
> >>>
> >>> Hi all,
> >>>
> >>> Since yesterday, vdsm_master_verify-error-codes_created job is running
> >>> on master.
> >>>
> >>> I guess that this was a unintended change in jenkins - please revert this
> >>> change.
> >>>
> >>> If someone want to add a job for vdsm master, it must be approved by
> >>> vdsm maintainers first.
> >>>
> >>> The best would be to run everything from the automation scripts, so
> >>> vdsm maintainers have full control on the way patches are checked.

A bit of a background: this job was created many many years ago, in
order to compare the set of error codes in Vdsm to that of Engine. The
motivation was to catch typos or other mismatches, where Vdsm is sending
one value and Engine is expecting another, or Vdsm dropping something
that Engine depends on.

HOWEVER, I'm not sure at all that the job's code is up to date. I wonder
how it could have ever survived the big changes of
https://gerrit.ovirt.org/#/c/48871/ and its bash code at
http://jenkins.ovirt.org/job/vdsm_master_verify-error-codes_merged/configure
does not reassure me
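
For reference, the comparison itself is small enough to live in vdsm's
check-patch.sh; something like the sketch below (the paths and extraction
patterns are placeholders, not the real job code):

  # sketch only: extract the numeric error codes from both trees and diff the sets
  vdsm_src="lib/vdsm/define.py"        # placeholder path in the vdsm tree
  engine_src="EngineError.java"        # placeholder path in the engine tree
  grep -oE "'code': *[0-9]+" "$vdsm_src" | grep -oE '[0-9]+' | sort -un > vdsm-codes.txt
  grep -oE '\([0-9]+\)' "$engine_src"    | grep -oE '[0-9]+' | sort -un > engine-codes.txt
  # anything present on only one side is a candidate typo or dropped error code
  diff -u vdsm-codes.txt engine-codes.txt && echo "error codes match"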
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [VDSM] vdsm_master_verify-error-codes_created running on master - why?

2016-07-20 Thread Eyal Edri
It wasn't yamlized until recently, so you can see the open patch for it at
[1].
I guess you can check out that patch in the jenkins repo and take the script
from there.
If you're moving it to the vdsm repo, I'll abandon this patch; please let me
know.

[1] https://gerrit.ovirt.org/#/c/60630/2

On Wed, Jul 20, 2016 at 1:51 AM, Nir Soffer  wrote:

> On Tue, Jul 19, 2016 at 6:20 PM, Eyal Edri  wrote:
> > And also, feel free to move it to check-patch.sh code as well.
>
> Including this in vdsm seem like the best option.
>
> Can you point me to the source of this job?
>
> >
> > On Tue, Jul 19, 2016 at 6:19 PM, Eyal Edri  wrote:
> >>
> >> This isn't new, it was running for a few years, just on old jenkins,
> >> Maybe you just noticed it.
> >>
> >> Allon & Dan are familiar with that job and it already found in the past
> >> real issues.
> >> If you want to remove/disable it, I have no problem - just make sure
> >> you're synced with all VDSM people that requested this job in the first
> >> place.
> >>
> >> On Tue, Jul 19, 2016 at 6:02 PM, Nir Soffer  wrote:
> >>>
> >>> Hi all,
> >>>
> >>> Since yesterday, vdsm_master_verify-error-codes_created job is running
> >>> on master.
> >>>
> >>> I guess that this was a unintended change in jenkins - please revert
> this
> >>> change.
> >>>
> >>> If someone want to add a job for vdsm master, it must be approved by
> >>> vdsm maintainers first.
> >>>
> >>> The best would be to run everything from the automation scripts, so
> >>> vdsm maintainers have full control on the way patches are checked.
> >>>
> >>> Thanks,
> >>> Nir
> >>> ___
> >>> Infra mailing list
> >>> Infra@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/infra
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Eyal Edri
> >> Associate Manager
> >> RHEV DevOps
> >> EMEA ENG Virtualization R&D
> >> Red Hat Israel
> >>
> >> phone: +972-9-7692018
> >> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >
> >
> >
> >
> > --
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> > EMEA ENG Virtualization R&D
> > Red Hat Israel
> >
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-641) Build oVirt 4.0.2 RC1

2016-07-20 Thread sbonazzo (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sbonazzo updated OVIRT-641:
---
Assignee: sbonazzo  (was: infra)
  Status: In Progress  (was: To Do)

> Build oVirt 4.0.2 RC1
> -
>
> Key: OVIRT-641
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-641
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Repositories Mgmt
>Reporter: sbonazzo
>Assignee: sbonazzo
>Priority: Highest
>




--
This message was sent by Atlassian JIRA
(v1000.148.3#15)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-645) proxy server not reliable

2016-07-20 Thread sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-645:
--

 Summary: proxy server not reliable
 Key: OVIRT-645
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-645
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra


job:
http://jenkins.ovirt.org/job/ovirt-node-ng_master_build-artifacts-fc24-x86_64/8/


DEBUG util.py:421:
http://proxy.phx.ovirt.org:5000/centos-updates/7/x86_64/repodata/repomd.xml:
[Errno 14] HTTP Error 500 - Internal Server Error

It's happening quite often in several jobs. Can we make the proxy more
reliable?
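
For what it's worth, a quick loop like this (my own sketch, using the exact
URL from the log above) shows how often the proxy answers 500 versus 200:

  # hit the failing repodata URL a few times and count the HTTP status codes
  url=http://proxy.phx.ovirt.org:5000/centos-updates/7/x86_64/repodata/repomd.xml
  for i in $(seq 1 20); do
      curl -s -o /dev/null -w '%{http_code}\n' "$url"
      sleep 1
  done | sort | uniq -c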


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.148.3#15)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-644) failed to write on /mnt/ramdisk/jenkins/workspace

2016-07-20 Thread sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-644:
--

 Summary: failed to write on /mnt/ramdisk/jenkins/workspace
 Key: OVIRT-644
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-644
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra


From job:
http://jenkins.ovirt.org/job/ovirt-engine_4.0_build-artifacts-el7-x86_64/236/console

00:00:00.565 Building remotely on ovirt-srv26.phx.ovirt.org (physical integ-tests) in workspace /mnt/ramdisk/jenkins/workspace/ovirt-engine_4.0_build-artifacts-el7-x86_64
00:00:00.585  > git rev-parse --is-inside-work-tree # timeout=10
00:00:00.590 Fetching changes from the remote Git repository
00:00:00.593  > git config remote.origin.url git://gerrit.ovirt.org/ovirt-engine.git # timeout=10
00:00:00.598 ERROR: Error fetching remote repo 'origin'
00:00:00.598 hudson.plugins.git.GitException: Failed to fetch from git://gerrit.ovirt.org/ovirt-engine.git
00:00:00.599     at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
00:00:00.599     at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
00:00:00.599     at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
00:00:00.599     at org.jenkinsci.plugins.multiplescms.MultiSCM.checkout(MultiSCM.java:129)
00:00:00.599     at hudson.scm.SCM.checkout(SCM.java:485)
00:00:00.599     at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
00:00:00.600     at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
00:00:00.600     at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
00:00:00.600     at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
00:00:00.600     at hudson.model.Run.execute(Run.java:1738)
00:00:00.600     at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
00:00:00.600     at hudson.model.ResourceController.execute(ResourceController.java:98)
00:00:00.600     at hudson.model.Executor.run(Executor.java:410)
00:00:00.601 Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url git://gerrit.ovirt.org/ovirt-engine.git" returned status code 4:
00:00:00.601 stdout:
00:00:00.601 stderr: error: failed to write new configuration file /mnt/ramdisk/jenkins/workspace/ovirt-engine_4.0_build-artifacts-el7-x86_64/ovirt-engine/.git/config.lock
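
A quick look on the slave should tell whether the ramdisk simply filled up or
a stale lock is left behind (a sketch, using the mount point and path from
the log above):

  # check free space and free inodes on the ramdisk holding the Jenkins workspaces
  df -h /mnt/ramdisk
  df -i /mnt/ramdisk
  # and whether a lock file from a previous run is still around
  ls -l /mnt/ramdisk/jenkins/workspace/ovirt-engine_4.0_build-artifacts-el7-x86_64/ovirt-engine/.git/config.lock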


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.148.3#15)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra