Jenkins build is back to stable : ovirt_master_publish-rpms_nightly #50

2016-05-24 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread Dave Neary
I do recall a thread about this now that Karsten mentions it... let me
go digging. IIRC, there was an issue around the AAAA entry... I have
found some emails from 2013 and a related ServiceNow ticket which,
apparently, has not survived the 3-year interval.

Is the following context at all useful?

Thanks,
Dave.

Way back when, this was the issue:
> Incident INC0093361: Neil Miao is requesting the following information to 
> assist in completing your request:
> 2013-11-10 21:37:49 EST - Neil Miao   Comments
> Hi there,
>  
> Thanks Mike for going the extra mile to dig it out. The existing SPF record 
> does look bad.
>  
> Since lists.ovirt.org is actually a CNAME of linode01.ovirt.org.
>  
> $ dig lists.ovirt.org
> ...
> ;; ANSWER SECTION:
> lists.ovirt.org.    300  IN  CNAME  linode01.ovirt.org.
> linode01.ovirt.org. 300  IN  A      173.255.252.138
>  
> adding lists.ovirt.org to the SPF is a very obvious choice. :)
>  
> - IN TXT "v=spf1 a:linode01.ovirt.org ~all"
> + IN TXT "v=spf1 a:linode01.ovirt.org a:lists.ovirt.org ~all"
>  
> The change is pushed to the corp-dns. Let me know how it goes.
>  
> Cheers
> Neil
> 2013-10-31 16:55:49 EDT - Dave Neary  Comments
> Mail from the oVirt users mailing list is being marked as spam in gmail, and 
> is not getting through to users. A colleague, Mike McLean, looked into the 
> issue, and suspects it is related to our DNS config:
>  
> See Mike's email to me below. Is this something IT services can help fix?
>  
> Thanks,
> Dave.
>  
> Mike wrote:
> I don't think it is users marking as spam, despite the google warning
> bar. The warning in the header suggests it is a networking problem.
>  
> There is nothing in the headers about the ip address (which is ipv6)
> being on a blacklist.
>  
>> On 10/31/2013 03:20 PM, Mike McLean wrote:
>>> Since I subscribed to the users list last week, I've had exactly zero
>>> messages from it in my gmail inbox. They're all in the spam folder.
>>>
>>> Each message shows a warning at the top "Be careful with this message.
>>> Many people marked similar messages as spam."
>>>
>>> I'm attaching a header example. One notable line is:
>>>
>>> Authentication-Results: mx.google.com;
>>>spf=softfail (google.com: domain of transitioning
>>> users-boun...@ovirt.org does not designate
>>> 2600:3c01::f03c:91ff:fe93:4b0d as permitted sender)
>>> smtp.mail=users-boun...@ovirt.org
>  
> ^ This is the header warning I referred to
>  
>>> It looks like ovirt.org only has an mx entry for linode01.ovirt.org. The
>>> host actually sending to google (lists.ovirt.org) doesn't show up in an
>>> mx entry. I'd push this up to IT.
>  
> It appears that google doesn't trust this mail because ovirt.org
> explicitly says not to.
>  
> From the headers, the mail is traversing from
> (sending user) -> linode01.ovirt.org -> lists.ovirt.org -> (google)
>  
> The SPF record for ovirt.org is: "v=spf1 a:linode01.ovirt.org ~all"
> (found via dig -t TXT ovirt.org)
>  
> See: http://en.wikipedia.org/wiki/Sender_Policy_Framework
>  
> This policy says that linode01.ovirt.org is allowed to send and all
> others (e.g. lists.ovirt.org) should "softfail".
>  
> Who manages the ovirt.org servers? IT?
>  
> Someone in IT will have more expertise in this than me, but I suspect
> the answer is one of:
> 1) change the spf record for ovirt.org to allow lists.ovirt.org
> 2) reconfigure lists.ovirt.org to route its mail through linode01.ovirt.org
>  
>  
> State: Pending Customer
> Submitted Date: 2013-10-31 16:55:49 EDT
> Priority: 4 - Low
> Description: oVirt list email is being marked as spam by gmail
>  
> To update your request and notify the person assigned to your request, simply 
> reply to this email communication.
>  
> You can view the status of your incident by selecting "Incidents" from the 
> left navigation menu: LINK
>  
> Ref:MSG1353267
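
For anyone retracing this now, here is a minimal way to re-check the records
discussed above with dig (a sketch only; the expected values are the ones
quoted in this thread, not a fresh lookup):

  # Current SPF policy published for the domain
  dig +short -t TXT ovirt.org
  # expected, per the ticket above: "v=spf1 a:linode01.ovirt.org a:lists.ovirt.org ~all"

  # Does the list host (or the box behind it) publish an IPv6 (AAAA) record at all?
  dig +short -t AAAA lists.ovirt.org
  dig +short -t AAAA linode01.ovirt.org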


On 05/24/2016 03:24 PM, Karsten Wade wrote:
> On 05/24/2016 01:32 AM, David Caro wrote:
>> Maybe it's old enough so Quaid was involved back then?
> 
> I don't recall for sure why IPv6 would be turned off, but iirc we had
> problems with SPF for a few years for gmail.com users, meaning it
> affected the end-users mailing lists the most.
> 
> Is it possible SPF was turned off for IPv4 & IPv6, then the problem
> with SPF and GMail was fixed, and it was turned back on but only for
> IPv4?
> 
> How about experimenting and seeing what happens (SCIENCE!), maybe with a
> warning to the two main lists (devel, users) in case anything breaks?
> 
> Best,
> 
> - Karsten
> 

-- 
Dave Neary - NFV/SDN Community Strategy
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +1-978-399-2182 / Cell: +1-978-799-3338
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Sandro Bonazzola
On 24 May 2016 at 17:57, "Fabian Deutsch"  wrote:
>
> Hey,
>
> $subj says it all.
>
> Affected jobs are:
> http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
>
> I.e. 3.6 - before: ~46min, now 1:23hrs
>
> In master it's even worse: >1:30hrs
>
> Can someone help to identify the reason?

I have no numbers, but I have the feeling that all jobs have been getting
slower for the last couple of weeks. The yum install phase takes ages. I
thought it was some temporary storage I/O peak, but it looks like it's not
temporary.

>
> - fabian
>
> --
> Fabian Deutsch 
> RHEV Hypervisor
> Red Hat
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread Karsten Wade
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/24/2016 01:32 AM, David Caro wrote:
> Maybe it's old enough so Quaid was involved back then?

I don't recall for sure why IPv6 would be turned off, but iirc we had
problems with SPF for a few years for gmail.com users, meaning it
affected the end-users mailing lists the most.

Is it possible SPF was turned off for IPv4 & IPv6, then the problem
with SPF and GMail was fixed, and it was turned back on but only for
IPv4?

How about experimenting and seeing what happens (SCIENCE!), maybe with a
warning to the two main lists (devel, users) in case anything breaks?

Best,

- - Karsten
- -- 
Karsten Wade
Community Infra & Platform (Mgr)
Open Source and Standards, @redhatopen
@quaid gpg: AD0E0C41
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEARECAAYFAldEqmQACgkQ2ZIOBq0ODEF/1gCdGlAbok+hxemOK+WwXvFZ3p9/
AgAAn0FTmmYDdYiwocVO934JJvkav0Ui
=tXyn
-END PGP SIGNATURE-
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Fabian Deutsch
On Tue, May 24, 2016 at 8:49 PM, Eyal Edri  wrote:
> btw, feel free to increase history for node builds now, we have the space
> for it.
> I wouldn't go crazy with keeping history of artifacts, but for build info
> with console output there is no problem keeping a month of history.

~2GB * 3 * 30 == 180GB of disk space, ok?

> I'm adding Barak as this weekly infra owner to help debug this tomorrow if
> needed.
> Also, Evgheni is working on network configuration, might be relevant as
> well.

Actually, one thought:

I think it might make sense to change the standard job template to
support nesting.
Currently I need to create /dev/kvm to get at least that acceleration
(if the app inside mock is speaking to /dev/kvm directly).
Also, for libvirt, a mount is needed.

My take would be: change the standard CI job template so that every
job can use libvirt + guestfish (w/ acceleration) right away, without
needing these hacks.

If that sounds viable, then I can open a ticket.
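
(The /dev/kvm hack mentioned above is presumably something along these lines -
a sketch of the manual workaround, not the proposed template change; 10,232 is
the standard char-device number for kvm:)

  # inside the mock chroot, as root: create the kvm device node so the
  # tooling can use hardware acceleration
  mknod /dev/kvm c 10 232
  chmod 0666 /dev/kvm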

- fabian

> On Tue, May 24, 2016 at 9:30 PM, Fabian Deutsch  wrote:
>>
>> I don't know if it was quicker on different slaves, because I can not
>> look at the history.
>>
>> There were no Node-side changes; maybe the package set in master got
>> larger - we'll see that on the first regular build.
>>
>> But: I enabled kvm in mock, and this seems to help. The first job post
>> this change looks good.
>>
>> My assumption is that previous builds were running on a beefier slave.
>>
>> - fabian
>>
>> On Tue, May 24, 2016 at 6:50 PM, Eyal Edri  wrote:
>> > Is it running on the same slave as before? Can we see a changelog of
>> > things changed from the last time it took less time?
>> >
>> > On May 24, 2016 7:06 PM, "David Caro"  wrote:
>> >>
>> >> On 05/24 18:03, David Caro wrote:
>> >> > On 05/24 17:57, Fabian Deutsch wrote:
>> >> > > Hey,
>> >> > >
>> >> > > $subj says it all.
>> >> > >
>> >> > > Affected jobs are:
>> >> > > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
>> >> > >
>> >> > > I.e. 3.6 - before: ~46min, now 1:23hrs
>> >> > >
>> >> > > In master it's even worse: >1:30hrs
>> >> > >
>> >> > > Can someone help to identify the reason?
>> >> >
>> >> >
>> >> > I see that this is where there's a big jump in time:
>> >> >
>> >> > 06:39:38 Domain installation still in progress. You can reconnect to
>> >> > 06:39:38 the console to complete the installation process.
>> >> > 07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
>> >> >
>> >> > So it looks as if the code that checks if the domain is shut down is
>> >> > not
>> >> > working properly, or maybe the virt-install is taking very very long
>> >> > to
>> >> > work.
>> >>
>> >>
>> >> It seems that the virt-install log is not being archived, and the
>> >> workdir has already been cleaned up, so I can't check the logfile:
>> >>
>> >>
>> >>
>> >> /home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log
>> >>
>> >>
>> >> Maybe you can archive it too on the next run to debug
>> >>
>> >> >
>> >> > >
>> >> > > - fabian
>> >> > >
>> >> > > --
>> >> > > Fabian Deutsch 
>> >> > > RHEV Hypervisor
>> >> > > Red Hat
>> >> > > ___
>> >> > > Infra mailing list
>> >> > > Infra@ovirt.org
>> >> > > http://lists.ovirt.org/mailman/listinfo/infra
>> >> >
>> >> > --
>> >> > David Caro
>> >> >
>> >> > Red Hat S.L.
>> >> > Continuous Integration Engineer - EMEA ENG Virtualization R
>> >> >
>> >> > Tel.: +420 532 294 605
>> >> > Email: dc...@redhat.com
>> >> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> >> > Web: www.redhat.com
>> >> > RHT Global #: 82-62605
>> >>
>> >>
>> >>
>> >> --
>> >> David Caro
>> >>
>> >> Red Hat S.L.
>> >> Continuous Integration Engineer - EMEA ENG Virtualization R
>> >>
>> >> Tel.: +420 532 294 605
>> >> Email: dc...@redhat.com
>> >> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> >> Web: www.redhat.com
>> >> RHT Global #: 82-62605
>> >>
>> >> ___
>> >> Infra mailing list
>> >> Infra@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/infra
>> >>
>> >
>>
>>
>>
>> --
>> Fabian Deutsch 
>> RHEV Hypervisor
>> Red Hat
>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)



-- 
Fabian Deutsch 
RHEV Hypervisor
Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Eyal Edri
btw, feel free to increase history for node builds now, we have the space
for it.
I wouldn't go crazy with keeping history of artifacts, but for build info
with console output there is no problem keeping a month of history.

I'm adding Barak as this weekly infra owner to help debug this tomorrow if
needed.
Also, Evgheni is working on network configuration, might be relevant as
well.

On Tue, May 24, 2016 at 9:30 PM, Fabian Deutsch  wrote:

> I don't know if it was quicker on different slaves, because I can not
> look at the history.
>
> There were no Node-side changes; maybe the package set in master got
> larger - we'll see that on the first regular build.
>
> But: I enabled kvm in mock, and this seems to help. The first job post
> this change looks good.
>
> My assumption is that previous builds were running on a beefier slave.
>
> - fabian
>
> On Tue, May 24, 2016 at 6:50 PM, Eyal Edri  wrote:
> > Is it running on the same slave as before? Can we see a changelog of
> > things changed from the last time it took less time?
> >
> > On May 24, 2016 7:06 PM, "David Caro"  wrote:
> >>
> >> On 05/24 18:03, David Caro wrote:
> >> > On 05/24 17:57, Fabian Deutsch wrote:
> >> > > Hey,
> >> > >
> >> > > $subj says it all.
> >> > >
> >> > > Affected jobs are:
> >> > > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> >> > >
> >> > > I.e. 3.6 - before: ~46min, now 1:23hrs
> >> > >
> >> > > In master it's even worse: >1:30hrs
> >> > >
> >> > > Can someone help to identify the reason?
> >> >
> >> >
> >> > I see that this is where there's a big jump in time:
> >> >
> >> > 06:39:38 Domain installation still in progress. You can reconnect to
> >> > 06:39:38 the console to complete the installation process.
> >> > 07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
> >> >
> >> > So it looks as if the code that checks if the domain is shut down is
> not
> >> > working properly, or maybe the virt-install is taking very very long
> to
> >> > work.
> >>
> >>
> >> It seems that the virt-install log is not being archived, and the
> >> workdir has already been cleaned up, so I can't check the logfile:
> >>
> >>
> >>
> /home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log
> >>
> >>
> >> Maybe you can archive it too on the next run to debug
> >>
> >> >
> >> > >
> >> > > - fabian
> >> > >
> >> > > --
> >> > > Fabian Deutsch 
> >> > > RHEV Hypervisor
> >> > > Red Hat
> >> > > ___
> >> > > Infra mailing list
> >> > > Infra@ovirt.org
> >> > > http://lists.ovirt.org/mailman/listinfo/infra
> >> >
> >> > --
> >> > David Caro
> >> >
> >> > Red Hat S.L.
> >> > Continuous Integration Engineer - EMEA ENG Virtualization R
> >> >
> >> > Tel.: +420 532 294 605
> >> > Email: dc...@redhat.com
> >> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> >> > Web: www.redhat.com
> >> > RHT Global #: 82-62605
> >>
> >>
> >>
> >> --
> >> David Caro
> >>
> >> Red Hat S.L.
> >> Continuous Integration Engineer - EMEA ENG Virtualization R
> >>
> >> Tel.: +420 532 294 605
> >> Email: dc...@redhat.com
> >> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> >> Web: www.redhat.com
> >> RHT Global #: 82-62605
> >>
> >> ___
> >> Infra mailing list
> >> Infra@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/infra
> >>
> >
>
>
>
> --
> Fabian Deutsch 
> RHEV Hypervisor
> Red Hat
>



-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Fabian Deutsch
I don't know if it was quicker on different slaves, because I can not
look at the history.

There were no Node-side changes; maybe the package set in master got
larger - we'll see that on the first regular build.

But: I enabled kvm in mock, and this seems to help. The first job post
this change looks good.

My assumption is that previous builds were running on a beefier slave.

- fabian

On Tue, May 24, 2016 at 6:50 PM, Eyal Edri  wrote:
> Is it running on the same slave as before? Can we see a changelog of things
> changed from the last time it took less time?
>
> On May 24, 2016 7:06 PM, "David Caro"  wrote:
>>
>> On 05/24 18:03, David Caro wrote:
>> > On 05/24 17:57, Fabian Deutsch wrote:
>> > > Hey,
>> > >
>> > > $subj says it all.
>> > >
>> > > Affected jobs are:
>> > > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
>> > >
>> > > I.e. 3.6 - before: ~46min, now 1:23hrs
>> > >
>> > > In master it's even worse: >1:30hrs
>> > >
>> > > Can someone help to identify the reason?
>> >
>> >
>> > I see that this is where there's a big jump in time:
>> >
>> > 06:39:38 Domain installation still in progress. You can reconnect to
>> > 06:39:38 the console to complete the installation process.
>> > 07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
>> >
>> > So it looks as if the code that checks if the domain is shut down is not
>> > working properly, or maybe the virt-install is taking very very long to
>> > work.
>>
>>
>> It seems that the virt-install log is not being archived, and the workdir
>> has already been cleaned up, so I can't check the logfile:
>>
>>
>> /home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log
>>
>>
>> Maybe you can archive it too on the next run to debug
>>
>> >
>> > >
>> > > - fabian
>> > >
>> > > --
>> > > Fabian Deutsch 
>> > > RHEV Hypervisor
>> > > Red Hat
>> > > ___
>> > > Infra mailing list
>> > > Infra@ovirt.org
>> > > http://lists.ovirt.org/mailman/listinfo/infra
>> >
>> > --
>> > David Caro
>> >
>> > Red Hat S.L.
>> > Continuous Integration Engineer - EMEA ENG Virtualization R
>> >
>> > Tel.: +420 532 294 605
>> > Email: dc...@redhat.com
>> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> > Web: www.redhat.com
>> > RHT Global #: 82-62605
>>
>>
>>
>> --
>> David Caro
>>
>> Red Hat S.L.
>> Continuous Integration Engineer - EMEA ENG Virtualization R
>>
>> Tel.: +420 532 294 605
>> Email: dc...@redhat.com
>> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>> Web: www.redhat.com
>> RHT Global #: 82-62605
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>



-- 
Fabian Deutsch 
RHEV Hypervisor
Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Eyal Edri
Is it running on the same slave as before? Can we see a changelog of
things changed from the last time it took less time?
On May 24, 2016 7:06 PM, "David Caro"  wrote:

> On 05/24 18:03, David Caro wrote:
> > On 05/24 17:57, Fabian Deutsch wrote:
> > > Hey,
> > >
> > > $subj says it all.
> > >
> > > Affected jobs are:
> > > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> > >
> > > I.e. 3.6 - before: ~46min, now 1:23hrs
> > >
> > > In master it's even worse: >1:30hrs
> > >
> > > Can someone help to identify the reason?
> >
> >
> > I see that this is where there's a big jump in time:
> >
> > 06:39:38 Domain installation still in progress. You can reconnect to
> > 06:39:38 the console to complete the installation process.
> > 07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
> >
> > So it looks as if the code that checks if the domain is shut down is not
> > working properly, or maybe the virt-install is taking very very long to
> work.
>
>
> It seems that the virt-install log is not being archived, and the workdir
> has already been cleaned up, so I can't check the logfile:
>
>
>  
> /home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log
>
>
> Maybe you can archive it too on the next run to debug
>
> >
> > >
> > > - fabian
> > >
> > > --
> > > Fabian Deutsch 
> > > RHEV Hypervisor
> > > Red Hat
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
>
>
>
> --
> David Caro
>
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R
>
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread David Caro
On 05/24 18:03, David Caro wrote:
> On 05/24 17:57, Fabian Deutsch wrote:
> > Hey,
> > 
> > $subj says it all.
> > 
> > Affected jobs are:
> > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> > 
> > I.e. 3.6 - before: ~46min, now 1:23hrs
> > 
> > In master it's even worse: >1:30hrs
> > 
> > Can someone help to identify the reason?
> 
> 
> I see that this is where there's a big jump in time:
> 
> 06:39:38 Domain installation still in progress. You can reconnect to 
> 06:39:38 the console to complete the installation process.
> 07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
> 
> So it looks as if the code that checks if the domain is shut down is not
> working properly, or maybe the virt-install is taking very very long to work.


It seems that the virt-install log is not being archived, and the workdir has
already been cleaned up, so I can't check the logfile:

   
/home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log


Maybe you can archive it too on the next run to debug
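
A minimal sketch of how that could be done, assuming the standard-CI convention
of collecting whatever ends up under exported-artifacts/ (the script name and
paths here are assumptions, not taken from the actual job config):

  # e.g. at the end of automation/build-artifacts.sh
  mkdir -p exported-artifacts
  cp -v ovirt-node-ng/virt-install.log exported-artifacts/ || true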

> 
> > 
> > - fabian
> > 
> > -- 
> > Fabian Deutsch 
> > RHEV Hypervisor
> > Red Hat
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread David Caro
On 05/24 17:57, Fabian Deutsch wrote:
> Hey,
> 
> $subj says it all.
> 
> Affected jobs are:
> http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> 
> I.e. 3.6 - before: ~46min, now 1:23hrs
> 
> In master it's even worse: >1:30hrs
> 
> Can someone help to identify the reason?


I see that this is where there's a big jump in time:

06:39:38 Domain installation still in progress. You can reconnect to 
06:39:38 the console to complete the installation process.
07:21:51 2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.

So it looks as if the code that checks if the domain is shut down is not
working properly, or maybe the virt-install is taking very very long to work.

> 
> - fabian
> 
> -- 
> Fabian Deutsch 
> RHEV Hypervisor
> Red Hat
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread Fabian Deutsch
Hey,

$subj says it all.

Affected jobs are:
http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/

I.e. 3.6 - before: ~46min, now 1:23hrs

In master it's even worse: >1:30hrs

Can someone help to identify the reason?

- fabian

-- 
Fabian Deutsch 
RHEV Hypervisor
Red Hat
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : ovirt_3.6_system-tests #42

2016-05-24 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Build failed in Jenkins: ovirt_3.6_system-tests #41

2016-05-24 Thread jenkins
See 

Changes:

[David Caro] Added message when the system ram is small

--
[...truncated 179 lines...]
[[ -d /etc/dnf ]] && cat /etc/yum/yum.conf > /etc/dnf/dnf.conf
rm -Rf /var/lib/rpm/__* &>$logdir/rpmbuild.log
rpm --rebuilddb &>>$logdir/rpmbuild.log
EOC
WARNING: Could not find required logging config file: 

 Using default...
INFO: mock.py version 1.2.17 starting (python version = 3.4.3)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
Finish: chroot init
Start: shell
sh: cannot set terminal process group (110813): Inappropriate ioctl for device
sh: no job control in this shell
sh-4.3# set -e
sh-4.3# 
logdir=" 
-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb"
sh-4.3# [[ -d $logdir ]] \
> || mkdir -p "$logdir"
sh-4.3# # Fix that allows using yum inside the chroot on dnf enabled
sh-4.3# # distros
sh-4.3# [[ -d /etc/dnf ]] && cat /etc/yum/yum.conf > /etc/dnf/dnf.conf
sh-4.3# rm -Rf /var/lib/rpm/__* &>$logdir/rpmbuild.log
sh-4.3# rpm --rebuilddb &>>$logdir/rpmbuild.log
sh-4.3# logout
Finish: shell
Clean rpmdb took 2 seconds

Using proxified config ../jenkins/mock_configs/fedora-23-x86_64_proxied.cfg
Generating temporary mock conf 

Adding mount points
Using chroot cache = 
/var/cache/mock/fedora-23-x86_64-57b948690aad4a606d6ca382a3ea484d
Using chroot dir = 
/var/lib/mock/fedora-23-x86_64-57b948690aad4a606d6ca382a3ea484d-94419
Adding repo lago -> http://resources.ovirt.org/repos/lago/stable/0.0/rpm/fc23
Adding repo ovirt-3.6-stable -> 
http://resources.ovirt.org/pub/ovirt-3.6/rpm/fc23
mock \
--root="mocker-fedora-23-x86_64.fc23" \

--configdir="
 \
--no-clean \
--resultdir="logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh" \
--shell 
[[ -d "\$logdir" ]] \
|| mkdir -p "\$logdir"
export 
HOME=
cd
chmod +x automation/basic_suite_3.6.sh
runner_GID="1013"
runner_GROUP="jenkins"
# mock group is called mockbuild inside the chroot
if [[ \$runner_GROUP == "mock" ]]; then
runner_GROUP=mockbuild
fi
if ! getent group "\$runner_GID" &>/dev/null; then
groupadd \
--gid "\$runner_GID" \
"\$runner_GROUP"
fi
start="\$(date +%s)"
res=0
echo "== Running the shellscript 
automation/basic_suite_3.6.sh" \
| tee -a \$logdir/basic_suite_3.6.sh.log
./automation/basic_suite_3.6.sh 2>&1 | tee -a 
\$logdir/basic_suite_3.6.sh.log \
|| res=\${PIPESTATUS[0]}
end="\$(date +%s)"
echo "Took \$((end - start)) seconds" \
| tee -a \$logdir/basic_suite_3.6.sh.log
echo "===" \
| tee -a \$logdir/basic_suite_3.6.sh.log
if [[ "\$(find . -uid 0 -print -quit)" != '' ]]; then
chown -R "$UID:\$runner_GID" .
fi
exit \$res
EOS
WARNING: Could not find required logging config file: 

 Using default...
INFO: mock.py version 1.2.17 starting (python version = 3.4.3)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
Finish: chroot init
Start: shell
sh: cannot set terminal process group (110813): Inappropriate ioctl for device
sh: no job control in this shell
sh-4.3# set -e
sh-4.3# 
logdir=" 
-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh"
sh-4.3# [[ -d "$logdir" ]] \
> || mkdir -p "$logdir"
sh-4.3# 

Build failed in Jenkins: ovirt_3.6_system-tests #40

2016-05-24 Thread jenkins
See 

Changes:

[David Caro] Added message when the system ram is small

[Sandro Bonazzola] ovirt-engine: switch from 3.6.6 to 3.6.7

[Sandro Bonazzola] spagobi: drop jobs

[Sandro Bonazzola] publishers: added 4.0 publisher

[David Caro] Change the grop and rights of the dirs created

--
[...truncated 182 lines...]
[[ -d /etc/dnf ]] && cat /etc/yum/yum.conf > /etc/dnf/dnf.conf
rm -Rf /var/lib/rpm/__* &>$logdir/rpmbuild.log
rpm --rebuilddb &>>$logdir/rpmbuild.log
EOC
WARNING: Could not find required logging config file: 

 Using default...
INFO: mock.py version 1.2.17 starting (python version = 3.4.3)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
Finish: chroot init
Start: shell
sh: cannot set terminal process group (110813): Inappropriate ioctl for device
sh: no job control in this shell
sh-4.3# set -e
sh-4.3# 
logdir=" 
-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb"
sh-4.3# [[ -d $logdir ]] \
> || mkdir -p "$logdir"
sh-4.3# # Fix that allows using yum inside the chroot on dnf enabled
sh-4.3# # distros
sh-4.3# [[ -d /etc/dnf ]] && cat /etc/yum/yum.conf > /etc/dnf/dnf.conf
sh-4.3# rm -Rf /var/lib/rpm/__* &>$logdir/rpmbuild.log
sh-4.3# rpm --rebuilddb &>>$logdir/rpmbuild.log
sh-4.3# logout
Finish: shell
Clean rpmdb took 2 seconds

Using proxified config ../jenkins/mock_configs/fedora-23-x86_64_proxied.cfg
Generating temporary mock conf 

Adding mount points
Using chroot cache = 
/var/cache/mock/fedora-23-x86_64-57b948690aad4a606d6ca382a3ea484d
Using chroot dir = 
/var/lib/mock/fedora-23-x86_64-57b948690aad4a606d6ca382a3ea484d-70358
Adding repo lago -> http://resources.ovirt.org/repos/lago/stable/0.0/rpm/fc23
Adding repo ovirt-3.6-stable -> 
http://resources.ovirt.org/pub/ovirt-3.6/rpm/fc23
mock \
--root="mocker-fedora-23-x86_64.fc23" \

--configdir="
 \
--no-clean \
--resultdir="logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh" \
--shell 
[[ -d "\$logdir" ]] \
|| mkdir -p "\$logdir"
export 
HOME=
cd
chmod +x automation/basic_suite_3.6.sh
runner_GID="1013"
runner_GROUP="jenkins"
# mock group is called mockbuild inside the chroot
if [[ \$runner_GROUP == "mock" ]]; then
runner_GROUP=mockbuild
fi
if ! getent group "\$runner_GID" &>/dev/null; then
groupadd \
--gid "\$runner_GID" \
"\$runner_GROUP"
fi
start="\$(date +%s)"
res=0
echo "== Running the shellscript 
automation/basic_suite_3.6.sh" \
| tee -a \$logdir/basic_suite_3.6.sh.log
./automation/basic_suite_3.6.sh 2>&1 | tee -a 
\$logdir/basic_suite_3.6.sh.log \
|| res=\${PIPESTATUS[0]}
end="\$(date +%s)"
echo "Took \$((end - start)) seconds" \
| tee -a \$logdir/basic_suite_3.6.sh.log
echo "===" \
| tee -a \$logdir/basic_suite_3.6.sh.log
if [[ "\$(find . -uid 0 -print -quit)" != '' ]]; then
chown -R "$UID:\$runner_GID" .
fi
exit \$res
EOS
WARNING: Could not find required logging config file: 

 Using default...
INFO: mock.py version 1.2.17 starting (python version = 3.4.3)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
Finish: chroot init
Start: shell
sh: cannot set terminal process group (110813): Inappropriate ioctl for device
sh: no job control in this shell
sh-4.3# set -e
sh-4.3# 

[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 367 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/367/
Build Number: 367
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57948

-
Changes Since Last Success:
-
Changes for Build #367
[David Caro] Change the grop and rights of the dirs created

[Maor Lipchuk] core: Delete template even if all disks failed to be deleted.




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread Dave Neary
Hi,

That's "dneary", not dnary (explains why you couldn't find me, Duck).

lists.ovirt.org was set up originally by quaid. It used to be on an
old box that I think was retired - the old resources.ovirt.org Linode
host. I have some email records of some general maintenance that quaid
did back in 2012/13 (adding more storage to the host), but nothing
since then.

I don't believe I ever had a hand in the mailman installation/maintenance.

Thanks,
Dave.
On 05/24/2016 05:39 AM, Eyal Edri wrote:
> Dave Neary, added.
> 
> On Tue, May 24, 2016 at 12:38 PM, Marc Dequènes (Duck)  > wrote:
> 
> Quack,
> 
> On 05/24/2016 05:32 PM, David Caro wrote:
> > On 05/24 11:27, Eyal Edri wrote:
> >> Misc,David?
> 
> Misc is on PTO
> 
> > I don't know, when was that done?
> >
> > Maybe it's old enough so Quaid was involved back then? or dnary?
> 
> Added Quaid, please help us.
> Who is dnary? Could not find this nick/mail-prefix/…
> 
> Here is the original mail with the unsolved question:
> 
> >> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck)
> >
> >> wrote:
> >>
> >>> Quack,
> >>>
> >>> I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> >>> Postfix as a workaround.
> >>>
> >>> It seems to me the IPv6 address should be added to the DNS RR
> (so that
> >>> SPF would allow this address too) and Postfix could have IPv6
> >>> reactivated. I see no other problem with other services on the
> machine
> >>> if we do so.
> >>>
> >>> Nevertheless, I found out this in the dns-maps:
> >>> ; TASK0043529 - TASK0108580 overwriten
> >>> ;linode01    IN    AAAA    2600:3c01::f03c:91ff:fe93:4b0d
> >>>
> >>> Which means IPv6 RR were activated and then later disabled. I
> don't know
> >>> how to have access to these TASKs (SNOW?) but I'd really like to
> know
> >>> the reason for this before any action.
> >>>
> >>> Does anyone know why this DNS RR was removed? Or where I could
> >>> find it?
> >>>
> >>> Regards.
> 
> 
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

-- 
Dave Neary - NFV/SDN Community Strategy
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +1-978-399-2182 / Cell: +1-978-799-3338
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 365 - Still Failing!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/365/
Build Number: 365
Build Status:  Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57924

-
Changes Since Last Success:
-
Changes for Build #364
[Sandro Bonazzola] spagobi: drop jobs

[Sharon Gratch] webadmin: v2v-rename "Verify Credentials" field


Changes for Build #365
[Sandro Bonazzola] publishers: added 4.0 publisher

[Allon Mureinik] core: GlusterAuditLogUtil redundant type arguments




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Branching out 4.0 tomorrow at 12:00 sharp (IST)

2016-05-24 Thread Eyal Edri
We have a ticket to track all ACTIONS we need to do in CI:
https://ovirt-jira.atlassian.net/browse/OVIRT-553

Infra team - we need to allocate a few hours on Wed to make sure all tasks
are done on that ticket to ensure 4.0 is covered:


   - sending patch to yaml to add ovirt-engine-4.0 jobs (standard ci)
   - adding setup/upgrade jobs to 4.0 (clone from
   http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/
   )
   - clone findbugs job (still on old jenkins) to 4.0
   - add new 4.0 branches to gerrit hooks
   - add new publisher to 4.0
   - add new ovirt-system-tests job to 4.0
   - add new dao tests to 4.0 (still not migrated to std ci)
   - add 4.0 to stable branch in gerrit hooks


On Tue, May 24, 2016 at 11:40 AM, Tal Nisan  wrote:

> Hi everyone,
>
> Just a quick reminder that we are branching out 4.0 tomorrow, we will make
> it at 12:00 Israel time (11:00 CET).
> Patches that will not be merged to master up to this point will not be a
> part of the 4.0 branch and will have to be backported in order to be in 4.0
> as well as comply with regular stable branch restrictions (include a
> Bug-Url, full acks and so on).
>
>
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-553) Update all jenkins jobs currently building master to build from 4.0 branch too if available

2016-05-24 Thread eyal edri [Administrator] (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305#comment-16305
 ] 

eyal edri [Administrator] commented on OVIRT-553:
-

also adding 4.0 to stable branch on gerrit.

> Update all jenkins jobs currently building master to build from 4.0 branch 
> too if available
> ---
>
> Key: OVIRT-553
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-553
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins
>Reporter: sbonazzo
>Assignee: infra
>
> On May 25th we'll branch 4.0 from master.
> We'll need all jenkins jobs running on master to work on 4.0 as well.



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-master_el7_merged - Build # 364 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/364/
Build Number: 364
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57874

-
Changes Since Last Success:
-
Changes for Build #364
[Sandro Bonazzola] spagobi: drop jobs

[Sharon Gratch] webadmin: v2v-rename "Verify Credentials" field




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 361 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/361/
Build Number: 361
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57874

-
Changes Since Last Success:
-
Changes for Build #361
[Sandro Bonazzola] spagobi: drop jobs

[Sharon Gratch] webadmin: v2v-rename "Verify Credentials" field




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_master_upgrade-from-3.6_el7_merged - Build # 357 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/357/
Build Number: 357
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57895

-
Changes Since Last Success:
-
Changes for Build #357
[Tal Nisan] core: Fix import external provider as template




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.5_el6_merged - Build # 33 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/33/
Build Number: 33
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57893

-
Changes Since Last Success:
-
Changes for Build #33
[Sandro Bonazzola] spagobi: drop jobs

[Jakub Niedermertl] webadmin: Fix of custom properties




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread Amit Aviram
Thanks, will rebase

On Tue, May 24, 2016 at 1:55 PM, Dan Kenigsberg  wrote:

> On Tue, May 24, 2016 at 10:22:16AM +0200, David Caro wrote:
> > On 05/24 11:07, Amit Aviram wrote:
> > > Hi.
> > > For the last day I am getting this error over and over again from
> jenkins:
> > >
> > > Start: yum install*07:23:55* ERROR: Command failed. See logs for
> > > output.*07:23:55*  # /usr/bin/yum-deprecated --installroot
> > >
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> > > --releasever 7 install @buildsys-build
> > > --setopt=tsflags=nocontexts*07:23:55* WARNING: unable to delete
> > > selinux filesystems (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1]
> > > Operation not permitted: '/tmp/mock-selinux-plugin.3tk4zgr4'*07:23:55*
> > > Init took 3 seconds
> > >
> > >
> > > (see
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> > >
> > >
> > > This fails the job, so I get -1 from Jenkins CI for my patch.
> >
> >
> > That's not what's failing the job, it's just a warning; the failure is
> > happening before that, when installing the chroot:
> >
> > 07:23:53 Start: yum install
> > 07:23:55 ERROR: Command failed. See logs for output.
> > 07:23:55  # /usr/bin/yum-deprecated --installroot
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> >
> > Checking the logs (logs.tgz file, archived on the job, under
> > vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):
> >
> >
> > DEBUG util.py:417:
> https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml:
> [Errno 14] HTTPS Error 404 - Not Found
> > DEBUG util.py:417:  Trying other mirror.
> > DEBUG util.py:417:   One of the configured repositories failed ("Custom
> openstack-kilo"),
> > DEBUG util.py:417:   and yum doesn't have enough cached data to
> continue. At this point the only
> > DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways
> to work "fix" this:
> > DEBUG util.py:417:   1. Contact the upstream for the repository and
> get them to fix the problem.
> > DEBUG util.py:417:   2. Reconfigure the baseurl/etc. for the
> repository, to point to a working
> > DEBUG util.py:417:  upstream. This is most often useful if you
> are using a newer
> > DEBUG util.py:417:  distribution release than is supported by
> the repository (and the
> > DEBUG util.py:417:  packages for the previous distribution
> release still work).
> > DEBUG util.py:417:   3. Disable the repository, so yum won't use it
> by default. Yum will then
> > DEBUG util.py:417:  just ignore the repository until you
> permanently enable it again or use
> > DEBUG util.py:417:  --enablerepo for temporary usage:
> > DEBUG util.py:417:  yum-config-manager --disable
> openstack-kilo
> > DEBUG util.py:417:   4. Configure the failing repository to be
> skipped, if it is unavailable.
> > DEBUG util.py:417:  Note that yum will try to contact the repo.
> when it runs most commands,
> > DEBUG util.py:417:  so will have to try and fail each time (and
> thus. yum will be be much
> > DEBUG util.py:417:  slower). If it is a very temporary problem
> though, this is often a nice
> > DEBUG util.py:417:  compromise:
> > DEBUG util.py:417:  yum-config-manager --save
> --setopt=openstack-kilo.skip_if_unavailable=true
> > DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo:
> [Errno 256] No more mirrors to try.
> >
> >
> > So it seems that the repo does not exist anymore, there's a README.txt
> file
> > though that says:
> >
> > RDO Kilo is hosted in CentOS Cloud SIG repository
> > http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/
> >
> > And that new link seems to work ok, so probably you just need to change
> the
> > automation/*.repos files on vdsm git repo to point to the new openstack
> repos
> > url instead of the old one and everything should work ok.
> >
> >
> >
> > >
> > > I am pretty sure it is not related to the patch. also fc23 job passes.
> > >
> > >
> > > Any idea what's the problem?
>
> Yep, I believe that https://gerrit.ovirt.org/57870 has solved that.
> Please rebase on top of current ovirt-3.6 branch.
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread Dan Kenigsberg
On Tue, May 24, 2016 at 10:22:16AM +0200, David Caro wrote:
> On 05/24 11:07, Amit Aviram wrote:
> > Hi.
> > For the last day I am getting this error over and over again from jenkins:
> > 
> > Start: yum install*07:23:55* ERROR: Command failed. See logs for
> > output.*07:23:55*  # /usr/bin/yum-deprecated --installroot
> > /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> > --releasever 7 install @buildsys-build
> > --setopt=tsflags=nocontexts*07:23:55* WARNING: unable to delete
> > selinux filesystems (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1]
> > Operation not permitted: '/tmp/mock-selinux-plugin.3tk4zgr4'*07:23:55*
> > Init took 3 seconds
> > 
> > 
> > (see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> > 
> > 
> > This fails the job, so I get -1 from Jenkins CI for my patch.
> 
> 
> That's not what's failing the job, it's just a warning; the failure is happening
> before that, when installing the chroot:
> 
> 07:23:53 Start: yum install
> 07:23:55 ERROR: Command failed. See logs for output.
> 07:23:55  # /usr/bin/yum-deprecated --installroot 
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ 
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> 
> Checking the logs (logs.tgz file, archived on the job, under
> vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):
> 
> 
> DEBUG util.py:417:  
> https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml:
>  [Errno 14] HTTPS Error 404 - Not Found
> DEBUG util.py:417:  Trying other mirror.
> DEBUG util.py:417:   One of the configured repositories failed ("Custom 
> openstack-kilo"),
> DEBUG util.py:417:   and yum doesn't have enough cached data to continue. At 
> this point the only
> DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways to 
> work "fix" this:
> DEBUG util.py:417:   1. Contact the upstream for the repository and get 
> them to fix the problem.
> DEBUG util.py:417:   2. Reconfigure the baseurl/etc. for the repository, 
> to point to a working
> DEBUG util.py:417:  upstream. This is most often useful if you are 
> using a newer
> DEBUG util.py:417:  distribution release than is supported by the 
> repository (and the
> DEBUG util.py:417:  packages for the previous distribution release 
> still work).
> DEBUG util.py:417:   3. Disable the repository, so yum won't use it by 
> default. Yum will then
> DEBUG util.py:417:  just ignore the repository until you permanently 
> enable it again or use
> DEBUG util.py:417:  --enablerepo for temporary usage:
> DEBUG util.py:417:  yum-config-manager --disable openstack-kilo
> DEBUG util.py:417:   4. Configure the failing repository to be skipped, 
> if it is unavailable.
> DEBUG util.py:417:  Note that yum will try to contact the repo. when 
> it runs most commands,
> DEBUG util.py:417:  so will have to try and fail each time (and thus. 
> yum will be be much
> DEBUG util.py:417:  slower). If it is a very temporary problem 
> though, this is often a nice
> DEBUG util.py:417:  compromise:
> DEBUG util.py:417:  yum-config-manager --save 
> --setopt=openstack-kilo.skip_if_unavailable=true
> DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo: [Errno 
> 256] No more mirrors to try.
> 
> 
> So it seems that the repo does not exist anymore, there's a README.txt file
> though that says:
> 
> RDO Kilo is hosted in CentOS Cloud SIG repository
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/
> 
> And that new link seems to work ok, so probably you just need to change the
> automation/*.repos files on vdsm git repo to point to the new openstack repos
> url instead of the old one and everything should work ok.
> 
> 
> 
> > 
> > I am pretty sure it is not related to the patch. also fc23 job passes.
> > 
> > 
> > Any idea what's the problem?

Yep, I believe that https://gerrit.ovirt.org/57870 has solved that.
Please rebase on top of current ovirt-3.6 branch.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 33 - Failure!

2016-05-24 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/33/
Build Number: 33
Build Status:  Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/57804

-
Changes Since Last Success:
-
Changes for Build #33
[Sandro Bonazzola] ovirt-engine: switch from 3.6.6 to 3.6.7

[Jakub Niedermertl] frontend: Import VM dialog - architecture




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to stable : ovirt_master_system-tests #59

2016-05-24 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread David Caro
On 05/24 18:38, Marc Dequènes (Duck) wrote:
> Quack,
> 
> On 05/24/2016 05:32 PM, David Caro wrote:
> > On 05/24 11:27, Eyal Edri wrote:
> >> Misc,David?
> 
> Misc is on PTO
> 
> > I don't know, when was that done?
> > 
> > Maybe it's old enough so Quaid was involved back then? or dnary?
> 
> Added Quaid, please help us.
> Who is dnary? Could not find this nick/mail-prefix/…

Dave Neary, he was community manager for oVirt ~3 years ago

> 
> Here is the original mail with the unsolved question:
> 
> >> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) 
> >> wrote:
> >>
> >>> Quack,
> >>>
> >>> I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> >>> Postfix as a workaround.
> >>>
> >>> It seems to me the IPv6 address should be added to the DNS RR (so that
> >>> SPF would allow this address too) and Postfix could have IPv6
> >>> reactivated. I see no other problem with other services on the machine
> >>> if we do so.
> >>>
> >>> Nevertheless, I found out this in the dns-maps:
> >>> ; TASK0043529 - TASK0108580 overwriten
> >>> ;linode01    IN    AAAA    2600:3c01::f03c:91ff:fe93:4b0d
> >>>
> >>> Which means IPv6 RR were activated and then later disabled. I don't know
> >>> how to have access to these TASKs (SNOW?) but I'd really like to know
> >>> the reason for this before any action.
> >>>
> >>> Does anyone know why this DNS RR was removed? Or where I could find it?
> >>>
> >>> Regards.
> 



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread Duck
Quack,

On 05/24/2016 05:32 PM, David Caro wrote:
> On 05/24 11:27, Eyal Edri wrote:
>> Misc,David?

Misc is on PTO

> I don't know, when was that done?
> 
> Maybe it's old enough so Quaid was involved back then? or dnary?

Added Quaid, please help us.
Who is dnary? Could not find this nick/mail-prefix/…

Here is the original mail with the unsolved question:

>> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) 
>> wrote:
>>
>>> Quack,
>>>
>>> I'm having a look at OVIRT-357 and found that IPv6 was disabled for
>>> Postfix as a workaround.
>>>
>>> It seems to me the IPv6 address should be added to the DNS RR (so that
>>> SPF would allow this address too) and Postfix could have IPv6
>>> reactivated. I see no other problem with other services on the machine
>>> if we do so.
>>>
>>> Nevertheless, I found out this in the dns-maps:
>>> ; TASK0043529 - TASK0108580 overwriten
>>> ;linode01    IN    AAAA    2600:3c01::f03c:91ff:fe93:4b0d
>>>
>>> Which means IPv6 RR were activated and then later disabled. I don't know
>>> how to have access to these TASKs (SNOW?) but I'd really like to know
>>> the reason for this before any action.
>>>
>>> Does anyone know why this DNS RR was removed? Or where I could find it?
>>>
>>> Regards.



signature.asc
Description: OpenPGP digital signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread David Caro
On 05/24 11:27, Eyal Edri wrote:
> Misc,David?

I don't know, when was that done?

Maybe it's old enough so Quaid was involved back then? or dnary?

> 
> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) 
> wrote:
> 
> > Quack,
> >
> > I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> > Postfix as a workaround.
> >
> > It seems to me the IPv6 address should be added to the DNS RR (so that
> > SPF would allow this address too) and Postfix could have IPv6
> > reactivated. I see no other problem with other services on the machine
> > if we do so.
> >
> > Nevertheless, I found out this in the dns-maps:
> > ; TASK0043529 - TASK0108580 overwriten
> > ;linode01    IN    AAAA    2600:3c01::f03c:91ff:fe93:4b0d
> >
> > Which means IPv6 RR were activated and then later disabled. I don't know
> > how to have access to these TASKs (SNOW?) but I'd really like to know
> > the reason for this before any action.
> >
> > Does anyone know why this DNS RR was removed? Or where I could find it?
> >
> > Regards.
> >
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread Eyal Edri
Misc,David?

On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) 
wrote:

> Quack,
>
> I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> Postfix as a workaround.
>
> It seems to me the IPv6 address should be added to the DNS RR (so that
> SPF would allow this address too) and Postfix could have IPv6
> reactivated. I see no other problem with other services on the machine
> if we do so.
>
> Nevertheless, I found out this in the dns-maps:
> ; TASK0043529 - TASK0108580 overwriten
> ;linode01    IN    AAAA    2600:3c01::f03c:91ff:fe93:4b0d
>
> Which means IPv6 RR were activated and then later disabled. I don't know
> how to have access to these TASKs (SNOW?) but I'd really like to know
> the reason for this before any action.
>
> Does anyone know why this DNS RR was removed? Or where I could find it?
>
> Regards.
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread David Caro
On 05/24 11:07, Amit Aviram wrote:
> Hi.
> For the last day I am getting this error over and over again from jenkins:
> 
> Start: yum install
> 07:23:55 ERROR: Command failed. See logs for output.
> 07:23:55  # /usr/bin/yum-deprecated --installroot /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> 07:23:55 WARNING: unable to delete selinux filesystems (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1] Operation not permitted: '/tmp/mock-selinux-plugin.3tk4zgr4'
> 07:23:55 Init took 3 seconds
> 
> 
> (see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> 
> 
> This fails the job, so I get -1 from Jenkins CI for my patch.


That's not what's failing the job; that's just a warning. The failure is happening
before that, when installing the chroot:

07:23:53 Start: yum install
07:23:55 ERROR: Command failed. See logs for output.
07:23:55  # /usr/bin/yum-deprecated --installroot 
/var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ 
--releasever 7 install @buildsys-build --setopt=tsflags=nocontexts

Checking the logs (logs.tgz file, archived on the job, under
vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):


DEBUG util.py:417:  https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
DEBUG util.py:417:  Trying other mirror.
DEBUG util.py:417:   One of the configured repositories failed ("Custom openstack-kilo"),
DEBUG util.py:417:   and yum doesn't have enough cached data to continue. At this point the only
DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways to work "fix" this:
DEBUG util.py:417:   1. Contact the upstream for the repository and get them to fix the problem.
DEBUG util.py:417:   2. Reconfigure the baseurl/etc. for the repository, to point to a working
DEBUG util.py:417:  upstream. This is most often useful if you are using a newer
DEBUG util.py:417:  distribution release than is supported by the repository (and the
DEBUG util.py:417:  packages for the previous distribution release still work).
DEBUG util.py:417:   3. Disable the repository, so yum won't use it by default. Yum will then
DEBUG util.py:417:  just ignore the repository until you permanently enable it again or use
DEBUG util.py:417:  --enablerepo for temporary usage:
DEBUG util.py:417:  yum-config-manager --disable openstack-kilo
DEBUG util.py:417:   4. Configure the failing repository to be skipped, if it is unavailable.
DEBUG util.py:417:  Note that yum will try to contact the repo. when it runs most commands,
DEBUG util.py:417:  so will have to try and fail each time (and thus. yum will be be much
DEBUG util.py:417:  slower). If it is a very temporary problem though, this is often a nice
DEBUG util.py:417:  compromise:
DEBUG util.py:417:  yum-config-manager --save --setopt=openstack-kilo.skip_if_unavailable=true
DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo: [Errno 256] No more mirrors to try.


So it seems the repo does not exist anymore. There is, however, a README.txt
file there that says:

RDO Kilo is hosted in CentOS Cloud SIG repository
http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/

That new link seems to work fine, so you probably just need to change the
automation/*.repos files in the vdsm git repo to point to the new openstack
repo URL instead of the old one, and everything should work.
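
Something along these lines would be a starting point (a sketch only; it
assumes the old fedorapeople URL appears verbatim in those files, so check the
actual contents of automation/*.repos first):

cd vdsm
grep -n 'openstack-kilo' automation/*.repos
# swap the dead fedorapeople repo for the CentOS Cloud SIG mirror
sed -i 's|https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7|http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo|g' automation/*.repos
git diff automation/    # review the change before sending the patch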



> 
> I am pretty sure it is not related to the patch; also, the fc23 job passes.
> 
> 
> Any idea what the problem is?
> 
> 
> Thanks

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread Amit Aviram
Hi.
For the last day I am getting this error over and over again from jenkins:

Start: yum install
07:23:55 ERROR: Command failed. See logs for output.
07:23:55  # /usr/bin/yum-deprecated --installroot /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
07:23:55 WARNING: unable to delete selinux filesystems (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1] Operation not permitted: '/tmp/mock-selinux-plugin.3tk4zgr4'
07:23:55 Init took 3 seconds


(see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)


This fails the job, so I get -1 from Jenkins CI for my patch.

I am pretty sure it is not related to the patch; also, the fc23 job passes.


Any idea what the problem is?


Thanks
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-515) Re: Cannot clone bugs with more than 2^16 characters in the comments

2016-05-24 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16303#comment-16303
 ] 

Gil Shinar commented on OVIRT-515:
--

Something is really strange here. It lets you write a comment with more than
2^16 chars but doesn't let you clone it?
It looks more like a Bugzilla bug than a Bugzilla limitation.

> Re: Cannot clone bugs with more than 2^16 characters in the comments
> 
>
> Key: OVIRT-515
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-515
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
> Reporter: dcaro
> Assignee: infra
> Attachments: signature.asc, signature.asc
>
>
> Looks like a restriction of the API itself, probably triggered by us adding
> the 'comment from ...' header to the comments while cloning.
> Opening a bug on it to keep track of it.
> On 05/02 12:36, Tal Nisan wrote:
> > I've encountered this in this job:
> > http://jenkins-ci.eng.lab.tlv.redhat.com/job/system_bugzilla_clone_zstream_milestone/141/console
> > 
> > For this bug:
> > https://bugzilla.redhat.com/1301083
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> David Caro
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


oVirt 3.6.7 RC1 build starting

2016-05-24 Thread Sandro Bonazzola
FYI oVirt product maintainers,

An oVirt build for an official release is going to start in 20 minutes.

If you're a maintainer for any of the projects included in the oVirt
distribution and you have changes in your package ready to be released,
please:

- bump version and release to be GA ready

- tag your release within git (this implies a GitHub Release will be created
automatically; see the sketch after this list)

- build your packages within jenkins / koji / copr / whatever

- verify all bugs on MODIFIED have target release and target milestone set.
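
A minimal sketch of the tagging step mentioned above (the tag name and remote
are only examples; use your project's actual version and remote):

# annotated tag on the commit being released
git tag -a v3.6.7 -m "oVirt 3.6.7 RC1"
# push the tag so GitHub picks it up (and creates the GitHub Release)
git push origin v3.6.7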

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra