[ovirt-devel] MockConfigRule for default values

2016-10-30 Thread Allon Mureinik
Hi all,

One of my long-time frustrations with MockConfigRule was the bloat it
caused by forcing you to mock each and every ConfigValue you may use, when
more often than not you don't really care about the value but just want
your test not to fail with a NullPointerException.

I've recently merged a series of patches to address this and return the
default value for any ConfigValue that is not explicitly specified. In
other words, this aligns MockConfigRule's behavior with the production code:
if a value is specified, return it, and if not, return the default from
the annotation.

This change has no effect on values you explicitly mock, and if you
particularly need a certain ConfigValue to return null, you may state so
explicitly, either inline when you construct the Rule:

@ClassRule
public static MockConfigRule mcr =
        new MockConfigRule(mockConfig(ConfigValues.SomeValue, null));

Or later on when you need it:

@Test
public void someTest() {
    mcr.mockConfigValue(ConfigValues.SomeOtherValue, null);
}
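The fallback semantics described above (an explicit mock always wins, even an explicit null; anything else falls back to the annotation default) can be sketched as a small simulation. This is illustrative Python, not the actual oVirt Java implementation - the class and method names here are made up for the sketch:

```python
# Minimal simulation of MockConfigRule's new fallback semantics:
# explicitly mocked values win; anything else returns the default
# that would come from the ConfigValues annotation.

class MockConfig:
    def __init__(self, defaults, mocked=None):
        self._defaults = defaults          # stand-in for annotation defaults
        self._mocked = dict(mocked or {})  # stand-in for mockConfig(...) entries

    def mock_value(self, key, value):
        """Rough equivalent of mcr.mockConfigValue(key, value)."""
        self._mocked[key] = value

    def get(self, key):
        # An explicit mock always wins - even an explicit None/null.
        if key in self._mocked:
            return self._mocked[key]
        # Otherwise, behave like production code: use the default.
        return self._defaults[key]

defaults = {"SomeValue": 10, "SomeOtherValue": "abc"}
cfg = MockConfig(defaults, mocked={"SomeValue": None})

print(cfg.get("SomeValue"))       # None - explicitly mocked to null
print(cfg.get("SomeOtherValue"))  # abc - falls back to the default
```

The point is that an unmocked value no longer blows up with a missing entry; it simply resolves to its default, just as in production.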

Additional details can be found on the website:
http://www.ovirt.org/develop/dev-process/unit-testing-utilities/mockconfigrule/

-Allon
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [vdsm] branch ovirt-4.0.5 created

2016-10-30 Thread Shlomo Ben David
Hi Eyal,

Most of the hooks have been updated not to use the STABLE_BRANCHES
parameter, but there are still a few hooks that use it, such as the
'patchset-created.warn_if_not_merged_to_previous_branch' hook.

Best Regards,

Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
RHCSA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)

OPEN SOURCE - 1 4 011 && 011 4 1

On Sun, Oct 30, 2016 at 5:25 PM, Eyal Edri  wrote:

>
>
> On Mon, Oct 10, 2016 at 6:24 PM, Francesco Romani 
> wrote:
>
>> - Original Message -
>> > From: "Dan Kenigsberg" 
>> > To: "Francesco Romani" 
>> > Cc: "Nir Soffer" , devel@ovirt.org
>> > Sent: Monday, October 10, 2016 5:11:26 PM
>> > Subject: Re: [vdsm] branch ovirt-4.0.5 created
>> >
>> > On Mon, Oct 10, 2016 at 10:30:49AM -0400, Francesco Romani wrote:
>> > > Hi everyone,
>> > >
>> > > this time I chose to create the ovirt-4.0.5 branch.
>> > > I have already merged some patches for 4.0.6.
>> > >
>> > > Unfortunately I branched a bit too early (from last tag :))
>> > >
>> > > So patches
>> > > https://gerrit.ovirt.org/#/c/65303/1
>> > > https://gerrit.ovirt.org/#/c/65304/1
>> > > https://gerrit.ovirt.org/#/c/65305/1
>> > >
>> > > They should be trivially mergeable - the only thing changed from the
>> > > ovirt-4.0 counterpart
>> > > is the change-id. Please have a quick look just to double-check.
>> >
>> > Change-Id should be the same for a master patch and all of its backport.
>> > It seems that it was NOT changed, at least for
>> > https://gerrit.ovirt.org/#/q/I5cea6ec71c913d74d95317ff7318259d64b40969
>> > which is a GOOD thing.
>>
>> Yes, sorry, indeed it is (and indeed it should not change).
>>
>> > I think we want to enable CI on the new 4.0.5 branch, right? Otherwise
>> > we'd need to fake the CI+1 flag until 4.0.5 is shipped.
>>
>> We should, but it is not urgently needed - just regular priority.
>> For the aforementioned first three patches especially I'm just overly
>> cautious.
>>
>>
> Was CI enabled for 4.0.5 branch?
> Adding infra as well.
>
> Shlomi, did we already enable the regex for stable branches, so that we
> don't need to manually update conf files?
>
>
>
>> --
>> Francesco Romani
>> Red Hat Engineering Virtualization R & D
>> Phone: 8261328
>> IRC: fromani
>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

Re: [ovirt-devel] [vdsm] branch ovirt-4.0.5 created

2016-10-30 Thread Eyal Edri
On Mon, Oct 10, 2016 at 6:24 PM, Francesco Romani 
wrote:

> - Original Message -
> > From: "Dan Kenigsberg" 
> > To: "Francesco Romani" 
> > Cc: "Nir Soffer" , devel@ovirt.org
> > Sent: Monday, October 10, 2016 5:11:26 PM
> > Subject: Re: [vdsm] branch ovirt-4.0.5 created
> >
> > On Mon, Oct 10, 2016 at 10:30:49AM -0400, Francesco Romani wrote:
> > > Hi everyone,
> > >
> > > this time I chose to create the ovirt-4.0.5 branch.
> > > I have already merged some patches for 4.0.6.
> > >
> > > Unfortunately I branched a bit too early (from last tag :))
> > >
> > > So patches
> > > https://gerrit.ovirt.org/#/c/65303/1
> > > https://gerrit.ovirt.org/#/c/65304/1
> > > https://gerrit.ovirt.org/#/c/65305/1
> > >
> > > They should be trivially mergeable - the only thing changed from the
> > > ovirt-4.0 counterpart
> > > is the change-id. Please have a quick look just to double-check.
> >
> > Change-Id should be the same for a master patch and all of its backport.
> > It seems that it was NOT changed, at least for
> > https://gerrit.ovirt.org/#/q/I5cea6ec71c913d74d95317ff7318259d64b40969
> > which is a GOOD thing.
>
> Yes, sorry, indeed it is (and indeed it should not change).
>
> > I think we want to enable CI on the new 4.0.5 branch, right? Otherwise
> > we'd need to fake the CI+1 flag until 4.0.5 is shipped.
>
> We should, but it is not urgently needed - just regular priority.
> For the aforementioned first three patches especially I'm just overly
> cautious.
>
>
Was CI enabled for 4.0.5 branch?
Adding infra as well.

Shlomi, did we already enable the regex for stable branches, so that we
don't need to manually update conf files?



> --
> Francesco Romani
> Red Hat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
>
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] Outreachy internship

2016-10-30 Thread Eyal Edri
On Mon, Oct 10, 2016 at 5:59 PM, Francesco Romani 
wrote:

> - Original Message -
> > From: "Саша Ершова" 
> > To: devel@ovirt.org
> > Sent: Wednesday, October 5, 2016 7:40:21 PM
> > Subject: [ovirt-devel] Outreachy internship
> >
> > Dear all,
> > My name is Alexandra Ershova, and I'm a student in Natural Language
> > Processing in Higher School of Economics, Moscow, Russia. I'd like to
> take
> > part in the current round of Outreachy internships. My main programming
> > language is Python (I have experience with both 2 and 3). Writing system
> > tests seems like an interesting project to me, and I would like to do it.
> > Could you please give me an application task, so that I could make my
> first
> > contribution?
>
> Hello Alexandra, thanks for your interest in oVirt!
>
> In addition to what Yaniv already outlined, did you manage to run the Vdsm
> testsuite?
>
> A good first step would indeed be to make sure the lago environment is up
> and running and that it can run the oVirt system tests.
>
> Feel free to file issues and/or ask for help on the devel@ovirt.org
> mailing list.
> Once you have played a bit with lago, it is a good idea to introduce
> yourself there; you can find a broader audience and even more mentors for
> lago itself (the lago developers hang around on that ML).
>

You can also try lago-de...@ovirt.org for lago specific questions.

We recently updated the Lago & OST docs and it should be very easy now to
get started with both! :)

You can start with:

http://lago.readthedocs.io/en/stable/

After you install Lago, try out the simple example of getting a Jenkins VM
running, and later move on to the oVirt example; both can be found here:

http://lago.readthedocs.io/en/stable/Lago_Examples.html#available-examples


The oVirt example is actually in another project called
'ovirt-system-tests', to which the Lago documentation will redirect you:

http://ovirt-system-tests.readthedocs.io/en/latest/


Please feel free to contact us for any questions! :)


> I recommend working on a CentOS 7.2 or Fedora 24 system, either real or
> virtualized.
>
> Should you have any question, feel free to post a message on
> devel@ovirt.org, or ping me
> on irc (fromani on the #vdsm channel on freenode).
>
> Please note the following assumes you have an oVirt installation (of any
> kind) and basic knowledge of the architecture.
>
> The application task for the idea you expressed interest in is:
>
> 1. Write a system test to make sure one host has the 'ovirtmgmt' network
>    available (defined and running). Another way to check this is to make
>    sure the host has one active NIC which is part of the aforementioned
>    network. You can check this from the oVirt Engine webadmin UI: select
>    the "host" panel and check the "network interfaces" subtab in the lower
>    portion of the screen.
>
> Please note the above is a VERY terse introduction. You will likely need
> clarifications.
> You are more than welcome to ask for clarifications via mail (here),
> and/or join the #vdsm
> IRC channel on the freenode network.
>
> --
> Francesco Romani
> Red Hat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
>
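The check in the application task above boils down to: the host must have at least one active NIC attached to the 'ovirtmgmt' network. As a very rough illustration of the logic only - using made-up data shapes, not the real Engine/Vdsm API, which the actual system test would query instead - it could look like:

```python
# Hypothetical shapes: each NIC is a dict with 'name', 'network' and 'up'
# keys. A real system test would fetch this data from the oVirt Engine
# (or Vdsm) rather than hard-coding it.

def has_active_mgmt_network(nics, network="ovirtmgmt"):
    """Return True if at least one active NIC is on the given network."""
    return any(nic["network"] == network and nic["up"] for nic in nics)

host_nics = [
    {"name": "eth0", "network": "ovirtmgmt", "up": True},
    {"name": "eth1", "network": "storage", "up": False},
]
print(has_active_mgmt_network(host_nics))  # True
```

A test built on this predicate would fail cleanly with a meaningful message if the management network is missing or down, which is the behaviour the task asks for.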



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Yaniv Kaul
On Sun, Oct 30, 2016 at 12:57 PM, Nadav Goldin  wrote:

> On Sun, Oct 30, 2016 at 12:40 PM, Yaniv Kaul  wrote:
> > Not exactly.
>
> My bad - I missed that the tests run in parallel. What this means, though,
> is that 'ovirt-log-collector' can fail when there are ongoing tasks (such
> as adding the storage domains); I assume that is not the expected
> behaviour. I'll send a patch separating the test for now.
>

https://gerrit.ovirt.org/#/c/65857/1

Y.

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Nadav Goldin
On Sun, Oct 30, 2016 at 12:40 PM, Yaniv Kaul  wrote:
> Not exactly.

My bad - I missed that the tests run in parallel. What this means, though,
is that 'ovirt-log-collector' can fail when there are ongoing tasks (such
as adding the storage domains); I assume that is not the expected
behaviour. I'll send a patch separating the test for now.


Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Yaniv Kaul
On Sun, Oct 30, 2016 at 12:26 PM, Nadav Goldin  wrote:

> Hi all, bumping this thread due to an almost identical failure[1]:
>
> ovirt-log-collector/ovirt-log-collector-20161030053238.log:2016-10-30
> 05:33:09::ERROR::__main__::791::root:: Failed to collect logs from:
> 192.168.200.4; /bin/ls:
> /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.job.0:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.1:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.task:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.0:
> No such file or directory
>
> To make sure, I've checked lago/OST and couldn't find any stage that
> references '/rhev' or manipulates ovirt-log-collector; the only
> customization made is an 'ovirt-log-collector.conf' with user/password.
> The code that pulls the logs in OST[2] runs the following command on the
> engine VM (and that is where it fails):
>
> ovirt-log-collector --conf /root/ovirt-log-collector.conf
>
> The failure comes right after the 'add_secondary_storage_domains'[3] test,
> all of whose steps ran successfully.
>

Not exactly.


>
> Can anyone look into this?
>

It may be my fault, in a way. I've added the log collector test to run in
parallel to the tests that add the secondary storage domains. The
directories it tries to access may or may not be available - this is
probably racy. I don't think it should fail, but I can certainly see why it
can.
The easiest 'fix' would be to split it into its own test (I wanted to save
execution time, as most of the time spent on the secondary storage domains
test is not really useful).
Y.
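The race described here - task files under the SPM's tasks directory vanishing between the directory scan and the per-file access - is exactly why a plain `ls` exits non-zero. A collector walking such a volatile directory can tolerate it by treating ENOENT as "file already gone" rather than as a failure. A minimal generic sketch (plain Python, not the actual ovirt-log-collector code):

```python
import errno
import os

def list_volatile_dir(path):
    """List (name, size) for files in a directory whose entries may
    vanish mid-scan.

    Entries deleted between the listdir and the stat are silently
    skipped instead of failing the whole collection run.
    """
    results = []
    try:
        names = os.listdir(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return results  # the directory itself is already gone
        raise
    for name in names:
        full = os.path.join(path, name)
        try:
            results.append((name, os.stat(full).st_size))
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise  # only 'No such file or directory' is expected here
    return results
```

Splitting the log-collector run into its own (sequential) test, as suggested above, avoids the race entirely; the tolerant listing is only how a collector could coexist with concurrent task cleanup.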



>
> Thanks,
> Nadav.
>
> [1] http://jenkins.ovirt.org/job/ovirt-system-tests_master_
> check-patch-fc24-x86_64/141/console
> [2] https://github.com/oVirt/ovirt-system-tests/blob/
> master/basic_suite_master/test-scenarios/002_bootstrap.py#L490
> [3] https://github.com/oVirt/ovirt-system-tests/blob/
> master/basic_suite_master/test-scenarios/002_bootstrap.py#L243
>
>
> On Tue, Sep 20, 2016 at 9:45 AM, Sandro Bonazzola 
> wrote:
> >
> >
> >
> > On Fri, Sep 9, 2016 at 1:19 PM, Yaniv Kaul  wrote:
> >>
> >> Indeed, this is the log collector. I wonder if we collect its logs...
> >> Y.
> >
> >
> > This can't be log-collector; it may be the sos vdsm plugin.
> > That said, if we run log-collector within lago, we should collect the
> results as job artifacts.
> >
> >
> >>
> >>
> >>
> >> On Thu, Sep 8, 2016 at 6:54 PM, Eyal Edri  wrote:
> >>>
> >>> I'm pretty sure lago or the ovirt system tests aren't doing it, but it's
> the log collector, which runs during that test. I'm not near a computer so
> I can't verify it yet.
> >>>
> >>>
> >>> On Sep 8, 2016 6:05 PM, "Nir Soffer"  wrote:
> 
>  On Thu, Sep 8, 2016 at 5:45 PM, Eyal Edri  wrote:
>  > Adding devel.
>  >
>  > On Thu, Sep 8, 2016 at 5:43 PM, Shlomo Ben David <
> sbend...@redhat.com>
>  > wrote:
>  >>
>  >> Hi,
>  >>
>  >> Job [1] is failing with the following error:
>  >>
>  >> lago.ssh: DEBUG: Command 8de75538 on lago_basic_suite_master_engine
>  >> errors:
>  >>  ERROR: Failed to collect logs from: 192.168.200.2; /bin/ls:
>  >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-
> 9451890fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
>  >> No such file or directory
>  >> /bin/ls: cannot open directory
>  >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-
> 9451890fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
>  >> No such file or directory
> 
>  This looks like a lago issue - it should never read anything inside
> /rhev
> 
>  This is a private directory for vdsm, no other process should ever
> depend
>  on the content inside this directory, or even on the fact that it
> exists.
> 
>  In particular, /rhev/data-center/mnt/blockSD/*/master/tasks/*.temp
>  Is not a log file, and lago should not collect it.
> 
>  Nir
> 
>  >> lago.utils: ERROR: Error while running thread
>  >> Traceback (most recent call last):
>  >>   File 

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Nadav Goldin
Hi all, bumping this thread due to an almost identical failure[1]:

ovirt-log-collector/ovirt-log-collector-20161030053238.log:2016-10-30
05:33:09::ERROR::__main__::791::root:: Failed to collect logs from:
192.168.200.4; /bin/ls:
/rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.job.0:
No such file or directory
ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
cannot access 
/rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.1:
No such file or directory
ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
cannot access 
/rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.task:
No such file or directory
ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
cannot access 
/rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.0:
No such file or directory

To make sure, I've checked lago/OST and couldn't find any stage that
references '/rhev' or manipulates ovirt-log-collector; the only
customization made is an 'ovirt-log-collector.conf' with user/password.
The code that pulls the logs in OST[2] runs the following command on the
engine VM (and that is where it fails):

ovirt-log-collector --conf /root/ovirt-log-collector.conf

The failure comes right after the 'add_secondary_storage_domains'[3] test,
all of whose steps ran successfully.

Can anyone look into this?

Thanks,
Nadav.

[1] 
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-fc24-x86_64/141/console
[2] 
https://github.com/oVirt/ovirt-system-tests/blob/master/basic_suite_master/test-scenarios/002_bootstrap.py#L490
[3] 
https://github.com/oVirt/ovirt-system-tests/blob/master/basic_suite_master/test-scenarios/002_bootstrap.py#L243


On Tue, Sep 20, 2016 at 9:45 AM, Sandro Bonazzola  wrote:
>
>
>
> On Fri, Sep 9, 2016 at 1:19 PM, Yaniv Kaul  wrote:
>>
>> Indeed, this is the log collector. I wonder if we collect its logs...
>> Y.
>
>
> This can't be log-collector; it may be the sos vdsm plugin.
> That said, if we run log-collector within lago, we should collect the results
> as job artifacts.
>
>
>>
>>
>>
>> On Thu, Sep 8, 2016 at 6:54 PM, Eyal Edri  wrote:
>>>
>>> I'm pretty sure lago or the ovirt system tests aren't doing it, but it's the log
>>> collector, which runs during that test. I'm not near a computer so
>>> I can't verify it yet.
>>>
>>>
>>> On Sep 8, 2016 6:05 PM, "Nir Soffer"  wrote:

 On Thu, Sep 8, 2016 at 5:45 PM, Eyal Edri  wrote:
 > Adding devel.
 >
 > On Thu, Sep 8, 2016 at 5:43 PM, Shlomo Ben David 
 > wrote:
 >>
 >> Hi,
 >>
 >> Job [1] is failing with the following error:
 >>
 >> lago.ssh: DEBUG: Command 8de75538 on lago_basic_suite_master_engine
 >> errors:
 >>  ERROR: Failed to collect logs from: 192.168.200.2; /bin/ls:
 >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-9451890fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
 >> No such file or directory
 >> /bin/ls: cannot open directory
 >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-9451890fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
 >> No such file or directory

 This looks like a lago issue - it should never read anything inside /rhev

 This is a private directory for vdsm, no other process should ever depend
 on the content inside this directory, or even on the fact that it exists.

 In particular, /rhev/data-center/mnt/blockSD/*/master/tasks/*.temp
 Is not a log file, and lago should not collect it.

 Nir

 >> lago.utils: ERROR: Error while running thread
 >> Traceback (most recent call last):
 >>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 53, in
 >> _ret_via_queue
 >> queue.put({'return': func()})
 >>   File
 >> "/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py",
 >> line 493, in log_collector
 >> result.code, 0, 'log collector failed. Exit code is %s' % 
 >> result.code
 >>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 
 >> 29,
 >> in eq_
 >> raise AssertionError(msg or "%r != %r" % (a, b))
 >> AssertionError: log collector failed. Exit code is 2
 >>
 >>
 >> * The previous issue already fixed (SDK) and now we have a new issue on
 >> the same area.
 >>
 >>
 >> [1] -
 >> 

Re: [ovirt-devel] patches failing continuous integration after being merged

2016-10-30 Thread Eyal Edri
There was a thread on failing system tests, with a few patches sent to fix
the issue - was it related to this?
It seems the upgrade job didn't fail on the patch itself, so my suspicion
is that it might be related to the auto-rebase done on merge.

We have an open ticket to enable Zuul for pre-merge tests (it should
actually run the check-merge jobs), which will help remediate such issues
(by blocking the merge and giving -1).
We'll need to see if we can introduce it soon, after reviewing current
infra tasks and priorities.

On Sat, Oct 22, 2016 at 10:40 AM, Yaniv Kaul  wrote:

>
>
> On Fri, Oct 21, 2016 at 10:01 AM, Sandro Bonazzola 
> wrote:
>
>> Hi,
>> we have more than 300 patches failing continuous integration after being
>> merged[1].
>> Is anybody monitoring what's happening there?
>> Have we broken the check-merge tests?
>> Is the auto-rebase-on-merge automation breaking our patches?
>> I would suggest going over the failures and nailing down the issues.
>>
>> [1] https://gerrit.ovirt.org/#/q/status:merged+label:Continuous-
>> Integration%253C%253D-1
>>
>
> From a random look it seems to be related to upgrade, but it's difficult
> to assess without data:
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-
> from-4.0_el7_merged/1329/ for example gives 404.
> Y.
>
>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>> 
>>
>>
>
>
> ___
> Infra mailing list
> in...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)