Re: [ovirt-devel] Reported-By - giving credit to our testers

2015-11-12 Thread Yaniv Kaul
On Thu, Nov 12, 2015 at 4:45 PM, Nir Soffer  wrote:

> Hi all,
>
> Our QE (and sometimes our users) are working hard testing and
> reporting bugs, but
> their effort is never mentioned in our code.
>
> Looking at kernel git history, I found that they are using the
> Reported-By header for
> giving credit to the person reporting a bug. I suggest we adopt this
> header.
>
> Here are some examples how we can use it:
>
> - https://gerrit.ovirt.org/#/c/48483/3//COMMIT_MSG
> -
> https://github.com/oVirt/vdsm/commit/fb4c72af5e4c200409c74834111d44d92959ebbd
> -
> https://github.com/oVirt/vdsm/commit/f8127d88add881a4775e7030dde2433125c7b598
>
> Thanks,
> Nir
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>

+1

In case they also help with the verification of the change, I suggest adding
Tested-By, which is just as important.
From https://www.kernel.org/doc/Documentation/SubmittingPatches :

A Tested-by: tag indicates that the patch has been successfully tested (in
some environment) by the person named.  This tag informs maintainers that
some testing has been performed, provides a means to locate testers for
future patches, and ensures credit for the testers.
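
For anyone unfamiliar with the convention, a commit-message footer using both
trailers could look like the following (the subject line, names and addresses
below are made up for illustration):

    vdsm: fix a hypothetical lock leak on a failed storage-domain attach

    Reported-by: Jane Doe <jdoe@example.com>
    Tested-by: John Roe <jroe@example.com>
    Signed-off-by: Patch Author <author@example.com>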


Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-users] Must a user see his/her VMs through the web browser?

2015-11-05 Thread Yaniv Kaul
On Thu, Nov 5, 2015 at 9:56 AM, John Hunter  wrote:

> nah, i prefer not using arc welder :(
>

As already mentioned in the thread, perhaps Boxes can do the job, at least
on Gnome.
See http://community.redhat.com/blog/2014/10/gnome-boxes-3-14-unboxed/ and
https://www.ovirt.org/images/6/6c/Fergeau-ovirt-boxes.pdf for more details.
Y.


>
> On Thu, Nov 5, 2015 at 3:03 PM, Tomas Jelinek  wrote:
>
>>
>>
>> - Original Message -
>> > From: "John Hunter" 
>> > To: "Tomas Jelinek" 
>> > Cc: "Robert Story" , us...@ovirt.org, "devel" <
>> devel@ovirt.org>, "Filip Krepinsky"
>> > 
>> > Sent: Thursday, November 5, 2015 7:20:54 AM
>> > Subject: Re: [ovirt-users] Must a user see his/her VMs through the web
>> browser?
>> >
>> > Hi Tomas,
>> >
>> > I have seen the repo a little bit; as far as I can see, it's just for
>> > mobile.
>>
>> yes, it is a mobile client.
>> There is a way to run it as a google chrome app using arc welder so on a
>> desktop (on fedora at least it works for me).
>> But this way it is more interesting than useful I'd say...
>>
>> > Do you have a plan to make it useable on some Linux distro, like
>> > Ubuntu, or Debian?
>> >
>> > What I need is a GUI application or a library that can be communicate
>> > with the ovirt-engine.
>> >
>> > Cheers,
>> > John
>> >
>> > On Wed, Nov 4, 2015 at 11:14 PM, Tomas Jelinek 
>> wrote:
>> >
>> > >
>> > >
>> > > - Original Message -
>> > > > From: "John Hunter" 
>> > > > To: "Tomas Jelinek" 
>> > > > Cc: "Robert Story" , us...@ovirt.org, "devel" <
>> > > devel@ovirt.org>
>> > > > Sent: Wednesday, November 4, 2015 2:59:21 PM
>> > > > Subject: Re: [ovirt-users] Must a user see his/her VMs through the
>> web
>> > > browser?
>> > > >
>> > > > On Wed, Nov 4, 2015 at 9:29 PM, Tomas Jelinek 
>> > > wrote:
>> > > >
>> > > > >
>> > > > >
>> > > > > - Original Message -
>> > > > > > From: "Robert Story" 
>> > > > > > To: "John Hunter" 
>> > > > > > Cc: us...@ovirt.org, "devel" 
>> > > > > > Sent: Wednesday, November 4, 2015 1:56:21 PM
>> > > > > > Subject: Re: [ovirt-users] Must a user see his/her VMs through
>> the
>> > > web
>> > > > > browser?
>> > > > > >
>> > > > > > On Wed, 4 Nov 2015 17:50:15 +0800 John wrote:
>> > > > > > JH> I have installed the oVirt all in one, and I can log in the
>> user
>> > > > > portal
>> > > > > > JH> through web browser to see user's VMs.
>> > > > > > JH>
>> > > > > > JH> I am wondering if there is an client Application that can
>> do the
>> > > same
>> > > > > > JH> thing, like VMware Horizon client has version for Windows,
>> Linux
>> > > and
>> > > > > > JH> IOS, etc.
>> > > > > >
>> > > > > > Haven't heard much about it lately,
>> > > > >
>> > > > > but you will soon ;)
>> > > > > There is one guy which as soon as he sets his development env up
>> will
>> > > > > start to contribute to it heavily.
>> > > > >
>> > > > Looking forward to testing it ASAP. BTW, how can I contact this guy,
>> > > > and maybe work together with him on his project?
>> > >
>> > > It is not really his project, he is joining the moVirt.
>> > >
>> > > It would be pretty awesome if you wanted to cooperate with us on
>> moVirt!
>> > >
>> > > If you have some particular ideas of what you would like to see, or see
>> > > some problems, then just tell us.
>> > >
>> > > We can discuss here (ovirt/devel list), on moVirt list (
>> mov...@ovirt.org),
>> > > or on irc (#ovirt channel of irc.oftc.net and look for me, mbetak or
>> > > fkrepins)
>> > > The sources are here: https://github.com/matobet/moVirt/ (with
>> > > explanation how to setup devel env)
>> > > And it is released on play store here:
>> > > https://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt
>> > > (or just look for movirt in play store)
>> > >
>> > > >
>> > > >
>> > > > > > but you might want to check out this android app:
>> > > > > >
>> > > > > >   https://github.com/matobet/moVirt/
>> > > > >
>> > > > > you might also want to have a look at Arc Welder [1]. Last time I
>> tried
>> > > > > moVirt was working in it pretty
>> > > > > nice and it felt like having a desktop client.
>> > > > >
>> > > > > [1]: https://developer.chrome.com/apps/getstarted_arc
>> > > > >
>> > > > > yeah, sure, I will check it.
>> > >
>> > > just don't expect it to be production ready ;)
>> > >
>> > > >
>> > > > > >
>> > > > > >
>> > > > > > Robert
>> > > > > >
>> > > > > > --
>> > > > > > Senior Software Engineer @ Parsons
>> > > > > >
>> > > > > > ___
>> > > > > > Users mailing list
>> > > > > > us...@ovirt.org
>> > > > > > http://lists.ovirt.org/mailman/listinfo/users
>> > > > > >
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Best regards
>> > > > 

Re: [ovirt-devel] migration enhancements feature

2015-09-14 Thread Yaniv Kaul
On Mon, Sep 14, 2015 at 3:35 PM, Tomas Jelinek  wrote:
> Hi all,
>
> there is an effort for enhancing the speed and convergence of the migrations 
> (especially for large VMs).
>
> The feature page targeted for 4.0 is [1].
>
> TL;DR:
> - remove current logic from VDSM and move to engine in form of policies
> - employ post-copy migration
> - employ traffic shaping
> - protect destination VDSM against migration storms
>
> Any comments more than welcome!
> Tomas
>
> [1]: http://www.ovirt.org/Features/Migration_Enhancements

I think we need to look at (any/the) feature from the user
perspective, first and foremost. How would the user use the feature?
What 'knobs' may he tweak to get better migration results? Which ones
can we set for him? Which ones will be used at the expense of others?
Do we truly believe a user will know what to tweak to get a better
result? Exposing every parameter, in that sense, is
counter-productive.

Specific example: should a user enable compression or not? What will
he gain? I assume less bandwidth is needed for the migration. Would it help
his migration (I assume it'll take longer, use more CPU, etc.) or
not? When migrating one big, heavily-used VM? When migrating twenty
idle single-core VMs? Is there any point enabling it on a dedicated 10Gb
migration network? And on a 1Gb shared network which is heavily used by
others? etc.

Y.

> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Subject: Looking for advice regarding user portal development.

2015-12-12 Thread Yaniv Kaul
On Sat, Dec 12, 2015 at 12:07 AM, Alexander Wels  wrote:

> TL;DR
> Don't extend the User Portal use the REST api to do what you need to do and
> determine which SDK works the best for your needs.
>


I'd actually bring up a complete proposal to the list first - perhaps the
use case is interesting and general, not specific to a single use case.
The one briefly described sounds as such.
If so, it'll make much more sense to do the (initially harder) contribution
of code to the user portal rather than the (maintained-forever) fork.
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Push notifications in 4.0 backend

2015-11-25 Thread Yaniv Kaul
On Wed, Nov 25, 2015 at 6:32 PM, Martin Betak  wrote:

>
>
>
>
> - Original Message -
> > From: "Marek Libra" 
> > To: "Martin Perina" 
> > Cc: "Piotr Kliczewski" , "Michal Skrivanek" <
> mskri...@redhat.com>, "engine-de...@ovirt.org"
> > 
> > Sent: Wednesday, November 25, 2015 5:19:31 PM
> > Subject: Re: [ovirt-devel] Push notifications in 4.0 backend
> >
> >
> >
> > - Original Message -
> > > From: "Martin Perina" 
> > > To: "Eli Mesika" 
> > > Cc: "Piotr Kliczewski" , "engine-de...@ovirt.org"
> > > , "Michal Skrivanek"
> > > 
> > > Sent: Wednesday, 25 November, 2015 11:20:49 AM
> > > Subject: Re: [ovirt-devel] Push notifications in 4.0 backend
> > >
> > >
> > >
> > > - Original Message -
> > > > From: "Eli Mesika" 
> > > > To: "Vojtech Szocs" 
> > > > Cc: "Piotr Kliczewski" , "Michal Skrivanek"
> > > > , "engine-de...@ovirt.org"
> > > > 
> > > > Sent: Wednesday, November 25, 2015 10:42:35 AM
> > > > Subject: Re: [ovirt-devel] Push notifications in 4.0 backend
> > > >
> > > >
> > > >
> > > > - Original Message -
> > > > > From: "Vojtech Szocs" 
> > > > > To: "Martin Betak" 
> > > > > Cc: "engine-de...@ovirt.org" , "Piotr Kliczewski"
> > > > > , "Michal Skrivanek"
> > > > > 
> > > > > Sent: Monday, November 23, 2015 6:22:45 PM
> > > > > Subject: Re: [ovirt-devel] Push notifications in 4.0 backend
> > > > >
> > > > >
> > > > >
> > > > > - Original Message -
> > > > > > From: "Martin Betak" 
> > > > > > To: "Vojtech Szocs" 
> > > > > > Cc: "Einav Cohen" , "engine-de...@ovirt.org"
> > > > > > , "Roy Golan" ,
> > > > > > "Roman Mohr" , "Michal Skrivanek"
> > > > > > ,
> > > > > > "Piotr Kliczewski" ,
> > > > > > "Tomas Jelinek" , "Alexander Wels"
> > > > > > ,
> > > > > > "Greg Sheremeta" ,
> > > > > > "Scott Dickerson" , "Arik Hadas"
> > > > > > ,
> > > > > > "Allon Mureinik" ,
> > > > > > "Shmuel Melamud" , "Jakub Niedermertl"
> > > > > > , "Marek Libra"
> > > > > > , "Martin Perina" ,
> "Alona
> > > > > > Kaplan"
> > > > > > , "Martin Mucha"
> > > > > > 
> > > > > > Sent: Thursday, November 19, 2015 1:53:07 PM
> > > > > > Subject: Re: Push notifications in 4.0 backend
> > > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I have created a PoC patch [1] demonstrating the idea of
> annotating
> > > > > > basic CRUD commands to publish CDI events. It is not meant as
> 100%
> > > > > > solution, but as a simplification of the common use cases when
> > > > > > one would inject CDI event with given qualifier and fire it after
> > > > > > successful completion of transaction.
> > > > >
> > > > > The patches (mentioned below) look interesting.
> > > > >
> > > > > At this point, it would be great if backend core maintainers
> > > > > voiced their opinions on the general idea of firing CDI events
> > > > > in response to important actions happening on Engine, such as
> > > > > backend commands being executed. So, what do you think guys?
> > > >
> > > > +1
> > > >
> > > > I am for it, I think it may reduce load from our DB
> > >
> > > +1
> > >
> > The load reduction can be achieved and seems like not a big deal to
> implement
> > it.
> > +1
>
> Yes, the DB load reduction is perhaps the biggest boon of this effort :-).
>

This needs to be quantified.

As always, we need a cost-risk-benefit evaluation of a change, especially
for infra changes, since:
1. Infra changes are more likely to affect multiple teams and features,
so the potential 'damage' to existing flows is not as limited as it is with
specific feature changes.
2. If they are beneficial, we'll want/need more flows to be modified -
which adds more cost and risk (and hopefully, benefit!). If flows
aren't changed, there's usually little value in making the change in the
first place. If only a few flows are changed, then unless they are critical,
there's no point in pushing the infra change too soon.

The events mechanism in VDSM<->engine is an example of a change that meets
this - and we haven't yet executed on item #2 above for it: not many flows
use it, mainly due to risk and cost factors, as we do believe there is benefit
in it.

Of course, such an effort has to be coordinated with the Infra team.
Y.



>
> My first patch makes it very convenient to fire notifications from 

Re: [ovirt-devel] [vdsm] strange network test failure on FC23

2015-11-29 Thread Yaniv Kaul
On Fri, Nov 27, 2015 at 6:55 PM, Francesco Romani 
wrote:

> Using taskset, the ip command now takes a little longer to complete.


Since we always use the same set of CPUs, I assume using a mask (for 0 & 1,
just use 0x3, as the man page suggests) might be a tiny fraction faster to
execute taskset with, instead of needing to translate the numeric CPU list.
However, the real concern is making sure CPUs 0 & 1 are not really too busy
with other stuff (including interrupt handling, etc.).
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [vdsm] strange network test failure on FC23

2015-11-29 Thread Yaniv Kaul
On Sun, Nov 29, 2015 at 5:37 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Sun, Nov 29, 2015 at 10:37 AM, Yaniv Kaul <yk...@redhat.com> wrote:
> >
> > On Fri, Nov 27, 2015 at 6:55 PM, Francesco Romani <from...@redhat.com>
> > wrote:
> >>
> >> Using taskset, the ip command now takes a little longer to complete.
> >
> >
> > Since we always use the same set of CPUs, I assume using a mask (for 0 &
> 1,
> > just use 0x3, as the man suggests) might be a tiny of a fraction faster
> to
> > execute taskset with, instead of the need to translate the numeric CPU
> list.
>
> Creating the string "0-" is one line in vdsm. The code
> handling this in
> taskset is written in C, so the parsing time is practically zero. Even
> if it was non-zero,
> this code runs once when we run a child process, so the cost is
> insignificant.
>

I think it's easier to just have it as a mask in a config item
somewhere, without the need to create it or parse it anywhere -
for us and for the user.
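
To make the two options concrete, here is a minimal sketch (not vdsm's actual
code - the helper functions are made up for illustration) of wrapping a child
command such as ip with taskset in either form:

# Minimal sketch, not vdsm's actual code.
import subprocess

CPU_LIST = "0-1"   # numeric CPU list, like the "0-<N>" string built today
CPU_MASK = "0x3"   # the same CPUs (0 and 1) expressed as a hexadecimal bitmask

def run_with_cpu_list(args):
    # taskset --cpu-list (-c) takes a list/range of CPU numbers
    return subprocess.check_output(["taskset", "--cpu-list", CPU_LIST] + args)

def run_with_mask(args):
    # without -c, taskset interprets its first argument as a hex mask
    return subprocess.check_output(["taskset", CPU_MASK] + args)

if __name__ == "__main__":
    print(run_with_cpu_list(["ip", "link", "show"]))
    print(run_with_mask(["ip", "link", "show"]))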


> > However, the real concern is making sure CPUs 0 & 1 are not really too
> busy
> > with stuff (including interrupt handling, etc.)
>
> This code is used when we run a child process, to allow the child
> process to run on
> all cpus (in this case, cpu 0 and cpu 1). So I think there is no concern
> here.
>
> Vdsm itself is running by default on cpu 1, which should be less busy
> than cpu 0.
>

I assume those are cores, which in a multi-socket system will probably be in
the first socket only.
There's a good chance that the FC and/or network cards will also bind their
interrupts to core 0 & core 1 (check /proc/interrupts) on the same socket.
From my poor laptop (1s, 4c):
42:  1487104   9329   4042   3598  IR-PCI-MSI 512000-edge   :00:1f.2
(my SATA controller)

43:  14664923 34 18 13  IR-PCI-MSI 327680-edge   xhci_hcd
(my dock station connector)

45:  6754579   4437   2501   2419  IR-PCI-MSI 32768-edge   i915
(GPU)

47:  187409  11627   1235   1259  IR-PCI-MSI 2097152-edge   iwlwifi
(NIC, wifi)

Y.



> The user can modify this configuration on the host, I guess we need to
> expose this
> on the engine side (cluster setting?).
>
> Also if vdsm is pinned to certain cpu, should user get a warning
> trying to pin a vm
> to this cpu?
>
> Michal, what do you think?
>
> Nir
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [oVirt 4.0 Localization Question #7] "unsynced entries" variables

2016-06-06 Thread Yaniv Kaul
+ Sahina.

On Tue, Jun 7, 2016 at 3:13 AM, Yuko Katabami  wrote:

> Hi all,
>
> I would like to ask for your help with the following question.​​
>
> *File: *ApplicationMessages
>
> *Resource IDs: *
>
> brickStatusWithUnSyncedEntriesPresent needsGlusterHealingWithVolumeStatus
> unSyncedEntriesPresent
>
>
> *Strings: *
>
> {0}, {1} unsynced entries present
>
> {0}, Unsynced entries present - Needs healing
>
> {0} unsynced entries present
>
> *Question: *Could anyone tell us what those variables will be replaced
> with?
>
>
> Kind regards,
>
>
> Yuko
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] Random network tests failing

2016-06-02 Thread Yaniv Kaul
On Thu, Jun 2, 2016 at 11:34 AM, Barak Korren  wrote:

> > Do we understand why they fail randomly?
>
> Could be due to dirty slave state.
>
> > Is it possible to make them reliable in mock?
>
> Mock will not help here as it only does FS isolation.
>
> We are working on a stateless slave solution which will hopefully
> solve such issues. But it'll take a while because the oVirt infra
> needs a few upgrades to support that.
> A possible alternative is to use Lago.
>

I've offered [1] in the past.
It was isolated and quick (running both F23 and EL7 tests in parallel) and
could run on anyone's laptop.
Y.

[1] https://gerrit.ovirt.org/#/c/56389/

>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 3.6.7 Fifth Release Candidate is now available for testing

2016-06-22 Thread Yaniv Kaul
On Tue, Jun 21, 2016 at 9:54 PM, Sven Kieske  wrote:

> Hi,
>
> while I appreciate all the work
> that has been done for this release, please
> let me add some criticism:
>
> on the old website it was pretty easy to track
> down what exactly changed between each release
> candidate in terms of closed BZs etc.
>
> This is not possible anymore.
>
> Are there any plans to restore the previous
> functionality?
>

No, see below for the reason.


>
> Are there technical reasons this information
> is not mentioned in the release notes anymore?
>

It's harder to do for minor releases, as they don't have distinct target
milestones as major versions do.
For example, 4.0 has 4.0.0-rc1, 4.0.0-rc2, 4.0.0-rc3, etc.
3.6.7 doesn't have that.
Y.


>
> kind regards
>
> Sven
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 3.6.7 Fifth Release Candidate is now available for testing

2016-06-22 Thread Yaniv Kaul
On Wed, Jun 22, 2016 at 11:13 AM, Sven Kieske <s.kie...@mittwald.de> wrote:

> On 22/06/16 09:50, Yaniv Kaul wrote:
> > It's harder to do for minor releases, as they don't have distinct target
> > milestones as major versions do.
> > For example, 4.0 has 4.0.0-rc1, 4.0.0-rc2, 4.0.0-rc3, etc.
> > 3.6.7 doesn't have that.
> > Y.
> But there are no special categories for the 4.0.0 rcs as well?
>
> https://www.ovirt.org/release/4.0.0/
>
> There used to be tracker bugs for every RC IIRC, from which
> you could extract at least the blocking bugs for each RC.
>

I'm not aware of specific trackers, but I am aware of specific target
milestones.
Y.


>
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +495772 293100
> F: +495772 29
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Yaniv Kaul
On Wed, Jun 22, 2016 at 2:49 PM, Eyal Edri <ee...@redhat.com> wrote:

>
>
> On Wed, Jun 22, 2016 at 2:27 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>>
>>
>> On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri <ee...@redhat.com> wrote:
>>
>>> After the recent merge of testing install of ovirt-cockpit-dashboard to
>>> 3.6,
>>> the 4.0 tests failed also. [1]
>>>
>>
>> They should not have failed. The only reason they could have failed is if
>> there were added deps - and there were - to the dashboard.
>> Specifically, hosted-engine-setup was added as a dep, bringing with it a
>> huge number of other packages (virt-viewer - which was not removed, although
>> I requested it to be - which brings spice, which brings GTK...) - overall,
>> ~500 RPMs (!).
>>
>
>
> They failed because 4.0 was linked to 3.6, I've posted this fix:
> https://gerrit.ovirt.org/#/c/59603/
> When we'll make it work for 4.0, we'll restore the link.
>

So 4.0 was broken? Because I've tested on BOTH 3.6 and master - see my
comment @ https://gerrit.ovirt.org/#/c/58775/
Y.


>
>>
>> This is the reason I thought of abandoning this patch - which I think
>> I've commented on in the patch itself.
>>
>>
>>> And then I found out that some of the 4.0 tests are linked to 3.6 still,
>>> is this intentional?
>>>
>>
>> Yes, for three reasons:
>> 1. It allows less code duplication.
>> 2. It allows us to test 4.0 with v3 API.
>> 3. It allows us to compare 4.0 to 3.6.x.
>>
>
> I'm fully aware of these, but the fact that we might have different tests /
> deps for different versions will require us at some point to split it.
> And later on do some refactoring to make sure we're sharing what we can
> across all tests.
>
>
>>
>>
>>> Should we now create a new separate 4.0 test or change the link to
>>> master?
>>>
>>
>> We need at some point to add v4 API tests to 4.0.
>> Y.
>>
>>
>>>
>>> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
>>> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
>>> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
>>> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
>>> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
>>> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>>>
>>>
>>>
>>> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHEV DevOps
>>> EMEA ENG Virtualization R
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Yaniv Kaul
On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:

> After the recent merge of testing install of ovirt-cockpit-dashboard to
> 3.6,
> the 4.0 tests failed also. [1]
>

They should not have failed. The only reason they could have failed is if
there were added deps - and there were - to the dashboard.
Specifically, hosted-engine-setup was added as a dep, bringing with it a
huge number of other packages (virt-viewer - which was not removed, although
I requested it to be - which brings spice, which brings GTK...) - overall,
~500 RPMs (!).

This is the reason I thought of abandoning this patch - which I think I've
commented on in the patch itself.


> And then I found out that some of the 4.0 tests are linked to 3.6 still,
> is this intentional?
>

Yes, for three reasons:
1. It allows less code duplication.
2. It allows us to test 4.0 with v3 API.
3. It allows us to compare 4.0 to 3.6.x.


> Should we now create a new separate 4.0 test or change the link to master?
>

We need at some point to add v4 API tests to 4.0.
Y.


>
> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>
>
>
> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Ending support for 3.x engines on master

2016-06-21 Thread Yaniv Kaul
On Tue, Jun 21, 2016 at 10:05 AM, Moran Goldboim 
wrote:

>
>
> On Tue, Jun 21, 2016 at 9:39 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Sun, Jun 19, 2016 at 3:06 PM, Moran Goldboim 
>> wrote:
>>
>>> my pov - really depends on when do we release 4.1.
>>> short version (Dec 16) - we should keep it, and 3.6 api as well
>>> long version (Mar 17) - let's drop it than.
>>>
>>> bottom line - we would like to give customers/community the time to
>>> migrate their el6-based deployments to el7, while allowing them dual
>>> management infra. By March next year we should be in the 7.4/.next
>>> timeframe - this should be ok for our conservative customer base as well
>>>
>>
>> el6 to el7 host migration should have been done in 3.5. 3.6 doesn't
>> support el6 as hosts...
>>
>>
> busted here - you are right.. :)
> the main point is to allow this deprecation to happen from our integration
> projects' and users' side. if we release early - i would suggest keeping
> the support for an additional version.
>

Keep support for what exactly? What can use XML-RPC other than our own
internal tools (such as MOM), which we'll convert?
Y.


>
>
>
>
>>
>>
>>
>>>
>>> On Sun, Jun 19, 2016 at 4:00 PM, Yaniv Dary  wrote:
>>>
 What is the cost of keeping them?

 Yaniv Dary
 Technical Product Manager
 Red Hat Israel Ltd.
 34 Jerusalem Road
 Building A, 4th floor
 Ra'anana, Israel 4350109

 Tel : +972 (9) 7692306
 8272306
 Email: yd...@redhat.com
 IRC : ydary


 On Sun, Jun 19, 2016 at 3:40 PM, Dan Kenigsberg 
 wrote:

> On Sun, Jun 19, 2016 at 02:13:40PM +0300, Nir Soffer wrote:
> > Hi all,
> >
> > We should not support Engine 3.5 with oVirt 4.0 now, but vdsm still
> > accepts 3.5 engines - I guess we forgot to disable it in 4.0.
>
> Correct. There was a moment in time where we considered supporting 3.5
> as well.
>
> >
> > In 4.1, I don't think we should support any 3.x Engine - otherwise we
> > will have to waste time maintaining old apis and infrastructure,
> instead
> > of adding new features that matter to our users.
>
> Do you have a list of old APIs you'd like to throw away? We've killed a
> few in 4.0, and
>   $ git grep REQUIRED_FOR
> is not huge.
>
> >
> > I suggest we disable now support for 3.x engines.
> >
> > Please see Eli patch:
> > https://gerrit.ovirt.org/59308
> >
> > Thoughts?
>
> +1, but let's see if ydary/mgoldboi think we should keep 3.6.
>


>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Missing admin permissions.

2016-01-18 Thread Yaniv Kaul
On Mon, Jan 18, 2016 at 11:00 PM, Alexander Wels  wrote:

> Hi,
>
> Somewhere during master upgrades, somehow my admin@internal did not get
> permissions to create VMs. It is complaining about no permissions to assign
> a CPU profile. It's been a while since I tried creating a VM. Can anyone
> point me to what is missing and how to fix it?
>

Perhaps it's https://bugzilla.redhat.com/show_bug.cgi?id=1293338 , and the
workaround is to re-run engine-setup, I believe.
Y.


>
> This is the error in the log:
> 2016-01-18 15:59:46,843 INFO  [org.ovirt.engine.core.bll.AddVmCommand]
> (default task-56) [a6fd12b] Lock Acquired to object 'EngineLock:
> {exclusiveLocks='[=]',
> sharedLocks='[----= ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName >]'}'
> 2016-01-18 15:59:46,893 WARN  [org.ovirt.engine.core.bll.AddVmCommand]
> (default task-56) [] Validation of action 'AddVm' failed for user
> admin@internal. Reasons:
>
> VAR__ACTION__ADD,VAR__TYPE__VM,ACTION_TYPE_NO_PERMISSION_TO_ASSIGN_CPU_PROFILE,
> $cpuProfileId f9a05b39-9f57-4655-aab2-2846fe6519f6,$cpuProfileName DEV35
> 2016-01-18 15:59:46,894 INFO  [org.ovirt.engine.core.bll.AddVmCommand]
> (default task-56) [] Lock freed to object 'EngineLock:
> {exclusiveLocks='[=]',
> sharedLocks='[----= ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName >]'}'
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Changing the name of VDSM in oVirt 4.0.

2016-01-28 Thread Yaniv Kaul
On Thu, Jan 28, 2016 at 11:16 AM, Martin Polednik 
wrote:

> On 27/01/16 09:53 +0200, Nir Soffer wrote:
>
>> On Wed, Jan 27, 2016 at 9:29 AM, Yedidyah Bar David 
>> wrote:
>>
>>> On Tue, Jan 26, 2016 at 7:26 PM, Nir Soffer  wrote:
>>>
 On Tue, Jan 26, 2016 at 5:29 PM, Yaniv Dary  wrote:

> I suggest for ease of use and tracking we change the versioning to
> align to
> the engine (4.0.0 in oVirt 4.0 GA) to make it easy to know which
> version was
> in which release and also change the package naming to something like
> ovirt-host-manager\ovirt-host-agent.
>

 When we think about the names, we should consider all the components
 installed or running on the host. Here is the current names and future
 options:

 Current names:

 vdsmd
 supervdsmd
 vdsm-tool
 vdsClient
 (we have also two hosted engine daemons, I don't remember the names)

 Here are some options in no particular order to name these components:

 Alt 1:
 ovirt-hypervisor
 ovirt-hypervisor-helper
 ovirt-hypervisor-tool
 ovirt-hyperviosr-cli

 Alt 2:

>>>
>>> Not sure it's that important. Still, how about:
>>>
>>> ovirt-host

>>>
>>> ovirt-hostd
>>>
>>
>> I like this
>>
>>
>>> ovirt-host-helper

>>>
>>> ovirt-priv-hostd
>>>
>>
>> How about ovirt-privd?
>>
>> I like short names.
>>
>>
>>> ovirt-host-tool

>>>
>>> ovirt-hostd-tool
>>>
>>> ovirt-host-cli

>>>
>>> ovirt-hostd-cli
>>>
>>
>> I think we should use the example of systemd:
>>
>> systemd
>> systemctl
>>
>> So ovirt-hostd, ovirt-hostctl ovirt-hostcli
>>
>
> I'd even suggest going simply with ovirtd and ovirtctl (maybe
> ovirtdctl to differentiate ovirt, ovirtd and ovirt-engine).
>
> Names like ovirt-host-agent possibly introduce abbreviation
> clashes - we would most likely end up abbreviating host-agent to HA
> and that could be mistaken for high availability in discussions.


ohad (oVirt Host Agent Daemon, and that's also an Israeli name), or just
oha (not to be confused with the Office of Hawaiian Affairs)
ohanha (oVirt Host Agent Not High Availability, and that's also an Israeli
family name, mostly known for some past famous Israeli soccer player)
ohd (oVirt Host Daemon), far enough from OCD.

I think we are doing a bit of yak shaving here, but it is entertaining ;-)
Y.


>
>
>>> Also we should get rid of '/rhev/' in start of mount points IMO. How
>>> about '/var/lib/ovirt-hostd/mounts/' or something like that?
>>>
>>
>> We want to use /run/ovirt-host/storage/ for that, but this is a hard change,
>> since it breaks migration to/from hosts using different vdsm versions.
>> New vms expect the disks at /rhev/data-center and old vms at
>> /rhev/data-center/
>>
>> Maybe we can change the disks' path during migration on the destination, but
>> migrating vms to older hosts will be impossible, as the vdsm on the older
>> machine does not support such manipulation.
>>
>> Nir
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] outdated quickstart guide

2016-02-03 Thread Yaniv Kaul
On Wed, Feb 3, 2016 at 9:50 AM, Yedidyah Bar David <d...@redhat.com> wrote:

> On Wed, Feb 3, 2016 at 5:27 AM, Greg Sheremeta <gsher...@redhat.com>
> wrote:
> > Trying to go through this with a new hire, and there doesn't seem to
> > be a version matrix anywhere.
> >
> > We're just guessing at which OSes are supported for both engine and
> hosts.
> >
> > Is that information handy somewhere?
>
> AFAIK the "official" place is the release notes page for the version you
> intend to install. [1] [2] [3] [4]
>

If you are going to edit the content, please do it on the newer
site on GitHub [1].

Y.
[1] https://github.com/oVirt/ovirt-site


>
> All 3.6.N versions support el7 for the engine and hosts, and all of them
> support el6 for the engine and can work with el6 hosts installed by 3.5
> (in 3.5 cluster compatibility level).
>
> 3.6(.0) supported also fedora 22. Since it's considered old and quickly
> approaching EOL, it's not mentioned anymore in 3.6.1 and 3.6.2. We still
> do builds for it in both official latest 3.6 (currently 3.6.2) and in
> 3.6-snapshot [5]. I am not aware of actual problems on it.
>
> We recently merged some patches for the engine on fedora 23,
> and are aware of at least one bug for hosts on it [6] (with a simple
> workaround). So a later 3.6 (3.6.3?) might support it too.
>
> [1] http://www.ovirt.org/OVirt_3.6_Release_Notes
> [2] http://www.ovirt.org/OVirt_3.6.1_Release_Notes
> [3] http://www.ovirt.org/OVirt_3.6.2_Release_Notes
> [4] http://www.ovirt.org/OVirt_3.6.z_Release_Management
> [5] http://www.ovirt.org/Install_nightly_snapshot
> [6] https://bugzilla.redhat.com/show_bug.cgi?id=1297835
>
> >
> > Greg
> >
> >
> > On Sat, Jan 30, 2016 at 5:48 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >>
> >> On Fri, Jan 29, 2016 at 5:35 PM, Sven Kieske <s.kie...@mittwald.de>
> wrote:
> >>>
> >>> On 29/01/16 16:17, Nir Soffer wrote:
> >>> > This is a wiki, you can update it.
> >>> yes I know, i even have an account, but just no time atm.
> >>> so I figured maybe someone can do it quicker.
> >>
> >>
> >> Note that we are moving away from the Wiki to the new website soon.
> >> I suggest updating it there[1]
> >>
> >> Y.
> >> [1] https://github.com/oVirt/ovirt-site
> >>
> >>
> >>>
> >>>
> >>> I also noticed that el 7.2  does not get mentioned as a supported
> >>> platform, just 7.1.
> >>>
> >>> I will edit all places when I find the time, maybe next week :)
> >>>
> >>> --
> >>> Mit freundlichen Grüßen / Regards
> >>>
> >>> Sven Kieske
> >>>
> >>> Systemadministrator
> >>> Mittwald CM Service GmbH & Co. KG
> >>> Königsberger Straße 6
> >>> 32339 Espelkamp
> >>> T: +495772 293100
> >>> F: +495772 29
> >>> https://www.mittwald.de
> >>> Geschäftsführer: Robert Meyer
> >>> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
> Oeynhausen
> >>> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
> >>> Oeynhausen
> >>>
> >>>
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> >>
> >>
> >>
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
> >
> > --
> > Greg Sheremeta, MBA
> > Red Hat, Inc.
> > Sr. Software Engineer
> > gsher...@redhat.com
> > 919-741-4016
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> --
> Didi
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] check-merged job for vdsm

2016-02-22 Thread Yaniv Kaul
On Mon, Feb 22, 2016 at 5:25 PM, Yaniv Bronheim  wrote:

> Hi,
> I recently added to the check-merged phase a job for automatic functional
> test runs. See for example
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/52/
>

Excellent!


>
> https://gerrit.ovirt.org/#/c/48268/58/automation/check-merged.sh -
> generally, it installs Lago on the Jenkins machine, sets up an F23 VM, SSHes to
> it, runs the vdsm service (in deploy.sh), and runs the commands in check-merged.sh
>

I think it'd be cool if it could set up an additional EL 7.2 VM and run
those same tests in parallel with the F23 one.
Y.


>
> You can see the commands in check-merged.sh (I'll try to improve the
> readability there).
> You can add new tests there by adding them to FUNCTIONAL_TESTS_LIST or by
> calling your own script inside the vm - I'm thinking of maybe changing it to run
> all scripts under a certain directory instead of the current ./run_test.sh
> call. I'll see how the usage evolves and will improve it.
>
> To check your changes before merging you can use -
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/build?delay=0sec
> which requires jenkins.com login
>
> Currently many functional tests under tests/functional are broken
> - vmQoSTests.py virtTests.py momTests.py that import VdsProxy - please try
> to fix them or remove them if they're not in use.
>
> --
> *Yaniv Bronhaim.*
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Cockpit-oVirt plugin

2016-02-25 Thread Yaniv Kaul
On Thu, Feb 25, 2016 at 11:15 AM, Marek Libra  wrote:

> Please see attachments.
>

Needs adjustments to match the look and feel of the planned dashboards /
cockpit, but otherwise - looks great! Kudos for the idea!
Y.


>
> The main page (see cockpit_vmsList.png) shows all running VMs on the host.
> A detail can be displayed by clicking on a name.
>
> When logged in to the engine, there's a list of all VMs (in all states)
> available. The user will be redirected to the corresponding host (and its
> cockpit-oVirt) to show the VM detail when clicking on a running VM.
> Basic actions will be provided (like Run). These actions will issue
> corresponding command on engine (REST API).
>
> On the VDSM page, the user can edit the vdsm.conf or manage the vdsmd
> service running on the host (click-through to detail of the cockpit's
> 'Service' component).
>
> I plan to add screenshots to the Feature Wiki page once it's ready.
> Video is a great idea, adding it to the todo list.
>
> Regards,
> Marek
>
> --
>
> *From: *"Yaniv Dary" 
> *To: *"Marek Libra" 
> *Cc: *"Fabian Deutsch" , "devel" 
> *Sent: *Wednesday, February 24, 2016 1:22:01 PM
>
> *Subject: *Re: [ovirt-devel] Cockpit-oVirt plugin
>
> Can we have some screenshot or even a short video? That would be wonderful!
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Wed, Feb 24, 2016 at 1:19 PM, Marek Libra  wrote:
>
>> The rpm is not yet ready. As soon as some WIP is done, I'll focus on that.
>>
>> - Original Message -
>> > From: "Fabian Deutsch" 
>> > To: "Marek Libra" 
>> > Cc: "devel" 
>> > Sent: Tuesday, February 23, 2016 4:46:51 PM
>> > Subject: Re: [ovirt-devel] Cockpit-oVirt plugin
>> >
>> > On Tue, Feb 23, 2016 at 1:23 PM, Marek Libra  wrote:
>> > > I'm pleased to announce initial version of new oVirt plugin for
>> Cockpit,
>> > > which is recently a proposed feature for oVirt 4.0.
>> > >
>> > > The oVirt Wiki Feature can be found at [3].
>> > >
>> > > Sources and README file can be found on the github [1].
>> > > Up-to-date issue list is at [2] (includes planed enhancements)
>> > >
>> > > Please refer the README file for install instructions, so far tested
>> on
>> > > Centos 7 (minimal).
>> > >
>> > > The plugin will be distributed as an rpm in the future and is meant
>> as an
>> > > optional add-on to oVirt.
>> > >
>> > > Main focus is on
>> > > - troubleshooting when accessibility/functionality of webadmin is
>> limited
>> > > - easy-to-use tool for VM-centric host monitoring/administration
>> > > - potential integration point with oVirt on UI level
>> > > - easy to use, so for small setups the preferred choice with an
>> option to
>> > > "upgrade" to full oVirt later
>> > >
>> > > The plugin or its parts can be used as standalone same as embedded
>> into
>> > > other UIs, like drill-down from webadmin or ManageIQ for more details
>> or
>> > > fine-tuning.
>> > >
>> > > The plugin has dependency on VDSM.
>> > > Please note, since there was a VDSM patch needed, recent master is
>> required
>> > > (see README in the source).
>> > >
>> > > Dependency on oVirt's engine is optional - when accessible then
>> additional
>> > > plugin functionality is available.
>> > > Please note, the plugin is in early development phase showing basic
>> > > concept.
>> >
>> > Hey Marek,
>> >
>> > this work looks pretty promising!
>> >
>> > Do you already have rpm build which we could include in our nightly
>> > oVirt Node Next builds?
>> >
>> > Greetings
>> > fabian
>> >
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt Node Next upgrades

2016-02-29 Thread Yaniv Kaul
On Mon, Feb 29, 2016 at 1:54 PM, Fabian Deutsch  wrote:
> oVirt 3.6
> -
> We noticed that Node is using the rpms from the oVirt master branches, which
> makes it a bit unstable. We plan to switch to consume oVirt 3.6 rpms.

Can you clarify this?
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] vdsm meeting summary - 23.2.16

2016-02-24 Thread Yaniv Kaul
On Tue, Feb 23, 2016 at 6:30 PM, Yaniv Bronheim  wrote:

> (Nir, Adam, YanivB, Dan, Milan, Piotr, Francesco, Martin Polednik, Edward)
>
> - Some of us started to work on refactoring for python3, basic file
> movements and improving testings
>
>  - We added functional tests to check-merged automation script -
> currently, for some reason, it doesn't run tests that require root. I'm on
> it.
>
>  - if anyone knows that their functional tests work well, please add them to
> the list
> so they will run on each merge
>
>  - moving code from supervdsmServer to supervdsm_api - follow
> https://gerrit.ovirt.org/53496 and move also virt, storage and sla part
>
>  - Nir says to use the weekly contact from the storage team to help with
> verification of storage code changes - I'll try to reach them next week
> to test direct LUN
>
>   - I also wanted to use the lago basic_suite (ovirt-system-tests) to run a full
> flow from a specific vdsm commit - but the infrastructure for that requires
> many manual steps and it's not as easy as I expected it to be.
>
> - moving code from vdsm dir (/usr/share/vdsm/) to site-packages/vdsm (by
> moving them to lib/vdsm) - this is required to avoid relative imports which
> are not allowed in python3 - storage dir is the main gap we currently have.
>
> - python-modernize - Dan uses it for the network tests dir and encourages us to
> start running it for our parts
>
> - there are some schema changes, mostly removals, that Piotr posted as
> part of converting to the yaml structure. this code needs to be reviewed
> - Nir asks to keep the current API order instead of sorting by names
> - Nir is concerned about how the yaml notation looks for a list with a single element
> - splitting the yaml schema into several files is complex, but Nir asks to see if
> it's possible
>
> - storage team mainly works hard on SDM patches
>
> - Edward works on splitting network tests between unit tests and
> integration tests which do environment setup changes
>
> - we currently don't run the slow test automatically
> - we might want more "tags" for tests such as SlowTests, StressTests
>
> And the most important issue - vdsm for ovirt 4.0 will keep only 3.6
> backward compatibility - if you don't agree with that statement, please say
> why ... but as far as we see it, this is the direction and we are already
> removing 3.5 stuff.
>

If a customer has a 3.5-level cluster with all hosts on RHEL 7, why would we
break compatibility? We need to understand the cost here.
At the very least we should allow live migration.
Y.


> Thanks all for participating,
>
> --
> *Yaniv Bronhaim.*
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Iscsi and other BAD buggy

2016-03-01 Thread Yaniv Kaul
On Tue, Mar 1, 2016 at 4:56 PM, Вячеслав Бадалян  wrote:
> Hello.

Hi - please join the devel or users mailing list.

>
> ZERO) https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt
> Sorry, entering a bug into the product oVirt has been disabled

We were promoted to a classification. Please try
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt

>
> And lets go.

It may be more efficient to send an email per issue - especially if
they are separate issues (storage, networking, etc.)
Y.

>
> A) If a host with oVirt has an iscsi target, we will have big troubles.
>
> It looks like this:
> 1. Create a targetcli target from LVM
> 2. All ok
> 3. Reboot
> 4. LVM finds the oVirt partitions and connects them.
> 5. targetcli can't get the device. It's already busy.
>
> If you add some filter so that iscsi devices are not scanned, then oVirt does
> not work correctly.
>
> B) Some trouble with glusterd. VDSM mounts the hosted engine volume and
> this host is also a GlusterD server. Big troubles.
>
> C) If I reboot 1 of 3 gluster servers then the hosted engine runs very slowly.
> There are many errors in dmesg, and Agent-ha reboots the VM. The VM can't
> connect to the incomplete cluster and gets stuck in Paused mode. Really...
>
> It all looks so bad... I migrated to NFS :(
>
> D) Network configuration is really bad. If I need to change the network I
> can't do it. oVirt saves the settings posted from the web. Bug:
> vdsm-networking MUST fall back to the previous setup if I make a mistake or
> the SERVICE does not start.
>
> E) oVirt reports does not install.
>
>  [java] org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name 'actionModelService' defined in file
> [/usr/share/jasperreports-server/buildomatic/conf_source/ieCe/applicationContext.xml]:
> Invocation of init method failed; nested exception is
> java.lang.ExceptionInInitializerError
>
> 2016-02-28 09:20:59 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine_reports.jasper.deploy
> plugin.execute:941 execute-output: ('./js-ant',
> '-DmasterPropsSource=/tmp/tmp3paFhC/config', 'import-minimal-ce') stderr:
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support
> was removed in 8.0
>
> BUILD FAILED
> /usr/share/jasperreports-server/buildomatic/bin/import-export.xml:263: The
> following error occurred while executing this line:
> /usr/share/jasperreports-server/buildomatic/bin/import-export.xml:142: Java
> returned: 255
>
> Total time: 24 seconds
>
> 2016-02-28 09:20:59 DEBUG otopi.context context._executeMethod:156 method
> exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
> _executeMethod
> method['method']()
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.py",
> line 656, in _deploy
> self._buildJs(config=config, cmd=cmd)
>   File
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.py",
> line 215, in _buildJs
> 'buildomatic',
>   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in
> execute
> command=args[0],
> RuntimeError: Command './js-ant' failed to execute
> 2016-02-28 09:20:59 ERROR otopi.context context._executeMethod:165 Failed to
> execute stage 'Misc configuration': Command './js-ant' failed to execute
> 2016-02-28 09:20:59 DEBUG otopi.transaction transaction.abort:134 aborting
> 'Yum Transaction'
> 2016-02-28 09:20:59 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:95 Yum Performing yum transaction rollback
Loaded plugins: fastestmirror, versionlock
>
>
>
>
>
> --
>
> --
> Best regards,
> Бадалян Вячеслав Борисович
>
> ООО "Открытые бизнес-решения"
> Technical Director
> +7 (495) 666-0-111
> http://www.open-bs.ru
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Where's ovirt-release-master RPM on resources.ovirt.org ?

2016-03-09 Thread Yaniv Kaul
Answer to self: sent https://github.com/lago-project/lago/pull/169 to
overcome those empty files in Lago.
Y.

On Wed, Mar 9, 2016 at 12:58 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Wed, Mar 9, 2016 at 9:25 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
>
>>
>>
>> On Wed, Mar 9, 2016 at 8:17 AM, Yedidyah Bar David <d...@redhat.com>
>> wrote:
>>
>>> On Tue, Mar 8, 2016 at 11:38 PM, Yaniv Kaul <yk...@redhat.com>
>>> wrote:
>>> > I can find it @
>>> >
>>> http://rsync.nl.gentoo.org/pub/software/ovirt/ovirt-master-snapshot/rpm/el7Workstation/noarch/ovirt-release-master-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm
>>>
>>> Not sure what's this one - perhaps gentoo compiles our packages
>>> (including -release).
>>>
>>> The "canonical" place to check should be:
>>>
>>> http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
>>>
>>> >
>>> > I can't find it @
>>> >
>>> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7Workstation/noarch/
>>>
>>> ovirt-release-master.rpm includes both -snapshot and -snapshot-static .
>>>
>>
>>
>> Instructions on how to install nightly are here:
>> http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
>> as pointed out by didi.
>> Canonical URL for the ovirt-master-snapshot rpm is
>> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>> Since last week, ovirt-release-master is also published nightly at
>> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/ovirt-release-master.rpm
>> being it now a required package for oVirt Node Next Generation.
>>
>
> It's not really me, it's Lago that is failing to fetch that RPM properly;
> ovirt-release-master-host-node FAILED
>
> (1/2): ovirt-release-master 0% [ ]  0.0 B/s |0 B
>  --:-- ETA ovirt-release-master-4.0.0-0.1 FAILED
>
> (1/2): ovirt-release-master 0% [ ]  0.0 B/s |0 B
>  --:-- ETA ovirt-release-master-4.0.0-0.1.master.noarch: [Errno 256] No
> more mirrors to try.
> ovirt-release-master-host-node-4.0.0-0.1.master.noarch: [Errno 256] No
> more mirrors to try.
>
> The URLs the reposync is syncing:
> baseurl=http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/
> baseurl=http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc23/
> baseurl=
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/
> baseurl=
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/fc23/
> baseurl=
> http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/
> baseurl=
> http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/fedora-23-x86_64/
> baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/x86_64/
> baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-23/x86_64/
> baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch
> baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-23/noarch
> baseurl=
> http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/
> baseurl=
> http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/fedora-23-x86_64/
> baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
> baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
> baseurl=http://download.fedoraproject.org/pub/epel/7/x86_64
> baseurl=http://mirror.centos.org/centos/7/virt/x86_64/ovirt-3.6/
>
> When attempting to download (via 'reposync -c
> ~/ovirt-system-tests/basic_suite_master/reposync-config.repo  --newest-only
> --delete ') , it only partially succeeds:
> mini@ykaul-mini:/tmp$ find . -name "ovirt-release-master*" |xargs file
> ./ovirt-master-snapshot-static-fc23/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.noarch.rpm:
>empty
> ./ovirt-master-snapshot-static-fc23/noarch/ovirt-release-master-4.0.0-0.1.master.noarch.rpm:
>  empty
> ./ovirt-master-snapshot-static-el7/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.noarch.rpm:
> empty
> ./ovirt-master-snapshot-static-el7/noarch/ovirt-release-master-4.0.0-0.1.master.noarch.rpm:
> 

Re: [ovirt-devel] Where's ovirt-release-master RPM on resources.ovirt.org ?

2016-03-09 Thread Yaniv Kaul
On Wed, Mar 9, 2016 at 9:25 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Wed, Mar 9, 2016 at 8:17 AM, Yedidyah Bar David <d...@redhat.com>
> wrote:
>
>> On Tue, Mar 8, 2016 at 11:38 PM, Yaniv Kaul <yk...@redhat.com>
>> wrote:
>> > I can find it @
>> >
>> http://rsync.nl.gentoo.org/pub/software/ovirt/ovirt-master-snapshot/rpm/el7Workstation/noarch/ovirt-release-master-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm
>>
>> Not sure what's this one - perhaps gentoo compiles our packages
>> (including -release).
>>
>> The "canonical" place to check should be:
>>
>> http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
>>
>> >
>> > I can't find it @
>> >
>> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7Workstation/noarch/
>>
>> ovirt-release-master.rpm includes both -snapshot and -snapshot-static .
>>
>
>
> Instructions on how to install nightly are here:
> http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/
> as pointed out by didi.
> Canonical URL for the ovirt-master-snapshot rpm is
> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
> Since last week, ovirt-release-master is also published nightly at
> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/ovirt-release-master.rpm
> being it now a required package for oVirt Node Next Generation.
>

It's not really me, it's Lago that is failing to fetch that RPM properly;
ovirt-release-master-host-node FAILED

(1/2): ovirt-release-master 0% [ ]  0.0 B/s |0 B  --:--
ETA ovirt-release-master-4.0.0-0.1 FAILED

(1/2): ovirt-release-master 0% [ ]  0.0 B/s |0 B  --:--
ETA ovirt-release-master-4.0.0-0.1.master.noarch: [Errno 256] No more
mirrors to try.
ovirt-release-master-host-node-4.0.0-0.1.master.noarch: [Errno 256] No more
mirrors to try.

The URLs the reposync is syncing:
baseurl=http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/
baseurl=http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc23/
baseurl=http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/
baseurl=
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/fc23/
baseurl=
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/
baseurl=
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/fedora-23-x86_64/
baseurl=
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/x86_64/
baseurl=
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-23/x86_64/
baseurl=
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch
baseurl=
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-23/noarch
baseurl=
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/
baseurl=
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/fedora-23-x86_64/
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
baseurl=http://fedorapeople.org/groups/virt/virtio-win/repo/stable
baseurl=http://download.fedoraproject.org/pub/epel/7/x86_64
baseurl=http://mirror.centos.org/centos/7/virt/x86_64/ovirt-3.6/

When attempting to download (via 'reposync -c
~/ovirt-system-tests/basic_suite_master/reposync-config.repo  --newest-only
--delete ') , it only partially succeeds:
mini@ykaul-mini:/tmp$ find . -name "ovirt-release-master*" |xargs file
./ovirt-master-snapshot-static-fc23/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.noarch.rpm:
   empty
./ovirt-master-snapshot-static-fc23/noarch/ovirt-release-master-4.0.0-0.1.master.noarch.rpm:
 empty
./ovirt-master-snapshot-static-el7/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.noarch.rpm:
empty
./ovirt-master-snapshot-static-el7/noarch/ovirt-release-master-4.0.0-0.1.master.noarch.rpm:
  empty
./ovirt-master-snapshot-fc23/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm:
RPM v3.0 bin i386/x86_64
ovirt-release-master-host-node-4.0.0-0.1.master.20160304151417.
./ovirt-master-snapshot-fc23/noarch/ovirt-release-master-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm:
  RPM v3.0 bin i386/x86_64
ovirt-release-master-4.0.0-0.1.master.20160304151417.gita200923
./ovirt-master-snapshot-el7/noarch/ovirt-release-master-host-node-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm:
 RPM v3.0 bin i386/x86_64
ovirt-release-master-host-node-4.0.0-0.1.master

Re: [ovirt-devel] [vdsm] Running VDSM unit tests on Travis CI using Docker

2016-04-06 Thread Yaniv Kaul
On Wed, Apr 6, 2016 at 2:23 PM, David Caro  wrote:

> On 04/06 14:19, Edward Haas wrote:
> > On Wed, Apr 6, 2016 at 1:41 PM, Milan Zamazal 
> wrote:
> >
> > > Edward Haas  writes:
> > >
> > > > On Wed, Apr 6, 2016 at 11:39 AM, Milan Zamazal 
> > > wrote:
> > > >
> > > > Thank you, Edward, this is useful not only for CI. I use docker
> for
> > > > building Vdsm and running its unit tests and this helped me to
> get
> > > the
> > > > proper updated set of packages after recent changes in Vdsm.
> > > >
> > > > BTW, it seems that the following packages should be additionally
> > > added
> > > > for `make check-all': psmisc, which, python-ioprocess
> > > >
> > > >
> > > > Are you saying that make check is passing on your local machine?
> > >
> > > When I add the packages given above, `make check-all' (as well as `make
> > > check') works for me except for 4 tests in lib/vdsm/schedule.py that
> > > produce the following errors with `make check-all':
> > >
> > > File "/home/pdm/ovirt/vdsm/vdsm-test/lib/vdsm/schedule.py", line
> 134,
> > > in schedule
> > >   heapq.heappush(self._calls, (deadline, call))
> > >   nose.proxy.TypeError: unorderable types: ScheduledCall() <
> > > ScheduledCall()
> > >
> > > File "/home/pdm/ovirt/vdsm/vdsm-test/tests/scheduleTests.py", line
> > > 160, in test_latency
> > >   med = ticker.latency[len(ticker.latency) / 2]
> > >   nose.proxy.TypeError: list indices must be integers, not float
> > >
> > > Those are probably Python 3 failures that should be fixed in Vdsm.
> > > The docker environment works fine for running the unit tests on my
> > > machine.
> > >
> >
> > I ran it on Travis CI with your recommended addition, and I am getting
> this
> > result: FAILED (SKIP=107, errors=14):
> > You can view the run here:
> https://travis-ci.org/EdDev/vdsm/builds/121117253
>
>
> Afaik, you won't be able to run any tests that touch networking, or kernel
> modules (bonding and such). That is as much a limitation of travis as of
> docker, that was one of the points why we started using chroots instead of
> docker containers on ovirt ci.
>

All of those should run fine in Lago, right?
Y.
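
As an aside, the schedule.py failures quoted above look like plain Python 3
issues; a minimal sketch of the kind of fix, with illustrative names rather
than the actual vdsm code:

    import heapq
    import itertools

    _tiebreak = itertools.count()

    def schedule(calls, deadline, call):
        # Python 3 refuses to order arbitrary objects, so push a unique
        # counter before the payload; ties are decided by the counter
        # instead of by comparing ScheduledCall objects.
        heapq.heappush(calls, (deadline, next(_tiebreak), call))

    def median_latency(latency):
        # In Python 3 '/' always returns a float; list indices need '//'.
        return latency[len(latency) // 2]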


>
>
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
>
> --
> David Caro
>
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
>
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] VDSM fails 'autogen.sh' when shallow cloning - is that expected?

2016-04-12 Thread Yaniv Kaul
Cloning VDSM with the following command:
git clone -b master --depth 1 git://gerrit.ovirt.org/vdsm

Which works, but then I can't run ./autogen.sh successfully:
mini@ykaul-mini:/tmp/github/vdsmshallow$ ./autogen.sh  |less
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
I am going to run ./configure with no arguments - if you wish
to pass any to it, please specify them on the ./autogen.sh command line.
configure: error: package version not defined

As the difference is 1:10 in data size and I only wish to work on the tip,
this could be very useful to me.
Any ideas?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] VDSM fails 'autogen.sh' when shallow cloning - is that expected?

2016-04-12 Thread Yaniv Kaul
Probably fails because of:
git describe
fatal: No names found, cannot describe anything.



On Tue, Apr 12, 2016 at 9:32 PM, Yaniv Kaul <yk...@redhat.com> wrote:

> Cloning VDSM with the following command:
> git clone -b master --depth 1 git://gerrit.ovirt.org/vdsm
>
> Which works, but then I can't run ./autogen.sh successfully:
> mini@ykaul-mini:/tmp/github/vdsmshallow$ ./autogen.sh  |less
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> fatal: No names found, cannot describe anything.
> I am going to run ./configure with no arguments - if you wish
> to pass any to it, please specify them on the ./autogen.sh command line.
> configure: error: package version not defined
>
> As the difference is 1:10 in data size and I only wish to work on the tip,
> this could be very useful to me.
> Any ideas?
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] VDSM fails 'autogen.sh' when shallow cloning - is that expected?

2016-04-12 Thread Yaniv Kaul
On Tue, Apr 12, 2016 at 9:46 PM, Nir Soffer <nsof...@redhat.com> wrote:

> Trying new clone:
>
> $ time git clone https://github.com/oVirt/vdsm.git
> Cloning into 'vdsm'...
> remote: Counting objects: 47555, done.
> remote: Compressing objects: 100% (155/155), done.
> remote: Total 47555 (delta 76), reused 0 (delta 0), pack-reused 47400
> Receiving objects: 100% (47555/47555), 27.16 MiB | 435.00 KiB/s, done.
> Resolving deltas: 100% (35297/35297), done.
> Checking connectivity... done.
>
> real 0m43.045s
> user 0m3.650s
> sys 0m0.555s
>
> Are you sure you need to do a shallow clone?
>

It might be that the cloud provider of oVirt's gerrit is throttling
connections from the office, but the data size ratio is huge:
mini@ykaul-mini:/tmp/github$ time git clone -b master --depth 1
https://github.com/oVirt/vdsm.git
Cloning into 'vdsm'...
remote: Counting objects: 924, done.
remote: Compressing objects: 100% (849/849), done.
remote: Total 924 (delta 181), reused 308 (delta 50), pack-reused 0
Receiving objects: 100% (924/924), *1.37 MiB* | 2.39 MiB/s, done.
Resolving deltas: 100% (181/181), done.
Checking connectivity... done.

real 0m2.059s
user 0m0.095s
sys 0m0.052s

mini@ykaul-mini:/tmp/github$ time git clone -b master
https://github.com/oVirt/vdsm.git vdsm-full
Cloning into 'vdsm-full'...
remote: Counting objects: 47555, done.
remote: Compressing objects: 100% (155/155), done.
remote: Total 47555 (delta 76), reused 0 (delta 0), pack-reused 47400
Receiving objects: 100% (47555/47555), *27.16 MiB* | 8.61 MiB/s, done.
Resolving deltas: 100% (35297/35297), done.
Checking connectivity... done.

real 0m4.795s
user 0m2.006s
sys 0m0.184s


So while I got much nicer speeds now on the non-shallow clone, I assume the
shallow one should always beat it.
In any case, perhaps I should clone from Github...
Y.


> Nir
>
> On Tue, Apr 12, 2016 at 9:37 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> > Probably fails because of:
> > git describe
> > fatal: No names found, cannot describe anything.
> >
> >
> >
> > On Tue, Apr 12, 2016 at 9:32 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >> Cloning VDSM with the following command:
> >> git clone -b master --depth 1 git://gerrit.ovirt.org/vdsm
> >>
> >> Which works, but then I can't run ./autogen.sh successfully:
> >> mini@ykaul-mini:/tmp/github/vdsmshallow$ ./autogen.sh  |less
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found, cannot describe anything.
> >> fatal: No names found

Re: [ovirt-devel] VDSM fails 'autogen.sh' when shallow cloning - is that expected?

2016-04-13 Thread Yaniv Kaul
On Wed, Apr 13, 2016 at 11:49 AM, Nir Soffer <nsof...@redhat.com> wrote:

> On Wed, Apr 13, 2016 at 11:27 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
> >
> >
> > On Tue, Apr 12, 2016 at 8:37 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >> Probably fails because of:
> >> git describe
> >> fatal: No names found, cannot describe anything.
> >
> >
> > Yes. I think autogen.sh and configure.ac shouldn't rely on git for
> package
> > versioning.
> > The automation/build-artifacs.sh script can take care of adding suffixes
> to
> > rpm release if needed at build time.
> > Version should be statically defined within configure.ac.
>
> Yaniv is using a git clone, so depending on git is fine in this case.
>
> The tarball should not have any dependencies on git.
>
> I think the issue is missing tags - this works:
>
> git clone -b master --depth 1 git://gerrit.ovirt.org/vdsm
> git tag v4.17.999
>

How do I know which tag to use?
Y.
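
For reference, the two workarounds that come to mind (rough sketches, not
verified here):

    # Option 1: give up the shallowness once a build is actually needed,
    # so git describe can see the real tags:
    git fetch --unshallow --tags
    ./autogen.sh --system

    # Option 2: stay shallow and create a local throwaway tag, as suggested
    # above - any version-like tag newer than the last release should do:
    git tag v4.17.999
    ./autogen.sh --system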


> ./autogen.sh --system
>
> Nir
>
> >
> >
> >>
> >>
> >>
> >>
> >> On Tue, Apr 12, 2016 at 9:32 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>>
> >>> Cloning VDSM with the following command:
> >>> git clone -b master --depth 1 git://gerrit.ovirt.org/vdsm
> >>>
> >>> Which works, but then I can't run ./autogen.sh successfully:
> >>> mini@ykaul-mini:/tmp/github/vdsmshallow$ ./autogen.sh  |less
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> fatal: No names found, cannot describe anything.
> >>> I am going to run ./configure with no arguments - if you wish
> >>> to pass any to it, please specify them on the ./autogen.sh command
> line.
> >>> configure: error: package version not defined
> >>>
> >>> As the difference is 1:10 in data size and I only wish to work on the
> >>> tip, this could be very useful to me.
> >>> Any ideas?
> >>
> >>
> >>
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
> >
> >
> > --
> > Sandro Bonazzola
> > Better technology. Faster innovation. Powered by community collaboration.
> > See how it works at redhat.com
> >
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] Lago + ovirt-system-tests run fail on collecting logs

2016-04-13 Thread Yaniv Kaul
On Wed, Apr 13, 2016 at 2:02 PM, David Caro  wrote:

> Though you will not have any caching and zram execution (just as the
> jenkins
> slaves don't have it).
>

That should change. The sooner the better.

If they have enough ram, they can actually run in /dev/shm/something.
That's what I'm doing with the VDSM functional tests.
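Roughly (paths are illustrative, and this assumes the machine has enough
free RAM for the whole prefix):

    mkdir -p /dev/shm/ost
    ./run_suite.sh -o /dev/shm/ost/basic_suite_master basic_suite_master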
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] Lago + ovirt-system-tests run fail on collecting logs

2016-04-13 Thread Yaniv Kaul
On Wed, Apr 13, 2016 at 1:05 PM, David Caro <dc...@redhat.com> wrote:

> On 04/10 20:52, Yaniv Kaul wrote:
> > On Sun, Apr 10, 2016 at 6:06 PM, Eyal Edri <ee...@redhat.com> wrote:
> >
> > > test_logs/
> >
> >
> > This is an annoying change of behavior. In the past, I believe the logs
> > were under the deployment dir. Now, they are here. It requires cleaning
> > them manually every time.
>
> Before, it also required manual cleanup every time; it just turned out that
> while doing the manual cleanup of the prefix (with ./run_suite -c) the logs
> were removed too (that was also an issue on jenkins, as you had to extract
> the logs before the cleanup).
>

1. Makes sense to me that you'll extract the logs before cleanup.
2. It did not cause a re-run to fail; now it will, unless you clean up AND rm
the files.

>
> > It's part of issues we'll have to fix if we want (and I believe we do)
> > support multiple execution.
>

Yep.


>
> It supports multiple execution as long as you are not running the same
> suite, same as before. The issue here is that you are using a very specific
> flow that is not used anywhere else, and thus you are facing issues and use
> cases that no one else has.
> I really recommend:
>   * Moving to the same flow jenkins uses
>   * Moving jenkins to the same flow you use
>

I'm running ./run_suite.sh -o /home/zram/3.6 basic_suite_3.6
and cleanup:
lagocli --prefix-path /home/zram/3.6/current cleanup

over and over and over and over...

What should I be running?
Y.


> > I consider it as a regression in a way, since it's a changed behavior -
> and
> > I'm not sure for the better.
>
> It changed behavior yes, and it improved the log collection and cleanup
> procedures on jenkins.
> To alleviate that issue, I sent a patch for that in the beginning of the
> lago
> project that was never merged, feel free to open a task for that too,
> should be
> relatively easy to implement some kind of log rotation if the destination
> directory already exists.
>
> > Y.
>
> > ___
> > lago-devel mailing list
> > lago-de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/lago-devel
>
>
> --
> David Caro
>
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
>
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] Weekly bug status for ovirt-system-tests (using Lago) [ 10/04/16 ]

2016-04-10 Thread Yaniv Kaul
On Sun, Apr 10, 2016 at 5:19 PM, Eyal Edri  wrote:

> A weekly reminder on open bugs & patches to keep track on blockers for
> oVirt testing:
>
>
> *Open bugs:*
> https://goo.gl/eUaPUY
>
> *Open issues:*
> Still suffering from add host failures, pending a fix from network team:
> https://bugzilla.redhat.com/show_bug.cgi?id=1322257
>
>
> *Recently merged:* (Thanks Yaniv K!)
>
>- Multiple changes to deployment script
>https://gerrit.ovirt.org/#/c/55695/
>
>
That was a very incomplete change. A better one has just been posted @
https://gerrit.ovirt.org/#/c/55915/


>
>-
>- Fix IP Address of engine *https://gerrit.ovirt.org/#/c/55903/
>*
>
>
That one, while correct, did not entirely fix for me an elusive issue where
*sometimes* ovirt-engine installation takes 5-9 minutes. Still looking into
it.


>
>- Improvements to the storage domain add tests -
>https://gerrit.ovirt.org/#/c/55233/
>
>
>
> *Open patches:*  (reviews are welcome)
> https://gerrit.ovirt.org/#/q/project:ovirt-system-tests+status:open
>
>
>- *Basic quota tests: https://gerrit.ovirt.org/#/c/55514/
>*
>- *Switched between hot-add disk and hot-add NIC tests *
>https://gerrit.ovirt.org/#/c/55740/
>   - (Attempt to reduce chance of hitting the deactivate storage bug,
>   though the fix of this bug might be in already, so not sure its needed,
>   Yaniv?)
>
>
Doesn't matter, as I plan to introduce a test that does both at the same
time, then does something else (the quota stuff).
Y.


>
>- WIP: Fix master (4.0) suite, still in progress
>https://gerrit.ovirt.org/#/c/55425/
>- Merging the storage VMs into one, still in progress:
>https://gerrit.ovirt.org/#/c/54957/
>
>
This will require redoing some of my changes (above) to simplify the
provisioning of the servers. They are still easy to do, though.

>
> If you have any updates on the bugs/patches please reply with more info.
>

I've just sent https://gerrit.ovirt.org/#/c/55917/ to remove the oVirt 3.5
suite from ovirt-system-tests. No one is using or maintaining it.
Y.



>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>
>
>
>
>
> ___
> lago-devel mailing list
> lago-de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/lago-devel
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] Lago + ovirt-system-tests run fail on collecting logs

2016-04-10 Thread Yaniv Kaul
On Sun, Apr 10, 2016 at 6:06 PM, Eyal Edri  wrote:

> test_logs/


This is an annoying change of behavior. In the past, I believe the logs
were under the deployment dir. Now, they are here. It requires cleaning
them manually every time.
It's part of issues we'll have to fix if we want (and I believe we do)
support multiple execution.
I consider it as a regression in a way, since it's a changed behavior - and
I'm not sure for the better.
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] No RPMs under http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/ ?

2016-04-11 Thread Yaniv Kaul
I can't find EL7 RPMs under the above path.
They used to be there...
TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] v2v: download/stream disks via Libvirt API

2016-04-07 Thread Yaniv Kaul
On Thu, Apr 7, 2016 at 11:51 AM, Sven Kieske  wrote:

> On 07/04/16 10:14, Shahar Havivi wrote:
> > Any suggestions/notes?
> >
> Very cool feature, but
> I have a question:
>
> Would it be possible to tunnel this through ssh or other
> TCP Connections over the network?
>

If it's using libvirt underneath, one should make sure Libvirt is
configured with security[1] and we should support it. The URI can use TLS
with X509.
Note that there's quite an overhead in using it.
[1]
http://libvirt.org/guide/html/Application_Development_Guide-Architecture-Remote_URIs.html
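
For illustration, the usual remote URI forms (not specific to the v2v flow
discussed here):

    qemu+tls://virthost.example.com/system     # TLS with x509 certs, libvirtd on port 16514
    qemu+ssh://root@virthost.example.com/system    # tunnelled over a plain SSH connection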
Y.


> Because in many cases you don't want to stream
> sensible data via unencrypted connections or you just have ssh
> access to special servers.
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +495772 293100
> F: +495772 29
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Engine High Availability

2016-03-04 Thread Yaniv Kaul
On Fri, Mar 4, 2016 at 2:03 AM, Sunny Shin  wrote:

> Hi All,
>
> In the new architecture page (
> http://www.ovirt.org/documentation/architecture/architecture/), overall
> architecture picture shows that engine supports active-active high
> availability.
>

This is inaccurate - the engine is not highly available in A/A
architecture. I'd appreciate it if you could file an issue for it so we'll fix
this page.


>
> However, as per engine HA page (
> http://www.ovirt.org/develop/release-management/features/engine/engine-high-availability/),
> active-active HA is not supported yet and there are several implementation
> issues.
>
> So, could anyone give me a brief explanation about discrepancy between
> these two pages, and about the plan to implement this feature if not
> implemented yet?
>

I suggest looking at self-hosted engine, which provides non-A/A high
availability for the engine.
Y.


>
> Sunny
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Engine High Availability

2016-03-04 Thread Yaniv Kaul
On Fri, Mar 4, 2016 at 10:45 AM, Sunny Shin <sunny4s@gmail.com> wrote:

> Hi Yaniv,
>
> Is there a plan to implement active-active HA in the near future? Is there
> any progress on it since the discussion in August 2013?
>

No, not at the moment. We feel that Self-Hosted Engine (SHE) is a great
solution thus far.
We are working on improving the performance and scale of the engine, to be
able to handle more load, more hosts, more VMs, etc.
This is an ongoing work across all components, from VDSM, to engine core,
REST API, database calls and more.
Y.


>
> Sunny
>
> 2016-03-04 17:38 GMT+09:00 Yaniv Kaul <yk...@redhat.com>:
>
>>
>>
>> On Fri, Mar 4, 2016 at 2:03 AM, Sunny Shin <sunny4s@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> In the new architecture page (
>>> http://www.ovirt.org/documentation/architecture/architecture/), overall
>>> architecture picture shows that engine supports active-active high
>>> availability.
>>>
>>
>> This is inaccurate - the engine is not highly available in A/A
>> architecture. I'd appreciate if you could file an issue for it so we'll fix
>> this page.
>>
>>
>>>
>>> However, as per engine HA page (
>>> http://www.ovirt.org/develop/release-management/features/engine/engine-high-availability/),
>>> active-active HA is not supported yet and there are several implementation
>>> issues.
>>>
>>> So, could anyone give me a brief explanation about discrepancy between
>>> these two pages, and about the plan to implement this feature if not
>>> implemented yet?
>>>
>>
>> I suggest looking at self-hosted engine, which provides non-A/A high
>> availability for the engine.
>> Y.
>>
>>
>>>
>>> Sunny
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Iscsi and other BAD buggy

2016-03-02 Thread Yaniv Kaul
On Wed, Mar 2, 2016 at 2:56 AM, Вячеслав Бадалян <v.badal...@open-bs.ru> wrote:
> Thanks. I submit.
>
> Bad link there
> http://www.ovirt.org/community/
> Report oVirt bugs on the Red Hat bugzilla. If you don't have a Bugzilla
> account yet, find out how to get one.

Thanks - to be fixed in https://github.com/oVirt/ovirt-site/pull/116
Y.

>
> Some BZ added 1313584 1313585 1313586 1313588 1313583
> Thanks for the live link to the project BZ
>
>
>
> Tue, 1 Mar 2016 at 23:23, Yaniv Kaul <yk...@redhat.com>:
>>
>> On Tue, Mar 1, 2016 at 4:56 PM, Вячеслав Бадалян <v.badal...@open-bs.ru>
>> wrote:
>> > Hello.
>>
>> Hi - please join the devel or users mailing list.
>>
>> >
>> > ZERO) https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt
>> > Sorry, entering a bug into the product oVirt has been disabled
>>
>> We were promoted to a classification. Please try
>> https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
>>
>> >
>> > And lets go.
>>
>> It may be more efficient to send an email per issue - especially if
>> they are separate issues (storage, networking, etc.)
>> Y.
>>
>> >
>> > A) If a host with oVirt also has an iSCSI target, we run into trouble.
>> >
>> > It looks like:
>> > 1. Create a targetcli target from LVM
>> > 2. All OK
>> > 3. Reboot
>> > 4. LVM finds the oVirt partitions and connects them.
>> > 5. targetcli can't get the device - it's already busy.
>> >
>> > If you add a filter so that LVM doesn't scan iSCSI devices, then oVirt
>> > does not work correctly.
>> >
>> > B) Some trouble with GlusterD: VDSM mounts the hosted-engine volume and
>> > this host is also a GlusterD server. Big troubles.
>> >
>> > C) If I reboot 1 of 3 gluster servers then the hosted engine becomes very
>> > slow.
>> > There are many errors in dmesg, agent-ha reboots the VM, and the VM can't
>> > connect to the incomplete cluster and gets stuck in Paused mode. Really...
>> >
>> > It all looks so bad... I migrated to NFS :(
>> >
>> > D) Network configuration is really bad. If I need to change the network I
>> > can't do it; oVirt keeps the settings posted from the web UI. But
>> > vdsm-networking MUST fall back to the previous setup if I make a mistake
>> > or the service does not start.
>> >
>> > E) oVirt reports does not install.
>> >
>> >  [java] org.springframework.beans.factory.BeanCreationException:
>> > Error
>> > creating bean with name 'actionModelService' defined in file
>> >
>> > [/usr/share/jasperreports-server/buildomatic/conf_source/ieCe/applicationContext.xml]:
>> > Invocation of init method failed; nested exception is
>> > java.lang.ExceptionInInitializerError
>> >
>> > 2016-02-28 09:20:59 DEBUG
>> > otopi.plugins.ovirt_engine_setup.ovirt_engine_reports.jasper.deploy
>> > plugin.execute:941 execute-output: ('./js-ant',
>> > '-DmasterPropsSource=/tmp/tmp3paFhC/config', 'import-minimal-ce')
>> > stderr:
>> > OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m;
>> > support
>> > was removed in 8.0
>> >
>> > BUILD FAILED
>> > /usr/share/jasperreports-server/buildomatic/bin/import-export.xml:263:
>> > The
>> > following error occurred while executing this line:
>> > /usr/share/jasperreports-server/buildomatic/bin/import-export.xml:142:
>> > Java
>> > returned: 255
>> >
>> > Total time: 24 seconds
>> >
>> > 2016-02-28 09:20:59 DEBUG otopi.context context._executeMethod:156
>> > method
>> > exception
>> > Traceback (most recent call last):
>> >   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
>> > _executeMethod
>> > method['method']()
>> >   File
>> >
>> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.py",
>> > line 656, in _deploy
>> > self._buildJs(config=config, cmd=cmd)
>> >   File
>> >
>> > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.py",
>> > line 215, in _buildJs
>> > 'buildomatic',
>> >   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in
>> > execute
>> > command=args[0],
>> > RuntimeError: Command './js-ant' failed to execute
>> > 20

[ovirt-devel] Where's ovirt-release-master RPM on resources.ovirt.org ?

2016-03-08 Thread Yaniv Kaul
I can find it @
http://rsync.nl.gentoo.org/pub/software/ovirt/ovirt-master-snapshot/rpm/el7Workstation/noarch/ovirt-release-master-4.0.0-0.1.master.20160304151417.gita200923.noarch.rpm

I can't find it @
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7Workstation/noarch/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ACTION REQUIRED] vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on fc23 .packages

2016-04-14 Thread Yaniv Kaul
On Thu, Apr 14, 2016 at 10:59 AM, Nir Soffer  wrote:

> On Thu, Apr 14, 2016 at 10:45 AM, Yaniv Bronheim 
> wrote:
> > I don't think this package is available in EPEL. If not, just remove it
> > from the py3 list.
> >
> > About why we didn't catch it in jenkins - it's because I don't run "make
> > check" anymore over el7 to save resources. We just build the rpm there to
> > see that we don't miss any dependencies.
> > Maybe we should bring back the make check there... what do you think?
>
> We should, make check takes about 1.5 minutes, typical build time is
> about 10 minutes
>
> fc23 build, with make check: 10:19
>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/5210/console


It can be drastically reduced, just by running it in RAM.
I can run both EL7 and FC23 concurrently on my laptop in about 5:15 - until
they fail.

EL7 make check fails on:
/usr/bin/pep8 --exclude="${exclude}" --filename '*.py' . \
contrib/logdb contrib/profile-stats contrib/repoplot init/daemonAdapter
vdsm/get-conf-item vdsm/set-conf-item vdsm/supervdsmServer vdsm/vdsm
vdsm/vdsm-restore-net-config vdsm/storage/curl-img-wrap
vdsm/storage/fc-scan vdsm-tool/vdsm-tool
./vdsm_hooks/checkips/after_get_stats.py:81:17: E126 continuation line
over-indented for hanging indent
./vdsm_hooks/checkips/after_get_stats.py:88:21: E126 continuation line
over-indented for hanging indent
./vdsm_hooks/checkips/after_get_stats.py:94:21: E126 continuation line
over-indented for hanging indent
make: *** [pep8] Error 1
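
For anyone hitting the same thing: E126 is only about the indentation of the
continuation lines. An illustrative before/after (not the actual hook code):

    # flagged - continuation lines over-indented for a hanging indent (E126)
    result = check_ips(
            host_ip,
            timeout)

    # accepted - a plain 4-space hanging indent
    result = check_ips(
        host_ip,
        timeout)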





>
> el7 build, no make check: 11:53
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/747/console


I'm running both EL7 and FC23 in parallel.
Y.


>
>
> travis build: 3-4 minutes
> https://travis-ci.org/nirs/vdsm/builds
>
> The time of the tests does not make a real difference, and we know
> that the (some) code
> works on both platforms.
>
> We should work on reducing the build time, 10 minutes for running
> tests that take
> 1.5 minutes is crazy overhead.
>
> Nir
>
> >
> > On Thu, Apr 14, 2016 at 10:26 AM, Francesco Romani 
> > wrote:
> >>
> >>
> >> 
> >>
> >> From: "Sandro Bonazzola" 
> >> To: "Francesco Romani" 
> >> Cc: "Eyal Edri" , "Dan Kenigsberg"  >,
> >> "devel" , "Yaniv Bronheim" , "Nir
> >> Soffer" 
> >> Sent: Thursday, April 14, 2016 9:13:04 AM
> >>
> >> Subject: Re: [ovirt-devel] [ACTION REQUIRED]
> >> vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on
> >> fc23 .packages
> >>
> >>
> >>
> >> On Thu, Apr 14, 2016 at 9:12 AM, Sandro Bonazzola 
> >> wrote:
> >>>
> >>>
> >>>
> >>> On Thu, Apr 14, 2016 at 9:01 AM, Francesco Romani 
> >>> wrote:
> 
> 
> 
>  
> 
>  From: "Eyal Edri" 
>  To: "Sandro Bonazzola" 
>  Cc: "Dan Kenigsberg" , "devel" ,
>  "Yaniv Bronheim" , "Nir Soffer" <
> nsof...@redhat.com>,
>  "Francesco Romani" 
>  Sent: Thursday, April 14, 2016 8:54:50 AM
>  Subject: Re: [ovirt-devel] [ACTION REQUIRED]
>  vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep
> on
>  fc23 .packages
> 
> 
>  Don't we run it per patch as well?
>  How did it got merged?
> 
>  On Apr 14, 2016 9:42 AM, "Sandro Bonazzola" 
> wrote:
> >
> >
> >
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-fc23-x86_64/823/console
> >
> > 00:05:46.751
> >
> ==
> > 00:05:46.751 ERROR: Failure: ImportError (No module named 'netaddr')
> > 00:05:46.751
> >
> --
> > 00:05:46.752 Traceback (most recent call last):
> > 00:05:46.752   File
> "/usr/lib/python3.4/site-packages/nose/failure.py",
> > line 39, in runTest
> > 00:05:46.752 raise self.exc_val.with_traceback(self.tb)
> > 00:05:46.752   File
> "/usr/lib/python3.4/site-packages/nose/loader.py",
> > line 418, in loadTestsFromName
> > 00:05:46.752 addr.filename, addr.module)
> > 00:05:46.752   File
> > "/usr/lib/python3.4/site-packages/nose/importer.py", line 47, in
> > importFromPath
> > 00:05:46.752 return self.importFromDir(dir_path, fqname)
> > 00:05:46.752   File
> > "/usr/lib/python3.4/site-packages/nose/importer.py", line 94, in
> > importFromDir
> > 00:05:46.753 mod = load_module(part_fqname, fh, filename, desc)
> > 00:05:46.753   File "/usr/lib64/python3.4/imp.py", line 235, in
> > load_module
> > 00:05:46.753 return load_source(name, filename, 

Re: [ovirt-devel] Vdsm call 17/5/2016 summary

2016-05-17 Thread Yaniv Kaul
On Tue, May 17, 2016 at 5:44 PM, Yaniv Bronheim  wrote:

> *VDSM call 17/5/2016*
>
> mzamzal, mpolednik, edwardh, danken, ybronhei, igoihman, nsoffer, pkliczew
>
> *mzamza* -
> * Working on debian support to run vms. still experimental. Milan will
> send elaborated mail about it.
>
> *pkliczew* -
> * We replaced the JSON schema with a YAML one; now when the engine sends a
> not-well-defined request we warn about it in vdsm.log. This led to many
> warning logs, which makes it harder to follow issues; we are currently
> looking for a solution there.
> * In addition, it exposed an issue with the gluster CLI that we are
> currently investigating.
>
> *igoihman* -
> * We merged a patch that from now on every test that we add to the system
> it needs to pass python3 unless you add it to a blacklist. This is another
> way to remind developers to be compatible with python3 (this part of
> accelerating our move to python3 as for now python-blivert which gluster
> uses won't be available in f25 and above for python2 - so we must support
> python3 to work over later fedora versions).
> * We merged tox support which replaces the need for pep8 and pyflakes.
> This allows to align with specific versions of pep8 and pyflakes. I updated
> vdsm-developers wiki patch about that.
>
> *nsoffer* -
> * Working on discard fixes - allows to reuse wiped storage.
> * Supporting flash in mount points.
> * Fix circular dependencies in storage server tests.
> * Introduced the image uploader - a daemon (ovirt-imageio) that allows
> uploading (and later downloading) images over HTTP directly between the
> engine and the hosts.
> * Proposed to have vdsm blog that talks about new plans such as the debian
> support.
>

We can blog every other week or so, or on a monthly basis, on ovirt.org
blog about this. The blog is not strictly for users.
I'd be happy to see such content.
Y.


>
> *mpolednik *-
> * Working on fixing the api to changeCD (
> https://gerrit.ovirt.org/#/c/56805/).
>
> *danken *-
> * openvswitch support - still work in process. we might get it async to
> 4.0.
>
> *ybronhei *-
> * Managed to push forward host package refactoring and will publish soon
> the metric collection plans
>
> Thanks for participating. Feel free to comment
>
> --
> *Yaniv Bronhaim.*
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] Excessive warnings in vdsm log

2016-05-17 Thread Yaniv Kaul
On Tue, May 17, 2016 at 9:44 AM, Piotr Kliczewski 
wrote:

> Nir,
>
> The warnings were added to annoy people so we could keep the schema aligned
> with the code.
> I think that we should use this opportunity to push fixes instead of
> disabling it.
>

I agree with the end goal, but I don't think this specific method would work.
I'd much rather we make a concentrated effort to drastically reduce those -
and I don't think right now is the best time for it (a bit late for 4.0).
Y.
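
Something along these lines is what I have in mind - a rough sketch only; the
option name is made up and the wiring is simplified:

    import logging
    from vdsm.config import config   # the real location/section may differ

    log = logging.getLogger('SchemaCache')

    def report_inconsistency(message):
        # Warn only when the (hypothetical) devel knob is enabled;
        # otherwise keep the noise at debug level.
        if config.getboolean('devel', 'api_schema_warnings'):
            log.warning(message)
        else:
            log.debug(message)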


> Thanks,
> Piotr
>
> On Mon, May 16, 2016 at 10:09 PM, Nir Soffer  wrote:
>
>> Hi all,
>>
>> Since the data verification patches were merged, the vdsm log is spammed
>> with useless warnings (see below).
>>
>> This spam makes it harder to debug vdsm.
>>
>> Please add a configuration variable to enable these warnings, and make
>> them disabled by default.
>>
>> Thanks,
>> Nir
>>
>> 
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,719::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Following parameters ['ksmMergeAcrossNodes', 'haStats'] were not
>> recognized
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,720::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter cpuUserVdsmd is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,720::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,720::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter cpuLoad is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,720::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter memUsed is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,720::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter cpuIdle is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txDropped is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter elapsedTime is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter netConfigDirty is not boolean type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxErrors is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,721::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,722::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rx is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,722::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txDropped is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,722::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txErrors is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,722::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,722::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter speed is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter tx is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxDropped is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxErrors is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rxRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter rx is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,723::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txDropped is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,724::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txErrors is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,724::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter txRate is not float type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,724::schemaapi::140::SchemaCache::(_report_inconsistency)
>> Parameter speed is not uint type
>> jsonrpc.Executor/0::WARNING::2016-05-16
>> 23:05:21,724::schemaapi::140::SchemaCache::(_report_inconsistency)

Re: [ovirt-devel] Permission issues when trying to migrate vm through the api (ovirt system tests)

2016-04-18 Thread Yaniv Kaul
On Mon, Apr 18, 2016 at 1:32 PM, David Caro  wrote:

>
> Hi everyone!
>
>
> I'm having some issues when trying to run the ovirt system tests from ovirt
> master branch, and I need some help from you guys.
>

https://bugzilla.redhat.com/show_bug.cgi?id=1328011
Y.
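
For context, the call the test makes is roughly the following (a sketch with
the Python SDK, written from memory, so treat the exact names and signatures
as approximate):

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://engine.example.com/ovirt-engine/api',
              username='admin@internal', password='secret', insecure=True)
    vm = api.vms.get(name='vm0')
    # Either let the engine pick a destination, or name a specific host:
    vm.migrate(params.Action(host=params.Host(name='host1')))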


>
> The issue is that when trying to migrate a vm through the api, I get the
> error:
>
>   RequestError:
>   status: 400
>   reason: Bad Request
>   detail: User is not authorized to perform this action.
>
>
> That does not happen when doing the same through the ui, the vm is migrated
> correctly.
>
> The engine logs don't add much more details:
>
> 2016-04-18 06:04:15,393 INFO
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-15)
> [29237280] No permission found for user
> '001a-001a-001a-001a-02dd' or one of the groups he is member
> of, when running action 'MigrateVmToServer', Required permissions are:
> Action type: 'USER' Action group: 'CREATE_VM' Object type: 'Cluster'
> Object ID: 'null'.
> 2016-04-18 06:04:15,393 WARN
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-15)
> [29237280] Validation of action 'MigrateVmToServer' failed for user
> admin@internal-authz. Reasons:
> VAR__ACTION__MIGRATE,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
> 2016-04-18 06:04:15,413 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
> task-15) [] Operation Failed: [User is not authorized to perform this
> action.]
>
>
> Something that looks odd to me too, is that in the roles, when you edit the
> 'SuperUser' role (the one the admin user belongs to) there there's one
> permission missing, the 'VM->Provisioning Operations->Create Instance', and
> can't be added (it's greyed out), not sure if it's related though, I can
> pass
> you a screenshot if you want.
>
>
> I can give you access to an environment where that happens and more
> details/logs/etc if you want to look deeper into it.
>
>
> Thanks!
>
>
> --
> David Caro
>
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
>
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Seeing a memory issue in 4.0.1.1-1.el7.centos

2016-07-27 Thread Yaniv Kaul
On Tue, Jul 26, 2016 at 11:26 PM, Eldad Marciano 
wrote:

> Hi Lynn
>
> That's a known issue but nothing to worry about.
>
> It looks like you have a nice amount of RAM.
>
> Engine and DWH calculate the total RAM size of the host and set 1/4 of
> it as heap space.
>

Without an upper limit? Makes little sense to me.
Y.
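
If anyone does want to cap it, the override mentioned below would look roughly
like this (variable names and path from memory - double-check against your
setup), followed by a restart of ovirt-engine and the dwh service:

    # /etc/ovirt-engine/engine.conf.d/99-heap.conf
    ENGINE_HEAP_MIN=2g
    ENGINE_HEAP_MAX=2g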

>
> You can easily change the max heap size to 2 GB under
> /etc/ovirt-engine/engine.conf.d/10-setup-java.conf
>
> And restart engine and dwh
>
> Hope it works for you..
>
> Regards,
> -Eldad
>
> On 26 ביולי 2016, at 22:56, Lynn Dixon  wrote:
>
> Hello all.  I hope I am sending this to the proper mailing list. If not,
> please let me know.
>
> I just installed the latest oVirt engine onto a RHEL7.2 machine last
> night.  I haven't built any VM's yet, but noticed that 27gig of memory was
> being consumed in the Dashboard.
>
> I did a quick check on top and noticed this:
> http://i.imgur.com/j9LB4q9.jpg
>
> The two java processes seem to be using a heavy amount of ram and
> reservations.  How shall I troubleshoot this to find out whats going on?
>
>
>
> *Lynn Dixon* | Red Hat Certified Architect #100-006-188
> *Sr. Cloud Consultant* | Cloud Management Practice
> Google Voice: 423-618-1414
> Cell/Text: 423-774-3188
> Click here to view my Certification Portfolio 
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Which GlusterFS for oVirt 4.1?

2016-08-10 Thread Yaniv Kaul
Yes - especially in Lago / ovirt-system-tests, which doesn't REALLY use it;
we should first try it there to ensure there are no dependency issues.
Y.

On Tue, Aug 9, 2016 at 10:37 AM, Sandro Bonazzola 
wrote:

> Hi,
> currently we're building and composing master (will become 4.1) assuming
> GlusterFS 3.7 will be used.
> I see that 3.8 is available in Fedora 24 and on CentOS Storage SIG.
> Is it time to move to 3.8 for 4.1?
>
> Thanks,
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] system tests failing

2016-08-10 Thread Yaniv Kaul
On Wed, Aug 10, 2016 at 3:51 PM, Evgheni Dereveanchin 
wrote:

> Hi everyone,
>
> We have the test-repo_ovirt_experimental_master job failing since build
> 717:
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/717/
>
> System tests are also starting to fail:
> http://jenkins.ovirt.org/job/ovirt_master_system-tests/380/
>
> Here's the list of patches which appeared in build 717:
> https://gerrit.ovirt.org/#/c/61384/ - Sandro Bonazzola - packaging: spec:
> drop default defattr
> https://gerrit.ovirt.org/#/c/62026/ - Yaniv Bronhaim   - Using %{_libdir}
> macro instead of /usr/lib
> https://gerrit.ovirt.org/#/c/62028/ - Yaniv Bronhaim   - Require
> python2-devel specifically to avoid python3 pkg
> https://gerrit.ovirt.org/#/c/62049/ - Fabian Deutsch   - Revert "imgbase:
> Drop journal support"
> https://gerrit.ovirt.org/#/c/62050/ - Fabian Deutsch   - cli: Add journal
> logging
>
> The error is related to secondary storage domain addition,
>

Note that I (ab)use the secondary storage domains addition (as it takes
time) to do other things. For example, run ovirt-log-collector.
So one needs to look at the error carefully.
Y.


> it sounds like a known bug but now it seems to be triggered every time.
>
> I'm not sure how we can identify the source of the regression
> since these tests run on repos of already built RPMs, so the
> complete set has to be re-built with a specific patch and tested
> to find out the offending one.
>
> Regards,
> Evgheni Dereveanchin
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.1: plan to update patternfly?

2016-08-10 Thread Yaniv Kaul
On Tue, Aug 9, 2016 at 6:25 PM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Aug 9, 2016 at 5:22 PM, Greg Sheremeta 
> wrote:
>
>> Yes, I actually have a patch on it right now. It'll be done this week.
>>
>> In prep, I made https://copr.fedorainfracloud.org/coprs/patternfly/
>> patternfly3/
>>
>
> Thanks! Looks like copr-be is down now so I can't access
> https://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly3
> Please open a bz on ovirt-release component to switch to the new repo.
>

As well as on ovirt-system-tests?
Y.


>
>
>
>
>>
>>
>> Greg
>>
>>
>> On Tue, Aug 9, 2016 at 11:21 AM, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> we're currently using patternfly 1.3.0, I see 3.8.1 is out here
>>> https://github.com/patternfly/patternfly/releases
>>>
>>> Are there plan to update to it?
>>>
>>> --
>>> Sandro Bonazzola
>>> Better technology. Faster innovation. Powered by community collaboration.
>>> See how it works at redhat.com
>>>
>>
>>
>>
>> --
>> Greg Sheremeta, MBA
>> Red Hat, Inc.
>> Sr. Software Engineer
>> gsher...@redhat.com
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] (MASTER) exception on server.log: ERROR: insert or update on table "host_device" violates foreign key constraint "fk_host_device_parent_name"

2016-07-07 Thread Yaniv Kaul
On Thu, Jul 7, 2016 at 10:05 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> > On 07 Jul 2016, at 08:54, Tomas Jelinek <tjeli...@redhat.com> wrote:
> >
> >
> >
> > ----- Original Message -
> >> From: "Yaniv Kaul" <yk...@redhat.com>
> >> To: "devel" <devel@ovirt.org>
> >> Sent: Sunday, July 3, 2016 12:59:29 PM
> >> Subject: [ovirt-devel] (MASTER) exception on server.log: ERROR: insert
> or update on table "host_device" violates
> >> foreign key constraint "fk_host_device_parent_name"
> >>
> >> Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on
> >> table "host_device" violates foreign key constraint
> >> "fk_host_device_parent_name"
> >> Detail: Key (host_id,
> >> parent_device_name)=(767142d5-6242-4bb1-bcbd-095595c3b7f1, scsi_host2)
> is
> >> not present in table "host_device".
> >> at
> >>
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
> >> at
> >>
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
> >> at
> >>
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> >> at
> >>
> org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:793)
> >> at
> >>
> org.postgresql.jdbc2.AbstractJdbc2Connection.commit(AbstractJdbc2Connection.java:817)
> >> at
> >>
> org.jboss.jca.adapters.jdbc.local.LocalManagedConnection.commit(LocalManagedConnection.java:96)
> >>
> >>
> >> Seen on
> >>
> ovirt-engine-4.1.0-0.0.master.2016070322.gitf1c9a5c.el7.centos.noarch
> >> Y.
> >
> > it is a known issue: https://bugzilla.redhat.com/show_bug.cgi?id=1315100
> (ugly stack trace but not dangerous by itself)
> > caused by underlying libvirt issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1306333
> which seems to be hard to reproduce consistently and no one has found a
> good reproducer so far...
> > So, if you have faced this issue and know how did you get to it, please
> comment to the bug.
>
> Important thing to note would be if this is a real hw or lago/virtual
> host, and whether there is a history of adding/removing of a device on that
> host while running.
>

- I've seen it on Lago.
- We do perform hot add of disk and NIC in ovirt-system-tests, AFAIR.
Y.


> >
> >>
> >>
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] ovirt tests failing on missing libxml2-python

2016-06-29 Thread Yaniv Kaul
On Mon, Jun 27, 2016 at 9:45 AM, Barak Korren  wrote:

> >
> >
> > It means that packages will be fetched EVERY time from outside, which
> may be
> > slow(er).
> > Y.
> >
>
> We can (and mostly already have) set up simple caches to prevent that.
>

How do you set up a cache on a developer's laptop?


> AFAIK CI slaves are cleaned every time anyway, so in practice there
> wouldn't be much difference except we will have less hard-coding and
> perhaps be more efficient (are we certain we only download what we
> need atm?)
>

The repo directory does not need to be cleaned every time. It can also be
resync'ed from a central repo - which is still going to be faster than any
other fetching.
(hopefully sync'ed into the slave /dev/shm btw).


> The existing solution looks more like premature optimization gone badly
> IMO.
>

Try to run ovirt-system-tests, clean the repo and re-run - it's 20-30
minutes at least longer - which is far more than what it takes to run the
whole test suite.

I completely agree the manual maintenance is an annoyance, wish we had
something in between.
Y.


>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] (MASTER) exception on server.log: ERROR: insert or update on table "host_device" violates foreign key constraint "fk_host_device_parent_name"

2016-07-03 Thread Yaniv Kaul
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on
table "host_device" violates foreign key constraint
"fk_host_device_parent_name"
  Detail: Key (host_id,
parent_device_name)=(767142d5-6242-4bb1-bcbd-095595c3b7f1, scsi_host2) is
not present in table "host_device".
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:793)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.commit(AbstractJdbc2Connection.java:817)
at
org.jboss.jca.adapters.jdbc.local.LocalManagedConnection.commit(LocalManagedConnection.java:96)


Seen
on ovirt-engine-4.1.0-0.0.master.2016070322.gitf1c9a5c.el7.centos.noarch
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] ovirt tests failing on missing libxml2-python

2016-07-04 Thread Yaniv Kaul
On Sun, Jul 3, 2016 at 3:31 PM, Barak Korren  wrote:

> >> Maybe we can take a middle ground, pre-fetch, but also enable external
> >> repos in CI (perhaps with some way to log and find out what was not
> >> pre-fetched).
> >
> > This is what the code is supposed to do, I suspect. reposync syncs
> between
> > what you already have and what you fetch, no?
> > Y.
> >
> I was referring to the deployment/test code.
> AFAIK right now the external repos are disabled before the test starts
>

Indeed, to save time not fetching their metadata, and ensure we have the
correct deps. I guess in some cases we can enable them - need to see how
much time it wastes.
Y.


>
>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] ovirt tests failing on missing libxml2-python

2016-07-01 Thread Yaniv Kaul
On Wed, Jun 29, 2016 at 11:15 PM, Barak Korren <bkor...@redhat.com> wrote:

> On 29 June 2016 at 21:45, Yaniv Kaul <yk...@redhat.com> wrote:
> > On Mon, Jun 27, 2016 at 9:45 AM, Barak Korren <bkor...@redhat.com>
> wrote:
> >>
> >> >
> >> >
> >> > It means that packages will be fetched EVERY time from outside, which
> >> > may be
> >> > slow(er).
> >> > Y.
> >> >
> >>
> >> We can (and mostly already have) setup simple caches to prevent that.
> >
> >
> > How do you set up cache on a developer's laptop?
> >
> We may have been unclear in our intentions: we want to make the
> pre-syncing optional, not remove it completely. It does make sense on
> the laptop (sometimes), but not so much in the CI env.
>
> > The repo directory does not need to be cleaned every time.
>
> This is an assumption that may break if we end up having any corrupt
> or failing packages in the cache. It also makes it hard to "go back in
> time" if we want to test without some update.
> (Cleaning corrupt caches and re-running is easy in a local setting, in
> CI you end up dealing with angry devs getting false '-1's)
>

True, and we don't want that. Developers have to trust the CI system.
This is an important point.


>
> > It can also be
> > resync'ed from a central repo - which still going to be faster than any
> > other fetching.
> > (hopefully sync'ed into the slave /dev/shm btw).
>
> It could be faster, but could also be slower if you end up fetching
> more than you have to. (If engine setup fails on a missing dependency,
> you just spent needless time fetching VDSM deps.)
> Also fetching by itself may not be the bottleneck in all cases, it is
> surely slow when fetching from PHX to TLV, but when fetching from the
> Squid proxy's RAM inside PHX it can actually end up being faster than
> copying from the local disk.
>

I always fetch and store on /dev/shm/repostore
It's faster than anything else.

I did copy its content once to the disk, so when the host reboots, it
rsync's this to /dev/shm/repostore, then the tests begin.
That is probably not really needed in CI, though.
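
For illustration, a minimal sketch of that workflow - keep a persistent package
cache on disk and mirror it into /dev/shm before the tests run. The paths and
the repo id below are assumptions for the example, not the actual
ovirt-system-tests layout:

    #!/usr/bin/env python
    # Sketch only: prefetch a repo into a persistent cache, then mirror it
    # into tmpfs before a test run. Paths and repo id are illustrative.
    import subprocess

    PERSISTENT_CACHE = "/var/cache/ost-repostore"  # survives reboots (assumed path)
    RAM_CACHE = "/dev/shm/repostore"               # fast, lost on reboot

    def prefetch(repo_id="ovirt-master-snapshot"):
        # Refresh the persistent cache; only missing packages are downloaded.
        subprocess.check_call(
            ["reposync", "--repoid", repo_id, "-p", PERSISTENT_CACHE])

    def warm_ram_cache():
        # Mirror the persistent cache into tmpfs; cheap when already in sync.
        subprocess.check_call(["rsync", "-a", PERSISTENT_CACHE + "/", RAM_CACHE])

    if __name__ == "__main__":
        prefetch()
        warm_ram_cache()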


> >> The existing solution looks more like premature optimization gone badly
> >> IMO.
> >
> > Try to run ovirt-system-tests, clean the repo and re-run - it's 20-30
> > minutes at least longer - which is far more than what it takes to run the
> > whole test suite.
>
> I wonder how many of those minutes are spent on fetching things we
> actually need, and how much is spent on overhead. I suspect that
> without a local cache, the test run will be longer, but not as long as
> the pre-fetching+tests take currently. More importantly, this may
> allow the CI to fail faster. I think we should at least test that.
>
> > I completely agree the manual maintenance is an annoyance, wish we had
> > something in between.
>
> Maybe we can take a middle ground, pre-fetch, but also enable external
> repos in CI (perhaps with some way to log and find out what was not
> pre-fetched).
>

This is what the code is supposed to do, I suspect. reposync syncs between
what you already have and what you fetch, no?
Y.


> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Question about testing

2017-02-05 Thread Yaniv Kaul
On Feb 5, 2017 1:58 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:

I see emails floating around about Jenkins.

Is there a public Jenkins, or possibly one that I can be invited to?
I just finished the vagrant provider and



Excellent news - saw it on Twitter!
Can you send an email to the users mailing list about it?
Perhaps even a blog post on ovirt.org?

pushed it to rubygems and am
currently bringing up a new permanent oVirt server so that I can write
some acceptance tests, but having access to a Jenkins and a testable
oVirt setup would save me a ton of time.


I'm sure it can be integrated nicely into ovirt-system-tests. Let me think
about it for a while.
Y.

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [lago-devel] vdsm service fails to start on HC setup

2017-02-06 Thread Yaniv Kaul
+Nir

On Feb 6, 2017 10:12 AM, "Sahina Bose"  wrote:

> Hi all,
>
> While verifying the test to deploy hyperconverged HE [1], I'm running into
> an issue today where vdsm fails to start.
>
> In the logs -
>  lago-basic-suite-hc-host0 vdsmd_init_common.sh: Error:
> Feb  6 02:21:32 lago-basic-suite-hc-host0 vdsmd_init_common.sh: One of the
> modules is not configured to work with VDSM.
>
> Starting manually - vdsm-tool configure --force gives:
> Units need configuration: {'lvm2-lvmetad.service': {'LoadState': 'masked',
> 'ActiveState': 'failed'}}
>
> Is this a known issue?
>
> [1] - https://gerrit.ovirt.org/57283
>
> thanks
> sahina
>
>
> ___
> lago-devel mailing list
> lago-de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/lago-devel
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] API model documentation is broken?

2017-01-24 Thread Yaniv Kaul
Just below [1], something is a bit messed up.

TIA,
Y.

[1]
http://ovirt.github.io/ovirt-engine-api-model/master/#services/attached_storage_domain_disks
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] job test-repo_ovirt_experimental_4.0 (3718) failed

2017-01-24 Thread Yaniv Kaul
On Tue, Jan 24, 2017 at 11:09 AM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> On Tue, Jan 24, 2017 at 9:38 AM, Fred Rolland  wrote:
> > Hi,
> >
> > I see some issues in the communication between host0 and the engine.
> > Vdsm returned the answer to call getSpmStatus and then got an SSL error.
> [1]
> >
> > After that, the engine tried to send to the same host0
> connectStorageServer
> > requests that failed.
> > The process of attaching the storage domain failed because the host was
> not
> > connected to the NFS server.
> > I think the engine should not have tried to attach the storage domain if
> > the connectStorageServer request failed.
> > Can you open a bug on this?
> >
> > Regardless, the SSL error and communication issue should be investigated
> > further by infra team.
> >
> > Regards,
> >
> > Fred
> >
> > [1]
> > jsonrpc.Executor/5::DEBUG::2017-01-19
> > 12:13:48,711::__init__::555::jsonrpc.JsonRpcServer::(_handle_request)
> Return
> > 'StoragePool.getSpmStatus' in bridge with {'spmId': 1, 'spmStatus':
> 'SPM',
> > 'spmLver': 2L}
> > jsonrpc.Executor/5::INFO::2017-01-19
> > 12:13:48,711::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC
> call
> > StoragePool.getSpmStatus succeeded in 0.00 seconds
> > JsonRpc (StompReactor)::ERROR::2017-01-19
> > 12:13:49,449::betterAsyncore::113::vds.dispatcher::(recv) SSL error
> during
> > reading data: unexpected eof
>
> The above log entry is a result of the engine closing the connection due to
> a higher-level timeout occurring
>
> >
> > [2]
> > 2017-01-19 12:13:49,455 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.
> ConnectStorageServerVDSCommand]
> > (org.ovirt.thread.pool-6-thread-5) [4a2c0e44] Command
> > 'ConnectStorageServerVDSCommand(HostName = lago-basic-suite-4-0-host0,
> > StorageServerConnectionManagementVDSParameters:{runAsync='true',
> > hostId='4c4d9ad9-32a1-406a-93da-1710e069c690',
> > storagePoolId='----', storageType='NFS',
> > connectionList='[StorageServerConnections:{id='da9131c8-1cd9-46bb-b7ea-
> f83228e04d3f',
> > connection='192.168.201.4:/exports/nfs/iso', iqn='null', vfsType='null',
> > mountOptions='null', nfsVersion='V3', nfsRetrans='null', nfsTimeo='null',
> > iface='null', netIfaceName='null'}]'})' execution failed:
> > VDSGenericException: VDSNetworkException: Message timeout which can be
> > caused by communication issues
> >
>
> The above log entry tells us that a command timed out. In order to
> understand what happened, I would check why this command did not get a
> response. Usually it is sent 300 secs prior to the above message.
>
> It is the desired approach, when we see a higher-level timeout, to close
> the connection and attempt to reconnect.
>

Is there a reason not to close gracefully?
Y.


>
> >
> >
> > On Mon, Jan 23, 2017 at 4:35 PM, Shlomo Ben David 
> > wrote:
> >>
> >> Hi,
> >>
> >> Job [1] failed with the following errors (logs [2]):
> >>
> >> {"jsonrpc": "2.0", "id": "669e1306-3206-4a64-a33f-d18176531ff8",
> "error":
> >> {"message": "Storage domain does not exist:
> >> (u'ab6c9588-d957-4be3-9862-d2596db463d9',)", "code": 358}}
> >> 2017-01-19 12:13:51,619 DEBUG
> >> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker)
> >> [3a116b94] Message received: {"jsonrpc": "2.0", "id":
> >> "669e1306-3206-4a64-a33f-d18176531ff8", "error": {"message": "Storage
> domain
> >> does not exist: (u'ab6c9588-d957-4be3-9862-d2596db463d9',)", "code":
> 358}}
> >> 2017-01-19 12:13:51,620 ERROR
> >> [org.ovirt.engine.core.vdsbroker.irsbroker.
> AttachStorageDomainVDSCommand]
> >> (default task-27) [6a6480a4] Failed in 'AttachStorageDomainVDS' method
> >> 2017-01-19 12:13:51,624 ERROR
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (default task-27) [6a6480a4] Correlation ID: null, Call Stack: null,
> Custom
> >> Event ID: -1, Message: VDSM command failed: Storage domain does not
> exist:
> >> (u'ab6c9588-d957-4be3-9862-d2596db463d9',)
> >> 2017-01-19 12:13:51,624 ERROR
> >> [org.ovirt.engine.core.vdsbroker.irsbroker.
> AttachStorageDomainVDSCommand]
> >> (default task-27) [6a6480a4] Command 'AttachStorageDomainVDSCommand(
> >> AttachStorageDomainVDSCommandParameters:{runAsync='true',
> >> storagePoolId='fc33da6d-5da7-4005-a693-f170437c176c',
> >> ignoreFailoverLimit='false',
> >> storageDomainId='ab6c9588-d957-4be3-9862-d2596db463d9'})' execution
> failed:
> >> IRSGenericException: IRSErrorException: Failed to
> AttachStorageDomainVDS,
> >> error = Storage domain does not exist:
> >> (u'ab6c9588-d957-4be3-9862-d2596db463d9',), code = 358
> >> 2017-01-19 12:13:51,632 DEBUG
> >> [org.ovirt.engine.core.utils.timer.FixedDelayJobListener]
> >> (DefaultQuartzScheduler10) [] Rescheduling
> >> DEFAULT.org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData.
> hostsStorageConnectionsAndPoolMetadataRefresh#-9223372036854775793
> >> as there is no unfired trigger.
> >> 2017-01-19 12:13:51,624 DEBUG
> >> 

Re: [ovirt-devel] [monitoring][collectd] the collectd virt plugin is now on par with Vdsm needs

2017-02-21 Thread Yaniv Kaul
On Tue, Feb 21, 2017 at 1:06 PM Francesco Romani  wrote:

> Hello everyone,
>
>
> in the last weeks I've been submitting PRs to collectd upstream, to
> bring the virt plugin up to date with Vdsm and oVirt needs.
>
> Previously, the collectd virt plugin reported only a subset of metrics
> oVirt uses.
>
> In current collectd master, the collectd virt plugin provides all the
> data Vdsm (thus Engine) needs. This means that it is now
>
> possible for Vdsm or Engine to query collectd, not Vdsm/libvirt, and
> have the same data.
>

Do we wish to ship the unixsock collectd plugin? I'm not sure we do these
days (4.1).
We can do that later, of course, when we ship this.
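
For reference, this is roughly what reading one of those values over the plain
text protocol could look like, assuming the unixsock plugin is enabled and
listening on /var/run/collectd-unixsock - both the socket path and the value
identifier below are assumptions, not something we ship or configure today:

    import socket

    # Sketch only: query collectd's unixsock plugin with the plain text
    # protocol (GETVAL). Socket path and identifier are assumptions.
    SOCKET_PATH = "/var/run/collectd-unixsock"

    def getval(identifier):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(SOCKET_PATH)
        try:
            sock.sendall(('GETVAL "%s"\n' % identifier).encode())
            reply = sock.recv(4096).decode()
        finally:
            sock.close()
        # First line is "N Value(s) found", then N "name=value" lines.
        lines = reply.splitlines()
        count = int(lines[0].split()[0])
        return dict(line.split("=", 1) for line in lines[1:1 + count])

    print(getval("a0/virt/memory-rss"))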
Y.


>
> There are only two caveats:
>
> 1. it is yet to be seen which version of collectd will ship all those
> enhancements
>
> 2. collectd *intentionally* reports metrics as rates, not as absolute
> values as Vdsm does. This may be an issue in the presence of restarts/data
> loss in the link between collectd and the metrics store.
>
>
> Please keep reading for more details:
>
>
> How to get the code?
>
> 
>
> This is somewhat tricky until we get an official release. If one is
> familiar with the RPM build process, it is easy to build custom
> packages
>
> from a snapshot from collectd master
> (https://github.com/collectd/collectd) and a recent 5.7.1 RPM (like
> https://koji.fedoraproject.org/koji/buildinfo?buildID=835669)
>
>
> How to configure it?
>
> --
>
> Most things work out of the box. One currently in-progress Vdsm patch
> ships the recommended configuration
> https://gerrit.ovirt.org/#/c/71176/6/static/etc/collectd.d/virt.conf
>
> The meaning of the configuration option is documented in man 5
> collectd.conf
>
>
> How it looks like?
>
> --
>
>
> Let me post one "screenshot" :)
>
>
>
>   $ collectdctl listval | grep a0
>   a0/virt/disk_octets-hdc
>   a0/virt/disk_octets-vda
>   a0/virt/disk_ops-hdc
>   a0/virt/disk_ops-vda
>   a0/virt/disk_time-hdc
>   a0/virt/disk_time-vda
>   a0/virt/if_dropped-vnet0
>   a0/virt/if_errors-vnet0
>   a0/virt/if_octets-vnet0
>   a0/virt/if_packets-vnet0
>   a0/virt/memory-actual_balloon
>   a0/virt/memory-rss
>   a0/virt/memory-total
>   a0/virt/ps_cputime
>   a0/virt/total_requests-flush-hdc
>   a0/virt/total_requests-flush-vda
>   a0/virt/total_time_in_ms-flush-hdc
>   a0/virt/total_time_in_ms-flush-vda
>   a0/virt/virt_cpu_total
>   a0/virt/virt_vcpu-0
>   a0/virt/virt_vcpu-1
>
>
> How to consume the data?
> -
>
> Among the ways to query collectd, the two most popular (and most fitting
> for the oVirt use case) ways are perhaps the network protocol
> (https://collectd.org/wiki/index.php/Binary_protocol)
> and the plain text protocol
> (https://collectd.org/wiki/index.php/Plain_text_protocol). The first
> could be used by Engine to get the data directly, or to consolidate the
> metrics in one database (e.g. to run any kind of query, for historical
> series...).
> The latter will be used by Vdsm to keep reporting the metrics (again
> https://gerrit.ovirt.org/#/c/71176/6)
>
> Please note that the performance of the plain text protocol is known to
> be lower than that of the binary protocol
>
> What about the unresponsive hosts?
> ---
>
> We know from experience that hosts may become unresponsive, and this can
> disrupt monitoring. However, we do want to keep monitoring the
> responsive hosts, avoiding a situation where one rogue host makes us lose all the
> monitoring data.
> To cope with this need, the virt plugin gained support for "partition
> tag". With this, we can group VMs together using one arbitrary tag. This
> is completely transparent to collectd, and also completely optional.
> oVirt can use this tag to group VMs per-storage-domain, or however it
> sees fit, trying to minimize the disruption should one host become
> unresponsive.
>
> Read the full docs here:
>
> https://github.com/collectd/collectd/commit/999efc28d8e2e96bc15f535254d412a79755ca4f
>
>
> What about the collectd-ovirt plugin?
> 
>
> Some time ago I implemented an out-of-tree collectd plugin leveraging
> the libvirt bulk stats: https://github.com/fromanirh/collectd-ovirt
> This plugin is meant to be a modern, drop-in replacement for the
> existing virt plugin.
> The development of that out of tree plugin is now halted, because we
> have everything we need in the upstream collectd plugin.
>
> Future work
> --
>
> We believe we have reached feature parity, so we are looking for
> bugfixes/performance tuning in the near-term future. I'll be happy to
> provide more patches/PRs about that.
>
>
>
> Thanks and bests,
>
> --
> Francesco Romani
> Red Hat Engineering Virtualization R & D
> IRC: fromani
>
> ___
> Devel mailing list
> Devel@ovirt.org
> 

Re: [ovirt-devel] Glance Images

2017-01-19 Thread Yaniv Kaul
On Jan 19, 2017 4:59 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:

Is there any way to suggest new packages be baked into the openstack
glance images?
I.e. the ovirt-guest-agent is missing from the atomic images, and
pretty tough to install given that gcc, make, yum, and pretty much
everything else is gone.


You are using multiple terms here that confuse me and are not directly
related: Glance, OpenStack, ovirt-guest-agent and Atomic. Can you elaborate
on your needs?
Y.

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] master and 4.1 dwhd Fails to Start (was ovirt_experimental_master Fails: PostgreSQL is not Accessible During Engine Setup)

2017-01-18 Thread Yaniv Kaul
On Wed, Jan 18, 2017 at 3:09 PM, Anton Marchukov 
wrote:

> Hello Shirly.
>
> Thanks for your fix. No, today there are no dwh-related failures
> in either master or 4.1.
>

Would be great if it could be published, so us mere mortals could enjoy the
fix and run o-s-t as well :-)
Y.


>
> Anton.
>
> On Tue, Jan 17, 2017 at 9:26 PM, Shirly Radco  wrote:
>
>> I fixed the master bug. Is dwh in master still failing?
>>
>> Best regards,
>>
>> Shirly Radco
>>
>> BI Software Engineer
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>>
>> On Tue, Jan 17, 2017 at 5:30 PM, Anton Marchukov 
>> wrote:
>>
>>> Hello All.
>>>
>>> With Shirly's help we found that 4.1 was using master and when
>>> ovirt_engine_dwh
>>> was branched today the 4.1 job was not updated. This resulted in the 4.1 repo being
>>> poisoned with the 4.1 dwh rpm. I manually cleared it and am waiting for test
>>> results; it looks like we have a pile-up on Jenkins due to a lot of patches
>>> merged.
>>>
>>> For master I believe there are still some problems introduced, so it will
>>> continue to fail with "Error: Could not find or load main class
>>> ovirt_engine_dwh.historyetl_4_2.HistoryETL" till we find and fix the
>>> root cause.
>>>
>>> Anton.
>>>
>>> On Tue, Jan 17, 2017 at 2:18 PM, Anton Marchukov 
>>> wrote:
>>>
 Hello All.

 We checked this with Didi, and the postgres connection error is not an error
 (although it prints a stack trace... can we please not print stack traces
 for anything that we handle in code... it is really confusing when you need
 to find the root cause).

 The test is checking for dwhd to be up using systemd:

 testlib.assert_true_within_short(
 lambda: engine.service('ovirt-engine-dwhd').alive()
 )

 that runs:

 /usr/bin/systemctl status --lines=0 ovirt-engine-dwhd
 lago.ssh: DEBUG: Command 90e98548 on lago-basic-suite-master-engine
 returned with 3
 lago.ssh: DEBUG: Command 90e98548 on lago-basic-suite-master-engine
 output:
  ● ovirt-engine-dwhd.service - oVirt Engine Data Warehouse
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-dwhd.service;
 enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue
 2017-01-17 07:33:23 EST; 3min 4s ago
  Main PID: 22448 (code=exited, status=1/FAILURE)
CGroup: /system.slice/ovirt-engine-dwhd.service

 dwhd log [1] has the following error:

 Error: Could not find or load main class ovirt_engine_dwh.historyetl_4_
 2.HistoryETL

 so this looks to be the actual problem. The latest job that failed with this
 is [2]. This also affects 4.1, e.g. [3].


 [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
 ster/4791/artifact/exported-artifacts/basic_suite_master.sh-
 el7/exported-artifacts/test_logs/basic-suite-master/post-001
 _initialize_engine.py/lago-basic-suite-master-engine/_var_lo
 g/ovirt-engine-dwh/ovirt-engine-dwhd.log
 [2] http://jenkins.ovirt.org/job/test-repo_ovirt_experimenta
 l_master/4791/
 [3] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/287

 --
 Anton Marchukov
 Senior Software Engineer - RHEV CI - Red Hat


>>>
>>>
>>> --
>>> Anton Marchukov
>>> Senior Software Engineer - RHEV CI - Red Hat
>>>
>>>
>>
>
>
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ACTION REQUIRED] 4.1 RC blocked

2017-01-20 Thread Yaniv Kaul
On Fri, Jan 20, 2017 at 11:12 AM, Tomas Jelinek  wrote:

>
>
> On Fri, Jan 20, 2017 at 9:56 AM, Sandro Bonazzola 
> wrote:
>
>> Hi, we still have 2 approved blockers:
>> https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_m
>> ilestone%3Aovirt-4.1.0%20status%3Anew%2Cassigned%2Cpost%
>> 20flag%3Ablocker%2B
>>
>> Please provide ETA or push them out of 4.1,
>>
>
> the https://bugzilla.redhat.com/show_bug.cgi?id=1411739 already has a fix
> - should be acked by end of today.
>

Good. The 2nd I've moved to 4.1.1.
Y.


>
>
>> Thanks
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 26.02.2017 ] [test-repo_ovirt_experimental_master]

2017-02-26 Thread Yaniv Kaul
On Sun, Feb 26, 2017 at 3:04 PM Shlomo Ben David 
wrote:

Hi,


Test failed: [ test-repo_ovirt_experimental_master ]

Link to Job: [1]

Link to all logs: [2]

Link to error log: [3]


[1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5538

[2]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5538/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-006_migrations.py/

[3]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5538/artifact/exported-artifacts/basic-suit-master-el7/nosetests-006_migrations.py.xml

Error snippet from the log:


The below is not the issue. The issue is (engine log):
2017-02-26 05:43:52,178-05 ERROR
[org.ovirt.engine.core.bll.network.host.HostValidator] (default task-11)
[d3b3a59d-6cc0-4896-b3f2-8483f9b77fe2] Unable to setup network: operation
can only be done when Host status is one of: Maintenance, Up,
NonOperational; current status is Connecting

And it comes from the host briefly disconnecting from the engine, with various
errors such as (again, engine log):


2017-02-26 05:43:26,763-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [63e30973] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host1 *command
FullListVDS failed: Unrecognized message received*


Y.





2017-02-26 05:35:53,340-0500 ERROR (jsonrpc/3)
[storage.TaskManager.Task]
(Task='22828901-d87f-4607-9690-a106c474ebe4') Unexpected error
(task:871)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
878, in _run
return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3200, in teardownImage
dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1278, in deactivateImage
lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line
1304, in deactivateLVs
_setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line
845, in _setLVAvailability
raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume:
('General Storage Exception: ("5 [] [\'  Logical volume
1dd0ee2a-f26a-423c-90f9-79703343aa1e/8323e511-eb93-4b1c-a9fa-ad66409994e7
in use.\', \'  Logical volume
1dd0ee2a-f26a-423c-90f9-79703343aa1e/99662dfa-acf2-4392-8ab8-106412c2afa5
in 
use.\']\\n1dd0ee2a-f26a-423c-90f9-79703343aa1e/[\'99662dfa-acf2-4392-8ab8-106412c2afa5\',
\'8323e511-eb93-4b1c-a9fa-ad66409994e7\']",)',)
2017-02-26 05:35:53,347-0500 INFO  (jsonrpc/3)
[storage.TaskManager.Task]
(Task='22828901-d87f-4607-9690-a106c474ebe4') aborting: Task is
aborted: 'Cannot deactivate Logical Volume' - code 552 (task:1176)
2017-02-26 05:35:53,348-0500 ERROR (jsonrpc/3) [storage.Dispatcher]
{'status': {'message': 'Cannot deactivate Logical Volume: (\'General
Storage Exception: ("5 [] [\\\'  Logical volume
1dd0ee2a-f26a-423c-90f9-79703343aa1e/8323e511-eb93-4b1c-a9fa-ad66409994e7
in use.\\\', \\\'  Logical volume
1dd0ee2a-f26a-423c-90f9-79703343aa1e/99662dfa-acf2-4392-8ab8-106412c2afa5
in 
use.\\\']n1dd0ee2a-f26a-423c-90f9-79703343aa1e/[\\\'99662dfa-acf2-4392-8ab8-106412c2afa5\\\',
\\\'8323e511-eb93-4b1c-a9fa-ad66409994e7\\\']",)\',)', 'code': 552}}
(dispatcher:78)
2017-02-26 05:35:53,349-0500 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer]
RPC call Image.teardown failed (error 552) in 19.36 seconds
(__init__:552)




Best Regards,

Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
RHCSA | RHCVA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)

OPEN SOURCE - 1 4 011 && 011 4 1

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 02.03.2017 ] [001_upgrade_engine.py]

2017-03-02 Thread Yaniv Kaul
On Thu, Mar 2, 2017 at 11:32 AM Yedidyah Bar David  wrote:

> On Thu, Mar 2, 2017 at 11:24 AM, Pavel Zhukov  wrote:
> >
> >
> > On Thu, Mar 02 2017, Sandro Bonazzola wrote:
> >
> >> ovirt-engine-hosts-ansible-inventory has been dropped in favor of
> >> ovirt-engine-metrics
> >> Maybe this is the root cause.
> > Right, I see the fix was merged https://gerrit.ovirt.org/73415 and the
> job
> > is green now.
>
> We merged ovirt-engine-metrics-1.0.0 and made sure it was built correctly
> by build-artifacts, prior to merging the engine patch that needs it.
>
> If that's not enough, please explain how such things - two patches
> in two different git repos that need to be merged and then tested in a
> specific order - should be handled in the future.
>

Don't think there's a very good solution to this in our current
architecture.
I think it's quite alright for the CI to fail on this - and then to succeed
when fixed.
Y.


>
> The relevant patches, in current case, are:
>
> https://gerrit.ovirt.org/73414
>
> Build ovirt-engine-metrics-1.0.0.
> build-artifacts finished at 09:36 (IST).
>
> https://gerrit.ovirt.org/73363
>
> Remove ovirt-engine-hosts-ansible-inventory,
> and require ovirt-engine-metrics, which replaces it.
> Merged at 09:40, build-artifacts finished 10:00.
>
> Can we expect ost to run on packages based on the order in which they
> were merged, or built? If not, is there any other assumption we can
> make re the order? Also, can we affect this order somehow?
>
> Thanks,
>
> > Thank you!
> >>
> >> On Thu, Mar 2, 2017 at 9:23 AM, Pavel Zhukov 
> wrote:
> >>
> >>>
> >>> Hello,
> >>>
> >>> ovirt-engine upgrade failed on the 'rpm -q' command [2] so job [1] was
> >>> marked as
> >>> failed. It's reproducible and started from build [1] onward.
> >>> I don't see any relevant patches in otopi/engine merged recently, so no
> >>> suspected patches
> >>> so far.
> >>>
> >>> [1] http://jenkins.ovirt.org/view/experimental%20jobs/job/test-
> >>> repo_ovirt_experimental_master/5626/
> >>>
> >>> [2]
> >>>
> >>> 2017-03-02 02:48:13 DEBUG otopi.plugins.ovirt_engine_
> >>> setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926
> >>> execute-output: ('/bin/rpm', '-q', 'ovirt-engine-webadmin-portal',
> >>> 'ovirt-engine-dwh', 'ovirt-engine', 'ovirt-engine-restapi',
> >>> 'ovirt-engine-dbscripts', 'ovirt-engine-tools-backup',
> >>> 'ovirt-engine-dashboard', 'ovirt-engine-userportal',
> >>> 'ovirt-engine-wildfly', 'ovirt-engine-backend',
> >>> 'ovirt-engine-wildfly-overlay', 'ovirt-engine-hosts-ansible-inventory',
> >>> 'ovirt-engine-tools', 'ovirt-engine-extension-aaa-jdbc') stderr:
> >>>
> >>>
> >>> 2017-03-02 02:48:13 DEBUG otopi.transaction transaction.abort:119
> aborting
> >>> 'Yum Transaction'
> >>> Loaded plugins: fastestmirror, versionlock
> >>> 2017-03-02 02:48:13 DEBUG otopi.transaction transaction.abort:119
> aborting
> >>> 'DWH Engine database Transaction'
> >>> 2017-03-02 02:48:13 DEBUG otopi.transaction transaction.abort:119
> aborting
> >>> 'Database Transaction'
> >>> 2017-03-02 02:48:13 DEBUG otopi.context context._executeMethod:142
> method
> >>> exception
> >>> Traceback (most recent call last):
> >>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132,
> in
> >>> _executeMethod
> >>> method['method']()
> >>>   File "/usr/share/otopi/plugins/otopi/core/transaction.py", line 93,
> in
> >>> _main_end
> >>> self._mainTransaction.commit()
> >>>   File "/usr/lib/python2.7/site-packages/otopi/transaction.py", line
> 148,
> >>> in commit
> >>> element.commit()
> >>>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-
> >>> engine-setup/ovirt-engine-common/distro-rpm/packages.py", line 146, in
> >>> commit
> >>> osetupcons.RPMDistroEnv.VERSION_LOCK_APPLY
> >>>   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 931, in
> >>> execute
> >>> command=args[0],
> >>> RuntimeError: Command '/bin/rpm' failed to execute
> >>> 2017-03-02 02:48:13 ERROR otopi.context context._executeMethod:151
> Failed
> >>> to execute stage 'Transaction commit': Command '/bin/rpm' failed to
> execute
> >>>
> >>> --
> >>> Pavel
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> >>>
> >
> >
> > --
> > Pavel Zhukov
> > Software Engineer
> > RHV DevOps
> > IRC: landgraf
>
>
>
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 02.03.2017 ] [001_upgrade_engine.py]

2017-03-02 Thread Yaniv Kaul
On Thu, Mar 2, 2017 at 11:40 AM Yedidyah Bar David <d...@redhat.com> wrote:

> On Thu, Mar 2, 2017 at 11:34 AM, Yaniv Kaul <yk...@redhat.com> wrote:
> >
> >
> > On Thu, Mar 2, 2017 at 11:32 AM Yedidyah Bar David <d...@redhat.com>
> wrote:
> >>
> >> On Thu, Mar 2, 2017 at 11:24 AM, Pavel Zhukov <pzhu...@redhat.com>
> wrote:
> >> >
> >> >
> >> > On Thu, Mar 02 2017, Sandro Bonazzola wrote:
> >> >
> >> >> ovirt-engine-hosts-ansible-inventory has been dropped in favor of
> >> >> ovirt-engine-metrics
> >> >> Maybe this is the root cause.
> >> > Right, I see the fix was merged https://gerrit.ovirt.org/73415 and
> the
> >> > job
> >> > is green now.
> >>
> >> We merged ovirt-engine-metrics-1.0.0 and made sure it was built
> correctly
> >> by build-artifacts, prior to merging the engine patch that needs it.
> >>
> >> If that's not enough, please explain how such things - two
> patches
> >> in two different git repos that need to be merged and then tested in a
> >> specific order - should be handled in the future.
> >
> >
> > Don't think there's a very good solution to this in our current
> > architecture.
>
> But for sure there is _something_ we can say, no?
>
> If I wait a day, might this not be enough either?
>

I don't see who's going to wait - especially as we'll be moving to running
CI more and more - hopefully per patch at some point.


>
> > I think it's quite alright for the CI to fail on this - and then to
> succeed
> > when fixed.
>
> Of course, in general. I just want to understand what's the minimum I
> need to do (wait some more, something else?) to save some noise...
>

Mainly communication - a heads up that you are about to break CI and will
fix it right after makes sense to me. There are more sophisticated
solutions (Zuul?) out there, but I think straightforward communication is
the easiest, at this point - it doesn't happen often.
Y.


>
> > Y.
> >
> >>
> >>
> >> The relevant patches, in current case, are:
> >>
> >> https://gerrit.ovirt.org/73414
> >>
> >> Build ovirt-engine-metrics-1.0.0.
> >> build-artifacts finished at 09:36 (IST).
> >>
> >> https://gerrit.ovirt.org/73363
> >>
> >> Remove ovirt-engine-hosts-ansible-inventory,
> >> and require ovirt-engine-metrics, which replaces it.
> >> Merged at 09:40, build-artifacts finished 10:00.
> >>
> >> Can we expect ost to run on packages based on the order in which they
> >> were merged, or built? If not, is there any other assumption we can
> >> make re the order? Also, can we affect this order somehow?
> >>
> >> Thanks,
> >>
> >> > Thank you!
> >> >>
> >> >> On Thu, Mar 2, 2017 at 9:23 AM, Pavel Zhukov <pzhu...@redhat.com>
> >> >> wrote:
> >> >>
> >> >>>
> >> >>> Hello,
> >> >>>
> >> >>> ovirt-engine upgrade failed on the 'rpm -q' command [2] so job [1]
> was
> >> >>> marked as
> >> >>> failed. It's reproducible and started from build [1] onward.
> >> >>> I don't see any relevant patches in otopi/engine merged recently so
> no
> >> >>> suspected patches
> >> >>> so far.
> >> >>>
> >> >>> [1] http://jenkins.ovirt.org/view/experimental%20jobs/job/test-
> >> >>> repo_ovirt_experimental_master/5626/
> >> >>>
> >> >>> [2]
> >> >>>
> >> >>> 2017-03-02 02:48:13 DEBUG otopi.plugins.ovirt_engine_
> >> >>> setup.ovirt_engine_common.distro-rpm.packages plugin.execute:926
> >> >>> execute-output: ('/bin/rpm', '-q', 'ovirt-engine-webadmin-portal',
> >> >>> 'ovirt-engine-dwh', 'ovirt-engine', 'ovirt-engine-restapi',
> >> >>> 'ovirt-engine-dbscripts', 'ovirt-engine-tools-backup',
> >> >>> 'ovirt-engine-dashboard', 'ovirt-engine-userportal',
> >> >>> 'ovirt-engine-wildfly', 'ovirt-engine-backend',
> >> >>> 'ovirt-engine-wildfly-overlay',
> >> >>> 'ovirt-engine-hosts-ansible-inventory',
> >> >>> 'ovirt-engine-tools', 'ovirt-engine-extension-aaa-jdbc') stderr:
> >> >>>
> >> >>>
> >> >>> 2017-03-02 02:48:13 DEBUG otopi.transaction transaction.abor

Re: [ovirt-devel] Failure in log collector on master: Failure fetching information about hypervisors from API. Error (ValueError): legacy is not a valid RngSource

2016-09-06 Thread Yaniv Kaul
Interesting that we don't see it on 4.0 - is it a regression introduced
somewhere, or a test case we have not seen before?
Y.

On Tue, Sep 6, 2016 at 11:37 AM, Eyal Edri <ee...@redhat.com> wrote:

> Glad to hear you found the issue!
> Let me know when it's merged so we can publish it to the nightlies and
> rerun the tests.
>
> On Tue, Sep 6, 2016 at 11:34 AM, Juan Hernández <jhern...@redhat.com>
> wrote:
>
>> On 09/05/2016 08:43 PM, Rafael Martins wrote:
>> > - Original Message -
>> >> From: "Juan Hernández" <jhern...@redhat.com>
>> >> To: "Yaniv Kaul" <yk...@redhat.com>
>> >> Cc: "Sandro Bonazzola" <sbona...@redhat.com>, "Rafael Martins" <
>> rmart...@redhat.com>, "Ondra Machacek"
>> >> <omach...@redhat.com>, "devel" <devel@ovirt.org>
>> >> Sent: Friday, September 2, 2016 2:31:42 PM
>> >> Subject: Re: [ovirt-devel] Failure in log collector on master: Failure
>> fetching information about hypervisors from
>> >> API. Error (ValueError): legacy is not a valid RngSource
>> >>
>> >> On 09/02/2016 02:24 PM, Yaniv Kaul wrote:
>> >>> On Fri, Sep 2, 2016 at 3:14 PM, Juan Hernández <jhern...@redhat.com
>> >>> <mailto:jhern...@redhat.com>> wrote:
>> >>>
>> >>> On 09/02/2016 02:00 PM, Sandro Bonazzola wrote:
>> >>> >
>> >>> >
>> >>> > On Fri, Sep 2, 2016 at 1:38 PM, Yaniv Kaul <yk...@redhat.com
>> >>> > <mailto:yk...@redhat.com>
>> >>> > <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>> wrote:
>> >>> >
>> >>> > Log:
>> >>> >
>> >>> > 2016-09-02 07:26:52::ERROR::hypervisors::197::root::
>> Failure
>> >>> > fetching information about hypervisors from API.
>> >>> > Error (ValueError): legacy is not a valid RngSource
>> >>> > 2016-09-02 07:26:52::ERROR::__main__::1147::root::
>> >>> > _get_hypervisors_from_api: legacy is not a valid RngSource
>> >>> > 2016-09-02 07:26:52::INFO::__main__::1424::root::
>> Gathering oVirt
>> >>> > Engine information...
>> >>> > 2016-09-02 07:27:03::INFO::__main__::1398::root:: Gathering
>> >>> > PostgreSQL the oVirt Engine database and log files from
>> >>> > localhost...
>> >>> > 2016-09-02 07:27:05::INFO::__main__::1859::root:: No
>> hypervisors
>> >>> > were selected, therefore no hypervisor data will be
>> collected.
>> >>> > 2016-09-02 07:27:08::INFO::__main__::1862::root:: Log
>> files have
>> >>> > been collected and placed in
>> >>> > /tmp/sosreport-LogCollector-20160902072705.tar.xz.
>> >>> >
>> >>> >
>> >>> > I am not familiar with this error - first time I've seen it,
>> >>> > while
>> >>> > running on Master, on Lago (with a patch I'm working on -
>> that
>> >>> > adds
>> >>> > DNS and IPv6 support to Lago, nothing more - doesn't seem
>> >>> > relevant).
>> >>> > Any idea?
>> >>> >
>> >>> >
>> >>> >
>> >>> > Probably a change in the API.
>> >>> > Rafael can you reproduce?
>> >>> >
>> >>> > Juan, Ondra, any insight?
>> >>> >
>> >>>
>> >>> That means that the API is returning "legacy" as the value for
>> >>> something
>> >>> that is declared of type "RngSource", and the valid values for
>> that are
>> >>> "random" and "hwrng". But the API can't return that, at least not
>> >>> version 4 of the API. Are you using engine 4? Can you share the
>> output
>> >>> of the clusters resource?
>> >>>
>> >>>   https://.../ovirt-engine/api/clusters
>> >>>
>> >>>
>> >>> Lago is still using the v3 API.
>> >>> I'm not sure what the log collector is using. I assum

Re: [ovirt-devel] Failure in log collector on master: Failure fetching information about hypervisors from API. Error (ValueError): legacy is not a valid RngSource

2016-09-02 Thread Yaniv Kaul
On Fri, Sep 2, 2016 at 3:14 PM, Juan Hernández <jhern...@redhat.com> wrote:

> On 09/02/2016 02:00 PM, Sandro Bonazzola wrote:
> >
> >
> > On Fri, Sep 2, 2016 at 1:38 PM, Yaniv Kaul <yk...@redhat.com
> > <mailto:yk...@redhat.com>> wrote:
> >
> > Log:
> >
> > 2016-09-02 07:26:52::ERROR::hypervisors::197::root:: Failure
> > fetching information about hypervisors from API.
> > Error (ValueError): legacy is not a valid RngSource
> > 2016-09-02 07:26:52::ERROR::__main__::1147::root::
> > _get_hypervisors_from_api: legacy is not a valid RngSource
> > 2016-09-02 07:26:52::INFO::__main__::1424::root:: Gathering oVirt
> > Engine information...
> > 2016-09-02 07:27:03::INFO::__main__::1398::root:: Gathering
> > PostgreSQL the oVirt Engine database and log files from localhost...
> > 2016-09-02 07:27:05::INFO::__main__::1859::root:: No hypervisors
> > were selected, therefore no hypervisor data will be collected.
> > 2016-09-02 07:27:08::INFO::__main__::1862::root:: Log files have
> > been collected and placed in
> > /tmp/sosreport-LogCollector-20160902072705.tar.xz.
> >
> >
> > I am not familiar with this error - first time I've seen it, while
> > running on Master, on Lago (with a patch I'm working on - that adds
> > DNS and IPv6 support to Lago, nothing more - doesn't seem relevant).
> > Any idea?
> >
> >
> >
> > Probably a change in the API.
> > Rafael can you reproduce?
> >
> > Juan, Ondra, any insight?
> >
>
> That means that the API is returning "legacy" as the value for something
> that is declared of type "RngSource", and the valid values for that are
> "random" and "hwrng". But the API can't return that, at least not
> version 4 of the API. Are you using engine 4? Can you share the output
> of the clusters resource?
>
>   https://.../ovirt-engine/api/clusters
>

Lago is still using the v3 API.
I'm not sure what the log collector is using. I assume[1] it's v4.

Y.
[1]
https://github.com/oVirt/ovirt-log-collector/blob/dfaf35675bee3da1c53b4fd74b816efafa13d070/src/helper/hypervisors.py#L8


>
> >
> >
> >
> >
> > Y.
> >
> >
> >
> >
> > --
> > Sandro Bonazzola
> > Better technology. Faster innovation. Powered by community collaboration.
> > See how it works at redhat.com <http://redhat.com>
> > <https://www.redhat.com/it/about/events/red-hat-open-source-day-2016>
> >
> >
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
>
>
> --
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> 3ºD, 28016 Madrid, Spain
> Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Update gerrit plugins (oauth, avatars-gravatar)

2016-09-05 Thread Yaniv Kaul
On Sun, Sep 4, 2016 at 7:41 PM, Shlomo Ben David 
wrote:

> Hi,
>
> Today 04/09/2016 at 23:00 I'm planning to update the gerrit-oauth plugin from v0.3
> ==> v2.11.3
>
> In addition I will add the new avatars-gravatar plugin (v2.11)
>

Does it affect the performance? It looks very colorful and fun, but I'd
hate for it to reduce the performance of our Gerrit instance.
Y.


>
> Update Duration: ~10 min
> During the update the gerrit.ovirt.org server won't be available
>
> An update email will be sent when done.
>
> Best Regards,
>
> Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
> RHCSA | RHCE
> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
>
> OPEN SOURCE - 1 4 011 && 011 4 1
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Failure in log collector on master: Failure fetching information about hypervisors from API. Error (ValueError): legacy is not a valid RngSource

2016-09-02 Thread Yaniv Kaul
Log:

2016-09-02 07:26:52::ERROR::hypervisors::197::root:: Failure fetching
information about hypervisors from API.
Error (ValueError): legacy is not a valid RngSource
2016-09-02 07:26:52::ERROR::__main__::1147::root::
_get_hypervisors_from_api: legacy is not a valid RngSource
2016-09-02 07:26:52::INFO::__main__::1424::root:: Gathering oVirt Engine
information...
2016-09-02 07:27:03::INFO::__main__::1398::root:: Gathering PostgreSQL the
oVirt Engine database and log files from localhost...
2016-09-02 07:27:05::INFO::__main__::1859::root:: No hypervisors were
selected, therefore no hypervisor data will be collected.
2016-09-02 07:27:08::INFO::__main__::1862::root:: Log files have been
collected and placed in /tmp/sosreport-LogCollector-20160902072705.tar.xz.


I am not familiar with this error - first time I've seen it, while running
on Master, on Lago (with a patch I'm working on - that adds DNS and IPv6
support to Lago, nothing more - doesn't seem relevant).
Any idea?


Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-09-09 Thread Yaniv Kaul
Indeed, this is the log collector. I wonder if we collect its logs...
Y.


On Thu, Sep 8, 2016 at 6:54 PM, Eyal Edri  wrote:

> I'm pretty sure lago or ovirt system tests aren't doing it but it's the log
> collector, which is running during that test. I'm not near a computer so I
> can't verify it yet.
>
> On Sep 8, 2016 6:05 PM, "Nir Soffer"  wrote:
>
>> On Thu, Sep 8, 2016 at 5:45 PM, Eyal Edri  wrote:
>> > Adding devel.
>> >
>> > On Thu, Sep 8, 2016 at 5:43 PM, Shlomo Ben David 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> Job [1] is failing with the following error:
>> >>
>> >> lago.ssh: DEBUG: Command 8de75538 on lago_basic_suite_master_engine
>> >> errors:
>> >>  ERROR: Failed to collect logs from: 192.168.200.2; /bin/ls:
>> >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-945189
>> 0fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
>> >> No such file or directory
>> >> /bin/ls: cannot open directory
>> >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-945189
>> 0fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
>> >> No such file or directory
>>
>> This looks like a lago issue - it should never read anything inside /rhev
>>
>> This is a private directory for vdsm, no other process should ever depend
>> on the content inside this directory, or even on the fact that it exists.
>>
>> In particular, /rhev/data-center/mnt/blockSD/*/master/tasks/*.temp
>> is not a log file, and lago should not collect it.
>>
>> Nir
>>
>> >> lago.utils: ERROR: Error while running thread
>> >> Traceback (most recent call last):
>> >>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 53, in
>> >> _ret_via_queue
>> >> queue.put({'return': func()})
>> >>   File
>> >> "/home/jenkins/workspace/test-repo_ovirt_experimental_master
>> /ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py",
>> >> line 493, in log_collector
>> >> result.code, 0, 'log collector failed. Exit code is %s' %
>> result.code
>> >>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line
>> 29,
>> >> in eq_
>> >> raise AssertionError(msg or "%r != %r" % (a, b))
>> >> AssertionError: log collector failed. Exit code is 2
>> >>
>> >>
>> >> * The previous issue already fixed (SDK) and now we have a new issue on
>> >> the same area.
>> >>
>> >>
>> >> [1] -
>> >> http://jenkins.ovirt.org/view/experimental%20jobs/job/test-r
>> epo_ovirt_experimental_master/1462/testReport/(root)/002_
>> bootstrap/add_secondary_storage_domains/
>> >>
>> >>
>> >> Best Regards,
>> >>
>> >> Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
>> >> RHCSA | RHCE
>> >> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
>> >>
>> >> OPEN SOURCE - 1 4 011 && 011 4 1
>> >
>> >
>> >
>> >
>> > --
>> > Eyal Edri
>> > Associate Manager
>> > RHV DevOps
>> > EMEA ENG Virtualization R
>> > Red Hat Israel
>> >
>> > phone: +972-9-7692018
>> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/devel
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] ovirt-appliance (today) is 1.5G...

2016-10-06 Thread Yaniv Kaul
I'm sure it was less just days ago, but now[1] it is again quite big.
4.0's is 'only' 1G in size[2].
Which anyway raises the question - should we provide delta RPMs for it?

Thanks,
Y.

[1]
http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/ovirt-engine-appliance-4.1-20160927.1.el7.centos.noarch.rpm
[2]
http://resources.ovirt.org/pub/ovirt-4.0-snapshot/rpm/el7/noarch/ovirt-engine-appliance-4.0-20161003.1.el7.centos.noarch.rpm
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] backend jar dependencies that should probably be symlinks

2016-10-04 Thread Yaniv Kaul
On Tue, Oct 4, 2016 at 5:33 PM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Oct 4, 2016 at 3:27 PM, Juan Hernández 
> wrote:
>
>> On 10/04/2016 03:16 PM, Sandro Bonazzola wrote:
>> > Hi, I'm checking the packaging of ovirt-engine for 4.1 and I've some
>> doubts:
>> >
>> > $ LC_ALL=C rpm -qlvp
>> > http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc2
>> 4/noarch/ovirt-engine-backend-4.1.0-0.0.master.2016100322192
>> 1.git2653cbc.fc24.noarch.rpm|grep
>> > jar |grep -v ^l |grep common
>> > -rw-r--r--1 rootroot77761 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/com/netflix/config/
>> main/archaius-core.jar
>> > -rw-r--r--1 rootroot16442 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/com/netflix/hystrix/
>> contrib/main/hystrix-metrics-event-stream.jar
>> > -rw-r--r--1 rootroot   290223 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/com/netflix/hystrix/
>> main/hystrix-core.jar
>> > -rw-r--r--1 rootroot   738300 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/io/reactivex/rxjava/
>> main/rxjava.jar
>> > -rw-r--r--1 rootroot 6073 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/ovirt/engine/api/
>> metamodel-server/main/metamodel-server.jar
>> > -rw-r--r--1 rootroot 8224 Oct  4 00:21
>> > /usr/share/ovirt-engine/modules/common/org/ovirt/engine/
>> core/auth-plugin/main/auth-plugin.jar
>> > -rw-r--r--1 rootroot 4010 Oct  4 00:22
>> > /usr/share/ovirt-engine/modules/common/org/ovirt/engine/
>> core/logger/main/logger.jar
>> > -rw-r--r--1 rootroot   370051 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-aop.jar
>> > -rw-r--r--1 rootroot   731512 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-beans.jar
>> > -rw-r--r--1 rootroot  1097552 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-context.jar
>> > -rw-r--r--1 rootroot  1078737 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-core.jar
>> > -rw-r--r--1 rootroot   262990 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-expression.jar
>> > -rw-r--r--1 rootroot 7243 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-instrument.jar
>> > -rw-r--r--1 rootroot   423369 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-jdbc.jar
>> > -rw-r--r--1 rootroot   265523 Oct  4 00:20
>> > /usr/share/ovirt-engine/modules/common/org/springframework/
>> main/spring-tx.jar
>> >
>> > $ dnf provides "*/archaius-core.jar" -> archaius-core-0.7.3-4.fc24.noa
>> rch
>> > $ dnf provides "*/hystrix-metrics-event-stream.jar"
>> > -> hystrix-metrics-event-stream-1.4.21-5.fc24.noarch
>> > and so on with the other jar files.
>> >
>> > Any chance we can just reuse system libs and symlink them?
>> >
>> > Please note that the question is for el7 as well:
>> >
>> > $ LC_ALL=C rpm -qlvp
>> > http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7
>> /noarch/ovirt-engine-backend-4.1.0-0.0.master.2016100321131
>> 3.git2653cbc.el7.centos.noarch.rpm
>> > |grep jar |grep -v ^l |grep common
>> > -rw-r--r--1 rootroot   608376 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/mchange/c3p0/main/c3p0.jar
>> > -rw-r--r--1 rootroot77761 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/netflix/config/
>> main/archaius-core.jar
>> > -rw-r--r--1 rootroot16442 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/netflix/hystrix/
>> contrib/main/hystrix-metrics-event-stream.jar
>> > -rw-r--r--1 rootroot   290223 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/netflix/hystrix/
>> main/hystrix-core.jar
>> > -rw-r--r--1 rootroot23234 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/woorea/openstack/
>> sdk/main/cinder-client.jar
>> > -rw-r--r--1 rootroot20755 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/woorea/openstack/
>> sdk/main/cinder-model.jar
>> > -rw-r--r--1 rootroot18277 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/woorea/openstack/
>> sdk/main/glance-client.jar
>> > -rw-r--r--1 rootroot 8780 Oct  3 23:14
>> > /usr/share/ovirt-engine/modules/common/com/woorea/openstack/
>> sdk/main/glance-model.jar
>> > -rw-r--r--1 rootroot

Re: [ovirt-devel] Outreachy internship

2016-10-05 Thread Yaniv Kaul
Hi,

We have a very interesting testing framework[1] and tests[2] for oVirt.
The framework, Lago, is using nested virtualization (a VM within a VM,
running on a host - two levels of virtualization) to completely virtualize
an oVirt environment - a manager and hosts.
I suggest you first try to run it. Once you do, there are plenty of
opportunities to contribute in both areas: the framework, and the tests.
The most important task in the tests is moving them to oVirt's latest API
- v4.
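
To get a feel for the v4 API, here is a minimal, illustrative sketch using the
Python SDK (ovirtsdk4); the URL and credentials are placeholders, not a real
environment:

    import ovirtsdk4 as sdk

    # Illustrative only: URL, credentials and insecure=True are placeholders
    # for a throwaway test environment, not real values.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    try:
        vms_service = connection.system_service().vms_service()
        for vm in vms_service.list():
            print(vm.name)
    finally:
        connection.close()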
Y.

[1] http://lago.readthedocs.io/en/latest/README.html
[2] https://gerrit.ovirt.org/#/q/project:ovirt-system-tests

On Wed, Oct 5, 2016 at 8:40 PM, Саша Ершова 
wrote:

> Dear all,
> My name is Alexandra Ershova, and I'm a student in Natural Language
> Processing in Higher School of Economics, Moscow, Russia. I'd like to take
> part in the current round of Outreachy internships. My main programming
> language is Python (I have experience with both 2 and 3). Writing system
> tests seems like an interesting project to me, and I would like to do it.
> Could you please give me an application task, so that I could make my
> first contribution?
>
> Best regards,
> Alexandra Ershova
> github.com/religofsil
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] All tests using directio fail on CI

2016-09-28 Thread Yaniv Kaul
On Sep 28, 2016 11:37 PM, "Nir Soffer"  wrote:
>
> On Wed, Sep 28, 2016 at 11:20 PM, Nir Soffer  wrote:
> > On Wed, Sep 28, 2016 at 10:31 PM, Barak Korren 
wrote:
> >> The CI setup did not change recently.
> >
> > Great
> >
> >> All standard-CI jobs run inside mock (chroot) which is stored on top
> >> of a regular FS, so they should not be affected by the slave OS at all
> >> as far as FS settings go.
> >>
> >> But perhaps some slave-OS/mock-OS combination is acting strangely, so
> >> could you be more specific and point to particular job runs that fail?
> >
> > This job failed, but it was deleted (I get 404 now):
> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-x86_64/2530/
>
> Oops, wrong build.
>
> This is the failing build:
>
>
http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/1054/console
>
> And this is probably the reason - using a ram disk:
>
> 12:24:53 Building remotely on ovirt-srv08.phx.ovirt.org (phx physical
> integ-tests ram_disk fc23) in workspace
> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
>
> We cannot run the storage tests using a ramdisk. We are creating
> (tiny) volumes and storage domains and doing copies; this code cannot
> work with a ramdisk.

Will it work on zram?
What if we configure ram based iSCSI targets?
Y.
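
(For reference, a RAM-backed iSCSI target is easy to sketch with targetcli's
ramdisk backstore - all names and sizes below are placeholders, and the LUN the
initiator sees is a regular block device, so direct IO should work against it:

  targetcli /backstores/ramdisk create name=rd0 size=2GiB
  targetcli /iscsi create iqn.2016-09.org.ovirt.ci:ramtarget
  targetcli /iscsi/iqn.2016-09.org.ovirt.ci:ramtarget/tpg1/luns create /backstores/ramdisk/rd0
  targetcli /iscsi/iqn.2016-09.org.ovirt.ci:ramtarget/tpg1/portals create 0.0.0.0 3260
  targetcli saveconfig

ACLs, or demo-mode access, would still need to be set up for the initiators,
and if a default portal already exists the portals line can be skipped.)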

>
> Attaching full console log before the job is deleted.
>
> >
> > Then I triggered the tests and they passed:
> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-x86_64/2548/
> >
> > Thanks
> > Nir
> >
> >>
> >> On 28 September 2016 at 22:00, Nir Soffer  wrote:
> >>> Hi all,
> >>>
> >>> It seems that the CI setup has changed, and /var/tmp is using now
tempfs.
> >>>
> >>> This is not compatible with vdsm tests, assuming that /var/tmp is a
real file
> >>> system. This is the reason we do not use /tmp.
> >>>
> >>> We have lot of storage tests using directio, and directio cannot work
on
> >>> tempfs.
> >>>
> >>> Please check the slaves and make sure /var/tmp is using file system
supporting
> >>> directio.
> >>>
> >>> See example failure bellow.
> >>>
> >>> Nir
> >>>
> >>> 
> >>>
> >>> 12:33:20
==
> >>> 12:33:20 ERROR: test_create_fail_creating_lease
> >>> (storage_volume_artifacts_test.BlockVolumeArtifactsTests)
> >>> 12:33:20
--
> >>> 12:33:20 Traceback (most recent call last):
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/storage_volume_artifacts_test.py",
> >>> line 485, in test_create_fail_creating_lease
> >>> 12:33:20 *BASE_PARAMS[sc.RAW_FORMAT])
> >>> 12:33:20   File "/usr/lib64/python2.7/unittest/case.py", line 513, in
> >>> assertRaises
> >>> 12:33:20 callableObj(*args, **kwargs)
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm/storage/sdm/volume_artifacts.py",
> >>> line 391, in create
> >>> 12:33:20 desc, parent)
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm/storage/sdm/volume_artifacts.py",
> >>> line 482, in _create_metadata
> >>> 12:33:20 sc.LEGAL_VOL)
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm/storage/volume.py",
> >>> line 427, in newMetadata
> >>> 12:33:20 cls.createMetadata(metaId, meta.legacy_info())
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm/storage/volume.py",
> >>> line 420, in createMetadata
> >>> 12:33:20 cls._putMetadata(metaId, meta)
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm/storage/blockVolume.py",
> >>> line 242, in _putMetadata
> >>> 12:33:20 f.write(data)
> >>> 12:33:20   File
> >>>
"/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/lib/vdsm/storage/directio.py",
> >>> line 161, in write
> >>> 12:33:20 raise OSError(err, msg)
> >>> 12:33:20 OSError: [Errno 22] Invalid argument
> >>> 12:33:20  >> begin captured logging <<

> >>> 12:33:20 2016-09-28 05:32:03,254 DEBUG   [storage.PersistentDict]
> >>> (MainThread) Created a persistent dict with VGTagMetadataRW backend
> >>> 12:33:20 2016-09-28 05:32:03,255 DEBUG   [storage.PersistentDict]
> >>> (MainThread) read lines (VGTagMetadataRW)=[]
> >>> 12:33:20 2016-09-28 05:32:03,255 DEBUG   [storage.PersistentDict]
> >>> (MainThread) Empty metadata
> >>> 12:33:20 2016-09-28 05:32:03,255 DEBUG   [storage.PersistentDict]
> >>> (MainThread) Starting transaction
> >>> 12:33:20 2016-09-28 05:32:03,256 DEBUG   [storage.PersistentDict]
> >>> (MainThread) Flushing changes
> >>> 12:33:20 2016-09-28 05:32:03,256 DEBUG   [storage.PersistentDict]
> >>> (MainThread) about to write lines (VGTagMetadataRW)=['CLASS=Data',
> >>> 

Re: [ovirt-devel] [VDSM] All tests using directio fail on CI

2016-09-29 Thread Yaniv Kaul
On Sep 29, 2016 10:28 AM, "Evgheni Dereveanchin" <edere...@redhat.com>
wrote:
>
> Hi,
>
> Indeed the proposed dd test does not work on zRAM slaves.
> Can we modify the job not to run on nodes with ram_disk label?

Are those zram based or ram based *virtio-blk* disks,  or zram/ram disks
within the VMs?
The former should work. The latter -  no idea.

>
> The node will be offline for now until we agree on what to do.
> An option is to abandon RAM disks completely as we didn't find
> any performance benefits from using them so far.

That's very surprising. In my case it doubles the performance, at least.
But I assume my storage (single disk) is far slower than yours.
Y.

>
> Regards,
> Evgheni Dereveanchin
>
> - Original Message -
> From: "Eyal Edri" <ee...@redhat.com>
> To: "Nir Soffer" <nsof...@redhat.com>, "Evgheni Dereveanchin" <
edere...@redhat.com>
> Cc: "Yaniv Kaul" <yk...@redhat.com>, "devel" <devel@ovirt.org>, "infra" <
in...@ovirt.org>
> Sent: Thursday, 29 September, 2016 8:08:45 AM
> Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI
>
> Evgheni,
> Can you try switching the current RAM drive with zram?
>
> On Wed, Sep 28, 2016 at 11:43 PM, Nir Soffer <nsof...@redhat.com> wrote:
>
> > On Wed, Sep 28, 2016 at 11:39 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> > > On Sep 28, 2016 11:37 PM, "Nir Soffer" <nsof...@redhat.com> wrote:
> > >>
> > >> On Wed, Sep 28, 2016 at 11:20 PM, Nir Soffer <nsof...@redhat.com>
> > wrote:
> > >> > On Wed, Sep 28, 2016 at 10:31 PM, Barak Korren <bkor...@redhat.com>
> > >> > wrote:
> > >> >> The CI setup did not change recently.
> > >> >
> > >> > Great
> > >> >
> > >> >> All standard-CI jobs run inside mock (chroot) which is stored on
top
> > >> >> of a regular FS, so they should not be affected by the slave OS at
> > all
> > >> >> as far as FS settings go.
> > >> >>
> > >> >> But perhaps some slave-OS/mock-OS combination is acting
strangely, so
> > >> >> could you be more specific and point to particular job runs that
> > fail?
> > >> >
> > >> > This jobs failed, but it was deleted (I get 404 now):
> > >> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-
> > x86_64/2530/
> > >>
> > >> Oops, wrong build.
> > >>
> > >> This is the failing build:
> > >>
> > >>
> > >> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-
> > x86_64/1054/console
> > >>
> > >> And this is probably the reason - using a ram disk:
> > >>
> > >> 12:24:53 Building remotely on ovirt-srv08.phx.ovirt.org (phx physical
> > >> integ-tests ram_disk fc23) in workspace
> > >> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
> > >>
> > >> We cannot run the storage tests using a ramdisk. We are creating
> > >> (tiny) volumes and storage domains and doing copies, this code cannot
> > >> work with ramdisk.
> > >
> > > Will it work on zram?
> > > What if we configure ram based iSCSI targets?
> >
> > I don't know, but it is easy to test - it this works the tests will
work:
> >
> > dd if=/dev/zero of=file bs=512 count=1 oflag=direct
> > ___
> > Infra mailing list
> > in...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] All tests using directio fail on CI

2016-09-29 Thread Yaniv Kaul
zram does not support direct IO (tested, indeed fails).
What I do is host the VMs there, though - this is working - but I'm using
Lago (and not oVirt). does oVirt need direct IO for the temp disks? I
thought we are doing them on the libvirt level?

This is the command I use:
sudo modprobe zram num_devices=1 && sudo zramctl --find --size 12G && sudo
mkfs.xfs -K /dev/zram0 && sudo mount -o nobarrier /dev/zram0 /home/zram &&
sudo chmod 777 /home/zram

And then I run lago with: ./run_suite.sh -o /home/zram basic_suite_master

Y.


On Thu, Sep 29, 2016 at 10:47 AM, Evgheni Dereveanchin <edere...@redhat.com>
wrote:

> Hi Yaniv,
>
> this is a physical server with work directories
> created on a zRAM device, here's the patch:
> https://gerrit.ovirt.org/#/c/62249/2/site/ovirt_jenkins_
> slave/templates/prepare-ram-disk.service.erb
>
> I'll still need to read up on this but the only
> slave having this class (ovirt-srv08) is now offline
> and should not cause issues. I tested on VM slaves
> and did not see errors from the dd test command you provided.
>
> Please tell me if you see errors on other nodes and I'll
> check what's going on but it must be something else than RAM disks.
>
> Regards,
> Evgheni Dereveanchin
>
> - Original Message -
> From: "Yaniv Kaul" <yk...@redhat.com>
> To: "Evgheni Dereveanchin" <edere...@redhat.com>
> Cc: "infra" <in...@ovirt.org>, "devel" <devel@ovirt.org>, "Eyal Edri" <
> ee...@redhat.com>, "Nir Soffer" <nsof...@redhat.com>
> Sent: Thursday, 29 September, 2016 9:32:45 AM
> Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI
>
> On Sep 29, 2016 10:28 AM, "Evgheni Dereveanchin" <edere...@redhat.com>
> wrote:
> >
> > Hi,
> >
> > Indeed the proposed dd test does not work on zRAM slaves.
> > Can we modify the job not to run on nodes with ram_disk label?
>
> Are those zram based or ram based *virtio-blk* disks,  or zram/ram disks
> within the VMs?
> The former should work. The latter -  no idea.
>
> >
> > The node will be offline for now until we agree on what to do.
> > An option is to abandon RAM disks completely as we didn't find
> > any performance benefits from using them so far.
>
> That's very surprising. In my case it doubles the performance, at least.
> But I assume my storage (single disk) is far slower than yours.
> Y.
>
> >
> > Regards,
> > Evgheni Dereveanchin
> >
> > - Original Message -
> > From: "Eyal Edri" <ee...@redhat.com>
> > To: "Nir Soffer" <nsof...@redhat.com>, "Evgheni Dereveanchin" <
> edere...@redhat.com>
> > Cc: "Yaniv Kaul" <yk...@redhat.com>, "devel" <devel@ovirt.org>, "infra"
> <
> in...@ovirt.org>
> > Sent: Thursday, 29 September, 2016 8:08:45 AM
> > Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI
> >
> > Evgheni,
> > Can you try switching the current RAM drive with zram?
> >
> > On Wed, Sep 28, 2016 at 11:43 PM, Nir Soffer <nsof...@redhat.com> wrote:
> >
> > > On Wed, Sep 28, 2016 at 11:39 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> > > > On Sep 28, 2016 11:37 PM, "Nir Soffer" <nsof...@redhat.com> wrote:
> > > >>
> > > >> On Wed, Sep 28, 2016 at 11:20 PM, Nir Soffer <nsof...@redhat.com>
> > > wrote:
> > > >> > On Wed, Sep 28, 2016 at 10:31 PM, Barak Korren <
> bkor...@redhat.com>
> > > >> > wrote:
> > > >> >> The CI setup did not change recently.
> > > >> >
> > > >> > Great
> > > >> >
> > > >> >> All standard-CI jobs run inside mock (chroot) which is stored on
> top
> > > >> >> of a regular FS, so they should not be affected by the slave OS
> at
> > > all
> > > >> >> as far as FS settings go.
> > > >> >>
> > > >> >> But perhaps some slave-OS/mock-OS combination is acting
> strangely, so
> > > >> >> could you be more specific and point to particular job runs that
> > > fail?
> > > >> >
> > > >> > This jobs failed, but it was deleted (I get 404 now):
> > > >> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-
> > > x86_64/2530/
> > > >>
> > > >> Oops, wrong build.
> > > >>
> > > >

Re: [ovirt-devel] [VDSM] All tests using directio fail on CI

2016-09-29 Thread Yaniv Kaul
On Thu, Sep 29, 2016 at 12:55 PM, Anton Marchukov 
wrote:

>
>> > The node will be offline for now until we agree on what to do.
>> > An option is to abandon RAM disks completely as we didn't find
>> > any performance benefits from using them so far.
>>
>> That's very surprising. In my case it doubles the performance, at least.
>> But I assume my storage (single disk) is far slower than yours.
>>
>
> What amount of RAM you had available to Linux file system cache and were
> there any previous runs so Linux were able to put any mock caches into the
> RAM cache?
>

I don't do mock. And if I run everything in RAM (whether directly under
/dev/shm/ or in a zram disk), I honestly don't need the Linux
system cache.


>
> Besides the possible difference in disk speeds I think the second factor
> is this Linux fs cache that basically create an analog of RAM disk on the
> fly.
>

Well, theoretically, if you have enough RAM and you keep re-running, much
of the data is indeed going to be cached. I'd argue that it's a better use
of RAM to just run it there.


>
> Those two things might explain why we do not see any performance
> improvement from RAM drives in our case.
>

Indeed.
Y.


>
> Anton.
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Failure on start VM in ovirt-system-tests from patches merged to master on the 25/10/2016

2016-10-29 Thread Yaniv Kaul
IDK - still fails for me, though on a potentially different issue:
Cluster creation fails with:
RequestError:
status: 400
reason: Bad Request
detail: Cannot create Cluster. The chosen CPU is not supported.


The chosen CPU is the same CPU I've used for several weeks now...

Engine.log shows:
2016-10-29 12:18:50,952 DEBUG
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ServerService
Thread Pool -- 54) [] Didn't find the value of 'ServerCPUList' in DB for
version '4.1' - using default: '1:
pentium3:vmx:pentium3;2:intel-qemu64-nx:vmx,sse2:qemu64,-nx,+sse2;3:intel-qemu64:vmx,sse2,nx:qemu64,+sse2;2:amd-qemu64-nx:svm,sse2:qemu64,-nx,+sse2;3:amd-qemu64:svm,sse2,nx:qemu64,+sse2'
2016-10-29 12:18:50,952 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU '1:pentium3:vmx:pentium3', not in
expected format.
2016-10-29 12:18:50,952 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'2:intel-qemu64-nx:vmx,sse2:qemu64,-nx,+sse2', not in expected for
mat.
2016-10-29 12:18:50,952 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'3:intel-qemu64:vmx,sse2,nx:qemu64,+sse2', not in expected format.
2016-10-29 12:18:50,952 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'2:amd-qemu64-nx:svm,sse2:qemu64,-nx,+sse2', not in expected forma
t.
2016-10-29 12:18:50,952 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'3:amd-qemu64:svm,sse2,nx:qemu64,+sse2', not in expected format.
2016-10-29 12:18:50,953 DEBUG
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ServerService
Thread Pool -- 54) [] Didn't find the value of 'ServerCPUList' in DB for
version '4.0' - using default: '1:
pentium3:vmx:pentium3;2:intel-qemu64-nx:vmx,sse2:qemu64,-nx,+sse2;3:intel-qemu64:vmx,sse2,nx:qemu64,+sse2;2:amd-qemu64-nx:svm,sse2:qemu64,-nx,+sse2;3:amd-qemu64:svm,sse2,nx:qemu64,+sse2'
2016-10-29 12:18:50,953 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU '1:pentium3:vmx:pentium3', not in
expected format.
2016-10-29 12:18:50,953 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'2:intel-qemu64-nx:vmx,sse2:qemu64,-nx,+sse2', not in expected for
mat.
2016-10-29 12:18:50,953 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'3:intel-qemu64:vmx,sse2,nx:qemu64,+sse2', not in expected format.
2016-10-29 12:18:50,953 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'2:amd-qemu64-nx:svm,sse2:qemu64,-nx,+sse2', not in expected forma
t.
2016-10-29 12:18:50,953 ERROR
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Error getting info for CPU
'3:amd-qemu64:svm,sse2,nx:qemu64,+sse2', not in expected format.
2016-10-29 12:18:50,954 INFO
 [org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (ServerService Thread
Pool -- 54) [] Finished initializing dictionaries


On Thu, Oct 27, 2016 at 10:01 PM, Allon Mureinik <amure...@redhat.com>
wrote:

> And now the CI job is [finally] passing.
> Piotr/Eli - the stomp timeout may be worth investigating, but it's
> definitely NOT the root cause of the previous failures, so feel free to
> deprioritize it as you see fit.
>
> Thanks to everyone who helped debug/investigate/review these issues, and
> sorry for the noise.
>
> On Thu, Oct 27, 2016 at 6:40 PM, Allon Mureinik <amure...@redhat.com>
> wrote:
>
>> The 004 CI is now passing, and it fails on 006.
>> I merged a patch for the failure, let's see where we get next.
>>
>> On Thu, Oct 27, 2016 at 3:13 PM, Allon Mureinik <amure...@redhat.com>
>> wrote:
>>
>>> Now I also see it in the CI.
>>>
>>> I merged the patch so we can squeeze in as many CI runs as possible
>>> before the weekend.
>>>
>>> On Thu, Oct 27, 2016 at 11:38 AM, Allon Mureinik <amure...@redhat.com>
>>> wrote:
>>>
>>>> [Adding Martin Sivak.]
>>>>
>>>> That reproduces on my setup too, but didn't see it in CI, and is not
>>>> related to the recent injection issues.
>>>>
>>>> Martin - This issue seems to have been introduced in your patch 0e4ae6b.
>>>> I'm not sure exactly why java doesn't like the @NotNull annotation on
>>>> schedule, but empirically it does, as removing it solves the issue.
>>>> I've posted https://gerrit.ovirt.org/65784 to do so - please review
>

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Yaniv Kaul
On Sun, Oct 30, 2016 at 12:26 PM, Nadav Goldin <ngol...@redhat.com> wrote:

> Hi all, bumping this thread due to an almost identical failure[1]:
>
> ovirt-log-collector/ovirt-log-collector-20161030053238.log:2016-10-30
> 05:33:09::ERROR::__main__::791::root:: Failed to collect logs from:
> 192.168.200.4; /bin/ls:
> /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.job.0:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.1:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.task:
> No such file or directory
> ovirt-log-collector/ovirt-log-collector-20161030053238.log-/bin/ls:
> cannot access /rhev/data-center/mnt/blockSD/63c4fdd3-5d0f-4d16-b1e5-
> 5f43caa4cf82/master/tasks/6b3b6aa1-808c-42df-9db7-
> 52349f8533f2/6b3b6aa1-808c-42df-9db7-52349f8533f2.recover.0:
> No such file or directory
>
> To ensure I've checked lago/OST, and couldn't find any stage where
> there is a reference to '/rhv' nor any manipulation to
> ovirt-log-collector, only customizations made is a
> 'ovirt-log-collector.conf' with user/password. The code that pulls the
> logs in OST[2] runs the following command on the engine VM(and there
> it fails):
>
> ovirt-log-collector --conf /root/ovirt-log-collector.conf
>
> The failure comes right after 'add_secondary_storage_domains'[3] test,
> which all of its steps ran successfully.
>

Not exactly.


>
> Can anyone look into this?
>

It may be my fault, in a way. I've added the log collector test to run in
parallel to the tests that add the secondary storage domains. The
directories it tries to access may or may not be available - this is
probably racy. I don't think it should fail, but I can certainly see why it
can.
The easiest 'fix' would be to split it into its own test (I wanted to save
execution time, as most of the time spent on the secondary storage domains test
is not really useful).
Y.



>
> Thanks,
> Nadav.
>
> [1] http://jenkins.ovirt.org/job/ovirt-system-tests_master_
> check-patch-fc24-x86_64/141/console
> [2] https://github.com/oVirt/ovirt-system-tests/blob/
> master/basic_suite_master/test-scenarios/002_bootstrap.py#L490
> [3] https://github.com/oVirt/ovirt-system-tests/blob/
> master/basic_suite_master/test-scenarios/002_bootstrap.py#L243
>
>
> On Tue, Sep 20, 2016 at 9:45 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
> >
> >
> >
> > On Fri, Sep 9, 2016 at 1:19 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >> Indeed, this is the log collector. I wonder if we collect its logs...
> >> Y.
> >
> >
> > This can't be log-collector, it can be sos vdsm plugin.
> > That said, if we run log-collector within lago we should collect the
> results as job artifacts.
> >
> >
> >>
> >>
> >>
> >> On Thu, Sep 8, 2016 at 6:54 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>>
> >>> I'm pretty sure lago or ovirt system tests aren't doing it but its the
> log collector which is running during that test, I'm not near a computer so
> can't verify it yet.
> >>>
> >>>
> >>> On Sep 8, 2016 6:05 PM, "Nir Soffer" <nsof...@redhat.com> wrote:
> >>>>
> >>>> On Thu, Sep 8, 2016 at 5:45 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>>> > Adding devel.
> >>>> >
> >>>> > On Thu, Sep 8, 2016 at 5:43 PM, Shlomo Ben David <
> sbend...@redhat.com>
> >>>> > wrote:
> >>>> >>
> >>>> >> Hi,
> >>>> >>
> >>>> >> Job [1] is failing with the following error:
> >>>> >>
> >>>> >> lago.ssh: DEBUG: Command 8de75538 on lago_basic_suite_master_engine
> >>>> >> errors:
> >>>> >>  ERROR: Failed to collect logs from: 192.168.200.2; /bin/ls:
> >>>> >> /rhev/data-center/mnt/blockSD/eb8c9f48-5f23-48dc-ab7d-
> 9451890fd422/master/tasks/1350bed7-443e-4ae6-ae1f-9b24d18c70a8.temp:
> >>>> >> No such file or directory
> >>>> >> /bin/ls: cannot open directory
> 

Re: [ovirt-devel] test-repo_ovirt_experimental_master job - failed

2016-10-30 Thread Yaniv Kaul
On Sun, Oct 30, 2016 at 12:57 PM, Nadav Goldin <ngol...@redhat.com> wrote:

> On Sun, Oct 30, 2016 at 12:40 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> > Not exactly.
>
> My bad, missed that the tests run in parallel, though what this means
> is that 'ovirt-log-collector' can fail when there are ongoing
> tasks (such as adding the storage domains), I assume that is not the
> expected behaviour. I'll send a patch separating the test for now.
>

https://gerrit.ovirt.org/#/c/65857/1

Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-11-09 Thread Yaniv Kaul
On Sat, Oct 15, 2016 at 1:04 AM, Ravi Nori <rn...@redhat.com> wrote:

> Also can you please try following command to directly obtain token from
> SSO. Can replace engine with FQDN and IP to see if both work
>
> curl -v -k -H "Accept: application/json" 'https://:443/ovirt-
> engine/sso/oauth/token?grant_type=password&username=admin@
> internal&password=123&scope=ovirt-app-api'
>
> You should see output similar to the one below
>
> {"access_token":"K0sBa0D3rLtmNTdMJ-Q4FzOgCtGGY2cSFSCwbLkG94te9nDd
> mEzHSizsFaOeNMdwOziIv3l2-Uqm8bxWkMpwMA","scope":"ovirt-app-api
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
> ovirt-ext=token-info:validate","exp":-381399824,"token_type":"bearer"}
>

Sorry it took me so long to get back to it, but here it is:
{"access_token":"eA8w0DaapkKAQ8tfHakzA-R0l-mjD_CsTlAqBaH4iVVjXxQN33poXzt9UhPJLxMU8YOvVNX6LICcxL1EeAiAlw","scope":"ovirt-app-api
ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
ovirt-ext=token-info:validate","exp":["java.lang.Long",1479290132000],"token_type":"bearer"}

And here's the difference between the SDK and the manual curl command in
ssl_access log:
192.168.201.1 - - [09/Nov/2016:04:52:19 -0500] "POST
/ovirt-engine/sso/oauth/token HTTP/1.1" 404 74
192.168.201.1 - - [09/Nov/2016:04:55:32 -0500] "GET
/ovirt-engine/sso/oauth/token?grant_type=password&username=admin@internal&password=123&scope=ovirt-app-api
HTTP/1.1" 200 295



> Thanks
>
> Ravi
>
> On Fri, Oct 14, 2016 at 4:00 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>> On Oct 14, 2016 7:13 PM, "Ravi Nori" <rn...@redhat.com> wrote:
>> >
>> > SSO configuration looks good.
>> >
>> > Can you please share any additional httpd configuration in
>> /etc/httpd/conf.d. Anything to do with LocationMatch for ovirt-engine urls.
>>
>> This is a standard ovirt-system-tests on Lago installation, nothing out
>> of the ordinary,  but I'll check.
>> Y.
>>
>> >
>> > On Fri, Oct 14, 2016 at 12:52 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>> >>
>> >>
>> >>
>> >> On Fri, Oct 14, 2016 at 3:50 PM, Ravi Nori <rn...@redhat.com> wrote:
>> >>>
>> >>> Hi Yaniv,
>> >>>
>> >>> Can you check the output of https::///ovirt-engine/sso/status
>> in your browser and see if the SSO service is active.
>> >>>
>> >>> If SSO is deployed, you should see an output similar to the one
>> below. Also are you able to login to webadmin using the browser?
>> >>
>> >>
>> >> I am able to login using the webui.
>> >>
>> >>>
>> >>>
>> >>> {"status_description":"SSO Webapp Deployed","version":"0","statu
>> s":"active"}
>> >>
>> >>
>> >> Indeed:
>> >> {"status_description":"SSO Webapp Deployed","version":"0","statu
>> s":"active"}
>> >>
>> >> (not sure what 'version 0' means?)
>> >>
>> >>>
>> >>>
>> >>> Please share the content of /etc/ovirt-engine/engine.conf.
>> d/11-setup-sso.conf
>> >>
>> >>
>> >> [root@lago-basic-suite-master-engine ~]# cat
>> /etc/ovirt-engine/engine.conf.d/11-setup-sso.conf
>> >> ENGINE_SSO_CLIENT_ID="ovirt-engine-core"
>> >> ENGINE_SSO_CLIENT_SECRET="bsOabtD7gE2McwLe80P109UV800XLx4O"
>> >> ENGINE_SSO_AUTH_URL="https://${ENGINE_FQDN}:443/ovirt-engine/sso"
>> >> ENGINE_SSO_SERVICE_URL="https://localhost:443/ovirt-engine/sso"
>> >> ENGINE_SSO_SERVICE_SSL_VERIFY_HOST=false
>> >> ENGINE_SSO_SERVICE_SSL_VERIFY_CHAIN=true
>> >> SSO_ALTERNATE_ENGINE_FQDNS=""
>> >> SSO_ENGINE_URL="https://${ENGINE_FQDN}:443/ovirt-engine/"
>> >>
>> >>
>> >> Thanks,
>> >> Y.
>> >>
>> >>
>> >>>
>> >>>
>> >>> Thanks
>> >>>
>> >>> Ravi
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On Fri, Oct 14, 2016 at 7:57 AM, Juan Hernández <jhern...@redhat.com>
>> wrote:
>> >>>>
>> >>>> On 10/14/2016 01:45 PM, Yaniv Kaul wrote:
>> >>>> 

Re: [ovirt-devel] [vdsm] Connection refused when talking to jsonrpc

2016-11-09 Thread Yaniv Kaul
On Tue, Nov 8, 2016 at 7:12 PM, Michal Skrivanek 
wrote:

>
>
> > On 08 Nov 2016, at 17:52, Martin Sivak  wrote:
> >
> > Hi,
> >
> > mom-vdsm.service contains:
> >
> > Requires=vdsmd.service
> > After=vdsmd.service
> >
> > So when Shira restarted vdsm, mom was also restarted.
>

What is the reason to restart mom when VDSM is restarted?
Y.
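
(A hypothetical mitigation for the race described below - not something the
packages ship - would be a systemd drop-in, e.g.
/etc/systemd/system/mom-vdsm.service.d/retry.conf, that lets mom-vdsm retry
instead of staying failed while vdsm is still recovering:

  [Service]
  Restart=on-failure
  RestartSec=10

followed by a systemctl daemon-reload. That only papers over the symptom,
though; the cleaner fix is for MOM to keep retrying until vdsm reports it is
out of recovery, as discussed below.)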



> >
> > [journalctl --unit vdsmd]
> > Nov 08 18:25:27 RHEL7.2Server systemd[1]: Stopping Virtual Desktop
> > Server Manager...
> > Nov 08 18:25:27 RHEL7.2Server vdsmd_init_common.sh[3053]: vdsm:
> > Running run_final_hooks
> > Nov 08 18:25:27 RHEL7.2Server systemd[1]: Starting Virtual Desktop
> > Server Manager...
> >
> > [journalctl --unit mom-vdsm]
> > Nov 08 18:17:23 RHEL7.2Server systemd[1]: Starting MOM instance
> > configured for VDSM purposes...
> > Nov 08 18:25:16 RHEL7.2Server systemd[1]: Stopping MOM instance
> > configured for VDSM purposes...
> > Nov 08 18:25:29 RHEL7.2Server systemd[1]: Started MOM instance
> > configured for VDSM purposes.
> >
> >
> > But mom then immediately failed with:
> >
> > 2016-11-08 18:25:08,008 - mom.RPCServer - INFO - ping()
> > 2016-11-08 18:25:08,010 - mom.RPCServer - INFO - getStatistics()
> > 2016-11-08 18:25:17,028 - mom.RPCServer - INFO - RPC Server ending
> > 2016-11-08 18:25:24,705 - mom.GuestManager - INFO - Guest Manager ending
> > 2016-11-08 18:25:26,575 - mom.HostMonitor - INFO - Host Monitor ending
> >
> > 2016-11-08 18:25:29,869 - mom - INFO - MOM starting
> > 2016-11-08 18:25:29,905 - mom.HostMonitor - INFO - Host Monitor starting
> > 2016-11-08 18:25:29,905 - mom - INFO - hypervisor interface
> vdsmjsonrpcbulk
> > 2016-11-08 18:25:30,029 - mom.vdsmInterface - ERROR - Cannot connect
> > to VDSM! [Errno 111] Connection refused
> > 2016-11-08 18:25:30,030 - mom - ERROR - Failed to initialize MOM threads
> > Traceback (most recent call last):
> >  File "/usr/lib/python2.7/site-packages/mom/__init__.py", line 29, in
> run
> >hypervisor_iface = self.get_hypervisor_interface()
> >  File "/usr/lib/python2.7/site-packages/mom/__init__.py", line 217,
> > in get_hypervisor_interface
> >return module.instance(self.config)
> >  File "/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/
> vdsmjsonrpcbulkInterface.py",
> > line 47, in instance
> >return JsonRpcVdsmBulkInterface()
> >  File "/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/
> vdsmjsonrpcbulkInterface.py",
> > line 29, in __init__
> >super(JsonRpcVdsmBulkInterface, self).__init__()
> >  File "/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/
> vdsmjsonrpcInterface.py",
> > line 43, in __init__
> >.orRaise(RuntimeError, 'No connection to VDSM.')
> >  File "/usr/lib/python2.7/site-packages/mom/optional.py", line 28, in
> orRaise
> >raise exception(*args, **kwargs)
> > RuntimeError: No connection to VDSM.
> >
> >
> > The question here is, how much time does VDSM need to allow jsonrpc to
> > connect and request a ping and list of VMs?
>
> The only correct answer is - when it's ready and responds with success
> rather than either not respond at all(as in your case) or with "recovering
> from crash or initializing" code
>
> >
> >
> > Martin
> > ___
> > vdsm-devel mailing list -- vdsm-de...@lists.fedorahosted.org
> > To unsubscribe send an email to vdsm-devel-le...@lists.fedorahosted.org
> >
> >
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-11-10 Thread Yaniv Kaul
On Wed, Nov 9, 2016 at 8:05 PM, Juan Hernández <jhern...@redhat.com> wrote:

> On 11/09/2016 11:12 AM, Yaniv Kaul wrote:
> >
> >
> > On Sat, Oct 15, 2016 at 1:04 AM, Ravi Nori <rn...@redhat.com
> > <mailto:rn...@redhat.com>> wrote:
> >
> > Also can you please try following command to directly obtain token
> > from SSO. Can replace engine with FQDN and IP to see if both work
> >
> > curl -v -k -H "Accept: application/json"
> > 'https://:443/ovirt-engine/sso/oauth/token?grant_
> type=password&username=admin@internal&password=123&scope=ovirt-app-api'
> >
> > You should see output similar to the one below
> >
> > {"access_token":"K0sBa0D3rLtmNTdMJ-Q4FzOgCtGGY2cSFSCwbLkG94te9nDd
> mEzHSizsFaOeNMdwOziIv3l2-Uqm8bxWkMpwMA","scope":"ovirt-app-api
> > ovirt-ext=token-info:authz-search
> > ovirt-ext=token-info:public-authz-search
> > ovirt-ext=token-info:validate","exp":-381399824,"token_type"
> :"bearer"}
> >
> >
> > Sorry it took me so long to get back to it, but here it is:
> > {"access_token":"eA8w0DaapkKAQ8tfHakzA-R0l-mjD_
> CsTlAqBaH4iVVjXxQN33poXzt9UhPJLxMU8YOvVNX6LICcxL1EeAiAlw","
> scope":"ovirt-app-api
> > ovirt-ext=token-info:authz-search
> > ovirt-ext=token-info:public-authz-search
> > ovirt-ext=token-info:validate","exp":["java.lang.Long",
> 1479290132000],"token_type":"bearer"}
> >
>
> That "java.lang.Long" there is an error, but not related to this
> problem, as the SDK doesn't use the "exp" attribute. I guess it is a
> side effect of the recent change to use "long" instead of "int", looks
> like the JSON library used in the engine doesn't like longs.
>
> > And here's the difference between the SDK and the manual curl command in
> > ssl_access log:
> > 192.168.201.1 - - [09/Nov/2016:04:52:19 -0500] "POST
> > /ovirt-engine/sso/oauth/token HTTP/1.1" 404 74
> > 192.168.201.1 - - [09/Nov/2016:04:55:32 -0500] "GET
> > /ovirt-engine/sso/oauth/token?grant_type=password&username=
> admin@internal&password=123&scope=ovirt-app-api
> > HTTP/1.1" 200 295
> >
>
> That difference is by design. The SDK uses POST to avoid sending the
> credentials (specially the password) as a query parameter, as that is
> most probably logged and archived.
>
> We discovered recently an issue with the Python SDK, due to a bug in the
> "pycurl" library:
>
>   Debug mode raises UnicodeDecodeError: 'utf8' codec can't decode byte
> 0x8d in position 7: invalid start byte
>   https://bugzilla.redhat.com/1392878
>
> It isn't exactly the same problem, but as the cause of that bug is a
> pointer that is used after releasing, it can cause all kinds of strange
> effects.
>
> Please try the latest build of the SDK.
>

I thought I was:
python-ovirt-engine-sdk4-4.1.0-0.1.a0.20161108gitaad5627.fc24.x86_64


>
> >
> >
> > Thanks
> >
> > Ravi
> >
> > On Fri, Oct 14, 2016 at 4:00 PM, Yaniv Kaul <yk...@redhat.com
> >     <mailto:yk...@redhat.com>> wrote:
> >
> > On Oct 14, 2016 7:13 PM, "Ravi Nori" <rn...@redhat.com
> > <mailto:rn...@redhat.com>> wrote:
> > >
> > > SSO configuration looks good.
> > >
> > > Can you please share any additional httpd configuration in
> /etc/httpd/conf.d. Anything to do with LocationMatch for ovirt-engine urls.
> >
> > This is a standard ovirt-system-tests on Lago installation,
> > nothing out of the ordinary,  but I'll check.
> > Y.
> >
> > >
> > > On Fri, Oct 14, 2016 at 12:52 PM, Yaniv Kaul <yk...@redhat.com
> > <mailto:yk...@redhat.com>> wrote:
> > >>
> > >>
> > >>
> > >> On Fri, Oct 14, 2016 at 3:50 PM, Ravi Nori <rn...@redhat.com
> > <mailto:rn...@redhat.com>> wrote:
> > >>>
> > >>> Hi Yaniv,
> > >>>
> > >>> Can you check the output of
> > https::///ovirt-engine/sso/status in your browser and
> > see if the SSO service is active.
> > >>>
> > >>> If SSO is deployed, you should see an output similar to the
> > one below. Also are you able to login to webadmin using the
> > browser?

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-10-14 Thread Yaniv Kaul
On Oct 14, 2016 7:13 PM, "Ravi Nori" <rn...@redhat.com> wrote:
>
> SSO configuration looks good.
>
> Can you please share any additional httpd configuration in
/etc/httpd/conf.d. Anything to do with LocationMatch for ovirt-engine urls.

This is a standard ovirt-system-tests on Lago installation, nothing out of
the ordinary,  but I'll check.
Y.

>
> On Fri, Oct 14, 2016 at 12:52 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>
>>
>> On Fri, Oct 14, 2016 at 3:50 PM, Ravi Nori <rn...@redhat.com> wrote:
>>>
>>> Hi Yaniv,
>>>
>>> Can you check the output of https::///ovirt-engine/sso/status
in your browser and see if the SSO service is active.
>>>
>>> If SSO is deployed, you should see an output similar to the one below.
Also are you able to login to webadmin using the browser?
>>
>>
>> I am able to login using the webui.
>>
>>>
>>>
>>> {"status_description":"SSO Webapp
Deployed","version":"0","status":"active"}
>>
>>
>> Indeed:
>> {"status_description":"SSO Webapp
Deployed","version":"0","status":"active"}
>>
>> (not sure what 'version 0' means?)
>>
>>>
>>>
>>> Please share the content of
/etc/ovirt-engine/engine.conf.d/11-setup-sso.conf
>>
>>
>> [root@lago-basic-suite-master-engine ~]# cat
/etc/ovirt-engine/engine.conf.d/11-setup-sso.conf
>> ENGINE_SSO_CLIENT_ID="ovirt-engine-core"
>> ENGINE_SSO_CLIENT_SECRET="bsOabtD7gE2McwLe80P109UV800XLx4O"
>> ENGINE_SSO_AUTH_URL="https://${ENGINE_FQDN}:443/ovirt-engine/sso"
>> ENGINE_SSO_SERVICE_URL="https://localhost:443/ovirt-engine/sso"
>> ENGINE_SSO_SERVICE_SSL_VERIFY_HOST=false
>> ENGINE_SSO_SERVICE_SSL_VERIFY_CHAIN=true
>> SSO_ALTERNATE_ENGINE_FQDNS=""
>> SSO_ENGINE_URL="https://${ENGINE_FQDN}:443/ovirt-engine/"
>>
>>
>> Thanks,
>> Y.
>>
>>
>>>
>>>
>>> Thanks
>>>
>>> Ravi
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Oct 14, 2016 at 7:57 AM, Juan Hernández <jhern...@redhat.com>
wrote:
>>>>
>>>> On 10/14/2016 01:45 PM, Yaniv Kaul wrote:
>>>> >
>>>> >
>>>> > On Thu, Oct 13, 2016 at 11:13 AM, Juan Hernández <jhern...@redhat.com
>>>> > <mailto:jhern...@redhat.com>> wrote:
>>>> >
>>>> > On 10/13/2016 12:04 AM, Yaniv Kaul wrote:
>>>> > > On Fri, Oct 7, 2016 at 10:44 PM, Yaniv Kaul <yk...@redhat.com
<mailto:yk...@redhat.com>
>>>> > > <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>> wrote:
>>>> > >
>>>> > > I'm trying on FC24, using
>>>> > >
>>>> >
python-ovirt-engine-sdk4-4.1.0-0.0.20161003git056315d.fc24.x86_64 to
>>>> > > add a DC, and failing - against master. The client is
unhappy:
>>>> > > File
>>>> > >
>>>> >
"/home/ykaul/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>>> > > line 98, in add_dc4
>>>> > >
 version=sdk4.types.Version(major=DC_VER_MAJ,minor=DC_VER_MIN),
>>>> > >   File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
>>>> > > line 4347, in add
>>>> > > response = self._connection.send(request)
>>>> > >   File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
>>>> > > line 276, in send
>>>> > > return self.__send(request)
>>>> > >   File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
>>>> > > line 298, in __send
>>>> > > self._sso_token = self._get_access_token()
>>>> > >   File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
>>>> > > line 460, in _get_access_token
>>>> > > sso_response = self._get_sso_response(self._sso_url,
>>>> > post_data)
>>>> > >   File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
>>>> > > line 498, in _get_sso_response
>>>> > > return json.load

Re: [ovirt-devel] Engine Health

2016-10-14 Thread Yaniv Kaul
On Fri, Oct 14, 2016 at 11:27 PM, Stuart Gott  wrote:

> All,
>
> We're working on a script that stands up an oVirt Engine and adds a node
> to it. The issue is we don't know how long to wait before trying to add a
> node. What we're doing right now is to check the status of the engine using:
>
> https://ENGINE_IP/ovirt-engine/services/health
>
> to determine when the oVirt engine itself has booted. That link reports
> "DB Up!Welcome to Health Status!" as soon as the web UI is accessible, but
> this is not the same thing as having an actual usable cluster attached.
>
> Would it be possible to have separate status messages to distinguish
> between an engine that has/is missing a usable cluster? Is that already
> possible some other way? Blindly waiting for arbitrary time periods is
> error prone.
>

The API also has a test command. I don't think we need to extend it
for a specific use case. What about a missing data center? Host? SPM?
You can look at ovirt-system-tests for an example that at least checks that
the service is up (it could use the health end point of the service as well),
connects to the API and performs a test() against it to verify it works. I
think that overall should suffice.
Y.
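
Something along these lines with the v4 Python SDK should do for such a script;
the URL and credentials are placeholders, and the second loop only makes sense
once a host has actually been added:

  import time
  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  connection = sdk.Connection(
      url='https://ENGINE_IP/ovirt-engine/api',
      username='admin@internal',
      password='password',
      insecure=True,
  )

  # wait until the API answers at all - at this point a host can be added
  while not connection.test():
      time.sleep(5)

  # ... add the host here, then poll its status instead of sleeping blindly
  hosts_service = connection.system_service().hosts_service()
  while not any(h.status == types.HostStatus.UP for h in hosts_service.list()):
      time.sleep(5)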


>
> Thanks!
>
> Stu
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-10-14 Thread Yaniv Kaul
On Thu, Oct 13, 2016 at 11:13 AM, Juan Hernández <jhern...@redhat.com>
wrote:

> On 10/13/2016 12:04 AM, Yaniv Kaul wrote:
> > On Fri, Oct 7, 2016 at 10:44 PM, Yaniv Kaul <yk...@redhat.com
> > <mailto:yk...@redhat.com>> wrote:
> >
> > I'm trying on FC24, using
> > python-ovirt-engine-sdk4-4.1.0-0.0.20161003git056315d.fc24.x86_64 to
> > add a DC, and failing - against master. The client is unhappy:
> > File
> > "/home/ykaul/ovirt-system-tests/basic-suite-master/test-
> scenarios/002_bootstrap.py",
> > line 98, in add_dc4
> > version=sdk4.types.Version(major=DC_VER_MAJ,minor=DC_VER_MIN),
> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
> > line 4347, in add
> > response = self._connection.send(request)
> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
> > line 276, in send
> > return self.__send(request)
> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
> > line 298, in __send
> > self._sso_token = self._get_access_token()
> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
> > line 460, in _get_access_token
> > sso_response = self._get_sso_response(self._sso_url, post_data)
> >   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
> > line 498, in _get_sso_response
> > return json.loads(body_buf.getvalue().decode('utf-8'))
> >   File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
> > return _default_decoder.decode(s)
> >   File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
> > obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> >   File "/usr/lib64/python2.7/json/decoder.py", line 382, in
> raw_decode
> > raise ValueError("No JSON object could be decoded")
> > ValueError: No JSON object could be decoded
> >
> >
> > Surprisingly, I now can't find that RPM of this SDK in
> > resources.ovirt.org <http://resources.ovirt.org> now.
> >
> > I've tried
> > with http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/
> fc24/x86_64/python-ovirt-engine-sdk4-4.0.0-0.1.
> 20161004gitf94eeb5.fc24.x86_64.rpm
> > <http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/
> fc24/x86_64/python-ovirt-engine-sdk4-4.0.0-0.1.
> 20161004gitf94eeb5.fc24.x86_64.rpm>
> >
> > - same result.
> >
> > Did not see anything obvious on server or engine logs.
> > The code:
> > def add_dc4(api):
> > nt.assert_true(api != None)
> > dcs_service = api.system_service().data_centers_service()
> > nt.assert_true(
> > dc = dcs_service.add(
> > sdk4.types.DataCenter(
> > name=DC_NAME4,
> > description='APIv4 DC',
> > local=False,
> >
> > version=sdk4.types.Version(major=DC_VER_MAJ,minor=DC_VER_MIN),
> > ),
> > )
> > )
> >
> >
> > And the api object is from:
> > return sdk4.Connection(
> > url=url,
> > username=constants.ENGINE_USER,
> > password=str(self.metadata['
> ovirt-engine-password']),
> > insecure=True,
> > debug=True,
> > )
> >
> >
> > The clue is actually on the HTTPd logs:
> > 192.168.203.1 - - [12/Oct/2016:17:56:27 -0400] "POST
> > /ovirt-engine/sso/oauth/token HTTP/1.1" 404 74
> >
> > And indeed, from the deubg log:
> > begin captured logging << \n
> > root: DEBUG: Trying 192.168.203.3...\n
> > root: DEBUG: Connected to 192.168.203.3 (192.168.203.3) port 443 (#0)\n
> > root: DEBUG: Initializing NSS with certpath: sql:/etc/pki/nssdb\n
> > root: DEBUG: skipping SSL peer certificate verification\n
> > root: DEBUG: ALPN/NPN, server did not agree to a protocol\n
> > root: DEBUG: SSL connection using TLS_ECDHE_RSA_WITH_AES_128_
> GCM_SHA256\n
> > root: DEBUG: Server certificate:\n
> > root: DEBUG: subject: CN=engine,O=Test,C=US\n
> > root: DEBUG: start date: Oct 11 21:55:29 2016 GMT\n
> > root: DEBUG: expire date: Sep 16 21:55:29 2021 GMT\n
> > root: DEBUG: common name: engine\nroot: DEBUG: issuer:
> > CN=engine.38998,O=Test,C=US\n
>

[ovirt-devel] Latest python-sdk4 package?

2016-10-22 Thread Yaniv Kaul
1. I can't find 4.1 @ resources.ovirt.org. I'm quite sure I did, at some
point.
2. In ovirt-system-tests, when reposync'ing 4.0 and master, it seems that 4.0
has a newer build than master?
./ovirt-4.0-snapshot-el7/x86_64/python-ovirt-engine-sdk4-4.
0.2-1.el7.centos.x86_64.rpm
./ovirt-master-snapshot-el7/x86_64/python-ovirt-engine-sdk4-4.0.0-0.1.
20161020gitd8e8924.el7.centos.x86_64.rpm

Any ideas?

TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] system tests failing on template export

2016-10-25 Thread Yaniv Kaul
On Mon, Oct 24, 2016 at 2:30 PM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> All,
>
> I noticed that on Friday the problem do not occur but we have a
> different one [1] which could be related to storage as well.
>
> Thanks,
> Piotr
>
> [1] http://jenkins.ovirt.org/job/ovirt_master_system-tests/692/console


This one is actually https://bugzilla.redhat.com/show_bug.cgi?id=1379130 .
Y.


>
>
> On Mon, Oct 17, 2016 at 10:45 PM, Adam Litke  wrote:
> > On 17/10/16 11:51 +0200, Piotr Kliczewski wrote:
> >>
> >> Adam,
> >>
> >> I see constant failures due to this and found:
> >>
> >> 2016-10-17 03:55:21,045 ERROR   (jsonrpc/3) [storage.TaskManager.Task]
> >> Task=`8989d694-7099-449b-bd66-4d63786be089`::Unexpected error
> >> (task:870)
> >> Traceback (most recent call last):
> >>  File "/usr/share/vdsm/storage/task.py", line 877, in _run
> >>return fn(*args, **kargs)
> >>  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
> >> wrapper
> >>res = f(*args, **kwargs)
> >>  File "/usr/share/vdsm/storage/hsm.py", line 2212, in getAllTasksInfo
> >>allTasksInfo = sp.getAllTasksInfo()
> >>  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
> >> line 77, in wrapper
> >>raise SecureError("Secured object is not in safe state")
> >> SecureError: Secured object is not in safe state
> >
> >
> > This usually indicates that the SPM role has been lost which happens
> > most likely due to connection issues with the storage.  What is the
> > storage environment being used for the system tests?
> >
> >
> >>
> >> Please take a look not sure whether it is related. You can find latest
> >> build here [1]
> >>
> >> Thanks,
> >> Piotr
> >>
> >> [1] http://jenkins.ovirt.org/job/ovirt_master_system-tests/668/
> >>
> >> On Fri, Oct 14, 2016 at 11:22 AM, Evgheni Dereveanchin
> >>  wrote:
> >>>
> >>> Hello,
> >>>
> >>> We've got several cases today where system tests failed
> >>> when attempting to export templates:
> >>>
> >>>
> >>> http://jenkins.ovirt.org/job/ovirt_master_system-tests/655/
> testReport/junit/(root)/004_basic_sanity/template_export/
> >>>
> >>> Related engine.log looks something like this:
> >>> https://paste.fedoraproject.org/449936/47643643/raw/
> >>>
> >>> I could not find any obvious issues in SPM logs, could someone
> >>> please take a look to confirm what may be causing this issue?
> >>>
> >>> Full logs from the test are available here:
> >>> http://jenkins.ovirt.org/job/ovirt_master_system-tests/655/artifact/
> >>>
> >>> Regards,
> >>> Evgheni Dereveanchin
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
> > --
> > Adam Litke
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] patches failing continuous integration after being merged

2016-10-22 Thread Yaniv Kaul
On Fri, Oct 21, 2016 at 10:01 AM, Sandro Bonazzola 
wrote:

> Hi,
> we have more than 300 patches failing continuous integration after being
> merged[1]
> Is anybody monitoring what's happening there?
> Have we broken check-merge tests?
> Have we auto rebase on merge automation breaking our patches?
> I would suggest to go over the failures and nail down issues.
>
> [1] https://gerrit.ovirt.org/#/q/status:merged+label:
> Continuous-Integration%253C%253D-1
>

From a random look it seems to be related to upgrade, but it's difficult to
assess without data:
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/1329/
for example gives 404.
Y.


>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Fedora 25 and ovirt-engine-sdk-python

2016-11-14 Thread Yaniv Kaul
On Tue, Nov 15, 2016 at 9:03 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Tue, Nov 15, 2016 at 8:59 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>> As I try to upgrade to FC25, it actually suggests downgrading
>> my ovirt-engine-sdk-python to 3.6.3.0-2.fc25 - while I am happily running
>> with ovirt-engine-sdk-python-3.6.9.1-1.fc24.noarch
>>
>> Do we need to rebuild for F25?
>>
>
> You can keep fc24 repos but you need to explicitly set 24 instead of
> $releasever in your .repo file.
>

Yes, I know, but I don't see why I'd do that (I usually install whatever I
need directly from ovirt.org).
My question is if we need to rebuild.
Y.


>
>
>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Fedora 25 and ovirt-engine-sdk-python

2016-11-14 Thread Yaniv Kaul
As I try to upgrade to FC25, it actually suggests downgrading
my ovirt-engine-sdk-python to 3.6.3.0-2.fc25 - while I am happily running
with ovirt-engine-sdk-python-3.6.9.1-1.fc24.noarch

Do we need to rebuild for F25?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Where's MOM (on latest master)

2016-11-17 Thread Yaniv Kaul
I've recently seen, including now on Master, the following warnings:
Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]: Started MOM
instance configured for VDSM purposes.
Nov 17 13:33:25 lago-basic-suite-master-host0 systemd[1]: Starting MOM
instance configured for VDSM purposes...
Nov 17 13:33:35 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, Policy could not be set.
Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:33:39 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:33:55 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:10 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:26 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:42 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.
Nov 17 13:34:57 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available, KSM stats will be missing.
Nov 17 13:35:12 lago-basic-suite-master-host0 vdsm[2012]: vdsm MOM WARN MOM
not available.



Any ideas what this is and why?
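
(A couple of obvious things to check when this shows up - nothing
oVirt-specific, just where the evidence usually is:

  systemctl status mom-vdsm vdsmd
  journalctl -u mom-vdsm --since today
  tail -n 50 /var/log/vdsm/mom.log

The warnings themselves come from vdsm failing to reach MOM's RPC, so a crashed
or still-starting mom-vdsm service is the usual suspect.)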
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Failures in OST (4.0/master) ( was error msg from Jenkins )

2016-11-20 Thread Yaniv Kaul
On Nov 20, 2016 6:30 PM, "Eyal Edri" <ee...@redhat.com> wrote:
>
> Renaming title and adding devel.
>
> On Sun, Nov 20, 2016 at 2:36 PM, Piotr Kliczewski <pklic...@redhat.com>
wrote:
>>
>> The last failure seems to be storage related.
>>
>> @Nir please take a look.
>>
>> Here is engine side error:
>>
>> 2016-11-20 05:54:59,605 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(default task-5) [59fc0074] Exception:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSNoMasterDomainException:
IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Cannot
find master domain: u'spUUID=1ca141f1-b64d-4a52-8861-05c7de2a72b2,
msdUUID=7d4bf750-4fb8-463f-bbb0-92156c47306e'
>>
>> and here is vdsm:
>>
>> jsonrpc.Executor/5::ERROR::2016-11-20
05:54:56,331::multipath::95::Storage.Multipath::(resize_devices) Could not
resize device 360014052749733c7b8248628637b990f
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/multipath.py", line 93, in resize_devices
>> _resize_if_needed(guid)
>>   File "/usr/share/vdsm/storage/multipath.py", line 101, in
_resize_if_needed
>> for slave in devicemapper.getSlaves(name)]
>>   File "/usr/share/vdsm/storage/multipath.py", line 158, in getDeviceSize
>> bs, phyBs = getDeviceBlockSizes(devName)
>>   File "/usr/share/vdsm/storage/multipath.py", line 150, in
getDeviceBlockSizes
>> "queue", "logical_block_size")).read())
>> IOError: [Errno 2] No such file or directory:
'/sys/block/sdb/queue/logical_block_size'
>
>
>
> We now see a different error in master [1], which also indicates the
hosts are in a problematic state: ( failing 'assign_hosts_network_label'
test  )
>
> status: 409
> reason: Conflict
> detail: Cannot add Label. Operation can be performed only when Host
status is  Maintenance, Up, NonOperational.

I believe you are mixing unrelated issues.
I've seen this once and I have an unproven theory:
The previous suite restarts Engine after LDAP configuration, then performs
its test, which is quite short (24 seconds on my poor laptop + a few
additional secs between suites).
I'm not convinced that is enough time for the hosts' status to be updated in
Engine back to the UP state.

Y.
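
(If that theory is right, a cheap guard at the start of each suite would prove
it - roughly, with the v4 SDK, 'api' being an ovirtsdk4 Connection:

  import time
  import ovirtsdk4.types as types

  def wait_for_hosts_up(api, timeout=300):
      hosts_service = api.system_service().hosts_service()
      deadline = time.time() + timeout
      while time.time() < deadline:
          hosts = hosts_service.list()
          if hosts and all(h.status == types.HostStatus.UP for h in hosts):
              return
          time.sleep(5)
      raise AssertionError('hosts not up within %s seconds' % timeout)

Running that before assign_hosts_network_label would tell us whether the 409 is
just a timing issue.)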

>  >> begin captured logging << 
>
>
> [1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3506/testReport/junit/(root)/006_network_by_label/assign_hosts_network_label/
>
>
>>
>>
>>
>> On Sun, Nov 20, 2016 at 12:50 PM, Eyal Edri <ee...@redhat.com> wrote:
>>>
>>>
>>>
>>> On Sun, Nov 20, 2016 at 1:42 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>>
>>>>
>>>>
>>>> On Sun, Nov 20, 2016 at 1:30 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Nov 20, 2016 at 1:18 PM, Eyal Edri <ee...@redhat.com> wrote:
>>>>>>
>>>>>> the test fails to run VM because no hosts are in UP state(?) [1],
not sure it is related to the triggering patch[2]
>>>>>>
>>>>>> status: 400
>>>>>> reason: Bad Request
>>>>>> detail: There are no hosts to use. Check that the cluster contains
at least one host in Up state.
>>>>>>
>>>>>> Thoughts? Shouldn't we fail the test earlier we hosts are not UP?
>>>>>
>>>>>
>>>>> Yes. It's more likely that we are picking the wrong host or so, but
who knows - where are the engine and VDSM logs?
>>>>
>>>>
>>>> A simple grep on the engine.log[1] finds serveral unrelated issues I'm
not sure are reported, it's despairing to even begin...
>>>> That being said, I don't see the issue there. We may need better
logging on the API level, to see what is being sent. Is it consistent?
>>>
>>>
>>> Just failed now the first time, I didn't see it before.
>>>
>>>>
>>>> Y.
>>>>
>>>>
>>>> [1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.0/3015/artifact/exported-artifacts/basic_suite_4.0.sh-el7/exported-artifacts/test_logs/basic-suite-4.0/post-004_basic_sanity.py/lago-basic-suite-4-0-engine/_var_log_ovirt-engine/engine.log

>>>>>
>>>>> Y.
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> [1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.0/3015/testReport/junit/(root)/004_basic_sanit

Re: [ovirt-devel] system tests failing on template export

2016-11-20 Thread Yaniv Kaul
On Nov 20, 2016 6:33 PM, "Nir Soffer"  wrote:
>
> On Sun, Nov 20, 2016 at 6:25 PM, Eyal Edri  wrote:
> > It happened again in [1]
> >
> > 2016-11-20 10:48:12,106 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> > (Task='6c1ec6e7-fb37-465b-8e30-1613317683b2') Unexpected error
(task:870)
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 877, in _run
> > return fn(*args, **kargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
> > wrapper
> > res = f(*args, **kwargs)
> >   File "/usr/share/vdsm/storage/hsm.py", line 2205, in getAllTasksInfo
> > allTasksInfo = sp.getAllTasksInfo()
> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line
> > 77, in wrapper
> > raise SecureError("Secured object is not in safe state")
> > SecureError: Secured object is not in safe state
> > 2016-11-20 10:48:12,109 INFO  (jsonrpc/2) [storage.TaskManager.Task]
> > (Task='6c1ec6e7-fb37-465b-8e30-1613317683b2') aborting: Task is aborted:
> > u'Secured object is not in safe state' - code 100 (task:1175)
> > 2016-11-20 10:48:12,110 ERROR (jsonrpc/2) [storage.Dispatcher] Secured
> > object is not in safe state (dispatcher:80)
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/dispatcher.py", line 72, in wrapper
> > result = ctask.prepare(func, *args, **kwargs)
> >   File "/usr/share/vdsm/storage/task.py", line 105, in wrapper
> > return m(self, *a, **kw)
> >   File "/usr/share/vdsm/storage/task.py", line 1183, in prepare
> > raise self.error
> > SecureError: Secured object is not in safe state
>
> This can also mean that the SPM is not started yet. Maybe you are not
> waiting until the SPM is ready before you try to perform an operation?
>
> Who is the owner of this test? This person should debug this test.

The relevant team for the feature.
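
(If the "SPM not started yet" theory holds, a guard like this in the test setup
would rule it out - a sketch with the v4 SDK, where 'api' is an ovirtsdk4
Connection and the data center name is a placeholder:

  import time
  import ovirtsdk4.types as types

  def wait_for_dc_up(api, dc_name, timeout=300):
      dcs_service = api.system_service().data_centers_service()
      deadline = time.time() + timeout
      while time.time() < deadline:
          dc = dcs_service.list(search='name=%s' % dc_name)[0]
          if dc.status == types.DataCenterStatus.UP:
              return
          time.sleep(5)
      raise AssertionError('data center %s not up within %s seconds'
                           % (dc_name, timeout))

A data center only goes Up once an SPM has been elected and the master domain
is active, so this is a reasonable proxy for "SPM is ready".)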

>
> >
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3506/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-006_network_by_label.py/lago-basic-suite-master-host1/_var_log_vdsm/vdsm.log
> >
> > The storage VM is running on the same VM as engine ( to save memory )
and
> > its serving both NFS & ISCSI.
> > Do you think running it on the same VM as engine might cause such
issues?
>
> I don't think so, but this prevents testing a lot of interesting negative
flows.

Which don't belong to CI.

>
> For example, when one storage server is down, the system should be
> able to use the other storage domain. Having each storage server in
> its own vm makes this possible.

You have both NFS and ISCSI there. It's trivial to set multiple of each if
needed, of course.
I do wish to add more IPs and test iSCSI bonding as well as both NFSv3 and
NFSv4.

>
> Also, we may like to test multiple storage servers of the same type.
> The storage servers should be decoupled so we can start any number
> of them as needed for the current test.

Right, but not on this suite.
Again, it's trivial to do so. The main motivation was to conserve resources
so everyone could run the tests.

Y.

>
> > On Mon, Oct 17, 2016 at 11:45 PM, Adam Litke  wrote:
> >>
> >> On 17/10/16 11:51 +0200, Piotr Kliczewski wrote:
> >>>
> >>> Adam,
> >>>
> >>> I see constant failures due to this and found:
> >>>
> >>> 2016-10-17 03:55:21,045 ERROR   (jsonrpc/3) [storage.TaskManager.Task]
> >>> Task=`8989d694-7099-449b-bd66-4d63786be089`::Unexpected error
> >>> (task:870)
> >>> Traceback (most recent call last):
> >>>  File "/usr/share/vdsm/storage/task.py", line 877, in _run
> >>>return fn(*args, **kargs)
> >>>  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
> >>> wrapper
> >>>res = f(*args, **kwargs)
> >>>  File "/usr/share/vdsm/storage/hsm.py", line 2212, in getAllTasksInfo
> >>>allTasksInfo = sp.getAllTasksInfo()
> >>>  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
> >>> line 77, in wrapper
> >>>raise SecureError("Secured object is not in safe state")
> >>> SecureError: Secured object is not in safe state
> >>
> >>
> >> This usually indicates that the SPM role has been lost which happens
> >> most likely due to connection issues with the storage.  What is the
> >> storage environment being used for the system tests?
> >>
> >>>
> >>> Please take a look not sure whether it is related. You can find latest
> >>> build here [1]
> >>>
> >>> Thanks,
> >>> Piotr
> >>>
> >>> [1] http://jenkins.ovirt.org/job/ovirt_master_system-tests/668/
> >>>
> >>> On Fri, Oct 14, 2016 at 11:22 AM, Evgheni Dereveanchin
> >>>  wrote:
> 
>  Hello,
> 
>  We've got several cases today where system tests failed
>  when attempting to export templates:
> 
> 
> 
http://jenkins.ovirt.org/job/ovirt_master_system-tests/655/testReport/junit/(root)/004_basic_sanity/template_export/
> 
>  Related engine.log looks 
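Acting on the "wait until the SPM is ready" point above could look roughly like the sketch below (SDK v4; the polling helper and the search string are illustrative assumptions, not existing test code):

    import time
    import ovirtsdk4.types as types

    def wait_for_spm(hosts_service, dc_name, timeout=300):
        # Poll until a host in the data center holds the SPM role, so the
        # storage verbs don't race the SPM election and hit SecureError.
        deadline = time.time() + timeout
        while time.time() < deadline:
            for host in hosts_service.list(search='datacenter=%s' % dc_name):
                if host.spm is not None and host.spm.status == types.SpmStatus.SPM:
                    return host
            time.sleep(3)
        raise RuntimeError('No SPM in data center %s after %ss' % (dc_name, timeout))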

Re: [ovirt-devel] [vdsm] exploring a possible integration between collectd and Vdsm

2016-10-11 Thread Yaniv Kaul
On Tue, Oct 11, 2016 at 2:05 PM, Francesco Romani 
wrote:

> Hi all,
>
> In the last 2.5 days I was exploring if and how we can integrate collectd
> and Vdsm.
>
> The final picture could look like:
> 1. collectd does all the monitoring and reporting that Vdsm currently does
> 2. Engine consumes data from collectd
> 3. Vdsm consumes *notifications* from collectd - for few but important
> tasks like Drive high water mark monitoring
>
> Benefits (aka: why to bother?):
> 1. less code in Vdsm / long-awaited modularization of Vdsm
> 2. better integration with the system, reuse of well-known components
> 3. more flexibility in monitoring/reporting: collectd is an existing
> special-purpose solution
> 4. faster, more scalable operation because all the monitoring can be done
> in C
>
> At first glance, Collectd seems to have all the tools we need.
> 1. A plugin interface (https://collectd.org/wiki/
> index.php/Plugin_architecture and https://collectd.org/wiki/
> index.php/Table_of_Plugins)
> 2. Support for notifications and thresholds (https://collectd.org/wiki/
> index.php/Notifications_and_thresholds)
> 3. a libvirt plugin https://collectd.org/wiki/index.php/Plugin:virt
>
> So, the picture is like
>
> 1. we start requiring collectd as dependency of Vdsm
> 2. we either configure it appropriately (collectd supports config drop-ins:
> /etc/collectd.d) or we document our requirements (or both)
> 3. collectd monitors the hosts and libvirt
> 4. Engine polls collectd
> 5. Vdsm listens for notifications
>
> Should libvirt deliver us the event we need (see
> https://bugzilla.redhat.com/show_bug.cgi?id=1181659),
> we can just stop using collectd notifications, everything else works as
> previously.
>
> Challenges:
> 1. Collectd does NOT consider the plugin API stable (
> https://collectd.org/wiki/index.php/Plugin_architecture#
> The_interface.27s_stability)
>    so the plugins should be included in the main tree, much like the
> modules of the Linux kernel.
>    Worth mentioning that the plugin API itself has a fair number of rough
> edges.
>we will need to maintain this plugin ourselves, *and* we need to
> maintain our thin API
>layer, to make sure the plugin loads and works with recent versions of
> collectd.
> 2. the virt plugin is out of date, doesn't report some data we need: see
> https://github.com/collectd/collectd/issues/1945
> 3. the notification message(s) are tailored for human consumption; those
>    messages are not easy for machines to parse.
> 4. the threshold support in collectd seems to match values against
> constants; it doesn't seem possible
>to match a value against another one, as we need to do for high water
> monitoring (capacity VS allocation).
>
> How I'm addressing, or how I plan to address those challenges (aka action
> items):
> 1. I've been experimenting with out-of-tree plugins, and I managed to
> develop, build, install and run
>one out-of-tree plugin: https://github.com/mojaves/
> vmon/tree/master/collectd
>    The development pace of collectd looks sustainable, so this doesn't
> look like such a big deal.
>Furthermore, we can engage with upstream to merge our plugins, either
> as-is or to extend existing ones.
> 2. Write another collectd plugin based on the Vdsm python code and/or my
> past accelerator executable project
>(https://github.com/mojaves/vmon)
> 3. patch the collectd notification code. It is yet another plugin
>OR
> 4. send notifications from the new virt module as per #2, bypassing the
> threshold system. This move could preclude
>    the new virt module from being merged into the collectd tree.
>
> Current status of the action items:
> 1. done BUT PoC quality
> 2. To be done (more work than #1/possible dupe with github issue)
> 3. need more investigation, conflicts with #4
> 4. need more investigation, conflicts with #3
>
> All the code I'm working on will be found on https://github.com/mojaves/
> vmon
>
> Comments are appreciated
>

This generally sounds like a good idea - and I hope it is coordinated with
our efforts for monitoring (see [1], [2]).
Note that ages ago, ovirt-node actually had it already[3].

Few notes:
- I think the most compelling reason to move to collectd is actually to
benefit from the plugins it already has, which will cover
a lot of the missing monitoring requirements and wishes we have (example:
local disk usage on the host), as well as integrate it into
Engine monitoring (example: postgresql performance monitoring).
- You can't remove monitoring from VDSM - as new VDSM may work against
older Engine setups. You can gradually remove it.
I'd actually begin with cleanup - there are some 'metrics' that are simply
not needed, should not be reported in the first place, and are there for
historic reasons only. Remove them - from Engine first, from the DB and all,
the DB and all, then later we can either send fake values or remove
from VDSM.
- If you are moving to collectd, as you can see from the metrics effort,
we'd 
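On challenge #4 above, the check Vdsm needs for drive high water mark monitoring compares two sampled values rather than a value against a constant - roughly the logic below (illustrative only; the names are not Vdsm or collectd API):

    def needs_extension(capacity, allocation, threshold_ratio=0.95):
        # True when a thin-provisioned drive's allocation approaches its
        # capacity - a comparison collectd's threshold plugin cannot express,
        # since it only matches a sampled value against a fixed constant.
        return allocation >= capacity * threshold_ratio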

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-10-12 Thread Yaniv Kaul
On Fri, Oct 7, 2016 at 10:44 PM, Yaniv Kaul <yk...@redhat.com> wrote:

> I'm trying on FC24, using python-ovirt-engine-sdk4-4.1.0
> -0.0.20161003git056315d.fc24.x86_64 to add a DC, and failing - against
> master. The client is unhappy:
> File 
> "/home/ykaul/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 98, in add_dc4
> version=sdk4.types.Version(major=DC_VER_MAJ,minor=DC_VER_MIN),
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
> 4347, in add
> response = self._connection.send(request)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line
> 276, in send
> return self.__send(request)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line
> 298, in __send
> self._sso_token = self._get_access_token()
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line
> 460, in _get_access_token
> sso_response = self._get_sso_response(self._sso_url, post_data)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line
> 498, in _get_sso_response
> return json.loads(body_buf.getvalue().decode('utf-8'))
>   File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
> raise ValueError("No JSON object could be decoded")
> ValueError: No JSON object could be decoded
>
>
> Surprisingly, I now can't find that RPM of this SDK in resources.ovirt.org
>  now.
>
> I've tried with http://resources.ovirt.org/pub/ovirt-master-snapshot/rp
> m/fc24/x86_64/python-ovirt-engine-sdk4-4.0.0-0.1.20161004git
> f94eeb5.fc24.x86_64.rpm
>
> - same result.
>
> Did not see anything obvious on server or engine logs.
> The code:
> def add_dc4(api):
>     nt.assert_true(api != None)
>     dcs_service = api.system_service().data_centers_service()
>     nt.assert_true(
>         dc = dcs_service.add(
>             sdk4.types.DataCenter(
>                 name=DC_NAME4,
>                 description='APIv4 DC',
>                 local=False,
>                 version=sdk4.types.Version(major=DC_VER_MAJ,minor=DC_VER_MIN),
>             ),
>         )
>     )
>
>
> And the api object is from:
> return sdk4.Connection(
> url=url,
> username=constants.ENGINE_USER,
> password=str(self.metadata['ovirt-engine-password']),
> insecure=True,
> debug=True,
> )
>
>
The clue is actually in the HTTPd logs:
192.168.203.1 - - [12/Oct/2016:17:56:27 -0400] "POST
/ovirt-engine/sso/oauth/token HTTP/1.1" 404 74

And indeed, from the debug log:
begin captured logging << \n
root: DEBUG: Trying 192.168.203.3...\n
root: DEBUG: Connected to 192.168.203.3 (192.168.203.3) port 443 (#0)\n
root: DEBUG: Initializing NSS with certpath: sql:/etc/pki/nssdb\n
root: DEBUG: skipping SSL peer certificate verification\n
root: DEBUG: ALPN/NPN, server did not agree to a protocol\n
root: DEBUG: SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\n
root: DEBUG: Server certificate:\n
root: DEBUG: subject: CN=engine,O=Test,C=US\n
root: DEBUG: start date: Oct 11 21:55:29 2016 GMT\n
root: DEBUG: expire date: Sep 16 21:55:29 2021 GMT\n
root: DEBUG: common name: engine\n
root: DEBUG: issuer: CN=engine.38998,O=Test,C=US\n
*root: DEBUG: POST /ovirt-engine/sso/oauth/token HTTP/1.1\n*
*root: DEBUG: Host: 192.168.203.3\n*
*root: DEBUG: User-Agent: PythonSDK/4.1.0a0\n*
*root: DEBUG: Accept: application/json\n*
*root: DEBUG: Content-Length: 78\n*
*root: DEBUG: Content-Type: application/x-www-form-urlencoded\nroot: DEBUG:
username=admin%40internal=ovirt-app-api=123_type=password\n*
*root: DEBUG: upload completely sent off: 78 out of 78 bytes\n*
*root: DEBUG: HTTP/1.1 404 Not Found\n*
*root: DEBUG: Date: Wed, 12 Oct 2016 21:56:27 GMT\n*
*root: DEBUG: Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips\n*
*root: DEBUG: Content-Length: 74\n*
*root: DEBUG: Content-Type: text/html; charset=UTF-8\n*
*root: DEBUG: \n*
*root: DEBUG: Error404 - Not Found\n*
root: DEBUG: Connection #0 to host 192.168.203.3 left intact\n
- >> end captured logging
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
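A quick way to tell whether the SSO webapp itself is the problem is to probe its status endpoint from the same client before the SDK asks for a token - a minimal sketch (Python 2.7; the helper is not part of the SDK, and the insecure TLS handling only matches the suite's self-signed certificate):

    import json
    import ssl
    import urllib2

    def sso_is_active(engine_fqdn):
        # GET /ovirt-engine/sso/status and check the reported status instead
        # of letting the token request fail with a bare 404.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        url = 'https://%s/ovirt-engine/sso/status' % engine_fqdn
        body = urllib2.urlopen(url, context=ctx).read()
        return json.loads(body).get('status') == 'active'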

Re: [ovirt-devel] Can't add DC with API v4 - client issue

2016-10-14 Thread Yaniv Kaul
On Fri, Oct 14, 2016 at 3:50 PM, Ravi Nori <rn...@redhat.com> wrote:

> Hi Yaniv,
>
> Can you check the output of https://<engine>/ovirt-engine/sso/status in
> your browser and see if the SSO service is active.
>
> If SSO is deployed, you should see an output similar to the one below.
> Also are you able to login to webadmin using the browser?
>

I am able to login using the webui.


>
> {"status_description":"SSO Webapp Deployed","version":"0","
> status":"active"}
>

Indeed:
{"status_description":"SSO Webapp Deployed","version":"0","status":"active"}

(not sure what 'version 0' means?)


>
> Please share the content of /etc/ovirt-engine/engine.conf.
> d/11-setup-sso.conf
>

[root@lago-basic-suite-master-engine ~]# cat
/etc/ovirt-engine/engine.conf.d/11-setup-sso.conf
ENGINE_SSO_CLIENT_ID="ovirt-engine-core"
ENGINE_SSO_CLIENT_SECRET="bsOabtD7gE2McwLe80P109UV800XLx4O"
ENGINE_SSO_AUTH_URL="https://${ENGINE_FQDN}:443/ovirt-engine/sso;
ENGINE_SSO_SERVICE_URL="https://localhost:443/ovirt-engine/sso;
ENGINE_SSO_SERVICE_SSL_VERIFY_HOST=false
ENGINE_SSO_SERVICE_SSL_VERIFY_CHAIN=true
SSO_ALTERNATE_ENGINE_FQDNS=""
SSO_ENGINE_URL="https://${ENGINE_FQDN}:443/ovirt-engine/;


Thanks,
Y.



>
> Thanks
>
> Ravi
>
>
>
>
>
> On Fri, Oct 14, 2016 at 7:57 AM, Juan Hernández <jhern...@redhat.com>
> wrote:
>
>> On 10/14/2016 01:45 PM, Yaniv Kaul wrote:
>> >
>> >
>> > On Thu, Oct 13, 2016 at 11:13 AM, Juan Hernández <jhern...@redhat.com
>> > <mailto:jhern...@redhat.com>> wrote:
>> >
>> > On 10/13/2016 12:04 AM, Yaniv Kaul wrote:
>> > > On Fri, Oct 7, 2016 at 10:44 PM, Yaniv Kaul <yk...@redhat.com
>> <mailto:yk...@redhat.com>
>> > > <mailto:yk...@redhat.com <mailto:yk...@redhat.com>>> wrote:
>> > >
>> > > I'm trying on FC24, using
>> > >
>> >  python-ovirt-engine-sdk4-4.1.0-0.0.20161003git056315d.fc24.x86_64
>> to
>> > > add a DC, and failing - against master. The client is unhappy:
>> > > File
>> > >
>> >  "/home/ykaul/ovirt-system-tests/basic-suite-master/test-scen
>> arios/002_bootstrap.py",
>> > > line 98, in add_dc4
>> > > version=sdk4.types.Version(ma
>> jor=DC_VER_MAJ,minor=DC_VER_MIN),
>> > >   File "/usr/lib64/python2.7/site-pac
>> kages/ovirtsdk4/services.py",
>> > > line 4347, in add
>> > > response = self._connection.send(request)
>> > >   File "/usr/lib64/python2.7/site-pac
>> kages/ovirtsdk4/__init__.py",
>> > > line 276, in send
>> > > return self.__send(request)
>> > >   File "/usr/lib64/python2.7/site-pac
>> kages/ovirtsdk4/__init__.py",
>> > > line 298, in __send
>> > > self._sso_token = self._get_access_token()
>> > >   File "/usr/lib64/python2.7/site-pac
>> kages/ovirtsdk4/__init__.py",
>> > > line 460, in _get_access_token
>> > > sso_response = self._get_sso_response(self._sso_url,
>> > post_data)
>> > >   File "/usr/lib64/python2.7/site-pac
>> kages/ovirtsdk4/__init__.py",
>> > > line 498, in _get_sso_response
>> > > return json.loads(body_buf.getvalue().decode('utf-8'))
>> > >   File "/usr/lib64/python2.7/json/__init__.py", line 339, in
>> loads
>> > > return _default_decoder.decode(s)
>> > >   File "/usr/lib64/python2.7/json/decoder.py", line 364, in
>> decode
>> > > obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>> > >   File "/usr/lib64/python2.7/json/decoder.py", line 382, in
>> > raw_decode
>> > > raise ValueError("No JSON object could be decoded")
>> > > ValueError: No JSON object could be decoded
>> > >
>> > >
>> > > Surprisingly, I now can't find that RPM of this SDK in
>> > > resources.ovirt.org <http://resources.ovirt.org>
>> > <http://resources.ovirt.org> now.
>> > >
>> > > I've tried
>> > > with
>> > http://r

[ovirt-devel] VDSM changes Linux memory dirty ratios - why?

2016-11-29 Thread Yaniv Kaul
It appears that VDSM changes the following params:
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

Any idea why? Since we use cache=none, isn't it irrelevant anyway?
TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
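The effective values are easy to read back on any host - an illustrative snippet, not Vdsm code:

    def read_vm_sysctl(name):
        # The kernel exposes these tunables under /proc/sys/vm/.
        with open('/proc/sys/vm/%s' % name) as f:
            return int(f.read())

    for name in ('dirty_ratio', 'dirty_background_ratio'):
        print('vm.%s = %d' % (name, read_vm_sysctl(name)))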

Re: [ovirt-devel] [RFC] Communicating within the community [was Re: [ovirt-users] I wrote an oVirt thing]

2016-11-29 Thread Yaniv Kaul
On Tue, Nov 29, 2016 at 6:33 PM, Sven Kieske  wrote:

> On 29/11/16 15:54, Martin Sivak wrote:
> > I actually checked the oVirt 4.0.0 and 4.0.5 release notes and I do
> > not see anything mentioning ovirt-cli or REST v3 deprecation. This
> > will (removal of RESTv3 support) affect the optimizer and quite
> > possibly even hosted engine setup.
>
> Hi,
>
> this seems to indicate that even core ovirt devs do not
> know about every feature which gets deprecated.
>
> Maybe this could be solved by sandro's suggestion
> to at least create an BZ for each deprecation.
>

Makes sense, even as a Tracker BZ, because there's the work to remove the
code, and it might be in several components, as well as Documentation.
Y.


> I would agree with that, but also like to propose
> an alternative way, because at the stage of bug creation
> the deprecation is already set in stone and can't be discussed further
> (I fear).
>
> So my proposal would be to mail to the devel list at least
> the proposal: "I/we want to get rid of feature/implementation X, use Z
> instead".
>
> This would give developers and users enough time to engage with the
> community, replacing dependencies or start a discussion if removal can't
> be done later.
>
> A clear deprecation guideline would also be helpful, I guess.
>
> Something like: Features will get marked as deprecated in release X.Y
> and removed in X.Y+2 (or whatever number you might chose).
>
> This would allow for much better preparation in the community and lead
> to less unpleasant surprises.
>
> HTH & keep up the good work!
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +495772 293100
> F: +495772 29
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] test-repo_ovirt_experimental_3.6/4457/ job fails

2016-12-11 Thread Yaniv Kaul
On Sun, Dec 11, 2016 at 3:10 PM, Shlomo Ben David 
wrote:

> Hi,
>
> The [1] job fails with the following errors:
>
> *12:34:20  [31mError while running thread
> 12:34:20 Traceback (most recent call last):
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in 
> _ret_via_queue
> 12:34:20 queue.put({'return': func()})
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1108, 
> in _collect_artifacts
> 12:34:20 vm.collect_artifacts(path)
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
> 461, in collect_artifacts
> 12:34:20 ) for guest_path in self._artifact_paths()
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
> 307, in extract_paths
> 12:34:20 return self.provider.extract_paths(paths, *args, **kwargs)
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 196, in 
> extract_paths
> 12:34:20 self._extract_paths_live(paths=paths)
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 417, in 
> _extract_paths_live
> 12:34:20 self._extract_paths_dead(paths=paths)
> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 432, in 
> _extract_paths_dead
> 12:34:20 gfs_cli.launch()
> 12:34:20   File "/usr/lib64/python2.7/site-packages/guestfs.py", line 4731, 
> in launch
> 12:34:20 r = libguestfsmod.launch (self._o)
> 12:34:20 RuntimeError: guestfs_launch failed.
> 12:34:20 This usually means the libguestfs appliance failed to start or 
> crashed.
> 12:34:20 See http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs 
> 
> 12:34:20 or run 'libguestfs-test-tool' and post the *complete* output into a
> 12:34:20 bug report or message to the libguestfs mailing list. [0m
> 12:34:32   # [Thread-1] lago-basic-suite-3-6-storage: Success (in 0:00:12)
> 12:34:32   # [Thread-3] lago-basic-suite-3-6-host1: Success (in 0:00:13)
> 12:34:32   # [Thread-2] lago-basic-suite-3-6-engine: Success (in 0:00:13)
> 12:34:32  @ Collect artifacts: ERROR (in 0:00:13)
> 12:34:32  Error occured, aborting
> 12:34:32 Traceback (most recent call last):
> 12:34:32   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 
> 264, in do_run
> 12:34:32 self.cli_plugins[args.ovirtverb].do_run(args)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 
> 184, in do_run
> 12:34:32 self._do_run(**vars(args))
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, 
> in wrapper
> 12:34:32 return func(*args, **kwargs)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, 
> in wrapper
> 12:34:32 return func(*args, prefix=prefix, **kwargs)
> 12:34:32   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 
> 230, in do_ovirt_collect
> 12:34:32 prefix.collect_artifacts(output)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 
> 621, in wrapper
> 12:34:32 return func(*args, **kwargs)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1112, 
> in collect_artifacts
> 12:34:32 self.virt_env.get_vms().values(),
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 97, in 
> invoke_in_parallel
> 12:34:32 vt.join_all()
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in 
> _ret_via_queue
> 12:34:32 queue.put({'return': func()})
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1108, 
> in _collect_artifacts
> 12:34:32 vm.collect_artifacts(path)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
> 461, in collect_artifacts
> 12:34:32 ) for guest_path in self._artifact_paths()
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
> 307, in extract_paths
> 12:34:32 return self.provider.extract_paths(paths, *args, **kwargs)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 196, in 
> extract_paths
> 12:34:32 self._extract_paths_live(paths=paths)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 417, in 
> _extract_paths_live
> 12:34:32 self._extract_paths_dead(paths=paths)
> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 432, in 
> _extract_paths_dead
> 12:34:32 gfs_cli.launch()
> 12:34:32   File "/usr/lib64/python2.7/site-packages/guestfs.py", line 4731, 
> in launch
> 12:34:32 r = libguestfsmod.launch (self._o)
> 12:34:32 RuntimeError: guestfs_launch failed.
> 12:34:32 This usually means the libguestfs appliance failed to start or 
> crashed.
> 12:34:32 See http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs 
> 
> 12:34:32 or run 'libguestfs-test-tool' and 

Re: [ovirt-devel] test-repo_ovirt_experimental_3.6/4457/ job fails

2016-12-11 Thread Yaniv Kaul
On Sun, Dec 11, 2016 at 3:13 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Sun, Dec 11, 2016 at 3:10 PM, Shlomo Ben David <sbend...@redhat.com>
> wrote:
>
>> Hi,
>>
>> The [1] job fails with the following errors:
>>
>> *12:34:20  [31mError while running thread
>> 12:34:20 Traceback (most recent call last):
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, 
>> in _ret_via_queue
>> 12:34:20 queue.put({'return': func()})
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 
>> 1108, in _collect_artifacts
>> 12:34:20 vm.collect_artifacts(path)
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
>> 461, in collect_artifacts
>> 12:34:20 ) for guest_path in self._artifact_paths()
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
>> 307, in extract_paths
>> 12:34:20 return self.provider.extract_paths(paths, *args, **kwargs)
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 196, in 
>> extract_paths
>> 12:34:20 self._extract_paths_live(paths=paths)
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 417, in 
>> _extract_paths_live
>> 12:34:20 self._extract_paths_dead(paths=paths)
>> 12:34:20   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 432, in 
>> _extract_paths_dead
>> 12:34:20 gfs_cli.launch()
>> 12:34:20   File "/usr/lib64/python2.7/site-packages/guestfs.py", line 4731, 
>> in launch
>> 12:34:20 r = libguestfsmod.launch (self._o)
>> 12:34:20 RuntimeError: guestfs_launch failed.
>> 12:34:20 This usually means the libguestfs appliance failed to start or 
>> crashed.
>> 12:34:20 See http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs 
>> <http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs>
>> 12:34:20 or run 'libguestfs-test-tool' and post the *complete* output into a
>> 12:34:20 bug report or message to the libguestfs mailing list. [0m
>> 12:34:32   # [Thread-1] lago-basic-suite-3-6-storage: Success (in 0:00:12)
>> 12:34:32   # [Thread-3] lago-basic-suite-3-6-host1: Success (in 0:00:13)
>> 12:34:32   # [Thread-2] lago-basic-suite-3-6-engine: Success (in 0:00:13)
>> 12:34:32  @ Collect artifacts: ERROR (in 0:00:13)
>> 12:34:32  Error occured, aborting
>> 12:34:32 Traceback (most recent call last):
>> 12:34:32   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 
>> 264, in do_run
>> 12:34:32 self.cli_plugins[args.ovirtverb].do_run(args)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 
>> 184, in do_run
>> 12:34:32 self._do_run(**vars(args))
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, 
>> in wrapper
>> 12:34:32 return func(*args, **kwargs)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, 
>> in wrapper
>> 12:34:32 return func(*args, prefix=prefix, **kwargs)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 
>> 230, in do_ovirt_collect
>> 12:34:32 prefix.collect_artifacts(output)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 
>> 621, in wrapper
>> 12:34:32 return func(*args, **kwargs)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 
>> 1112, in collect_artifacts
>> 12:34:32 self.virt_env.get_vms().values(),
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 97, 
>> in invoke_in_parallel
>> 12:34:32 vt.join_all()
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, 
>> in _ret_via_queue
>> 12:34:32 queue.put({'return': func()})
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 
>> 1108, in _collect_artifacts
>> 12:34:32 vm.collect_artifacts(path)
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
>> 461, in collect_artifacts
>> 12:34:32 ) for guest_path in self._artifact_paths()
>> 12:34:32   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 
>> 307, in extract_paths
>> 12:34:32 return self.provider.extract_paths(paths, *args, **kwargs)
>> 12:34:
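When reproducing this locally, the libguestfs FAQ linked in the traceback suggests enabling verbose and trace output before launch(); a minimal sketch using the same Python bindings that appear in the traceback (illustrative, not lago code):

    import guestfs

    g = guestfs.GuestFS(python_return_dict=True)
    g.set_verbose(True)   # appliance / kernel messages go to stderr
    g.set_trace(True)     # log every libguestfs API call made
    g.launch()            # the call that fails in the job above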

[ovirt-devel] 4.0.x dependency failure (vdsm-jsonrpc-java)

2016-12-09 Thread Yaniv Kaul
See [1]:

+ yum install --nogpgcheck -y --downloaddir=/dev/shm ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
13:55:25 Error: Package: ovirt-engine-backend-4.0.7-0.0.master.20161208233116.git3dff5ce.el7.centos.noarch (alocalsync)
13:55:25            Requires: vdsm-jsonrpc-java >= 1.2.10
13:55:25            Available: vdsm-jsonrpc-java-1.2.9-1.20161208102442.gite5c0c8e.el7.centos.noarch (alocalsync)
13:55:25                vdsm-jsonrpc-java = 1.2.9-1.20161208102442.gite5c0c8e.el7.centos




[1] 
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-fc24-x86_64/319/console
___
Devel mailing list
Devel@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/devel
