Re: [ovirt-devel] [vdsm][RFC] reconsidering branching out ovirt-4.2

2018-02-01 Thread Yaniv Bronheim
On Mon, Jan 29, 2018 at 9:40 AM Francesco Romani  wrote:

> Hi all,
>
>
> It is time again to reconsider branching out the 4.2 stable branch.
>
> So far we decided to *not* branch out, and we are taking tags for ovirt
> 4.2 releases from master branch.
>
> This means we are merging safe and/or stabilization patches only in master.
>
>
> I think it is time to reconsider this decision and branch out for 4.2,
> because of two reasons:
>
> 1. it sends a clearer signal that 4.2 is going in stabilization mode
>
> 2. we have requests from virt team, which wants to start working on the
> next cycle features.
>

Is this the only reason to branch out? Shouldn't "next cycle features" be
part of 4.2 as well?
Do other teams also plan to push new features to 4.2 that are not stable
yet?
If not, I don't see any reason to branch out or to backport "not stable"
patches to the 4.2 branch. We can keep it stable and avoid this branch-out,
unless we want to add a big new feature that might cause a regression in
the current stable 4.2 code.


>
> If we decide to branch out, I'd start the new branch on monday, February
> 5 (1 week from now).
>
>
> The discussion is open, please share your acks/nacks for branching out,
> and for the branching date.
>
>
> I myself am inclined to branch out, so if no one chimes in (!!) I'll
> execute the above plan.
>
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
-- 
Yaniv Bronhaim.

Re: [ovirt-devel] vdsm stable branch maintainership

2018-01-11 Thread Yaniv Bronheim
On Wed, Jan 10, 2018 at 6:04 PM Nir Soffer  wrote:

> On Wed, Jan 10, 2018 at 11:38 AM Francesco Romani 
> wrote:
>
>> On 01/09/2018 06:54 PM, Michal Skrivanek wrote:
>>
>>
>>
>> On 9 Jan 2018, at 18:48, Nir Soffer  wrote:
>>
>> On Tue, Jan 9, 2018 at 3:55 PM Adam Litke  wrote:
>>
>>> +1
>>>
>>> On Tue, Jan 9, 2018 at 8:17 AM, Francesco Romani 
>>> wrote:
>>>
 On 01/09/2018 12:43 PM, Dan Kenigsberg wrote:
 > Hello,
 >
 > I would like to nominate Milan Zamazal and Petr Horacek as maintainers
 > of vdsm stable branches. This job requires understanding of vdsm
 > packaging and code, a lot of attention to details and awareness of the
 > requirements of other components and teams.
 >
 > I believe that both Milan and Petr have these qualities. I am certain
 > they would work in responsive caution when merging and tagging patches
 > to the stable branches.
 >
 > vdsm maintainers, please confirm if you approve.

>>>
>> Why do we need 4 maintainers for the stable branch?
>>
>> Currently Yaniv and Francesco maintain this branch.
>>
>>
>> they both have quite a few other duties recently, and less and less time
>> to attend to vdsm
>> if it wasn’t so noticeable for Francesco yet, then it is going to be quite
>> soon.
>>
>> I believe it makes sense to ramp up others before it happens.
>>
>>
>> I'd like to stress that I plan to float around as backup and to help
>> people during the ramp up phase
>> (and for any general advice/help that could be needed).
>>
>
> I see, the proposal was not clear before.
>
> +1 for Milan, I worked with him a lot and I'm sure he will do a great job.
>
> I never worked with Petr, but it seems that everyone is happy; good enough
> for me.
>
> I think we need to keep Yaniv or someone else from the tlv office, so we
> have a way
> to push urgent patches on weekends or holidays when nobody is available in
> Brno.
>

I'll keep wearing this hat whenever needed.


> Nir
>


-- 
Yaniv Bronhaim.

Re: [ovirt-devel] vdsm stable branch maintainership

2018-01-09 Thread Yaniv Bronheim
+1

On Tue, Jan 9, 2018 at 1:47 PM Piotr Kliczewski  wrote:

> +1
>
> On Tue, Jan 9, 2018 at 12:43 PM, Dan Kenigsberg  wrote:
>
>> Hello,
>>
>> I would like to nominate Milan Zamazal and Petr Horacek as maintainers
>> of vdsm stable branches. This job requires understanding of vdsm
>> packaging and code, a lot of attention to details and awareness of the
>> requirements of other components and teams.
>>
>> I believe that both Milan and Petr have these qualities. I am certain
>> they would work in responsive caution when merging and tagging patches
>> to the stable branches.
>>
>> vdsm maintainers, please confirm if you approve.
>>
>> Regards,
>> Dan.
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel



-- 
Yaniv Bronhaim.

Re: [ovirt-devel] jsonrpc go client

2017-07-16 Thread Yaniv Bronheim
It depends on who the users of this client will be. For now, this is only
experimental, for your experiments around Kubernetes, nothing more than that.
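On Nir's worry below about incompatibilities between the Go stomp library and our own stomp implementation: the framing both sides have to agree on is simple enough to sketch. This is a hedged illustration only; the destination header and the JSON payload are placeholders I made up, not vdsm's actual values.

```shell
# Sketch of a STOMP-style SEND frame like the ones a Go client and vdsm's
# yajsonrpc would exchange. Destination and payload are illustrative.
body='{"jsonrpc": "2.0", "method": "Host.ping", "id": 1}'
frame="SEND
destination:jms.topic.vdsm_requests
content-length:${#body}

${body}"
first_line=$(printf '%s' "$frame" | head -n 1)
echo "command: $first_line"
# On the wire, each frame is terminated by a NUL byte:
size=$(printf '%s\0' "$frame" | wc -c)
echo "frame size including NUL terminator: $size bytes"
```

Subtle differences in exactly this kind of detail (header casing, content-length handling, the NUL terminator) are where two independent stomp implementations typically disagree, which is why integration tests against a real vdsm matter.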

On Sun, Jul 16, 2017 at 6:10 PM Nir Soffer <nsof...@redhat.com> wrote:

> Cool!
>
> This needs integration tests with real vdsm, or at least a server using
> vdsm
> yajsonrpc code. I'm worried about incompatibilities between the go stomp
> library and our own stomp implementation, not used by any other code.
>
> When it works, we can convert vdsm-client to go :-)
>
> On Sat, Jul 15, 2017 at 8:53 AM Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> Great! Make it official under oVirt, IMO. This will be very useful
>> later on with the OpenShift integration. I'm almost convinced that once
>> oVirt runs in parallel to OpenShift, or as part of it, we'll need to call
>> vdsm API commands via modules that will most likely be written in Go.
>> Giving a specific example wouldn't be very meaningful, because we are
>> still designing all this VM+containers architecture and its flows.
>> Thanks
>>
>> On Fri, Jul 14, 2017 at 4:40 PM Adam Litke <ali...@redhat.com> wrote:
>>
>>> On Fri, Jul 14, 2017 at 9:32 AM, Piotr Kliczewski <
>>> piotr.kliczew...@gmail.com> wrote:
>>>
>>>> On Fri, Jul 14, 2017 at 3:14 PM, Dan Kenigsberg <dan...@redhat.com>
>>>> wrote:
>>>> > On Fri, Jul 14, 2017 at 3:11 PM, Piotr Kliczewski
>>>> > <piotr.kliczew...@gmail.com> wrote:
>>>> >> All,
>>>> >>
>>>> >> I pushed very simple jsonrpc go client [1] which allows to talk to
>>>> >> vdsm. I had a request to create it but if there are more people
>>>> >> willing to use it I am happy to maintain it.
>>>>
>>>
>>> Awesome Piotr!  Thanks for the great work.
>>>
>>>
>>>> >>
>>>> >> Please let me know if you find any issues with it or you have any
>>>> >> feature requests.
>>>> >
>>>> > Interesting. Which use case do you see for this client?
>>>> > Currently, Vdsm has very few clients: Engine, vdsm-client, mom and
>>>> > hosted-engine. Too often we forget about the non-Engine ones and break
>>>> > them, so I'd be happy to learn more about a 5th.
>>>>
>>>> Adam asked for the client for his storage related changes. I am not
>>>> sure about specific use case.
>>>>
>>>
>>> I am looking at implementing a vdsm flexvol driver for kubernetes.  This
>>> would allow kubernetes pods to access vdsm volumes using the native PV and
>>> PVC mechanisms.
>>>
>>>
>>>>
>>>> >
>>>> > Regarding
>>>> https://github.com/pkliczewski/vdsm-jsonrpc-go/blob/master/example/main.go
>>>> > : programming without exceptions and try-except is a pain. don't you
>>>> > need to check the retval of Subscribe and disconnect on failure?
>>>>
>>>> By no means is the example perfect; you are correct. I will fix it.
>>>>
>>>
>>>
>>>
>>> --
>>> Adam Litke
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>> --
>> Yaniv Bronhaim.
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
> --
Yaniv Bronhaim.

Re: [ovirt-devel] UI Redesign patch has been merged.

2017-06-14 Thread Yaniv Bronheim
Hurray!! :P Great work

On Wed, Jun 14, 2017 at 4:29 PM Alexander Wels  wrote:

> Hi,
>
> I have just merged [1] which is a huge patch to update the look and feel of
> the webadmin UI to be more modern and based on Patternfly. I believe this
> is a
> huge step forward for the UI and should hopefully improve the usability and
> functionality of the UI (it certainly looks better).
>
> There are a bunch of follow up patches that once merged will make the UI
> look
> like [2]. It might take a few days for all of these patches to get into
> master. There are more enhancements planned to the usability of the UI as
> part
> of the overhaul process.
>
> As always, if there are any issues or comments let me know and I will be
> sure
> to address them ASAP.
>
> Alexander
>
>
> ps. You will have to clear your browser cache to get some of the
> formatting to
> look right.
>
> [1] https://gerrit.ovirt.org/#/c/75669/
> [2] http://imgur.com/a/JX0iG
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
-- 
Yaniv Bronhaim.

Re: [ovirt-devel] [VDSM] check-merge fails: error: [Errno 98] Address already in use

2017-05-23 Thread Yaniv Bronheim
Do you still see that after it was merged? Seems like a broken version of Lago in the CI.

On Mon, May 22, 2017 at 4:36 PM Nir Soffer  wrote:

> I see these failures in check-merge:
>
> 13:26:59 Error occured, aborting
> 13:26:59 Traceback (most recent call last):
> 13:26:59   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 360, in do_run
> 13:26:59     self.cli_plugins[args.ovirtverb].do_run(args)
> 13:26:59   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
> 13:26:59     self._do_run(**vars(args))
> 13:26:59   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 501, in wrapper
> 13:26:59     return func(*args, **kwargs)
> 13:26:59   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 512, in wrapper
> 13:26:59     return func(*args, prefix=prefix, **kwargs)
> 13:26:59   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 166, in do_deploy
> 13:26:59     prefix.deploy()
> 13:26:59   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 635, in wrapper
> 13:26:59     return func(*args, **kwargs)
> 13:26:59   File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 110, in wrapper
> 13:26:59     with utils.repo_server_context(args[0]):
> 13:26:59   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
> 13:26:59     return self.gen.next()
> 13:26:59   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 100, in repo_server_context
> 13:26:59     root_dir=prefix.paths.internal_repo(),
> 13:26:59   File "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 76, in _create_http_server
> 13:26:59     generate_request_handler(root_dir),
> 13:26:59   File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
> 13:26:59     self.server_bind()
> 13:26:59   File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
> 13:26:59     SocketServer.TCPServer.server_bind(self)
> 13:26:59   File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
> 13:26:59     self.socket.bind(self.server_address)
> 13:26:59   File "/usr/lib64/python2.7/socket.py", line 224, in meth
> 13:26:59     return getattr(self._sock,name)(*args)
> 13:26:59 error: [Errno 98] Address already in use
>
>
> Failed build: 
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1794/console
>
> Other builds with same error:
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1793/console
> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/1790/console
>
> Nir
>
-- 
Yaniv Bronhaim.

Re: [ovirt-devel] abrt integration in vdsm 4.1

2017-04-24 Thread Yaniv Bronheim
I'll contact QE. Let's continue to follow
https://bugzilla.redhat.com/show_bug.cgi?id=917062 for more information and
requests. This "devel" mail is only meant to make everyone else aware of this
integration.

On Sun, Apr 23, 2017 at 6:53 PM Dan Kenigsberg <dan...@redhat.com> wrote:

>
>
> On Apr 23, 2017 5:21 PM, "Yaniv Bronheim" <ybron...@redhat.com> wrote:
>
> All, great to hear the interest.
> Sandro - maybe I can install the sos-abrt package; I didn't try. However,
> oVirt collects only the vdsm sos report and I want to include this
> information there, so this was the easiest and simplest way to expose it.
> Yaniv - we don't see why not to include it in 4.1; it has already been
> running in master for two weeks or so :) and it's something we have wanted
> for quite a while, and it's ready... why not let others benefit from it
> without waiting for the next major release.
>
>
> No patch is harmless. When introducing new code to a stable branch, it is
> your responsibility to explain what the feature does, what its dangers
> are, and how well it was tested.
>
> Dan - I will raise the need for more intensive testing. I didn't plan to
> share the information with Fedora, because I didn't think about it much...
> maybe it can be nice. From my point of view, having the abrt output locally
> and exposed by vdsm is enough for oVirt's orchestration with abrt.
>
>
> On Fri, Apr 21, 2017 at 2:38 PM Dan Kenigsberg <dan...@redhat.com> wrote:
>
>> On Wed, Apr 19, 2017 at 5:43 PM, Yaniv Bronheim <ybron...@redhat.com>
>> wrote:
>> > Hi, I posted the new integration [1] to 4.1 -
>> > https://gerrit.ovirt.org/#/q/topic:backport-abrt-intgr for review.
>> > Abrt is a service that runs alongside vdsm and collects binary and
>> > Python crashes under /var/run/tmp. To try it out, you can crash a qemu
>> > process or vdsm with signal 6 (SIGABRT) and watch the report with the
>> > "abrt-cli list" command, whose output is included in our sos plugin's
>> > report.
>> >
>> > Thanks,
>> > Yaniv Bronhaim.
>> >
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=917062
>>
>> I love to see this integration. It could provide us a lot of
>> information about common failures.
>> The downside is that it can also swamp us with meaningless spam.
>>
>> I see that the bug is destined to 4.1.3. It makes sense to me, since
>> it would let us test it thoroughly on master. Did we do extensive
>> testing already? Can a user disable this (per cluster? on each host?)
>> if he does not like to share the data with Fedora?
>>
>> Regards,
>> Dan.
>>
> --
> Yaniv Bronhaim.
>
>
> --
Yaniv Bronhaim.

Re: [ovirt-devel] abrt integration in vdsm 4.1

2017-04-23 Thread Yaniv Bronheim
All, great to hear the interest.
Sandro - maybe I can install the sos-abrt package; I didn't try. However,
oVirt collects only the vdsm sos report and I want to include this
information there, so this was the easiest and simplest way to expose it.
Yaniv - we don't see why not to include it in 4.1; it has already been
running in master for two weeks or so :) and it's something we have wanted
for quite a while, and it's ready... why not let others benefit from it
without waiting for the next major release.
Dan - I will raise the need for more intensive testing. I didn't plan to
share the information with Fedora, because I didn't think about it much...
maybe it can be nice. From my point of view, having the abrt output locally
and exposed by vdsm is enough for oVirt's orchestration with abrt.


On Fri, Apr 21, 2017 at 2:38 PM Dan Kenigsberg <dan...@redhat.com> wrote:

> On Wed, Apr 19, 2017 at 5:43 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> > Hi, I posted the new integration [1] to 4.1 -
> > https://gerrit.ovirt.org/#/q/topic:backport-abrt-intgr for review.
> > Abrt is a service that runs alongside vdsm and collects binary and
> > Python crashes under /var/run/tmp. To try it out, you can crash a qemu
> > process or vdsm with signal 6 (SIGABRT) and watch the report with the
> > "abrt-cli list" command, whose output is included in our sos plugin's
> > report.
> >
> > Thanks,
> > Yaniv Bronhaim.
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=917062
>
> I love to see this integration. It could provide us a lot of
> information about common failures.
> The downside is that it can also swamp us with meaningless spam.
>
> I see that the bug is destined to 4.1.3. It makes sense to me, since
> it would let us test it thoroughly on master. Did we do extensive
> testing already? Can a user disable this (per cluster? on each host?)
> if he does not like to share the data with Fedora?
>
> Regards,
> Dan.
>
-- 
Yaniv Bronhaim.

[ovirt-devel] abrt integration in vdsm 4.1

2017-04-19 Thread Yaniv Bronheim
Hi, I posted the new integration [1] to 4.1 -
https://gerrit.ovirt.org/#/q/topic:backport-abrt-intgr for review.
Abrt is a service that runs alongside vdsm and collects binary and Python
crashes under /var/run/tmp. To try it out, you can crash a qemu process or
vdsm with signal 6 (SIGABRT) and watch the report with the "abrt-cli list"
command, whose output is included in our sos plugin's report.
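To see what "crash with signal 6" means without touching vdsm or qemu themselves, you can abort any disposable process; the sketch below uses a background `sleep` as the victim. Running `abrt-cli list` afterwards is left as a comment, since it only shows a crash entry on a host where abrt's ccpp hook is actually installed.

```shell
# Abort a throwaway process with SIGABRT (signal 6), the same signal the
# mail suggests sending to vdsm/qemu to exercise abrt.
sleep 300 &
pid=$!
kill -6 "$pid"            # equivalent to: kill -ABRT "$pid"
status=0
wait "$pid" || status=$?
echo "process $pid exited with status $status"   # 128 + 6 = 134
# On a host with abrt-ccpp configured, the crash would now appear in:
#   abrt-cli list
```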

Thanks,
Yaniv Bronhaim.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=917062
-- 
Yaniv Bronhaim.

Re: [ovirt-devel] [Engine] Runtime log controll script

2017-04-18 Thread Yaniv Bronheim
Thanks! I would mention it in the main ovirt-engine README file; its usage
is basic and it is very useful.

On Tue, Apr 18, 2017 at 2:41 PM Martin Betak  wrote:

> On Tue, Apr 18, 2017 at 9:52 AM, Roy Golan  wrote:
>
>> Ever wanted to raise the level of the engine logs, and to do that fast
>> and at runtime?
>>
>> This is a small wrapper around wildfly mgmt interface, called
>> *log-control* to do the trick[1]
>>
>> Example, debug the db interaction layer:
>>
>> ./log-control org.ovirt.engine.core.dal debug
>>
>> It will first blindly try to add the log category and then will set the
>> log level according to what you set. It is simple and stupid.
>>
>> More interesting logger categories:
>>
>> business logic (commands, queries) - org.ovirt.engine.core.bll
>> hosts interaction - org.ovirt.engine.core.vdsbroker
>> various utilities - org.ovirt.engine.core.utils
>> aaa - org.ovirt.engine.exttool.aaa
>>
>> General suggestion -
>> I think it is time for *ovirt-engine-contrib*, so mini-helpers like
>> this can exist, and when they are solid they can go into the mainstream
>> repo, if needed there.
>>
>
> Nice job, Roy! +1 to the "ovirt-engine-contrib" idea.
>
>
>>
>> [1] https://gist.github.com/rgolangh/1cb9f9b3b7f7f0a1d16b5a976d90bd55
>>
>> Thanks,
>> Roy
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

-- 
Yaniv Bronhaim.

Re: [ovirt-devel] master branch host installation fails in ovirt-system-tests - regression due to ABRT integration?

2017-03-27 Thread Yaniv Bronheim
The abrt packages are indeed not included. thanks for the report
https://gerrit.ovirt.org/#/c/74654/1/common/yum-repos/ovirt-master.repo

Verifying

On Sun, Mar 26, 2017 at 4:56 PM Yaniv Kaul  wrote:

> 2017-03-26 09:50:47,235-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [2c34dfce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511),
> Correlation ID: 2c34dfce, Call Stack:
> null, Custom Event ID: -1, Message: Failed to install Host
> lago-basic-suite-master-host1. Failed to execute stage 'Package
> installation': [u'vdsm-4.20.0-542.git93156a7.el7.centos.x86_64 requires
> abrt-addon-vmcor
> e', u'vdsm-4.20.0-542.git93156a7.el7.centos.x86_64 requires
> abrt-addon-ccpp', u'vdsm-4.20.0-542.git93156a7.el7.centos.x86_64 requires
> abrt-addon-python'].
>
> --
Yaniv Bronhaim.

Re: [ovirt-devel] [lago-devel] vdsm service fails to start on HC setup

2017-02-06 Thread Yaniv Bronheim
We merged https://gerrit.ovirt.org/#/c/71231/ yesterday; calling vdsm-tool
configure without specifying modules will run this new lvm configurator. We
didn't foresee any issues that could come up, but it is a regression.
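The "Units need configuration" message Sahina quotes below boils down to a unit-state check. Here is a hedged sketch of that logic with the states hard-coded from her report; on a real host the data would come from systemd, and both the `systemctl show` query and the unmask remediation shown in comments are my assumptions about the flow, not the configurator's exact code.

```shell
# Sketch of the unit-state check behind "Units need configuration".
# On a live host the input would come from, e.g.:
#   systemctl show -p LoadState -p ActiveState lvm2-lvmetad.service
state="LoadState=masked
ActiveState=failed"

verdict=ok
if printf '%s\n' "$state" | grep -qx 'LoadState=masked'; then
    verdict="lvm2-lvmetad.service needs configuration"
    echo "$verdict"
    # One possible remediation on a real host (not run here):
    #   systemctl unmask lvm2-lvmetad.service && vdsm-tool configure --force
fi
```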

On Mon, Feb 6, 2017 at 10:26 AM, Yaniv Kaul  wrote:

> +Nir
>
> On Feb 6, 2017 10:12 AM, "Sahina Bose"  wrote:
>
>> Hi all,
>>
>> While verifying the test to deploy hyperconverged HE [1], I'm running
>> into an issue today where vdsm fails to start.
>>
>> In the logs -
>>  lago-basic-suite-hc-host0 vdsmd_init_common.sh: Error:
>> Feb  6 02:21:32 lago-basic-suite-hc-host0 vdsmd_init_common.sh: One of
>> the modules is not configured to work with VDSM.
>>
>> Starting manually - vdsm-tool configure --force gives:
>> Units need configuration: {'lvm2-lvmetad.service': {'LoadState':
>> 'masked', 'ActiveState': 'failed'}}
>>
>> Is this a known issue?
>>
>> [1] - https://gerrit.ovirt.org/57283
>>
>> thanks
>> sahina
>>
>>
>> ___
>> lago-devel mailing list
>> lago-de...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/lago-devel
>>
>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Vdsm - 6 years (in 10 minutes)

2017-01-09 Thread Yaniv Bronheim
Craziness... a cool thing, though.


On Mon, Jan 9, 2017 at 5:08 PM, Nir Soffer  wrote:

> Hi all,
>
> Please enjoy this visualization of vdsm development since 2011:
> https://www.youtube.com/watch?v=Ui1ouZiENU0
>
> If you want to create your own:
>
> dnf install gourse ffmpeg
> cd /path/to/gitrepo
> gource -s 0.1 --date-format "%a, %d %b %Y" -1920x1080 -o - | ffmpeg -y
> -r 30 -f image2pipe -vcodec ppm -i - -vcodec libx264 -preset ultrafast
> -pix_fmt yuv420p -crf 1 -threads 4 -bf 0 vdsm-6-years.mp4
>
> Nir
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] engine: the link to host-deploy log is wrong

2017-01-02 Thread Yaniv Bronheim
The report in the audit log is misleading. On failures I used to find the
logs under /tmp, but in recent versions they are usually already copied to
/var/log by the time the log says to look under /tmp/.

On Mon, Jan 2, 2017 at 10:53 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Sun, Jan 1, 2017 at 4:59 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> Hi,
>> I'm not sure if I'm right or wrong, so before filing a bug let me know
>> if I'm not missing anything. I think the deploy-log path that the audit
>> log reports on a failed deploy is wrong. A few times I have already seen
>> that it points to somewhere under /tmp/ovirt-host-deploy... but the log
>> is actually under /var/log/ovirt-engine/host-deploy/...
>>
>
> While running on the host, the logs are written in /tmp/. At the end of
> the execution of ovirt-host-deploy, the logs are copied over ssh to the
> engine host and saved under /var/log/ovirt-engine/host-deploy/.
> So in the ovirt-host-deploy output you should see the logs being saved in
> /tmp during execution, and at the end an ssh copy back to the engine.
>
>
>
>
>>
>> --
>> *Yaniv Bronhaim.*
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
*Yaniv Bronhaim.*

[ovirt-devel] engine: the link to host-deploy log is wrong

2017-01-01 Thread Yaniv Bronheim
Hi,
I'm not sure if I'm right or wrong, so before filing a bug let me know if
I'm not missing anything. I think the deploy-log path that the audit log
reports on a failed deploy is wrong. A few times I have already seen that it
points to somewhere under /tmp/ovirt-host-deploy... but the log is actually
under /var/log/ovirt-engine/host-deploy/...

-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Check-merged is broken since we added network test to it

2016-12-22 Thread Yaniv Bronheim
On Thu, Dec 22, 2016 at 5:39 PM, Leon Goldberg <lgold...@redhat.com> wrote:

> Doesn't seem related; the patch does nothing but move pieces around.
>
> Judging by the title I guess you're referring to
> https://gerrit.ovirt.org/#/c/67787/ ?
>


No... you can see that after this patch it still worked (check the jobs
that ran after the merge), so something in
https://gerrit.ovirt.org/#/c/68078/ broke it.


>
> On Thu, Dec 22, 2016 at 5:14 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> Hi guys and Leon,
>>
>> https://gerrit.ovirt.org/#/c/68078/ broke the check-merged job with a
>> really nasty exception:
>>
>> 15:23:34 sh: [17766: 1 (255)] tcsetattr: Inappropriate ioctl for device
>> 15:23:34 Took 2586 seconds
>> 15:23:34 Slave went offline during the build
>> <http://jenkins.ovirt.org/computer/vm0136.workers-phx.ovirt.org/log>
>> 15:23:34 ERROR: Connection was broken: java.io.IOException: Unexpected termination of the channel
>> 15:23:34     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
>> 15:23:34 Caused by: java.io.EOFException
>> 15:23:34     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2353)
>> 15:23:34     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2822)
>> 15:23:34     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:804)
>> 15:23:34     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
>> 15:23:34     at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
>> 15:23:34     at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
>> 15:23:34     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
>> 15:23:34 Build step 'Execute shell' marked build as failure
>> 15:23:34 Performing Post build task...
>>
>>
>> I have no clue what causes it; we need to investigate the tests code.
>>
>>
>> you can see it in 
>> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/772/consoleFull
>>
>>
>> and this for a job run just before it got in, which worked well - 
>> http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/688/console
>>
>>
>> I suggest reverting this patch (and backporting the revert to the
>> ovirt-4.1 branch as well) until we figure out what causes it.
>>
>>
>> --
>> *Yaniv Bronhaim.*
>>
>
>


-- 
*Yaniv Bronhaim.*

[ovirt-devel] Check-merged is broken since we added network test to it

2016-12-22 Thread Yaniv Bronheim
Hi guys and Leon,

https://gerrit.ovirt.org/#/c/68078/ broke the check-merged job with a really
nasty exception:

15:23:34 sh: [17766: 1 (255)] tcsetattr: Inappropriate ioctl for device
15:23:34 Took 2586 seconds
15:23:34 Slave went offline during the build
15:23:34 ERROR: Connection was broken: java.io.IOException: Unexpected termination of the channel
15:23:34     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
15:23:34 Caused by: java.io.EOFException
15:23:34     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2353)
15:23:34     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2822)
15:23:34     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:804)
15:23:34     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
15:23:34     at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
15:23:34     at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
15:23:34     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
15:23:34 Build step 'Execute shell' marked build as failure
15:23:34 Performing Post build task...


I have no clue what causes it; we need to investigate the tests code.


you can see it in
http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/772/consoleFull


and this for a job run just before it got in, which worked well -
http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/688/console


I suggest reverting this patch (and backporting the revert to the ovirt-4.1
branch as well) until we figure out what causes it.


-- 
*Yaniv Bronhaim.*

[ovirt-devel] Branching Vdsm to ovirt-4.1

2016-12-22 Thread Yaniv Bronheim
Hello devels,

I just introduced the ovirt-4.1 branch in vdsm, which will be the base for
the ovirt-4.1 alpha build.
Master is now tagged v4.20.0, and the first ovirt-4.1 tag is v4.19.1.

Basically, it means that every fix that should go into 4.1 needs to be
backported and merged to the ovirt-4.1 branch by fromani or me. We are both
in #vdsm, so ping us if needed and add us to the patches. Bear in mind that
in most cases we will ask for a Bug-Url link in the commit message and the
same commit-id as in the master branch patch.
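The backport flow above can be sketched with a throwaway repo; `git cherry-pick -x` is what records the original master commit-id in the backport's message. The repo, file, and commit messages below are illustrative, not vdsm's.

```shell
# Illustrative backport: cherry-pick a master fix onto ovirt-4.1 with -x,
# which appends "(cherry picked from commit <sha>)" to the message.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo base > f
git add f
git commit -qm 'initial'
git branch ovirt-4.1                 # stable branch forks here
echo fix >> f
git add f
git commit -qm 'fix: important bug'  # the fix lands on the dev branch first
fix=$(git rev-parse HEAD)
git checkout -q ovirt-4.1
git cherry-pick -x "$fix" >/dev/null
body=$(git log -1 --pretty=%b)
echo "$body"
```

Reviewers can then match the referenced sha against the master patch, which is exactly the "same commit-id" check mentioned above.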

For any further questions or concerns, feel free to reply.

Greetings,

-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Introduce new package dependencies in vdsm

2016-11-16 Thread Yaniv Bronheim
On Wed, Nov 16, 2016 at 4:56 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Wed, Nov 16, 2016 at 9:35 AM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> > Hi
> >
> > After merging
> > https://gerrit.ovirt.org/#/q/status:open+project:vdsm+
> branch:master+topic:cross-imports
>
> This is not related to this patch, we had to keep correct spec,
> automation and dockerfile
> before this patch. This patch keeps us honest, preventing wrong code
> from sneaking in.
>
Yes... but now we see how annoying it is :)


> On the first run, it found that python-requests was missing (fixed now)
> and that vdsm.rpc modules are doing wrong import from /usr/share/vdsm.
> These imports work by accident. This was not discovered in the review
> of the patch moving rpc to lib/vdsm.
>
> This patch is fixing the old crossImports check that was totally broken,
> checking imports is not a new concept, it simply works now.
>
> > we need now for any new requirement to add a line in:
> > check-patch.packages.el7
> > check-patch.packages.fc24
> > check-merged.packages.el7
> > check-merged.packages.fc24
>
> You forgot build-artifacts.packages.* - total of 6 files to update in
> automation/
>
> I suggested David Caro in the past to support reading multiple packages
> files,
> so you can have a check-patch.packages file, *and* check-patch.packages.el7
> and the ci will merge the list of packages, so you don't have to duplicate
> the
> packages 6 times in every project.
>
>
They can use the spec as well, with builddep. I don't see any problem with
that.
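If we do keep the flat package lists, the duplication itself is easy to kill with one common list from which the distro-specific files are generated. The file names below mirror the ones listed earlier in the thread; the generation step itself is a sketch of the idea, not existing automation, and the package names are just examples (python-requests is the one the thread says was found missing).

```shell
# Sketch: maintain one common package list, generate the per-target files
# that the CI expects under automation/.
tmp=$(mktemp -d)
cd "$tmp"
cat > automation-packages.common <<'EOF'
python-requests
PyYAML
EOF
for target in check-patch.packages.el7 check-patch.packages.fc24 \
              check-merged.packages.el7 check-merged.packages.fc24; do
    cp automation-packages.common "$target"
done
ls check-*.packages.* | sort
```

Distro-specific extras could then live in small per-target overlay files that the loop appends, which is essentially the "merge multiple packages files" behavior suggested for the CI itself.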

> vdsm.spec.in
> > Dockerfile.centos
> > Dockerfile.fedora
> >
> > seems like we can add it once to the spec with section for fedora and
> centos
> > and the rest of the places will use yum-builddep. sounds more reasonable
> to
> > me and probably the right way to work with rpms dependencies. no?
>
> The issue is we have different kind of requirements:
> - runtime packages
> - runtime packages needed during tests (smaller set, since not all
> code is tested)
> - build packages (needed only for building rpms)
> - check-merged packages (lago and friends?)
>
With your check we require everything we import even if we don't have a
test for it, so basically now all of the above require the same list.


> So we can use the spec as the source, and generate all the other files
> during make, but this means the spec and all the *packages files will
>

why generating? what's wrong with rpm commands to install deps?

include all the packages for the worst case, making the build even slower.
>

can you give an example of such a package that we don't need in check-patch
and build-artifacts but do need at runtime? we should test all :)


>
> But note that the dockerfiles must be correct regardless where you build
> vdsm - you have to get different list of packages for fedora and centos
>

you can do that in the spec as well

so you can build the docker image on any system. I'm not sure keeping
> stuff in the spec will make it easy to extract for creating other files,
> keeping
> the files in a json / yaml file and generating the spec during configure
> may
> be easier.
>
> I think it worth the effort if we avoid updating 9 files when adding
> requirements.
>
> Nir
>



-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] vdsm: dropped testing in the build process

2016-11-16 Thread Yaniv Bronheim
On Mon, Nov 14, 2016 at 6:51 PM, Sandro Bonazzola 
wrote:

> I've been made aware of:
>
> commit 195937c2da592d9993ada7609aea84f64523a711
> Author: Nir Soffer 
> Date:   Sun Oct 9 20:05:36 2016 +0300
>
> build: Disable tests during build
>
> Tests are needed for development, not for building a package. This
> allows us to use latest and greatest development tools, which are not
> available in brew or koji.
>
>
> Please note that the spec file in current state is doing the opposite of
> what's recommended by Fedora guidelines[1]
> Tests are not mandatory according to the guidelines but they're highly
> recommended.
>
> [1] https://fedoraproject.org/wiki/Packaging:Guidelines#Test_Suites
>
>
We break some more Fedora guidelines; we need someone in the vdsm group to
help with that. We raised it in the last vdsm call. Anyone? We have a bunch
of interesting issues in our packaging method that we can improve.
Please add this issue to the vdsm Trello board for follow-up: adding back the
test phase in the vdsm spec. Maybe it'll be easier once pytest works.

> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

[ovirt-devel] Introduce new package dependencies in vdsm

2016-11-15 Thread Yaniv Bronheim
Hi

After merging
https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:cross-imports
we
now need, for any new requirement, to add a line in:
check-patch.packages.el7
check-patch.packages.fc24
check-merged.packages.el7
check-merged.packages.fc24
vdsm.spec.in
Dockerfile.centos
Dockerfile.fedora

seems like we can add it once to the spec, with a section for fedora and
centos, and the rest of the places will use yum-builddep. sounds more
reasonable to me and probably the right way to work with rpm dependencies.
no?
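The yum-builddep idea amounts to keeping the spec as the single source of truth for dependencies. A minimal sketch of extracting the dependency list from a spec follows; the spec content is a toy example, not vdsm.spec.in, and on a real host `yum-builddep vdsm.spec` would do the actual installation:

```python
# Toy illustration: parse Requires/BuildRequires from a spec so other files
# don't have to duplicate them. Real tooling would use yum-builddep instead.
import os
import re
import tempfile

spec_text = """\
BuildRequires: python-devel
Requires: python-requests
Requires: PyYAML
"""
path = os.path.join(tempfile.mkdtemp(), "demo.spec")
with open(path, "w") as f:
    f.write(spec_text)

deps = []
with open(path) as f:
    for line in f:
        m = re.match(r"(?:Build)?Requires:\s*(\S+)", line)
        if m:
            deps.append(m.group(1))

print(deps)
# → ['python-devel', 'python-requests', 'PyYAML']
```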


*Yaniv Bronhaim.*

Re: [ovirt-devel] VDSM fails to install due to versions mess

2016-09-06 Thread Yaniv Bronheim
yes, the revert is in https://gerrit.ovirt.org/62922. check that you run a
later commit, and first remove all vdsm* packages to clean your env

On Mon, Sep 5, 2016 at 10:05 PM, Eyal Edri  wrote:

> Did we end up reverting the VDSM change?
>
> I see now failures on OST with the version issues [1]:
>
> 2016-09-05 13:42:53,879 DEBUG 
> [org.ovirt.engine.core.utils.timer.FixedDelayJobListener] 
> (DefaultQuartzScheduler1) [] Rescheduling 
> DEFAULT.org.ovirt.engine.core.bll.HaAutoStartVmsRunner.startFailedAutoStartVms#-9223372036854775789
>  as there is no unfired trigger.
> 2016-09-05 13:42:54,367 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] Got: ***L:ERROR Yum 
> [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-jsonrpc = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos']
> 2016-09-05 13:42:54,367 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] nextEvent: Log ERROR Yum 
> [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-jsonrpc = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos']
> 2016-09-05 13:42:54,413 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (VdsDeploy) [18598dc8] Correlation ID: 18598dc8, Call Stack: null, Custom 
> Event ID: -1, Message: Failed to install Host lago_basic_suite_master_host0. 
> Yum [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-jsonrpc = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos'].
> 2016-09-05 13:42:54,413 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] Got: ***L:ERROR Failed to execute stage 'Package 
> installation': [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires 
> vdsm-jsonrpc = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos']
> 2016-09-05 13:42:54,413 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] nextEvent: Log ERROR Failed to execute stage 'Package 
> installation': [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires 
> vdsm-jsonrpc = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos']
> 2016-09-05 13:42:54,428 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (VdsDeploy) [18598dc8] Correlation ID: 18598dc8, Call Stack: null, Custom 
> Event ID: -1, Message: Failed to install Host lago_basic_suite_master_host0. 
> Failed to execute stage 'Package installation': 
> [u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-jsonrpc = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-hook-vmfex-dev 
> = 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-python = 
> 4.18.999-484.git8ec9a16.el7.centos', 
> u'vdsm-4.18.999-484.git8ec9a16.el7.centos.x86_64 requires vdsm-xmlrpc = 
> 4.18.999-484.git8ec9a16.el7.centos'].
> 2016-09-05 13:42:54,428 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] Got: ***L:INFO Yum Performing yum transaction rollback
> 2016-09-05 13:42:54,428 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser] 
> (VdsDeploy) [18598dc8] nextEvent: Log INFO Yum Performing yum transaction 
> rollback
> 2016-09-05 13:42:54,445 INFO  [org.ovirt.engine.core.dal.dbbroker.auditlo
>
>
> This happened right 

Re: [ovirt-devel] Fwd: Unversioned and >/=/>= obsoletes

2016-09-02 Thread Yaniv Bronheim
yes, sounds like that. please take the patch; I plan to publish a new build
with that change and without the exclusive arch that I added

On Fri, Sep 2, 2016 at 3:59 PM, Nir Soffer  wrote:

> On Fri, Sep 2, 2016 at 2:26 PM, Sandro Bonazzola 
> wrote:
>
>> FYI, Fedora reviewed vdsm spec file regarding obsoletes.
>>
>>
>> -- Forwarded message --
>> From: Igor Gnatenko 
>> Date: Fri, Sep 2, 2016 at 1:14 PM
>> Subject: Unversioned and >/=/>= obsoletes
>> To: Development discussions related to Fedora <
>> de...@lists.fedoraproject.org>, devel-annou...@lists.fedoraproject.org
>>
>>
>> All guidelines mandate the use of versioned (<) Obsoletes, but we have
>> some number of packages (179 source rpms -> 292 binary rpms) with
>> unversioned Obsoletes or with >/=/>= Obsoletes.
>>
>> It is causing problems with upgrade (if package is getting re-added)
>> or with 3rd-party repositories. Older package is obsoleting new
>> package.
>>
>> Problem categories (in following text by "never" I mean latest N-2
>> releases):
>>
>> * Package/SubPackage was never built in Fedora
>> Package "python" has "Obsoletes: python2" which was never built ->
>> drop Obsoletes
>> SubPackage "qpid-proton-c" of "qpid-proton" has "Obsoletes:
>> qpid-proton" which was not the package for long time -> drop Obsoletes
>>
>> * Package replacement
>> Package "storaged" has "Obsoletes: udisks2" -> take latest version
>> from koji (2.1.7-1) and make Obsoletes versioned: udisks2 < 2.1.7-2
>> storaged is not simple use-case as it replaces udisks2, but latter is
>> still not retired.
>>
>> * "=" Obsoletes
>> "rubygem-vte" has "Obsoletes: ruby-vte = 3.0.9-1.fc26" (probably it's
>> a macro in the spec) which seems really weird as it will not obsolete
>> F24/F25 with such version
>>
>> * Obsoletes by Provides
>> It doesn't work to prevent undefined behavior. Imagine you have
>> installed "A" and "B", both providing "C". Package "D" has "Obsoletes:
>> C", it should not remove "A" and "B".
>> ** %{?_isa}
>> "glibc-headers" has "Obsoletes: glibc-headers(i686)". %{?_isa} is just
>> text, it's not part of architecture or something else.
>> ** Other provides
>> "rubygem-http_connection" has "Obsoletes:
>> rubygem(right_http_connection)". Latter is virtual provides.
>>
>> * Weird obsoletes (broken)
>> "krb5-server" has "Obsoletes: krb5-server-1.14.3-8.fc26.i686".
>> Basically it will not obsolete anything because it's treated as
>> package name (and we definitely don't have such package name).
>>
>> * >/>= Obsoletes
>> "vdsm" has "Obsoletes: vdsm-infra >= 4.16.0". It's almost same as
>> unversioned Obsoletes. So it must not be used.
>>
>
> Should be fixed here if I understood the problem
> https://gerrit.ovirt.org/63215
>
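For reference, the versioned form the Fedora guideline asks for would look roughly like this. This is a sketch of the general pattern, not vdsm's actual spec; the Provides line is the usual companion when a package or subpackage is renamed or absorbed:

```spec
# Obsolete only builds older than the point where the subpackage went away
Obsoletes: vdsm-infra < 4.16.0
Provides:  vdsm-infra = %{version}-%{release}
```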



-- 
*Yaniv Bronhaim.*

[ovirt-devel] broken master: removal of release in spec requirements

2016-08-28 Thread Yaniv Bronheim
Hi,

In https://gerrit.ovirt.org/#/c/62672 we removed the release from vdsm*
requirements. Although it sounds reasonable and quite safe, it was not.
In a development env it causes a mixup of versions when upgrading.

"yum install vdsm" will eventually cause this mixup as vdsm won't require
the newer vdsm-python

I think this change should be reverted and the solution should be in the
build process. A newer release is equivalent to a higher version, so the
requirement should include the release number.
So for example, if 4.18.7-1 was shipped and we built 4.18.7-2 for ppc, then
4.18.7-2 is newer. I can't see how this can be different.
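The release-included requirement being argued for looks like this in a spec. This is a sketch of the pattern, not the exact vdsm.spec.in lines:

```spec
# Pin the subpackages to the exact build, including the release tag
Requires: vdsm-python = %{version}-%{release}
Requires: vdsm-jsonrpc = %{version}-%{release}
```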

e.g for the bug:

===
 Package              Arch     Version                              Repository               Size
===
Removing:
 vdsm                 x86_64   4.18.999-457.gitcde215c.el7.centos   @ovirt-master-snapshot   2.7 M
 vdsm-api             noarch   4.18.999-342.git40c3bbb.el7.centos   @ovirt-master-snapshot   325 k
 vdsm-cli             noarch   4.18.999-457.gitcde215c.el7.centos   @ovirt-master-snapshot   342 k
 vdsm-hook-vmfex-dev  noarch   4.18.999-457.gitcde215c.el7.centos   @ovirt-master-snapshot    21 k
 vdsm-jsonrpc         noarch   4.18.999-342.git40c3bbb.el7.centos   @ovirt-master-snapshot    81 k
 vdsm-python          noarch   4.18.999-342.git40c3bbb.el7.centos   @ovirt-master-snapshot   2.4 M
 vdsm-xmlrpc          noarch   4.18.999-342.git40c3bbb.el7.centos   @ovirt-master-snapshot   109 k
 vdsm-yajsonrpc       noarch   4.18.999-342.git40c3bbb.el7.centos   @ovirt-master-snapshot    95 k

Problem.



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Moving configuration files to separate directory

2016-08-02 Thread Yaniv Bronheim
On Tue, Aug 2, 2016 at 11:06 AM, Martin Polednik 
wrote:

> Hey devels,
>
> last week, I've been working on patch series that moves most of
> configuration and "static" files away from our source code to a dir
> called "static"[1]. (based on the previous' week VDSM weekly)
>
> Current version has static dir's layout as flat - keeping all files in
> the directory with few exceptions (mom.d and systemd). The downside of
> the approach is that we still have to rename some of the files in
> makefile due to possibility of name clashes if we had similarly named
> files (50_vdsm from sudoers and 50_vdsm anything else).
>
> There is another possibility - hierarchy within the folder. Instead of
> current structure -
>
> static
> ├── Makefile.am
> ├── limits.conf
> ├── logger.conf.in
> ├── mom.conf.in
> ├── mom.d
> │   ├── 00-defines.policy
> │   ├── 01-parameters.policy
> │   ├── 02-balloon.policy
> │   ├── 03-ksm.policy
> │   ├── 04-cputune.policy
> │   ├── 05-iotune.policy
> │   └── Makefile.am
> ├── sudoers.vdsm.in
> ├── svdsm.logger.conf.in
> ├── systemd
> │   ├── Makefile.am
> │   ├── mom-vdsm.service.in
> │   ├── supervdsmd.service.in
> │   ├── vdsm-network.service.in
> │   └── vdsmd.service.in
> ├── vdsm-bonding-modprobe.conf
> ├── vdsm-logrotate.conf
> ├── vdsm-modules-load.d.conf
> ├── vdsm-sysctl.conf
> └── vdsm.rwtab.in
>
> we could structure the directory to a corresponding subfolders over
> the system:
>
> etc
> ├── modprobe.d
> │   └── vdsm-bonding-modprobe.conf
> ├── modules-load.d
> │   └── vdsm.conf
> ├── rwtab.d
> │   └── vdsm
> ├── security
> │   └── limits.d
> │   └── 99-vdsm.conf
> ├── sudoers.d
> │   ├── 50_vdsm
> ├── sysctl.d
> │   └── vdsm.conf
> └── vdsm
>├── logger.conf
>├── logrotate
>│   └── vdsm
>├── mom.conf
>├── mom.d
>│   ├── 00-defines.policy
>│   ├── 01-parameters.policy
>│   ├── 02-balloon.policy
>│   ├── 03-ksm.policy
>│   ├── 04-cputune.policy
>│   └── 05-iotune.policy
>├── svdsm.logger.conf
>├── vdsm.conf
>└── vdsm.conf.d
>

The second approach is much better: more organized and cleaner. It's more
reasonable that way for developers, and having more makefiles is not a big
deal.


> There is little downside to the second approach, that is more code is
> added to VDSM in a sense that more makefiles will have to exist. On
> the other hand, we can drop all the renaming and have the files named
> as they would be named on their destination after install.
> Opinions?
>
> [1]
> https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:static-assets
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] execCmd() and storing stdout and stderr in log file

2016-07-19 Thread Yaniv Bronheim
On Tue, Jul 19, 2016 at 4:56 PM, Tomáš Golembiovský 
wrote:

> On Thu, 14 Jul 2016 17:25:28 +0300
> Nir Soffer  wrote:
>
> > After https://gerrit.ovirt.org/#/c/46733/ you should be able to create
> > the pipeline in python like this:
> >
> > v2v = Popen(["virt-v2v", ...], stdout=PIPE, stderr=STDOUT)
> > tee = Popen(["tee", "-a", logfile], stdin=v2v.stdout, stdout=PIPE,
> > stderr=PIPE)
> >
> > Now we can read output from tee.stdout, and when tee is finished, we can
> wait
> > for v2v to get the exit code.
> >
> > Since all output would go to tee stdout and stderr may only contain tee
> usage
> > errors, we don't need to use AsyncProc, making this code python 3
> compatible.
>
>
> Yes, this may actualy work. And do we plan to adopt the cpopen 1.4.1, where
> this is fixed, in VDSM?
>

cpopen 1.5.1 - and yes but not in ovirt-4.0
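Nir's pipeline from the quoted message can be sketched as below. Here `virt-v2v` is replaced with a stand-in command and the log path is made up, so this only illustrates the wiring, not the real v2v invocation:

```python
import os
import tempfile
from subprocess import PIPE, STDOUT, Popen

logfile = os.path.join(tempfile.mkdtemp(), "v2v.log")  # hypothetical path

# stand-in for ["virt-v2v", ...]; stderr is merged into stdout
v2v = Popen(["printf", "line1\nline2\n"], stdout=PIPE, stderr=STDOUT)
# tee appends everything to the log file while passing it through
tee = Popen(["tee", "-a", logfile], stdin=v2v.stdout, stdout=PIPE)
v2v.stdout.close()  # let tee see EOF once the first process exits

out = tee.stdout.read()  # caller still receives the command's output
tee.wait()
rc = v2v.wait()  # the first process's exit code is still available
print(out.decode())
print(rc)
```

Since tee's own stderr can only contain tee usage errors, there is no second stream to poll, which is what makes this approach py3-friendly without AsyncProc.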

>
>
> --
> Tomáš Golembiovský 
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

[ovirt-devel] oVirt metrics

2016-07-13 Thread Yaniv Bronheim
Hi,

In oVirt 4.0 we introduced integration with metrics collectors. In [1] you
will find a guide for setting up your environment to retrieve visualized
reports about host and VM statistics.

I encourage you to try it out and send us requests for additional valuable
metrics that you think vdsm should publish.
This area is still a work in progress, and we plan to support more
technologies and different architectures for metrics collection, as
described in the post. Additional links describing how to do so will follow
in the post ([1]); stay tuned.

[1] https://bronhaim.wordpress.com/2016/06/26/ovirt-metrics

--
*Yaniv Bronhaim.*

Re: [ovirt-devel] execCmd() and storing stdout and stderr in log file

2016-07-11 Thread Yaniv Bronheim
On Mon, Jul 11, 2016 at 12:53 PM, Tomáš Golembiovský <tgole...@redhat.com>
wrote:

> On Wed, 6 Jul 2016 18:37:54 +0300
> Yaniv Bronheim <ybron...@redhat.com> wrote:
>
> > On Wed, Jul 6, 2016 at 5:07 PM, Tomáš Golembiovský <tgole...@redhat.com>
> > wrote:
> >
> > >
> > > Merging stdout and stderr into one stream is something Popen can do for
> > > us, I believe. Any logging can indeed be done as a wrapper around execCmd.
> >
> > saving stdout and err to log while the process is running is useful only
> > for your purpose currently. using asyncproc as you do now in v2v allows
> you
> > to run a process and monitor it.. can you use overriding of the asyncProc
> > wrapper for your needs instead of changing cpopen or execCmd code?
>
> I am not talking about CPOpen. I meant that when calling
> `subprocess.Popen`, you can pass it `stderr=subprocess.STDOUT` argument
> and it will handle the FD redirection (stream merging). To me it seems
> like a proper way of doing this.
>


In vdsm we currently use cpopen to start external processes, as
subprocess.Popen in py2 is buggy. using execCmd doesn't provide stderr
and stdout parameters to modify, but you can use CPopen directly and
override stderr.
On py3 we import the standard subprocess.Popen. basically, if it's
actually useful you can add those parameters to execCmd
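The `stderr=subprocess.STDOUT` redirection Tomáš mentions, in a minimal standalone form (plain subprocess here, not cpopen or execCmd; the child command is just for illustration):

```python
import subprocess

# merge stderr into stdout at the fd level, so the caller reads one stream
p = subprocess.Popen(
    ["sh", "-c", "echo to-stdout; echo to-stderr 1>&2"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
merged, _ = p.communicate()  # everything arrives on stdout
print(merged.decode())
```

Because the merge happens on the file descriptors, no polling or stream magic is needed in the calling code.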



>
> > > > [...]
> > > >
> > > > btw, after examine the area again, isn't watchCmd func is what you
> > > >  describe? we just need to replace the asyncProc usages there with
> > > > something that doesn't use StringIO as we do to support py3
> > >
> > > I'm not sure how watchCmd can help with this. Isn't it just a wrapper
> to
> > > get asynchrounous process with a stop condition?
> > >
> >
> > it is. thought you need something similar and afterwards log the outputs
>
> I can run async process with `execCmd` directly and I don't need any
> stop condition. Am I missing something that `watchCmd` provides?
>
probably I didn't get you right at first; forget about watchCmd. I thought
that you're trying to log the output, and in watchCmd we do it with
execCmdLogger, so I wrote that to show you a reference.


> --
> Tomáš Golembiovský <tgole...@redhat.com>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] execCmd() and storing stdout and stderr in log file

2016-07-06 Thread Yaniv Bronheim
On Wed, Jul 6, 2016 at 5:07 PM, Tomáš Golembiovský <tgole...@redhat.com>
wrote:

> On Tue, 5 Jul 2016 11:18:58 +0300
> Yaniv Bronheim <ybron...@redhat.com> wrote:
>
> > On Tue, Jul 5, 2016 at 10:44 AM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> >
> > > Hi
> > > I do work to remove the cpopen usages from execCmd. Using std popen
> over
> > > py3 and subprocess32 over py2 which both implements the same api. The
> only
> > > gap is the output object for async calls that we need to align with the
> > > standard implementation and modify our current usages. I don't think
> that
> > > adding more non-standard logics to execCmd is a good idea. we should
> fit
> > > the standard usage in this function or override it separately with
> specific
> > > implementation in commands.py. You may propose such patch
>
> Sure, that makes sense. Are there any existing drafts/patches I could
> look at or help with?
>
no..

>
> > Merging stdout and stderr into one stream is something Popen can do for
> > us, I believe. Any logging can indeed be done as a wrapper around execCmd.
>
> saving stdout and stderr to a log while the process is running is useful
only for your purpose currently. using asyncProc as you do now in v2v allows
you to run a process and monitor it.. can you use overriding of the asyncProc
wrapper for your needs instead of changing cpopen or execCmd code?


>
> > [...]
> >
> > btw, after examine the area again, isn't watchCmd func is what you
> >  describe? we just need to replace the asyncProc usages there with
> > something that doesn't use StringIO as we do to support py3
>
> I'm not sure how watchCmd can help with this. Isn't it just a wrapper to
> get asynchrounous process with a stop condition?
>

it is. I thought you needed something similar, and to log the outputs afterwards

>
> Could you elaborate on why we want to get rid of StringIO in AsyncProc?
>
it uses StringIO.read, which is not supported in py3. we need to change the
implementation to use six.StringIO
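For reference, a py3-safe in-memory text stream: six.StringIO resolves to io.StringIO on py3, so code written against it works on both interpreters (a minimal sketch, not vdsm's actual AsyncProc code):

```python
import io

buf = io.StringIO()        # what six.StringIO gives you on py3
buf.write(u"stdout line\n")
buf.seek(0)
data = buf.read()          # read() behaves like py2's StringIO.StringIO here
print(data)
```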


> If I understand it right, it's purpose is to make sure the executed
> program doesn't stall on full pipe if VDSM isn't fast enough in
> processing the output. Or am I missing something? But it could again be
> implemented as a wrapper around execCmd and not in it. Is that what you
> mean?
>
>
> --
> Tomáš Golembiovský <tgole...@redhat.com>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] execCmd() and storing stdout and stderr in log file

2016-07-05 Thread Yaniv Bronheim
On Tue, Jul 5, 2016 at 10:44 AM, Yaniv Bronheim <ybron...@redhat.com> wrote:

> Hi
> I do work to remove the cpopen usages from execCmd. Using std popen over
> py3 and subprocess32 over py2 which both implements the same api. The only
> gap is the output object for async calls that we need to align with the
> standard implementation and modify our current usages. I don't think that
> adding more non-standard logics to execCmd is a good idea. we should fit
> the standard usage in this function or override it separately with specific
> implementation in commands.py. You may propose such patch
>
> On Fri, Jul 1, 2016 at 5:43 PM, Tomáš Golembiovský <tgole...@redhat.com>
> wrote:
>
>> Hi,
>>
>> I had a need recently to run a command with execCmd() and store its
>> output and error to a log file, while still receiving it in the calling
>> code. Redirecting the error to output stream to have all in one stream
>> is also a useful feature.
>>
>> All this can be done in the calling code:
>>
>> a)  On the shell level, by modifying the command. This can be
>> intentionally dangerous because things like quoting of arguments has to
>> be considered and also could cause problems when wrappers (sudo, nice,
>> ...) are used.
>>
>> b)  By handling the writing to files in a code. This would add
>> unnecessary code duplication in a long run. (I don't think I'm the only
>> one who can see a potential in this.) Also for asynchronous process
>> runs, when storing both stderr & stdout in one file, it requires polling
>> and some stream magic. It would be better to have this done right and
>> only once so it can be properly tested.
>>
>> That's why I think having it present in execCmd() ready for everyone's
>> use is the best solution. Unfortunately it seems that the code is a)
>> essential on many places in vdsm and b) not properly covered by tests.
>> Which makes it hard to touch. Also apparently some refactoring is either
>> planned or already underway.
>>
>> What is the situation about refactoring that code area? Anyone working
>> on it? Do we have an estimation of time-frame for it?
>>
>> Any suggestions/ideas?
>>
>>
btw, after examining the area again, isn't the watchCmd func what you
describe? we just need to replace the asyncProc usages there with
something that doesn't use StringIO, so that we support py3


>> Tomas
>>
>> --
>> Tomáš Golembiovský <tgole...@redhat.com>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
>
> --
> *Yaniv Bronhaim.*
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] execCmd() and storing stdout and stderr in log file

2016-07-05 Thread Yaniv Bronheim
Hi
I'm working to remove the cpopen usages from execCmd, using the standard
Popen on py3 and subprocess32 on py2, which both implement the same API. The
only gap is the output object for async calls, which we need to align with
the standard implementation, modifying our current usages. I don't think that
adding more non-standard logic to execCmd is a good idea. we should fit
the standard usage in this function, or override it separately with a
specific implementation in commands.py. You may propose such a patch

On Fri, Jul 1, 2016 at 5:43 PM, Tomáš Golembiovský 
wrote:

> Hi,
>
> I had a need recently to run a command with execCmd() and store its
> output and error to a log file, while still receiving it in the calling
> code. Redirecting the error to output stream to have all in one stream
> is also a useful feature.
>
> All this can be done in the calling code:
>
> a)  On the shell level, by modifying the command. This can be
> intentionally dangerous because things like quoting of arguments has to
> be considered and also could cause problems when wrappers (sudo, nice,
> ...) are used.
>
> b)  By handling the writing to files in a code. This would add
> unnecessary code duplication in a long run. (I don't think I'm the only
> one who can see a potential in this.) Also for asynchronous process
> runs, when storing both stderr & stdout in one file, it requires polling
> and some stream magic. It would be better to have this done right and
> only once so it can be properly tested.
>
> That's why I think having it present in execCmd() ready for everyone's
> use is the best solution. Unfortunately it seems that the code is a)
> essential on many places in vdsm and b) not properly covered by tests.
> Which makes it hard to touch. Also apparently some refactoring is either
> planned or already underway.
>
> What is the situation about refactoring that code area? Anyone working
> on it? Do we have an estimation of time-frame for it?
>
> Any suggestions/ideas?
>
>
> Tomas
>
> --
> Tomáš Golembiovský 
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel




-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] [VDSM] Handling of scripts without a .py suffix

2016-05-29 Thread Yaniv Bronheim
On Sun, May 29, 2016 at 3:31 PM, Dan Kenigsberg  wrote:

> On Sun, May 29, 2016 at 02:53:41PM +0300, Nir Soffer wrote:
> > On Sun, May 29, 2016 at 12:36 PM, Dan Kenigsberg 
> wrote:
> > > On Sat, May 28, 2016 at 03:16:10PM +0300, Nir Soffer wrote:
> > >> Hi all,
> > >>
> > >> We have several scripts spread in the source, typically installed in
> > >> /usr/libexec/vdsm.
> > >> We had a useless WHITELIST[1], trying to compile these scripts with
> python3, and
> > >> we have similar (but working) whitelist for pyflakes and pep8.
> > >>
> > >> To simplify the various checks, I think we need to do this:
> > >> 1. Keep .py suffix for all python files
> > >> 2. Move all scripts to helpers/ ([2] handles storage scripts)
> > >> 3. During installation, strip the .py suffix.
> > >>
> > >> With these changes, we can use the various checking commands on the
> entire
> > >> source tree.
> > >>
> > >> For example, these commands check the entire tree:
> > >>
> > >> PYTHONDONTWRITEBYTECODE=1 python3 -m compileall -f -x
> '(\.tox/|\.git/)' .
> > >> pep8 .
> > >> pyflakes .
> > >>
> > >> Thoughts?
> > >>
> > >> [1] https://gerrit.ovirt.org/58204
> > >> [2] https://gerrit.ovirt.org/57363
> > >
> > > Sounds good, though I'd love to keep the separation of scripts into
> > > their natural vertical. Keep storage understand storage, etc. Why are
> > > you piling them into one source directory?
> >
> > This is a separate topic.
> >
> > The helpers do not belong in the library - we should keep libv/vdsm/xxx
> > with only the code that is needed for the xxx package. Helpers are
> external
> > programs that should have access only to public vdsm apis, so they don't
> > need to and should not have access to other files inside lib/vdsm/xxx.
> >
> > This also make the source easier to understand, the structure is closer
> to
> > the final structure after installation.
> >
> > So we can have:
> >
> > helpers/storage
> > helpers/virt
> > ...
> >
> > But I don't see any value in this separation, we have only about 10
> > helpers.
> >
> > Also each directory we add adds overhead of more useless autotools
> > files to maintain. Look how many makefiles we got rid by moving all the
> > tests to one directory.
>
> Ok, I'm convinced.
>

IIUC you plan to change the source tree only; after installation those
executables will stay under /usr/libexec/vdsm.
we have a helpers folder since https://gerrit.ovirt.org/55797 , so now we
should expect verticals there? ok by me.
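Step 3 of the proposal (strip the .py suffix at install time) could be sketched like this; the helper name and directories below are made up for illustration:

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()   # stands in for helpers/ in the source tree
dest = tempfile.mkdtemp()  # stands in for /usr/libexec/vdsm

with open(os.path.join(src, "kvm2ovirt.py"), "w") as f:
    f.write('print("helper")\n')

# install each helper without its .py suffix, marked executable
for name in os.listdir(src):
    if name.endswith(".py"):
        target = os.path.join(dest, name[:-3])
        shutil.copy(os.path.join(src, name), target)
        os.chmod(target, 0o755)

print(os.listdir(dest))
# → ['kvm2ovirt']
```

This keeps the .py suffix in the source tree (so pep8/pyflakes/compileall can find everything) while the installed name stays suffix-free.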


> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] [ACTION REQUIRED] vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on fc23 .packages

2016-04-14 Thread Yaniv Bronheim
ok .. https://gerrit.ovirt.org/56122 . and
https://gerrit.ovirt.org/#/c/55604/ can get in too

On Thu, Apr 14, 2016 at 10:59 AM, Nir Soffer <nsof...@redhat.com> wrote:

> On Thu, Apr 14, 2016 at 10:45 AM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> > I don't think this package is available in epel. if not, just remove it
> from
> > py3 list
> >
> > about why we didn't catch it in jenkins - it's because I don't run "make
> > check" anymore over el7 to save resources. we just build the rpm there to
> > see that we don't miss any dependencies.
> > maybe we should bring back the make check there ... what do you think?
>
> We should, make check takes about 1.5 minutes, typical build time is
> about 10 minutes
>
> fc23 build, with make check: 10:19
>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/5210/console
>
> el7 build, no make check: 11:53
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/747/console
>
> travis build: 3-4 minutes
> https://travis-ci.org/nirs/vdsm/builds
>
> The time of the tests does not make a real difference, and we know
> that the (some) code
> works on both platforms.
>
> We should work on reducing the build time, 10 minutes for running
> tests that take
> 1.5 minutes is crazy overhead.
>
> Nir
>
> >
> > On Thu, Apr 14, 2016 at 10:26 AM, Francesco Romani <from...@redhat.com>
> > wrote:
> >>
> >>
> >> 
> >>
> >> From: "Sandro Bonazzola" <sbona...@redhat.com>
> >> To: "Francesco Romani" <from...@redhat.com>
> >> Cc: "Eyal Edri" <ee...@redhat.com>, "Dan Kenigsberg" <dan...@redhat.com
> >,
> >> "devel" <devel@ovirt.org>, "Yaniv Bronheim" <ybron...@redhat.com>, "Nir
> >> Soffer" <nsof...@redhat.com>
> >> Sent: Thursday, April 14, 2016 9:13:04 AM
> >>
> >> Subject: Re: [ovirt-devel] [ACTION REQUIRED]
> >> vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on
> >> fc23 .packages
> >>
> >>
> >>
> >> On Thu, Apr 14, 2016 at 9:12 AM, Sandro Bonazzola <sbona...@redhat.com>
> >> wrote:
> >>>
> >>>
> >>>
> >>> On Thu, Apr 14, 2016 at 9:01 AM, Francesco Romani <from...@redhat.com>
> >>> wrote:
> >>>>
> >>>>
> >>>>
> >>>> 
> >>>>
> >>>> From: "Eyal Edri" <ee...@redhat.com>
> >>>> To: "Sandro Bonazzola" <sbona...@redhat.com>
> >>>> Cc: "Dan Kenigsberg" <dan...@redhat.com>, "devel" <devel@ovirt.org>,
> >>>> "Yaniv Bronheim" <ybron...@redhat.com>, "Nir Soffer" <
> nsof...@redhat.com>,
> >>>> "Francesco Romani" <from...@redhat.com>
> >>>> Sent: Thursday, April 14, 2016 8:54:50 AM
> >>>> Subject: Re: [ovirt-devel] [ACTION REQUIRED]
> >>>> vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep
> on
> >>>> fc23 .packages
> >>>>
> >>>>
> >>>> Don't we run it per patch as well?
> >>>> How did it got merged?
> >>>>
> >>>> On Apr 14, 2016 9:42 AM, "Sandro Bonazzola" <sbona...@redhat.com>
> wrote:
> >>>>>
> >>>>>
> >>>>>
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-fc23-x86_64/823/console
> >>>>>
> >>>>> 00:05:46.751
> >>>>>
> ==
> >>>>> 00:05:46.751 ERROR: Failure: ImportError (No module named 'netaddr')
> >>>>> 00:05:46.751
> >>>>>
> --
> >>>>> 00:05:46.752 Traceback (most recent call last):
> >>>>> 00:05:46.752   File
> "/usr/lib/python3.4/site-packages/nose/failure.py",
> >>>>> line 39, in runTest
> >>>>> 00:05:46.752 raise self.exc_val.with_traceback(self.tb)
> >>>>> 00:05:46.752   File
> "/usr/lib/python3.4/site-packages/nose/loader.py",
> >>>>> line 418, in loadTestsFromName
> >>>>> 00:05:46.752 addr.filename, addr.module)
> >>>>> 00

Re: [ovirt-devel] [ACTION REQUIRED] vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on fc23 .packages

2016-04-14 Thread Yaniv Bronheim
I don't think this package is available in EPEL. If not, just remove it
from the py3 list.

As for why we didn't catch it in Jenkins - it's because I don't run "make
check" anymore over el7, to save resources. We just build the rpm there to
see that we don't miss any dependencies.
Maybe we should bring back the make check there... what do you think?
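If netaddr really cannot be provided on a given platform, one option (a sketch of a common pattern, not what the vdsm test suite actually does) is to guard the optional dependency so the tests are reported as skipped instead of failing to load with ImportError:

```python
import unittest

# Guard the optional dependency at import time instead of letting the
# whole test module fail to load when the package is absent.
try:
    import netaddr  # noqa: F401 -- optional, may be missing on some platforms
    HAS_NETADDR = True
except ImportError:
    HAS_NETADDR = False


@unittest.skipUnless(HAS_NETADDR, "netaddr is not installed")
class NetaddrDependentTests(unittest.TestCase):
    # Hypothetical test case for illustration; the real tests live
    # elsewhere in the tree (e.g. tests/network/).
    def test_placeholder(self):
        self.assertTrue(HAS_NETADDR)
```

With this pattern a CI run on a platform without netaddr reports the tests as skipped rather than erroring out during collection.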

On Thu, Apr 14, 2016 at 10:26 AM, Francesco Romani <from...@redhat.com>
wrote:

>
> --
>
> *From: *"Sandro Bonazzola" <sbona...@redhat.com>
> *To: *"Francesco Romani" <from...@redhat.com>
> *Cc: *"Eyal Edri" <ee...@redhat.com>, "Dan Kenigsberg" <dan...@redhat.com>,
> "devel" <devel@ovirt.org>, "Yaniv Bronheim" <ybron...@redhat.com>, "Nir
> Soffer" <nsof...@redhat.com>
> *Sent: *Thursday, April 14, 2016 9:13:04 AM
>
> *Subject: *Re: [ovirt-devel] [ACTION REQUIRED]
> vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on
> fc23 .packages
>
>
>
> On Thu, Apr 14, 2016 at 9:12 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
>
>>
>>
>> On Thu, Apr 14, 2016 at 9:01 AM, Francesco Romani <from...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> --
>>>
>>> *From: *"Eyal Edri" <ee...@redhat.com>
>>> *To: *"Sandro Bonazzola" <sbona...@redhat.com>
>>> *Cc: *"Dan Kenigsberg" <dan...@redhat.com>, "devel" <devel@ovirt.org>,
>>> "Yaniv Bronheim" <ybron...@redhat.com>, "Nir Soffer" <nsof...@redhat.com>,
>>> "Francesco Romani" <from...@redhat.com>
>>> *Sent: *Thursday, April 14, 2016 8:54:50 AM
>>> *Subject: *Re: [ovirt-devel] [ACTION REQUIRED]
>>> vdsm_master_build-artifacts-fc23-x86_64 is failing due to missing dep on
>>> fc23 .packages
>>>
>>>
>>> Don't we run it per patch as well?
>>> How did it got merged?
>>> On Apr 14, 2016 9:42 AM, "Sandro Bonazzola" <sbona...@redhat.com> wrote:
>>>
>>>>
>>>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-fc23-x86_64/823/console
>>>>
>>>> *00:05:46.751* ==============================
>>>> *00:05:46.751* ERROR: Failure: ImportError (No module named 'netaddr')
>>>> *00:05:46.751* ------------------------------
>>>> *00:05:46.752* Traceback (most recent call last):
>>>> *00:05:46.752*   File "/usr/lib/python3.4/site-packages/nose/failure.py", line 39, in runTest
>>>> *00:05:46.752*     raise self.exc_val.with_traceback(self.tb)
>>>> *00:05:46.752*   File "/usr/lib/python3.4/site-packages/nose/loader.py", line 418, in loadTestsFromName
>>>> *00:05:46.752*     addr.filename, addr.module)
>>>> *00:05:46.752*   File "/usr/lib/python3.4/site-packages/nose/importer.py", line 47, in importFromPath
>>>> *00:05:46.752*     return self.importFromDir(dir_path, fqname)
>>>> *00:05:46.752*   File "/usr/lib/python3.4/site-packages/nose/importer.py", line 94, in importFromDir
>>>> *00:05:46.753*     mod = load_module(part_fqname, fh, filename, desc)
>>>> *00:05:46.753*   File "/usr/lib64/python3.4/imp.py", line 235, in load_module
>>>> *00:05:46.753*     return load_source(name, filename, file)
>>>> *00:05:46.753*   File "/usr/lib64/python3.4/imp.py", line 171, in load_source
>>>> *00:05:46.753*     module = methods.load()
>>>> *00:05:46.753*   File "", line 1220, in load
>>>> *00:05:46.753*   File "", line 1200, in _load_unlocked
>>>> *00:05:46.753*   File "", line 1129, in _exec
>>>> *00:05:46.753*   File "", line 1471, in exec_module
>>>> *00:05:46.754*   File "", line 321, in _call_with_frames_removed
>>>> *00:05:46.754*   File "/home/jenkins/workspace/vdsm_master_build-artifacts-fc23-x86_64/vdsm/rpmbuild/BUILD/vdsm-4.17.999/tests/network/models_test.py", line 27, in
>>>> *00:05:46.754*     from vdsm.netinfo import bonding, mtus
>>>> *00:05:46.754*   File "/home/jenkins/workspace/vdsm_master_build-artifacts-fc23-x86_64/vdsm/rpmbuild/BUILD/vdsm-4.17.999/lib/vdsm/netinfo/__init__.py", line 26, in
>>>> *00:05:46.754* fr

Re: [ovirt-devel] are there any ways to disable the log file?

2016-03-31 Thread Yaniv Bronheim
Right - since 3.5 we use libvirt's default configuration, which is to report
logs to syslog. As for the command-line log in /var/log/libvirt/qemu/.. I
think it is written by libvirt itself - isn't it? You can probably disable it
with some libvirtd.conf/qemu.conf setting. VDSM should not change it by default.
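For the libvirt debug-log side mentioned in the quoted page, the relevant knobs live in /etc/libvirt/libvirtd.conf. A hedged illustration only - log_filters and log_outputs are real libvirtd.conf options, but the values here are examples, and these control libvirtd's own logging, not necessarily the per-domain files under /var/log/libvirt/qemu/; check the libvirt documentation for your version:

```
# /etc/libvirt/libvirtd.conf
# Example: keep only warnings/errors and send them to syslog.
# (libvirt log levels: 1=debug, 2=info, 3=warning, 4=error)
log_filters="3:qemu 4:event"
log_outputs="3:syslog:libvirtd"
```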

On Thu, Mar 31, 2016 at 4:36 AM, zhukaijie  wrote:

> "Since oVirt 3.5.0 (VDSM 4.16.0), VDSM does not enable the overly verbose
> libvirt debug logs automatically." from
> http://www.ovirt.org/develop/developer-guide/vdsm/log-files/.
> So my problem is.
> Each running VM has it's command line logged in
> /var/log/libvirt/qemu/${vmname}. And now I'd like to disable this log
> function, that is to say, not to record the QEMU command line of VM. So
> could oVirt or VDSM disable this qemu command line log file just like
> libvirt debug logs? Thank you.
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] virt-v2v become zombie via python cpopen on error

2016-03-31 Thread Yaniv Bronheim
Yep, we don't wait for the process inside the cpopen code - we expect the
caller to take care of it. If you use execCmd with sync=False (as in
virt-v2v), you have to take care of killing the process and waiting for its
pid after it's done. You can see that with the fix we raise an exception; this
exception is caught in the _import func, which calls _abort, which kills the
process and hands the pid to zombiereaper, which waits for it to die.
If you use execCmd with sync=True (the default) you can see that we wait
until the process exits and then return.
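The caller-side pattern discussed in this thread can be sketched with the standard subprocess module (a simplified illustration, not vdsm's actual execCmd/zombiereaper code): read until the stream signals EOF by returning an empty string, then wait() to reap the child so it never lingers as a zombie:

```python
import subprocess
import sys

# Spawn a child whose stdout we consume one byte at a time, as the
# v2v code does. A read() that returns b'' means the stream was
# closed by the child -- it is not an error.
p = subprocess.Popen(
    [sys.executable, "-c", "print('done')"],
    stdout=subprocess.PIPE,
)
output = b""
while True:
    c = p.stdout.read(1)
    if not c:  # empty read: child closed its end of the pipe; stop
        break
    output += c
p.wait()  # reap the child so it does not remain a zombie
```

On error paths the caller must still kill() the child and then wait for it (or hand the pid to something like zombiereaper), otherwise the zombie described in this thread appears.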

On Wed, Mar 30, 2016 at 5:30 PM, Nir Soffer  wrote:

> On Wed, Mar 30, 2016 at 3:32 PM, Shahar Havivi  wrote:
> > On 30.03.16 14:28, Nir Soffer wrote:
> >> On Wed, Mar 30, 2016 at 1:36 PM, Michal Skrivanek
> >>  wrote:
> >> >
> >> >> On 30 Mar 2016, at 11:49, Richard W.M. Jones 
> wrote:
> >> >>
> >> >> On Wed, Mar 30, 2016 at 12:19:35PM +0300, Shahar Havivi wrote:
> >> >>> Hi,
> >> >>>
> >> >>> We encounter a problem in VDSM project that virt-v2v become zombie
> task while
> >> >>> importing vm from vmware.
> >> >>> When virt-v2v is in 'copy disk' mode and we someone deletes the vm
> at vmware
> >> >>> the process hang in read() method,
> >> >>> I am pretty sure that its not virt-v2v problem because when I run
> it from the
> >> >>> shell virt-v2v exit with an error, still maybe someone have an
> idea
> >> >>>
> >> >>> I wrote a small python script that encounter the problem:
> >> >>>
> >> >>>
> 
> >> >>> from cpopen import CPopen
> >> >>>
> >> >>> env = {'LIBGUESTFS_BACKEND': 'direct'}
> >> >>> cmd = ['/usr/bin/virt-v2v', '-ic',
> >> >>>   'vpx://', '-o',
> >> >>>   'local', '-os', '/tmp', '-of', 'raw', '-oa', 'sparse',
> >> >>>   '--password-file', '/tmp/passwd', '--machine-readable', 'bbb']
> >> >>> p = CPopen(cmd, env=env)
> >> >>> while p.returncode is None:
> >>
> >> p.returncode just return the instance variable, there is no wait()
> involved.
> >>
> >> The right way:
> >>
> >> while p.poll() is None:
> > the problem is proc.stdout.read(1) didn't raise and error when the stream
> > closed but return ''.
> > its a CPopen behaviour and works differently in subprocess.
>
> No, it works the same in both, this was just a bug in our code.
> Fixed in https://gerrit.ovirt.org/55477
>
> >
> >> ...
> >>
> >> p.returncode calling wait is non-standard feature in vdsm AsyncProc
> >> wrapper. This is
> >> the object used by v2v vdsm module, so there accessing p.returncode
> does call
> >> p.poll().
> >>
> >> These non-standard apis will be removed from vdsm, please do not use
> them.
> >>
> >> >>>c = p.stdout.read(1)
> >> >>>print c
> >> >>>
> 
> >> >>
> >> >> An actual zombie task?  That would indicate that the parent process
> >> >> (your Python program) wasn't doing a wait system call.
> >> >>
> >> >> I downloaded the cpopen-1.4 program, and it doesn't appear to call
> any
> >> >> of the wait*(2) system calls anywhere, so that could be the problem.
> >> >
> >> > I suppose the cpopen parameters are not alright…I’m sure vdsm
> developers can help with that.
> >> >
> >> >>
> >> >> Rich.
> >> >>
> >> >> --
> >> >> Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~rjones
> >> >> Read my programming and virtualization blog:
> http://rwmj.wordpress.com
> >> >> virt-p2v converts physical machines to virtual machines.  Boot with a
> >> >> live CD or over the network (PXE) and turn machines into KVM guests.
> >> >> http://libguestfs.org/virt-v2v
> >> >
> >> > ___
> >> > Devel mailing list
> >> > Devel@ovirt.org
> >> > http://lists.ovirt.org/mailman/listinfo/devel
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [URGENT][ACTION REQUIRED] vdsm versioning system need to be fixed

2016-03-08 Thread Yaniv Bronheim
Latest is 4.17.19, right? So when we test latest 3.6 we check out the
ovirt-3.6 branch, build the rpms and install them - can we do that if we
disable the stable repository and remove the currently installed version?

It's either that, or each tag to stable will follow a tag to latest 3.6,
which will be higher.


On Tue, Mar 8, 2016 at 10:52 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Tue, Mar 8, 2016 at 9:45 AM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> No. the jobs alright. ovirt-3.6 build each night will output vdsm
>> 4.17.19. older version than the build for ovirt-3.6.3 which will raise
>> (currently 4.17.23). That's all we wanted, no ? once 3.6.3 will end,
>> ovirt-3.6 new tag will be the newer version for users that want it
>>
>
>
> Yaniv, when you release 4.17.23, other projects like Hosted Engine will
> require it. And if it's not available in snapshot repository you'll end up
> with a broken dependency there, since you're requiring >= 4.17.23-0 and you
> have only 4.17.19-39 which should be the one you're supposed to test. Note
> that enabling the stable repo will provide 4.17.23 required by Hosted
> engine but won't allow you to test new code since the upgrade won't be
> possible.
>
> If you don't want new vdsm code to be tested, just let me know, I'll add
> stable repository to all projects requiring vdsm and I'll solve the
> dependency breakage.
>
>
>
>
>
>>
>> On Tue, Mar 8, 2016 at 10:30 AM, Sandro Bonazzola <sbona...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, Mar 8, 2016 at 9:07 AM, Yaniv Bronheim <ybron...@redhat.com>
>>> wrote:
>>>
>>>> its reasonable that you can't upgrade 4.17.23 to 4.17.19 and it
>>>> shouldn't be that way.. latest (which afaiu means latest snapshot?
>>>> ovirt-3.6.3) should get higher numbering than last stable (which should be
>>>> last tag on ovirt-3.6 branch). in other words, tags in ovirt-3.6.3 must be
>>>> newer than tags in ovirt-3.6 branch until we stop to update ovirt-3.6.3 and
>>>> backport only to ovirt-3.6 - than we can continue to tag ovirt-3.6. bottom
>>>> line, as I see it - as long as ovirt-3.6.3 alive we raise the tagging only
>>>> there
>>>>
>>>>
>>> this breaks automation at several layers.
>>> are you saying that we shouldn't put in ovirt-master-snapshot what comes
>>> out from ovirt-3.6 branch but push there only the output of 3.6.3 branch?
>>> If so, at least 3 new jenkins jobs (check patch, check merge, build
>>> artifact) have to be created and the nightly publisher need to be updated.
>>> Who's maintaining VDSM jenkins jobs?
>>>
>>>
>>>
>>>
>>>>
>>>>
>>>> On Mon, Mar 7, 2016 at 6:39 PM, Sandro Bonazzola <sbona...@redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Mar 7, 2016 at 1:29 PM, Yaniv Bronheim <ybron...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Again, I should understand the direction you ack here ... and its not
>>>>>> clear in any way I try to read it
>>>>>> lets summaries current status:
>>>>>> master == v4.17.999 (I recalled 4.18 tag for 4.0... probably it
>>>>>> hasn't happened yet)
>>>>>> ovit-3.6.3 last commit points to v4.17.23 (and also contains in its
>>>>>> history 4.17.22 4.17.21 4.17.20 and 4.17.19
>>>>>> ovirt-3.6 == was not tagged since v4.17.19
>>>>>>
>>>>>> So, as far as I see - last "official published" version is tagged
>>>>>> anyway. once we'll finish with z-streams, we can continue tagging only on
>>>>>> ovirt-3.6 branch. but as long as we publish new snapshots (or z-stream
>>>>>> releases as I call them) we can continue the tagging only on ovirt-3.6.3
>>>>>> branch
>>>>>>
>>>>>> The rest of your suggestions can't help in any way. if you prefer you
>>>>>> can use 4th level versioning (4.17.x-y) later on. but currently we just
>>>>>> continue to raise the current 4.17 we have
>>>>>>
>>>>>> now, getting back to the origin mail that Sandro sent:
>>>>>>
>>>>>> """ snip
>>>>>> > vdsm-4.17.19-32.git171584b.el7.centos.src.rpm
>>>>>> <http://jenkins.ovirt.org/job/vdsm_3.6_build-artifacts-el7-x86_64/1

Re: [ovirt-devel] [URGENT][ACTION REQUIRED] vdsm versioning system need to be fixed

2016-03-08 Thread Yaniv Bronheim
No, the jobs are alright. The ovirt-3.6 build each night will output vdsm
4.17.19 - an older version than the build for ovirt-3.6.3, which will keep
rising (currently 4.17.23). That's all we wanted, no? Once 3.6.3 ends, a new
tag on ovirt-3.6 will be the newer version for users that want it.

On Tue, Mar 8, 2016 at 10:30 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Tue, Mar 8, 2016 at 9:07 AM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> its reasonable that you can't upgrade 4.17.23 to 4.17.19 and it shouldn't
>> be that way.. latest (which afaiu means latest snapshot? ovirt-3.6.3)
>> should get higher numbering than last stable (which should be last tag on
>> ovirt-3.6 branch). in other words, tags in ovirt-3.6.3 must be newer than
>> tags in ovirt-3.6 branch until we stop to update ovirt-3.6.3 and backport
>> only to ovirt-3.6 - than we can continue to tag ovirt-3.6. bottom line, as
>> I see it - as long as ovirt-3.6.3 alive we raise the tagging only there
>>
>>
> this breaks automation at several layers.
> are you saying that we shouldn't put in ovirt-master-snapshot what comes
> out from ovirt-3.6 branch but push there only the output of 3.6.3 branch?
> If so, at least 3 new jenkins jobs (check patch, check merge, build
> artifact) have to be created and the nightly publisher need to be updated.
> Who's maintaining VDSM jenkins jobs?
>
>
>
>
>>
>>
>> On Mon, Mar 7, 2016 at 6:39 PM, Sandro Bonazzola <sbona...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, Mar 7, 2016 at 1:29 PM, Yaniv Bronheim <ybron...@redhat.com>
>>> wrote:
>>>
>>>> Again, I should understand the direction you ack here ... and its not
>>>> clear in any way I try to read it
>>>> lets summaries current status:
>>>> master == v4.17.999 (I recalled 4.18 tag for 4.0... probably it hasn't
>>>> happened yet)
>>>> ovit-3.6.3 last commit points to v4.17.23 (and also contains in its
>>>> history 4.17.22 4.17.21 4.17.20 and 4.17.19
>>>> ovirt-3.6 == was not tagged since v4.17.19
>>>>
>>>> So, as far as I see - last "official published" version is tagged
>>>> anyway. once we'll finish with z-streams, we can continue tagging only on
>>>> ovirt-3.6 branch. but as long as we publish new snapshots (or z-stream
>>>> releases as I call them) we can continue the tagging only on ovirt-3.6.3
>>>> branch
>>>>
>>>> The rest of your suggestions can't help in any way. if you prefer you
>>>> can use 4th level versioning (4.17.x-y) later on. but currently we just
>>>> continue to raise the current 4.17 we have
>>>>
>>>> now, getting back to the origin mail that Sandro sent:
>>>>
>>>> """ snip
>>>> > vdsm-4.17.19-32.git171584b.el7.centos.src.rpm
>>>> <http://jenkins.ovirt.org/job/vdsm_3.6_build-artifacts-el7-x86_64/178/artifact/exported-artifacts/vdsm-4.17.19-32.git171584b.el7.centos.src.rpm>
>>>>  because
>>>> the last tag on the 3.6 branch was 4.17.19 and new tags have been created
>>>> in different branches.
>>>>
>>>> this is correct - no problem with that approach, new tag still will be
>>>> higher then 4.17.19 as we see with 4.17.23
>>>>
>>>> > This make impossible to upgrade from stable (4.17.23) to latest
>>>> snapshot.
>>>>
>>>> But we just said that stable (ovirt-3.6) is 4.17.19 and latest is
>>>> 4.17.23 - so it sounds right to me.
>>>>
>>>
>>>
>>> No, we said that stable is 4.17.23 and latest is 4.17.19. So we can't
>>> upgrade from stable to latest, since latest has lower version than stable.
>>>
>>> Let's make it simple, try install stable and then try to upgrade to
>>> snapshot.
>>>
>>> You'll see yourself.
>>>
>>>
>>>
>>>
>>>>
>>>> > This also break dependencies on other projects requiring the latest
>>>> released version like hosted engine.
>>>>
>>>> No its not. HE may require 4.17.23 which is the latest we publish as
>>>> part of 3.6.3
>>>>
>>>> """
>>>>
>>>> Yaniv Bronhaim.
>>>>
>>>> On Mon, Mar 7, 2016 at 1:58 PM, Dan Kenigsberg <dan...@redhat.com>
>>>> wrote:
>>>>
>>>>> On Mon, Mar 07, 2016 at 12:06:32PM +0200, Nir Soffer wrote:
>&

Re: [ovirt-devel] [URGENT][ACTION REQUIRED] vdsm versioning system need to be fixed

2016-03-08 Thread Yaniv Bronheim
It's reasonable that you can't upgrade from 4.17.23 to 4.17.19, and it
shouldn't work that way. Latest (which AFAIU means the latest snapshot,
ovirt-3.6.3) should get a higher number than the last stable (which should be
the last tag on the ovirt-3.6 branch). In other words, tags in ovirt-3.6.3
must be newer than tags in the ovirt-3.6 branch until we stop updating
ovirt-3.6.3 and backport only to ovirt-3.6 - then we can continue to tag
ovirt-3.6. Bottom line, as I see it - as long as ovirt-3.6.3 is alive we
raise the tagging only there.
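The upgrade constraint driving this whole thread is plain version ordering: the package manager only "upgrades" to a numerically higher version. A simplified sketch of the comparison (real RPM comparison via rpmvercmp also considers the release field, e.g. -32.git171584b, and non-numeric segments):

```python
def version_key(version):
    """Split a dotted version string into an integer tuple for comparison.

    Simplified illustration only -- real rpmvercmp also handles release
    fields and non-numeric segments.
    """
    return tuple(int(part) for part in version.split("."))


stable = "4.17.23"    # last tag taken on the ovirt-3.6.3 branch
snapshot = "4.17.19"  # last tag taken on the ovirt-3.6 branch

# The snapshot compares *lower* than stable, so no upgrade path
# exists from stable to snapshot -- exactly the breakage reported.
assert version_key(snapshot) < version_key(stable)
```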



On Mon, Mar 7, 2016 at 6:39 PM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> On Mon, Mar 7, 2016 at 1:29 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> Again, I should understand the direction you ack here ... and its not
>> clear in any way I try to read it
>> lets summaries current status:
>> master == v4.17.999 (I recalled 4.18 tag for 4.0... probably it hasn't
>> happened yet)
>> ovit-3.6.3 last commit points to v4.17.23 (and also contains in its
>> history 4.17.22 4.17.21 4.17.20 and 4.17.19
>> ovirt-3.6 == was not tagged since v4.17.19
>>
>> So, as far as I see - last "official published" version is tagged anyway.
>> once we'll finish with z-streams, we can continue tagging only on ovirt-3.6
>> branch. but as long as we publish new snapshots (or z-stream releases as I
>> call them) we can continue the tagging only on ovirt-3.6.3 branch
>>
>> The rest of your suggestions can't help in any way. if you prefer you can
>> use 4th level versioning (4.17.x-y) later on. but currently we just
>> continue to raise the current 4.17 we have
>>
>> now, getting back to the origin mail that Sandro sent:
>>
>> """ snip
>> > vdsm-4.17.19-32.git171584b.el7.centos.src.rpm
>> <http://jenkins.ovirt.org/job/vdsm_3.6_build-artifacts-el7-x86_64/178/artifact/exported-artifacts/vdsm-4.17.19-32.git171584b.el7.centos.src.rpm>
>>  because
>> the last tag on the 3.6 branch was 4.17.19 and new tags have been created
>> in different branches.
>>
>> this is correct - no problem with that approach, new tag still will be
>> higher then 4.17.19 as we see with 4.17.23
>>
>> > This make impossible to upgrade from stable (4.17.23) to latest
>> snapshot.
>>
>> But we just said that stable (ovirt-3.6) is 4.17.19 and latest is 4.17.23
>> - so it sounds right to me.
>>
>
>
> No, we said that stable is 4.17.23 and latest is 4.17.19. So we can't
> upgrade from stable to latest, since latest has lower version than stable.
>
> Let's make it simple, try install stable and then try to upgrade to
> snapshot.
>
> You'll see yourself.
>
>
>
>
>>
>> > This also break dependencies on other projects requiring the latest
>> released version like hosted engine.
>>
>> No its not. HE may require 4.17.23 which is the latest we publish as part
>> of 3.6.3
>>
>> """
>>
>> Yaniv Bronhaim.
>>
>> On Mon, Mar 7, 2016 at 1:58 PM, Dan Kenigsberg <dan...@redhat.com> wrote:
>>
>>> On Mon, Mar 07, 2016 at 12:06:32PM +0200, Nir Soffer wrote:
>>> > +1
>>> >
>>> > On Mon, Mar 7, 2016 at 10:29 AM, Sandro Bonazzola <sbona...@redhat.com>
>>> wrote:
>>> > >
>>> > >
>>> > > On Mon, Mar 7, 2016 at 9:03 AM, Martin Perina <mper...@redhat.com>
>>> wrote:
>>> > >>
>>> > >>
>>> > >>
>>> > >> - Original Message -
>>> > >> > From: "Yaniv Bronheim" <ybron...@redhat.com>
>>> > >> > To: "Martin Perina" <mper...@redhat.com>
>>> > >> > Cc: "Nir Soffer" <nsof...@redhat.com>, "Sandro Bonazzola"
>>> > >> > <sbona...@redhat.com>, "Francesco Romani"
>>> > >> > <from...@redhat.com>, "Dan Kenigsberg" <dan...@redhat.com>,
>>> "devel"
>>> > >> > <devel@ovirt.org>
>>> > >> > Sent: Monday, March 7, 2016 8:16:05 AM
>>> > >> > Subject: Re: [ovirt-devel] [URGENT][ACTION REQUIRED] vdsm
>>> versioning
>>> > >> > system need to be fixed
>>> > >> >
>>> > >> > I don't understand what's the different .. that's what we
>>> currently do.
>>> > >> > Sandro complains that he can't upgrade latest stable which can be
>>> > >> > 4.17.23
>>> > >> > to lat

Re: [ovirt-devel] NGN - Network error when adding a new NGN node

2016-02-24 Thread Yaniv Bronheim
I suspect it relates to the desire to remove the dependency on vdsm-cli -
https://gerrit.ovirt.org/#/c/53831/ - which is not merged yet, and Fabian
created the ISO without vdsm-cli, so the deploy failed in the middle. After it
failed it couldn't recover even once vdsm-cli was installed.
Sounds reasonable?

On Wed, Feb 24, 2016 at 4:01 PM, Dan Kenigsberg  wrote:

> On Wed, Feb 24, 2016 at 08:16:45AM -0500, Eli Mesika wrote:
> > Hi Guys
> >
> > I am working on NGN
> > I had installed a ovirt-node from ISO, it (by mistake) did not include
> the vdsm-cli package, so host-deploy failed on that
> > Fabian asked me to install vdsm-cli manually using :
> >
> > yum --enablerepo=ovirt* install vdsm-cli
> >
> > In that time the host was installed but did not came up having
> networking issues (see attached logs)
> >
> > Fabian thinks that this is regression in VDSM, he said that same method
> was tested 2 weeks ago with no problems
>
> Let's see {super,vdsm}.log, then and the exact vdsm version involved.
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] vdsm meeting summary - 23.2.16

2016-02-24 Thread Yaniv Bronheim
On Tue, Feb 23, 2016 at 10:54 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Tue, Feb 23, 2016 at 6:30 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> (Nir, Adam, YanivB, Dan, Milan, Piotr, Francesco, Martin Polednik, Edward)
>>
>> - Some of us started to work on refactoring for python3, basic file
>> movements and improving testings
>>
>>  - We added functional tests to check-merged automation script -
>> currently, for some reason, it doesn't run tests that require root. I'm on
>> it.
>>
>>  - if anyone know that their functional tests work good please add them
>> to the list
>> so they will run each merge
>>
>>  - moving code from supervdsmServer to supervdsm_api - follow
>> https://gerrit.ovirt.org/53496 and move also virt, storage and sla part
>>
>>  - Nir says to use the weekly contact from storage team to help with
>> verification about storage code changes- I'll try to reach them next week
>> to test direct LUN
>>
>
> Tal should know who is the qe contact
>

Can you publish it somewhere, or send mail to the devel list when you change
shifts?
I currently struggle with testing specific rpms with ovirt-system-tests - it
requires some tweaks there and I prefer to trust your verification. I moved
blkid, parted_utils, hostdev and alignmentScan - how do you suggest we verify
that this didn't break anything? (https://gerrit.ovirt.org/#/c/53897/4)


>
>>
>>   - I also wanted to use lago basic_suite (ovirt-system-tests) to run
>> full flow from specific vdsm commit - but the infrastructure for that
>> requires many manual steps and its not easy as I expected it to be.
>>
>> - moving code from vdsm dir (/usr/share/vdsm/) to site-packages/vdsm (by
>> moving them to lib/vdsm) - this is required to avoid relative imports which
>> are not allowed in python3 - storage dir is the main gap we currently have.
>>
>> - python-modernize - Dan uses it for network tests dir and encourage us
>> to start running it for our parts
>>
>
> Where is this tool?
>

 dnf install python-modernize


>
>>
>> - there are some schema changes, mostly removal parts that Piotr posted
>> as part of converting to yaml structure. this code needs to be reviewed
>> - Nir asks to keep the current API order instead of sorting by names
>> - Nir concerns about yaml notation looks for a list with single element
>>
>
> Whis is less nice than the previous notation:
>
> new:
>
> - name: pathlist
> type:
> - *IscsiSessionInfo
>
> old:
>
>   'pathlist': ['IscsiSessionInfo'],
>
> I don't see how to make this better.
>
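The two notations above describe the same structure; a quick illustrative check, using hand-written Python literals to stand in for the parsed YAML and the old schema (not vdsm's real schema loader):

```python
# New-style (YAML) parameter list, as it would parse into Python:
new_style = [
    {"name": "pathlist", "type": ["IscsiSessionInfo"]},
]

# Old-style mapping from parameter name to its type:
old_style = {"pathlist": ["IscsiSessionInfo"]}


def to_old_style(params):
    """Collapse the new-style list of mappings into the old-style dict."""
    return {p["name"]: p["type"] for p in params}


assert to_old_style(new_style) == old_style
```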
> - split the yaml schema to several files is complex but nir asks to see if
>> its possible
>>
>> - storage team mainly works hard on SDM patches
>>
>> - Edward works on splitting network tests between unit tests and
>> integration tests which do environment setup changes
>>
>> - we currently don't run the slow test automatically
>> - we might want more "tags" for tests such as SlowTests, StressTests
>>
>
> We can solve this issue by marking tests as slowtest, but we better use
> more
> specific tags such as "integration", for tests that depend on the
> environment, and
> "privileged" for tests that requires root.
>

>
>>
>> And the most important issue - vdsm for ovirt 4.0 will keep only 3.6
>> backward compatibility - if you don't agree with that statement, please say
>> why ... but as far as we see it, this is the direction and we already
>> removing 3.5 stuff.
>>
>
> +2 from storage side
>
>
>>
>> Thanks all for participating,
>>
>
> Thanks for taking the notes!
>
> Nir
>
>


-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] vdsm meeting summary - 23.2.16

2016-02-23 Thread Yaniv Bronheim
(Nir, Adam, YanivB, Dan, Milan, Piotr, Francesco, Martin Polednik, Edward)

- Some of us started to work on refactoring for python3, basic file
movements and improving testings

 - We added functional tests to the check-merged automation script -
currently, for some reason, it doesn't run tests that require root. I'm on it.

 - If anyone knows that their functional tests work well, please add them to
the list so they will run on each merge

 - moving code from supervdsmServer to supervdsm_api - follow
https://gerrit.ovirt.org/53496 and move also virt, storage and sla part

 - Nir says to use the weekly contact from storage team to help with
verification about storage code changes- I'll try to reach them next week
to test direct LUN

  - I also wanted to use lago basic_suite (ovirt-system-tests) to run full
flow from specific vdsm commit - but the infrastructure for that requires
many manual steps and its not easy as I expected it to be.

- moving code from vdsm dir (/usr/share/vdsm/) to site-packages/vdsm (by
moving them to lib/vdsm) - this is required to avoid relative imports which
are not allowed in python3 - storage dir is the main gap we currently have.

- python-modernize - Dan uses it for the network tests dir and encourages us
to start running it for our parts

- there are some schema changes, mostly removals, that Piotr posted as part of
converting to the yaml structure. This code needs to be reviewed
- Nir asks to keep the current API order instead of sorting by names
- Nir is concerned about how the yaml notation looks for a list with a single
element
- splitting the yaml schema into several files is complex, but Nir asks to see
if it's possible

- storage team mainly works hard on SDM patches

- Edward works on splitting the network tests between unit tests and
integration tests, which make changes to the environment setup

- we currently don't run the slow tests automatically
- we might want more "tags" for tests, such as SlowTests and StressTests

And the most important issue - vdsm for oVirt 4.0 will keep backward
compatibility only with 3.6 - if you don't agree with that statement, please
say why... but as far as we see it, this is the direction and we are already
removing 3.5 stuff.

Thanks all for participating,

-- 
*Yaniv Bronhaim.*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] check-merged job for vdsm

2016-02-22 Thread Yaniv Bronheim
On Mon, Feb 22, 2016 at 9:06 PM, Yaniv Kaul <yk...@redhat.com> wrote:

> On Mon, Feb 22, 2016 at 5:25 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
>
>> Hi,
>> I added recently to check-merged phase a job for automatic functional
>> test run. see for example
>> http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/52/
>>
>
> Excellent!
>
>
>>
>> https://gerrit.ovirt.org/#/c/48268/58/automation/check-merged.sh -
>> generally, it installs lago in the jenkins machine, set up f23 vm, ssh to
>> it, run vdsm service (in deploy.sh), and the commands in check-merged.sh
>>
>
> I think it'd be cool if it could set up and additional EL 7.2 VM and run
> those same tests in parallel to the one on the F23 one.
> Y.
>

Yes, that's part of the plan and the reason we use lago. It should be quick
and easy to add additional envs - I'll check what stops us now. We wanted to
stabilize it first, given the libvirt bug we had last week on f23, and to
avoid overloading the jenkins vms, but I think now we're ready.


>
>
>>
>> You can see the commands in check-merged.sh (I'll try to improve the
>> readability there).
>> you can add new tests there by adding them to FUNCTIONAL_TESTS_LIST or
>> calling your own script inside the vm - I think maybe to change it to run
>> all scripts under certain directory instead of the current ./run_test.sh
>> call. I'll see how the usage evolves and will improve it.
>>
>> To check your changes before merging you can use -
>> http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/build?delay=0sec
>> which requires jenkins.com login
>>
>> Currently many functional tests under tests/functional are broken
>> - vmQoSTests.py virtTests.py momTests.py that import VdsProxy - please try
>> to fix them or remove them if they are not in use.
>>
>> --
>> *Yaniv Bronhaim.*
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>


-- 
*Yaniv Bronhaim.*

[ovirt-devel] check-merged job for vdsm

2016-02-22 Thread Yaniv Bronheim
Hi,
I recently added a job for automatic functional test runs to the check-merged
phase. See for example
http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/52/

https://gerrit.ovirt.org/#/c/48268/58/automation/check-merged.sh -
generally, it installs lago on the jenkins machine, sets up an f23 vm, sshes to
it, runs the vdsm service (in deploy.sh), and runs the commands in check-merged.sh

You can see the commands in check-merged.sh (I'll try to improve the
readability there).
You can add new tests there by adding them to FUNCTIONAL_TESTS_LIST or by
calling your own script inside the vm - I am thinking of maybe changing it to run
all scripts under a certain directory instead of the current ./run_test.sh
call. I'll see how the usage evolves and will improve it.
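
The "run every script under a directory" idea could look roughly like the
sketch below. This is illustrative only - the `run_all_scripts` helper and
the directory layout are hypothetical, not part of the current
check-merged.sh:

```python
import glob
import os
import subprocess


def run_all_scripts(directory):
    """Run every *.sh script under directory, returning exit codes by name."""
    results = {}
    for script in sorted(glob.glob(os.path.join(directory, "*.sh"))):
        results[os.path.basename(script)] = subprocess.call(["sh", script])
    return results
```

check-merged.sh could then fail when any returned code is non-zero, instead
of maintaining FUNCTIONAL_TESTS_LIST by hand.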

To check your changes before merging you can use -
http://jenkins.ovirt.org/job/vdsm_master_check-merged-fc23-x86_64/build?delay=0sec
which requires jenkins.com login

Currently many functional tests under tests/functional are broken
- vmQoSTests.py, virtTests.py and momTests.py, which import VdsProxy - please try
to fix them or remove them if they are not in use.

-- 
*Yaniv Bronhaim.*

[ovirt-devel] SyNc meeting - Vdsm 26.1

2016-01-26 Thread Yaniv Bronheim
(ybronhei, piotr, fromani, mzamazal, edwafh, ydary)

Hi,

Yaniv Dary raised the desire to modify vdsm name to ovirt-host-agent. The
motivation is to align with ovirt components (ovirt-agent, ovirt-engine,
ovirt-host-deploy, ovirt-node  and so on). In addition, vdsm versions
are not aligned with ovirt versions at all (historical reasons which we
keep to avoid upgrade issues).
GSS gets many questions about the differences between vdsm and other
components in ovirt, and this will make their life easier.
Basically, we all agreed that it is a good direction to have consistency
with other components in ovirt, but ovirt-host-agent sounds like less than
what vdsm actually does (ovirt-host-manager sounds like a better fit).
Yaniv will send an official mail to the devel list and we'll discuss it more
deeply there.

Francesco backported health patches merged to ovirt-3.5; they will be published
as part of vdsm v4.16.33 (https://gerrit.ovirt.org/#/c/52699
https://gerrit.ovirt.org/#/c/52700
https://gerrit.ovirt.org/#/c/52701). This can help later investigations
with 3.5 vdsms.

Milan works on Debian packaging; he is cleaning up the mess we have under our
current debian folder. He recently discovered issues with the sanlock package
in Debian which he is still investigating.


Greetings,

-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Packaging: Rationale for some split packages

2016-01-25 Thread Yaniv Bronheim
I'd love to have such handling also in our rpm build.
We started some work in https://gerrit.ovirt.org/#/c/42491 - let's try to
align the handling in both Debian and Fedora packaging.
Feel free to post a patch for that as well and I promise to push it forward.

On Mon, Jan 25, 2016 at 10:14 AM, Milan Zamazal  wrote:

> Nir Soffer  writes:
>
> > Note that safelease requires several packages which are *not* required
> > for safelease, but for vdsm.
> >
> > Currently vdsm is noarch rpm, so it cannot require arch specific
> packages.
> > since it requires safelease, we added arch specific packages to
> safelease.
>
> Thank you for making me aware about those dependencies.  The same
> problem exists in Debian, I'll think how to handle it (making vdsm
> package architecture specific being the obvious default choice, the
> safelease hack is out of the question).
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Ensure processes death by terminating decorator - https://gerrit.ovirt.org/51407

2016-01-24 Thread Yaniv Bronheim
On Sat, Jan 23, 2016 at 8:05 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Sat, Jan 23, 2016 at 7:11 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> > any updates about that?
> >
> > https://gerrit.ovirt.org/#/c/52357/ - this can be verified and get it
>
> Should wait for execCmd (it should use the new terminating() context
> manager
> to ensure process is killed on errors).
>
> https://gerrit.ovirt.org/#/c/52349/ - got tiny comment about unneeded kill
> > call
>
> Not ready yet
>
> > https://gerrit.ovirt.org/51407 - Nir can merge
>
> Waiting for Piotr to ack the tests (https://gerrit.ovirt.org/52362)
>
> >
> > Nir - please review https://gerrit.ovirt.org/52646 or take over
>
> Should wait for execCmd
>
> >
> > and update soon what the plans regarding the async usages in
> > vdsm/storage/mount.py
> > vdsm/storage/iscsiadm.py
> > vdsm/storage/imageSharing.py
> > vdsm/storage/hba.py
> > vdsm/storage/blockSD.py
>
> I will not have time for this at least after devconf, and even after
> devconf we are
> busy with spm removal (probably not for 4.0) and ovirt-image (for 4.0).
>
> I will add this to the storage todo list for now.
>
> > and v2v.py
>
> v2v needs to read only from stdout. If v2v is not expected to write
> more than 64k
> errors to stderr (highly unlikely), we don't need to use AsyncProc.
> Can create the
> process like we do in qemuimg.py, and read from the process stdout.
>
> If we want to handle the unlikely case of huge error output, we can
> use CommandStream
> to collect the output from both streams, but we will have to
> change the output
> parser to be driven by output callback, instead of reading lines from
> stdout.
>
> > I prefer not to wait for that too long - we can remove the deathSignal
> > usages there, and continue with https://gerrit.ovirt.org/#/c/48384
>
> I think we should continue with other Python 3 porting efforts until
> we can eliminate
> cpopen non-standard apis.
>
> > Please also check if you can take over the re-implementation of async
> proc
> > (https://gerrit.ovirt.org/49441) as you (storage operations) are the
> main
> > and only user of it, and it should fit Popen proc.
>
> I think the current patch does not need too much work - only implement wait
> with a timeout.
>
> What about the subproces32 project you found? it looks like the right
> direction:
> https://pypi.python.org/pypi/subprocess32/

The author is a Python core developer:
> https://github.com/python/cpython/commits?author=gpshead
>
> We can drop cpopen, AsyncProc, AsyncProcOperation (used only in iscsi), and
> use this module to get Python 3 features and reliability, and be
> compatible with
> both Python 3 (fedora 24?) and 2 (el 7.x).
>

Maybe. We first need to remove our non-standard usages; then we might be
able to change the implementation easily and see the results... first things
first. Try to estimate when your team will be able to proceed with the storage
parts. I already started with https://gerrit.ovirt.org/#/c/52646/ ; if I
get to more parts it will be great, but we need more manpower here
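
For reference, adopting subprocess32 usually needs no API changes - the
common pattern is a conditional import plus a timeout-and-kill cleanup in
place of cpopen's deathSignal. The sketch below is not vdsm code, just the
shape of that approach:

```python
try:
    # Python 2 backport of the Python 3.2+ subprocess module
    import subprocess32 as subprocess
except ImportError:
    import subprocess  # Python 3: stdlib already has timeout support


def run_with_timeout(args, timeout):
    """Run a command with the standard Popen API; kill it if it outlives timeout."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # no death signal needed - explicit cleanup on timeout
        out, _ = proc.communicate()
    return proc.returncode, out
```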


> > On Mon, Jan 18, 2016 at 3:12 PM, Francesco Romani <from...@redhat.com>
> > wrote:
> >>
> >> - Original Message -
> >> > From: "Yaniv Bronheim" <ybron...@redhat.com>
> >> > To: "devel" <devel@ovirt.org>, "Shahar Havivi" <shav...@redhat.com>,
> >> > "Francesco Romani" <from...@redhat.com>, "Nir
> >> > Soffer" <nsof...@redhat.com>
> >> > Sent: Monday, January 18, 2016 11:01:10 AM
> >> > Subject: Ensure processes death by terminating decorator -
> >> > https://gerrit.ovirt.org/51407
> >> >
> >> > Hi guys,
> >> >
> >> > Following the work to omit deathSignal attribute from our cpopen
> >> > implementation we posted https://gerrit.ovirt.org/51407 which is
> ready
> >> > for
> >> > use.
> >> > Currently locations that should use it are:
> >> > (I wrote above who I expect to check the area and post a patch for
> that
> >> > -
> >> > we'll discuss it during next vdsm-sync to follow the work)
> >>
> >> > fromani:
> >> > vdsm_hooks/checkimages/before_vm_start.py - in checkImage - the code
> >> > looks
> >> > ok, but check if not better to use the terminating decorator.. I think
> >> > it
> >> > will be nicer
> >>
> >> Fair enough, posted https://gerrit.ovirt.org/52349
> >>
> >> > some places define deathSignal for no reason, the call is sync -
> please
> >> > remove those places:
> >> [...]
> >> > fromani:
> >> > lib/vdsm/virtsparsify.py
> >>
> >> Done in https://gerrit.ovirt.org/52357
> >>
> >>
> >>
> >> --
> >> Francesco Romani
> >> RedHat Engineering Virtualization R & D
> >> Phone: 8261328
> >> IRC: fromani
> >
> >
> >
> >
> > --
> > Yaniv Bronhaim.
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Ensure processes death by terminating decorator - https://gerrit.ovirt.org/51407

2016-01-23 Thread Yaniv Bronheim
any updates about that?

https://gerrit.ovirt.org/#/c/52357/ - this can be verified and get in
https://gerrit.ovirt.org/#/c/52349/ - got a tiny comment about an unneeded kill
call
https://gerrit.ovirt.org/51407 - Nir can merge

Nir - please review https://gerrit.ovirt.org/52646 or take over

and update us soon on the plans regarding the async usages in
vdsm/storage/mount.py
vdsm/storage/iscsiadm.py
vdsm/storage/imageSharing.py
vdsm/storage/hba.py
vdsm/storage/blockSD.py
and v2v.py

I prefer not to wait for that too long - we can remove the deathSignal
usages there, and continue with https://gerrit.ovirt.org/#/c/48384

Please also check if you can take over the re-implementation of async proc (
https://gerrit.ovirt.org/49441) as you (storage operations) are the main
and only user of it, and it should fit Popen proc.


On Mon, Jan 18, 2016 at 3:12 PM, Francesco Romani <from...@redhat.com>
wrote:

> - Original Message -
> > From: "Yaniv Bronheim" <ybron...@redhat.com>
> > To: "devel" <devel@ovirt.org>, "Shahar Havivi" <shav...@redhat.com>,
> "Francesco Romani" <from...@redhat.com>, "Nir
> > Soffer" <nsof...@redhat.com>
> > Sent: Monday, January 18, 2016 11:01:10 AM
> > Subject: Ensure processes death by terminating decorator -
> https://gerrit.ovirt.org/51407
> >
> > Hi guys,
> >
> > Following the work to omit deathSignal attribute from our cpopen
> > implementation we posted https://gerrit.ovirt.org/51407 which is ready
> for
> > use.
> > Currently locations that should use it are:
> > (I wrote above who I expect to check the area and post a patch for that -
> > we'll discuss it during next vdsm-sync to follow the work)
>
> > fromani:
> > vdsm_hooks/checkimages/before_vm_start.py - in checkImage - the code
> looks
> > ok, but check if not better to use the terminating decorator.. I think it
> > will be nicer
>
> Fair enough, posted https://gerrit.ovirt.org/52349
>
> > some places define deathSignal for no reason, the call is sync - please
> > remove those places:
> [...]
> > fromani:
> > lib/vdsm/virtsparsify.py
>
> Done in https://gerrit.ovirt.org/52357
>
>
>
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Packaging: Rationale for some split packages

2016-01-22 Thread Yaniv Bronheim
If in Debian we will have a different set of packages for vdsm, can you please
also add a section about how to build and install vdsm on Debian in
https://www.ovirt.org/Vdsm_Developers ?

On Fri, Jan 22, 2016 at 3:34 PM, Milan Zamazal  wrote:

> Thank you for clarification.  So as for Debian, I'll create vdsm-api
> package (after 3.6) and I won't separate the other mentioned packages
> from vdsm-python.
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

[ovirt-devel] Vdsm sync 19/1

2016-01-19 Thread Yaniv Bronheim
Hi all,

(Participated: Piotr, Francesco, Adam, Nir, Edy and I)

We discussed about the following topics:

Python3  - terminating contextmanager is ready
https://gerrit.ovirt.org/51407  - this allows us to remove deathSignal
usages and safely kill processes on failures.
Francesco already covered one location (https://gerrit.ovirt.org/#/c/52349/);
this week I hope we'll *start* the work to cover the rest of the code as
follows:
vdsm/storage/mount.py
vdsm/storage/iscsiadm.py
vdsm/storage/imageSharing.py
vdsm/storage/hba.py
vdsm/storage/blockSD.py
vdsm/v2v.py
If any gaps or dependencies are raised, please note them in the python3 work
sheet (
https://docs.google.com/spreadsheets/d/180F-C1jU54ajUn7TuR-NwrKRZY1IiZI1Z8U5HWbvEvM/edit#gid=0)
and we'll discuss them in the next call.

Next will be to remove all deathSignal usages - currently there are
redundant places where it is being used that can be removed today:
lib/vdsm/qemuimg.py
vdsm/storage/curlImgWrap.py
vdsm/storage/storage_mailbox.py
vdsm/storage/misc.py
vdsm/API.py

Hopefully patches for that will be posted during the week.
Next step will be to use Popen where cpopen is not available (
https://gerrit.ovirt.org/48384) and we'll move forward with migrating more
tests to python3.

- Vdsm communication - we did a little summary of what was explored and
what we want to have. It was only a discussion to see where we're heading.
External broker - In the past we checked Qpid (AMQP) and ActiveMQ (STOMP).
In Qpid we found many bugs and gaps; we thought about using only its
client for point-to-point communication, but then found it too
complex. ActiveMQ is written in Java and was found to be too slow and to
consume a lot of memory, so we thought about having only one instance per
cluster, or maybe running it on an external host that doesn't run vdsm, or
putting it on the engine host - for that we would need to change all the
"host lifecycle" logic that we have today, so we left it as well (Piotr,
fill me in if I missed something).
We have an implementation of a "mini broker" as part of vdsm which performs
what we need; we can run it as a process external to vdsm and forward
messages to clients. There are alternatives such as ZeroMQ that we can explore.
Bottom line - we want to improve the communication inside the host, between
vdsm and other services such as supervdsm and mom, and maybe later we'll split
the current implementation of vdsm into more services that will run in
parallel (such as vm monitoring, vdsm-storage and so on). For that we can
use dbus, multiprocessing (uds), or some kind of a broker.
This led us to talk about service separation, which we want to design
soon.

- Vdsm contract - Piotr sent the yaml schema plan in
https://gerrit.ovirt.org/#/c/52404 - please ack that you agree with the
concept. Piotr is moving on with that and starting to migrate all verbs to that
form. For the next version we shall have both types of schema available, and
start to add new verb and event structures in the new yaml format.
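
To illustrate what a schema-driven contract buys, here is a toy validator
over a plain dict that mirrors the idea. The verb names and parameter types
below are made up, and the actual yaml layout in the gerrit change above is
not reproduced here:

```python
# Hypothetical contract: verb name -> expected parameter types.
SCHEMA = {
    "Host.getCapabilities": {"params": {}},
    "VM.create": {"params": {"vmID": str, "memSize": int}},
}


def validate_call(verb, params):
    """Reject calls whose parameters don't match the declared contract."""
    spec = SCHEMA[verb]["params"]
    unknown = set(params) - set(spec)
    if unknown:
        raise ValueError("unknown params: %s" % sorted(unknown))
    for name, value in params.items():
        if not isinstance(value, spec[name]):
            raise TypeError("%s must be %s" % (name, spec[name].__name__))
```

A schema like this lets both client and server check requests before any
code runs, which is the point of moving the contract into one yaml file.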

- Exceptions - Nir raises that we should define Virt specific exceptions -
Francesco can elaborate about the plans next week.

- Network updates - Edy is checking how to improve network configuration time.
Currently vdsm modifies ifcfg files, and after changing them it takes
time for the update to take effect.

Interesting call guys, I encourage more developers to participate, listen
and influence vdsm 4.0 directions.

See you next week.

-- 
*Yaniv Bronhaim.*

[ovirt-devel] Ensure processes death by terminating decorator - https://gerrit.ovirt.org/51407

2016-01-18 Thread Yaniv Bronheim
Hi guys,

Following the work to omit deathSignal attribute from our cpopen
implementation we posted https://gerrit.ovirt.org/51407 which is ready for
use.
Currently locations that should use it are:
(I wrote above who I expect to check the area and post a patch for that -
we'll discuss it during next vdsm-sync to follow the work)

shavivi:
vdsm/v2v.py - in _start_virt_v2v you return an asyncProc that should call
kill() on failure

fromani:
vdsm_hooks/checkimages/before_vm_start.py - in checkImage - the code looks
ok, but check whether it would not be better to use the terminating
decorator; I think it will be nicer

nsoffer:
vdsm/storage/mount.py - looks good, I prefer to use the terminator there
vdsm/storage/iscsiadm.py
vdsm/storage/imageSharing.py
vdsm/storage/hba.py - good handling, use the terminator
vdsm/storage/blockSD.py

please check your usage of the returned process and see that you're not
depending on deathSignal for it to die properly on crash
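
The shape of the helper is roughly the following. This is a minimal sketch
in the spirit of the patch; the real implementation in
https://gerrit.ovirt.org/51407 handles reaping and logging differently:

```python
import subprocess
from contextlib import contextmanager


@contextmanager
def terminating(proc):
    """Guarantee the child process is dead and reaped when the block exits."""
    try:
        yield proc
    finally:
        if proc.poll() is None:
            proc.terminate()
            try:
                proc.wait(timeout=2)
            except subprocess.TimeoutExpired:
                proc.kill()  # escalate if SIGTERM was ignored
        proc.wait()
```

Callers then write `with terminating(subprocess.Popen(cmd)) as proc: ...`;
errors raised inside the block can no longer leak a running child, with no
deathSignal needed.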

some places define deathSignal for no reason, the call is sync - please
remove those places:

nsoffer:
lib/vdsm/qemuimg.py
vdsm/storage/curlImgWrap.py
vdsm/storage/storage_mailbox.py
vdsm/storage/misc.py

fromani:
lib/vdsm/virtsparsify.py

ybronhei:
vdsm/API.py


If you can't get to it in a reasonable time, add the task to the list [1]
and someone else will pick it up.
Please try to go over it before the sync call.

[1] -
https://docs.google.com/spreadsheets/d/180F-C1jU54ajUn7TuR-NwrKRZY1IiZI1Z8U5HWbvEvM/edit#gid=0


-- 
*Yaniv Bronhaim.*

[ovirt-devel] Summary for Vdsm sync 12.1

2016-01-12 Thread Yaniv Bronheim
(edwad, nsoffer, pkliczew, alitke, fromani, ybronhei)

Python3 current work and gaps -
https://docs.google.com/a/redhat.com/spreadsheets/d/180F-C1jU54ajUn7TuR-NwrKRZY1IiZI1Z8U5HWbvEvM/edit?usp=sharing
The current main gaps are the removal of deathSignal, to unify the cpopen API
with the standard Popen API, and AsyncProc, which uses StringIO internal
attributes that don't exist in six's StringIO. Because of those two we can't
import commands.py, which most of the tests use.

Jsonrpc client - we need to close gaps before disabling the xmlrpc client -
Piotr can add more words about the gaps that we currently have.

Testing new storage verbs without the engine - such a client ^ is a must -
https://gerrit.ovirt.org/#/c/35181/6/contrib/jsonrpc
basically when we finish with that we'll be able to remove vdsClient

Francesco sent an update about virt items -
https://www.mail-archive.com/devel@ovirt.org/msg04976.html

A new leak was exposed in vdsm - fromani will elaborate more once he finds the
exact location. Nir suggested https://gerrit.ovirt.org/#/c/51708/ to see
the status of the gc and gather more internal information about the current
vdsm instance - this can help to debug remotely without changing the code.
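
For illustration, the kind of gc introspection such a patch exposes can be
approximated in a few lines. This is not the patch's actual interface, just
the idea of a live-object census for leak hunting:

```python
import gc
from collections import Counter


def top_types(n=5):
    """Count live objects per type - a quick way to spot a leaking class."""
    return Counter(type(o).__name__ for o in gc.get_objects()).most_common(n)
```

Comparing two snapshots of this output over time shows which type keeps
growing, without changing or restarting the running daemon.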

-- 
*Yaniv Bronhaim.*

[ovirt-devel] Vdsm sync 5/1 summary after short delay

2016-01-10 Thread Yaniv Bronheim
(fromani, nsoffer, ybronhei, alitke)

 - Removing xmlrpc for good - who should accept it? where do we stand with
full jsonrpc client ? (we didn't get to any conclusions and said that we'll
re-raise this topic next week with Piotr)

 - Moving from nose to pytest - generally a good approach to pursue. It
requires some changes in the current testlib.py code. Must be an item for the
next major version (Nir already managed to run most of the tests with it, and
noted a few gaps)

 - Exception patches - still in progress, please review (
https://gerrit.ovirt.org/48868)

 - python3 effort to cover all asyncProc usage, and allowing utils import
without having python3-cpopen - https://gerrit.ovirt.org/51421
https://gerrit.ovirt.org/49441 . still under review

We didn't take notes during that talk, so if I forgot to mention something
I apologize. Feel free to reply and raise it

Greetings,

-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] [VDSM] Vdsm weekly call

2016-01-10 Thread Yaniv Bronheim
On Sun, Jan 10, 2016 at 10:26 PM, Nir Soffer  wrote:

> Hi all,
>
> We are having vdsm call each Tuesday, 17:00 – 17:30
> at https://bluejeans.com/2061366027
>
> Topics for this week call:
> - jsonrpc client
> - Killing vdsClient
> - Removing xmlrpc
> - Anything else *you* like to discuss
>

adding
 - utils.py split (https://gerrit.ovirt.org/51421) and plans for removing
deathSignal usages by guarding processes in a context, to allow using
standard Popen api.
   ^ this is part of the python3 effort, without having python-cpopen for python3
 - once utils.py is split we can move on with adding tests to run with
python3 (https://gerrit.ovirt.org/48052 https://gerrit.ovirt.org/50760 some
ready for review)

>
> Cheers,
> Nir
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] vdsm_master_unit-tests_merged is failing

2015-12-22 Thread Yaniv Bronheim
Nothing hiding there.. the automation CI method is the only thing we need
to keep to avoid leaving garbage that nobody looks at

On Tue, Dec 22, 2015 at 9:59 AM, Barak Korren  wrote:

> > Please remove it - unless you have plans to revert the
> > automation/*-based approach.
>
> Since I don't know who wrote it, and what useful bits of code may be
> hiding inside, I rather not make any irreversible changes while almost
> everyone is on PTO.
>
>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] vdsm_master_unit-tests_merged is failing

2015-12-20 Thread Yaniv Bronheim
On Sun, Dec 20, 2015 at 9:46 AM, Nir Soffer <nsof...@redhat.com> wrote:

> On Sun, Dec 20, 2015 at 8:49 AM, Barak Korren <bkor...@redhat.com> wrote:
> > On 19 December 2015 at 13:05, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> >> for some runs it is installed - we added it to the requirements for
> >> check-patch -
> >> https://gerrit.ovirt.org/#/c/48051/29/automation/check-patch.packages
> >>
> > check_patch runs are done by the "vdsm_master_check-patch" job, not this
> one.
> > If this job's functionality is indeed already covered by check_patch,
> > then I vote for it to be removed.
> > VDSM maintainers, please confirm.
>
> I think check-patch.sh and check-merge are enough.
>
> I don't know what are the other jobs and what they are doing.
> Better remove them since nobody care about them anyway.
>
> If they do something valuable, they should be integrated with
> the standard automation scripts.
>

ack


> Nir
>



-- 
*Yaniv Bronhaim.*

Re: [ovirt-devel] Automation CI for vdsm

2015-12-16 Thread Yaniv Bronheim
exactly - just posted a patch again that signs all fails as broken.
We'll get the report soon and I'll publish it as well. Hope the run will
take less time now

On Wed, Dec 16, 2015 at 6:19 PM, Nir Soffer <nsof...@redhat.com> wrote:

> Nice, but we cannot enable this until all the tests pass or disabled.
>
> There is no point in broken or flaky functional tests.
>
> On Wed, Dec 16, 2015 at 5:34 PM, Yaniv Bronheim <ybron...@redhat.com>
> wrote:
> > So it's not stable. It won't block merges and will at least give us a report
> after
> > each merge. It takes really long time to run it (because of the tests
> > themselves. Lago setup takes a maximum of 15 minutes, but the run lasts for
> more
> > than 2hrs right now and I suspect functional/storageTests.py gets stuck)
> >
> > Bellow you can see where we stand (before I added python-rtslib package).
> >
> > Now, I still want to merge the patch https://gerrit.ovirt.org/#/c/48268/
> -
> > which enables this run after merges, and I still want you to consider the
> > addition of Automation CI flag to our gerrit so that developer will be
> able
> > to use it as a trigger for the check-merged.sh script run, just to see if
> > their patch fixes/breaks something related to the functional tests
> >
> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/1480/
> - is
> > an example of how the run looks. I'm still working to improve the output
> >
> >
> > Please reply and let me know if the idea around the automation flag is
> > acceptable to you.. and please review the patch for comments and acks.
> > We can ask dcaro to add the flag by Friday, otherwise we'll need to
> delay
> > this effort until after the holiday..
> >
> >
> > functional.sosPluginTests.SosPluginTest
> > testSosPlugin   OK
> > functional.vmRecoveryTests.RecoveryTests
> > test_vm_recoveryFAIL
> > functional.vmQoSTests.VMQosTests
> > testSmallVMBallooning   FAIL
> > functional.virtTests.VirtTest
> > testComplexVm   FAIL
> > testHeadlessVm  OK
> > testSimpleVmFAIL
> > testVmDefinitionGraphics('spice')   FAIL
> > testVmDefinitionGraphics('vnc') OK
> > testVmDefinitionLegacyGraphics('qxl')   FAIL
> > testVmDefinitionLegacyGraphics('vnc')   OK
> > testVmDefinitionMultipleGraphics('spice', 'vnc')FAIL
> > testVmDefinitionMultipleGraphics('vnc', 'spice')FAIL
> > testVmWithCdrom('self') FAIL
> > testVmWithCdrom('specParams')   FAIL
> > testVmWithCdrom('vmPayload')FAIL
> > testVmWithDevice('hotplugDisk') FAIL
> > testVmWithDevice('hotplugNic')  FAIL
> > testVmWithDevice('smartcard')   FAIL
> > testVmWithDevice('virtioNic')   FAIL
> > testVmWithDevice('virtioRng')   FAIL
> > testVmWithSla   FAIL
> > testVmWithStorage('iscsi')  SKIP:
> > python-rtslib is not installed.
> > testVmWithStorage('localfs')FAIL
> > testVmWithStorage('nfs')FAIL
> > functional.storageTests.StorageTest
> > testCreatePoolErrorsOK
> > testStorage('glusterfs', 0) ERROR
> > testStorage('glusterfs', 3) ERROR
> > testStorage('iscsi', 0) SKIP:
> > python-rtslib is not installed.
> > testStorage('iscsi', 3) SKIP:
> > python-rtslib is not installed.
> > testStorage('localfs', 0)   FAIL
> > testStorage('localfs', 3)   FAIL
> > testStorage('nfs', 0)   FAIL
> > testStorage('nfs', 3)   FAIL
> > functional.networkTests.NetworkTest
> > testAddVlanedBridgeless ERROR
> > testAddVlanedBridgeless_one

Re: [ovirt-devel] Master building issue

2015-12-13 Thread Yaniv Bronheim
A pep8 violation was introduced by commit
2a053f98a1cf1fb717b90b1900bf4c7b4318d254.
There you go - https://gerrit.ovirt.org/#/c/50383

On Sun, Dec 13, 2015 at 12:23 PM, Fred Rolland  wrote:

> Hi ,
>
> I have issues building the engine on master.
>
> The build finishes quick with this error :
> packaging/setup/plugins/ovirt-engine-setup/ovirt-engine-common/distro-rpm/packages.py:188:9:
> E301 expected 1 blank line, found 0
> Makefile:316: recipe for target 'validations' failed
>
>
> Is there a fix for it ?
>
> Thanks,
>
> Fred
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
*Yaniv Bronhaim.*

[ovirt-devel] Debian support for vdsm and subpackages

2015-12-13 Thread Yaniv Bronheim
We currently manage Debian packaging files under
https://gerrit.ovirt.org/#/q/project:releng-tools instead of in the project
itself. IMO it makes things harder, as developers won't update the debian
folder while changing spec parts, which means much more observation work for
Simone.
But if we keep managing Debian packaging that way, we should remove the debian
folder from cpopen, vdsm and pthreading to avoid duplication.

Simone, can you explain more how you currently manage the flow of code changes
and update the Debian support? Should I indeed drop the current Debian code
that we have in vdsm?

-- 
*Yaniv Bronhaim.*

[ovirt-devel] Automation CI for vdsm

2015-12-10 Thread Yaniv Bronheim
Hi all,

We want to run functional tests as part of Vdsm CI for each patch before
merge. Therefore we need to decide how to automate this process without
overloading our jenkins machines.
The functional tests will run using lago (https://github.com/ovirt/lago) -
it will initiate multiple VMs, install vdsm and exercise it with nosetests
or other procedures such as upgrade, removal and so on.

Currently standard CI provides check-patch and check-merged scripts (
http://ovirt-infra-docs.readthedocs.org/en/latest/CI/Build_and_test_standards.html)
The problem with check-merged is that it runs after merge, which doesn't
help if something fails.

We want to allow developers to trigger the script once reviews and
verification are ready (the last step before merge). To do so we agreed to add
a Continuous Integration flag for each vdsm patch. Once this flag is
set to +1 it will trigger Jenkins CI to run the check-merged script
(adding a new button to gerrit is not an option - you can imagine that flag as
a trigger button); on success the Jenkins CI flag will turn to +2, on failure
we'll get -1, and once a new patchset is ready the developer will remove the
+1 and add it back to the Continuous Integration flag to re-trigger the job.

Please ack the process before we move on with that

The patch for those scripts still under review and testing -
https://gerrit.ovirt.org/#/c/48268

Thanks

-- 
*Yaniv Bronhaim.*

Re: [Engine-devel] RES: oVirt 3.4 test day - PPC support

2014-01-26 Thread Yaniv Bronheim
Yes, it sounds like the overwriting of vdsm.conf is an issue that we need to avoid.
What I wonder about is that it is not a regression; old versions of vdsm also
overwrote the vdsm.conf file during deploy, so why is it harming us only now?

please open a bug on the overwriting issue

Thanks.

Yaniv Bronhaim.

- Original Message -
 From: Vitor de Lima vitor.l...@eldorado.org.br
 To: Barak Azulay bazu...@redhat.com, Roy Golan rgo...@redhat.com, 
 Michal Skrivanek mskri...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org, VDSM Project Development 
 vdsm-de...@lists.fedorahosted.org
 Sent: Friday, January 24, 2014 4:52:50 PM
 Subject: [Engine-devel] RES: oVirt 3.4 test day - PPC support
 
 Maybe this is a problem with ovirt-host-deploy, it must be configured to
 avoid overwriting your vdsm.conf file. There are instructions in the wiki:
 
 http://www.ovirt.org/Features/Vdsm_for_PPC64#Testing_the_PPC64_support
 
 
 De: Barak Azulay [bazu...@redhat.com]
 Enviado: quinta-feira, 23 de janeiro de 2014 17:29
 Para: Vitor de Lima; Roy Golan; Michal Skrivanek
 Cc: VDSM Project Development; engine-devel
 Assunto: oVirt 3.4 test day - PPC support
 
 Hi,
 
 I tried to test various engine features related to PPC support,
 However since I don't have a real Power PC HW I tried using the fake PPC
 configuration introduced by http://gerrit.ovirt.org/#/c/18718
 
 So I added the following configuration to /etc/vdsm/vdsm.conf on a x86_64
 host:
 
fake_kvm_support=true
fake_kvm_architecture=ppc64
 
 And indeed it looked successful as you can see below
 
 [root@bazulay1 ~]# vdsClient -s 0 getVdsCaps | grep -i cpu
 cpuCores = '4'
 cpuFlags = 'powernv,model_POWER7_v2.3'
 cpuModel = 'POWER 7 (fake)'
 cpuSockets = '1'
 cpuSpeed = '3401.000'
 cpuThreads = '8'
 
 
 However after creating the appropriate cluster:
 CPU Architecture = ppc64
 CPU name = IBM POWER 7* (meaning I tried all IBM POWER 7... cpus)
 
 Adding the host always ended in non operational status with the error:
 
 Host bazulay1 has architecture x86_64 and cannot join Cluster
 TESTDAY-CLUSTER which has architecture ppc64.
 
 
 Thanks
 Barak Azulay
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] CodeQuality/Automated Checking

2013-12-12 Thread Yaniv Bronheim
- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: Alon Bar-Lev alo...@redhat.com, Eli Mesika emes...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Thursday, December 12, 2013 2:16:53 PM
 Subject: Re: [Engine-devel] CodeQuality/Automated Checking
 
 Yeah of course it's always better you stay POSIX compliant, if it's
 needed :)
 The bash example was just the first one I thought of.
 
 Glad to see this idea got picked up so fast!

Glad to have caused this thread :) as it was my glitch leaving a fi out there.
I'll be glad to review a patch such as http://gerrit.ovirt.org/22332 for Vdsm; 
it doesn't look complicated to integrate it into the vdsm code and run it as 
part of the make.
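For illustration, a minimal sketch of such a check (a hypothetical helper, not the actual gerrit change): parse each shell script with `sh -n`, which catches syntax errors like a missing `fi` without executing anything, and report the offenders.

```python
# Hypothetical sketch of a shell syntax check that a make target could
# run; not the implementation in gerrit change 22332.
import subprocess

def check_shell_scripts(paths):
    """Return the subset of `paths` that fail a POSIX shell syntax check."""
    broken = []
    for path in paths:
        # 'sh -n' only parses the script; a non-zero exit means a
        # syntax error (e.g. a dangling 'if' with no 'fi').
        result = subprocess.run(["sh", "-n", path], capture_output=True)
        if result.returncode != 0:
            broken.append(path)
    return broken
```

A make target could feed it the tracked `*.sh` files and fail the build when the returned list is non-empty.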

+1

 
 Am 12.12.2013 12:15, schrieb Alon Bar-Lev:
  first, using bash is not a good idea... better to use POSIX compliant
  shell.
 
 --
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel

___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Things to be done to support Ubuntu hosts

2013-11-14 Thread Yaniv Bronheim
Hey,
Most of the issues you mentioned in slides 11-13 
(http://www.ovirt.org/images/5/57/Shanghai-VDSM-on-Ubuntu.pdf) are already 
solved (if not all of them). 
If there are more issues to solve, can you please open RFE BZs on them to close 
those gaps (like making the service names configurable)?

Thanks
Yaniv Bronhaim.

- Original Message -
 From: Zhou Zheng Sheng zhshz...@linux.vnet.ibm.com
 To: engine-devel engine-devel@ovirt.org
 Sent: Thursday, November 14, 2013 8:57:19 AM
 Subject: [Engine-devel] Things to be done to support Ubuntu hosts
 
 Hi,
 
 Recently Ubuntu support was added to VDSM, and .deb binary packages can
 be downloaded from launchpad.net PPA [1]. Most of the key features such
 as storage management and VM lifecycle work on Ubuntu. The cross
 distribution network management patches are upstream as well. One big
 piece left is making ovirt-host-deploy support Ubuntu, so that we can
 manage Ubuntu hosts from Engine, thus close the gap on host side.
 
 In May 2013 I made some hacks to ovirt-host-deploy and otopi. I made it
 skip the parts not supported on Ubuntu, configured the environment
 manually, and successfully added an Ubuntu host to Engine [2].
 Unfortunately I have no plans to continue on it, but I'd like to make a
 summary of the things I hacked to help anyone who wants to submit
 patches in the future. I was going to add a wiki page, but I found there
 were not many items, so an email should be enough.
 
 1. Package management operations
 Both otopi and ovirt-host-deploy query for dependency packages and
 install them on demand. The otopi package management only supports yum;
 we need to add apt-get support. Package names are different in Ubuntu,
 so I made a list mapping the names. The list is in the VDSM source
 directory, debian/dependencyMap.txt .
 
 2. Network configuration operations
 ovirt-host-deploy asks VDSM's configNetwork.py to create the bridge network.
 Cross-distribution support patches for configNetwork.py are under
 review. ovirt-host-deploy will support Ubuntu bridge configuration once
 they are merged.
 
 [1] https://launchpad.net/~zhshzhou/+archive/vdsm-ubuntu
 [2] http://www.ovirt.org/images/5/57/Shanghai-VDSM-on-Ubuntu.pdf
 
 Here goes the detailed hack patch. The hack was based on v1.0.1; I
 rebased it to the latest master. The rebased patch is not tested. If
 you find this email useless, that's actually good news: it means
 there is not a lot of work and problems ahead ;-)
 
 Hack patch for ovirt-host-deploy.
 
 From 120493a242046d19794ef3da83b32486d372aa39 Mon Sep 17 00:00:00 2001
 From: Zhou Zheng Sheng zhshz...@linux.vnet.ibm.com
 Date: Wed, 13 Nov 2013 18:02:26 +0800
 Subject: [PATCH] Ubuntu Hacks
 
 Change-Id: Ifb4ebc829101c92d06475619b1b5986e87b83d57
 Signed-off-by: Zhou Zheng Sheng zhshz...@linux.vnet.ibm.com
 ---
  src/plugins/ovirt-host-deploy/gluster/packages.py | 17 +
  src/plugins/ovirt-host-deploy/tune/tuned.py   |  3 +-
  src/plugins/ovirt-host-deploy/vdsm/bridge.py  | 45 ---
  src/plugins/ovirt-host-deploy/vdsm/packages.py|  9 +++--
  src/plugins/ovirt-host-deploy/vdsm/pki.py |  3 +-
  src/plugins/ovirt-host-deploy/vdsm/software.py|  2 +
  src/plugins/ovirt-host-deploy/vdsm/vdsmid.py  |  3 +-
  7 files changed, 45 insertions(+), 37 deletions(-)
 
 diff --git a/src/plugins/ovirt-host-deploy/gluster/packages.py
 b/src/plugins/ovirt-host-deploy/gluster/packages.py
 index 1fecfda..eb37744 100644
 --- a/src/plugins/ovirt-host-deploy/gluster/packages.py
 +++ b/src/plugins/ovirt-host-deploy/gluster/packages.py
 @@ -60,13 +60,13 @@ class Plugin(plugin.PluginBase):
  ),
  )
  def _validation(self):
 -if not self.packager.queryPackages(patterns=('vdsm-gluster',)):
 -raise RuntimeError(
 -_(
 -'Cannot locate gluster packages, '
 -'possible cause is incorrect channels'
 -)
 -)
 +# if not self.packager.queryPackages(patterns=('vdsm-gluster',)):
 +# raise RuntimeError(
 +# _(
 +# 'Cannot locate gluster packages, '
 +# 'possible cause is incorrect channels'
 +# )
 +# )
  self._enabled = True
 
  @plugin.event(
 @@ -74,7 +74,8 @@ class Plugin(plugin.PluginBase):
  condition=lambda self: self._enabled,
  )
  def _packages(self):
 -self.packager.installUpdate(('vdsm-gluster',))
 +# self.packager.installUpdate(('vdsm-gluster',))
 +pass
 
  @plugin.event(
  stage=plugin.Stages.STAGE_CLOSEUP,
 diff --git a/src/plugins/ovirt-host-deploy/tune/tuned.py
 b/src/plugins/ovirt-host-deploy/tune/tuned.py
 index d8e00c5..d0dcea5 100644
 --- a/src/plugins/ovirt-host-deploy/tune/tuned.py
 +++ b/src/plugins/ovirt-host-deploy/tune/tuned.py
 @@ -70,7 +70,8 @@ class Plugin(plugin.PluginBase):
  condition=lambda self: self._enabled,
  )
 

Re: [Engine-devel] [vdsm] stale gerrit patches

2013-09-29 Thread Yaniv Bronheim


- Original Message -
 From: Zhou Zheng Sheng zhshz...@linux.vnet.ibm.com
 To: Ayal Baron aba...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org, vdsm-de...@lists.fedorahosted.org
 Sent: Wednesday, September 25, 2013 5:16:16 AM
 Subject: Re: [Engine-devel] [vdsm] stale gerrit patches
 
 
 
 on 2013/09/24 05:21, Ayal Baron wrote:
  
  
  - Original Message -
 
 
  - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: Alon Bar-Lev alo...@redhat.com
  Cc: David Caro dcaro...@redhat.com, engine-devel
  engine-devel@ovirt.org, vdsm-de...@lists.fedorahosted.org
  Sent: Monday, September 23, 2013 1:54:39 PM
  Subject: Re: [vdsm] stale gerrit patches
 
  On 09/23/2013 01:52 PM, Alon Bar-Lev wrote:
 
 
  - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: Alon Bar-Lev alo...@redhat.com
  Cc: David Caro dcaro...@redhat.com, engine-devel
  engine-devel@ovirt.org, vdsm-de...@lists.fedorahosted.org
  Sent: Monday, September 23, 2013 1:50:35 PM
  Subject: Re: [vdsm] stale gerrit patches
 
  On 09/23/2013 01:49 PM, Alon Bar-Lev wrote:
 
 
  - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: David Caro dcaro...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org,
  vdsm-de...@lists.fedorahosted.org
  Sent: Monday, September 23, 2013 1:47:47 PM
  Subject: Re: [vdsm] stale gerrit patches
 
  On 09/23/2013 01:46 PM, David Caro wrote:
  On Mon 23 Sep 2013 12:36:58 PM CEST, Itamar Heim wrote:
  we have some very old gerrit patches.
  I'm for abandoning patches which were not touched over 60 days (to
  begin with, I think the number should actually be lower).
  they can always be re-opened by any interested party post their
  closure.
 
  i.e., looking at gerrit, the patch list should actually get
  attention,
  and not be a few worth looking at, with a lot of old patches
 
  thoughts?
 
  Thanks,
Itamar
  ___
  vdsm-devel mailing list
  vdsm-de...@lists.fedorahosted.org
  https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
 
  It might be helpful to have a cron-like script that checks the age of
  the patches and first notifies the sender, the reviewers and the
  maintainer, and if the patch is not updated within a certain period
  just abandons it.
 
 
  yep - warn after X days via email to just owner (or all subscribed to
  the patch), and close if no activity for X+14 days or something like
  that.
 
  This will be annoying.
 
  And there are patches that pending with good reason.
 
  pending for 60 days with zero activity on them (no comment, no rebase,
  nothing)?
 
  http://gerrit.ovirt.org/#/q/status:open+project:ovirt-engine+branch:master+topic:independent_deployments,n,z
 
  So how does it help us to have these patches, some without any comment
  from any reviewer?
  Let's get them reviewed and decide one way or the other, rather than let
  them get old and stay forever.
 
  Again... maintainer can close these if he likes.
  Owner can close these if he likes.
  
  right, but why?
  a patch without activity being abandoned might actually spur someone into
  motion (rebasing and resubmitting, prodding maintainers etc).
  I'm +1 for automatically abandoning old patches.
  
 
 At least we all agree that old patches should be abandoned.
 
 I think we can do this in a semi-automatic way. A cron job checks the
 patch's freshness, and sends an email to warn the author and reviewers
 of an old patch. If someone has a good reason to keep the patch, he
 can leave a comment on the gerrit web page saying I want to #keep the
 patch# because ... Then the system skips the patches whose last
 comment contains #keep the patch#. If no one cares about it, the patch is
 abandoned after some time.
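The flow Zhou describes could be sketched roughly like this against Gerrit's REST API (the query string, the keep-marker convention and the server URL are illustrative assumptions; a real job would also need authentication plus the notify and abandon steps):

```python
import json
import urllib.request

# Assumed server and marker convention, for illustration only
GERRIT_URL = "https://gerrit.ovirt.org"
KEEP_MARKER = "#keep the patch#"

def fetch_stale_changes(age="60d"):
    """Query open changes with no activity for `age` (query assumed)."""
    url = f"{GERRIT_URL}/changes/?q=status:open+age:{age}&o=MESSAGES"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Gerrit prefixes JSON responses with )]}' to defeat XSSI;
    # drop that first line before parsing.
    return json.loads(body.split("\n", 1)[1])

def keep_requested(change):
    """True when the latest review comment contains the keep marker."""
    messages = change.get("messages", [])
    return bool(messages) and KEEP_MARKER in messages[-1].get("message", "")
```

The cron job would warn the owner of each stale change where `keep_requested` is false, and abandon it if nothing happens within the grace period.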

+1 for Zhou Zheng Sheng.
A much better suggestion than automatically forgetting old patches by removing 
them.
A reminder can be sent after a couple of weeks or even a month, and the patch 
auto-abandoned if no response is added within a week.

I like this suggestion if we want to add automation for this process (as we all 
prefer automation when possible), and it'll probably help a bit to clean up our 
gerrit dashboard.

 --
 Thanks and best regards!
 
 Zhou Zheng Sheng / 周征晟
 E-mail: zhshz...@linux.vnet.ibm.com
 Telephone: 86-10-82454397
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel

___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] fake VDSM as oVirt project?

2013-09-15 Thread Yaniv Bronheim
+1

- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Liran Zelkha liran.zel...@gmail.com
 Cc: engine-devel engine-devel@ovirt.org, infra in...@ovirt.org
 Sent: Sunday, September 15, 2013 12:57:29 PM
 Subject: Re: [Engine-devel] fake VDSM as oVirt project?
 
 On 09/13/2013 09:52 AM, Liran Zelkha wrote:
  +1 I use it constantly.
 
 +1
 adding infra, where new git repo's are usually requested.
 (we also ask board if a new project scope, but this seems like just a
 repo for a help/test program)
 
 if more +1's and no objections, ping next week to create a repo.
 
 thanks,
 Itamar
 
 
 
  On Fri, Sep 13, 2013 at 8:48 AM, Tomas Jelinek tjeli...@redhat.com
  mailto:tjeli...@redhat.com wrote:
 
  Hi all,
 
  some time ago Libor Spevak created a simple web app called vdsm fake:
  documented: http://www.ovirt.org/VDSM_Fake
  published: https://github.com/lspevak/ovirt-vdsmfake
 
   It is basically a simple hackable Java web application which can
   emulate VDSM so you can connect the
   engine to it. It is especially useful for:
  - having tons of cheap fake hosts on one machine to stress your engine
   - doing some experiments with the VDSM API (e.g. vfeenstr proposes a new
   VDSM API to lower the network traffic between
  engine - VDSM and uses the vdsm fake to implement it and run
   some tests to get some numbers on how this changes things)
 
  Omer came up with an idea of making this app as one of oVirt's
  project (http://www.ovirt.org/Subprojects) maybe with repository on
  oVirt's gerrit making it more accessible for getting/contributing
  for the whole community.
 
  What do you think about it?
 
  Tomas
  ___
  Engine-devel mailing list
  Engine-devel@ovirt.org mailto:Engine-devel@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
 
 
 
 
  ___
  Engine-devel mailing list
  Engine-devel@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
 
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Odd Host Activation with mgmt-if

2013-05-19 Thread Yaniv Bronheim
It's true only if you add the host using a hostname and not an explicit IP; 
with an IP you can pass 0 to vdsClient and it works.

- Original Message -
 From: Alon Bar-Lev alo...@redhat.com
 To: Oded Ramraz oram...@redhat.com, Moti Asayag masa...@redhat.com
 Cc: engine-devel@ovirt.org, Yaniv Bronheim ybron...@redhat.com
 Sent: Sunday, May 19, 2013 1:15:29 PM
 Subject: Re: [Engine-devel] Odd Host Activation with mgmt-if
 
 
 Most probably, when SSL is enabled and configured properly this will not
 work...
 
 One need:
 
 vdsClient -s hostname_as_specified_in_certificate 
 
 - Original Message -
  From: Oded Ramraz oram...@redhat.com
  To: Moti Asayag masa...@redhat.com
  Cc: engine-devel@ovirt.org
  Sent: Sunday, May 19, 2013 9:43:15 AM
  Subject: Re: [Engine-devel] Odd Host Activation with mgmt-if
  
  small correction ( typo ) : 'vdsClient -s 0 getVdsCapabilities'
  
  Oded.
  
  - Original Message -
  From: Moti Asayag masa...@redhat.com
  To: Dead Horse deadhorseconsult...@gmail.com
  Cc: engine-devel@ovirt.org
  Sent: Sunday, May 19, 2013 9:39:33 AM
  Subject: Re: [Engine-devel] Odd Host Activation with mgmt-if
  
  Could you provide the output of the 'vdsClient -s 0 getVdsCapavilities' for
  the specific host?
  
  - Original Message -
   From: Dead Horse deadhorseconsult...@gmail.com
   To: engine-devel@ovirt.org
   Sent: Friday, May 17, 2013 12:21:47 AM
   Subject: Re: [Engine-devel] Odd Host Activation with mgmt-if
   
    Still puzzling over this; all I get for an error when I see it is:
    WARN [org.ovirt.engine.core.bll.network.host.SetupNetworksCommand]
    (pool-5-thread-42) [21543ac] CanDoAction of action SetupNetworks failed.
    Reasons:VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,NETWORKS_ALREADY_ATTACHED_TO_IFACES,$NETWORKS_ALREADY_ATTACHED_TO_IFACES_LIST
   ovirtmgmt
   
    Nothing is logged VDSM-wise other than the above standard host activation
    probe.
   
   - DHC
   
   
   
   On Mon, May 13, 2013 at 1:19 PM, Dead Horse 
   deadhorseconsult...@gmail.com
   
   wrote:
   
   
   
    Seeing this as of late when activating hosts. The odd thing is that it
    reports a failure to activate the host (EL6.4) but still does it anyway.
   
   Engine side:
   
   2013-05-13 12:53:38,547 INFO
   [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (pool-5-thread-42)
   [21543ac] Running command: HandleVdsVersionCommand internal: true.
   Entities
   affected : ID: 15f11023-9746-49de-b33f-e3cc7dca6f65 Type: VDS
   2013-05-13 12:53:38,549 INFO
   [org.ovirt.engine.core.vdsbroker.ActivateVdsVDSCommand]
   (pool-5-thread-42)
   [21543ac] FINISH, ActivateVdsVDSCommand, return: Host[durotar], log id:
   3796a7bd
   2013-05-13 12:53:38,625 WARN
   [org.ovirt.engine.core.bll.network.host.SetupNetworksCommand]
   (pool-5-thread-42) [21543ac] CanDoAction of action SetupNetworks failed.
   Reasons:VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,NETWORKS_ALREADY_ATTACHED_TO_IFACES,$NETWORKS_ALREADY_ATTACHED_TO_IFACES_LIST
   ovirtmgmt
   2013-05-13 12:53:39,368 INFO
   [org.ovirt.engine.core.bll.ActivateVdsCommand]
   (pool-5-thread-42) [21543ac] Activate finished. Lock released. Monitoring
   can run now for host durotar from data-center Azeroth
   
   VDSM Side:
   
   Thread-13::DEBUG::2013-05-13
   12:53:37,841::BindingXMLRPC::933::vds::(wrapper)
   return getCapabilities with {'status': {'message': 'Done', 'code': 0},
   'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
   'iqn.2012-09.net.azeroth:durotar'}], 'FC': []}, 'packages2': {'kernel':
   {'release': '358.6.1.el6.x86_64', 'buildtime': 1366751713.0, 'version':
   '2.6.32'}, 'glusterfs-rdma': {'release': '0.3.alpha3.el6', 'buildtime':
   1367604893L, 'version': '3.4.0'}, 'glusterfs-fuse': {'release':
   '0.3.alpha3.el6', 'buildtime': 1367604893L, 'version': '3.4.0'},
   'spice-server': {'release': '12.el6', 'buildtime': 1361573005L,
   'version':
   '0.12.0'}, 'vdsm': {'release': '17.el6', 'buildtime': 1368196305L,
   'version': '4.10.3'}, 'qemu-kvm': {'release': '2.355.el6_4.2',
   'buildtime':
   1362691270L, 'version': '0.12.1.2'}, 'qemu-img': {'release':
   '2.355.el6_4.2', 'buildtime': 1362691270L, 'version': '0.12.1.2'},
   'libvirt': {'release': '18.el6_4.4', 'buildtime': 1366301801L, 'version':
   '0.10.2'}, 'glusterfs': {'release': '0.3.alpha3.el6', 'buildtime':
   1367604893L, 'version': '3.4.0'}, 'mom': {'release': '1.el6',
   'buildtime':
   1349470062L, 'version': '0.3.0'}, 'glusterfs-server': {'release':
   '0.3.alpha3.el6', 'buildtime': 1367604893L, 'version': '3.4.0'}},
   'cpuModel': 'Intel(R) Xeon(R) CPU X5570 @ 2.93GHz', 'hooks':
   {'before_vm_start': {'50_sriov': {'md5':
   '3ebc60cd2e4eb089820102285fad7c45'}, '50_pincpu': {'md5':
   '0b5fb99ff0e7acb9ad534b87c02c59e3'}, '50_qos': {'md5':
   '18b596a6b4e4bad80357f240ba122a5e'}, '50_vmfex': {'md5':
   '9f5abb892ddb6b3daa779985d38d9f55'}, '50_scratchpad': {'md5':
   '7db25a4b8cb04f6e7132cb7c2300c111'}, '50_numa': {'md5

Re: [Engine-devel] vdsm/zombiereaper

2013-02-19 Thread Yaniv Bronheim
Agreed. Until we upload a better solution for the zombies issue, I prefer to 
merge that patch.


- Original Message -
From: Dan Kenigsberg dan...@redhat.com
To: Dead Horse deadhorseconsult...@gmail.com
Cc: engine-devel@ovirt.org, Yaniv Bronhaim ybron...@redhat.com, Royce Lv 
lvro...@linux.vnet.ibm.com, ShaoHe Feng shao...@linux.vnet.ibm.com, 
vdsm-de...@fedorahosted.org
Sent: Tuesday, February 19, 2013 3:33:05 PM
Subject: Re: [Engine-devel] vdsm/zombiereaper

On Mon, Feb 18, 2013 at 04:21:16PM -0600, Dead Horse wrote:
 Any movement or thoughts yet on this one: http://gerrit.ovirt.org/#/c/11492/
 
 It still has vdsm master broken.

Guys, I think that DHC is right - we cannot keep deliberating forever.
Having a process leak is bad, but keeping master broken for so long is
even worse.

So unless anyone violently objects, I'll revert the offending patch.

Dan.
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel