On Thu, 5 Nov 2015 11:51:37 +0100
Arvin Schnell <[email protected]> wrote:

> On Thu, Nov 05, 2015 at 11:03:24AM +0100, Josef Reidinger wrote:
> > On Thu, 5 Nov 2015 10:32:23 +0100
> > Arvin Schnell <[email protected]> wrote:
> 
> > > You can also run unit tests, code coverage and some integration
> > > tests in the VM. Apart from that the VM is not so special, just
> > > the standard SDK is needed.
> > 
> > Unit tests should be run in the test phase of the build process,
> > so osc build should do it for you with all the required libraries.
> > Code coverage is a bit tricky, but you usually do not do it on
> > Jenkins just for a code submission.
> 
> I have heard that we want to move code coverage from Travis to
> Jenkins.
> 

Yes, that's the plan, but as I said it is not critical for code
submission.

> > For integration testing you need a proper environment, but that
> > requires really maintaining such VMs, which is a bit time
> > demanding.
> 
> Maintain such VMs? Those are standard images that are simply
> started. If the VM has an unused disk, some integration tests for
> libstorage and snapper would already be possible.

Keeping it up to date: the world around it changes, so the VM also needs
some love. I already have over 20 VMs for various environments, and with
the older ones there are always problems on startup because something
outside has changed (e.g. YaST:Devel no longer supports the old
distribution, maintenance updates have been released, etc.).

> 
> > Just consider you have a security fix that goes to SLE11 SP1, SP2,
> > SP3, SP4 and SLE12 GA, SP1... so in your way you have to start 6
> > VMs which have to be updated. With the git tarball and osc
> > approach, you do the testing just once or twice (11 and 12) and
> > the rest is covered by unit tests in the test phase.
> 
> Jenkins starts the VMs, not me. And they don't have to be
> updated. Jenkins just starts the latest image for each
> distribution.

Jenkins now uses osc, which builds in a chroot, so it is a more
Docker-like solution. Why is using VMs better than using an osc chroot?

And it is not one image per distribution, it is one image per
distribution and per package, as the environment is different for each
package (the devel libraries for yast2-core differ from those for
snapper). Also do not forget about interdependencies: a new snapper may
need a new libbtrfs library, so that has to be updated as well.
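
To illustrate, each package declares its own build dependencies in its
spec file and osc installs exactly those into the chroot; the entries
below are just illustrative examples, not the real requirements of any
particular package:

  BuildRequires:  libbtrfs-devel
  BuildRequires:  libstorage-devel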

> 
> Unit tests are run in any case, not just your way.

It is not my way; it simply happens anyway during the rpm build. So I do
not see the sense in running the tests more than once per submission.
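
Roughly how that looks: the test suite is wired into the spec file's
%check section, so it runs on every osc build anyway (a minimal sketch;
the exact test command differs per package and "rake test" here is only
an assumption):

  %check
  rake test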

> 
> > > > Remember what you need to do if you want to build a YaST
> > > > package for SLE11 - you need yast2-devtools installed in the
> > > > appropriate version + all the tools used (like autoconf,
> > > > automake, ...) + the needed libraries. You need a different set
> > > > of packages for each SP release; that is too complicated even
> > > > with the help of VMs.
> > > 
> > > Don't think about all the different packages but about a
> > > different distribution. Then it is not complicated at all. I
> > > always do development for old distributions in VMs. You need them
> > > for testing anyway.
> > 
> > And you also need to maintain it, and what's more, if others need
> > to do some development, it is a bit tricky for them to set it up
> > correctly. Just stop thinking for a moment about snapper and
> > libstorage. Imagine that we need an urgent fix for
> > yast2-ruby-bindings and you as a C++ expert look at it and fix it.
> > Is it easier for you to do
> > 
> > git clone
> > hacking/writing a unit test
> 
> And here you want a complete system where you can compile the
> sources directly from your editor. So you need a VM/system.

No, for compilation you use osc, which already knows what the
dependencies are, and what's more, the build is done in a chroot, so
your working environment is not broken by installing any libraries. For
me the osc chroot is like Docker: more lightweight than a VM and without
any maintenance.
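
A rough sketch of what that means from a package checkout (the
repository and architecture names are only examples, any target listed
by "osc repos" works):

  osc build openSUSE_Factory x86_64

osc resolves the BuildRequires from the spec file, installs them into a
local chroot and builds the package there, leaving the host system
untouched.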

> 
> > rake osc:build
> 
> Far too slow to find mistakes in the sources. Also your IDE will
> not simply jump to the file with the error. Turnaround time
> increases from a few seconds to several minutes, killing
> productivity.

As I said, if you work on a project regularly it makes sense to have
your own specialized environment, e.g. in a VM, but the current way
forces everyone to use one, even if you only want to fix a typo or a
trivial case. There, a few minutes of compilation time is outweighed by
not having to set up a whole VM with a development environment. Try to
be friendly to non-regular contributors and newcomers. I do not say that
using a VM is wrong, I just say that forcing everyone to use one is
wrong.
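
For a newcomer the whole flow is then just something like this (a
hedged sketch; the repository URL is only an example and the rake task
is the osc:build one mentioned above):

  git clone https://github.com/yast/yast-ruby-bindings.git
  cd yast-ruby-bindings
  # hack, add or adjust a unit test
  rake osc:build   # builds in an osc chroot, tests run in %check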

> 
> > Or use a proper VM, set up cmake properly, install all required
> > packages in the proper versions, create a tarball (with cmake this
> > is very tricky as there are at least three different commands that
> > can do it and the results differ), generate the spec file and then
> > copy it to the osc checkout directory?
> 
> You just gave some reasons to drop cmake.

And also autotools, which from my POV is the same kind of beast: written
in an obscure language (M4), needs configuration (what is enabled, what
is not), adds its own specific make tasks, etc.

> 
> Apart from that, those tasks have to be done only once, but the
> development itself is much faster than your repeated 'rake
> osc:build' version.

See the reasons above regarding regular contributors versus newcomers
or non-regular contributors.

Josef

> 
> Regards,
>   Arvin
> 
