+1 for pre-commit testing. Whichever tool enforces it would be a good 
choice. For feature check-ins, we (the community) require sanity tests to 
be submitted by feature owners. This was followed well in the 4.0 release, 
but there has been a lapse in this practice since. It would be great if 
the RM could enforce this during check-ins by reviewing unit tests and 
their results before approving a check-in.

-----Original Message-----
From: Trippie [mailto:trip...@gmail.com] On Behalf Of Hugo Trippaers
Sent: Friday, February 07, 2014 12:46 AM
To: dev
Cc: jfarr...@apache.org
Subject: Re: Code quality, QA, etc

Hey David,

I would make a distinction between code issues and functional issues.
Occasionally somebody just plainly breaks the build, I'm guilty of that
myself actually, and that's just plain stupid. Luckily we have Jenkins to
catch these errors quickly. I'm in a continuous struggle with Jenkins to
get the build time under 5 minutes. I feel that is an acceptable time to
get feedback on a commit; any longer and you have moved on to the next
thing or gone home. This kind of testing isn't really hard either: run the
build and the unit tests. By introducing something like Gerrit we can
actually make this happen before anything is committed to the repo. Push a
patch to Gerrit, Gerrit tells Jenkins to test the patch, and if Jenkins
gives it a +1 it gets committed; for non-committers the extra step would
be to invite somebody for review as well. The second nice thing about
Jenkins is the post-review test: if a contributor submits a patch it is
built by Jenkins, and if a reviewer approves the patch, Jenkins will run
another build to ensure that the patch still applies and doesn't break the
build. Very handy if there is some time between patch submission and patch
review.
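
The pre-commit check itself can stay simple. As a rough sketch (assuming a
Maven build; the hook path and the bare "clean install" invocation are
just placeholders), a local pre-push hook could be as little as:

    #!/usr/bin/env python
    # .git/hooks/pre-push (sketch): run the build and unit tests before
    # a push, and abort the push if either fails. Adjust the Maven
    # profiles and flags to match your local setup.
    import subprocess
    import sys

    def main():
        # "mvn clean install" runs the unit tests as part of the build.
        status = subprocess.call(["mvn", "clean", "install"])
        if status != 0:
            print("Build or unit tests failed; aborting push.")
        return status

    if __name__ == "__main__":
        sys.exit(main())

Gerrit plus Jenkins would simply move the same check off the developer's
machine and make it mandatory.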

Functional issues are much harder to track. For example, yesterday I found
several issues in the Contrail plugin that would not cause any pain in a
Contrail environment, but in any other environment creating a network
would fail. Examples like these are all too common and difficult to catch
with unit tests. It can be done, but it requires serious effort on the
developer's side, and in general we don't seem to be very active at
writing unit tests. These kinds of issues can only be found by actually
running CloudStack and executing a series of functional tests. Ideally
that is what we have the BVT suite for, but I think our current BVT setup
is not documented well enough to give accurate feedback to a developer
about which patch broke a certain piece of functionality. In Jenkins the
path from code to BVT is not tracked yet, so it is almost impossible to
see which commits were new in a particular run of the BVT suite.
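
That bookkeeping gap seems fixable. If every BVT run simply recorded the
git SHA it was built from, a small script could answer "which commits are
new in this run" directly. A sketch (the recorded SHAs are an assumption,
not something our Jenkins jobs produce today):

    #!/usr/bin/env python
    # Sketch: list the commits that are new in a BVT run, given the SHA
    # of the previous run and the SHA of the current one. Assumes each
    # run records the SHA it was built from, which our jobs don't do yet.
    import subprocess
    import sys

    def new_commits(previous_sha, current_sha):
        # "git log old..new" yields exactly the commits new in this run.
        out = subprocess.check_output(
            ["git", "log", "--oneline",
             "%s..%s" % (previous_sha, current_sha)])
        return out.decode("utf-8").splitlines()

    if __name__ == "__main__":
        for line in new_commits(sys.argv[1], sys.argv[2]):
            print(line)

With that in place, a BVT failure could at least be narrowed down to a
handful of suspect commits.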

Personally, I'm trying to get into the habit of running a series of tests
on devcloud before committing something. It doesn't prove a lot, but it
does guarantee that the bare basic developer functionality is working
before the commit. After a commit, at least I'm sure that anybody will be
able to spin up devcloud and deploy an instance. I'm trying to get this
automated as well, so we can use it as feedback on a patch. Beers for
anyone who writes an easy-to-use script that configures devcloud with a
zone and tests whether a user VM can be instantiated on an isolated source
NAT network; a rough sketch of the shape I have in mind follows below. If
we could include such a script in the tree, it might help people test
their patches before committing.
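
To make that concrete, something along these lines, talking straight to
the unauthenticated integration API rather than going through Marvin (the
port, the implicit default network, and the use of the first zone,
template and offering found are all assumptions that would need checking
against a real devcloud setup):

    #!/usr/bin/env python
    # Sketch of a devcloud smoke test: check that a user VM can be
    # deployed. Assumes the management server exposes the unauthenticated
    # integration API on port 8096 (integration.api.port) and that the
    # default isolated source NAT network is created implicitly for the
    # account.
    import json
    import time
    import urllib2

    API = "http://localhost:8096/client/api"

    def call(command, **params):
        params["command"] = command
        params["response"] = "json"
        query = "&".join("%s=%s" % kv for kv in params.items())
        return json.load(urllib2.urlopen("%s?%s" % (API, query)))

    def smoke_test():
        # Take the first zone, featured template and service offering.
        zone = call("listZones")["listzonesresponse"]["zone"][0]
        template = call("listTemplates", templatefilter="featured",
                        zoneid=zone["id"])[
                            "listtemplatesresponse"]["template"][0]
        offering = call("listServiceOfferings")[
            "listserviceofferingsresponse"]["serviceoffering"][0]

        # deployVirtualMachine is asynchronous; poll the job until done.
        job = call("deployVirtualMachine", zoneid=zone["id"],
                   templateid=template["id"],
                   serviceofferingid=offering["id"])[
                       "deployvirtualmachineresponse"]
        while True:
            result = call("queryAsyncJobResult", jobid=job["jobid"])[
                "queryasyncjobresultresponse"]
            if result["jobstatus"] != 0:  # 0 = still pending
                # 1 = succeeded, 2 = failed
                return result["jobstatus"] == 1
            time.sleep(5)

    if __name__ == "__main__":
        print("user VM deployment %s"
              % ("succeeded" if smoke_test() else "FAILED"))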

I think we are seeing more and more reverts in the tree. That is not
necessarily a good thing, but at least people know the option is there if
a commit really breaks the build. Also, please help each other out;
everybody can make a mistake and commit it. If it's a trivial mistake, it
might not be much effort to track it down and fix it, which is way better
than a revert or a mail that something is broken.

In short, we need to make testing more efficient and transparent, so that
people can easily incorporate it into their personal workflow.

Cheers,

Hugo

On 7 Feb 2014, at 04:50, David Nalley <da...@gnsa.us> wrote:

> Hi folks,
> 
> We continue to break things large and small in the codebase, and after 
> a number of different conversations, I thought I'd bring that 
> discussion here.
> 
> First - coding quality is only one factor that the PMC considers when 
> making someone a committer.
> 
> Second - CloudStack is a huge codebase with a ton of inter-related 
> pieces, where unintended consequences are easy.
> 
> We also have a pretty heady commit velocity - 20+ commits today alone.
> 
> Some communities have review-then-commit, which would slow us down and 
> presumably help us increase quality. However, I am not personally 
> convinced that it would do so measurably, because even the most 
> experienced CloudStack developers occasionally break a build, or worse.
> 
> We could have an automated pipeline that verifies that a number of 
> different tests pass before a patch/commit makes it into a mainline 
> branch. That is difficult with our current tooling, but perhaps 
> something worth considering.
> 
> At FOSDEM, Hugo and I were discussing his experiences with Gerrit and 
> OpenDaylight, and he thinks that's a viable option. I think it would 
> certainly be a step in the right direction.
> 
> Separately, Jake Farrell and I were discussing our git-related 
> proposal for ApacheCon, and broached the subject of Gerrit. Jake is 
> the current person bearing most of the load for git at the ASF, and 
> he's also run Gerrit in other contexts. He points out a number of 
> difficulties. (I'd love for him to weigh in on this conversation, 
> hence the CC.) He wants to expand RB significantly, including 
> pre-commit testing.
> 
> So - thoughts, comments, flames? How do we improve code quality and 
> stop needless breakage? Much of this is going to be cultural, I think, 
> and I personally think we struggle with that. Many folks have voiced an 
> opinion about stopping continued commits while the build is broken, but 
> we haven't been able to do that.
> 
> --David
