We currently have Jenkins pre-commit testing available [1] for patches
attached to JIRA tickets. I own the filter listed in step 5 and can help
get this set up if needed.
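
For anyone curious about the mechanics, the flow behind [1] is roughly:
an admin job polls a JIRA filter for issues with new patch attachments
and triggers the project's pre-commit build for each one. Here is a
rough, untested Python sketch of that polling step; the filter JQL,
Jenkins job URL, and build parameter names are placeholders, not the
real values.

# Rough sketch of the JIRA-polling step behind pre-commit builds.
# The filter, Jenkins job URL, and build parameters are placeholders.
import requests

JIRA = "https://issues.apache.org/jira"
FILTER_JQL = "filter=00000"  # placeholder for the pre-commit filter
JENKINS_JOB = "https://builds.apache.org/job/PreCommit-Example-Build"  # placeholder


def pending_patches():
    """Yield (issue key, newest attachment id) pairs from the JIRA filter."""
    resp = requests.get(
        JIRA + "/rest/api/2/search",
        params={"jql": FILTER_JQL, "fields": "key,attachment"},
    )
    resp.raise_for_status()
    for issue in resp.json().get("issues", []):
        attachments = issue["fields"].get("attachment") or []
        if attachments:
            newest = max(attachments, key=lambda a: a["created"])
            yield issue["key"], newest["id"]


def trigger_build(issue_key, attachment_id):
    """Ask Jenkins to test one attachment (parameter names are made up)."""
    requests.post(
        JENKINS_JOB + "/buildWithParameters",
        params={"ISSUE_KEY": issue_key, "ATTACHMENT_ID": attachment_id},
    )


if __name__ == "__main__":
    for key, attachment in pending_patches():
        trigger_build(key, attachment)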

I am working on getting pre-commit testing available from ReviewBoard and
hopefully will have this available shortly. The Reviewbot plugin for
Jenkins is still very new and does not yet support multiple ReviewBoard
repositories, which we would need in order to use it as a solution.
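
Until Reviewbot gets there, the same effect could be approximated with a
small poller that reads pending review requests from the ReviewBoard API
and routes each one to a Jenkins job for its repository. A rough,
untested sketch follows; the server URLs and the repository-to-job
mapping are placeholders, and the repository link's title is assumed to
match the repository name.

# Rough sketch: route pending ReviewBoard review requests to a Jenkins
# job per repository. URLs and the mapping below are placeholders.
import requests

REVIEWBOARD = "https://reviews.apache.org"
JENKINS = "https://builds.apache.org"

# Placeholder mapping from ReviewBoard repository name to a Jenkins job.
REPO_TO_JOB = {
    "cloudstack-git": "PreCommit-CLOUDSTACK-Build",
}


def pending_review_requests():
    """Yield (review request id, repository name) for pending requests."""
    resp = requests.get(
        REVIEWBOARD + "/api/review-requests/",
        params={"status": "pending"},
    )
    resp.raise_for_status()
    for rr in resp.json()["review_requests"]:
        # Assumption: the repository link's title is the repository name.
        repo = rr["links"]["repository"]["title"]
        yield rr["id"], repo


def trigger(review_id, repo):
    job = REPO_TO_JOB.get(repo)
    if job is None:
        return  # no pre-commit job configured for this repository
    requests.post(
        JENKINS + "/job/" + job + "/buildWithParameters",
        params={"REVIEW_ID": review_id},
    )


if __name__ == "__main__":
    for review_id, repo in pending_review_requests():
        trigger(review_id, repo)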

We currently have over 30k users on our ReviewBoard instance and it is
working well for the projects that are leveraging it. Gerrit setup has
some pain points, as it wants control of the repositories in order to
merge patches. We would not want to run it on the same server that our
repositories run on, due to security concerns and potential hardware
contention. That said, I am not completely against the use of Gerrit, but
I think we need to wait and review Gerrit and its potential use after we
complete the migration of git-wip to git.a.o.

I'm happy to discuss or help in any way I can to facilitate easier
workflows and pre-commit testing, and to ensure that our projects deliver
the highest-quality software.
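
On Hugo's point further down about a devcloud smoke-test script: below is
a rough, untested sketch of the kind of check he describes, written
against the unauthenticated integration API that a devcloud management
server typically exposes on port 8096. It assumes the zone is already
configured and that the account's default network offering gives an
isolated source-NAT network; the template and service offering lookups
are deliberately simplified.

# Rough, untested devcloud smoke test: deploy one user VM and report
# whether it comes up. Assumes the unauthenticated integration API on
# port 8096 and an already-configured zone whose default network offering
# provides an isolated source-NAT network.
import time
import requests

API = "http://localhost:8096/client/api"


def call(command, **params):
    params.update(command=command, response="json")
    resp = requests.get(API, params=params)
    resp.raise_for_status()
    return resp.json()[command.lower() + "response"]


def first_id(command, **params):
    """Return the id of the first item in a list* API response."""
    result = call(command, **params)
    key = next(k for k in result if isinstance(result[k], list))
    return result[key][0]["id"]


def wait_for_job(jobid, timeout=600):
    """Poll the async job until it succeeds, fails, or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        job = call("queryAsyncJobResult", jobid=jobid)
        if job["jobstatus"] == 1:   # success
            return True
        if job["jobstatus"] == 2:   # failure
            return False
        time.sleep(10)
    return False


zone = first_id("listZones", available="true")
template = first_id("listTemplates", templatefilter="featured", zoneid=zone)
offering = first_id("listServiceOfferings")

deploy = call(
    "deployVirtualMachine",
    zoneid=zone,
    templateid=template,
    serviceofferingid=offering,
)
print("smoke test", "PASSED" if wait_for_job(deploy["jobid"]) else "FAILED")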

-Jake

[1]: http://wiki.apache.org/general/PreCommitBuilds


On Fri, Feb 7, 2014 at 2:16 PM, Yoshikazu Nojima <m...@ynojima.net> wrote:

> +1 for pre-commit testing.
> I have put pre-commit testing into practice personally, and it has
> helped me a lot. Before I submit a patch, I create a pull request in my
> GitHub repo. My Jenkins instance subscribes to pull requests via the
> Jenkins GitHub plugin, and the build result is displayed in GitHub.
> I heard there are plugins for ReviewBoard and for Gerrit that
> automatically test submitted patches, e.g.:
> https://wiki.jenkins-ci.org/display/JENKINS/Jenkins-Reviewbot
> https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
>
> If possible, executing automated end-to-end testing is better, since it
> can eliminate most regressions, but even running the build and unit
> tests will prevent minor mistakes and broken commits.
>
>
> 2014-02-07 9:28 GMT-07:00 Mike Tutkowski <mike.tutkow...@solidfire.com>:
> > I would love to see pre-commit testing such as what Hugo described.
> >
> > For the time being, I tend to run mvn -P developer,systemvm clean
> > install to make sure I have a clean build and run whatever tests it
> > runs, then I run my own suite of tests manually (I'd like to automate
> > these when I have time), then I check my code in.
> >
> >
> > On Fri, Feb 7, 2014 at 5:02 AM, Sudha Ponnaganti <
> > sudha.ponnaga...@citrix.com> wrote:
> >
> >> +1 for pre-commit testing. Whichever tool enforces it would be a good
> >> choice.
> >> For feature check-ins, we (the community) require sanity tests to be
> >> submitted by feature owners; this was followed well in the 4.0
> >> release, but there is a lapse in this practice now. It would be great
> >> if the RM could enforce this during check-ins: review unit tests and
> >> results before approving a check-in.
> >>
> >> -----Original Message-----
> >> From: Trippie [mailto:trip...@gmail.com] On Behalf Of Hugo Trippaers
> >> Sent: Friday, February 07, 2014 12:46 AM
> >> To: dev
> >> Cc: jfarr...@apache.org
> >> Subject: Re: Code quality, QA, etc
> >>
> >> Hey David,
> >>
> >> I would make a distinction between code issues and functional issues.
> >> Occasionally somebody just plainly breaks the build (I'm guilty of
> >> that myself, actually), and that's just plain stupid. Luckily we have
> >> Jenkins to catch these errors quickly. I'm in a continuous struggle
> >> with Jenkins to get the build time to less than 5 minutes. I feel that
> >> is an acceptable time to get feedback on a commit; any longer and you
> >> have moved on to the next thing or gone home. Also, this kind of
> >> testing isn't really hard: run the build and the unit tests. By
> >> introducing something like Gerrit we can actually make this happen
> >> before a change is committed to the repo. Push a patch to Gerrit,
> >> Gerrit tells Jenkins to test the patch, and if Jenkins gives a +1,
> >> commit; for non-committers the extra step would be to invite somebody
> >> for review as well. The second nice thing about Jenkins is the
> >> post-review test: if a contributor submits a patch it is built by
> >> Jenkins, and if a reviewer approves the patch, Jenkins will run
> >> another build to ensure that the patch still applies and doesn't break
> >> the build. Very handy if there is some time between patch submission
> >> and patch review.
> >>
> >> Functional issues are much harder to track. For example, yesterday I
> >> found several issues in the Contrail plugin that would not cause any
> >> pain in a Contrail environment, but in any other environment creating
> >> a network would fail. These examples are too common and difficult to
> >> catch with unit tests. It can be done, but it requires some serious
> >> effort on the developer's side, and in general we don't seem to be
> >> very active at writing unit tests. These kinds of issues can only be
> >> found by actually running CloudStack and executing a series of
> >> functional tests. Ideally that is what we have the BVT suite for, but
> >> I think our current BVT setup is not documented well enough to give
> >> accurate feedback to a developer about which patch broke a certain
> >> piece of functionality. In Jenkins the path from code to BVT is not
> >> tracked yet, so it is almost impossible to see which commits were new
> >> in a particular run of the BVT suite.
> >>
> >> Personally I'm trying to get into the habit of running a series of
> >> tests on devcloud before committing something. It doesn't prove a lot,
> >> but it does guarantee that the bare basic developer functionality is
> >> working before committing something. After a commit, at least I'm sure
> >> that anybody will be able to spin up devcloud and deploy an instance.
> >> I'm trying to get this automated as well so we can use it as feedback
> >> on a patch. Beers for anyone who writes an easy-to-use script that
> >> configures devcloud with a zone and tests whether a user VM can be
> >> instantiated on an isolated source-NAT network. If we could include
> >> such a script in the tree it might help people with testing their
> >> patch before committing.
> >>
> >> I think we are seeing more and more reverts in the tree. That is not
> >> necessarily a good thing, but at least people know the option is there
> >> if a commit really breaks the build. Also, please help each other out;
> >> everybody can make a mistake and commit it. If it's a trivial mistake
> >> it might not be much effort to track it down and fix it, which is far
> >> better than a revert or a mail that something is broken.
> >>
> >> In short, we need to make testing more efficient and transparent to
> >> allow people to easily incorporate it in their personal workflow.
> >>
> >> Cheers,
> >>
> >> Hugo
> >>
> >> On 7 feb. 2014, at 04:50, David Nalley <da...@gnsa.us> wrote:
> >>
> >> > Hi folks,
> >> >
> >> > We continue to break things large and small in the codebase, and
> >> > after a number of different conversations, I thought I'd bring that
> >> > discussion here.
> >> >
> >> > First - coding quality is only one factor that the PMC considers when
> >> > making someone a committer.
> >> >
> >> > Second - CloudStack is a huge codebase with a ton of inter-related
> >> > pieces, and unintended consequences are easy.
> >> >
> >> > We also have a pretty heady commit velocity - 20+ commits today
> >> > alone.
> >> >
> >> > Some communities have Review-then-commit, which would slow us down
> >> > and presumably help us increase quality. However, I am not
> >> > personally convinced that it would do so measurably, because even
> >> > the most experienced CloudStack developers occasionally break a
> >> > build, or worse.
> >> >
> >> > We could have an automated pipeline that verifies a number of
> >> > different tests pass before a patch/commit makes it into a mainline
> >> > branch. That is difficult with our current tooling, but it is
> >> > perhaps something worth considering.
> >> >
> >> > At FOSDEM, Hugo and I were discussing his experiences with Gerrit
> >> > and OpenDaylight, and he thinks that's a viable option. I think it
> >> > would certainly be a step in the right direction.
> >> >
> >> > Separately, Jake Farrell and I were discussing our git-related
> >> > proposal for ApacheCon and broached the subject of Gerrit. Jake is
> >> > the current person bearing most of the load for git at the ASF, and
> >> > he's also run Gerrit in other contexts. He points out a number of
> >> > difficulties (and I'd love for him to weigh in on this conversation,
> >> > hence the CC). He wants to expand RB significantly, including
> >> > pre-commit testing.
> >> >
> >> > So - thoughts, comments, flames? How do we improve code quality and
> >> > stop needless breakage? Much of this is going to be cultural, I
> >> > think, and I personally think we struggle with that. Many folks have
> >> > voiced an opinion about stopping continued commits when the build is
> >> > broken, but we haven't been able to do that.
> >> >
> >> > --David
> >>
> >>
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *(tm)*
>
