I like the idea of having it as a release barrier, and I also like the idea of getting a "Code Coverage Regression" email that lists the package(s) that have regressed below a low-water mark.

What I am at a loss for is what the low-water mark should be. Whatever we choose, I think we are going to have some immediate regressions. Then the question becomes how much work we are willing to put in to fix them.

One approach that comes to mind is to set a reachable goal for each release as a step along the way to our ultimate goal. For now, a regression could be any package falling more than 10% below our current baseline. Then we try to raise the water each release and reset the baseline.
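
For concreteness, here is a rough sketch of what the nightly check could look like. The file names, the properties-file format (package name = coverage percentage), and the 10-point slack are all just assumptions for illustration, not anything our current coverage tooling produces:

import java.io.FileReader;
import java.util.Properties;

/**
 * Sketch: compare per-package line coverage against a saved baseline
 * and report any package that has dropped more than the allowed slack.
 *
 * Assumes both input files are simple properties files mapping
 * package name to a coverage percentage, e.g.
 *   org.apache.derby.impl.sql=72.4
 */
public class CoverageWatermark {
    private static final double SLACK = 10.0;   // percentage points below baseline

    public static void main(String[] args) throws Exception {
        Properties baseline = load(args[0]);    // e.g. coverage-baseline.properties
        Properties current  = load(args[1]);    // e.g. coverage-current.properties

        boolean regressed = false;
        for (String pkg : baseline.stringPropertyNames()) {
            double base = Double.parseDouble(baseline.getProperty(pkg));
            double now  = Double.parseDouble(current.getProperty(pkg, "0"));
            if (now < base - SLACK) {
                System.out.printf("Code Coverage Regression: %s %.1f%% -> %.1f%%%n",
                                  pkg, base, now);
                regressed = true;
            }
        }
        // Non-zero exit status lets the nightly job decide whether to send mail.
        System.exit(regressed ? 1 : 0);
    }

    private static Properties load(String file) throws Exception {
        Properties p = new Properties();
        try (FileReader in = new FileReader(file)) {
            p.load(in);
        }
        return p;
    }
}

Raising the water mark each release would then just mean regenerating the baseline file from the current coverage numbers.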

David

Rick Hillegas wrote:
In previous lives, I've seen code-coverage metrics generated on, say, a monthly basis and used as a release barrier. I do not think they are appropriate as a barrier to checkin.

Regards,
-Rick

Kathey Marsden wrote:

David W. Van Couvering wrote:

Did I ask this before?  Do we want to agree upon a "low water mark"
for code coverage and send out a "Quality Regression" email if our
testing coverage falls below that mark?  I think this would have a lot
of value.


This sounds like an interesting idea.  Code coverage is an important
quality data point.  What kind of granularity would it have?  Would it
be just the overall number, or would individual packages or files be
flagged?  Also, for areas with poor coverage, how could we encourage
the numbers to be brought up before or during enhancements?

Kathey
