Test coverage is not a quality metric, it is a quantity metric. That
is, it says something about the amount of tests. Coverage can say
something about the overall code only to the extent that the code
under test in fact reflects the remaining code. Since the code under
test is usually better than the remaining code, the overall code is
usually worse than a coverage-based prediction suggests: the remaining
code will contain more faults than predicted.
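To make the "quantity, not quality" point concrete, here is a small, hypothetical Python illustration (the function and test names are my own invention): a buggy function can reach 100% line coverage when its test executes every line but asserts nothing about the result.

```python
# Hypothetical illustration: full line coverage does not imply correctness.

def average(values):
    # Bug: an extra "- 1" makes every result wrong.
    return sum(values) / len(values) - 1

def test_average():
    # This test executes every line of average() (100% coverage)
    # but checks nothing, so the bug goes unnoticed.
    average([1, 2, 3])

test_average()
print("test passed despite the bug")
```

A coverage tool would report this module as fully covered, which is exactly why coverage alone cannot stand in for quality.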

My 0.05€: enforce testing wherever possible, possibly by using
automatic gatekeepers, and also add other quality and security
metrics to the reports. Err, automated reports, that is another
problem...
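As a minimal sketch of what such an automatic gatekeeper could look like (a hypothetical standalone script, not any particular CI system's real interface), the idea is simply to fail the build when measured coverage falls below a configured threshold:

```python
# Hypothetical gatekeeper sketch: turn a coverage percentage into a
# pass/fail exit code. The percentage would normally come from the
# coverage tool's report; here it is taken from the command line.

import sys

def gate(coverage_percent, threshold=80.0):
    """Return 0 (pass) or 1 (fail), suitable as a process exit code."""
    if coverage_percent < threshold:
        print(f"FAIL: coverage {coverage_percent:.1f}% is below {threshold:.1f}%")
        return 1
    print(f"OK: coverage {coverage_percent:.1f}% meets {threshold:.1f}%")
    return 0

if __name__ == "__main__":
    sys.exit(gate(float(sys.argv[1]) if len(sys.argv) > 1 else 100.0))
```

The gate only enforces quantity, of course; pairing it with the other quality and security metrics mentioned above is what would make the reports meaningful.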

On Tue, Jun 4, 2013 at 6:36 PM, Jeroen De Dauw <jeroended...@gmail.com> wrote:
> Hey,
>
> My own experience is that "test coverage" is a poor evaluation metric
>> for anything but "test coverage"; it doesn't produce better code, and
>> tends to produce code that is considerably harder to understand
>> conceptually because it has been over-factorized into simple bits that
>> hide the actual code and data flow.  "Forest for the trees".
>>
>
> Test coverage is a metric to see how much of your code is executed by your
> tests. From this alone you cannot say if some code is good or bad. You can
> have bad code with 100% coverage, and good code without any coverage. You
> are first stating it is a poor metric to measure quality and then proceed
> to make the claim that more coverage implies bad code. Aside from
> contradicting yourself, this is pure nonsense. Perhaps you just expressed
> yourself badly, as test coverage does not "produce" code to begin with.
>
> Cheers
>
> --
> Jeroen De Dauw
> http://www.bn2vs.com
> Don't panic. Don't be evil. ~=[,,_,,]:3
> --
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
