Andy Lester wrote:
Why is there a scoreboard?  Why do we care about rankings?  Why is it
necessary to compare one measure to another?  What purpose is being
served?

Why is there XP on perlmonks? Or Karma on Slashdot? Or for that matter, why do we grade students' exams (particularly, why do we often grade them on a curve)?


I think the advantage of a scoreboard system is that metrics like these are a motivator. Rather than defining a qualitative standard of "good module style", CPANTS defines a quantitative standard and measures against it. Many programmers may well be motivated to improve their scores, either for personal improvement or out of competitive spirit.
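To make that concrete, here's a rough sketch of the kind of quantitative check I mean. The checks below are made up for illustration (simple file-layout tests against a hypothetical distribution directory), not the actual CPANTS metrics:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy "kwalitee-like" score: one point per check that passes.
    # Illustrative checks only -- not the real CPANTS metrics.
    sub score_dist {
        my ($dir) = @_;
        my @checks = (
            sub { -f "$dir/META.yml" || -f "$dir/META.json" },   # has metadata
            sub { -f "$dir/README" },                            # has a README
            sub { -d "$dir/t" },                                 # has a test directory
            sub { -f "$dir/Makefile.PL" || -f "$dir/Build.PL" }, # has an installer
        );
        return scalar grep { $_->() } @checks;   # count of passing checks
    }

    printf "score: %d/4\n", score_dist( shift @ARGV || '.' );

The point isn't these particular checks; it's that each one is a yes/no question a program can answer, so the score comes out the same no matter who runs it.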

This is standard process improvement stuff. Whether you call it by fancy names like "Six Sigma" or not, the basic steps are:

* Define what's important
* Measure it quantitatively
* Analyze root causes of metrics below a desired standard
* Improve the process accordingly
* Measure again and repeat

The advantage of a scoreboard is that it provides a peer benchmark, which is a self-defining and evolving standard. (And for those into these kinds of things, it's an emergent property of a complex system of individual actors!) Otherwise, there's no way to calibrate a score except with an arbitrary scale saying that 0-12 is an F, 13 is a D, 14 a C, and so on.
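As a toy illustration of the difference: a peer benchmark is just a percentile against whatever scores currently exist, rather than a fixed cutoff table (the peer scores below are invented):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Where does one score sit relative to its peers?
    # (Peer-relative percentile, as opposed to a fixed A-F cutoff table.)
    sub percentile_rank {
        my ($score, @peers) = @_;
        return 0 unless @peers;
        my $below = grep { $_ < $score } @peers;   # peers scoring lower
        return 100 * $below / @peers;
    }

    my @peer_scores = (10, 12, 13, 13, 14, 15, 17);   # invented peer data
    printf "percentile: %.0f%%\n", percentile_rank( 14, @peer_scores );

As the peer distribution shifts, so does the meaning of any particular score -- which is exactly the self-calibrating behavior I mean.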

We can/should debate the metrics themselves (what counts toward quality), but not the philosophy of measurement. Kwalitee as defined by CPANTS may not be perfect, but it's a start. Should it become the "official" standard of quality for perl? I don't know -- that's certainly worth debating, though I'm not sure what would make it "official" other than perhaps inclusion on a module's CPAN page.

Regards,
David Golden


