Re: About tidying up Kwalitee metrics

2008-06-30 Thread Jonathan Rockway
* On Sun, Jun 29 2008, chromatic wrote:
 However, does making CPAN a better place require publishing a Hall of Shame
 on perl.org?

   http://cpants.perl.org/highscores/hall_of_shame

Good point.  

The same could be said for CPAN Ratings also.  Why should my module have
1 star next to it because any goof with a web browser can write a
review?  Why is the opinion of someone with no ties to the community
considered relevant enough to show in the search.cpan search results?
(The same goes for positive ratings.  I've seen a lot of modules that are
rated highly for no good reason, or rated that way by their own authors.)

I personally don't care and generally ignore the ratings, but it's the
same thing as Kwalitee, except not even objective.

Regards,
Jonathan Rockway

-- 
print just = another = perl = hacker = if $,=$


Re: About tidying up Kwalitee metrics

2008-06-30 Thread Andy Lester


On Jun 30, 2008, at 1:08 AM, Jonathan Rockway wrote:


 Why is the opinion of someone with no ties to the community
considered relevant enough to show in the search.cpan search results?


Why do you think the opinion of someone with ties to the community
(however THAT is defined) is more relevant than that of someone who
doesn't?


Our little echo chamber is not some hallowed hall that indicates  
programming wisdom.


xoa

--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance






Re: About tidying up Kwalitee metrics

2008-06-30 Thread chromatic
On Sunday 29 June 2008 23:08:50 Jonathan Rockway wrote:

 * On Sun, Jun 29 2008, chromatic wrote:

  However, does making CPAN a better place require publishing a Hall of
  Shame on perl.org?
 
  http://cpants.perl.org/highscores/hall_of_shame

 Good point.

 The same could be said for CPAN Ratings also.  Why should my module have
 1 star next to it because any goof with a web browser can write a
 review?  Why is the opinion of someone with no ties to the community
 considered relevant enough to show in the search.cpan search results?
 (The same goes for positive ratings.  I've seen a lot of modules that are
 rated highly for no good reason, or rated that way by their own authors.)

 I personally don't care and generally ignore the ratings, but it's the
 same thing as Kwalitee, except not even objective.

There are important differences.  CPAN Ratings are much more obviously 
subjective.  No one (so far) has ranked all 16,000 or however many CPAN 
distributions against each other in a canonical list.

Ratings have individual names attached to them.  They're not just "perl.org 
says that these X distributions from these Y authors are particularly 
shameful."  (Note that the Hall of Shame doesn't include the "Kwalitee is not 
Quality" dodge.  Then again, neither does the Hall of Triumph.)

Ratings have text that people can read and analyze on their own, if they want.

None of these mean that potential users *will* use all of their tools, but the 
differences seem important to me.

-- c


Re: CPAN Ratings and the problem of choice (was Re: About tidying up Kwalitee metrics)

2008-06-30 Thread Salve J Nilsen

Paul Fenwick said:

Jonathan Rockway wrote:

The same could be said for CPAN Ratings also.  Why should my module 
have 1 star next to it because any goof with a web browser can write a 
review?  Why is the opinion of someone with no ties to the community 
considered relevant enough to show in the search.cpan search results?


I'm a big supporter of CPAN Ratings, because I view them as solving one 
of the biggest problems facing the CPAN today.  Choice overload.


CPAN is suffering from its own success.  One of the most common 
questions I get asked is "Which CPAN module should I use?  There's like 
300 that cover my problem."  The worst thing is that, faced with too many 
choices, typical humans are more likely to choose *none* of them than if 
they were only offered one or two[1].


Thank you for making this point. I've had this problem too, many times, 
and I'd love to see something that helps me manage it.


Let's assume I'm in a hurry to buy a present for someone I don't know (or any 
other situation where I'm forced to make a low-info, low-context 
decision). I have to make the best of the situation with the 
information that I have. Sometimes the only solution is just to ask the 
clerk "What toy would you give as a birthday present to a 5-year-old 
friend of your nephew?". The clerk would at least be able to give _some_ 
useful info, like "this is popular amongst the pre-schoolers" or "this toy 
got a prize for being the most educational in 2007" or "we are getting 
lots of these toys in return, so don't buy it until the problems are fixed 
upstream"...


The criteria for choosing software are of course a bit different. I'd 
argue the major one is that WE can also choose to improve the software we 
select (at least when it comes to OSS.)


So when we're discussing Kwalitee metrics or the CPANTS game, we're in 
fact discussing new datapoints for people to use when they choose. We make 
information available. We're communicating.


But as with all other kinds of communication, we have both transmitters of 
information (the CPANTS website, metrics, explanations, reviews etc.) and 
receivers (the individual end users, the distro authors), and as with all 
other kinds of communication, there's always a danger that the recipient 
will interpret the info wrong.


There's a tradition in the marketing and sales professions that if a 
message doesn't land well, then one should assume something is wrong 
with the message, and not with the recipient. This may well be true in most 
cases, but it doesn't take much to imagine situations where this 
assumption is wrong - or at least not precise enough.


But for our purposes I think this tradition would apply well. If people 
are actually annoyed about ending up in the Hall of Shame, we shouldn't 
remove the hall, but instead give them useful info on how to get out of 
it. If authors add useless workarounds just to get to the top of the 
CPANTS game, we shouldn't remove the game, but instead find ways to make 
this tactic useless.



It's extremely telling when one of the most popular parts of Perl 
Training Australia's courses is showing students the Phalanx 100 as a 
short-list. Even though the list is quite some years old, there's almost 
palpable relief when the students realise they can just pick XML::Parser 
from the Phalanx top 10, rather than having to examine the multitude of 
choices on the CPAN.


So, why do ratings make a difference here?

Well, ratings provide at least a partial way for the community to solve 
the choice overload problem.  If a search reveals a 4.5 star module with 
eight reviews, one doesn't feel compelled to look at the other options; 
the choice becomes clear.


Let's look at one assumption I think we're making... Who are actually the 
information recipients in this matter? Here's my take on it:


 * End users of CPAN modules
 * CPAN module authors
 x People who are in a "learning" mode
 x People who are in a "getting things done" mode

So, who should we tailor the messages for? Here's how I would rank the 
message recipients:


 1. End users of CPAN modules who are in a "getting things done" mode
(help users choose, because this makes CPAN into Perl's killer app)
 2. CPAN module authors who are in a "learning" mode
(help authors make better modules, because we want less than 90% crap)
 3. End users of CPAN modules who are in a "learning" mode
(help users become authors, because this is how the community grows)
 4. CPAN module authors who are in a "getting things done" mode.
(help authors work efficiently/without annoyances)

If we can agree on this, I think it'll be a lot easier to decide on ways 
and means to move CPAN forward, and even make some good decisions.



- Salve

--
#!/usr/bin/perl
sub AUTOLOAD{$AUTOLOAD=~/.*::(\d+)/;seek(DATA,$1,0);print#  Salve Joshua Nilsen
getc DATA}$='};{';@_=unpack(C*,unpack(u*,':4@,$'.# [EMAIL 
PROTECTED]
'2!--5-(50P%$PL,!0X354UC-PP%/0\`'.\n));eval {'@_'};   __END__ is near! :)


Re: CPAN Ratings and the problem of choice (was Re: About tidying up Kwalitee metrics)

2008-06-30 Thread Greg Sabino Mullane
 So, why do ratings make a difference here?
 
 Well, ratings provide at least a partial way for the community to solve
 the choice overload problem.  If a search reveals a 4.5 star module with
 eight reviews, one doesn't feel compelled to look at the other options;
 the choice becomes clear.

I question the usefulness of the ratings because they are almost
completely unused. The module mentioned in this thread, XML::Parser, has 6
reviews (2 of which are basically bug reports, and one tells you not to
use it for any new code). One of the oldest and most important modules
ever, DBI, has a mere 29. That's 29 reviews in 8 years - pathetic. It
should have hundreds of ratings. The important question to ask (assuming
the ratings are something worth keeping) is: why are people not rating
modules, and how can we encourage them to do so?

-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation




Re: About tidying up Kwalitee metrics

2008-06-29 Thread Ovid
--- On Sat, 28/6/08, Aristotle Pagaltzis [EMAIL PROTECTED] wrote:

 I think the game is actually an excellent idea. The problem is
 with the metrics. Here are some metrics that are inarguably good:
 
 • has_buildtool
 • extracts_nicely
 • metayml_conforms_to_known_spec

One problem with this is when you get dinged for an unknown key.  This means 
you can't extend your meta YAML file.  It's a hash disguised as YAML.  There 
shouldn't be a problem with adding to it, only subtracting from it.
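
For illustration, an abridged META.yml with a made-up extension key; going by
the metric as it behaves today, the extra key alone is enough to cost the
point:

  name:     Foo-Bar
  version:  0.01
  abstract: An example distribution
  license:  perl
  author:
    - A. U. Thor
  my_custom_hint: anything extra the author wants other tooling to see

That is exactly the kind of additive change that shouldn't be penalised.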

On a side note, I still don't understand why I sometimes get dinged for CPANTs 
errors.

Here's one for HOP-Lexer (http://cpants.perl.org/dist/errors/HOP-Lexer):

  STDERR: Invalid row in Debian file: libhtml-wikiconverter-moinmoin-perl, 
HTM
  STDOUT:

I have no idea what this is and I have no way of correcting it, yet I am getting 
dinged for it.  I see that I can send an email to [EMAIL PROTECTED], but why?  
I don't understand why CPANTs bugs are counted against me.

Cheers,
Ovid


Re: About tidying up Kwalitee metrics

2008-06-29 Thread Thomas Klausner
Hi!

On Sun, Jun 29, 2008 at 01:54:19AM -0700, Ovid wrote:
 
 On a side note, I still don't understand why I sometimes get dinged for 
 CPANTs errors.
 
 Here's one for HOP-Lexer (http://cpants.perl.org/dist/errors/HOP-Lexer):
 
   STDERR: Invalid row in Debian file: libhtml-wikiconverter-moinmoin-perl, 
 HTM
   STDOUT:
 
 I have no idea what this is and I have no way of correcting it yet I am 
 getting dinged for it.  I see that I can send an email to [EMAIL PROTECTED], 
 but why?  I don't understand why CPANTs bugs are counted against me.

As Gabor already suggested, most of the texts on cpants.perl.org should 
be overhauled and extended. 

For example:
http://cpants.perl.org/kwalitee.html#no_cpants_errors
  no_cpants_errors
Shortcoming: Some errors occured during CPANTS testing. They might 
  be caused by bugs in CPANTS or some strange features of this 
  distribution. See 'cpants' in the dist error view for more info.
Remedy: Please report the error(s) to [EMAIL PROTECTED]

'Shortcoming' should be extended to say:
The goal of deducting a kwalitee point for 'no_cpants_errors' is to get 
authors to report CPANTS bugs. As you might guess, testing 10.000+ 
different dists is hard. There are a lot of special cases. It's impossible 
to figure out all those special cases in advance. 'no_cpants_errors' is 
a way to outsource the discovery of special cases to module authors.

or something like that...



-- 
#!/usr/bin/perl  http://domm.plix.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}


Re: About tidying up Kwalitee metrics

2008-06-29 Thread Aristotle Pagaltzis
* Ovid [EMAIL PROTECTED] [2008-06-29 10:55]:
 --- On Sat, 28/6/08, Aristotle Pagaltzis [EMAIL PROTECTED] wrote:
 I think the game is actually an excellent idea. The problem is
 with the metrics. Here are some metrics that are inarguably
 good:

 • has_buildtool
 • extracts_nicely
 • metayml_conforms_to_known_spec

 One problem with this is when you get dinged for an unknown
 key.  This means you can't extend your meta YAML file.  It's a
 hash disguised as YAML.  There shouldn't be a problem with
 adding to it, only subtracting from it.

 On a side note, I still don't understand why I sometimes get
 dinged for CPANTs errors.

Yes, but that doesn’t detract from my point. If those metrics are
faulty, they should and *can* be fixed – and either way they
measure good form directly, as good metrics should.

The problems with them don’t fall in the same category as looking
for arbitrarily chosen proxies for unmeasurable aspects of good
form (or even style).

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: About tidying up Kwalitee metrics

2008-06-29 Thread brian d foy
In article [EMAIL PROTECTED], Thomas Klausner
[EMAIL PROTECTED] wrote:

 The goal of deducting a kwalitee point for 'no_cpants_errors' is to get 
 authors to report CPANTS bugs.

Why do you need authors to report those? After a run, you have a list
of all of the errors already.


Re: About tidying up Kwalitee metrics

2008-06-29 Thread chromatic
On Sunday 29 June 2008 02:28:54 Thomas Klausner wrote:

 For example:
 http://cpants.perl.org/kwalitee.html#no_cpants_errors
   no_cpants_errors
 Shortcoming: Some errors occured during CPANTS testing. They might
   be caused by bugs in CPANTS or some strange features of this
   distribution. See 'cpants' in the dist error view for more info.
 Remedy: Please report the error(s) to
 [EMAIL PROTECTED]

 'Shortcoming' should be extended to say:
 The goal of deducting a kwalitee point for 'no_cpants_errors' is to get
 authors to report CPANTS bugs. As you might guess, testing 10.000+
 different dists is hard. There are lot of special cases. It's impossible
 to figure out all those special cases in advance. 'no_cpants_errors' is
 a way to outsource the discovery of special cases to module authors.

 or something like that...

I thought the goal of Kwalitee was to identify good free software, not to 
humiliate thousands of other authors of free software for not anticipating 
and working around your bugs.

I didn't ask you to scan my distributions, and it's kind of a problem for me 
that you're willing to write publicly that their Kwalitee would be higher if 
I reported bugs in code I didn't write, don't use, and don't believe in -- 
especially if you're going to claim that Kwalitee metrics are useful in 
deciding whether to use my distributions.

(If you don't claim that, then replace my objection with "Okay, so what's the 
point again?")

Want to fix CPANTS and Kwalitee?  It's simple:

* get rid of the scoreboard
* dump the harmful metrics (POD checking, etc)
* separate all of the informational metrics from the genuinely useful metrics
* report to authors when their uploads fail the useful metrics

-- c


Re: About tidying up Kwalitee metrics

2008-06-29 Thread Gabor Szabo
On Sun, Jun 29, 2008 at 4:49 PM, chromatic [EMAIL PROTECTED] wrote:
 On Sunday 29 June 2008 02:28:54 Thomas Klausner wrote:

 For example:
 http://cpants.perl.org/kwalitee.html#no_cpants_errors
   no_cpants_errors
 Shortcoming: Some errors occured during CPANTS testing. They might
   be caused by bugs in CPANTS or some strange features of this
   distribution. See 'cpants' in the dist error view for more info.
 Remedy: Please report the error(s) to
 [EMAIL PROTECTED]

 'Shortcoming' should be extended to say:
 The goal of deducting a kwalitee point for 'no_cpants_errors' is to get
 authors to report CPANTS bugs. As you might guess, testing 10.000+
 different dists is hard. There are lot of special cases. It's impossible
 to figure out all those special cases in advance. 'no_cpants_errors' is
 a way to outsource the discovery of special cases to module authors.

 or something like that...

 I thought the goal of Kwalitee was to identify good free software, not to
 humiliate thousands of other authors of free software for not anticipating
 and working around your bugs.

I also think that no_cpants_errors has no place in the core metrics, nor really
in any other metric. It should only be seen by the CPANTS authors.

... but chromatic, while I have not added that specific metric, your tone is
offensive and humiliating to me, and maybe also to Thomas and possibly others
who invest time trying to make CPAN a better place.

Gabor


Re: About tidying up Kwalitee metrics

2008-06-29 Thread chromatic
On Sunday 29 June 2008 11:02:17 Gabor Szabo wrote:

 On Sun, Jun 29, 2008 at 4:49 PM, chromatic [EMAIL PROTECTED] wrote:

  I thought the goal of Kwalitee was to identify good free software, not to
  humiliate thousands of other authors of free software for not
  anticipating and working around your bugs.

 I also think the no_cpants_errors has no place in the core metrics nor
 actually any metric. It should be only seen by the CPANTS authors

 ... but chromatic, while I have not added that specific metric your tone is
 offending and humiliating me and maybe also Thomas and possibly others who
 invest time to try to make CPAN a better place.

I certainly don't mean to humiliate anyone.  Please accept my apologies.

However, does making CPAN a better place require publishing a Hall of Shame on 
perl.org?

http://cpants.perl.org/highscores/hall_of_shame

I think what I want from CPANTS is conceptually simple:

* tell me (and my potential users) if a recent upload is well-behaved (all but 
three of the core metrics achieve this)

* provide optional information as information alone (packaged by various OS 
distributions, used by other CPAN distributions)

* drop the game, with winners and losers and (especially) scores

-- c


Re: About tidying up Kwalitee metrics

2008-06-28 Thread Gabor Szabo
On Thu, Jun 26, 2008 at 2:23 AM, Hilary Holz [EMAIL PROTECTED] wrote:
 On 6/25/08 10:24 AM, chromatic [EMAIL PROTECTED] wrote:

 On Wednesday 25 June 2008 03:15:59 Thomas Klausner wrote:

 One comment regarding 'each devel sets his/her own kwalitee metrics':
 This could be quite easy for the various views etc. But I'm not sure how
 to calculate a game score then. Do we end up with lots of different
 games? But then, it's only the game (which still motivates a few
 people..)

 Removing the game score completely would fix a lot of what I consider wrong
 with CPANTS.

 -- c
 second!

It seems that the game theme is after all turned into fierce competition or
lack of interest depending on ... I don't know on what, but neither is good
for CPAN.
In some cases - me included - people fix the symptom to get the metric point
while the underlying code does not really change. So the indicator stops being
an indicator.

I don't know how to fix that.
Maybe the suggestions above and elsewhere to get rid of the game theme
and the top N bottom N authors would help.

Maybe what we need to do is
1) remove the game
2) fix the current metrics (e.g. license is not correct now)
3) Add detailed explanations for each metric, or maybe create a page on
the TPF Perl 5 wiki for each metric where it would be easier to provide
pro and contra explanations.
4) add more metrics (including those that collect data from external sources)
5) categorize the metrics as suggested by Salve
6) get the search engines to start to use some of the metrics
 in their search results.

Not necessarily in that order
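
To make point 5 a bit more concrete, here is a rough sketch of how the
existing metrics could be grouped into Salve's topical scales (the grouping
and most of the metric names below are made up, just to show the shape of
the idea):

  use strict;
  use warnings;

  # Hypothetical grouping; has_buildtool, extracts_nicely and
  # metayml_conforms_to_known_spec are real metrics mentioned in this
  # thread, the rest are placeholder names.
  my %scales = (
      distro            => [qw( has_buildtool extracts_nicely
                                metayml_conforms_to_known_spec )],
      security          => [qw( uses_taint_mode )],
      community_support => [qw( has_bugtracker has_mailing_list )],
      community_trust   => [qw( packaged_by_debian )],
  );

  for my $scale (sort keys %scales) {
      printf "%-17s %d metrics\n", $scale, scalar @{ $scales{$scale} };
  }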

Gabor


Re: About tidying up Kwalitee metrics

2008-06-28 Thread chromatic
On Saturday 28 June 2008 08:54:34 Gabor Szabo wrote:

 It seems that the game theme is after all turned into fierce competition
 or lack of interest depending on ... I don't know on what, but neither is
 good for CPAN.
 In some cases - me included - people fix the symptom to get the metric
 point while the underlying code does not really change. So the indicator
 stops being an indicator.

Exactly -- and in other cases, the metric point is actively harmful to the 
CPAN.

 Maybe what we need to do is
 1) remove the game
 2) fix the current metrics (e.g. license is not correct now)
 3) Add detailed explanations for each metric, or maybe to create a page on
 the TPF Perl 5 wiki for each metric where it would be easier to provide
 pro and contra explanations for each metric.
 4) add more metrics (including those that collect data from external
 sources) 5) categorize the metrics as suggested by Salve
 6) get the search engines to start to use some of the metrics
  in their search results.

 Not necessarily in that order

Full support from me on these.

-- c


Re: About tidying up Kwalitee metrics

2008-06-25 Thread Thomas Klausner
Hi!

On Tue, Jun 24, 2008 at 10:10:07AM +0200, Salve J Nilsen wrote:

 I propose to split the current main and optional kwalitee scales into 
 topical ones, so we can allow for a richer set of metrics while allowing 
 everyone who cares mostly about certain types of metrics access to 
 untainted versions.
 ...
 Thoughts?

I've been very quiet lately regarding CPANTS, mostly because I currently 
have more interesting things to do (at the moment I'm in the lucky 
situation that my day job is more fun than my non-paid open source 
activities). This does not mean that I want to give up maintaining CPANTS.

I like most of the feedback given here (and on use.perl) in the last 
months, and would love to turn some of the suggestions into code. I 
would of course love it even more, if you [all of you, not Salve] would 
turn some of the suggestions into code...

Anyway, next week my kids are on holidays with my father, which will buy 
me some extra time. Some of this time will go into CPANTS.

I have a talk on CPANTS scheduled for YAPC::Europe. I would love to turn 
this into a "how to contribute" thing, followed by a hacking-session.


One comment regarding 'each devel sets his/her own kwalitee metrics':
This could be quite easy for the various views etc. But I'm not sure how 
to calculate a game score then. Do we end up with lots of different 
games? But then, it's only the game (which still motivates a few 
people..)
Oh, and of course as yet only 'core' metrics are used to calculate the 
game score.

Anyway, even if I do not reply to all comments, I'm collecting the 
feedback and will comment/implement parts of it. When I have tuits.


-- 
#!/usr/bin/perl  http://domm.plix.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}


Re: About tidying up Kwalitee metrics

2008-06-25 Thread Nicholas Clark
On Tue, Jun 24, 2008 at 10:10:07AM +0200, Salve J Nilsen wrote:
 Hello, folks
 
 I propose to split the current main and optional kwalitee scales into 
 topical ones, so we can allow for a richer set of metrics while allowing 
 everyone who cares mostly about certain types of metrics access to 
 untainted versions.
 
 Let's remove the "optional" type, and instead create the following metric 
 categories where we can place the existing tests:
 
 Distro Kwalitee
  (most of the original tests should go here)
 Security Kwalitee
  (checks for taint-mode or other security-related issues go here)
 Community Support Kwalitee
  (checks for supplied mailing list address, bugtracker, archives, etc. go 
  here)
 Community Trust Kwalitee
  (analysis of external acceptance of the module, including Debian use, goes 
  here)
 
 Thoughts?

Certainly, I would like the metrics to be split into those I can control by
what I upload to PAUSE, and those that I can't fix however much I upload.
Which I think most obviously is those that you group here as
Community Trust Kwalitee.

The previous 2 seem good, as they are likely to be categories that some
people have legitimate disagreements with. ie I've not been paying close
attention to CPANTS, but if I did, I suspect that it would annoy me that
it expects me to have a POD coverage test, and that in turn to make it pass
I could well spend more time bodging that than actually writing
documentation. Which, I agree with chromatic, would be stupid, and not
something that I'd like to see promoted.

(Is "You have POD and it's well formed" something that is already tested?)
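
For what it's worth, the core Pod::Checker module can answer the
well-formedness half of that; a minimal sketch (the path is made up):

  use Pod::Checker;

  # podchecker() returns the number of POD syntax errors it found,
  # or -1 if the file contains no POD at all.
  my $errors = podchecker('lib/My/Module.pm');    # hypothetical path
  print "POD present and well formed\n" if $errors == 0;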

Nicholas Clark


Re: About tidying up Kwalitee metrics

2008-06-25 Thread Paul Fenwick

G'day Thomas,

Thomas Klausner wrote:

I've been very quiet lately regarding CPANTS, mostly because I currently 
have more interesting things to do (at the moment I'm in the lucky 
situation that my day job is more fun than my non-paid open source 
activities). This does not mean that I want to give up maintaining CPANTS.


Congratulations! ;)

would of course love it even more, if you [all of you, not Salve] would 
turn some of the suggestions into code...


In other words, put our code where our mouths are.  ;)  For those of you who 
want hacking the CPANTS game to be a game in itself, it now earns you ohloh 
kudos too:


http://www.ohloh.net/projects/cpants

This could be quite easy for the various views etc. But I'm not sure how 
to calculate a game score then. Do we end up with lots of different 
games? But then, it's only the game (which still motivates a few 
people..)


I've tossed out a few of these suggestions, so I guess I better start coming 
up with answers.  I'll start with what I think are the least controversial 
things, and get into more risky territory as I go.


== Honours ==

One of the proposals was that some of the optional metrics like "packaged by 
Debian" become "honours".  These are things which are (more-or-less) out of 
the author's control, but which we already have (disabled) tests for, and 
which are useful indicators of quality.  I suggest that completed honours 
are shown automatically for any distribution that has them.  Honours that a 
distribution doesn't have just don't get shown.


== Optional Metrics ==

I've also proposed that things that the author does have control over, but 
which they don't consider relevant to their distribution(s), can be switched 
off.  The optional metrics allegedly don't contribute to the 
game score[1], and so the ability to disable them *should* be a non-issue; 
you don't gain or lose game rankings by having them or not.  Optional 
metrics that an author doesn't want are simply not shown.


== Kwalitee Scores ==

Getting a little more controversial here, this means splitting the Kwalitee 
score into two.  Rather than showing an aggregate Kwalitee score for each 
distribution, we'd show the "Core Kwalitee" (for non-optional metrics) and 
the "Bonus Kwalitee".  If you're a gamer, you'll be turning on bonus 
kwalitee metrics and trying to complete them to obtain the fabled Amulet of 
CPANTS.  If you're not a gamer, you'll turn them off, and that's that.
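
As a rough sketch of what the scoring side of that split could look like
(the metric list and flags below are placeholders, not the real CPANTS data
model):

  use strict;
  use warnings;

  # Each metric says whether it counts towards Core or Bonus; the two
  # totals are tallied independently instead of as one aggregate score.
  my @metrics = (
      { name => 'has_buildtool',      core => 1, passed => 1 },
      { name => 'extracts_nicely',    core => 1, passed => 1 },
      { name => 'packaged_by_debian', core => 0, passed => 0 },  # honours-style
  );

  my ( %score, %max );
  for my $m (@metrics) {
      my $bucket = $m->{core} ? 'Core' : 'Bonus';
      $max{$bucket}++;
      $score{$bucket}++ if $m->{passed};
  }

  for my $bucket ( 'Core', 'Bonus' ) {
      printf "%s Kwalitee: %d/%d\n",
          $bucket, $score{$bucket} || 0, $max{$bucket} || 0;
  }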


Whether that means we have *two* scoreboards I'll leave as an open question, 
but I'd be willing to bet the general consensus is that we should.


Cheerio,

Paul

[1] However it appears that http://cpants.perl.org/highscores/hall_of_fame 
has scores going up to 128, which I understand means it *does* include 
optional metrics.  http://cpants.perl.org/dist/overview/IPC-System-Simple 
has a score of 124, and fails only one optional metric (which it would have 
passed if I had remembered 'make manifest').


--
Paul Fenwick [EMAIL PROTECTED] | http://perltraining.com.au/
Director of Training   | Ph:  +61 3 9354 6001
Perl Training Australia| Fax: +61 3 9354 2681


Re: About tidying up Kwalitee metrics

2008-06-25 Thread chromatic
On Wednesday 25 June 2008 03:15:59 Thomas Klausner wrote:

 One comment regarding 'each devel sets his/her own kwalitee metrics':
 This could be quite easy for the various views etc. But I'm not sure how
 to calculate a game score then. Do we end up with lots of different
 games? But then, it's only the game (which still motivates a few
 people..)

Removing the game score completely would fix a lot of what I consider wrong 
with CPANTS.

-- c