You could try collecting a bunch of these different metrics and then running a 
regression analysis against the graph-wise recursive downstream dependency 
count for everything on CPAN, to see which metrics hold up in the real world.
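
For the dependent count itself, something like this rough sketch (Python, 
with toy data; a real run would build the reverse map from the CPAN 
dependency index, and the dist names here are just illustrative):

    from collections import deque

    # dist -> set of dists that depend on it directly (toy data)
    reverse_deps = {
        "ExtUtils-MakeMaker": {"Moo", "Try-Tiny"},
        "Moo": {"DBIx-Class"},
        "Try-Tiny": {"DBIx-Class"},
        "DBIx-Class": set(),
    }

    def downstream_count(dist):
        """Count all transitive downstream dependents via BFS."""
        seen, queue = set(), deque(reverse_deps.get(dist, ()))
        while queue:
            d = queue.popleft()
            if d not in seen:
                seen.add(d)
                queue.extend(reverse_deps.get(d, ()))
        return len(seen)

    print(downstream_count("ExtUtils-MakeMaker"))   # 3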

All too often we come up with arbitrary scoring systems that don't actually 
correspond to what happens in the wild.
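
The regression itself could be as simple as the sketch below, assuming the 
per-dist metrics and dependent counts have already been gathered into a CSV 
(the file name and column names are hypothetical):

    import csv
    import numpy as np

    rows = list(csv.DictReader(open("cpan_metrics.csv")))

    # Candidate quality metrics as predictors (hypothetical columns).
    features = ["fail_rate", "has_meta_file", "declares_min_perl"]
    X = np.array([[float(r[f]) for f in features] for r in rows])
    X = np.column_stack([np.ones(len(X)), X])   # intercept term

    # Response: recursive downstream dependent count, log-scaled because
    # a handful of dists dominate the distribution.
    y = np.log1p(np.array([float(r["downstream_deps"]) for r in rows]))

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    for name, c in zip(["intercept"] + features, coef):
        print(f"{name}: {c:+.4f}")

Anything that comes out with a coefficient near zero probably isn't telling 
us much about real-world reliance.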

Adam


> On 23 Dec 2015, at 9:05 AM, Neil Bowers <neil.bow...@cogendo.com> wrote:
> 
> At the London Perl Workshop I gave a talk on the CPAN River, and on how 
> development and release practices should mature as a dist moves upriver. 
> This was prompted by the discussions we had in Berlin earlier this year.
> 
> Writing the talk prompted a bunch of ideas, one of which is a “water 
> quality” metric, giving some indication of whether a dist is a good one 
> to rely on (it needs a better name). I’ve come up with a first definition, 
> and calculated the metric for the different stages of the river:
> 
> http://neilb.org/2015/12/22/cpan-river-water-quality.html
> 
> Any thoughts on what factors should be included in such a metric? I think 
> it should include only factors that would be hard for anyone to argue 
> with. Currently the individual factors are:
> 
> Not having too many CPAN Testers fails
> Having a META.json or META.yml file
> Specifying the min perl version required for the dist
> 
> Cheers,
> Neil
> 
> At some point I’ll share the slides from my talk, but SlideShare doesn’t 
> handle Keynote presentations, and the PowerPoint exported from Keynote is 
> broken (neither PowerPoint nor SlideShare can handle it!).
> 
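
For concreteness, here's a rough sketch of one way those three factors could 
be turned into a score. The dict layout, the equal weighting, and the 20% 
fail threshold are my assumptions for illustration, not the definition from 
Neil's post:

    def water_quality(dist):
        score = 0
        # 1. Not having too many CPAN Testers fails (threshold assumed).
        total = dist["pass"] + dist["fail"]
        if total == 0 or dist["fail"] / total <= 0.20:
            score += 1
        # 2. Having a META.json or META.yml file.
        if dist["has_meta_json"] or dist["has_meta_yml"]:
            score += 1
        # 3. Specifying the min perl version required for the dist.
        if dist.get("min_perl_version") is not None:
            score += 1
        return score / 3.0   # normalise to [0, 1]

    example = {"pass": 950, "fail": 50, "has_meta_json": True,
               "has_meta_yml": False, "min_perl_version": "5.008"}
    print(water_quality(example))   # 1.0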
