> You could try collecting up a bunch of these different metrics and then run a 
> regression analysis against the graph-wise recursive downstream dep count for 
> everything on CPAN and see which metrics fall out in the real world.

I might have a dabble at this, perhaps roping in help from someone more 
mathematically, er, rigorous than me.
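
Something like this rough Python sketch is what I have in mind (assuming a 
hypothetical metrics.csv with one row per dist, some candidate metric columns, 
and the recursive downstream dep count):

    import numpy as np
    import pandas as pd

    # Hypothetical input: one row per CPAN dist, candidate metric columns
    # plus a downstream_deps column (recursive reverse-dependency count).
    df = pd.read_csv("metrics.csv")
    metrics = [c for c in df.columns if c != "downstream_deps"]

    # Ordinary least squares: which metrics explain downstream dep counts?
    X = np.column_stack([df[m] for m in metrics] + [np.ones(len(df))])
    y = df["downstream_deps"].to_numpy()
    coefs, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)

    for name, coef in zip(metrics + ["intercept"], coefs):
        print(f"{name}: {coef:.3f}")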

> 
> So many times we come up with arbitrary scoring systems that don't actually 
> match to the real things that happen in the wild.

Having played with various scoring metrics, I've found the one I use for CPAN 
Testers to be pretty reliable, and well suited to this purpose. A CPAN Testers 
fail for one of your upstream dependencies could indicate that someone was 
unable to install your dist.
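
The helpers below are hypothetical stand-ins rather than the real CPAN Testers 
API, but the scoring idea is roughly:

    # Rough sketch: fraction of FAIL reports across a dist's upstream deps.
    # get_upstream_deps and get_test_reports are hypothetical stand-ins,
    # not the actual CPAN Testers API.
    def fail_score(dist, get_upstream_deps, get_test_reports):
        deps = get_upstream_deps(dist)
        reports = [r for d in deps for r in get_test_reports(d)]
        if not reports:
            return 0.0
        return sum(1 for r in reports if r == "FAIL") / len(reports)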

The other measure that worked well for the adoption criteria is the bug 
scoring: have multiple bugs (not wishlist items) been raised since the last 
release, and was that last release more than N months ago? That combination 
indicates that people are using the dist, but there doesn't appear to be an 
engaged maintainer.
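
Expressed as a sketch in Python (the field names and thresholds here are 
hypothetical, not a real bug-tracker schema):

    from datetime import datetime, timedelta

    # Flag a dist when multiple real bugs (not wishlist items) have been
    # raised since the last release AND that release is older than N months.
    def needs_adoption(bugs, last_release, n_months=12, min_bugs=2):
        cutoff = datetime.now() - timedelta(days=30 * n_months)
        real_bugs = [
            b for b in bugs
            if b["severity"] != "wishlist" and b["created"] > last_release
        ]
        return len(real_bugs) >= min_bugs and last_release < cutoff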

Neil
