On Nov 6, 2009, at 4:52 PM, exar...@twistedmatrix.com wrote:

> On 09:48 pm, rdmur...@bitdance.com wrote:
>> On Fri, 6 Nov 2009 at 15:48, Glyph Lefkowitz wrote:
>>>
>>> Documentation would be great, but then you have to get people to read the documentation and that's kind of tricky. Better would be for every project on PyPI to have a score which listed warnings emitted with each version of Python. People love optimizing for stuff like that and comparing it.
>>>
>>> I suspect that even if all warnings were completely silent by default, developers would suddenly become keenly interested in fixing them if there were a metric like that publicly posted somewhere :).
>>
>> +1, but somebody needs to write the code...
>
> How would you collect this information? Would you run the test suite for each project? This would reward projects with small or absent test suites. ;)



*I* would not collect this information, as I am far enough behind on other projects ;-) but if I were to advise someone *else* on how to do it, I'd probably add a feature to the 'warnings' module where users could opt in (sort of like popcon.debian.org) to report warnings encountered during normal invocations of any of their Python programs.
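
A minimal sketch of what such an opt-in hook might look like, built on the existing warnings.showwarning override point; the environment variable, report file path, and record format are all hypothetical choices of mine, not anything the stdlib provides:

    import atexit
    import json
    import os
    import warnings

    # Hypothetical opt-in switch and report location, popcon-style;
    # nothing like this exists in the stdlib today.
    OPT_IN_VAR = "PYTHON_WARNING_POPCON"
    REPORT_PATH = os.path.expanduser("~/.python-warning-report.json")

    _collected = []
    _original_showwarning = warnings.showwarning

    def _recording_showwarning(message, category, filename, lineno,
                               file=None, line=None):
        # Record the warning, then fall back to the normal display behaviour.
        _collected.append({
            "category": category.__name__,
            "message": str(message),
            "filename": filename,
            "lineno": lineno,
        })
        _original_showwarning(message, category, filename, lineno, file, line)

    def _write_report():
        # Only write anything if the user explicitly opted in; a separate
        # uploader could then submit the file to some central collector.
        if _collected and os.environ.get(OPT_IN_VAR) == "1":
            with open(REPORT_PATH, "w") as f:
                json.dump(_collected, f, indent=2)

    warnings.showwarning = _recording_showwarning
    atexit.register(_write_report)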

I would also advise such a hypothetical data-gathering project to start with a buildbot doing coverage runs; any warning during the test suite would be worth 1 demerit, and any warning during an actual end-user run of the application *not* caught by the test suite would be worth 1000 demerits :).
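
Just to make that weighting concrete, here's a toy scoring function; the (filename, lineno, category) keying is an assumption purely for illustration:

    def demerits(test_warnings, runtime_warnings):
        """Score per the 1 / 1000 weighting above.

        Each warning is identified by a (filename, lineno, category) key.
        """
        test_keys = set(test_warnings)
        score = len(test_warnings)      # 1 demerit per warning hit by the tests
        for key in runtime_warnings:
            if key not in test_keys:
                score += 1000           # missed by the test suite entirely
        return score

    # Two warnings caught by the tests plus one that only end users hit
    # comes out to 1002 demerits.
    print(demerits(
        test_warnings=[("pkg/mod.py", 10, "DeprecationWarning"),
                       ("pkg/mod.py", 42, "PendingDeprecationWarning")],
        runtime_warnings=[("pkg/other.py", 7, "DeprecationWarning")],
    ))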

And actually it would make more sense if this were part of an overall quality metric, like the one http://pycheesecake.org/ proposes (although I don't think Cheesecake's current metric is all that great, the idea is wonderful).
