On Mar 04, 2013, at 05:04 PM, Brett Cannon wrote:

>Sure, but that has nothing to do with programmatic package discovery.
>That's something you will have to do as a person in making a qualitative
>decision along the same lines as API design. Flipping a bit in a config
>file saying "I have tests" doesn't say much beyond you flipped a bit, e.g.
>no idea on coverage, quality, etc.
What I'm looking for is something that automated tools can use to easily discover how to run a package's tests. I want it to be dead simple for developers of a package to declare how their tests are to be run, and what extra dependencies they might need. It seems like PEP 426 only addresses the latter. Maybe that's fine and a different PEP is needed to describe automated test discovery, but I still think it's an important use case.

Imagine:

* Every time you upload a package to PyPI, snakebite runs your test suite on a variety of Python versions and platforms. You get a nice link to the Jenkins results, so you and your users get a good sense of overall package quality.

* You have an automated gatekeeper that will prevent commits or uploads if your coverage or test results get worse instead of better.

* Distro packagers can build tools that auto-discover the tests so that they are run automatically when the package is built, ensuring high-quality packages specifically targeted to those distros.

As a community, we know how important tests are, so I think our tools should reflect that and make it easy for those tests to be expressed. As a selfish side effect, I want to reduce the amount of guesswork I need to perform in order to know how to run a package's tests when I `$vcs clone` their repository. ;)

Cheers,
-Barry
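[Editor's note: to make the idea concrete, here is a minimal sketch of what a tool consuming such metadata might look like. PEP 426's draft does define a `test_requires` field for test-only dependencies; the `test_command` field below is purely hypothetical, invented to illustrate the "declare how to run my tests" half that the email says is missing.]

```python
import json
import shlex
import subprocess

# Hypothetical PEP 426-style metadata. "test_requires" comes from the
# PEP 426 draft; "test_command" is an invented field standing in for a
# declarative "here is how you run my tests" hook.
metadata = json.loads("""
{
    "name": "example",
    "version": "1.0",
    "test_requires": [{"requires": ["pytest"]}],
    "test_command": "pytest --tb=short"
}
""")

def run_declared_tests(meta):
    """Run a package's declared test command, if it declares one.

    An automated tool (snakebite, a distro build script, a commit
    gatekeeper) could call this without any per-package guesswork.
    """
    command = meta.get("test_command")
    if command is None:
        raise LookupError("package declares no test command")
    # shlex.split turns the declared string into an argv list so we
    # can avoid shell=True.
    return subprocess.call(shlex.split(command))
```

With a field like this, `run_declared_tests(metadata)` is all any of the three tools sketched above would need, regardless of whether the package uses pytest, unittest, or something homegrown.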
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com