Re: Stupid prove tricks
On Wed, February 25, 2009 6:06 pm, Michael G Schwern wrote:
> $ prove --exec 'cat -' test.dummy
> test
> Now you can write TAP and finish with ctrl-d. But test.dummy has to exist.

/dev/null works for me. Perhaps you could post your tricks: http://perlmonks.org/?node=Meditations#post
Re: Public Humiliation and Kwalitee (was Re: Tested File-Find-Object-0.1.1 with Class::Accessor not installed)
On Thu, October 23, 2008 10:37 am, chromatic wrote:
> I don't care about backchannel communication between other authors and CPAN Testers, but how can you blame Shlomi for thinking that public humiliation isn't a vital component of Kwalitee? There's prior art: http://cpants.perl.org/highscores/hall_of_shame

That looks sorted by kwalitee and author. If we're shaming people, author name shouldn't be a factor. Could it be by kwalitee and most recent release date instead?
Re: TAP Frameworks Continue to Spread
Someone who's actually looked at this stuff may want to update http://en.wikipedia.org/wiki/Test_Anything_Protocol

I think there's also stuff on that page that needs updating re: Test::Harness 3.

On Tue, May 13, 2008 2:44 pm, chromatic wrote:
> PHP's Symfony has a test framework called lime, based on Test::More: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing
> I just heard about a C++ test framework based on lime called lemon: http://eric.scrivnercreative.com/?p=8
> -- c
Re: W3C validator without network access?
On Sun, April 6, 2008 9:28 pm, Gabor Szabo wrote:
> Is there a W3C validator that works locally on my computer?

You mean an X?HTML validator? I haven't used it, but: http://htmlhelp.com/tools/validator/offline/index.html.en
Re: W3C validator without network access?
On Sun, April 6, 2008 9:28 pm, Gabor Szabo wrote:
> Is there a W3C validator that works locally on my computer? All the modules I found so far use the http://validator.w3.org/ service, including Test::HTML::W3C, but that's not really usable in a frequently running test suite.

The source for that is available as described on http://validator.w3.org/source/
Re: Dev version numbers, warnings, XS and MakeMaker dont play nicely together.
On Sun, January 6, 2008 4:54 pm, demerphq wrote:
> So we are told the way to mark a module as development is to use an underbar in the version number:
>
>     $VERSION = 1.23_01;
>
> but this will produce warnings if you assert a required version number, as the version isn't numeric. So the standard response is to do
>
>     $VERSION = eval $VERSION;
>
> on the next line. This means that MakeMaker sees 1.23_01 but Perl internal code for doing version checks sees 1.2301. This is all fine and dandy with pure perl modules. BUT, if the module is bog standard and uses XS this is a recipe for a fatal error. XS_BOOT_VERSIONCHECK will compare 1.23_01 to 1.2301 and decide that they are different versions and die.

See perlmodstyle:

    If you want to release a 'beta' or 'alpha' version of a module but
    don't want CPAN.pm to list it as most recent use an '_' after the
    regular version number followed by at least 2 digits, eg. 1.20_01.
    If you do this, the following idiom is recommended:

        $VERSION = "1.12_01";
        $XS_VERSION = $VERSION; # only needed if you have XS code
        $VERSION = eval $VERSION;
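For illustration, a minimal sketch of what the two assignments in that idiom actually yield (the surrounding script is hypothetical, not from the thread):

```perl
use strict;
use warnings;

# The quoted form keeps the underscore, which is what MakeMaker's
# version-parsing sees in the source file...
our $VERSION = '1.23_01';
print "as MakeMaker sees it: $VERSION\n";

# ...and eval'ing it strips the underscore, producing the plain number
# that perl's internal version checks compare against.
my $numeric = eval $VERSION;
print "as perl's version checks see it: $numeric\n";   # 1.2301
```

The XS mismatch described above arises exactly because one side compares the first form and the other side the second.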
Re: Summarizing the pod tests thread
On Tue, July 31, 2007 9:56 pm, chromatic wrote:
> On Tuesday 31 July 2007 20:25:15 Salve J. Nilsen wrote:
>> Turning off syntax checking of your POD is comparable to not turning on warnings in your code. Now would you publish code developed without use warnings;?
> Now that's just silly.

Is it? Comparable. Capable of being compared. 1913 Webster.

> I really have nothing more to say in this thread.

Wow. It's one of those threads that I didn't even bother reading till now, since I felt pretty confident that I wasn't going to change my opinion (and that most others would not as well). Pity I broke down.
Re: Eliminating STDERR without any disruption.
Michael G Schwern wrote:
>     print "TAP version 15\n";
>     print "1..1\n";
>     print "# Information\n";
>     print "not ok 1\n";
>     print "! Failure\n";

I'd really not like to see meaningful punctuation. How about "diag Failure\n". Or even levels of keywords debug/info/notice/warning/err/crit/alert/emerg (stolen from syslog.h). Oh, and yaml if that idea lives.
Re: The price of synching STDOUT and STDERR
David Cantrell wrote:
> Michael G Schwern wrote:
>> First thing it breaks, and probably most important: No warnings.
> Any test suite that blithely ignores warnings is BROKEN. There are two types of warning. First, those which you deliberately spit out, like "use of foo() is deprecated, please use bar() instead". Your tests need to exercise that warning, so need to somehow capture them and check that they're spat out at the right time. If you don't do that, you're not testing properly. The second type of warning is the one that tells you when you the author have fucked up, like when you say my $variable twice, or saying $variable = 'one thing' and $varable = 'something else'. I deal with those by having $SIG{__WARN__} turn them into die()s in the tests. If you don't deal with them, you're saying that you don't care about tripping yourself up in the future.

I have to disagree here. You are saying that it is the test's responsibility to fail if it issues any warnings. I believe an author may choose to scan the test output and rely on seeing messages go to stderr. Obviously it's better to do as you propose, but that takes extra work in every test to implement. Authors should not be counted on to do so, and when they don't go that extra mile, their warnings should not just get lost.
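A sketch of the $SIG{__WARN__} approach David describes (this is an illustration of the technique, not his actual code; the warning text is made up):

```perl
use strict;
use warnings;

# Promote every warning to a fatal error for the rest of the file, so
# a stray warning aborts the test run instead of scrolling past.
local $SIG{__WARN__} = sub { die "FATAL warning: $_[0]" };

# Demonstrate: a warning fired inside eval now shows up as an exception.
my $survived = eval { warn "uninitialized something\n"; 1 };
print $survived ? "warning was ignored\n" : "warning was fatal: $@";
```

In a real test file you would install the handler at the top, before loading the code under test.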
Re: The price of synching STDOUT and STDERR
On 14 Mar 2007, at 07:29, chromatic wrote:
>> The problem is that there's no way to tell that the information sent to Test::Builder->diag() is diagnostic information for the tests, because once it goes out on STDERR, it could be anything.
> So we seem to have two reasonably sensible options on the table. I don't think they're mutually incompatible. Ovid's 'only merge STDOUT and STDERR when in verbose mode' seems to be workable with current Test::Builder. We should also push forward with machine readable diagnostics as a formal part of TAP and have those show up on STDOUT along with all the other TAP. Did I miss anything?

A way to request verbose output without the merging.
Test::Builder: mixing use_numbers on and off
Test::Builder has a method use_numbers to turn off test numbering; this can be useful when running tests in different processes. But the doc says:

    Most useful when you can't depend on the test output order, such as
    when threads or forking is involved.

    Test::Harness will accept either, but avoid mixing the two styles.

    Defaults to on.

In a longer test script (doesn't "test program" sound silly?), I have only a short section that is potentially going to be out of order, so I want to say:

    $builder->use_numbers( 0 );
    ...
    $builder->current_test( blah );
    $builder->use_numbers( 1 );

around that section. This works fine with current Test::Harness. Is the "avoid mixing" warning out of date? Or in accord with future TAP plans?
Re: fetching module version from the command line
On Thu, Jul 13, 2006 at 02:29:38PM +0300, Gabor Szabo wrote:
> On 7/13/06, Fergal Daly [EMAIL PROTECTED] wrote:
>> I could change it so that it tries to figure out whether it's being used for real or not and disable the END block code but that's stress and hassle.
> As a module author, as far as I'm concerned, if MakeMaker can figure out my version then my job is done.

So the only thing that would be correct is to search @INC for the .pm file and then grep it with the same regex MakeMaker uses. That wouldn't be correct for modules that aren't the one that determines the distribution version, since other modules won't necessarily follow the one-line rule.
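A sketch of that search-and-grep approach. The regex below is a simplification of MakeMaker's pattern (see ExtUtils::MM_Unix::parse_version for the real thing), and the scratch package name is made up:

```perl
use strict;
use warnings;

# Find a module's .pm file in @INC and extract its $VERSION the way
# MakeMaker roughly does: grab the first line that looks like a
# $VERSION assignment and eval just that line, without loading the
# module itself.
sub guess_version {
    my ($module) = @_;
    (my $path = "$module.pm") =~ s{::}{/}g;
    for my $dir (@INC) {
        next if ref $dir;               # skip hooks in @INC
        my $file = "$dir/$path";
        next unless -f $file;
        open my $fh, '<', $file or next;
        while (my $line = <$fh>) {
            next unless $line =~ /^\s*(?:our\s+)?\$(?:[\w:]+::)?VERSION\s*=/;
            # Run the assignment in a sandbox package so nothing leaks.
            my $v = eval qq{
                package My::Scratch;   # hypothetical sandbox package
                no strict;
                local \$VERSION;
                $line
                \$VERSION;
            };
            return $v if defined $v;
        }
        return undef;   # found the file, but no recognizable version line
    }
    return undef;
}

print guess_version('File::Spec'), "\n";
```

As the post notes, this is only right for the module that carries the distribution's version; other .pm files in the same dist may not follow the one-line convention at all.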
Re: Test me please: P/PE/PETDANCE/Test-Harness-2.57_06.tar.gz
On Sun, Apr 23, 2006 at 11:01:17AM +0200, Marcus Holland-Moritz wrote:
> The only thing worth mentioning is that with perl 5.003, the following happens:
>
>     [EMAIL PROTECTED] $ perl5.003 Makefile.PL
>     Can't locate ExtUtils/Command.pm in @INC at Makefile.PL line 4.
>     BEGIN failed--compilation aborted at Makefile.PL line 4.

Yes, run-time "require VERSION" is almost always the wrong thing to do. I'd suggest the following change:

    --- Makefile.PL.orig    2006-04-23 10:58:31.000000000 +0200
    +++ Makefile.PL         2006-04-23 10:58:50.000000000 +0200
    @@ -1,4 +1,4 @@
    -require 5.004_05;
    +BEGIN { require 5.004_05 }

"use 5.004_05;" would be better.
Re: Show-stopping Bug in Module::Install and the Havoc it Created
On Mon, Apr 03, 2006 at 10:32:12PM +1000, Adam Kennedy wrote:
> Yitzchak Scott-Thoennes wrote:
>> On Sat, Mar 11, 2006 at 10:20:29AM +0100, Tels wrote:
>>> B when it breaks, end-users cannot fix the problem for themselves, they need to bug the author and he has to release a new version. (Good luck with that with sparsely maintained modules...)
>> Last time this happened to me, I just replaced the bundled version with a newer one. No need to bug the author (though I did verify that others already had). (Since then, I've heard (but haven't confirmed) that just installing a newer MI and running Makefile.PL in the broken dist will do this automatically. Is this correct?)
> No, although if you delete the inc directory entirely, the Makefile.PL should latch onto the system version and go into author mode, which for the purposes of installation is almost entirely the same.

Right, I meant to say deleting inc in there.

> HOWEVER, that requires that they NOT be using a custom extension, and that the commands used in the Makefile.PL match those in the current version of Module::Install.

Good to know, thanks.

> The command list isn't entirely frozen yet, so while it may work, there's some risks. Once the commands have frozen, this will be a lot safer to do.

Assuming someone doing this has whatever is then the newest Module::Install, are there still command list issues?
Re: Show-stopping Bug in Module::Install and the Havoc it Created
On Sat, Mar 11, 2006 at 10:20:29AM +0100, Tels wrote:
> B when it breaks, end-users cannot fix the problem for themselves, they need to bug the author and he has to release a new version. (Good luck with that with sparsely maintained modules...)

Last time this happened to me, I just replaced the bundled version with a newer one. No need to bug the author (though I did verify that others already had). (Since then, I've heard (but haven't confirmed) that just installing a newer MI and running Makefile.PL in the broken dist will do this automatically. Is this correct?)
Where did I see this use of plan()?
I remember working with some module that had tests something like:

    use Test::More;
    plan tests => numtests();
    ...
    is($foo, $bar, 'foo is bar');
    sub numtests { 13 }

So that when you added a new test to the bottom, the number to modify was right there also. Ring a bell with anyone?
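A minimal runnable version of that idiom (the particular tests here are placeholders). It works because the whole file is compiled before plan() runs, so the sub at the bottom is already defined:

```perl
use strict;
use warnings;
use Test::More;

# The plan is computed by a sub defined at the very bottom of the file.
plan tests => numtests();

is( 1 + 1, 2, 'arithmetic' );
is( lc('FOO'), 'foo', 'lc' );

# Add a test above, and the count to bump is right here beside it.
sub numtests { 2 }
```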
Re: Request for Comments: Package testing results analysis, result codes
On Mon, Feb 20, 2006 at 11:36:27AM +0000, Barbie wrote:
>> 12. System is incompatible with the package. Linux::, Win32::, Mac:: modules. Irreconcilable differences.
> Not sure how you would cover this, but point 12 seems to possibly fit. POSIX.pm is created for the platform it's installed on. A recent package I was testing, File::Flock (which is why I can't install PITA) attempted to use the macro EWOULDBLOCK. Windows doesn't support this, and there doesn't seem to be a suitable way to detect this properly.

    use POSIX 'EWOULDBLOCK';
    $ok = eval { EWOULDBLOCK(); 1 };
    print $ok;
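A slightly fuller sketch of that probe. On platforms where the underlying C macro is absent, calling the POSIX constant dies at run time, which the eval catches:

```perl
use strict;
use warnings;
use POSIX qw(EWOULDBLOCK);

# The import always succeeds; it's the call that throws on platforms
# (like Windows) where the macro is not defined.
my $ok = eval { EWOULDBLOCK(); 1 };

if ($ok) {
    printf "EWOULDBLOCK is %d on this platform\n", EWOULDBLOCK();
} else {
    print "EWOULDBLOCK is not defined on this platform\n";
}
```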
Re: Binary distributions
On Mon, Feb 06, 2006 at 08:16:11AM +0200, Offer Kaye wrote:
> OT question - why is Scalar-List-Utils listed as CORE? It is not part of the Perl5 core

http://perldoc.perl.org/perl58delta.html#New-Modules-and-Pragmata
Re: [Module::Build] [RFC] author tests
On Thu, Feb 02, 2006 at 02:56:09AM -0800, Tyler MacDonald wrote:
> A new module doesn't need to be added to the core, so long as there is a way that we can reliably detect when a person wishes to build and test any given perl package for an objectively unselfish purpose such as 1: prepackaging, 2: automated testing, or 3: releasing. All three are viral so it's best to make sure they do no harm, while still maintaining some level of convenience for the end user.

There's already AUTOMATED_TESTING for #2 and make disttest for #3 (just keep a file around in your repo that's not in your MANIFEST and test for its presence). Any good way to detect #1?

An environment variable. PERL_TEST_EXHAUSTIVE? And make/Build disttest can set it too. In the perl core, the few tests that are normally not run are requested by a -torture switch, but I think despite this precedent "exhaustive" sounds better.

But please, let's not have new ta/ or whatever directories or .ta files; this is a problem that is well taken care of with skip_all. If somebody really wants, they could go the Test::Exhaustive route, which would automatically do the skip_all if the env vars weren't set. But putting a few stock lines at the start of your .t file isn't really all that big a deal.
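A sketch of the skip_all gating described above. PERL_TEST_EXHAUSTIVE is the name proposed in this thread, not an established convention, and for demonstration the variable is set at the top (a real author test would rely on the caller's environment):

```perl
use strict;
use warnings;
use Test::More;

# Pretend the author (or disttest) set the variable, for demonstration.
BEGIN { $ENV{PERL_TEST_EXHAUSTIVE} = 1 }

# The "few stock lines at the start of your .t file":
if ( $ENV{PERL_TEST_EXHAUSTIVE} ) {
    plan tests => 1;
} else {
    plan skip_all => 'author-only tests; set PERL_TEST_EXHAUSTIVE=1 to run';
}

ok( 1, 'exhaustive checks would go here' );
```

With the variable unset, plan skip_all emits "1..0 # SKIP ..." and exits, so ordinary installs never run the exhaustive checks.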
Re: [Module::Build] [RFC] author tests
On Thu, Feb 02, 2006 at 10:01:48AM -0800, Tyler MacDonald wrote:

I strongly feel that authors should keep everything necessary for their distribution public; either in the CPAN distribution itself, or via a permanent publicly available version control system. Who's to say you won't lose interest in maintaining the module or in perl altogether at some point? Or move to Antarctica and be unable to maintain it? Or have your home/business or wherever your files are kept destroyed in a hurricane or other natural disaster? And, of course, we all will die someday. Some suddenly. I do agree that many (all?) of these tests are irrelevant to someone packaging your distribution.

Chris Dolan [EMAIL PROTECTED] wrote:
> * copyright.t - Ensures that there is a "Copyright " . ([localtime]->[5]+1900) somewhere in every .pm file. Will break 11 months from now.
> * distribution.t - Relies on Test::Distribution, which is not in my prereq list
> * perlcritic.t - Runs Test::Perl::Critic on all .pm files. Will fail without my specific $HOME/.perlcriticrc

Test::Perl::Critic allows you to configure Perl::Critic by passing the path to a .perlcriticrc file in the "use" line. For example:

    use Test::Perl::Critic (-profile => 't/perlcriticrc');
    all_critic_ok();

Probably you'd like this to keep in sync with any changes to your .perlcriticrc, which may require some changes to your Makefile.PL/Build.PL. And it will fail with future, more exhaustive versions of P::C. It would be nice if there was some way to indicate which version of P::C was expected to pass, and TODO any newly looked-for problems.

> * spelling.t - Runs Test::Spelling. Will fail without my custom dictionary

There's Test::Spelling::add_stopwords and "=for stopwords". These should be used as much as possible instead of adding to your custom dictionary. But at least your custom dictionary is recreatable (because all the needed words are included in your pod :) should someone else pick up maintenance of your distribution.
> * versionsync.t - Checks that the $VERSION is the same in all bin/* and *.pm files. This test is pointless after release, since it's already been tested before release.
> * pod.t - Checks POD validity. This test is pointless after release, since it's already been tested before release.
> * pod-coverage.t - Checks POD completeness. This test is pointless after release, since it's already been tested before release.
> and one I have not yet employed:
> * coverage.t - Ensures that Devel::Cover totals are higher than some threshold

Wow, you really *are* exhaustive. How do you find the time to write any code? ;-)

> Now that I understand exactly what you mean by author tests, here's what I think: Whatever convention you're using, if these tests are only going to work on your system, then they definitely shouldn't be in t. And since there's absolutely no value in these types of tests for anybody else except the module author, there's no real point in having a convention, just stick 'em anywhere that the make/buildfiles will ignore them.

I disagree; presumably anyone running disttest would want these tests run, so they belong in t and named .t, with an appropriate skip_all at the top. Does anyone have a problem with disttest setting PERL_TEST_EXHAUSTIVE? Or a suggestion for a better name? Chris, how are you currently set up to run these tests only when preparing a release?
Re: YAML and Makefile.PL (was various topics)
On Sun, Jan 29, 2006 at 12:04:22PM +0100, Tels wrote:
> Just witness Graph::Dependency, it will fail when there is no META.yml available, and what do you want me to do then? Parse Makefile.PLs?

The correct WTDI is to execute the Makefile.PL and parse the resulting Makefile, looking for the PREREQ_PM line.
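A sketch of the second half of that approach: pulling PREREQ_PM back out of a generated Makefile. The comment format shown is what ExtUtils::MakeMaker writes near the top of its Makefiles, but treat the regex as an approximation rather than an official API; the sample Makefile line below is fabricated for demonstration:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Extract the prereq hash recorded in a Makefile produced by
# "perl Makefile.PL".
sub prereqs_from_makefile {
    my ($makefile) = @_;
    open my $fh, '<', $makefile or die "open $makefile: $!";
    while (my $line = <$fh>) {
        next unless $line =~ /^#\s*PREREQ_PM\s*=>\s*\{\s*(.*?)\s*\}/;
        my $pairs = $1;   # copy: the inner match below clobbers $1
        my %prereq;
        while ( $pairs =~ /([\w:]+)\s*=>\s*q\[([^\]]*)\]/g ) {
            $prereq{$1} = $2;
        }
        return \%prereq;
    }
    return undef;
}

# Demonstrate on a fabricated Makefile fragment:
my ($fh, $path) = tempfile();
print {$fh} "#   PREREQ_PM => { Test::More=>q[0.47], Storable=>q[2.04] }\n";
close $fh;

my $prereq = prereqs_from_makefile($path);
print "$_ => $prereq->{$_}\n" for sort keys %$prereq;
```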
Re: Fwd: CPAN Upload: D/DO/DOMM/Module-CPANTS-Analyse-0.5.tar.gz
On Fri, Jan 27, 2006 at 03:42:58PM +0100, Tels wrote:
> On Thursday 26 January 2006 15:26, Thomas Klausner wrote:
>> I just uploaded Module::CPANTS::Analyse to CPAN. MCA contains most of the previous Kwalitee indicators and some code to check if one distribution tarball conforms to those indicators. It also includes a script called cpants_lint.pl which is basically a frontend to the module.
> Very cool. However, I am _really really_ starting to wonder whether we need a Kwalitee rating based on *excessive usage of prerequisites*.

How about two; one, a point for not having lots of prerequisites, and another, a point for having lots of prerequisites. Where lots is defined as the same number in both cases.
Re: Test Script Best-Practices
On Tue, Jan 24, 2006 at 10:25:44PM -0500, David Golden wrote:
> Jeffrey Thalhammer wrote:
>> * Should a test script have a shebang? What should it be? Any flags on that? I often see -t in a shebang.
> One downside of the shebang, though, is that it's not particularly portable. As chromatic said, with prove it's not really necessary. (prove -t)

-T or -t in a shebang tells Test::Harness or perl's TEST that perl needs to be run with that switch for the tests to test what they are supposed to test.
Re: SKIP blocks and the debugger
On Mon, Jan 09, 2006 at 07:06:08PM +1100, Kirrily Robert wrote:
> Does anyone else find that SKIP: { } blocks bugger up the debugger? I'll be happily bouncing on the n key to get to round about the vicinity of the failing test, and then blam, it sees a skipped test and just fast-forwards to the end.

Yup. I assume n just puts a breakpoint on the next statement, but skip() bypasses said statement. Use "c nnn" where nnn is the line number of the statement after the skip block.
Re: Test::Exception and XS code
On Mon, Aug 15, 2005 at 05:58:23PM +0200, Sébastien Aperghis-Tramoni wrote:
>     use strict;
>     use Test::More tests => 2;
>     use Test::Exception;
>     use Net::Pcap;
>
>     throws_ok( sub { Net::Pcap::lookupdev() },
>         '/^Usage: Net::Pcap::lookupdev\(err\)/',
>         "calling lookupdev() with no argument" );
>
>     throws_ok { Net::Pcap::lookupdev() }
>         '/^Usage: Net::Pcap::lookupdev\(err\)/',
>         "calling lookupdev() with no argument";
>
> Now, if I move the use Test::Exception inside an eval-string and execute the new script:
>
>     $ perl -W exception.pl
>     1..2
>     ok 1 - calling lookupdev() with no argument
>     Usage: Net::Pcap::lookupdev(err) at exception.pl line 13.
>     # Looks like you planned 2 tests but only ran 1.
>     # Looks like your test died just after 1.
>
> Aha! The first test, which uses the normal form of throws_ok(), passes, but the second one, which uses the grep-like form, fails.

The throws_ok { ... } syntax only works because the throws_ok sub exists and has a prototype that specifies a subref is expected; if you don't load Test::Exception by the time the throws_ok call is compiled, it is parsed as an indirect object call of the throws_ok method on the object or class returned by the {} block:

    $ perl -MO=Deparse,-p -we'throws_ok { Net::Pcap::lookupdev() } "/^Usage: Net::Pcap::lookupdev\(err\)/", "calling lookupdev() with no argument"'
    BEGIN { $^W = 1; }
    do {
        Net::Pcap::lookupdev()
    }->throws_ok('/^Usage: Net::Pcap::lookupdev(err)/', 'calling lookupdev() with no argument');
    -e syntax OK

which is perfectly valid perl, but unlikely to do what you want.
Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)
On Mon, Jul 04, 2005 at 02:19:16PM +0100, Paul Marquess wrote:
> Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer it doesn't get used in anger until it has bedded-in. If I give the module a version number like 2.000_00, will the CPAN shell ignore it?

This is often done incorrectly. See L<perlmodstyle/"Version numbering"> for the correct WTDI:

    $VERSION = '2.000_00';    # let EU::MM and co. see the _
    $XS_VERSION = $VERSION;   # XS_VERSION has to be an actual string
    $VERSION = eval $VERSION; # but VERSION has to be a number

Just doing $VERSION = 2.000_00 doesn't get the _ into the actual distribution version, and just doing $VERSION = '2.000_00' makes use Compress::Zlib 1.0; give a warning (because it does 1.0 <= 2.000_00 internally, and _ doesn't work in numified strings). But if you are doing a beta leading up to a 2.000 release, it should be numbered below 2.000, e.g. 1.990_01. Nothing wrong with a 2.000_01 beta in preparation for a release 2.010 or whatever, though.
Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]
On Sat, Jul 02, 2005 at 12:24:12AM -0700, chromatic wrote:
> On Sat, 2005-07-02 at 08:55 +0200, demerphq wrote:
>> The entire basis of computer science is based around the idea that if you do the same operation to two items that are the same the end result is the same. Without this there is no predictability. No program could ever be expected to run the same way twice.
> Throw in some sort of external state and you have exactly that. Perhaps the name of is_deeply() is misleading, but I don't understand why the argument about whether container identity should matter to the function is so important. I expect the following test to pass:
>
>     my $a = \1;
>     my $b = \1;
>     is_deeply( $a, $b );
>
> Should it not? The values are the same, as are the types of the containers, but the containers are different.

It should, but is_deeply( [\1, \1], [(\1) x 2] ) should fail.
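To make the distinction concrete, here is a sketch of how current Test::More behaves on the two cases: both pass today, and the thread's dispute is whether the second one *should*:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Distinct references to equal values: everyone agrees this should pass.
is_deeply( \1, \1, 'two refs to equal scalars compare equal' );

# Same leaf values, different sharing pattern: the first array holds two
# distinct references, the second holds the same reference twice.
# Current is_deeply passes this too; the thread argues it should not.
my $shared = \1;
is_deeply( [ \1, \1 ], [ ($shared) x 2 ], 'sharing pattern is ignored today' );
```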
Re: Fwd: [demerphq@gmail.com: Re: fixing is_deeply]
On Fri, Jul 01, 2005 at 07:11:26AM +0000, Smylers wrote:
> To me 'deeply' implies recursing as deep as the data structure goes, not that there's a special rule for the top-level that's treated differently from the others.

Nobody is saying is_deeply shouldn't be deep. If I understand correctly, is_deeply($a, $b) on a deep structure can still return ok if $a and $b have no references in common; it's not the specific value of the references that needs to match, it's the patterns of which parts within each of $a and $b have duplicate references, and whether those patterns match between the two.

Another way of looking at it is that Schwern is saying is_deeply returns ok if the leaf values and overall structures match so long as no changes are made, while Yves is saying is_deeply should return not ok if the same change made to both ends up with different leaf values or overall structure.
Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]
On Thu, Jun 30, 2005 at 05:09:39PM -0700, Michael G Schwern wrote:
> So, I conclude that is_deeply()'s behavior is ok and something like Test::Deep should be enhanced with an option to deal with this problem.

So, am I correct in understanding that is_deeply will only notice value differences if all the reference types are the same? And Yves's proposal is that patterns of which references are dups must also match? And that you feel an option to do it the latter way would belong in Test::Deep?
Re: Kwalitee and has_test_*
On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
> Well, kwalitee != quality. Currently, kwalitee basically only says how well-formed a distribution is. For my definition of well-formed :-) But I'm always open to suggestions etc.

Since you ask... An important part of kwalitee to me is that Makefile.PL / Build.PL run successfully with stdin redirected to /dev/null (that is, that any user interaction be optional). Another is that a bug-reporting address or mechanism (e.g. clp.modules or cpan RT) be listed in the README or pod.
Re: Why a scoreboard?
On Sat, Apr 02, 2005 at 10:43:57AM -0500, Ricardo SIGNES wrote:
> * David A. Golden [EMAIL PROTECTED] [2005-04-02T05:27:18]
>> Andy Lester wrote:
>>> Why is there a scoreboard? Why do we care about rankings? Why is it necessary to compare one measure to another? What purpose is being served?
>> Why is there XP on perlmonks? Or Karma on Slashdot? Or for that matter, why do we grade students' exams (particularly, why do we often grade them on a curve)?
> This is not a good analogy to Kwalitee, because XP and Karma are primarily awarded by humans who can make judgements based on reason.

I think you are thinking of Reputation (which a node has), not XP (which a user has). In point of fact, XP can be awarded for logging on daily, for voting on other people's nodes, or even for cash contributions, not just for others' votes on one's own nodes.
Re: Test::META
On Mon, Mar 28, 2005 at 08:35:34PM -0800, Michael G Schwern wrote:
>> Whether things that are required for *testing* belong in build_requires really depends on whether you view testing as an integral part of the build process. This is something that is likely to depend on the *builder*, not the module author, which is, in my mind, the only argument (and a good one) for a separate test_requires. The distinction between build_recommends and a possible test_recommends is more ambiguous.
> I agree with this, however I don't really see the ambiguity about test_recommends.

"Ambiguous" was the wrong word to use, sorry. I just meant that the argument for separating out test_requires is a lot stronger than for test_recommends; I'd like to see them both, but I had the impression public opinion was weighted against them, so I was trying to argue for the more important one.
Re: Test::META
On Mon, Mar 28, 2005 at 03:10:52PM -0800, Michael G Schwern wrote:
> On Mon, Mar 28, 2005 at 08:42:50AM -0500, Christopher H. Laco wrote:
>> That's another gripe of mine about M::B and create_makefile_pl. It puts the requires AND build_requires in the PREREQ_PM in the Makefile.PL, which I won't want; nor do I think it right for everyone.
> There is no build_requires or recommends equivalent in MakeMaker, nor will there be,

Too bad. Seems to me it would make sense to have MakeMaker support adding the tags to META.yml. Don't see what other MakeMaker change would be needed. Maybe that presupposes something else that I see as very desirable: META.yml being (re)generated by Makefile.PL at build time, and CPAN* looking in META.yml instead of Makefile for prereqs.

> so putting it into PREREQ_PM is the best thing to do. That's what every MakeMaker-based module on CPAN does after all. It's better than just dropping it and having the build fail.
>> Take Test::More for example. It's usually a build_requires and the other Test* things like Test::Strict, Apache::Test, etc are in recommends. Test probably won't run without Test::More, but skipping a few subtests based on recommends is ok. But I don't think build_requires should be a PREREQ_PM requirement at all.
> *scratch head* but if you don't have the modules necessary to BUILD the module (as opposed to those necessary to run it)... how are you going to build it?

The distinction between PREREQ_PM and build_requires only becomes meaningful for binary (that is to say, pre-built) distributions. Such distributions should be able to rely on PREREQ_PM to indicate what other (binary) distributions are required.

Whether things that are required for *testing* belong in build_requires really depends on whether you view testing as an integral part of the build process. This is something that is likely to depend on the *builder*, not the module author, which is, in my mind, the only argument (and a good one) for a separate test_requires.

The distinction between build_recommends and a possible test_recommends is more ambiguous.
Re: testing non-modules
On Sun, Mar 06, 2005 at 10:32:26AM -0800, Michael G Schwern wrote:
>     #!/usr/bin/perl -w
>
>     use Getopt::Long;
>
>     my %Opts;
>     GetOptions(\%Opts, "test");
>
>     sub main {
>         return if $Opts{test};
>         ...the program using the functions below...
>     }
>
>     main();
>
>     sub some_function { ... }
>     sub some_other_function { ... }

I'd make that just:

    sub main {
        ...the program using the functions below...
    }

    main() unless caller;

    sub some_function { ... }
    sub some_other_function { ... }
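A sketch of why "main() unless caller" makes a script testable: when a test loads the file via do or require, caller() is true at the file's top level, so main() is skipped but the script's subs remain callable. The script body and sub names here are hypothetical:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# A hypothetical script using the idiom:
my $script = <<'END';
sub main { print "running main\n" }
main() unless caller;
sub add_nums { my $t = 0; $t += $_ for @_; $t }
1;
END

my ($fh, $path) = tempfile( SUFFIX => '.pl' );
print {$fh} $script;
close $fh;

# Run from the shell, the script would execute main(). Loaded from a
# test like this, caller() is true, main() is skipped, and the
# functions are available for unit testing:
do $path or die $@ || $!;
print add_nums(2, 3), "\n";
```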