Re: Comment about BAIL_OUT
# from Ovid
# on Thursday 04 January 2007 01:34 pm:

>However, if you use the '-s' switch to shuffle your tests and bailout
>is not first, then some tests will run until the BAIL_OUT is hit.
>This seems to violate the principle that tests should be able to run
>in any order without dependencies.

Is it possible to shuffle all but the first tests?

The only reason I use BAIL_OUT is to give better diagnostics about
something horribly wrong with the environment or some piece of code
that won't compile.

Example: http://svn.dotreader.com/svn/dotreader/trunk/t/

01-load.t ensures that all of the modules compile -- that I haven't
forgotten a semicolon or missed a dependency.  It uses checksums to
optimize this (which, yes, could be wrong, but I haven't seen it
happen.)  01-db_timestamp.t is similar, but doesn't matter right now
and could probably be handled by a build dependency when I get back to
it.  Pretend you didn't see 03-prereq.t.

Other uses might include "you have no display" for GUI tests, "no
network connection", "wrong operating system", "it is not 1965", or
other diagnostics where you need to tell the user that something is
very wrong without it getting buried (or scrolled away) by a pile of
error messages.

I do *prefer* the _gui tests to run first, but that's only a matter of
learning about horribly wrong things earlier.  Any of the tests in the
directories could be run in any order, skipped around across
directories, etc.

In this case, the tests in the top directory are special.  I suppose
you would call them sanity tests.  If they don't pass, you might as
well go home.  I suppose BAIL_OUT could be replaced with something
like "if anything in 01-load.t fails, quit testing."

I could imagine similar naming schemes in use with non-recursive t
directories (probably with numbers (or low numbers) on the sanity
tests and the rest un-numbered or maybe numbered 100+.)  It seems
unreasonable to call BAIL_OUT from anything but a sanity test, but
I've been wrong before.

Even in shuffle mode, the sanity tests should still run first, but
what is the best way to explain that to the harness?

--Eric
--
Issues of control, repair, improvement, cost, or just plain
understandability all come down strongly in favor of open source
solutions to complex problems of any sort. --Robert G. Brown
--- http://scratchcomputing.com ---
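
The sanity-test idea above might look something like this (a sketch;
the module names are hypothetical stand-ins for your own):

```perl
#!/usr/bin/perl
# t/01-load.t -- a sanity test: if the modules won't even compile,
# there is no point in running the rest of the suite.
use strict;
use warnings;
use Test::More;

# hypothetical module list -- substitute your own
my @modules = qw(My::App My::App::Config My::App::DB);

plan tests => scalar @modules;

for my $mod (@modules) {
    use_ok($mod) or BAIL_OUT("$mod does not compile -- go home");
}
```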
Re: Comment about BAIL_OUT
# from Andy Lester
# on Thursday 04 January 2007 06:25 pm:

>On Jan 4, 2007, at 8:17 PM, Eric Wilhelm wrote:
>> Is it possible to shuffle all but the first tests?
>
>No.  You either have tests that are ordered, or you don't.

Stated as if it were some sort of immutable law of the universe!

My point, and my usage of BAIL_OUT(), and in fact the only reason that
any of my tests would require some order is basically what others here
have also described.

# from Greg Sabino Mullane
# on Thursday 04 January 2007 07:39 pm:

>[1] I've never had a need for random tests myself.  The only reason I
>break mine apart is to isolate testing various sub-systems, but I
>almost always end up having some dependencies put into an early "00"
>file.  I also tend to have a final "99" cleanup file.  While I could
>in theory have each file be independent, in practice it's a lot of
>duplicated code and a lot of time overhead, so it's either the 00-99
>or (as I sometimes have done) one giant testing file.

It sounds like the 00 & 99 .t files are not really tests at all, but
rather just scripts for pre and post.  But, since the harness only
runs '*.t' files, we have to pretend the setup, tear-down, and sanity
checks are tests, right?

I suppose this sort of thing could (and maybe should) be pushed up
into the build system, but it seems that it has historically been
simpler to just make it a .t file.

--Eric
--
"Everything should be made as simple as possible, but no simpler."
--Albert Einstein
--- http://scratchcomputing.com ---
Re: Desired test output?
# from Ovid
# on Friday 05 January 2007 01:50 am:

>TAPx::Parser collects far more information than Test::Harness, so if
>there's more stuff you'd like to see, that's fine, too.

You could dump it all into some kind of data (yaml?) file, then
execute $ENV{TAP_RESULTS_VIEWER} or something?

TAP_RESULTS_VIEWER could then dump some ascii art, run/signal a gui
program, do an html convert + browser launch, etc.  Console output at
80 chars wide is inherently limited.  Even starting some ncurses
program or emacs mode would be more powerful for those who are stuck
developing on a tty (and of course, said program could merely dump the
report du jour on the terminal (or stock tickers if things are going
particularly badly.))

--Eric
--
"Everything goes wrong all at once." --Quantized Revision of Murphy's
Law
--- http://scratchcomputing.com ---
Module::ScanDeps
Hi all,

Anybody like testing and depend on PAR to work?  If so,
Module::ScanDeps *badly* needs your help.  There are currently only
two tests, one of which is conditional on Module::Pluggable, the other
of which doesn't provide much coverage.

What's the best way to test this sort of thing?  I could certainly run
some simple checks on core modules, but I envision myself quickly
getting into differences between platforms and perl versions.

It seems like the best way to get traction quickly is to throw some
real-world data at it and do some reasonable checks on the results.
E.g. we could easily generate a known set of required modules for each
module in the test set.  However, the "who used what" data is
currently incorrect in a few cases, so a full expect test would break
after a bugfix or two.

So, where to get a good set of perl program data to test against?
Bundle a static set of modules with it?  Create an optional "data
pack" to drive the testing?  I don't really want to write up a bunch
of Foo.pm and Bar.pm modules, particularly since what we're dealing
with is a rather large and feature-packed system that's gone through
quite a bit of fix-this-here-and-that-there.

Suggestions?  Volunteers?

Thanks,
Eric
--
Cult: A small, unpopular religion.  Religion: A large, popular cult.
--Unknown
--- http://scratchcomputing.com ---
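
One cheap starting point along the "simple checks on core modules"
line: feed scan_deps() a tiny script whose dependencies are known, and
check only for an obvious entry rather than the full (platform- and
version-varying) list.  A sketch:

```perl
use strict;
use warnings;
use Test::More tests => 1;
use File::Temp ();
use Module::ScanDeps qw(scan_deps);

# Write a tiny known script to scan, so the expected dependency set
# is under our control rather than the platform's.
my $tmp = File::Temp->new(SUFFIX => '.pl');
print {$tmp} "use File::Spec;\n";
close $tmp;

my $deps = scan_deps(files => [$tmp->filename], recurse => 0);

# Don't expect the exact dependency list -- just that the one module
# we used shows up in the result keys.
ok(
    (grep { m{File/Spec\.pm$} } keys %$deps),
    'File::Spec detected as a dependency'
);
```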
Re: Test::Harness 3.0
# from Andy Armstrong
# on Sunday 21 January 2007 09:21 am:

>> OR possibly even...
>>
>> /home/me/directory/ADAMK/Dist-Name-0.01.tar.gz/01_compile.t
>> /home/me/directory/ADAMK/Dist-Name-0.01.tar.gz/02_main.t
>
>That might be harder.  I guess whatever harness you use to set the
>environment variable could maybe take care of that?

Yes.  It should be expected to provide a single directory and not
apply any policy to that beyond mirroring the t/ directory structure.
If you need to run it fairly manually, but still stay in your dump
framework, you could do:

  PERL_TEST_HARNESS_DUMP_TAP="$(test_dir_for_this_dist)"

If that isn't enough (which may be the case in Adam's cpan injection),
I suppose you could do "if the env var is an executable, run it and
capture the output"?

Note, for recursive test directories, it should create
t/foo/foo_basic.t, t/bar/bar_basic.t

--Eric
--
To succeed in the world, it is not enough to be stupid, you must also
be well-mannered. --Voltaire
--- http://scratchcomputing.com ---
Re: Test::Harness 3.0
# from Andy Armstrong
# on Sunday 21 January 2007 11:37 am:

>On 21 Jan 2007, at 19:16, Eric Wilhelm wrote:
>> PERL_TEST_HARNESS_DUMP_TAP="$(test_dir_for_this_dist)"
>>
>> If that isn't enough (which may be the case in Adam's cpan
>> injection), I suppose you could do "if the env var is an
>> executable, run it and capture the output"?
>>
>> Note, for recursive test directories, it should create
>> t/foo/foo_basic.t, t/bar/bar_basic.t
>
>I've just committed a change that does just that :)

"Just that" being just the dir structure, or also the env var
execute-if-not-dir thing?

If you're talking about r53 at http://svn.hexten.net/tapx, I guess
Adam is going to have to wait.

--Eric
--
"Insert random misquote here"
--- http://scratchcomputing.com ---
Re: Test::Harness 3.0
# from Smylers
# on Sunday 21 January 2007 11:50 pm:

>Eric Wilhelm writes:
>> If that isn't enough, I suppose you could do "if the env var is an
>> executable, run it and capture the output"?
>
>Nice -- so that if you manage to trick somebody into setting that
>environment variable you can get them to run any code you want the
>next time they install a Cpan module that doesn't explicitly set this
>variable?

Sure.  That, and $EDITOR.

I don't think defining an environment variable to point to an
executable is a huge issue.  If one is running as root and can't
control one's environment, one should shut down the computer and
replace the disk (yes, that goes for windows too ;-)

--Eric
--
We who cut mere stones must always be envisioning cathedrals.
--Quarry worker's creed
--- http://scratchcomputing.com ---
Re: Using pip to get testing done better and faster...
# from Adam Kennedy
# on Tuesday 09 January 2007 03:05 am:

>Since I moved to SVN, one of the things I've been doing is committing
>my release tarballs into a /releases/ directory.
>
>One side-effect of this is that even before I've uploaded it to CPAN,
>every release already has a URI.

I was doing that for a while, until I had to rebuild my svn repository
due to disk corruption.  Now, I just scp them to a directory on the
webserver to get the uri thing, and I use a hacked version of
cpan-upload[1].  I figure if I can't do ./Build dist from the tag, it
was a bad release.  Anyway, this cpan-upload will work either way as
long as your tarball is on http.

[1] http://scratchcomputing.com/svn/Module-Subversion-Juggle/trunk/bin/cpan-upload

--Eric
--
You can't whack a chisel without a big wooden mallet.
--- http://scratchcomputing.com ---
Re: Bad test functions in Test::Exception
# from Nadim Khemir
# on Tuesday 30 January 2007 09:17 am:

># all Test::Exception subroutines are guaranteed to preserve the
># state of $@ so you can do things like this after throws_ok and
># dies_ok
>like $@, 'what the stringified exception should look like';
>
>This wouldn't be needed if dies_ok didn't exist.

I think the arguments in favor of dies_ok are good.  But, I think it
would be better to return the $@ rather than trying to preserve it.

  my $error = dies_ok { $foo->method1 } 'expecting to die';
  isa_ok($error, 'Whatever');

Alternatively, at least a $Test::Exception::Error variable (and/or an
imported "sub last_error () {$Test::Exception::Error}") would probably
be more robust than preserving $@, though I don't see why returning
the value would break compatibility (well, maybe in some cases of
overloaded exception objects?)

--Eric
--
perl -e 'srand; print join(" ",sort({rand() < 0.5}
  qw(sometimes it is important to be consistent)));'
--- http://scratchcomputing.com ---
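
A sketch of the return-the-error suggestion (a hypothetical
dies_and_returns helper, not part of Test::Exception):

```perl
use strict;
use warnings;
use Test::Builder;

# Hypothetical helper: like dies_ok(), but returns whatever die()
# threw instead of relying on $@ surviving until the caller looks.
sub dies_and_returns (&;$) {
    my ($code, $name) = @_;
    my $tb = Test::Builder->new;

    my $error;
    my $lived = eval { $code->(); 1 };
    $error = $@ unless $lived;

    $tb->ok(!$lived, $name);
    return $error;       # string or exception object, as thrown
}
```

Usage would then be, e.g.:

  my $error = dies_and_returns { $foo->method1 } 'expecting to die';
  isa_ok($error, 'Whatever');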
Parallelizing TAPx::Harness?
Hi all,

I was just thinking that since my tests are nicely orthogonal and yet
there's very little cpu utilization during testing, I could probably
get them done about 20x faster if they were parallelized.

Has anyone looked at doing this with TAPx::Harness or otherwise?  Is
there a big underlying assumption that I'm missing which would prevent
tests from being run concurrently in different processes (or on
different machines?)

Just briefly glancing at the code in svn, it looks like overriding
aggregate_tests would do the trick, except all I would really need to
rewrite would be the inner foreach loop.  What troubles me is the few
lexicals that would then have to be passed in, as those appear to be
mostly for formatting purposes.

Perhaps pre-figuring the formatting ("$name$periods") and then calling
an overridable test_loop method with a list of array refs and the
$aggregate object would better lend itself to subclassing?

  $self->test_loop($aggregate, [[$label, $test], ...]);

There's also the bail-out issue, but maybe die()ing with an error
object (or something) rather than "exit 1" would allow the parent to
see that one child is waving the white flag and call off the rest of
them asap.

Thanks,
Eric
--
software: a hypothetical exercise which happens to compile.
--- http://scratchcomputing.com ---
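
A minimal sketch of the fork-per-test idea (no TAPx internals -- just
processes and exit codes; a real harness would also parse the TAP each
child emits):

```perl
#!/usr/bin/perl
# Minimal fork-per-test runner: run each .t file in its own child
# process and report the exit statuses as they come in.
use strict;
use warnings;

my @tests = @ARGV;   # e.g. forked_tests.pl t/*.t
my %kids;

for my $test (@tests) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                       # child: run one test file
        exec($^X, $test) or die "exec $test: $!";
    }
    $kids{$pid} = $test;                   # parent: remember the child
}

while (%kids) {
    my $pid = wait;                        # reap in completion order
    last if $pid < 0;
    my $test = delete $kids{$pid};
    printf "%s %s\n", ($? == 0 ? 'ok' : 'NOT ok'), $test;
}
```

Note this demonstrates the "no idea which test finishes next" property
directly: results arrive in completion order, not launch order.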
Re: Parallelizing TAPx::Harness?
# from Nicholas Clark
# on Monday 05 February 2007 03:24 am:

>which was sufficient to give a lot of flexibility in how tests were
>run (and discover that the core is very unoriginal in its naming of
>temporary files)

# from Adam Kennedy
# on Monday 05 February 2007 03:41 am:

>Each test is still independent (they randomize just fine) but if you
>ran two at the same time they could wallop each other's data.

True, and I too have already discovered this in my own tests.  I
imagine sprinkling some $$'s in the temp filename declarations would
do the trick.

If anybody wants to play with it, attached is a stupid snippet that
just dumps output to files in /tmp/t/.  Of course, sorting tests by
subdir might be better than randomly, etc.  The tests run in 1/2 to
1/3 the time on my dual CPU system, so it's not just adding CPUs that
changes things (I think a lot of the delay is in starting each test.)

--Eric
--
If the above message is encrypted and you have lost your pgp key,
please send a self-addressed, stamped lead box to the address below.
--- http://scratchcomputing.com ---

forked_tests.pl
Description: Perl program
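
The $$-in-the-filename trick is a one-liner (a sketch; the filename is
arbitrary, and File::Temp is the more robust option):

```perl
use strict;
use warnings;
use File::Spec;

# Per-process temp filename: two concurrent test processes no longer
# wallop each other's data.
my $tmpfile = File::Spec->catfile(File::Spec->tmpdir, "mytest.$$.out");

# Or, more robustly, let File::Temp pick a unique name and clean up:
use File::Temp ();
my $fh = File::Temp->new(TEMPLATE => 'mytest.XXXXXX', TMPDIR => 1);
```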
Re: Parallelizing TAPx::Harness?
# from Andy Armstrong
# on Monday 05 February 2007 04:20 am:

>I can't remember whether it's mod_perl or CGI.pm that launches Apache
>a few times during testing but I imagine there might be a problem
>with multiple Apaches trying to bind to the same port in that case
>too.  Ditto anything that tests against a live database.
>
>Unfortunately that disqualifies a whole category of I/O bound tests
>that would otherwise benefit the most.

Well, it disqualifies them from running on the same
box/chroot/whatever.  Consider a scheme that uses FAM+rsync to keep
another box in lock-step with your working copy and prepends ssh to
the subprocess -- we still just read the output and exit code from the
child.  Distributed tests will need the same slave/master scheme
support as in the concurrent single-box situation.

--Eric
--
I arise in the morning torn between a desire to improve the world and
a desire to enjoy the world.  This makes it hard to plan the day.
--E.B. White
--- http://scratchcomputing.com ---
Re: Parallelizing TAPx::Harness?
# from Andy Armstrong
# on Monday 05 February 2007 10:27 am:

>I guess there are already frameworks available that'd help with
>managing that kind of distributed setup?  The most similar thing I
>have any experience of is distcc[1].
>
>As a matter of interest I wonder if anyone's worked out whether
>Perl's build process plays nicely with distcc?  A nice test case for
>distributed tests might be to start with a working distributed build
>process and then find a way of making the post-build tests
>distributed too.

From my reading, distcc only ships a single pre-processed file across
the wire and gets a binary back from the cc on the other end.  That's
a very simple and easy-to-distribute scheme, and I think the claim is
that it will work in any build as a drop-in replacement for cc.

A similar scheme might work instead of the rsync (or nfs?) full-mirror
that I suggested, but it would basically mean shipping the entire
dependency tree across the wire, plus whatever arbitrary data might be
needed.  I think this means it is best to set up the delegate boxen
beforehand.

Either way, I'm not sure it really needs a different test execution
and reporting scheme than a locally parallelized build.

--Eric
--
But as soon as you hear the Doppler shift dropping in pitch, you know
that they're probably going to miss your house, because if they were
on a collision course with your house, the pitch would stay the same
until impact.  As I said, that one's subtle. --Larry Wall
--- http://scratchcomputing.com ---
[PATCH] dereference bug in TAPx::Harness exec
It seems that exec accumulates arguments, thus running the first test
over and over with more and more arguments.

  http://scratchcomputing.com/tmp/tapx_exec.patch

The runtests utility was also passing a string from the '-e' switch.
The split(/ /, ...) is a little naive, but demonstrates the issue.

--Eric
--
Minimum wage help gives you minimum service. --David Schomer
--- http://scratchcomputing.com ---
Re: Parallelizing TAPx::Harness?
# from Andy Armstrong
# on Monday 05 February 2007 11:22 am:

>To run tests locally you need to
>
>* launch the tests in // and gather their output
>* display the test progress / results differently
>* make it possible to attach some metadata to tests
>  that describes such things as which tests can safely
>  be run concurrently with which other tests.

Yes, except we can leave this last one off until later.  Maybe never.
Here I'm guessing that a small change to the tests to make them all
"concurrent-safe" is going to be easier than trying to keep tabs on
which ones are or not and then writing that into a configuration
somewhere.  Let's call it YMMV and see how often it really matters --
i.e. if you can't take the trouble to make your tests concurrent-safe,
you don't get the fun of parallelizing them.

>Distributing tests among multiple boxes requires all those things but
>potentially there's a lot more marshalling to be done to create a
>sane test environment on each box.  It's a much bigger problem I
>think.

Quite, but not really one that needs to be solved by the test harness.
I'm thinking the harness only needs to use the parser's exec attribute
to ssh to another box, cd, and run the test.  If everything is in
place beforehand, that should be the only difference from a locally
parallel run.

I'm thinking that any amount of marshalling that gets built into the
harness is going to end up being wrong for somebody's situation, so it
is better to leave it out entirely, but maybe provide a setup hook if
exec isn't enough.  Hey, couldn't the program in 'exec' even decide
whom to contact and just do it, leaving even the ssh stuff out of the
harness?

--Eric
--
The reasonable man adapts himself to the world; the unreasonable man
persists in trying to adapt the world to himself.  Therefore all
progress depends on the unreasonable man. --George Bernard Shaw
--- http://scratchcomputing.com ---
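
The ssh-via-exec idea needs nothing more than a small wrapper (a
sketch -- the host name and remote directory are hypothetical, and the
remote box is assumed to be marshalled beforehand):

```perl
#!/usr/bin/perl
# Hypothetical exec wrapper for the harness: it hands us a test file
# name, we run it on another (already set up) box, and the TAP comes
# back on stdout just as in a locally parallel run.
use strict;
use warnings;

my ($test) = @ARGV;
my $host   = 'testbox';          # assumption: reachable via ssh keys
my $dir    = '/home/me/build';   # assumption: working copy mirrored

exec('ssh', $host, "cd $dir && perl $test")
    or die "exec ssh: $!";
```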
Re: Parallelizing TAPx::Harness?
# from Nicholas Clark
# on Monday 05 February 2007 03:24 am:

>Important differences were
>
>1: you only print out the entire test results as an atomic line when
>   it has finished
>2: You have no idea which test is going to finish next
>3: You can't print running progress on tests

Ah, there's the rub.  I was planning to simply fork before hitting
_runtest(), then serialize the parser object[*] from the slave to
collect it in the aggregator in the master.  I guess I figured that
legible progress reporting could go hang :-D

[*] Though Storable::freeze($parser) fails because of globs.  But,
YAML::Syck is perfectly happy to dump it.  We shouldn't need the
_stream attribute after the parser is done, so maybe we can delete
that with $parser->freeze?

So, I'm thinking progress reporting comes from the harness subprocess
on STDOUT (in "$i/$n\n" form), and TAP just gets read by the parser as
usual (though one level deeper in subprocesses), then a stripped-down
parser object gets shipped back to the master for aggregation.

With all of that in place, I think we could figure out what to do
about outputting running progress from the master (starting with
simply which tests are complete.)

Summarizing, I think it would be very feasible as a subclass given
these refactorings and additions:

  o TAPx::Harness::test_loop() method as overridable inner part of
    aggregate_tests(), with some rejuggling of the $periods
    calculation
  o TAPx::Harness::_do_test(\%args) method as overridable inner part
    of _runtest(), allowing the subclass to not have to deal with
    args processing, spool, etc.
  o TAPx::Parser::freeze() -- disconnect _stream etc., serialize
  o TAPx::Parser::thaw() -- deserialize (no need to reconnect)

Sound workable?

Thanks,
Eric
--
Don't worry about what anybody else is going to do.  The best way to
predict the future is to invent it. --Alan Kay
--- http://scratchcomputing.com ---
Re: Parallelizing TAPx::Harness?
# from Andy Armstrong
# on Monday 05 February 2007 12:23 pm:

>We could just ship the raw TAP back along with the test summaries I
>guess.

Is that what the 'tap' field is for, or is that intended for input?
Maybe the definition of freeze/thaw should recycle that field?

--Eric
--
You can't whack a chisel without a big wooden mallet.
--- http://scratchcomputing.com ---
Re: Fixtures
# from Ovid
# on Tuesday 13 February 2007 01:16 am:

>--- Kirrily Robert <[EMAIL PROTECTED]> wrote:
>> Does anyone here understand "fixtures" as a testing concept, and
>> could they please explain it to me in a Perlish way?
>>
>> At least half of what I've heard described is what I usually
>> achieve with a t/data/ directory, and another half is what I'd do
>> by writing a specialized Test::Builder-based module.
>
>A test fixture establishes a "known state" of a system.  For example,
>when running with Test::Class, you can use "setup" and "teardown"
>methods which run before and after every test to load test data in a
>database, set up temp files, or do anything necessary to start in a
>known, stable state and later do transaction rollbacks, clean up temp
>files, and so on.

Good description, but I think "teardown" is (at least conceptually)
part of "setup".  How about, "a thing into which the item under test
gets inserted"?

The c2 wiki (http://c2.com/cgi/wiki?TestFixture) has it as analogous
to an electronics test fixture.  They should probably include "light
fixture" as a very simple metaphor.  If you're testing electric lamps
(non-engineers usually call them "light bulbs"), you screw them into a
test fixture, plug it in, and flip the switch.  This emulates the
production environment, but with known variables (input voltage,
surface reflectance, etc) so you can measure the luminosity and verify
that it is within spec.

So, the fixture gives you the surrounding environment (and/or related
inputs, overrides CORE::GLOBAL::time, and whatever), then your test
just involves one chunk of code and the direct inputs to it.  E.g. you
might have several fixtures, each of which creates an object with a
known set of attributes, then the tests call a method on that object
with various parameters.

--Eric
--
Speak softly and carry a big carrot.
--- http://scratchcomputing.com ---
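
In Perl terms, the lamp-in-a-fixture metaphor might look like this
(My::Lamp is a stand-in defined inline so the sketch runs; in real
life it would be the module under test):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Stand-in class for the sketch.
{
    package My::Lamp;
    sub new { my ($class, %args) = @_; bless {%args, on => 0}, $class }
    sub switch_on  { $_[0]{on} = 1 }
    sub luminosity { $_[0]{on} ? 800 * $_[0]{reflectance} + 200 : 0 }
}

# The fixture: a known environment (input voltage, reflectance), so
# each test only varies the direct inputs to the method under test.
sub lamp_fixture {
    return My::Lamp->new(voltage => 120, reflectance => 0.8);
}

my $lamp = lamp_fixture();
ok($lamp->switch_on, 'lamp turns on in the fixture');
cmp_ok($lamp->luminosity, '>=', 800, 'luminosity within spec');
```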
Re: r9130 - testall needs fancy harness bits
Apologies for the cross-post.  We should probably discuss it on
perl-qa, but I wanted to keep module-build informed.

So, it turns out that the implementation that we came up with can't be
so handy.  Originally, I had done this to just run files with
different extensions.  During the hackfest, we came up with the bright
idea that a test_type could be essentially virtual, and so we thought
dispatching to the action would be best.  The trouble is that
dispatching to the action means the test harness does one summary per
action.  That's not acceptable, so I just backed this off to go
strictly by testfiles again.

Question: Is it possible to *cleanly* do multi-run tests (multiple
execute()s, one summary()) with Test::Harness and still remain as
backwards compatible as Module::Build wants to be?

Other question: How difficult is it to do with TAPx::Harness?
Essentially, we're talking about 2 or more actions doing runtests(),
but I think M.B.ACTION_testall() can easily tell M.B.do_tests() to
just execute() (or aggregate() in the TAPx.H case.)

ATM, I'm just wondering how thick this is going to get.  If it's not
easy (meaning someone else will do it), I'm going to play the plugin
card, make that depend on TAPx, and leave it at that.

Note, the virtual tests are not exactly marginal given that 'testpod'
and 'testpodcoverage' are builtins (though they do both currently
suffer from the issue that they are not able to be integrated into the
summary as virtual test actions.)

# from ericwilhelm
# on Sunday 18 February 2007 05:52 pm:

>-  for my $action ('', grep { $_ ne 'all' } $self->get_test_types) {
>-    $self->_call_action( "test$action" );
>+  my @types;
>+  for my $action (grep { $_ ne 'all' } $self->get_test_types) {
>+    # XXX We can't just dispatch because we get multiple summaries
>+    # but we'll need to dispatch to support custom setup/teardown
>+    # in the action.  To support that, we'll need to call something
>+    # besides Harness::runtests() because we'll need to collect the
>+    # results in parts, then run the summary.
>+    push(@types, $action);
>+    #$self->_call_action( "test$action" );
>   }
>+  $self->generic_test(types => ['default', @types]);
> }

--Eric
--
Any sufficiently advanced incompetence is indistinguishable from
malice. --The Napoleon-Clarke Law
--- http://scratchcomputing.com ---
Re: using $SIG{__DIE__} correctly (if you must) (was: Object Identification Cold War and the return of autobox.pm)
# from Michael G Schwern
# on Monday 26 February 2007 01:50 pm:

>And then someone defined a $SIG{__DIE__} so now its
>C<<{ local $SIG{__DIE__}; eval { $obj->isa($class) } }>>

No.  If that $SIG{__DIE__} doesn't check $^S, then it's just
delete($SIG{__DIE__}) and you're back to

  eval {$obj->isa($class)}

and balance is restored.

--Eric
--
So malloc calls a timeout and starts rummaging around the free chain,
sorting things out, and merging adjacent small free blocks into larger
blocks.  This takes 3 1/2 days. --Joel Spolsky
--- http://scratchcomputing.com ---
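
The "check $^S" behavior in question is small (a sketch of a
well-behaved __DIE__ handler):

```perl
use strict;
use warnings;

# A __DIE__ handler that checks $^S: it does nothing while an eval is
# on the stack, so eval { ... } behaves as if the hook were not there.
$SIG{__DIE__} = sub {
    return if $^S;            # true inside eval -- stay out of the way
    my ($err) = @_;
    warn "fatal: $err";       # only decorate genuinely fatal deaths
};

# The handler stays silent here; the eval catches the die as usual.
my $ok = eval { die "caught\n"; 1 };
```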
Re: using $SIG{__DIE__} correctly (if you must)
# from Michael G Schwern
# on Monday 26 February 2007 03:29 pm:

>Eric Wilhelm wrote:
>> # from Michael G Schwern
>> # on Monday 26 February 2007 01:50 pm:
>>> And then someone defined a $SIG{__DIE__} so now its C<<{ local
>>> $SIG{__DIE__}; eval { $obj->isa($class) } }>>
>>
>> No.  If that $SIG{__DIE__} doesn't check $^S, then it's just
>> delete($SIG{__DIE__}) and you're back to eval {$obj->isa($class)}
>> and balance is restored.
>
>You don't want to delete someone else's $SIG{__DIE__}.

No, I do.  Why would anyone else's $SIG{__DIE__} be in my code?  Now,
maybe you're going to say that someone might use my module and be
upset because their broken $SIG{__DIE__} is broken.

>And how can you know if it checks $^S (most don't)?

Maybe some juggling of exit and die.  Hmm, sounds like a job for
chromatic.  Acme::OhNoYouDidn::t?  Or, you could just curry it into a
sub that does check $^S if you wanted to be safe and weren't concerned
about the call stack.  Or you could always just walk down the hall and
tell whoever wrote it to fix it.

>Or was that a round-about way to say "you should always check $^S in
>your $SIG{__DIE__}"

Yeah.  No, I don't actually delete it.  But if you're having problems,
delete() may well be the answer.

>which would be great but nobody does which brings me right back to
>"it shouldn't be so hard to do it right!"

Why doesn't anybody do it right?  Yes, the docs say that it was an
accident that $SIG{__DIE__} gets called from within an eval and "that
may be fixed in the future."  But, it's very clear in both perlvar and
perlfunc#die, so why bother with the eval {local $SIG{__DIE__}; ...}
mess?  Just cause broken code to break instead of working around it.

--Eric
--
Anyone who has the power to make you believe absurdities has the power
to make you commit injustices. --Voltaire
--- http://scratchcomputing.com ---
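
The "curry it into a sub that does check $^S" option amounts to this
(a sketch -- it wraps rather than deletes the existing handler):

```perl
use strict;
use warnings;

# Wrap an existing __DIE__ handler in one that checks $^S first, so
# the original hook only fires for genuinely fatal deaths and evals
# are left alone.
if (my $old_hook = $SIG{__DIE__}) {
    $SIG{__DIE__} = sub {
        return if $^S;       # inside an eval -- stay out of the way
        $old_hook->(@_);     # otherwise, defer to the original hook
    };
}
```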
Re: using $SIG{__DIE__} correctly (if you must)
# from Michael G Schwern
# on Monday 26 February 2007 05:53 pm:

>Put another way... be lax in what you accept, strict in what you
>output.

That's a different subject having more to do with piped text than code
(if anybody in this case is being strict about acceptance here, it's
perl) and even if it weren't, that philosophy can only go so far.
What we have here is more a case of walking on eggshells, doing
various preventative things "in case somebody $SIG{__DIE__}'d."  How
many cases can you account for?  And, is it worth the loss of an idiom
to try to sweep this flaw under the rug?

>Also this:
>  eval { delete $SIG{__DIE__}; $obj->isa($class) }
>is no shorter than this:
>  eval { local $SIG{__DIE__}; $obj->isa($class) }

To be clear (since I must not be funny enough), the delete() bit was a
joke.  This is much shorter than both:

  eval {$obj->isa($class)}

>> Or you could always just walk down the hall and tell whoever wrote
>> it to fix it.
>
>That's a long bloody hall you've got there for CPAN code.

I would hope to not find many modules on CPAN that install such a
thing.  It's global, so whoever wrote $0 gets to decide what to do
with it.  That's a pretty short hall in most sane situations.

>And how do you know the user didn't intend it to run even if its in
>an eval?

The bit about it "might be fixed later" implies that this intention
would eventually lead to disappointment anyway.

>> But, it's very clear in both perlvar and perlfunc#die, so why
>> bother with the eval {local $SIG{__DIE__}; ...} mess?  Just cause
>> broken code to break instead of working around it.
>
>BECAUSE WE ALL THROUGHLY STUDY THE DOCUMENTATION, RIGHT?!
>
>Yeah.

Thoroughly study?  It apologizes about the brokenness (in both places)
before it even explains how to use it!

>People do it wrong because its easier to do it that way.  And it
>usually works fine.  Most people don't even know about $^S.  Hell,
>even the designers didn't think of it as evidenced by the accidental
>feature.
>
>Doing it slightly wrong but usable is easier than doing it right.
>That's why nobody does it right.  Design failure.

Sure it's a design failure.  But, breaking broken code is easier than
accounting for ignorance, with the unfortunate side-effect that the
user learns something.  The ignorance goes away and balance is
restored.

To be clear, I totally agree that it is a design failure.  To cite a
seemingly completely unrelated issue: a buggy reimplementation of
require() that is a direct result of not understanding that a bad
$SIG{__DIE__} could be fixed to allow eval {require foo} (and possibly
not realizing that local $SIG{__DIE__} would be an option.)  So, the
code in question digs around in @INC to see if it can find a file.
Unfortunately, that solution breaks @INC subrefs.  So, now we're down
an idiom and a feature!  If the perpetrator of the die hook in
question had just been told "too bad", that mess wouldn't have
happened.

Yeah, it shouldn't suck that much and ruby should be faster and perl 6
should be out by now and python should just exit instead of suggesting
that maybe you meant Ctrl-D.

--Eric
--
The only thing that could save UNIX at this late date would be a new
$30 shareware version that runs on an unexpanded Commodore 64.
--Don Lancaster (1991)
--- http://scratchcomputing.com ---
Re: using $SIG{__DIE__} correctly (if you must)
# from Michael G Schwern
# on Monday 26 February 2007 09:20 pm:

>> breaking broken code is easier than accounting for ignorance, with
>> the unfortunate side-effect that the user learns something.  The
>> ignorance goes away and balance is restored.
>
>Again, you're assuming the "user" here to be the author of the
>naughty code in question.  Or that you can write code which knows
>what's naughty and what's intended.  I say that most of the time such
>vigilante coding will only bother 3rd parties who use your vigilante
>module in conjunction with someone else's code.

There's nothing vigilante about writing code that assumes other code
will behave properly.  If I were going to put something on CPAN that
messed with __DIE__ hooks, it would only be an audit module.  I'm
certainly not going to put delete($SIG{__DIE__}) at the beginning of
every module either (take a joke -- it saves typing.)  I will,
however, refuse to say "local $SIG{__DIE__}" inside of every eval just
because "*maybe* *somebody* did *something* wrong *somewhere*."  The
user has every right to shoot themselves in the foot however they see
fit.

>Your CPAN module is going to break other CPAN modules and the poor
>sap using them who didn't write any of it is going to have no idea
>why.

You're placing the blame in the wrong place.  Modules which rely on a
poorly-implemented $SIG{__DIE__} are going to break anyway.  I'm just
saying we should all leave the slack in the rope and not walk on
eggshells.  If the "poor sap" (though I tend to give Perl programmers
more credit than that) wants to send me an e-mail questioning why I
would be so bold as to use eval, I'll happily diagnose the problem and
send them in the right direction.

--Eric
--
"Insert random misquote here"
--- http://scratchcomputing.com ---
Re: Unit vs. use case/functional testing
# from Andrew Gianni # on Thursday 01 March 2007 08:42 am:
>However, our business rules have gotten complicated enough that we
> are no longer writing them that way explicitly in the code. In the
> last application we built, we put the rules in a database and the
> appropriate ones were pulled based on circumstances (using
> generalized code) and run. Now we're embarking on something
> different that allows us to essentially write our business rules
> declaratively,

Assuming that applying a rule to some input yields a "verdict", you may want to use data-driven testing and have the same users write data to drive tests of the rules. They would use a spreadsheet or some other means to create data which exercises the rules:

  input1, input2, input3, expected verdict

Try to keep it simple (possibly breaking sub-conditions into a different set of data or labels for a common group of inputs) so that anyone on the business team can audit it.

I'm assuming "expected verdict" is one (or more) of a finite number of answers that you get from your rules engine. (Possibly a method name, key in a dispatch table, or input to a function.) If this is true, then your application can be unit tested against each of the verdict's action-points. Changes to the rules engine shouldn't break the data-driven tests and thus shouldn't break the action points. Changes to the rules require changes to the expected verdicts, and may require changes to the action points, but at that point you should have good coverage.

> although it's taken care of by a module that we can assume is fully
> tested (details will be forthcoming at some point, methinks).

I would like to see that. Please keep us posted.

--Eric
-- "Unthinking respect for authority is the greatest enemy of truth." --Albert Einstein
--- http://scratchcomputing.com ---
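The spreadsheet rows could feed a test loop along these lines (a hypothetical sketch; the verdict() rules engine and the column names are made up for illustration):

```perl
use strict;
use warnings;

# Each row: inputs plus the expected verdict, as a business user
# might enter them in a spreadsheet export.
my @cases = (
    # amount, channel,  country, expected verdict
    [  100,  'retail',  'US',    'approve' ],
    [   -5,  'retail',  'US',    'reject'  ],
    [  100,  'online',  'XX',    'review'  ],
);

# Stand-in for the real rules engine: maps inputs to one of a
# finite set of verdicts.
sub verdict {
    my ($amount, $channel, $country) = @_;
    return 'reject' if $amount <= 0;
    return 'review' if $country eq 'XX';
    return 'approve';
}

my $n = 0;
for my $case (@cases) {
    my ($amount, $channel, $country, $want) = @$case;
    my $got = verdict($amount, $channel, $country);
    $n++;
    print(($got eq $want ? 'ok' : 'not ok'), " $n - expected $want\n");
}
```

Changing the rules engine internals leaves this data (and the action-point tests keyed on the verdicts) untouched, which is the decoupling argued for above.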
a safer way to use no_plan?
At the bottom of a test file:

  {my $finish = 1; END {$finish or die "\n unplanned exit"}};

Yeah, you have to remember to put it at the end of the file, but it may be easier than counting tests. Thoughts? Maybe an 'until_done' directive and a 'tests_done' function? Ways to check that there is no code after tests_done()?

--Eric
-- The opinions expressed in this e-mail were randomly generated by the computer and do not necessarily reflect the views of its owner. --Management
--- http://scratchcomputing.com ---
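The guard works because the END block closes over $finish; if the script dies before the bottom of the file, the flag is never set and END complains. The same flag-at-the-bottom principle can be simulated inline with eval (an illustration of the principle, not the END mechanism itself):

```perl
use strict;
use warnings;

# run_script() stands in for a test file: the last statement of the
# "file" sets the flag, so an early die leaves it unset.
sub run_script {
    my ($should_die) = @_;
    my $finish = 0;
    eval {
        # ... imagine a pile of ok() calls here ...
        die "boom\n" if $should_die;   # a simulated unplanned exit
        $finish = 1;                   # bottom of the "file"
    };
    return $finish ? "clean" : "unplanned exit";
}

print run_script(0), "\n";   # clean
print run_script(1), "\n";   # unplanned exit
```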
Re: a safer way to use no_plan?
# from Andy Lester # on Saturday 03 March 2007 06:18 pm:
>Good Lord do I get frustrated at the handwringing over test
>counting. Look, it's simple. You write your tests. You run it
>through prove. You see how many tests it reports. You add it at the
> top of the file. Voila!

I'm not wringing my hands, I just don't like step 3 of what could otherwise be a 2-step process. Further, it equates to steps 1 and 4 when adding more tests! Is it bad that I want it to work in a way that encourages me to add tests rather than punishing me for it?

Do you comment out the tests => foo and uncomment the no_plan line whenever you edit tests, or do you use a ternary and the r0/r1 vim idiom? Or do you just delete the 'tests' option and type no_plan longhand? Seems like that would lead to at least some wristwringing. The ternary also leads to eyewringing in my case. In fact, so much eyewringing that I've taken to this lately:

  use inc::testplan(0,
    + 3   # use
    + 199 # those others
  );
  ...
  done;

--Eric
-- To a database person, every nail looks like a thumb. --Jamie Zawinski
--- http://scratchcomputing.com ---
Re: a safer way to use no_plan?
# from Ricardo SIGNES # on Saturday 03 March 2007 07:11 pm:
>> use inc::testplan(0,
>>   + 3   # use
>>   + 199 # those others
>> );
>
>What is that ... for?

It's a substitute for

  use Test::More (0 ? (no_plan) : (tests => 202));

... mostly because I don't like the number of parens in that. The args are:

  use inc::testplan(<switch>, <count>);

where <switch> is the battle-mode switch, which is quite handily flipped with vim's 'r0' or 'r1' idiom (or Ctrl+a/Ctrl+x if you prefer.) In your case, you could:

  inc::testplan->import(1, 3 + @test_data * 3);

I didn't bother wrapping plan() or adding much other interface as it is currently just an experiment in planning. Actually, if my test could figure out how many tests there will be so I got to watch that neato x/y progress output, I would never plan.

http://svn.dotreader.com/svn/dotreader/trunk/inc/testplan.pm

--Eric
-- Entia non sunt multiplicanda praeter necessitatem. --Occam's Razor
--- http://scratchcomputing.com ---
Re: a safer way to use no_plan?
# from Dominique Quatravaux # on Sunday 04 March 2007 04:33 am: >And what if you are running a variable number of tests depending on >stuff such as compile-time options, maintainer mode enabled or not, or >whatever? Even under no_plan, I would say you should use skip() there. --Eric -- "Time flies like an arrow, but fruit flies like a banana." --Groucho Marx --- http://scratchcomputing.com ---
Re: a safer way to use no_plan?
# from Sébastien Aperghis-Tramoni # on Sunday 04 March 2007 06:19 am:
>  use Test::More;
>
>  my $tests;
>  plan tests => $tests;
>
>  BEGIN { $tests += n }
>  # paragraph of code with n tests
>
>  BEGIN { $tests += m }
>  # paragraph of code with m tests

Interesting. What if...

  use More::Tests;

  use More::Tests n;
  # paragraph of code with n tests

  use More::Tests m;
  # paragraph of code with m tests

  no More::Tests; # :-D

--Eric
-- If the above message is encrypted and you have lost your pgp key, please send a self-addressed, stamped lead box to the address below.
--- http://scratchcomputing.com ---
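The quoted idiom works because BEGIN blocks run at compile time, before plan() executes at runtime, and because `my $tests;` with no initializer does not wipe the compile-time increments. A minimal runnable sketch of just the counting (without Test::More):

```perl
use strict;
use warnings;

my $tests;   # no initializer: the value accumulated in the BEGIN
             # blocks below survives into runtime

BEGIN { $tests += 2 }
# ...a paragraph of code with 2 tests would go here...

BEGIN { $tests += 3 }
# ...a paragraph of code with 3 tests would go here...

# At runtime this sees the full count, so a real test script could
# say: plan tests => $tests;
print "planning $tests tests\n";
```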
Re: a safer way to use no_plan?
# from brian d foy # on Sunday 04 March 2007 10:02 am:
>I run into problems where a loop runs fewer iterations than it should
>but the test script otherwise runs to completion normally.

I typically treat that as a test case.

  my $counter = 0;
  for(things(@foos)) {
    ok($_);
    $counter++;
  }
  is($counter, scalar(@foos), 'enough things');

I suppose a run-time goal adjustment and/or a for_ok(\@foo, $sub) would be useful there?

I guess what I'm getting at is a goal-based plan. Aside from the (and I do love to watch it spin) pretty progress output, the only real need for a plan is to make sure that every test ran. I think that can easily be done at run-time, which would make my life easier.

--Eric
-- I arise in the morning torn between a desire to improve the world and a desire to enjoy the world. This makes it hard to plan the day. --E.B. White
--- http://scratchcomputing.com ---
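A hypothetical for_ok() helper along those lines (the name and interface come from the musing above; this sketch just returns a boolean rather than emitting TAP):

```perl
use strict;
use warnings;

# for_ok(\@items, $check): true only if $check passes for every
# element AND the loop actually visited all of them.
sub for_ok {
    my ($items, $check) = @_;
    my $counter = 0;
    for my $item (@$items) {
        return 0 unless $check->($item);
        $counter++;
    }
    return $counter == @$items;   # the "enough things" check
}

print for_ok([1, 2, 3], sub { $_[0] > 0 }) ? "ok\n" : "not ok\n";
```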
Re: a safer way to use no_plan?
# from A. Pagaltzis # on Sunday 04 March 2007 12:42 pm:
>* Eric Wilhelm <[EMAIL PROTECTED]> [2007-03-04 08:20]:
>> It's a substitute for
>>
>>   use Test::More (0 ? 'no_plan' : (tests => 202));
>>
>> ... mostly because I don't like the number of parens in that.
>
>Uh?
>
>use Test::More 0 ? 'no_plan' : ( tests => 202 );

Search pattern not terminated at uh.pl line 1.

I think you meant:

  use Test::More 0
    0 ? (tests => 202 ) : 'no_plan',
    { q ( o ) =>' \_/ ' } ;

Then, to turn off the plan, I'll just do:

  use constant X => 1;
  use Test::More 0
    X ? (tests => 202 ) : 'no_plan',
    { q ( o ) =>' \_/ ' } ;

--Eric
--
use warnings;
use strict;
use constant X => 1;
use Test::More 0
  X ? (tests => 42 ) : join('', 'n', map {split} q ( o ) => qw/_ p l a n/),
  q( \_/careful or you'll plan your eye out );
Re: customizing test behavior/strictures
# from Ovid # on Monday 05 March 2007 06:26 am:
>--- Andy Lester <[EMAIL PROTECTED]> wrote:
>> I'm very wary of potentially hidden files causing changes to whether
>> my tests pass or not. "Gee, these tests pass for me, but not for
>> you..." caused by a hidden .rc file. :-(

Sure, but that might be the user's fault and/or preference. I suppose somebody might be picky enough to refuse to install a module which had not planned all of its tests, but that should be their problem. Depending on what it does, Ovid's suggested notice might be necessary.

> Using .rc file: /home/ovid/.runtestsrc
...
>If color output is supported, then we go ahead and make that bold and
>in a different color.
>
>We get the convenience of customizing our test behavior but the safety
>of always being warned about it.

It would be nice to be able to use an rc file from the current directory. E.g. you could have it in svn, but also list it in MANIFEST.skip so that the warnings/errors go away for those installing the code.

I've never liked modules that shipped enabled pod/etc tests in their t/. I prefer that sort of kwalitee/meta stuff to be checked once before it ships and not while I'm trying to install it.

Aside: available values of "warn" or "die" for various options might give the necessary granularity.

--Eric
-- "I've often gotten the feeling that the only people who have learned from computer assisted instruction are the authors." --Ben Schneiderman
--- http://scratchcomputing.com ---
Re: --squeal-like-a-pig option
# from Ovid # on Monday 05 March 2007 05:38 am: >I have no idea what to name that switch, though, as 'warnings' is >already taken to enable warnings in the programs. '--tap-warnings' is >probably a decent choice even though I prefer '--squeal-like-a-pig'. --have-a-good-talking-to --tsk --Eric -- Atavism n: The recurrence of any peculiarity or disease of an ancestor in a subsequent generation, usually due to genetic recombination. --- http://scratchcomputing.com ---
Re: Custom extensions to META.yml
# from brian d foy # on Monday 05 March 2007 10:41 am: >In article <[EMAIL PROTECTED]>, Ricardo > >SIGNES <[EMAIL PROTECTED]> wrote: >> * brian d foy <[EMAIL PROTECTED]> [2007-03-04T12:09:26] >> >> > I'm not talking about the particular field name, but the idea that >> > I'd want to say in META.yml "Don't send me mail", or whatever >> > setting I want. >> > >> > Instead of having to disable (or enable) CC for every new tool, >> > I'd want a setting that new tools could look at without me having >> > to change the META.yml in all of my distributions then >> > re-uploading them all. ... >I'm just saying that setting configuration options per tool isn't the >way to handle global preferences. Are you saying that you want a per-author META.yml or that you don't want to have to say "don't send me mail" in two places in each distribution, or both? --Eric -- "It is impossible to make anything foolproof because fools are so ingenious." --Murphy's Second Corollary --- http://scratchcomputing.com ---
Re: per-author META.yml
# from Ricardo SIGNES # on Monday 05 March 2007 10:09 am: >* brian d foy <[EMAIL PROTECTED]> [2007-03-04T12:09:26] > >> ... without me having to >> change the META.yml in all of my distributions then re-uploading >> them all. > >So, for some subset of META.yml settings, you could consult the > module's author settings, found at (say) > $MIRROR/authors/.../RJBS/METAMETA.yml >... >Something like that? I feel a potentially irrational sense of dread. It could just be META.yml, or maybe AUTHOR.yml. Why the dread? If we're going to have per-author settings, I think that would be the place for them. --Eric -- You can't whack a chisel without a big wooden mallet. --- http://scratchcomputing.com ---
Re: a safer way to use no_plan?
# from Michael G Schwern # on Tuesday 06 March 2007 04:40 pm: >Eric Wilhelm wrote: >> At the bottom of a test file: >> >> {my $finish = 1; END {$finish or die "\n unplanned exit"}}; >> >> Yeah, you have to remember to put it at the end of the file, but it >> may be easier than counting tests. Thoughts? Maybe an 'until_done' >> directive and 'tests_done' function? Ways to check that there is no >> code after tests_done()? > >*sigh* Nobody ever looks at the ticket tracker. Gah! When I look at the tracker, it is always empty. When I don't, it is always full. I guess I should start a service where the act of me looking at the tracker keeps it empty :-D >http://rt.cpan.org/Public/Bug/Display.html?id=20959 It sounds like this needs work in the done() functionality. Also, it's not clear at first reading whether a prefixed plan is able to make the done() into a no-op. Ah well, on the pile it goes. --Eric -- "Everything should be made as simple as possible, but no simpler." --Albert Einstein --- http://scratchcomputing.com ---
Re: run C++ TAP output? (even even easier)
# from Julien Beasley # on Friday 09 March 2007 03:39 pm:
>Thanks Ovid! This may be exactly what I'm looking for (since I'm going
> to have tests in libtap and perl). However, and I apologize if I'm
> wrong about this, doesn't your proposed solution have to start a new
> perl interpreter for every single test file? If so, that might end
> up being too slow for practical use.

As others have said, it's not that big of a deal. Also note that --exec 'bash -c' should do the same thing (iff you're running 0.50_07 (because --exec 'anything' was broken before this.))

However, if you feel the need for speed, I *think* you could add support in runtests for "--exec ''" to turn into @exec=() rather than @exec=(''). This would, of course, only work with properly shebanged and +x'd test files on real operating systems (and on compiled-executable-only test suites on windows), but I think the same can be said for the aforementioned naive run.pl.

For mixed-suite support on windows, we probably just say "welcome to windows!" and make you write a more whatisit-aware run.pl (vs e.g. convoluting the usage of --exec with --exec-if-whatever and what-not.)

--Eric
-- We who cut mere stones must always be envisioning cathedrals. --Quarry worker's creed
--- http://scratchcomputing.com ---
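The proposed --exec handling is essentially a one-line distinction (a hypothetical sketch of what runtests could do internally; the sub and variable names are made up):

```perl
use strict;
use warnings;

# An explicitly-empty --exec means "no interpreter wrapper": run the
# test file directly (relying on shebang/+x) instead of wrapping it.
sub build_command {
    my ($exec_opt, $test_file) = @_;
    my @exec = (defined $exec_opt && length $exec_opt)
        ? split(' ', $exec_opt)   # e.g. 'bash -c' -> ('bash', '-c')
        : ();                     # --exec '' -> @exec=(), no wrapper
    return (@exec, $test_file);
}

print join(' ', build_command('bash -c', 't/foo.t')), "\n";
print join(' ', build_command('',        't/foo.t')), "\n";
```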
Re: run C++ TAP output? (even even even easier)
# from Eric Wilhelm # on Friday 09 March 2007 06:26 pm: >However, if you feel the need for speed, I *think* you could add > support in runtests for "--exec ''" to turn into @exec=() rather than > @exec=(''). Darn! That sounded like so much fun. I just couldn't resist. http://svn.hexten.net/tapx/trunk $ PERL5LIB="lib:$PERL5LIB" runtests --exec '' --Eric -- perl -e 'srand; print join(" ",sort({rand() < 0.5} qw(sometimes it is important to be consistent)));' --- http://scratchcomputing.com ---
Tap Version Number
# from Andy Armstrong # on Friday 09 March 2007 04:47 pm:
>I'm just adding support for specifying the TAP version number[1] to
>TAPx::Parser. It seems reasonable that the version, if present,
>should be the first thing in the TAP.

I think that should always be the case. While I don't foresee needing to do it differently, I think it's safe to assume: if we ever need to break that for some reason, then it's a newer version of TAP.

Taking that logic a step further, I vote YAGNI on the "complete syntax for meta-information" bit. If we need more syntax, we bump the version and continue on our merry way, right? Is anybody itching for more metadata right now?

I figure we should keep including this link in every mail until everybody has read it so many times that they start complaining :-)

http://perl-qa.yi.org/index.php/TAP_version

--Eric
-- "Everything should be made as simple as possible, but no simpler." --Albert Einstein
--- http://scratchcomputing.com ---
Re: Worrying about future proofing TAP is a little premature
# from Michael G Schwern # on Monday 12 March 2007 04:49 pm: >If all we do is argue about TAP extensions and never actually produce > one we will never have to worry about new versions! That's a good plan. To implement it, we really need a committee. Perhaps perl-qa is a little overwhelmed with this sort of traffic. We've already got a tapx-dev list, but perhaps we should form a tap-design-committee-formation-committee list (of course, we may first need to form a committee to decide on an acronym for the committee formation committee.) --Eric -- "Insert random misquote here" --- http://scratchcomputing.com ---
Re: The price of synching STDOUT and STDERR
# from chromatic # on Wednesday 14 March 2007 01:47 am: >They don't have to interfere with the TAP stream unless they call >Test::Builder->diag(), Yep. We need to flow everything from stderr through asap. However, I don't think we should be trying to do *anything* with it (except maybe archive it.) >which was my laborious and heavily sarcastic point. So, rather than trying to sync stderr, chromatic will be defining a diag syntax for TAP[1]. (and possibly the harness prints that info (hey, it can be sync'd now!) on stderr.) [1] as soon as he's through with the sarcastic "ok...not!" support --Eric -- "You can't win. You can't break even. You can't quit." --Ginsberg's Restatement of the Three Laws of Thermodynamics --- http://scratchcomputing.com ---
utApia
The diag() debate raged on in pdx tonight. Of course, the sides are roughly in agreement about most things, but with differing priorities and ideas about particulars of the implementation. Perhaps it's time to collect the issues and do some thinking.

Fundamentals:
  1. Anything on STDERR cannot be meaningful to a tap consumer.

Facts:
  1. STDERR has historically been used for (non-parsable) failure
     messages.
  2. Completely synchronizing two filehandles is not possible at the
     harness level (unless the harness is the kernel, then *maybe*.)
  3. Merging STDOUT and STDERR breaks starting with `warn "ok";`

Wants:
  1. Identifiable (tagged) error messages as a formal part of TAP.
  ~1.5 Display errors synchronously in relation to failing tests.
  2. Backwards parser/producer compatibility (mostly with expectations
     created by fact 1.)

Please correct and clarify as needed. I've tried to cook this down into the essentials. If there is a significant fact or want which is not a subset of the above, then I have completely misunderstood. If I've got the fundamentals wrong, then I'm done -- stick a fork in me. If I've got the facts wrong, correct me; otherwise, they're uh... facts.

At the moment, what I'm seeing is differences in priorities placed on wants #1 and #2 and/or how much of "which want" you're willing to give up for the other. Forget for a moment about 'Undefined variable' and such warnings (except to remember that they exist and can ruin your day if the protocol gets in their way (I think that's the "cannot be meaningful" part of the fundamental.))

--Eric
-- "Beware of bugs in the above code; I have only proved it correct, not tried it." --Donald Knuth
--- http://scratchcomputing.com ---
Re: [tapx-dev] TAP::Parser, structured diagnostics
# from Michael G Schwern # on Friday 16 March 2007 02:59 am:
>I chose #--- because 1) its backwards compatible as long as you ignore
> unknown directives and 2) it allows TAP to stream. Otherwise its
> pretty damn inelegant. We could say that a name ending in ---
> indicates a forthcoming TAP stream...

Would "a directive ending with '---'" work?

  not ok 2 - the most important test in the world # TODO because I said so ---
  name: the most important test in the world
  line: 17
  directive: TODO
  reason: because I said so
  ...

Or, possibly a trailing "&...\n"

--Eric
-- Peer's Law: The solution to the problem changes the problem.
--- http://scratchcomputing.com ---
Re: Eliminating STDERR without any disruption.
# from A. Pagaltzis # on Friday 16 March 2007 06:08 am:
>* Michael G Schwern <[EMAIL PROTECTED]> [2007-03-16 11:55]:
>>  fatal  !!!
>>  fail   !!
>>  warn   !
>>  notice
>>  pass   !!!
>>  info   !!
>>  debug  !
>
>The most bangs I can count instantly by looking at them is four.

I would say anything below notice isn't exciting enough to warrant an exclamation point anyway. If pass/info/debug data can be conveyed another way (#), then that makes four.

--Eric
-- "Everything should be made as simple as possible, but no simpler." --Albert Einstein
--- http://scratchcomputing.com ---
Re: subscribing to the TAP list (a.k.a. raise your hand if you heard about testanything.org)
Hi! Would anybody mind if, when the subject changes, we, uh... change the subject line?

# from A. Pagaltzis # on Sunday 18 March 2007 07:05 pm:
>* Andy Armstrong <[EMAIL PROTECTED]> [2007-03-19 00:35]:
>> On 18 Mar 2007, at 23:10, A. Pagaltzis wrote:
>> >* Andy Armstrong <[EMAIL PROTECTED]> [2007-03-19 00:05]:
>> >> http://testanything.org/pipermail/tap-l/
>> >Subscribed!
>> Can't see you on the subscribers list. Did it not work?
>Hmm, I never got a subscription confirmation request.
>Check the pending subscriber list, I should be in there.

I got a mail server bounce, and didn't see it until just now because google was helpfully protecting me from it by filing it under "penis enlargement". Sending my confirm again seems to have dislodged the clog.

We now return to your regularly scheduled spam.

--Eric
-- Issues of control, repair, improvement, cost, or just plain understandability all come down strongly in favor of open source solutions to complex problems of any sort. --Robert G. Brown
--- http://scratchcomputing.com ---
Re: Custom Test::More problem
# from Michael G Schwern # on Wednesday 28 March 2007 08:31 pm:
>Its base.pm.
>
>  local $SIG{__DIE__};
>  eval "require $base";
>
>Test::Builder::Module is loaded which loads Test::Builder which
> instantiates a __DIE__ handler which is localized and thrown out.

Hey, that is pretty funny (cause it's delete($SIG{__DIE__}) :-D)

I'm going to go put that in my "quit hedging bets and just eval" bag now.

--Eric
-- "Everything should be made as simple as possible, but no simpler." --Albert Einstein
--- http://scratchcomputing.com ---
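The interaction Schwern describes can be demonstrated in isolation (a sketch; the inner assignment stands in for what Test::Builder does at load time):

```perl
use strict;
use warnings;

our @log;
{
    local $SIG{__DIE__};    # what base.pm does around its require
    # Stand-in for loading a module that installs a die hook:
    $SIG{__DIE__} = sub { push @log, "hooked: $_[0]" };
}
# Leaving the block restores the localized (empty) slot, so the
# handler the "module" just installed is silently thrown out.
eval { die "oops\n" };
print @log ? "hook survived\n" : "hook was thrown out\n";
```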
Re: New CPANTS metrics
# from ReneeB # on Sunday 01 April 2007 12:41 am:
>On 31 Mar 2007, at 23:14, Thomas Klausner wrote:

Yay! Now that the time zones have caught up, I get to participate in the discussion.

>>> Even though I was fighting mail servers today, I managed to put
>>> some time aside for long-needed CPANTS improvements. (And I really
>>> have to thank the Catalyst/DBIC-guys for their wonderfull tools
>>> which made me finish a big project on time (more on this later...))
>>>
>>> So, the new CPANTS metrics are:
>>>
>>>  * docs_make_sense
>>>  * mentions_kwalitee
>>>  * uses_version_control
>>>  * reuses_code
>>>  * uses_recursion
>>>  * correct_speling
>>>  * nice_code_layout

I'm looking forward to the source (must just be delayed by PAUSE.) I'm curious whether the mentions_kwalitee metric has any gaming-prevention. If I say "reduced kwalitee" in the changelog, does that count for or against me?

>Some of the new metrics can't be satisfied. I doubt that all dists can
>"use" 5 or more other CPAN dists. I think some of the metrics should
> be optional (uses_recursion, nice_code_layout, reuses_code). You
> shouldn't punish the people (like me) who don't like the code layout
> you like.

If your code doesn't have *any* recursion, the module is probably lacking several features anyway. I think we would all be better off if whenever we start to write a "for" loop, we stop to think "how could I do this recursively?" If done correctly, it also tends to rid code of those silly temporary arrays that lead to so much needless head-scratching.

As for code reuse, I think the metric needs work. It should detect whether I've paste-reused code. Any good module contains at least 15 verbatim lines from each of 10 existing modules. Saves the end-user the hassle of installing prerequisites or dealing with bugfixes. For those that prefer a more SPOT style, the metric should detect whether we're using eval(`curl http://search.cpan.org/src/$wantcode`) to implement reuse.
The point of having the nice_code_layout metric is to force conformity. That's the important thing here. We should also probably require vim modelines somewhere:

  # vim:ts=1:sw=1:noet:syn=lua

Is my personal favorite. Though I think the lines should start with "peterbuilt" just to be perfectly clear. After all, ";" is awfully abbreviated. How can you expect an intern to understand something so terse?!

>You also should mention what "docs_make_sense" is! What are the rules
>for "docs_make_sense".

That one is still under development. We're working on a massively parallel distributed human comprehension evaluator. At present, it seems that the HaMCaQE (harmonic mean captcha quiz engine) may prove more viable than the SDMC (shakespearian digital monkey cluster) due to the wide availability of porn site subcontractors. I'm still working on the SIGSEC (statistical ignorance game show entropy collector) though, and think it shows real promise. For the time being, we're using a stopgap hack of a simple part-of-speech ordering analysis, though that tends to get easily confused by recently trendified nounverbifications.

--Eric
-- perl -e 'srand; print join(" ",sort({rand() < 0.5} qw(sometimes it is important to be consistent)));'
--- http://scratchcomputing.com ---
Re: Does this pattern have a name?
# from Yuval Kogman # on Monday 02 April 2007 03:57 pm:
>Then just proxy everything:
>For the proper distinction between a setter and a method that
>accepts arguments (and should still be shadowed) I guess you need
>some meta programming, but the value is dubious IMHO.

My first thought was actually to just use class inheritance. It seems that what Andy is doing here is something like singleton object inheritance, so a class is as good an object as any. Just start with a root class and inheritable class accessors. The variant() method then returns a new class, which only exists in memory and only contains @ISA, getters, and setters.

  my $w10 = FormClass->variant(width => 10);
  # that's a "FormClass::+0", which isa('FormClass')

  my $blue_w10 = $w10->variant(color => 'blue');
  # that's a "FormClass::+1", which isa('FormClass::+0')

Increment the number to prevent conflicts. The "+" is also a conflict-prevention bit. Note that the class name is illegal at the *compiler*, but you're installing typeglobs and the runtime has no qualms about that. See chromatic's "Perl Hacks".

The object could just be a string. But, I suppose it could be a blessed reference reference reference or whatever if you want DESTROY to get rid of those globs. Since you have to be careful not to DESTROY a parent of a class which is still in use, perhaps $self = \$parent would let the garbage collector do the work for you.

I recommend using bare attribute names for getters and set_attrib for setters. If you want to allow instances to override values outside of variant(), then your setters have to be checking ref($self) vs $package and installing a new getter/setter pair in the ref($self) package (note that $package is not __PACKAGE__ when you play in the ether like that.)

Considering how much perl does for you in keeping the hierarchy straight, I'm thinking the symbol table is as good a data store as any in this case. Even without the singletons.
The only caveat I wonder about is whether there is any sort of arbitrary limit on the symbol table size. Of course, you could reimplement the symbol table with autoload and hash references, just don't break can() when you do it. --Eric -- [...proprietary software is better than gpl because...] "There is value in having somebody you can write checks to, and they fix bugs." --Mike McNamara (president of a commercial software company) --- http://scratchcomputing.com ---
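A runnable sketch of the variant() idea described above (the names follow the post; the serial counter and default method bodies are my assumptions):

```perl
use strict;
use warnings;

package FormClass;

my $serial = 0;

# variant(): manufacture an in-memory subclass holding only @ISA and
# getters for the overridden attributes; the "object" is the class
# name string itself.
sub variant {
    my ($self, %attrs) = @_;
    my $parent = ref($self) || $self;           # works on class names
    my $class  = 'FormClass::+' . $serial++;    # illegal to the
                                                # compiler, fine at
                                                # runtime
    no strict 'refs';
    @{"${class}::ISA"} = ($parent);
    for my $name (keys %attrs) {
        my $value = $attrs{$name};
        *{"${class}::$name"} = sub { $value };  # install a getter
    }
    return $class;
}

# root-class defaults
sub width { 0 }
sub color { 'none' }

package main;

my $w10      = FormClass->variant(width => 10);   # FormClass::+0
my $blue_w10 = $w10->variant(color => 'blue');    # FormClass::+1
print $blue_w10->width, ' ', $blue_w10->color, "\n";   # 10 blue
```

Since each variant's @ISA points at the class it was derived from, perl's ordinary method resolution provides the attribute fallback for free.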
Re: Passing parameters to test scripts
# from Andy Armstrong # on Wednesday 04 April 2007 05:58 am: > runtests t/*.t --select regression,docs > >Would select only regression and documentation tests. That's just an >example plucked out of the ether of course - it wouldn't /have/ to >work like that. Meaning that every test has to decide what sort it is and whether or not it should run? I added a concept of "test profiles" to Module::Build for just this sort of thing. You make your gui tests be '.gt' files, your author tests '.at' files, and your network tests '.nt' files. Then run testgui, etc or testall for everything. Of course, this is intended mostly to provide a mechanism for tests that don't run by default (e.g. you have no access to the required data or your machine environment needs to be configured just so.) I'm not sure if I'm exactly happy with it, but it is a start. Perhaps the setup should be turned into a yml file that could be utilized by runtests, etc. For running categories of tests which are also in the default set, I suppose we would need something of a label. It would be nice to have that natively supported in runtests. It would also eliminate code from the test files themselves. Of course, if the desire is to run only part of a particular test file, I'm more inclined to say that the tests need to be divided out into smaller pieces. Not sure how that plays with Test::Class (or rather, whether Test::Class can learn to play nicely with the test-files concept.) --Eric -- We who cut mere stones must always be envisioning cathedrals. --Quarry worker's creed --- http://scratchcomputing.com ---
Re: Passing parameters to test scripts
# from Philippe Bruhat (BooK) # on Wednesday 04 April 2007 05:42 am:
>Usually because I want to run a specific subset of tests only.
>
>A typical example is:
>
>  $ AMS_REMOTE=1 perl -Ilib t/90up2date.t simpsons
>  1..26
>  # Testing 13 themes using the network (may take a while)
>  ok 1 # skip Acme::MetaSyntactic::dilbert ignored upon request
>  ...
>  ok 18 # skip Acme::MetaSyntactic::services ignored upon request
>  ok 19 - Acme::MetaSyntactic::simpsons has 246 items
>  ok 20 - Acme::MetaSyntactic::simpsons is up to date
>  ok 21 # skip Acme::MetaSyntactic::tmnt ignored upon request
>  ...
>
>(Even though the example is for an Acme:: module, I have the intention
>to use this at work with a semi-intelligent wrapper around prove that
>I'm working on.)

Well, I'm not sure about passing arbitrary parameters to test scripts via prove/runtests. If my tests take arguments, it is typically when I'm running them manually with some sort of non-batch-mode interaction or debugging enabled. Thus, prove would need to allow me to say which arguments went to which tests. That might be nice, but why would I be running the tests in a harness in anything other than batch mode?

If you're passing the same argument to all of the tests, that is essentially equivalent to an environment variable, so I can't see the benefit in having prove/runtests support that. Further, if you really want it, the "everybody gets the same argument" mode is already supported via --exec as Ovid mentioned.

Now, the problem you're trying to solve appears to be "run only part of the tests". To me, that doesn't involve passing parameters to the scripts. Can we think about this in terms of the producer supporting some sort of dynamic skip concept involving labels or such? Possibly a bit of yaml shoved in an environment variable?

--Eric
-- "You can't win. You can't break even. You can't quit." --Ginsberg's Restatement of the Three Laws of Thermodynamics
--- http://scratchcomputing.com ---
Re: Passing parameters to test scripts
# from chromatic # on Wednesday 04 April 2007 11:24 am: >> Can we think about this in terms of the producer supporting some >> sort of dynamic skip concept involving labels or such? Possibly a >> bit of yaml shoved in an environment variable? > >It seems to me that we already have a good way of separating tests per > their behavior: separate test files and even separate test > directories. Correct. See my other post and the test profiles thing in Module::Build. What I was thinking of here was more along the lines of how to get Test::Class to play that game. --Eric -- The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. --George Bernard Shaw --- http://scratchcomputing.com ---
Re: test taxonomies
# from Andy Armstrong # on Thursday 05 April 2007 02:18 am: >> I added a concept of "test profiles" to Module::Build >> for just this sort of thing. You make your gui tests be '.gt' >> files, your author tests '.at' files, ... > >I think I'd rather put my tests into subdirectories to group them - That only gets you the ability to run some of them at once, not the ability to *not* run some of them without special action. That is, unless you put them all in t_gui or some other directory completely outside of t/. That might be a viable option, but the trouble is that you end up with more toplevel directories (or a bunch under t_special or whatever.) Other than breaking ack, I haven't had any trouble using a .xt extension under t/. >but either way that only gets you a single taxonomy. In practice >you're quite likely to want tests to be classified in more than one > way. True. I'm interested in pursuing that. As I said, possibly an environment variable with some YAML to tell the producer what to skip or thereabouts. As chromatic pointed out, it does potentially add quite a bit of complexity. I'm pretty certain it doesn't involve command-line arguments for tests though. --Eric -- "If you dig it, it's yours." --An old village poet (via Al Pacino) --- http://scratchcomputing.com ---
Re: Passing parameters to test scripts
# from Philippe Bruhat (BooK) # on Thursday 05 April 2007 02:29 am: >No, I've already solved that problem by putting the tests in separate >directories. E.g., I've now a single pod.t file in test/pod that >will use Test::Pod on all the files in cwd (or the files passed as >command line parameters to the test script). Seems like one good installable tool to test pod would be preferable to creating multiple copies of a test script. Perhaps something like a podprove (and maybe it could do podcoverage and podspelling too.) `./Build testpod` seems to be all we have at the moment. --Eric -- "...our schools have been scientifically designed to prevent overeducation from happening." --William Troy Harris --- http://scratchcomputing.com ---
podcoverage vs Method::Alias vs the easter bunny
Hi all,

The only easter egg I got today was this conundrum: Pod::Coverage reports aliased methods as undocumented because Method::Alias does the

  eval("package $your_package; sub $alias {shift->$canonical(@_)}")

thing. Of course, it doesn't bother reporting things like accessors and constants as undocumented because those are installed via the

  *{"${your_package}::$name"} = sub {...}

way.

So, is it good to be slapped for not documenting your aliases? (Or, for that matter: accessors and constants.) Or is that clutter? I tend to put all of this sort of meta stuff at the beginning of the code, so if you're reading the code, it is sort of self-documenting, at which point anything about it in pod becomes clutter (and quite prone to going stale.)

So, do I need a new Method::Alias? (Aside: I had some code that changed the way it installed the subs, but Adam seems to have disliked it or forgotten it.) I was thinking maybe it could read the aliases from the pod, but that would break in a PAR and generally leaves a bad taste, so no dice. I also somehow can't ever seem to remember the order of arguments (guess I keep getting it confused with `ln`), but that might just be me.

Alternatively, maybe what I want is just for Pod::Coverage to not check any symbols that aren't static subs in the package in question. (I guess that would solve the whole @INC bug as well and we could just statically scan the code.) I mean, if you're creating the symbols via glob-juggling and etc, do you want your podcoverage tests checking those?

On another note, I hacked together a Pod::Coverage subclass that allows "=for podcoverage_private foo_\w+" and "=for podcoverage_trustme bar" in your pod. Seems sort of natural to have the pod-reading tool reading pod and all that. Is that the sort of thing that should be a patch to Pod::Coverage? http://svn.dotreader.com/svn/dotreader/trunk/inc/dtRdrBuilder/PodCoverage.pm

Thanks,
Eric
--
"Politics is not a bad profession. If you succeed there are many rewards, if you disgrace yourself you can always write a book." --Ronald Reagan
--- http://scratchcomputing.com ---
Re: podcoverage vs Method::Alias vs the easter herring
# from Eric Wilhelm # on Sunday 08 April 2007 09:41 pm:
>Alternatively, maybe what I want is just for Pod::Coverage to not
> check any symbols that aren't static subs in the package in question.
> (I guess that would solve the whole @INC bug as well and we could
> just statically scan the code.) I mean, if you're creating the
> symbols via glob-juggling and etc, do you want your podcoverage tests
> checking those?

Wow! Why didn't I think of that earlier? This is actually far more correct, plus *actually* checks the pod in packages which don't compile (e.g. they require Win32) on the current machine. And, it's a crazy amount faster than loading all of your modules. WTF?

http://svn.dotreader.com/svn/dotreader/trunk/inc/dtRdrBuilder/AlsoPodCoverage.pm

So what if it is naive to think that all subs match qr/^sub /? I'm not looking for all of the subs, only the ones that should be documented. As a convention, I think it works better to just do a stupid-simple static scan. If you indent the sub, you're implying that you're not going to document it (and probably including a comment as to why you have to override the base method or whatever.)

So, why did I bother with the whole =for podcoverage_trustme thing? Easter herring I guess.

--Eric
--
But you can never get 3n from n, ever, and if you think you can, please email me the stock ticker of your company so I can short it. --Joel Spolsky
--- http://scratchcomputing.com ---
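[Editor's note: the stupid-simple static scan described above is easy enough to sketch. This is a hedged toy illustration of the idea, not the AlsoPodCoverage.pm code linked in the post; the sample module, file path, and output labels are invented for the example. It diffs column-zero `sub` declarations against `=head2` entries, so indented subs are implicitly "trustme":]

  # Write a sample module with one documented sub, one naked sub,
  # and one indented (implicitly private) sub.
  cat > /tmp/Demo.pm <<'EOF'
  package Demo;

  =head2 documented

  =cut

  sub documented { 1 }

  sub undocumented { 2 }

    sub _private { 3 }  # indented: implying "not documenting this"

  1;
  EOF

  # Static scan: collect =head2 names and column-zero sub names,
  # then report subs with no pod (and pod with no matching sub).
  perl -ne '
    $pod{$1}++ if /^=head2\s+(\w+)/;
    $sub{$1}++ if /^sub\s+([A-Za-z]\w*)/;
    END {
      print "naked: $_\n" for grep { !$pod{$_} } sort keys %sub;
      print "stale: $_\n" for grep { !$sub{$_} } sort keys %pod;
    }
  ' /tmp/Demo.pm

No compiling, no @INC, no symbol table: only `undocumented` gets flagged.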
Re: podcoverage vs pragmatism
# from Ricardo SIGNES # on Monday 09 April 2007 05:10 am:
>I need to finish/test/release my PC subclass that looks at
>@PKG::POD_COVERAGE_TRUSTME.

I saw that in rt, but I really think pod is the place for it. Why clutter the code with variables which are only used by a tool that reads pod? Isn't that what '=for' is for?

>I am definitely not happy with the idea
> of letting Pod::Coverage just look for subs declared with "sub name"
> as you suggest. I generate too many methods dynamically.

Well, we could have *that* debate, but I'm inclined to think that a project should get to choose between them (at least in terms of what's the policy for whether the code is ship-ready.) I'm firmly on the "human decisions supported by tools" side of the fence.

We don't really have the option to generate pod dynamically (at least, not outside of the build system), so I tend to document the autogenerated methods as:

  =head2 menu_view_tab_

  Activates the sidebar tab.

  =cut

And I might include every method in that bit of documentation, but I'm not going to =head2 or even =item them. With a symbol-table walking scheme, I now have to add an explicit or regexp list of trustme's to match that. With a static scan, I don't have to do anything.

What was bugging me about the symtable walk was stuff like:

  # override this because the default behavior makes no sense
  sub SetFocus {shift->focus_current(@_);}

In My Policy, that doesn't belong in the pod documentation because it is an implementation detail. It might get a mention in the focus_current pod, but again, will not get an =head[2-4] or =item. Being able to say "trust me" with a leading space or semicolon seems like a rather useful convention.

Of course, there's also "the code can not be loaded" (e.g. on this computer - whether lack of prereqs, failure to be linux, etc.) I guess failure to die when eval("require $package") fails is a bug? ATM, Test::Pod::Coverage says "ok" there, but I'm not sure whose bug it is. Shouldn't that at least be a "skipped: $module failed to load"?

The pod is static, so with all of the problems of compiling the code, plus the speed difference, it seems like this is one case where simplistic static code scanning is a net win for some. Just like kwalitee, it is not a perfect metric, but I think it makes a good tool.

Symbol Table Walk:
  * picks-up on magic subs (iff the magic is in your package.)
    (skips accessors, constants, etc.)
  * requires explicit trustme/private declarations
  * requires that the code compiles on the current machine/platform
  * is slow

Static Code Scanning:
  * only sees statically declared subs
  * ignores anything which doesn't m/^sub [A-Za-z]\w+/
    (sort of a 'defaults to "trustme"' style)
  * is unaffected by dependency/platform issues
  * is quick

Which of those items are pros or cons mostly depends on your POV. It seems like there's a good case for one or the other in given situations. I might use the STW as an occasional audit, but have the SCS as the check on the nightly build. I tend to have my eyes on the code enough to see blatant violations, so I'm looking to catch errors which look correct such as:

  =head2 foo_bar_bat

  Does the bar to foo's baz.

  =cut

  sub foo_bar_baz {
    ...

What perplexes me is I guess: why have we only had the one scheme for so long? I caught a couple of actual mis-documented bits (such as above) with the SCS, but the STW always made so much noise that I had basically just given up on making it work until yesterday. I wonder how many others have similarly thrown up their hands and just walked away from pod coverage because of that?

In my case, the SCS is much more pragmatic in that it allows me to catch honest errors, whereas the STW punishes my tendency to not repeat myself, plus takes longer *and* misses checking all of the windows-specific code.

--Eric
--
"Unthinking respect for authority is the greatest enemy of truth." --Albert Einstein
--- http://scratchcomputing.com ---
Re: New CPANTS metrics
# from Andy Armstrong # on Sunday 01 April 2007 07:53 am:
>Agreed. May I propose the additional requirement that the
>documentation contain a lengthy treatise on the benefits of true[1]
>object orientation?
>
>[1] For whichever value of 'true' the author prefers.

Yes, but then it should also include a requirement that all accessor methods be coded longhand. No Class::Accessor, Class::Accessor::Classy or Moose or anything silly like that.

I suppose it might be okay to allow inlined typeglob assignment of generated accessors IFF they're created in a recursive lexically-scoped function in a BEGIN block.

  BEGIN {
    my $def = sub {
      my $setter = sub {...};
      my $getter = sub {...}; # bit of hand-waving
      no strict 'refs';
      *{$setname} = $setter;
      *{$getname} = $getter;
    };
    my %accessors_for; # special overrides go in here
    my $mk_accessors;
    $mk_accessors = sub {
      my $name = shift(@_) or return;
      ($accessors_for{$name} || $def)->($name); # lexical polymorphism
      $mk_accessors->(@_);
    };
    $mk_accessors->(qw(foo bar baz));
  }

That's just my first-crack at it though. We should probably make $def recursive and leverage ternary parametric polymorphism to get rid of the $getter/$setter variables.

  my $def;
  $def = sub {
    (@_ % 2) ?
      (map({$def->($_ . 'et', @_)} qw(g s))) :
      (sub {
        my ($type, $name) = @_;
        my $subref = eval('sub {shift->{' . $name . '}' .
          (($type =~ m/^s/) ? '= shift' : '') . '}');
        no strict 'refs';
        *{$type . '_' . $name} = $subref;
      }->(@_));
  };

I think it also needs a package global variable containing a subref for easy customization/overrides of the setter (though I suppose we would be better off with a subref-wrapped eval'd environment variables PERL_GETTER and PERL_SETTER?)

I propose that we standardize this (in the peterbuilt normal form, of course) and make it part of the uses_oo metric (requiring verbatim pastage save the %accessors_for dispatch declaration.)

Now the only question is whether it should use inside-out objects and/or a tied dispatch table connected to an ftp server.

--Eric
--
The only thing that could save UNIX at this late date would be a new $30 shareware version that runs on an unexpanded Commodore 64. --Don Lancaster (1991)
--- http://scratchcomputing.com ---
Re: SKIP without counting
# from Ovid # on Monday 16 April 2007 02:29 am:
>I need to skip the
>rest of the tests, but I want to use a plan and I don't want to keep
>counting the number of tests after the skip.
...
> my $remaining_tests = looks_like_number($plan)
>     ? $plan - $builder->current_test
>     : 1; # always show at least one test skipped

That does lose the rigor of knowing whether you've run enough tests up to this point. But, it seems the only way to address that is to count. Perhaps forward-declaring 'checkpoints' would make that less painful in that you would update the count at the top of your test instead of somewhere in the middle.

Also, when does the ': 1' apply?

--Eric
--
So malloc calls a timeout and starts rummaging around the free chain, sorting things out, and merging adjacent small free blocks into larger blocks. This takes 3 1/2 days. --Joel Spolsky
--- http://scratchcomputing.com ---
Re: testing dependent modules
# from Nadim Khemir # on Wednesday 09 May 2007 10:51 pm: >PAR has a module to find module dependencies. That might help. That would be Module::ScanDeps. Needs tests. The static scanning is not using PPI, and I think everyone involved at this point agrees that it should. There is also a lot of "x implies y" dependency chain knowledge encapsulated in there that could stand to be in a data file or somehow configurable+compartmentalized. The compile/run-time scanning is essentially sound in that it dumps %INC, though that leaves you without any knowledge of who required it. The dynamic library list capture is neat too. Also see Devel::TraceDeps, which is currently lacking a frontend (or really, anything to do with interface.) http://scratchcomputing.com/svn/Devel-TraceDeps/trunk I'm trying to enable separating the "we require" from "they require", the idea being that you can "-MDevel::TraceDeps=something" when running the test suite and get recursive comprehensive dependency coverage which would be as thorough as your test coverage. But, I'm lacking tuits right now, so please feel free to steal it and finish it :-D --Eric -- Issues of control, repair, improvement, cost, or just plain understandability all come down strongly in favor of open source solutions to complex problems of any sort. --Robert G. Brown --- http://scratchcomputing.com ---
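[Editor's note: the "dumps %INC" style of run-time scanning mentioned above is simple to illustrate. A hedged sketch (not Module::ScanDeps' actual code, and File::Temp is just a stand-in for "your program"):]

  # Run-time dependency capture sketch: load something, then dump
  # %INC at process exit.  %INC maps each loaded module's path
  # (e.g. File/Temp.pm) to the file it was loaded from.
  perl -e '
    require File::Temp;
    END { print "$_\n" for sort keys %INC; }
  '

Everything File::Temp itself pulled in shows up in the same flat list, with no record of who required whom -- which is exactly the "without any knowledge of who required it" gap that Devel::TraceDeps is aiming at.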
Re: CPAN testers generates Makefile.PL without prerequisites
# from Matisse Enzer # on Sunday 20 May 2007 08:55 am: >So - what's the "right" way to remedy this? > > - The author (that's me!) provides a Makefile.PL as well as Build.PL? > - Fix something in the CPAN tester system? > - Other? > - Some combination of the above ? :-) http://www.nntp.perl.org/group/perl.module-authors/2007/04/msg5348.html It depends on whether you want to fix the problem "where it is" or "where you have write access". Chris has a broken cpanplus. This is not the first time this has happened. I'm pretty sure it is broken and not old (isn't PERL5_CPANPLUS_IS_RUNNING pretty recent?) It would be nice in general if FAIL reports could be crossed-off or deleted by humans. Some feedback to the tester would also be good (how do I contact a tester?) Some way to link a failure report to a known/new bug in RT would also be pretty nice. As for what you can do on your end, if you like workarounds, the passthru makefile is the best answer I have. --Eric -- A counterintuitive sansevieria trifasciata was once literalized guiltily. --Product of Artificial Intelligence --- http://scratchcomputing.com ---
Re: CPANTS: suggestion for a new metric
# from demerphq # on Saturday 26 May 2007 10:45 am: >> Sorry, but it is *the _compression_ software's* bug. > >Fine, then what do i do about it? File a bug with Archive::Tar >(maintained by a non windows programmer)? This should be properly handled by the dist action of any sufficiently modern Module::Build or Module::Install or ExtUtils::MakeMaker. If it isn't, that's where the bug is. --Eric -- Atavism n: The recurrence of any peculiarity or disease of an ancestor in a subsequent generation, usually due to genetic recombination. --- http://scratchcomputing.com ---
Re: CPANTS reports in e-mail
# from Gabor Szabo # on Wednesday 30 May 2007 08:58 pm:
>I would like to be able to opt-in such e-mail
>reports.
>Similarly to how CPAN Testers send reports.

Wouldn't it be interesting if there were a multiplexing service for this sort of thing? E.g. maybe a system that allows you to subscribe to a personal mailing list of sorts in much the same way that you can get an rss feed for a given rt query. Cue debate over push vs pull.

>In addition I think it would be acceptable to send a single e-mail to
>every author
>the first time one of his module is analyzed with the instructions on
>how to opt-in to the reports.

Possibly, but what about pause sending a page of links to new authors on their first upload? And for the rest of us to stay current, maybe a low-volume mailing list solely for notifications of various new/changed perl/cpan services?

--Eric
--
Chicken farmer's observation: Clunk is the past tense of cluck.
--- http://scratchcomputing.com ---
Re: Pod::Critic?
# from Ian Malpass # on Wednesday 06 June 2007 08:08 am:
> * Has copyright details
> * Has license details

On bigger projects, these things sometimes get done as "see the main module documentation for copyright and license statement". Thus, it would be good for this to be configurable as something like "has block $x", where $x is specified in a "pod_blocks/$x" file in the project tree.

Similarly, I would like overrides to be able to be specified without comments or other markers in the code. Maybe a yaml file of module/method settings. (I'm imagining that I'll one day be reading a file with critic+pod_coverage+pod_critic+tidy markup in it and my eyes will bleed dry.)

> * Method docs have examples
> * No spelling errors (borrowing Pod::Spell)

What David said. Also consider the various styles of method documenting. I don't like the '=item method', but it happens. I really don't like '=item method(parameters)', but again. There better be some way to per-project declare this policy because anything besides "=head2 method\n" is an error in my codebase.

The spelling thing could maybe be helped via C<>. Would be nice to know that you didn't accidentally type some_metod_name when you wanted some_method_name. Possibly use C<> to qualify external refs. Of course, "http://example.com", acronyms, "Ingy" and similar items imply that a high-quality and sufficiently-hookable spellchecker are needed. AFAIK, that doesn't exist.

> * Module names are links

See 'See the "L<perlpod> documentation." may become "See the the perlpod manpage documentation."' Too many links don't do that correctly already. Thus, requiring links might be just making more trouble.

--Eric
--
Consumers want choice, consumers want openness. --Rob Glaser
--- http://scratchcomputing.com ---
Re: Pod at __END__
# from Andy Armstrong # on Thursday 07 June 2007 03:13 pm:
>I'd like an editor that lets me look at it either way. Sometimes I
>want to look at uninterrupted code - in which case interleaved
>documentation just gets in the way. Other times it's nice to be able
>to read the code while documenting it - in which case interleaved
>would be better.

I've played with this folding scheme, but really wish vim had a way to anchor it to the front of the line. What it does with head1/head2 can be pretty useful (folding an entire head1 and all of the code) but honestly I've had foldenable turned off for quite a while now.

  :set foldmethod=marker
  :set foldmarker=\=head,\=cut

Maybe there's a better folding module in vim? It would be nice if it could recognize and fold pod plus the related blank lines.

I've never seen the benefit of pod after __END__. IMO, your code and docs should follow the same order/groupings. That, and you have to retype method names and such in POD anyway, so separating them just risks too much drift.

On a related note, I haven't come up with a good answer for documenting accessors. See usage2pod().

--Eric
--
Consumers want choice, consumers want openness. --Rob Glaser
--- http://scratchcomputing.com ---
Re: podlifter
# from Andy Armstrong # on Thursday 07 June 2007 03:44 pm: >I definitely think > >$ podlifter -end lib/My/Stuff.pm > >and > >$ podlifter -interleaved lib/My/Stuff.pm > >need to exist. Ingy had something in that vein, but I'm not sure it does the round trip. Also, podadz http://scratchcomputing.com/svn/code_utilities/trunk/code/perl/bin/podadz Getting it interleaved back into the right spot without leaving a token behind would probably be difficult, particularly if an =head2 gets stuck in while out of line. (You would probably need tokens in both the code and the pod to do it right.) --Eric -- [...proprietary software is better than gpl because...] "There is value in having somebody you can write checks to, and they fix bugs." --Mike McNamara (president of a commercial software company) --- http://scratchcomputing.com ---
Re: Pod at __END__
# from Joshua ben Jore # on Thursday 07 June 2007 05:14 pm: >On 6/7/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote: >> I've never seen the benefit of pod after __END__. IMO, your code >> and docs should follow the same order/groupings. > >It has two benefits. >...readable without syntax highlighting. >...don't spend the CPU The first is pretty moot with the right tools, which are easy to find. As for the browser, that can be highlighted or pod2html'd. The second is an optimization, so can be done in the build tool. I guess the "I want a more dynamic pod" thing could also be done in the build tool. Just needs an =include or thereabouts. I'll leave that as the answer to what chromatic said too. Speaking of which, what's the state of the art in pod6? --Eric -- I arise in the morning torn between a desire to improve the world and a desire to enjoy the world. This makes it hard to plan the day. --E.B. White --- http://scratchcomputing.com ---
Re: Pod at end
# from A. Pagaltzis # on Thursday 07 June 2007 10:25 pm:
>Documentation should form a coherent narrative. The myopic view
>of inlined local POD sections is a hindrance to that.

We need to be able to switch the folding between pod and not-pod, eh?

>Conversely,
>when I edit code, I occasionally want to shift big swathes of
>stuff around; having to carry along all the POD before I’m done
>would bog me down for no gain whatsoever, particularly in light
>of the fact that such changes generally result in a structural
>revision to the docs.

  =head1 Foo Stuff

  ...

  =head2 foo_this

  ...

  } # end sub

  =head2 foo_that

  ...

  } # end sub

  =head1 Bar Stuff

  ...

Were you arguing against interspersed pod? If you're moving code into the Foo section, the docs go with it. If you're swapping =head1 sections, the code goes with them. Pod at the end means finding your cut points twice instead of once.

--Eric
--
The opinions expressed in this e-mail were randomly generated by the computer and do not necessarily reflect the views of its owner. --Management
--- http://scratchcomputing.com ---
faking time() across processes for testing
Thoughts? I can override CORE::GLOBAL::time() and I've done this before with a closure (ala Time::Mock), but how would one implement accelerated time for testing a multi-process program? I'm also dealing with possibly sleep(), alarm() and other timing issues, as well as maybe Time::HiRes::time(). I can probably work my way around the sleep() and such. All of the processes are in perl, starting from fork-opens.

If I use Time::HiRes::time() multiplied by some accelerating factor, would one process appear to get ahead of the other? (A quick test shows not, at least on this system with that code.) I don't have any particular sync requirements, but I'm curious if there's prior art or known pitfalls here.

  use Time::HiRes ();
  BEGIN {
    my $htime = \&Time::HiRes::time;
    my $time_start = $htime->();
    my $timesub = sub () {
      $time_start + ($htime->() - $time_start) * 10_000;
    };
    *Time::HiRes::time = $timesub;
    *CORE::GLOBAL::time = sub () {
      return(sprintf("%0.0f", $timesub->()));
    };
    # TODO override sleep and T::HR::sleep, etc
  }

I was thinking there needs to be a shared filehandle with a stream of time on it similar to the below, but with various time() and sleep() methods overridden. Calls to time() or sleep() would peel-off lines, thus keeping everyone in sync. It becomes impossible for two things to happen at the same time (and I think I'm seeing the child blocking the parent's reads.)

  use Time::HiRes qw(sleep);
  my $pid = open(my $fh, "-|");
  unless($pid) {
    $| = 1;
    while(1) {
      print Time::HiRes::time(), "\n";
      sleep(0.01);
    }
    exit;
  }
  if(fork) {
    while(my $line = <$fh>) { warn "parent $line"; }
  }
  else {
    while(my $line = <$fh>) { warn "child $line"; }
  }

--Eric
--
"But as to modern architecture, let us drop it and let us take modernistic out and shoot it at sunrise." --F.L. Wright
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from Adam Kennedy # on Saturday 28 July 2007 09:38 am:
>Thus, I would like to propose the following.
>
>1. That the running of POD-related tests by end users is considered
> harmful.

+20

>2. That the test_pod and test_pod_coverage tests be modified, such
> that these tests check for the mentioning of $ENV{AUTOMATED_TESTING}
> in the tests scripts, ensuring that the tests not only exist, but
> exist in a form that is otherwise benign for end users.

I propose that they be deleted. The build system already contains all of the needed info to accomplish the same thing.

  ./Build testpod
  ./Build testpodcoverage

That's part of my pre-release process, thus my dists will never contain these files. Running a kwalitee check is also part of the pre-release process, but I omit these 'has a file $foo' metrics because I believe they *hinder* quality.

Perhaps the dist should contain a kwalitee.yml file stating which checks were run and which metrics are being protested. That's conveying the information: "Yes, I care about quality and I have imposed these standards on myself before releasing."

CPANTS should *just run* testpod and testpodcoverage -- report the results, not the presence of some file which may or may not work.

--Eric
--
Chicken farmer's observation: Clunk is the past tense of cluck.
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from Adam Kennedy # on Saturday 28 July 2007 01:52 pm: >Unfortunately, the pod coverage tests requires the module to be >compiled, so CPANTS can never safely run it, and thus can never run it >at all. :) No. *Pod::Coverage* requires the module to be compiled. Testing the coverage of pod does not. For those that want to nit-pick. Yes, I know methods can be defined via loops/accessors at compile-time. (I'm not documenting "--set_foo"() anyway.) They can also be defined at run-time (via AUTOLOAD, and possibly even driven by HTTP or GPIO), so I'll happily draw a line at static analysis and call it good. --Eric -- "Ignorance more frequently begets confidence than does knowledge." -- Charles Darwin --- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from Christopher H. Laco # on Sunday 29 July 2007 08:00 am:
>It's rare, but I've been in the situation where
>for some reason (mismatching dist requirements, failed upgrades, bad
>dist packages, broken troff, etc) the creation of man pages/html from
>the dist pod fails outright.

That reinforces the point that running pod tests on the install side is a waste of time. Maybe the build tools need to make more noise and/or refuse to install if pod2man/html goes horribly wrong. After all, we abort *before testing* if compilation of code fails, so why not treat compilation of docs the same way?

>But I don't think saying pod tests are something the end user should
>never run is wrong.

So, it would be correct to say that end users should never run pod tests then? I can totally agree with that. ;-)

>Personally, part of me wants to say: stop worrying about what tests I
>decide to ship and enable with my dists.

I'm thinking the installer should have the option to scan the tests and skip any that use qr/Test::Pod(?:::Coverage)?/.

Speaking of which, I just noticed that Module::CPANTS::Uses doesn't even check whether the Test::Pod(::Coverage) modules are used in the *tests* -- it is happy if they get mentioned anywhere. Further, since Module::ExtractUse doesn't understand strings, you can simply put this in one of your tests to game the kwalitee:

  my $q = 'use Test::Pod::Coverage; use Test::Pod;';

--Eric
--
"It works better if you plug it in!" --Sattinger's Law
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Golden # on Monday 30 July 2007 05:34 am:
>The issue at hand is really *release* testing (i.e. did I bump the
>version, did I test my Pod, do I use good style, etc.) being mixed
>with *functional* testing -- and the corresponding push for release
>testing modules being included as requirements.

Yes. We want to know that the author tested the code and went through the checklist before the release. We would like to be able to verify it ourselves, but we don't want all of that to get in the way of installation.

>For that, I blame -- among other things -- Module::Starter for
>including pod.t and pod-coverage.t and with a default setting to run
>tests. Better would have been to skip tests unless
>$ENV{AUTHOR_TESTING} or some similar flag was set.

Again, yes. Though I'm going to stick with "delete them." I think the important bit is that `make test` only runs tests which verify the module's functionality. Anything else needs to be a separate test target or out-of-band tool.

The fact that this is ugly, opaque, and error-prone should tell us something:

  AUTHOR_TESTING=1 \
  TEST_THE_NETWORK=1 \
  TEST_THE_HARDWARE=1 \
  ADD_THOSE_OTHER_TESTS=1 \
  TEST_EVERYTHING_REALLY_I_MEAN_IT=1 \
  make test

Compare to something as simple as:

  prove t_author t_network t_hardware t_other t_more t

Or

  ./Build testall

--Eric
--
Don't worry about what anybody else is going to do. The best way to predict the future is to invent it. --Alan Kay
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from Christopher H. Laco # on Monday 30 July 2007 11:14 am:
>I don't agree. What runs when I do 'make test' is up to me, and if I
>want to litter it up with 'author' tests, then that's my business;
> right or wrong. Don't like it, then don't use my modules. (I still
> think all author tests should not run by default...)

This is not about what happens when *you* do `make test`, it's about what happens when the end-user does `make test`. The default module-starter setup creates this t/pod.t:

  #!perl -T

  use Test::More;
  eval "use Test::Pod 1.14";
  plan skip_all => "Test::Pod 1.14 required for testing POD" if $@;
  all_pod_files_ok();

If *you* don't have Test::Pod, and *I* do, *I* cannot install your module if the pod doesn't pass. This could possibly fail as part of a dependency of a dependency of a dependency. This makes Perl harder to use, which is bad.

Thus, I posit that the quality of the module is generally lower if 'boilerplate.t', 'pod-coverage.t', and 'pod.t' *exist*. Kwalitee is supposed to be an approximation of quality, not the opposite of it.

--Eric
--
If the collapse of the Berlin Wall had taught us anything, it was that socialism alone was not a sustainable economic model. --Robert Young
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Golden # on Monday 30 July 2007 01:06 pm:
># pod-coverage.t
>use Test::More;
>plan skip_all => "Skipping author tests" if not $ENV{AUTHOR_TESTING};
>
>my $min_tpc = 1.08;
>eval "use Test::Pod::Coverage $min_tpc";
>plan skip_all => "Test::Pod::Coverage $min_tpc required for testing

No. If AUTHOR_TESTING, fail miserably unless the pod and coverage both 1. gets tested and 2. passes. That means the Test::Pod::* module in question must load.

While you're at it, put it somewhere "out of the way" (as Aristotle said), like "at/" and forget this environment variable silliness.

  at/pod-coverage.t

    use Test::More;
    use Test::Pod::Coverage 1.08;
    all_pod_coverage_ok();

  at/pod.t

    use Test::More;
    use Test::Pod 1.14;
    all_pod_files_ok();

Who knows when you'll run into a box in the publishing industry which just happens to be setup for AUTHOR_TESTING some other system. Or, simply the hardcore CPAN author who put it in their .bashrc and forgot about it until some broken pod appeared in the middle of a big dependency stack. Environment variables are as global as they get.

I think the "skipped more often than run" use-case should also imply something. I keep my winter clothes in the back of the closet during the summer and all-that.

--Eric
--
"It is impossible to make anything foolproof because fools are so ingenious." --Murphy's Second Corollary
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from chromatic # on Monday 30 July 2007 01:39 pm:
>On Monday 30 July 2007 13:19:56 Christopher H. Laco wrote:
>> Eric Wilhelm wrote:
>> > If *you* don't have Test::Pod, and *I* do, *I* cannot install your
>> > module if the pod doesn't pass.
>>
>> Huh? In that code, no Test::POD = skip all tests. How does that
>> equate to an install failure for anyone...either the author, the
>> dist maint, or the end user?
>
>Eric's case is where user who just wants to install the module has
> Test::POD but there are POD errors, so the tests fail and he or she
> has to examine the test output to see if forcing the installation is
> a good idea. It's not about not having Test::POD installed.

Well, it is _all about_ whether it is installed because that's the only conditional in the test.

If Test::Pod *is not* on the author system, then pod errors get through. Then, if it *is* installed on the destination system, test fails, game over.

--Eric
--
Don't worry about what anybody else is going to do. The best way to predict the future is to invent it. --Alan Kay
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from Christopher H. Laco
# on Monday 30 July 2007 01:35 pm:

>If it fails and you can't install it, then don't.
>Arguments about whether the tests should or shouldn't be run [or
>included at all] is irrelevant.  Tests failed.  Don't install.

I think you're looking at this as "oh, I'll try whiz-bang module foo."
Sure, you can just dismiss it, but that's really not the concern.  It
is quite relevant when we get 2-3 levels deep in dependencies and
somebody made a slight slip or added a new dependency which fails for
this reason.  Then we end up digging through a customer e-mail trying
to sort through what went wrong, and finding that a lot of time has
been lost on something very simple and "dismissable".

And then, it turns out that this is a situation which the "perl qa
community" has encouraged, condoned, and even rewarded.  Gah!  Where
was I during that vote?

>File RT.

Won't happen.  Not ~95% of the time, anyway.

>If the author made a choice to have them run always and piss people
>off, or restrict the user base, then that's the author's prerogative.

In most cases, it is not the author's deliberate choice that these
things happen; instead, it is believed to be some sort of mandate that
they be in the distro "because Module::Starter put them there" or
"CPANTS says I should have them."  That is, the vast majority of
authors who have them have not worked through the logic of what damage
these tests do on the install side.

--Eric
--
Any sufficiently advanced incompetence is indistinguishable from
malice.  --The Napoleon-Clarke Law
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Golden
# on Monday 30 July 2007 05:03 pm:

>>Or, simply the hardcore CPAN author who put it in their .bashrc and
>>forgot about it
             ^-- who will it bite first, will it be him, her, or me?
>>until some broken pod appeared in the middle of a big dependency
>>stack.  Environment variables are as global as they get.
>
>s/AUTHOR_TESTING/PERL_AUTHOR_TESTING/
>
>And "hardcore" CPAN authors aren't the audience that this discussion
>is trying to help.

I admire your skillful rhetoric and valiant attempt to avoid typing
`mkdir at`, but I fail to understand the resistance.  I refer again to
the "skipped more often than run" issue and the supposition that
residents of warm climates tend not to keep their ski equipment next
to the front door.

--Eric
--
Consumers want choice, consumers want openness.  --Rob Glaser
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Golden
# on Monday 30 July 2007 07:39 pm:

>On 7/30/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>> `mkdir at`, but I fail to understand the resistance.
>
>* CPANTS Kwalitee
>* Module::Starter

Both need correcting anyway.

>* Test::Pod::Coverage

Why does that matter?  Does it care where its .t file lives?

>* ExtUtils::MakeMaker

Not affected.

  prove at/

Or,

  runtests at/

>* Module::Build

Not affected.

  ./Build testpod
  ./Build testpodcoverage

Or, of course,

  runtests at/

--Eric
--
To succeed in the world, it is not enough to be stupid, you must also
be well-mannered.  --Voltaire
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Cantrell
# on Tuesday 31 July 2007 04:27 am:

>> No.  If AUTHOR_TESTING, fail miserably unless the pod and coverage
>> both 1. get tested and 2. pass.  That means the Test::Pod::* module
>> in question must load.
>
>Wrong.  If AUTHOR_TESTING then surely all tests *must* pass both with
>and *without* the optional module!

What good does it do to skip the test if AUTHOR_TESTING?  That just
gets us back into the same situation as the current one, where the
author didn't have Test::Pod and pod which fails the tests got shipped.

To accomplish author testing, you must actually *run the tests*.  You
can't do that if you don't have the module available.  If the tests
didn't run, well ... it's untested.  If untested pod ships with the
possibility of the tests actually running (and quite likely failing)
on the install system, the error appears at the wrong end.

The advocacy of "check your pod" and "cover your pod" is fine, but
please apply the clue-stick *before* the dist gets shipped to CPAN.

--Eric
--
The opinions expressed in this e-mail were randomly generated by the
computer and do not necessarily reflect the views of its owner.
  --Management
--- http://scratchcomputing.com ---
Re: Which Modules Does Your Distro Need?
# from Andy Armstrong
# on Tuesday 31 July 2007 05:55 am:

>> This is the approach I thought of.  I think you'll need to keep a
>> persistent file of output to capture require calls across each test
>> file and then summarize.  (I.e. so you can set it as a command line
>> option to the harness.)
>
>Yup, I'm just playing with it now.  Seems that it'll do everything we
>need of it.  Lovely :)

Yep.  My thinking for Devel::TraceDeps was that it would be enabled
with a

  PERL5OPT=-MDevel::TraceDeps=somedir

so as to catch the subprocesses.  It would need to basically create a
subpath-keyed (YAML?) file in somedir for every $0 that hits import.
The post-processing then allows you to filter out the "test that runs
a program" while still allowing you to process data from a "test that
runs other tests".

The trouble with basically a coderef in @INC is that it is always
first-come first-serve and you never get a poke for the latter
require()s.

  use Foo;      # uses warnings, strict, Carp
  use warnings; # got it already, no callback
  use strict;   # ditto
  use Carp;     # ditto

With END {} looking at %INC, you're only seeing "everything that
everyone used anywhere."  And to get complete coverage (e.g. for PAR
to work), we also need do() for things like 'utf8_heavy.pl does
unicore/lib/SpacePer.pl' and etc.  Plus, of course, the
.dll/.so/.dylibs.

Also note that "-d" tends to segfault in Wx usage (at window creation
or thereabouts) and maybe elsewhere.  Would be nice to note $^S as
well, as far as caring whether an eval()-wrapped load succeeds or
fails.

http://scratchcomputing.com/svn/Devel-TraceDeps/trunk

--Eric
--
"I've often gotten the feeling that the only people who have learned
from computer assisted instruction are the authors."
  --Ben Schneiderman
--- http://scratchcomputing.com ---
Re: Summarizing the pod tests thread
# from Yitzchak Scott-Thoennes
# on Tuesday 31 July 2007 10:19 pm:

>On Tue, July 31, 2007 9:56 pm, chromatic wrote:
>> On Tuesday 31 July 2007 20:25:15 Salve J. Nilsen wrote:
>>> Turning off syntax checking of your POD is comparable to not
>>> turning on warnings in your code.  Now would you publish code
>>> developed without "use warnings;"?
>>
>> Now that's just silly.
>
>Is it?

Comparable.  Capable of being compared.  1913 Webster.

You can compare them, but it is silly.  An uninitialized value might
strike the user at runtime, so we leave warnings on when we ship.  The
pod will be unchanged, so we *test it*, and *then* ship it.

I like to put my socks on before my shoes, but after my pants.
Sometimes I put socks on before my pants, but shoes are always last.

--Eric
--
Turns out the optimal technique is to put it in reverse and gun it.
  --Steven Squyres (on challenges in interplanetary robot navigation)
--- http://scratchcomputing.com ---
Re: Fixing the damage caused by has_test_pod
# from David Cantrell
# on Wednesday 01 August 2007 03:21 am:

>>> Wrong.  If AUTHOR_TESTING then surely all tests *must* pass both
>>> with and *without* the optional module!
>>
>> What good does it do to skip the test if AUTHOR_TESTING?  That just
>> gets us back into the same situation as the current one, where the
>> author didn't have Test::Pod and pod which fails the tests got
>> shipped.
>
>Let us assume that I write a module that, if you have a particular
>module installed, will do some extra stuff.  This is very common.
>...
>Skipping tests because you correctly identify that the optional
>module isn't available is, of course, counted as passing.

Test::Pod is *not* optional to PERL_AUTHOR_TESTING.  If your intent is
to test the pod (here, I am taking PERL_AUTHOR_TESTING to imply that
we're trying to prevent bad pod), you must have the module that tests
it.

That is, to test the pod, you have to test the pod.  To put it another
way, the pod is not tested unless the pod tests have been run.  If the
pod tests didn't get run, the pod hasn't been tested.  A pod test
which skips due to 'no Test::Pod' has not tested the pod.  To test the
pod, you must run the pod tests.

--Eric
--
Peer's Law: The solution to the problem changes the problem.
--- http://scratchcomputing.com ---
page -> wiki -> wiki -> wiki
Can the responsible parties please clean up this chain of links and
shutdown/lock [1] the two abandoned wikis?

http://qa.perl.org/ [2] links to:
http://schwern.org/~schwern/cgi-bin/perl-qa-wiki.cgi which links to:
http://perl-qa.yi.org/index.php/Main_Page which has spam and links,
finally, to:
http://perl-qa.hexten.net/wiki/

[1] I think that's just: http://perl-qa.hexten.net/wiki/

[2] It would be nice to have a "website maintainer" e-mail
address/info at the bottom of the qa.perl.org page.

"<@rjbs> oh, and I meant the cpan testers wiki
 < ewilhelm> argh, there's *another* wiki?!"

Thanks,
Eric
--
The only thing that could save UNIX at this late date would be a new
$30 shareware version that runs on an unexpanded Commodore 64.
  --Don Lancaster (1991)
--- http://scratchcomputing.com ---
Re: Summarizing the pod tests thread
# from Joshua ben Jore
# on Thursday 02 August 2007 07:13 am:

>Just FYI, using valid pod like =head3 causes runtime failures during
>build on "older" versions of Pod::Whatever that's used during the
>module installation by EU::MM.

Good point.  And still, this is something which *can* be checked
somewhere other than the install target: by the author, CPANTS, or
cpan-testers.

--Eric
--
Issues of control, repair, improvement, cost, or just plain
understandability all come down strongly in favor of open source
solutions to complex problems of any sort.  --Robert G. Brown
--- http://scratchcomputing.com ---
Re: running author tests
# from David Cantrell
# on Thursday 02 August 2007 02:54 am:

>Eric Wilhelm wrote:
>> # from David Cantrell
>>
>>> Skipping tests because you correctly identify that the optional
>>> module isn't available is, of course, counted as passing.
>>...
>> To test the pod, you must run the pod tests.
>
>Seeing that you obviously think I'm an idiot, there's probably not
>much point continuing.

No, I just want to be clear that the pod must get tested by the pod
tests, and that "eval {require Test::Pod}" is the wrong trigger if
$ENV{PERL_AUTHOR_TESTING} is set.  The switch state implies that the
module is mandatory.  I (the author) don't want to accidentally skip
the test due to a missing/corrupt/old module (that is exactly why
we're having problems with the current t/pod.t invocation.)

If we're going to establish a recommendation for usage of
$PERL_AUTHOR_TESTING and/or an "author_t/" directory, it should behave
somewhat like 'use strict' in that it needs to actually run *all* of
the author tests (with very few exceptions.)  For such usage, extra
dependencies would only be optional in extreme cases.

Further, pod/pod-coverage would possibly even be 'assumed' tests.
That is, why bother even having a .t file if the authortest tool knows
how to run testpod and testpodcoverage?

--Eric
--
Like a lot of people, I was mathematically abused as a child.
  --Paul Graham
--- http://scratchcomputing.com ---
Re: formalizing "extra" tests
# from Salve J Nilsen
# on Thursday 02 August 2007 08:19 am:

>But this isn't a binary yes/no to POD tests issue.  There's no reason
>to make this into an all-or-nothing situation.  We can still let the
>end-user be the master of her own world, by allowing her to run the
>"less essential tests" only when she explicitly asks for it, e.g. by
>using $ENV{PERL_AUTHOR_TESTING}, or asking during setup if she wants
>to run the author tests (with default answer "no".)

Yep.  We do need to have "standard" ways to do this.  They need to be
incorporated into CPANTS and cpan-testers.  They shouldn't prevent
end-users from using otherwise-functioning code.  We also need to make
this information clear and accessible to CPAN authors.

http://perl-qa.hexten.net/wiki/index.php/Best_Practices_for_Testing#TODO

--Eric
--
If the above message is encrypted and you have lost your pgp key,
please send a self-addressed, stamped lead box to the address below.
--- http://scratchcomputing.com ---
Re: running author tests
# from Adriano Ferreira
# on Thursday 02 August 2007 01:13 pm:

>The behavior could be triggered by updating M::B and EUMM to make
>them run the extended glob "t/**/*.t" when PERL_AUTHOR_TESTING or
>AUTOMATED_TESTING is set.

No.  That breaks recursive tests, unless we also add code to skip
t/author/.  I think a new toplevel directory (e.g. ./author_t or
./author_test) is going to be most compatible.

--Eric
--
"If you only know how to use a hammer, every problem begins to look
like a nail." --Richard B. Johnson
--- http://scratchcomputing.com ---
a "standard" place for extra tests
Returning to something that came up in the 'testing pod' thread...

# from Adam Kennedy
# on Tuesday 31 July 2007 08:23 pm:

>A small nit: if we end up converging around the use of a relatively
>standard directory name, can we at least use something relatively
>self-evident?  For example "author/"?

I would like to suggest that we adopt, recommend, (and support with
tools, etc.) a "standard" convention for storing additional tests:

  MyModule-v0.0.1/
    xt/
      author/
      network/

Or something along those lines.  At this point, 'author' is the only
child directory I'm fairly certain of.  I am certain that more than
one 'extra tests' directory is needed, thus the thought to make them
into subdirectories (objections?)  (They cannot live under 't/' due to
compatibility issues.)

As for the 'xt' name: it tends to sort pretty soon after 't', as in
'these come next'.  I think anything starting with 't' would get
annoying as far as tab-completion goes.  Similarly, 't2' might
conflict with primitive version-control habits.  Better suggestions?

Aside: I've been playing with using non-'.t' file extensions for this
and storing them under t/, but this requires extra support from
Module::Build, runtests, prove, ack, etc.

The only drawback to putting them outside of the 't/' directory seems
to be that refactoring habits like 'grep $method -r t' might need
adjustment.  Vs. this "more .t files one directory to the right"
approach, where you can easily get a bug report by asking the user to
simply run `prove xt/network/`.

--Eric
--
"Matter will be damaged in direct proportion to its value."
  --Murphy's Constant
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from Smylers
# on Thursday 16 August 2007 11:40 pm:

>> I am certain that more than one 'extra tests' directory is needed,
>
>Why are you certain of this?

Because I already have a use for three 'profiles' in one project, and
I can name a few others from elsewhere:

  1. author (kwalitee, pod, etc.)
  2. gui
  3. network
  4. you must have an account/password on $external_service
  5. postgres/mysql/whatever availability/setup/permissions
  6. no modem attached to /dev/ttyS0

Thus: "more than one".  What I'm saying is that "author" is not enough
(and/or not descriptive enough of a top-level directory of "extra
tests".)

>Anything under this directory isn't going to get run with 'normal'
>tests, which is an unusual requirement.

It's not *that* unusual in the context of CPAN.  Anything that
requires intervention, setup, or access to a particular resource needs
to be skipped or bypassed during unattended pre-installation testing.

What I would ultimately like to see happen is that these
subdirectories fall into some form of standard(ish) naming scheme
which would allow testers and qa efforts to run particular subsets.
That is, the smoke box (or rigorous end-user upgrade verification
process) could set variables to test the author, 'gui', 'network', and
'postgres' tests[1] without stumbling into e.g. "must have an X.com
web-services account".

Which brings up a good point: I think "xt/foo.t" should be
discouraged.  That is, if the tests in xt might assume any number of
pre-configured things, the only safe assumption is "test what you know
you are prepared to test".  Thus:

  prove -rb xt/author
  prove -rb xt/network
  prove -rb xt/exhaustive

not `prove -rb xt/` -- which might crash for lack of a gui before it
gets around to testing the network bit.  Therefore, 'xt/foo.t' is a
problem because it has no category besides "it is extra".

[1] Yes, setup gets slightly tricky on the 'database', 'account', and
'devices' resources.  I like to imagine that those could be fed
information via some standard mechanism such as a per-distro config
data-source -- the target usage there is more in the context of
distributed teams of developers and rigorous end-users than cpan
testers.

--Eric
--
The only thing that could save UNIX at this late date would be a new
$30 shareware version that runs on an unexpanded Commodore 64.
  --Don Lancaster (1991)
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from Ovid
# on Friday 17 August 2007 12:40 am:

>>> As for the 'xt' name:
>>
>> Nobody will intuitively know what "xt" means
>
>Because 't/' is self-explanatory?

Yeah dude, it's like 't', but the x makes it eXtreme ;-)

--Eric
--
I eat your socks and you pay me.
  --The business sense of a very small goat.
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from chromatic
# on Friday 17 August 2007 11:28 am:

>... sort of like I've been doing for at least one and probably two
>years now with Module::Build and my custom disttest action, which I
>know I've mentioned a few times now, which leads me to wonder why
>people so conveniently forget it every time this discussion comes up.

Because:

  1. it doesn't play nicely with recursive tests[1]
  2. it requires a custom disttest action[2]

[1] Unless you filter out t/author (and t/gui (and t/network (and
...)))  It seems to me that having a whole directory of "and also"
would more easily allow one to cherry-pick which extras to run while
leaving "t/" as the "run all of these, recursively" spot.

[2] And that wouldn't be so bad if Module::Build had a plugin
mechanism, but it doesn't.  Thus, avoiding custom actions where they
can be avoided is nice because it reduces the (currently necessary)
copy/paste code reuse.

--Eric
--
So malloc calls a timeout and starts rummaging around the free chain,
sorting things out, and merging adjacent small free blocks into larger
blocks.  This takes 3 1/2 days.  --Joel Spolsky
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from Michael G Schwern
# on Friday 17 August 2007 11:13 am:

>> # when something different is wanted
>> author_tests: t/developer
>
>Now you need a YAML parser to run tests, which may be fine.

But you also need to update all of the tools, which may not be.

--Eric
--
But as soon as you hear the Doppler shift dropping in pitch, you know
that they're probably going to miss your house, because if they were
on a collision course with your house, the pitch would stay the same
until impact.  As I said, that one's subtle.  --Larry Wall
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from chromatic
# on Friday 17 August 2007 01:05 pm:

>On Friday 17 August 2007 12:52:54 Eric Wilhelm wrote:
>> 1. it doesn't play nicely with recursive tests[1]
>
>"It's okay to run everything in t/ recursively" is a heuristic.  You
>can tell it's a heuristic because sometimes it's wrong.

That's not what I meant by "doesn't play nicely".  I mean, "my project
cannot use recursive tests *and* have author-only tests in t/author
without jumping through hoops."  Hoops include: a custom test action,
special options (or patches) to prove and runtests, etc.

>Avoiding heuristics requires some sort of metadata.  Now a custom
>build action may be *bad* for other reasons, but it's not ambiguous,
>like a heuristic.

Let's forget for a second that I'm trying to assume anything about t/
or any other directory.  The simple fact is that the current tools
will behave more nicely if everything in $the_one_directory is of one
sort and everything in $the_other_directory is of the other sort.
Without that, we need code to sort the one from the other because they
were all thrown into the same bag.  Then we need an XML config file
for that code, and etc., etc.

>Perhaps we need to define:
>
>4) what options are there for distinguishing...
>5) which answer to #4 sucks the least
>
>...any answer to #4 which includes a heuristic will fail #5

Any answer which requires a config-file and modifications to existing
tools will also fail.  So, maybe we have to say "extra_tests: xt" (or
even some deeper data-structure) to avoid a heuristic.  Fine.  What
would that data-structure look like?

  extra_tests:
    base_dir: xt
    profiles:
      author: author
      network: network
      gui: gui
      exhaustive: exhaustive

The human will want the directory structure to mirror the names of the
profiles, right?  So now the human has to type more, but there's no
heuristic (well, except for the names of the keys in the config file.)

--Eric
--
Speak softly and carry a big carrot.
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from David Golden
# on Friday 17 August 2007 10:03 am:

>On 8/17/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>> 1. author (kwalitee, pod, etc)
>> 2. gui
>> 3. network
>> 4. you must have an account/password on $external_service
>> 5. postgres/mysql/whatever availability/setup/permissions
>> 6. no modem attached to /dev/ttyS0
>...
>As for different profiles, there's no reason that one couldn't have:
>
>author_t/release
>author_t/kwalitee
>author_t/gui
>author_t/network
>author_t/runs_for_27_hours
>etc.
>
>There really isn't a need to standardize the top level, either,
>unless there is a goal of adding support for it to Module::Build
>and/or ExtUtils::MakeMaker.

Yes, that is the goal.  The toplevel directory should be the same.
While the config-file approach would allow the name to vary, I think
that is over-engineering given that 't/' is an invariant "assumed
directory name".

>there's no need for any standardization below the top level.

Except in that it would conveniently allow smokers to "sign up" for
testing *specific* extra functionality.  That is, "yes, I have network
connectivity" means it is ok to run the "xtra_t/network/" tests.

I'm not saying we have to have a committee establish *all* of the
possible profile names.  If we can just say 'author', 'network',
'exhaustive', and 'gui' for now, I'll be happy.  Are we going to have
to argue over those names too?

>I think the important thing is establishing a *convention* of keeping
>tests intended for developers and not end users out of the "t/"
>directory.

Yes.  What I would like to do to motivate that is to provide the
incentive of a recommended alternative (one which is supported by
tools, smoke testers, etc.)

>And what percent of CPAN distros will really include so many author
>variations anyway?  Let's not let the perfect design be the enemy of
>the good.

As long as we don't hastily shoot ourselves in the foot.

--Eric
--
To succeed in the world, it is not enough to be stupid, you must also
be well-mannered.  --Voltaire
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from chromatic
# on Friday 17 August 2007 01:57 pm:

>On Friday 17 August 2007 13:20:39 Eric Wilhelm wrote:
>> "my project cannot use recursive tests *and* have author-only tests
>> in t/author without jumping through hoops."
>
>How are you getting recursive tests by default anywhere?

I'm not getting them by default.  I'm *using* the recursive
functionality for my test suite.  I cannot use t/author and
t/normal_stuff without filtering out "t/author".  I don't want to have
to filter out t/author and t/gui and t/network and t/account.

I want it to be convenient to run "all of the normal tests" and "all
of the extra tests" and "a few profiles from the extra tests" and "all
of the tests".  I want it to be convenient with `./Build` and `make`
and `prove` and `runtests`.

  prove -rb t
  prove -rb xt
  prove -rb xt/author xt/network xt/gui
  prove -rb t xt

>I just temporarily changed the Build.PL for Test::MockObject (which
>stores author-only tests in t/developer/) not to filter out that
>directory in the find_test_files() files method, and the developer
>tests still didn't run.

Now set 'recursive_test_files => 1' (like I do) and try it.  Then run
`prove -rb t` and `runtests -rb t`.

--Eric
--
"...the bourgeoisie were hated from both ends: by the proles, because
they had all the money, and by the intelligentsia, because of their
tendency to spend it on lawn ornaments."  --Neal Stephenson
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from Smylers
# on Friday 17 August 2007 03:13 pm:

>Eric Wilhelm writes:
>
>Why can't gui tests simply be in t/ and be skipped if the appropriate
>environment isn't available?  That way all users who are in that
>particular environment run the tests, rather than only those who've
>discovered the xt/gui/ directory.

"Can't you just" paste some 'if($ENV{WHATEVER})' code into all of your
tests?  Yes, "I can just".  I don't want to.  I don't want the test
suite to start 100 test processes and skip 90 of them.  I don't want
to maintain some randomish, pasted 'if($ENV{WHATEVER})' code in 20+
test suites.  I don't want other CPAN authors to have to do these
things either.  I don't want other CPAN authors to have to discover
"the recommended way" to do these things.

>> What I would ultimately like to see happen is that these
>> subdirectories fall into some form of standard(ish) naming scheme
>> which would allow testers and qa efforts to run particular subsets.
>> That is, the smoke box (or rigorous end-user upgrade verification
>> process) could set variables to test the author, 'gui', 'network',
>> and 'postgres' tests[1] without stumbling into e.g. "must have an
>> X.com web-services account".
>
>That requires a central standardized repository of permitted
>subdirectories and what needs to be done to run the tests there.

It might be useful, but is not strictly required.  I would still
prefer that to "everything is turing-complete and self-selecting".
Plus, self-selecting tests bring us back to the same point which
started all of this, which is that not all of them are absolutely
critical to the installation.

Part of my original point is that it would be useful[1] to define a
standardized mapping of what functionality or resources are required
to run typical subsets:

  author:  some pod testing, kwalitee, etc. prereqs
  network: network access
  gui:     a gui

Yes, the rest is a bit more complicated.  Here, I'll define the
arbitrary invent-your-own-thing ad-hoc asylum location right now:

  misc: everything else

>Surely it's better to let each distribution keep control of this, by
>specifying in code what's needed to run those particular tests (and
>skipping otherwise)?

It is only better in specific cases of "totally unlike everything
else".  Self-selecting tests are clearly (to me) worse for things as
simple and typical as pod, network, and gui because they are ad-hoc,
non-discoverable, and require pasting code or modifying all of the
tools.  There is nothing about categorization which prohibits ad-hoc
from occurring within it.  However, you cannot easily have
categorization within ad-hoc.

[1] Useful for developers, smoke testers, qa-testers, and end-users,
each of which have various reasons to want to run or not run various
categories of tests (and do it in a standard way.)

--Eric
--
If the above message is encrypted and you have lost your pgp key,
please send a self-addressed, stamped lead box to the address below.
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from Adriano Ferreira
# on Friday 17 August 2007 11:49 am:

>The only drawback is redundant information: there are the tests and
>the list of tests.  Keeping tests in "t/" with some fine way to
>discover what they are good for (according to Chris' idea) may be
>promising if not too magical.

I think keeping extra tests in "t/" is too magical for the existing
tools.  After all, I have tried it.  In order to have *both* recursive
tests (which many larger projects do) *and* extra tests, you have to
have magic.

My earlier attempt at this was to add test profiles to Module::Build
via file extensions which were declared in the Build.PL.  The trouble
is that those don't work with EU::MM or prove or runtests (or ack (and
even vi in some configurations.))

My conclusion from that was this "xt/foo/" thing, which requires zero
coding (or pasting) and works fine with all of the existing tools.
The "xt/foo/" thing solves that redundant information problem too.  I
thought to myself, "if this works so well for what I'm doing, wouldn't
it be great if it became a sort of standard?"

--Eric
--
"Ignorance more frequently begets confidence than does knowledge."
  --Charles Darwin
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from A. Pagaltzis
# on Saturday 18 August 2007 06:59 am:

>* Eric Wilhelm <[EMAIL PROTECTED]> [2007-08-18 02:25]:
>> Part of my original point is that it would be useful[1] to
>> define a standardized mapping of what functionality or
>> resources are required to run typical subsets:
>>
>>   author:  some pod testing, kwalitee, etc. prereqs
>>   network: network access
>>   gui:     a gui
>
>What about author tests for the GUI?  Or tests that need both the
>network and a GUI?

Or tests for pumping screenshots from the gui (run via a network
connection to the author's machine) into a postgres database and
uploading them to flickr?  Yes, it can get complicated.  No, it
doesn't have to always be complicated.  What we have now sucks.

So, more complicated scenarios need more complicated configuration.
This could be done via "the thing to be discussed in a different
thread" (e.g. "a config-file plus 'use skipme' in the test" under the
misc directory ("xt/misc/gui/foo.t" -- misc meaning that they are not
easily classifiable and/or are externally/internally configured.))

The main goal (well, my main goal) is to have some kind of
easy-to-understand recommendation for new authors (or even experienced
ones) on how to organize their tests when the tests are considered
optional[1].  I think a way to cover the basics with existing tools
(without having everything totally ad-hoc) is important.

I would prefer that this doesn't involve skipping tests.  That is,
there is less noise in the output if an optional test is never run
than if it starts and then says "oops, no, ignore me."  The tools
should decide which tests to execute.  I would like to be able to
categorize/sort/select tests without executing them.  I keep harping
on this point because I think it could enable a much richer
distributed testing environment.

[1] It seems that "optional" is a bit of a sticking point around here.
Essentially: when resources required by those tests are likely to
cause spurious failure, or when failure of a given test is otherwise
irrelevant to functionality, or when the tests are testing optional
features.

--Eric
--
"Matter will be damaged in direct proportion to its value."
  --Murphy's Constant
--- http://scratchcomputing.com ---
Re: a "standard" place for extra tests
# from brian d foy
# on Saturday 18 August 2007 01:33 pm:

>the solution is probably the same for any other suggestions: override
>parts of Test::Harness to discover the file names you want to test,
>or override runtests.

Or override nothing, and simply group the tests into directories.

--Eric
--
Turns out the optimal technique is to put it in reverse and gun it.
  --Steven Squyres (on challenges in interplanetary robot navigation)
--- http://scratchcomputing.com ---
Re: 't/' vs 'something different'
I agree with everything in this message except the choice of subject
line.

# from Ovid
# on Sunday 19 August 2007 05:44 am:

>What we appear to disagree on is how to handle non-MNF tests.  The
>main dividing line seems to be whether we should have a separate test
>directory.
>...
>There's also the question of recursive tests.

It's not really a question, though.  Some distros rely on 'recursive'
tests under 't/' for their MNF tests.  So, any solution which is
incompatible[1] with this usage cannot cover the general case.

[1] incompatible: not backwards-compatible with all existing tools

>So far I've not seen a single compelling argument *for* sticking
>everything in the 't/' directory, and I haven't seen a single
>compelling argument *against* a new directory.

Is anyone in this camp (the camp of "for 't/'" or "against 'something
different'") actually using a recursive test hierarchy for their MNF
tests?

--Eric
--
You can't whack a chisel without a big wooden mallet.
--- http://scratchcomputing.com ---