Easy test fixtures with DBIx::Class
I wanted an easier way to create test fixtures with DBIx::Class, so I wrote it. I've written about it here: http://blogs.perl.org/users/ovid/2014/02/easy-fixtures-with-dbix-class.html Since this is the QA group, I feel that you might appreciate this more than most. Bug reports and test cases very much appreciated! Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/
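For readers who don't want to click through: the post introduces DBIx::Class::EasyFixture. Here is a minimal sketch of the usage pattern from memory of its documentation; the fixture name, result class, and My::* package names are placeholders for illustration, not part of the post:

```perl
# Sketch only -- assumes DBIx::Class::EasyFixture's documented interface.
# My::Fixtures and the 'basic_customer' fixture are hypothetical names.
package My::Fixtures;
use Moose;
extends 'DBIx::Class::EasyFixture';

sub get_definition {
    my ( $self, $name ) = @_;
    return {
        new   => 'Customer',                       # result source name
        using => { name => 'Alice', active => 1 }, # column data
    } if 'basic_customer' eq $name;
    return;
}

1;

# In a test script (sketch):
#   my $fixtures = My::Fixtures->new( { schema => $schema } );
#   $fixtures->load('basic_customer');
#   ... run tests against the known data ...
#   $fixtures->unload;    # back to a clean state
```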
Re: TAP::Harness timeout?
I don't know how easy that is to do with Test::Harness, though my App::Prove::Plugin::ProgressBar[1] might put you in the right direction. Alternatively, Test::Class::Moose[2] has a "reporting" feature that reports on timing and breaks it down into system, real and user time. You should be able to hook into that (or just take advantage of the fact that it's an OO feature and use the normal alarm-based timeout techniques you would use for regular classes). 1. https://github.com/Ovid/App-Prove-Plugin-ProgressBar 2. http://search.cpan.org/dist/Test-Class-Moose/ Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/ On Wednesday, 29 January 2014, 6:50, Todd Rinaldo wrote: I’m looking at using TAP::Harness to process our test suite. Up to now we’ve been using some home-grown code that IMHO is a heroic attempt to re-implement TAP::Harness. > >It seems to do everything we need with one exception. We have rules that >disallow a unit test from taking more than XXX seconds to run. If it exceeds >that, we abort the test and move on to the next one, declaring the long test >file a failure. Before I look at subclassing, can anyone tell me if there’s a >way to do this with the existing code? > >Thanks, >Todd > > >
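For the alarm-based approach mentioned above, the usual core-Perl pattern for timing out one slow block of work looks something like this. A sketch only: the 60-second limit and do_expensive_work() are placeholders, and this guards a single block inside a test, not a whole test file, which is what Todd is ultimately after:

```perl
use strict;
use warnings;
use Test::More;

sub do_expensive_work { return 42 }    # stand-in for the real work

my $timeout = 60;                      # seconds; pick your own limit
my $result  = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm $timeout;
    my $value = do_expensive_work();
    alarm 0;                           # cancel the pending alarm
    $value;
};
if ( $@ && $@ eq "timeout\n" ) {
    fail "work exceeded ${timeout}s limit";
}
else {
    ok $result, 'expensive work completed in time';
}
done_testing();
```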
Re: New Test::Builder version broke Test::ParallelSubtest
Hi Jan, For running multiple tests in parallel, I'm not sure if there is a "recommended" alternative, but the two which immediately spring to mind are Fennec, by Chad Granum[1] and my Test::Class::Moose module[2], using the parallel role[2]. I've also uploaded a new slide deck for Test::Class::Moose[3] and it explains some new features, or you can watch a video about it (which is just a touch outdated, but all examples work)[4]. Note that for Test::Class::Moose, I've just uploaded 0.42. It fixes a subtle issue that one company ran into. You can wait for it to hit the CPAN or grab it from github[5]. 1. http://search.cpan.org/dist/Fennec/lib/Fennec.pm#RUNNING_FENNEC_TEST_FILES_IN_PARALLEL 2. http://search.cpan.org/dist/Test-Class-Moose/lib/Test/Class/Moose/Role/Parallel.pm 3. http://www.slideshare.net/Ovid/testclassmoose 4. http://www.youtube.com/watch?v=S1Z5_Ba860g 5. https://github.com/Ovid/test-class-moose I don't know much about Fennec, but the author seems competent and responsive. As for Test::Class::Moose, it's listed as BETA, but I know companies are already using it in production. Best, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/ On Tuesday, 14 January 2014, 0:30, Jan Seidel wrote: >Hi, > >I’m using the Test::ParallelSubtest module to run multiple tests in parallel. >However, a change in Test::Builder now broke this module. Every bg_subtest >block fails with an error like the following: > >not ok 1 - parse child output for 'test 1 ' ># Failed test 'parse child output for 'test 1 '' ># at >/home/sciadmin/workspace/ws_jan_fw/util/nirvana/CPAN/lib/perl5/Test/ParallelSubtest.pm > line 216. ># ERROR: bg_subtest "test 1 " (/home/…….. 
/jantest.t line 24) aborted: ># Parsing failure in Test::ParallelSubtest - cannot parse: ># [ # Subtest: test 1 ># ok 1 - test 1 ># ] > >I looked at the changes in Test::Builder and found that this change introduced >the error: >0.98_05 Tue Apr 23 17:33:51 PDT 2013 >New Features >*A subtest will put its name at the front of its results to make > subtests easier to read. [github #290] [github #364] > (Brendan Byrd) > > >I’m now wondering how to solve this issue. It looks like Test::ParallelSubtest >is not actively maintained. Is there a recommended alternative for it to run >multiple tests in parallel? > >Thanks, >Jan > > >
Parallel testing comes to Test::Class::Moose
In case you haven't seen it, I now have an experimental branch of Test::Class::Moose that provides built-in parallel testing. http://blogs.perl.org/users/ovid/2013/12/merry-christmas-parallel-testing-with-testclassmoose-has-arrived.html http://blogs.perl.org/users/ovid/2013/12/eating-my-own-dogfood-parallel-tests.html In short, just drop in the parallel testing role and have fun. The second link above shows a trivial schedule that allows you to mark methods as "noparallel" and have them run after the parallel tests. I'm thinking about making that the default behavior, but easily overridable by providing your own schedule() method. Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/
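Based on the two posts above, dropping in the role looks roughly like this. This is a sketch: the role name comes from the experimental branch, the Tags(noparallel) idea is from the second post, and details may well change before release:

```perl
# Sketch only; API taken from the experimental branch described above.
package TestsFor::My::Base;
use Test::Class::Moose;

# Consume the parallel role to get parallel test runs:
with 'Test::Class::Moose::Role::Parallel';

# Mark a method that must not run alongside others; per the second
# post, such methods run after the parallel batch finishes:
sub test_touches_shared_state : Tags(noparallel) {
    my $test = shift;
    ok 1, 'safe to touch shared state here';
}

1;
```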
Re: How to Port https://metacpan.org/module/Test::Run::Plugin::TrimDisplayedFilenames to TAP::Harness
Hi Shlomi, My definitive answer would be to say "work with Leon" on this :) He's taken over maintenance on TAP::Harness. I haven't looked at that section of the code in quite some time and honestly, I don't have the time/energy to do so right now. Best, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/ On Thursday, 28 November 2013, 9:39, Shlomi Fish wrote: On Tue, 26 Nov 2013 14:00:30 +0100 >Leon Timmermans wrote: > >> On Fri, Sep 20, 2013 at 8:29 PM, Shlomi Fish wrote: >> >> > I read that post and have one question: can I easily create several >> > specialised >> > plugins and have them all apply their modified behaviours to the relevant >> > part >> > of TAP::Harness? Seems like I can only set up a single subclass of the >> > relevant >> > parts in each plugin (like for the formatter or whatever). Or am I missing >> > something? >> > >> > I'm asking because with Test::Run, I can set up more than one plugin for >> > each >> > class and I'm wondering how to do that with TAP::Harness. >> > >> >> I think you may want to write a set of subclasses with the appropriate >> hooks (e.g. a TAP::Formatter::Extensible), and use that to write plugins >> for. >> >> Leon > >Hi Leon, > >thanks for the response, but I think I'll wait for a definitive answer from >Ovid, who originally wrote TAP::Harness, so I'll know whether I need to invest >the extra effort in doing that. Sorry if you're disappointed. > >Regards, > > > Shlomi Fish > >-- >- >Shlomi Fish http://www.shlomifish.org/ >Free (Creative Commons) Music Downloads, Reviews and more - http://jamendo.com/ > >What does “IDK” stand for? I don’t know. > >Please reply to list if it's a mailing list post - http://shlom.in/reply >. > >
Re: TAP::Harness and -w
As I said in my previous email on July 7th: backwards-incompatible changes to the backwards-compatibility layer (Test::Harness) are not a good idea. The proper response is to have people impacted by this issue switch to TAP::Harness, as was suggested several years ago when Test::Harness 3.0 was released. For example, the 'prove' utility calls App::Prove which calls TAP::Harness. If others are using Test::Harness directly, perhaps Eric is right and it should be deprecated? However, it's a core module and I don't know the implications of that. Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/ > > From: Leon Timmermans >To: Ovid >Cc: Ricardo Signes ; "perl-qa@perl.org" > >Sent: Tuesday, 17 September 2013, 17:26 >Subject: Re: TAP::Harness and -w > > > >On Sun, Jul 7, 2013 at 11:45 AM, Ovid wrote: > >I'm winding up with astonishingly little bandwidth due to launching our >company, so I was hoping to see a strong consensus from the users. I would >also love to see examples of where the change or lack thereof is causing an >issue. I am SWAMPED with so much email that receiving many opinions piecemeal >makes it hard for me to follow along. >> >>Were I not so bandwidth-constrained, this would be less of an issue, but I'd >>like to see a good Wiki page or something with the pro/con arguments laid >>out. If this is too much, I should turn over maintainership to someone with >>more bandwidth to ensure I'm not a blocker. >> > >Just as I expected, "make it a wiki" means it gets warnocked again. > > >Can we please make a decision, or if we must first come to an agreement on how >to make it? > >Leon > > >
Re: How to Port https://metacpan.org/module/Test::Run::Plugin::TrimDisplayedFilenames to TAP::Harness
Here's a complete example of a TAP::Harness plugin to create a red/green progress bar. http://blogs.perl.org/users/ovid/2010/05/making-testharness-output-a-progress-bar.html Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/ > > From: Shlomi Fish >To: perl-qa@perl.org >Sent: Friday, 6 September 2013, 17:16 >Subject: How to Port >https://metacpan.org/module/Test::Run::Plugin::TrimDisplayedFilenames to >TAP::Harness > > >Hi all, > >I'd like to know what is the best way to create a plugin for >https://metacpan.org/module/TAP::Harness which will behave similarly to >https://metacpan.org/module/Test::Run::Plugin::TrimDisplayedFilenames . I found >out that the runtests method can accept aliases to be displayed instead of the >filename itself using an arrayref of [ $test, $alias ], so I can simply wrap >runtests() in a subclass, process the arguments, and call next::method with the >modified arguments. > >However, I still don't know how to write a plugin like that exactly (and how to >get prove to recognise it). This section - >https://metacpan.org/module/TAP::Harness#WRITING-PLUGINS - explains a bit about >how to do that with some hand-waving, but does not show any complete >top-to-bottom example, and I could not find anything with a metacpan search. > >My motivation for doing this is to port the rest of the functionality I miss in >Test::Run (which failed to gain mainstream acceptance, and few people >aside from me are using it) into TAP::Harness. > >Regards, > > Shlomi Fish > >-- >- >Shlomi Fish http://www.shlomifish.org/ >Selina Mandrake - The Slayer (Buffy parody) - http://shlom.in/selina > >bzr is slower than Subversion in combination with Sourceforge. > — Sjors, http://dazjorz.com/ > >Please reply to list if it's a mailing list post - http://shlom.in/reply . > >
Re: TAP::Harness and -w
- Original Message - > From: Karen Etheridge > > On Sun, Jul 07, 2013 at 02:45:22AM -0700, Ovid wrote: >> Were I not so bandwidth-constrained, this would be less of an issue, but > I'd like to see a good Wiki page or something with the pro/con arguments > laid out. If this is too much, I should turn over maintainership to someone > with > more bandwidth to ensure I'm not a blocker. > > wiki page created: > https://github.com/Perl-Toolchain-Gang/Test-Harness/wiki/TAP::Harness-and--the-w-flag Karen, Thank you for putting that together. It made it much easier for me to follow this! The Problem: In Test::Harness, many people want to see -w no longer being enabled by default. Others object to this change because it alters the behavior and we should tread carefully, particularly since this change affects everyone who installs modules. Backwards-incompatible changes to toolchain modules should generally not be done lightly. What's worse, different people have different desired behaviors here (mainly ribasushi, as far as I can tell, but I don't know what the silent masses think). That being said, I tend to agree that -w should not be forced on those who do not want it. Solutions: 1. Make the change. 2. Don't make the change. 3/4. Make this very easy to configure but keep/change the old behavior. We have a change that people want, but as Eric pointed out, it introduces an incompatibility in the Test::Harness backwards-compatibility layer. If I'm to be conservative on this, I have to say that the one place we *don't* want incompatible changes is in a backwards-compatibility layer! I originally wrote TAP::Harness to be very configurable (we can argue later if I succeeded). Since Test::Harness is merely a compatibility layer on top of TAP::Harness, I like Eric Wilhelm's suggestion of software switching to TAP::Harness instead of Test::Harness. The basic change is simple. 
Instead of:

    use Test::Harness;
    runtests(@test_files);

You do this:

    use TAP::Harness;
    my $harness = TAP::Harness->new( \%args );
    $harness->runtests(@tests);

Of course, the devil is in the details and I imagine that many tools will be more seriously impacted. This allows us to maintain backwards-compatibility and gives users a better interface, to boot! The Real Question: What toolchain software is being impacted by this and how hard would it be to make the switch? People have long known that TAP::Harness is a better alternative to Test::Harness, but the compatibility layer meant that people wouldn't have to make that switch right away. I suppose that some time in the past 5 1/2 years I should have been urging people to make the change. Perhaps now there is a reason to make that switch? Admittedly, the above sounds like a remarkably self-serving way of passing the buck (very handy right now), but the question is very real: what are the obstacles to those wanting a different behavior in making a switch? Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/
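On the -w question specifically, note that once software calls TAP::Harness directly, the warnings flag is under its own control through the documented 'switches' argument. A sketch ('lib' shown only for completeness):

```perl
use TAP::Harness;

# An explicit, empty switches list: the harness adds no flags
# (such as -w) to the perl that runs each test file.
my $harness = TAP::Harness->new(
    {
        lib      => ['lib'],
        switches => [],    # or ['-w'] to opt back in
    }
);
$harness->runtests(@tests);
```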
Re: TAP::Harness and -w
- Original Message - > From: Ricardo Signes > > * Leon Timmermans [2013-07-04T14:04:21] >> By what process? Define consensus? Given Andy is the official >> maintainer and Ovid is the effective maintainer, I don't think they >> need our consensus a priori. > > 06perms.txt says: > > Test::Harness,ANDYA,m > Test::Harness,MSCHWERN,c > Test::Harness,OVID,c > > So presumably at least getting the three of them to agree on some kind of > resolution process is a start. I'm winding up with astonishingly little bandwidth due to launching our company, so I was hoping to see a strong consensus from the users. I would also love to see examples of where the change or lack thereof is causing an issue. I am SWAMPED with so much email that receiving many opinions piecemeal makes it hard for me to follow along. Were I not so bandwidth-constrained, this would be less of an issue, but I'd like to see a good Wiki page or something with the pro/con arguments laid out. If this is too much, I should turn over maintainership to someone with more bandwidth to ensure I'm not a blocker. Cheers, Ovid -- IT consulting, training, international recruiting http://www.allaroundtheworld.fr/. Buy my book! - http://bit.ly/beginning_perl Live and work overseas - http://www.overseas-exile.com/
Re: How might we mark a test suite isn't parallalizable?
> From: Mark Stosberg > >> OK, but you still have to clean out your database before you start each >> independent chunk of your test suite, otherwise you start from an >> unknown state. > >In a lot of cases, this isn't true. This pattern is quite common: > >1. Insert entity. >2. Test with entity just inserted. > >Since all that my test cares about is the unique entity or entities, the >state of the rest of the database doesn't matter. The state that matters is >in a "known state". For many of the test suites I've worked on, the business rules are complex enough that this is a complete non-starter. I *must* have a database in a known-good state at the start of every test run. Otherwise I can't trust even a simple assertion like:

    is $customer_table->count, 2, "We should find the correct number of records";

>We have a cron job that runs overnight to clean up anything that was >missed in Jenkins' runs. No offense, but that scares me. If this strategy was so successful, why do you even need to clean anything up? You can accumulate cruft forever, right? For example, I might want to randomize the order in which I run my tests (theoretically, the order in which you run separate test cases SHOULD NOT MATTER), but if I don't have a clean environment, I can't know if a passing test is accidentally relying on something a previous test case created. This often manifests when a test suite passes but an individual test program fails (and vice versa). That's a big no-no. (Note that I distinguish between a test case and a test: a test case might insert some data, test it, insert more data, test the altered data, and so on. There are no guarantees in that scenario if I have a dirty database of unknown state). >We expect our tests to generally work in the face of a "dirty" database. >If they don't, that's considered a flaw in the test. Which implies that you might be unknowingly relying on something a previous test did, a problem I've repeatedly encountered in poorly designed test suites. 
>This is important when running several tests against the same database at the >same time. Even if we did wipe the database before we tested, all the other >tests running in parallel would be considered to be making the database >"dirty". Thus, if a pristine database is a requirement, only one test could >run against the database at a time. There are multiple strategies people use to get around this limitation, but this is the first time I've ever heard of anyone suggesting that a dirty test database is desirable. >We run our tests 4x parallel against the same database, matching the >cores available in the machine. Your tests run against a different test database per pid. Or you run them against multiple remote databases with TAP::Harness::Remote or TAP::Harness::Remote::EC2. Or you run them single-threaded in a single process instead of multiple processes. Or maybe profiling exposes issues that weren't previously apparent. Or you fall back on a truncating strategy instead of rebuilding (http://www.slideshare.net/Ovid/turbo-charged-test-suites-presentation). That's often a lot faster. There are so many ways of attacking this problem which don't involve trying to debug an unknown, non-deterministic state. >We also share the same database between developers and the test suite. >This "dirty" environment can work like a feature, as it can sometimes >produce unexpected and "interesting" states that were missed by a >clean-room testing approach that so carefully controlled the environment >that it excluded some real-world possibilities. I've been there in one of my first attempts at writing tests about a decade ago. I got very tired of testing that I successfully altered the state of the database only to find out that another developer was running the test suite at the same time and also altered the state of the database and both of us tried to figure out why our tests were randomly failing. 
I'll be honest, I've been doing testing for a long, long time and this is the first time that I can recall anyone arguing for an approach like this. I'm not saying you're wrong, but you'll have to do a lot of work to convince people that starting out with an effectively random environment is a good way to test code. Cheers, Ovid -- Twitter - http://twitter.com/OvidPerl/ Buy my book - http://bit.ly/beginning_perl Buy my other book - http://www.oreilly.com/catalog/perlhks/ Live and work overseas - http://www.overseas-exile.com/
Re: Anyone want Test::Class::Moose?
Hi Mark, > > From: Mark Stosberg >As I looked more at Test::Class::Moose, one thing I really like is that >plans are completely gone. Thank you. You're welcome. They're inferred at the suite and class level, but with an implicit "done_testing()" for each method. It's not perfect, but the alternatives seemed a touch worse. >Two questions: > >1. About this: "use Test::Class::Moose;" > >Why not standard inheritance to add Test::Class functionality? > >It looks like the rationale here is to save a line of boilerplate with the >"use Moose" line. Test::Class::Moose is explicitly coupled with Moose, so having a "use Moose" line is both redundant and error-prone. If it's required and you forget it, oops. I've given you a source of bugs you didn't need. If it's not required, why write it? >2. About this syntax for extending a test class: > use Test::Class::Moose parent => 'TestsFor::Some::Class'; > >why not use standard inheritance in a class, and extend a class using >Test::Class? Or could you use 'extends' in the import list here to look more >Moose-y? I think "extends" might be better. Good call. I don't use standard inheritance because the various solutions for that don't allow for both inheriting from a class and exporting functions (in this case, ok(), is(), eq_or_diff(), and so on). Cheers, Ovid -- Twitter - http://twitter.com/OvidPerl/ Buy my book - http://bit.ly/beginning_perl Buy my other book - http://www.oreilly.com/catalog/perlhks/ Live and work overseas - http://www.overseas-exile.com/
Anyone want Test::Class::Moose?
Hi all, People keep asking me how to properly integrate Moose with Test::Class. I know about Test::Able and some alternatives, but I *generally* like Test::Class's interface (or maybe I'm just in a comfort zone). So I wrote my own Test::Class::Moose (it does not use Test::Class) and it uses subtests all the way down. Every test class is one test. Every test method is a subtest in the test class. Every test in a test method is, well, one test in a test method. Here's a simple test class which, in turn, inherits from another test class (you get all of the functions from Test::Most along with Moose helper functions, if desired):

    package TestsFor::Basic::Subclass;
    use Test::Class::Moose parent => 'TestsFor::Basic';

    sub test_startup {
        my $test = shift;
        $test->next::method;
        # more startup here, but tests are not allowed
    }

    sub test_me {
        my $test  = shift;
        my $class = $test->this_class;
        ok 1, "I overrode my parent! ($class)";
    }

    before 'test_this_baby' => sub {
        my $test  = shift;
        my $class = $test->this_class;
        pass "This should run before my parent method ($class)";
    };

    sub this_should_not_run {
        fail "We should never see this test";
    }

    sub test_this_should_be_run {
        for ( 1 .. 5 ) {
            pass "This is test number $_ in this method";
        }
    }

    1;

Note that attributes are not required on the test methods. Methods which start with "test_" are considered test methods (you can override this behavior, of course). That includes the test control methods of test_startup(), test_setup(), and so on. You should be able to consume roles or use the rest of Moose any way you think you should. Here's how we load and run tests:

    use Test::Class::Moose::Load qw(t/lib);
    Test::Class::Moose->new({
        # timing on a class and method level
        show_timing => 0,
        # how many classes, methods and tests
        statistics  => 1,
    })->runtests;

My main concern is that the nested subtests will annoy people:

    #
    # Executing tests for TestsFor::Basic::Subclass
    #
    # TestsFor::Basic::Subclass->test_me()
    ok 1 - I overrode my parent! (TestsFor::Basic::Subclass)
    1..1
    ok 1 - test_me
    # TestsFor::Basic::Subclass->test_this_baby()
    ok 1 - This should run before my parent method (TestsFor::Basic::Subclass)
    ok 2 - whee! (TestsFor::Basic::Subclass)
    1..2
    ok 2 - test_this_baby
    # TestsFor::Basic::Subclass->test_this_should_be_run()
    ok 1 - This is test number 1 in this method
    ok 2 - This is test number 2 in this method
    ok 3 - This is test number 3 in this method
    ok 4 - This is test number 4 in this method
    ok 5 - This is test number 5 in this method
    1..5
    ok 3 - test_this_should_be_run
    1..3
    ok 1 - TestsFor::Basic::Subclass
    #
    # Executing tests for TestsFor::Basic
    #
    # TestsFor::Basic->test_me()
    ok 1 - test_me() ran (TestsFor::Basic)
    ok 2 - this is another test (TestsFor::Basic)
    1..2
    ok 1 - test_me
    # TestsFor::Basic->test_this_baby()
    ok 1 - whee! (TestsFor::Basic)
    1..1
    ok 2 - test_this_baby
    1..2
    ok 2 - TestsFor::Basic
    1..2
    # Test classes:    2
    # Test methods:    5
    # Total tests run: 11
    ok
    All tests successful.
    Files=1, Tests=2, 2 wallclock secs
    Result: PASS

So, does this look useful for folks? Is there anything you would change? (It's trivial to assert plans for classes and the entire test suite rather than rely on done_testing(), but I haven't done that yet). Cheers, Ovid -- Twitter - http://twitter.com/OvidPerl/ Buy my book - http://bit.ly/beginning_perl Buy my other book - http://www.oreilly.com/catalog/perlhks/ Live and work overseas - http://www.overseas-exile.com/
Re: preforking prove
Hi Jonathan, I have just one question. You wrote that you're using Test::Class and "many of the tests start by loading a bunch of the same modules". That confuses me as Test::Class was originally designed to speed up test suites and did so by loading everything *once* in the same process. Are you using a separate .t script per test class? That would cause the reloading. Otherwise, a single .t script loading all of your test classes (Test::Class::Load helps) should help you load all classes at once. Assuming they have a sane setup and all call their own test control methods (startup/setup/teardown/shutdown), you could drop something like this in your base class:

    # see also http://www.slideshare.net/Ovid/a-whirlwind-tour-of-testclass
    sub setup : Tests(setup) {
        my $test = shift;
        $test->reset_singletons;
    }

Yes, it's a hack to work around the fact that you have a bunch of singletons with mutable state, but it seems like this would be much easier (though perhaps less fun :) than your preforking solution. Cheers, Ovid -- Twitter - http://twitter.com/OvidPerl/ Buy my book - http://bit.ly/beginning_perl Buy my other book - http://www.oreilly.com/catalog/perlhks/ Live and work overseas - http://www.overseas-exile.com/ > > From: Jonathan Swartz >To: perl-qa >Sent: Tuesday, 6 November 2012, 2:03 >Subject: preforking prove > >We have a large slow test suite at work (Test::Class, 225 classes, about 45 >minutes run time). Many of the tests start by loading a bunch of the same >modules. Obviously we could speed things up if we could share that loading >cost. > >I'm aware of Test::Aggregate and Test::Aggregate::Nested, but a number of our >tests run into the caveats (e.g. they change singletons -- yes, this is not >ideal, but also not easily changeable). 
> >I thought of an alternative to Test::Aggregate called "preforking prove", or >pfprove, which does the following: > >* Accepts the same arguments as prove >* Preloads the module(s) in -M >* Launches a Starman server in the background >* For each test file, makes a request to the Starman server. The Starman child >runs the test and exits. > >The idea is that you preload all the common stuff in the Starman parent, so >that forking each child and running each test is fast (at least on Linux). > >Potential advantages over Test::Aggregate: >* runs one test per process, so avoids some caveats >* keeps the TAP in traditional form (one TAP stream per file) >* works well with parallel prove > >Potential disadvantages: >* lots of extra complexity (requires Starman or similar, need to make sure it >shuts down, need to handle errors, etc.) > >Curious what you think. Is there something like this out there already? >Potential problems? > >If I do this I might call it Test::Aggregate::Preforking, just to keep it in >the same category. > >Thanks! >Jon > > > >
Re: TPF Devel::Cover grant report Week 18
From: Jeffrey Thalhammer >To: Ovid >Cc: Paul Johnson ; "perl-qa@perl.org" >Sent: Monday, 1 October 2012, 22:48 >Subject: Re: TPF Devel::Cover grant report Week 18 > > >On Oct 1, 2012, at 2:00 AM, Ovid wrote: > >> For others: yes, I know that PPI offers cyclomatic complexity information. I >> would like to see all code coverage information provided in "one-stop >> shopping" for Perl rather than relying on a separate tool (and I don't know >> that PPI exposes the control-flow graph used to calculate the cyclomatic >> complexity. Does it?). > >To my knowledge, PPI doesn't compute cyclomatic complexity. But Perl::Critic >approximates it just by counting conditional operators and keywords. > >-Jeff D'oh! You're right. I didn't realize that P::C only approximates it. For those who haven't seen it, it's the "McCabe score" when you use --statistics:

    $ perlcritic lib/ --statistics

    5 files.
    21 subroutines/methods.
    177 statements.

    450 lines, consisting of:
        59 blank lines.
        10 comment lines.
        4 data lines.
        207 lines of Perl code.
        170 lines of POD.

    Average McCabe score of subroutines was 1.52.

Cheers, Ovid -- Twitter - http://twitter.com/OvidPerl/ Buy my book - http://bit.ly/beginning_perl Buy my other book - http://www.oreilly.com/catalog/perlhks/ Live and work overseas - http://www.overseas-exile.com/
Re: Proposal Test::TAPv13
> > From: Michael G Schwern >I think we don't have to guess at that. Structured diagnostics are only >associated with test results (and possibly start/end of test info). They're >not used like diag() is now as both a warning mechanism and a test result >information system. Just a minor comment: yes, we would also want structured diagnostics at the start and end of each test (and subtest). There are numerous reasons. Anyone who uses Test::Class extensively would be comfortable with this when they think about startup/setup/teardown/shutdown test control methods. Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/
Including dirs in Devel::Cover
Hi all, This is Devel::Cover 0.88 and perl version 5.12.2. I have been searching like mad for the answer and can't figure it out. I have been tasked with cleaning up some legacy code which is poorly written, but has an astonishingly large number of tests. Some of these tests run code in files like this: fcgi/*.fcgi I would very much like to include those in my coverage reports. In fact, I'd love to ensure that I can include *everything* (regardless of extension) in lib/, fcgi/, and utils/ and *nothing* in any other directories. This is one of my many attempts:

    HARNESS_PERL_SWITCHES=-MDevel::Cover=+inc,fcgi,+inc,lib,+inc,util prove -rl t

FAIL! I've also tried creating simple Build.PL or Makefile.PL scripts and keep getting "No tests defined" when I run things like 'cover -test' or './Build testcover'. I see lots of people asking questions like this in various places on the Web. A cookbook of examples would be lovely :) Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/
Re: Can TAP::Parser already parse nested TAP?
Hi Steffen, TAP::Parser ran into a few technical issues with nested TAP. That's why the specification says that the first line *after* the nested TAP is a summary line indicating success or failure of the TAP block. As a result, TAP::Parser doesn't parse the nested TAP itself, but the format is backwards-compatible, allowing us to still deliver correct results (this assumes that the "summary line" success or failure matches the success or failure of the nested TAP). Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/ > > From: Steffen Schwigon >To: perl-qa@perl.org >Sent: Friday, 1 June 2012, 12:08 >Subject: Can TAP::Parser already parse nested TAP? > >Hi! > >I just got confused about the state of parsing nested TAP. > >I thought at least TAP::Parser (v24, from github) would already parse >nested TAP. Doesn't it? Do I need to turn it on somehow? > >My expectation came from Test::More propagating it like this: > > subtest 'An example subtest' => sub { > plan tests => 2; > > pass("This is a subtest"); > pass("So is this"); > }; > >but TAP::Parser parses the subtests as "unknown", regardless of me >declaring "TAP version 13" or not. > >I'm mostly interested in the state of implementation, I'm aware of the >spec discussion is in flux. > >Thanks in advance for any clarification. > >Kind regards, >Steffen >-- >Steffen Schwigon >Dresden Perl Mongers <http://dresden-pm.org/> > > >
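Concretely, the format described above puts a summary line after the indented block, so a subtest like Steffen's would come out roughly as follows (a sketch of the then-draft syntax under discussion, not a normative example):

```
    ok 1 - This is a subtest
    ok 2 - So is this
    1..2
ok 1 - An example subtest
1..1
```

A consumer that cannot parse the indented lines can skip them and still score the file correctly from the trailing "ok 1" summary line, which is the backwards-compatibility property mentioned above.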
Re: Revert use_ok() change to allow lexical effects?
- Original Message -
> From: Michael G Schwern
>> But it fails to DWIW: report clearly on failures. Perhaps what it is doing
>> is not so simple, after all?
>
> Personally I'm a fan of "scroll up and read the first failure".
> It always works!

Try it on a Test::Class test suite running thousands of tests in a single process, whizzing past on your terminal :)

Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Revert use_ok() change to allow lexical effects?
- Original Message -
> From: Aristotle Pagaltzis
>
> * Michael G Schwern [2012-04-11 18:35]:
>> Nope, too much magic for too small a use case.
>
> And faithfully duplicating `use` would be less so? :-)
>
> I don’t see how it is any more magic than `done_testing`.

Because done_testing is applicable to every test module and solves the far more common issue of having to maintain a plan. I would argue that done_testing is much more necessary than an AutoBailout. That being said, I'd use AutoBailout a lot more often if it were available. Then again, as Schwern pointed out, that's why I wrote Test::Most :)

> What I don’t like about duplicating `use` is that you need to diddle
> internals and ...

I think Schwern's not arguing against this. He's just trying to figure out the best way forward.

Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Revert use_ok() change to allow lexical effects?
Or more simply:

    use Test::More tests => 1;

    my $ok;
    END { BAIL_OUT "Could not load all modules" unless $ok }

    use Test::Trap::Builder::TempFile;
    use Test::Trap::Builder::SystemSafe;
    use Test::Trap::Builder;
    use Test::Trap;
    use if eval "use PerlIO; 1", 'Test::Trap::Builder::PerlIO';

    ok 1, 'All modules loaded successfully';
    $ok = 1;

Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

- Original Message -
> From: Ovid
> To: "sidhe...@allverden.no" ; "perl-qa@perl.org"
> Cc:
> Sent: Wednesday, 11 April 2012, 16:22
> Subject: Re: Revert use_ok() change to allow lexical effects?
>
> - Original Message -
>> From: The Sidhekin
>
>> * How would you rewrite a test script such as my own
>> http://cpansearch.perl.org/src/EBHANSSEN/Test-Trap-v0.2.2/t/00-load.t so
>> that it does not use use_ok()?
>> * Why would you? :-\
>
> Just a quick hack:
>
>     use Test::More;
>
>     BEGIN {
>         my @modules = qw(
>             Test::Trap::Builder::TempFile
>             Test::Trap::Builder::SystemSafe
>             Test::Trap::Builder
>             Test::Trap
>         );
>         push @modules => 'Test::Trap::Builder::PerlIO'
>             if eval "use PerlIO; 1";
>         plan tests => scalar @modules;
>
>         for my $module (@modules) {
>             eval "use $module";
>             BAIL_OUT $@ if $@;
>         }
>     }
>
> With that, you're using the actual use builtin and not worrying about extra
> code that might or might not be obscuring problems.
>
> Cheers,
> Ovid
> --
> Live and work overseas - http://www.overseas-exile.com/
> Buy the book - http://www.oreilly.com/catalog/perlhks/
> Tech blog - http://blogs.perl.org/users/ovid/
> Twitter - http://twitter.com/OvidPerl/
Re: Revert use_ok() change to allow lexical effects?
- Original Message -
> From: The Sidhekin
>
> * How would you rewrite a test script such as my own
> http://cpansearch.perl.org/src/EBHANSSEN/Test-Trap-v0.2.2/t/00-load.t so
> that it does not use use_ok()?
> * Why would you? :-\

Just a quick hack:

    use Test::More;

    BEGIN {
        my @modules = qw(
            Test::Trap::Builder::TempFile
            Test::Trap::Builder::SystemSafe
            Test::Trap::Builder
            Test::Trap
        );
        push @modules => 'Test::Trap::Builder::PerlIO'
            if eval "use PerlIO; 1";
        plan tests => scalar @modules;

        for my $module (@modules) {
            eval "use $module";
            BAIL_OUT $@ if $@;
        }
    }

With that, you're using the actual use builtin and not worrying about extra code that might or might not be obscuring problems.

Cheers, Ovid -- Live and work overseas - http://www.overseas-exile.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Perl QA Hackathon: What was accomplished?
Amongst other things, I released a more robust version of DB::Color (http://search.cpan.org/dist/DB-Color/). It provides syntax highlighting when you use the Perl debugger from the command line. The new version does not require a .perldb file (though you can still use one). You can now do this:

    perl -MDB::Color some_program.pl

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

> From: James E Keenan
> To: perl-qa@perl.org
> Sent: Saturday, 7 April 2012, 14:57
> Subject: Re: Perl QA Hackathon: What was accomplished?
>
> On 4/6/12 9:14 PM, David Golden wrote:
>>
>> I don't know if this page is on the main page, yet, but this is the
>> running summary page:
>>
>> http://2012.qa-hackathon.org/qa2012/wiki?node=Results
>>
>
> Thanks, that's what I was looking for. But it would also be good to
> have a write-up on this list, since a permanent archiving structure for
> the list already exists.
>
> jimk
Re: Strange interaction between new Test::More and Test::Builder::Tester
- Original Message -
> From: Buddy Burden
>
> Guys,
>
> I'm getting CPAN Tester failures that look like this:
>
>     # STDOUT is:
>     # ok 1 # SKIP symlink_target_is doesn't work on systems without symlinks!
>     #
>     # not:
>     # ok 1 # skip symlink_target_is doesn't work on systems without symlinks!
>     #
>     # as expected
>
> and, in case it doesn't jump out at you what the problem is (I had to
> stare at it 4 or 5 times before I caught it), it's the capitalization
> of "skip" (or "SKIP", as the case may be). Now, these failures are a
> minority, and the trend I'm seeing is that the failures all seem to
> come from Test::More 1.005000_002, whereas the passes all seem to come
> from versions pre 1.x. Now, I've never really used
> Test::Builder::Tester before (I've previously used Test::Tester), but
> I got co-maint on this module (Test::File), and that's what it uses,
> and it seems to work pretty well ... except for this one thing. So I
> don't really want to change it, and anyway I'm sure I'm not the only
> person who is/will be seeing this sort of problem.

Hi Buddy,

I'm not sure what's going on here. You've mentioned Test::More, Test::Builder::Tester, Test::Tester, and Test::File. I don't know exactly what is causing the problem you have with your tests, but if you check TAP::Parser::Grammar (https://metacpan.org/module/TAP::Parser::Grammar) you'll see that the SKIP (and TODO) directives are case-insensitive. Thus, both SKIP and skip should be fine. If something is marking that as a failure, it's probably ignoring the case-insensitivity of directives or it's expecting an exact text match. In any event, I can't tell how to reproduce the issue from the plethora of modules you've listed. Can you send a small code example of a test failure?

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
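The case-insensitivity mentioned above can be sketched with a tiny matcher. This is a toy illustration only, not the actual TAP::Parser::Grammar code, and parse_directive is a made-up helper:

```perl
use strict;
use warnings;

# Toy illustration only -- NOT the real TAP::Parser::Grammar code.
# TAP directives (SKIP/TODO) are matched case-insensitively, so
# "# SKIP foo" and "# skip foo" are the same directive.
sub parse_directive {
    my ($tap_line) = @_;
    return unless $tap_line =~ /#\s*(SKIP|TODO)\b\s*(.*)/i;
    return ( uc($1), $2 );    # normalize the keyword, keep the explanation
}

my ($k1) = parse_directive(q{ok 1 # SKIP symlink_target_is doesn't work});
my ($k2) = parse_directive(q{ok 1 # skip symlink_target_is doesn't work});
print $k1 eq $k2 ? "same directive\n" : "different directives\n";   # same directive
```

A tool comparing TAP output byte-for-byte (rather than parsing directives like this) would see "SKIP" and "skip" as different, which matches the failure described above.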
Re: What is the "best" code on the CPAN?
Hi Jeffrey,

While the Perl::Critic and TB2 examples may be good code, I submit that they're also complicated enough that it may be very hard for new developers to really understand them due to their sheer size. Even for experienced developers, they may be nerve-wracking as a "learning experience". Smaller problems, though, may not have enough of the characteristics you list to qualify as "best". Perhaps you can find smaller examples of code which fit interesting problem spaces and are well-designed?

For example, I think you'll find my HTML::TokeParser::Simple is reasonably well designed, even though it's a disguised factory which inherits from an awkward interface. It was written in the pre-Moose days, so it doesn't fit your criteria. However, it shows good use of inheritance (a tricky thing to do!) because the base classes I wrote are abstract, and its polymorphism is effective in making many methods simply a single line of code. On top of that, it also obeys Liskov because you can drop it into code which uses HTML::TokeParser and it will still magically work, even as you gradually replace stuff like this:

    my $text = $token->[0] =~ /^(?:S|E|PI)$/ ? $token->[-1] : $token->[1];

With this:

    my $text = $token->as_is;

(get_text() is a better name, but the base class used it for something else and overriding a method to provide semantically different behavior is a Bad Thing)

For bonus education points: grab the older versions from the BackPAN and watch how it evolved from a steaming pile of ones and zeros to something tolerable (for its time -- today I would be tempted to rewrite it with Moose, but why rewrite working code?)

TL;DR: Matching all of your criteria would be hard in a code base small enough to get your head around easily, but maybe something smaller helps?
Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

> From: Jeffrey Thalhammer
> To: Perl QA
> Sent: Wednesday, 8 February 2012, 3:29
> Subject: What is the "best" code on the CPAN?
>
> I'm working with a group of Perl developers with various backgrounds and skill
> levels. We have instituted a fairly effective code inspection process, but we
> still get bogged down in debates over what "good code" is. Unfortunately,
> the developers work in isolated silos and spend the vast majority of the time
> looking only at their own code. So they rarely have an opportunity to see
> what good (or just better) code might actually look like.
>
> I want the team to see how to do things right, rather than debating all the
> ways to do it wrong. So for our next code inspection, I want them to study
> some "good" code from the CPAN. So the question is, which distribution
> provides the best example. These are the things I think I want in such an
> example (in no particular order):
>
> Object-orientation using Moose.
> Prudent use of Perl idioms without being overly clever.
> A discernible architecture and clear separation of concerns.
> Strong examples of encapsulation / inheritance / polymorphism.
> Demonstrates inversion-of-control principles.
> Well named variables and subroutines.
> Well factored code with minimal complexity.
> A clear pattern for extension and reuse.
> Useful documentation (e.g. POD, comments).
> High-value tests at the functional and unit level.
> Perl::Critic compliance (any set of Policies will do).
> Effective use of other CPAN modules for routine tasks.
> Effective error handling.
> Effective use of Roles.
> Self-documenting code.
> Robust and consistent API.
>
> So in your opinion, which distribution on the CPAN best demonstrates these
> qualities? Or do you think there are other more important qualities that I
> should be looking for? I realize there is more than one way to do it, so I
> don't really expect to find the "best" code. I just want something I can hold
> up as a strong example that people (including myself) can learn from and aspire
> to.
>
> -Jeff
Re: Fatal "wide character" warnings in tests
I wound up stopping use of that module many years ago, but maybe I should look at creating an alternative to Test::NoWarnings. It's caused me so much grief over the years due to how it diddles Test::More's plan that maybe it's time to scratch this itch and my Wide Character issue at the same time.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

- Original Message -
> From: Gabor Szabo
> To: Ovid
> Cc: perlqa
> Sent: Monday, 30 January 2012, 7:04
> Subject: Re: Fatal "wide character" warnings in tests
>
> On Sun, Jan 29, 2012 at 11:55 PM, Ovid wrote:
>> How do I make "Wide character in print" warnings fatal in tests?
>
> Test::NoWarnings catches all forms of warnings in your test, not only
> the specific one you mentioned.
> Maybe that could be used/changed.
>
> Gabor
>
> --
> Gabor Szabo
> http://szabgab.com/
Fatal "wide character" warnings in tests
How do I make "Wide character in print" warnings fatal in tests? This test passes:

    use Test::More;
    use strict;
    use warnings;
    use warnings FATAL => 'utf8';
    use utf8::all;

    my $string = '日本国';
    my $length = length($string);
    is $length, 3, "$string should have $length characters";
    diag $string;

    done_testing;

That's passing because the warnings pragma is lexically scoped and the actual warnings are emitted in Test::Builder's guts (utf8::all will let the test pass because that package's code is now marked as utf8, but it doesn't fix Test::Builder's filehandles). I can make the warnings go away with this:

    my $output = Test::Builder->new->todo_output;
    binmode $output, ':encoding(UTF-8)';
    $output = Test::Builder->new->failure_output;
    binmode $output, ':encoding(UTF-8)';

But I'd really like a clean way of just saying "kill my code if I ever see 'Wide character in print'" regardless of which package the error is emitted from.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
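One possible answer to the question above is a global `$SIG{__WARN__}` handler. This is a sketch of a workaround, not a Test::Builder feature, and a real version would want to chain to any previously installed handler:

```perl
use strict;
use warnings;

# Sketch of a workaround (not a Test::Builder feature): a global __WARN__
# handler that promotes "Wide character" warnings to fatal errors, no
# matter which package emits them. A real version should chain to any
# previously installed handler instead of clobbering it.
$SIG{__WARN__} = sub {
    my $warning = shift;
    die $warning if $warning =~ /Wide character/;
    warn $warning;  # inside a __WARN__ handler, warn goes straight to STDERR
};

# Simulate the warning that Test::Builder's guts would emit:
my $survived = eval { warn "Wide character in print at t/foo.t line 7.\n"; 1 };
print $survived ? "warning only\n" : "promoted to fatal\n";   # promoted to fatal
```

Because the handler is installed in `%SIG` rather than via the lexical warnings pragma, it catches the warning even when Test::Builder's own package triggers it.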
Re: Relying more on Mouse
> From: Michael G Schwern >Let's keep our heads. The whole argument against TB2::Mouse is predicated on >the idea that shipping TB2::Mouse in the core MIGHT cause future maintenance >hassle for p5p. If the alternatives will cause EVEN MORE maintenance hassle >for p5p and/or CPAN authors... we should just ship TB2::Mouse. Rename TB2::Mouse to something like TB2::_U_R_A_MORON_FOR_USING_THIS_ and I think the problem will take care of itself. Note: I'm not saying Schwern is a moron! I'm saying that if anyone dares to use TB2::Mouse with the above name, they'll definitely think twice. Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/
Re: Relying more on Mouse
- Original Message -
> From: David Golden
>
> And I wonder if Role::Tiny could be extended to do multiple roles.

As an aside, while I'm sure that Schwern is using far more than just roles, Role::Basic already handles multiple roles, is forwards-compatible with Moose::Role, and adheres much more closely to the traits spec, particularly with regards to the commutative and associative properties that Role::Tiny and Moose::Role ignore (https://metacpan.org/module/Role::Basic::Philosophy).

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Threads working, Test::Builder1.5 is feature complete
> > From: David E. Wheeler > >On Nov 22, 2011, at 5:11 AM, Ovid wrote: > >> Ah, just saw this. As I've already said privately, but maybe we can see how >> others feel, this is a PERFECT time to discourage use_ok and require_ok and >> even deprecate them (though I doubt we can remove them completely). > >Feel free to discourage, yes, but please don’t leave it broken. There are a >shitload of tests out there that use it (including nearly all of my modules). But it's already broken in the mainstream Test::Builder. You just mean "keep it broken the way it is", right? Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/
Re: Threads working, Test::Builder1.5 is feature complete
- Original Message - > From: Michael G Schwern > There's a bug in use_ok() that effects 0.98_01 and 1.5. So I'm going to > hold > off on a new alpha for a day or two and it's either fixed or I'll roll > it back. > https://rt.cpan.org/Ticket/Display.html?id=67538#txn-1002509 Ah, just saw this. As I've already said privately, but maybe we can see how others feel, this is a PERFECT time to discourage use_ok and require_ok and even deprecate them (though I doubt we can remove them completely). They've been broken for a long time, people have to work around their limitations, and they don't add value. http://use.perl.org/~Ovid/journal/39859 I'm hard-pressed to think of a better time to at least slip a note in the docs that their use is discouraged. Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Relying more on Mouse
> From: Michael G Schwern
>
> On 2011.11.21 4:07 AM, David Cantrell wrote:
>> But then how often does one need to 'use Test::More'? Not enough to
>> bother optimising it, I'd say.
>
> In every single .t file that gets run by just about everybody.
>
> By being THE testing framework, it places an upper bound on how fast anyone's
> tests can be. 10 .t files per second, no faster. That sucks.
>
>> To take a real-world example, it occurs 182 times in our test suite at
>> work, a test suite that takes almost 2 hours to run in full. Those
>> extra 182 * 0.07 == 13 seconds are of absolutely no importance at all.
>
> I run tests a lot while developing. I can see they're slower without even
> benchmarking it. You wouldn't think you'd notice 70 ms, but you do.
>
> I don't want to give anyone an excuse to not run tests often or not upgrade.

I have to agree with this. Test performance is an issue that people constantly hit. And aside from Test::Class/Test::Aggregate style solutions (I seem to recall someone put out an alternative to Test::Aggregate?), individual tests in separate processes are the norm. For larger, "enterprisey" type systems, my experience shows that they generally have enough other issues that a tiny hit on Test::More won't matter, but when you add up all the tiny hits, it's death by a thousand cuts. Dave, you weren't on the PIPs team when I optimized their test suite, but it was those "thousand cuts" that I stripped away one by one to get an hour-long test suite running in less than 15 minutes.

So yeah, this is a very important issue. If performance isn't dealt with, it's going to be a constant sore spot. And Schwern: thanks for all of your good work in this area!

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Discuss change of namespace Test::Builder2 -> TB2?
- Original Message -
> From: Michael G Schwern
>
> On 2011.11.14 12:41 AM, Philippe Bruhat (BooK) wrote:
>> I'm more annoyed with the version number being part of the name.
>> Even if I can understand the reason why (CPAN only knows one way to
>> upgrade: up).
>
> I used to be with you there.
>
> I've since found it's a remarkably simple and foolproof way to indicate an API
> split. The name, code and docs are neatly delineated. It works with all
> existing CPAN tools. Both API versions can exist in harmony. The name change
> tells the user this is not their father's Kansas.
>
> Besides, "tee bee two" rolls off the mouth nicely and TB:: is a bit too short.

Tee bee and not tee bee two? That is the question. Whether to suffer the slings and arrows of outrageous attempts at humor ...

Cheers, Ovid (though clearly no longer worthy of that moniker) -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Event handling: One method per event or one method for all?
- Original Message -
> From: chromatic
> To: perl-qa@perl.org
> Cc: Michael G Schwern
> Sent: Thursday, 27 October 2011, 9:36
> Subject: Re: Event handling: One method per event or one method for all?
>
> On Wednesday, October 26, 2011 at 09:58 PM, Michael G wrote:
>
>> So now the question is one of good event handler design, which I don't have
>> experience with. How does one design a good event handler? Is the pattern
>> one method per type of event? Or one method to do the dispatching? Or
>> something else?
>
> I've done this several times. I prefer one method per type of event (hello,
> cheap and easy polymorphism!).
>
> This is also one case in which I find an abstract base class acceptable; the
> null method pattern means never having to call ->can().

Agreed. That's why I tell people that their Test::Class base class should always start (sort of) like this:

    package My::Test::Class;
    use parent 'Test::Class';

    sub startup  : Tests(startup)  {}
    sub setup    : Tests(setup)    {}
    sub teardown : Tests(teardown) {}
    sub shutdown : Tests(shutdown) {}

In a subclass, you *never* have to worry about the order in which the methods are called or if you can call them. Need a startup method in a subclass? It's trivial:

    sub startup : Tests(startup) {
        my $test = shift;
        # no-op? You don't care, nor should you care (with caveats, of course)
        $test->SUPER::startup;
        ...
    }

It's much cleaner that way.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Fw: Do we need subtests in TAP?
Should have been sent to the list, not just Fergal.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

- Forwarded Message -
> From: Ovid
> To: Fergal Daly
> Cc:
> Sent: Saturday, 29 October 2011, 17:33
> Subject: Re: Do we need subtests in TAP?
>
>> From: Fergal Daly
>
>> It seems like it's impossible then to declare a global plan in advance
>> if you use subtests unless you go counting all the sub tests which is
>> no fun,
>
> Oops. I think it may not have been explained well. There is no distinction at
> the top level between a subtest and an individual test:
>
>     is $foo, $bar, $description;
>     subtest 'some test', sub { ... };
>
> That's two tests. It doesn't matter how many "tests" the
> subtest runs (even if it contains further subtests): it's one test.
>
> That makes it *easier* to maintain plans with subtests. When Abigail was testing
> regexes, Abigail had a problem knowing in advance how many tests a given feature
> would require for various versions of Perl. Just dropping each feature into a
> subtest made it trivial. Each subtest would exercise a varying number of tests
> per feature (in other words, subtests bridge the gap between xUnit testing and
> TAP).
>
> Cheers,
> Ovid
> --
> Live and work overseas - http://overseas-exile.blogspot.com/
> Buy the book - http://www.oreilly.com/catalog/perlhks/
> Tech blog - http://blogs.perl.org/users/ovid/
> Twitter - http://twitter.com/OvidPerl/
Re: Do we need subtests in TAP?
I'm in Prague all week, so I've been able to read, but not really participate.

Echo chamber alert: I've often seen long discussions on this list ignore the "real world" (though often for good reason). In this case, it sounds like there's a consideration of removing a feature from TAP. IMHO this is a bad idea which should be opened up to the community at large.

Moving along, the *idea* of nested TAP is so conceptually simple that if the implementing code is struggling with it, perhaps it's a sign that there are some design decisions which should be revisited? When I find conceptually simple ideas hard to implement, I consider that a code smell. (Note that I'm not saying the actual design is bad. I haven't looked.)

I also find subtests so incredibly convenient, and they open up so many possibilities, that I would hate to lose them (and I use them a lot).

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

> From: Michael G Schwern
> To: Perl QA
> Sent: Wednesday, 26 October 2011, 23:09
> Subject: Re: Do we need subtests in TAP?
>
> Adrian forgot to send this to the list.
>
> Original Message
> Subject: Re: Do we need subtests in TAP?
> Date: Wed, 26 Oct 2011 14:14:31 +0100
> From: Adrian Howard
> To: Michael G Schwern
>
> Hey there,
>
> On 26 Oct 2011, at 04:56, Michael G Schwern wrote:
>
>> I understand wanting "blocks of tests" and the ability to make plans for
>> just those blocks, but do we need a discrete test state for that? For
>> example, Test::Class provides most of what subtests provide without special
>> support.
>
> ... and to do that T::C contains a bunch of annoying special case code that is
> (I think) still wrong in an odd corner case. Everybody who wants to do the
> things T::C does will also have to do that work.
>
> T::C implemented with subtests is _much_ cleaner code.
>
> There may be other ways of getting that complexity out of T::C (and similar)
> and into Test::Builder of course - but I'm not 100% sure what you're
> suggesting...
>
>> It occurred to me because most other testing systems don't have subtests.
>> They have test methods, but those aren't treated as a sub-state of the test.
>
> Some do have different levels of hierarchy though. (e.g. JUnit's Test Case /
> Suite distinction).
>
>> In essence, a "subtest" is nothing more than a block of tests with a name
>> and its own plan. The special TAP formatting seems unnecessary. I guess
>> that's the real question, do we need the special TAP formatting or do we
>> just need named test blocks with their own plan?
>
> One thing subtests' TAP formatting gives you is a simple way to nest TAP
> streams from elsewhere. Any other system would mean you have to rewrite the
> nested stream (I think?)
>
> Cheers,
>
> Adrian
> --
> Who invented the eponym?
Re: [test-more] DBIx::Class mysterious fails (#146)
Hi there, I'm sorry to forward this to PerlQA, but with a baby at home, I've so little free time available that it's hard for me to really look into issues right now. I really would like your issue to be resolved and I'm hoping (beg, beg, beg) that someone on PerlQA might recognize this issue and be able to help. Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/ > >From: ribasushi > >To: o...@cpan.org >Sent: Thursday, 21 July 2011, 13:30 >Subject: [test-more] DBIx::Class mysterious fails (#146) > >Granted I do some weird shit in these tests, but my TAP output/TB hooking is >legit afaict. Please share your thoughts: > >http://www.cpantesters.org/cpan/report/f4703b28-b0d6-11e0-bffd-2a6e543d7ba4 > >-- >Reply to this email directly or view it on GitHub: >https://github.com/schwern/test-more/issues/146 > > >
Re: Testing (other things) with Perl
- Original Message -
> From: Steffen Schwigon
>
> AMD just released “Tapper”, an open source test infrastructure with
> automation, machine scheduling, webgui, result evaluation api and
> testplan support.
>
> Besides being generic and adaptive by nature it particularly supports
> testing Operating Systems with Virtualization (Xen/KVM).
>
> It is written in Perl, built around TAP, utilizes several TAP::*
> toolchains from CPAN and is still language agnostic. E.g., it comes
> with a thin wrapper for the autotest client (which is written in
> Python) utilizing its TAP support.
>
> See the full announcement and more resources here:
>
> http://developer.amd.com/zones/opensource/AMDTapper/Pages/default.aspx

Looks interesting, but did they really release a separate distribution for every module in Tapper? http://search.cpan.org/~amd/

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: post-install testing
If you're talking about rerunning the package tests on a module after it's been installed, I had been working on the idea of installing tests along with the code. This would require a few things:

1. A place to install the tests.
2. A way to save the test run history.
3. Possible cooperation from the CPAN, CPANPLUS, CPANM and other maintainers (see point 1).

Saving tests could be configurable and then you'd have a test runner which would run all of the tests you've saved. The main thing to keep in mind is that some tests will fail this way (unless you manually hack them to work, such as many database tests or other tests which require resources). So what you want to do when you install a new module is to rerun the "installed" tests and note when you get *different* failures (it's harder than that, to be fair). To do this you need to save the test history and I had started that with https://github.com/Ovid/app--prove--history, but it's an awful hack which I hadn't gotten around to finishing.

I'll be at the QA Hackathon here in Amsterdam this weekend, so maybe I should resurrect this idea?

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/

- Original Message -
> From: Jozef Kutej
> To: perl-qa@perl.org
> Sent: Tue, 12 April, 2011 17:50:57
> Subject: post-install testing
>
> Hi,
>
> It turned out that there is quite a lot that can go wrong.
>
> Found this gem in our internal wiki. :-)
>
> My question is regarding the post-install testing. Normally the tests are run
> before installation and then discarded with all the rest of the distribution
> files. But what possibilities do we have about testing of already installed
> code? Is anyone working on this concept?
>
> Cheers,
> Jozef
Re: Conditional tests - SKIP, die, BAILOUT
> From: Jozef Kutej
> To: perl-qa@perl.org
> Sent: Wed, 30 March, 2011 7:54:21
> Subject: Re: Conditional tests - SKIP, die, BAILOUT
>
> On 2011-03-29 23:05, Michael Ludwig wrote:
> >> Perhaps the 'bail_on_fail' or 'die_on_fail' functions from Test::Most
> >> would help you here?
> >
> > That's very convenient.
>
> perl -le 'use Test::More tests => 2; ok(1) or die; ok(1);'
> perl -le 'use Test::More tests => 2; ok(0) or die; ok(1);'

True, but the advantage of using Test::Most is that not only do you not have to add the "or die" to every test which might potentially fail, but you get the most common testing functionality as determined by analyzing what testing modules people were actually using (by running code against my minicpan installation). At this point, I would suggest that Test::More might be for code that you put on the CPAN (assuming you don't want to force dependencies on people), but I'd never want to do without Test::Most for personal code.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Vague Testing
I've stumbled on a bit of an odd case where I have constantly shifting data I need to test. Ordinarily I would use cmp_deeply from Test::Deep, but it's not quite structured enough. I need something similar to a Levenshtein edit distance for complex data structures. Let's say I have an AoA like this:

    [
        [ 1, 'North Beach',       'au', 'city'  ],
        [ 2, 'North Beach',       'us', 'city'  ],
        [ 3, 'North Beach',       'us', 'city'  ],
        [ 4, 'North Beach Hotel', 'us', 'hotel' ],
        [ 5, 'North Beach',       'us', 'city'  ],
        [ 6, 'North Beach',       'us', 'city'  ],
    ]

It's OK if I get back that data structure but with the 2 and 4 records swapped, or maybe with the 5 missing. However, for any contained array reference its exact data can't change. And if those came back in the order of 6,5,4,3,2,1, the test should fail (thus, I can't use bag tests). Does anyone know of any test modules which allow this?

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
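Absent such a module, one hand-rolled shape for the check described above is to bound how far any row may drift from its expected position. `rows_roughly_ordered` below is a hypothetical helper sketched for illustration, not an existing CPAN module:

```perl
use strict;
use warnings;

# Hypothetical helper -- not an existing CPAN module. A row may drift at
# most $max_drift positions from where it was expected; any row that is
# missing, altered, or drifted too far fails the whole comparison.
sub rows_roughly_ordered {
    my ($got, $expected, $max_drift) = @_;
    return 0 unless @$got == @$expected;

    # Index each row we actually got by its joined field values.
    my %pos;
    for my $i (0 .. $#$got) {
        push @{ $pos{ join "\0", @{ $got->[$i] } } }, $i;
    }

    for my $i (0 .. $#$expected) {
        my $key  = join "\0", @{ $expected->[$i] };
        my $seen = shift @{ $pos{$key} || [] };
        return 0 unless defined $seen;              # row missing or altered
        return 0 if abs($seen - $i) > $max_drift;   # row drifted too far
    }
    return 1;
}

my $expected = [ [ 2, 'North Beach',       'us', 'city'  ],
                 [ 4, 'North Beach Hotel', 'us', 'hotel' ] ];
my $swapped  = [ [ 4, 'North Beach Hotel', 'us', 'hotel' ],
                 [ 2, 'North Beach',       'us', 'city'  ] ];
print rows_roughly_ordered($swapped, $expected, 1)
    ? "close enough\n" : "too different\n";     # close enough
```

With a small drift, adjacent swaps pass while a full reversal fails, which is exactly the bag-test-is-too-loose / exact-match-is-too-strict middle ground asked about above.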
Re: Vague Testing
--- On Thu, 17/2/11, David Golden wrote:

> So I think you've got to nail down what specifically about the order
> is required, then sort in a way that preserves that important
> dimension (country), but standardizes the rest (e.g. always putting
> 'hotel' after 'city'). Then you're just walking through it looking
> for things that are mismatched or missing.

Sadly, the sort criteria are not only complicated, but frequently changing (not just the sort data, but the sort *rules*). Furthermore, the exact criteria used are gone by the time the data gets to the test. If I were to try to duplicate those criteria, I would be:

a. Duplicating a lot of business knowledge in the test
b. Constantly synchronizing the test and the code it tests

Or I can assert a "fuzzy" match and hope for the best. I'm working in an extremely constrained environment for this particular case and trying to determine my best options :/

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Vague Testing
--- On Thu, 17/2/11, David Golden wrote:

> From: David Golden
>
> > It's OK if I get back that data structure, but the 2 and 4 records
> > are swapped or maybe the 5 isn't present. However, for any contained
> > array reference its exact data can't change. However, if those came
> > back in the order of 6,5,4,3,2,1, the test should fail (thus, I
> > can't use bag tests).
>
> Do you have actual hard criteria? Or is it fuzzy/arbitrary? How do
> you know that one swap is OK, but a full reversal is not?
>
> Put differently, with the criteria you describe, I have a hard time
> seeing how this could actually be a meaningful test. Can you explain
> more about the problem domain?

I don't think I can describe the exact problem domain without violating my employment agreement. Let's just say there's a "threshold" in the code which allows one to determine the amount of fuzziness allowed. For a *very* contrived use case, imagine that you're being introduced to your daughter's boyfriend for the first time and you know his name is "Alexander". He might introduce himself as "Alexander", "Alex", "Al", or even "Xander" and you might not bat an eyelash. If he introduces himself as "Sally" or "Bob", it's time to start asking questions. In my case, I have code which returns a list of items, but I'm pulling real data (and it's very hard not to pull real data for this use case) and that data will *usually* be in the order I expect, but subtle variations are allowed and cannot be easily prevented. Unfortunately, I can't tell you more than this.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Vague Testing
--- On Thu, 17/2/11, Fergal Daly wrote:

> From: Fergal Daly
>
> It would be nice if this was a custom comparator for Test::Deep, then
> you would be able to apply the "almost" to lists of arbitrarily
> complex items and also conduct that test at any level of the data
> structure (including nesting if you're feeling really fruity),

Ooh, I would love to see that! After feedback from folks on that blog entry, I'm sure something could be whipped up.

Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Vague Testing
--- On Thu, 17/2/11, Ovid wrote: > From: Ovid > I've stumbled on a bit of an odd case where I have > constantly shifting data I need to test. Ordinarily I would > use cmp_deeply from Test::Deep, but it's not quite > structured enough. I need something similar to a Levenshtein > edit distance for complex data structures. First pass at a solution: http://blogs.perl.org/users/ovid/2011/02/is-almost.html Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter- http://twitter.com/OvidPerl/
Re: Move Test::More development discussion back to perl-qa?
- Original Message > From: Michael G Schwern > I don't get much response to posts on test-more-users, though I know people > are subscribed. perl-qa traffic has dropped off quite a bit. I'm wondering > that if I moved Test::More and Test::Builder2 posts back to perl-qa that there > would be more discussion? I'd like to see it back in perl-qa, if only so that the people on perl-qa who might be impacted can see what's going on. And I suspect you'd get more response. Cheers, Ovid -- Live and work overseas - http://overseas-exile.blogspot.com/ Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl/
Re: Test::Deep 0.108 (and the Test::Most philosophy)
--- On Sun, 17/10/10, Aristotle Pagaltzis wrote:

> From: Aristotle Pagaltzis
>
> Modules are a poor place for evangelism about unrelated conventions
> in general, but I feel this especially strongly about Test::
> modules with break-the-CPAN level adoption such as Test::Deep.

The arguments you made are compelling, so I need to ask your point of view about this:

#!/usr/bin/env perl

use Test::Most;

ok 1, '1 is true';

"use Test::Most tests => 42" is loosely equivalent to:

use strict;
use warnings;
use Test::Exception 0.88;
use Test::Differences 0.500;
use Test::Deep 0.106;
use Test::Warn 0.11;
use Test::More tests => 42;

Test::Most, like Test::Class::Most, not only imports the most common testing functions, but also imports strict and warnings for you. I didn't do this lightly. I did it because I see a lot of test suites forgetting one or the other, and in the case of test suites it's terribly important not to miss those because they stop so many errors (for example, many warnings are actually symptoms of underlying bugs, and that's what a test suite is about, right?). So did I do the wrong thing here? I'd love to hear pro and con arguments.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: testing an online web service
--- On Tue, 28/9/10, Spiros Denaxas wrote:

> From: Spiros Denaxas
>
> I am venturing off to write a module which will act as a wrapper
> around an online API provided by a third party. I am trying to plan
> ahead of time how the tests for it will work.
>
> On the one hand, I am thinking that creating a test with the module
> that accesses the actual online service might be the most accurate
> way of testing things; on the other hand, I am not sure if this is
> the "canonical" way of doing things. I have done some reading around
> and some people suggested that instead of hitting the actual online
> source one should mock the output it returns and take it from there.
> In the past, I have actually written tests to hit the live server but
> I am wondering if there's another way to do this.
>
> Is there any standard way for this sort of thing?

There are no standards that I know of because different situations can call for different responses. However, be very careful about just mocking everything. If you do and their API changes, you'll find yourself with passing tests for code which does not work. Contact the people supplying the API (or read their docs carefully) and find out if you can interrogate it for a version and what guarantees they provide for said version. Don't allow mocks if the version changes unless you can't connect. With that, you might offer people the option of connecting live, while also making it clear that you only support version X. Again, there are numerous strategies you can take, but the right one(s) will depend on your needs.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
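As a sketch of the "interrogate for a version" advice, the decision logic might look like the following. All names here are made up for illustration; test_mode is not part of any existing module.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical policy function: decide how a test run should treat the
# third-party API. Mocks are only trusted when we cannot connect at
# all; if the live service reports a version we never recorded
# fixtures for, we refuse to pretend the mocks still tell the truth.
# Returns 'live', 'mock', or 'unsupported'.
sub test_mode {
    my (%args) = @_;
    my $live      = $args{live_version};    # undef if unreachable
    my $supported = $args{supported_version};
    return 'mock' unless defined $live;     # offline: mocks are all we have
    return 'live' if $live eq $supported;   # API unchanged: test for real
    return 'unsupported';                   # API moved on: mocks would lie
}
```

A test file could then SKIP the whole suite in 'unsupported' mode with a message pointing at the new API version, rather than silently passing against stale mocks.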
Re: getting more details from a test script
--- On Mon, 5/4/10, Gabor Szabo wrote:

> From: Gabor Szabo
>
> Maybe I need something like this:
>
> $mech->content_like(qr{regex}) or do {
>     my $filename = 'some_filename';
>     if (open my $fh, '>', $filename) {
>         print $fh $mech->content;
>         diag "File: $filename";
>     }
> };
>
> and then parse the TAP output for 'File:' *after* a test failure.
>
> Is there a better way to do this?

The problem, I think, is that everyone wants subtly different things from tests outside of ok/not ok. The question I'm wondering about is what you mean by "this" in "is there a better way to do this?". Are you wanting a better way of presenting the filename to test authors/runners? Are you wanting a better way to store the file contents? If it's the former, we need structured diagnostics in TAP to be formalised and implemented. If it's the latter, I would recommend writing your own "output to file" function and then, instead of using Test::More and your own test utilities, bundle all of them with Test::Kit so you can just do this:

use My::Custom::Test::More tests => $test_count;

The advantage here is that you have your own custom test behaviours nicely controlled by one module and if you need to change them, you can do so in one spot. Or maybe you meant something else by "this" entirely :)

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
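The "output to file" function might be sketched like this; the module name and function are hypothetical, not an existing distribution. It keeps the pattern check separate from TAP so it is easy to test, and the TAP-facing wrapper would call diag() with the returned filename.

```perl
package My::Test::Util;    # hypothetical module one could bundle via Test::Kit
use strict;
use warnings;
use File::Temp 'tempfile';

# Check content against a pattern; on failure, save the content to a
# temp file so a human can inspect exactly what the server returned.
# Returns ( $ok, $filename_or_undef ).
sub check_or_save {
    my ( $content, $regex ) = @_;
    return ( 1, undef ) if $content =~ $regex;
    my ( $fh, $filename ) = tempfile( 'failure-XXXXX', TMPDIR => 1 );
    print {$fh} $content;
    close $fh;
    return ( 0, $filename );    # caller should diag "File: $filename"
}

1;
```

A like()-style wrapper around Test::Builder could then report the filename only on failure, matching the behaviour Gabor described.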
Re: sharing perl data structures across tests
--- On Fri, 2/4/10, Erik Osheim wrote:

> From: Erik Osheim
>
> > I assume either some code builds it and others then need it or it's
> > expensive to build? If it's the former, it implies an ordering
> > dependency and coupling in your tests which greatly lowers their
> > utility.
>
> It's more the former than the latter, and yes, it does lower the
> tests' utility.

Ouch. On the other hand, plenty of legacy systems get created this way :)

> The application suffers from its monolithic architecture. We had
> previously tried to use mock objects to do all the testing but most
> of the mock objects ended up needing to be "close enough" to the real
> thing to defeat the purpose.

I've never been a fan of mock objects. If your interface changes but you've mocked it up, your tests can generate false positives. For that matter, it's why I lean towards integration tests over unit tests (there are trade-offs, of course).

> Maybe I should port my solution to Test::Class and write something
> similar to Test::Class::Load (maybe called Test::Class::Dependency)
> to manage the automatic loading of dependent classes (e.g. for test
> D, load tests A and B automatically and make sure to run them first)?
> Or do you think I should stick with my custom solution?

Actually, I'd love to see a patch to Test::Class which adds "tags" to test methods. Imagine this:

sub customer_order : Tests(23) Tags(Customer Order) {
    my $test     = shift;
    my $customer = $test->test_customer;
    ok $customer->login, '...';
    ...
}

The idea behind a 'tag' is that your test framework can spot it and take action accordingly. In the example above, your "setup" method might look like this:

sub setup : Tests(setup) {
    my $test = shift;
    foreach my $tag ( $test->test_tags ) {
        # or whatever code you need
        $test->load_fixture($tag);
    }
}

By being able to tag tests, you'd be able to:

* Load fixtures as needed.
* Load dependencies as needed.
* Run only '$tag' tests
* ... or anything else you can think of with tags

Yeah, I know it would be more work for you, but you'd really help those who use Test::Class and want more out of it :) If you're struggling with Test::Class, I have a tutorial at: http://www.slideshare.net/Ovid/testing-with-testclass

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: sharing perl data structures across tests
--- On Fri, 2/4/10, nadim khemir wrote: > From: nadim khemir > Same for me. I simply didn't understand what the original > mail meant. Not at > all! It's an art to write a concise email which describes a problem. What I often find myself doing is writing a long email and then summarising at the bottom. When that's done, I often cut-n-paste the summary at the top of the email and only leave the rest if it is really necessary. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: sharing perl data structures across tests
--- On Wed, 31/3/10, Erik Osheim wrote:

> From: Erik Osheim
>
> So at $WORK we have a bunch of really large (immutable) data
> structures which a ton of our source code uses. As such, most tests
> that we write need to access these data structures to run. These
> structures can't (currently) be serialized with Storable due to
> having LibXML objects in them (among other reasons).

I see no one's answered this yet. I was hoping for more clarification lest my (mis)understanding hamper things. You have a large data structure to share across tests and I assume either some code builds it and others then need it, or it's expensive to build? If it's the former, it implies an ordering dependency and coupling in your tests which greatly lowers their utility. If the structure is simply expensive, have you considered running tests in a single process to ensure the data structure doesn't go away? Test::Class and Test::Aggregate can both let you do this.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://blogs.perl.org/users/ovid/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
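If everything does run in one process, the expensive structure only needs a tiny per-process cache. A minimal sketch follows; the module name is hypothetical, and //= requires perl 5.10.

```perl
package My::Test::Fixture;    # hypothetical: one build per process
use strict;
use warnings;

my %cache;

# Build the named structure at most once per interpreter; every test
# class running in the same process then shares the same reference.
sub get {
    my ( $class, $name, $builder ) = @_;
    $cache{$name} //= $builder->();
    return $cache{$name};
}

1;
```

Under Test::Class or Test::Aggregate, each test class would call My::Test::Fixture->get( 'big_structure', \&expensive_build ) and only the first caller pays the construction cost.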
Re: Test::Most feedback wanted
> From: Fergal Daly > > Also, some test modules are problematic. For example: > > > >use Moose; > >use Test::Deep; > > > > That gives you a prototype mismatch warning. So you can omit underlying > > modules if needed: > > This is caused by both modules exporting a blessed function by default > and Moose's one sets a prototype. Since the core Scalar::Util::blessed has the same prototype, would you consider adding this to Test::Deep? Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Test::Most feedback wanted
I would love feedback (via actual use) on a development version of Test::Most available at http://search.cpan.org/~ovid/Test-Most-0.21_04/. If you're not familiar with it: instead of this:

use strict;
use warnings;
use Test::Exception 0.29;
use Test::Differences 0.500;
use Test::Deep 0.106;
use Test::Warn 0.11;
use Test::More 0.94 tests => 42;

You type:

use Test::Most tests => 42;

Yes, we have strict and warnings enabled by default (this is a devel version only and if it's causing serious pain, I might pull it). Note the version numbers. I went through those various test modules and saw how many fixes and upgrades there were. As a result, I realized that if I give people a big steaming pile of ones and zeroes, they might get frustrated at having to upgrade all of those modules individually. So Test::Most is now "one stop shopping". I hope to periodically release it when the modules it depends on are upgraded. Also, some test modules are problematic. For example:

use Moose;
use Test::Deep;

That gives you a prototype mismatch warning. So you can omit underlying modules if needed:

use Test::Most tests => 42, '-Test::Deep';

(I'll provide a fix for the Moose/Test::Deep nit in a later release.) Side note: I believe the Test::Class::Most failures are resolved. After David Cantrell gave me access to a server its build failed on, I found some serious issues that forced me to rethink the module. "use feature" and "mro" were pulled entirely.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Test::Class::Most failures?
Is anyone able to replicate the test failures I'm getting for Test::Class::Most on Perl 5.10.1 Linux/Solaris? I can't (even on those boxes). http://www.cpantesters.org/cpan/report/6771238 The problem appears to be that "feature->import(':5.10')" isn't working (or something like that), but it works for chromatic's Modern::Perl, so I'm unsure of what's happening. I can't debug what I can't reproduce (before anyone asks, I've emailed the testers but they've not gotten back to me). Cheers, Ovid-- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Test::Differences and utf8
Shlomi Fish sent a bug report for Test::Differences and I'm afraid that I'm not comfortable enough with utf8 to be sure of the most appropriate fix (this is the second report on this topic). Essentially, utf8 characters are output as their \x{} equivalents and this makes the output unreadable. Suggestions?

#!/usr/bin/perl

use strict;
use warnings;
use utf8;
use Test::More tests => 1;
use Test::Differences;

# So we can output the text from the tests as UTF-8
binmode(STDOUT, ":encoding(utf-8)");
binmode(STDERR, ":encoding(utf-8)");

# TEST
eq_or_diff( <<"EOF",
Hello
שלוש
EOF
    <<"EOF",
Hello
שלום
EOF
);

__END__

test-diff.t .. 1/1
# Failed test at test-diff.t line 16.
# +---+----------------------------------+----------------------------------+
# | Ln|Got                               |Expected                          |
# +---+----------------------------------+----------------------------------+
# |  1|Hello                             |Hello                             |
# *  2|\x{05e9}\x{05dc}\x{05d5}\x{05e9}  |\x{05e9}\x{05dc}\x{05d5}\x{05dd}  *
# +---+----------------------------------+----------------------------------+
# Looks like you failed 1 test of 1.
test-diff.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
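One commonly suggested piece of this puzzle is to put the encoding layer on Test::Builder's own handles rather than on STDOUT/STDERR, since Test::Builder clones those handles early. Note this addresses wide-character diagnostics in general; whether it stops Test::Differences' own \x{} escaping is a separate question.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Test::More;

# Test::Builder duplicates STDOUT/STDERR at load time, so binmode on
# STDOUT and STDERR alone does not affect test output; set the layer
# on the builder's handles instead.
my $builder = Test::More->builder;
binmode $builder->output,         ':encoding(UTF-8)';
binmode $builder->failure_output, ':encoding(UTF-8)';
binmode $builder->todo_output,    ':encoding(UTF-8)';

is "שלום", "שלום", 'wide characters compare without encoding warnings';
done_testing;
```

With the layers in place, diag() output containing Hebrew (or any wide characters) prints readably instead of triggering "Wide character in print" warnings.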
Test::Most improvement?
Test::Most was written because I was tired of seeing boilerplate code like:

use strict;
use warnings;
use Test::More tests => 23;
use Test::Exception;
use Test::Differences;

So now it's down to this:

use strict;
use warnings;
use Test::Most tests => 23;

But I'm thinking about going one step further and hitting the Modern::Perl-like road:

use Test::Most tests => 23;

Just use that line and you get strict and warnings, plus the most popular testing modules. I don't remember the last time I saw a modern test program without strict and warnings. Would it be a good idea to automatically turn them on? I know that there are a few people who won't like it, but they won't have to write any more lines of code. They could just write:

use Test::Most tests => 23;
no strict;
no warnings;

Heck, I could argue that I'm saving them two characters :) (not to mention the fact that I'd be forcing them to be explicit that they didn't *forget* strict and warnings) Thoughts?

Curtis -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Test::Class::Most
For Test::Class fans: Here's the announcement of my Test::Class::Most: http://blogs.perl.org/users/ovid/2010/01/-package-sometestclass.html And if you can't wait for the CPAN upload (I'm sure there's no burning desire for this): http://github.com/Ovid/Test-Class-Most Basically, use this module instead of Test::Class and automatically get the benefits of Adrian Howard's Test::Class and chromatic's Modern::Perl without an effort. Note that I've tried to make it safe for pre-5.10 users, too. Bisous, Ovid-- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: camels
--- On Sun, 3/1/10, Shawn H Corey wrote:

> From: Shawn H Corey
>
> > An onion can be pretty pared down before you lose sight of what it
> > is.
>
> I pared many an onion and you lose sight of it when the tears start
> to flow. :)

And let's face it: to many people, onions stink and they *do* make you cry. That's not a positive association. Perl: the language that will make you cry. Many, many people will agree with that. And there have been huge numbers of marketing disasters which have occurred because superior products lost out to customer perception. IT'S IMPORTANT. Remember that: customer perception is important. If technical superiority were all that mattered, how many people think that the current rankings of language popularity would hold? Raise your hands now! Yeah, just what I thought: no one was stupid enough to raise their hands because frankly, I don't think there are any stupid people on this list. A huge mistake that many corporations make is to change their branding. The only significant reason to do so is when the current branding has a negative association. Frankly, I don't think that applies to the camel. So to change our current branding to something which already has a negative association is, if not counter-productive, at least not productive (in the marketing sense; in the legal sense, it's a different matter entirely, though it's an important one). From a marketing perspective, the camel wins hands-down. From a legal perspective, what are the pros and cons? Frankly, I have no idea.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: camels
--- On Sun, 3/1/10, chromatic wrote:

> From: chromatic
>
> > Maybe someone could get a grant to hire someone/a company with
> > design skills to come up with a better logo than the onion?
>
> It seems rather unlikely that TPF will go through the business of
> applying for another trademark because you don't like the onion. I'm
> sure if you asked a dozen non-techies in your office about the word
> "perl", several of them would add an A in the middle.

Just because Perl people aren't particularly good at branding doesn't mean we should ignore it. Let's face it: both the onion and the butterfly (at least, the one we have) are difficult things to build brand association with. So's a camel, but there's been a lot of time invested in that. What is our concern vis-a-vis the camel and how can we approach O'Reilly regarding this concern? While it's certainly the trademark of a private company, I doubt very seriously that O'Reilly would be terribly averse to giving TPF plenty of leeway in using it (as past history has shown).

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: qa.perl.org
--- On Sat, 2/1/10, Leo Lapworth wrote: > From: Leo Lapworth > I've renamed to 'inactive' for now. > > I like Eric's idea about history if someone wants to create > that. I'll second that. From a pure marketing perspective, "archive", "inactive" and "old" all imply "dead". "History" implies more of a narrative. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Interesting test failure under prove --merge
- Original Message

> From: Gabor Szabo
>
> I encountered an interesting failure that only appears when I run
> prove with --merge on Windows. This is running in the Padre Stand
> Alone which means Strawberry October 2009, perl 5.10.1. The same
> thing on Linux works well, though I have not compared the versions
> of Test::Harness.
>
> C:\work\padre\Padre>prove -b t\15-locale.t
> t\15-locale.t .. 1/7 2 1 : 3 2 : 2 0 : E r r o r : C a n n o t s e t
> l o c a l e t o l a n g u a g e A r a b i c .
> t\15-locale.t .. ok
> All tests successful.
> Files=1, Tests=7, 3 wallclock secs ( 0.02 usr + 0.04 sys = 0.06 CPU)
> Result: PASS
>
> C:\work\padre\Padre>prove --merge -b t\15-locale.t
> t\15-locale.t .. Failed 1/7 subtests
>
> Test Summary Report
> ---
> t\15-locale.t (Wstat: 0 Tests: 6 Failed: 0)
> Parse errors: Tests out of sequence. Found (4) but expected (3)
> Tests out of sequence. Found (5) but expected (4)
> Tests out of sequence. Found (6) but expected (5)
> Tests out of sequence. Found (7) but expected (6)
> Bad plan. You planned 7 tests but ran 6.
> Files=1, Tests=6, 3 wallclock secs ( 0.02 usr + 0.05 sys = 0.07 CPU)
> Result: FAIL

Hi Gabor, Can you rerun that test in verbose mode? Is the failure still there? If so, can you post the output? We've had problems with --merge in the past because of how it works, but I'm curious to know what this issue is.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Discourage use_ok?
- Original Message

> From: Jonathan Rockway
>
> Why use a script at all? They are clearly difficult to test, and code
> that is difficult to test is where the bugs always hide.

Because there's a lot of legacy code out there and much of it is in the form of scripts. The best way to write a script is something similar to this (or a procedural variant):

#!/usr/bin/env perl

use strict;
use warnings;
use My::App;

my $app = My::App->new;
$app->run(@ARGV);

If you have an older script, you can gradually approach this via refactoring. That, of course, should not be done without tests and that brings us back to the original issue :)

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Discourage use_ok?
- Original Message

> From: Erik Osheim
> To: perl-qa@perl.org
> Sent: Mon, 9 November, 2009 17:15:52
> Subject: Re: Discourage use_ok?
>
> On Mon, Nov 09, 2009 at 04:32:18PM +, David Cantrell wrote:
> > Why not test that the script *works*, not just that it compiles?

Agreed, but it's nice to have a t/00-load.t which bails out if anything (programs or modules) fails to compile. Then individual test programs can exercise them.

> That's a good idea. Maybe something like run_ok()?

But what does that mean? You generally want to test that for a given set of inputs in a given state, you get a particular set of outputs. run_ok() doesn't really manage any of that. Am I missing something here? (I could very well be.)

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Discourage use_ok?
--- On Mon, 9/11/09, David Golden wrote:

> From: David Golden
>
> I don't see any problem with require_ok. I've found it useful as a
> cheap sanity check and don't see the action at a distance problems
> you imply.

use Test::More tests => $gazillion;
require_ok $some_module;
run_gazillion_minus_one_tests();

If the require fails, you get a test failure, but the tests keep running. Meanwhile, you might have a *partially* compiled version of $some_module in a namespace and the subsequent tests can throw very strange errors (I speak from painful experience here). However, if the tests scroll off the screen, perhaps more lines than your screen buffer provides, you might be staring at some weird failure in test 327 and trying to figure out why your perfectly good code is failing a test no matter what you try. In short, require_ok() generally needs an "or die" after it.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
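Put as a complete example, the guarded form of the idiom looks like the following; a core module stands in here so the script actually runs.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More tests => 2;

# Guard against the partially-compiled-namespace problem described
# above: if the module will not load, stop the whole run immediately
# instead of letting hundreds of later tests fail mysteriously.
require_ok('Scalar::Util') or BAIL_OUT('Scalar::Util failed to load');

# Only reached when the require really succeeded.
ok defined &Scalar::Util::blessed, 'blessed() is available after the require';
```

BAIL_OUT aborts the entire test run, which is usually the right call when nothing downstream can pass anyway; a plain "or die" confines the abort to the one test file.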
Re: Discourage use_ok?
--- On Mon, 9/11/09, Philippe Bruhat (BooK) wrote:

> compile_ok() would certainly be interesting with scripts shipped with
> a module, that usually have very little meat that needs testing
> (since most of the work is done in the modules), but that one would
> at least check that they compile.

OK, here's a quick 'n dirty implementation. Takes either a module name or a script name. If we could get this relatively stable, I might slap it in Test::Most if Schwern doesn't want it in Test::More.

#!/usr/bin/env perl

use strict;
use warnings;
use Test::Most 'no_plan';

sub compile_ok ($;$) {
    my ( $module_or_code, $name ) = @_;
    my $tb   = Test::More->builder;
    my $perl = $^X;
    my $command = Test::More::_is_module_name($module_or_code)
        ? "$perl -M$module_or_code -e 1"
        : "$perl -c $module_or_code";
    my $success = system($command) == 0;
    my $error   = $? >> 8;
    my $ok = $tb->ok( $success, $name || "$module_or_code compiles" );
    unless ($ok) {
        $tb->diag(<<"DIAGNOSTIC");
Tried to compile '$module_or_code'.
Exit status: $error
DIAGNOSTIC
    }
    return $ok;
}

compile_ok $0;
compile_ok $0, 'we compile';
compile_ok 'CGI';
compile_ok 'No::Such::Module';

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Discourage use_ok?
--- On Mon, 9/11/09, Ovid wrote:

> From: Ovid
>
> The *only* use I've ever had for use_ok() has been in a t/00-load.t
> test which attempts to load all modules and does a BAIL_OUT if it
> fails. I'm sure there are other use cases, but if that's the only
> one, it seems a very, very slim justification for such fragile code.

Thinking about this more, what about a compile_ok()? It merely asserts that the code compiles (in an anonymous namespace, perhaps?), but doesn't make any guarantees about you being able to even use the code -- just that it compiles. It wouldn't need to be done at BEGIN time, nor would it necessarily require an "or die" after it, since its availability is not guaranteed (though that would be problematic, as cleaning a namespace is also fragile). Just tossing out ideas here.

Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Discourage use_ok?
I've been toying with this thought for a while: discourage (not quite deprecate) use_ok() and require_ok(). I've written up some of the problems with the former (http://use.perl.org/~Ovid/journal/39859) and the latter still has the "or die" problem. For the life of me, I can't really see any utility to use_ok() or require_ok(). Not only are both fragile and a source of strange "action at a distance" bugs, but the constructs they replace work correctly and can be viewed as tests themselves! If either "use My::Module" or "require My::Module" fails, the test program exits with a non-zero exit status, meaning a failure is reported. What do people think? Should we start discouraging the use of these tests? Maybe even go so far as to deprecate them? The *only* use I've ever had for use_ok() has been in a t/00-load.t test which attempts to load all modules and does a BAIL_OUT if it fails. I'm sure there are other use cases, but if that's the only one, it seems a very, very slim justification for fragile code. See my link for more explanation, including how Test::More's own docs get this wrong -- which alone should say something about how new testers will get this wrong when experienced testers do.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
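[Editorial aside: a minimal sketch of the point above, that a bare use() already acts as a compilation test. The core module File::Spec stands in for the module under test; this is an illustration, not code from the thread.]

```perl
use strict;
use warnings;
use Test::More tests => 1;

# A bare use() already behaves as a test: if File::Spec failed to
# compile, this file would die at compile time with a non-zero exit
# status, and the harness would report the whole file as failed --
# no use_ok() and no trailing "or die" needed.
use File::Spec;

ok( File::Spec->can('catfile'), 'File::Spec loaded and usable' );
```

Run under prove, a compile failure in the use line shows up as a failed test file because of the non-zero exit status, exactly as the post argues.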
Re: Testing with Test::Class
--- On Tue, 27/10/09, Michael Peters wrote: > From: Michael Peters > > > If you like the output and want to convert your own > POD to typeset documents, you can check out my VERY alpha > code at: > > > > http://github.com/Ovid/Pod-Parser-GroffMom > > Looks very nice. I've been thinking about doing something > similar with a pod-2-html and then html-2-pdf process for my > conference slides. Would be really nice to be able to write > them in POD and get syntax highlighting out of it too. After a few tweaks, I want to write Pod::Parser::GroffMom::S5. That would allow me to embed "=for s5" in my POD and have my slides and the documentation side-by-side. Think "literate training manuals". I'd much prefer to have Keynote.app slides generated, but Apple's XML format is impenetrable (and no longer has a schema with it. Thanks Apple!). Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Testing with Test::Class
Hi all, As some of you may know, chromatic published my five part series on "Testing with Test::Class" on http://www.modernperlbooks.com/. I'm working on software which automatically converts POD to beautifully typeset pdfs, including syntax highlighting. As a test, I converted that article and uploaded it to slideshare: http://www.slideshare.net/Ovid/testing-with-testclass You might want to read that article for a couple of reasons: * To learn Test::Class (even experienced users will learn stuff) * To give me feedback on how well my converter works :) If you like the output and want to convert your own POD to typeset documents, you can check out my VERY alpha code at: http://github.com/Ovid/Pod-Parser-GroffMom Requires groff (http://www.gnu.org/software/groff/) which is standard on many systems and even has Windows binaries. My code outputs data which groff converts to PostScript, so I don't actually convert directly to PDF, but there appear to be plenty of free PostScript to PDF utilities out there, so I'm not too worried about this (OS X fans can open PostScript files directly in Preview.app). Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Making TODO Tests Fail
- Original Message > From: Salve J Nilsen > > Fork/branch Test::Builder and make it work yourself. When it's ready and > usable, > ask Schwern to evaluate, improve and merge. > > Code = Conversation. :) I know. I've thought about that, but truth be told, I'm really getting burnt out with the Perl community right now. Lots of people are being rude, thinking that being "right" is all they need to justify being arrogant and it's sapping my energy relating to anything regarding Perl right now. If people can't play nice, I don't want to play. I'll dive back in sooner or later, though. I always do. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Making TODO Tests Fail
- Original Message > From: chromatic > Add diagnostics to TODO tests and let your test harness do what it's supposed > to do. Shoving yet more optional behavior in the test process continues to > violate the reasons for having separate test processes and TAP analyzers. We have no diagnostics. We've never had diagnostics (the ad-hoc things going to STDERR don't count because they can't be synched or reliably parsed). Thus, I can't add diagnostics to the TODO tests until Schwern puts diagnostics in Test::Builder or accepts a patch. That doesn't look like it's going to happen any time soon, so telling me to add diagnostics to TODO tests doesn't help :( Thus, I'm trying to think of a way of solving my problem now, not at some hypothetical date in the future. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Making TODO Tests Fail
- Original Message > From: Gabor Szabo > > I think it would be better to have a tool (Smolder) be able to display > various drill-downs from the aggregated test report. > e.g. list of all the TODOs > list of all the TODOs that pass > etc... How would Smolder (which we're not using since we use Hudson) help with this? With over 15,000 tests being reported for t/aggregate.t, I think a drill-down would be problematic here. Plus, tying the TODO to the appropriate test file being aggregated is needed. Thus, I thought about a BAILOUT or forced failure for the TODO at that point. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog - http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Making TODO Tests Fail
We currently have over 30,000 tests in our system. It's getting harder to manage them. In particular, it's getting harder to find out which TODO tests are unexpectedly passing. It would be handy to have some option to force TODO tests to die or bail out if they pass (note that this behavior MUST be optional). Now one might think that it would be easy to track down missing TODOs, but with 15,000 tests aggregated via Test::Aggregate, I find the following unhelpful:

TODO passed: 2390, 2413

If those were in individual tests, it would be a piece of cake to track them down, but aggregated tests get lumped together. Lacking proper subtest support (which might not mitigate the problem) or structured diagnostics (which could allow me to attach a lot more information to TODO tests), at the end of the day I need an easier way of tracking this. Suggestions?

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
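[Editorial aside: one stop-gap that needs no Test::Builder changes is post-processing the TAP itself. TAP::Parser's test results expose todo_passed(), so a small script can list each unexpectedly passing TODO with its test number. A sketch, using an inline sample TAP stream rather than a real test run:]

```perl
use strict;
use warnings;
use TAP::Parser;

# Sample TAP: test 3 is a TODO test that unexpectedly passes.
my $tap = <<'END_TAP';
1..3
ok 1 - normal test
not ok 2 - still broken # TODO fix the frobnitz
ok 3 - fixed # TODO fix the whatsit
END_TAP

my $parser = TAP::Parser->new( { tap => $tap } );
while ( my $result = $parser->next ) {
    next unless $result->is_test;
    printf "TODO passed: test %s\n", $result->number
        if $result->todo_passed;
}
```

Against the sample above this should report test 3. Feeding it each aggregated file's TAP separately would tie the unexpectedly passing TODO back to the right file, which the harness summary line does not do.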
Re: Test module for tests in Perl module distro
- Original Message

> From: Mark Morgan
>
> >> [1] Test::Class is my preferred testing package for work; I don't use
> >> it for stuff destined for CPAN due to adding an extra dependency.
> >> *sigh*
> >
> > Your CPAN modules already depend on things like Moose and Hook::LexWrap
> > and XML::Parser. Leaving out Test::Class at that point is, at best,
> > Pyrrhic.
>
> Yeah, probably true anymore. I generally adopted this approach years
> back, when places that I worked at (at the time a number of ISPs)
> chances of installing any modules was inversely proportional to the
> number of dependencies it had...

I would check to see the likelihood of failure to install a given module and if that's a low % chance relative to the benefits you gain, I'd include it. Regrettably, Test::Class has a relatively high failure rate and thus I'd be less inclined to include it :(

That being said, I've been a rather naughty CPAN developer in that I've sometimes included modules in 'requires' which *I* think are really important or common, thus decreasing the chance that my module will be installed, but increasing the chance, if it's installed, that others who depend on it will install successfully. Hopefully that will also make programmers more likely to use it if it's already there. Yeah, I know. That's not really nice :)

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Singleton subtest fix
Hi all, I've altered Test::Builder to handle the cases where people are using the TB singleton at the top of their test files. You can get a copy at http://github.com/Ovid/test-more/tree/master if you want to test it. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Calling All Test:: Authors
- Original Message

> From: Michael G Schwern
>
> > The latest developer release of Test::More allows subtests. Subtests
> > are great in that they solve a lot of problems in advanced Perl
> > testing, but they have required a change in Test::Builder.
>
> Whoa whoa whoa! While it's great and all to get people to change their
> code to use Test::Builder::Module, I think we got our wires crossed.
>
> subtest() has to be fixed to work with the existing singleton style.
> There's just far too much code out there making that assumption and I'm
> not going to pretend they're all suddenly going to change or that it
> all lives on CPAN.

I remember, when I was a child, reading a parable about a bunch of monkeys who complained about their leaky roof, but every time the sun came out, they always ran around and played and never fixed that roof. Then the rain would return and they'd huddle inside and complain about the leaky roof.

I know subtest() needs to be fixed. However, the next time this issue comes around (and there *will* be a next time if we don't plan for it :) it would be nice if the roof was fixed (or at least less leaky). The great thing about pointing out this issue and asking module authors to address it is that it's NOT an emergency. Now if you'll excuse me, I need to go play in the sunshine.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: "Fluent" tests?
- Original Message > From: Michael Peters > > > use Test::Fluent 'no_plan'; > > > > my ( $have, $want ) = ( 1, 1 ); > > have $have, want $want, reason 'Some test name'; > > have [ 3, 4 ], want [ 4, 5 ], reason 'cuz I said so'; # fails > > true 3, reason '3 had better be true'; > > false 3, reason '3 had better still better be true';# fails > > I would much rather see something like > > cmp_ok( > have => 1, > want => 2, > reason => 'Some test name', > diagnostic => { world => $world }, > ); > > Much more Perlish. I've always disliked some APIs where it's not immediately > clear what's a function call and what's an argument to that function: is > reason() an argument to want() or have()?. It also seems more obvious for > doing > things like data-driven tests where you just have a large data structure that > tests are run against. I've thought about this idea for the past couple of days and while there's nothing *wrong* with your cmp_ok() idea, it doesn't do anything for me. I'm rather curious about stretching my wings, so to speak, and seeing how interesting an interface I could create and whether or not it's useful. cmp_ok() is old ground and anyone could write it. As for your comment "what's a function call and what's an argument", I know what you mean, but again, how can we think about something in a larger context and see what we can accomplish? I want to try something new. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Fw: "Fluent" tests?
I forgot to hit 'reply all' :) Also, I had considered this: have $some_value, assuming { shift > 7 }, reason "Argument must be greater than 7"; And that would allow us to naturally put complex constraints onto the values. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6 - Forwarded Message > From: Ovid > To: Colin Newell > Sent: Tuesday, 30 June, 2009 19:17:54 > Subject: Re: "Fluent" tests? > > - Original Message > > > From: Colin Newell > > > > > my $armageddon = $world->destruction; > > > false $armaggedon, > > > reason "World->destruction must never return true", > > > diagnostic { world => $world }; > > > > > > I don't know that this is a brilliant interface, but just playing around > with > > it works (the have/want pair automatically falls back to Test::Differences > > if > > $have or $want is a reference). > > > > > > It's a lot more explicit that Test::More and friends. That means it's a > > > lot > > > more verbose. However, it's something which could be played with. > > > > At the hackathon Schwern and I had a go at implementing the ugly -> > > type interface for Test::Builder2 based on an idea from a PDX.pm > > meeting. > > > > The idea being that it would work something like this, > > > > ok($func('a'), 'some test')->diag('some cheap diagnostics'); > > > > or for delayed diagnostics, > > > > { > > my $result = ok $func('a'), 'A test'; > > if(!$result) { > > $result->diag("expensive diagnostics we'd rather not run"); > > } > > } > > > > The idea being that these result objects would then be usable by > > modules like Test::More and you would be able to easily attach > > diagnostics in the right place. 
> > > > Note that the block in the second example is a necessary evil for the > > delayed diagnostics because of the way we make sure we don't produce > > the output before we know what diagnostics come with the test. > > > > I did have a go at writing this up at > > http://colinnewell.wordpress.com/2009/03/31/perl-qa-hackathon-testbuilder2/. > > I appreciate the work you've done and I think it would be an improvement over > what we have. However, rather than settle for "good", I'm wondering what > would > be "great"? I certainly don't know that I have the answer here, so I would > love > to see what people think is an INCREDIBLE interface for writing tests. > > > Cheers, > Ovid > -- > Buy the book - http://www.oreilly.com/catalog/perlhks/ > Tech blog- http://use.perl.org/~Ovid/journal/ > Twitter - http://twitter.com/OvidPerl > Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: "Fluent" tests?
- Original Message

> From: David Golden
>
> On Tue, Jun 30, 2009 at 1:17 PM, Ovid wrote:
> > Thoughts? Am I totally smoking crack here? If there's a clean way to
> > shoehorn diagnostics on the Test::More-style interface, I guess that
> > would be ok.
>
> Doesn't Test::Builder2 address this? I'd rather see more energy
> directed at getting that done.

Well, I see this as conflating two issues. First, we've waited years for TB2 and we might have to continue waiting years for it. Second, can we create a better interface? I don't think people should sit around and have to wait for other authors. Schwern and Colin have done good work, but there's nothing there yet and I know Schwern is as bad as I am about pushing out new code :)

Also, I think playing around with more fluent interfaces is a good idea. If my interface is great, why not? If it's bad, what would people *love* to see in a test interface which allows them to naturally write tests?

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
"Fluent" tests?
Part of the problem I have with Test::More and friends is that there's not an easy way to attach diagnostics to any test. This is largely because the interface is set in stone. So I was thinking about a rewritten test interface which allows something like this (this code actually works, by the way):

use Test::Fluent 'no_plan';

my ( $have, $want ) = ( 1, 1 );
have $have, want $want, reason 'Some test name';
have [ 3, 4 ], want [ 4, 5 ], reason 'cuz I said so';    # fails
true 3, reason '3 had better be true';
false 3, reason '3 had still better be true';            # fails

At the end of each of those, you could append a 'diagnostic' tag which is required to take a hashref:

my $armageddon = $world->destruction;
false $armageddon,
    reason "World->destruction must never return true",
    diagnostic { world => $world };

I don't know that this is a brilliant interface, but just playing around with it works (the have/want pair automatically falls back to Test::Differences if $have or $want is a reference). It's a lot more explicit than Test::More and friends. That means it's a lot more verbose. However, it's something which could be played with. Is this a bad idea? Are there other suggestions people want? If we had a dot operator, it would be clean to write this:

have($have).want($want).reason($some_reason);

But we don't, so we'd have to fall back on the ugly:

have($have)->want($want)->reason($some_reason);

Though I could easily overload the dot operator to get the first syntax. Or if we needed delayed evaluation for some reason:

have { $have }, want { $want }, reason $some_reason;

That's also ugly and requires the (&) prototype (and friends). Thoughts? Am I totally smoking crack here? If there's a clean way to shoehorn diagnostics on the Test::More-style interface, I guess that would be ok.
Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
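[Editorial aside: one way such a keyword list can parse without a dot operator is to make want() and reason() ordinary prototyped subs that return key/value pairs, which have() then consumes from the flattened list. This is only a guess at how a Test::Fluent-style DSL might be wired up, not the actual module's implementation:]

```perl
use strict;
use warnings;
use Test::More;

# want() and reason() return tagged pairs; the commas flatten them
# into a single list that have() receives and interprets as named
# arguments. The prototypes let the keywords be called without parens.
sub reason ($)   { ( reason => $_[0] ) }
sub want   ($;@) { ( want   => shift, @_ ) }

sub have ($;@) {
    my ( $have, %args ) = @_;
    return is( $have, $args{want}, $args{reason} );
}

my ( $have, $want ) = ( 1, 1 );
have $have, want $want, reason 'Some test name';

done_testing();
```

Because each keyword just contributes pairs to one list, new tags like diagnostic could be added the same way without touching the parser.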
Re: Calling All Test:: Authors
- Original Message

> From: Ricardo SIGNES
>
> I updated my Test:: libraries to Test::Builder->new in their test
> routines, instead, as that's what I thought the original wisdom was.
> Is that still okay? (I did not add subtest-specific tests.)
>
> That is, I turned:
>
>     my $TEST = Test::Builder->new;
>     sub is_baloney { $TEST->ok('delicious') }
>
> into:
>
>     sub is_baloney {
>         my $TEST = Test::Builder->new;
>         $TEST->ok('delicious');
>     }

Currently this should be fine, as all the builder() method does is call Test::Builder->new (and that's what I did in Test::JSON, to be honest). I can't say whether or not this behavior will change in the future. I just used the information passed to me by Schwern and by the Test::Builder::Module documentation.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Calling All Test:: Authors
From http://use.perl.org/~Ovid/journal/39193

The latest developer release of Test::More allows subtests. Subtests are great in that they solve a lot of problems in advanced Perl testing, but they have required a change in Test::Builder. Previously you could do stuff like this:

package Test::StringReverse;

use base 'Test::Builder::Module';
our @EXPORT = qw(is_reversed);

my $BUILDER = Test::Builder->new;

sub is_reversed ($$;$) {
    my ( $have, $want, $name ) = @_;
    my $passed = $want eq scalar reverse $have;
    $BUILDER->ok( $passed, $name );
    $BUILDER->diag(<<"END_DIAG") if not $passed;
have: $have
want: $want
END_DIAG
    return $passed;
}

1;

And you'd have a simple (untested ;) test for whether or not strings are reversed. The reason that worked is that Test::Builder->new used to return a singleton. This is no longer true. If someone uses your test library in a subtest, the above code would break. Instead, you want to do this:

sub is_reversed ($$;$) {
    my ( $have, $want, $name ) = @_;
    my $passed  = $want eq scalar reverse $have;
    my $builder = __PACKAGE__->builder;
    $builder->ok( $passed, $name );
    $builder->diag(<<"END_DIAG") if not $passed;
have: $have
want: $want
END_DIAG
    return $passed;
}

It's a minor change, it's completely backwards-compatible and it supports subtests.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: [ANNOUNCE] Test::Sims
- Original Message

> From: Michael G Schwern
>
> Ovid wrote:
> > First feature request: automatic Moose support to build random data
> > which conforms to Moose constraints :) (Yes, I know it's much, much
> > harder than it sounds).
>
> Hello, what?

package Person;
use Moose;

has name => ( is => 'ro', isa => 'Str' );
has age  => ( is => 'ro', isa => 'Int' );

... "name" and "age" must be Str and Int respectively. Each can be easy to break, but Test::Sims::Moose would be able to:

use Test::Sims::Moose 'random';

Person->new(
    name => random('Person.name'),
    age  => random('Person.age'),
);

And that would potentially have issues when it assigns "\t" to $name and -12 to $age, even though those are both valid values for the types in question. It would be very difficult to find data which automatically fits any random type, but it could be written for core types and made extensible for other types.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
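[Editorial aside: the "extensible for other types" part could be as simple as a registry of generators keyed on type name. A sketch of the idea in plain Perl -- no Moose required; random() and the generator table are made up for illustration, not the real Test::Sims API:]

```perl
use strict;
use warnings;

# Registry of generators for a couple of core type names. Users
# could register additional entries for their own custom types.
my %generator = (
    Str => sub {
        my @alpha = ( 'a' .. 'z' );
        return join '', map { $alpha[ rand @alpha ] } 1 .. 8;
    },
    Int => sub { return int( rand 2001 ) - 1000 },
);

sub random {
    my $type = shift;
    my $gen  = $generator{$type}
        or die "No generator registered for type '$type'";
    return $gen->();
}

my $name = random('Str');    # eight random lowercase letters
my $age  = random('Int');    # integer in [-1000, 1000]
```

A real version would have to draw from the full range of each type (including awkward-but-valid values like "\t"), which is exactly where it gets hard.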
Re: [ANNOUNCE] Test::Sims
- Original Message

> From: Michael G Schwern
>
> Sim functions can combine together to make more complicated sims.
>
>     package Sim::Person;
>
>     use Test::Sims;
>
>     sub sim_person {
>         my %defaults = (
>             name     => rand_name(),
>             age      => rand_age(),
>             birthday => sim_datetime(),
>         );
>
>         return Person->new( %defaults, @_ );
>     }
>
>     export_sims();
>
> Combine with things like Data::Random and Data::Generate to create yet
> more random data quickly.

First feature request: automatic Moose support to build random data which conforms to Moose constraints :) (Yes, I know it's much, much harder than it sounds).

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Subtest fail with singletons
- Original Message > From: Michael G Schwern > > Ovid wrote: > > > > I've generally been extremely pleased with how robust 'local > > $hash->{value}' > is, > > but you can't localize lexical variables. > > The Test::Builder singleton is now a package global. Given that I think I made that change, I feel pretty silly right now :) Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: prove is not generating archive when test bails out.
- Original Message > From: Michael Peters > > > When running tests with prove -a file.tar.gz it nicely creates the > > archive file > > but if the test bails out the archive file is not created at all. > > > > Is this a feature or a bug ? > > Sounds like a bug. I'm not sure what path a bail out follows, but > TAP::Harness::Archive overrides runtests() to add the archive creation after > the > parent's runtests() have finished. What does TAP::Harness do when it > encounters > a bailout? Is there some exception thrown that T::H::A should catch? Test::Builder exits with 255 on a bail out. TAP::Harness dies. Maybe an exception would be better here. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Buffered parallel subtests
Hi all,

I hope this idea makes sense. I was thinking about the issue of running subtests in parallel when I thought about the idea of a "buffered subtest". Basically, it would work exactly like a normal subtest, but nothing would go to STDOUT or STDERR until the final line (the non-nested test summary line). In other words:

use Test::More tests => 2;

buffered_subtest 'some subtest' => sub {
    plan tests => 2;
    ok 1;
    ok 1;
};
ok 1;

That might emit something like:

1..2
    1..2
    ok 1
    ok 2
ok 1 - 'some subtest'
ok 2

So if the 'some subtest' subtest didn't emit anything until that summary 'ok 1' line, can we safely run subtests in parallel without worrying about whether or not their output overlaps?

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
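[Editorial aside: the capture half of this is already possible with today's Test::Builder, since its output handles accept scalar references. A sketch of just the buffering mechanism -- buffered_subtest() itself is hypothetical and would still need to be built on top of this:]

```perl
use strict;
use warnings;
use Test::More tests => 2;

my $builder = Test::More->builder;

# Point the builder's output at in-memory buffers...
my @saved = ( $builder->output, $builder->failure_output );
my ( $out, $err ) = ( '', '' );
$builder->output( \$out );
$builder->failure_output( \$err );

ok 1, 'buffered test one';
ok 1, 'buffered test two';

# ...then restore the real handles and emit everything atomically.
$builder->output( $saved[0] );
$builder->failure_output( $saved[1] );
print $out;
```

If each parallel subtest captured into its own buffer and flushed in one print at the end, interleaved output from other subtests could no longer split its lines apart.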
Re: Subtest fail with singletons
- Original Message

> From: Michael G Schwern
>
> >> %$Test = %$child;
> >>
> >> Watch out for edge cases of when subtest() dies, make sure the
> >> parent's guts get put back.
> >
> > "local" should be good for doing that right?
>
> Normally, yes. local $Test will localize the value of $Test, which is a
> reference, not its guts. local %$Test would seem to be the right thing,
> but I'm not sure if that will DWIM.

I've generally been extremely pleased with how robust 'local $hash->{value}' is, but you can't localize lexical variables. I'd have to do something like:

my $test = Test::Builder->new;
local $test->{$_} = $child->{$_} for keys %$child;

(And you have to write it similar to that to avoid scoping issues). I think that's safe, but would a clone be safer? We have file handles stored in $test and I'm unsure of the behavior there or how to safely test that.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Subtest fail with singletons
- Original Message

> From: Michael G Schwern
>
> A simple strategy might be to just replace the global singleton with
> the child's guts at the start of a subtest() and then back out again
> at the end.
>
>     %$Test = %$child;

I don't think that the child should ever have knowledge of the parent, but I could easily be wrong. What if I am wrong? With that strategy, you can't do this:

my $builder = Test::Builder->new;

subtest 'some test' => sub {
    plan tests => 13;
    if ( $builder->some_method ) { ... }    # surprise!
};

Of course, you don't get something for nothing. Oh, and I just uploaded a new Test::JSON to make sure it works with subtests :)

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Subtest fail with singletons
- Original Message

> From: David E. Wheeler
>
> > And Test::Exception and many, many other Test:: modules. It's a very
> > common pattern and getting all authors to agree to fix those modules
> > is a dubious strategy, I think.
>
> /me shrug. If their modules fail with a new version of T::B, they have
> to fix it, no?
>
> I'm inclined to think that this is not our problem.

None of them will fail with the new version of T::B (that I'm aware of). However, they'll fail with 'subtest', which means that for many serious test systems (such as ours at the BBC, which is where I found this error), subtests are needed, but useless. What if the T::B singleton held a weak reference to the child and, if it's present, automatically called the child's $Test->ok and $Test->diag? That would catch all of the cases I've seen. It might be a useful stop-gap measure, but it also seems dodgy as hell :(

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Subtest fail with singletons
- Original Message > From: David E. Wheeler > To: Ovid > Cc: perl-qa@perl.org > Sent: Monday, 29 June, 2009 17:38:15 > Subject: Re: Subtest fail with singletons > > On Jun 29, 2009, at 2:19 AM, Ovid wrote: > > >my $Test = Test::Builder->new; > > > > If every test function simply had that line in the function, rather than > trying to share this across all test functions, the code would work fine. > > > > Not sure of the best way of handling this, but it's annoying as heck :( > > Submit a bug report for Test::XML and Test::JSON with a test case > demonstrating > the issue? And Test::Exception and many, many other Test:: modules. It's a very common pattern and getting all authors to agree to fix those modules is a dubious strategy, I think. Cheers, Ovid -- Buy the book - http://www.oreilly.com/catalog/perlhks/ Tech blog- http://use.perl.org/~Ovid/journal/ Twitter - http://twitter.com/OvidPerl Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Subtest fail with singletons
Er, I should probably make it clear that Test::Builder relies on a singleton internally, but it's not really the cause of this bug. Subtests cause the Test::Builder->new singleton to temporarily return a "child" version of TB. It's trying to reuse the TB variable in Test::XML (and other testing modules) which causes this issue.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6

- Original Message

> From: Ovid
> To: perl-qa@perl.org
> Sent: Monday, 29 June, 2009 10:19:19
> Subject: Subtest fail with singletons
>
> The following subtest code fails badly:
>
> use Test::More tests => 2;
> use Test::XML;
>
> ok 1;
> subtest 'FAIL!' => sub {
>     plan tests => 1;
>     is_xml '', '', 'Singleton fail';
> };
> __END__
> xml.t ..
> 1..2
> ok 1
> 1..1
> Cannot run test (Singleton fail) with active children at
> /home/ovid/pips_dev/work/Pips3/branches/rights_modeling/deps/lib/perl5/Test/XML.pm line 57.
> # Child (FAIL!) exited without calling finalize()
>
> The reason this happens is because Test::XML, at the top of its code, has this:
>
> my $Test = Test::Builder->new;
>
> If every test function simply had that line in the function, rather than
> trying to share this across all test functions, the code would work fine.
>
> Not sure of the best way of handling this, but it's annoying as heck :(
>
> Cheers,
> Ovid
Subtest fail with singletons
The following subtest code fails badly:

    use Test::More tests => 2;
    use Test::XML;

    ok 1;
    subtest 'FAIL!' => sub {
        plan tests => 1;
        is_xml '', '', 'Singleton fail';
    };
    __END__

    xml.t ..
    1..2
    ok 1
    1..1
    Cannot run test (Singleton fail) with active children at
    /home/ovid/pips_dev/work/Pips3/branches/rights_modeling/deps/lib/perl5/Test/XML.pm line 57.
    # Child (FAIL!) exited without calling finalize()

The reason this happens is that Test::XML, at the top of its code, has this:

    my $Test = Test::Builder->new;

If every test function simply had that line in the function, rather than trying to share this across all test functions, the code would work fine.

Not sure of the best way of handling this, but it's annoying as heck :(

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Nested Aggregate Tests
I might be able to release a new Test::Aggregate to the CPAN. It includes Test::Aggregate::Nested, which is like the current Test::Aggregate but much cleaner, and it uses nested TAP.

Because it can now assert a plan (which Test::Aggregate couldn't), and this plan equals the number of tests, code which used Test::Aggregate::Nested and also had extra tests included would break, because Test::Aggregate::Nested was asserting that plan. The solution was obvious: run the Test::Aggregate::Nested tests inside their own subtest. That led to the following very delightful output in my terminal:

    1..5
        1..5
            1..1
            ok 1 - findbin is reinitialized for every test
        ok 1 - aggtests/findbin.t
            1..0 # SKIP Testing skip all
        ok 2 # skip Testing skip all
            1..1
            ok 1 - subs work!
        ok 3 - aggtests/subs.t
            1..2
            ok 1 - slow loading module loaded
            ok 2 - env variables should not hang around
        ok 4 - aggtests/slow_load.t
            1..5
            ok 1 - aggtests/check_plan.t * 1
            ok 2 - aggtests/check_plan.t * 2
            ok 3 # skip checking plan (aggtests/check_plan.t * 3)
            ok 4 - env variables should not hang around
            ok 5 - aggtests/check_plan.t * 4
        ok 5 - aggtests/check_plan.t
    ok 1 - nested tests
    ok 2 - Startup should be called once
    ok 3 - ... as should shutdown
    ok 4 - Setup should be called once for each test program
    ok 5 - ... as should teardown
    ok
    All tests successful.
    Files=1, Tests=5, 1 wallclock secs ( 0.02 usr 0.00 sys + 0.11 cusr 0.01 csys = 0.14 CPU)
    Result: PASS

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: [ANNOUNCE] Test::More/Builder 0.89_01 now with subtests
--- On Thu, 25/6/09, Josh Heumann wrote:
> From: Josh Heumann
>
> > As long as we're bike-shedding, a simplification:
> >
> >     subtest {
> >         plan "sanity check" => 3;
> >         pass for 1 .. 3;
> >     }
>
> +1
>
> I like anything that keeps it roughly in line with the syntax for TODO
> and SKIP blocks:

I understand where you're coming from, but different things should look different. SKIP and TODO are relatively similar to each other, but subtests are significantly different from them.

In any event, I'm completely mystified why anyone has a problem with the "subtest $name, sub { ... }" syntax. Honestly :) But since I don't have a clue and am not particularly fussed, I'll just bow out and let you folks have at it.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
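[Editor's note: for readers following the thread, the "subtest $name, sub { ... }" form under discussion is the one Test::More eventually shipped. A minimal runnable sketch:]

```perl
# The "subtest $name, sub { ... }" syntax as provided by Test::More.
# The subtest declares its own plan and its results appear as nested
# TAP under a single test line in the parent.
use strict;
use warnings;
use Test::More tests => 1;

subtest 'sanity check' => sub {
    plan tests => 3;
    pass for 1 .. 3;
};
```

The fat comma makes the name read like a label while keeping it in front of the block, which is the ordering concern discussed in the next message.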
Re: [ANNOUNCE] Test::More/Builder 0.89_01 now with subtests
- Original Message
> From: Paul Johnson
>
> One question though. Why
>
>     subtest "text", sub {};
>
> rather than
>
>     subtest {}, "text";
>
> ?
>
> The latter seems more consistent as well as removing a rather annoying bit of
> syntax. Were you worried that "text" might get lost at the end of the sub?

I would prefer the 'subtest {}, "text"' syntax, but you're right: the concern is the text getting lost at the end. It's especially bad if you have a really long block of tests in your subtest. In any event, it's in Schwern's hands now :)

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
Re: Combining TAP with more extensive logging of raw data
- Original Message
> From: Michael G Schwern
>
> Gabor Szabo wrote:
> > I recall that we talked about a possibility to emit yamlish but the last
> > thing I remember was the discussion about lower or upper case names...
> > Was there any progress on that subject?
>
> tl;dr version: Yes, it's resolved, at least to the satisfaction of Ovid and
> me, who were the two most worried about it.
>
> Ovid and I talked about it first thing at the latest QA hackathon. Ovid,
> correct me if I'm wrong but...
>
> 1) /^[a-z]/ is fine for official keys. Everything else is unofficial. This
> avoids the silly "Hungarian I" problem.
>
> 2) There was a fight at QA1 about whether unofficial keys should always be
> prefixed with an identifier saying where they came from to avoid collisions
> (Ovid's position) or should not be prefixed, to allow overlap of commonly
> agreed on, yet unofficial, keys (my position).
>
> At QA2 we finally communicated that A) we can't usefully enforce a prefix and
> B) what Ovid really wanted was to ensure that he can define keys and
> guarantee their meaning to him. End result: add a prefix if you want to
> protect your private key, and if you don't, don't.

Yeah, I can deal with all of this. I think the main thing is that any YAML diagnostics which accept arbitrary added keys will have to:

a. Reject any key matching /^[[:lower:]]/ (or is /^[a-z]/ really preferred here?)
b. For now, just let people add arbitrary data for diagnostics:

    ---
    have: 3
    want: 4
    file: t/foo.t
    line: 17
    user: ???
    ...

Start small. Grow later.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
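[Editor's note: the key convention above (lowercase-led keys reserved for official use, everything else available to users) can be sketched as a tiny validator. This helper is hypothetical; it is not part of any TAP module.]

```perl
# Hypothetical helper enforcing the proposed convention: keys that
# start with a lowercase ASCII letter are reserved for official TAP
# diagnostics, so user-supplied keys must not match /^[a-z]/.
# The ASCII range (rather than [[:lower:]]) sidesteps locale-specific
# casing surprises such as the Turkish dotless i.
use strict;
use warnings;

sub is_valid_user_key {
    my ($key) = @_;
    return $key !~ /^[a-z]/;
}
```

Under this rule, `have` and `want` are official-style keys, while prefixed keys like `MyApp_timing` remain safely in user space.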
Re: Combining TAP with more extensive logging of raw data
- Original Message
> From: Gabor Szabo
>
> I am quite confused and I am not sure what I really want :-)
>
> I recall that we talked about a possibility to emit yamlish but the last
> thing I remember was the discussion about lower or upper case names...
> Was there any progress on that subject?
>
> Anyway, here is another thing that I found. The test script fetches a few
> rows from a database and prints out a nicely formatted table of the values
> using high quality ascii art:
>
>     1 | 3 | foo
>     1 | 7 | bar
>
> I can just print the array holding this using explain \@data but that will
> lead to an uprising.

It will also likely lead to misinterpreted test results :) explain() sends its data to the diagnostic filehandle. That's usually STDERR, which is not guaranteed to be in sync with STDOUT. Thus, the extra information printed is not guaranteed to go with the test lines you think it does.

> The people who need to see this are Java and Matlab programmers. Any other
> YAML-like output will still be inferior to that nicely formatted table, but
> I hope I'll be able to hook up some formatter to display it nicely,
> preferably inside Smolder, as that's what we are going to use to collect the
> reports.

This is exactly the sort of use case envisioned for TAP diagnostics. First, the diagnostics go to STDOUT instead of STDERR, guaranteeing that they are synched correctly. Second, user-supplied diagnostics *are* parsed by TAP::Parser (see TAP::Parser::Result::YAML), and instead of getting a chunk o' text, you get the appropriate Perl data structure (though you can get the raw text if you want). You would just set your string as the value of a key and it would magically work.
So, hypothetically, you could do this:

    while ( my $result = $parser->next ) {
        if ( $result->is_yaml ) {
            display( $result->data->{ascii_art} );
        }
    }

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog - http://use.perl.org/~Ovid/journal/
Twitter - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
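[Editor's note: that hypothetical loop can be made self-contained by feeding TAP::Parser a literal TAP version 13 stream. The have/want keys mirror the example earlier in the thread; instead of a made-up display() function, this sketch just captures the parsed structure.]

```perl
# Parse a TAP version 13 stream and pull structured data out of a
# YAML diagnostics block using TAP::Parser (ships with Test::Harness).
use strict;
use warnings;
use TAP::Parser;

# YAML diagnostics must follow a test line, indented two spaces,
# delimited by "---" and "...", and require "TAP version 13".
my $tap = <<'END_TAP';
TAP version 13
1..1
ok 1 - fetched row
  ---
  have: 3
  want: 4
  ...
END_TAP

my $parser = TAP::Parser->new( { tap => $tap } );
my $diagnostics;

while ( my $result = $parser->next ) {
    if ( $result->is_yaml ) {
        # data() returns the parsed Perl structure, not raw text
        $diagnostics = $result->data;
    }
}

printf "have=%s want=%s\n", $diagnostics->{have}, $diagnostics->{want};
```

The same loop works unchanged when the parser is constructed from a live test run rather than a string of TAP.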