Tagging tests

2012-04-24 Thread Daniel Perrett
Is there any way to 'tag' tests in Perl?

What I mean is that, ideally, if you tagged all your tests according
to the functionality they depend on, you could use the tags to work
out more easily what was going wrong.

# Looks like you failed 21 tests of 75.
# Failures by tag:
#  syntax: 0/50
#  lwp: 21/25
#  http: 21/21
#  search: 20/20
#  unicode: 5/8

This report is for some hypothetical module which imports some syntax
and allows the user to run some searches over the web via an LWP
object.

Here, we can see at a glance that there is nothing wrong with the
syntactic code, and the code which requires LWP itself is probably
fine because the problem only comes when the user tries to make http
requests.

It looks likely that even though all the search tests fail, they are
failing because there is no working connection, as tested by the first
http request. Although five of the unicode tests are failing, three
aren't (the ones which just throw unicode characters at the syntax).

Assuming there isn't any way to 'tag' tests like this, should there
be? The problems I can see are:

- How to make it easy to do syntactically, avoiding lots of repetition
- Whether this requires a complete rewrite of the assumptions of the
core test handling modules
- The effort involved for test authors in tagging tests in this detail
- Some test failures happen because something unexpected went wrong,
and tagging is only useful if we know in advance which categories are
useful.
- It could encourage authors to be lazy

But the advantages are:

- If there are common problems (computer can't access the net, unicode
handling is dodgy), this makes it more straightforward to diagnose
than reading through logs of very long test scripts with lots of
failure diagnostics
- Might be useful for coverage checking (you could write something
asking if you have a test which has a particular combination of tags)
- You can write more clever algorithms to try to pinpoint where the
problem is (e.g. which combinations of tags always fail)
- Makes TDD easier because you can write lots of tests which will fail
(and which you know you can ignore because they depend on code that
isn't written yet), and you can still focus on the features you're
writing right now.

(I guess one answer could be 'write them in separate test scripts' but
what I want is tags (many-to-many) rather than categories
(many-to-one), and having more files is a bit cumbersome.)
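
For concreteness, here is a minimal sketch of what such a tagging
layer could look like as a thin wrapper over Test::More. The
tagged_ok() helper and the %tally bookkeeping are hypothetical, not
an existing module:

    #!/usr/bin/env perl
    # Sketch: wrap Test::More's ok() so each assertion records
    # pass/fail counts per tag, and print a summary at exit.
    use strict;
    use warnings;
    use Test::More;

    my %tally;    # tag => [ failed, total ]

    sub tagged_ok {
        my ($ok, $name, @tags) = @_;
        for my $tag (@tags) {
            $tally{$tag}[1]++;
            $tally{$tag}[0]++ unless $ok;
        }
        return ok($ok, $name);
    }

    END {
        diag("Failures by tag:");
        diag(sprintf "  %s: %d/%d", $_, $tally{$_}[0] || 0, $tally{$_}[1])
            for sort keys %tally;
    }

    tagged_ok(1 + 1 == 2, "addition works", qw( syntax ));
    tagged_ok(0, "simulated network failure", qw( lwp http ));
    done_testing();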

Daniel


Re: Tagging tests

2012-05-01 Thread Daniel Perrett
Thanks, I'll look into both!

Daniel

On 1 May 2012 19:17, Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 wrote:
>> Is there any way to 'tag' tests in Perl?
> 
>
>> - If there are common problems (computer can't access the net, unicode
>> handling is dodgy), this makes it more straightforward to diagnose
>> than reading through logs of very long test scripts with lots of
>> failure diagnostics
> 


Re: Proposal Test::TAPv13

2012-07-11 Thread Daniel Perrett
I may have missed the point here, but would it be sufficient to have
two flavours of diagnostic calls, distinguishing between pre-test
diagnostics (`forewarn()`) and post-test diagnostics (`reflect()`), or
is the problem that ok() must be a single function call?

If that's possible, maybe we can extend it to guess at what the
various messages mean: most diags can be assumed to be of the
`reflect()` type, as that's the typical behaviour (e.g. in
Test::More), while plain Perl warnings are most likely to be of the
`forewarn()` variety.
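
To make that concrete, here is a toy sketch; forewarn() and reflect()
are just the hypothetical names from above, and nothing like this
exists in Test::More today:

    use strict;
    use warnings;

    my (@pending, @results);

    # Pre-test diagnostics: queued, then attached to the *next* result.
    sub forewarn { push @pending, @_ }

    # Post-test diagnostics: attached to the most recent result.
    # (Only meaningful after at least one my_ok() call.)
    sub reflect { push @{ $results[-1]{diag} }, @_ }

    sub my_ok {
        my ($ok, $name) = @_;
        push @results,
            { ok => $ok, name => $name, diag => [ splice @pending ] };
        printf "%s %d - %s\n", ($ok ? "ok" : "not ok"),
            scalar @results, $name;
    }

    print "1..1\n";
    forewarn("about to touch the network");
    my_ok(1, "fetch worked");
    reflect("response took 0.2s");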

Daniel

On 10 July 2012 19:11, Michael G Schwern wrote:
> On 2012.6.1 5:40 AM, Steffen Schwigon wrote:
>> I am about to upload Test::TAPv13. I also did a PrePAN entry for that.
>>
>> PrePAN:   http://prepan.org/module/429En4oFbn .
>> URL:  https://github.com/renormalist/Test-TAPv13
>> Synopsis:
>>
>>    use Test::TAPv13 ':all'; # must come before Test::More
>>    use Test::More tests => 2;
>>
>>    my $data = { affe   => { tiger => 111,
>>                             birne => "amazing",
>>                             loewe => [ qw( 1 two three ) ],
>>                           },
>>                 zomtec => "here's another one",
>>                 "DrWho" => undef,
>>               };
>>
>>    ok(1, "hot stuff");
>>    tap13_yaml($data);
>>    tap13_pragma "+strict";
>>    ok(1, "more hot stuff");
>>
>> Does it make sense?
>> Did I overlook an existing way to generate TAP v13?
>
> I'm just seeing this now.  The output looks correct and I'm happy to see
> somebody playing with the structured diagnostics for reals!  There's a few
> code nits, but I'll note them on github.
>
> Test::Builder1.5 can generate TAP v13, in fact it does so by default, but is
> currently lacking the structured diagnostics part.  Part of this is just
> time/effort, but the larger part is with how tests are written...
>
> The problem with...
>
> ok( ... );
> diagnostics( ... );
>
> Is that there's no reliable way to know that diagnostics() goes with the
> previous ok().  This is ok for TAP, where the test result and diagnostics are
> on separate lines and can be printed separately, but other formats need to
> print out the results and diagnostics together.  Like anything where the
> diagnostics information is inside an XML tag, for example.
>
> That pattern has to be rethought.  This was one of the goals of Test::Builder2
> (the new class to replace Test::Builder) but that is on hold.  Any thoughts?
>
> Also, when did we add the pragma thing?
>
>
> --
> The interface should be as clean as newly fallen snow and its behavior
> as explicit as Japanese eel porn.
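
One shape the rethought pattern could take: pass the diagnostics into
the assertion itself, so the result and its diagnostics always travel
together. A hypothetical sketch (not the actual Test::Builder2 API):

    use strict;
    use warnings;

    my $count = 0;

    # Hypothetical: take structured diagnostics as arguments so they
    # can be emitted with the result, here as a TAP v13 YAML block.
    sub ok_with_diag {
        my ($ok, $name, %diag) = @_;
        $count++;
        printf "%s %d - %s\n", ($ok ? "ok" : "not ok"), $count, $name;
        if (!$ok && %diag) {
            print "  ---\n";
            print "  $_: $diag{$_}\n" for sort keys %diag;
            print "  ...\n";
        }
    }

    print "TAP version 13\n";
    print "1..2\n";
    ok_with_diag(1, "hot stuff");
    ok_with_diag(0, "more hot stuff", got => 41, expected => 42);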


Re: [LDTP-Dev] Announce: Cobra 2.0 - Windows GUI test automation tool

2012-08-03 Thread Daniel Perrett
QBasic?

On 03/08/2012, Gabor Szabo wrote:
> FYI something is really missing there :-(
>   Gabor
>
> -- Forwarded message --
> From: Nagappan Alagappan 
>
> * Java / C# / VB.NET / PowerShell / Ruby are now officially supported
> LDTP scripting languages other than Python
>
> [...]
>
>
> About LDTP:
>
> Cross Platform GUI Automation tool: the Linux version is LDTP, the
> Windows version is Cobra, and the Mac version is PyATOM (work in
> progress).
>
> Download source: https://github.com/ldtp/cobra
>


Re: Running perl from a test on Windows

2013-01-29 Thread Daniel Perrett
"I need to collect the output from the other Perl library *without
loading it*, because I also want to make sure that my library loads it
for me"
Is there a reason the output has to be created during testing rather
than being part of the distribution? What about running it in a .t
file which precedes the .t file in question?

Daniel

On 29/01/2013, Buddy Burden wrote:
> schwern,
>
> Oh, hey, look: I never responded to this.  Bad coder.
> :-)
>
>>>> One way to deal with this problem is to use the flexibility of TAP and
>>>> eschew a testing library.
>>>>
>>>>     print "1..1\n";
>>>>
>>>>     ...run your code normally here...
>>>>
>>>>     print $ok ? "ok 1\n" : "not ok 1\n";
>
>>> Well, yes, but then I have to do all the redirection stuff myself.
>
>> I think you misunderstand.  You need a fresh process uncontaminated by
>> any other library to run your test in.  Each .t file is a fresh process
>> over which you have nearly total control (enough for your purposes).
>> Load the library and test it directly.
>>
>> $ cat t/doesnt_load_other_modules.t
>> #!/usr/bin/env perl
>>
>> print "1..1\n";
>>
>> require My::Library;
>> print !$INC{"Some/Other/Library.pm"} ? "ok 1\n" : "not ok 1\n";
>>
>> Done.  Nothing else goes in that .t file.  Straightforward Perl.  No
>> cross platform concerns.
>
> Okay, I think what you're saying would work for _one_ of my problems.
> But I actually have two concurrent problems:
>
> * I need to compare the output from another Perl library with the
> output from my library.  (In this case it happens to be Data::Printer,
> but I think that shouldn't matter.)
> * I need to collect the output from the other Perl library *without
> loading it*, because I also want to make sure that my library loads it
> for me.
>
> Now, one strategy I could employ here is to put those two tests into
> two totally separate .t files.  That would work perfectly.  But it
> just seems messy to me ... like I'm cheating, somehow.  I'm testing
> two tightly coupled things:
> # Does my library load the module?
> # Having loaded it, does it use the module properly to produce the
> expected output?
>
> Those two things seem like they _ought_ to be two tests in a single
> test file, not two separate test files just because I can't figure out
> how to make Windows play nice. :-/
>
> And I rather thought this would end up being a solved problem, that
> lots of folks would have run into this previously.  But I guess either
> people haven't, or they haven't often enough to come up with a clever
> workaround.  So it looks like I have two viable options:
>
> # Give up on the sensibleness of putting both tests into one test file.
> # Give up on the convenience of avoiding making a temp file for the
> script by using -e.
>
> Neither one is _particularly_ attractive, but I think I'm going with #2
> there.
>
> (And thanks for making me spell this out; it's helpful in case I
> really do decide to do a blog post.)
>
>
> -- Buddy
>