Re: Lessons from the test function parameter placement quibbles?

2006-07-17 Thread Adrian Howard


On 17 Jul 2006, at 12:49, Ovid wrote:


- Original Message 
From: A. Pagaltzis [EMAIL PROTECTED]

[snip]

And you know what? We don’t even need Test::More::NextGen to
implement that. All functions as they stand could unambiguously
accept a hashref as their single argument.


That's going to cause other problems.

  use Test::Exception;
  lives_ok { @args }; # hashref or coderef?
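
To spell out the ambiguity - a minimal sketch, nothing to do with Test::Exception's real internals: with a (&) prototype leading braces parse as a code block, without one the same braces can just as easily be an anonymous hash.

  use strict;
  use warnings;

  sub takes_block (&) { return ref shift }    # (&) prototype: braces become an anon sub
  sub takes_ref       { return ref shift }    # no prototype: caller passes a plain ref

  print takes_block { 42 }, "\n";             # prints "CODE"
  print takes_ref( { answer => 42 } ), "\n";  # prints "HASH"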


sound of Adrian ducking to avoid the oncoming conversation about  
prototypes


:-)

Adrian



Re: TAP extension proposal: test groups

2006-07-03 Thread Adrian Howard


On 3 Jul 2006, at 13:56, Adam Kennedy wrote:


That seems like a problem too, but the one I'm trying to get at is
4. "no plan", with groups
If your script exits prematurely after one of the groups, the harness
will not notice because everything looks just fine. The solution to
this is not to use "plan, with groups", because then you have to count
all the tests individually, which goes against objective #2,


But then we've had this problem up till now anyway.

If it exits prematurely with a good return code now, it's a
correct ending; if it returns with a bad return code, it's an error.


The addition of groups will not change that behaviour in unplanned
test space, because what you want is simply unknowable.


If we don't have some way of signifying the end of a group in TAP  
then it removes a chunk of the utility for the people writing things  
that generate TAP - since everybody has to write their own checks  
that groups actually output the number of tests that they should.


If we have an end-of-group marker the TAP::Parser can pick this up -  
which seems much more sensible to me.


This, I think, is the same issue as the mixing of grouped and
non-grouped tests that I wrote about yesterday. Without an end-of-group
marker, a test script sending fewer or more tests than the specified
number for the group cannot be detected.


Cheers,

Adrian


Re: TAP extension proposal: test groups

2006-07-03 Thread Adrian Howard


On 3 Jul 2006, at 17:47, Ovid wrote:


- Original Message 
From: Michael G Schwern [EMAIL PROTECTED]

* It's backwards compatible.  The ..# lines are currently considered
junk and ignored.


Is this behavior documented anywhere?

[snip]

From Test::Harness::TAP


Anything else
Any output line that is not a plan, a test line or a diagnostic is  
incorrect. How a harness handles the incorrect line is undefined.  
Test::Harness silently ignores incorrect lines, but will become  
more stringent in the future.


Cheers,

Adrian


Re: TAP extension proposal: test groups

2006-07-02 Thread Adrian Howard


On 1 Jul 2006, at 23:38, Michael G Schwern wrote:


Cons?


* Doesn't handle nested groups - but I have to admit that's a use  
case I've never wanted :-)


* Doesn't handle groups with an undefined number of tests. The  
obvious solution would be to allow .. sans numeric suffix so you  
would have something like


..2 - I want to run two tests
ok 1
ok 2
.. - I don't know how many tests in this group
ok 3
ok 4
ok 5
ok 6
..1 - one last test
ok 7
1..7

* Since there is no end-of-group marker you get problems when
mixing grouped and non-grouped tests together. Let's pretend
Test::Block outputs TAP in this format:


use Test::More 'no_plan';
use Test::Block qw( $Plan );

{
    local $Plan = 2;
    pass 'first test';
    # oops - forgot second test
}
pass 'non-grouped test';

gives us

..2
ok 1 - first test
ok 2 - non-grouped test
1..2

which misses the fact that the second test isn't intended to be part of
the group. This is nasty enough to need fixing IMO.
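
For what it's worth, a purely hypothetical end-of-group marker (made-up notation, not part of any TAP spec) would make the mistake visible:

..2
ok 1 - first test
..end             # hypothetical end-of-group marker: only one test arrived, so the group is short
ok 2 - non-grouped test
1..2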


* Another use case that this doesn't support is having the output  
from different test groups interleaved. This is something that I  
would  have found useful on the occasions that I've built test farms  
that collate the TAP output from different machines together. Rather  
than wait for each to finish and output everything together it would  
be nice to be able to do something like:


ok 1.1 - first result from first group
ok 2.1 - first result from second group
ok 2.2 - second result from second group
ok 1.2 - second result from first group

Although this could also be handled by a test runner that was bright  
enough to read multiple streams at the same time - which might be a  
better solution than making TAP more complex now that I think of it...


Cheers,

Adrian



Re: TAP::Harness

2006-07-02 Thread Adrian Howard


On 1 Jul 2006, at 20:36, Michael G Schwern wrote:
[snip]

* How can I help?

Provide use cases, what would you want to do with Test::Harness if you
could?  What are you doing with Straps?  What features do other
testing systems (JUnit, for example) have that you'd like to see in
Perl?  Once I post the design, pick it to pieces.


Use case: Run test scripts simultaneously

I sometimes split up long test runs by running multiple test scripts
on multiple machines, or on the same machine where multiple CPUs,
blocking for IO, etc. mean I'll get quicker results.


Test::Harness currently forces me to process the test results in  
series - which means that I can wait longer than necessary to  
discover a test failure that's been processed, but is waiting behind  
a longer running test script.


I'd like TAP::Harness to be able to accept multiple streams of TAP  
input that it can process simultaneously.
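
Something along these lines is the sort of thing I have in mind - a toy sketch (the script names are made up) that runs each test script in its own process and prefixes its TAP so the streams can be told apart as they arrive:

#!/usr/bin/perl
use strict;
use warnings;

my @scripts = ( 't/quick.t', 't/slow.t' );    # made-up test script names

my @pids;
for my $script (@scripts) {
    defined( my $pid = fork ) or die "fork failed: $!";
    if ( !$pid ) {                            # child: run one script, relay its TAP
        $| = 1;                               # flush line by line so output interleaves cleanly
        open my $tap, '-|', $^X, $script
            or die "can't run $script: $!";
        while ( my $line = <$tap> ) {
            print "[$script] $line";
        }
        exit;
    }
    push @pids, $pid;
}
waitpid $_, 0 for @pids;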


Cheers,

Adrian



Re: Running individual unit tests with Test::Class???

2006-05-31 Thread Adrian Howard

[apologies to andrew for a dupe - didn't notice it went to perl-qa]

On 31 May 2006, at 14:35, Andrew Gianni wrote:

Let me start by admitting that I don't know a whole lot about xUnit testing.
In fact, using Test::Class is really my first exposure to the idea, so
perhaps I'm asking for something that doesn't make sense; please bear with
me in case I am.


[excellent description snipped]

In short:
* Yes - running one test method at a time is a sensible thing to do.
* No - there currently isn't a simple way of doing this
* Good news - Ovid has submitted a patch to make it easy
* Bad news - I've been too bone idle to apply it

Hopefully I will become less lazy soon :-) Should be in the next  
release, which is well overdue.
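
For the impatient - the patch adds a TEST_METHOD environment variable (a regex naming which test methods to run), so once a release with it applied is out the usage should look roughly like:

# assumes a Test::Class release with the TEST_METHOD filter applied
$ENV{TEST_METHOD} = 'foo_returns_correct_value';   # regex matched against method names
Test::Class->runtests;                             # only matching test methods get run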


Cheers,

Adrian




Re: Test me please: P/PE/PETDANCE/Test-Harness-2.57_06.tar.gz

2006-04-25 Thread Adrian Howard

On 24 Apr 2006, at 15:51, Shlomi Fish wrote:
[snip]

Am I missing something or isn't that what
Test::Harness::Straps/Test::Run::Straps are for? If not, I suppose I can
extract a class out of Test::Run::Straps that will provide a reusable TAP
parser.

[snip]

In addition to Michael's and chromatic's points, the T::H::S API is
oriented around getting the results of a successful parse, while I
would prefer a more event-based model for some purposes (do Foo when
you see a test failure - where Foo might involve stopping early).
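
By "event-based" I mean something like this hand-rolled sketch - not the T::H::S API; the callback names and t/foo.t are made up:

use strict;
use warnings;

sub read_tap {
    my ( $fh, %on ) = @_;
    while ( my $line = <$fh> ) {
        if ( $line =~ /^not ok\b/ ) {
            # give the callback a chance to say "stop now"
            last if $on{failure} && $on{failure}->($line);
        }
        elsif ( $line =~ /^ok\b/ ) {
            $on{pass}->($line) if $on{pass};
        }
    }
}

open my $tap, '-|', $^X, 't/foo.t' or die "can't run t/foo.t: $!";
read_tap( $tap, failure => sub { warn "FAILED: $_[0]"; return 1 } );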


Adrian



Re: Test me please: P/PE/PETDANCE/Test-Harness-2.57_06.tar.gz

2006-04-24 Thread Adrian Howard


On 23 Apr 2006, at 20:05, Shlomi Fish wrote:
[snip]
This debate demonstrates why a plugin system is necessary for a  
test harness.

If it has it, then one can write a plugin to control whether or not
percentages are displayed. So for example, you can install a plugin that
does that, and put this in your .bash_profile:

[snip]

That's not the issue for me. I can already write my own test runners  
without too much effort.


For me the issue is why we're removing a useful (to a few people  
anyway), already implemented feature from the default test runner.


Although I agree with Michael and chromatic that a separate TAP  
parser would be most pleasant.


Cheers,

Adrian


Re: Test me please: P/PE/PETDANCE/Test-Harness-2.57_06.tar.gz

2006-04-23 Thread Adrian Howard


On 23 Apr 2006, at 07:02, Andy Lester wrote:
[snip]
I've removed the meaningless percentages of tests that have  
failed.  If you rely on the output at the end, it's different now.

[snip]

I'll just repeat what I left on Andy's blog here in case anybody  
agrees with me.



I don't like the change myself. I'm bright enough to figure out that  
anything less than 100% pass is bad when developing.


When using other people's test suites, seeing, for example, "99% ok"
tells me something very different from seeing "3% ok". For me the
difference between "nearly there apart from this bit of functionality
that I don't care about" and "completely f**ked" is useful. Yes, I can
figure it out from the test/pass numbers - but the percentage gives me
a handy overview. Math is hard! :-)


Not something I feel /that/ strongly about - but I don't see the  
utility of the change myself (beyond code simplification in T::H).



(probably just me :-)

Adrian



Re: Non-Perl TAP implementations (and diag() problems)

2006-04-20 Thread Adrian Howard


On 19 Apr 2006, at 09:02, Ovid wrote:
[snip]

From a parser standpoint, there's no clean way of distinguishing that
from what the test functions are going to output.  As a result, I
really think that diag and normal test failure information should be
marked differently (instead of the /^# / that we see).

[snip]

I've thought in the past about using /^## / for non-test related
diagnostics:


## Start the fribble tests
ok 1 - fribble foo
not ok 2 - fribble bar
#   Failed test 'fribble bar'
#   in untitled text 2 at line 5.
#  got: 'baz'
# expected: 'bar'
## Start the blart tests
# ok 1 - blart foo
... etc ...

Reads reasonably to me and has the advantage of being backward  
compatible.
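
The parsing side stays trivial - roughly this (the note_* helpers are just placeholder names):

# rough sketch - sort "##" commentary from "#" failure diagnostics
while ( my $line = <$tap> ) {
    if    ( $line =~ /^##\s?(.*)/   ) { note_comment($1)    }   # general commentary
    elsif ( $line =~ /^#\s?(.*)/    ) { note_diagnostic($1) }   # failure diagnostics
    elsif ( $line =~ /^(not )?ok\b/ ) { note_result($line)  }
}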


?

Adrian


Re: Use case testing of Web apps with Perl?

2006-04-20 Thread Adrian Howard

On 19 Apr 2006, at 17:12, Andrew Gianni wrote:
[snip]
We'd like to be a bit more programmatic about writing our mech tests to
test use-case driven test-cases. I'm wondering if there are any tools or
ideas out there to ease the process so we don't have to manually write the
numerous mech tests individually or develop our own framework for this.


Any recommendations are appreciated.

[snip]

I'll second Luke's recommendation of Selenium (and related Firefox
plugins). Damn fine.


If you're willing to play with Ruby, Watir is well worth a look:
http://rubyforge.org/projects/wtr/.


If you want to stick with Perl, Samie (http://samie.sourceforge.net/)
may be worth playing with - I find it painful compared to Watir myself
though.


And of course there is the venerable HTTP::Recorder (see
http://www.perl.com/pub/a/2004/06/04/recorder.html for a tutorial).


Cheers,

Adrian


Re: Non-Perl TAP implementations (and diag() problems)

2006-04-20 Thread Adrian Howard


On 20 Apr 2006, at 16:55, Michael Peters wrote:
[snip]

I'm not sure I agree that there is a difference between them. They are
both comments output by the tests. Just because one comes from the
testing routine used by the test and the other from the test itself
doesn't mean they aren't both just human readable comments on the  
test run.

[snip]

It's useful to distinguish between them for things like home-brew  
test runners - so I can accurately determine which diagnostics are  
associated with a particular test failure, and which ones are just  
informative.


Adrian


Re: [OT] TDD + Pair Programming

2006-04-17 Thread Adrian Howard

Hi all,

On 2 Apr 2006, at 01:04, Jeffrey Thalhammer wrote:


I have never actually had an opportunity to practice
this, but I've always felt that the most obvious way
to combine test-driven development with pair
programming was to have one person write test code
while the other person writes application code.
Presumably they might change roles periodically, but
I'm not sure if they would actually work at the same
terminal.  However, I've never heard anyone
explicitly advocate for this approach.  Is this
actually happening and I'm just not aware of it?  Or
is there some obstacle to this approach that I haven't
considered?


Very belated response. Using Easter as an opportunity to catch up on  
my huge e-mail mountain :-)


Just to throw a contrary opinion into the mix: I've found this to be a
very effective technique. So have other people. Google around for
"ping pong development". See
http://www.redsquirrel.com/blog/archives/0170.html for example. Making
the test pass/fail a competition between the pair and changing the
driver regularly seem to be the points that make it work well.


Absolutely do it at the same terminal though if at all possible.  
Remote pairing is nowhere near as effective as co-located pairing.


Cheers,

Adrian




Re: Set binmode on T::B's File Handles?

2006-01-09 Thread Adrian Howard


On 9 Jan 2006, at 05:03, David Wheeler wrote:
[snip]
Is there any way to get Test::Builder to set an I/O layer on its  
file handles?

[snip]

Y'want Test::Builder's failure_output(), e.g.:

use Test::More tests => 1;
binmode Test::More->builder->failure_output, ':utf8';
diag "\x{201c}";
ok 1;
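
If you want everything in UTF-8, not just the failure diagnostics, the same trick works on Test::Builder's other handles:

my $builder = Test::More->builder;
binmode $builder->output,         ':utf8';   # the ok/not ok lines
binmode $builder->failure_output, ':utf8';   # diag() output
binmode $builder->todo_output,    ':utf8';   # diagnostics inside TODO blocks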

Cheers,

Adrian



Re: How to mangle system time for testing

2005-12-28 Thread Adrian Howard


On 28 Dec 2005, at 16:36, Javier Amor Garcia wrote:


Hello,
  I am testing a module for a web application and I need to test the
expiration of sessions. The problem is that I cannot modify the
expiration time and I do not want to make the test sleep for the full
length of the expiration time (an hour).

[snip]
Can anyone give me pointers or advice about how to perform this type of
test?


What I would do would be to isolate all the bits of code that poke at  
real time functions. For example I could imagine the only place where  
I actually call time being:


sub is_expired {
    my $self = shift;
    return ( $self->time_created + $self->session_length ) <= time;
}

then I can test everything except is_expired() by simple symbol table  
munging:


{
    local *MySessionClass::is_expired = sub { return 1 };
    ... test stuff that assumes session expired ...
}

{
    local *MySessionClass::is_expired = sub { return };
    ... test stuff that assumes session valid ...
}

and I can test is_expired() by overriding time() explicitly - with the
rather useful Test::MockTime:


use Test::More tests => 4;
use Test::MockTime;

BEGIN { use_ok 'MySessionClass' };

my $s = MySessionClass->new( session_length => 10, time_created => 1234 );

Test::MockTime::set_fixed_time( 1234 );
ok( ! $s->is_expired, 'not expired at creation time' );

Test::MockTime::set_fixed_time( 1243 );
ok( ! $s->is_expired, 'not expired on session_length seconds' );

Test::MockTime::set_fixed_time( 1244 );
ok( $s->is_expired, 'is expired after session_length seconds' );

Hope this helps.

Adrian


Re: Sub::Uplevel

2005-09-09 Thread Adrian Howard


On 9 Sep 2005, at 21:55, David Golden wrote:

At least one of the culprits may be Test::Exception, for any  
version before 0.20.  The problem is that CPANPLUS doesn't  
currently play well with Module::Build and doesn't respect the  
build_requires parameter, but only looks at the requires  
parameter.  So you'll get unexpected failures for those using  
CPANPLUS and defaulting to Build.PL.

[snip]

Although CPANPLUS should support build_requires - the mistake this  
time was mine since Sub::Uplevel should have been in requires to  
start with.
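
i.e. the Build.PL wanted something along these lines (the exact dependency list and version numbers here are illustrative, not Test::Exception's real ones):

use Module::Build;

Module::Build->new(
    module_name    => 'Test::Exception',
    requires       => {
        'Sub::Uplevel' => 0,    # needed at runtime, not just while building
    },
    build_requires => {
        'Test::More'   => 0,    # only needed to run the test suite
    },
)->create_build_script;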


Adrian



Re: GC API from discussion

2005-08-16 Thread Adrian Howard

On 16 Aug 2005, at 18:14, Yuval Kogman wrote:

On Mon, Aug 15, 2005 at 15:59:34 +0100, Adrian Howard wrote:

I'm not sure what you're proposing here. A separate arena for
stuff  you want to allocate and not be moved by the GC? How would
I tell the  compiler?


You won't, the language glue is responsible for setting that up for
you, and it does that by assuming it's always there, and the
compiler simply optimizes the cases where it's never going to be
needed.


Sorry - I don't understand. If I do:

call_to_external_c_library_foo( $foo );
call_to_external_c_library_bar( $bar );

Then how does the compiler know that $foo is only used temporarily  
and can be moved around, while $bar is cached in the external library  
and shouldn't be moved by any heap de-fragmentation that the garbage  
collector might do?



How about

 do : GC::priority(:new) {
 # only GC things allocated during the lifetime
 # of the block
 }


I think that's not priority but limit or scoped or
origin(:local) or something like that...

[reasonable sounding terms snipped]

Sounds nice :-)


Actually, since to my naive eyes it looks like the GC is a first
class object the problem can probably be solved better by adding
your own.


Well, as I see it the GC is a subobject of the runtime. The amount
of control that this object can give you can be checked using the
strong support for reflection that perl 6 will have, or by simply
asking the runtime to switch GC (if it lets you do that).


Nice.

Thanks for answering my mad ramblings :-)

Cheers,

Adrian


Re: GC API from discussion

2005-08-15 Thread Adrian Howard

On 15 Aug 2005, at 02:13, David Formosa ((aka ? the Platypus)) wrote:


After a very fruitful discussion I've rewritten my suggested GC API.
Comments please.

[snip]

I'm speaking from complete ignorance since I've only been vaguely  
following the subject... but four additional things that strike me as  
useful (because I found them so in Pop-11 when I used it) would be:


1) Some way of declaring objects as being fixed so we can pass them  
to external code without having to worry about the GC moving them  
around.


2) Some way of being able to tell the garbage collector to ignore the  
current contents of the heap for the purposes of GC. One Pop-11 idiom  
was to do something like:


;;; create a whole bunch of complicated self referencing
;;; objects that we know are going to persist over time

sys_garbage();    ;;; run the garbage collector
sys_lock_heap();  ;;; lock stuff currently in the heap

;;; do lots of stuff that now runs quicker since the GC doesn't
;;; have to worry about marking the objects that we know are
;;; not going away

sys_unlock_heap();  ;;; give the GC full rein again


3) Some way of marking structures/fields so their reference doesn't
count. Weakrefs basically (there's a Perl 5 sketch of the idea just
after this list).


4) Hooks to run code before/after GC. Occasionally very useful. (e.g.  
with the gc hooks and heap locking/unlocking you could implement your  
own ephemeral GC system in Pop-11).
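
(For anybody who hasn't bumped into weakrefs, the Perl 5 version of (3) is Scalar::Util::weaken - roughly:)

use Scalar::Util qw( weaken );

my $node   = { name  => 'child' };
my $parent = { child => $node   };
$node->{parent} = $parent;   # back-reference creates a cycle
weaken $node->{parent};      # ...but a weak one, so it doesn't keep $parent alive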


Hopefully this makes some vague sort of sense.

Cheers,

Adrian



Re: GC API from discussion

2005-08-15 Thread Adrian Howard

On 15 Aug 2005, at 13:17, Yuval Kogman wrote:

On Mon, Aug 15, 2005 at 12:40:05 +0100, Adrian Howard wrote:

[snip]
1) Some way of declaring objects as being fixed so we can pass  
them  to external code without having to worry about the GC moving  
them  around.


A handle to an object should always be fixed, I would think... Even
under a copying mechanism, you can have an arena for handles, and an
arena for the actual data which is actually collected, and points
back to the data.

Optimized access (auto unboxing, various inferrencing by the
compiler) could be made such that it doesn't go through the handle
unless absolutely necessary.


I'm not sure what you're proposing here. A separate arena for stuff  
you want to allocate and not be moved by the GC? How would I tell the  
compiler?


2) Some way of being able to tell the garbage collector to ignore  
the  current contents of the heap for the purposes of GC. One  
Pop-11 idiom  was to do something like:


 ;;; create a whole bunch of complicated self referencing
 ;;; objects that we know are going to persist over time

 sys_garbage();    ;;; run the garbage collector
 sys_lock_heap();  ;;; lock stuff currently in the heap

 ;;; do lots of stuff that now runs quicker since the GC doesn't
 ;;; have to worry about marking the objects that we know are
 ;;; not going away

 sys_unlock_heap();  ;;; give the GC full rein again



We are trying to design a requirement based interface, so that the
GC can be changed, but behavior remains consistent.

This should be more like

[interesting options snipped]

How about

do : GC::priority(:new) {
# only GC things allocated during the lifetime
# of the block
}

?

[snip]

do :GC::nodelay {

[snip]

do :GC::nodestroy {

[snip]

no_delay and no_destroy please (I spent a minute trying to figure out  
what a node lay was :-)


[snip]
4) Hooks to run code before/after GC. Occasionally very useful.  
(e.g.  with the gc hooks and heap locking/unlocking you could  
implement your  own ephemeral GC system in Pop-11).


This is possibly done by introspecting
$*RUNTIME.Memory.GarbageCollector, and seeing if it supports events.

[snip]

Actually, since to my naive eyes it looks like the GC is a first
class object the problem can probably be solved better by adding your
own.


Adrian


Sébastien

2005-08-15 Thread Adrian Howard


On 15 Aug 2005, at 17:12, Yitzchak Scott-Thoennes wrote:
[snip]

The throws_ok { ... } syntax only works because the throws_ok sub exists
and has a prototype that specifies a subref is expected; if you don't
load Test::Exception by the time the throws_ok call is compiled, it
is parsed as an indirect object call of the throws_ok method on the
object or class returned by the {} block:

$ perl -MO=Deparse,-p -we'throws_ok { Net::Pcap::lookupdev() } "/^Usage: Net::Pcap::lookupdev\(err\)/", "calling lookupdev() with no argument"'

BEGIN { $^W = 1; }
do {
    Net::Pcap::lookupdev()
}->throws_ok('/^Usage: Net::Pcap::lookupdev(err)/', 'calling lookupdev() with no argument');
-e syntax OK

which is perfectly valid perl, but unlikely to do what you want.


I love it when other people answer the bug reports first :-) Thanks.

Sébastien - You can fix the problem by either wrapping the  
provisional load of T::E in a BEGIN block like this:


  BEGIN {
      eval "use Test::Exception";
      plan skip_all => "Test::Exception needed" if $@;
  }

  # ... tests that need T::E here ...

Or, alternatively, use non-prototyped calls to T::E, for example:

  throws_ok( sub { $dev = Net::Pcap::lookupdev(undef) },
      '/^arg1 not a reference/', "calling lookupdev() with no reference" );


Cheers,

Adrian




Re: Test::Harness::Straps - changes?

2005-07-31 Thread Adrian Howard


On 30 Jul 2005, at 17:19, chromatic wrote:


(BTW chromatic: I'm curious why you didn't break todo tests into
separate passing/failing classes as you did the normal test?)


TAP doesn't, so I didn't see any reason to do it.


Well, I don't really see that TAP separates pass/fail todo tests any
less than it separates pass/fail normal tests:


ok 1
not ok 2
ok 3 # TODO
not ok 4 # TODO

so if you're splitting one up it seems sensible to split both

Now that you mention it, reporting unexpected successes might be
worthwhile -- but then again, Test::Harness::Straps reports that as
a bonus in the summary anyway.

I can't think of anything useful to do with it, but if there is
something, I'm happy to make that separation.


For me it would be useful since my normal view of test results
separates them into three groups:

1) Expected behaviour (passing tests, failing todo tests)
2) Stuff I need for information (skipped tests, just in case they shouldn't be)
3) Unexpected behaviour (failing tests, passing todo tests)
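
In code terms the split I'm after is roughly this (a sketch only - the result hash keys are made up):

# sketch: classify one parsed test result into the three groups above
sub classify {
    my ($result) = @_;    # e.g. { ok => 1, todo => 0, skip => 0 }
    return 'information' if $result->{skip};                 # group 2
    if ( $result->{todo} ) {
        return $result->{ok} ? 'unexpected' : 'expected';    # a passing TODO is a surprise
    }
    return $result->{ok} ? 'expected' : 'unexpected';        # groups 1 and 3
}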

Not being able to split passing/failing todo tests with polymorphism  
seems odd.


In fact, in Perl 6, could I separate (1) and (3) by adding
expected/unexpected roles/traits/whatever?


Adrian



Re: Test::Builder::STDOUT ?

2005-07-30 Thread Adrian Howard


On 30 Jul 2005, at 00:00, Michael G Schwern wrote:
[snip]

Perhaps you misunderstand.


I did


  I mean to put that BEGIN { *STDERR = *STDOUT }
in the test script.  foo.t never prints to STDERR.


Doh. I would have to put it in a module so I could shim it in with
HARNESS_PERL_SWITCHES, but yes, that would have been simpler.


However, analyse() rather than analyse_file() was the right way in this
instance, I think.


Adrian



Re: Test::Harness::Straps - changes?

2005-07-30 Thread Adrian Howard

On 30 Jul 2005, at 01:05, Andy Lester wrote:
On Fri, Jul 29, 2005 at 03:57:07PM -0700, Michael G Schwern  
([EMAIL PROTECTED]) wrote:
This is, IMHO, the wrong place to do it.  The test should not be responsible
for decorating results, Test::Harness should be.  It means you can decorate
ANY test, not just those that happen to use Test::Builder.


This also coincides with the premise that Test::Harness::Straps is just
parsing TAP from any given source.


I took chromatic to mean that he'd like the test harness to do the  
decorating... so you could do something along the lines of:



{   package GrowlingFailure;
    use base qw( Test::Harness::Test::Failure );
    use Mac::Growl::Testing;

    sub action {
        my $self = shift;
        Mac::Growl::Testing->failed(
            title       => $self->name,
            description => $self->diagnostics,
        );
        $self->SUPER::action;
    }
}
{   package GrowlingTodo;
    use base qw( Test::Harness::Test::Todo );
    use Mac::Growl::Testing;

    sub action {
        my $self = shift;
        Mac::Growl::Testing->unexpected_pass(
            description => $self->diagnostics,
        ) if $self->actually_passed;
        $self->SUPER::action;
    }
}
{   package GrowlingTestHarness;
    use base qw( Test::Harness );

    sub failure_test_class { 'GrowlingFailure' }
    sub todo_test_class    { 'GrowlingTodo'    }
}

# show me pretty dialogs for test failures
GrowlingTestHarness->new->analyse_files( @ARGV );


So we have the underlying test harness producing different classes  
for each variety of test.


(BTW chromatic: I'm curious why you didn't break todo tests into  
separate passing/failing classes as you did the normal test?)


Adrian



Re: Test::Builder::Module

2005-07-29 Thread Adrian Howard


On 29 Jul 2005, at 11:31, Michael G Schwern wrote:


I've just implemented the oft requested Test::Builder::Module.  It's a
superclass for all Test::Builder based modules that implements an import()
method to match what Test::More does and a builder() method to get the
Test::Builder object.


Nice.

[snip]
Calling builder() is safer than Test::Builder->new as it is forward
compatible for a day when each module will be able to have its own
Test::Builder object rather than the strict singleton it is now.

[snip]

In that case should we be encouraging people to write

sub ok ($;$) {
    Test::Simple->builder->ok(@_);
}

instead of using a package lexical, in case people want to swap  
modules at runtime?
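
For reference, a minimal module on top of the new superclass would look something like this, at least as I read the docs (local_ok() is just an example name):

package Test::MyModule;
use strict;
use warnings;
use base 'Test::Builder::Module';

our @EXPORT = qw( local_ok );
my $CLASS = __PACKAGE__;

sub local_ok ($;$) {
    my ( $test, $name ) = @_;
    my $tb = $CLASS->builder;     # rather than a package lexical or Test::Builder->new
    return $tb->ok( $test, $name );
}

1;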


[snip]
What scaffolding do module authors find themselves implementing?   
import() and builder() is all I can think of.

[snip]

Can't think of anything else that would belong in a base class.

Adrian


Re: Test::Builder::STDOUT ?

2005-07-29 Thread Adrian Howard

On 29 Jul 2005, at 06:07, Michael G Schwern wrote:

BEGIN { *STDERR = *STDOUT }

That'll handle anything, Test::Builder or not.


Nope. T::H::S turns

analyse_file( 'foo.t' )

into something like

open( FILE, "/usr/bin/perl foo.t |" )

so the test script will get its STDERR disassociated from the piped
STDOUT.


Adrian



Re: Test::Builder::STDOUT ?

2005-07-29 Thread Adrian Howard


On 29 Jul 2005, at 02:58, chromatic wrote:

If you can use your own test harness, use
Test::Harness::Straps::analyze() or Test::Harness::Straps::analyze_fh()
to collect STDERR and STDOUT from the tested process.


Ah. That would make sense. Much more sensible. Off to play.

Adrian



Test::Builder::STDOUT ?

2005-07-28 Thread Adrian Howard
I've been pondering custom test runners recently and have hit the  
familiar problem of Test::Harness::Straps not capturing STDERR, so  
missing the diagnostics that Test::Builder outputs.


A moderately evil solution occurred, and I now have a  
Test::Builder::STDOUT on my box that just does:


use Test::Builder;
*Test::Builder::failure_output = \&Test::Builder::output;

redirecting all T::B diagnostic messages to STDOUT. Now I can add:

$ENV{ HARNESS_PERL_SWITCHES } = '-MTest::Builder::STDOUT';

to my test runner and all my diagnostics end up in the results hash  
that Test::Harness::Straps::analyze_file returns.
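
The runner end of the hack is then just something like this (paraphrasing - 't/foo.t' stands in for a real test script):

use Test::Harness::Straps;

$ENV{HARNESS_PERL_SWITCHES} = '-MTest::Builder::STDOUT';

my $strap   = Test::Harness::Straps->new;
my %results = $strap->analyze_file('t/foo.t');   # diagnostics now turn up in here too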


Obviously won't work if your test scripts aren't written using T::B,  
but that's not a problem I hit very often.


Greasy hack? Obviously - but it seems to do the job. I'm tempted to  
throw it at CPAN :-)


Is there a better way I'm missing?

Cheers,

Adrian



Re: Test harnesses?

2005-07-26 Thread Adrian Howard

On 25 Jul 2005, at 22:29, Peter Kay wrote:

http://qa.perl.org/test-modules.html has a bunch of test modules  
listed.


However, there are no harnesses listed.  I know Test::Harness, and I'm
going to go read about Test::Builder, but what other meta-testing
modules are there?

[snip]

All depends on your definition of harness I guess :-) Is  
Apache::Test one? Is Test::Class? Test::Base? Test::LectroTest?  
Test::Inline?


One of the things that makes Perl's standard testing framework  
interesting is that everything is so decoupled. As long as something  
talks TAP you can plug it in. As well as Test::Harness, you might  
want to look at:


Test::Harness::Straps
Test::TAP::HTMLMatrix
Test::TAP::Model

Apart from the TAPish stuff the only other Perl testing framework
that seemed to get any traction at all was the JUnit based
Test::Unit, which has its own test runners as well as being able to
output TAP. Seems dead in the water now though.


The only other thing that occurs to me is FIT frameworks, of which Perl
has two:


Test::FIT
Test::C2FIT

Neither seems to have really caught on. People seem to prefer to grow  
domain-specific languages in Perl based on Test::Builder instead.


Cheers,

Adrian



Re: Need to talk to an EU patent attorney

2005-07-12 Thread Adrian Howard

On 12 Jul 2005, at 22:00, Michael G Schwern wrote:
Barbie's journal, via Ovid, made me aware of patent EP1170667 "Software
Package Verification" granted last month in the EU.
http://gauss.ffii.org/PatentView/EP1170667

[snip]

Oh for f**k's sake :-(

Don't know any patent lawyers myself, but it might be worth dropping  
a line to one or more of:


http://www.nosoftwarepatents.com/en/m/about/contact.html
http://fsfeurope.org/

The FSFE in particular have been campaigning hard in Europe so should  
hopefully have some decent contacts.


I knew I should have been a train driver *sigh*

Adrian


Re: Need to talk to an EU patent attorney

2005-07-12 Thread Adrian Howard


On 12 Jul 2005, at 23:07, Adrian Howard wrote:
[snip]
Don't know any patent lawyers myself, but it might be worth  
dropping a line to one or more of:


http://www.nosoftwarepatents.com/en/m/about/contact.html
http://fsfeurope.org/

[snip]

http://www.eurolinux.org/ also might be worth asking.

Adrian


Re: AnnoCPAN and a wiki POD idea

2005-07-09 Thread Adrian Howard


On 8 Jul 2005, at 20:08, Adam Kennedy wrote:
[snip]
There's no way to get a listing of the annotations for a given  
author id, or even for a given dist. So I'm reduced to manually  
looking through a thousand odd web pages to find potential changes  
or improvements to the code.

[snip]

http://www.annocpan.org/~ADAMK/

Complete with RSS feed :-)

Adrian



Re: ANN: JavaScript Test.Simple 0.10

2005-06-24 Thread Adrian Howard


On 24 Jun 2005, at 06:27, David Wheeler wrote:
[snip]

See Test.Harness.Browser in action here:

  http://www.justatheory.com/code/Test.Simple-0.10/tests/index.html
  http://www.justatheory.com/code/Test.Simple-0.10/tests/index.html?verbose=1


Sweet!

It probably says something quite sad about my personality that this
is the most persuasive argument I personally have now for switching
to Firefox from Safari :-)


Adrian


Re: Module::Build::TestReporter 1.00 Preview

2005-06-02 Thread Adrian Howard


On 30 May 2005, at 22:23, chromatic wrote:
[snip]

I'd love to have feedback before I release it to the CPAN in a week or
so.

[snip]

Getting some test failures on vanilla OS X 10.4.1 (see below). Not  
got time to dig into causes at the moment.


Looks nice though. Like the roles stuff.

Adrian


% ./Build test verbose=1
t/base1..35
ok 1 - use Module::Build::TestReporter;
ok 2 - Module::Build::TestReporter-can('new')
ok 3 - The object isa Module::Build
ok 4 - The object isa Module::Build::TestReporter
ok 5 - new() should set report_file to test_failures.txt by default
ok 6 - ... but should set if it passed
ok 7 - Module::Build::TestReporter-can('ACTION_test')
ok 8 - ACTION_test() should not write to selected fh
ok 9 - ... calling SUPER with args
ok 10 - ... and should restore selected fh
ok 11 - Module::Build::TestReporter-can('find_test_files')
ok 12 - find_test_files() should return empty arrayref
ok 13 - ... writing no output by default
ok 14 - ... reporting failures
ok 15 - ... having cleared out any existing failures
# Failed test (t/base.t at line 95)
# Structures begin differing at:
#  $got->{failures}[0]{diagnostics} = 'Failed test (fake_tests/fail.t at line 9)
#  got: 'foo'
# expected: 'bar'
# '
# $expected->{failures}[0]{diagnostics} = '
# Failed test (fake_tests/fail.t at line 9)
#  got: 'foo'
# expected: 'bar'
# '
ok 16 - ... writing no output by default
ok 17 - ... yet still reporting failures
ok 18 - Module::Build::TestReporter-can('save_failure_details')
ok 19 - save_failure_details() should save results of all failures
not ok 20 - ... saving failure information
ok 21 - Module::Build::TestReporter-can('report_failures')
ok 22 - report_failures() should report success with no failures
ok 23 - report_failures() should write a full report for all failed tests

ok 24 - ... with test failure information
ok 25 - ... and the full -V information of this perl
ok 26 - ... and a failure report
ok 27 - ... with failure details
ok 28 - Module::Build::TestReporter-can('write_report')
ok 29 - write_report() should write its report
ok 30 - ... from the report passed
ok 31 - ... throwing an exception if it cannot write test data
ok 32 - Module::Build::TestReporter-can('write_failure_results')
ok 33 - write_failure_results() should only warn of failure without contact

ok 34 - ... or giving e-mail directions with a contact
ok 35 - ... adding the report in verbose mode
# Looks like you failed 1 tests of 35.
dubious
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 20
Failed 1/35 tests, 97.14% okay
t/inherit.1..10
ok 1 - use Module::Build::TestReporter;
ok 2 - The object isa My::Build
ok 3 - The object isa Module::Build
ok 4 - role application should work
ok 5 - My::Build-can('new')
ok 6 - My::Build-can('ACTION_test')
ok 7 - My::Build-can('find_test_files')
ok 8 - My::Build-can('save_failure_details')
ok 9 - My::Build-can('report_failures')
ok 10 - My::Build-can('write_report')
ok
t/overrideok 1 - use Module::Build::TestReporter;
1..1
ok
Failed Test Stat Wstat Total Fail  Failed  List of Failed
-------------------------------------------------------------------------------
t/base.t       1   256    35    1   2.86%  20
Failed 1/3 test scripts, 66.67% okay. 1/46 subtests failed, 97.83% okay.

% perl -V
Summary of my perl5 (revision 5 version 8 subversion 6) configuration:
  Platform:
    osname=darwin, osvers=8.0, archname=darwin-thread-multi-2level
    uname='darwin b28.apple.com 8.0 darwin kernel version 7.5.0: thu mar 3 18:48:46 pst 2005; root:xnuxnu-517.99.13.obj~1release_ppc power macintosh powerpc '
    config_args='-ds -e -Dprefix=/usr -Dccflags=-g  -pipe  -Dldflags= -Dman3ext=3pm -Duseithreads -Duseshrplib'
    hint=recommended, useposix=true, d_sigaction=define
    usethreads=define use5005threads=undef useithreads=define usemultiplicity=define
    useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
    use64bitint=undef use64bitall=undef uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='cc', ccflags ='-g -pipe -fno-common -DPERL_DARWIN -no-cpp-precomp -fno-strict-aliasing -I/usr/local/include',
    optimize='-Os',
    cppflags='-no-cpp-precomp -g -pipe -fno-common -DPERL_DARWIN -no-cpp-precomp -fno-strict-aliasing -I/usr/local/include'
    ccversion='', gccversion='3.3 20030304 (Apple Computer, Inc. build 1809)', gccosandvers=''
    intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=4321
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=8
    ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='env MACOSX_DEPLOYMENT_TARGET=10.3 cc', ldflags ='-L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib
    libs=-ldbm -ldl -lm -lc
    perllibs=-ldl -lm -lc
    libc=/usr/lib/libc.dylib, so=dylib,

Re: Module::Build::TestReporter 1.00 Preview

2005-06-02 Thread Adrian Howard


On 2 Jun 2005, at 10:01, Michael G Schwern wrote:
[snip]

Test::More 0.48_02 introduced a change where it would put a leading
newline before its diagnostics when running under Test::Harness.   
Looks

like the test expected that.  You're probably running Test::More 0.47.

[snip]

Bah. I knew that. Quite right - apologies for dimness.

Adrian



Re: Test::Object

2005-05-31 Thread Adrian Howard


On 31 May 2005, at 09:47, Adam Kennedy wrote:
[snip]

Something exist already that I'm missing?

[snip]

I'd use Test::Class (but I would say that :-) So the example from  
your POD would be something like:



{   package Foo::Test;
    use base qw( Test::Class );
    use Test::More;

    # we take the ::Test suffix off to get the name of the class we're testing
    # (this should really be in a Test::Class base class - and will be soon)

    sub class_under_test {
        my $self = shift;
        my $test_class = ref $self;
        $test_class =~ s/::Test$//s;
        return $test_class;
    };

    # here is where we create our object under test
    sub create_fixture : Test( setup ) {
        my $self = shift;
        $self->{object} = $self->class_under_test->new;
    };

    # here we make sure foo returns true
    sub foo_returns_correct_value : Test {
        my $object = shift->{object};
        ok( $object->foo );
    };

    # here we check answer returns 42 (just for the sake of another test)
    sub the_answer_to_live_the_universe_and_everything : Test {
        my $object = shift->{object};
        is( $object->answer, 42 );
    };
}

{   package FooBar::Test;
    use base qw( Foo::Test );
    use Test::More;

    # we just have to say what's different about FooBar objects, the
    # common behaviour stays the same
    sub foo_returns_correct_value : Test {
        my $object = shift->{object};
        is( $object->foo, 'bar' );
    };
}

Test::Class->runtests;


which would give us:

# FooBar::Test-foo_returns_correct_value
ok 1 - foo returns correct value
#
# FooBar::Test-the_answer_to_live_the_universe_and_everything
ok 2 - the answer to live the universe and everything
#
# Foo::Test-foo_returns_correct_value
ok 3 - foo returns correct value
#
# Foo::Test-the_answer_to_live_the_universe_and_everything
ok 4 - the answer to live the universe and everything

Cheers,

Adrian




Re: RFC - Class::Agreement

2005-05-27 Thread Adrian Howard

On 27 May 2005, at 16:21, Ian Langworth wrote:

[snip]

When you say automatic, I think of source filtering. Do you simply
mean an alias for the first argument? If so, I think it's best to
leave that up to the programmer. You can always use shift.


Fair enough. I just hate having to duplicate arguments in the
pre/post/method blocks.



-I'd want a global way of switching off contracts without having
to change the code. $ENV{ClassAgreementDisabled} = 1 or something.



Arg! This is a biggie. If used properly, you shouldn't need, let alone
*want*, to turn contracts off. Class::Agreement doesn't do any deep
cloning like Class::Contract. Class::Agreement's contracts should be
nearly as light as putting die unless in your methods.

[snip]

Depends. I've seen some darn complex contracts in my time that have
significant runtime costs. When you have postconditions like 'must
return the same result as the old, hideously inefficient system' or
'identical result to a simpler, but slow, algorithm', keeping the
contracts running can have a massive cost.


[snip]

Thanks, Adrian. This is much appreciated.


Y'welcome :-)

Adrian



Re: RFC - Class::Agreement

2005-05-27 Thread Adrian Howard


On 27 May 2005, at 18:25, Ovid wrote:


--- Ian Langworth [EMAIL PROTECTED] wrote:


Reflecting upon this, I'm not even sure why I'd want argument
modification as a feature. (Maybe I still had Hook::LexWrap on the
brain.) I might just take this out.



I vote for taking it out.  I view contracts to be similar to  
exceptions
in one respect:  when implemented properly, removing them from the  
code

should not affect the normal operation of the code (sweeping a few
things under the rug there).  Thus, argument modification is a no-no.
Some might argue against the bondage and discipline, but they're
probably not going to be using Class::Agreement anyway :)


100% agreement. I can't think of a single scenario where argument  
modification would be a good thing for contracts. AOP maybe. DbC nope.


Adrian



Re: RFC - Class::Agreement

2005-05-26 Thread Adrian Howard

On 23 May 2005, at 15:33, Ian Langworth wrote:


I'm working on a new module, Class::Agreement, and I've started by
writing the documentation. If anyone has a few minutes, I'd like some
feedback as to whether my descriptions of the concepts make sense and
if you like the syntax.

  HTML: http://reliant.langworth.com/~ian/Class-Agreement.html

[snip]

Nice. Random comments/niggles.

- It makes sense to me, but I've done DBC and used Eiffel in the
past. I'm not entirely sure it would make sense to somebody who
doesn't already know what DBC is. A full working example that does
something vaguely useful would go a long way to illustrating the
concepts to the newbie. Say a simple stack/queue class.


- A quick mapping onto Eiffel constructs might be nice. I'd
imagine a lot of the people interested in DBC in Perl would have
Eiffel experience. Also a lot of the stuff written about DBC uses
Eiffel - so a quick summary might be useful for Perl folk looking for
more info elsewhere.


- No class invariants?

- You do mention that tweaking @_ in the pre/post blocks will
affect the @_ passed to the method. You don't say that having
pre/posts that have side effects is evil. You probably should :-)


- s/sub g = sub {/sub g {/

- It's not immediately obvious to me from reading the docs that
doing:


{   package SomeSubclass;
    use base 'SomeClass'; # from your example
    sub choose_word { return -1 };
}

would fail since choose_word should still be bound by the
SomeClass precondition. I'm assuming you're doing something clever
with INIT blocks or something so this does work. What you need to do
with subclasses that have the same contract (if anything) needs to be
made explicit in the docs.

- On "How can I type less?": I'm curious as to whether you
considered adding an automatic $self to match the $r/@r?


- I'd want a global way of switching off contracts without having
to change the code. $ENV{ClassAgreementDisabled} = 1 or something.


- I had to read "What do you mean, ``There's a problem with the
heirarchy?''" three times. More paragraphs and an example for the
slow of thinking like me please :-)


Cheers,

Adrian



Re: ANN: JavaScript TestSimple 0.03

2005-05-05 Thread Adrian Howard
On 4 May 2005, at 01:14, David Wheeler wrote:
On May 3, 2005, at 14:27 , Joe McMahon wrote:
Here's a weird idea: how about the option of AJAXing the test harness 
results back to a receiving server somewhere that understands TAP? 
Bingo: TAP testing of JS embedded in web pages in its native habitat.
That's just evil. Maybe when Schwern or whoever had the idea gets 
networked TAP going, I'll just send the data there. :-)
That's pretty much what I did when I hacked JSUnit to output TAP. 
Worked quite nicely.

Adrian


Re: Test::Builder change BAILOUT - BAIL_OUT

2005-05-05 Thread Adrian Howard
On 3 May 2005, at 23:36, Michael G Schwern wrote:
Test::Simple/More/Builder 0.61 will introduce a change to Test::Builder
whereby the BAILOUT() method becomes BAIL_OUT().  Additionally 
Test::More
finally features a BAIL_OUT() function.
[snip]
Just out of curiosity - any particular reason for the change?
Adrian


Re: ANN: JavaScript TestSimple 0.03

2005-05-05 Thread Adrian Howard
On 5 May 2005, at 18:00, David Wheeler wrote:
On May 5, 2005, at 04:26 , Adrian Howard wrote:
Here's a weird idea: how about the option of AJAXing the test 
harness results back to a receiving server somewhere that 
understands TAP? Bingo: TAP testing of JS embedded in web pages in 
its native habitat.

That's just evil. Maybe when Schwern or whoever had the idea gets 
networked TAP going, I'll just send the data there. :-)

That's pretty much what I did when I hacked JSUnit to output TAP. 
Worked quite nicely.
Do you have some sample code for your TAP server?
Yup. I'll see if I can dig it out next time I'm home. It's currently 
sitting on a linux box sitting behind the sofa a couple of hundred 
miles from my current location :-)

Adrian


Fwd: [agile-testing] ANNOUNCE: New version of Perl port of Fit.

2005-04-28 Thread Adrian Howard
Since it seems to have been announced everywhere but here, I thought  
folks might be interested in this.

Adrian
Begin forwarded message:
From: Tony Byrne [EMAIL PROTECTED]
Date: 28 April 2005 09:52:09 BST
To: [EMAIL PROTECTED], [EMAIL PROTECTED],  
[EMAIL PROTECTED]
Subject: [agile-testing] ANNOUNCE: New version of Perl port of Fit.
Reply-To: [EMAIL PROTECTED]

Folks,
I'm pleased to announce my first release of Test::C2FIT for Perl.
It's available now from CPAN:
http://search.cpan.org/CPAN/authors/id/T/TJ/TJBYRNE/Test-C2FIT-0.01a.tar.gz

and from the 'files' section of the extremeprogramming yahoo group.
I've been working on modernizing Dave W. Smith's original Perl port of
the FIT testing framework.  Test::C2FIT is a direct port of the Java
version to Perl.  It's based on the port that is available from
http://fit.c2.com/files/PerlDownloads/, and is not to be confused with
Test::FIT which is an early and incomplete port available from CPAN.
This version implements changes to the core and new test fixtures  
required
for the port to pass FAT, the current FIT specification.  The biggest
changes are to be found in Parse.pm and Fixture.pm.

I've also taken the opportunity to change the directory layout to make  
the
distribution CPAN and MakeMaker friendly.  This should make
installation a doddle for Perl users.

In spite of these changes the port is still mostly Dave W. Smith's
code.  My changes are minor compared to his original hard work in
making FIT a reality for Perl.
Feel free to play with the new version, I'd love to hear your
feedback.
Regards,
Tony.
--
Tony Byrne


Yahoo! Groups Links
* To visit your group on the web, go to:
http://groups.yahoo.com/group/agile-testing/
* To unsubscribe from this group, send an email to:
[EMAIL PROTECTED]
* Your use of Yahoo! Groups is subject to:
http://docs.yahoo.com/info/terms/





Re: [ANNOUNCE] Test::Simple/More/Builder 0.59_01

2005-04-27 Thread Adrian Howard
On 27 Apr 2005, at 06:03, Michael G Schwern wrote:
[snip]
This finally allows one to create a second Test::Builder object via
Test::Builder-create.  Authors of modules which test testing modules 
may
now rejoice, you can use Test::Builder to test Test::Builder!
Neato!
Adrian


Re: Module and package version numbering

2005-04-19 Thread Adrian Howard
On 19 Apr 2005, at 11:40, David Cantrell wrote:
[snip]
The script that generates it doesn't change.  The data that it mangles 
into a module is the bit that changes.
Can you add a version number to the data?
So I'll take the suggestion of putting YYYYMMDD into a version number. 
But then wasn't there some issue with CPAN.pm not liking very long 
version numbers, after Randal went and did his silly automagic daily 
module stunt?
I guess you could do something like Y.YYYYMMDD - I can't see that 
causing problems.

If I couldn't add a version number to the data I'd probably rip out the 
revision number of the last change of that file from subversion (or 
whatever your local SCM is).
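
e.g. either of these in the generated module would do (the "1234" is just an example - the second form needs the svn:keywords property to include "Revision" for the file):

# the Y.YYYYMMDD style mentioned above
our $VERSION = '1.20050418';

# or pull the number out of subversion's keyword expansion
our ($VERSION) = ( '$Revision: 1234 $' =~ /(\d+)/ );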

Adrian


Re: Kwalitee and has_test_*

2005-04-18 Thread Adrian Howard
On 17 Apr 2005, at 11:09, Tony Bowden wrote:
On Sun, Apr 17, 2005 at 08:24:01AM +, Smylers wrote:
Negative quality for anybody who includes a literal tab character
anywhere in the distro's source!
Negative quality for anyone whose files appear to have been edited in
emacs!
Ow! Coffee snorted down nose. Ouch.
Adrian


Re: Kwalitee and has_test_*

2005-04-18 Thread Adrian Howard
On 17 Apr 2005, at 13:47, David A. Golden wrote:
[snip]
2) A metric to estimate the quality of a distribution for authors to 
compare their work against a subjective standard in the hopes that 
authors strive to improve their Kwalitee scores.  In this model, 
faking Kwalitee is irrelevant, because even if some authors fake it, 
others will improve quality (as measured by Kwalitee) for real, thus 
making Kwalitee useful as a quality improvement tool.

Actually, in #2, fakers can provide extra competitive pressure, as 
module authors who take Kwalitee seriously perceive a higher standard 
that they should be striving for.

I think most of the Kwalitee debate has been around confusion between 
whether #1 or #2 is the goal, plus what the subjective standard 
should be.
If #2 is the primary goal then one option might be to have a standard 
way of popping the information into the META.yml file? If we're 
assuming honesty on the module authors part...

Adrian


Re: Module and package version numbering

2005-04-18 Thread Adrian Howard
On 18 Apr 2005, at 17:03, David Cantrell wrote:
[snip]
Number::Phone::UK::Data - no version, this is where the .0004 comes 
from
  though.  It has no version number because the
  entire file is generated from a *really* dumb
  script
[snip]
I agree with Schwern that there is no "correct" :-) However, if it were 
me, I would generate a version number along with the module (seeded 
from the version number of the generating script).

Personally I prefer separate version numbers per-module, but some 
people don't. I've yet to read anything /really/ convincing for either 
side - so I'd do whatever you're comfortable with myself.

Cheers,
Adrian


Re: Test::Expect

2005-04-14 Thread Adrian Howard
On 14 Apr 2005, at 11:36, Leon Brocard wrote:
Oh, I forgot to mention to perl-qa that I wrote Test::Expect:
  http://search.cpan.org/dist/Test-Expect/
It's nice. Already used it :-)
Adrian


Re: Test automation with perl.

2005-04-14 Thread Adrian Howard
On 14 Apr 2005, at 08:43, suresh babu wrote:
Hi Experts,
I would like to reiterate my request.
[snip]
Did you not read the replies?
http://www.nntp.perl.org/group/perl.qa/4079
http://www.nntp.perl.org/group/perl.qa/4081
Adrian


Re: TestSimple/More/Builder in JavaScript

2005-04-08 Thread Adrian Howard
On 7 Apr 2005, at 19:23, David Wheeler wrote:
Greetings fellow Perlers,
I'm pleased to announce the first alpha release of my port of 
TestSimple/More/Builder to JavaScript. You can download it from:

  http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz
[snip]
You rock! Excellent stuff. Off to play.
Adrian


Re: TestSimple/More/Builder in JavaScript

2005-04-08 Thread Adrian Howard
On 7 Apr 2005, at 20:27, David Wheeler wrote:
[snip]
Besides, I'm sure that Adrian will soon take my code to port 
Test::Class to JavaScript, and then we can have both approaches! ;-)
I did once hack JSUnit to output TAP - so you never know :-)
Adrian


Re: Talk: Why You Really Want To Write Tests

2005-03-23 Thread Adrian Howard
On 22 Mar 2005, at 19:11, Michael G Schwern wrote:
On Tue, Mar 22, 2005 at 06:28:21PM +, Adrian Howard wrote:
I can't believe you didn't stick a reference to the perl-qa list there
:-)
The audience was not Perl programmers.  Primarily Haskell and Java.  A 
few
people expressed interest in Perl afterwards but mostly in the form of
so why do you use Perl?
[snip]
Ah - sorry. Didn't realise.
In that case [EMAIL PROTECTED] and 
[EMAIL PROTECTED] are a couple of other language-agnostic 
testing lists that I've found very useful.

Cheers,
Adrian


Re: Talk: Why You Really Want To Write Tests

2005-03-22 Thread Adrian Howard
On 4 Mar 2005, at 17:15, Michael G Schwern wrote:
[snip]
There's not nearly enough references, particularly when I expect the 
audience
to go out and work things out on their own.  I still can't think of a 
decent
testing book nor tutorial to recommend.  Test::Tutorial leaves the 
reader
at a dead end without referencing further works on, say, perl.com.  I 
don't
know the JUnit community to recommend anything there.
[snip]
I can't believe you didn't stick a reference to the perl-qa list there 
:-)

My personal list would probably include the following
Online:
http://del.icio.us/tag/perl+testing
-   delicious rocks!
www.testdriven.com
-   General blog/portal/aggregator site on testing.
Mostly TDD. Some Perl occasionally.
http://www.testingeducation.org/BBST/
-   Really excellent online materials on testing - but has
a far bigger scope than just developer written automated
unit tests. For those considering testing as a career
option
Offline:
I'd put these next two in the really great books on testing section.
Lessons Learned in Software Testing: A Context Driven Approach,
Cem Kaner, James Bach, Brett Pettichord
-   Very readable book on software testing in general. A collection
of hundreds of good practices and tips.
Test Driven Development, Kent Beck
-   Everybody should read it. It's thin too :-)
while these are just darn fine
Perl Medic, Peter Scott
-   Has some nice chapters on testing. About the only Perl
book currently out there that does AFAIK.
Test Driven Development: A Practical Guide, Dave Astels
-   Nice intro to TDD. Covers various xUnit frameworks in
several languages (not Perl unfortunately)
Pragmatic Unit Testing In Java with JUnit, Andy Hunt, Dave Thomas
-   Mostly JUnit, but well written. As long as you can
read Java you should be able to take useful stuff
away from it.
Cheers,
Adrian
PS O'Reilly will have a small book soon ?


Re: Testing What Was Printed

2005-02-12 Thread Adrian Howard
On 11 Feb 2005, at 19:52, Shawn Sorichetti wrote:
[snip]
I've started working on Test::Output that is based on Schwern's TieOut 
module that comes with Test::More. I'm hoping to have it released on 
CPAN later tonight.

Test::Output is self-contained so that it can be included with other 
modules, and has no prereqs. Right now it provides output_is() (combined 
STDERR, STDOUT), stderr_is(), and stdout_is(), but I plan to add 
_like and _found shortly.
Excellent! I love it when other people do things that I'm too lazy to 
get around to doing myself (see 
http://www.nntp.perl.org/group/perl.qa/1828  
http://www.nntp.perl.org/group/perl.module-authors/1939) :-)

Much better implemented than mine too. Thank you!
Adrian


Re: Test::Unit, ::Class, or ::Inline?

2005-02-08 Thread Adrian Howard
On 7 Feb 2005, at 21:13, Michael G Schwern wrote:
On Mon, Feb 07, 2005 at 03:03:29PM +, Adrian Howard wrote:
Test::Unit, as mentioned by Curtis, has been abandoned.
Has it? I thought that the folk on [EMAIL PROTECTED] had taken
it on ?
http://groups.yahoo.com/group/PerlUnit/ shows some activity on the 
mailing
list.  Its members-only so I joined to see what's going on.  There was 
a
grand total of 24 messages from January to March 2004 until the list
effectively died.  Everything else after that is spam.

I sent out a lifeline post pointing whoever's left here and at 
qa.perl.org.
Until I saw your post I'd forgotten I was actually subscribed to it :-)
Okay. I'll try and adopt it so I can add pointers to T::B/T::H based stuff 
in the docs. People seem to hit Test::Unit when coming to Perl testing 
from other languages, so I think it's probably worth the effort.

Cheers,
Adrian


Re: Test::Unit, ::Class, or ::Inline?

2005-02-07 Thread Adrian Howard
Belated response...
On 26 Jan 2005, at 20:18, Michael G Schwern wrote:
On Mon, Jan 24, 2005 at 04:11:56PM -0500, Ian Langworth wrote:
I'm taking a software development class this semester which will 
involve
writing extensive object-oriented code. My partner and I are trying to
decide whether to use Test::Unit, ::Class, or ::Inline for our test 
scripts.

I can see the advantages of Test::Class in terms of object heirarchy,
but I really like the idea of having my tests right along with the
methods when using Test::Inline. (The latter would be great when
presenting our code to the class.)
Thoughts?
Test::Unit, as mentioned by Curtis, has been abandoned.
Has it? I thought that the folk on [EMAIL PROTECTED] had taken 
it on ?

If it has been abandoned I might adopt it (if only to add a note that 
active development has ceased and add pointers to Test::Builder based 
modules).

[snip]
The important thing to remember is these are all additive.  Its not
either or.  You can safely use Test::Inline and Test::Class together.
You can use them all in addition to traditional .t files.  Use them all
where appropriate.
[snip]
Definitely.
Hell, I wrote T::C and I still start my test scripts with plain 
Test::More until I actually need things like fixtures.

One of the things that makes Perl's testing framework so neat is the 
way you can integrate different testing models/frameworks via 
Test::Builder / TAP / Test::Harness.

Cheers,
Adrian


Re: hello all

2005-02-04 Thread Adrian Howard
On 1 Feb 2005, at 16:30, Shaun Fryer wrote:
[snip]
Hello!
Hello right back at ya :-)
Adrian


Re: Whither the perl-qa wiki ?

2005-02-02 Thread Adrian Howard
On 31 Jan 2005, at 21:18, Michael G Schwern wrote:
On Mon, Jan 31, 2005 at 04:07:04PM -0500, Michael G Schwern wrote:
So I may as well do that now.
Done.  Let me know if that seems like its the right database, there 
were
several to choose from owing to circumstances.
Fantastic - I'll go have a poke around.
Thanks,
Adrian


Whither the perl-qa wiki ?

2005-01-31 Thread Adrian Howard
I've just noticed that the perl-qa wiki linked from http://qa.perl.org/ 
is still toast.

I seem to remember somebody (Andy ?) saying that a 
something-or-other.kwiki.org was in the process of being set up to 
replace it. Is my terrible memory playing its usual tricks or has it 
popped into existence and not been linked to?

Just wondering...
Adrian


Re: Whither the perl-qa wiki ?

2005-01-31 Thread Adrian Howard
My impression was that that was for Phalanx people, rather than perl-qa in 
general ?

Adrian
On 31 Jan 2005, at 18:10, Shawn Carroll wrote:
http://phalanx.kwiki.org/
On Mon, 31 Jan 2005 14:51:22 +, Adrian Howard
[EMAIL PROTECTED] wrote:
I've just noticed that the perl-qa wiki linked from 
http://qa.perl.org/
is still toast.

I seem to remember somebody (Andy ?) saying that a
something-or-other.kwiki.org was in the process of being set up to
replace it. Is my terrible memory playing its usual tricks or has it
popped into existence and not been linked to?
Just wondering...
Adrian



Re: Hello

2005-01-22 Thread Adrian Howard
On 21 Jan 2005, at 17:09, Andy Lester wrote:
On Fri, Jan 21, 2005 at 05:00:09PM +, GlennH ([EMAIL PROTECTED]) 
wrote:
I read about the Phalanx project on the yahoo Agile Testing group and
thought I'd sign up the mailing list and skulk in the background.  
I'm a
Do you have a mention of what was posted?  I'm curious what was said.
Here y'go:
	From: 	  [EMAIL PROTECTED]
	Subject: 	[agile-testing] Article reference: large-scale distributed 
automated testing
	Date: 	21 January 2005 15:27:30 GMT
	To: 	  [EMAIL PROTECTED]
	Reply-To: 	  [EMAIL PROTECTED]

Hi...
Andy Lester, aka Petdance, the author of Perl's highly usable
WWW::Mechanize module, is leading the effort to build a reasonable
automated regression test for the 6000 modules that comprise CPAN, the
Comprehensive Perl Archive Network.  He's written a short and very
interesting piece on the trials and tribulations of doing this,
available from the oreilly.com website or from perl.com:
http://www.perl.com/pub/a/2005/01/13/phalanx.html .
If you read this, you'll run across a fascinating concept unique
to the Perl community of kwalitee.  If you can tolerate a few
vulgarities, Michael Schwern's original announcement is still the best
explanation of kwalitee I know of:
http://www.nntp.perl.org/group/perl.qa/149 .
Interesting stuff.
-Chris
agile-testing is an interesting list BTW. Worth a look in general.
Adrian


Re: Test::Harness with modules that output to STDOUT

2004-08-24 Thread Adrian Howard
On 24 Aug 2004, at 16:04, Peter Kay wrote:
I am attempting to write tests (using whichever Tests::...) for a 
module that will use Test::Harness.  The module outputs to STDOUT (it 
just does).
You might find 
http://www.mail-archive.com/[EMAIL PROTECTED]/msg01690.html of 
interest.

[snip]
So far, I've come up with 2 ideas:
1.  Hack something up to snatch away STDOUT and hope Test::More 
handles it correctly.
[snip]
This is what I normally do. Test::Builder dups the filehandles at 
compile time so it's perfectly safe as long as T::B loads before your 
filehandle munging occurs.
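A rough sketch of what I mean (needs perl 5.8's in-memory filehandles; the
noisy() sub is just a stand-in for whatever your module prints):

    use Test::More tests => 1;   # Test::Builder grabs its own copy of STDOUT here

    sub noisy { print "expected text\n" }   # stand-in for the code that prints

    my $output = do {
        local *STDOUT;
        open STDOUT, '>', \my $buffer or die "can't redirect STDOUT: $!";
        noisy();
        $buffer;
    };

    like $output, qr/expected text/, 'captured what was printed';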

Cheers,
Adrian


Re: Little lost wiki...

2004-08-02 Thread Adrian Howard
On 1 Aug 2004, at 21:46, Andy Lester wrote:
The Perl QA Wiki linked to from http://qa.perl.org/ as 
http://www.pobox.com/~schwern/cgi-bin/perl-qa-wiki.cgi eventually 
ends up as a 403 at 
http://mungus.schwern.org/~schwern/cgi-bin/perl-qa-wiki.cgi.
Probably dead, because Ingy was to set up qa.kwiki.org and 
phalanx.kwiki.org, too, when we were together at OSCON.
Cool. Can we get at the old one to migrate anything useful across?
Adrian


Little lost wiki...

2004-08-01 Thread Adrian Howard
The Perl QA Wiki linked to from http://qa.perl.org/ as 
http://www.pobox.com/~schwern/cgi-bin/perl-qa-wiki.cgi eventually 
ends up as a 403 at 
http://mungus.schwern.org/~schwern/cgi-bin/perl-qa-wiki.cgi.

Dead or just resting?
Adrian


Re: [ANNOUNCE] Test::Simple 0.48_02

2004-07-19 Thread Adrian Howard
On 19 Jul 2004, at 07:25, Michael G Schwern wrote:
[snip]
There's a new feature.  When run under Test::Harness diagnostic output 
will
throw in a leading newline for better readability.
[snip]
Which causes anything testing test diagnostic output with 
Test::Builder::Tester to fall over. Test::Class, Test::Exception & 
Test::Block's test suites now all fail.

Pooh sticks.
My temptation is to say the new behaviour is the right one and patch 
T::B::T and friends?

Adrian


Re: [ANNOUNCE] Test::Simple 0.48_02

2004-07-19 Thread Adrian Howard
On 19 Jul 2004, at 20:30, Mark Fowler wrote:
On Mon, 19 Jul 2004, Adrian Howard wrote:
My temptation is to say the new behaviour is the right one and patch
T::B::T and friends?
The version of TBT in my subversion repository[1] now twiddles the
HARNESS_ACTIVE ENV variable off when it's collecting output for 
testing.
Soper ;-) Thanks. Everything working now.
I'll release it to CPAN after this alpha ships.
Wouldn't it be better to get the new T::B::T out before the new 
Test::Simple distribution? That way people can update dependencies and 
avoid test failures when the new T::S hits CPAN.

Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-28 Thread Adrian Howard
On 26 Jun 2004, at 12:51, Fergal Daly wrote:
On Fri, Jun 25, 2004 at 10:13:52PM +0100, Adrian Howard wrote:
[snip]
What xUnit gives you is a little bit more infrastructure to make these
sorts of task easier.
That's fair enough but that infrastructure is just extra baggage in 
some
cases.
True. The nice thing about Perl's framework is that we can avoid it 
when we don't need it.

Although the extra baggage that people are complaining about is often 
because of the verboseness of a language's OO code rather than xUnit 
itself.

Actually, just after I wrote the email, I realised I had used xUnit 
before, in Delphi. With DUnit, testing a single class takes a 
phenomenal amount of boilerplate code and I guess that's why I'd 
blocked it from my memory :).
I think DUnit would be an example of exactly what I'm talking about. 
For example the following DUnit

unit Project1TestCases;
interface
uses
TestFrameWork;

type
TTestCaseFirst = class(TTestCase)
published
procedure TestFirst;
end;

implementation

procedure TTestCaseFirst.TestFirst;
begin
Check(1 + 1 = 2, 'Catastrophic arithmetic failure!');
end;

initialization
TestFramework.RegisterTest(TTestCaseFirst.Suite);
end.
would be written in Test::Class as:
package Project1TestCases;
use base qw( Test::Class );
use Test::More;

sub catastrophic_arithmetic_failure : Test { is 1+1, 2 };
or, since we don't need any xUnit magic here, as plain old:
use Test::More tests => 1;
is 1+1, 2, 'catastrophic arithmetic failure';
This is why I like Perl!
As you say, we already have a good chunk of xUnit style with 
Test::Harness, with each .t file corresponding somewhat to a suite 
but without the nestability.
You could also compare each .t file to a test method, since the tests 
in different .t files tend to be isolated from each other.

I think the baggage only pays for itself when you end up doing a lot of
inheriting between test classes,
For me the baggage pays off as soon as test isolation becomes a factor. 
Having setup/teardown to help create test fixtures saves me typing. 
YMMV.

Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 24 Jun 2004, at 20:19, Andrew Pimlott wrote:
On Thu, Jun 24, 2004 at 05:08:44PM +0100, Adrian Howard wrote:
Where xUnit wins for me are in the normal places where OO is useful
(abstraction, reuse, revealing intention, etc.).
Since you've thought about this, and obviously don't believe it's OO 
so
it's better, I'd be interested in seeing an example if you have one in
mind.
Off the top of my head.
* I never have to type repetitive tests like
isa_ok Foo->new(), 'Foo'
again because it's handled by a base class that all my test classes 
inherit from (see the sketch after this list).

* I can create units of testing that can be reused multiple times. If I 
have an Iterator interface I can write a test suite for it once and 
reuse it any class that implements the Iterator interface.

* I have conception level available higher than individual tests (in 
T::M land) or asserts (in xUnit land). I can say something like:

sub addition_is_commutative : Test {
is 10 + 5, 15;
is 5 + 10, 15;
};
and talk about addition_is_commutative test as a concept separate from 
the tests/assertions that implement it. I can easily move test methods 
around as I refactor without having to worry about it breaking some 
other part of the test suite.

* The setup/teardown methods provide an infrastructure for creating 
test fixtures and isolating tests, which can often save typing and 
speed everything up considerably.

* Need to check that a class invariant still holds after each test? 
Chuck it in a teardown method.
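To make the first point above concrete, here's the kind of base class I mean
(class names invented for the example - this isn't code from a real project):

    package My::Widget;                         # the class being tested
    sub new { bless {}, shift }

    package My::Test::Base;
    use base 'Test::Class';
    use Test::More;

    sub class_under_test { die 'override me' }  # each subclass says what it tests

    # inherited by every subclass, so the constructor check comes for free
    sub constructor : Test {
        my $class = shift->class_under_test;
        isa_ok $class->new, $class;
    }

    package My::Widget::Test;
    use base 'My::Test::Base';
    sub class_under_test { 'My::Widget' }

    package main;
    My::Widget::Test->runtests;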

Cheers,
Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 24 Jun 2004, at 21:41, Ovid wrote:
[snip]
I also like the thought of inheriting tests, but I know not everyone 
is fond of this idea.  There
was a moderately interesting discussion about this on Perlmonks:
http://www.perlmonks.org/index.pl?node_id=294571
[snip]
Yeah, I meant to contribute to that but never got the spare tuits.
I tend to use them when I have an abstract interface that several 
different classes are implementing. Seems a bad idea to either ignore 
testing functionality that's being changed or waste time reimplementing 
basically the same code.

Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 25 Jun 2004, at 16:10, Andrew Pimlott wrote:
[snip]
I thought the isolation principle that people were talking about is
that before every test, a setup method is called, and after every  
test
a teardown is called, automatically by the test harness.  This
seems to require one method == one test.
It doesn't. I think there's a confusion of vocabulary.
In the Perl world 'test' refers to something like 'is' and 'ok' from  
Test::More and friends. We're interested in the number of successful  
tests.

In the xUnit world these are called assertions, not a tests, and in  
general we're /not/ concerned with the number of assertions that  
succeed.

In the xUnit world a /test/ is a method with one or more assertions  
that checks one particular bit of behaviour in the code. A failed test  
is a method where one assertion failed. A passed test is a method where  
all assertions succeed.

From this perspective it makes sense to abort after the first assertion  
has failed - since the 'test' has failed. Think of it like logical ''  
short circuiting, or maybe like SKIP blocks (there's no point doing the  
other assertions because the thing we're testing has already failed).

Test isolation is the idea each test (not assertion) can run  
independently from every other. This means when we have a failure we  
can quickly focus in on exactly what caused the problem. We can be  
confident that four test failures indicate four separate problems, not 
a cascade of failure within the test suite itself.

There is a school of thought that one-assertion-per-test is a good goal  
to aim for, but not everybody agrees. For some discussion see:

- http://www.testdriven.com/modules/newbb/viewtopic.php?viewmode=flat&topic_id=363&forum=6
- http://www.artima.com/weblogs/viewpost.jsp?thread=35578

Hopefully this makes some vague sort of sense.
Cheers,
Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 24 Jun 2004, at 21:10, Tony Bowden wrote:
On Thu, Jun 24, 2004 at 02:59:30PM -0400, Andrew Pimlott wrote:
I see this more as a limitation than a feature.  It seems to mean that
- You need to use the same setup/teardown for all your tests.
Those that need different things aren't testing the same thing and
should move to a different class.
Yup.
This misunderstanding seems to be a common one. Novice xUnit users 
often think that there should be a single test class for every class 
being tested.

Sometimes this can work; when you don't need test fixtures or where a 
single set of test fixtures can cover all of a class's functionality.

However many situations require multiple classes each with their own 
set of fixtures and test behaviour.

Cheers,
Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 25 Jun 2004, at 20:18, Andy Lester wrote:
Repetition is good.  I feel very strongly that you should be checking
your constructor results in every single test, and checked against
literals, not variables.
I'm not complaining about repetitive tests, and I agree with what you 
said about testing constructor results if it's something that can 
reasonably fail.

I'm complaining about /typing/ repetitive tests. Why the heck should I 
have to type the same code in twice if I can get the computer to do it 
for me.

Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 24 Jun 2004, at 19:59, Andrew Pimlott wrote:
[snip]
- You don't have much control (correct me if I'm wrong) about the order
  of tests, or the relationship between tests, eg you can't say if 
this
  test fails, skip these others.  This is straightforward in
  Test::More's simple procedural style.
[snip]
This is probably due to the same test/assertion confusion - but the 
whole /point/ of xUnit is that the test order shouldn't matter :-)

Test isolation == good!
If I need isolation, why can't I just ask for it directly?
with_test_setup {
run_tests();
}
You can.
But then the test writer/reader has to code/understand the 
with_test_setup() code for each test script, and the structure of each 
of those routines is pretty much the same.

So you might decide to generalise it and have a standard mechanism for 
you to plug in your start up code.

Then you discover that some of your tests need to clean up after 
themselves so you add another slot to run after the tests.

Then you might find that you often have common sets of tests that are 
run in different situations, so it would be nice if you could package 
them up and use them as a unit. So you add some infrastructure to 
support it.

Well done - you've implemented an xUnit framework!
(or rather provided the few elements of an xUnit framework not already 
supplied by Test::Harness, Test::Builder and friends).

These sorts of requirements are exactly what made me build Test::Class 
- adding the bits of xUnit not already in the Perl testing framework.

I didn't start out to write an xUnit framework. What I did was refactor 
several large and slow test suites and the very xUnit-ish Test::Class 
just fell out.

I don't really think of xUnit as a competitor to Perl's normal testing 
infrastructure - it's more of a superset. The fact that it's so easy to 
layer the missing bits of xUnit on top of Perl's standard mechanisms 
shows that.
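By way of illustration, here's the sort of thing that falls out once those
setup/teardown slots exist (a cut-down sketch - the fake DB class is just a
stand-in so the example actually runs):

    package My::Fake::DB;                        # throwaway stand-in for a real fixture
    sub connect    { bless {}, shift }
    sub disconnect { }
    sub store      { my ($self, %kv) = @_; @{$self}{ keys %kv } = values %kv }
    sub fetch      { $_[0]->{ $_[1] } }

    package My::DB::Test;
    use base 'Test::Class';
    use Test::More;

    sub setup : Test(setup) {                    # runs before every test method
        my $self = shift;
        $self->{db} = My::Fake::DB->connect;
    }

    sub teardown : Test(teardown) {              # runs after every test method
        my $self = shift;
        $self->{db}->disconnect;
    }

    sub can_store_and_fetch : Test {
        my $db = shift->{db};
        $db->store( answer => 42 );
        is $db->fetch('answer'), 42;
    }

    package main;
    My::DB::Test->runtests;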

[snip]
Even better would be to put Test::Builder in skip mode, where it 
skips
automatically whenever a test fails:

skip_mode {
ok(something);
is(this, that);
}
sub skip_mode (&) {
    my $test_sub = shift;
    my $old_ok = \&Test::Builder::ok;
    my $test_passed = 1;
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    no warnings;
    local *Test::Builder::ok = sub {
        die unless $test_passed = $old_ok->(@_);
    };
    eval { $test_sub->() };
    die $@ if $@ && $test_passed;
};
:-)
[snip]
Every time I hear about xUnit, I figure there must be something other
than setup and teardown in its favor.  If that's all there is, I'm 
not
sold.
You have to remember that xUnit isn't just setup/teardown routines. 
It's all the rest too - assertions, a test runner, test suites, OO 
reuse framework, etc. Most of which we already have with Test::Harness, 
Test::Builder and friends.

When people speak about the advantages of xUnit, they're mostly talking 
about the advantages of having a standard testing infrastructure.

Cheers,
Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Adrian Howard
On 25 Jun 2004, at 16:51, Fergal Daly wrote:
[snip]
NB: I haven't used xUnit style testing so I could be completely off 
the mark
but some (not all) of these benefits seem to be available in T::M land.
Just so I'm clear - I'm /not/ saying any of this is impossible with 
T::M and friends. That's obviously silly since you can build an xUnit 
framework with Test::Builder and friends.

What xUnit gives you is a little bit more infrastructure to make these 
sorts of task easier.

Off the top of my  head.
* I never have to type repetitive tests like
isa_ok Foo->new(), 'Foo'
again because it's handled by a base class that all my test classes
inherit from.
sub constructor_ok
{
my $class = shift;
isa_ok $class->new, $class;
}
But you still have to call constructor_ok().
I can still put all my common tests in a base test class and, just by 
inheriting from it, get them all run automagically.

* I can create units of testing that can be reused multiple times. If 
I
have an Iterator interface I can write a test suite for it once and
reuse it any class that implements the Iterator interface.
What's stopping you doing this in T::M,
sub test_iterator
{
my $iterator = shift;
# test various things about $iterator.
}
Nothing. But xUnit supplies a little extra bit of magic to mark 'test' 
subroutines so they can be automatically found and run with no effort 
on your part.

* I have a conceptual level available higher than individual tests (in
T::M land) or asserts (in xUnit land). I can say something like:
sub addition_is_commutative : Test {
is 10 + 5, 15;
is 5 + 10, 15;
};
and talk about addition_is_commutative test as a concept separate from
the tests/assertions that implement it. I can easily move test methods
around as I refactor without having to worry about it breaking some
other part of the test suite.
I don't get this. What is the difference between having this as a 
method vs as a sub?
By having it as a test method it gets automatically picked up and run 
by the test environment. I can move the test method to another class 
and, without altering another line, it will be automatically picked up 
and run by the new class.

* The setup/teardown methods provide an infrastructure for creating
test fixtures and isolating tests, which can often save typing and
speed everything up considerably.
* Need to check that a class invariant still holds after each test?
Chuck it in a teardown method.
In T::M land you could put your setup and teardown in modules and 
call them before and after. Then if they're named consistently you 
could automate that at which point, you'd almost have xUnit. So xUnit 
seems to win here for sure,
Exactly! This is all stuff that Test::Builder based modules can do - 
xUnit just sprinkles some convenience fairy dust over everything to 
make doing it easier :-)

The /nice/ thing about Perl's infrastructure is that we can abandon the 
extra infrastructure when we don't need it - making simpler test 
scripts more compact and easier to understand.

Cheers,
Adrian


Re: C/C++ White-Box Unit Testing and Test::More

2004-06-24 Thread Adrian Howard
On 24 Jun 2004, at 07:09, Piers Cawley wrote:
[snip]
The xUnit style framework does a much better job of enforcing test
isolation than Test::More does (but you have to remember that what
Test::More thinks of as a test, xUnit thinks of as an assertion to be
used *in* a test).
To be fair to Test::More and friends xUnit doesn't /enforce/ test 
isolation any more than Test::More prevents it. Writing isolated tests 
with Test::More is trivial, just do something like:

sub make_fixture {
return ( Cash->new(10), Cash->new(20) );
};
isa_ok( $_, 'Cash' ) foreach make_fixture();
{
  my ($ten, $twenty) = make_fixture();
  is_deeply $ten + $twenty, Cash->new(30);
};
... etc ...
I had a mild rant about this on the TDD list a few months back. You can 
write isolated tests in a procedural style quite easily. You can also 
easily write tightly-coupled tests in an xUnit style. It's all reliant 
on developer discipline. xUnit provides some infrastructure that helps, 
but it doesn't enforce it. Developers do that.

(Apologies for rant. Consider it a symptom of the number of ghastly 
xUnit test classes that I've seen with 100 line test methods and no 
setup methods.)

Where xUnit wins for me are in the normal places where OO is useful 
(abstraction, reuse, revealing intention, etc.). Where xUnit loses are 
the times when you don't need all the extra infrastructure and it 
just becomes overhead that gets in the way of understanding the test 
suite.

Where the Perl testing framework wins for me:
-	it gives me the flexibility to do both procedural and xUnit styles as 
I see fit
-	it also provides SKIP and TODO tests, which I've not come across 
elsewhere. TODO test in particular I find useful for tracking technical 
debt
-	Test::Harness has a nice ASCII protocol that I can use to feed 
non-Perl stuff into the testing framework

Anyway enough rambling ;-)
Adrian


Re: empty tests, Test::Harness, and Test::Inline

2004-06-11 Thread Adrian Howard
On 11 Jun 2004, at 19:16, Andrew Pimlott wrote:
[snip]
1.  pod2test exits with status 1 when there are no tests.  This is
simple to work around, and you could argue that pod2test is right 
to
throw up a flag for this degenerate case, but I actually think it 
is
more useful to accept it silently and create an empty test file.
[snip]
Why?
What does an empty test file give you over an absent one? Apart from 
the added complexity of having to disambiguate deliberately empty test 
files from accidentally empty ones?

Curiously,
Adrian


Re: Temporarily Overriding subs

2004-05-25 Thread Adrian Howard
On 25 May 2004, at 18:31, Ovid wrote:
[snip]
So I wrote a little module, Sub::Override, to do that for me.  I can 
replace subs, explicitly
restore them to their original value or just let the object fall out 
of scope and have the subs
automatically restored.  However, this seems like such an obvious 
little module that *someone*
must have written it.  Alas, I cannot find it on the CPAN.  Is it out 
there and I missed it, or is
this something I should upload?
[snip]
Hook::LexWrap?
It's what I normally use for this sort of thing, and you can 
short-circuit the original method in a pre- wrapper.
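Something along these lines (the sub being stubbed out is made up, obviously):

    use Test::More tests => 2;
    use Hook::LexWrap;

    sub get_config { 'real config' }             # made-up sub we want to stub out

    {
        # assigning to $_[-1] in the pre-wrapper short-circuits the original sub
        my $wrapper = wrap 'get_config',
            pre => sub { $_[-1] = 'fake config' };
        is get_config(), 'fake config', 'original short-circuited while wrapped';
    }

    # the wrapper object has gone out of scope, so the original is back
    is get_config(), 'real config', 'original restored';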

Adrian


Re: Duplicated code

2004-04-19 Thread Adrian Howard
On 19 Apr 2004, at 21:03, Ovid wrote:

As part of our refactoring project, we'd like to find duplicated code. 
 Our hand-rolled scripts do a decent job, but could use a lot of work. 
 Rather than do a lot of work, I'm curious to know if anyone knows of 
any tools already out there for that.

Any suggestions?  I'd be rather curious to hear about something that 
operates on the op-code level and can possibly cope with renamed 
variables as a result.
[snip]

I don't know of anything Perl specific, certainly not at the opcode 
level.

I've had some success throwing everything through perltidy to normalise 
the code then applying comparator 
http://www.catb.org/~esr/comparator/. This is all purely textual but 
works surprisingly well, and has the bonus of involving almost no 
actual work :-)
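In case it's useful to anybody, the normalisation step amounts to something 
like this (paths invented; using Perl::Tidy's module interface rather than 
the command line):

    use File::Find;
    use File::Basename qw(dirname);
    use File::Path qw(mkpath);
    use Perl::Tidy;

    # copy lib/ to tidied/ with every .pm/.pl run through perltidy, so the
    # duplicate detector isn't fooled by whitespace and layout differences
    find(
        {   no_chdir => 1,
            wanted   => sub {
                return unless /\.p[ml]$/;
                (my $dest = $File::Find::name) =~ s{^lib/}{tidied/};
                mkpath( dirname($dest) );
                Perl::Tidy::perltidy(
                    source      => $File::Find::name,
                    destination => \my $tidied,
                );
                open my $out, '>', $dest or die "can't write $dest: $!";
                print $out $tidied;
            },
        },
        'lib'
    );

    # ...then point comparator (or CPD) at the tidied/ tree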

You may want to take a look at CPD 
http://pmd.sourceforge.net/cpd.html, which does duplicate code 
detection for Java, C, C++, and PHP. Details on the algorithm at 
http://dogma.net/markn/articles/bwt/bwt.htm.

Cheers,

Adrian



Re: Funky «vector» operator

2004-03-19 Thread Adrian Howard
On 19 Mar 2004, at 16:16, Larry Wall wrote
Another approach would be to write a little fixup script that turns
the ASCII variants into the non-ASCII variants, and then you could
bind it to a function key to translate the current line.  That has
the advantage that you could use it on a script someone else sends
you as well if you find the ASCII workarounds too visually offensive.
</lurk>

That would be really nice, and would go a long way to counter my 
(probably unreasonable) fear of non-ASCII in the core language. Being 
able to easily switch between ASCII/non-ASCII using the Perl 6 
equivalent of perltidy would be excellent.

Adrian

<lurk>



Re: testers.cpan.org ideas

2004-03-09 Thread Adrian Howard
On 9 Mar 2004, at 13:14, Leon Brocard wrote:
[snip]
Does anyone have any features they'd like to see on the website? I'm
looking at extracting more information (Perl version, platform) and
having pages (and thus RSS) per author.
RSS feeds would be *very* nice :-)

Adrian



Re: testers.cpan.org ideas

2004-03-09 Thread Adrian Howard
On 9 Mar 2004, at 13:35, Leon Brocard wrote:

Adrian Howard sent the following bits through the ether:

RSS feeds would be *very* nice :-)
Easy request to fulfill - it already does has an RSS feed per
distribution. The bottom of
http://testers.cpan.org/show/Test-Exception.html points out:
http://testers.cpan.org/show/Test-Exception.rss
Well - learn something new every day :-)

So now I want an RSS feed per author, so I don't have to subscribe to
30 RSS feeds ;-)
++good.

Adrian



Re: Aborting testsuits

2004-02-23 Thread Adrian Howard
On Monday, February 23, 2004, at 02:40 PM, Thomas Klausner wrote:
[snip]
Is there a way to abort a whole testsuite?
[snip]

Yup. Take a look at BAILOUT in Test::Builder. Doing:

	Test::More->builder->BAILOUT

should stop Test::Harness in its tracks.
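For example (prerequisite and test count made up):

    use Test::More tests => 42;

    # bail out of the whole run if a prerequisite is missing
    unless ( eval { require DBD::Pg; 1 } ) {
        Test::More->builder->BAILOUT('DBD::Pg not installed');
    }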

Adrian



Re: Aborting testsuits

2004-02-23 Thread Adrian Howard
No idea :-)

Mr Schwern?

Adrian

On Monday, February 23, 2004, at 07:04 PM, Thomas Klausner wrote:

Hi!

On Mon, Feb 23, 2004 at 05:01:54PM +, Adrian Howard wrote:
On Monday, February 23, 2004, at 02:40 PM, Thomas Klausner wrote:
[snip]
Is there a way to abort a whole testsuite?
[snip]

Yup. Take a look at BAILOUT in Test::Builder. Doing:

	Test::More->builder->BAILOUT

should stop Test::Harness in its tracks.
Thanks, this is working.

Is there any reason why BAIL_OUT is marked as unimplemented in the
Test::More docs?


--
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}



Re: Aborting testsuits

2004-02-23 Thread Adrian Howard
On Monday, February 23, 2004, at 10:46 PM, Andy Lester wrote:

Because it is, in Test::More.  I've yet to need it.  Nobody's given 
me a
patch to implement it.
And T::H doesn't recognize anything like that either?
From perldoc Test::Harness

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words
  Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.
:-)

Adrian



Re: Distributed testing idea

2004-02-18 Thread Adrian Howard
On Wednesday, February 11, 2004, at 09:24  pm, Michael G Schwern wrote:

The biggest time suck in developing MakeMaker, and to a lesser extent
Test::More, is running the tests.  Why?  Because they need to be run on
lots of different platforms with lots of different versions of Perl.
Currently, I do this by hand.  And we all know manual testing sucks.
Its time consuming and you tend to avoid it.  I can't run the tests on
every platform at every patch so I often wind up breaking something and
not realizing it for a while.
So what I need is some way to set up a network of test servers such 
that
I can say test this module for me and my testing client would ship it
to as many test servers as it can find and get the results back all in
just a few minutes.
[interesting outline implementation snipped]

Random comments. I have zero available tuits to help implementation so 
feel free to ignore and/or laugh:

-	Nice idea

-	If this is going to be run by paranoid people everything would have 
to be over https to prevent man-in-the-middle attacks on the code being 
transported

-	I've done a vaguely related task in the past. Instead of distributing 
one test suite over several machines, I was distributing bits of a 
single test suite over several machines (so I could run the test 
scripts in parallel and decrease the overall runtime of the whole test 
suite). Would be nice if we had something flexible enough to cope with 
both scenarios - but that might make it too complex to implement in a 
day or so :-)

-	I solved my problem with SSH rather than HTTP since I had the 
infrastructure for it in place on the machines I was playing with. 
Might be worth considering as an alternative to HTTP[S]

-	Would you want to deal with cannot test (e.g. because the test server 
didn't have the necessary prerequisite modules rather than the test 
itself timing out or there being a communication problem) as well as 
pass/fail?

-	Some mechanism to automatically gather/report the base platforms 
would seem to be a good idea. Otherwise you are going to have people 
forgetting to keep the central list up to date when then update their 
box.

Cheers,

Adrian



Re: ok(1,1) vs. ok ('foo','foo') in Test::More

2004-02-03 Thread Adrian Howard
On Tuesday, February 3, 2004, at 05:44  pm, Tels wrote:
[snip]
This has prevented me from converting several huge old testsuites 
from
use Test; to use Test::More; because I know that I would then have 
to
go and add testnames to thousands of tests (e.g. all tests that test for
number output). This is very boring, and you can bet that I will come 
up
with is ($foo->baz(), 12, 'is 12'); just to shut off the warnings. 
And bad
testnames are like bad comments, better have none than a bad or silly
one :)
[snip]

Since the test names are optional, is there any need to add one at all 
as you're converting?
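i.e. (keeping the example from above) the old Test.pm style

	ok( $foo->baz(), 12 );

converts straight to

	is( $foo->baz(), 12 );

with no name added and no warning.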

Adrian



Default test name?

2004-02-02 Thread Adrian Howard
Hi all,

I've just got around to adding default test names to Test::Class by 
wrapping Test::Builder::ok so doing:

	sub correct_answer : Test { is $answer, 42 };

will produce

	ok 1 - correct_answer

This seems to work just dandy.

However, I was wondering if anybody else ever wanted to do this sort of 
thing and, if so, would a more generic API to the test name be useful - 
e.g. localising something like $Test::Builder::Test_name?

If so, I can probably be persuaded to write a patch - if not I'll shut 
up and go away :-)
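For the curious, the wrapping is roughly along these lines (heavily simplified 
- not the actual Test::Class code, and $Default_name is an invented stand-in 
for however you track the current test method):

    use Test::Builder;

    our $Default_name;                           # invented: set to the current method name

    my $original_ok = \&Test::Builder::ok;
    {
        no warnings 'redefine';
        *Test::Builder::ok = sub {
            my ($self, $test, $name) = @_;
            $name = $Default_name unless defined $name;
            local $Test::Builder::Level = $Test::Builder::Level + 1;
            $original_ok->($self, $test, $name);
        };
    }

    # Test::Class then localises $Default_name to the test method name around
    # each test method, so a plain  is $answer, 42;  picks it up automatically.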

Adrian



Re: Default test name?

2004-02-02 Thread Adrian Howard
On Monday, February 2, 2004, at 11:53  pm, chromatic wrote:

On Mon, 2004-02-02 at 15:46, Adrian Howard wrote:
[snip]
I'd rather print less if I don't really care what the name is, though I
don't feel exceedingly strongly that way.  It just seems that a default
test name is there only to have a test name, not because it provides 
any
useful information.

People should use test names if they make the tests easier to 
understand
and to maintain, not because people should use test names.
[snip]

Oh I agree :-) I just want to define my test name elsewhere so rather 
than having duplication:

	sub correct_answer : Test { is $answer, 42, 'correct_answer' };

or meaningless method names:

	sub test123 : Test { is $answer, 42, 'correct_answer' };

I can have the (in my eyes) neater:

	sub correct_answer : Test { is $answer, 42 };

I like meaningful method names in Test::Class test suites, since I can 
use them to make quick-n-dirty documentation in an AgileDox style 
http://joe.truemesh.com/blog/archives/agile/47.html.

I was just wondering if there were any other use cases out there that 
would justify adding something more generic.

Adrian



Re: Default test name?

2004-02-02 Thread Adrian Howard
On Tuesday, February 3, 2004, at 12:26  am, Michael G Schwern wrote:
[snip]
In the Test::Class context, the default name would extend only to a 
given
test method.  So you could have a default name which is, for example,
the name of the test method.  Or something like, testing X feature.
[snip]

Exactly.

[snip]
As for providing a Test::Builder default, for the time being just 
override
ok().  I don't think anything more than that is necessary at this 
point.
Fairy Nuff :-)

Adrian



Re: Testing complex web site

2004-01-19 Thread Adrian Howard
On Monday, January 19, 2004, at 06:10  pm, Gabor Szabo wrote:

If this is OT, please point me to some better place to find an answer.
[snip]

Not OT in my opinion, but you also might want to try 
http://groups.yahoo.com/group/TestFirstUserInterfaces.

On the functional level:
Basic things can be achieved by WWW::Mechanize but I don't know yet how
to deal with Javascript in the response page.
[snip]
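For the non-JavaScript side of things WWW::Mechanize plus Test::More will get 
you a surprisingly long way. Something like (URL, form fields and page text 
all invented):

    use Test::More tests => 3;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new;

    $mech->get('http://localhost/login');           # hypothetical app under test
    ok $mech->success, 'fetched the login page';

    $mech->submit_form(
        form_number => 1,
        fields      => { username => 'test', password => 'secret' },
    );
    ok $mech->success, 'submitted the login form';
    like $mech->content, qr/Welcome/, 'logged in and greeted';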

If you've got a Win box around you might want to try:

http://samie.sourceforge.net/
-   allows you to drive MSIE from Perl which can be handy for testing
https://sourceforge.net/projects/ieunit/
-   testing framework using JavaScript to drive MSIE
You also might find these JavaScript unit testing framworks of use:

http://jsunit.berlios.de/
http://jsassertunit.sourceforge.net/docs/index.html
Adrian



Re: Trying to spear a phalanx shield for pod

2003-10-28 Thread Adrian Howard
On Friday, Oct 24, 2003, at 14:23 Europe/London, Andrew Savige wrote:

I'm about to add a POD test program to my phalanx distro.
Before I do that, just want to check I'm using the best model.
I plan on using the one from WWW::Mechanize (shown below) --
unless someone can suggest a better model.
[snip]

This may be a dim question but why scan blib and lib?

[snip]
my $blib = File::Spec->catfile(qw(blib lib));
[snip]

Wouldn't everything in lib be in blib at test time? Also, isn't there 
the possibility that people might transform illegal POD in lib to legal 
POD in blib using .PL scripts at build time?

Adrian



Re: Phalanx / CPANTS / Kwalitee

2003-10-15 Thread Adrian Howard
On Wednesday, Oct 15, 2003, at 11:09 Europe/London, Rafael 
Garcia-Suarez wrote:

Thomas Klausner wrote:
there are currently 4 dists on CPAN that only include a configure 
script
(makepp-1.19, glist-0.9.17a10, swig1.1p5, shufflestat-0.0.3)

179 do not include any of Makefile.PL, Build.PL or configure.

Quite a lot come with two or three of those files.
Could we infer that a distribution that comes with several Makefile.PLs
may have an overcomplicated build process, maybe indicating a low
kwalitee ?
I don't think so.

For example I'm planning to release my modules with Build.PL and 
Makefile.PL in the future (because I like Module::Build, but want to 
continue to support people using CPAN).

(maybe more than one == higher kwalitee :-)

Adrian



Re: passing arguments to tests

2003-09-13 Thread Adrian Howard
On Thursday, Sep 11, 2003, at 16:38 Europe/London, Ovid wrote:

--- Andrew Savige [EMAIL PROTECTED] wrote:
Oh, that 'grind' looks like a very handy command but I'm a bit
confused about how you use it. Is it just a handy general-purpose
command or do you use it specifically as part of make test in
your CPAN distributions?
It's a utility that I wrote to allow me to better manage my tests.
[snip]

Maybe worth having a chat with the Test::Verbose/tv author - just to 
avoid that whole duplication of effort thang ;-)

Adrian



Re: Test::More and 'deep' tests

2003-09-09 Thread Adrian Howard
On Tuesday, Sep 9, 2003, at 10:52 Europe/London, Tony Bowden wrote:
[snip]
1) ok $str1 eq $str2;
2) is $str1, $str2;
3) is_deeply [$str1], [$str2];
4) is_deeply $str1, $str2;
All should pass as far as I am concerned.

The Test::More deeply behaviour matches my intuitions, and I would have 
tests that break if this changed (although the documentation could be a 
little clearer - I know, I should write a patch :-)

If I need to do other sorts deep comparisons I know where to go.

(didn't we have a similar discussion a few months back?)

Adrian



Re: blocks and subplans again

2003-08-26 Thread Adrian Howard
On Thursday, August 21, 2003, at 08:17  pm, Michael G Schwern wrote:

On Thu, Aug 21, 2003 at 02:38:03PM +0100, Fergal Daly wrote:
[snip]
You could allow extensions at any time but then you lose the ability 
to know
if you ran 4 + 2 tests or 5 + 1,
Not if you introduce an end tag (though I'd rather not).
Why (he asks curiously)?

I know I've had one occasion where a footer would have saved me some 
trouble (a test script exiting early with a safe exit value).

[snip]
Though now the 'no_plan' style in a subplan gets confusing.  We might 
have
to change the no_plan style so that it has to produce some sort of 
header.
It might be literally 1..N.
[snip]

I'd like to see this.

Adrian



Re: blocks and subplans again

2003-08-26 Thread Adrian Howard
On Thursday, August 21, 2003, at 11:50  pm, Michael G Schwern wrote:

On Thu, Aug 21, 2003 at 10:19:35PM +0100, Fergal Daly wrote:
[snip]
Also you can allocate a sub block to each thread and you don't have 
to worry
about it's output getting confused with the output of any other thread
because every thing from thread 1 will have a number starting with 1. 
and
thread 2 will have 2. etc
This is convincing.
Another use case would be the distributed execution of a test suite.

A few months ago I had a test suite that took so long to execute 
(20-30 minutes) it was interfering with integration (I like to run the 
tests every time somebody checks in).

My solution was to distribute the test execution over several machines. 
However, I could only do this at the test script level since I needed 
to feed a plans worth of tests to Test::Harness at a time, which meant 
that test output blocked all of the time waiting for one script or 
another to finish.

It would have been nice to have been able to interleave output from 
several different plans so we could have got to see failing tests more 
quickly.

Adrian



Re: Existing books on testing?

2003-08-19 Thread Adrian Howard
On Tuesday, August 19, 2003, at 02:24  pm, Adam Turoff wrote:
[snip]
In _Software Craftsmanship_, Pete McBreen has high praise for:

The Craft of Software Testing
Brian Marick
Prentice Hall
It's out of print and nearly impossible to find.  I haven't read it 
yet,
so I can't say whether it is as seminal as McBreen says it is.
[snip]

I've also been told that this is good - but I've not found a copy 
myself yet either .

Adrian



Re: Scrutinizing CPAN distributions (was Testing for valid path names...)

2003-08-18 Thread Adrian Howard
On Monday, August 18, 2003, at 05:31  pm, Tels wrote:

I didn't even know that cpanratings exists! Wow! But why, by Seline 
Moonbow,
does this site need a login just to show me a rating?
Taken a look at search.cpan.org recently then? For example:

	http://search.cpan.org/author/MBARBON/Module-Info-0.22/

Addition: When you click a module name, you are asked to rate the module, not
to view its rating or review. This is, however, not clear from the login
page, nor is it clear that it needs cookies to work. And it doesn't make a
difference whether you are the original author of the module or not, despite
cpanratings trying to make you believe otherwise. *sigh*
You could always write some code - I'm sure it would make ask happy ;-)

Adrian



Re: Existing books on testing?

2003-08-15 Thread Adrian Howard
On Friday, August 15, 2003, at 06:49  pm, Kurt Starsinic wrote:
[snip]
Worth being familiar with.  Very practical.  If anybody knows a
good book on Junit (if there is such a thing, HHOS), I would love
to know about it.
Unit Testing in Java: How Tests Drive the Code by Johannes Link & 
Peter Fröhlich has lots of good stuff, but (in the edition I read 
anyway) a few translation quirks from the original German. Still worth 
getting tho'.

I've heard that JUnit in Action http://manning.com/massol/ is 
looking pretty good, but it's not out until October. I've not seen it 
myself.

O'Reilly's Java Extreme Programming Cookbook has a nice introductory 
chapter on JUnit but unless you want to learn about XP too is probably 
not worth the price of the book.

All IMHO of course ;-)

Adrian


Re: Existing books on testing?

2003-08-15 Thread Adrian Howard
Three I would thoroughly recommend, although not Perl related in any 
way, are:

Lessons Learned in Software Testing: a Context-driven Approach  
Cem Kaner, James Bach
Publisher: John Wiley & Sons Inc;   ISBN: 0471081124

Testing Extreme Programming  
Lisa Crispin, Tip House
Publisher: Addison Wesley;   ISBN: 0321113551

Test Driven Development
Kent Beck
Publisher: Addison Wesley;   ISBN: 0321146530
Adrian

On Friday, August 15, 2003, at 06:25  am, Michael G Schwern wrote:

I'm often embarrassed when I get to the end of a testing tutorial and come
come
to the section on suggested books which pretty much consists of
Perl Debugged.

What books out there are of use for those wanting to learn Perl 
testing?
They don't necessarily have to be specificly about *Perl* testing.
I've put up a Wiki page to generate a listing.
http://www.pobox.com/~schwern/cgi-bin/perl-qa-wiki.cgi?TestingBooks

--
Michael G Schwern[EMAIL PROTECTED]  
http://www.pobox.com/~schwern/
Death was thought to be fatal.
-- Craig A. Berry in [EMAIL PROTECTED]



