cvsuser 01/11/20 08:07:45
Added: P5EEx/Indigo Changes MANIFEST Makefile.PL
P5EEx/Indigo/P5EEx README
P5EEx/Indigo/P5EEx/Indigo/pod unit_test_guide.pod
Log:
initial insert into P5EEx-Indigo
Revision Changes Path
1.1 p5ee/P5EEx/Indigo/Changes
Index: Changes
===================================================================
use rcs2log to generate this
1.1 p5ee/P5EEx/Indigo/MANIFEST
Index: MANIFEST
===================================================================
Changes P5EEx-Indigo Change Log
MANIFEST All files in the P5EEx-Indigo sandbox
Makefile.PL Makefile Generator
P5EEx/Indigo/pod/unit_test_guide.pod Small guide to writing unit tests
1.1 p5ee/P5EEx/Indigo/Makefile.PL
Index: Makefile.PL
===================================================================
#!/usr/bin/perl -w
#$Id: Makefile.PL,v 1.1 2001/11/20 16:07:45 dkubb Exp $
use strict;
use File::Spec::Functions qw(catdir curdir);
use ExtUtils::MakeMaker qw(WriteMakefile);
WriteMakefile(
NAME => 'P5EEx::Indigo',
DISTNAME => 'P5EEx-Indigo',
VERSION => 0.01,
PMLIBDIRS => [catdir(curdir, 'P5EE')],
);
1.1 p5ee/P5EEx/Indigo/P5EEx/README
Index: README
===================================================================
#$Id: README,v 1.1 2001/11/20 16:07:45 dkubb Exp $
The P5EEx-Indigo namespace is purely
experimental and will hopefully contain
some of the following projects:
- Documentation on writing unit tests
1.1 p5ee/P5EEx/Indigo/P5EEx/Indigo/pod/unit_test_guide.pod
Index: unit_test_guide.pod
===================================================================
=head1 NAME
P5EE Unit Test Guide - guidelines for writing unit tests
=head1 VERSION
$Revision: 1.1 $
=head1 DOCUMENT STATUS
This document is a very rough draft.
Corrections, additions and suggestions are
very welcome.
=head1 DESCRIPTION
This document outlines my observations about
writing tests for perl modules and programs.
Recently, I've been involved in writing
numerous tests for a development system and
all its perl modules, and I've tried to
condense what I've learned into this
document.
Writing perl tests was almost completely
unknown to me 6 months ago, so hopefully
others can review this, add to it, and
help improve the general state of perl
unit testing.
This document is intended to serve as a
reference for P5EEx developers writing tests
for future P5EE modules.
=head1 TUTORIALS
Michael Schwern has provided an excellent
introduction (L<Test Tutorial introduction>)
to writing tests for perl code using
standard modules like Test::More.
I highly recommend you read this article;
it gives good insight into how to write
basic tests.
=head1 GUIDELINES
=head2 Minimize environmental impact
So you have a module idea that you are
kicking around, and you know it will need
to be tested at some point, but you don't
know where to start?
It's often a good idea to think about
what the module does, and how you can
create an I<environment> for the test to
run in that most closely matches real
world conditions. At the same time
you have to ensure that your simulated
environment does not interfere with the
real-world in any way.
To give an example, imagine you want to
build a module whose main purpose is
to load and save configuration
information to a data source. For this
example the data source is a simple flat
file. Now imagine that your module has
the location of the flat file hard-coded
in it. This is bad on many levels:
any tests you run on the module could
change the global data source.
As you can see, it is important to be
thinking "How am I going to make
this module easy to test?" B<while> you
are doing its initial design. In the
above example you would want to ensure
that the data source is not hard-coded
in your module; otherwise, during testing
you could risk modifying real-world data.
A better approach would be to allow the
user to define this information in the
constructor, configuration system, or
somewhere else appropriate.
Conclusion: Design your module to be
easily tested in its own I<environment>.
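To sketch this idea (the module name and its methods here are hypothetical, not part of P5EE), the constructor can take the data source location from the caller instead of hard-coding it:

```perl
package My::Config;   # hypothetical module name, for illustration
use strict;

# Let the caller supply the data source location rather than
# hard-coding a real-world path inside the module.
sub new {
    my ($class, %args) = @_;
    die "file parameter required\n" unless defined $args{file};
    return bless { file => $args{file} }, $class;
}

# Accessor for the configured data source location.
sub file { return $_[0]->{file} }

1;
```

A test can now point the object at a temporary file and never touch the real-world configuration.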
=head2 Common test patterns
There are some things that will be
common across most of the modules
you build. Every module you write is
probably going to include perl code, so
you're going to want to test whether
it even compiles. Your code is (hopefully,
lest ye shall burn!) going to contain
some documentation in the form of POD,
and you want to make sure it is well
structured and that all public methods
are documented.
You want to look out for common patterns
such as these, and develop a "test toolkit"
where common test patterns can be stored.
It doesn't matter if it's a document, a
directory, or in your head. (OK, it does
matter: what's in your head can't be
shared with others, so let the P5EE list
know when you discover a new
pattern.) The important thing is that
you can bootstrap your test writing
process by including the basics before
writing tests for your specific module.
Some common test patterns that have
been identified so far are:
=over 4
=item *
use Test::More's require_ok() on the module
source file.
=item *
use Pod::Checker to make sure the module's
POD is syntactically correct.
=item *
use Pod::Coverage to make sure your module
includes sufficient POD documentation. It
ensures all your public methods are documented
and that you include the main C<=head1> sections,
such as NAME and DESCRIPTION.
=back
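As a sketch of the first two patterns combined in one boilerplate test (here the test writes a tiny sample module to a temporary file so it can run standalone; in real use you would point C<$pm> at your own .pm file):

```perl
#!/usr/bin/perl -w
# boilerplate.t - compile and POD checks for one module.
use strict;
use Test::More tests => 2;
use Pod::Checker;
use File::Temp qw(tempdir);
use File::Spec::Functions qw(catfile);

# Write a tiny sample module to check; replace this with the
# path to your real module in practice.
my $dir = tempdir(CLEANUP => 1);
my $pm  = catfile($dir, 'Sample.pm');
open my $fh, '>', $pm or die "open $pm: $!";
print $fh <<'EOM';
package Sample;

=head1 NAME

Sample - a tiny module used by the boilerplate test

=cut

1;
EOM
close $fh;

# Pattern 1: does the module even compile?
require_ok($pm);

# Pattern 2: is its POD syntactically correct?
is(podchecker($pm, \*STDERR), 0, 'POD has no syntax errors');
```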
If you have found any more common patterns,
please email the author of this document
for inclusion. Please include sample code,
if possible.
Conclusion: Please use a common suite of
tests with each of your modules.
=head2 Testing Related Modules
The author recommends Test::More in your
individual test files. It provides a
standard set of test routines that can be
made uniform across all P5EE module test suites.
If you want to run a group of test files you
can either use C<perl Makefile.PL; make test>
or you can use Test::Harness directly.
Test::Harness is used when you do a C<make test>,
but using it directly produces nicer
and more helpful output.
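Running Test::Harness directly can be as simple as the following sketch (it assumes your tests live under t/, and guards against an empty directory so the script is a harmless no-op before any tests exist):

```perl
#!/usr/bin/perl -w
# runtests.pl - run every .t file under t/ through Test::Harness,
# as "make test" would, but with its fuller direct output.
use strict;
use Test::Harness qw(runtests);

my @tests = sort glob 't/*.t';

# runtests() prints per-file results and a final summary,
# and dies if any test file fails.
runtests(@tests) if @tests;
```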
In order to even run C<make test>
you will need to build a Makefile.PL.
You have a choice here: you can either
write one by hand or generate one with h2xs.
Read the documentation for ExtUtils::MakeMaker
for a tutorial on building Makefile.PLs.
=head2 Test each function
When you are planning your module, identify
each function (and by function I don't
mean subroutine, I mean each I<thing>
it will do) and plan to write a test
to ensure it produces the correct
outcome for a given input.
Testing a function could mean writing
a single .t file for each method with
multiple tests, then one .t file for
each thing that a group of methods do
in series.
=head2 Test Failure
A common tendency of developers is to
assume that code will respond in the
correct manner when receiving bad input.
This is often not the case. Code
sometimes responds with irrelevant
information that doesn't assist the
developer in tracking down the problem.
Worse yet, some code will continue to
function as if nothing happened,
entering bad data into a data source.
Worst of all, code could respond by
printing out information to the
application user that could be
deemed a security risk, such as the
user name and password to your database.
You can lessen the likelihood of this
happening, though. Don't just check each
function to make sure it produces the
correct outcome for a given input. B<Give
it bad input and make sure it responds
in the appropriate manner.> Test to make
sure an exception is thrown, or other
appropriate action takes place upon
failure.
And most of all, make sure that sensitive
data is hidden from plain view in the event
of an error.
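For example (the save_config() routine here is a hypothetical stand-in for your own code; it dies when given no data source):

```perl
#!/usr/bin/perl -w
# failure.t - check that bad input raises a useful exception.
use strict;
use Test::More tests => 2;

# Hypothetical function that refuses an undefined data source.
sub save_config {
    my ($file) = @_;
    die "no data source given\n" unless defined $file;
    return 1;
}

# Bad input must throw...
eval { save_config(undef) };
like($@, qr/no data source/, 'save_config() dies without a file');

# ...and the error text must not leak sensitive details.
unlike($@, qr/password/i, 'error message contains no credentials');
```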
=head2 Order by risk
It is important that the tests run in
a specific order, and that they build
on the previous tests. For example,
let's say we have a module that saves
an object to a data source, and we
write a test for it. In order for
this test to run, the object's C<new()>
constructor needs to work. If you
haven't proven this, and C<save()>'s
test fails, you could waste time looking
at the C<save()> method code, when the
new constructor was at fault all
along. This means that the C<new()>
constructor's test should precede the
C<save()> method's test in order.
But how do we show our intentions to
perl? When you are naming your .t files
you can influence the order they are run in
by paying attention to the names you
give them. All test filenames should be
prefixed with a number signifying the
order you would like them run in.
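For example, a risk-ordered t/ directory might be named like this (the names are only illustrative):

```
t/00_compile.t    does the .pm file even compile?
t/10_new.t        does new() build the object correctly?
t/20_save.t       does save() write to the data source?
t/90_debug.t      low-risk extras, such as debugging helpers
```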
How should you decide this order? Look
at all the different functions your
module performs and decide which has
to be proven to work before the others.
If a module does not load, then it is
no good. It makes no sense to test the
C<new()> constructor if the .pm file
won't compile.
Risk should be determined by weighing
how important it is to the module for
the test to pass. A module that doesn't
compile, or a constructor that doesn't
set up the object's internals properly,
is a much higher risk than, say,
a debugging method.
Sometimes risk is hard to determine,
so I just group by function, to keep
things simple.
=head2 New Bug = New Test
When a bug is found in the code, it
means that a test somewhere failed to
identify the problem. A new test should
be added to an existing .t file, or
a brand new .t file should be created
to ensure the bug really is fixed.
Only once all tests have passed, including
the new one, should the bug be considered
resolved.
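A regression test for a hypothetical bug report might look like this (trim_list() and the bug itself are invented for illustration):

```perl
#!/usr/bin/perl -w
# 90_bug_trim.t - regression test for a hypothetical bug where
# trim_list() dropped the final element of its argument list.
use strict;
use Test::More tests => 1;

# Keeps only defined, non-empty values.
sub trim_list {
    my @items = @_;
    return grep { defined && length } @items;
}

# The buggy version returned only ('a') for this input.
is_deeply([trim_list('a', '', 'b')], ['a', 'b'],
    'trim_list() keeps the last element');
```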
=head2 Unordered Tests
Sometimes you will write a method,
then write some tests for it, make sure
they (and all previous tests) pass, then
inch along to the next method.
Other times you'll jump into writing your
module code, due to some spark of wisdom,
then write a bunch of tests all at the end.
In either case, at some point you B<will>
end up with some tests completely out
of order, breaking the guideline
above. This is a certainty in test
development. You'll be testing something
that is dependent on a test further down
the line succeeding.
The only way to avoid this is to re-number
all your tests each time you write a new one.
B<I strongly recommend you not do this>. It
somewhat breaks CVS, in that you can only get
history on a file as far back as the last
re-numbering round without having to go on
a I<CVS attic hunt>.
The best you can do is plan your tests ahead
of time, ordering them, then add tests as bugs
come in. I would suggest re-ordering them
correctly when you move the code to the P5EE
release namespace. The code should be more
proven and stable at this point. In theory
the chance of new bugs, design changes, and
tests causing disorder should be reduced.
=head2 Central Test Configuration
Disclaimer: This is one way I have found
that does simplify test scripts, but if
anyone knows of a better or standard way
of doing these things I would gladly revise
this subsection. Also, I would prefer to
switch this to use the standard P5EE::Config
module, once one has been defined.
Sometimes tests will share common
data. You could hard-code it in each
test file, but that could become a maintenance
headache. It's much better to put all
common things in a central location for
easier configuration.
Test configuration can take on many forms.
You could load it from an INI configuration
file, an XML file, or a perl module. While
the others have merits, I am lazy, so I
just use a perl module with good old perl
code. This will change once a
standard P5EE configuration module has
been defined, but for now it does the job.
Inside the module's t/ directory I
make a file called Config.pm. Inside
it I will have something similar to:
package t::Config;
use strict;
use File::Spec::Functions qw(curdir catfile);
use constant NAME => 'P5EE::Module::To::Test';
use constant PM => catfile(curdir, qw(blib lib P5EE Module To Test.pm));
use vars qw($VERSION @EXPORT_OK);
$VERSION = (qw$Revision: 1.1 $)[1];
@EXPORT_OK = qw(NAME PM);
require Exporter;
*import = \&Exporter::import;
1;
Then in my .t file C<00_compile.t> I will do
something like the following:
#!/usr/bin/perl -wT
#00_compile.t - make sure the module can compile
use strict;
use Test::More tests => 1;
use File::Spec::Functions qw(catdir curdir);
use lib catdir(curdir);
use t::Config qw(NAME PM);
require_ok(PM);
This will load the t::Config module, and import
the configuration variables into the test's
namespace.
Notice that there isn't anything inside this
test about the module being used, or where it
is located. If we ever need to change the name
of the module we don't have to edit 20 small
test files, just Config.pm. As well, the structure
of the test is completely independent of the
module; it could easily be re-used for any
module needing compile-time checks done - which
should be nearly every module.
These constants also come in handy when
writing Makefile.PL files. In the
future, ideally we would want all the meta
data for a module in a single place, not
scattered between t::Config, Makefile.PL,
Build.PL, License, etc.
=head2 Test::More Specifics
Here are some specific things I have
learned about Test::More and some of
the reasons why:
=head3 use is/isnt not ok
The main problem with ok() is that it can tell
you when a test is right or wrong, but not B<why>.
To find out why, you need to go into the code
and either print/warn the test results
before ok(), or write results to a log file
and examine it later.
is() and isnt() are superior to ok().
They will tell you when a test is right
or wrong, but the key difference is that
when there is a failure, they will also
tell you what the input was compared to
the expected result. This seems to speed
up debugging.
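Comparing the two styles side by side (a trivial example; the test names follow the convention described later in this document):

```perl
#!/usr/bin/perl -w
use strict;
use Test::More tests => 2;

my $sum = 2 + 2;

# ok() only reports pass or fail:
ok($sum == 4, 'ok() style - sum is 4');

# is() additionally reports "got" and "expected" values on
# failure, so a broken test explains itself:
is($sum, 4, 'is() style - sum is 4');
```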
=head3 use is_deeply for deep comparisons
If you have a nested data structure there
may be the tendency for you to only test
the top-most values rather than every node
in the tree.
With this routine you can easily compare
two nested data structures and ensure that
B<all> the data you are testing is correct.
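For example (the structure below is invented for illustration), is_deeply() walks the whole tree rather than just the top level:

```perl
#!/usr/bin/perl -w
use strict;
use Test::More tests => 1;

# A nested structure, as you might read back from a data source.
my $got = {
    name  => 'indigo',
    hosts => ['alpha', 'beta'],
    ports => { http => 80 },
};

# Every nested node is compared; on failure, is_deeply()
# pinpoints the location of the first mismatch.
is_deeply(
    $got,
    {
        name  => 'indigo',
        hosts => ['alpha', 'beta'],
        ports => { http => 80 },
    },
    'structure matches at every node',
);
```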
=head3 Always name your tests
Always name your tests. When a .t file
reports there is something wrong with test
number 72, good luck in trying to figure
out what that corresponds to.
While you are naming the tests, try to
include any extra data you will need to
narrow down the test results. For example
you may be testing inside a loop, and
you need to know which iteration it is
in order to pin down the problem. Put
the iteration count number inside the
test name.
Here's an example naming convention I use
for my test names:
<name> - <description> (<extra data>)
Translating this to a code example we get:
is(
$number + 6,
54,
"add 6 - adds 6 to the number object, should add to 54 (\$number is $number)",
);
This is by no means the one true way, but it
seems to work quite well in practice.
=head1 AUTHOR
Dan Kubb <[EMAIL PROTECTED]>
=head1 SEE ALSO
=head2 Test::More
http://search.cpan.org/search?dist=Test-Simple
=head2 Test::Harness
http://search.cpan.org/search?dist=Test-Harness
=head2 ExtUtils::MakeMaker
http://search.cpan.org/search?dist=ExtUtils-MakeMaker
=head2 Test Tutorial introduction
http://search.cpan.org/doc/MSCHWERN/Test-Simple-0.33/lib/Test/Tutorial.pod
=cut