Perrin Harkins writes:
> But what about the actual data?  In order to test my $product->name()
> method, I need to know what the product name is in the database.  That's
> the hard part: writing the big test data script to run every time you
> want to run a test (and probably losing whatever data you had in that
> database at the time).

There are several issues here.  I have answers for some but not all.

We don't do complex unit tests.  We save those for the acceptance test
suite.  The unit tests do simple things.  I've attached a basic unit
test for our DBI abstraction layer.  It runs on Oracle and Postgres.

Acceptance tests take over an hour to run.  We have a program which
sets up some basic users and clubs.  This is run once.  It could be
run before each run of the test suite, but we don't bother.  We have
tests that cover creating users, entering subscription payments,
twiddling files, and email.  By far the biggest piece is testing our
accounting.  As I
said, we used student labor to write the tests.  They aren't perfect,
but they catch lots of errors that we miss.

Have a look at:

http://petshop.bivio.biz/src?s=Bivio::PetShop::Util

This program populates the database for our petshop demo.  It builds
the entire schema, too.  The test suite for the petshop will assume
this data.
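
To give the flavor, here's a minimal sketch of such a population
script using plain DBI.  It is not the actual Bivio::PetShop::Util;
the DSN, DDL file, and table are made up:

    #!/usr/bin/perl -w
    # Minimal sketch only: plain DBI, invented DSN, DDL file, and table.
    use strict;
    use DBI;

    my($dbh) = DBI->connect('dbi:Pg:dbname=petshop_test', 'petshop', 'petshop',
        {RaiseError => 1, AutoCommit => 0});

    # Build the schema from a DDL file, one ';'-terminated statement at a time.
    open(IN, 'petshop-tables.sql') || die("petshop-tables.sql: $!");
    {
        local($/) = ';';
        while (defined(my $stmt = <IN>)) {
            $stmt =~ s/;\s*$//;
            $dbh->do($stmt) if $stmt =~ /\S/;
        }
    }
    close(IN);

    # Just enough rows to exercise paging, sorting, and form submits.
    my($sth) = $dbh->prepare(
        'insert into product_t (product_id, category_id, name) values (?, ?, ?)');
    foreach my $i (1 .. 25) {
        $sth->execute($i, ($i % 5) + 1, "Test Product $i");
    }
    $dbh->commit;
    $dbh->disconnect;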

The amount of data need not be large.  This isn't the point of
acceptance testing imo.  What you want is enough data to exercise
features such as paging, form submits, etc.  Our production database
is multi-GB.

We do have a particularly nasty problem with our quote database.  We
update all of our quote databases nightly using the same software
which talks to our quote provider.  This tests the software in
real-time on all systems.  We run our acceptance test suite in the
morning after all the nightly stuff is done.  It takes hours to
re-import our quote database.

You need a test system distinct from your production and development
systems.  It should be as close in configuration to the production
system as possible.  It can be very cheap. Our test system consists of
a refurb Compaq Presario and a Dell 1300 with 4 disks.  We use
hardware RAID on production and software RAID on test.  Differences
like these don't matter.

The database source needs to be configurable.  Disk is cheap.
You can have multiple users (schemata) using the same database host.
Our database abstraction allows us to specify the target database
vendor, instance, user, and password.  Our command line utility
software allows us to switch instances easily, and the config module
does, too.
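
The idea is simple enough to sketch.  This is not our Bivio::Ext::DBI;
the package name, instance table, and environment variable are
invented:

    package My::DBI;
    # Sketch: a named instance selects the DSN (vendor and database),
    # user, and password, so the same code can point at dev or test.
    use strict;
    use DBI;

    my(%INSTANCES) = (
        dev => ['dbi:Pg:dbname=petshop_dev', 'petshop', 'petshop'],
        test => ['dbi:Oracle:petshop_test', 'petshop_test', 'petshop_test'],
    );

    sub connect {
        my($proto, $instance) = @_;
        $instance ||= $ENV{MY_DB_INSTANCE} || 'dev';
        my($dsn, $user, $password) = @{$INSTANCES{$instance}
            || die("$instance: unknown database instance")};
        return DBI->connect($dsn, $user, $password,
            {RaiseError => 1, AutoCommit => 0});
    }

    1;

A '-db' flag or an environment variable then just maps to one of the
instance names.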

I often run the same tests against my development database and the
test database and compare the results.  For example:

     b-petshop -db test create_db

All utilities have a '-db' argument.  Alternatively, I can specify the
user in longhand, as in the Connection test below:

     perl -w Connection.t --Bivio::Ext::DBI.database=test

All config parameters can be specified this way, or in a dynamically
selectable file.
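
Roughly, the override parsing could look like this.  This is not our
config module; the file name and environment variable are invented:

    # Sketch: peel --Some::Package.param=value arguments off @ARGV and
    # merge them over a config hash loaded from a selectable file
    # (the file is Perl that returns a hash ref).
    use strict;

    my($config_file) = $ENV{MY_CONFIG} || '/etc/my-app-config.pl';
    my($config) = -r $config_file ? do($config_file) : {};

    # Command-line values win over the file.
    while (@ARGV && $ARGV[0] =~ /^--([\w:]+)\.(\w+)=(.*)$/) {
        my($pkg, $param, $value) = ($1, $2, $3);
        $config->{$pkg}->{$param} = $value;
        shift(@ARGV);
    }

    # perl -w Connection.t --My::DBI.database=test
    # leaves $config->{'My::DBI'}->{database} set to 'test'.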

> This has been by far the biggest obstacle for me in testing, and from
> Gunther's post it sounds like I'm not alone.  If you have any ideas
> about how to make this less painful, I'd be eager to hear them.

It isn't easy.  We don't write a unit test per class.  Indeed, we're
far from it.  OTOH, we reuse heavily.  For example, we don't need
to test our product list:
http://petshop.bivio.biz/src?s=Bivio::PetShop::Model::ProductList
It contains no "code", only declarations.  All the SQL is generated
by the object-relational mapping layer which handles paging,
column sorting, and so on.  The view is just as simple:
http://petshop.bivio.biz/src?s=View.products
Neither of these modules is likely to break, so we feel confident
about not writing unit tests for them.
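
To make "declarations only" concrete, here's a made-up sketch of what
a declarative list model looks like.  It is not the real ProductList,
and the package name and keys are invented, but the idea is the same:
describe the columns, keys, and ordering, and let the O-R layer
generate the SQL, paging, and sorting:

    package My::Model::ProductList;
    # Invented sketch of a declarative list model: no hand-written SQL,
    # just a description the O-R layer turns into queries with paging
    # and column sorting.
    use strict;

    sub DECLARATION {
        return {
            page_size => 15,
            primary_key => ['Product.product_id'],
            order_by => ['Product.name', 'Product.product_id'],
            columns => [
                'Product.product_id',
                'Product.name',
                'Product.category_id',
                'Product.list_price',
            ],
        };
    }

    1;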

Rob
--------------------------------------------------------------
#!/usr/bin/perl -w
use strict;
use Bivio::Test;
use Bivio::SQL::Connection;

my($_TABLE) = 't_connection_t';
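# Bivio::Test->unit accepts a declarative specification: each object is
# paired with a list of method => cases entries.  A case maps a list of
# method arguments to the expected result (a value, a Bivio::DieCode, or
# undef when we don't care about the result), and a hash in the method
# slot adds options, e.g. a result_ok callback to check the return value.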
Bivio::Test->unit([
    Bivio::SQL::Connection->create => [
        execute => [
            # Drop the table first, we don't care about the result
            ["drop table $_TABLE"] => undef,
        ],
        commit => undef,
        {
            method => 'execute',
            result_ok => \&_expect_statement,
        } => [
            # We expect to get a statement back.
            [<<"EOF"] => [],
                create table $_TABLE (
                    f1 numeric(8),
                    f2 numeric(8),
                    unique(f1, f2)
                )
EOF
            ["insert into $_TABLE (f1, f2) values (1, 1)"] => [],
        ],
        commit => undef,
        execute => [
            ["insert into $_TABLE (f1, f2) values (1, 1)"]
                => Bivio::DieCode->DB_CONSTRAINT,
        ],
        {
            method => 'execute',
            result_ok => \&_expect_one_row,
        } => [
            ["update $_TABLE set f2 = 13 where f2 = 1"] => [],
        ],
        execute_one_row => [
            ["select f2 from $_TABLE where f2 = 13"] => [[13]]
        ],
        {
            method => 'execute',
            result_ok => \&_expect_one_row,
        } => [
            ["delete from $_TABLE where f1 = 1"] => [],
        ],
    ],
]);

# sub _expect_statement(any proto, string method, array_ref params, array_ref expected, array_ref actual) : boolean
#
# Returns true if $actual->[0] is a DBI::st.
#
sub _expect_statement {
    my($proto, $method, $params, $expected, $actual) = @_;
    return 0 unless ref($actual) eq 'ARRAY';
    my($st) = $actual->[0];
    return ref($st) && UNIVERSAL::isa($st, 'DBI::st') ? 1 : 0;
}

# sub _expect_one_row(any proto, string method, array_ref params, array_ref expected, array_ref actual) : boolean
#
# Returns true if $actual->[0] is a DBI::st and we processed one row.
#
sub _expect_one_row {
    my($proto, $method, $params, $expected, $actual) = @_;
    return 0 unless _expect_statement(@_);
    return $actual->[0]->rows == 1 ? 1 : 0;
}
