Hi,

I'm a newbie to Devel::Cover but not to Perl. I am contracting for a firm
that has a legacy web application written in Perl.

When I started work here, the developers had already begun a rudimentary
test framework where the test files were kept in "mirrored" directory
locations.

i.e., if the real package is in

          /Project/lib/perl/Model/Foo.pm, 

the test for that package is located in

     /TestProject/lib/perl/Model/foo.t

A test_runner.pl script collects the .t file locations in the test
directory and executes them, checking for errors only and outputting the
results.

I have since modified the test_runner so that it uses TAP::Harness and
Devel::Cover, adding coverage via a command-line switch.

However, due to my "newbieness" with Devel::Cover and its default
behavior, the cover_db files are enormous: coverage is being
instrumented for everything ever loaded in the process of executing the
test cases.
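For reference, the coverage path in my runner currently does the rough equivalent of the following (a sketch using the example paths above; the -I path is my assumption about how the app's lib directory gets onto @INC):

```shell
# With no select/ignore options, Devel::Cover instruments every file
# compiled during the run -- which is why cover_db grows so large.
PERL5OPT=-MDevel::Cover \
    perl -I/Project/lib/perl /TestProject/lib/perl/Model/foo.t

# Then generate the report from the accumulated cover_db.
cover
```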

One approach I am considering is to modify the .t files themselves to
contain a string specifying the particular module under test. The
harness runner would then not only collect the .t files for later
execution but also associate a module "under test" with each, so that I
could execute the test and pass a 'select' option to Devel::Cover,
limiting coverage to the module under test (reducing cover_db size and
decreasing run times when the coverage option is selected).
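A minimal sketch of the marker idea, in case it helps the discussion. The "# MODULE_UNDER_TEST:" comment is my own invention, not an existing convention; -select and -ignore are real Devel::Cover options that take regexes, though the exact select/ignore semantics are worth double-checking against the Devel::Cover docs:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Scan a .t file for a marker comment naming the module under test, e.g.
#   # MODULE_UNDER_TEST: Model::Foo
# (the marker name is hypothetical -- pick whatever convention suits).
sub module_under_test {
    my ($t_file) = @_;
    open my $fh, '<', $t_file or die "Can't open $t_file: $!";
    while (my $line = <$fh>) {
        return $1 if $line =~ /^#\s*MODULE_UNDER_TEST:\s*([\w:]+)/;
    }
    return;    # no marker found
}

# Build a -MDevel::Cover=... option string for one test file: ignore
# everything by default, then select only the module under test.
sub cover_opts_for {
    my ($t_file) = @_;
    my $module = module_under_test($t_file) or return '';
    (my $path = $module) =~ s{::}{/}g;
    return "-MDevel::Cover=-ignore,.,-select,$path\\.pm";
}
```

The runner could then set PERL5OPT to the returned string before handing the file to TAP::Harness, so each test only instruments its own module.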

I am wondering whether this is a decent approach, and whether anyone has
had success using Devel::Cover with a non-standard module structure.

I have searched through the archives before posing this question. So
far, at least one answer was, to paraphrase: 

   "Well, you fool, everything is designed to make it easy
     if you stick to the t/*.t paradigm, so don't do that."

Unfortunately, I can't force this project into that paradigm, and I'd
still like to provide the development community here with a robust test
framework that will let them test and produce coverage metrics without
completely changing the way they deploy the software - something that
would be required if I tried to shoehorn in the "standard" practices.

Has anyone been successful with producing meaningful coverage reports
when the test files are in a non-standard location?

Any help would be appreciated.

Rick