Smoking from GIT
With git imminent after the release of perl-5.8.9, we need to discuss the best way to continue smoking blead (and the other tracks) from git. When Perforce is closed and the move to git is final, we will no longer have Perforce change numbers, and we do not know whether rsync will still be available, or whether it will still come with a .patch file. The .patch file was used not only to report exactly which version of perl was smoked, but also to decide whether a smoke is really wanted (using smartsmoke) and to report that level in the Subject: line of the posted reports.

I'd like to invite everyone who has (good) ideas on the subject, and/or good git knowledge, to discuss this on IRC. Abe and I have just opened #smoke on irc.perl.org.

--
H.Merijn Brand, Amsterdam Perl Mongers (http://amsterdam.pm.org/)
using and porting perl 5.6.2, 5.8.x, 5.10.x on HP-UX 10.20, 11.00, 11.11, 11.23, SuSE 10.1, 10.2, AIX 5.2, and Cygwin.
http://qa.perl.org  http://mirrors.develooper.com/hpux/
http://www.test-smoke.org  http://www.goldmark.org/jeff/stupid-disclaimers/
Non-standard use of Devel::Cover
Hi, I'm a newbie to Devel::Cover but not to perl. I am contracting for a firm that has a legacy web application written in perl. When I started work here, the developers had already begun a rudimentary test framework where the test files are kept in mirrored directory locations: if the real package is in /Project/lib/perl/Model/Foo.pm, the test for that package lives in /TestProject/lib/perl/Model/foo.t. A test_runner.pl script collects the .t file locations in the test directory and executes them, checking for errors only and outputting the results.

I have since modified the test_runner so that it uses TAP::Harness and Devel::Cover, adding coverage via a command-line switch. However, because of my inexperience with Devel::Cover and its default behavior, the cover_db files are enormous: coverage is being instrumented on everything ever used in the process of executing the test cases.

One approach I am considering is to modify the .t files themselves to contain a string naming the particular module under test. The harness runner would then not only collect the .t files for later execution but also associate a module under test with each one, so that I could pass a 'select' to Devel::Cover when executing the test, limiting coverage to the module under test (reducing cover_db size and decreasing run times when the coverage option is selected).

I am wondering whether this is a decent approach, and whether anyone has had success using Devel::Cover with a non-standard module structure. I searched the archives before posing this question. So far, the one answer I found was, to paraphrase: well, you fool, everything is designed to make it easy if you stick to the t/*.t paradigm, so don't do that.
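A minimal sketch of that third paragraph's idea, assuming each .t file carries a "# MODULE_UNDER_TEST: <path>" comment (that tag name is hypothetical, not an existing convention) and that Devel::Cover is installed; `-select` and `+ignore` are real Devel::Cover options, though the exact select/ignore interplay is worth checking against its documentation:

```shell
# Hypothetical runner step: pull the module-under-test path out of the
# test file itself, then run that test with coverage restricted to it.
T_FILE=/TestProject/lib/perl/Model/foo.t

# Extract the path after the "# MODULE_UNDER_TEST:" tag.
MODULE=$(grep -m1 '^# MODULE_UNDER_TEST:' "$T_FILE" | sed 's/^[^:]*: *//')

# -select limits instrumentation to files matching $MODULE;
# +ignore,. adds a catch-all ignore for everything else.
perl -MDevel::Cover="-select,$MODULE,+ignore,." "$T_FILE"
```

The same extraction could instead live inside test_runner.pl, setting the switch per test via TAP::Harness's `switches` option, so the .t files need only the one tag line added.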
Unfortunately, this codebase can't be forced into that paradigm, and I'd still like to provide the development community here with a robust test framework that lets them test and produce coverage metrics without completely changing the way they deploy the software, which is what would be required if I tried to shoehorn in the standard practices. Has anyone been successful at producing meaningful coverage reports when the test files are in a non-standard location? Any help would be appreciated. Rick