RE: use Tests; # ?

2006-07-17 Thread leif . eriksen
I know we've moved on, but I'm in a completely different time zone, so please 
understand...

I, like demerphq, also think that coming up with a name for each and every test 
is a good idea.

It shouldn’t be hard to think of a description for each and every test.

Just note down why you wrote that test case in the first place.

Don’t know why you wrote a test case? Then delete it, for it serves no known 
purpose.

Tests are written for a reason, and that reason should be part of the test.
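A minimal Test::More sketch of what a test name buys you (the names and values here are invented for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Unnamed: a failure reports only "not ok 1" plus a file and line number.
ok( 1 + 1 == 2 );

# Named: the reason the test exists travels with it into the TAP stream,
# appearing as "ok 2 - small integer addition is exact".
is( 1 + 1, 2, 'small integer addition is exact' );
```

When the second test fails, the harness shows the description, so you know *why* the test mattered without opening the file.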

L

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, 17 July 2006 7:48 PM
To: perl-qa@perl.org
Subject: Re: use Tests; # ?

On Mon, 2006-07-17 at 11:39 +0200, demerphq wrote:

 Test names shouldn't be optional.

I disagree.  I would find it cumbersome to have to come up with a description 
for each and every test.

 Finding a particular test in a file by its number can be quite 
 difficult, especially in test files where you don't have stuff like 
 'ok 26'.
 
 When ok() and is() are silently incrementing the counter and test 
 names aren't used, how is one supposed to find the failing test? As you 
 probably know, it can be quite difficult.

Well, if the test passes, there's no need to know where exactly it's located.  
If it fails, the diagnostics contain the line number:

  not ok 6
  #   Failed test in t/xxx.t at line 26.

I've never seen incorrect line numbers.

--
Bye,
-Torsten

--
No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.1.394 / Virus Database: 268.10.1/390 - Release Date: 17/07/2006
 

**
IMPORTANT
The contents of this e-mail and its attachments are confidential and intended
solely for the use of the individual or entity to whom they are addressed.  If
you received this e-mail in error, please notify the HPA Postmaster, [EMAIL 
PROTECTED],
then delete  the e-mail.
This footnote also confirms that this e-mail message has been swept for the
presence of computer viruses by Ironport. Before opening or using any
attachments, check them for viruses and defects.
Our liability is limited to resupplying any affected attachments.
HPA collects personal information to provide and market our services. For more
information about use, disclosure and access see our Privacy Policy at
www.hpa.com.au
**


RE: [OT] TDD + Pair Programming

2006-04-02 Thread leif . eriksen
I have done the "two programmers, one terminal" approach advocated by
Beck for XP development (not just TDD), and it worked well. We delivered
on time with all features present and correct (where "correct" means the
application passed the customer's Business Acceptance Tests - first
time).

I should note we were both quite experienced developers, and I would say
we were quite close in ability. It was just the two of us, not a
revolving team of 5 or 6, but I could easily see that working too, if
you have the right people.

We didn't do pair programming the whole time - it was more prevalent
during the early weeks of the project, when we were developing the
important parts of the framework. Later on we tended to separate to our
own workstations to complete more mundane requirements.

Try it - it is a hard discipline to maintain, but if you can achieve
some success, it is naturally reinforcing. If you struggle to get the
approach to work, step back and try to see what is holding you back.

L

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]

Sent: Sunday, 2 April 2006 10:05 AM
To: perl-qa@perl.org
Subject: [OT] TDD + Pair Programming

I have never actually had an opportunity to practice
this, but I've always felt that the most obvious way
to combine test-driven development with pair
programming was to have one person write test code
while the other person writes application code. 
Presumably they might change roles periodically, but
I'm not sure if they would actually work at the same
terminal.  However, I've never heard anyone
explicitly advocate this approach.  Is this
actually happening and I'm just not aware of it?  Or
is there some obstacle to this approach that I haven't
considered?

-Jeff  

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


RE: [OT] TDD only works for simple things...

2006-03-30 Thread leif . eriksen

I would classify what Adam does as robustness testing.

Often the first release can be classified as "working, in a perfect world".

Adam lives in a World of Evil.

Let me expand. For most of us (this means Not Adam), we work during the Day 
and rest at Night. We don't call it "Day" and "Not Day", because "Night" 
implies a whole range of things not included in a simple "Not Day" state.

So the extra testing Adam does is more than is implied by "Not Perfect", but 
is included by "Evil".

By most people's measures, when it works in a perfect world (and we've proved 
this by our TDD approach), it does what is advertised and can be released.

But by having someone like Adam wreak havoc on our weak, naïve code, we improve 
its robustness in less-than-perfect conditions.

Coding for a perfect world and coding for Adam's world are really the same 
discipline, taken to different levels. Coding for Evil isn't necessarily 
harder to do or test, but it requires more precision in defining the conditions 
under which you state that your code can be considered to be working.

E.g. Adam gave the example of code that required a reference to a string as a 
parameter, but failed if you passed a reference to a constant string. If the 
doco for the sub in question stated "pass a reference to a mutable string" rather 
than "pass a reference to a string", we would have stymied Adam's Evil World.
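The distinction is easy to demonstrate. Here stamp() is a made-up sub whose doco would say "pass a reference to a mutable string" - handing it a reference to a constant dies at runtime:

```perl
use strict;
use warnings;

# stamp() is documented (hypothetically) as taking a reference to a
# MUTABLE string: it appends a marker to the referent in place.
sub stamp {
    my ($sref) = @_;
    $$sref .= ' [seen]';
    return $$sref;
}

my $buf = 'payload';
my $mutable_ok  = eval { stamp(\$buf); 1 };       # mutable: fine
my $constant_ok = eval { stamp(\'payload'); 1 };  # dies: "Modification of a read-only value attempted"

print 'mutable:  ', ($mutable_ok  ? 'ok' : 'died'), "\n";
print 'constant: ', ($constant_ok ? 'ok' : 'died'), "\n";
```

The precise wording in the doco is what turns the second call from a surprise crash into a documented misuse.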

It is perhaps a bit harder in Perl to recognise where this precision is 
required - in Java and C/C++, the concepts of mutable and immutable are easily 
communicated in code, so, for example, you would expect the compiler to catch 
the passing of an immutable string where a mutable one is required.
This probably supports Adam's earlier point about TDD and loosely typed 
languages. Perhaps some of the new features in Perl6 will help here.

One last point. Testing weird parameter combos and values is good, but 
robustness testing isn't limited to that. Things like network outages, database 
failures, and daylight savings time adjustments are also extremely relevant to 
improving the robustness of our code, if it depends on those services. For 
this kind of complex external-system testing, I have found the mock 
object approach to be superb - and usually part of the TDD development cycle, 
where time permits.
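As a sketch of that approach - not Test::MockObject itself, just a hand-rolled stand-in with invented names (Mock::DBH, robust_do) - a mock can simulate a database outage so the recovery path gets exercised without a real network:

```perl
use strict;
use warnings;

# Hand-rolled mock database handle: fails its first fail_n calls,
# then succeeds - simulating a transient outage.
package Mock::DBH;
sub new { my ($class, %args) = @_; return bless { fail_n => $args{fail_n} || 0 }, $class }
sub do {
    my ($self) = @_;
    if ($self->{fail_n} > 0) { $self->{fail_n}--; die "lost connection\n" }
    return 1;
}

package main;

# Code under test: retry a statement up to 3 times before giving up.
sub robust_do {
    my ($dbh) = @_;
    for my $try (1 .. 3) {
        return 1 if eval { $dbh->do; 1 };
    }
    return 0;
}

print robust_do( Mock::DBH->new( fail_n => 2 ) ) ? "recovered\n" : "gave up\n";
print robust_do( Mock::DBH->new( fail_n => 5 ) ) ? "recovered\n" : "gave up\n";
```

The mock makes the failure deterministic, which is exactly what a flaky real service can never give you in a test suite.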

Leif

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, 31 March 2006 1:54 PM
To: [EMAIL PROTECTED]
Cc: perl-qa@perl.org; [EMAIL PROTECTED]
Subject: Re: [OT] TDD only works for simple things...

Well, the weakness I speak of is not so much that it will never get 
to the point of being stable, but that it introduces a temptation to 
release early without taking the time to critically look at what might 
go wrong, based on your knowledge of how it is implemented.

So more of a timing thing than a "it will never get there" thing.

Adam K

chromatic wrote:
 On Thursday 30 March 2006 07:32, Adam Kennedy wrote:
 
 In contrast, as I hear chromatic express it, TDD largely involves
 writing tests in advance, running the tests, then writing the code.
 
 Not quite.  It means writing just enough tests for the next testable piece of 
 the particular feature you're implementing, running them to see that they 
 fail, writing the code to make them pass, then refactoring both.  Repeat.
 
 The important point that people often miss at first is that it's a very, very 
 small cycle -- write a test, write a line of code.
 
 (The second important point is refactor immediately after they pass.)
 
 In my use of Test::MockObject and UNIVERSAL::isa/can I found I was
 initially able to cause them to fail quite easily with fairly (to me)
 trivially evil cases that would occur in real life.
 
 For the most part, they weren't trivially easy cases that came up in my real 
 life, so I didn't think of them.  I don't feel particularly badly about that 
 either.  The code met my initial goals and only when someone savvy enough to 
 use the code in ways I had not anticipated found edge cases did the 
 problems come up.  I suspect you had little problem working around them until 
 I fixed them, too -- at least in comparison to a lot of other programmers 
 without your evil southern hemisphere nature and experience.
 
 This I think (but cannot prove) is a TDD weakness, in that it might
 encourage not critically looking at the code after it's written to find
 obvious places to pound on it, because you already wrote the tests and
 they work, and it's very tempting to move on, release, and then wait for
 reported bugs, then add a test for that case, fix it, and release again.
 
 It seems more like a weakness of coding in general.  I don't release code 
 with 
 known bugs, but I expect people will report bugs.  Then I'll add test cases, 
 refactor, and learn from the experience.
 
 Compare the previous version of UNIVERSAL::isa to the version I released.  
 Not 
 only does it have far fewer bugs, but it's at least an order of magnitude 
 more 

RE: Network Testing

2006-02-16 Thread leif . eriksen
Well, it depends on what you're actually studying...

1. You have written the code to implement a network bridge, and you want
to test
   i. the code's correctness
   ii. its ability to handle packets correctly under various
configurations and loads

2. You have a network bridge, and you want to study how best to
configure it for various network and load scenarios.

For 1.i, normal unit testing should suffice; it all depends on the
implementation language's (and its commonly available libraries')
support for that kind of thing.

For 1.ii, you could try setting up multiple virtual hosts, using any of
the current tools for this (VMware allows you to create whole virtual
networks just for this kind of thing, User Mode Linux, etc.).

For 2, what Adam said I guess.

L

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, 17 February 2006 3:08 AM
To: perl-qa@perl.org
Subject: Network Testing

Hello,

I'm currently working on a project that involves dynamically configuring
a network bridge to shape network traffic.  I want to set up automated
tests to make sure that data flows the way that it should.  This includes
blocking or limiting traffic based on IPs and/or ports.  Does anyone have
experience in this area and is willing to give some tips/hints on the
subject?

Thanks,

--
David Steinbrunner 


RE: First (developers) Release of Test::Shlomif::Harness

2005-10-11 Thread leif . eriksen


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 


That said, now that TAP is well documented (yay), there's nothing wrong
with writing other harnesses.


Just as a comment: I used the TAP doco to write a VB console app for
testing the non-GUI (library) part of a VB application I recently became
responsible for. The console app writes TAP to STDOUT, and this can be
picked up by the normal means - so now my Perl and (most of) my VB code
can be tested. Yay.
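The emitter side of that is tiny in any language. A sketch in Perl of what such a console app boils down to - a plan line plus numbered ok/not ok lines (the assertions here are invented examples):

```perl
use strict;
use warnings;

# A minimal hand-rolled TAP emitter: number each result, then print
# the plan ("1..N") followed by one "ok"/"not ok" line per assertion.
my @results;

sub tap_ok {
    my ($pass, $name) = @_;
    my $n = @results + 1;
    push @results, ($pass ? 'ok' : 'not ok') . " $n - $name";
    return $pass;
}

tap_ok( 2 + 2 == 4,         'addition works' );
tap_ok( 'TAP' =~ /\ATAP\z/, 'regex anchors match' );

print '1..' . @results . "\n";
print "$_\n" for @results;
```

Anything that writes those lines to STDOUT, in any language, can be consumed by the usual TAP harness machinery.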

Leif


Embedding tests in modules (inspired by my misreading of Test::Code)

2005-08-11 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


I usually do this with can_ok()

can_ok( __PACKAGE__, qw(is_code isnt_code) );


 

Initially I thought "Would that work? Isn't __PACKAGE__ equal to main:: 
in a .t file?" Then I realised we're testing that use_ok is exporting 
these to our namespace, which is __PACKAGE__, so yeah, that'd work.


Then, somehow, I thought: what if the module we're testing looked like 
this -


code
package Mo::Bling;
use strict;
use warnings;
...
sub bling {}
...
other functions defined here
...
1;

__TEST__
use Test::More tests => xxx;

use_ok(__PACKAGE__, qw(pragmas));

is(__PACKAGE__->bling(), 'mo bling than yo', 'adds bling');

...
mo tests 
...

__DATA__

...
data we may wish to use in __TEST__ scenarios
...

__END__
...
pod
...

/code

I'm thinking that the code, tests, data and pod are all there in the .pm 
file - that seems on the surface a good thing. Does this seem like a 
reasonable idea?


Against it is the significant inertia the current .t regime enjoys, but 
it seems an interesting idea.


--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Embedding tests in modules (inspired by my misreading of Test::Code)

2005-08-11 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


You may wish to look at Test::Inline and Test::Class which are different
approaches to putting your tests near your code.
 


Test::Inline looks like what I'm thinking - thanx


Also __TEST__ is not legal Perl which gets into source filters and then the
burning and itching and oi.

 


Yeah I know...

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Re: OSCON testing tutorial?

2005-07-20 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


On Wed, Jul 20, 2005 at 02:48:43PM -0500, Bill Odom ([EMAIL PROTECTED]) wrote:
 

I didn't think we were actually *calling* them Lightning Talks, but 
that does capture the spirit. Lots of topics, even more examples -- a 
very high-density presentation.
   



Plus donuts and dancing girls.

 


Donuts? Did you say donuts!? What kind?

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Devel::Cover Problem: testing || for a default value.

2005-07-11 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


Michael G Schwern wrote:



you're right that the case of $class being false may be of interest, 
but that's not what this common idiom actually does.  The code will 
blithely pass a false value to bless (with potentially unexpected 
results depending on whether $class is 0, '', or undef).  That failure 
is an example of where correctness can't be validated by coverage -- 
where the error lies between the ears of the programmer.  :-)  If 
$class were explicitly tested, then Devel::Cover would pick it up 
properly, such as in bless {}, ref $class || $class || die.



I'd say this idiom is one of the ones I am most often affected by in the 
work I do for the Kwalitee project - the my $class = ref $proto || $proto; 
idiom in constructors. I usually do the following:

1. Add code to handle the 'both false' case, similar to
   my $class = ref $proto || $proto;
   warn 'wrong calling convention for Class::Constructor::new - try 
Class::Constructor->new' and return unless $class;


2. Add a test that makes ref $proto || $proto false, and tidy up the 
harness so the warning doesn't mess up the output:


my @warn;
my $rc;
eval {
   local $SIG{__WARN__} = sub { push @warn, @_ };
   $rc = Class::Constructor::new();
};

is(@warn, 1, 'warning on calling convention');
like(shift(@warn), qr(wrong calling convention for 
Class::Constructor::new - try Class::Constructor->new at ), 'expected 
message');

is($rc, undef, 'no object created');

Now D::C is happy, and the code is more robust - to me a win-win. Now 
newbies who don't really know Perl's OO conventions are gently steered to 
the path of enlightenment, and everyone else is only penalised with a 
very lightweight unless test. This seems OK to me, but I know opinions on 
this cover a wide spectrum.


The only caveat is in regards to those psychos who like to bless into 
the '0' namespace. I believe '' and undef result in blessing into main::.
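That caveat is easy to see in action (a contrived case, obviously):

```perl
use strict;
use warnings;

# Blessing into package '0' is legal, but ref() then returns the
# string '0', which is boolean false - so ref $proto || $proto misfires.
my $obj   = bless {}, '0';
my $r     = ref $obj;                 # '0': defined, but false
my $class = ref $obj || 'fallback';   # takes the wrong branch
print "ref: '$r', chosen class: '$class'\n";
```

The guard in the constructor above would (deliberately) reject such an object too, which for '0'-blessing psychos is arguably the correct outcome.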


--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: [Maybe Spam] Re: Devel::Cover Problem: testing || for a default value.

2005-07-11 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


On Tue, 2005-07-12 at 10:46 +1000, [EMAIL PROTECTED] wrote:

 


1. Add code to handle the 'both false' case, similar to
   my $class = ref $proto || $proto;
   warn 'wrong calling convention for Class::Constructor::new - try 
Class::Constructor->new' and return unless $class;
   



Why not delete the code entirely?  Do these classes *really* expect
users to call them with anything besides Classname->new()?

 


I'd put it this way:
1. Classes that don't test for a valid package name in their constructor 
do not expect to be called in any way other than Classname->new().
2. Those classes fail badly when a naive/inexperienced/drunk/whatever 
user uses the wrong convention.

3. Classes that test for a valid package name don't suffer from 2.
4. That said, 2 doesn't happen very often. Whether the developer wants 
protection when it does happen is up to the developer. Getting 100% 
coverage via D::C is another motivation.


--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Re: is_deeply() and code refs

2005-06-26 Thread leif . eriksen



[EMAIL PROTECTED] wrote:


Another way to look at the eval case is to apply it to other references.

is_deeply( eval { foo => 42, bar => 23 },
   { bar, 42, foo, 23 } );

Even though the code is written differently the resulting data is the same.  
Would anyone be in doubt that it should pass?


 

I'm guessing that is_deeply tests for 'semantic equivalence', not 
'syntactic equivalence' - or is that a whole unopened can of worms?
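That is the documented behaviour: is_deeply() compares the data structures that result, not the source text that built them. A small sketch with invented values:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Two hashes spelled differently in source but holding identical data:
# fat-comma pairs in one order, plain commas in another.
my %with_arrows = ( foo => 42, bar => 23 );
my %with_commas = ( 'bar', 23, 'foo', 42 );

is_deeply( \%with_arrows, \%with_commas, 'same data, different spelling' );
```

The question for code refs is different precisely because two code refs can produce the same data while never comparing as equal values.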


--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


DBD-mysql coverage == 56% - am I on drugs ??

2005-05-12 Thread leif . eriksen
Can this be right? I checked out DBD-mysql-2.9007 and ran it through 
Devel::Cover. Apart from skipping 15 tests to do with leaks and 1 test 
to do with transactions, the overall coverage figure from Devel::Cover 
is 56%.

All tests successful, 1 test and 14 subtests skipped.
Files=18, Tests=769, 129 wallclock secs (103.23 cusr +  1.77 csys = 
105.00 CPU)
[EMAIL PROTECTED] DBD-mysql-2.9007]$ cover
Reading database from /home/le6303/spool/DBD-mysql-2.9007/cover_db

----------------------------- ------ ------ ------ ------ ------ ------
File                            stmt branch   cond    sub   time  total
----------------------------- ------ ------ ------ ------ ------ ------
blib/lib/DBD/mysql.pm           71.9   42.3   38.6   75.9   12.0   60.1
blib/lib/DBD/mysql/GetInfo.pm   70.6    n/a    n/a   62.5    0.9   68.0
blib/lib/Mysql.pm               67.3   42.3   40.0   59.5   82.6   60.2
blib/lib/Mysql/Statement.pm     38.5   36.1    0.0   70.8    4.5   40.3
Total                           62.5   41.1   33.8   67.3  100.0   56.0
----------------------------- ------ ------ ------ ------ ------ ------

Writing HTML output to 
/home/le6303/spool/DBD-mysql-2.9007/cover_db/coverage.html ...
done.

This is the #2 item on the Phalanx 100. Is that coverage statistic for 
real? I am shocked if it is. As soon as I finish getting Class::DBI 
to 100% (well on the way), I intend to hit this one hard if this is the 
real situation.

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: DBD-mysql coverage == 56% - am I on drugs ??

2005-05-12 Thread leif . eriksen

[EMAIL PROTECTED] wrote:
Leif Eriksen wrote:
Can this be right ?

snip
----------------------------- ------ ------ ------ ------ ------ ------
File                            stmt branch   cond    sub   time  total
----------------------------- ------ ------ ------ ------ ------ ------
blib/lib/DBD/mysql.pm           71.9   42.3   38.6   75.9   12.0   60.1
blib/lib/DBD/mysql/GetInfo.pm   70.6    n/a    n/a   62.5    0.9   68.0
blib/lib/Mysql.pm               67.3   42.3   40.0   59.5   82.6   60.2
blib/lib/Mysql/Statement.pm     38.5   36.1    0.0   70.8    4.5   40.3
Total                           62.5   41.1   33.8   67.3  100.0   56.0
----------------------------- ------ ------ ------ ------ ------ ------

snip

That being said, as Michael said, the coverage on Mysql::Statement is 
quite low and is pulling down the overall average.
OK, but if we remove that, the stmt coverage goes to ~70%, which is still 
shockingly low for such an important module. It is also very distressing 
that the sub column isn't at 100% - why you would go to the effort of 
writing test cases and not at least call every method/function is beyond me.

And I'll start on this as soon as I finish C::DBI coverage
--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: [Maybe Spam] Re: DBD-mysql coverage == 56% - am I on drugs ??

2005-05-12 Thread leif . eriksen
[EMAIL PROTECTED] wrote:
I hope you're not just now realizing 
that some of the most important and popular modules are also the most 
undertested? 

I always knew they would be less than perfect, I just had no idea the 2nd 
most popular would be this bad.
Anyway, I've booked a weekend at the nearest 'citizen re-education 
centre' for a course of 'perception adjustment'. Then everything will be 
fine 

Covering the XS portion of the code with gcov is possible, and Devel::Cover
will create all kinds of nice webpages and statistics for you too.  
Paul Johnson may have this written up somewhere, but, if not, I should 
really write something up about this since I've used it to determine Perl's
test coverage.

Generating coverage tests for XS code - why are my hands shaking?
Thanks - I'm sure it will be needed one way or the other.
--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Devel::Cover and -d:ptkdb report problem, 'make test' does not

2005-02-08 Thread leif . eriksen
QA'ers,
   Once again I am trying to get a handle on how to track down failures 
caught only under D::C or the debugger.

I've written coverage tests for Ima::DBI, as part of the Phalanx/Kwalitee 
effort for Class::DBI. And it works fine, except under the GUI debugger 
or D::C.

For plain make test we have
[EMAIL PROTECTED] ImaDBI]$ make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/DBIok
All tests successful.
Files=1, Tests=54,  1 wallclock secs ( 0.30 cusr +  0.05 csys =  0.35 CPU)

For D::C we have
[EMAIL PROTECTED] ImaDBI]$ perl Makefile.PL && 
HARNESS_PERL_SWITCHES=-MDevel::Cover make test || cover
checking for optional Test::MockObject  found
Writing Makefile for Ima::DBI
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/DBIok 54/54# Looks like your test died just after 54.
t/DBIdubious
   Test returned status 255 (wstat 65280, 0xff00)
   after all the subtests completed successfully
Failed Test Stat Wstat Total Fail  Failed  List of Failed
---
t/DBI.t  255 65280540   0.00%  ??
Failed 1/1 test scripts, 0.00% okay. 0/54 subtests failed, 100.00% okay.
make: *** [test_dynamic] Error 2
Reading database from /home/le6303/work/Kwalitee.ClassDBI/ImaDBI/cover_db
Devel::Cover: Deleting old coverage for changed file blib/lib/Ima/DBI.pm
Devel::Cover: Deleting old coverage for changed file blib/lib/Ima/DBI.pm

------------------- ------ ------ ------ ------ ------ ------
File                  stmt branch   cond    sub   time  total
------------------- ------ ------ ------ ------ ------ ------
blib/lib/Ima/DBI.pm  100.0  100.0  100.0  100.0  100.0  100.0
Total                100.0  100.0  100.0  100.0  100.0  100.0
------------------- ------ ------ ------ ------ ------ ------

For running under the debugger we have
[EMAIL PROTECTED] ImaDBI]$ perl -d:ptkdb t/DBI.t
1..54
ok 1 - set_db(test1)
ok 2 - set_db(test2)
...
ok 53 - rollback with one db setup
ok 54 - fail rollback
DESTROY created new reference to dead object 'DBI::dr' during global 
destruction.

Now I don't know where to go from here - have I uncovered a bug in DBI?
Or is it elsewhere?
How do I interpret the stat and wstat values from D::C?
Do I need to compile a debug version of perl and step through under the 
debugger with that?

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


eq_array testing values prematurely...

2005-02-07 Thread leif . eriksen
I've written some coverage tests for Ima::DBI as part of Phalanx, but I 
get a warning under -W

prompt> HARNESS_PERL_SWITCHES=-W make test
And got these warnings
[EMAIL PROTECTED] Ima-DBI-0.33]$ HARNESS_PERL_SWITCHES=-W make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/DBIok 3/0Use of uninitialized value in string eq at 
/usr/lib/perl5/5.8.0/Test/More.pm line 1013.
Use of uninitialized value in string eq at 
/usr/lib/perl5/5.8.0/Test/More.pm line 1013.
t/DBIok
All tests successful.
Files=1, Tests=54,  0 wallclock secs ( 0.32 cusr +  0.03 csys =  0.35 CPU)

Investigating further, that line in Test::More is
sub eq_array  {
   my($a1, $a2) = @_;
   return 1 if $a1 eq $a2;
...
Now the more recent versions of eq_array (you can see I'm using 5.8.0) 
try to protect it a bit from non-array references, but even running the 
latest version of Test::More::eq_array (and _eq_array) still gives this 
warning.

So I changed it to this
sub eq_array  {
   my($a1, $a2) = @_;
   if (defined $a1 and defined $a2) {
      return 1 if $a1 eq $a2;
   }
And we get
[EMAIL PROTECTED] Ima-DBI-0.33]$ HARNESS_PERL_SWITCHES=-W make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/DBIok
All tests successful.
Files=1, Tests=54,  1 wallclock secs ( 0.33 cusr +  0.02 csys =  0.35 CPU)

I'm guessing this is the right forum to post this to - unless I should 
go right ahead and file with RT...?

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: [Maybe Spam] Re: Anomalous Difference in Output between HTML Files Created by

2005-01-31 Thread leif . eriksen

[EMAIL PROTECTED] wrote:
Does Python have customizable test suites *at all*?
 

I don't know about that, but I hear they have the ability to invoke the 
debugger from within the code, rather than the other way round like 
Perl/C/... does.

Something like
import pdb
...
pdb.run(statement_to_debug[, globals[, locals]])
...
This launches the debugger mid-script - nice. I've heard there has been 
some talk/suggestions of doing this in Perl 6.

--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Anomalous Difference in Output between HTML Files Created by 'cover'

2005-01-30 Thread leif . eriksen
I'd guess it is because you are seeing the output of the code after it 
has been compiled-then-decompiled - it is compiled so it can run and 
coverage statistics can be collected, then it is decompiled to relate 
coverage stats to code lines. Now there are many ways to write code that 
compiles to the same compiled form, but the decompiler (I imagine it is 
B::Deparse) only decompiles those symbols one way.

As a test, you could change those two lines in Text::Template to be the 
same as what you are seeing in the coverage HTML, run make test and 
cover again, and see them unchanged.

Or more directly
===
deparse.pl
===
#!/usr/bin/perl -w
use strict;
my $self;
$self->{DATA_ACQUIRED} = 1;
+++
end deparse.pl
+++
prompt> perl -MO=Deparse deparse.pl
BEGIN { $^W = 1; }
use strict 'refs';
my $self;
$$self{'DATA_ACQUIRED'} = 1;
deparse.pl syntax OK
===
deparse2.pl
===
#!/usr/bin/perl -w
use strict;
my $self;
$$self{DATA_ACQUIRED} = 1;
+++
end deparse2.pl
+++
prompt> perl -MO=Deparse deparse2.pl
BEGIN { $^W = 1; }
use strict 'refs';
my $self;
$$self{'DATA_ACQUIRED'} = 1;
deparse2.pl syntax OK
Leif
[EMAIL PROTECTED] wrote:
I have just noticed an anomalous difference in output between two of 
the files created by the Devel::Cover 'cover' utility when run against 
a popular Perl module -- and I am wondering whether this difference 
should be considered a feature or a bug.

The module in question is Text::Template, which I am studying as part 
of Perl Seminar NY's contribution to the Phalanx project.  Start with 
a copy of Text-Template-1.44 (the latest on CPAN) and examine the 
code. In 'lib/Text/Template.pm', consider these two lines:

  128    $self->_acquire_data unless $self->{DATA_ACQUIRED};
  450    if (! defined $val) {
Proceed in the normal manner:
  perl Makefile.PL
  make
  cover -delete
  HARNESS_PERL_SWITCHES=-MDevel::Cover make test
  cover
... 'cover' creates a number of HTML files, including these two:
  ./Text-Template-1.44/cover_db/blib-lib-Text-Template-pm.html
  ./Text-Template-1.44/cover_db/blib-lib-Text-Template-pm--branch.html
'blib-lib-Text-Template-pm.html' displays lines 128 and 450 exactly as 
they appear in the module itself. 
'blib-lib-Text-Template-pm--branch.html', however, displays the 
relevant branch part of these lines of code as follows:

  128  unless $$self{'DATA_ACQUIRED'}
  450  if (not defined $val) { }
'$self->{DATA_ACQUIRED}' is changed to '$$self{'DATA_ACQUIRED'}' and 
'! defined $val' is changed to 'not defined $val'.  (I could cite 
other examples as well, but these suffice to illustrate the point.)

Now, I grant that these are merely displays, not live code. 
Nonetheless, since the purpose of these HTML files is to guide a 
programmer to lines of code whose test coverage needs improvement, I 
am puzzled as to why the output in these two files differs.

Jim Keenan


SegFault under Devel::Cover for sort

2005-01-24 Thread leif . eriksen
I have isolated a case where perl is happy but D::C segfaults

sort.pl

#!/usr/bin/perl -w
use strict;
my %sort = (B => \&backwards,
            F => \&forwards);
sub backwards {
   return $b cmp $a;
}
sub forwards {
   return $a cmp $b;
}
sub GetAlgorithm {
   my ($alg) = @_;
   return $sort{$alg};
}
my @list = qw( a d e c g );
my $alg = GetAlgorithm(('B', 'F')[int(rand(2))]);
@list = sort {&{$alg}} @list;
use Data::Dumper;
print STDERR Dumper(\@list);
++
[EMAIL PROTECTED] perl]$ perl -MDevel::Cover sort.pl
Devel::Cover 0.52: Collecting coverage data for branch, condition, 
statement, subroutine and time.
   Pod coverage is unavailable.  Please install Pod::Coverage from CPAN.
Selecting packages matching:
Ignoring packages matching:
   /Devel/Cover[./]
   ^t/
   \.t$
   ^test\.pl$
Ignoring packages in:
   .
   /usr/lib/perl5/5.8.0
   /usr/lib/perl5/5.8.0/i386-linux-thread-multi
   /usr/lib/perl5/site_perl
   /usr/lib/perl5/site_perl/5.8.0
   /usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi
   /usr/lib/perl5/vendor_perl
   /usr/lib/perl5/vendor_perl/5.8.0
   /usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi
Segmentation fault

[EMAIL PROTECTED] perl]$ perl sort.pl
$VAR1 = [
 'g',
 'e',
 'd',
 'c',
 'a'
   ];
+++
I've also tried the sub reference as { $alg->() }, but to no avail.
Any pointers as to how I can progress in solving this? Do you need the 
core dump?
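For anyone trying to reproduce or narrow this down, here is a deterministic variant of the script (a hypothetical rewrite, untested against Devel::Cover itself) that invokes the comparator coderefs from a plain sort block:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Comparator coderefs: $a and $b are package variables, so anonymous
# subs defined in package main can refer to them directly.
my %sort = (
    B => sub { $b cmp $a },    # backwards
    F => sub { $a cmp $b },    # forwards
);

my @list      = qw( a d e c g );
my @forwards  = sort { $sort{F}->() } @list;
my @backwards = sort { $sort{B}->() } @list;

print "@forwards\n@backwards\n";
```

Running both comparators removes the rand() call, so a crash (or its absence) is at least repeatable from run to run.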


Platform info
[EMAIL PROTECTED] perl]$ uname -a
Linux itdevtst 2.4.20-31.9 #1 Tue Apr 13 18:04:23 EDT 2004 i686 i686 
i386 GNU/Linux
[EMAIL PROTECTED] perl]$ perl -v

This is perl, v5.8.0 built for i386-linux-thread-multi
(with 1 registered patch, see perl -V for more detail)
--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


failures under Devel::Cover only

2005-01-19 Thread leif . eriksen
Hi,
   I am doing some testing under Devel::Cover, and get some weird 
results sometimes. What should I be looking at in my code or test cases 
that is provoking this discrepancy?

Without D::C
++
[EMAIL PROTECTED] src]$ make test
...
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') ./Invoice/t/*.t 
./QualityBakers/t/*.t ./QualityBakers/RecordSet/t/*.t 
./WeeklyStatement/t/*.t ./AdjustmentNote/t/*.t
./AdjustmentNote/t/ClaimHeader..ok
./Invoice/t/AccountSummaryDetailok
./Invoice/t/PageHeader1.ok
./Invoice/t/SummaryDetails..ok
./Invoice/t/SummaryHeader...ok
./QualityBakers/RecordSet/t/Iteratorok
./QualityBakers/t/AnyFile...ok
./QualityBakers/t/AnyRecord.ok
./QualityBakers/t/Checkpointok
./QualityBakers/t/DeliveryMethodok
./QualityBakers/t/FileSet...ok
./QualityBakers/t/RecordSet.ok
./WeeklyStatement/t/ColumnHeaderok
All tests successful.
Files=13, Tests=170,  7 wallclock secs ( 4.21 cusr +  0.39 csys =  4.60 CPU)
++

With D::C
++
[EMAIL PROTECTED] src]$ HARNESS_PERL_SWITCHES=-MDevel::Cover make test
...
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') ./Invoice/t/*.t 
./QualityBakers/t/*.t ./QualityBakers/RecordSet/t/*.t 
./WeeklyStatement/t/*.t ./AdjustmentNote/t/*.t
./AdjustmentNote/t/ClaimHeader..ok
./Invoice/t/AccountSummaryDetailok
./Invoice/t/PageHeader1.ok
./Invoice/t/SummaryDetails..ok
./Invoice/t/SummaryHeader...ok
./QualityBakers/RecordSet/t/Iteratorok
./QualityBakers/t/AnyFile...ok
./QualityBakers/t/AnyRecord.ok
./QualityBakers/t/Checkpointok
./QualityBakers/t/DeliveryMethodok
./QualityBakers/t/FileSet...dubious
   Test returned status 0 (wstat 11, 0xb)
./QualityBakers/t/RecordSet.ok 32/0# Looks like your test 
died just after 32.
./QualityBakers/t/RecordSet.dubious
   Test returned status 255 (wstat 65280, 0xff00)
   after all the subtests completed successfully
./WeeklyStatement/t/ColumnHeaderok
Failed Test                    Stat Wstat Total Fail  Failed  List of Failed
-------------------------------------------------------------------------------
./QualityBakers/t/FileSet.t       0    11    ??   ??       %  ??
./QualityBakers/t/RecordSet.t   255 65280    32    0   0.00%  ??
Failed 2/13 test scripts, 84.62% okay. -22/91 subtests failed, 124.18% okay.
make: *** [test_dynamic] Error 29
++

Version info
++
[EMAIL PROTECTED] src]$ perl -v
This is perl, v5.8.0 built for i386-linux-thread-multi
(with 1 registered patch, see perl -V for more detail)
++
[EMAIL PROTECTED] src]$ perl -MDevel::Cover -e 'print 
Devel::Cover->VERSION()'
Devel::Cover 0.50: Collecting coverage data for branch, condition, 
statement, subroutine and time.
   Pod coverage is unvailable.  Please install Pod::Coverage from CPAN.
Selecting packages matching:
Ignoring packages matching:
   /Devel/Cover[./]
   ^t/
   \.t$
   ^test\.pl$
Ignoring packages in:
   .
   /usr/lib/perl5/5.8.0
   /usr/lib/perl5/5.8.0/i386-linux-thread-multi
   /usr/lib/perl5/site_perl
   /usr/lib/perl5/site_perl/5.8.0
   /usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi
   /usr/lib/perl5/vendor_perl
   /usr/lib/perl5/vendor_perl/5.8.0
   /usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi
Devel::Cover: Can't find file ../../lib/Storable.pm: ignored.
Devel::Cover: Can't find file -e: ignored.
0.50Devel::Cover: Writing coverage database to 
/home/le6303/work/GoodmanFielder.QualityBakers/src/cover_db/runs/1106198896.7448.54982
--------------- ------ ------ ------ ------ ------ ------
File              stmt branch   cond    sub   time  total
--------------- ------ ------ ------ ------ ------ ------
Total              n/a    n/a    n/a    n/a    n/a    n/a
--------------- ------ ------ ------ ------ ------ ------

++
[EMAIL PROTECTED] src]$ perl -MTest::More -e 'print Test::More->VERSION()'
0.47
--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: [Maybe Spam] Coverage testing success story.

2004-12-14 Thread leif . eriksen
You may be interested in what I found on my journey to 100% coverage 
with D::C http://perlmonks.org/?node_id=378586

[EMAIL PROTECTED] wrote:
So even when you approach 100% there's still bugs to be found with
simple coverage analysis.
 

I think this is the most valuable part of the exercise - the bugs you
find when you think 'it's got 98% coverage, there can't possibly be any
bugs left... oh, look'.
--
Leif Eriksen
Snr Developer
http://www.hpa.com.au/
phone: +61 3 9217 5545
email: [EMAIL PROTECTED]


Re: Harness runs the sub, D::C says I haven't

2004-11-16 Thread Leif Eriksen
Paul Johnson wrote:
On Sat, Nov 13, 2004 at 12:33:01PM +1100, Leif Eriksen wrote:
 

First, thanx so very much for responding so quickly...
   

That was just to make up for the short delay here, and the much longer
delay to your last mail to me ;-)
 

Hey, we had a weekend in between, and it's not like I'm paying you - you're 
helping because you *want to*, not because you *have to* - and I (and 
many others) appreciate that completely.

 

Paul Johnson wrote:
   

On Sat, Nov 13, 2004 at 12:46:16AM +1100, Leif Eriksen wrote:
 

Even though Test::More is reporting (via make test) that every test 

   

Could you try putting the use_ok inside a BEGIN block, as Test::More
recommends?
 

OK, will do, though I upgraded to Devel::Cover 0.50 first and now I hang...
More details -
This is perl, v5.8.3 built for i386-linux-thread-multi
Linux mother 2.6.8-1.521 #1 Mon Aug 16 09:01:18 EDT 2004 i686 athlon 
i386 GNU/Linux
Fedora Core release 2 (Tettnang)

Hang is
prompt HARNESS_PERL_SWITCHES=-MDevel::Cover make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') Monash/t/*.t
Monash/t/Config..ok
Monash/t/Config_fail.ok
Monash/t/Config_fail2ok
Monash/t/DB..ok 2/0make: *** [test_dynamic] 
Interrupt (I hit ^C)

I'll revert to 0.49... hang on... nope - still stuck... revert to 0.45 - OK, 
good. Not sure what the issue is there.

Let's check the coverage.
Nope, it still says I haven't been there:
<tr><td class="h"><a id="L793">793</a></td><td class="c0"><div class="s">ldap_groups</div></td></tr>
   

So, if I've got this right, 0.45 shows the code as uncovered and 0.50
hangs during the tests.
I suspect that what is happening is that your code is being called via
some code for which coverage is not being collected, such as a core or
already installed module.  Up until recently this would lead to the code
being marked as uncovered, as you are seeing.  I suspect that if we
could get 0.50 working on your tests then you would find the code being
marked as covered.
 

I'm thinking of upgrading Perl and D::C at the same time
I was holding off upgrading my perl to beyond what I have (5.8.3) 
because 5.8.6 is at RC1, but I might bite the bullet
and do it today - I'll respond this evening (my time) once that is done.

 

Can you give me a pointer where to go from here - is it my code at fault ?
   

I don't think so.  I already have a report of something like this, along
with a test case.  Unfortunately, I haven't had the chance to chase it
down yet.  If you are able to reduce the problem to a minimal test case
I'd be very grateful.  But with the test case I already have I'm hoping
to make a fix soon anyway.
 

I think I can do that too - I played 'insert print()s till you find the 
culprit' last night and it's actually not hanging in D::C code, but in 
DBI->connect. Now before you go 'ah ha!!', I am using DBD::Mock, which 
just does something similar to return bless {}, 'dbhandle'; - no 
actual DB connect is done - that's the whole point of DBD::Mock, to 
remove all that complexity from your test harness. I'll check this again 
after upgrading Perl and D::C, and try to reduce everything to a module 
with one sub, a 't-file' and a Makefile.PL - need anything else?

In the meantime, if you go to the last version that works for you, you
should be able to get a complete coverage report with a line such as
 HARNESS_PERL_SWITCHES=-MDevel::Cover=-select,. make test
The downside is that that will also give you coverage for every module
you use, which is distracting and slow.
 

Will do this evening - I'm madly typing this before taking the tribe to 
school - a long D::C run is 'not gonna happen, Dad' !!

Thanx for your persistence
Leif


Re: Harness runs the sub, D::C says I haven't

2004-11-16 Thread Leif Eriksen
Paul Johnson wrote:
 HARNESS_PERL_SWITCHES=-MDevel::Cover=-select,. make test
The downside is that that will also give you coverage for every module
you use, which is distracting and slow.
Well, this may be worthy of note: it still doesn't report coverage of a
sub I know is being exercised.
Now I tried to reproduce by cutting down the code to just the module,
with the 'uncovered sub' only and the t-file, but it suddenly reported
100% coverage, so that wasn't going to work without a lot of
cutting-testing-pasting.
Next I tried to see why D::C 0.50 didn't work. To do this I started with
a clean slate, ala 'echo y | cvs release -d monash.its && cvs co
monash.its' (blow away the source dir structure and recreate from CVS).
I then did the 'perl Ma... make test' incantation, all OK.
Then I did 'HARNESS_PERL_SWITCHES=-MDevel::Cover make test' and voila, it
worked:
File                       stmt branch   cond    sub   time  total
------------------------ ------ ------ ------ ------ ------ ------
blib/lib/Monash/LDAP.pm    98.7   98.4   80.0   96.3   67.2   97.3
(Don't worry about the 96.3% subroutine coverage - there is one sub not
unit tested on explicit direction from the infrastructure team - so I
have the required 100%)
So, I guess there was possibly some cruft around, either from the blib
created by MakeMaker, or something in the cover_db dir (I tend to
accumulate 'cover' runs over a long period (in this case weeks)). We'll
never know now.
Moral - clean up and try from scratch before hitting the 'emergency
email support' button.
Thanx so much for your patience, Paul - if you're ever in Melbourne, I owe
you a few shouts at the bar - I recommend a James Boags.
Leif Eriksen
aka Mr Testing SmartyPants (you can tell I'm pleased with myself, can't you)


Re: Harness runs the sub, D::C says I haven't

2004-11-13 Thread Leif Eriksen
First, thanx so very much for responding so quickly...
Paul Johnson wrote:
On Sat, Nov 13, 2004 at 12:46:16AM +1100, Leif Eriksen wrote:

  Even though Test::More is reporting (via make test) that every test 

Could you try putting the use_ok inside a BEGIN block, as Test::More
recommends?
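The rationale, as I understand it: a bare use_ok() runs at run time, after the whole test file has been compiled, so any imports or prototypes from the module are not visible to the code below it. Wrapping it in BEGIN restores ordinary 'use' semantics. A minimal sketch using a core module:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Wrapping use_ok() in a BEGIN block makes the module load (and its
# imports, here sum()) happen at compile time, just like a plain 'use'.
BEGIN { use_ok( 'List::Util', qw(sum) ) }

is( sum( 1, 2, 3 ), 6, 'imported sum() is callable' );
```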
OK, will do, though I upgraded to Devel::Cover 0.50 first and now I hang...
More details -
This is perl, v5.8.3 built for i386-linux-thread-multi
Linux mother 2.6.8-1.521 #1 Mon Aug 16 09:01:18 EDT 2004 i686 athlon 
i386 GNU/Linux
Fedora Core release 2 (Tettnang)

Hang is
prompt HARNESS_PERL_SWITCHES=-MDevel::Cover make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
test_harness(0, 'blib/lib', 'blib/arch') Monash/t/*.t
Monash/t/Config..ok
Monash/t/Config_fail.ok
Monash/t/Config_fail2ok
Monash/t/DB..ok 2/0make: *** [test_dynamic] 
Interrupt (I hit ^C)

I'll revert to 0.49... hang on... nope - still stuck... revert to 0.45 - OK, 
good. Not sure what the issue is there.

Let's check the coverage.
Nope, it still says I haven't been there:
<tr><td class="h"><a id="L793">793</a></td><td class="c0"><div class="s">ldap_groups</div></td></tr>

HARNESS_PERL_SWITCHES=-MDevel::Cover=-select,Monash/LDAP make test
Then this shouldn't be necessary.
Let's try it anyway ... nope.
Code in Monash/t/LDAP_groups.t is now
#!/usr/bin/perl -w
# tests specific to the ldap_groups function
use strict;
use Test::More qw(no_plan);
use Test::MockObject;
# Monash::LDAP depends on the services of Monash::Config
# this in turn requires two envvars to be set
# - SERVER_TYPE and PORTAL_BASE_DIR
# Set these for testing
BEGIN {
   $ENV{PORTAL_BASE_DIR} = `pwd`;
   $ENV{SERVER_TYPE} = 'Mock';
   use_ok( 'Monash::LDAP', qw( ldap_groups ) );
}
can_ok( 'Monash::LDAP', qw( ldap_groups ) );
my $mock = Test::MockObject->new();
# we work through the code, passing or failing code at each cond (if/unless)
# bad parameters
is(ldap_groups(), 'Error: no filter supplied.');
# failing search
$mock->fake_module('Monash::LDAP',
                   ldap_do_search => sub($$) { },
                  );
is_deeply([ldap_groups(filter => 'filter')], []);
blah blah blah
If this is still a problem, could you confirm that you are using the
latest release, 0.50?  You're on RH9, right?
See version details from earlier
http://www.nntp.perl.org/group/perl.perl5.porters/85930?show_headers=0
(I meant use_ok in that message, not isa_ok.)

Read it - um, yeah sure, whatever you say...(note to self - perl 
internals really are as freaky as everyone says...)

Can you give me a pointer where to go from here - is it my code at fault ?
Leif


Harness runs the sub, D::C says I haven't

2004-11-12 Thread Leif Eriksen
Hi perl-qa'er's,
   I am puzzled as to how to get D::C to report that I ran a test over 
a sub. Let's start with some background.

   I am using Test::More to write 't-files' for a module, and I am 
writing one t-file per subroutine. The subroutine is fully exercised in 
that t-file, all branches that are possible to reach.

   I am also using the excellent Test::MockObject, to avoid setting up 
complex externalities. This mainly consists of replacing helper subs in 
the package (which go off and talk to an LDAP server) with their 
expected results.
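For readers unfamiliar with the technique: fake_module() is, underneath, just symbol-table assignment. A core-Perl-only sketch (package and sub names hypothetical) of stubbing a helper that would otherwise contact a server:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hypothetical package whose helper would normally contact a server.
package My::Directory;
sub ldap_do_search { die "would contact the LDAP server" }
sub groups_for     { my ($filter) = @_; return ldap_do_search($filter) }

package main;

# Stub the helper through the symbol table - the core-Perl idea that
# Test::MockObject's fake_module() packages up for you.
{
    no warnings 'redefine';
    *My::Directory::ldap_do_search = sub { ( 'admins', 'users' ) };
}

my @groups = My::Directory::groups_for('(cn=leif)');
print "@groups\n";
```

Because the helper is resolved at run time, groups_for() picks up the stub and no network code is exercised.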

   Even though Test::More is reporting (via make test) that every test 
ran and I had a 100% pass, some subs (such as ldap_groups that I expand 
upon here) are marked by D::C as never being run - even though there is 
a whole t-file dedicated to just that sub that did indeed run.

   The module has a sub 'ldap_groups()', that is in the @EXPORT_OK for 
the module.

   The t file is basically
<code>
#!/usr/bin/perl -w
# tests specific to the ldap_groups function
use strict;
use Test::More qw(no_plan);
use Test::MockObject;
use_ok( 'Monash::LDAP', qw( ldap_groups ) );
can_ok( 'Monash::LDAP', qw( ldap_groups ) );
my $mock = Test::MockObject->new();
# we work through the code, passing or failing code at each cond (if/unless)
# bad parameters first
is(ldap_groups(), 'Error: no filter supplied.');
# failing search
$mock->fake_module('Monash::LDAP',
                   ldap_do_search => sub($$) { },
                  );
is_deeply([ldap_groups(filter => 'filter')], []);
blah blah blah
</code>
Now, in order to get around the fact that use_ok('Monash::LDAP', ...) 
seems to stop D::C instrumenting Monash::LDAP, I call the test harness as

HARNESS_PERL_SWITCHES=-MDevel::Cover=-select,Monash/LDAP make test
However, I still get a report from cover that ldap_groups() is untested, 
even though Test::More says it passed 100%

From blib-lib-Monash-LDAP-pm--subroutine.html
<tr><td class="h"><a id="L793">793</a></td><td class="c0"><div class="s">ldap_groups</div></td></tr>

so the class="c0" means untested, I imagine.
Is there some interaction with Test::More::use_ok that is stopping D::C 
instrumenting the module correctly ?

Is there some other switch in D::C I need to use ?
Leif Eriksen


special blocks tests fail on 5.8.0

2004-10-27 Thread leif . eriksen
I don't know if the code under test is wrong or the expected output.
I run RH9, which uses Perl 5.8.0. I was getting a failure for 
t/aspecial_blocks, indicating a difference in the expected output for a 
CHECK {} block.

If the expected output is wrong, I have provided a patch of the 
test_output/cover/special_blocks.5.008 golden file.

I don't have a patch if the code under test is wrong - wouldn't even have a 
clue where to start. I ran t/aspecial_blocks under Devel::ptkdb, and 
realised I'd need to dig a lot deeper to find out where the code goes...

Can anyone give me a hint to track the code for a CHECK block? Do I have 
to trace the dynamically generated command manually?
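For reference when tracing: the special blocks run in a fixed order for a plain script - BEGIN during compilation, CHECK once compilation finishes, INIT just before the main run, END at exit. A standalone sketch that records the order:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Record the phase each special block runs in; for a plain script the
# order is BEGIN, CHECK, INIT, then the main run (END fires at exit).
our @phases;
BEGIN { push @phases, 'BEGIN' }
CHECK { push @phases, 'CHECK' }
INIT  { push @phases, 'INIT' }
push @phases, 'RUN';
print join( ' ', @phases ), "\n";
```

Note that CHECK and INIT are skipped (with a warning) when a file is compiled after the main run has started, e.g. via a run-time require, which is one reason tooling that hooks these phases can behave differently from plain perl.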

--
Leif Eriksen
Snr Developer
HPA Pty Ltd
ph +61 3 9217 5545

diff -Naur test_output/cover/special_blocks.5.008 
test_output/cover_new/special_blocks.5.008
--- test_output/cover/special_blocks.5.008  2004-10-28 12:30:48.0 +1000
+++ test_output/cover_new/special_blocks.5.008  2004-10-28 13:11:46.0 +1000
@@ -43,7 +43,7 @@
 19  
 20  CHECK
 21  {
-22  $x++
+22 1  100   $x++
 23  }
 24  
 25  INIT
@@ -67,5 +67,6 @@
 BEGIN  1 tests/special_blocks:10
 BEGIN  1 tests/special_blocks:11
 BEGIN  1 tests/special_blocks:17
+CHECK  1 tests/special_blocks:22