Devel::Cover: completing $x{foo} ||= 1 conditions

2004-05-20 Thread Geoffrey Young
hi paul.

I've found that in a statement like

$x{foo} ||= 1;

I can't ever satisfy the first condition in the "condition coverage" matrix
(0,0) since 1 is always true.  is it desirable to remove fixed truth values
like this from the truth table?

I tried taking a look at adding the condition to tests/cond_or but I really
couldn't grok the test suite at first glance :)

--Geoff


Re: Devel::Cover: completing $x{foo} ||= 1 conditions

2004-05-20 Thread Geoffrey Young
> This is unlikely to be the only case in which I have not fully
> understood the subtleties of the op tree, and so I am grateful for
> reports such as this.

I'll keep them coming, then :)

> 
> The following patch should fix it, and will be in the next release,
> hopefully coming next week:

excellent, thanks.

>>I tried taking a look at adding the condition to tests/cond_or but I really
>>couldn't grok the test suite at first glance :)
> 
> 
> It's not one of the more simple test suites out there, that's true.  In
> part this is because testing requires running the test program in a new
> process under Devel::Cover, then running cover to generate the
> results, then checking the results are correct.  To do this, I use
> golden results.
> 
> So the process is something like:
> 
>   - Add the code to the test program in tests/cond_or
>   - Run make text TEST=cond_or and check the results are correct
>   - Run make gold TEST=cond_or

cool, that's good information.

>   - Do this for perl5.6.1, 562, 580, 581, 582, 583, 594 and 592,
> threaded and unthreaded
>   - Take appropriate shortcuts by examining the golden results in
> test_output/cover/cond_or.* and using all_versions
>   - Try very hard not to mess up

:)

> 
> I've not really expected anyone else to do this up to now.  

well, I was trying to be a good citizen and provide a useful test case
(rather than a tarball that merely exhibited the issue).  I did add
something to cond_or and ran 'make test TEST_FILES=t/acond_or.t' but I got
more than just one failure so I figured there was more to it :)

> Thanks again for the report.

no problem.  and thanks again for groking the op tree to the point where
Devel::Cover is really useful.

--Geoff


Re: Devel::Cover: completing $x{foo} ||= 1 conditions

2004-05-21 Thread Geoffrey Young

> Full coverage isn't always possible, and the lack of it isn't 
> necessarily a problem.

I fully agree.  however, once you start using a tool like this, management
will inevitably ask "what's that 93% about?"  and the answer is sometimes
complex and subject to judgement: "well, Devel::Cover is just a guideline,
and these few cases are really limitations of the software.  but I've
checked them out so we're really at 100%"

I think a nice future feature might be some way to predeclare conditions
that you understand.  for instance, given

  $x ||= func();

I might like to signal Devel::Cover that func() has a constant return (or
lack thereof).  another thing that is keeping me from 100% right now is the
classic

  my $class = ref $self || $self;

where the only way to satisfy the conditional is to call My::Foo::bar()
using functional syntax instead of method syntax.

granted, if I were to somehow signal Devel::Cover that some "false, false"
condition will never be raised and that's ok, I'm leaving myself open to
code that misbehaves exactly on that condition.  but this entire methodology
is only part of the picture anyway, even at 100%.  and sometimes it's all
about the green :)

--Geoff


Re: Devel::Cover: completing $x{foo} ||= 1 conditions

2004-05-21 Thread Geoffrey Young

>> I might like to signal Devel::Cover that func() has a constant return (or
>> lack thereof).
> 
> 
> I don't know if I would like this feature. To me it would allow you to
> falsely skew the results of the Devel::Cover output. I look at
> Devel::Cover as a harshly objective analysis of my test-code coverage,
> anything destroying that objectivity IMO would lessen the value of the
> tool.

in general I agree, and if such a feature existed I wouldn't ever use it
myself while writing tests or developing code.

however, in the process of development we are required to analyze the
inevitable gaps and decide whether the unhit condition is valid.  if it is,
we write a test for it.  if it represents a condition we would explain away
(D::C limitation, or whatnot) then it would be nice to have some way to
track it within the tool itself.  partially to appease management with heavy
greens, but more to save development cycles chasing down issues I (or other
developers) have analyzed before.

> 
> If you are looking to satisfy management, then I would suggest not
> running/showing branch and conditional coverage, as they can be tricky,
> and just showing statement and subroutine coverage (and maybe POD too)
> since it is much easier to get 100% on those. 

of course, there is also html editing ;)

> Sometimes, too much
> information can hurt, and it can be a slippery slope to try and explain
> to some why branch/cond coverage sometimes can never be 100%.

right.  it's a situation we as developers can accept and understand, but for
the visually inclined it gets messy.  luckily I think it's something
everyone can grasp in my current situation :)

> To write a test to satisfy the coverage,
> but that doesn't actually test the (or any) intended usage, IMO weakens
> your overall test.

agreed.  100% ;)

--Geoff


Re: Devel::Cover: completing $x{foo} ||= 1 conditions

2004-05-21 Thread Geoffrey Young


chromatic wrote:
> On Fri, 2004-05-21 at 09:02, Geoffrey Young wrote:
> 
> 
>>another thing that is keeping me from 100% right now is the
>>classic
>>
>>  my $class = ref $self || $self;
>>
>>where the only way to satisfy the conditional is to call My::Foo::bar()
>>using functional syntax instead of a method syntax.
> 
> 
> Wouldn't calling that constructor on an existing object satisfy the
> conditional?

I don't think so.  you have three logical pathways to test:

  - T
  - F || T
  - F || F

it's the last one that is "problematic" in typical uses, since the only way
to make $self false, given

  my $self = shift;

is to call the class/object method directly as a functional subroutine with
no arguments, which probably breaks most uses of the API.
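
to make that concrete, here's a minimal sketch (My::Foo and bar() are made-up
names, not anything from the code in question):

  package My::Foo;
  sub new { bless {}, shift }
  sub bar {
      my $self  = shift;
      my $class = ref $self || $self;
      return $class;
  }

  My::Foo->new->bar();   # ref $self is true             (T)
  My::Foo->bar();        # ref is false, $self is true   (F || T)
  My::Foo::bar();        # no invocant at all            (F || F)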

still, it's not really about being a slave to the green (which is a bad idea)
- you might genuinely care about what happens when someone new to your group
calls something as a function when it should be a method.  or you may have
something like Apache::server_root_relative() which can be called as either
an object method or a function (in which case the proper pool is retrieved
for you).

so it's not necessarily something you _don't_ want to test for.  but if
you're writing tests for a standard OO API you may not want to waste the
time exercising conditions that clearly break the OO paradigm of your
application.

are we OT yet?

> Granted, I'd *never* do that in real code, so I prefer to bury that
> yucky idiom.

I'm merely the test slave at the moment, not the author :)

--Geoff


Re: Adding analysis to Devel::Cover reports

2004-06-07 Thread Geoffrey Young

> Before I get too deep into an implementation, I'd like to poll the group about
> how you would use this feature and like it to behave. My thoughts and plans follow.
> 
> For the coverage summary, the numbers represent actual coverage, but the colors
> are based upon actual coverage + analysis. So it's possible to have (e.g.) a
> green 66.7 in the 'cond' column. Tooltips have changed from "N / T" (covered
> paths / total paths) to "T paths: N covered, M analyzed".

that sounds like a great approach.  something didn't feel quite right about
messing with the actual numbers.


> The only thing I don't like about this approach is that some of the data is
> available only in the tooltips, which of course don't print. Do people make
> hardcopies of these reports? If so, would you want the extra data in them?

sorry I need this kind of explanation, but what are the tooltips?

>>Michael has some ideas for backends and interfaces to the uncoverable code, 
>>which I'll let him talk about or work on as he sees fit.
> 
> 
> I plan to get the backend working before I start messing with the UI.
> 
> CGI would be the slickest -- you could do everything from the report. But that
> would require a webserver, which I'm loathe to do. (I may relent if anyone knows
> of a small, simple, lightweight pure-perl server that could be started/stopped
> as needed.)

yeah, it would be great to be able to click on the report itself, but a
webserver probably isn't the best idea.

> 
> My current plan is to create a command-line based tool for entering the data,
> and provide a Tk app as a wrapper. The intent would be for people to use the Tk
> interface with the command-line interface provided for those unable/unwilling to
> install Tk or in case someone wants a scriptable interface.

having both is a good idea.  if the underlying file were to stay in
human-readable format that would also be great.  personally, it took me a
while to grok the format, and just when I thought I had it, I couldn't get
it to work anyway with my code :)

--Geoff


Re: Removing Tests from Devel::Cover results

2004-06-17 Thread Geoffrey Young


Tony Bowden wrote:
> Is there any simple way to remove the test files themselves from the
> Devel::Cover results? i.e. just see the coverage analysis of the files
> being tested rather than all the t/* files as well?
> 
> We want to monitor the total coverage over time on a project, and it
> would be nice to just grab that from the bottom line of the HTML report.
> But, from what I can see, we can make that number get larger purely by
> adding more and more lines to test files that don't actually test
> anything, as running those lines will increase the coverage of the
> project as a whole! If there were a flag that would -exclude (or
> -ignore, both seem to be documented?), the actual test files, this might
> be slightly nicer.

ignore and +-inc do the trick for me.  my current 'make cover' target starts
like this:

  @HARNESS_PERL_SWITCHES=-MDevel::Cover=-ignore,\.t\$$, \
 -ignore,apache_test_config.pm,+inc,$(TOPDIR)/lib, \
 +inc,$(TOPDIR)/Apache-Test\\
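
if it helps, the same idea as a one-off from the shell looks roughly like
this (just a sketch - adjust the ignore/inc arguments to taste):

  $ cover -delete
  $ HARNESS_PERL_SWITCHES=-MDevel::Cover=-ignore,'\.t$',+inc,lib make test
  $ cover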

HTH

--Geoff


Re: xUnit vs. Test::More

2004-06-24 Thread Geoffrey Young

> The other concern I've had with our style of xUnit testing is that we're
> testing behavior, but not the actual data.  With Test::More, we tested
> against a copy of the live database (when possible -- but this definitely
> caused some issues) and we sometimes caught data problems that xUnit style
> testing seems less likely to catch.  The reason for this is quite simple:
> when you setup the data, you're setting up your *idea* of what the data
> should be.  This might not match what your data actually is.

I take the approach that these are fundamentally two different things.

first, as a developer you need to code against what your idea of the data
is, taking the "known data gives expected results" approach to your tests.
a good example is a subroutine that uses a regex to parse the data - the
best you can do while developing the routine is to make sure your regex
handles the conditions of some sample data (which you may in fact be making
up in your head).

once that is done, you can bang against the routine with real data and see
how it holds up.  if you find that you get a condition that you didn't think
about before, you now have two tests you can use - the real data that caused
the error, and some minimal data extracted from the real data that isolates
the problem which can be added to your developer tests.
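
a toy example of the developer-level half (parse_id() and the data are made
up, just to show the "known data gives expected results" shape):

  use Test::More tests => 2;

  sub parse_id { my ($line) = @_; return $line =~ /^id=(\d+)$/ ? $1 : undef }

  is( parse_id('id=42'),  42,    'well-formed line parses' );
  is( parse_id('id=abc'), undef, 'malformed line is rejected' );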

this is what I have been doing lately.  call them whatever you like, as I'm
sure that the XP people have some fancy nomenclature for it, but the idea is
to separate developer-level tests (used while coding) from API-level tests
(real-world API usage) and use both in your testing process.  the former is
what I use for coverage purposes, tracing through the logical branches in as
isolated a context as possible.  with the latter I try to tie in live (test)
databases, servers, and so on, relying on that to fill in the gaps that are
exposed in an isolated test environment.

HTH

--Geoff


Devel::Cover and nested subroutines

2004-06-25 Thread Geoffrey Young
hi paul :)

I recently discovered an issue with nested subroutines while using
Devel::Cover with Parse::Yapp.  the basic issue is that some subroutines are
not discovered by Devel::Cover and thus no metrics are generated.

there are two files in the tarball.  Foo.pm is a minimal test case showing
that two subroutines are missing (one defined and one anonymous).  Parser.pm
is an autogenerated file based on Parser.yp that illustrates the same thing
- note that the _missing() subroutine is missing from coverage results (as
well as a handful of anonymous nested subroutines I think).
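
for a rough idea of the shape of the minimal case, it boils down to something
like this (a sketch from memory, not the exact contents of the tarball):

  package Foo;

  sub outer {
      my $inner = sub { 1 };   # anonymous sub nested inside a named one
  }

  sub empty { }                # this one goes missing from the report

  1;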

the only interesting thing is that the behavior of my minimal test case and
Parser.pm seem to be opposite - in the former the missing subroutine is the
one that contains the anonymous sub, while in the latter it is the named
subroutine that occurs after a nested anonymous sub.

anyway, let me know if there is anything else I can do - I realize this is
kind of obscure, so no pressure :)

oh, and I tested using 0.46 and both 5.8.4 and 5.8.0.

--Geoff


bug-devel-cover.tar.gz
Description: GNU Zip compressed data


Re: Devel::Cover and nested subroutines

2004-06-28 Thread Geoffrey Young

> Thanks a lot for the test cases.  I think there are two separate bugs
> here, but I'm only going to take responsibility for one ;-)

:)

> 
> First, mine.  The problem with Foo.pm (the minimal test case) is that
> completely empty subroutines (that is subs which contain no statements
> at all) are ignored as far as subroutine coverage is concerned.  That is
> the case for both named and anonymous subs.  The op tree for an empty
> sub didn't contain the structure I was looking for, so it wasn't
> recognised as a sub.

oh.  cool, I think :)

> 
> I have put in a fix for this, but it only works with Perl 5.8.2 and
> later versions.  I've not gone trying to get it to work with earlier
> versions, since it is pretty obscure and I prefer to keep the code
> reasonably clean.  Or maybe I'm just too lazy.  In any case, let me know
> if this is going to cause anyone problems.  I have documented it as a
> bug though, and upped the recommended version to use from 5.8.1 to 5.8.2.

for minor, obscure issues like this depending on a more recent perl seems
perfectly fine.  especially here, where there isn't any code to create
metrics for anyway.

> 
> Then the second bug.  The problem here is that if you lie to perl it
> will bite your bottom ;-)

that's the quote of the week, for sure :)

>   1.  The coverage will not be reported in the Parser.pm module.

blarg.  but at least there's a reason that makes sense :)

> 
>   2.  Devel::Cover needs to be able to find Parser.yp.  In the example
>   the filename given is Parser.yp, but the file is actually at
>   lib/My/Parser.yp and so Devel::Cover can't find it.  Changing the
>   example to give the full filename, 

you mean changing the #line directive?

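for reference, the directive I mean is the one Parse::Yapp writes into the
generated module, e.g. (line number made up):

  #line 42 "Parser.yp"

with the relative name Devel::Cover can't locate the grammar; with the full
path from the build root ("lib/My/Parser.yp") it can.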

>   or putting in a link from the
>   current directory fixes the problem.  I'm not sure if this is
>   actually a problem with Parse::Yapp or just a result of the way
>   you packaged up the testcase.

well, the packaging is pretty much the way the code I'm testing is laid out
- Foo.yp in the same directory as Foo.pm, and all under some lib/ someplace.
fairly standard I'd think.

but the symlink works - linking Foo.yp to the top-level directory, alongside
cover_db.

> 
>   3.  Parse::Yapp doesn't clean up after itself by setting the line
>   number and filename back to what it actually is, which means that
>   subsequent code coverage is reported on in the grammar file even
>   though it didn't come from there, which can be somewhat
>   surprising.

hmph.

> 
> So I'm afraid there's not much I can do about this one - it will need to
> be fed to the author of Parse::Yapp and he can decide if he wants to do
> anything about it.

I suppose I could do that, but it seems kinda strange to ask him to change
stuff around just so we can have good test metrics.  but, per your
suggestion, at least there is a simple workaround - thanks for that.

> In any case, the first fix will be in the next release, 

excellent, thanks.

> and thanks again for the great test cases.

sure thing - thanks for being responsive :)

--Geoff


Test::More::is_deeply() bug

2004-06-30 Thread Geoffrey Young
hi all

I'm not sure if this is the proper forum for reporting Test::More bugs, but
I know some of the interested parties are listening :)

  use Test::More tests => 4;

  is(undef, undef);
  is_deeply([undef], [undef]);

  # both of these should fail
  is('', undef);            # this fails
  is_deeply([''], [undef]); # this does not

--Geoff


Re: Test::More::is_deeply() bug

2004-06-30 Thread Geoffrey Young


Fergal Daly wrote:
> There are patches in the archives for this and a couple of other bugs but
> they were submitted along with another change that wasn't acceptable so they
> were never applied. A search for is_deeply should find the patches and a
> long argument,

I found some stuff about is_deeply() and stringified references, but it
doesn't look to me to be the same thing.  the problem I am describing is the
difference specifically between an empty string and undef.

  is_deeply([''],[undef]);

improperly says that the two arrays are equal.

  is_deeply(['foo'],[undef]);

behaves as expected.

am I missing something in the discussion?

--Geoff


Re: Test::More::is_deeply() bug

2004-06-30 Thread Geoffrey Young

> Actually, it seems that some of the patches were applied. The problem is
> that is_deeply() delegates to ->is_eq() for non deep arguments but handles
> its own string comparison once you descend into the structure. 

yes, I figured it was something like that - who hasn't been bitten by this
kind of thing?  :)

> The patch
> below seems to fix it,

many, many thanks.  I felt bad that I didn't have the tuits to write up a
proper patch.

--Geoff


Re: Phalanx: What if full coverage isn't possible?

2004-07-16 Thread Geoffrey Young

>> I just ran into a similar "problem" in POE::Driver::SysRW.  For
>> portability I have a couple lines similar to
>>
>> $! = 0 if $! == EAGAIN or $! == EWOULDBLOCK;
>>
>> EAGAIN and EWOULDBLOCK are identical on most systems.  In fact, one is
>> usually defined in terms of the other.  They differ on a few platforms,
>> however, and it's important to check both.
> 
> Redefine EAGAIN and EWOULDBLOCK (they're just perl constants), and rerun
> that code, after setting $!.  (Your code probably isn't written so that
> is in one callable place, but it could be.)
> 
> This is probably a good example of where it's too silly to force it to
> test all the possibilities, but it is possible.

I don't think it's silly at all - if "it's important to check both" then you
would want to have tests that cover situations which can occur on platforms
other than your own, otherwise you can't really be sure that you have
provided the logic you seek.

as you mention, local-style redefinition is just the solution for code like
this - I, for one, always get confused about operator precidence in cases
just slightly more complex than the one above, and prefer to use tests (and
parens :) to make sure I got it right.

--Geoff


Re: [ANNOUNCE] Test::Simple 0.48_02

2004-07-29 Thread Geoffrey Young


Michael G Schwern wrote:
> http://mungus.schwern.org/~schwern/src/Test-Simple-0.48_02.tar.gz
> 
> A new alpha release of Test::Simple/More/Builder.  You can consider this
> the 0.49 release candidate.  Please let me know how it goes.

sorry it took me so long to get around to trying this...

the Apache-Test integration stuff we talked about at YAPC works just fine
with 0.48_02.

thanks.

--Geoff


[RFC] Test-Locally

2004-08-09 Thread Geoffrey Young
hi all...

I've been working on something a bit and wanted to run it by people here to
see if folks think it's a project worthy of pursuing.  basically the below
bit from the README kinda sums it up for me - locally wrapping lots of
routines is getting quite tedious (specifically sockets at the moment) and
from the docs Hook::LexWrap just seems to be way too much for testing.

so, the proposal is Test-Locally, which would provide something like this

  - a base class for people that wanted to wrap their own single subroutines
(or groups of subroutines) simply and without a lot of fuss
  - some tools that make overriding subroutines easier, like a base READ
implementation everyone can use
  - knowledge of some standard yet complex modules in the core distribution
so that they didn't need to be wrapped by everyone who ends up wrapping them.

and so on.  at the moment, the implementation is pretty incomplete (only the
base class is documented and only IO:: is implemented and tested) which is
why I bring it up here - if people like the idea I'll tidy it up and
probably put it on sourceforge so people can contribute implementations for
their (least) favorite classes.  if nobody cares I'll forget about CPAN.

anyway, constructive feedback welcome.  you can get the (incomplete) tarball
here:

  http://perl.apache.org/~geoff/Test-Locally-0.01.tar.gz

--Geoff

from the README:

"...here is an example drawn from real life...

suppose you have a subroutine that opens a socket connection with
a client, sends it some data, receives a response, and validates
the response (all within the same Perl subroutine).  to override all
the calls to the point where you can effectively fake the read so
that it uses your test data, you would need to do something like this


  {
    no warnings qw(redefine);
    local *IO::Socket::INET::new = sub { bless {}, 'IO::Socket' };
    local *IO::Socket::connected = sub { 1 };
    local *IO::Handle::syswrite  = sub { };
    local *IO::Handle::close     = sub { };
    local *IO::Handle::sysread   = sub { generic READ of $data };

    ... testing stuff ...
  }


yucko.  with Test::Locally that can be reduced to


  {
    my $local = Test::Locally::IO::Socket::INET->override
                  ->new
                  ->connected
                  ->syswrite
                  ->sysread($data)
                  ->close;

    ... testing stuff ...
  }

  # IO::Socket::INET methods are restored when $local is destroyed


or even less if you subclass Test::Locally::IO::Socket::INET and
add, say, a C method that suits your specific purposes."

http://perl.apache.org/~geoff/Test-Locally-0.01.tar.gz


Re: [RFC] Test-Locally

2004-08-09 Thread Geoffrey Young


stevan little wrote:
> Geoff,
> 
> This sounds like mock objects basically
> (http://www.mockobjects.com/FrontPage.html), although maybe on a
> smaller/more-directed scale.

hrmph.  now that you mention it, yeah, it does.  and there's already
Test::MockObject (which I've heard about but obviously haven't actually used
yet :)

just goes to show ya, if you're looking for lexical subroutine overloading
that's what you'll find.  but what you think you want and what you really
want are often different.

> I do like the idea of building a mock
> object repository of sorts, I am sure that would come in handy.

yeah, that was the real goal. perhaps a subclass of Test::MockObject is more
appropriate.

--Geoff


Test::MockObject nits?

2004-08-20 Thread Geoffrey Young
hi chromatic :)

given being pointed toward Test::MockObject::Extends last time I decided to
rework my tests to use it instead of a dozen local overrides.  I immediately
ran into a little snag.

I want to override IO::Socket::INET so that a class calling its constructor
will use my mocked object.  so I tried this:

  my $mock = Test::MockObject->new('IO::Socket::INET');


  $mock->fake_new('IO::Socket::INET')
       ->set_false('connected')
       ->mock('error', sub { 'localerror' });

the goal being that when my class calls IO::Socket::INET->new($args) that it
fails, returning my error string.

well, it works great (thanks!)... except fake_new() doesn't return $self
like all the other methods seem to do, so I can't really chain the calls
together like I would have expected to be able to do.  patch attached :)

also, I was a little confused by this note in the
Test::MockObject::fake_new() docs:

  "Note: see Test::MockObject::Extends for a better alternative to this method."

if the goal of Extends.pm is to fake an entire class, then wouldn't that
include the class constructor?  at first I got the impression that I was
doing something wrong, but then I figured it was just a chicken-and-egg
thing since I started with Extends.  or maybe I am doing something wrong (or
at least something non-idiomatic...)

--Geoff
--- lib/Test/MockObject.pm	Thu Mar 25 23:04:57 2004
+++ lib/Test/MockObject.pm.geoff	Fri Aug 20 12:29:45 2004
@@ -269,6 +269,7 @@
 {
 	my ($self, $class) = @_;
 	$self->fake_module( $class, new => sub { $self } );
+	$self;
 }
 
 {
--- lib/Test/MockObject/Extends.pm	Thu Mar 25 22:58:56 2004
+++ lib/Test/MockObject/Extends.pm.geoff	Fri Aug 20 12:28:42 2004
@@ -118,7 +118,7 @@
   use Test::MockObject::Extends;
 
   my $object  = Some::Class->new();
-  my $mock_object = Test::MockObject::Extends( $object );
+  my $mock_object = Test::MockObject::Extends->new( $object );
 
   $mock->set_true( 'parent_method' );
 


Re: Test::MockObject nits?

2004-08-20 Thread Geoffrey Young

>   my $mock = Test::MockObject->new('IO::Socket::INET');
> 
>   $mock->fake_new('IO::Socket::INET')
>        ->set_false('connected')
>        ->mock('error', sub { 'localerror' });
> 
> the goal being that when my class calls IO::Socket::INET->new($args) that it
> fails, returning my error string.
> 
> well, it works great (thanks!)

hmph, I'm actually having a difficult time getting ::Extends to do what I
would think it would do.  consider

  use Test::MockObject::Extends;
  my $mock = Test::MockObject::Extends->new('IO::File');
  $mock->mock('open', sub { print "mocked open\n" });

  IO::File->open;

which yields an IO::File::open() error - I would have expected my own
subroutine to be called instead.  some poking around shows me that if you
call fake_new() then $class->new->method() works ok, but class methods
themselves are not overridden, including new(), which is a handy thing to be
able to override :)

anyway, is something wrong with the code or my understanding of what
Test::MockObject::Extends is capable of?

--Geoff


Re: Test::MockObject nits?

2004-08-20 Thread Geoffrey Young

> It's your understanding.  You're not mocking the class as a whole. 
> You're mocking an instance.  If it helps, think of prototype-based
> programming, where you don't inherit from classes, you inherit from
> other objects and selectively override or add methods on the new
> objects.

hmm, ok, I'll accept that I'm out of whack here :)  but I'm still a bit
confused.

Test::MockObject is clearly object/instance based.
Test::MockObject::Extends is documented to mock either an object or the
class as a whole.  if that's not the case that is fine (I guess ;), but then
I'm very confused what value passing a class name to new() adds - the docs
claim that it is for mocking class methods, doesn't it?

--Geoff


Re: Test::MockObject nits?

2004-08-20 Thread Geoffrey Young

> The docs may be misleading, especially as there's code in
> Test::MockObject that really should live in something like
> Test::MockModule or Test::MockPackage, neither of which exist yet.
> 
> The important point is that you always have to work with the object
> returned from Test::MockObject::Extends->new(), as it's the only one
> with the special instancey behavior.

ok, now I see where you're going :)

living with the object limitation is fine, as that handles most of the
general cases - I can still use fake_new() so that the calling class
constructs a mock object of my choosing.  however, I might suggest that
there be some generic fake_constructor($name) method or something - if I
want to subvert, say, DBI behind the scenes I would want to override
connect() for the class instead of new().
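
(in the meantime the same effect seems reachable with fake_module() directly -
the DBI lines here are just my own illustration, not tested against real code:

  my $mock = Test::MockObject->new;
  $mock->fake_module( 'DBI', connect => sub { $mock } );
  $mock->set_true('do');

so anything calling DBI->connect(...) gets handed the mock.)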

also, if you fancy making fake_new() return $self, mock() and unmock() in
Extends also don't return $self, which makes them different from the
MockObject versions.

anyway, thanks for taking the time to discuss things a bit :)

--Geoff


Re: [ANNOUNCE] Test::Simple 0.48_02

2004-09-05 Thread Geoffrey Young
hi all :)

Michael G Schwern wrote:
> http://mungus.schwern.org/~schwern/src/Test-Simple-0.48_02.tar.gz
> 
> A new alpha release of Test::Simple/More/Builder.  You can consider this
> the 0.49 release candidate.  Please let me know how it goes.

just out of curiosity, what's the word on 0.49?

--Geoff


[Devel::Cover] return conditions

2004-09-16 Thread Geoffrey Young
hi paul :)

I think this has come up before, but I'm not sure what the resolution was.

I just came across (production) code that looks like this:

  return 1 if $one == $two or return 0;

in the condition coverage the second return is always false, but I suppose
that it could be argued that if the second return is reached at all the
condition has been met (regardless of what it actually returns).

anyway, I have a tarball with a working example if you need it.  but I'll
also accept that D::C is always going to work the way it does now :)

--Geoff


Re: [Devel::Cover] return conditions

2004-09-16 Thread Geoffrey Young

>>>  return 1 if $one == $two or return 0;
> 
> 
> Just FYI:
> 
> I always wonder why someone would write such code. IMHO this is 
> unmaintainable code. I might not be a Perl expert, but I wouldn't consider 
> myself a beginner either, especially not at boolean logic. And still, my 
> mind cannot grasp what this actually does - I would need to write it down 
> on paper and trace it to actually understand it. And if you need to do that 
> then the code in question is too complicated :)
> 
> But maybe I am just overworked and tired :)

nope, I fully agree.  however, you can't always choose which legacy code you
inherit.  and you can't always just up and change production code.  well,
unless you have some tests to show that it works the same way as it did
before - which is the reason I'm writing tests for it ;)

--Geoff


Re: [Devel::Cover] @INC at Runtime?

2004-09-17 Thread Geoffrey Young

> You could use the values from Config.pm

paul and I were talking about this on irc, wondering if there was some
Config.pm value that would give the @INC that perl was compiled with.
neither of us could find one, and when I started poking around to create
a patch, I found that the compiled-in @INC isn't just sitting around waiting
to be picked up - it looks to be scattered across all sorts of variables that
are pasted together depending on this or that.
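
the kind of scattering I mean - the compiled-in paths only show up in
Config.pm as separate entries, e.g. (a partial list, off the top of my head):

  $ perl -MConfig -le 'print "$_ => $Config{$_}" for grep /privlib|archlib|sitelib|sitearch|vendorlib/, sort keys %Config'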

but maybe I'm missing something very obvious (which wouldn't be surprising :)

--Geoff


Re: [ANNOUNCE] Test::Simple 0.49

2004-10-14 Thread Geoffrey Young

> 0.49  Thu Oct 14 21:58:50 EDT 2004

excellent!  thank you very much.

for the interested, Test::More support has now officially been added to
Apache-Test server-side tests, provided you have 0.49.

kudos all around.

--Geoff


[RELEASE CANDIDATE] Apache-Test-1.14

2004-10-11 Thread Geoffrey Young
a release candidate for Apache-Test 1.14 is now available.

  http://perl.apache.org/~geoff/Apache-Test-1.14-dev.tar.gz

worthy of note is that this version ought to play nicely with Devel::Cover
0.49 with the use of -one-process, making it possible to get coverage
results for mod_perl handlers.

please take the time to exercise the candidate through all your existing
applications that use Apache-Test and report back successes or failures.

--Geoff

Changes since 1.13:

improve the same_interpreter framework to handle response failures
while trying to init and later find out the same interpreter. [Stas]

make sure that 'make distclean' cleans all the autogenerated files
[Stas]

make sure that if -maxclients option is passed on the command line,
minclients will never be bigger than that value [Stas]

add -one-process runtime argument, which will start the server
in single-server mode (httpd -X in Apache 1.X or
httpd -D ONE_PROCESS in 2.X) [Geoffrey Young]

In open_cmd, sanitize PATH instead of clearing it [Gozer]

Allow / \ and \\ path delimiters in SKIP file [Markus Wichitill
<[EMAIL PROTECTED]>]

Added an apxs query cache for improved test performance [Gozer]

run_tests make target no longer invokes t/TEST -clean, making it
possible to save a few development cycles when a full cleanup is
not required between runs.  [Geoffrey Young]

Apache::TestSmoke improvements: [Stas]
 o the command line option -iterations=N should always be respected
   (previously it was internally overridden for order!='random').
 o since IPC::Run3 broke the Ctrl-C handler, we started to lose any
   intermediate results, should the run be aborted. So for now, try to
   always store those results in the temp file:
   smoke-report...$iter.temp

fix 'require blib' in scripts to also call 'blib->import', required to
have an effect under perl 5.6.x. [Stas]

don't allow running an explicit 'perl Makefile.PL', when Apache-Test
is checked out into the modperl-2.0 tree, since it then decides that
it's a part of the modperl-2.0 build and will try to use modperl
httpd/apxs arguments which could be unset or wrong [Stas]

Fix skip test suite functionality in the interactive configuration
phase [Stas]

s/die/CORE::die/ after exec() to avoid warnings (and therefore
failures) when someone overrides CORE::die when using Apache-Test
[William McKee, Stas]


[ANNOUNCE] Apache-Test-1.14

2004-10-12 Thread Geoffrey Young

The URL

http://perl.apache.org/~geoff/Apache-Test-1.14.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/G/GE/GEOFF/Apache-Test-1.14.tar.gz
  size: 127197 bytes
   md5: d930b810b4e1b85325f3e3fd9cb93bd1


Changes since 1.13:

improve the same_interpreter framework to handle response failures
while trying to init and later find out the same interpreter. [Stas]

make sure that 'make distclean' cleans all the autogenerated files
[Stas]

make sure that if -maxclients option is passed on the command line,
minclients will never be bigger than that value [Stas]

add -one-process runtime argument, which will start the server
in single-server mode (httpd -X in Apache 1.X or
httpd -D ONE_PROCESS in 2.X) [Geoffrey Young]

In open_cmd, sanitize PATH instead of clearing it [Gozer]

Allow / \ and \\ path delimiters in SKIP file [Markus Wichitill
<[EMAIL PROTECTED]>]

Added an apxs query cache for improved test performance [Gozer]

run_tests make target no longer invokes t/TEST -clean, making it
possible to save a few development cycles when a full cleanup is
not required between runs.  [Geoffrey Young]

Apache::TestSmoke improvements: [Stas]
 o the command line option -iterations=N should always be respected
   (previously it was internally overridden for order!='random').
 o since IPC::Run3 broke the Ctrl-C handler, we started to lose any
   intermediate results, should the run be aborted. So for now, try to
   always store those results in the temp file:
   smoke-report...$iter.temp

fix 'require blib' in scripts to also call 'blib->import', required to
have an effect under perl 5.6.x. [Stas]

don't allow running an explicit 'perl Makefile.PL', when Apache-Test
is checked out into the modperl-2.0 tree, since it then decides that
it's a part of the modperl-2.0 build and will try to use modperl
httpd/apxs arguments which could be unset or wrong [Stas]

Fix skip test suite functionality in the interactive configuration
phase [Stas]

s/die/CORE::die/ after exec() to avoid warnings (and therefore
failures) when someone overrides CORE::die when using Apache-Test
[William McKee, Stas]



Re: running Devel::Cover in mod_perl (1.3)

2004-10-02 Thread Geoffrey Young


Kevin Scaldeferri wrote:
> 
> On Sep 21, 2004, at 1:30 PM, Kevin Scaldeferri wrote:
> 
>>
>> So, I don't expect anyone to try to figure out this stack trace stuff,
>> but I'm curious if other people have seen stability problems like
>> this?  Alternatively, if someone can tell me the exact logistics of
>> how they get the coverage out in the end, I'd appreciate it.  Is there
>> another way than 'kill'ing the apache process to get Devel::Cover to
>> write its data?  It seems to do it at one point during startup, but
>> after that it looks like it just stays in memory, which I end up
>> losing when things go bad terminating the process.
>>
> 
> Sorry, to resurrect this, but I was hoping to hear from some other
> people who are using Devel::Cover in mod_perl about just how they run
> the tests, and get the results out.  Are people doing something other
> than killing the server in order to get Devel::Cover to dump the data
> out of memory to the disk?  Could anyone give a cookbook type procedure
> that they use to do this sort of data collection?

we use Apache-Test, which starts the server, runs the tests, and shuts down
the server again.  with the posted patch (or the next version of
Devel::Cover) this produces the necessary statistics - running cover when
Apache-Test is done generates the same html you would get from a normal test
script.

if you haven't investigated Apache-Test yet, I would.  our custom make
target looks like this:

test-cover: all
@cover -delete
@HARNESS_PERL_SWITCHES=-MDevel::Cover=+ignore,\.t\$$ \
  ,+ignore,apache_test_config.pm,+ \
  +inc,$(HOME)/.apache-test,APACHE_TEST_EXTRA_ARGS=-one-process \
  $(MAKE) test
@cover

that final $(MAKE) test calls 'make test' which calls the generated
Apache-Test harness t/TEST.  works like a charm :)

--Geoff




Re: running Devel::Cover in mod_perl (1.3)

2004-10-02 Thread Geoffrey Young

> if you haven't investigated Apache-Test yet, I would.  our custom make
> target look like this:

I forgot to add some A-T specific stuff :)

t/conf/modperl_extra.pl:

  if ($ENV{HARNESS_PERL_SWITCHES}) {
      eval {
          require Devel::Cover;
          Devel::Cover->import('+ignore' => 't/response/',
                               '+inc'    => "$ENV{TOPDIR}/lib",
                               '+inc'    => "$ENV{TOPDIR}/Apache-Test",
                               '-db'     => "$ENV{TOPDIR}/cover_db");
      };
      warn "Devel::Cover error: $@" if $@;
  }

  1;

t/conf/extra.conf.in

  PerlPassEnv TOPDIR
  PerlPassEnv HARNESS_PERL_SWITCHES

HTH

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-02 Thread Geoffrey Young


Kevin Scaldeferri wrote:
> 
> On Oct 2, 2004, at 12:36 PM, Geoffrey Young wrote:
> 
>>>
>>
>> we use Apache-Test, which starts the server, runs the tests, and shuts
>> down
>> the server again.
> 
> 
> 
> When I last talked with you about Apache-Test, I seem to recall that you
> said that it was restricted to running the tests serially.  Is this
> still true?  I supposed that while gathering coverage, and in single
> process mode, it doesn't matter, but if I can't parallelize my test
> requests on a typical run, it would be unacceptably slow and probably
> keep people from actually running the tests.

well, it runs them the same way that Test::Harness would - one *.t file at a
time, which acts as the client.  since Devel::Cover requires single process
mode I don't think there would be a way around this anyway.

fwiw, while running the _complete_ test suite takes a very long time under
Devel::Cover, I don't find single test files (or smallish groups of files,
say 20 or so, with 150 tests in total) to be all that bad.

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-02 Thread Geoffrey Young

> [ Just before sending this I notice Geoff has recommended something
> better, but I'll send this too as another WTDI. ]

cool :)

I started to maintain Apache-Test skeletons, but I never quite got them up
to speed.  give me a few days and I'll roll a tarball with a test-cover
target so that folks can have an entire working example of the way I would
do it.

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young


Geoffrey Young wrote:
>>[ Just before sending this I notice Geoff has recommended something
>>better, but I'll send this too as another WTDI. ]
> 
> 
> cool :)
> 
> I started to maintain Apache-Test skeletons, but I never quite got them up
> to speed.  give me a few days and I'll roll a tarball with a test-cover
> target so that folks can have an entire working example of the way I would
> do it.

as promised, here is a tarball that includes a 'test-cover' target.

  http://perl.apache.org/~geoff/Apache-Test-with-Devel-Cover.tar.gz

it's made a little more complex than it needs to be because of version
restrictions: 'test-cover' requires the (yet unreleased) Apache-Test 1.14 as
well as the (yet unreleased) Devel::Cover 0.48.  if you remove that logic
the requirements to get Devel::Cover working well with mod_perl are pretty
minimal.

in a real-life deployment my configuration is almost identical, save lots
more +inc entries and specific pointers to where cover_db ought to live.
and, as paul pointed out, when you invoke Devel::Cover like this you get
coverage for _both_ regular perl modules tested by *.t files, as well as
your mod_perl handlers, making it really easy to mix apache and non-apache
based tests in a single suite.

anyway, if people have trouble with the tarball just let me know and I'll
tweak it as required.  if you are unfamiliar with Apache-Test, check out the
README contained within the tarball itself - it has just about everything
you need to know, including pointers to more verbose documentation.

HTH

--Geoff



Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young


David Wheeler wrote:
> On Oct 2, 2004, at 2:30 PM, Geoffrey Young wrote:
> 
>> I started to maintain Apache-Test skeletons, but I never quite got
>> them up
>> to speed.  give me a few days and I'll roll a tarball with a test-cover
>> target so that folks can have an entire working example of the way I
>> would
>> do it.
> 
> 
> Perhaps I should add support for Module::Build's "covertest" action to
> Apache::TestMB...just tell me what it needs to do.

I think that all Apache::TestMB would need to do is add a make target that
looks like this:

test-cover ::
	@cover -delete
	@HARNESS_PERL_SWITCHES=-MDevel::Cover=+inc,$(HOME)/.apache-test \
	  APACHE_TEST_EXTRA_ARGS=-one-process $(MAKE) test
	@cover

broken down it does this

  - HARNESS_PERL_SWITCHES gets Devel::Cover started
  - +inc,$(HOME)/.apache-test keeps coverage away from generated A-T files,
which isn't required
  - -one-process puts httpd in -X mode, but is only supported in A-T current cvs

there is more to do if you want to generate coverage on mod_perl handlers,
but that needs to happen from httpd.conf land.  however, this will get
Devel::Cover started for *.t files that test normal Perl modules.

HTH

--Geoff



Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young

>>   - HARNESS_PERL_SWITCHES gets Devel::Cover started
> 
> 
> Module::Build's testcover target already does this.

:)

> 
>>   - +inc,$(HOME)/.apache-test keeps coverage away from generated A-T
>> files,
>> which isn't required
> 
> 
> Ah, cool. But $(HOME) doesn't correspond to ~/ here, does it?

yeah - it's equivalent to $ENV{HOME} in make-land.  I guess there is always
the danger that $HOME isn't populated, but internally A-T uses $ENV{HOME}
when it generates the .apache-test directory, so it's probably not that big
of a deal.  or it may not be that big of a deal anyway - the worst that can
happen is that D::C reports statistics for TestConfigData.pm

> 
>>   - -one-process puts httpd in -X mode, but is only supported in A-T
>> current cvs
> 
> 
> That's an environment variable? 

yes, APACHE_TEST_EXTRA_ARGS is an environment variable that adds extra
arguments to the t/TEST call.  so this makes it equivalent to calling

  $ t/TEST -one-process

-one-process is required because if httpd runs in standard mode the coverage
statistics are only partial.  in fact, we added both APACHE_TEST_EXTRA_ARGS
and -one-process to Apache-Test for exactly this reason :)

> I think I wouldn't worry about that,
> since the next version of TestMB wouldn't come out before the next
> version of A-T.

oh, of course ;)

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young

>>test-cover ::
>>  @cover -delete
>>  @HARNESS_PERL_SWITCHES=-MDevel::Cover=+inc,$(HOME)/.apache-test
>>APACHE_TEST_EXTRA_ARGS=-one-process $(MAKE) test
>>  @cover
> 
> 
> I wonder whether we shouldn't try to standardise the target name before
> it's too late to do so.  Module::Build uses covertest, I've always used
> cover, and Geoff has just used test-cover.
> 
> I'm not overly concerned, but I'll admit to preferring something
> starting with cover, because it completes more easily.  (Yes, I'm that
> lazy.  I even have make aliased to n.)
> 
> So, standardise on covertest?  Opinions?

Devel::Cover is your realm, so I'm happy to follow whichever standard you
choose.  as david mentions, Module::Build already has a target, but it would
be nice if they could bend to your will as well ;)

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young


David Wheeler wrote:
> On Oct 5, 2004, at 11:32 AM, Geoffrey Young wrote:
> 
>>> Ah, cool. But $(HOME) doesn't correspond to ~/ here, does it?
>>
>>
>> yeah - it's equivalent to $ENV{HOME} in make-land.  I guess there is
>> always
>> the danger that $HOME isn't populated, but internally A-T uses $ENV{HOME}
>> when it generates the .apache-test directory, so it's probably not
>> that big
>> of a deal.  or it may not be that big of a deal anyway - the worst
>> that can
>> happen is that D::C reports statistics for TestConfigData.pm
> 
> 
> Uh...why? Why not create it in the build directory? That seems more
> portable.

basically it goes into $HOME because it stores the A-T preferences for a
specific user.  but this is all part of the endless 'sticky preferences' foo
that I really don't want to be associated with ;)  lots of to and fro in the
httpd-test archives, though...

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young

> +my $atdir = $self->localize_file_path("$ENV{HOME}/.apache-test");
> +local $Test::Harness::switches=
> +local $Test::Harness::Switches=
> +local $ENV{HARNESS_PERL_SWITCHES} = "-MDevel::Cover=+inc,'$atdir'";

somewhere in here it looks like -one-process is missing, though I wouldn't
know where it would go.

> Geoff, should I go ahead and commit this to A-T?

you're the only one with commit access who uses or understands Module::Build,
so go forth!

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young


David Wheeler wrote:
> On Oct 5, 2004, at 12:36 PM, Geoffrey Young wrote:
> 
>> somewhere in here it looks like -one-process is missing, though I
>> wouldn't
>> know where it would go.
> 
> 
> I'll put it in, though it isn't needed if you use A-T in CVS, eh?

no, it is required.  but only cvs currently supports -one-process as an
option - earlier versions will explode.

--Geoff


Re: running Devel::Cover in mod_perl (1.3)

2004-10-05 Thread Geoffrey Young


David Wheeler wrote:
> On Oct 5, 2004, at 12:43 PM, Geoffrey Young wrote:
> 
>> no, it is required.  but only cvs currently supports -one-process as an
>> option - earlier versions will explode.
> 
> 
> Okay. So I just added this to the testcover action:
> 
> local $ENV{APACHE_TEST_EXTRA_ARGS} = "-one-process";
> 
> Is that all it needs?

yeah, I think that's all the required up front pieces.  authors still need
to configure Devel::Cover over in httpd.conf land, but there's not much we
can do from a makefile to help with that.

--Geoff


[ANNOUNCE] Apache-Test-1.16

2004-11-09 Thread Geoffrey Young
The URL

http://perl.apache.org/~geoff/Apache-Test-1.16.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/G/GE/GEOFF/Apache-Test-1.16.tar.gz
  size: 137425 bytes
   md5: f1d2d2321af6d5f2080e0a56a58b6cec


Changes since 1.15:

launder the require()d custom config filename to make -T happy
[Torsten Fortsch ]

added Apache::TestRunPHP and Apache::TestConfigPHP classes,
which provide a framework for server-side testing via PHP scripts
[Geoffrey Young]

fix problem with multiple all.t files where only the final
file was being run through the test harness.  [Geoffrey Young]

Documented that redirection does not work with "POST" requests in
Apache::TestRequest unless LWP is installed. [David Wheeler]

Separated the setting of the undocumented $RedirectOK package
variable by users of Apache::TestRequest from when it is set
internally by passing the "requests_redirectable" parameter to
the user_agent() method. This allows users to override the
behavior set by the user_agent() method without replacing it.
[David Wheeler]



[RFC] adding skip option directly to plan()

2004-11-30 Thread Geoffrey Young
hi all.

yesterday on irc we got to discussing adding a feature to Test::More that
Apache-Test has been using for a while.  the overall opinion was that the
idea had merit, but we should vet out options here, so comments welcome.
here's the scoop...

over in Apache-Test we allow users to join the plan and skip operations in a
single step.  so, whereas in Test::More you would do something like

  if ($condition) {
plan tests => 3;
  }
  else {
plan 'skip_all';
  }

Apache-Test allows a simple

  plan tests => 3, $condition;

basically, if the arguments to plan() are unbalanced, the final argument is
taken as a skip marker.  of course, that's not the whole story, since I just
glossed over the skip message, but that's the basic idea we're tossing
around.  here are the gory details.

Apache-Test provides a few different interfaces for $condition, but the
format for each is to populate a @global with the skip message.  here's an
example:

  plan tests => 3, need_lwp;  # skip if the LWP suite is not found.

where need_lwp would populate the global skip array behind the scenes, and
the message would be printed as you would expect.  for user-specific
functions there is skip_message()

  plan tests => 3, skip_message('foo not found');

a final, interesting form is the special need() function, which will push
skip messages onto this global array and print them all

  # skip unless we can do SSL in real time
  plan tests => 3, need need_lwp, need_module('LWP::Protocol::https');

in this case, if only LWP::Protocol::https was missing the skip message
would be just for it, while if both were missing both skip messages would be
printed.

anyway, the point of this exercise is to present a few different options for
augmenting Test::More's plan().  personally, I really, really like the way
Apache::Test::plan() works, and would love to see that logic carried over to
Test::More.  the only real sticking issue is the use of a global array,
which doesn't really fly with Test::Builder.

so, were I to implement it now (off the top of my head) I would port plan(),
skip_message(), and need() over to Test::Builder.  the communication would
use a new function Test::Builder::add_to_skip_message() (or somesuch) that
would be the official accessor to some locally scoped message array.
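
something along these lines, maybe (all names hypothetical, just to make the
shape concrete):

  package Test::Builder;

  my @Skip_Messages;                 # lexically scoped, not a package global

  sub add_to_skip_message {
      my ($self, $msg) = @_;
      push @Skip_Messages, $msg;
      return $msg;                   # truthy, so need_*() helpers can return it
  }

  sub skip_messages { @Skip_Messages }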

ok, now tear it to shreds :)

--Geoff


Re: [RFC] adding skip option directly to plan()

2004-11-30 Thread Geoffrey Young


Andy Lester wrote:
> On Tue, Nov 30, 2004 at 10:36:00AM -0500, Geoffrey Young ([EMAIL PROTECTED]) 
> wrote:
> 
>>anyway, the point of this exercise is to present a few different options for
>>augmenting Test::More's plan().  personally, I really, really like the way
>>Apache::Test::plan() works,
> 
> 
> I do, too, and I've been wanting this since I found out about how it
> works at YAPC.  It just fell off my radar.

:)

> I would just ask that it be done as
> 
>plan tests => $n, @list_of_conditions_to_be_met

I think that's possible.

> 
> I would hate to see the 3rd+ parms turn into a meta-language of stuff
> like 
> 
>plan tests => 14, needs => "Apache::Wango 1.14";

in case it wasn't clear from my example before, need() is a function A-T
provides, not a new argument to plan().   so the example could be more clear as

  # skip unless we can do SSL in real time
  plan tests => 3, need(need_lwp, need_module('LWP::Protocol::https'));

which is different than

  plan tests => 3, (need_lwp && need_module('LWP::Protocol::https'));

in that the former allows the skip message to build up, while the latter
shows only the first condition to fail.

but I guess need() wouldn't be needed at all if the final argument were
allowed to be an array.  I'm not sure why it wasn't implemented that way in
the first place, come to think of it.  probably because we are passing off
to Test.pm, where plan() expects a hash instead of just ($self, $cmd, $arg).

--Geoff


Re: [RFC] adding skip option directly to plan()

2004-11-30 Thread Geoffrey Young


Michael G Schwern wrote:
> On Wed, Dec 01, 2004 at 12:44:50AM +0100, Paul Johnson wrote:
> 
>>>plan tests => 14, if => have( "Foo" ) && moon_phase eq "waning";
>>
>>The downside here, as Geoff alluded to, is that we don't really want the
>>short circuiting behaviour of &&, since evaluating the operands may give
>>useful information as to why the tests are being skipped.
> 
> 
> Actually we do.  Consider the following.
> 
>   plan tests => 14, if => os("Win32") && check_something_win32_specific;
> 
> Now there is something rather important missing from this discussion.
> The skip reason.  Everything proposed so far is only taking into account
> the conditional. 

well, not everything :)

> But the real thing we want to condense is this.
> 
> if( some_condition ) {
> plan tests => 14;
> }
> else {
> plan skip_all => "the reason we're skipping this test";
> }
> 
> The "need" functions must express two things.
> 
> 1)  Whether or not the test should be skipped.
> 2)  Why the test is being skipped.
> 
> A simple solution is for need functions to return 0 if the need is met,
> and the reason for skipping if the need is not met.  Because this is
> backwards logic we should rename the functionality to better match the 
> logic.
> 
> For example.
> 
>   sub need_module {
>   my $module = shift;
>   eval qq{use $module};
>   return $@ ? "Can't load $module" : 0;
>   }
> 
> I'm not totally pleased with having "backwards" logic but it does seem
> the simplest way to do it.

which is fine, unless you want to layer conditions.  I think we can
accomplish both.  in A-T we allow for two different flavors:

  # if need_lwp() fails, perl will stop and only 'lwp not found' is printed
  plan tests => 3, need_lwp && need_module('LWP::Protocol::https');

and

  # if both fail both skip messages are printed, else whichever failed
  plan tests => 3, need need_lwp, need_module('LWP::Protocol::https');

the way this would be accomplished in T::M might be

  sub need_module {
      my $module = shift;
      eval qq{use $module};
      return $@
        ? Test::Builder->add_to_skip_message("Can't load $module")
        : 0;
  }

or somesuch.  in either case, plan() might first look for the private skip
message stash on non-zero return, then move to whatever it was passed if the
skip message stash was empty.

anyway, code speaks more clearly than words, and it sounds like most
everyone is willing to at least consider it, so I'll whip something up.  in
the meanwhile

  http://search.cpan.org/dist/Apache-Test/lib/Apache/Test.pm

might be worth a read - the way we handle plan() is right there at the top.

--Geoff


Re: Test label - contents

2004-12-07 Thread Geoffrey Young

> Changing the subject slightly, is there any guidance on what people
> should write in the name/comment/label?
> 
> I ask because several times I've been puzzled by a test failure
> where the message printed is ambiguous. Compare these two:
> 
>   not ok 42 - is red
> 
>   not ok 42 - should be red
> 
> The first isn't clear whether "is red" is what's expected or what's
> actually happened.

yeah, it's easy to get confused when using ok() like that.  that's why I use
is() and its better diagnostics almost all the time, reserving ok() for
only cases like this

  my $rc = foo('bar');
  ok ($rc, 'foo() succeeded');

so, my personal rule is this:

  - use ok() only when testing single variables for truth or falsity
  - use is() (or another comparison function) when you care about the data
behind the results
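
the payoff is in the diagnostics on failure - with is() you see the data,
something like (exact formatting varies by version):

  is( $color, 'red', 'widget is red' );

  #   Failed test 'widget is red'
  #          got: 'blue'
  #     expected: 'red'

whereas a bare ok( $color eq 'red', 'widget is red' ) just tells you "not ok".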

the corollary to this is that I wouldn't do

  is($rc, 1, 'foo() succeeded')

unless it was important to know that foo() returned 1 (rather than, say, 33)
when all I really want to check is truth.

no big insights here, mind you, but I've seen people try to muscle lots of
stuff into ok() that would work better using some of the other T::M
functions.  if you're stuck using Test.pm I guess this isn't an option, but
if you have T::M you might as well use the tools available to you.

one other comment is that I (personally) enforce making each comment unique,
even within loops, since I just find results easier to track that way, even
when everything is successful.  YMMV.

HTH

--Geoff


Re: Test::Legacy warnock'd

2004-12-21 Thread Geoffrey Young


Michael G Schwern wrote:
> On Tue, Dec 21, 2004 at 04:53:18PM +0100, Tels wrote:
> 
>>On Tuesday 21 December 2004 08:53, Michael G Schwern wrote:
>>
>>>I've gotten absolutely no response about Test::Legacy.  Is anybody
>>>using it?  Anybody tried migrating old Test.pm based tests with it?
>>
>>I am converting my old tests directly to Test::More (Test::Legacy wasn't 
>>available before so :)
>>
>>Currently I do not plan to do this - the old tests either work (never fix 
>>what 
>>is working) or they don't (seldom), at which point I would convert them to 
>>Test::More.
> 
> 
> There's no "I want to add a new test to this test file that uses Test.pm and
> it would be nice if I could use Test::Foo" case?

I could see this being really good for Apache-Test, which is by default
Test.pm driven.  the thing is that

  - I've already ported A-T to use Test::More in place of Test.pm

  - the native implementation uses Test.pm magic like

$Test::ntest = 1;
%Test::todo = ();

which I figured might be supported in time but not at the moment.  of
course, that was only speculation on my part, since

  - a severe lack of tuits on my part has kept me away

so I, for one, am very sorry that you've taken the time to work on something
that might prove useful to me and I haven't been able to reciprocate with
any kind of meaningful feedback.

sorry.

--Geoff


Re: Test::Legacy warnock'd

2004-12-21 Thread Geoffrey Young

> But for all Test::Builder based modules you can get the same intent with
> Test::Builder->reset.

yup, I used that for the port away from Test.pm - works like a charm :)
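
for anyone searching the archives, the general shape is something like this
(a minimal sketch, not the actual A-T code):

  use Test::More;

  # Test::Builder is a singleton; reset() wipes the counter and plan,
  # roughly what $Test::ntest = 1 and friends did under Test.pm
  my $tb = Test::More->builder;
  $tb->reset;
  $tb->plan(tests => 1);

  ok(1, 'counting starts over after the reset');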

--Geoff


Re: Test names/comments/whatever?

2005-02-10 Thread Geoffrey Young


Ovid wrote:
> Is anyone even using THS?

/me raises his hand

>  If anything, I
> suspect there are a tiny handful of people who have played with it, but
> haven't really used it since it's not as useful as it could be.

I got Apache-Test to run .php scripts in under 10 lines by subclassing
straps.  it could have been less if there weren't that silly call to
$Strap->{callback} in Test::Harness that I needed to regenerate in my
subclass to get verbose output working properly (hint hint).

--Geoff


Re: TAP Version (was: RE: Test comments)

2005-02-18 Thread Geoffrey Young

> This is helpful for processing bug reports, so I don't have to make
> second trip back to the user to ask: "What version of CGI.pm were you 
> using?".

yeah, I'll second this, at least so far as adding a version component to
Test::More goes (which is different than adding a TAP version, which I don't
have an opinion on:).  Test.pm currently prints out

  # Using Test.pm version 1.24

and Apache-Test follows suit with

  # Using Apache/Test.pm version 1.16

and I always wished that Test::More and friends would follow suit.  for
instance, it might help when someone reports test failures and they're using a
version of is_deeply() with a known issue.
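
something as small as this would do it, hypothetically speaking:

  use Test::More;

  # hypothetical: report the version the way Test.pm and Apache::Test do,
  # but only when the harness is verbose
  Test::More->builder->diag("Using Test::More version $Test::More::VERSION")
      if $ENV{TEST_VERBOSE};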

I'll whip up a patch if folks are inclined to agree.

--Geoff


Re: TAP Version (was: RE: Test comments)

2005-02-18 Thread Geoffrey Young

> Hm, that does seem valuable.  Should all test modules report their
> versions by default, though?

well, my thought was that it was more important to list the source of the
comparison operators the user uses (like is() or eq_array()) than it was the
internal stuff that, say, interfaces with Test::Harness.  but if they are so
intertwined it might be difficult to separate them like that.

since Test-Simple is the top-level distribution and accounts for so much, I'd
settle for just reporting that if I had to :)

>  Should they respect a verbose flag
> somewhere instead?

both the examples I gave respect TEST_VERBOSE.

> 
> Test::Builder could report it for them automatically, if the answer to
> at least one question is yes.

that would be nice :)

--Geoff


Re: TAP and STDERR

2005-02-24 Thread Geoffrey Young


Joe Schaefer wrote:
> we should be able to communicate TAP via HTTP, SMTP, etc.).

TAP::Lite anyone?

/me ducks

;)

--Geoff


Re: [RFC] adding skip option directly to plan()

2005-03-12 Thread Geoffrey Young


Ian Langworth wrote:
> On 30.Nov.2004 09:57AM -0600, Andy Lester wrote:
> 
> 
>>   plan tests => 14, have( "Foo::Wango" ), moon_phase eq "waning", etc;
> 
> 
> Where does the reason fit into this syntax?

well, this syntax doesn't exist in Test::More at the moment (though I
probably should get around to a patch like I promised) - it's only in
Apache-Test.

the A-T syntax can be found in the Apache::Test manpage, but basically it
comes down to plan() accepting a third optional argument, which is a boolean
saying whether or not you want to skip the entire file.  but to answer your
question, have() is an internal function that essentially interacts with TAP
to yield something like "all skipped: Foo::Wango not found".  the reason
that have() operates this way is because we can say

  plan tests => 14, have(qw(Foo::Wango Bar::Beer Java::Cool));

and it will show up as a single "all skipped: Foo::Wango not found,
Java::Cool not found".  we also provide skip_message() as an interface into
the same, so you can write your own functions and layer them on top of
plan(), like so

  plan tests => 14, have have_min_perl_version(5.6),
                         have_lwp,
                         have_module('Foo::Wango'),
                         sub { $< or skip_message('running as root!') };

here, the skip message would include all the conditions that failed, not
just the first one.  the idea being that you would know all the test
preconditions that need to be met after the first run, and not need to
figure them out one at a time until the test can actually run.

oh, and for the record, it's now need() (and need* variants, like
need_module()) in A-T, not have() and have* variants. the difference being
that need* functions deal with the skip message while have* functions do
not, so they can safely be used to determine logical pathways within the
tests themselves.  see the A-T docs for more.

nevertheless, what you are replying to was just a discussion about a feature
that doesn't exist in the standard Test::More toolkit but was brought up
because Apache-Test's plan() works a bit differently and there are enough
people who like it that I thought it warranted a discussion here to see if
T::M was interested.

HTH

--Geoff



Re: [RFC] adding skip option directly to plan()

2005-03-13 Thread Geoffrey Young


Ian Langworth wrote:
> On 12.Mar.2005 11:41PM -0500, Geoffrey Young wrote:
> 
> 
>>nevertheless, what you are replying to was just a discussion
>>about a feature that doesn't exist in the standard Test::More
>>toolkit but was brought up because Apache-Test's plan() works
>>a bit differently and there are enough people who like it that
>>I thought it warranted a discussion here to see if T::M was
>>interested.
> 
> 
> Yeah, I know, I was curious how the reason could fit into the
> _proposed_ Test::More additions. I'll do a better job at phrasing
> next time. :-)

:)

well, I think it would need to be something like skip_message() plus a bunch
of helper functions, similar to the way Apache-Test works.  that's not to
say that optional plan() argument wouldn't work without those special
functions, just that the skip message wouldn't.

but that said, Apache-Test has a bunch of super useful functions, some of
which I mentioned before.  A-T also provides a bunch of useful runtime
functions, like t_write_file() which creates a file (and directories, if
required), writes to it, then removes it (and any created directories) when
the test script finishes.  what I've wanted to do for a long time is wrap
all of these up in something like Test::Util so that the world could use
them outside of Apache-Test.  if T::M were to incorporate plan() then
perhaps Test::Util could be bundled with Test-Simple and include the need*
and have* functions.
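
to give a flavor of it, t_write_file() is used roughly like this (the path is
made up):

  use Apache::TestUtil qw(t_write_file);
  use File::Spec::Functions qw(catfile);

  # creates any missing directories, writes the file, and cleans both up
  # when the test script exits
  t_write_file(catfile(qw(t htdocs generated hello.txt)), "hello world\n");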

anyway, just a thought.

--Geoff


Re: [RFC] adding skip option directly to plan()

2005-03-13 Thread Geoffrey Young


Mark Stosberg wrote:
> On 2005-03-13, Geoffrey Young <[EMAIL PROTECTED]> wrote:
> 
>>nevertheless, what you are replying to was just a discussion about a feature
>>that doesn't exist in the standard Test::More toolkit but was brought up
>>because Apache-Test's plan() works a bit differently and there are enough
>>people who like it that I thought it warranted a discussion here to see if
>>T::M was interested.
> 
> 
> I like these ideas-- they seem like worthwhile additions. 
> 
> Let's see if I can get an example right.
> 
> This:
> 
>  use Test::More;
>  if( $^O eq 'MacOS' ) {
>  plan skip_all => 'Test irrelevant on MacOS';
>  }
>  else {
>  plan tests => 42;
>  }
> 
> would become something more like:
> 
>  use Test::More tests => 5, have 'LWP', { "not Win32" => sub { $^O eq 'MSWin32' } };
> 
> (OK, so that example was highly adapted from the Apache::Test docs). 

something like that.

> 
> It is shorter, but does mean more functionality for the programmer to 
> understand in order to use it.

well, I don't think anyone was suggesting that skip_all go away, only that
plan() become that much more useful.  so in that respect, yeah, there is
more for programmers to understand, but that's certainly not an obstacle
since they don't need to use (or understand) it immediately.

--Geoff


Re: [RFC] adding skip option directly to plan()

2005-03-13 Thread Geoffrey Young


Michael G Schwern wrote:
> On Sat, Mar 12, 2005 at 11:41:08PM -0500, Geoffrey Young wrote:
> 
>>well, this syntax doesn't exist in Test::More at the moment (though I
>>probably should get around to a patch like I promised) - it's only in
>>Apache-Test.
> 
> 
> For the record, there's no reason why Test::More has to be the one to declare
> the plan.  Which is to say, given my desire to avoid Test::More from becoming
> a feature monolith, the shorter path to usable code is to write up a
> Test::Plan module.

sounds like a plan :)

I haven't looked at the innards in a while, but do you think the
infrastructure is there in Test::Builder to support this now?  the last time
I checked I had to jump through some hoops to get an external plan() call to
play nice with everything else.

so, if it's there now cool, I'll take on Test::Plan.  if it's not, if you
could sketch out what you would prefer the internal changes to look like
I'll take that route as I play around.

--Geoff


[RFC] Test::Plan

2005-03-15 Thread Geoffrey Young
hi all :)

following up on the discussion from a few months ago but renewed over the
weekend, here is Test::Plan.  basically all it does is carry over the exact
syntax and helper functions we are already using in Apache-Test land to the
greater community.

I'm still working up additional tests, but the documentation and
functionality are all there, so constructive feedback welcome.

  http://www.modperlcookbook.org/~geoff/modules/Test-Plan-0.01.tar.gz

--Geoff


Re: [RFC] Test::Plan

2005-03-15 Thread Geoffrey Young

>   test => { TESTS => join ' ', map { glob } qw( t/*/*.t t/*/*/*.t ) },

but slashes aren't portable, right?  I don't think you can get rid of
File::Spec.


> Also, I agree that the use-Test-Plan-after-Test-More caveat is icky.

well, it's a caveat, not a requirement :)

the way it works now is that either you load Test::More first and subvert it
with Test::Plan (as I show) or you load Test::Plan first, which successfully
takes over main::plan, then you load Test::More and boom, Test::More
complains about redefining symbols.  which is quite normal, actually, when
using modules with like-named functions and Exporter - nothing new in
perl-land.  the difference here is that Test::Plan provides some sugar so
you don't need to worry about doing that, provided you load it last.
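
in code, the caveat boils down to this ordering (a rough sketch; see the docs
in the tarball for the real details):

  # load Test::More first, then let Test::Plan take over main::plan()
  use Test::More;
  use Test::Plan;

  plan tests => 3, need qw(LWP HTML::Parser);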

> How about modifying Test::Plan::import() so that it (a) checks to see
> if Test::More *has* already been loaded or (b) checks if a plan()
> subroutine is already defined in the namespace that the method is
> exporting to.

hmm, maybe I'm misunderstanding but neither of those situations is helpful
- Test::Plan::plan() needs to be the one and only plan() for people to use
it as a drop-in replacement - I explicitly don't want to bail if either a)
or b) is true, I want the main::plan() namespace no matter what.

fwiw, I'm not trying to be defensive about this, and I appreciate the
feedback.  I was only trying to provide an elegant solution so that test
writers didn't need to worry so much about competing function names.

--Geoff


Re: TestSimple/More/Builder in JavaScript

2005-04-08 Thread Geoffrey Young


David Wheeler wrote:
> On Apr 7, 2005, at 5:55 PM, Michael G Schwern wrote:
> 
>> If you have isDeeply() there's little point to the eq* salad.
> 
> 
> Hrm, fair enough. I'll comment them out, then...

well, a few thoughts here...

as someone familiar with T::M and not javascript, were I to try to use this
it's an additional barrier to call it "Test::More in JavaScript" but not
provide _the exact same functions_ as Test::More.  now before everyone
starts slamming this let me explain...

as we shaped the php T::M port we started hitting a few things that were
perlish but not phpish.  isa_ok() is a good example - isa() is a perl method
but php calls it something else.  so, what we plan on doing (or did,
depending on the function) is implementing isa_ok() for the perl folks and
aliasing it to foo_ok() (I forget what) for the php folks.  I think we
carried over use_ok() even though php doesn't distinguish between use and
require in the perl sense, for example.

I guess what I'm trying to say is that, really, the target audience for
these ports is primarily people who are already Test::More savvy.  so,
taking away eq_array() or calling something isDeeply() just makes my life as
a perl-first developer more difficult.  ok, marginally so, but still...

the secondary audience are folks who are not Test::More savvy but who
program in $language.  for them, providing functions with names like
isDeeply() is more idiomatic, so it's a good idea to offer them too - we
should make them as comfortable as possible so they adopt our awesome tools.

anyway, just a few random thoughts.  I don't ever plan on using javascript
so it really doesn't apply to me anyway :)

oh, and david... you really are crazy ;)

--Geoff



Re: ANN: JavaScript TestSimple 0.03

2005-05-05 Thread Geoffrey Young


David Wheeler wrote:
> On May 5, 2005, at 04:26 , Adrian Howard wrote:
> 
 Here's a weird idea: how about the option of AJAXing the test 
 harness results back to a receiving server somewhere that 
 understands TAP? Bingo: TAP testing of JS embedded in web pages  in
 its native habitat.

>>>
>>> That's just evil. Maybe when Schwern or whoever had the idea gets 
>>> networked TAP going, I'll just send the data there. :-)
>>>
>>
>> That's pretty much what I did when I hacked JSUnit to output TAP. 
>> Worked quite nicely.
> 
> 
> Do you have some sample code for your TAP server?

Apache-Test pretty much does all of this already, and automatically for
mod_perl and php on the server-side.  for additional $language (javascript,
parrot, python, cgi, soap, whatever) support all you need to do is make sure
that your pages spit out TAP and use the A-T interface to make the calls.
the A-T interface itself can be written in perl or php at the moment, though
it's trivial to add support for other languages, I just haven't had the time
yet.

--Geoff


Re: Module suggestion

2005-05-27 Thread Geoffrey Young


Vsevolod (Simon) Ilyushchenko wrote:
> Hi,
> 
> I'd like to suggest a module that I came up with to test CGI file
> uploading logic. I have not found anything else like it.

have you seen Apache-Test yet?

 http://search.cpan.org/dist/Apache-Test/

I find it hard to understand modules like this anymore - why use some funky
tie interface and a bunch of other hoops to fake a live environment when you
can really upload a file to a real webserver and run your tests there?

--Geoff


Re: Devel::Cover and HTTP::Server::Simple

2005-06-08 Thread Geoffrey Young


Ricardo SIGNES wrote:
> Yesterday, hide gave me some sweet example code to use
> HTTP::Server::Simple and Test::WWW::Mechanize to test Rubric's CGI bits.
> I've started working with them, and they make me happy.
> 
> I've realized that the server, which is forked from the test script,
> doesn't have its usage show up in Devel::Cover.  I can see why that
> would happen but... I'm so addicted to the awesome utility of coverage
> reports that I am loathe to have to cope with half my code being so hard
> to cover.
> 
> Does anyone have a good idea on how to make this work?

yeah, use Apache-Test, run the CGI under mod_perl, and see your coverage
magically appear :)

--Geoff


Re: Scalability of Devel::Cover

2005-06-21 Thread Geoffrey Young


>>> This seems unfortunate for at least two reasons:
>>> 1) it ends up taking a really long time to run the tests.  At some
>>> point, maybe long enough that nightly tests become prohibitive (even
>>> more so for continuous integration).

> We have a substantial Perl code base (as I've said several hundred
> modules), with unit tests.  I have a test environment which does a
> nightly checkout of the code and runs all the unit tests, with
> Devel::Cover enabled and reports on the results.

I have unit tests for maybe 15% of our perl codebase, but at least a basic
compile test for maybe 90% of the almost 900 modules.  a Devel::Cover run
takes ~14 hours to complete (versus maybe 2 hours without D::C) so I
abandoned the idea of nightly coverage runs a long time ago.  not that I
thought it would be that useful anyway - with that many modules, and so few
near 100%, I doubt I could make any sense of the report anyway.

a better idea would probably be to run D::C for just the packages that
changed on a given day (via TEST_FILES), since a module nobody has touched
in a month isn't going to get any better or worse coverage.  intelligently
diff that to the prior report for that module and then you've actually got
some useful information.
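
something along these lines, assuming a stock MakeMaker setup (the file names
are made up):

  $ HARNESS_PERL_SWITCHES=-MDevel::Cover make test TEST_FILES="t/Changed/Module.t"
  $ cover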

--Geoff


Re: ANN: JavaScript Test.Simple 0.10

2005-06-24 Thread Geoffrey Young

>   http://www.justatheory.com/code/Test.Simple-0.10/tests/index.html?
> verbose=1

that's just awesome :)

nice work.

--Geoff


Re: False Positives from Automated Testing at testers.cpan.org

2005-07-20 Thread Geoffrey Young

> (I deliberately
> did *not* list IO::Capture as a prerequisite in Makefile.PL because I
> didn't want to force users to install that module.  I simply wanted them
> to use it during testing and then throw it away.

this is the start of the right attitude I think - when your testing
environment relies on tools that your runtime environment does not it should
be up to the (individual) tests, not the distribution, to make sure that the
testing environment is sane.

what I personally would change about what you are doing is adjust each
plan() call so that you only plan() tests where you know the environment
contains what you need.  then rely on the user's @INC to decide if they have
the proper setup required to run your tests, and remove IO::Capture from
your distribution altogether.  YMMV :)
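
concretely, something like this at the top of each test file that wants
IO::Capture (a sketch; substitute whichever IO::Capture class the tests
actually use):

  use Test::More;

  # only plan the tests if the optional test-time dependency is around;
  # otherwise skip the whole file rather than "fail"
  eval { require IO::Capture::Stderr };
  plan $@ ? (skip_all => 'IO::Capture not installed')
          : (tests => 7);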

if you search the archives for this list you'll see some talk from a few
months ago about extending Test::Builder's plan() so that it functions the
same way as Apache::Test::plan() - it was precisely to cover situations like
the one you experienced that I wanted to see this happen.  IIRC the
discussion ended with "that should be a separate module" so I wrote
Test::Plan, which you can look at as well - not as an endorsement of the
module per se, but rather of the basic idea I was trying to get across: it's
not a test failure if the "failure" is the result of an incomplete test
environment, so it ought to be up to the tests to make sure that the
environment is good to go before they even start.

--Geoff


Re: Test::Builder::Module

2005-07-29 Thread Geoffrey Young


Michael G Schwern wrote:
> What I'm looking for is ideas about more things it could do that would
> be useful for most testing libraries.  What scaffolding do module authors 
> find themselves implementing? 

if there were a better way to do this:

  push @ISA, qw(Test::Harness::Straps);
  $Test::Harness::Strap = __PACKAGE__->new;
  $Test::Harness::Strap->{callback} = sub { ... };

it would be awesome.

--Geoff


Re: Testing module madness

2005-09-10 Thread Geoffrey Young

>>>>They add some value to me (show that at least something works).
>>>
>>>Either they're valuable enough that you install their prerequisites or
>>>they're not.
> 
> 
> But how am I supposed to find this out? I dont even know whether the 
> required modules are used for the tests only, without digging through the 
> source...

see Test::Plan (yes, yet another module :)

as I've said here before, I point to that module not necessarily as an
endorsement but as a pointer to a theory - I'm (now) very much convinced
that it's up to the individual tests to know what their requirements and/or
prerequisites are and skip over the test if those requirements are not met.
 so, if a module requires Test::Deep, Test::Foo, and Test::Beer, the test
file ought to do something like

  plan tests => 5, need qw(Test::Deep Test::Foo Test::Beer);

or, for the majority of folks, the skip_all equivalent.  this would be in
contrast to a largish PREREQ_PM that makes _build-time_ prerequisites of
modules that were only required for _test-time_ interactions.  it also has
the effect of being much more granular in that it's probably not _every_
test file that uses Test::Beer, so when you choose not to install Test::Beer
(and its hidden Test::Bar prerequisite) you'll probably still be able to
run _some_ tests...

fwiw

--Geoff


Re: Testing module madness

2005-09-11 Thread Geoffrey Young


Andy Lester wrote:
>> Usually, Test::* modules are only used for the test phase.
> 
> 
> I really don't understand the idea of "only used for the test phase",

there is clearly a distinction between the code required for a given module
to compile and run in a production environment and the code required to make
sure a suite of tests runs.

> as if the tests don't matter, 

well, tests mean different things to different people.

> or if there are levels of failure.  

I wouldn't take that stance, but lots of people do.  "oh, that test is
supposed to fail" is still heard all the time, even though those folks
hanging out here know how bad something like that really is.

> Either they install OK on the target system, and you can use them  with
> confidence, and they've done their job, 

which is really separate from running the tests, now isn't it?

take something as ubiquitous as apache.  nobody runs the test suite (mainly
because it doesn't come bundled with the software) but people install and
run it in droves.  do you run tests when installing, say, fedora?  no, but
you install the software anyway.  would you still install it if you ran the
tests and some failed?  of course (and don't tell me you wouldn't since I'll
bet everyone here has had failing perl tests in the past but we all still
used it anyway :)  now, say that the perl test suite required you to go out
and install, say, CuTest.  would you say to potential perl users "the build
process will fail if you don't have the stuff to run the tests" or "you
shouldn't have confidence in perl if you didn't run the tests (even though
we made you go through lots of hoops to do so)"?

I've known many an SA who would say something like "LWP has been around for
ages, seen many eyes, and if there are any bugs that we see that nobody else
has caught yet we'll deal with that later... so I don't bother with 'make
test'."  you can argue whether that is a sane practice or not, but people do
take that position.

> or you're going to  ignore the
> tests completely and then who needs 'em?

well, I'd argue that user-land tests exist (in part) to give me, the
developer, an idea of what the issues might be if they report a bug.  but
even if the users ignore the tests completely that doesn't mean they're
meaningless - they still ease the development process, and allow me to say
"ok, you're having this problem, what does this test say in verbose mode?"

> 
> It's like if I'm installing a washing machine, and I don't have a 
> level.  I can say "Ah, I only need it for the installation, and it 
> looks pretty level, so I don't need the level", 

I'd say that's the opinion of the vast majority of SAs I've ever met.

> or I can say "I'm not 
> using this appliance until I've proven to myself that the machine is 
> level and won't cause me any problems in the future because of an 
> imbalance."

yeah, well, you could say that.  last time I installed my washer I said
"looks pretty level to me, but I know where my level is if it makes a racket"

:)

--Geoff


Re: Testing module madness

2005-09-11 Thread Geoffrey Young


Andy Lester wrote:
>> yeah, well, you could say that.  last time I installed my washer I  said
>> "looks pretty level to me, but I know where my level is if it makes  a
>> racket"
> 
> 
> That's fine, but I'm still not shipping my washing machines without 
> explicit instructions to level the damn thing.  Similarly, I'm not 
> making any of my tests optional except in the case of tests where  they
> don't affect direct operation, as in t/pod{,-coverage}.t.

well, it's nice that you have that luxury with the code you write, but not
every module requires or can live with that kind of behavior :)

consider something real like mod_perl.  some of our tests are optional and
are skipped in various circumstances.  why?  well, not every installation
has LWP, so we skip some tests where we require it when all it does is make
our test writing life easier.  not every apache installation has mod_auth
installed, so we skip tests that exercise mod_auth-behaviors.  I could go
on, of course, but you get the idea.

the same analogy can hold for various perl modules, where one class interacts
with something that not everyone cares about.  say you've got some module
with shared code, client-specific code, and server-specific code.  say some
user of that code only cares about the client part.  now say that testing
the server code requires 3 additional Test:: modules.  you're going to force
the user to install those modules and run those tests even though they will
be exercising code he doesn't care about?  I don't think that kind of thing
makes sense at all.  furthermore, I think that doing so gives rise to
conversations like the one that started this thread, where people see umpteen
test dependencies and get frustrated.

remember, we're in an uphill battle here in the testing world.  every time
we frustrate a user we make it harder on ourselves - too many up-front
dependencies and folks will skip the tests altogether and carry that
frustration back to their desk when they decide whether to write tests
themselves...

--Geoff


Re: Devel::Cover problem with Apache::Test

2005-09-16 Thread Geoffrey Young

> I'd really love to use Devel::Cover - I love the effect mastering the
> request/response Apache::Test framework has had on my code, and I really
> want to start using code coverage as part of my toolkit.

yah, this is a bit more complex than it probably ought to be, but I guess
that's by design.  it could also be a bit better documented, but...

start with this

  http://people.apache.org/~geoff/Apache-Test-with-Devel-Cover.tar.gz

there are two parts to note in there that are different than a standard
Apache-Test-based distribution.  the first is the addition of
t/conf/modperl_extra.pl which (for anyone searching the archives) contains:

  if ($ENV{HARNESS_PERL_SWITCHES}) {
    eval {
      require Devel::Cover;
      Devel::Cover->VERSION(0.48);

      # this ignores coverage data for some generated files
      # you may need to adjust this slightly for your config
      Devel::Cover->import('+inc' => 't/response/');

      1;
    } or die "Devel::Cover error: $@";
  }

the second part is that you need to alter t/conf/extra.conf.in so
HARNESS_PERL_SWITCHES is passed to the underlying mod_perl process.  the line is

  PerlPassEnv HARNESS_PERL_SWITCHES

sure, there are other ways to do it without examining HARNESS_PERL_SWITCHES
I guess, but this is what I came up with and it seems to do the trick.

IIRC paul had some info on this at his YAPC::EU advanced Devel::Cover talk
(which I did not attend) so maybe he has his slides available for viewing as
well.

HTH

--Geoff


Re: Devel::Cover problem with Apache::Test

2005-09-16 Thread Geoffrey Young

> [snip - ah, helpful, now I understand how to use the testcover target]

:)


> Devel::Cover is reporting
> 100% statement coverage for a number of modules for which there are no tests
> as of yet (legacy modules I have yet to revisit)

I don't think that's unusual - D::C will aggregate all the results from all
your tests into a single coverage report, so if that legacy code is hit
_anywhere_ it will show up in the results.  of course, it really depends on
your situation, but I see this kind of "coverage bleeding" all the time.

> while reporting that
> subroutines for which I have tests are uncovered. 

did you click through the report html to find out what exactly isn't covered
that you thought you did?  are you absolutely certain that a condition was hit?

also, try running just one single test and see what happens.  if you
haven't noticed already, the D::C report shows how many times a statement
was hit, as well as coverage statistics (which is in itself an incredible
debugging aid :) so you should be able to track whether D::C is behaving
properly given a small enough set of test code.

or maybe you've already done all of this, so it's not much help.  but my
experience is that if I can get D::C to run under mod_perl without causing
core dumps (which it does on occasion; work, that is) it tends to be accurate.

--Geoff


Re: Devel::Cover problem with Apache::Test

2005-09-19 Thread Geoffrey Young



Hilary Holz wrote:

> Okay - here's what I've figured out - D::C is not recording any coverage
> info when I run a test in t/apache. D::C is recording coverage for all the
> tests that are in the t/ directory - and the reports are in the realm of the
> reasonable.
> 
> Have you had D::C collect coverage stats for tests in the t/apache,
> t/response/TestApache format?


yes.  when I run the skeleton I pointed you toward last time I get this

File                           stmt   bran   cond    sub   time  total
------------------------------ ------ ------ ------ ------ ------ ------
blib/lib/Apache2/Handler.pm    100.0   50.0    n/a  100.0  100.0   95.0

browsing through coverage.html shows that the handler() subroutine was 
indeed hit and coverage recorded.


do you not see similar results when running that sample?

--Geoff


Re: Devel::Cover problem with Apache::Test

2005-09-19 Thread Geoffrey Young

> No, not when I run the example out of the box - I had to move the
> PerlPassEnv directives to extra.conf.in and rebuild (this makes sense,
> though, as extra.conf is processed before modperl_extra.pl, while
> extra.last.conf is processed after - perhaps you fixed your local copy and
> haven't uploaded that change to the published version?)

yes :)  but I thought I did it before I sent you the initial email.  I guess
not.  anyway, what's up there now should not have that specific issue...

> 
> The test case in the demo doesn't contain the problem I'm having, though.
> Tests that aren't of the very specific write-the-request write-the-response
> in t/apache and t/response/TestApache work fine for me, too.

well, I don't understand the exact issue then.  if you can adjust the
(freshly uploaded) tarball such that it exhibits your problem in a very
minimal sense I'll take a look.  maybe that will help.

> 
> I have it on the run, though - It appears to be an interaction between
> TestMB and D::C. I have gotten it to give me correct results a couple of
> times - I'm currently trying to isolate/replicate the exact settings that
> yield correct results.

well, I can't speak to TestMB.pm or Module::Build in general.  if you shift
to the standard MakeMaker interface of 'make testcover' I will be in a
better position to help you.  if you still have issues we can isolate and
(hopefully) fix them, then ping david about making the required TestMB.pm
changes.

> 
> One problem which I've isolated is that running ./Build testcover overrides
> the select/ignore settings specified via import.

hmm.  well, one thing at a time.  all of Apache-Test was developed using
MakeMaker, with Module::Build support strapped on far after the fact.  so,
let's figure out what D::C issues there are with the "standard" interface
and see if we can't get it working there first.  then we'll make the
Module::Build people happy :)

> 
> More info in a bit
> 
> (This is fun, right? right?)

yes, of course.  was there ever any question? ;)

--Geoff


Apache-Test and Devel::Cover

2005-11-01 Thread Geoffrey Young
hi all :)

I just committed a patch to Apache-Test in svn that removes all the
additional work involved with getting Devel::Cover to work for server side
tests.  now a simple 'make testcover' should be all you need to do to get
coverage results from code within handler() subroutines - no more adding
modperl_extra.pl entries or other associated foo.

so, for the few here interested in this kind of thing, I'd like to hear
people's test results for completeness sake.  on my own system (which is
slowly dying), I get random core dumps and Devel::Cover failures both in the
way I used to do it and the way it works now, so in that sense things work
out "the same."  but if other people who used to have stuff working now have
increased problems I'd love to hear about it.  but hopefully all this patch
does is lower the barrier to getting coverage in the apache world.

for those not terribly svn savvy, you can get Apache-Test from svn like so

  $ svn checkout \
  > http://svn.apache.org/repos/asf/perl/Apache-Test/trunk Apache-Test

thanks, and enjoy.

--Geoff


Re: Apache-Test and Devel::Cover

2005-11-01 Thread Geoffrey Young

> Nice work, Geoff

:)

> 
> A few issues:
> 
> 1)
> 
> % make testcover
> Cannot run testcover action unless Devel::Cover is installed
> 
> and after installing Devel::Cover it still gives the same error, since
> it's hardcoded in Makefile.PL. May be adding a check and suggesting to
> rebuild Makefile if Devel::Cover is now installed?

ok.

> 
> 2) at the end of run it gives:
> 
> All tests successful, 1 test skipped.
> Files=7, Tests=22, 140 wallclock secs (134.28 cusr +  5.51 csys = 139.79
> CPU)
> server localhost:8529 shutdown
> make[1]: Leaving directory `/home/stas/apache.org/Apache-Test'
> make: cover: Command not found
> make: [testcover] Error 127 (ignored)
> 
> I don't use a standard perl, so 'cover' is not in the path. So it should
> probably do which(cover) and run it only then? Also could check the
> directory perl lives and add that to PATH, since that's where 'cover'
> lives if not installed global-wise.

I don't use a standard perl either, so cover is one of those things I need
to add to my path.

really, I'm not sure how much I want to run through hoops for people wrt
stuff like this - if you're a developer and you care about coverage you have
Devel::Cover (and its components) installed and ready for use, otherwise you
don't.  almost nobody will be running testcover who doesn't have things
installed properly, so I'm not sure it makes too much sense to go poking
around looking for stuff.

> 
> 3)
> 
> I do get segfaults and they are quite inconsistent. Even on
> Apache-Test's test suite. Of course this probably has nothing to do with
> your work, since have happened before as well. I wonder whether
> Devel::Cover is just as unstable under plain perl.

under plain perl it works out very nicely.  I think something about the
embedded perl environment is causing problems.  actually, until a recent
version (0.48 IIRC) it didn't work under mod_perl at all.  my own personal
experience wrt mod_perl is that it seems to work ok with mp1.  the mp2 +
Devel::Cover combination behaves very erratically, but I have better luck
with unthreaded perls and prefork than threaded perls (and horrible results
with worker, if it works at all which I don't think it does).

but we're getting closer each time, and paul has been outstanding creating
D::C at all, so there's not much to complain about :)

--Geoff



Test::Builder feature request...

2006-02-08 Thread Geoffrey Young
hi all :)

there's a feature split I'm itching for in Test::Builder, etc - the
ability to call is() and have it emit TAP free from the confines of
plan().  not that I don't want to call plan() (or no_plan) but I want to
do that in a completely separate perl interpreter.  for example, I want
to do something that looks a bit like this

  use Test::More tests => 1;

  print qx!perl t/response.pl!;

where response.pl makes a series of calls to is(), ok(), whatever.
while this may seem odd it's actually not - I'd like to be able to
plan() tests within a client *.t script but have the responses come from
one (or more) requests to any kind of server (httpd, smtp, whatever).

currently in httpd land we can do this by calling plan() and is() from
within a single server-side perl script, but the limitation there is
that you can only do that once - if I want to test, say, keepalives I
can't have a single test script make multiple requests each with their
own plan() calls without things getting tripped up.

so, I guess my question is whether the plan->is linkage can be broken in
Test::Builder/Test::Harness/wherever and still keep the bookkeeping intact
so that the library behaves the same way for the bulk case.  or
maybe at least provide some option where calls to is() don't bork out
because there's no plan (and providing an option to Test::More where it
doesn't send a plan header).

so, thoughts or ideas?  am I making any sense?

--Geoff


Re: Test::Builder feature request...

2006-02-08 Thread Geoffrey Young

>> so, thoughts or ideas?  am I making any sense?
> 
> 
> Yes, you are. 

*whew*

:)

> I think that the subprocess can load Test::More and 
> friends like this:
> 
> use Test::More no_plan => 1;
> Test::More->builder->no_header(1);

cool, thanks.

> 
> That will set No_Plan, Have_Plan, and No_Header to true, silencing  the
> "Gotta have a plan!" error and the "1.." message at the end.

with your suggestion I'm almost there:

1..1
ok 1 - this was a passing test
# No tests run!

http://people.apache.org/~geoff/test-more-separately.tar.gz

if you want to try...

--Geoff


Re: Test::Builder feature request...

2006-02-09 Thread Geoffrey Young

>> One of the problems is going to be numbering, surely?

but it shouldn't need to be, right?  I mean, TAP is merely a protocol and
there shouldn't be a requirement that the bookkeeping happen in the same
process as the TAP emitting process I wouldn't think.  in fact, if someone
were writing their own TAP implementation now without any knowledge of how
Test::More works - as in, say I needed a java TAP implementation and handed it
off to someone - I would expect the java implementation to just spit out the
proper stuff and have Test::Harness interpret those results solo.  and, in
actuality, the Test::Harness::TAP docs seem to indicate that is all that is
required.

but I understand why the perl implementations have this tie.  I just wanted
to be able to work around it.


> This works:

yes, excellent randy.  thanks for that.  it still seems a little hackish but
that's ok - hackish works for me if it means I can do what I want and nobody
else needs to do extra work :)
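
for the archives, the shape of it is roughly this (hedged heavily; the
tarball below has the real details):

  # ---- the parent .t: owns the plan, and skips its own end-of-run
  # ---- bookkeeping since the child's TAP lines are the whole story
  use Test::More tests => 1;
  Test::More->builder->no_ending(1);
  print qx{$^X t/response.pl};

  # ---- t/response.pl: emits "ok" lines with no plan header or summary
  use Test::More 'no_plan';
  Test::More->builder->no_header(1);
  Test::More->builder->no_ending(1);
  ok(1, 'this was a passing test');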

I made some tweaks to your format and added a few minor notes here

  http://people.apache.org/~geoff/test-more-separately.tar.gz

thanks all.

--Geoff


Re: Best Practice for testing compilation of scripts

2006-03-15 Thread Geoffrey Young


chromatic wrote:
> On Wednesday 15 March 2006 12:25, Jeffrey Thalhammer wrote:
> 
> 
>>I'm sure I could clean this up by opening a pipe
>>instead of using backticks and output redirection.
>>But even that doesn't smell very good.  I've looked
>>around on CPAN, but I have not yet found a Test::
>>module that seems appropriate.  I also wondered if
>>fiddling with $^C would do the trick somehow.  Any
>>suggestions?  Thanks.
> 
> 
> I've long intended to take t/test.pl from the Perl core distribution and wrap 
> up at least its runperl() in a Test:: module.  Perhaps that would work for 
> you?

compile_ok() ?

--Geoff


Re: Best Practice for testing compilation of scripts

2006-03-15 Thread Geoffrey Young

>>> I've long intended to take t/test.pl from the Perl core  distribution
>>> and wrap
>>> up at least its runperl() in a Test:: module.  Perhaps that would 
>>> work for
>>> you?
>>
>>
>> compile_ok() ?
>>
>> --Geoff
>>
> 
> It is unclear from Geoff's message above whether he is asserting that 
> function exists, or if he is merely proposing it

yeah, sorry, it's been a long couple of days...

I was suggesting the functionality be added to Test::More as compile_ok(),
rather than runperl() in some separate CPAN module, as it seems to closely
parallel use_ok() for modules and would be rather useful on a larger scale.
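
roughly what I have in mind, sketched against the Test::Builder API (nothing
like this exists in Test::More today, and the name is just a proposal):

  use Test::Builder;

  # hypothetical compile_ok($script): passes if `perl -c $script` is clean
  sub compile_ok {
      my ($script, $name) = @_;
      $name ||= "$script compiles";

      my $tb  = Test::Builder->new;
      my $out = qx{$^X -c $script 2>&1};
      my $ok  = $? == 0;

      $tb->ok($ok, $name) or $tb->diag($out);
      return $ok;
  }

the 2>&1 bit assumes a unixish shell, of course.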

fwiw

--Geoff


[OT] TDD only works for simple things...

2006-03-28 Thread Geoffrey Young
hi all :)

for those interested in both php and perl, it seems that php's native .phpt
testing feature will soon produce TAP compliant output - see greg beaver's
comments here

  http://shiflett.org/archive/218#comments

so, TAP is slowly dominating the world... but we all knew that already :)

what actually prompted me to write is a comment embedded there:

"Only the simplest of designs benefits from pre-coded tests, unless you have
unlimited developer time."

needless to say I just don't believe this.  but as I try to broach the
test-driven development topic with folks I hear this lots - not just that
they don't have the time to use tdd, but that it doesn't work anyway for
most "real" applications (where their app is sufficiently "real" or "large"
or "complex" or whatever).

since I'm preaching to the choir here, and I'd rather not get dragged into a
"yes it does, no it doesn't" match, is there literature or something I can
point to that has sufficient basis in "real" applications?  I can't be the
only one dealing with this, so what do you guys do?

--Geoff


Re: [OT] TDD only works for simple things...

2006-03-28 Thread Geoffrey Young


David Cantrell wrote:
> Geoffrey Young wrote:
> 
>>> "Only the simplest of designs benefits from pre-coded tests, unless
>>> you have
>>> unlimited developer time."
>>
>> needless to say I just don't believe this.
> 
> 
> Try writing a test suite ahead of time for a graphing library.  It's
> possible (indeed, it's trivial - just check the md5 hashes of the images
> that are spat out against images that you have prepared ahead of time in
> some other way) but it would be damnably time-consuming to create those
> tests.  Consequently, I've not bothered.  I throw data at it, and look
> at the results.  If the results are good I then put an md5 hash of the
> image into a regression test.

well, ok, I'll agree with you if you look at it that way.  but I think tdd
ought to happen at a much lower level than that - certainly there's more to
test than just spitting out an image?  you're probably calling several
different subroutines in order to generate that image, each of which can be
developed using tdd, and each of which gets more and more simple as you get
deeper into the application I'd suspect.

in general that's where I think the gap really rests for people who think
tdd doesn't work.  or maybe it's my misunderstanding.  but I think there are
places much deeper than the end product that warrant testing, and where tdd
can succeed and add value no matter how large or complex the code is.

--Geoff


Re: Testing with Apache/mod_perl

2006-03-29 Thread Geoffrey Young


> Apache::Test looks like it might be the way to go.  But it doesn't seem
> to play very nicely with Test::More, 

that's not really true.  yes, Apache-Test was based on Test.pm (for various
reasons I won't get into here) but I added Test::More support and use it all
the time.  grep for stuff like this in the docs

  use Apache::Test qw(-withtestmore)
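
a minimal .t in that style looks something like this (hand-waving the server
config; GET comes from Apache::TestRequest and need_lwp from Apache::Test):

  use Apache::Test qw(-withtestmore);
  use Apache::TestRequest qw(GET);

  plan tests => 2, need_lwp;

  my $res = GET '/index.html';
  ok($res->is_success, 'request came back ok');
  like($res->content, qr/welcome/i, 'body looks right');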

> and comments like this in the
> documentation:
> 
>   Use Apache::TestMM in your Makefile.PL to set up your distribution
>   for testing.
> 
> give me the fear.  I've already got my distribution set up, and it
> doesn't use ExtUtils::MakeMaker...

yeah, unfortunately a Makefile.PL is really the simplest way to set A-T up.
 you can also use Module::Build if you prefer.
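
for reference, the Makefile.PL side is only a handful of lines, roughly this
(per the Apache::TestMM docs, give or take):

  use ExtUtils::MakeMaker;
  use Apache::TestMM qw(test clean);    # override the test and clean targets

  Apache::TestMM::filter_args();        # pick up -httpd/-apxs from @ARGV
  Apache::TestMM::generate_script('t/TEST');

  WriteMakefile(
      NAME => 'My::Module',             # your module here
      # ... the usual MakeMaker arguments
  );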

without either of those options you kind of need to resort to a bit of
trickery.  see, for example
  http://www.modperlcookbook.org/~geoff/slides/OSCon/2005/OSCon-2005-code.tar.gz

and take a look at mod_example_ipc-install.  it uses some gmake trickery so
that 'make' invokes 'perl Makefile.PL' and commands unrecognized by the
lowest-level makefile get passed to the generated Makefile.  yeah, I know,
but it's the best I've been able to come up with for people who are afraid
of a Makefile.PL.

> 
> Am I setting myself up for pain, or is there a nice simple approach to
> doing this that I've overlooked?

I use Apache-Test all the time - every day, in fact, for a very large
codebase with a very complex setup (way more complex than the makefile
trickery I just described).  I pretty much hacked together a series of make
targets that pull in the various libraries I need, does the proper overlays,
etc.  granted, it's for $dayjob, where the environment is considerably more
fixed than a distrobution would allow for.

but, outside of all that, you can pull a simple example of Apache-Test in
action here:

  http://people.apache.org/~geoff/Apache-Test-skeleton-mp1.tar.gz

or

  http://people.apache.org/~geoff/Apache-Test-skeleton-mp2.tar.gz

run

  $ perl Makefile.PL -httpd /path/to/your/httpd

and see what the various test* targets and the resulting t/TEST file look
like - you could pretty much copy the t/TEST file and figure out what features
of the make targets you want to keep.  in reality, once the t/TEST file has
been generated it runs everything, so if you (or your userbase) is
intelligent enough having A-T in @INC and t/TEST should be all you need to
actually use Apache-Test.

HTH

--Geoff


Re: Testing with Apache/mod_perl

2006-03-30 Thread Geoffrey Young


Adam Kennedy wrote:
> I'd also add a small warning in that Apache::Test does seem to want to
> dominate the entire test suite (run everything from TEST) and so may not
> be as suitable in cases where you have 50-500 test scripts already, and
> you just want a few to work with Apache::Test and a normal Makefile.PL
> built with something like Module::Install.
> 
> So I've had to limit my use of it to specific cases where mod_perl
> dominates the purpose of the entire distribution.
> 
> Unless you've fixed this since the tutorial :)
> 
> Because I really don't mind the cost of having to boot up TEST 4 times
> for 4 scripts, when only 4 out of 100 need it.

well, there's no harm in using Apache-Test to run _everything_, even if only
4 out of 100 test scripts need the apache server - it's not like t/TEST adds
any real overhead to the process, since _something_ needs to call runtests...

but if you are talking about "test-code-test-code-test" overhead, where A-T
starts and stops the server each time even if you don't need it and you're a
developer who only wants to run a single script, then yeah, I hear ya :)

in truth, that was the biggest complaint about A-T from the developers here
at work - when you don't need apache and only want to run a single test over
and over again it just takes too long.  so, a few things were added to
Apache-Test to try to remedy this...

the first is that you can pass -no-httpd to Apache-Test, like so

  $ export APACHE_TEST_EXTRA_ARGS=-no-httpd
  $ make test TEST_FILES=t/noapache.t
  [ no server configuration or startup ]

which saves a boatload of time.  you can actually speed things up even more
by using the 'make run_tests' target instead of 'make test' - the test
target runs a full clean scan to hook the autogenerated A-T magic each time,
which you obviously don't need if you're not interested in A-T.

so, use both of those techniques and you'll find the overhead gets very low.

I'll warn you, though, you're going to get bitten by -no-httpd and/or
run_tests eventually.  some day, you're going to try to run the apache-based
tests and you'll have -no-httpd in your environment and things will blow up
and you'll spend half a day trying to figure out why.  or you'll add a new
A-T test file and use run_tests and the test won't work and you will scratch
your head.  so you'll curse A-T and its magic for a while...

anyway, HTH.  and I wish there were more time in life that I could
communicate all this greatness to the masses in some talk or docs, but there
just isn't (for me, anyway...)

--Geoff


Re: Testing with Apache/mod_perl

2006-03-30 Thread Geoffrey Young
we should keep this on list :)

Adam Kennedy wrote:
> 
> 
> Geoffrey Young wrote:
> 
>>
>> Adam Kennedy wrote:
>>
>>> I'd also add a small warning in that Apache::Test does seem to want to
>>> dominate the entire test suite (run everything from TEST) and so may not
>>> be as suitable in cases where you have 50-500 test scripts already, and
>>> you just want a few to work with Apache::Test and a normal Makefile.PL
>>> built with something like Module::Install.
>>>
>>> So I've had to limit my use of it to specific cases where mod_perl
>>> dominates the purpose of the entire distribution.
>>>
>>> Unless you've fixed this since the tutorial :)
>>>
>>> Because I really don't mind the cost of having to boot up TEST 4 times
>>> for 4 scripts, when only 4 out of 100 need it.
>>
>>
>> well, there's no harm in using Apache-Test to run _everything_, even
>> if only
>> 4 out of 100 test scripts need the apache server - it's not like
>> t/TEST adds
>> any real overhead to the process, since _something_ needs to call
>> runtests...
> 
> 
> Well, there is a performance argument it's true, but the things that
> freak me out a bit more is situations where some of the testing involves
> things that might not like playing with apache.

I just don't understand this at all.

  $ t/TEST t/has_nothing_to_do_with_apache_at_all.t

is essentially the same as

  $ prove t/has_nothing_to_do_with_apache_at_all.t

albeit with minor semantic differences between TEST and prove.  but the idea
is the same - run t/test.t and pass the results to Test::Harness.  what
t/test.t contains is entirely up to you - interact with apache or don't.

> 
> Imagine a distribution which has mod_perl mod_perl2 fast_cgi and so on
> parts, 

so?  A-T is perfectly designed to work with all, none, or any combination of
the above.  well, you can't have mp1 and mp2 in the same httpd binary, of
course, but you can handle a distribution that has separate binaries with
any compatible combination of the above.

> or WxWindows, or other various scenarios.

I don't know what that is, but if it's not apache then I obviously don't
care.  but neither should you - there are simple ways to determine whether
apache is available or not (see below)

> 
> That it may in future situation (I can see myself encountering) lack the
> necessary flexibility for my situation.

A-T is nothing if not flexible.

> 
> That the _option_ to trade off speed for flexibility isn't as easy as I
> personally would like it to be, rather than that it doesn't address any
> given known scenario.

really, I don't know what you're saying, but it sounds vaguely like saying
A-T isn't suited to some particular set of needs, when the issue may be that
you just don't know how to use it appropriately.

so, I just uploaded this to pause:

http://www.modperlcookbook.org/~geoff/modules/WebService-CaptchasDotNet-0.06.tar.gz

now, you might not be interested in the module, but here are various
platforms under which the test suite will run as a whole:

  - Apache-Test not installed
  - Apache-Test installed but no apache "found"
  - apache 1.0, any configuration, and set of modules
  - apache 2.0, ""

the single test file that actually requires apache will be skipped unless

  - apache 1.0 + mod_cgi
  - apache 2.0 + mod_cgi or mod_cgid

now, to trigger a "found" apache you use standard Apache-Test semantics, such as

  $ perl Makefile.PL -apxs /path/to/apache

or

  $ export APACHE_TEST_APXS=/path/to/apache

or a few other things.  of course, now that I've released it I realize I
forgot to update the INSTALL with this information.  but so it goes when you
rush into things...

--Geoff


Re: Testing with Apache/mod_perl

2006-03-31 Thread Geoffrey Young

> 
> A-T requires me to do things differently, and it's that difference that
> introduces the lack of flexibility.

I had a bunch of foo written that I removed, mainly because this is the real
issue, for you I guess - the idea that different is somehow bad or
inflexible, that anyone who creates something useful needs to integrate it
so tightly with the current CPAN model that users shouldn't need to
break with what they know in order to use it.

funny, though, I don't hear anyone complaining about, say, Devel::Cover
because you need to do some extra foo with a standard Makefile.PL. but ok,
you got me, it's different.

your criticisms seem to fall into two camps

  o A-T features (like hitting 1.3 and 2.0 in the same run)
  o user integration (like not being able to use prove)

for the former, all I can say is feel free to suggest features you'd like to
see (sans handwaving) in the appropriate forum.

for the latter, well, there's a very real reason you can't use things like
prove, Module::Install, etc with Apache-Test - they need to become A-T
aware.  really, how could prove know to start any standalone server (apache,
sendmail, or whatever)?  show me an interest in making core and other
third-party modules people rely on A-T aware and we can work on it.

until then, it's not my itch to work on reducing a very, very complex task
beyond the 4 extra lines in your Makefile.PL and the setting of an
environment variable.

/me out - it's friday

--Geoff


Re: Non-Perl TAP implementations

2006-04-17 Thread Geoffrey Young


Andy Lester wrote:
> 
> I'm adding a section to Test::Harness::TAP on non-Perl TAP.
> 
> http://svn.perl.org/modules/Test-Harness/trunk/lib/Test/Harness/TAP.pod
> 
> If you know of one, please send me some text to add.

all the big PHP players now produce TAP

  o phpt (outputs TAP by default as of the yet-to-be-released PEAR 1.5.0)

  http://pear.php.net/PEAR


  o PHPUnit has a TAP logger (since 2.3.4)

  http://www.phpunit.de/wiki/Main_Page


  o there's a third-party TAP reporting extension for SimpleTest

http://www.digitalsandwich.com/archives/51-Updated-Simpletest+Apache-Test.html


  o Apache-Test's PHP writes TAP by default and includes the standalone
test-more.php

  http://search.cpan.org/dist/Apache-Test/


there's also libtap (written in C)

  http://jc.ngo.org.uk/trac-bin/trac.cgi/wiki/LibTap

HTH

--Geoff


Re: Non-Perl TAP implementations

2006-04-18 Thread Geoffrey Young


chromatic wrote:
> On Monday 17 April 2006 18:50, Ovid wrote:
> 
> 
>>The only problem I see with that is the occasional buffering errors I
>>see on my Mac where the STDERR and STDOUT don't line up.
> 
> 
> Agreed.  Is it too late to send everything to STDOUT where it belongs?

just for everyone's knowledge, for much of Apache-Test STDERR is the apache
error_log, so we override things so errors are sent to STDOUT where they're
immediately visible as output from the http client.

so, as long as there remains an official way to redirect the two streams as
I deem fit then it doesn't matter to me.  but I thought folks might like to
be reminded that there's no guarantee that STDERR and STDOUT are the same
"screen" so mixing up where test output and/or diagnostics ends up may or
may not be a good thing, depending on your pov.

--Geoff


Re: Non-Perl TAP implementations

2006-04-18 Thread Geoffrey Young


Adam Kennedy wrote:
> 
>>> Schwern made one small change in the STDERR format, and the recursive
>>> cascade of failing test-testing modules hit something like 3000 CPAN
>>> distributions.
>>
>>
>> While I agree that this caused problems, those modules were relying on a
>> format that was not spec'ed out or documented.
> 
> 
> That is irrelevant. You put something into CPAN, get massive numbers of
> people using it, and leave it alone and have it remain stable for 4
> years, it becomes an API whether you wanted it to be or not :)

really?  so I use an age old (but undocumented) feature of Config, then
Config changes and it's _not_ my fault?[1]  of course it is - perl is great
because you can go mucking around with stuff you shouldn't, but everyone
knows it's your own fault if it bites you later on.

--Geoff

[1] http://dev.perl.org/perl5/list-summaries/2005/20051003.html


Re: skip_all with Test::More?

2006-05-31 Thread Geoffrey Young
Pete Krawczyk wrote:
> Subject: skip_all with Test::More?
> From: Tels <[EMAIL PROTECTED]>
> Date: Wed, 31 May 2006 17:53:46 +0200
> 
> }
> }use Test::More;
> }
> }plan tests => 123;
> }
> }skip_all( 'reason' ) if ...;
> }
> }# tests here
> }
> }
> }Did I miss something or is this simply not yet possible?
> 
> Actually, it goes something like this:
> 
> 
> use Test::More;
> plan skip_all => "No tests here" if $some_condition;
> plan tests => 22;
> 
> 
> skip_all is a plan descriptor and as such needs to be given to plan.
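
spelled out as a complete (if contrived) test file - the condition here
is made up:

  use strict;
  use warnings;
  use Test::More;

  # skip the whole file if the (hypothetical) prerequisite is missing
  plan skip_all => 'Some::Module required for these tests'
    unless eval { require Some::Module; 1 };

  plan tests => 2;

  ok(1, 'first test');
  ok(1, 'second test');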

another alternative is to try Test::Plan

  http://search.cpan.org/dist/Test-Plan/

--Geoff


Re: Continuous testing tools

2006-06-08 Thread Geoffrey Young

> Since you're using C++, you can probably use libtap
> (http://www.onlamp.com/pub/a/onlamp/2006/01/19/libtap.html and
> http://jc.ngo.org.uk/trac-bin/trac.cgi/wiki/LibTap) for writing the tests and
> then you could use a Perl harness to collect those results.

just out of curiosity, has anyone gotten this to work?  about two months
ago I tried to use it for a project but struggled with compilation
issues that were sufficient for me to stop trying...

--Geoff


Re: Continuous testing tools

2006-06-09 Thread Geoffrey Young
Nik Clayton wrote:
> Geoffrey Young wrote:
> 
>>> Since you're using C++, you can probably use libtap
>>> (http://www.onlamp.com/pub/a/onlamp/2006/01/19/libtap.html and
>>> http://jc.ngo.org.uk/trac-bin/trac.cgi/wiki/LibTap) for writing the
>>> tests and
>>> then you could use a Perl harness to collect those results.
>>
>>
>> just out of curiosity, has anyone gotten this to work?  about two months
>> ago I tried to use it for a project but struggled with compilation
>> issues that were sufficient for me to stop trying...
> 
> 
> I managed to get it to work.  But that's probably something of a given.

:)

> 
> I never received much positive feedback about libtap, so I stuck it on
> the backburner, figuring that people were using other C frameworks.

that's too bad.  I definitely wouldn't give it up - it would be great to
have a robust TAP library for C, and that was exactly what I was
looking for.  I suspect that all TAP fans would love to know that
something in C is being maintained... when they need it :)

> 
> I've just gone back to check, and it seems that a few people have
> submitted bug reports, but trac wasn't forwarding them to me, so I never
> saw them.

hmph.

> 
> If you let me know what the compilation issues where I can try and
> resolve them.

I don't know that I'll be able to do that with exact precision.  I was
at a client site and they were using their own test library, so I spent
maybe two hours trying to get libtap to work instead.  I was met with
all kinds of unresolved symbols or whatnot, which was painful for me as
just a casual C guy...

really, to be ultimately useful it would be great if it were all very
self-contained.  for example, if it were all in libtap.h as a set of
macros or something so I could include just one file and *poof* that
would be awesome.  well, from an end-user pov more than a developer pov
:)  but my memory tells me the process was actually a bit more involved
than that, so it wasn't exactly easy for me to use, which made me grumble
(and a bit sad).

anyway, sorry I can't be more specific at the moment - I probably should
have bugged you at the time.  if I ever do try it again I'll let you know.

--Geoff


interesting behavior in use_ok()

2006-06-27 Thread Geoffrey Young
hi all :)

so, as a standard practice, I start with

  use_ok($class);

as the first test in each file, the idea being that if the class doesn't
compile I shouldn't care about the results of the rest of the test - I
know immediately that subsequent failures are because I introduced a
typo or something.

agreement with that rationale aside, because most of the time the typos
are introduced in the subroutine I'm testing, I get a bunch of failures
later on in the test file (which I ignore).  but yesterday a fellow coder
introduced a typo at the package level, something equivalent to this:

  package Foo;

  use strict;
  use warnings FATAL => qw(all);

  foo;  # blows up

  sub bar {
return "inside bar";
  }

simple enough.  I was, however, met with surprise when my test for bar()
came back ok:

  $ perl bar.t
  1..2
  not ok 1 - use Foo;
  #   Failed test 'use Foo;'
  #   in bar.t at line 5.
  # Tried to use 'Foo'.
  # Error:  Bareword "foo" not allowed while "strict subs" in use...
  # Compilation failed in require at (eval 3) line 2.
  # BEGIN failed--compilation aborted at bar.t line 5.
  ok 2 - bar() called successfully
  # Looks like you failed 1 test of 2.

so, the compile test failed, but bar() could still be called and, in
fact, even executed successfully.
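
bar.t itself is nothing fancy - roughly this, with the exact assertion
assumed:

  use strict;
  use warnings;

  use Test::More tests => 2;

  # test 1 - does the class compile?
  use_ok('Foo');

  # test 2 - passes anyway: perl had already compiled sub bar into the
  # Foo:: namespace before the bareword error aborted the use()
  is(Foo::bar(), 'inside bar', 'bar() called successfully');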

anyway, I'm not suggesting this is a bug.  I might be suggesting that
use_ok() should change somewhat.  but, more than all that, I figured
folks would just find it interesting if it's something y'all didn't
already know.

--Geoff


Re: TAP::Harness

2006-07-05 Thread Geoffrey Young
sorry for dropping in on this late, but it was a holiday weekend :)

> * How can I help?
> 
> Provide use cases, what would you want to do with Test::Harness if you
> could?  What are you doing with Straps?  What features do other
> testing systems (JUnit, for example) have that you'd like to see in
> Perl?

the current threads are a bit much to wade through, but in case nobody else
has brought it up, I've mentioned the idea of making it simple to use
plan() and Test::More functions before[1], because it's what we do over
in httpd land (plan in perl from the shell, emit TAP in perl from the
server).  you can do it now[2] but ideally it should be a bit more
seamless I'd think.

since you're asking :)

--Geoff

[1] http://www.nntp.perl.org/group/perl.qa/5445
[2] http://www.nntp.perl.org/group/perl.qa/5455


Re: TAP::Harness

2006-07-05 Thread Geoffrey Young
Geoffrey Young wrote:
> I've mentioned the idea of making it simple to use
> plan() and Test::More functions before

blarg... insert "separately" ^ here.  all the rest is pretty simple
already :)

--Geoff


Re: Test::Builder feature request...

2006-07-06 Thread Geoffrey Young
Michael G Schwern wrote:
> On 2/9/06, Geoffrey Young <[EMAIL PROTECTED]> wrote:
> 
>> > This works:
>>
>> yes, excellent randy.  thanks for that.  it still seems a little
>> hackish but
>> that's ok - hackish works for me if it means I can do what I want and
>> nobody
>> else needs to do extra work :)
>>
>> I made some tweaks to your format and added a few minor notes here
>>
>>   http://people.apache.org/~geoff/test-more-separately.tar.gz
> 
> 
> A less hackish version of plan.t is...
> 
>  use Test::More;
>  my $TB = Test::More->builder;
>  $TB->no_ending(1);
> 
>  plan tests => 3;
> 
>  print qx!perl t/response.pl!;

cool, thanks.  I've updated the example package to include that format
as well.
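
for completeness, the response.pl half is just a plan-suppressed test
script run in its own process - a sketch (the actual tests are whatever
you need):

  # response.pl
  use strict;
  use warnings;
  use Test::More;

  my $TB = Test::More->builder;
  $TB->no_header(1);    # the driver already printed "1..3"
  $TB->no_ending(1);    # skip the "Looks like..." summary

  plan tests => 3;      # declare the plan, just don't print it

  ok(1, 'first response test');
  ok(1, 'second response test');
  ok(1, 'third response test');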

--Geoff




Re: TAP diagnostic syntax proposal

2006-07-11 Thread Geoffrey Young
Ovid wrote:
> - Original Message  From: Jonathan Rockway <[EMAIL PROTECTED]>
> 
>> What else is TAP targeted to?  C / C++ / Java?
> 
> 
> PHP tests often use TAP (don't know the name)

almost all of the PHP test frameworks now offer TAP support - see

  http://search.cpan.org/dist/Test-Harness/lib/Test/Harness/TAP.pod#PHP

there's also a decent list of other languages as well.

--Geoff




Re: TAP diagnostic syntax proposal

2006-07-11 Thread Geoffrey Young

> However, most perl tests don't care about TAP, they use Test::More and
> Test::Harness and happen to exchange data via TAP.  If Test::More and
> Test::Harness decided to use "YAP" (YAML Anything Protocol? :), then most
> applications would probably never notice.

most _perl_ applications would never notice.

IMHO, however this overhaul ends up playing out, TAP's current marvel is
that it's wonderfully simple and very forgiving - if the new version
isn't completely backward compatible y'all should call it something else,
lest you risk alienating all the non-perl folks who embraced the
protocol because of its simplicity.

in other words, call it TAP but break current non-perl TAP
implementations and I think you'll wipe out the userbase that some of us
spent a lot of effort trying to win over (and succeeding :)

--Geoff


Re: TAP diagnostic syntax proposal

2006-07-13 Thread Geoffrey Young
Jonathan Rockway wrote:
> While I agree with David, this argument is almost completely pointless. 
> Nobody reads the raw TAP output!

are you serious?  listen to what the people here are saying - we _all_
read the raw TAP output, all the time, and not because we're TAP
developers interested in the underlying implementations.  as users, the
(current) raw TAP diagnostics help us figure out why a test failed, and
if they don't make sense due to bad wording or reversed expectations
then it's that much harder than it needs to be.

--Geoff



Re: TAP ain't "Test All Perl"

2006-08-16 Thread Geoffrey Young

> Trouble is at the moment all this is still in the prototype stage.
> And none of them are killer.

supporting TAP means you can integrate with Test::Harness.  now, I know
that might not seem like much, but we've got quite the number of mature
testing tools over in perl-land that are pretty cool.

for example, because nearly all the popular PHP testing tools now
support TAP (SimpleTest, PHPUnit), PHP test-minded folks don't have to
change what they are doing to take advantage of, for example,
Apache-Test.  the combination of both means they can write their tests
the way they always have, but (painlessly) run them in the
httpd-embedded PHP instead of in the CLI... without a browser and
aggregated on the command line, just like they're used to.

so, what else is out there that might work similarly, extending other
languages with cool perl tools?  I dunno, but I think the ability to
integrate like this is something that would make a killer feature if
folks figured out how to leverage (and market) it.

--Geoff


Re: TAP ain't "Test All Perl"

2006-08-16 Thread Geoffrey Young
Adrian Howard wrote:
> 
> On 16 Aug 2006, at 14:45, Geoffrey Young wrote:
> [snip]
> 
>> I dunno, but I think the ability to
>> integrate like this is something that would make a killer feature if
>> folks figured out how to leverage (and market) it.
> 
> 
> The problem is that the test runners for some frameworks are "cooler" 
> than Test::Harness ("pretty" GUIs, integrated with all major IDEs, 
> etc.) so you've got a bit of a hill to climb before you can sell the 
> extra benefits a "standard" based system might give you.

yeah, well as far as I'm concerned that's kinda their problem - if perl
came with only one of these cooler interfaces, with pretty colors that I
had to eye-scan, I might not be testing as much.  now give me something I
can stick in crontab and look at the failure report tomorrow (if anything
fails) and we're all set ;)

but sure, I see your point.  in fact, people have made the argument that
Test::Harness integration is actually a stumbling block for PHP testing,
since most PHP developers live in a win32 world and only have a text
editor and their browser open (and I guess an ftp client or something),
with no unix-type interface available most of the time.  whether this is
true or not, the fact that someone said it would seem to indicate that,
yeah, the command line isn't all that great a tool if you're not already
there using it.

--Geoff

