Re: Fixing the damage caused by has_test_pod

2007-08-02 Thread David Cantrell

Eric Wilhelm wrote:

# from David Cantrell

Let us assume that I write a module that, if you have a particular
module installed, will do some extra stuff.  This is very common.
...
Skipping tests because you correctly identify that the optional module
isn't available is, of course, counted as passing.
Test::Pod is *not* optional under PERL_AUTHOR_TESTING.  If your intent is 
to test the pod (here, I am taking PERL_AUTHOR_TESTING to imply that 
we're trying to prevent bad pod), you must have the module that tests 
it.


That is, to test the pod, you have to test the pod.

To put it another way, the pod is not tested unless the pod tests have 
been run.


If the pod tests didn't get run, the pod hasn't been tested.

A pod test which skips due to 'no Test::Pod' has not tested the pod.

To test the pod, you must run the pod tests.
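For reference, the idiom at issue is presumably the widely-copied t/pod.t
boilerplate, sketched here; note that a missing Test::Pod becomes a skip,
which the harness counts as a pass:

```perl
# t/pod.t -- the common boilerplate under discussion (a sketch).
# If Test::Pod is missing, the whole file skips and the harness
# reports a pass, even though the pod was never looked at.
use strict;
use warnings;
use Test::More;

eval "use Test::Pod 1.00";
plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;

all_pod_files_ok();
```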


Seeing that you obviously think I'm an idiot, there's probably not much 
point continuing.


--
David Cantrell


Re: Summarizing the pod tests thread

2007-08-02 Thread Joshua ben Jore
On 7/31/07, David Golden [EMAIL PROTECTED] wrote:
 On 7/31/07, chromatic [EMAIL PROTECTED] wrote:
  Please explain to me, in detail sufficient for a three year old, precisely
  how:
 
  1) POD can possibly behave any differently on my machine versus anyone 
  else's
  machine, being non-executed text and not executed code

 What version of Pod::Simple do you have?  What version does everyone
 else have?  Will POD parsed on your machine always parse the same
 everywhere?

 "Should you care?" is really your second question:

Just FYI, using valid pod like =head3 causes runtime failures during
build on older versions of Pod::Whatever that's used during the
module installation by EU::MM.

It's really annoying to have to edit pod so that it isn't throwing
exceptions during execution. Apparently pod is executable code to some
part of Pod::Simple or whatever.

Josh


Re: Summarizing the pod tests thread

2007-08-02 Thread Eric Wilhelm
# from Joshua ben Jore
# on Thursday 02 August 2007 07:13 am:

Just FYI, using valid pod like =head3 causes runtime failures during
build on older versions of Pod::Whatever that's used during the
module installation by EU::MM.

Good point.

And still, this is something which *can* be checked somewhere other than 
the install target: by the author, CPANTS, or cpan-testers.

--Eric
-- 
Issues of control, repair, improvement, cost, or just plain
understandability all come down strongly in favor of open source
solutions to complex problems of any sort.
--Robert G. Brown
---
http://scratchcomputing.com
---


Re: running author tests

2007-08-02 Thread Eric Wilhelm
# from David Cantrell
# on Thursday 02 August 2007 02:54 am:

Eric Wilhelm wrote:
 # from David Cantrell

 Skipping tests because you correctly identify that the optional
 module isn't available is, of course, counted as passing.
...
 To test the pod, you must run the pod tests.

Seeing that you obviously think I'm an idiot, there's probably not
 much point continuing.

No, I just want to be clear that the pod must get tested by the pod 
tests and that eval {require Test::Pod} is the wrong trigger if 
$ENV{PERL_AUTHOR_TESTING} is set.  The switch state implies that the 
module is mandatory.

I (the author) don't want to accidentally skip the test due to a 
missing/corrupt/old module (that is exactly why we're having problems 
with the current t/pod.t invocation.)
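A sketch of the distinction being drawn here: the same t/pod.t, but with
$ENV{PERL_AUTHOR_TESTING} (not the presence of Test::Pod) as the trigger,
so a missing or broken Test::Pod fails loudly instead of skipping. This is
an illustration of the proposal, not an agreed convention:

```perl
# t/pod.t -- sketch: skip for end-users, but never for the author.
use strict;
use warnings;
use Test::More;

plan skip_all => 'set PERL_AUTHOR_TESTING to run the POD tests'
    unless $ENV{PERL_AUTHOR_TESTING};

# With the switch on, Test::Pod is mandatory: a missing, corrupt,
# or old module is a test failure, not a silent skip.
eval "use Test::Pod 1.00";
die "PERL_AUTHOR_TESTING is set but Test::Pod 1.00 is unusable: $@" if $@;

all_pod_files_ok();
```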

If we're going to establish a recommendation for usage of 
$PERL_AUTHOR_TESTING and/or an author_t/ directory, it should behave 
somewhat like 'use strict' in that it needs to actually run *all* of 
the author tests (with very few exceptions.)

For such usage, extra dependencies would only be optional in extreme 
cases.  Further, pod/pod-coverage would possibly even be 'assumed' 
tests.  That is, why bother even having a .t file if the authortest 
tool knows how to run testpod and testpodcoverage?

--Eric
-- 
Like a lot of people, I was mathematically abused as a child.
--Paul Graham
---
http://scratchcomputing.com
---


Re: Summarizing the pod tests thread

2007-08-02 Thread Salve J Nilsen

Thanks for reading through my wall of text, Adam. :)

Adam Kennedy wrote:

Salve J. Nilsen wrote:
Let's say Joe Sysadmin wants to install the author's (a.k.a. your) 
module Useful::Example, and during the test-phase one of the POD tests 
fails.


Joe Sysadmin doesn't use modules, so let's try the following.

Joe Sysadmin wants to install the JSAN client, because his HTML/JavaScript
guys want to use some of the JavaScript modules. Joe Sysadmin doesn't know
Perl. He does not know what POD is, and has never heard of CPANTS. He will
never need to read the documentation for any dependencies of the JSAN
client.


Ok, let's try.


1) Joe's POD-analyzing module has a different/early/buggy notion of what 
the POD syntax is supposed to be. This can be fixed by you in several 
possible ways:


Joe Sysadmin runs sudo cpan --install JSAN. 10,000 lines of text scroll
down the screen for about 10 minutes. 9 minutes and 8,500 lines in, the POD
tests in a utility module 6 layers of recursive dependencies up the chain
fail.

Installation of that module fails, and as the CPAN recursion unwinds, another
5 modules recursively fail.

The final summary lists 6 modules which did not install. The original reason
is 1,500 lines above this summary, at the top of many many details about
failed tests due to the missing dependencies.

Joe Sysadmin has no idea why the installation failed. He scrolls up through
the last 1,000 lines of output before giving up, and just runs sudo cpan
--install JSAN again. It still fails, with 2,000 lines of output.

At this point, MOST people who are not Perl developers are utterly lost.
I've seen it several times in the JSAN IRC channels, as quite competent
JavaScript, Python, Ruby and ASP.Net coders come in to ask for help because
their CPAN installation fails.


Ok, I see you're describing several bugs (other than the one breaking the
install chain):

Bug #1: The module build output text is too verbose. (Hiding the detailed
output would be useful.)
Bug #2: The module build output isn't stored anywhere accessible, or at all.
(Keeping the module build output in a Build.log would be useful.)
Bug #3: If the build output IS stored somewhere, there's nothing telling Joe
about this fact. (Telling at the end of the build where the Build.log
can be found may help: "TESTS FAILED! SEE /tmp/Build.log FOR DETAILS")
Bug #4: There isn't a sufficiently clear test output summary telling Joe which
module broke the dependency chain - so he can't look into it himself.
(Visualizing the dependencies and showing where it broke may help. Maybe
displaying the relevant dependencies the way tree(1) does?)
Bug #5: There's no simple way available to Joe to report/post the failed test to
someone who cares. (It may help to ask whether the test failures should be
reported, possibly resulting in the installation of Test::Reporter and
it picking up the previous Build.log files.)



The author has no idea it has failed for the user, because the user does not
know how to report the fault.


This ought to be something the authors (and the community) can improve (see bug
#5.)



Likewise, not only does the user not know HOW to blame the pod analyzer, but
often does not even know what POD is.


He doesn't have to know what POD is, just that there has been an error, and how
to report it. :)



But even if the author's influence over Joe Sysadmin's installation is
rather limited, it's still the author's duty to make sure Joe can know (as
best as possible) that the module is in good shape.


Surely the best way to do this is simply to not have failing tests for 
things that aren't related to the actual functionality of the module.


Well, in some ways, I agree with you. But sadly, no module is an island. By 
running all the tests (even the ones that don't directly concern the 
module's functionality), we can learn about other things too. Things like 
"Does Test::Pod understand my documentation syntax?" or "Does 
Test::Pod::Coverage give the result I expect?" or "Are my tests set up 
correctly?" or "Have I kept my dependency requirements up to date?" or even 
secondary concerns like "Does the module I use for testing POD function 
correctly?" or even "Is the syntax I use to describe my documentation 
powerful enough?"


By letting the end-user run these tests, you get a much earlier warning about 
these questions (and therefore an earlier chance to find an answer to them), but 
at the cost of some annoyance for the user. Because of this, I think the 
feedback one can get from such tests easily outweighs any concerns from the user 
about non-essential tests failing...


But this isn't a binary "yes/no to POD tests" issue. There's no reason to make
this into an all-or-nothing situation. We can still let the end-user be the
master of her own world, by allowing her to run the less essential tests only
when she explicitly asks for it, e.g. by using $ENV{PERL_AUTHOR_TESTING}, or
asking during setup if she wants to run the author tests (with default answer
no.)

Re: formalizing extra tests

2007-08-02 Thread Eric Wilhelm
# from Salve J Nilsen
# on Thursday 02 August 2007 08:19 am:

But this isn't a binary yes/no to POD tests issue. There's no reason
 to make this into an all-or-nothing situation. We can still let the
 end-user be the master of her own world, by allowing her to run the
 less essential tests only when she explicitly asks for it. e.g. by
 using $ENV{PERL_AUTHOR_TESTING}, or asking during setup if she wants
 to run the author tests (with default answer no.)

Yep.

We do need to have standard ways to do this.

They need to be incorporated into CPANTS and cpan-testers.

They shouldn't prevent end-users from using otherwise-functioning code.

We also need to make this information clear and accessible to CPAN 
authors.

http://perl-qa.hexten.net/wiki/index.php/Best_Practices_for_Testing#TODO

--Eric
-- 
If the above message is encrypted and you have lost your pgp key, please
send a self-addressed, stamped lead box to the address below.
---
http://scratchcomputing.com
---


Re: .t files as specs

2007-08-02 Thread nadim khemir
On Tuesday 19 June 2007 17:52, Mike Malony wrote:
 So I'm working on my project, and I've got one other more junior coder with
 me.

 Has anyone tried writing test files as part of their specs?

 An overview document would also be needed, and some time walking through
 the expected testing.  But it sure would be setting clear expectations.

That's why I wrote POD::Tested. I needed a tool that would allow me to sit 
with a customer and we'd write the documentation and pepper it with 
acceptance tests. Then I'd use the document as a test, directly.

Play with it and tell me if you need more.

Cheers, Nadim.


Re: .t files as specs

2007-08-02 Thread Scott McWhirter
On 8/2/07, nadim khemir [EMAIL PROTECTED] wrote:
 On Tuesday 19 June 2007 17:52, Mike Malony wrote:
  So I'm working on my project, and I've got one other more junior coder with
  me.
 
  Has anyone tried writing test files as part of their specs?
 
  An overview document would also be needed, and some time walking through
  the expected testing.  But it sure would be setting clear expectations.

 That's why I wrote POD::Tested. I needed a tool that would allow me to sit
 with a customer and we'd write the documentation and pepper it with
 acceptance tests. Then I'd use the document as a test, directly.

 Play with it and tell me if you need more.

 Cheers, Nadim.


We've been doing this with FitNesse so that our customer can edit
these things directly themselves. I also wrote Test::FITesque to port
some of this stuff so that we can more easily reuse our acceptance
tests for a regression suite.

We've been using FitNesse with the perl server and writing our own
fixture classes which use WWW::Selenium in the background; it's
similar to SocialText's WikiTest stuff.

ta!


-- 
-Scott-


Re: .t files as specs

2007-08-02 Thread Mike Malony
On 8/2/07, Scott McWhirter [EMAIL PROTECTED] wrote:

 On 8/2/07, nadim khemir [EMAIL PROTECTED] wrote:
  On Tuesday 19 June 2007 17:52, Mike Malony wrote:
   So I'm working on my project, and I've got one other more junior coder
   with me.
  
   Has anyone tried writing test files as part of their specs?
  
   An overview document would also be needed, and some time walking
   through the expected testing.  But it sure would be setting clear
   expectations.
 
  That's why I wrote POD::Tested. I needed a tool that would allow me to
  sit with a customer and we'd write the documentation and pepper it with
  acceptance tests. Then I'd use the document as a test, directly.
 
  Play with it and tell me if you need more.
 
  Cheers, Nadim.
 

 We've been doing this with FitNesse so that our customer can edit
 these things directly themselves. I also wrote Test::FITesque to port
 some of this stuff so that we can more easily reuse our acceptance
 tests for a regression suite.

 We've been using FitNesse with the perl server and writing our own
 fixture classes which use WWW::Selenium in the background; it's
 similar to SocialText's WikiTest stuff.

 ta!


 --
 -Scott-



And now I have two more modules to investigate.  Thanks!

And thanks to all, I appreciate all the comments.  I'm still fighting to
build more testing into the process.   And I can see several options now
that I'd never thought through myself.

Mike


Re: running author tests

2007-08-02 Thread David Golden
On 8/2/07, Eric Wilhelm [EMAIL PROTECTED] wrote:
 For such usage, extra dependencies would only be optional in extreme
 cases.  Further, pod/pod-coverage would possibly even be 'assumed'
 tests.  That is, why bother even having a .t file if the authortest
 tool knows how to run testpod and testpodcoverage?

Who says I keep my POD in the same file as my code?  I might put it in a
separate file that has a completely different name.  Or I might have a
different convention for private vs. public methods, etc.

Having an actual pod/pod-coverage.t gives a handy place to put those
customizations.  Yes, some of that could be put in
pod_coverage_options in a config or an ACTION_testpod method, but to
me, that introduces extra complexity that I don't want cluttering
Build.PL.

Having an authortest/ directory that gets run with a harness is
simple and consistent.
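One minimal way to wire that up (the authortest/ layout is just the thread's
proposal, and this runner is a hypothetical illustration) is to point the
standard TAP harness at the directory:

```perl
# author-test runner -- sketch: feed authortest/*.t to the standard
# harness; runtests() dies (non-zero exit) if any author test fails.
use strict;
use warnings;
use Test::Harness qw(runtests);

my @tests = glob 'authortest/*.t';
die "no author tests found\n" unless @tests;
runtests(@tests);
```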

Regards,
David


Re: running author tests

2007-08-02 Thread Adriano Ferreira
On 8/2/07, David Golden [EMAIL PROTECTED] wrote:
 Having an actual pod/pod-coverage.t gives a handy place to put those
 customizations.  Yes, some of that could be put in
 pod_coverage_options in a config or an ACTION_testpod method, but to
 me, that introduces extra complexity that I don't want cluttering
 Build.PL.

Agreed on all points. IMHO, Build.PL and Makefile.PL should be kept
as dumb as possible, and easy to read (and understand).

 Having an authortest/ directory that gets run with a harness is
 simple and consistent.

There is already a convention that tests go under the t/ directory
and also that they usually match the glob t/*.t. I cannot think of
anything simpler that could possibly work than to stick the extra
tests into subdirectories of t/ like chromatic does (t/author/*.t
or t/developer/*.t).

The behavior could be triggered by updating M::B and EUMM to make them
run the extended glob t/**/*.t when PERL_AUTHOR_TESTING or
AUTOMATED_TESTING is set.
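For what it's worth, Module::Build already has a recursive_test_files
attribute; conditioning it on the environment variable, as proposed here,
might look like this sketch (not shipped behavior, and Useful::Example is
just the thread's example name):

```perl
# Build.PL -- sketch: include t/**/*.t only under author testing.
use strict;
use warnings;
use Module::Build;

Module::Build->new(
    module_name          => 'Useful::Example',
    license              => 'perl',
    # recursive_test_files is an existing M::B option; keying it off
    # PERL_AUTHOR_TESTING is the proposal under discussion.
    recursive_test_files => $ENV{PERL_AUTHOR_TESTING} ? 1 : 0,
)->create_build_script;
```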

Even programmers unaware of such conventions who are tinkering with the
distribution will run

$ prove t/author/pod.t

naturally, just as they run any other tests.

 Regards,
 David



Re: running author tests

2007-08-02 Thread Christopher H. Laco
Adriano Ferreira wrote:
 On 8/2/07, David Golden [EMAIL PROTECTED] wrote:
 Having an actual pod/pod-coverage.t gives a handy place to put those
 customizations.  Yes, some of that could be put in
 pod_coverage_options in a config or an ACTION_testpod method, but to
 me, that introduces extra complexity that I don't want cluttering
 Build.PL.
 
 Agreed in all manners. IMHO, Build.PL and Makefile.PL should be kept
 as dumb as possible. And easy to be read (and understood).
 
 Having an authortest/ directory that gets run with a harness is
 simple and consistent.
 
 There is already a convention that tests go under the t/ directory
 and also that they usually match the glob t/*.t. I cannot think of
 anything simpler that could possibly work than to stick the extra
 tests into subdirectories of t/ like chromatic does (t/author/*.t
 or t/developer/*.t).
 
 The behavior could be triggered by updating M::B and EUMM to make them
 run the extended glob t/**/*.t when PERL_AUTHOR_TESTING or
 AUTOMATED_TESTING is set.
 
 Even programmers unaware of such conventions who are tinkering with the
 distribution will run
 
 $ prove t/author/pod.t
 
 naturally, just as they run any other tests.
 
 Regards,
 David

 
 

I must say, I like t/author more than t/authortests...

However, my main beef overall is that we now essentially have a 'reserved
name' directory that may interfere with how I choose to group tests
for larger distributions.

For example, I have a boatload of tests to test http server code in
t/live, and tests to test providers in t/providers and tests to test
catalyst in...shockingly...t/catalyst, etc...

If I have a boatload of tests for an 'author' object, I might choose to
group them in t/author.

Sure, it's rare. Sure, I could name all those tests t/author_*.t...
Either way, I think just assuming a directory name is rude.

Now, if there were a way to tell the harness which directory to consider
as 'author' tests... then I guess I don't care.

-=Chris






Re: running author tests

2007-08-02 Thread Eric Wilhelm
# from Adriano Ferreira
# on Thursday 02 August 2007 01:13 pm:

The behavior could be triggered by updating M::B and EUMM to make them
run the extended glob t/**/*.t when PERL_AUTHOR_TESTING or
AUTOMATED_TESTING is setted.

No.  That breaks recursive tests, unless we also add code to skip 
t/author/.

I think a new toplevel (e.g. ./author_t or ./author_test) directory is 
going to be most compatible.

--Eric
-- 
If you only know how to use a hammer, every problem begins to look like
a nail.
--Richard B. Johnson
---
http://scratchcomputing.com
---


Re: running author tests

2007-08-02 Thread Andy Armstrong

On 2 Aug 2007, at 23:53, Eric Wilhelm wrote:

I think a new toplevel (e.g. ./author_t or ./author_test) directory is
going to be most compatible.


+1

--
Andy Armstrong, hexten.net



Re: page - wiki - wiki - wiki

2007-08-02 Thread Tyler MacDonald
Ok, perl-qa.yi.org redirects there now and it looks like everything's
working...


Michael G Schwern [EMAIL PROTECTED] wrote:
 Andy Lester wrote:
  
  On Aug 1, 2007, at 5:09 PM, Michael G Schwern wrote:
  
http://qa.perl.org/ links to:
 
  Ask can fix that.  Ask?
  
  Oh wait, qa.perl.org is mine.  What do you want I should link to instead?
 
 http://perl-qa.hexten.net/wiki please.
 

--