Re: Kwalitee metric: Broken Installer

2006-07-18 Thread David Golden

Thomas Klausner wrote:

I think it's a good metric, but maybe Module::Install supporters will
disagree :-)


Well, as long as it only picks up *certain* versions, that's probably OK.


We could also add checks for problems in Makefile.PL/Build.PL...


At the risk of going out on a limb here, maybe something that detects 
Module::Build::Compat in passthrough mode?
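
A minimal sketch of such a check, assuming the heuristic of scanning
Makefile.PL for Module::Build::Compat's own passthrough boilerplate (the
sub name here is just illustrative):

  sub has_passthrough_makefile {
      my ($dist_dir) = @_;
      open my $fh, '<', "$dist_dir/Makefile.PL" or return 0;
      my $src = do { local $/; <$fh> };
      # passthrough Makefile.PLs load M::B::Compat and delegate to it
      return $src =~ /Module::Build::Compat/
          && $src =~ /run_build_pl|write_makefile/;
  }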


Regards,
David





Re: Time for a Revolution

2006-07-14 Thread David Golden

chromatic wrote:
Why is there not a Bundle::PerlPlus (and yes, I've lathered up my yak with 
that name) that downloads and installs the modules that should have been in 
the box?


For one, that should be Task::PerlPlus.  :-)

Second, for any pre-packaged distribution like Strawberry Perl (see 
vanillaperl.com), adding in the "must have" modules is pretty easy.  One 
of the goals of the Vanilla Perl project is to get win32 Perl to a point 
where we can offer an end-user Perl with all the best modules already 
installed.


Unlike what others said, core perl most likely shouldn't be the vehicle 
for this, given its more stringent support and backwards-compatibility 
requirements.  We want to be able to change the composition of 
PerlPlus over time, and once things go into core, they're pretty stuck.


But I concur with others that the issue is awareness and usage, not 
installation.  We need an easily accessible way for people to learn that 
there are some best modules along with best practices.  (And thank you, 
Damian, for including lots of module recommendations in that book.)


One idea I had is that we need a new edition of the Perl Cookbook, with 
new recipes that cover not only how to use core built-ins and modules, 
but that highlight some of the best CPAN modules for certain tasks as 
well.


Or, perhaps we need the Perl CPAN Cookbook -- which would be like the 
Cookbook but focuses *only* on the greatest-hits modules across all the 
same categories.  If CPAN is one of Perl's greatest strengths, shouldn't 
that get more attention, too?


Or, perhaps, break up one or both Cookbooks into individual books covering 
certain topics in more depth.  This might be good for O'Reilly's PDF 
book series -- low cost and easy to update over time.


More generally, I think Perl needs to be focusing on how it helps people 
get stuff done faster/cheaper/better -- task focused, not tool focused 
-- and in areas where there is buzz and excitement.  E.g. Writing AJAX 
applications in Perl.


Regards,
David



Re: CPANDB - was: Module::Dependency 1.84

2006-07-11 Thread David Golden

Tels wrote:
My idea was to build _only_ the database, and do it right, simple and easy 
to use, and then get everyone else to just use the DB instead of fiddling 
with their own. (simple by having the database being superior to every 
other hack that's in existence now :-)


I even got so far as to do a mockup v0.02 - but then went back to playing 
Guildwars.


Is this a project that would be of general interest?


At YAPC::NA, Adam Kennedy mentioned that he wanted to try to make some 
headway on CPAN::Index, which sounds very similar in intent.  While it's 
not released, you can see the formative project at his public repository:


http://tinyurl.com/g888h

Perhaps you can join forces with him and help push some collective 
project towards a release.


Regards,
David Golden



Anyone experiencing problems with rt.cpan.org?

2006-07-08 Thread David Golden
In the last day or so, every time I go to rt.cpan.org, it seems to 
nearly finish loading a page and then just stalls.


Deleting the cookie for it seemed to help briefly, and then it stalled 
again after submitting a bug report.


Are others experiencing difficulty?

Regards,
David Golden


Re: Old and broken versions of Module::Install

2006-07-06 Thread David Golden

Steffen Mueller wrote:
Versions of Module::Install < 0.61 do not work on the current ActivePerl 
release 5.8.8 build 817. There are *a lot* of CPAN distributions that 
use Module::Install versions in the 0.3X range. In fact, most of the modules 
that use Module::Install are still on 0.3X. For example, an integral 
distribution like Scalar-List-Utils is using Module::Install 0.37. You 
can find a complete (and somewhat current) list of problematic modules 
at http://steffen-mueller.net/mi_old.html


Ideas?


What about adding NO_BROKEN_INSTALLER as a Kwalitee point for CPANTS?

:-)

Regards,
David Golden


Re: Old and broken versions of Module::Install

2006-07-06 Thread David Golden

Steffen Mueller wrote:

Michael G Schwern schrieb:

What's broken and why suddenly 5.8.8?


* ActivePerl::Config on case-insensitive filesystems interacts
  erroneously with Module::Install's (outdated) @INC hack, so remove it.
  (Patch from Gisle Aas)
[...]

Sounds like it's a combination of an M::I hack and Windows being the bad 
platform it's often perceived as.


I take issue with the "bad platform" bit.  The underlying issue is Perl 
developers who assume unix semantics (perhaps unconsciously, even) and 
don't write for portability.


The Vanilla Perl project (vanillaperl.com) has been squashing numerous 
bugs of this sort in CPAN modules.  There are surprisingly many 
forward/backslash bugs, even by developers who should know better.  Many 
of them crop up in test suites, even when the code itself uses File::Spec.


The M::I hack doesn't help, but probably neither does AS's config hacking.

David Golden



Re: On Gaming CPANTS, and a Kwalitee Suggestion

2006-07-05 Thread David Golden

Randy J. Ray wrote:

I'm a fairly-recent addition to the list. I've read a good part of the [...]


Welcome!


Secondly, having recently added digital-signing to a few of my modules,
perhaps the presence of a SIGNATURE file might be a Kwalitee marker (with
the caveat that it should be an actual Module::Signature-generated artifact,
not just a zero-length file named SIGNATURE). I found the steps needed to
add this to be pretty simple, not much more work than adding POD and
POD-coverage tests to those same modules.


Module::Signature has caused problems at various points for people who 
have it installed but not configured properly.  Given that, some 
developers have started removing SIGNATURE files to improve compatibility.


Given that the Mod::Sig checks verify only that the signature is valid, not 
that it matches a known/registered developer, the security benefit 
is already minimal.


Regards,
David Golden



Re: Volunteer wanted: We need a new wiki.

2006-07-05 Thread David Golden

Randy W. Sims wrote:
I don't know much about SocialText. Is there a converter, so that if you 
put something up temporarily in MediaWiki, it can later be converted and 
moved to SocialText?


I know there are a whole series of HTML::WikiConverter dialects on CPAN. 
 I haven't used it, but it seems like it has the right idea.  If there 
isn't a SocialText dialect plugin already, I'm sure someone could whip 
one up.
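
Something along these lines, under the assumption that a SocialText 
dialect existed (the dialect name here is hypothetical):

  use HTML::WikiConverter;

  # convert the HTML rendered from the temporary MediaWiki pages
  my $wc = HTML::WikiConverter->new( dialect => 'SocialText' );
  print $wc->html2wiki( $html );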


Regards,
David



Re: CPAN and META.yml: no_index dir vs directory

2006-07-05 Thread David Golden

Andreas J. Koenig wrote:

The page is there, http://thepierianspring.org/perl/meta/, but does
not provide direct statistics so I made up my own.

no_index/dir          13
no_index/directory  1397
private/directory     40

David's D/DA/DAGOLDEN/Perl-Dist-Vanilla-5 used both "dir" and "directory" :)

Those who used just dir were ignored up to now:

 B/BL/BLM/Win32API-Registry-0.27
 B/BW/BWARFIELD/NRGN/Test-AutoLoader-0.03
 D/DA/DAGOLDEN/Object-LocalVars-0.15
 D/DA/DAGOLDEN/Object-LocalVars-0.16
 D/DA/DAGOLDEN/Perl-Dist-Vanilla-4
 G/GU/GUIDO/Test-Unit-GTestRunner-0.03
 G/GU/GUIDO/Test-Unit-GTestRunner-0.04
 J/JV/JV/EekBoek-0.91
 J/JV/JV/EekBoek-0.60
 J/JV/JV/EekBoek-0.61
 R/RC/RCAPUTO/POE-0.3502
 R/RC/RCAPUTO/POE-Component-Client-Keepalive-0.0801


Well, I think that suggests pretty definitively that "directory" needs 
to be added to the spec, at least, to reflect de facto usage.
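
For reference, the de facto form looks like this in META.yml (a minimal 
fragment; the directory names are just examples):

  no_index:
    directory:
      - examples
      - inc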


As for "dir", I'm three of the 13, so discount those.  I'm not sure 
that's worth supporting in CPAN.  Maybe it calls for dropping "dir" from 
the spec, unless anyone else knows who's using it.


Regards,
David





Re: TAP extension proposal: test groups

2006-07-02 Thread David Golden

chromatic wrote:

On Saturday 01 July 2006 16:46, Fergal Daly wrote:


It looks like it's only one level of nesting. Any reason not to go the
whole hog with something like

..1
OK 1
..2
...1
OK 2
OK 3
...2
OK 4
..3
OK 5


No one has provided an actual use case for it yet.  YAGNI.


I've got plenty of test fixtures where subroutine X with tests calls 
subroutine Y with tests.  Or a loop calls a subroutine with tests.  It 
would be intuitive to group and nest those.


David Golden


Re: CPANTS is not a game.

2006-05-23 Thread David Golden

Andy Lester wrote:

  How do you get authors to actually look at the CPANTS information and
  make corrections?  Well, we like competition.  Make it a game!

So it was you -- or somebody impersonating you on this list -- who
managed to persuade me that actually Cpants being a game was a good
thing!


The key is that we're playing for different goals.  Schwern was saying 
that the improvement of the modules is a game.  PerlGirl is making a 
game out of improving the numeric score for her modules, but without any 
improvement of the module itself.


How does is_prereq improve quality?

Or, put differently, how does measuring something that an author can't 
control create an incentive to improve?


If CPANTS is an objective quality measure, then it makes sense.  If 
CPANTS is a quality game -- i.e. a friendly competition to improve 
one's scores -- then it doesn't.


If CPANTS stays with a narrow set of well-defined, objective criteria, 
then it can serve both purposes.  Remove or refine the subjective or 
hard-to-measure ones and the numerical gaming that doesn't change 
apparent quality goes away.


Regards,
David Golden



Re: CPANTS is not a game.

2006-05-23 Thread David Golden

Chris Dolan wrote:
is_prereq is usually a proxy metric for software maturity: if someone 
thinks your module is good enough that he would rather depend on it than 
reinvent it, then it's probably a better-than-average module on CPAN.  
is_prereq is usually a vote of confidence, so it is likely a good proxy 
for quality.  In fact I believe that because the author (usually) can't 
control it directly, is_prereq is one of the best proxies for quality 
among the current kwalitee metrics.


I'd go so far as to argue that is_prereq is perhaps a more significant 
metric than Kwalitee itself, as it is really a measure of *utility*.  I'd 
be very interested to see it explored fully, not just as a binary -- 
e.g. how many different authors used a module in at least one of their 
distributions.


That said, it doesn't mean much for quality -- people may well use a 
poor quality distribution if it is sufficiently useful.


As an example, consider pod_coverage.  It's a rather annoying metric, 
most of us agree.  Test::Pod::Coverage really only needs to be run on 
the author's machine, not on every user's machine.  However, by adding 
pod_coverage to kwalitee we got LOTS of authors to improve their POD 
with the cost of wasting cycles on users' machines.  I think that's a 
price worth paying -- at least until we rewrite the metric to actually 
test POD coverage (which is a decent proxy for POD quality) instead of 
just checking for the presence of a t/pod_coverage.t file (which is a 
weak proxy for POD quality, but dramatically easier to measure).


It doesn't check for the existence of a t/pod_coverage.t file.  It 
checks that a string like "use Test::Pod::Coverage" appears properly 
formatted.  E.g. I believe this is sufficient to get the Kwalitee point:


  # t/pod_coverage.t
  __END__
  use Test::Pod::Coverage;

And, unfortunately, it also misses actual Perl code that doesn't meet its 
regex expectations.  (E.g. see the bug I recently filed for 
Module::ExtractUse.)


Regards,
David



Re: Changing permissions on temporary directories during testing

2006-05-22 Thread David Golden

James E Keenan wrote:
Let's say that I'm writing a test suite for a Perl module which creates 
files and then, optionally, moves those files to predetermined 
directories.  To test this module's functionality, I would have to see 
what happens when the user running the tests does not have write 
permissions on the destination directory (e.g., test whether an 
appropriate warning was issued).  But to do *that*, I would have to 
change permissions on a directory to forbid myself to write to it.


This code is intended to achieve that goal but doesn't DWIM:

my ($file, $workdir, $destdir);
{
    $workdir = File::Temp::tempdir();
    chdir $workdir or die "Cannot change to $workdir";
    $file = system("touch alpha");
    $destdir = "$workdir/other";
    chmod 0330, ($destdir)
        or die "Cannot change permissions on directory";
    # dies on preceding line
    rename $file, "$destdir/$file" or die "Cannot move $file";
}

Is there any other way to do this?  Or am I mistaken in even attempting 
to try it?  Thanks.


How portable does this need to be?  My inclination is not to mess with 
file permissions in a test suite if you can avoid it.  I'd probably 
just mock/override rename to report failure in the module under test:


  BEGIN {
*Module::Under::Test::rename = sub { 0 };
  }
  use Module::Under::Test;
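
One caveat: if the module calls Perl's built-in rename rather than an 
importable function, a glob assignment like the above won't intercept it. 
A sketch of the alternative, overriding the built-in globally before the 
module is compiled:

  BEGIN { *CORE::GLOBAL::rename = sub { 0 } }
  use Module::Under::Test;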

For system interaction tests, I prefer to fake failures rather than try 
to manufacture them.


Regards,
David


Re: Module requirements

2006-04-07 Thread David Golden

David Wright wrote:

Your $thingy could be a hashref, in which case $thingy->isa will die.


The point of the discussion is that you should be checking if $thingy is 
blessed() first, as UNIVERSAL::isa breaks for objects that masquerade as 
other objects (e.g. via an adaptor pattern).


I've been using it a lot recently to catch exceptions. What's so wrong 
with the below, almost identical to the example in perldoc -f die? I'd 
rather not die again immediately by assuming $@->isa will work.


eval {
# do some stuff
};

if ( $@ ) {
    if ( UNIVERSAL::isa($@, 'My::Exception') ) {
        # known exception, handle appropriately
    }
    else {
        die "Ooops-a-daisy: $@";
    }
}


Exception::Class now offers the caught() method so you don't have to use 
UNIVERSAL::isa that way.  I also wrote Exception::Class::TryCatch for a 
little more helpful sugar:


  eval {
 # do stuff;
  };

  # catch() upgrades non-object $@ to Exception::Class::Base
  if ( catch my $err ) {
    if ( $err->isa('My::Exception') ) {
      # handle it
    }
    else {
      $err->rethrow;
    }
  }

Regards,
David Golden


Re: Module requirements

2006-04-06 Thread David Golden

Randy W. Sims wrote:
Yep, that's ridiculous. I used to see these questions a lot back when I 
was answering mails on the beginner groups. People wanting to do things 
that have already been done and widely tested, but everyone wants to 
write their own in order to reduce dependencies.


Reinventing the wheel is a good learning exercise, but after that it is 
a waste of time. At the same time they don't want to require their users 
to download 50 modules to use their application, so reducing 
dependencies is good to some extent. There is no clear line though, 
about how many dependencies are too many and which are trivial. 
Different applications of various sizes and complexities will have 
different boundaries. Authors have to balance convenience to themselves 
and to their users, leaning more toward users of course.


Dependencies also increase maintenance complexity.  Since most module 
authors are contributing as volunteers, minimizing maintenance demands 
is an important consideration.  Core modules have the advantage of 
extensive testing and scrutiny.  If I find Whizz::Bang that meets my 
needs, I may not want to use it if I'm not sure that (a) it's well 
tested or (b) it's well maintained.


There are situations where I may prefer to reimplement something, even 
if I might create a new buggy implementation, so as not to make the 
quality and maintainability of my code dependent on something I'm not 
confident in.  At the least, I know that I'll be responsive to my own 
bugs -- which, judging from the length of some bug reports on RT, isn't 
always the case for Whizz::Bang.


This underlying behavior is one of my biggest pet peeves with the perl 
community. Too many people want to go out and write their own version of 
modules instead of contributing to the work others began. Diversity is a 


I suspect that many of these are API driven.  Programming should be fun, 
and using an API that doesn't fit isn't fun.  As a result, people go 
write their own stuff that they feel is easier/faster to use.  This is 
the flip side of impatience and hubris.  E.g. CPAN search found 510 
"Simple", 82 "Easy" and 80 "Fast" modules -- not to mention the 49 
"Getopt" modules.  I don't think that sort of thing is going to change.


David


Re: [OT] TDD only works for simple things...

2006-03-30 Thread David Golden

Adam Kennedy wrote:

- It can test the things you know that work.
- It is good when testing the things you know that don't work (its 
strong point)

- It is not good for testing the things you don't know that don't work.


This implies that TDD is about code quality.  For me, a big part of the 
value of TDD isn't about this.  Here are a few benefits that I 
experience:


1. Writing the test first lets me "role play" my API.  I might rewrite 
the API a couple times while writing the first couple tests, because I'll 
get to see how it works in practice.  These changes are nearly free, as 
I've done no implementation yet.  (Nor do I have to change Pod/spec, 
because I haven't written that, either.)


2. It forces me to work in small chunks.  I define a specific coding 
task by way of a *single* test (not a whole test file).  Then I 
implement that test.  I get an immediate feeling of accomplishment each 
time a test passes.  (I think those who get addicted to TDD are really 
getting addicted to the dopamine/endorphin/whatever hit from frequent 
positive feedback of task completion.)


3.  It encourages YAGNI.  By keeping focused on passing a single test, 
coding is focused.  I don't write things I might need/want until I get 
around to defining a real case for it and embody that in a test.


4.  It protects me against myself.  As I add each small piece of code, I 
get continual feedback that I haven't unintentionally broken anything else.


I do realize that it's easy to get bogged down in all the ways to do 
Evil Testing.  There is a mixture of art and science in TDD.  The way 
I manage is that I focus on positive testing and code first -- i.e. 
getting the code doing what it's supposed to given proper input.


Only once I know it's working properly on valid input do I expand the 
tests to seek out invalid input, etc.  And I mix this with some coverage 
testing to ensure that I haven't inadvertently created defensive code 
that wasn't actually part of passing a test.  (Generally, conditional 
branches not taken are what show up here.)


Regards,
David Golden


Re: [OT] TDD only works for simple things...

2006-03-28 Thread David Golden

Geoffrey Young wrote:
  Only the simplest of designs benefits from pre-coded tests, unless
you have unlimited developer time.

needless to say I just don't believe this.  but as I try to broach the
test-driven development topic with folks I hear this lots - not just that
they don't have the time to use tdd, but that it doesn't work anyway for
most real applications (where their app is sufficiently real or large
or complex or whatever).


Isn't this just a specifications issue?  Simple designs are easier to 
specify fully.  More complex designs aren't so easy to specify -- or 
rather, the time to specify 100% of cases isn't always practical.


I would imagine that TDD breaks down if figuring out correct behavior 
takes longer than whipping up something that looks right most of the 
time and that the end customer will pay for.


I would imagine that it also breaks down when testing things at too high 
a level with too many degrees of freedom.


Still, there's no reason that TDD can't be used for the well-defined 
parts of a big project, or used at the unit level where behavior has 
fewer degrees of freedom.  Then the benefits in terms of positive 
reinforcement of task completion and the safety net against future 
breakage still apply.


David Golden



Re: Upgrading core modules on Windows

2006-03-17 Thread David Golden

demerphq wrote:

The point i was making is that it isn't actually necessary for Schwern
to do anything for you to enjoy the benefits of the new Install code.

 [snip]

  cpan install YVES/ex-ExtUtils-Install-1.3701.tar.gz

then you are cooking, with gas, right now. :-)


That's great, but I want everyone *else* to use gas, too, so that 
modules I write that use, say, Scalar::Util::refaddr, will automatically 
be built into PPMs.  Since the big bottleneck is apparently 
ActiveState's challenge in upgrading core modules, this is just one step 
in a much bigger process.  (I still want an 8xx-latest repository built, 
too.)



And to be honest the package could use a bit of a workout so I
encourage people to try it out.


That's a different story -- I'll install it this weekend and see what 
happens.


Regards,
David



Upgrading core modules on Windows

2006-03-16 Thread David Golden

Philippe M. Chiasson wrote:

The main reason this is happening is that it's not currently possible to update
CORE packages in ActivePerl, so any module that depends on a CORE package can
be suffering from this. This problem will persist until it becomes possible to
update core packages in ActivePerl.

It's certainly not an ideal situation, but unless somebody can point out an
easier solution, this is where we are at.


I had one of those shower epiphanies this morning.  While it's only a 
concept at this point, I thought I would put it to the group for more 
experienced minds to consider.


As I understand from comments in 
http://use.perl.org/comments.pl?sid=30312 the big problem with upgrading 
core modules for Windows (ActiveState, VanillaPerl, etc.) is that 
Windows won't delete open files. Thus any core files (particularly *.dll 
files) in use by PPM/cpan.pm can't be replaced.


So why not bundle a snapshot of all the module dependencies for 
PPM/cpan.pm into a separate directory and put that at the start of @INC 
when running PPM/cpan.pm?


That way, other than the core perl executable, the upgrade tools 
wouldn't be using any core modules directly -- only the ones in the 
bundled snapshot.  If I understand correctly, that should allow core 
modules to be upgraded.
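
Concretely, the wrapper could start with something as simple as this (a 
sketch only; the snapshot path is an assumption):

  # hypothetical first lines of the cpan wrapper script
  use lib 'C:/Perl/toolchain-snapshot/lib';   # shadows the core modules
  use CPAN;
  CPAN::shell();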


There's still a bit of a bootstrap problem for upgrading PPM/cpan.pm 
themselves or any of their dependencies.  However, even that might be 
avoided if the snapshot (including updates to PPM/cpan.pm) were 
updated/created on the fly by a batch file prior to executing PPM/cpan.pm.


I'm sure moving to this approach would mean changes at various points in 
the toolchain -- particularly in how the PPM and cpan batch files 
operate -- but it would avoid more complicated approaches requiring 
changes in the Windows registry in order to finish upgrades during a reboot.


Thoughts and feedback welcome.

Regards,
David Golden



Re: Upgrading core modules on Windows

2006-03-16 Thread David Golden

Adam Kennedy wrote:

The only problem with this is that it only deals with CPAN.pm itself.

The problem with locked files is wider than this.

Imagine for example that you have Windows mod_perl or some other 
long-running program holding a lock on the modules.


I realized that complication, but plenty of Windows installers already 
warn you to close all open programs before upgrading.  That's not an 
unrealistic expectation for users on Windows.


So I'd like to see where we get to once the new ExtUtils::Install is out 
and we can install properly, and then if there are still problems after 
that, we take the next step.


That's good news. I hope the Schwern bottleneck clears up soon.

Regards,
David Golden


Re: New kwalitee metric - eg/ directory

2006-03-14 Thread David Golden

Adam Kennedy wrote:
For all those component distributions I consider it a failure if it is 
so complex that you need something more than just three or four lines 
from the SYNOPSIS.


Maybe there should be a Kwalitee metric for the length of the synopsis?

:-)

Regards,
David Golden


Activestate and Scalar-List-Utils

2006-03-14 Thread David Golden
So back at the beginning of February, there was some email traffic about 
how ActiveState's automated PPM build system was using an outdated 
version of Scalar-List-Utils, which was causing a cascading prerequisite 
failure for many distributions.


Has anyone heard any updates on this?  Does anyone have an inside 
contact at ActiveState that can shed some more light on the subject?


Regards,
David Golden


Re: Activestate and Scalar-List-Utils

2006-03-14 Thread David Golden

Steve Peters wrote:

The problem was that newer Scalar-List-Utils uses an internal Perl
function that Windows does not see as an exported function.  This was
changed with Perl 5.8.8.  Once ActiveState releases a Perl 5.8.8, they
should be able to upgrade the version of Scalar-List-Utils that they
distribute.


I don't really understand that answer given what I see in the field.

I currently have 1.14 -- which came installed with ASPerl 5.8.7.

Looking at this URL, I can't tell if this is reflecting 1.06 or 1.11:
http://aspn.activestate.com/ASPN/Modules/dist_html?dist_id=9591

From this URL, 1.15 seems to build OK on Windows:
http://ppm.activestate.com/BuildStatus/5.8-windows/windows-5.8/Scalar-List-Util-1.15.txt

Even on linux, the PPM build system seems to be using an older version:
http://ppm.activestate.com/BuildStatus/5.8-linux/linux-5.8/Class-InsideOut-0.11.txt

Regards,
David Golden


Re: Activestate and Scalar-List-Utils

2006-03-14 Thread David Golden

Philippe M. Chiasson wrote:

It's not that simple a problem, exactly. The PPM build servers are all running
on the first ActivePerl release for that platform.  That way, forward binary
compatibility can be guaranteed going forward.


I wondered if that was the cause of it.  Still, that means that you're 
more or less frozen in time at 2002 when 5.8 first came out.  How long 
is the AS support window intended to be?



The main reason this is happening is that it's not currently possible to update
CORE packages in ActivePerl, so any module that depends on a CORE package can
be suffering from this. This problem will persist until it becomes possible to
update core packages in ActivePerl.


What's behind this limitation?  Out of curiosity, I tried to "install 
-force" Scalar-List-Utils-1.18 from the theoryx PPM repository at 
uwinnipeg.ca, and it seems to have worked just fine -- the only issue is 
that it installs into c:/perl/site/lib, which comes after c:/perl/lib 
in the @INC search order.  Yet, somehow I can't believe a fix is as simple 
as reversing that order, allowing user-installed upgrades to be found 
first in preference to the core modules.



It's certainly not an ideal situation, but unless somebody can point out an
easier solution, this is where we are at.


An obvious, though resource-intensive, one is to move from one single 
8xx repository to several 8nx repositories -- one for each minor release.


A perhaps more realistic approach might be to have the 8xx repository as 
is (always building against the first release at that major) and just 
*one* 8NX repository that always builds against the *latest* release 
(whatever that is at any point in time).


Regards,
David Golden



Re: Erroneous CPAN Testers Reports

2006-03-13 Thread David Golden

Jerry D. Hedden wrote:

In addition to getting CPANPLUS fixed, I feel there is the
issue of what to do about such fallacious reports in the
CPAN Testers database.  Currently, there is no functionality
for deleting such reports.


This issue also has frustrated me for some time, but I don't think that 
we should be considering deleting reports.  Reports are just facts -- 
they have no value basis.  Yes, people may make judgments about the 
robustness of a module based on pass/fail percentages, but I don't think 
we should alter the raw data.


I'd rather see the ability for test reports to be annotated (e.g. by the 
author), or I'd like to see another site that summarizes the raw data 
and can filter down to a more reliable set of statistics.



As such, I would like to propose that the following
compromise be adopted:  If a newer CPAN Testers report is
received that matches all of the following with respect to
an existing report:
CPAN Tester (i.e., email address)
Module
Module version
Machine architecture
Perl version
then the newer report should overwrite/mask the older
report in such a way that the older report is no longer
displayed on the module's CPAN Testers results page.


Again, I don't think overwriting is a good idea.  What if some 
dependency module was changed and caused the module to fail/succeed 
differently than before?  That's important to know.  Different results 
from the same tester are *valuable* data, not erroneous.


Regards,
David Golden


Re: Erroneous CPAN Testers Reports

2006-03-13 Thread David Golden

A. Pagaltzis wrote:

* David Golden [EMAIL PROTECTED] [2006-03-13 23:05]:

This issue also has frustrated me for some time, but I don't
think that we should be considering deleting reports.  Reports
are just facts -- they have no value basis. Yes, people may make
judgments about the robustness of a module based on pass/fail
percentages, but I don't think we should alter the raw data.


Errm, here’s a question: in which way does it benefit *any* use
of the raw data to preserve those bogus reports, even annotated?
I can’t think of any reason why anyone would ever need to know
about those, but maybe my imagination is limited. Can you list
some?


It's a record of what happened to a user -- including a record of what 
kinds of modules people do and don't have in the wild.  I never like to 
see data like that thrown away.


I suppose I could see failures like this (caused by CPANPLUS) recoded 
to "N/A" -- or something like "Dubious" -- if they can be clearly 
identified.  That way the full record is always available.


Regards,
David Golden



Re: getting round Test::More test formatting trickiness

2006-03-13 Thread David Golden

Dr Bean wrote:

I've gotten comfortable with Test::More conventions, but it's
difficult in my editor to really quickly create lots of tests.

is($o->index('You'), 1, 'objects index 1');
isnt($o->index(1), 1, 'objects index 2');
isnt($o->index(2), 2, 'objects index 2');
is($o->index($t), 3, 'objects index 3');



I frequently wind up putting my repetitive cases in a data structure and 
then looping over that rather than writing cases individually.  E.g.


  use Test::More;

  my @cases = (
    [ 'You', 1 ],
    [ $t,    3 ],
  );

  plan tests => scalar @cases;

  for my $c ( @cases ) {
    is( $o->index( $c->[0] ), $c->[1], "objects index $c->[1]" );
  }

Sometimes I use hashes rather than arrays, particularly if the examples 
or test loops get long.  Likewise, I often include the test label in the 
case as well.
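
Something like this, for instance (the names are illustrative):

  my @cases = (
    { input => 'You', want => 1, label => 'objects index 1' },
    { input => $t,    want => 3, label => 'objects index 3' },
  );

  plan tests => scalar @cases;

  is( $o->index( $_->{input} ), $_->{want}, $_->{label} ) for @cases;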


Test::Base makes that all a bit more automatic, but I prefer to avoid it 
so that simple little modules I write don't suddenly inherit a big 
build_requires dependency chain.


Regards,
David Golden



Re: Kwalitee in your dependencies (was CPAN Upload: etc etc)

2006-01-28 Thread David Golden

Adam Kennedy wrote:
Likewise, if your module installs all the way from a vanilla 
installation and all its dependencies go on cleanly, then I think that's 
well and truly worthy of a point.


Something like a clean_install metric. If there are any FAIL entries in 
CPAN Testers against the current version of your module, you lose a point.


Those two are not the same.  Leaving aside that Kwalitee tests don't run 
code, the ability of a vanilla version of the latest production release 
of perl to install a module and all of its dependencies with the vanilla 
version of CPAN for that release could be an interesting signal of quality.


Knocking off points for fails, however, might be due to things that are 
completely idiosyncratic.  For example, anyone whose module depended on 
a test module that used Test::Builder::Tester when Test::Builder changed 
and broke it could get dinged.


Does this really tell us anything about actual quality?

What about if I list a prerequisite version of Perl and someone who 
tries it under an older version causes a FAIL on CPAN Testers?  Does 
that tell us anything?


There are so many special cases that I don't think the value derived 
from such a metric will be worth the effort put into it.


The vanilla testing is a more interesting idea in its own right.  I've 
had that on my back burner for a while -- installing a fresh perl and 
scripting up something to try installing my distributions to it, then 
blowing away the site library directory afterwards.  I just haven't 
gotten around to it, so I look forward to seeing what you come up with.


David


Re: Kwalitee in your dependencies (was CPAN Upload: etc etc)

2006-01-28 Thread David Golden

Adam Kennedy wrote:
Whether or not that is a transient thing that lasts for a week, or a 
serious and ongoing problem, I think it's still worth it.


But that would require regular scanning -- otherwise I might get the 
point one week and then a dependency might upgrade in a way that is 
borked.  (Test::Exception did this to me a while back for some 
particular version of it -- actually related to CPANPLUS not recognizing 
Module::Build's build_requires... and on we merrily go.)


More to the point, it should lead people to spend more time looking into 
WHY their module isn't installing, and help us nail down the critical 
modules in the CPAN toolchain that have problems.


I agree. I've been stripping Test::Exception.  The syntactic sugar isn't 
worth the headache of diagnosing build failures.


If all of a sudden you lose the clean_install point, you can go find the 
problem, then either help the author fix the problem or stop using that 
module. Either way, your module used to not install, and now it does 
install.


(*If* the author fixes the problem.  I still can't get my patches for 
Sub::Uplevel high enough in Schwern's queue.  But that goes back to your 
point about dodgy dependencies, so I'll score that argument to you.)


A testing system should only be sending FAIL reports when it believes it 
has a platform that is compatible with the needs of the module, but when 
it tries to install tests fail.


General question: Are failed prerequisite versions a "FAIL" or a "Not 
Applicable" if the smoke tester isn't set to automatically try to upgrade 
them?


I'm interested to hear what some the special cases might be though. I'm 
trying to put together a mental list of possible problems I need to deal 
with.


You need to deal with N/A's down the chain.  E.g. I prereq a module that 
can't be built in an automated fashion but requires human intervention 
or choices or external libraries.  If you don't have it and the 
automated build to try to get it fails, that really isn't my problem -- 
I've been clear about the dependency: "If you have version X.YZ of 
Module ABC::DEF, then I promise to pass my tests."


Think about the GD modules -- if the GD library isn't installed, GD 
won't build and anything depending on it fails.  Should that fail a 
clean_install?


(Contrast that with SQLite, which installs it for you.)

A FAIL should ONLY mean that some package has told the smoke system that 
it thinks it should be able to pass on your platform, and then it doesn't.


Failures that aren't the fault of the code (platform not compatible, or 
whatever) should be N/A or something else.


I think better granularity would be: "FAIL" -- failed its own test 
suite; "DEPFAIL" -- couldn't get dependencies to pass their test suites; 
and "N/A" -- incompatible platform or Perl version, or not automatically 
chasing dependencies.



If you want to help out, we'd love to see you in irc.perl.net #pita.


Between tuit shortages and the day job (side q: what's this whole $job 
thing I keep seeing in people's posts/blogs?), a realtime channel like 
IRC is hard to keep up with.  Is there a mailing list or wiki or some 
such?  (Subversion repository?)


Regards,
David


Re: Test Script Best-Practices

2006-01-24 Thread David Golden

Jeffrey Thalhammer wrote:

* Should a test script have a shebang?  What should it
be?  Any flags on that?


I often see -t in a shebang.  One downside of the shebang, though, is 
that it's not particularly portable.  As chromatic said, with prove 
it's not really necessary.  (prove -t)



* Should a test script make any assumptions about the
CWD?  Is it fair/safe for the test script to change
directories?


Anything that affects the file system (particularly creating directories 
and files) often needs to change directories as part of the test.


As a side note, I wrote File::pushd to make it easy to change 
directories locally in a block and then snap back to where one started. 
 I find it handy for that kind of testing.


  use File::pushd;

  {
my $dir = pushd( $some_dir ); # change to $some_dir
# do stuff in $some_dir directory
  }
  # back to original directory here

  # convenient for testing:
  {
my $dir = tempd; # change to a temp directory
# do stuff in temp directory
  }
  # back to original directory and temp directory is gone


* What's the best practice for testing-only libraries?
 Where do they go and how do you load them?  Is there
a naming convention that most people follow (e.g.
t::Foo::Bar).


I've personally come to like the "t::Foo::Bar" style, as it is 
immediately obvious that the module in question is test-related.  It's a 
handy affordance.


Regards,
David Golden


Re: Need help diagnosing Test-Simple-0.62 make test error

2006-01-16 Thread David Golden

James E Keenan wrote:

What happens with:  prove -vb t/sort_bug.t


It was in the next section via make with TEST_VERBOSE.  Subtests 
complete successfully then the test dies.



t/sort_bug....1..2
# parent 2267: continue
# kid 1 before eq_set
# parent 2267: continue
# parent 2267: waiting for join
# kid 2 before eq_set
# kid 1 exit
ok 1 - threads exit status is 42
# parent 2267: waiting for join
# kid 2 exit
ok 2 - threads exit status is 42
dubious
Test returned status 255 (wstat -1, 0x)
after all the subtests completed successfully 





Need help diagnosing Test-Simple-0.62 make test error

2006-01-15 Thread David Golden
/usr/lib/perl5/site_perl/5.8.2/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.1/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.5
/usr/lib/perl5/site_perl/5.8.4
/usr/lib/perl5/site_perl/5.8.3
/usr/lib/perl5/site_perl/5.8.2
/usr/lib/perl5/site_perl/5.8.1
/usr/lib/perl5/site_perl/5.8.0
/usr/lib/perl5/site_perl
/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.4/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.3/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.2/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.1/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.5
/usr/lib/perl5/vendor_perl/5.8.4
/usr/lib/perl5/vendor_perl/5.8.3
/usr/lib/perl5/vendor_perl/5.8.2
/usr/lib/perl5/vendor_perl/5.8.1
/usr/lib/perl5/vendor_perl/5.8.0
/usr/lib/perl5/vendor_perl
.



Insights and suggestions are greatly appreciated.

Regards,
David Golden


Re: Compiling Devel::Cover stats across scripts

2005-11-17 Thread David Golden

Paul Johnson wrote:

On Wed, Nov 16, 2005 at 05:46:34PM -0500, David Golden wrote:
I've got a bunch of test files for a distribution that run a script that 
comes with the distribution.  I'd like to test my coverage against that 
script.  I figure that I can just pass -MDevel::Cover to the execution of 
the script if that's also in the HARNESS_PERL_SWITCHES.


I would hope things should work just as you are expecting, that is all
twenty runs should be merged to give combined coverage for the script
and any modules used.


It doesn't appear to work when the test script changes into another 
directory before running the script, even when the -db is explicitly specified.


I've attached some shorter test files outside of a Test::Harness structure 
that illustrate the issue:


* script.pl -- has simple logic to report whether it's currently in the same 
directory as one passed in @ARGV.  This simple branch just illustrates the 
coverage that I'm trying to confirm for this script.


* test1.pl -- calls script.pl with Devel::Cover flags with an absolute path 
to a cover_db (branch 1)


* test2.pl -- changes to a temp directory and then calls script.pl with 
Devel::Cover flags with an absolute path to a cover_db (branch 2)


The Devel::Cover output varies depending on the order in which test1 and 
test2 are run (output edited down to remove what is ignored, etc.).  When 
test1 is run first, test2 summary shows all n/a.  When test2 is run first, 
they both show a summary.  In both cases, running cover gives a message 
about merging, but shows only one half of the data instead of showing 100% 
coverage (even though printed messages confirm both paths are being taken).


CASE 1: test1.pl followed by test2.pl:


$ cover -delete; perl test1.pl; perl test2.pl; cover
Deleting database /home/david/tmp/coverage-test/cover_db

still in original dir

Devel::Cover: Writing coverage database to 
/home/david/tmp/coverage-test/cover_db/runs/1132225902.3809.46276
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt   bran   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
script.pl                      83.3   50.0    n/a  100.0    n/a  100.0   82.6
Total                          83.3   50.0    n/a  100.0    n/a  100.0   82.6
---------------------------- ------ ------ ------ ------ ------ ------ ------

not in original dir

Devel::Cover: Can't find file blib/lib/Storable.pm: ignored.
Devel::Cover: Writing coverage database to 
/home/david/tmp/coverage-test/cover_db/runs/1132225905.3815.31279
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt   bran   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
...p/coverage-test/script.pl    n/a    n/a    n/a    n/a    n/a    n/a    n/a
Total                           n/a    n/a    n/a    n/a    n/a    n/a    n/a
---------------------------- ------ ------ ------ ------ ------ ------ ------


Reading database from /home/david/tmp/coverage-test/cover_db
Devel::Cover: merging data for script.pl into 
/home/david/tmp/coverage-test/script.pl

---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt   bran   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
...p/coverage-test/script.pl   83.3   50.0    n/a  100.0    n/a  100.0   82.6
Total                          83.3   50.0    n/a  100.0    n/a  100.0   82.6
---------------------------- ------ ------ ------ ------ ------ ------ ------



CASE 2: test2.pl followed by test1.pl:

$ cover -delete; perl test2.pl; perl test1.pl; cover  
Deleting database /home/david/tmp/coverage-test/cover_db


not in original dir

Devel::Cover: Writing coverage database to 
/home/david/tmp/coverage-test/cover_db/runs/1132226001.3823.04804
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt   bran   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
...p/coverage-test/script.pl   83.3   50.0    n/a  100.0    n/a  100.0   82.6
Total                          83.3   50.0    n/a  100.0    n/a  100.0   82.6
---------------------------- ------ ------ ------ ------ ------ ------ ------

still in original dir

Devel::Cover: Can't find file blib/lib/Storable.pm: ignored.
Devel::Cover: Writing coverage database to 
/home/david/tmp/coverage-test/cover_db/runs/1132226004.3827.08742
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           stmt   bran   cond    sub    pod   time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
script.pl                      83.3   50.0    n/a  100.0    n/a  100.0   82.6
Total

Compiling Devel::Cover stats across scripts

2005-11-16 Thread David Golden
Before I flounder around to figure this out, I hope that a quick message to 
the list can offer some guidance.


I've got a bunch of test files for a distribution that run a script that 
comes with the distribution.  I'd like to test my coverage against that 
script.  I figure that I can just pass -MDevel::Cover to the execution of 
the script if that's also in the HARNESS_PERL_SWITCHES.
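
In other words, roughly this inside each .t file (a sketch; the script 
name and arguments are placeholders):

  # propagate harness switches (e.g. -MDevel::Cover) to the child perl
  my @switches = split ' ', ( $ENV{HARNESS_PERL_SWITCHES} || '' );
  system( $^X, @switches, 'bin/script.pl', @args );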


If my test file calls the script 20 times testing 20 different execution 
paths, do all the runs get aggregated into cover_db?  Or do I need to create 
separate cover_db_$$ dirs for each of the 20 runs, then use cover at the end 
to merge them together?


Thanks,
David




Re: Private tests

2005-11-15 Thread David Golden

Adam Kennedy wrote:

What about a special environment variable, like RUN_PRIVATE_TESTS?



I've been working on a concept of taggable tests on some of my larger 
commercial stuff, integrating with the Test::More skip() function, and 
some form of environment variables does indeed seem the best way to do 
such a thing.


Isn't Test::Manifest sort of geared to do this kind of thing?  (Or could it 
be extended to do so?)




Re: Spurious CPAN Tester errors from Sep 23rd to present.

2005-10-05 Thread David Golden

Michael G Schwern wrote:
AFAIK there is only one module of consequence which does screen scraping 
on Test::More and that's Test::Builder::Tester (Test::Warn, it turns out, 
fails because of Test::Builder::Tester).  Fix that, upload a new version 
and the problem goes away.


Nit: does Test::Harness count as a module of consequence?

- David Golden


Re: CPANTS new

2005-09-18 Thread David Golden

David Landgren wrote:
Yeah, but I'm loathe to dedicate two separate test files merely to score 
two points of Kwalitee. As it is, I'd just much rather bundle both tests 
in a 00_basic.t file along with all the other standard no-brainer tests.


One option is just to forget about the two points of Kwalitee, of course...

David


Re: Test::Harness Extension/Replacement with Color Hilighting

2005-09-16 Thread David Golden

Andy Lester wrote:

On Fri, Sep 16, 2005 at 03:55:15AM +0300, Shlomi Fish ([EMAIL PROTECTED]) wrote:


Mr. Lester, would you approve of a friendly spin-off of Test::Harness?



Why are you asking if I approve?  You can do whatever you like with the
source code for Test::Harness.


I think a polite question is wonderful (with potential answers ranging from 
"sure" to "hey, that's a cool idea, let's try a merge instead of a fork").

After the recent CPANPLUS::Dist::Build debacle, a little politeness in the 
community is nice to see.


David Golden


Re: New kwalitee test, has_changes

2005-09-15 Thread David Golden

Adam Kennedy wrote:
Rather than do any additional exploding, I'd like to propose the 
additional kwalitee test has_changes. I've noticed that a percentage 
(5-10%) of dists don't have a changes file, so it can be hard to know 
whether it's worth upgrading, or more importantly which version to add 
dependencies for.



I like it.  I think it's consistent with many of the other metrics, which are 
focused on whether the basic structure of a distribution is in place.


(However, I have to confess that when I first read "has_changes", I thought 
you were proposing a kwalitee test to see if there was more than one release 
of a distribution, or more than one release in the last X amount of time. 
Maybe that will become a "has_changed" metric someday -- a rolling stone 
gathers no moss.  I'm kidding, of course.)


David Golden



Re: New kwalitee test, has_changes

2005-09-15 Thread David Golden

Ricardo SIGNES wrote:

* Christopher H. Laco [EMAIL PROTECTED] [2005-09-15T08:23:57]


Would this look for Changes OR ChangeLog?
Both seem to be popular on CPAN.



...and some modules have a HISTORY or CHANGES section of POD, and DBI
has DBI::Changes. 



Though, as with pod and pod coverage tests, a Changes file with "See 
DBI::Changes" or "See Readme" would satisfy the kwalitee test.  It's a hack 
to satisfy the metric, but it has the side benefit of actually providing a 
consistent place for someone to track down the actual location of the change 
log.


Regards,

David Golden



Re: kwalitee: drop Acme?

2005-09-09 Thread David Golden

Adam Kennedy wrote:
Ditch the CPANTS elements that are fail-by-default. By that I mean 
has_test_pod_coverage, is_prereq and possibly also has_test_pod.


Or make is_prereq SO easy to game that it's a nonissue? Why should a 
module depended upon by another author be ranked any higher than one 
that isn't?


I think that pod/pod-coverage is at least arguable.  (Maybe *too* arguable.) 
 I find is_prereq just amusing because I can't affect my own is_prereq 
scores -- only those of others.  So who is *that* helping?


David


Re: kwalitee: drop Acme?

2005-09-08 Thread David Golden

It can't be by the same author, though, to count for is_prereq, right?

So someone needs to create a new CPAN ID, and release a module under that ID 
that prereqs all of CPAN.  Then we'd all get our prereq points.


Probably could be done with a Build.PL that pulls the full module list then 
constructs a massive requires hash.  Unless CPANTS scans for dependencies, 
in which case you'd need to build the .pm file dynamically, too.  And then 
run a cron job to rebuild/re-release with cpan-upload every so often to keep 
it fresh.
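
A sketch of that Build.PL, purely for amusement -- the dist name is 
hypothetical, and the index is assumed to be a local, already-decompressed 
copy of 02packages.details.txt:

  use Module::Build;

  open my $fh, '<', '02packages.details.txt' or die "Can't read index: $!";
  1 while <$fh> =~ /\S/;        # skip the index header paragraph
  my %requires;
  while ( my $line = <$fh> ) {
      my ($module) = split ' ', $line;
      $requires{$module} = 0;   # any version will do
  }

  Module::Build->new(
      module_name => 'Acme::Everything',   # hypothetical
      license     => 'perl',
      requires    => \%requires,
  )->create_build_script;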


David

Michael Graham wrote:

We should at least throw the poor module author's a bone and leave
Acme:: out of this.



Just as long as ACME keeps working for is_prereq, though!

A bunch of us are planning ACME::CGI::Application::Kwalitee, which will
exist solely to require all of the C::A plugins, so we can all get our
'is_prereq' point.

Don't make us release this foolishness outside of ACME::!


Michael



---
Michael Graham [EMAIL PROTECTED]



Re: Why are we adding more kwalitee tests?

2005-09-07 Thread David Golden

chromatic wrote:

Some kwalitee metrics are useful in both places and that's fine.  I just
wonder if some of PANTS should be more private.


I think much of the problem could be solved by separating the metrics from 
the scoring.  If CPANTS just gave the results of various tests so I could 
look up my modules and see which ones I pass and fail, that still helps me 
as a developer if I'm interested in improving my modules.  Where it turns 
into a competition rather than a developer's tool is that the scores are 
added together into one Kwalitee score that assumes (or for which people 
assume):


a) all tests are relevant
b) all tests matter equally (are equally weighted)
c) higher score means higher quality

The scores are available in the database.  There's no reason that someone 
can't use it to generate their own "Qualitee" score or a "core Kwalitee" or 
whatever subset they think matters.  Some of that might be useful for 
installers, and some not.  But if CPANTS itself didn't offer a Kwalitee 
score -- just the results of the tests -- there wouldn't be an implicit 
sanction of a single metric that gets people so riled up.


Regards,
David Golden




Re: Adding more kwalitee tests

2005-09-06 Thread David Golden

Ivan Tubert-Brohman wrote:
Sorry for my ignorance, but I had never even heard of this option. I 
don't find any way of setting it via MakeMaker or Module::Build. Do I 
have to edit META.yml by hand? Would it get overwritten by 'make dist'?


The spec is here:

http://module-build.sourceforge.net/META-spec-v1.2.html

It can be added with Module::Build with the meta_add option.  E.g., as 
part of the arguments to new() in your Build.PL:


meta_add => {
    no_index => { directory => [ qw/examples/ ] },
},

It would be nice if Module::Build added a no_index option directly.  Maybe 
after the 0.27 release.


Regards,
David Golden


Re: Why are we adding more kwalitee tests?

2005-09-06 Thread David Golden

Andy Lester wrote:


Why are we worrying about these automated kwalitee tests?  What will 
happen once we find that DBIx::Wango has only passed 7 of these 23 items 
on the checklist?


I don't have any problem with someone proposing or running these kinds of 
automated tests.  It's helpful feedback to me as a distribution author. 
Whether I choose to do something about the feedback or not is something 
else.  Viewed as a tool for authors, automated kwalitee testing has value. 
As a tool for module users, it's not so clear cut.


I would be concerned if CPANTS Kwalitee showed up on the CPAN page for a 
distribution.  That would make it official and judgmental.  I don't think 
that an arbitrary and controversial (at least in part) metric should be so 
enshrined.  But I think the collection of metrics themselves is 
value-neutral.  If someone wants to write the tests and invest processor 
cycles on it, more power to them.


Regards,
David Golden


Re: Adding more kwalitee tests

2005-09-06 Thread David Golden

Adam Kennedy wrote:

missing_no_index:
Only the libraries (and .pod docs) in your dist should be indexed; things 
in inc/ and t/ and examples/ etc. should NOT be indexed. Thus, 
there should be a no_index entry in the META.yml for all of these 
directories (if they have .pm files in them).


I like it.  (N.B. Since t/ is not indexed already and many people write test 
support modules, I'd recommend against requiring those to be explicitly 
included in a no_index.)




has_perl_dependency:


My initial thought was "great!"  Then I realized that encouraging people to 
stick one in may be worse than having none at all, as quick (but 
incorrect) information may be less desirable than no information.



---

bad_perl_dependency:

The caching and mass processing facilities in PPI are getting close to 
being able to regularly process all of CPAN. The first demo of this I'm 
hoping to do is a use of Perl::MinimumVersion to find cases where a 
module is listed as working with (for example) Perl 5.005, but the 
module syntax shows that it needs 5.006. (and thus the version 
dependency is wrong).


I later plan to expand this to include cases where it says it should run 
with 5.005, but it has a dependency on some other module (recursively) 
that needs 5.006, or you accidentally put 5.006 code in your test 
scripts, and thus it fails despite the code in the actual module being 
ok.


I like this a lot.  I'd drop has_perl_dependency and just have 
bad_perl_dependency fail if there is no perl dependency listed.  I.e., 
assume that no perl dependency is equivalent to "use perl 0" or maybe some 
arbitrarily old version of perl 5 (e.g. 5.004).


Regards,
David Golden



Re: Test::Code

2005-08-12 Thread David Golden

Ivan Tubert-Brohman wrote:

Isn't

  ok defined *::is_code{CODE};

just a convoluted way of saying

  ok defined is_code;


Won't is_code get called that way?  Should this be:

ok defined \&is_code;


David Golden


Re: Inline POD vs not (was Re: Modules::Starter question)

2005-08-08 Thread David Golden

Ivan Tubert-Brohman wrote:
* The code gets lost among the documentation, as often you have more 
documentation than code. Syntax highlighting reduces the problem, but 
the POD still takes half the screen if you have short subs.


Another option would be to dust off the code folding features of my editor 
and see if they work for hiding inline POD temporarily.


As a vim user, I found it helpful to edit my perl.vim syntax file like so:

  " Use only the bare minimum of rules
- if exists("perl_fold")
+ if exists("perl_fold_pod")
    syn region perlPOD start="^=[a-z]" end="^=cut" fold
  else
    syn region perlPOD start="^=[a-z]" end="^=cut"
  endif

Then, by setting "let perl_fold_pod=1" in my .vimrc, all the pod folds up by 
default, leaving only code.  (I don't fold my code, so folding just hides my 
inline pod.  If you want to fold both, don't make the edit to perl.vim and 
set "let perl_fold=1".)  When I want to edit a folded pod section, "zo" opens 
a fold and "zc" closes it.  "zn" opens them all and "zN" closes them all. 
Makes inline pod less frustrating, particularly if you write lengthy docs.


Regards,
David Golden


Re: Basic Testing Questions

2005-07-18 Thread David Golden

Brett Sanger wrote:

I have some related tests.  For example, to test the account management
of our LDAP administration tools on the website, I have a test to create
an account, test various edit options, then delete the account.  (This
is testing the create and delete as well, so I don't want to use an
existing account).  This means that I either write a very large,
monolithic .t file which reduces my ability to test single functions
of the interface, or I write separate .t files for each function.  In
the latter case, I have to either be sure to run the create at the
beginning and the delete at the end (see prove and order of tests at the
start of this email) or I have to copy the create and delete code into
each test, making maintenance harder.  Is there a common way to
modularize code in test files?  Is this just a do() call?


You might look at Test::Class, which has some nice features for handling 
repetitive setup/teardown needs.


A more manual approach is to put your common testing code in its own module 
and use that in each of your test scripts.  If you're using prove, you 
should be able to create t/Common.pm, define "package t::Common;" in that 
file, and then "use t::Common" in your test scripts.  (This assumes that you 
run your tests from the directory above t/ so that "use t::Common" finds your 
module.)
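For the LDAP example, t/Common.pm might look something like this (all names 
invented for illustration):

  package t::Common;
  use strict;
  use Exporter 'import';
  our @EXPORT_OK = qw( create_test_account delete_test_account );

  sub create_test_account {
      my ($name) = @_;
      # ... drive the LDAP admin tool to create the account ...
      return $name;
  }

  sub delete_test_account {
      my ($name) = @_;
      # ... drive the LDAP admin tool to remove the account ...
      return 1;
  }

  1;

Each .t script then calls create_test_account() up front and 
delete_test_account() at the end, so the create/delete logic lives in 
exactly one place.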


Regards,
David Golden


Re: Devel::Cover Problem: testing || for a default value.

2005-07-12 Thread David Golden

[EMAIL PROTECTED] wrote:


I respectfully disagree. I think you're focusing too much on the low-level

behavior of || returning one of its operands. That behavior makes Perl's syntax
flexible and a little ambiguous. Because Perl doesn't make a distinction between
"assign with a default value" and "perform a boolean OR", Devel::Cover has to
play it conservatively. 


You shouldn't shift the burden to somewhere else (where $foo is subsequently
used) either, because you don't know how it will be used. It could be
 1) a boolean in a condition
 2) used in another $a = $b || $c type expression
 3) an output that only appears in a print() statement
 ...

In any of these cases, it's possible that $foo is really a boolean but by the
method you proposed $foo would only be tested for taking both true and false
values in the first one.

I appreciate your point of view.  My suggestion (as I stated in my 
original post) is conditional on being able to determine the context in 
which the || is used.  If you read the documentation for Want.pm or 
perldoc perlguts (search for "context propagation") there seems to be 
the possibility of doing this -- though I freely admit that I don't 
understand perlguts well enough to say.


Regards,
David Golden


Re: Devel::Cover Problem: testing || for a default value.

2005-07-11 Thread David Golden

Michael G Schwern wrote:


What's missing is a way to let Devel::Cover know that a bit of coverage is
not necessary.  The first way to do this which pops into my mind is a comment.

my $foo = $bar || default();  # DC ignore X|0


 

I posted an item about this usage of the || operator on the Devel::Cover 
RT queue several months ago:


http://rt.cpan.org/NoAuth/Bug.html?id=11304

My suggestion turns on the question of whether it's possible to 
differentiate the context between a true boolean context ( foo() if $p 
|| $q ) and a pseudo-boolean context that is really part of an 
assignment ( my $foo = $p || $q ) or just a standalone statement ( $p 
|| print $q ). 

Want.pm seems to imply that this might be possible, but I don't know the 
guts of Perl well enough.  The concept I had was that *EXCEPT* in true 
boolean context, the $p || $q idiom is (I think) pretty much logically 
equivalent to the ternary operation $p ? $p : $q (ignoring potential 
side effects) and thus the truth table in this situation only needs to 
include the first operand, thus avoiding the false alarm.


Regards,
David Golden



Re: Devel::Cover Problem: testing || for a default value.

2005-07-11 Thread David Golden

Michael G Schwern wrote:


my $foo = $p || $q is not boolean. I'm not even sure you can call it
pseudo-boolean without understanding the surrounding code. How do you
know that $q can never be false?

The other examples in the ticket play out the same way:

bless {}, ref $class || $class;

$class being false would be quite the error. Devel::Cover can't know that
in most cases you want to ignore the 0 || 0 case because you assume $class
to always be some true value and don't think its worth testing that it
might be false.

I think this is a coverage vs correctness distinction.  The idea that I 
was trying to convey is that while these expressions use a boolean 
operator for a shortcut, they aren't really about truth vs. falsity of 
the overall expression, *except* when they are being used as part of a 
conditional statement.  From a coverage perspective, what should matter 
in my $foo = $p || $q is that $foo takes on the values of both $p and 
$q at some point during the test suite, not whether or not $foo takes on 
both true and false values -- coverage of that condition should be 
checked when $foo is used in another expression. 

In the bless case -- you're right that the case of $class being false 
may be of interest, but that's not what this common idiom actually 
does.  The code will blithely pass a false value to bless (with 
potentially unexpected results depending on whether $class is 0, "", or 
undef).  That failure is an example of where correctness can't be 
validated by coverage -- where the error lies between the ears of the 
programmer.  :-)  If $class were explicitly tested, then Devel::Cover 
would pick it up properly, such as in bless {}, ref $class || $class || 
die.



Want.pm seems to imply that this might be possible, but I don't know the
guts of Perl well enough. The concept I had was that *EXCEPT* in true
boolean context, the $p || $q idiom is (I think) pretty much logically
equivalent to the ternary operation $p ? $p : $q (ignoring potential
side effects) and thus the truth table in this situation only needs to
include the first operand, thus avoiding the false alarm.



That assumption only works in void context.

# Don't care about the return value of die.
open FILE, $foo || die $!;

# DO care about what $q is
foo( $p || $q );

Following my logic above, here I'd argue that we *don't* care what $q is 
from a coverage perspective, only that both $p and $q are passed to foo 
at some point.  However, if for foo() we have:


 sub foo { my $val = shift; print "Hi!" if $val }

We now care about what $val is from a coverage standpoint, so we get the 
same coverage testing that way and we pick up the case we want where $q 
is false in the original call.  Unfortunately, this doesn't work with 
Devel::Cover when foo() is a built-in or subroutine imported from 
another module.  In such a case, one option could be to use foo( $p || 
$q || '' )  or, more precisely, foo( $p || $q || $q ) -- which are 
awkward, but are actually forcing the program to evaluate $q as true or 
false.


I guess that raises the question for coverage testers of which case is more 
common -- caring only that $p || $q takes on the value of both $p and $q, 
or caring that both $p and $q take on both true and false values.


Regards,
David Golden



Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
This is an interesting point and triggered the thought in my mind that 
CPANTS Kwalitee is really testing *distributions* not modules -- i.e. 
the quality of the packaging, not the underlying code.  That's 
important, too, but quite arbitrary -- insisting that distributions test 
pod and pod coverage is arbitrary.  If CPANTS insisted that all modules 
in a distribution be in a lib directory, that would be arbitrary, too, 
but not consistent with general practice (fortunately, it's written to 
allow a single .pm in the base directory, otherwise there has to be a 
lib directory).

The point I'm making is that CPANTS -- if it is to stay true to purpose 
-- should stick to distribution tests and try to ensure that those 
reflect widespread quality practices, not evangelization (however well 
meaning) to push an arbitrary definition of quality on an unruly 
community.  Devel::Cover is a useful tool -- but it pushes further and 
further away from a widespread distribution-level measure of quality.  
(Whereas I see pod testing as analogous to a compilation test and pod 
coverage testing being a documentation test -- both of which are 
reasonable things to include in a high quality test suite.)

David Golden
Christopher H. Laco wrote:
Because they're two separate issues.
First, checking the pod syntax is ok for the obvious reasons. Broken 
pod leads to doc problems.

Second, we're checking that the AUTHOR is also checking his/her pod 
syntax and coverage. That's an important distinction.

I would go so far as to say that checking the author's development 
intentions via checks like Test::Pod::Coverage, Test::Strict, 
Test::Distribution, etc. is just as important, if not more so, than just 
checking syntax and that all tests pass.

Given two modules with a passing basic.t, I'd go for the one with all 
of the development side tests over the other. Those tests listed above 
signal [to me] that the author [probably] pays more loving concern to 
all facets of their module than the one with just the passing basic.t

-=Chris



Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
Let's step back a moment.
Does anyone object that CPANTS Kwalitee looks for tests?  Why not apply 
the same arguments against has_test_* to tests themselves?  What if I, as 
a developer, choose to run tests as part of my development but don't ship 
them?  Why should I make users have to spend time waiting for my test 
suite to run?

Keeping in mind that this is a thought exercise, not a real argument, 
here are some possible reasons (and counter-arguments) for including 
test files in a distribution and for Kwalitee to include the existence 
of tests:

* Shipping tests is a hint that a developer at least thought about 
testing.  Counter: It's no guarantee of the quality of testing and can 
be easily spoofed to raise the score.

* Tests evaluate the success of the distribution against its design 
goals given a user's unique system and Perl configuration.  Counter: 
developers should take responsibility for ensuring portability instead 
of hoping it works until some user breaks it.

The first point extends very nicely to both has_test_* and coverage 
testing.  Including a test for pod/pod-coverage shows that the developer 
thought about it.  It doesn't mean that a developer couldn't do those 
things and just not create a *.t file for them, of course, or create a 
*.t file for them and not do those things, either.  The presence of a 
test is just a sign -- and one that doesn't require code to be run to 
determine Kwalitee.  The flip side, of course, is that by including test 
that are necessary for CPANTS, a developer inflicts them on everyone who 
uses the code.  That isn't so terrible for pod and pod coverage testing, 
but it's a much bigger hit for Devel::Cover.

Why not find a way to include them in the META.yml file and have the 
build tools keep track of whether pod/pod-coverage/code-coverage was 
run?  Self-reported statistics are easy to fake, but so are the 
has_test_* Kwalitee checks, as many people have pointed out.  Anyone who 
is obsessed with Kwalitee scores is going to fake the other checks, 
too.  And that way, people who have customized their environments can 
report that they are doing it.
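For instance, something like this fragment of META.yml -- the keys are 
entirely made up for illustration, and nothing in the spec supports them 
today:

  name: Foo-Bar
  version: 0.01
  author_checks:
    pod_syntax: 1
    pod_coverage: 1
    code_coverage: 87.5   # percent, from the author's own Devel::Cover run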

As to the benefits of having Devel::Cover run on many environments and 
recording the output: rather than suggesting developers put it in a *.t 
file -- which forces all users to cope with it -- why not build 
it into CPANPLUS as an option, along the lines of how test reporting is 
done?  Make it a user choice, not a mandated action.

Ironically, for all the skeptical comments about "why a scoreboard?" -- 
the fact that many people care about the Kwalitee metric suggests that 
it does serve some inspirational purpose.

Regards,
David Golden



Re: [Module::Build] Re: Test::META

2005-04-01 Thread David Golden
Ken Williams wrote:
On Mar 30, 2005, at 6:16 PM, Michael G Schwern wrote:
On Wed, Mar 30, 2005 at 05:53:37PM -0500, Randy W. Sims wrote:
Should we completely open this up so that requires/recommends/conflicts
can be applied to any action?
install_recommends => ...
testcover_requires => ...
etc.

This sounds useful and solves a lot of problems at one sweep.  You 
can use
the existing dependency architecture to determine what needs what.  
Such as
testcover needs both test_requires and testcover_requires.

There's a problem with this that I'm not sure how to solve: what 
happens when, as part of refactoring, a chunk of one action gets 
factored out to become its own sub-action?  The dependency may well 
pertain to the new sub-action instead of the original action, but 
distribution authors won't have any way to know this - or even if they 
did, they couldn't declare it in a forward-compatible way.

I freely admit that I haven't been following this thread closely (I 
guess Ken's posts have a lower activation energy for me), but the 
suggested approach sounds way overengineered.  How many modules really 
need this kind of thing?  I'm not sure that adding complexity to the 
requirements management system for those learning/using Module::Build is 
worth it for what I imagine to be relatively few modules that would wind 
up using such functionality.

I'd rather see requires/recommends kept at a high level and let 
individual actions/tests check for what they need and be smart about how 
to handle missing dependencies.

Regards,
David


Re: Test::Builder->create

2005-03-08 Thread David Golden
Having an instance is great.
Could we also consider moving away from singletons that are hard-wired 
to Test::Builder? By that I mean make Test::Builder a 'factory' that 
gives either a default, plain vanilla Test::Builder object or else a 
specific subclass?  E.g.,

 use Test::Builder 'Test::Builder::Subclass'; #sets up the singleton as 
a subclass
 use Test::More 'no_plan'; # uses the subclass object
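A bare-bones sketch of the factory behavior I have in mind -- hypothetical 
code, not how Test::Builder actually works today:

  package Test::Builder;

  my $Singleton;

  sub import {
      my ($class, $subclass) = @_;
      # the first import wins; later callers share whatever was set up
      $Singleton ||= ($subclass || $class)->create if $subclass;
  }

  sub new {
      my $class = shift;
      return $Singleton ||= $class->create;   # plain vanilla default
  }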

Also, in thinking through the reorg of Test::Builder, it would be great 
if the notion of success or failure could be isolated from any 
particular form of output.  That would mean that someone could use 
Test::Builder::TAP (for TAP style output) or Test::Builder::HTML for 
custom output purposes.  (As opposed to the current approach of using 
Test::Builder to speak TAP to Test::Harness to gather results to either 
print or do other things with.)
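As a strawman, the output side of that split might look like this (an 
entirely hypothetical API):

  package Test::Builder::Output::TAP;

  sub new { bless {}, shift }

  # takes a structured result record and renders it; an HTML or other
  # subclass would override just this method
  sub emit {
      my ($self, $r) = @_;   # $r: { ok => 1, number => 1, name => '...' }
      printf "%sok %d - %s\n",
          ($r->{ok} ? '' : 'not '), $r->{number}, $r->{name};
  }

  1;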

Regards,
David Golden



Re: Test::Builder->create

2005-03-08 Thread David Golden
Michael G Schwern wrote:
On Tue, Mar 08, 2005 at 03:39:17PM -0500, Michael Graham wrote:
 

Something that's been sitting in the Test::Builder repository for a while
now is Test::Builder->create.  Finally you can create a second Test::Builder
instance.  I haven't done much with it and I think $Level is still global
across all instances (bug) but I figured folks would want to play with it,
particularly the Test::Builder::Tester guys.
 

Would this make it possible to run many test scripts (each with its own
plan) within the same perl process?  'Cos that would be nifty.
   

Yes.  Though beyond testing testing libraries I don't know why you'd want to
do that.
 

The use case I've been pondering is to be able to better control the 
granularity of my tests within a particular script.  Stuff like 
Test::Class and Test::Block gets closer but not quite to what I would 
like.  I'd like to be able to define a block of tests and either 
report that the whole block succeeded or else show me that it failed 
with a diagnostic about the individual tests within it.  In effect, I'd 
like to be able to localize verbosity.  I'm in the middle of cobbling 
up a module to do it -- the approach I have in mind is storing the 
blocks as a code reference, running the first block (in an eval to trap 
dies), storing away a copy of the Test::Builder results, resetting 
Test::Builder, running the next, etc., etc., then at the end resetting 
Test::Builder one final time and passing/failing for each result set 
(with diagnostics that show the verbose results of the individual tests 
passing or failing.)  Since I'm trying test-first development, I'm 
currently hung up in the middle of figuring out how to test that it's 
doing what I want before I write it.  I think the approach works, 
but all this mucking about in the internals of Test::Builder feels like 
voodoo.
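For the curious, here's the rough shape of what I'm attempting, simplified 
to snapshot details() rather than reset the whole object (all names 
hypothetical, and as I said, untested voodoo):

  use Test::Builder;
  my $Test = Test::Builder->new;

  sub block_ok {
      my ($name, $code) = @_;
      my $start = $Test->current_test;    # tests run before the block
      eval { $code->() };                 # run the block, trapping dies
      my @all    = $Test->details;        # one hashref per test so far
      my @block  = @all[ $start .. $#all ];
      my @failed = grep { !$_->{ok} } @block;
      # one summary test for the block; diagnostics only on failure
      $Test->ok( !$@ && !@failed, $name )
          or $Test->diag( map { "failed: $_->{name}\n" } @failed );
  }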

Regards,
David


Re: Test::Builder->create

2005-03-08 Thread David Golden
chromatic wrote:
On Tue, 2005-03-08 at 14:54 -0500, David Golden wrote:
 

Also, in thinking through the reorg of Test::Builder, it would be great 
if the notion of success or failure could be isolated from any 
particular form of output.  That would mean that someone could use 
Test::Builder::TAP (for TAP style output) or Test::Builder::HTML for 
custom output purposes.  (As opposed to the current approach of using 
Test::Builder to speak TAP to Test::Harness to gather results to either 
print or do other things with.)
   

Hm, this is less convincing to me.  Still, you can do something
different with Test::Builder::TestOutput if you want.
 

Let me say it a different way.  Right now, Test::Builder and 
Test::Harness (et al.) are tightly coupled.  It would be nice to break 
or at least reduce that coupling.  Stuff deep in Test::Builder assumes 
that the output is TAP.  For example, consider the way that ok() doesn't 
just store a result in the results array but goes to all the trouble to 
print it out in TAP format with both _print() and _print_diag(), too.  
(Presumably, that's what's being refactored into TestOutput?) 

One of the side effects of this is that diagnostic messages from 
ok() blend in with any user-generated diagnostics, and that's the only 
way to get at them.  details() doesn't actually give details about why 
tests failed, either, which would be nice for anything that is 
extracting details for some custom purpose.  It would seem to make sense 
to put that in the "reason" slot of the details, except that most 
derived tests are written to use ok($expr) or diag($message).  That's 
equivalent to (record details and print stuff out) + (print other stuff 
out).  It would be nice if there were better symmetry so that something 
like is() could stick its "got X, expected Y" message in the "reason" field 
of the details array if it fails.
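Hypothetically, that symmetry might look like this -- assuming details() 
hands back the live result records, which is itself part of the problem:

  sub my_is {
      my ($got, $expected, $name) = @_;
      my $ok = $Test->ok( $got eq $expected, $name );
      # stash the explanation in the structured record, not just on STDERR
      ( $Test->details )[-1]{reason} = "got '$got', expected '$expected'"
          unless $ok;
      return $ok;
  }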

David




Re: Test::Builder->create

2005-03-08 Thread David Golden
Michael G Schwern wrote:
On Tue, Mar 08, 2005 at 02:54:46PM -0500, David Golden wrote:
 

Could we also consider moving away from singletons that are hard-wired 
to Test::Builder? By that I mean make Test::Builder a 'factory' that 
gives either a default, plain vanilla Test::Builder object or else a 
specific subclass?  E.g.,

use Test::Builder 'Test::Builder::Subclass'; #sets up the singleton as 
a subclass
use Test::More 'no_plan'; # uses the subclass object
   

This is too simplistic for Various Reasons that chromatic and I are still
hashing out.
 


Care to hash in public?
David


Re: Test::Builder->create

2005-03-08 Thread David Golden
chromatic wrote:
On Tue, 2005-03-08 at 17:31 -0500, David Golden wrote:
 

Yeah, it would be nice if we had a better way to handle this.  Perhaps
changing the idiom to:
$Test->ok( $dressed_up, 'How nice do you look?' ) ||
$Test->reason( 'Refusing to wear a tie and jacket' );
and encouraging Test::* module authors to adopt that type of error
reporting will help.
 

Ick! No!  My point was to keep it atomic -- by putting $Test->reason as 
its own separate call then you need a heuristic that says that any 
reasons apply to the previously seen test and so on.  (And what do you 
do if you get two reason calls?  Append?  Replace?  What about a reason 
call before any tests?)  It's the diag problem all over again.

You could go an OO route, setting up a results object then filing that 
as the details, e.g. (very off the cuff and not that well thought out):

   my $r = $Test->new_result($expr, $name);  # returns a new 
Test::Builder::Result object that stores pass/fail and a name
   $r->reason($message) if $r->failed;  # attach a reason
   $Test->record($r);  # stick it in the details array (which now holds 
Result objects)

Regards,
David


Re: Test::Builder->create

2005-03-08 Thread David Golden
chromatic wrote:
On the other hand, it doesn't do anything differently with TAP as
currently defined, and I share Schwern's case of howling fantods at
recommending that people look in Test::Builder's test results list.  Out
of process interpreting and reporting feels so much nicer.
 

An addendum on rereading this part above:
This is another example of the coupling.  Even if out-of-process is 
nicer (and I agree, for the most part), why not put all the information 
acquired during testing into the test results list and then serialize 
that over the output stream?  There are pros and cons to any alternative 
to TAP, of course, but as long as Test::Builder is tightly coupled to 
TAP and as long as so many Test modules are written to use 
Test::Builder, it makes experimentation or migration to alternatives 
rather difficult.

The features that I think we're looking at are these:
* Provide a way for a controller to execute a test script and retrieve 
data over a given channel
* Provide various test functions to a test script from one or more test 
modules
* Provide a way for test scripts or test modules to designate a distinct 
location where subsequent results are recorded
* Provide a way for test modules to record the results and details of a test 
function to the previously designated location
* Provide a way for test modules to serialize the results of one or more 
locations in a given protocol over a given channel
* Provide a way for a controller to interpret results in a given protocol
* Provide a user-interpretable test report

Just to be playful with concepts:
* Test::Answer -- holds the details of a particular test 
(pass/fail/skip/todo)

* Test::Scoresheet -- holds a collection of Test::Answers
* Test::Booklet -- sets up a default Test::Scoresheet object to hold 
results, creates and maintains new ones as requested and manages an 
active Scoresheet for recording Answers.  On exit, serializes all 
Scoresheets.  The serialization method is abstract and subclasses must 
implement particular protocols with helper modules (e.g. 
Test::Protocol::TAP).

* Test::Proctor -- fires up a test script with a particular subclass of 
Test::Booklet (e.g. Test::Booklet::TAP) and records the response. (perl 
-MTest::Booklet::TAP test.t)

* Test::ReportCard -- parses results received from Test::Proctor.  
Subclasses parse particular protocols with helper modules and produce a 
human/machine readable report.  Sub-subclasses could offer alternate 
report formats.

(Ideally, we'd even abstract the channel in there so we could do it in a 
way other than stdout).

IMO, the more we can keep these functions uncoupled and generic the 
better -- and we can let specific implementations provide the details.  
It certainly sounds like chromatic and Michael have got things headed in 
the right direction and I look forward to seeing what we wind up with.

David


Re: Test::Builder->create

2005-03-08 Thread David Golden
I was playfully suggesting that design simply as an abstraction of what 
Test::Builder and Test::Harness provide.  No way would I inflict that on 
an end-user!

David
Ofer Nave wrote:
David Golden wrote:
Just to be playful with concepts:
* Test::Answer -- holds the details of a particular test 
(pass/fail/skip/todo)

* Test::Scoresheet -- holds a collection of Test::Answers
* Test::Booklet -- sets up a default Test::Scoresheet object to hold 
results, creates and maintains new ones as requested and manages an 
active ScoreSheet for recording Answers.  On exit, serializes all 
Scoresheets.  The serialization method is abstract and subclasses 
must implement particular protocols with helper modules (e.g. 
Test::Protocol::TAP).

* Test::Proctor -- fires up a test script with a particular subclass 
of Test::Booklet (e.g. Test::Booklet::TAP) and records the response. 
(perl -MTest::Booklet::TAP test.t)

* Test::ReportCard -- parses results received from Test::Proctor.  
Subclasses parse particular protocols with helper modules and produce 
a human/machine readable report.  Sub-subclasses could offer 
alternate report formats.

(Ideally, we'd even abstract the channel in there so we could do it 
in a way other than stdout).

IMO, the more we can keep these functions uncoupled and generic the 
better -- and we can let specific implementations provide the 
details.  It certainly sounds like chromatic and Michael have got 
things headed in the right direction and I look forward to seeing 
what we wind up with.

That's too much work.  If I only have a few questions for my module, 
can I just use the simpler Quiz::* system?  Especially for quick 
one-off tests...

-ofer



Re: testing Parallel::Simple

2005-03-03 Thread David Golden
Ofer Nave wrote:
I've written a new module for CPAN called Parallel::Simple.  It's my 
first CPAN module, and I have not yet uploaded it because I have not 
yet written any formal tests for it (although I use it in production 
currently).  I've also never written any formal tests in perl at all 
(using the Test::* libraries).

I've skimmed the Test::* docs, including Tutorial, read various 
articles on perl.com, and I have the Sam Tregar book, so I'm confident 
I can figure out how to use the tools.  The question remains as to how 
to apply them.  In particular, how would you design formal tests for 
code that forks?

Not to be facetious, but what are your requirements?  What features is 
your code trying to deliver?  Forget about testing whether your code 
does what you think -- test whether your code does what you intended and 
documented.  (Plug for test-first design is that it forces you to 
determine your spec in a formal fashion before you write your code!)  At 
the core, you seem to have two overarching features:

* given some arguments with code blocks, execute the arguments, and 
store their exit values
* execute the code in parallel

The first is easy enough to test for variations of your parameter 
styles, return values, code that dies, etc.  As long as you get what you 
expect from the black box, you pass the test, whether or not it's 
actually forking in the background.

For the second, how do you confirm it's running in parallel?  Or rather, 
how do you know that it's running forked?  At first order, you need to 
find a way to get the pid back out of each of the children.  Maybe use 
Test::Output to capture STDOUT and have each function print its pid?  
If you use two functions, you should expect two lines of output each 
with a different pid.
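Here's a sketch of that second test, with one caveat: in-memory capture 
tricks generally won't see output printed from a child process, so 
redirecting STDOUT to a real file is safer.  (prun() is my guess at the 
Parallel::Simple interface -- adjust to the real API.)

  use Test::More tests => 1;
  use File::Temp qw(tempfile);
  use Parallel::Simple qw(prun);   # assumed interface

  my ($tmp_fh, $file) = tempfile();
  open my $saved, '>&', \*STDOUT or die $!;   # save the real STDOUT
  open STDOUT, '>', $file        or die $!;   # children inherit this fd
  prun( sub { print "$$\n" }, sub { print "$$\n" } );
  open STDOUT, '>&', $saved      or die $!;   # restore

  open my $in, '<', $file or die $!;
  my %pids = map { chomp; $_ => 1 } <$in>;
  is( scalar keys %pids, 2, "two distinct child pids -- it really forked" );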

Regards,
David



Re: Testing What Was Printed

2005-02-11 Thread David Golden
My $0.02:
Very nice integration of IO::Capture.
I think this is very promising, but all the start(), stop() calls seem 
overly repetitive to me.  What about refactoring it into a set of test 
functions that handle it for the user automatically?  Just quickly off the 
cuff, what about a test module (Test::Output?  Test::Print? ) that provided 
functions like this:

stdout_is    { fcn() } $string, "comment";            # exact
stdout_like  { fcn() } qr/regex/, "comment";          # regex match
stdout_count { fcn() } qr/regex/, $count, "comment";  # number of matches
stdout_found { fcn() } qr/regex/, [EMAIL PROTECTED], "comment"; # list of matches
# repeat for stderr...
If you prototype it to take a code reference, then you can capture the 
results of dereferencing that code reference behind the scenes.  (Using 
stdout_count, if the regex is qr/\n/, you get the line count.)

You can provide a utility function that returns a string with the output for 
people to write custom tests against:

stdout_capture  { fcn() };
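To make the proposal concrete, here's roughly how the prototyped functions 
could be built on IO::Capture -- a sketch only; I'm assuming read() returns 
the captured lines without newlines, so check its docs before trusting the 
join:

  use IO::Capture::Stdout;
  use Test::Builder;

  sub _capture {
      my ($code) = @_;
      my $capture = IO::Capture::Stdout->new;
      $capture->start;
      $code->();
      $capture->stop;
      return join "\n", $capture->read;   # assumption: lines come back sans newlines
  }

  sub stdout_is (&$;$) {
      my ($code, $expected, $name) = @_;
      Test::Builder->new->is_eq( _capture($code), $expected, $name );
  }

  sub stdout_like (&$;$) {
      my ($code, $regex, $name) = @_;
      Test::Builder->new->like( _capture($code), $regex, $name );
  }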
Regards,
David Golden
James E Keenan wrote:
And here are the fruits of my application of IO::Capture:  a module with 
three subroutines which have proven useful in the project I'm working on 
for my day job.

The full module is here: 
http://mysite.verizon.net/jkeen/perl/modules/misc/TestAuxiliary-0.01.tar.gz

Here is the SYNOPSIS, which includes simple examples of each function:
use Test::More qw(no_plan);
use IO::Capture::Stdout;
use TestAuxiliary qw(
verify_number_lines
verify_number_matches
get_matches
 );
$capture = IO::Capture::Stdout->new();
$capture->start();
print_greek();
$capture->stop();
is(verify_number_lines($capture), 4,
    "number of screen lines printed is correct");
my @week = (
    [ qw| Monday    Lundi    Lunes     | ],
    [ qw| Tuesday   Mardi    Martes    | ],
    [ qw| Wednesday Mercredi Miercoles | ],
    [ qw| Thursday  Jeudi    Jueves    | ],
    [ qw| Friday    Vendredi Viernes   | ],
    [ qw| Saturday  Samedi   Sabado    | ],
    [ qw| Sunday    Dimanche Domingo   | ],
);
$capture->start();
print_week([EMAIL PROTECTED]);
$capture->stop();
my $regex = qr/English:.*?French:.*?Spanish:/s;
is(verify_number_matches($capture, $regex), 7,
    "correct number of forms printed to screen");
$regex = qr/French:\s+(.*?)\n/s;
my @predicted = qw| Lundi Mardi Mercredi Jeudi
    Vendredi Samedi Dimanche |;
ok(eq_array([EMAIL PROTECTED], get_matches($capture, $regex)),
    "all predicted matches found");
sub print_greek {
local $_;
print "$_\n" for (qw| alpha beta gamma delta |);
return 1;
}
sub print_week {
my $weekref = shift;
my @week = @{$weekref};
for (my $day=0; $day <= $#week; $day++) {
    print "English:  $week[$day][0]\n";
    print "French:   $week[$day][1]\n";
    print "Spanish:  $week[$day][2]\n";
    print "\n";
}
return 1;
}
If you like these, or if you have similar subroutines which can extend 
the functionality of IO::Capture::Stdout, please let me know.  Perhaps 
we can work this up into a full-scale CPAN module or subclass of 
IO::Capture.

Thank you very much.
jimk


Re: Test names/comments/whatever?

2005-02-05 Thread David Golden
If this discussion means the voting has re-opened, I'm in favor of 'label' 
as it implies an identifying description, but also connotes something brief.

David
Stevan Little wrote:
I sent Schwern a patch to change 'names' to 'description', but then Andy 
brought up the idea of 'labels'. At the time, Schwern said it was 'in 
the pipeline', but I expect it's actually been moved out since.

Personally, I view them as 'descriptions' since that's what I usually 
write. But 'labels' makes sense too, if you use them that way. However, 
'name' just never made sense to me, and initially it was a source of 
confusion when I first read Test::Tutorial and co.

I don't mind patching it again once a word has been chosen (only took 
about 10 minutes really), we just need to decide on what that word is.

- Steve
On Feb 5, 2005, at 11:03 AM, Ovid wrote:
Has there been any final decision as to what to call test names?  There
was quite a bit of discussion, but I don't recall the resolution.
Cheers,
Ovid
=
If this message is a response to a question on a mailing list, please 
send
follow up questions to the list.

Web Programming with Perl -- http://users.easystreet.com/ovid/cgi_course/


Re: Test::Unit, ::Class, or ::Inline?

2005-01-26 Thread David Golden
Andy Lester wrote:
I would love to hear your thoughts and ideas on structures of results.
Wild brainstorming here -- the issue seems to be with the fact that test 
success and failure is associated with a printed result.  What if the tests 
instead pushed a results object onto some global data stack?

Here's a quick conceptual example with three files (included at the end with 
output):

  * Tester.pm -- provides test_file and ok
  * example.t -- a testfile using Tester
  * test_runner.pl -- runs example.t and prints output
While this simple example just has the test function shove a reference to a 
hash onto a global array @{$Tester::RESULTS{$filename}}, the test function 
could just as easily create an object (Test::Object::Ok, Test::Object::Is, 
etc.) that holds relevant data for that type of test and push that onto the 
array.  E.g. for is() it would hold the "got" and "expected" results.  Then 
there could be separate modules that interpret the array of objects and 
either print them out as text with statistics, or print them as HTML, etc. 
 Or the objects themselves could be required to stringify or htmlify, or 
whatever.

Clearly, the devil is in the details and this is a really simple example, 
but if one is not averse to playing with some globals, it seems doable. 
(Albeit requiring writing a complete alternative to Test::Harness...)

Regards,
David Golden
-
### Tester.pm ###
package Tester;
use Exporter 'import';
our @EXPORT = qw( test_file ok );
%Tester::RESULTS = ();
sub test_file {
my $tgt = shift;
$Tester::RESULTS{$tgt} = [];
do $tgt;
return @{$Tester::RESULTS{$tgt}};
}
sub ok {
my ($val,$name) = @_;
my ($package, $filename, $line) = caller;
my $result = $val ? 1 : 0;
my $hr = {
    filename => $filename,
    line     => $line,
    result   => $result,
    name     => $name,
    data     => $val,
};
push @{$Tester::RESULTS{$filename}}, $hr;
}
1;
-
### test_runner.pl ###
#!/usr/bin/perl
use warnings;
use strict;
use Tester;
use YAML;
my @results = test_file("example.t");
print Dump(@results);
-
### example.t ###
use lib .;
use Tester;
ok( 3, "test #1" );
ok( 1 == 1, "test #2" );
-
### Output ###
--- #YAML:1.0
data: 3
filename: example.t
line: 4
name: 'test #1'
result: 1
--- #YAML:1.0
data: 1
filename: example.t
line: 5
name: test #2
result: 1


Re: Test::Unit, ::Class, or ::Inline?

2005-01-26 Thread David Golden
Michael G Schwern wrote:
On Wed, Jan 26, 2005 at 05:55:24PM -0500, David Golden wrote:
While this simple example just has the test function shove a reference to a 
hash onto a global array @{$Tester::RESULTS{$filename}}, the test function 
could just as easily create an object (Test::Object::Ok, Test::Object::Is, 
etc.) that holds relevant data for that type of test and push that onto the 
array.  E.g. for is it would hold the got and expected results.

Test::Harness does not have any of this information.  It only has the test
output, which is whether the test passed or failed, and the name.
Within the test program you can get at some of this information already
using Test::Builder.  It does not store the arguments of each individual
testing function and I'm not really sure it's possible to do so in any
sort of universal fashion.
What you're suggesting requires both the test running and test output to
be done by the same process or for the test to output a whole lot more
information.  Both would require a radical alteration to the way tests
are run.
You're completely right.  It's not just a rewrite of Test::Harness (er, 
Straps), but all of Test::More (er, Builder) as well.  It's a whole parallel 
approach to the existing system, I'll admit.  I wouldn't try to do it as a 
drop in replacement, for sure.

But is there any reason they can't be run in the same process?  Right now, 
T::H::Straps just executes the test file by calling perl and handing it to 
the system, and then iterates through the output, parsing for test info.  If 
it's in the same process, we eliminate the need for parsing.  If we can't do 
it in the same process (e.g. maybe having to do with @INC, switches, 
environment variables, Devel::Cover -- as I said, the devil is in the 
details), we could still have tests that returned structured output (e.g. 
YAML) instead of just ok/not ok and write something to parse that.  Or have 
them write the usual ok/not ok to stdout and write details in a structured 
form to some other process through a socket or to a log file.

Either way, it's all still pretty much a separate approach from what we have 
today with Test::Harness and Test::Builder.  (Which work fine for me -- I'm 
not necessarily advocating for change, rather just brainstorming some really 
different alternatives.)

Regards,
David Golden