Re: TAP::Harness and -w

2013-09-17 Thread Mark Stosberg
On 09/17/2013 01:20 PM, chromatic wrote:
> On Tuesday, September 17, 2013 01:13:26 PM Mark Stosberg wrote:
> 
>> An attempting-to-be-neutral summary would be very helpful.
> 
> Pro of adding -w to test runs:
>   - that's how Test::Harness has always worked, and people might be
> relying on that behavior
> 
> Cons of adding -w to test runs:
>   - you get warnings from dependencies (and their dependencies) because
> -w enables global action at a distance
>   - using fatal warnings may cause your test suite to fail because of 
> warnings in dependencies you don't directly control

Thank you.

I would rather TAP::Harness not enable warnings by default.

I would prefer to opt-in, rather than opt out.

Mark


Re: TAP::Harness and -w

2013-09-17 Thread Mark Stosberg
On 09/17/2013 11:26 AM, Leon Timmermans wrote:
> On Sun, Jul 7, 2013 at 11:45 AM, Ovid wrote:
> 
> I'm winding up with astonishingly little bandwidth due to launching
> our company, so I was hoping to see a strong consensus from the
> users. I would also love to see examples of where the change or lack
> thereof is causing an issue. I am SWAMPED with so much email that
> receiving many opinions piecemeal makes it hard for me to follow along.
> 
> Were I not so bandwidth-constrained, this would be less of an issue,
> but I'd like to see a good Wiki page or something with the pro/con
> arguments laid out. If this is too much, I should turn over
> maintainership to someone with more bandwidth to ensure I'm not a
> blocker.
> 
> 
> Just as I expected, "make it a wiki" means it gets warnocked again.
> 
> Can we please make a decision, or if we must first come to an agreement
> on how to make it?

I think a pro/con list is a reasonable request. I've read all the
messages myself, am a regular user of Perl's testing tools, and the
benefits and drawbacks are not clear to me either.

An attempting-to-be-neutral summary would be very helpful.

Mark


Re: How might we mark a test suite isn't parallalizable?

2013-05-29 Thread Mark Stosberg

>> We have a cron job that runs overnight to clean up anything that was
>> missed in Jenkin's runs.
> 
> No offense, but that scares me. If this strategy was so successful,
> why do you even need to clean anything up? You can accumulate cruft
> forever, right?

Ha. Like any database, smaller ones perform better.

>> We expect our tests to generally work in the face of a "dirty"
>> database.  If they don't, that's considered a flaw in the test.
> 
> Which implies that you might be unknowingly relying on something a
> previous test did, a problem I've repeatedly encountered in poorly
> designed test suites.

I just ran across a ticket now where our design was helpful, so I
thought I would share it. The story goes like this:

I was tracking down a test that sometimes failed. I found that the test
was expecting a "no results" state on the page, but sometimes other test
data created a "some results" state. The test failed at this point
because there was actually a bug in the code. This bug was not found by
the site's users, our client, our development team, or directly by the
automated test itself, but only because the environment it ran in
had some randomness in it.

Often I do intentionally create isolation among my tests, and sometimes
we have cases like this, which add value for us.

> Your tests run against a different test database per pid.
> 
> Or you run them against multiple remote databases with
> TAP::Harness::Remote or TAP::Harness::Remote::EC2.
> 
> Or you run them single-threaded in a single process instead of
> multiple processes.
> 
> Or maybe profiling exposes issues that weren't previously apparent.
> 
> Or you fall back on a truncating strategy instead of rebuilding
> (http://www.slideshare.net/Ovid/turbo-charged-test-suites-presentation).
> That's often a lot faster.
> 
> There are so many ways of attacking this problem which don't involve
> trying to debug an unknown, non-deterministic state.

And if I'm running 4 test files in parallel, would you expect me to be
setting up and tearing down a database with 50 tables and significant
data for each and every test file, so they don't interfere with each
other?

That seems rather wasteful, when we already have easy-to-implement
solutions that allow multiple tests to share the same database while
still achieving isolation between them when needed.

> I'll be honest, I've been doing testing for a long, long time and this
> is the first time that I can recall anyone arguing for an approach
> like this. I'm not saying you're wrong, but you'll have to do a lot of
> work to convince people that starting out with an effectively random
> environment is a good way to test code.

I've been doing Perl testing for about 15 years myself. Many of our
tests have isolation designed in, because there's value in that.

There's also value in running some tests against ever-changing
datasets, which are more like the kind of data that actually occurs in
production.

Mark


Re: How might we mark a test suite isn't parallalizable?

2013-05-03 Thread Mark Stosberg

> OK, but you still have to clean out your database before you start each
> independent chunk of your test suite, otherwise you start from an
> unknown state. 

In a lot of cases, this isn't true. This pattern is quite common:

 1. Insert entity.
 2. Test with entity just inserted.

Since all that my test cares about is the unique entity or entities, the
state of the rest of the database doesn't matter. The state that matters
is in a "known state".

> What about when you're not running under Jenkins.  Like when you're
> writing and testing your code.  You still need to start testing from a
> known state each time, which means you must clean out the database at
> startup.

We have a cron job that runs overnight to clean up anything that was
missed in Jenkin's runs.

We expect our tests to generally work in the face of a "dirty" database.
If they don't, that's considered a flaw in the test. This is important
because we run several tests against the same database at the same time.
Even if we did wipe the database before we tested, all the other tests
running in parallel would be making the database "dirty" again. Thus, if
a pristine database were a requirement, only one test could run against
the database at a time.

We run our tests 4x parallel against the same database, matching the
cores available in the machine.

We also share the same database between developers and the test suite.
This "dirty" environment can work like a feature, as it can sometimes
produce unexpected and "interesting" states that would be missed by a
clean-room testing approach that controls the environment so carefully
that some real-world possibilities never come up.

For example, perhaps a column allows null values, but the test suite
never tests that case because it "shouldn't happen". A developer might
manually create a null value, which could expose a problem spot-- perhaps
the field should be "not null", or the app should handle that case gracefully.

A perfect clean-room approach would cover all these cases, but I don't
assume our tests are perfect.

Mark



Re: How might we mark a test suite isn't parallalizable?

2013-05-03 Thread Mark Stosberg

> No, you can't have your tests clean up after themselves.  For two
> reasons.  First, that means you have to scatter house-keeping crap all
> over your tests.  Second, if you have a test failure you want the
> results to be sitting there in the database to help with debugging.

There is another way to have tests clean up after themselves, which
addresses both of the shortcomings you mention.

First, I have functions like "insert_test_user()". At the end of these
functions, there is another call like this:

  schedule_test_user_for_deletion(...)

That inserts a row into a "test_ids_to_delete" table, which includes
columns for the table name and primary key of the entity to delete.
Another column has the insertion timestamp.
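
A sketch of how such a helper pair might look, assuming a shared DBI
handle in $dbh, PostgreSQL, and a made-up users schema:

  sub insert_test_user {
      my (%args) = @_;
      my $id = $dbh->selectrow_array(
          'INSERT INTO users (name) VALUES (?) RETURNING user_id',
          undef, $args{name} || 'test user',
      );
      schedule_test_user_for_deletion($id);
      return $id;
  }

  sub schedule_test_user_for_deletion {
      my ($id) = @_;
      # Record what to delete later; the offline job prunes old rows.
      $dbh->do(
          'INSERT INTO test_ids_to_delete (table_name, pk, inserted_at)
           VALUES (?, ?, now())',
          undef, 'users', $id,
      );
  }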

So, there's no "clean-up" code in all of our test scripts, and we have
the data there for debugging when the tests are done.

In Jenkins, after the test suite runs, a job to "Delete Test Data" is
kicked off, which deletes all test data older than an hour. (That's
longer than it takes the test suite to run.)

There's a third reason not to do in-line test clean-up, which is that a
SQL "DELETE" operation can be relatively slow, especially when complex
referential integrity is involved. Doing this "offline" as we do speeds
that up.

There are still a few places where we have clean-up code in tests, but
it is the exception. Those are the cases in which we can't use functions
like "insert_test_user()".

For example, if we are creating an entity by filling out a form on the
web and submitting it, then we may need to manually register that for
clean up later.

   Mark


Re: How might we mark a test suite isn't parallalizable?

2013-05-02 Thread Mark Stosberg

> When can a test not be parallelizable? Most of the examples that I can
> think of (depending on a file resource, etc) smell like a design failure.
> Tests should usually be able to use a unique temp file, etc.

Here's an example:

Say you are testing a web application that does a bulk export of a
database table.

The test works by doing a slow "count(*)" on the table, then having the
app do something that should generate 2 rows, then running another slow
"count(*)" on the table, and then checking that the new value is the
original value plus 2.

This isn't parallel safe, because other tests running in parallel could
insert rows into the table in between the before and after counts.

One solution is to use a unit test instead. If you do that, the test and
the application can be made to share the same database handle. In
PostgreSQL, you can create a temporary table of the same name which
masks the original. Thus, your test has exclusive access to the table,
and the test can be made to run parallel-safe. It may also run much
faster, as the temporary table may have 10 rows in it instead of 100,000.
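
A minimal sketch of that temporary-table trick, assuming a shared DBI
handle in $dbh connected to PostgreSQL; the widgets table and its
columns are made up for illustration:

  use Test::More;

  # The temp table masks public.widgets for this connection only, so
  # parallel tests and other sessions never see or disturb these rows.
  $dbh->do('CREATE TEMPORARY TABLE widgets (LIKE public.widgets INCLUDING ALL)');
  $dbh->do('INSERT INTO widgets (name) VALUES (?)', undef, $_) for qw(alpha beta);

  my ($count) = $dbh->selectrow_array('SELECT count(*) FROM widgets');
  is( $count, 2, 'test sees exactly the rows it inserted' );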

Other kinds of mocking could be used at this point as well.

Alternatively, you could still use a functional-style test by using
Test::WWW::Mechanize::PSGI instead of testing through your web server.
In this arrangement, the app and the test also run in the same process,
and can be made to share the same database handle, allowing the same
kinds of solutions as above.
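
A sketch of that in-process arrangement; the app.psgi file, route, and
expected content are assumptions standing in for a real application:

  use strict;
  use warnings;
  use Test::More;
  use Test::WWW::Mechanize::PSGI;

  my $app  = do './app.psgi';    # load the PSGI app into this process
  my $mech = Test::WWW::Mechanize::PSGI->new( app => $app );

  $mech->get_ok('/export', 'bulk export page loads');
  $mech->content_contains('rows exported', 'export reports its row count');

  done_testing();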

I'll be talking more about related topics at YAPC::NA in my talk on
improving the performance of large test suites.

Mark


Re: How might we mark a test suite isn't parallalizable?

2013-05-02 Thread Mark Stosberg
On 05/02/2013 03:39 PM, brian d foy wrote:
> In HARNESS_OPTIONS we can set -jN to note we want parallel tests
> running, but how can a particular module, which might be buried in the
> dependency chain, tell the harness it can't do that?
> 
> It seems to me that by the time the tests are running, it's too late
> because they are already in parallel and the best we can do is issue a
> warning or decide to fail.

I spent considerable time researching the topic of partially-parallel
test suites in Perl. Some of that research is published here:

http://mark.stosberg.com/blog/2012/08/running-perl-tests-in-parallel-with-prove-with-some-exceptions-2.html
http://stackoverflow.com/questions/11977015/how-to-run-some-but-not-all-tests-in-a-perl-test-suite-in-parallel

In the end, the most efficient path forward for my particular case was
to look harder at the exceptional cases, and find ways to make them
parallel-safe.

Also, higher level automation like Jenkins or a wrapper script could be
used to first run all the tests that can be run in parallel, and then
run serial-only tests.
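
A sketch of such a wrapper using App::Prove, assuming a layout where the
serial-only exceptions live under t/serial/ and everything else under
t/parallel/:

  use strict;
  use warnings;
  use App::Prove;

  sub run_prove {
      my (@args) = @_;
      my $prove = App::Prove->new;
      $prove->process_args(@args);
      $prove->run or die "prove failed for: @args\n";
  }

  run_prove( '-j', '4', '-r', 't/parallel' );   # parallel-friendly tests first
  run_prove( '-r', 't/serial' );                # then the serial-only exceptions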

I have hope for better support within prove in the future, but
pragmatically, for now I've found it easier to address this through
other parts of the toolchain.

   Mark





Re: Test::Builder, MakeMaker & Consensus QA Hackathon Achievements

2013-04-15 Thread Mark Stosberg
On 04/15/2013 12:20 PM, Michael G. Schwern wrote:
> I've just updated the wiki with the achievements around the projects I
> was associated with.  I'll paste them below.
> http://2013.qa-hackathon.org/qa2013/wiki?node=Achievements

Thanks to all the Hackathon contributors for these updates! As a CPAN
author and user, I appreciate the work on all these behind-the-scenes
updates.

   Mark



Re: TAP::Harness uses lots of RAM to store test results

2013-04-10 Thread Mark Stosberg

> Which makes me wonder - just how much memory is TAP::Parser using.
> In particular, is TAP::Parser using the same amount of memory to store 65850
> "ok"s as it would to store some mix of 65850 "ok"s and "not ok"s?
> Which I'm starting to think, for large test suites, isn't that efficient.
> Most tests pass most of the time. So is it possible for TAP::Parser to use
> a more efficient format in memory to "archive" results for tests where
> every single subtest was an "ok"?

I have a larger test suite (25,000 tests), and my passing tests are
usually not just "ok", they have a label with them. When tests fail, the
labels of the nearby passing tests are helpful to understand the context
of the failure.

`prove` has an option of --failures-only. I haven't checked to see how
that's implemented, but it seems fair to optimize that particular case
by throwing away the non-failures, since the user asked not to have them
reported.

   Mark



Re: interest in converting LWP / Mech hierarchy to roles?

2013-02-28 Thread Mark Stosberg
On 02/28/2013 03:26 PM, Mike Doherty wrote:
> On 13-02-28 09:07 AM, Yanick Champoux wrote:
>> On 13-02-27 11:04 AM, Mark Stosberg wrote:
>>> Perhaps if some of these get converted to extend-by-roles instead of
>>> extend-by-inheritance, some others will follow along, and we'll end up
>>> with a more useful collection of Perl-based browser extensions.
>>
>>  I don't have a lot to add to the conversation at the moment, but I
>> just wanted to '++' the idea
> 
> Same here. I've had two separate projects at university that both would
> have been easier had the LWP stuff been done as roles. As it seems
> everyone thinks this is probably a good idea, the real question becomes
> "Who is going to do the work?" -- as usual :)

One of the nice parts about this is that it can be backwards
compatible. For example, if we wanted to make this change in
WWW::Mechanize, we could make WWW::Mechanize::Role, move (hopefully)
all the functionality in there, and then WWW::Mechanize would look like this:

  package WWW::Mechanize;
  use Moo;
  extends 'LWP::UserAgent';
  with 'WWW::Mechanize::Role';
  1;

I mocked this up with Moo, as it's both Moose-compatible and a lighter
dependency. Any other Moose-compatible role option seems like it would
be a fine choice.

People could go on using WWW::Mechanize as they have been, but now
there's a new option to use it as a role in a complex user agent if desired.
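
A hypothetical illustration of the payoff: the same role composed onto a
different LWP::UserAgent subclass, with no @ISA gymnastics
(WWW::Mechanize::Role doesn't exist yet; LWP::UserAgent::Determined is
the retrying user agent from CPAN):

  package My::RetryingMech;
  use Moo;
  extends 'LWP::UserAgent::Determined';   # retrying LWP::UserAgent subclass
  with 'WWW::Mechanize::Role';            # the proposed role from above
  1;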

   Mark


Re: interest in converting LWP / Mech hierarchy to roles?

2013-02-28 Thread Mark Stosberg
On 02/28/2013 11:29 AM, Andy Lester wrote:
> 
> On Feb 28, 2013, at 10:26 AM, Mark Stosberg wrote:
> 
>> I'd like to have a Mechanize that has both the testing functions, and
>> also JavaScript support, which the WWW::Scripter sub-class has.
> 
> Seems to me even better would be to fold WWW::Scripter's JS support into
> Mech itself.

Good to hear! I've shared this comment with the WWW::Scripter maintainer.

   Mark



Re: interest in converting LWP / Mech hierarchy to roles?

2013-02-28 Thread Mark Stosberg
On 02/28/2013 10:45 AM, Andy Lester wrote:
> 
> On Feb 27, 2013, at 10:04 AM, Mark Stosberg wrote:
> 
>> Then you have the WWW::Mechanize sub-classes. Here's a sampling:
>>
>>Test::WWW::Mechanize
> 
> As far as Test::WWW::Mechanize goes, I don't see that the testingness is
> really a role.  It's really mostly a bunch of wrappers and some
> convenience functions.

Thanks for the feedback, Andy.

I'd like to have a Mechanize that has both the testing functions, and
also JavaScript support, which the WWW::Scripter sub-class has.

I haven't looked closely into it, but on the surface it seems like I
should be able to combine a "does testing" role with a "does javascript"
role to achieve this result.
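
On the surface, a sketch of that combination might look something like
this; all three role names are hypothetical, not existing modules:

  package My::ScriptedTestMech;
  use Moo;
  extends 'LWP::UserAgent';
  with 'WWW::Mechanize::Role',
       'WWW::Mechanize::Role::Testing',      # "does testing"
       'WWW::Mechanize::Role::JavaScript';   # "does javascript"
  1;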

   Mark


Re: Using NYTProf for code coverage?

2013-02-28 Thread Mark Stosberg
On 02/28/2013 06:02 AM, Christian Walde wrote:
> On Wed, 27 Feb 2013 14:49:44 +0100, Mark Stosberg wrote:
> 
>> I was tasked with working on code coverage for a large project, but had
>> difficulty getting Devel::Cover to run.
> 
> Can you maybe go into details as to why it won't run, or maybe condense
> it to a repeatable case? PJCJ is, with help from others, actively
> working on D::C and fixing bugs, so reports of broken things are quite
> useful. :)

Thanks for all the replies. Sounds like the consensus is that I should
expect Devel::Cover to work, and get help if it doesn't. I'll give it
another shot.

I appreciated David Cantrell's detail about the additional value that
Devel::Cover provides.

   Mark



Re: Best practices for TODO test management?

2013-02-28 Thread Mark Stosberg
On 02/27/2013 02:48 PM, Graham TerMarsch wrote:
> On February 27, 2013 09:06:23 AM Mark Stosberg wrote:
>> What are some suggestions to make sure that passing TODO tests get
>> regular attention?
>>
> [.snip.]
>>
>> In case this context matters-- our tests are regularly run through Jenkins,
>> so our TAP is getting converted to JUnit. We also run tests from the
>> command line with prove regularly as well.
> 
> Mark, what are you using to convert your TAP to JUnit?
> 
> I ask as TAP::Formatter::JUnit automatically treats "passing TODOs" as 
> "failures" and reports them as such in the JUnit output.

I use "TAP::Harness::JUnit", only because I got it working, and it works
well enough.

Thanks for highlighting this feature of TAP::Formatter::JUnit. I'll have
to give it another look.

When I looked at one of my passing TODO tests, I found that it was
marked as a TODO because sometimes the test data it was working with
would be in the desired state, and sometimes it wouldn't be. So, it
wasn't simply a matter of un-TODO'ing the test in that case, as it would
still fail sometimes.

In a case like that, I have to re-work the test so that it has
sufficient control over the test data to give consistent results.
Perhaps that test would be better written to "SKIP" if the data it is
working with isn't in a state for the test to be meaningful.
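
A sketch of that approach with Test::More's SKIP block;
fixture_in_known_state() and $report are hypothetical stand-ins for the
real fixture check and code under test:

  SKIP: {
      skip 'test data is not in a meaningful state for this check', 2
          unless fixture_in_known_state();

      ok( $report->has_rows,     'report sees the expected rows' );
      is( $report->row_count, 2, 'and exactly two of them' );
  }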

   Mark


Re: Best practices for TODO test management?

2013-02-28 Thread Mark Stosberg
On 02/27/2013 09:44 AM, Ovid wrote:
> Hi Mark,
> 
> I know this isn't exactly what you want, but a long time ago I added a
> --directives switch to 'prove' to help with this. If you do this:
> 
> prove -lr --directives t/
> 
> 'prove' will *not* run in verbose mode, except for any tests with
> directives (TODO and SKIP).
>  
> It's not perfect, but at least it will make the TODO tests stand out
> (they're also colored differently if you're using colors).

Thanks for that tip, Ovid!

I had overlooked this feature. I did find it helpful, but found it
doesn't work in combination with the "-j" feature to run tests in
parallel. I'm not sure if that's a bug or an inherent limitation.

Still, it's nice to know about this option, even if I have to have a
slower test run now-and-then to use it.

  Mark


interest in converting LWP / Mech hierarchy to roles?

2013-02-28 Thread Mark Stosberg

There's perhaps no better illustration of the value of roles vs
inheritance in CPAN modules than the mess that is the LWP inheritance
tree.

There are so many modules that extend LWP or Mechanize through
sub-classing, but they can't easily be combined without getting into
diamond inheritance.

Here's a sampling of LWP subclasses:

WWW::Mechanize
LWP::UserAgent::POE
Test::LWP::UserAgent
LWP::UserAgent::Cached
LWP::UserAgent::ProxyAny;
LWP::UserAgent::Snapshot;
LWP::UserAgent::Keychain;
LWP::Parallel::UserAgent;
LWP::UserAgent::Determined;

Then you have the WWW::Mechanize sub-classes. Here's a sampling:

Test::WWW::Mechanize
WWW::Mechanize::Query
WWW::Mechanize::Cached
WWW::Scripter

Now, if you'd like to combine features from any of these extensions,
good luck! Trial-and-error and possible @ISA-hacking lie ahead of you.
Maybe you'll find a valid combination. Maybe not.

Now, if the extensions were written as roles, combining them would be
straightforward. There could still be conflicts and incompatibilities,
but I think they would be far more likely to present themselves up front.

Perhaps if some of these get converted to extend-by-roles instead of
extend-by-inheritance, some others will follow along, and we'll end up
with a more useful collection of Perl-based browser extensions.

   Mark


Best practices for TODO test management?

2013-02-27 Thread Mark Stosberg

We have a large test suite with 20,000+ tests, and a number of
passing TODO tests in it.

Our problem is that once a test is marked as TODO, it falls off our
radar, and we are unlikely to notice it again.

What are some suggestions to make sure that passing TODO tests get
regular attention?

I was hoping that 'prove' might have a flag to treat passing TODOs as
failures, but I didn't see one.

I would prefer not to write a custom harness... this seems like a
feature that would be of general interest.

In case this context matters-- our tests are regularly run through
Jenkins, so our TAP is getting converted to JUnit. We also run tests
from the command line with prove regularly as well.

Thanks!

   Mark


speeding up Selenium testing with client-side caching?

2013-02-27 Thread Mark Stosberg

Our team was recently benchmarking using Mechanize vs
Selenium::Remote::Driver for a basic request pattern:

- get a page
- check the title
- check the body content

I expected Selenium to be slower, but was surprised it was about 8 times
slower!

As I looked more closely at the inner workings, what was going on became
clearer.

Mech makes one request, and uses the cached content to check the title
and content.

Meanwhile, it appears the Selenium protocol dictates that 3 requests are
used:

1. Instruct the driven browser to load the page.
  (1a. The browser actually makes the request)
2. Ask the browser what the title of the page loaded is.
3. Ask the browser for the page source (or ask it to find an element in
the DOM on your behalf).

It seems like some considerable speed gains might be made with some kind
of hybrid approach-- something that involved caching the page source
locally and querying it multiple times there, rather than making one
HTTP request to the browser for each query.
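
A rough sketch of the idea with Selenium::Remote::Driver, assuming
$driver is already connected and the URL and content are placeholders --
one browser round-trip for the source, then local checks:

  use Test::More;

  $driver->get('http://localhost/some/page');   # one remote call
  my $source = $driver->get_page_source;        # one more remote call

  # Everything below is answered locally from $source, with no further
  # round-trips to the browser.
  like( $source, qr{<title>Widgets</title>}, 'title looks right' );
  like( $source, qr{Bulk export},            'body mentions the export' );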

For now, we are sticking with an approach of "Mechanize when we can,
Selenium when we have to".

How are other people improving the performance of functional browser
testing?

I realize there are some non-Selenium solutions out there, but I like
the ability to take advantage of services like Sauce Labs, which
provide parallel Selenium testing in the cloud.

Mark


Using NYTProf for code coverage?

2013-02-27 Thread Mark Stosberg

Greetings,

I was tasked with working on code coverage for a large project, but had
difficulty getting Devel::Cover to run. We had Devel::NYTProf handy,
and I realized that although that tool focuses on profiling, it produces
data that it appears can be used for code coverage instead.

Having one tool that could be used for both profiling and code coverage
seems like a nice win. Have other people looked into using
Devel::NYTProf this way? Is there a reason why it would be undesirable?
Is Devel::Cover still the go-to tool for code coverage?

Below are some notes on how I used Devel::NYTProf for code coverage.

Mark

###

I was able to produce a report which looked like this:

0   0   Config::compile_date
0   0   Config::config_re
0   0   Config::config_sh
0   0   Config::config_vars
0   0   Config::header_files
0   0   Config::launcher
0   0   Config::local_patches
0   0   Config::myconfig
0   0   Cwd::chdir
0   0   Cwd::fast_abs_path
0   0   Data::Dumper::qquote
1   1   Cwd::getcwd
1   1   JSON::XS::encode
7   1   File::Find::CORE:closedir

The columns are the number of calls, the number of places called from, and the subroutine.

To generate it, add the following to the Apache config:

MaxClients 1
MaxRequestsPerChild 0

and this to the mod_perl startup.pl:

my $time = `/bin/date +%Y-%m-%dT%H.%M.%S.%N`;
chomp($time);
$ENV{NYTPROF} =
"file=/tmp/nytprof/nytprof.$time.out:addpid=1:endatexit=1";

require Devel::NYTProf::Apache;

Next, run your web request and a file should be created for parent and
child:

nytprof.2013-02-08T13.18.18.209426295.out.31929
nytprof.2013-02-08T13.18.18.209426295.out.31949

Now, merge the files:  nytprofmerge -v --out=nytprof-merged.out
nytprof.2013-02-0*.

Create the report: coverage.pl --file nytprof-merged.out --out
merged_coverage --minimal and parse the data.  Here is a script to parse:

cat *.tsv | perl -ne 'print if /\t/' | perl -a -ne
'print("$F[0]\t$F[1]\t$F[-1]\n") if $F[0] =~ /\d/' | sort -k 2,2 | sort
-u | grep -v ::BEGIN | grep -v __ANON | sort -n

This should give you something like the above.


Re: Anyone want Test::Class::Moose?

2012-12-17 Thread Mark Stosberg
On 12/15/2012 05:37 PM, Ovid wrote:
> Hi Justin,
> 
> This is something I've wondered as well. Several times when people asked me 
> about using Test::Class with Moose I pointed them to Test::Able, though I 
> confess I never used that code. Invariably they would not choose it. I think 
> what's going on is the interface. It looks "different" and that's possibly 
> scaring people off. The interface for Test::Class::Moose looks very similar 
> to Test::Class and that might make for easier adoption if I ever do enough to 
> get it out of alpha.
> 
> Please note that I'm not saying that what I've written is better! However, 
> the comfort level of the Test::Class::Moose interface may be appealing to 
> some.

There's also Test::Sweet-- another Moose/Test::Class mashup which I
haven't developed an opinion of: https://metacpan.org/module/Test::Sweet

I can see that one difference is that it uses Devel::Declare.

As I looked more at Test::Class::Moose, one thing I really like is that
plans are completely gone. Thank you.

Two questions:

1. About this: "use Test::Class::Moose;"

Why not standard inheritance to add Test::Class functionality?

It looks like the rationale here is to save a line of boilerplate with
the "use Moose" line.

2.  About this syntax for extending  a test class:
  use Test::Class::Moose parent => 'TestsFor::Some::Class';

why not use standard inheritance to extend a test class? Or could you
accept 'extends' in the import list here to look more Moose-y?

Mark









Re: Anyone want Test::Class::Moose?

2012-12-12 Thread Mark Stosberg

> So, does this look useful for folks? Is there anything you would change? 
> (It's trivial to assert plans for classes and the entire test suite rather 
> than rely on done_testing(), but I haven't done that yet).

I would welcome it as an option.

We use Test::Class now, but I have the sense that there's a better
alternative. We have started to load Moose in most cases, so there's no
additional load time penalty for Moose since it's already there.

Some things I like from some alternatives (from reviewing them, not
using them):

I like the declarative style and simple nesting of Test::Spec:

  https://metacpan.org/module/Test::Spec

describe "A User object" => sub {
  my $user;
  before sub {
$user = User->new;
  };
  describe "from a web form" => sub {
before sub {
  $user->init_from_tree({ username => "bbill", ... });
};
it "should read its attributes from the form";
describe "when saving" => sub {
  it "should require a unique username";
  it "should require a password";
};
  };
};

Test::Ika has a similar spirit, but with an option for very readable output:

describe 'MessageFilter' => sub {
my $filter;

before_each {
$filter = MessageFilter->new();
};

it 'should detect message with NG word' => sub {
my $filter = MessageFilter->new('foo');
expect($filter->detect('hello foo'))->ok;
};
it 'should detect message with NG word' => sub {
my $filter = MessageFilter->new('foo');
expect($filter->detect('hello foo'))->ok;
};
};

Check out the TAP-alternative output here:

https://metacpan.org/module/Test::Ika

This is a good idea that I expect to spread: Generating TAP for test
harnesses, but more a readable format when the target is a human reader.

However, I think if I just wanted to mash-up Test::Class and Moose, I
would do something more like this:

package Test::Class::Moose;
use Moose;
use MooseX::NonMoose;
extends 'Test::Class';



That way I wouldn't have to update our hundreds of test scripts to use a
new syntax. :)  If your project is to be considered a path forward from
Test::Class but with an incompatible syntax, it would be great if it
came with a script to find/replace the old syntax with the replacement.


   Mark


Re: preforking prove

2012-11-08 Thread Mark Stosberg
> I wasn't able to get forkprove to work with Test::Class, because of
> Test::Class's insistence that tests be declared at compile time.
> 
>swartz> cat t/Sanity.t 
>#!/usr/bin/perl
>use CHI::t::Sanity;
>CHI::t::Sanity->runtests;
> 
>swartz> forkprove t/Sanity.t 
>t/Sanity.t .. Test::Class was loaded too late (after the CHECK block was 
> run). See 'A NOTE ON LOADING TEST CLASSES' in perldoc Test::Class for more 
> details
>t/Sanity.t .. No subtests run 
> 
> Mark, you mentioned before that you use Test::Class before - did you use it 
> in conjunction with forkprove?

Jonathan,

It "just worked" for me, using the documented forkprove syntax of
loading modules with "-M".

I ran it on a directory that primarily contained test class files. Each
one followed this general design:

###

package Project::Test::Foo;
use parent 'Test::Class';

# my tests here...

Test::Class->runtests;

###

   Mark



Re: preforking prove

2012-11-07 Thread Mark Stosberg
On 11/07/2012 03:51 PM, Jonathan Swartz wrote:
> Now on cpan. A much simpler solution than what I suggested :) and apparently 
> still works with parallel testing. Thanks miyagawa!
> 
>https://metacpan.org/module/forkprove

I did some benchmarking last night and found no real benefit over prove
-j, but Miyagawa reports that with "heavy" modules like Catalyst and
Moose, he's seen 40% to 50% speed-ups.

See the details of our discussion on Github:

https://github.com/miyagawa/forkprove/commit/ca2b0c2f55a250468c4f61f7cbd1b008a0eb91b4#commitcomment-2115186

   Mark


Re: preforking prove

2012-11-06 Thread Mark Stosberg
On 11/06/2012 01:25 PM, Karen Etheridge wrote:
> On Tue, Nov 06, 2012 at 09:59:48AM -0800, Jonathan Swartz wrote:
>> For each test run, instead of loading a .t file, you're making a request 
>> against the Starman server. So you can obviously hit it with multiple 
>> simultaneous requests.
> 
> For something so simple, you could also use Parallel::ForkManager, with
> each child passing back a serialized Test::Builder object which contained
> the results of all the tests that were run.  The trickiest problem is
> consolidating all the test results back together and then emitting the
> proper TAP output.

Not long ago I worked on a project where I needed to move 8 million
images to S3, where each image had a file and some associated database rows.

We wrote a solution using Parallel::ForkManager, but it benchmarked to
take 9 days to complete.

I rewrote a solution much like the one that Jonathan describes, where a
small control script submitted jobs to a pre-forking Apache/mod_perl
server to process.

It benchmarked to take 2 days, and ultimately bottlenecked at bandwidth,
rather than CPU power as the first solution had.

Based on that, I think Starman-prove could perform very well. I also
have a large test suite that I'm always trying to make run faster, so I
like the idea a lot.

Using a number of other techniques, I've already been able to get the
run time down from about 25 minutes to 4.5 minutes. We now run the full
suite for every "push", rather than a few times per day.

A lot of our run time reduction was getting the tests to be
parallel-friendly, which involved some different tricks to allow them to
share the same database without problems.

I also use Test::Class, and still run all of those tests one at a time.
One of our future optimizations is to use something like
Test::Class::Load, but I suspect we will run into some problems there,
that Starman-prove would solve.

  Mark


Re: [PATCH] for review: docs for the undocumented --rules option for'prove' (and related TAP::* bits)

2012-09-04 Thread Mark Stosberg
On 09/03/2012 01:52 PM, Eric Wilhelm wrote:
> # from Mark Stosberg on Friday 31 August 2012:
>> I looked into the App/Prove.pm source code and immediately spotted the
>> issue. It hardcodes that all "rules" should run in parallel. Thus,
>> there would be no way to specify that something should never be run
>> in parallel with anything else.
> 
> Hi Mark,
> 
> See also: 'rules' in the TAP::Harness pod, the source/comment on 
> TAP::Parser::Scheduler (_set_rules() &c.), and t/scheduler.t.

Thanks for the reply, Eric. I did look in some detail at the related
scheduling code as part of preparing the documentation patch I submitted.

> There is a disconnect between command-line flags to prove and the data 
> structure there.  I haven't had time to totally grok the scheduler code, 
> but I think you need something different than what the --rules option 
> was written to do (though we would have to ask someone who is using it 
> whether your patch breaks that use case -- I think the usage was to 
> prevent just a few tests from running at the same time as each other, 
> not from all others.)
> 
> In your case, it might be better to be able to just pass a schedule of 
> nested arrays (maybe as json or something?)

At this point, I have confirmed that the right schedule is being
generated and used. However, it is triggering a bug elsewhere, perhaps
in the aggregation or multiplexing code.  At the end of the run, there
is simply no summary report generated.

However, I may not pursue this further soon at this point because of a
course change.

It's been so slow to get proper support for exceptions to parallel test
runs that I decided to dig into the problematic tests to see if I could
make them parallel friendly. It turns out it was faster just to fix
the tests. :)

  Mark


Re: [PATCH] for review: docs for the undocumented --rules option for'prove' (and related TAP::* bits)

2012-08-31 Thread Mark Stosberg
On 08/30/2012 06:56 PM, Eric Wilhelm wrote:
> # from Mark Stosberg on Thursday 30 August 2012:
>> I suspect there's a bug that works as follows, but I haven't isolated
>> it yet. Here's my suspected trigger:
>>
>> - First start some tests that are parallel ready, but some of them are
>> slow.
>> - Next in the schedule have some tests which must be run in sequence.
>>
>> I think the "sequence" tests are in fact being run in sequence, but
>> some of the "slow" parallel tests are still running.
> 
> Is that a bug?  I thought sequenced tests were only excluded from 
> running in parallel with each other -- and that parallel tests were 
> compatible running alongside all others.

Eric,

Thanks for the follow-up. I think you are right... I think that answer
means there is no way to use the "--rules" flag to 'prove' to
accomplish what I want, as it is written.

I looked into the App/Prove.pm source code and immediately spotted the
issue. It hardcodes that all "rules" should run in parallel. Thus, there
would be no way to specify that something should never be run in
parallel with anything else.

I patched prove using the below patch, and my tests ran almost as I
wanted. Using the "--rules" I specified, the exceptions were all run
first, one at a time, and then the rest of the tests were run in parallel.

However, this triggered a related issue that I had seen sometimes before
in my testing-- No summary output is produced! I just see an "ok" result
line printed for the last test, and that's it. I'm not sure what's going
on there.

What could cause TAP::Harness to fail to produce the summary report?

   Mark


# Alter the logic of the --rules processing for prove, so that each rule
# is considered in sequence.
# This is what you want if you want to specify some exceptions with 'seq'
# that can't run in parallel.
--- old-trunk/perllib/App/Prove.pm  2012-08-31 15:01:57.0 -0400
+++ new-trunk/perllib/App/Prove.pm  2012-08-31 15:01:58.0 -0400
@@ -373,13 +373,13 @@
         my @rules;
         for ( @{ $self->rules } ) {
             if (/^par=(.*)/) {
-                push @rules, $1;
+                push @rules, { par => $1};
             }
             elsif (/^seq=(.*)/) {
                 push @rules, { seq => $1 };
             }
         }
-        $args{rules} = { par => [@rules] };
+        $args{rules} = { seq => [@rules] };
     }

     return ( \%args, $self->{harness_class} );




Re: [PATCH] for review: docs for the undocumented --rules option for'prove' (and related TAP::* bits)

2012-08-30 Thread Mark Stosberg
On 08/20/2012 07:29 AM, Mark Stosberg wrote:
> 
>> However, I see that 'rules' is the subject of testing in t/scheduler.t.
>>   Do the individual tests in that file give you any clue as to how to
>> proceed?
> 
> Thanks for the feedback, Jim.
> 
> I believe I found what I needed over the weekend.
> 
> First, I made a list of what needed to be done here as some raw notes:
> https://github.com/markstos/Test-Harness/wiki/Missing-documentation-for-'rules'-and-'scheduler'-in-Test::Harness---App::Prove
> 
> 
> This morning I've submitted a "pull request" with proposed docs here.
> There is more feedback in this Github pull request:
> 
> https://github.com/Perl-Toolchain-Gang/Test-Harness/pull/5
> 
> I would be interested in a peer-review of my work.


When testing this on my large test suite, I believe I've found a bug.
Here's my understanding of what appears to be happening:

A correct "schedule" is being created, with most tests set to run in a
parallel. The few exceptions I've sent are clearly being put in sequence
at the end.

Yet, a number of the exceptions still fail in these runs, but pass if
run by themselves.

Using a well-timed capture of activity with "ps", I was able to confirm
that my "exceptions" are running at the same time as some of the other
parallel tests.

I suspect there's a bug that works as follows, but I haven't isolated it
yet. Here's my suspected trigger:

- First start some tests that are parallel ready, but some of them are slow.
- Next in the schedule have some tests which must be run in sequence.

I think the "sequence" tests are in fact being run in sequence, but some
of the "slow" parallel tests are still running.

I'll try to mock-up this situation and see what I find.

   Mark




[PATCH] for review: docs for the undocumented --rules option for 'prove' (and related TAP::* bits)

2012-08-20 Thread Mark Stosberg



However, I see that 'rules' is the subject of testing in t/scheduler.t.
  Do the individual tests in that file give you any clue as to how to
proceed?


Thanks for the feedback, Jim.

I believe I found what I needed over the weekend.

First, I made a list of what needed to be done here as some raw notes:
https://github.com/markstos/Test-Harness/wiki/Missing-documentation-for-'rules'-and-'scheduler'-in-Test::Harness---App::Prove

This morning I've submitted a "pull request" with proposed docs here. 
There is more feedback in this Github pull request:


https://github.com/Perl-Toolchain-Gang/Test-Harness/pull/5

I would be interested in a peer-review of my work.

Thanks!

   Mark



Feedback on the undocumented --rules option for 'prove'

2012-08-17 Thread Mark Stosberg

In 2008 Alex Vandiver contributed a patch to "prove" that allowed you to
specify that you wanted some tests to run in parallel and others in
serial. This is a great feature for those of us with large test suites
that would like to test advantage of parallism, but have suites that
aren't 100% parallel-ready. I'm grateful that work was done.

The feature was considered experimental, and was not documented in
`prove`. From what I can tell it has remained largely undiscovered, and
still remains in the same state about 4 years later.

In the last few days I started working on the same problem myself, not
realizing that the undocumented feature existed.

I asked on StackOverflow [1], and proceeded to code up my own solution
[2] before I ended up here to report my results and ask for help.

1.
http://stackoverflow.com/questions/11977015/how-to-run-some-but-not-all-tests-in-a-perl-test-suite-in-parallel/11977495#11977495
2. https://github.com/Perl-Toolchain-Gang/Test-Harness/pull/3

After I fixed a bug in calculating the tests to run in my own
solution, it appeared to work, but then at the end of the run, no
summary would be reported. It would just stop and it wasn't clear what
the problem was. So, I tried to see if I could get the "--rules" option
to work for me.

First I tried putting this syntax in my .proverc:

--rules seq=t/first_serial_test.t
--rules seq=t/second_serial_test.t

It appeared that it was starting to run my serial tests first... but it
appeared to keep going with running everything in serial, despite "-j 4"
on the command line.

So, I tried this:

 --rules seq=t/first_serial_test.t
 --rules seq=t/second_serial_test.t
 --rules par=**

This appeared to have the opposite result. Now *everything* appeared to
be run in parallel.

Finally, I tried this variation:

--rules par=**
--rules seq=t/first_serial_test.t
--rules seq=t/second_serial_test.t

This appeared to be doing the right thing, until it failed the same way
my own patch did: the test suite run just ended, with no summary.

Perhaps this is simply a documentation issue, and I haven't gotten the
syntax just right. How do I specify that there are some tests that I
always want to be run in serial?

My goal is to be able to specify this list of exceptions once in a file
and forget about it. I don't want the existence of a rule to imply that
I want to run that test. I only want rules applied if tests are actually
selected to run. So far, I'm not sure whether specifying a rule also
implies that I'm selecting a test to run, which I wouldn't want.

Thanks for considering this feature again with me. Let's get it
finished, documented and published!

*UPDATE*

After I drafted this message, I found Test::Steering, which also
advertises the feature of mixing parallel and serial runs.

However, I found it didn't work. First, I had to patch it just to get
basic functionality going [3], and then when I got a real test to run,
it produced two "summary" reports... the one for the parallel runs was
printed in the middle of the run, while a second summary just for the
serial runs appeared at the end.

https://rt.cpan.org/Ticket/Display.html?id=62681
https://metacpan.org/module/Test::Steering

Perhaps everyone in this problem space is using the "Roll your own"
approach, but it sure seems like there's potential for a generally
useful, re-usable tool for this.

Thanks!

Mark



favorite XML testing module?

2010-03-10 Thread Mark Stosberg

What's your favorite Test:: module for checking whether a given document
is valid, well-formed XML?

When I looked several years ago I didn't find one that suited me, so I created 
and released

Test::XML::Valid
http://search.cpan.org/perldoc?Test::XML::Valid

It has some shortcomings reflected in the bug tracker
( https://rt.cpan.org/Public/Dist/Display.html?Name=Test-XML-Valid )

But generally no one uses it as far as I can tell. So what's a better
module to test XML with? I can retire this one that I wrote and move
on with life.

Mark

-- 
http://mark.stosberg.com/





Re: how to get archname

2007-10-30 Thread Mark Stosberg
Michael Peters wrote:

> Matisse Enzer wrote:
> 
>> forgive me but what is the magic variable to get archname? for example,
>> on my system archname is
>>darwin-thread-multi-2level
> 
> use Config;
> $Config{archname}

If you just need to see it and don't need to use it directly in Perl, 
you can also just:

perl -V  | grep archname

   Mark 



Re: Backwards (?) kwalitee definition on qa.perl.org

2006-03-08 Thread Mark Stosberg
On 2005-07-09, Nik Clayton <[EMAIL PROTECTED]> wrote:
> http://qa.perl.org/phalanx/kwalitee.html says:
>
>  What is kwalitee?
>
>  Kwalitee is inexact quality.  We don't know exactly what it is,
>  but we know it when we see it.
>
> Isn't that backwards?  I thought 'kwalitee' was supposed to be a metric 
> that was exact, and that (hopefully) had some correlation with 
> 'quality'.  The whole point is that 'kwalitee' is objectively 
> measurable, while 'quality' isn't.

I agree it could be improved. Here's a suggested refactoring:

 Kwalitee are precise metrics which strive to approximate quality. The
 name is intentionally different to convey that Kwalitee is related to
 "quality", but not quite the real thing. That's because we don't know
 exactly what quality is, but we know it when we see it.

Mark



best way to migrate to Test::WWW::Selenium ?

2006-03-06 Thread Mark Stosberg
(This message is targeted at the Test::WWW::Selenium maintainers, but I
think the response will be of interest to others here ).

I've got a test suite built with Selenium, but I would like the
output in TAP to centralize the reporting, perhaps using Smolder once I
get Smolder installed.

It appears that Test::WWW::Selenium wants all the tests to be rewritten in
Perl. 

Is there a simple way to get TAP output starting with a Selenium test
suite, or is some rewrite/conversion process needed first? 

Thanks!

Mark



Re: ANNOUNCE - Smolder 0.01

2006-03-06 Thread Mark Stosberg
On 2006-03-05, Michael Peters <[EMAIL PROTECTED]> wrote:
>
>
> Yuval Kogman wrote:
>> On Sat, Mar 04, 2006 at 09:09:00 -0500, Michael Peters wrote:
>>> It's very similar in nature to the Pugs smoke test server, but is completely
>>> project agnostic. It's also completely self contained (contains local 
>>> copies all
>>> of it's Perl modules and a local apache/mod_perl). It's released in binary
>>> packages (currently there's only 1 binary package, but more will hopefully 
>>> be
>>> coming) and also a source distribution from which you can build a binary
>>> package. All it requires is an existing version of Perl 5.8.x (might work 
>>> with
>>> 5.6.x but that hasn't been tested) and MySQL 4.x.
>> 
>> Could you please please please pretty please with a cherry on top
>> add SQLite support?
>
> Sure, that sounds reasonable. It would even make it more self contained with
> less dependencies. I'll add this to the TODO list. We've already received
> volunteers to add PostgeSQL support, so we'll make sure that it's easy enought
> to add SQLite as well, but as always, patches are welcome.

I had volunteered for PostgreSQL support, but I now think that SQLite
support would be more valuable, and would rather focus on that.

Perhaps a new "sqlite-support" branch is in order while we work through
those changes. 

Mark



TODO test paradox: better TODO test management?

2006-01-31 Thread Mark Stosberg
Here's my test-first TODO test management paradox:

If I write a failing test and share it through the central repo,
the smoke bot fails and keeps sending us e-mail until it is fixed,
which can be annoying when these are un-implemented features and not
bugs. The effect can be to quit paying attention to the smoke bot.

If I mark the test TODO, the smokebot succeeds and the test disappears
from the radar of tests that should be fixed soon. 

What's a good way to manage TODO tests so that they continue to be
noticed and worked on, but without being annoying? 

Partly I wish that the reporting tools provided more detail about TODO
tests. Rather than just telling me that X TODO tests passed, I'd like
to know exactly what they were and where they were located so I can go
work on them.

I also realize I have another class of TODO tests, it's the: 

Ill-get-to-it-eventually,-maybe-next-year class of TODO tests.

These are things that I've noted I'd like to have an automated test for, 
but the tests are long term because they are expensive, difficult to
setup, or well, I'm imperfect.

Maybe being able to add a "due date" to tests would help. :) 

The TODO tests would pass before the due date, but if they aren't 
addressed in the flow of work, they start failing to bring attention to
themselves. 

And then there could be a "snooze" button too...

Mark



Re: TAP as XML

2005-12-13 Thread Mark Stosberg
On Tue, Dec 13, 2005 at 02:01:18PM -0500, Michael Peters wrote:
> >>>It uses (among other things) Test::TAP::Model and 
> >>>Test::TAP::HTMLMatrix, and uses YAML as an intermediate test-run format.
> >>
> >>Actually, Test::TAP::HTMLMatrix is what I currently use for test reports 
> >>that
> >>get emailed to developers. I definitely plan to continue using it.
> > 
> > 
> > Would you share an example of how this is used in a test suite? 
> > 
> > It's not clear to me from the docs how to integrate what it does into
> > a smoke bot. 
> 
> I did a short write up of this in my use.perl journal:
> 
> http://use.perl.org/~mpeters/journal/25612

Thanks for sharing that Michael

I feel like I'm being dense, but where does @testfiles come from?

Mark



Re: TAP as XML

2005-12-13 Thread Mark Stosberg
On 2005-11-22, Michael Peters <[EMAIL PROTECTED]> wrote:
>
>
> Stevan Little wrote:
>> Michael,
>> 
>> You might want to look at some of the work on the Pugs test suite.
>> 
>> http://m19s28.vlinux.de/cgi-bin/pugs-smokeserv.pl
>> 
>> It uses (among other things) Test::TAP::Model and 
>> Test::TAP::HTMLMatrix, and uses YAML as an intermediate test-run format.
>
> Actually, Test::TAP::HTMLMatrix is what I currently use for test reports that
> get emailed to developers. I definitely plan to continue using it.

Would you share an example of how this is used in a test suite? 

It's not clear to me from the docs how to integrate what it does into
a smoke bot. 

Mark




create test scripts that run from the web as well as from the command line

2005-12-13 Thread Mark Stosberg
Now that I'm using Selenium, I wanted to integrate some of the perl
testing tools I already like.

 http://selenium.thoughtworks.com/

I found one way to do this was to create a test script that runs as a
CGI script. Adding just one line to the top of the script allows it to
run from the web or from the command line:

 if ($ENV{SCRIPT_NAME}) { $|++ && print "Content-Type: text/plain\n\n"; }

In my setup, this meant I also needed to give the script a ".cgi" extension. 
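
A complete minimal sketch of such a dual-mode script (the single
assertion is just a placeholder):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Test::More tests => 1;

  # Under CGI, SCRIPT_NAME is set, so emit a plain-text header first; from
  # the command line this branch is skipped and prove sees normal TAP.
  if ($ENV{SCRIPT_NAME}) { $|++ && print "Content-Type: text/plain\n\n"; }

  ok( 1 + 1 == 2, 'sanity check, visible from a browser or from prove' );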

Setting a simple environment variable in my profile allowed 'prove' to
recognize a second test extension without extra effort:

 export PROVE_SWITCHES="--ext=.t --ext=.cgi"

Now I can have Selenium run a Perl test and inspect the 'ok', 'not ok' output. 

Mark



Re: Running test suites under PersistentPerl

2005-12-07 Thread Mark Stosberg
On 2005-12-07, Mark Stosberg <[EMAIL PROTECTED]> wrote:
>>
>> Limitations and Caveats with the system:
>>
>>  * Scripts that muck about with STDIN, STDOUT or STDERR will probably
>>have problems.
>>
>>  * The usual persistent environment caveats apply:  be careful with
>>redefined subs, global vars; 'require'd code only gets loaded on the
>>first request, etc.
>>
>>  * Test scripts have to end in a true value.

I thought of an alternative which might have a number of the benefits of
this solution with fewer of the drawbacks.

The idea is to create one big test file that is run in the normal
way. Everything would only need to be loaded once instead of N times.
There wouldn't be the usual persistence issues, either.

Each file could be pulled in with scope brackets around it, sort of like:

 {
# Insert contents of test file here...
 }

taking care of /some/ scope issues.  
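
A rough sketch of such a driver, with the issues listed below still
applying (the t/*.t layout and eval-based wrapping are assumptions):

  use strict;
  use warnings;

  # Load each test file once, inside its own bare block, in a single process.
  for my $file ( glob 't/*.t' ) {
      my $code = do { local ( @ARGV, $/ ) = ($file); <> };
      my $ok = eval "{\n$code\n}\n1;";
      warn "$file failed to compile or run: $@" unless $ok;
  }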

Some issues would still remain:

 - Global variables need to be handled with care
 - "plan" lines would need to be handled specially. I recall there is
   already a module to apply plans at a "scope" level, so this may not
   be too hard.
 - BEGIN and END blocks may need some care. For example, an END block
   may be used to remove test data before the next test runs.

It's certainly less conventional, and there may well be serious flaws
with this idea that I don't see yet. 

I thought I'd throw it out there. 

Mark

-- 
http://mark.stosberg.com/ 



integrating Selenium with a traditional perl test suite ?

2005-12-07 Thread Mark Stosberg
Hello,

So I'm now using and liking Selenium after several recommendations from
this list. I'm interested to know how other people integrate it with a
traditional perl test suite.  It seems like there are two possibilities:

 http://selenium.thoughtworks.com/

1. Use "prove" as the primary test suite runner, and write some glue to
have Selenium return results in the standard TAP format.  

2. Use Selenium as the primary test suite runner, and have a CGI script
that runs the Perl tests and returns the results to a web page, which
Selenium then checks, at least for basic failure/success. 

Before I try to rig up either, is there a standard solution? 

I see Test::WWW::Selenium, but I can't see how it would work, since my
test suites all run on headless servers with no JavaScript-enabled
browsers installed. 

http://search.cpan.org/~mbarbon/Test-WWW-Selenium-0.01/lib/Test/WWW/Selenium.pm

Mark



Re: Running test suites under PersistentPerl

2005-12-07 Thread Mark Stosberg
On 2005-12-05, Michael Graham <[EMAIL PROTECTED]> wrote:
>
> This should be compatible with regular (non-PersistentPerl) use as well.
>
> ...
>
> Limitations and Caveats with the system:
>
>  * Scripts that muck about with STDIN, STDOUT or STDERR will probably
>have problems.
>
>  * The usual persistent environment caveats apply:  be careful with
>redefined subs, global vars; 'require'd code only gets loaded on the
>first request, etc.
>
>  * Test scripts have to end in a true value.
>
> If there's interest, I'll try to package all this up into a CPAN module.

Thanks for your write-up Michael, it was really helpful. 

I would definitely like to see this published. 

You mentioned parts of this would be compatible with non-persistent
environments as well. In that case, I makes sense to me to add them to
Test::More, where they can work in both kinds of environment without
extra effort on anyone's part.

Mark



Re: automated web testing with selenium

2005-11-28 Thread Mark Stosberg
On 2005-11-02, Luke Closs <[EMAIL PROTECTED]> wrote:
>
> Also, yesterday Test::WWW::Selenium was uploaded to CPAN, so Selenium
> can now be driven by perl!

Test::WWW::Selenium seems interesting, but I could use an example of
where it would be useful, versus the standard techniques.

 From the docs, it's not clear if there are restrictions about where
the related Perl testing code must reside. Does it need to be on the
same server as Selenium and the application?

> Anyways, check out the podcast at http://qapodcast.com

I'm downloading it now, but I think I better wait until the house clears
out before I pipe it through the speaker system and expose my geek side.

Mark



Re: testing Javascript applications ?

2005-11-28 Thread Mark Stosberg
On 2005-11-28, Ovid <[EMAIL PROTECTED]> wrote:
> --- Mark Stosberg <[EMAIL PROTECTED]> wrote:
>
>> What are other folks doing to test web applications that make heavy
>> use
>> of JavaScript?
>
> If you want to leverage your Perl testing knowledge, you can check out
> Test.Simple from JSAN:
>
> http://openjsan.org/doc/t/th/theory/Test/Simple/0.21/index.html
>
> I've been using it and once you get it set up, it's fairly straight
> forward.  You can see a sample in my journal: 
> http://use.perl.org/~Ovid/journal/27229

Interesting.

And is there a way to run these as part of "./Build test" for a project,
that may also run Perl tests as well? 

Mark



testing Javascript applications ?

2005-11-28 Thread Mark Stosberg
It used to be that WWW::Mechanize was a "good enough" testing tool for
my web applications. 

It doesn't do Javascript, but I used very minimal
Javascript and thus worked around that limitation.

Along comes AJAX. It offers benefits that make JavaScript seem worth
using.  

But now how can I test the application? I have a link that uses AJAX to
pull in some content that gets displayed in a new layer, including a
form I'd like to submit.

Mozilla::Mechanize seems like a potential answer here, but so far I
haven't run across people who are actually using it to test sites with
Javascript. (It has enough dependencies that I'm putting off installing
it myself just yet. :)

What are other folks doing to test web applications that make heavy use
of JavaScript?

Mark



A binary testing system with a log of system calls?

2005-08-01 Thread Mark Stosberg
Hello,

I help test the darcs ( http://www.darcs.net/ ) binary with Perl. The
code itself is written in Haskell, but that doesn't matter here.

A developer had an interesting request, which I would like to pursue.

Where there is a test failure in the Perl test script, he would like to
look at a sequence of shell commands that reproduce the issue.

So far we are using a mix of Shell::Command

 http://search.cpan.org/~mschwern/Shell-Command-0.01/lib/Shell/Command.pm

and calls to darcs which are wrapped through a darcs() Perl call.

Is anything like this out there, before I think about it harder?

Thanks!

Mark



Re: prove with Devel::Cover example?

2005-07-10 Thread Mark Stosberg
On Sat, Jun 04, 2005 at 02:10:37AM -0700, Michael G Schwern wrote:
> On Fri, Jun 03, 2005 at 02:04:50PM -0500, Pete Krawczyk wrote:
> > }How can I use 'prove' and Devel::Cover together? I tried: 
> > 
> > HARNESS_PERL_SWITCHES=-MDevel::Cover prove file.t
> 
> Kinda surprised there's not a --cover switch.

I happen to stumble upon merlyn's answer to this in the test suite for
CGI::Prototype.

He created a script called 'cprove' for this purpose, and it looks like
this:

 #!/bin/sh
 cover -delete
 PERL5OPT=-MDevel::Cover=+inc,/Volumes/UFS prove -v -I../lib "$@" &&
 cover

Of course, having a hard-coded path to his hard drive is a drawback,
but it's an example of a nice simple solution that gets you out of the business
of remembering so much syntax to type each time. 
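
A more generic variant, without the hard-coded path, might look like this
(an untested sketch; it assumes the usual lib/ and t/ layout and that 'cover'
and 'prove' are in the PATH):

 #!/bin/sh
 # delete old coverage data, run the tests with Devel::Cover loaded,
 # then print the coverage report
 cover -delete
 PERL5OPT=-MDevel::Cover prove -Ilib "$@" && cover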

Mark

-- 
http://mark.stosberg.com/ 


OT: integrating RSS with mail readers (was: Re: AnnoCPAN and a wiki POD idea)

2005-07-08 Thread Mark Stosberg
On 2005-07-08, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>
>> PS. An AnnoCPAN tip: Notice that if you are an author, you can subscribe
>> to all comments on your modules:
>> http://www.annocpan.org/~MARKSTOS/recent.rss
>
> Not knowing anything about RSS I put the URL into Firefox and it asked me
> if I wanted to save the file.  ?
>
> I believe the proper instructions are go to 
> http://www.annocpan.org/~YOURCPANID/ and click on the RSS link but I get
> the same behavior.  The little RSS icon in the lower right only gives an
> option to subscribe to the "recent notes" feed.
>
> A daily email digest would be nice for those of us who prefer push and
> live in our MTAs not our web browsers.

RSS integrates with mail readers as well, just apparently not yours. For
an example, see this screenshot of Thunderbird:

http://www.ischool.utexas.edu/technology/tutorials/email/thunderbird/images/05_2thunderbird.jpg

The person has subscribed to a feed from CNN, and is able to check it and read
it as if it were a mailbox.

Personally, I think it makes more sense to integrate RSS into a mail reader
rather than a web browser.

As a mutt and slrn user, I use 'snownews' as a console feed reader, which
works great. It's like pine in the sense that it has a very helpful interface
and requires virtually no config file fussing to get started with.


Mark



AnnoCPAN and a wiki POD idea

2005-07-08 Thread Mark Stosberg
If you haven't see AnnoCPAN, it's a new way to share comments on Perl
POD:

Example:
http://www.annocpan.org/dist/Net-ICal-0.15/lib/Net/ICal.pm

I have an idea about taking it a step further-- making it easier to 
close the loop with the author to integrate updates. 

CPAN documentation could be stuffed into a kwiki wiki using the POD
format feature.

Users could adjust the POD directly. The author could optionally
download the changed POD and adopt it or further refine it.

Perhaps user-contributed changes would be colorized, as they are on
AnnoCPAN. 

Search.cpan.org would still prefer the official documentation as it does
now, but also provide a link to the wiki documentation for the module as
well.

There are details to work out, but I thought I would float the initial
idea for responses.

(OTOH, maybe AnnoCPAN is sufficient, since authors can copy/paste the
changes they want... )

Mark

PS. An AnnoCPAN tip: Notice that if you are an author, you can subscribe
to all comments on your modules:
http://www.annocpan.org/~MARKSTOS/recent.rss




Looking into integrating Test::TAP::HTMLMatrix with prove

2005-07-08 Thread Mark Stosberg
Thanks to help from a number of people here, I now have a better
understanding of how Test::TAP::HTMLMatrix is used. 

I would like to see it integrated with 'prove', and have looked into
what this would take. Here's what I think needs to happen:

 - Have Test::Harness::Straps be declared 'stable enough' for general
   use.

 - Fix the possible "either/or" problem. From what I can tell,
   right now you can't get the usual output and the HTML output
   at the same time (maybe I'm wrong here?). This might be done
   by adding an as_text() method to Test::TAP::Model or a related
   module. 

 - Use Test::TAP::HTMLMatrix to run the tests, which would now
   support the inherited text output option, as well as HTML option. 

 If we want to add a third output option beyond text and HTML, the internals
would need to be redesigned, perhaps to a "plugin" or "mixin" style
architecture. 

Another route to go might be to publish some of these things as filters:

  prove | tap2html ./output_dir/
  prove | tap2xml > output.xml

I haven't thought through how this would work in the face of having 2
streams that might have meaningful data on them and aren't synchronized
(STDOUT and STDERR). 

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



How to get started with Test::TAP::HTMLMatrix

2005-07-02 Thread Mark Stosberg
Hello,

I'd like to use Test::TAP::HTMLMatrix to better visualize the state of
large test runs. 

However, I can't tell from the docs how to run the test suite such that
it gets involved in the process. Could someone provide an example?

Thanks!

Mark

-- 
http://mark.stosberg.com/ 



Re: prove with Devel::Cover example?

2005-06-04 Thread Mark Stosberg
On 2005-06-04, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Fri, Jun 03, 2005 at 02:04:50PM -0500, Pete Krawczyk wrote:
>> }How can I use 'prove' and Devel::Cover together? I tried: 
>> 
>> HARNESS_PERL_SWITCHES=-MDevel::Cover prove file.t
>
> Kinda surprised there's not a --cover switch.

I was surprised there wasn't a more general "-M" switch, but maybe there
is a technical reason for that. 

Mark

-- 
http://mark.stosberg.com/ 



prove with Devel::Cover example?

2005-06-03 Thread Mark Stosberg
Ok, I'm feeling brain dead about this one-- this seems easy but I'm
missing it. 

How can I use 'prove' and Devel::Cover together? I tried: 

 perl -MDevel::Cover prove ...

but didn't cover the scripts that ran.

Mark



Re: examples of testing cookies with WWW::Mechanize ?

2005-04-05 Thread Mark Stosberg
On 2005-04-05, Mark Stosberg <[EMAIL PROTECTED]> wrote:
> This is how I figured out how to test a cookie with WWW::Mechanize:
>
>   my $ses_id_from_cookie = 
> $a->cookie_jar->{COOKIES}->{".$CFG{SITE_DOMAIN}"}->{'/'}->{CGISESSID}->[1];
>   ok($ses_id_from_cookie, "admin - Login screen sets cookie 
> ($ses_id_from_cookie)");
>
> Surely there is an easier/better way than digging my hand so rudely into the 
> cookie jar like that.

Ok, I had something else confirm that this functionality is missing from 
HTTP::Cookies.

I've filed a bug report to suggest it offer easy access methods for
this, like the Mech does for links:

http://rt.cpan.org/NoAuth/Bug.html?id=12151
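
In the meantime, HTTP::Cookies does provide a scan() method, which at least
avoids reaching into the internal hash; a rough sketch (assuming $mech is the
Mechanize object and the session cookie is named CGISESSID, as above):

  my $ses_id_from_cookie;
  $mech->cookie_jar->scan( sub {
      # scan() calls this once per cookie; the arguments are
      # (version, key, val, path, domain, port, path_spec, secure, expires, discard, hash)
      my ($version, $key, $val) = @_;
      $ses_id_from_cookie = $val if $key eq 'CGISESSID';
  });
  ok( $ses_id_from_cookie,
      "admin - Login screen sets cookie ($ses_id_from_cookie)" );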

Mark




examples of testing cookies with WWW::Mechanize ?

2005-04-05 Thread Mark Stosberg
This is how I figured out how to test a cookie with WWW::Mechanize:

  my $ses_id_from_cookie =
    $a->cookie_jar->{COOKIES}->{".$CFG{SITE_DOMAIN}"}->{'/'}->{CGISESSID}->[1];
  ok($ses_id_from_cookie,
    "admin - Login screen sets cookie ($ses_id_from_cookie)");

Surely there is an easier/better way than digging my hand so rudely into the 
cookie jar like that.

Mark



best practices for returning a "technical failure" page to a web browser?

2005-04-05 Thread Mark Stosberg
As part of building web applications, I sometimes return a "technical
failure" page to the web browser when something unexpected happens that
seems like the software's fault. 

I'm wondering if it's the right thing to do to return a "500" error code
as part of the headers. 

One reason to do this would be for better integration with a
Mechanize-based testing system.

Right now if I visit a "technical failure" page and then check

ok( $mech->success() )

It appears to "succeed", when in fact the result is more similar 
to an "Internal Server Error" than a normal result page.

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: a less fragile way to test when we need to read and write to STDOUT?

2005-04-05 Thread Mark Stosberg
On 2005-04-01, George Nistorica <[EMAIL PROTECTED]> wrote:
>
> For commands that need more than one input (i.e. shell installers) you
> can use the Expect module, which you can use to test such programs, that
> wait your input for more than one time.

Thanks for the tip. It led me to Test::Expect, which looks helpful.
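
For the darcs prompts I mentioned earlier, a minimal Expect version might look
something like this (an untested sketch; it assumes darcs is on the PATH and
prompts with 'really unpull'):

  use Expect;

  my $exp = Expect->spawn("darcs unpull -p add")
      or die "Cannot spawn darcs: $!";

  # wait up to 10 seconds for the confirmation prompt, then decline it
  $exp->expect( 10, '-re', 'really unpull' )
      or die "never saw the unpull prompt";
  $exp->send("n\n");
  $exp->soft_close;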

Mark

-- 
http://mark.stosberg.com/ 



Re: a less fragile way to test when we need to read and write to STDOUT?

2005-03-31 Thread Mark Stosberg
On 2005-04-01, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>> commands with Perl?
>
> When using open2 you have to be careful to close WRITE before you READ so
> the program does not hang waiting for more input.  Once you've fixed that
> the technique above should be just fine.
>
> sub echo {
>   my($input, $command) = @_;
>
>   local(*READ, *WRITE);
>   open2(*READ, *WRITE, "$DARCS $command";
>   print WRITE "a\n";

Thanks for the tip. On this line, did you mean to write

print WRITE "$input\n";

?

Mark
-- 
http://mark.stosberg.com/ 



a less fragile way to test when we need to read and write to STDOUT?

2005-03-31 Thread Mark Stosberg
Hello,

I've been working on a Perl test suite for darcs, with notable recent
help from Schwern. 

We used to have tests that looked like this:

   like(`echo y | darcs command`,qr/$re/); 

That would run the command and answer "y" to the first and only question
it asked. It worked well enough, but I looked for for a pure Perl
solution in the spirit of "being more portable".

I came up with this:

 {
     open2(*READ, *WRITE, "$DARCS unpull -p add");
     print WRITE "a\n";
     like( (<READ>)[4], qr/really unpull/i,
         "additional confirmation is given when 'all' option is selected");
     close(WRITE);
     close(READ);
     # (We never confirmed, so those patches are still there )
 }

This is more time consuming to write, because not only is it more verbose, but I
have to know exactly how many lines to read on STDERR. Already for two people something
got slightly off, causing the test to hang indefinitely.

Windows users weren't having problems with the first method, so maybe I
should just go back to that. 

Am I missing an easier and less fragile way to test interactive
commands with Perl?

Thanks!

Mark











-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: Fwd: [ANN: WWW::Agent 0.03 has entered CPAN: [EMAIL PROTECTED]

2005-03-19 Thread Mark Stosberg
On 2005-03-19, Andy Lester <[EMAIL PROTECTED]> wrote:
>
> login: {  # block to define 
> how to log in
>url m|https?://james.bond.edu.au/.*|  or die "there is nothing to 
> log in here"
> and fill uid $username  # fill out the 
> login form (there is
>   and fill pwd $password  # only one there)
>   and click login
>url m|^https://|  or die "not using HTTPS"
>   # now we are using 
> SSL, good
> }

This looks very cool. However, I think this will be most successful with
non-programmers. It reminds me of AppleScript. 

The beautiful thing about WWW::Mechanize is that the target users are
Perl programmers, and it's programmed in Perl. 

So if something doesn't work like I expect, I can look at the guts to
understand, and maybe fix it myself. 

I think if I was using weezl and it didn't work like I expected, I'd be
less inclined to dive in and see why the weezl wasn't being parsed as I
expected. 

I think the last time I tried a language written in Perl was the
Minivend/Interchange tag language, and it was a bad experience for me. 
I kept running into things I could already do easily in Perl, but I had
to re-learn them in this more abstracted language where the ideas were
harder to implement. 

I see advantages to having a more concise and abstract web-browsing
language like this, but I don't expect it to supersede Mechanize
for a lot of things.

Mark

-- 
http://mark.stosberg.com/ 



Re: benchmark darcs with Perl

2005-03-13 Thread Mark Stosberg
On 2005-03-14, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 14, 2005 at 01:25:16AM +0000, Mark Stosberg wrote:
>> I'm sorry-- I could have made this more productive by posting my own 
>> Benchmark
>> code in the first place. Look what happens when cmpthese is used. The
>> results look nonsensical to me:
>
> Hmm.  I guess the comparison isn't taking into account the cusr time. :(

So is that a Benchmark bug then? Seems like it to me. 

Even in a more 'normal' case, if a code block was slow because it was making
system calls, it seems like you'd want a benchmarking tool to report
that. 

Or perhaps I'm missing some flag to turn on in Benchmark.pm

Mark

-- 
http://mark.stosberg.com/ 



Re: benchmark darcs with Perl

2005-03-13 Thread Mark Stosberg
On 2005-03-13, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>
> We can just check.
>
> $ perl -MBenchmark -wle 'timethis(10, sub { `perl -wle "rand for 1..100"` 
> })'
> timethis 10: 11 wallclock secs ( 0.01 usr  0.00 sys +  8.64 cusr  0.14 csys = 
>  8.79 CPU) @ 1000.00/s (n=10)
>
> So the time spent in a fork counts as cumulative user time and is benchmarked.
> The advantage of using Benchmark over just time is you can run the command
> multiple times and take advantage of Benchmark's built-in comparision
> function, cmpthese().

I'm sorry-- I could have made this more productive by posting my own Benchmark
code in the first place. Look what happens when cmpthese is used. The
results look nonsensical to me:

perl -MBenchmark -wle 'Benchmark::cmpthese(10, {A => sub { `perl -wle "rand for 1..100"` }, B => sub { `perl -wle "rand for 1..50"` }})'

Benchmark: timing 10 iterations of A, B ...

 A:  9 wallclock secs ( 0.00 usr  0.00 sys +  7.80 cusr  0.04 csys =  7.84 CPU)

 B:  4 wallclock secs ( 0.00 usr  0.01 sys +  3.51 cusr  0.05 csys =  3.57 CPU) @ 1280.00/s (n=10)

        Rate        B      A
 B    1280/s        --  -100%
 A       1/s  7812500%     --




The ratio I care about is simple: 9 seconds versus 4. 
The summary report looks broken. 

For darcs, often running the test /once/ will be enough to find out if a
function is within the ballpark of reasonable or not. 

Mark

-- 
http://mark.stosberg.com/ 



Re: [RFC] adding skip option directly to plan()

2005-03-13 Thread Mark Stosberg
On 2005-03-13, Geoffrey Young <[EMAIL PROTECTED]> wrote:
>
> nevertheless, what you are replying to was just a discussion about a feature
> that doesn't exist in the standard Test::More toolkit but was brought up
> because Apache-Test's plan() works a bit differently and there are enough
> people who like it that I thought it warranted a discussion here to see if
> T::M was interested.

I like these ideas-- they seem like worthwhile additions. 

Let's see if I can get an example right.

This:

 use Test::More;
 if( $^O eq 'MacOS' ) {
     plan skip_all => 'Test irrelevant on MacOS';
 }
 else {
     plan tests => 42;
 }

would become something more like:

 use Test::More tests => 5, have 'LWP', { "not Win32" => sub { $^O eq 'MSWin32' } };

(OK, so that example was highly adapted from the Apache::Test docs). 

It is shorter, but it does mean more functionality for the programmer to
understand in order to use it.

Mark



Re: benchmark darcs with Perl

2005-03-13 Thread Mark Stosberg
On Sat, Mar 12, 2005 at 03:29:32PM -0800, Michael G Schwern wrote:
> 
> Well, if you're just going to look at the wall clock, why use the shell?

Err...because I forgot about the simple 'time' command? 

>   my $start_time = time;
>   `$bin diff 1/1 2>&1`;
>   my $end_time = time;
>   my $time = $end_time - $start_time;
> 
> If you throw Time::HiRes in there you can get fractional second granularity.
> 
> There is also a benchmarking module cunningly named "Benchmark" which you
> should have a look at.

Now now, I mentioned in the message I looked at 'Benchmark' first and it
didn't work. I got the sense it might have only been timing the Perl
parts of the code, which makes sense if you are using it to optimize
Perl rather than system calls. 

For what I need, I think simply calling 'time' will do.  
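
Though if parsing the 'time' output ever gets too fragile, wrapping the call
with Time::HiRes would be another option-- a minimal sketch:

  use Time::HiRes qw(gettimeofday tv_interval);

  my $start   = [gettimeofday];
  my $out     = `$bin diff 1/1 2>&1`;
  my $elapsed = tv_interval($start);   # wallclock time, fractional seconds
  printf "%s diff took %.2f seconds\n", $bin, $elapsed;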

Thanks. 

Mark

-- 
http://mark.stosberg.com/ 


benchmark darcs with Perl

2005-03-12 Thread Mark Stosberg
darcs [1] is slow in a few places, and I'm working on a benchmarking tool
in Perl to help monitor the performance. I've got some questions about
the best way to proceed. 

 1. http://www.darcs.net/

So far: I've divided the task into a couple specific problems:

A. What repos to use for testing?

B. Actually timing various darcs binaries running the same command
against the test repos. 

A. Creating test repos
-

My first attempt at creating 'test repos' was to generate them randomly. 
As I got further into that, I decided it would be incredibly difficult
to randomly create a patch history that approximates real world
development. 

Now I'm thinking: Why not use a few real world open source repos as
starting points? We could keep a few static 'read only' copies around, 
modifying them just enough to create the conditions we need to test. 

This would mean a lot of megabytes to distribute the whole benchmarking
suite, but I'm OK with that.  

B. How to benchmark system calls from Perl
-

My first stop was the Benchmark module. I was surprised it didn't seem
to work for this-- it would report "0" times, although the command
clearly ran for 10 seconds.  It appeared there was a newer version to
use, but it's bundled with Perl and I didn't want to go through that
hassle. 

My solution?

my $out = `time $bin diff 1/1 2>&1`;

   # XXX Parsing of time output may be fragile
   $out =~ m/\s*([\d\.]*\s+real.*)/;

Ouch.

Perhaps my whole approach is wrong. Am I overlooking a good open source
tool to benchmark binaries with? 

What better ways are there to benchmark system calls from Perl? 

Mark

-- 
http://mark.stosberg.com/ 



Re: testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Mark Stosberg
On Tue, Mar 08, 2005 at 05:48:28PM +, Fergal Daly wrote:
> 
> In the case of though darcs though, is Perl just testing the output of
> commands that have been systemed? If so they could just add 2>&1 to the
> command line and then ignore stderr,

I thought that wouldn't be portable. 

Mark


Re: testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Mark Stosberg
On 2005-03-08, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 08, 2005 at 05:27:34PM +, Fergal Daly wrote:
>> On Tue, Mar 08, 2005 at 04:56:08PM +, Mark Stosberg wrote:
>> > Hmm...maybe Test::Output just needs a new feature:
>> > 
>> >  # Because sometimes you don't care who said it. 
>> >  stdout_or_stderr_is()
>> 
>> Test::Output allows
>> 
>> my ($stdout, $stderr) = output_from {...};
>> 
>> then you can do your own tests.
>
> There's no equivalent to this?
>
>   my $output = `some_program 2>&1`;
>
> Where STDOUT and STDERR are combined into one stream, keeping the order
> correct.

I agree 'output_from' is not the same as what I asked for, but it's
probably good enough. I have a 50% chance of guessing which output
stream I'm reading, so the extra effort shouldn't take too long.
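
In fact, a thin wrapper over output_from() would probably cover what I had
in mind-- an untested sketch:

  use Test::More;
  use Test::Output;

  # pass if the expected string shows up on either stream
  sub stdout_or_stderr_is {
      my ($code, $expected, $name) = @_;
      my ($stdout, $stderr) = output_from { $code->() };
      ok( $stdout eq $expected || $stderr eq $expected, $name )
          or diag "stdout: $stdout\nstderr: $stderr";
  }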

Mark



Re: testing darcs with Perl (was: Re: testing non-modules)

2005-03-08 Thread Mark Stosberg
On 2005-03-08, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 08, 2005 at 11:33:30AM -0500, Mark Stosberg wrote:
>> > I'd make life simpler and dump the shell scripts, see the note about
>> > cross-platform compatibility below.
>> 
>> The philosophy behind allowing both is to have a low barrier to entry
>> for people submitting tests. Better to have tests in shell then no tests
>> at all. 
>
> That may be true, but you're coding yourself into a compatibility wall.

I guess my hope has been that the non-Unixy people would step up to address
the issues that affect their own platform. For my part, I create all tests 
in Perl. Perhaps this approach is an "attitude bug". 

> Hmmm.  No reason the Perl tests couldn't be as simple as the shell tests.
> I might submit a conversion.  Maybe I'll get around to finally making
> ExtUtils::Command useful.

I'll be a beta tester. :)

> No where in the install documentation is "make test" mentioned so I suspect
> most people aren't running then.

I hadn't noticed that. I just submitted a patch to recommend running it.

Mark



testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Mark Stosberg
On 2005-03-08, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>
> PS  I took a look at one of the Perl tests (pull.pl) and its needlessly
> Unix-centric making lots of shell calls which can easily be done with
> Perl, particularly rm -rf and mkdir -p (File::Path).  Best to make it
> cross-platform as early as possible, it sucks to bolt it on later.

One offender of this involves needing to test the STDERR of the binary,
which has been done like this:
 
 my $changes_out = `$DARCS changes --last 1 2>&1`;

I assume that STDERR redirection I've done is not portable. 

I just now looked at Test::Output to see if it could help with this. 

It looks like it tests only standard error or standard out for a given 
bit of code, not both at the same time. 

Hmm...maybe Test::Output just needs a new feature:

 # Because sometimes you don't care who said it. 
 stdout_or_stderr_is()


Mark



Re: testing darcs with Perl (was: Re: testing non-modules)

2005-03-08 Thread Mark Stosberg
On Tue, Mar 08, 2005 at 08:23:31AM -0800, Michael G Schwern wrote:
> 
>   perl -MTest::Harness -e 'runtests @ARGV' tests/*.pl

Aha. Thanks. 

> Why would you distribute a private copy of Test::Harness?  

To use 'prove', which your example above illustrates I don't need. 

> Or do you mean you want to run the shell scripts, too? 

The shell scripts get run, but I don't care about managing them with 
the same tool.

> I'd make life simpler and dump the shell scripts, see the note about
> cross-platform compatibility below.

The philosophy behind allowing both is to have a low barrier to entry
for people submitting tests. Better to have tests in shell than no tests
at all. 

> PS  I took a look at one of the Perl tests (pull.pl) and its
> needlessly Unix-centric making lots of shell calls which can easily be
> done with Perl, particularly rm -rf and mkdir -p (File::Path).  Best
> to make it cross-platform as early as possible, it sucks to bolt it on
> later.

I'll look into that refactor. Curiously, no one has complained about
this so far. I suspect that although it's used on lots of platforms,
the people who run the test suite may be a Unix-centric bunch right now. 

> PPS  You're suspiciously lacking in a README or INSTALL document.  I
> know its probably buried somewhere in the manual/ directory but still
> its the first place many people look.

You are right. I'll work on that. 

Mark


testing darcs with Perl (was: Re: testing non-modules)

2005-03-08 Thread Mark Stosberg
I have a fork of the 'testing non-modules' question. :)

I help maintain some Perl test scripts for darcs [1]. 

 1. http://www.darcs.net/

Right now the tests are run one at a time, losing the benefit
of the summary report. 

I got stuck trying to think of how to best make this work.

I don't think I want to use 'Makefile.PL', because the project already
has its own 'make' file. I would also just like to avoid 'make',
because it's another tool to learn and complicate things. 

I suspect I might be overlooking a simple way to do this. 

For my personal work I would use 'prove', but darcs is otherwise very
portable, and I can't expect 'prove' will be installed everywhere. 

I thought of distributing a private copy of Test::Harness with the
tests, but I thought there should be a lighter weight or more elegant
solution.

Thanks!

Mark




Re: Test::WWW::Mechanize 1.04

2005-03-08 Thread Mark Stosberg
On 2005-03-04, Andy Lester <[EMAIL PROTECTED]> wrote:
> I've updated Test::WWW::Mechanize to add get_ok() and follow_link_ok()
> methods.  If you've been writing
>
>   $mech->get( $url );
>   ok( $mech->success, 'Fetched home page' );
>
> you can now do that as
>
>   $mech->get_ok( $mech->success, 'Fetched home page' );

I don't understand how this could work. 

Wouldn't "$mech->success()" get called /before/ get_ok() ?

Perhaps you mean this:

  $mech->get_ok( $url, 'Fetched home page' );

That makes more sense to me.

Nice feature. 

Mark



Re: TAP and STDERR

2005-02-26 Thread Mark Stosberg
On 2005-02-25, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>
> I'm going to call a big, fat YAGNI on this one for the time being. 

I looked that one up. :)

You Aren't Going to Need It.
http://c2.com/cgi/wiki?YouArentGonnaNeedIt

I like it. 

Mark

-- 
http://mark.stosberg.com/ 



Re: Foreign modules in test scripts?

2005-02-19 Thread Mark Stosberg
On 2005-02-20, Steffen Schwigon <[EMAIL PROTECTED]> wrote:
> Hi!
>
> General testing question:
>
> Is it ok for a CPAN module to use other modules from CPAN only for the
> test scripts (e.g. "Text::Diff")?
>
> First, I'm not sure about the usage policy. Maybe it's more common to
> write tests more "low level".
>
> Second, I know there is a "build_requires" option in Build.PL, but
> does the CPAN(PLUS).pm know about that option and really only download
> and use those "build_requires" temporarily during module build/test or
> does it fully install them?

Steffen,

If you are concerned about the extra module requirement for your users,
one option is to distribute the testing modules you want in your own
distribution, in a private 'inc' directory that doesn't get installed.

Personally, I would probably just list the module as a dependency,
because that's easy for me.

Mark

-- 
http://mark.stosberg.com/ 



Re: TAP Version (was: RE: Test comments)

2005-02-18 Thread Mark Stosberg
On 2005-02-18, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Fri, Feb 18, 2005 at 01:13:05AM +0000, Mark Stosberg wrote:
>> On 2005-02-15, Clayton, Nik <[EMAIL PROTECTED]> wrote:
>> >
>> >ver 1.1
>> 
>> If you go this route, I would make it clear whose emitting the version
>> string:
>> 
>> TAP version 1.1
>
> Err, why?  Who else is emitting a version string?  Or anything?  Do we
> start prefixing everything else with TAP?

I have intentionally put version strings in the output, especially of
related modules. For example, DBD::Pg spits out what version of
PostgreSQL is being tested against. 

This is helpful for processing bug reports, so I don't have to make a
second trip back to the user to ask: "What version of CGI.pm were you using?".
Etc. 

Without being explicit, the user might think they have version 1.1 of
the distribution.

>   TAP ok 1
>   TAP ok 2

I don't think that's necessary.

Mark



Re: TAP Version (was: RE: Test comments)

2005-02-17 Thread Mark Stosberg
On 2005-02-15, Clayton, Nik <[EMAIL PROTECTED]> wrote:
>
>ver 1.1

If you go this route, I would make it clear who's emitting the version
string:

TAP version 1.1

###

Mark

-- 
http://mark.stosberg.com/ 



Benefits of a Real World switch from CVS to darcs

2005-01-26 Thread Mark Stosberg
Hello, 

Here's a story of how I've been able to improve the quality of my Perl
development process significantly:

Benefits of a Real World switch from CVS to darcs
http://mark.stosberg.com/Tech/darcs/cvs_switch/

Switching my source control system has made a big difference in my
ability to track changes, handle client requests and deal with
exceptional situations.

Mark



Re: Test::Harness HTML output

2005-01-25 Thread Mark Stosberg
On 2005-01-23, Ian Langworth <[EMAIL PROTECTED]> wrote:
> I'm attempting to create fancy HTML output from running a test suite 
> and thought others might find this interesting. I've tried using 
> Test::Harness::Straps to create a feedback report inspired by Tinderbox 
> and BuildBot.
>
>   http://langworth.com/downloads/tmp/THHTML/output.html
>
> Click on the summaries to see the full (however lacking) output.
>
> T::H::Straps doesn't yet seem to handle grabbing stderr, among other 
> things. If you want to try it out for other test suites, check out the 
> directory:
>
>   http://langworth.com/downloads/tmp/THHTML/

Nice. Some HTML suggestions: 

Can you make the mouse icon change when you roll over a clickable region?
It would be nice to have that clue.

Also, having the details on other pages would be nice for test suites
that have a huge amount of output.  Although, I suppose the idea is that
at any time, not many tests should be failing. :)

Also, I would suggest a more compact design, making it easier to scroll
through huge lists of tests. For example, reduce the font size of the
test names and make the surrounding box smaller. 

This seems useful to me-- I look forward to the next iteration. :)

Mark



Re: Test labels

2004-12-08 Thread Mark Stosberg
> On Mon, Dec 06, 2004 at 10:28:45PM -0600, Andy Lester wrote:
> I think even better than 
> 
>   ok( $expr, "name" );
> 
> or
> 
>   ok( $expr, "comment" );
> 
> is
> 
>   ok( $expr, "label" );
> 
> RJBS points out that "comment" implies "not really worth doing", and I
> still don't like "name" because it implies (to me) a unique identifier.
> We also talked about "description", but "description" is just s
> overloaded.

I prefer "name" or "label" to "comment". 

Name does not imply 'unique' for me, just like 'John Smith' 
is not expected to be a unique name for a person. 

Mark

-- 
http://mark.stosberg.com/ 



Re: Phalanx update

2004-12-05 Thread Mark Stosberg
On 2004-12-02, Andy Lester <[EMAIL PROTECTED]> wrote:
> I've reorganized all the trees in http://svn.perl.org/phalanx.  A
> description of how things should be is at
> http://svn.perl.org/phalanx/structure.pod.

I think I missed something. This clearly has something to do with SVN
hosting and the Phalanx project, but what's the big picture here?

Mark

-- 
http://mark.stosberg.com/ 



Re: Differences between output of 'make test' and 'prove'

2004-11-05 Thread Mark Stosberg
On 2004-11-05, Jeff Bisbee <[EMAIL PROTECTED]> wrote:
>
> I remember mentioning something to Andy, but at the time he didn't like
> it.  I'm also curious how other folks run coverage, update modules
> and rerun coverage. 

Using Module::Build, it's easy to run coverage:

./Build testcover [test options]

http://search.cpan.org/~kwilliams/Module-Build-0.2602/lib/Module/Build.pm#ACTIONS

Mark




estimating other people's work (was Re: Quality from the Beginning: Better Estimates)

2004-11-04 Thread Mark Stosberg
I also have a follow-up question: 

Another real world constraint is that sometimes by the time the client
approves the quote, I'm involved in another project and it works better
logistically to have another programmer complete the task (or help with
it). 

Since programmers are not "plug and play units", we have different
levels of efficiency.

What, if anything, do you do to address this in estimates? Perhaps you
feel there's generally "programmer parity" and don't worry so much about
this.

I, for one, know I can feel uneasy if I have to work on a budget for
programming work that I didn't contribute to myself.   

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: Quality from the Beginning: Better Estimates

2004-11-04 Thread Mark Stosberg
Thanks for all the feedback and suggestions for improving estimation.
Based on this and other research, I expect to make a sort of "best
practices" documentation for use at my small professional services firm.
I'm thinking of including these key parts in it:

 1. A checklist of things to consider when estimating. This is especially for
the "non programming" parts. While not meant to be comprehensive, it
may help to jar the memory about factors that come into play. 

 2. Have all significant estimates peer reviewed. Perhaps
"reality-checked" is more what I have in mind. Did I leave out some
part of the process that takes time?  Does anything seem really off?

 3. A spreadsheet to document past estimates, with columns for
"estimated time", "actual time", "scope", and "risk factors" 

 4. Reward good estimates. :) 


Comments?

    Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: dor and backwards compat (was Re: [ANNOUNCE] Test::Simple 0.49)

2004-11-03 Thread Mark Stosberg
On 2004-11-03, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 03, 2004 at 12:19:08AM +0100, Abigail wrote:
>> While I won't deny 'err' may be used in many existing programs, I doubt
>> it's used more than 'lock' was before 'lock' was introduced as a keyword.

I wouldn't be so sure. I imagine a lot more people have to deal with
'errors' than they do with 'locks'. I'm one of those people who has
production code with a subroutine named 'err' in it, and would like to
avoid the hassle of changing that in a number of places. 

> Difference between lock() and err() is this.
>
> $ perl5.6.1 -wle 'sub lock { "foo" }  print lock'
> foo
> $ bleadperl -wle 'sub lock { "foo" }  print lock'
> foo
> $ bleadperl -wle 'sub err { "foo" }  print err'
> Ambiguous call resolved as CORE::err(), qualify as such or use & at -e line 1.
> syntax error at -e line 1, at EOF
> Execution of -e aborted due to compilation errors.

This seems like an important difference. It would be nice if my 'err'
routines could keep working, like custom 'lock' routines appeared to.

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: Quality from the Beginning: Better Estimates

2004-11-02 Thread Mark Stosberg
On 2004-11-01, chromatic <[EMAIL PROTECTED]> wrote:
> On Mon, 2004-11-01 at 07:45, Mark Stosberg wrote:
>
>> So, what resources are recommended to consult to make great estimates?
>> What habits to develop?
>
> I have two primary rules:
>
> 1) Don't make an estimate for something I haven't done before.
> 2) Don't make an estimate for anything that'll take longer than two or
> three days.  If it's a much bigger project, break it into small,
> estimable pieces, but don't go over a couple of weeks for estimates.

Thanks to everyone for all the responses. There is one theme I haven't
heard anyone mention:

The purely scientific approach that I assume involves collecting a lot of
data and using complex formulas.

It sounds like the norm is more of an "educated art form", where
experience and good judgment matter more than "book smarts" about
estimating. 

Mark



Re: Quality from the Beginning: Better Estimates

2004-11-02 Thread Mark Stosberg
On Mon, Nov 01, 2004 at 01:53:59PM -0800, Jared Rhine wrote:
> [Mark == [EMAIL PROTECTED] on Mon, 1 Nov 2004 15:45:34 + (UTC)]
> 
>   Mark> So, what resources are recommended to consult to make great
>   Mark> estimates?  What habits to develop?
> 
> Estimate only what you know...

Thank you Jared for the thorough response. You've increased my
confidence about how we already handle the "big picture" of estimating
here. I understand that short term estimates can be tighter and long
term estimates need to be vague. I get that estimating what I know is
easier, and estimating the unknown is riskier. 

I think I fall down in particular on the small scale estimating: 5 to 50
hour chunks.  I will recall how long a similar project took, but not 
sufficiently account for all the ways the new project is different. 

The pure "programming" parts I think I tend to do pretty well out, 
it's the "overhead" factors that I have more trouble with, and these 
vary. For example, a project thas accumulated code over the last 5 years 
is going to have more overhead a new build-from-scratch system. :) 
But how much? 

My idea now is to better document how estimates pan out, instead of
relying on memory of past projects so much. I'm thinking of a simple
spreadsheet that would contain "budget", "actual", and "risk
factors"-- things that made the task tricky to estimate well. 

As time passes, I ought to see more patterns not just in the numbers,
but also in the risk factors. When I have a project that has similar
risk factors, I would expect the overhead would be similar to an older 
project with similar factors.

I'm looking for detailed tips. For example, how do you budget for automated
testing? In my experience, for small projects that launch and don't get 
much refinement, automated testing adds a little time. For larger more
complex projects, I would say automated testing saves time, because it
catches regressions, builds confidence to make changes, and speeds up a
lot of boring re-testing. 

In part, I haven't been using automated testing as long as I've been
programming, so I simply don't have as much experience to pull from when
estimating it as a factor. 

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Quality from the Beginning: Better Estimates

2004-11-01 Thread Mark Stosberg
Hello,

I imagine that many of you develop Perl with real world constraints--
deadlines and budgets. Whether you deliver your work internally or to an
external client, a good estimate plays an important role in the ultimate 
quality of the software. 

You may well have experienced working on a project that was estimated
too low: stress rises, as does the temptation to prioritize delivery time
over quality. 

So, what resources are recommended to consult to make great estimates?
What habits to develop?

I know that before the good estimate comes the strong technical
specification so you can know what you are estimating-- that much I think I
do well at. I have also read about the 'XP' model, but I find it does not
map well onto smaller "one off" projects that flow through here.  

I also wouldn't mind hearing stories about "software estimates in the
real world", the good the bad and ugly.

Thanks!

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: testing a shell command which prompts for output?

2004-10-16 Thread Mark Stosberg
On 2004-10-17, Andy Lester <[EMAIL PROTECTED]> wrote:
>
> On Oct 16, 2004, at 8:02 PM, Mark Stosberg wrote:
>
>> How can I write an automate test for a shell command that prompts for
>> output. I first tried just using backticks, but that hangs waiting for
>> input.
>
> Will it take its input from STDIN?  If so, pipe stdin to it.

Wow. That was like an instant messenger speed response. That's service.

It also worked. Here's what I used:

`echo 'y' | my_shell_cmd`

I'm sure there's some other cooler way, but this works well enough for me.

Mark

-- 
http://mark.stosberg.com/ 



testing a shell command which prompts for output?

2004-10-16 Thread Mark Stosberg
How can I write an automated test for a shell command that prompts for
input? I first tried just using backticks, but that hangs waiting for
input.

Thanks,

Mark

-- 
http://mark.stosberg.com/ 



RFC: Test::XML::Valid

2004-05-17 Thread Mark Stosberg
A while ago I went on a hunt for the best Perl tool to validate XHTML
files.

The best thing I found (thanks to a response from this list) seemed slightly
obscure: XML::LibXML has an option to validate XML/XHTML files, although that's
not the focus of the module.

I made a small 'Test::' wrapper around it and am considering posting it
to CPAN. I'd like feedback from the QA group before I do.

The docs are here:
http://mark.stosberg.com/perl/test-xml-valid.html

The distribution is here:
http://mark.stosberg.com/perl/Test-XML-Valid-0.02.tar.gz

It should easily expand to accept lots of different inputs besides
file path names. 

In a test today, it validated 15 XHTML files in about 1 second. 

Thanks!

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Re: recommendation for website HTML validation tool?

2004-02-06 Thread Mark Stosberg
Thanks to a suggestion by David Wheeler, I was able to build a tool that
works for me. Here's the simple testing script I came up with. You are 
free to use and modify it for your own purposes:

#!/usr/bin/perl -w

# Check all our static HTML pages in a www tree to see if they are made of valid HTML
# originally by Mark Stosberg on 02/05/04 
# based on code by Andy Lester

use Test::More;
use strict;
use XML::LibXML;
use File::Spec;
use File::Find::Rule;

my $startpath = $ARGV[0] || die "usage: $0 path/to/www";
my $rule = File::Find::Rule->new;
$rule->or( $rule->new->directory->name('CVS')->prune->discard,
           # $rule->new->directory->name('Templates')->prune->discard,
           $rule->new->file->name('*.html') );
my @html = $rule->in( $startpath );

my $nfiles = scalar @html;

# Only try to run the tests if we have any static files
if ($nfiles) {
    plan( tests => $nfiles );

    for my $filename ( @html ) {
        eval {
            my $parser = XML::LibXML->new;
            $parser->validation(1);
            $parser->parse_file($filename);
        };
        is( $@, '', "$filename is valid XHTML" );
    }
}
else {
    diag " ( No static files found. No tests ran. ) ";
}
__END__


Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



recommendation for website HTML validation tool?

2004-02-06 Thread Mark Stosberg
Hello,

I'm looking for a Perl testing tool that checks a whole directory of HTML
files to see if they are valid HTML. The ideal tool would also be able to
check dynamically generated web pages. There seem to be a few options that
I've found so far, but none of them are ideal:

- HTML::Lint - nice Perl interface, but doesn't seem to support XHTML,
  which is what I need.

- WebService::Validator::HTML::W3C - I like this module because it
  interfaces with the W3C validator, the de-facto standard. I even set up
  an instance of their validator on my own web server, for high volume
  use. Still, the module currently fails for me because it only accepts
  URIs to validate. So, it seems challenging to generate some dynamic
  content, /and then/ have it validated.

  It generally seems like a kludgy solution to have to make a call
  to a web service to validate a page. A great solution would be a
  refactoring of the W3C's 'check' script into Perl modules.

- 'tidy'. I even tried writing a wrapper to call this binary. It seems 
  to be more focused on fixing HTML than validating, and didn't give
  useful output.

How are other people integrating HTML validation into their work flow?
I want a solution that's easy so it actually gets used. :)

Thanks!

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



thinking about Test::HTML::Form

2003-12-03 Thread Mark Stosberg
Hello,

Lately I've been using the excellent WWW::Mechanize module a good deal
to test web applications. As I've done this, I've noticed a number of
the same patterns coming up as I'm testing web-based forms. 

I'm wondering if there are any known modules out there for testing
forms represented as HTML::Form objects, or interest in helping create
such a module.

Here are the test shortcuts I'm interested in so far:

- found form named 'FOO' 
# Important to know you are testing the right form!

- form contains exactly these fields

- form contains at least these fields

Additionally, I could see shortcuts for randomly or methodically selecting
possible value combinations from selection lists, possibly with a focus on
testing the edge cases, such as the first and last items in the group.
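
For comparison, here is roughly what I end up writing by hand today with
WWW::Mechanize and HTML::Form (a sketch; the form name 'FOO' and the field
names are made up):

  use Test::More;

  # form_name() returns the HTML::Form object, or undef if it isn't there
  my $form = $mech->form_name('FOO');
  ok( $form, "found form named 'FOO'" );

  # check that the fields we care about are present
  my %fields = map { $_->name => 1 } grep { defined $_->name } $form->inputs;
  ok( $fields{$_}, "form 'FOO' contains field '$_'" ) for qw(email password);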

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



thinking about variable context for like()

2003-11-15 Thread Mark Stosberg
I have a suggestion for "Test::More" that is especially useful with
WWW::Mechanize.
  
I'm frequently using 'like' to test $agent->content against a regular
expression.
  
When I have a lot of these in a new test script and they are all
failing, I get a boatload of HTML source floating by, which
makes it tedious at times to find out what actually went wrong.
  
I would like a way to tune the amount context that "like" presents
upon failure. For example, the first 1000 characters of the HTML source
would do most of the time. I'm not sure what the best way to do this might be.
For now, I'll stop at merely suggesting something like this might be useful. :)
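
A stopgap might be a small wrapper that still matches against the full
content but truncates what gets shown on failure-- a rough sketch:

  use Test::More;

  # match against the full content, but only show the first $max
  # characters of it in the failure diagnostic
  sub like_with_context {
      my ($content, $regex, $name, $max) = @_;
      $max ||= 1000;
      my $ok = ok( $content =~ $regex, $name );
      diag( "content (truncated):\n" . substr($content, 0, $max) ) unless $ok;
      return $ok;
  }

  like_with_context( $agent->content, qr/Thanks for your order/,
      "order confirmation shown" );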

Ideas?

Mark

-- 
http://mark.stosberg.com/ 



Recommendations for testing e-mail output

2003-10-28 Thread Mark Stosberg
Hello,

I'm looking at writing a test for an e-mail that's generated by Perl.  
I'm wondering about the best way to do this. Here are some possibilities
I have considered:

- use Test::Mail. While it's designed for the task, I'm not fond of the
  complexity of setting up an e-mail address which sends the input to
  a test script, which generates a log file that someone eventually
  sees.

- Test the message at the moment before it's sent. For this I thought
  Test::AtRunTime might be a good choice. The output could be
  sent to a logfile that some other test script output could remind
  me to read...or perhaps even open for me. 

- ??

I'm curious to know what others are doing to address this that they have
been satisfied with.
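
For the second option, the simplest thing might be to build the message in a
function that a test can call directly, keeping the actual sending separate--
a sketch with made-up names:

  use Test::More tests => 1;

  # build_reminder_email() only returns the message text; sending happens elsewhere
  sub build_reminder_email {
      my %args = @_;
      return "To: $args{to}\nSubject: Reminder\n\n"
           . "Hello $args{name}, your order has shipped.\n";
  }

  like( build_reminder_email( to => 'user@example.com', name => 'Pat' ),
        qr/Hello Pat/, "reminder e-mail greets the recipient by name" );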

Thanks!

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg            Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .



Phalanx & Devel::Cover

2003-10-12 Thread Mark Stosberg
I'm excited to see that the Phalanx project is happening.

On the website I see this unfiled item:

"Use Devel::Cover and gconv".

One way that seems useful to use Devel::Cover is to have an automated coverage
testing system that would test the 100 modules periodically. 

On the Phalanx page, an extra column on the module page could link to the
coverage results. This could be another way to track the testing progress.

One sticking point I've noticed is that a different syntax is needed for
modules that use Module::Build. That seems surmountable, though. 

A second sticking point could be that some code is OS-specific, so it's not ever going
to get tested by just one build machine.

Mark

--
http://mark.stosberg.com/ 



Re: Testers & PASS

2003-10-11 Thread Mark Stosberg
On 2003-07-19, Leon Brocard <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I've been looking at the testers database (well, downloading the list
> via nntp.perl.org really) for Module::CPANTS recently.
>
> In the current version of Module::CPANTS I report the count of PASSes
> and FAILs for each distribution. This works well.
>
> I've been looking at gathering the number of tests that a distribution
> has. However, it looks like FAILs have this information, eg:
>
>  Failed 1/10 test scripts, 90.00% okay. 3/43 subtests failed, 93.02% okay.
>
> ... but PASSes don't. So for the next version of Module::CPANTS I'll
> be able to report the number of tests only for those distributions
> which have a plan and have failed at least one test.
>
> Firstly, is there a reason for this inconsistency?
>
> Secondly, who do I need to convince to add the "make test" results for
> PASSes too? ;-)

Perhaps Adam J. Foxson. He maintains "Test::Reporter", which makes it 
very easy to submit testing results through the 'cpantest' binary.
His address is: [EMAIL PROTECTED]

It seems that used on its own, 'cpantest' requires the user to paste in
the test result on success or failure. I understand the module also
integrates with CPANPLUS. Perhaps there's a place to provide this
feature there...or perhaps it already exists now but I haven't noticed.
:)

Mark

--
http://mark.stosberg.com/