Andy,
I don’t like the name bool_eq() (“booleans are equal”) but it was the
best I could come up with.
I still like `is_true` and `is_false`. :-)
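For what it's worth, here's a minimal sketch of what I mean by is_true/is_false, built on Test::More's ok() (the package name and export list are just my suggestion, nothing that exists on CPAN as far as I know):

```perl
package Test::Expressive;   # hypothetical name

use strict;
use warnings;
use parent 'Exporter';
use Test::More;

our @EXPORT_OK = qw( is_true is_false );

# pass when the value is true in the Perl boolean sense
sub is_true  { my ($got, $name) = @_; ok(  $got, $name ) }

# pass when the value is false in the Perl boolean sense
sub is_false { my ($got, $name) = @_; ok( !$got, $name ) }

1;
```

The point being that `is_true($x, 'got a row back')` says what you mean, where `is($x, 1)` would also be asserting the exact value, which is usually more than you want.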
-- Buddy
Andy,
I’ve accumulated a number of test functions in our work codebase for
doing what I call “expressive” tests to avoid common cut & paste. I’m
using the term “expressive” for now because they’re expressing what it
is you want, rather than making you write the code that explains them.
Andy,
My ultimate goal here is to create a batch of convenience test methods ...
What sort of methods did you have in mind? I might have something to
contribute, if you would be interested in contributions.
-- Buddy
Chad,
https://docs.google.com/document/d/1RCqf5uOQx0-8kE_pGHqKSQr7zsJDXkblyNJoVR2mF1A/edit?usp=sharing
Several people have asked for this, so I wrote it up.
Excellent document. And, since I haven't said it yet, I applaud your
bravery for taking on this task.
One thing I would like to see
Cons of adding -w to test runs:
- you get warnings from dependencies (and their dependencies) because -w
enables global action at a distance
- using fatal warnings may cause your test suite to fail because of
warnings in dependencies you don't directly control
- you could
Paul,
In accordance with the terms of my grant from TPF this is the monthly
report for my work on improving Devel::Cover covering August 2013.
+1, definitely.
Here's what I'm curious about: I notice that you, and many other grant
recipients (e.g. Nicholas Clark) always provide such detailed
David,
* first, when a bug gets reported in live, I like to create a test case
from it, using data that at least replicates the structure of what
is live. This will, necessarily, be an end-to-end test of the
whole application, from user interaction through all the layers that
make
Lasse,
Interesting... Developers in our project have a local copy of the
production database for working with but our unit test runs always create a
database from scratch and run all schema migrations on it before running
the tests. Creating and migrating the unit test DB usually takes between
Ovid,
I lean more towards Mark's approach on this one, albeit with a slight twist.
For many of the test suites I've worked on, the business rules are
complex enough that this is a complete non-starter. I *must* have a
database in a known-good state at the start of every test run.
is
Nicholas,
tl;dr: TAP::Harness uses lots of RAM to store test results
Particularly lots and lots for millions of little ok\ns
It would be nice if it didn't use lots of RAM when storing things that
mostly
pass.
Yes, I ran into this before.(*) I was referred to a portion of the
Test::Builder
Daniel,
I need to collect the output from the other Perl library *without
loading it*, because I also want to make sure that my library loads it
for me
Is there a reason the output has to be created during testing rather
than being part of the distribution?
But that means I'm dependent on
Gabor,
I am not sure if this helps but in Windows you need to put the
double-quotes around $cmd
my $output = qx{$^X -e $cmd};
Yes, that would work if I were running _only_ on Windows. But I need
it to work everywhere (and the double quotes on Linux will cause any
variables in my perl
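Since the thread cuts off here: one portable way to sidestep the quoting problem entirely is to not pass code through the shell at all. Write the snippet to a temp file and run that. A sketch (not tested on every platform):

```perl
use strict;
use warnings;
use File::Temp qw( tempfile );

my $cmd = 'print "hello\n";';   # the snippet we want to run

# no -e, so neither cmd.exe nor sh ever has to quote the code itself
my ($fh, $file) = tempfile( UNLINK => 1 );
print {$fh} $cmd;
close $fh;

my $output = qx{$^X $file};
```

Caveat: $file itself could still need quoting if the temp dir has spaces in it, but that's a much smaller surface than arbitrary code.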
Eirik,
On Windows, that still leaves a quoting problem, I believe.
IPC::System::Simple certainly does not seem to handle it: Unless I misread
it entirely, it ends up sending $^X -e $cmd as the command line to
Win32::Process::Create.
Let's see ...
C:\Windows\system32>perl
Karen,
Test::Without::Module should be able to take care of this for you.
Hm ... interesting. That _might_ work ... I'd have to try it out.
I'm not sure just pretending it isn't loaded is sufficient. But I'll
look into it.
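In case it saves anyone a trip to the docs, the basic usage is something like this (from memory, so check the POD; Some::Optional::Module is a stand-in name):

```perl
use strict;
use warnings;
use Test::More;

# any attempt to load the module after this should fail
use Test::Without::Module qw( Some::Optional::Module );

ok( !eval { require Some::Optional::Module; 1 },
        'module acts as if it were not installed' );

done_testing();
```

Whether "pretending it isn't loaded" is enough depends on whether the code under test checks %INC directly or does its own require, which is exactly what I'd want to verify before trusting it.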
-- Buddy
First, let me say:
Thanx everyone for all your suggestions!
Eirik,
That problem is not shell nastiness. It is quoting. Observe:
:
:
If I recall correctly, the thing is that on Windows, a brand new process
does not get an argument array.
On Windows, it gets a command line.
On
Guys,
Okay, my Google-fu is failing me, so hopefully one of you guys can help me out.
For a test, I need to run a snippet of Perl and collect the output.
However, if it runs in the current interpreter, it will load a module
that I need not to be loaded ('cause I'm also going to test if my code
David,
Also I think that my code's interface is nicer :-)
But we all think that about our interfaces, no? ;-)
My personal interface for mocking looks more like this:
class Mock::Property
{
    has '_dbh' => ( is => 'ro', default => sub { $Test::Rent::Dbh } );
schwern,
You should be able to leave your Test::Builder::Tester stuff alone, it'll
work, but eventually you're going to want to change them.
Excellent. So I'll leave it on my todo list, just bump it down in priority. ;-)
-- Buddy
schwern,
Executive Summary: If you're using Test::Builder::Tester to test your Test
module, switch to Test::Tester. ...
:
:
What say?
Well, I say that it's a bit of a PITA, but I'll add it to my TODO
list. I recall now that the skip/SKIP thing is what was causing some
CPAN Testers
Guys,
Looking at this a different way, instead of a library, make a distzilla
extension (or whatever) which generates (and regenerates) a 00-load.t
as per Ovid's earlier example.
:
:
This sounds like the best idea to me.
-- Buddy
Guys,
I'm getting CPAN Tester failures that look like this:
# STDOUT is:
# ok 1 # SKIP symlink_target_is doesn't work on systems without symlinks!
#
# not:
# ok 1 # skip symlink_target_is doesn't work on systems without symlinks!
#
# as expected
and, in case it doesn't jump out at you what the
Ovid,
I'm not sure what's going on here. You've mentioned the Test::More,
Test::Builder::Tester, Test::Tester and Test::File.
Sorry; perhaps I overexplained. This is a problem between Test::More
and Test::Builder, like the subject says. The other two are
irrelevant.
I don't know exactly
Ovid,
Perhaps I'm misunderstanding, but couldn't you just wrap the guts of
it (or the whole thing) inside a
warning_is { ... } undef, 'No warnings from UTF8 stuff';
type construct? That gives you a failing test, which, in conjunction
with your very excellent Test::Most and judicious use of
On Mon, Nov 21, 2011 at 5:35 AM, yary not@gmail.com wrote:
I'd think Michael has the interests of CPAN smoke testers in mind with
these performance benchmarks. You're right in that for the typical
developer, it's not significant.
Just to offer a contrasting viewpoint: if you're using TDD,
Schwern,
On Thu, Nov 10, 2011 at 5:57 PM, Michael G Schwern schw...@pobox.com wrote:
On 2011.11.10 4:59 PM, Buddy Burden wrote:
Does that do anything? I didn't think prove respected the shebang
line. Anyway, I thought the -w to prove would be effectively doing
that all along.
Perl
Guys,
Okay, just to follow-up in case anyone cared what the resolution on
this one was, changing the loop full of ok()s to one giant pass() or
fail() per loop fixed _everything_. Plus it runs a lot faster now. I
know I've seen test suites that do thousands and thousands of tests,
but they must
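For the archives, the shape of the fix was roughly this (simplified; check() is a stand-in for the real per-case test):

```perl
use strict;
use warnings;
use Test::More;

sub check { return $_[0] > 0 }   # stand-in for the real check

my @cases = 1 .. 1_000_000;

# before: ok( check($_), "case $_" ) for @cases;
# which means a million results for the harness to store

# after: one result for the whole loop
my @failures = grep { !check($_) } @cases;
if (@failures)
{
    fail('some cases failed');
    diag('first failing case: ' . $failures[0]);
}
else
{
    pass('all cases passed');
}
```

You lose per-case granularity in the TAP output, which is why the diag() matters: when it does fail, you still want to know where to start looking.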
chromatic/Merjin,
Not use warnings but the -w command line flag -- the non-lexical, warnings-
on-everywhere one.
no change whatsoever. I've now added -w to all #! lines in the t files
Does that do anything? I didn't think prove respected the shebang
line. Anyway, I thought the -w to prove
David,
Well, that's probably the most common error ... surely there can't be
_that_ many CPAN Testers folks hanging around actually _watching_ the
tests run and killing them when they take too long.
No, but there are testers who have watchdog processes to kill off
anything that runs for an
David,
I guess I'm not sure what to do here. What do other folks advise?
Contact the individual testers, I guess.
I'm not sure what to say though ... hey, dude, your automated testing
is being rude to my tests, so go fix that? I mean, I wouldn't put it
that way, obviously, but I can't help
Guys,
Okay, so I found a bug in this test script for a module I recently
took over. It's a test that generates random times, and it would fail
for zero seconds. But it only happened every once in a while, since
zero was only one possible value and it was only running a small(ish)
number of
Leon,
*** Signal 9
That one is obvious, it has been SIGKILLed. Probably the tester
thought the tests were hanging.
Well, that's probably the most common error ... surely there can't be
_that_ many CPAN Testers folks hanging around actually _watching_ the
tests run and killing them when they
On Wed, Mar 30, 2011 at 5:58 AM, Ovid publiustemp-perl...@yahoo.com wrote:
From: Jozef Kutej jo...@kutej.net
To: perl-qa@perl.org
Sent: Wed, 30 March, 2011 7:54:21
Subject: Re: Conditional tests - SKIP, die, BAILOUT
:
:
perl -le 'use Test::More tests => 2;
Guys,
Let's say I have some common functions that I want available to all my
.t files. So I've created a module that all the .t files can include.
But where do I put it? I don't want to put it in lib/ because I
don't want it to get installed, right? But when I put it in t/, make
test can't
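The convention I've seen most often (a sketch; My::Test::Util is a made-up name): park the module under t/lib/, which never gets installed, and add that directory to @INC from each .t file:

```perl
# t/lib/My/Test/Util.pm holds the shared functions

# at the top of each .t file:
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/lib";   # puts t/lib on @INC no matter what
                               # directory the harness started from
use My::Test::Util;
```

FindBin keeps it working whether you run `make test`, `prove t/`, or a single `perl t/foo.t` from the distribution root.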
[crap, accidentally replied only to Ovid trying again]
Ovid,
If someone uses Test::Most and either has the environment DIE_ON_FAIL or
BAIL_ON_FAIL set to true, or has 'die' or 'bail' in the import list, they'll
likely be disappointed by failing test results sent back as they'll likely
Ovid,
Are you on the latest version of Test::Deep?
Ah, yes. That fixed it. Thanx!
There are issues with
previous versions having an isa() sub causing strange failures.
Yeah, I saw something about that on the list, but it didn't occur to
me that having a strange isa() could cause this
Ovid,
The latter, of course, should assert the number of tests you expected
to run, not the number of tests you've actually run. Otherwise, it's
not much better than no_plan (except you're still protected from
premature exits).
Or, to look at it a different way, it would be _exactly_
Ovid,
I've just uploaded Test::Most 0.02 to the cpan.
Crap ... you fixed that typo. :)
I meant to let you know about this, but I got totally distracted
before I could pin it down. So now I've got it down fairly sparse
here ... create two files:
# file: Testit.pm
package Testit;
use strict;
Hey guys, sorry to be long in getting back to this. My project here
at work heated up quite a bit and I've been running around trying to
make sure it's all under control. Sounds like you guys were all off
in Oslo having too much fun to respond anyways. :-)
chromatic,
And/or, it may make
Aristotle,
But when it comes to testing, doing this in terms of tests is
not only okay, it's considered best practice.
No, just intrinsically inevitable, as far as I can tell anyway.
Well, do you agree with this assessment:
Having a plan stated as an exact number of tests to be run is
I debated whether or not to post this here for a long time, because I
gather that deferred plans are somewhat of a hot topic on this group.
But finally I decided that I just needed to understand some history
and make sure there's nothing I'm missing here.
First, if you'll bear with me, a little
chromatic,
Any solution which requires a human being to read and think about the output
beyond "It's all okay!" or "Something fell!"* is not a long-term solution.
I don't think that's true of this implementation. If the script
doesn't reach the all_done() call, there is a very obvious error.