In article <[EMAIL PROTECTED]>, "Jarkko Hietaniemi" <[EMAIL PROTECTED]>
wrote:

>> In the process, with some advice from perl-qa, I've added a mock object so
>> the test could control the output of Socket::inet_ntoa() and
>> Socket::inet_aton().  t/lib/Mock/ seemed like as good a place as any.
 
> I'm not convinced.  By setting up mock-ups you are not testing the real thing:
> you are testing mock-ups.  It's like emptying shotguns at decoys and
> concluding that yup, we are eating duck tonight.

I see it as isolating the point of failure.  If the Socket module fails, its
own test should say so.  Trying to write a test that takes into account all of
the possible network variations is an exercise in madness.  (It's probably akin
to being a pumpking :).

Mock objects aren't much different from mock input.  They just seem weird
because they have a different shape.  They help separate the things that change
(underlying network differences) from the things that stay the same (the module
being tested).

Net::Config doesn't care *how* inet_ntoa and inet_aton do their job.  It just
cares that it gets data in the right format.  Tying STDIN and feeding fake
input to Term::Complete is, in a weird sense, much the same thing.  (Maybe
that's not the right test to mention...)
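For what it's worth, that trick looks about like this -- a sketch, with a
hypothetical package name.  Complete() reads keystrokes one at a time with
getc(), so GETC is the only tie method that matters, and the scripted input
ends in "\r" because that's where Complete() stops reading:

    use strict;
    use warnings;

    package Mock::Stdin;
    # Hands back one scripted keystroke per getc().
    sub TIEHANDLE { my ($class, $keys) = @_; bless [ split //, $keys ], $class }
    sub GETC      { shift @{ $_[0] } }

    package main;
    use Term::Complete;

    # Pretend the user typed "ls" and hit return.
    tie *STDIN, 'Mock::Stdin', "ls\r";
    my $input = Complete('cmd> ', qw(ls cd pwd));
    print "Complete() returned: $input\n";    # prints "ls"

Complete() also fiddles with stty, which a real test has to cope with, but the
tied handle is the part that makes the input repeatable.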
 
> FWIW, I'm happy with leaving libnet essentially untested because I think by
> its very nature it is untestable across all the possible network
> configurations.  Try some time using ftp from behind sadistic firewalls, for
> example.  Situations like this *can* be configured to work, most of the time,
> but it requires a lot of off-line head scratching, bribing the keepers of the
> firewalls, things like that. Things a test suite cannot do unless we make Perl
> pass the Turing test.

It sounds like we have different ideas as to what a test suite is supposed to
prove.  To me, it's a near-guarantee that, as far as p5p can control it, the
code does what it claims to do on your platform.  There's definitely a range of
things that cannot be tested, but it's smaller than most people would think.
We also can't mathematically *prove* that everything works in every
configuration.

The best we can do is establish a baseline of behavior that ought to work on
all platforms.  If it throws up red flags and makes someone *think* about
changes to existing code, so much the better.  I'm all for an 80% solution, as
long as we realize that there's 20% still out there.  80% is better than 70% or
50% or 0%.
 
> QA incendiary: I think rabidly trying to strap a test harness on everything
> that moves is counterproductive.  Not all APIs have been planned to be tested.
>  Of course the documentation should tell what are the public interfaces, but
> if in doubt, *ask* the author. Testing for internal not-meant-to-be-seen bits
> is plain silly.
 
You're right -- some things shouldn't be tested (accessors!).  Still, the
finer-grained the tests, the easier it is to find an error.  I'd rather spend
my time fixing a dozen bugs I can pinpoint to within five lines than tracking
down one I can't.

Of course, my opinions might change, if I have to write tests for CPAN.pm or
the debugger.  Yikes.

-- c
