Fergal Daly wrote:
> I'm agnostic on whether it should pass or fail (perhaps trying to
> wedge 3 states into 2 is root of the problem) but to me, a passing
> todo test implies that the author doesn't understand either his code
> or my environment. Neither are good signs for reliability of the
> module,

Wow.

Let's review what TODO tests are meant for.

1)  To spec out tests for a new feature before you implement it (ya know, test 
first).

2)  To write tests for a bug that you can't fix right now, but can test.  Then 
later on you'll be alerted when it's fixed.

2a) To put in an informational test to be run on the multitude of platforms 
your code is run on to see where something works and where it doesn't.
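Case 1 above, sketched as a minimal Test::More script.  Everything here is 
invented for illustration: frobnicate() and its doubling behavior stand in for 
whatever the planned feature is.

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical stub: frobnicate() is the planned feature, not written yet.
sub frobnicate { return }

TODO: {
    local our $TODO = 'frobnicate() not implemented yet';

    # Written test-first: it fails today, and the harness will flag it
    # as an unexpected success once the feature actually lands.
    is( frobnicate(2), 4, 'frobnicate doubles its argument' );
}

done_testing();
```

The failing test is reported as "not ok ... # TODO", so the harness still 
counts the run as a pass.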

Yes, the author might not know your platform inside and out.  Or maybe you've 
upgraded to a newer version of a dependency that now works.  But to conclude 
that the author is ignorant and the code is unreliable because, as has been 
pointed out, *the code is working better than expected* is just... beyond 
pessimistic?

This is getting out of hand.  I can think of at least two rather common use 
cases for TODO where they'll succeed and absolutely nothing will be wrong.

    TODO: {
        local $TODO = 'whatever';

        for my $stuff (@things) {
            is( foo($stuff), $expected{$stuff} );
        }
    }

In this case the author wrote a data-driven test for a new feature.  Rather 
than convoluting the test to account individually for which cases work and 
which don't, the author has taken the lazy way out and just declared them all 
TODO.  This hurts nobody.
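Fleshed out into a runnable script, that example might look like the 
following.  The @things/%expected data and the foo() stub are invented for 
illustration; the point is that one case already works.

```perl
use strict;
use warnings;
use Test::More;

# Invented data for illustration: foo() is the new, half-finished feature.
my %expected = ( a => 1, b => 2, c => 3 );
my @things   = sort keys %expected;

sub foo {
    my $stuff = shift;
    return $stuff eq 'a' ? 1 : undef;    # only 'a' works so far
}

TODO: {
    local our $TODO = 'foo() only partially implemented';

    for my $stuff (@things) {
        is( foo($stuff), $expected{$stuff}, "foo($stuff)" );
    }
}

done_testing();
```

Running this prints two "not ok ... # TODO" lines and one "ok ... # TODO" 
line -- the unexpectedly succeeding TODO test under discussion -- and the 
harness still reports an overall pass.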

    TODO: {
        local $TODO = 'Busted ass Module' if $Module::VERSION >= 3.12;

        ...test a minor feature...
    }

In this example the author has noticed that one of the module's dependencies 
has broken a minor feature and expects it to be fixed in the future.  When the 
dependency updates and fixes the problem, the test will start succeeding.  Does 
this make anything unreliable?  No.

Similar cases can be found when a dependent system breaks something, then fixes 
it, then breaks it again.  For example, let's say the current version of Module 
is 3.23.  3.12 broke things.  3.19 fixed it and 3.20 broke it again.  Rather 
than A) researching each version to find out which work and which don't and B) 
encoding that complication into the tests, the author has chosen the lazy way 
out and just used a blanket version check.  Again, this doesn't indicate the 
author doesn't know what they're doing.
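For comparison, here's roughly what the "thorough" encoding would look like.  
The version numbers come from the scenario above; minor_feature() is an 
invented stand-in for whatever the real test checks, stubbed as broken under 
3.23.

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical: pretend this is the installed version of the dependency.
my $version = 3.23;

# The "thorough" way the author skipped: encode every break/fix boundary.
# Broken from 3.12, fixed in 3.19, broken again from 3.20 onward.
my $busted = ( $version >= 3.12 && $version < 3.19 )
          || ( $version >= 3.20 );

our $TODO;

TODO: {
    local $TODO = "Module $version breaks the minor feature" if $busted;

    ok( minor_feature(), 'minor feature works' );
}

sub minor_feature { return 0 }    # pretend it's broken under 3.23

done_testing();
```

The blanket check from the earlier example collapses $busted down to 
`$version >= 3.12`, trading a spurious TODO on 3.19 for a much simpler test.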
