On Monday 13 July 2009 06:56:15 Ovid wrote:

> We currently have over 30,000 tests in our system.  It's getting harder to
> manage them.  In particular, it's getting harder to find out which TODO
> tests are unexpectedly passing.  It would be handy to have some option to
> force TODO tests to die or bail out if they pass (note that this behavior
> MUST be optional).

What does dying or bailing out give you that you don't already get through 
TAP?  Is it "The ease of a human being looking at the console window or the 
logs and seeing that something failed and parsing it visually?"  How does a 
human figure out which TODO passed from:

        prove test.pl
        test.pl .. ok   
        All tests successful.

        Test Summary Report
        -------------------
        test.pl (Wstat: 0 Tests: 2 Failed: 0)
          TODO passed:   1-2

There's no diagnostic information there.
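
Even prove -v only helps to the extent that the tests carry a description and
a TODO reason; a mocked-up run (test names invented) looks something like:

        prove -v test.pl
        test.pl ..
        1..2
        ok 1 - frobnicate handles undef # TODO not implemented yet
        ok 2 - frobnicate handles an empty list # TODO not implemented yet
        ok
        All tests successful.

Strip the descriptions and the reasons out of that, and all you have left to
go on is the test numbers.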

> Now one might think that it would be easy to track down missing TODOs, but
> with 15,000 tests aggregated via Test::Aggregate, I find the following
> unhelpful:
>
>
>  TODO passed:   2390, 2413

That sounds like a problem with Test::Aggregate, compounding the lack of TODO 
diagnostic information.

> If those were in individual tests, it would be a piece of cake to track
> them down, but aggregated tests get lumped together.  Lacking proper subtest
> support (which might not mitigate the problem) or structured diagnostics
> (which could allow me to attach a lot more information to TODO tests), at
> the end of the day I need an easier way of tracking this.
>
> Suggestions?

Add diagnostics to TODO tests and let your test harness do what it's supposed 
to do.  Shoving yet more optional behavior into the test process continues to 
violate the reasons for having separate test processes and TAP analyzers.
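
For the first half of that, a descriptive name on each assertion and a reason
on the TODO block is usually enough, since both end up on the TAP test lines.
A minimal sketch (the function and ticket number are made up):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Stand-in for the feature still being worked on.
        sub frobnicate { return 0 }

        TODO: {
            local $TODO = 'frobnicate() not implemented yet (ticket #1234)';

            # The descriptions below end up on the "ok"/"not ok" lines, so
            # a TODO that starts passing is identifiable from the TAP alone.
            ok( frobnicate('scalar'), 'frobnicate handles a single scalar' );
            ok( frobnicate(),         'frobnicate handles an empty list'   );
        }

With that in place, a terse "TODO passed: 2390, 2413" can be mapped back to a
name by rerunning verbose or by pointing an analyzer like the one above at it.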

-- c
