> Perhaps... but unit tests should *all* run all the time.  When you break
> something and more than one test fails, go after the lower-level ones
> first, and the others might go away as well.

That's exactly what I'm trying to do.  I want to run all the tests (well,
most of the time), but if haltonfailure is on (which I prefer), the run
won't necessarily even reach the lower-level tests, and if it's off I have
to search through the failures each time, mentally computing the component
dependency graph.  That's boring and repetitive, so I want to automate it.
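
For reference, this is the Ant <junit> setting I mean (a minimal sketch;
the classpath reference and fileset pattern are just placeholders).  With
haltonfailure="yes" the build stops at the first failing test, so later,
lower-level tests may never run:

    <junit haltonfailure="yes">
      <classpath refid="test.classpath"/>
      <formatter type="plain"/>
      <batchtest>
        <fileset dir="test" includes="**/*Test.java"/>
      </batchtest>
    </junit>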

Ah, now I see that I should have phrased this not as wanting to sort the
tests, but as wanting to sort the test *failures*, so I can look at them in
a sensible order.  Would the XP crowd have considered that more acceptable?
(Let's not get into the argument about whether I should often be causing
lots of tests to fail.  This came up when I made a large transformation to
the code two days ago, now completed, and it likely won't happen again soon.)

>   Sorting the unit tests is not
> worth it IMHO.  We used to do that, explicitly listing the test methods to
> call and the test classes, to do roughly what you want, but we kept on
> 'losing' tests.  An automatic way to find tests is much better.  But
> that's just me!

Me too.  I don't want to maintain a list of tests to run; I want to run all
of them using a fileset.  As I encounter situations where I want one test
to run before another, I can add prerequisites (too strong a word) to a
static list in the latter test's source file.  These are used only to
impose a partial ordering on the fileset list, so they're optional.
Leaving them out never prevents a test from running, and nothing is harmed
if (when) they become obsolete.  It's just a way to record my decisions
about which failures I want to look at first.
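
In code, the idea might look something like this (a hypothetical sketch:
the PREREQUISITES convention, the class names, and the sorter are all my
own illustration, not an existing Ant or JUnit feature).  Each test class
may declare the classes whose failures it wants shown first, and the
sorter imposes only that partial order on whatever actually failed:

    import java.util.*;

    public class FailureSorter {

        // Hypothetical convention: a test class declares, in an optional
        // static field, the test classes whose failures it wants reported
        // first, e.g. in ReportTest.java:
        //
        //     public static final String[] PREREQUISITES = { "DatabaseTest" };

        /**
         * Reorders failed test names so that each failure's declared
         * prerequisites (when they also failed) come first.  Failures
         * with no declared relationship keep their reported order.
         */
        public static List<String> sort(List<String> failures,
                                        Map<String, List<String>> prereqs) {
            List<String> result = new ArrayList<>();
            Set<String> visited = new HashSet<>();
            for (String name : failures) {
                visit(name, failures, prereqs, visited, result);
            }
            return result;
        }

        // Depth-first: emit a failure's failed prerequisites before it.
        private static void visit(String name, List<String> failures,
                                  Map<String, List<String>> prereqs,
                                  Set<String> visited, List<String> result) {
            if (!visited.add(name)) {
                return;                     // already placed; also breaks cycles
            }
            for (String p : prereqs.getOrDefault(name,
                                                 Collections.emptyList())) {
                if (failures.contains(p)) { // obsolete entries are skipped
                    visit(p, failures, prereqs, visited, result);
                }
            }
            result.add(name);
        }

        public static void main(String[] args) {
            Map<String, List<String>> prereqs = new HashMap<>();
            prereqs.put("ReportTest", Arrays.asList("DatabaseTest"));
            List<String> failures =
                Arrays.asList("ReportTest", "ParserTest", "DatabaseTest");
            // Prints [DatabaseTest, ReportTest, ParserTest]
            System.out.println(sort(failures, prereqs));
        }
    }

In practice the prereqs map would be filled by reflecting on each test
class's PREREQUISITES field; a missing field just means no ordering hint,
and an entry naming a test that didn't fail (or no longer exists) is
silently skipped, which is why the lists can rot harmlessly.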

Tom
