Torsten Curdt wrote:

...Where shall we draw the line between "supported" and
"unsupported"? Is it really the "two committers rule" that I applied
above?...


How about adding another condition: a block can only have "supported"
status if we have automated tests for all its critical functions?


That's a very interesting point.
+1, even if I'm not the most productive test writer :-)


This might make a big difference in the accountability of supported
stuff in our releases.


Yep. But this also means the core is currently unsupported ;-P



Hehe... Well, time to write some tests then ;-P

Actually I rather like the idea of required tests!
It would also draw the line clearly, much better
than the two-committer rule.


Don't agree at all ;) If I have to choose between an abandoned one-man show with full JUnit test coverage (whatever that means) and a block with an active community but no tests, the one-man show would need *very* strong other advantages for me to even consider it.

And take the >2 committer rule as a starting point: if you want something to become supported, give us good reasons and start a vote.

...but a lot of work :-/


I agree that test-driven programming is an efficient and good way of developing code, and I have made good use of it in e.g. the JXTG refactoring. But as with everything else, it is the quality of the tests that counts. It is far too easy to write tons of testing code in the same spirit as the commenting style where you state that "setFoo()" sets foo.
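A minimal sketch of that difference, using made-up classes (illustration only, not actual Cocoon code): the first test merely restates the setter, while the second pins down behaviour that a careless refactoring could silently break.

import junit.framework.TestCase;

// Hypothetical example class -- illustration only.
class Foo {
    private String name;

    public void setName(String name) {
        if (name == null || name.length() == 0) {
            throw new IllegalArgumentException("name must not be empty");
        }
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

public class FooTest extends TestCase {

    // The "setFoo() sets foo" style: restates the implementation,
    // tells us nothing we did not already know.
    public void testSetName() {
        Foo foo = new Foo();
        foo.setName("bar");
        assertEquals("bar", foo.getName());
    }

    // A behavioural test: documents the contract and catches a
    // refactoring that silently drops the validation.
    public void testEmptyNameIsRejected() {
        Foo foo = new Foo();
        try {
            foo.setName("");
            fail("expected IllegalArgumentException for an empty name");
        } catch (IllegalArgumentException expected) {
            // expected
        }
    }
}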

We should definitely encourage ourselves to write test code when we refactor things. It is useful for checking that we don't break anything, and it also documents what we have learned about what the code does. Starting to write test code for already existing code just because it "should be tested" is more than "a lot of work"; it is IMO pretty close to wasting time.

/Daniel


