On 11/14/06, Danny Angus [EMAIL PROTECTED] wrote:
I like the idea of having two test suites.
This makes sense to me. Keep the current build/test process so that if
anyone introduces a regression, we spot it immediately. Then for TDD
(not that I'm a fan of it, but I have seen it done effectively),
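The two-suite idea could be wired into the build roughly as below. This is a minimal sketch assuming an Ant build (the target names, property names, and the `*FutureTest` naming convention are illustrative, not the project's actual build file): the regression suite fails the build on any failure, while the TDD suite is allowed to fail.

```xml
<!-- Hypothetical targets; paths, properties, and naming are illustrative. -->
<target name="test" depends="compile-tests"
        description="Regression suite: every test must pass">
  <junit haltonfailure="yes" fork="yes">
    <classpath refid="test.classpath"/>
    <batchtest todir="${build.reports}">
      <fileset dir="${build.test.classes}" includes="**/*Test.class"/>
    </batchtest>
    <formatter type="plain"/>
  </junit>
</target>

<target name="test-future" depends="compile-tests"
        description="TDD suite: tests for unimplemented features, may fail">
  <junit haltonfailure="no" fork="yes">
    <classpath refid="test.classpath"/>
    <batchtest todir="${build.reports}">
      <fileset dir="${build.test.classes}" includes="**/*FutureTest.class"/>
    </batchtest>
    <formatter type="plain"/>
  </junit>
</target>
```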
Hi Stefano,
On Tuesday, 14.11.2006 at 14:06 +0100, Stefano Bagnara wrote:
Sorry, I don't think *you* started the flame; I think we all
started a discussion that isn't worth the time (imho).
IMO the discussion was worth the time. For me it was not obvious why you
insisted so strongly
What we're really interested in here is being able to commit designed
tests before we commit the code which passes them.
IMO that *is* TDD
What we're up against is the knowledge that some of the passes are a
low priority, and not critical enough to prevent others from working
on other aspects
On Tuesday, 14.11.2006 at 00:17 +0100, Stefano Bagnara wrote:
IMO a failing test is as valuable as a passing one. Maybe even more
because it reminds us to do something.
I don't think that always having 100% of tests passing is an indicator
of quality.
My unit testing 101 says: test
Stefano Bagnara wrote:
Joachim Draeger wrote:
On Tuesday, 14.11.2006 at 13:27 +0100, Stefano Bagnara wrote:
This *is* what started the flames. Imho it has been a good
procedure, and I (by mistake) thought we all agreed that tests have to
pass before committing and that if they fail it is a
On Tuesday, 14.11.2006 at 09:31 +0100, Bernd Fondermann wrote:
IMO a failing test is as valuable as a passing one. Maybe even more
because it reminds us to do something.
I don't think that always having 100% of tests passing is an indicator
of quality.
My unit
Joachim Draeger wrote:
My personal preference is to avoid like hell committing code that will
make tests fail, just as with committing code that does not compile or
run: of course the last one is the most difficult to detect, but
if it happens (and it has happened to me many times) it should
Joachim Draeger wrote:
I'm still not
convinced about the benefits, except that they work around limitations
in current tools.
When we have tools without those limitations, maybe we'll change our minds
;-) In the meantime I think that tools matter.
Remove tools from the world, and you will
Joachim Draeger wrote:
On Tuesday, 14.11.2006 at 13:27 +0100, Stefano Bagnara wrote:
This *is* what started the flames. Imho it has been a good procedure,
and I (by mistake) thought we all agreed that tests have to pass before
committing and that if they fail it is a mistake of the committer (to
On 11/14/06, Stefano Bagnara [EMAIL PROTECTED] wrote:
9) Norman knew that Noel worked on that code and probably knew how to
fix it, so the best temporary solution was to comment out the test and
open a JIRA issue, to be sure that Noel would not forget the issue, or
to be sure that someone else could
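The "disable the test but keep it tracked" step can be sketched without any test framework. The class, annotation, and method names below are hypothetical (in JUnit 4 the equivalent marker is `@Ignore`); the point is that the failing test stays in the tree with its issue reference attached, and the runner reports the skip instead of silently losing it.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class SkipDemo {

    // Hypothetical marker standing in for JUnit 4's @Ignore.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Disabled {
        String reason(); // e.g. the JIRA key tracking the known failure
    }

    public void testThatPasses() {
        // a healthy regression test: nothing thrown means pass
    }

    @Disabled(reason = "tracked in JIRA; owner knows how to fix it")
    public void testKnownFailure() {
        throw new AssertionError("feature not implemented yet");
    }

    /** Runs every test* method; returns {passed, skipped}. */
    static int[] runAll() throws Exception {
        int passed = 0, skipped = 0;
        for (Method m : SkipDemo.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;
            Disabled d = m.getAnnotation(Disabled.class);
            if (d != null) {
                // skipped, but visibly reported with its issue reference
                skipped++;
                System.out.println("SKIPPED " + m.getName() + " (" + d.reason() + ")");
            } else {
                m.invoke(new SkipDemo());
                passed++;
                System.out.println("PASSED  " + m.getName());
            }
        }
        return new int[] { passed, skipped };
    }

    public static void main(String[] args) throws Exception {
        int[] r = runAll();
        System.out.println(r[0] + " passed, " + r[1] + " skipped");
    }
}
```

The design choice being debated in the thread is exactly this trade-off: the build stays green (the disabled test never runs), yet the reminder to fix it survives in both the skip report and the JIRA issue.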
Joachim Draeger wrote:
On Monday, 13.11.2006 at 12:00 +0100, Bernd Fondermann wrote:
IMO a failing test is as valuable as a passing one. Maybe even more
because it reminds us to do something.
I don't think that always having 100% of tests passing is an indicator
of quality.
My unit