Bernd Fondermann wrote:
On 11/13/06, Joachim Draeger <[EMAIL PROTECTED]> wrote:
Hi!
IMO a failing test is as valuable as a passing one, maybe even more
valuable, because it reminds us to do something.
I don't think that it is an indicator of quality to always have 100% of
tests passing.
My unit-testing basics say: test runs have a binary result. They are
either green or red. If red, you have to fix the test. If you do nothing,
the test will soon be accompanied by a second failing test, and nobody
checks for failing tests anymore.
That does not necessarily mean every failing test is subject to an
immediate fix. For example, this is not possible in test-driven
development, which is based on failing tests.
But intended-to-fail failing tests obscure all
oops-something-bad-happened failing tests.
Right. This is the BIG problem of failing tests.
Therefore...
There are often bugs that have been known for a long time, some of them
not even documented anywhere.
Agreed. It is a good thing to have tests that document
bugs/errors/missing features/evolving code.
But these tests should not be failing, commented out, or contained in a
separate "failing-tests suite".
They should _succeed_, with a comment pointing to the JIRA issue they
reproduce and document.
When such an issue gets fixed, the test fails, the failure is detected,
and the assertion in question can be inverted. Voila.
I agree that this solution is much better than a failing test.
If a developer knows he changed something that makes a test fail, and
this has to be reflected in our trunk, he should at least "alter" the
test so that it passes and add a comment linking to the JIRA issue
created for it.
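As an illustrative sketch (JUnit 3 style; the class, method and issue key
below are made up for the example, not real James code or a real JIRA
issue), such a bug-documenting test could look like this:

  import junit.framework.TestCase;

  public class AddressParserTest extends TestCase {

      // Documents JAMES-XXXX (hypothetical issue key): addresses with a
      // trailing dot are currently accepted although they should be
      // rejected. The assertion encodes the *current* (buggy) behaviour
      // so the suite stays green; invert it once the issue is fixed.
      public void testTrailingDotIsCurrentlyAccepted() {
          AddressParser parser = new AddressParser(); // hypothetical class
          assertTrue(parser.isValid("user@example.org."));
      }
  }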
I really am 100% against having failing tests in svn.
It could happen by mistake, but it should not be done.
When a test is failing we should treat it with the same priority and
thoroughness as issues in JIRA.
If other things are more important or the solution is not clear, it just
stays there, failing.
Not agreed, see above.
I agree with Bernd.
In the current situation there is a barrier to committing tests because
it might break the holy nightly build and the committer would be
responsible for it.
With the practice I pointed out above this is a non-issue.
IMO a failing test is as valuable as a passing one, maybe even more
valuable, because it reminds us to do something.
Agreed. It is so important that we have to get it fixed in order to keep
awareness of failing tests. A failing test obscures other failing
tests.
Agreed.
Tests are not written to steal someone's time. I don't like the idea of
forcing developers to run tests. I hate being forced. Uncommenting
those things in build files is my first reaction to it.
I consider every committer responsible for doing whatever is needed to
ensure the quality of his commits.
IMO it's a psychological mistake to give the false impression that the
code will be automatically verified before committing.
Everyone should be aware of their own responsibilities. IMO that is more
effective than forcing someone.
I can't see the big catastrophe if a committer causes a test to fail by
mistake.
Fully agreed. Unit tests are only one means among others to assure
good code. They must be easy to run and must not be easy to
forget.
What is your objection targeted at? The change to the Ant file?
I propose to accept failing tests in the codebase. The nightly build
should not fail just because tests are failing. Instead we could provide
a test report in the same folder.
Of course, tests run from Ant should not be interrupted by the first
failure (haltonfailure="no").
+1
Imho the nightly build should not be published if it does not pass the
tests. Failing tests should only be there by mistake, and such a mistake
could have unintentionally compromised the quality of the code.
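For illustration, a rough sketch of what that could look like in the
build file (the property names, paths and targets are made up, not
James' actual build.xml): run the tests with haltonfailure="no", collect
a report next to the nightly, and only decide at the end whether to
publish.

  <junit printsummary="on" haltonfailure="no" failureproperty="tests.failed">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${test.reports}">
      <fileset dir="${test.classes}" includes="**/*Test.class"/>
    </batchtest>
  </junit>
  <!-- collect the per-test XML into a browsable report next to the build -->
  <junitreport todir="${test.reports}">
    <fileset dir="${test.reports}" includes="TEST-*.xml"/>
    <report format="frames" todir="${test.reports}/html"/>
  </junitreport>
  <!-- skip publishing the nightly if anything failed -->
  <fail if="tests.failed" message="Tests failed; nightly build not published."/>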
Well, of course this brings some administrative overhead. Maybe we need
a solution for sorting our tests using TestSuites.
We could separate the tests known to fail, so we are warned at once when
a change causes additional tests to fail.
-1
I don't understand why we should complicate things so much.
If, after a change, an assertTrue fails and is known to fail because the
test is wrong, then it should be changed to assertFalse, and a JIRA issue
should be added if the test needs more attention or has to be completed.
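Continuing the hypothetical sketch from above (same made-up class and
issue key): once the behaviour legitimately changes, the assertion is
simply inverted and the comment updated.

      // JAMES-XXXX (hypothetical) is resolved: trailing dots are now
      // rejected, so the assertion from the earlier sketch is inverted.
      public void testTrailingDotIsRejected() {
          AddressParser parser = new AddressParser(); // hypothetical class
          assertFalse(parser.isValid("user@example.org."));
      }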
I don't consider slow tests bad tests. They are just inappropriate for
ad hoc results. Even if they took an hour it would be okay, but we need
to find a way not to force people to run them all the time.
Not agreed. In my experience, unit tests should be fast and easy;
otherwise your unit test setup is flawed or could be simplified.
Other tests, of course, such as integration or compliance tests, may
take much more time.
Bernd
The problem here is that most of our tests are not really unit tests but
are closer to integration tests.
Imho 8 minutes is not a long-running test suite. I have test suites
running for hours in big projects.
A long-running test is better than no test. If a test can be made
faster, fine, but we should not limit the maximum time for a test.
Stefano
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]