Hello guys,
Interesting discussion on a topic that certainly deserves one. Like all of
you, I am surely in favor of introducing more testing, whether unit tests
or regression tests, across Invenio (and INSPIRE). It makes for a better
quality of life :-)
There are many small things that have bitten us because of a lack of test
coverage, and, as Piotr touches upon, if one wants to change something
that may have minor or major repercussions, one feels almost scared of
the consequences of doing it, meaning that new bugs not covered by
tests appear and break the application.
Over on the INSPIRE team, we are planning to introduce our own little
ecosystem of code review and testing requirements before shipping to
production (or master codebase) - in addition to the other best
practices of Invenio development, of course.
For example, we are thinking of requiring tests to be implemented both for
bugs that appear and get fixed and for new features, before shipping out
(to production or Invenio master). Enforcing this, and making sure the
tests make sense, will be part of the responsibility of the code
reviewers; we hope to do much more code review in the near future.
We see this kind of set-up becoming more and more common in other software
projects, and we hope it is something we can take advantage of. More on
this later.
On the topic of deliberately failing tests in the codebase: I think these
should be clearly marked as intended to fail, and when running the normal
unit-test suite they should not show up as "normal" errors/failures; that
is potentially confusing and time-inefficient, as Piotr points out. In
addition, if some test case is deliberately failing because it should be
fixed by someone, I think there are better alternatives for bringing
awareness to it, for example a ticketing system such as Trac.
Cheers,
Jan
On 07/20/2012 10:59 AM, Alessio Deiana wrote:
On Jul 19, 2012, at 4:48 PM, Piotr Praczyk wrote:
Hello !
From: Samuele Kaplun
Hi Piotr,
On Wednesday, 18 July 2012 at 11:10:23, Piotr Praczyk wrote:
This is less about the failing tests themselves: if testing as a quality
measure of the software does not work, there is little motivation to write
new regression/unit tests.
On the other hand, some people prefer to follow test-driven development, by
first implementing tests for functionality that does not yet exist, so that
it will eventually be implemented as originally designed. YMMV.
Up to now my understanding of test-driven development was slightly
different, and I had not even considered such an approach.
What I have noticed is that we always test our code when coding. The
problem is that the testing is manual. By taking the time to automate it,
you get your time back very quickly, since you are basically running tests
all day: adding some code, then testing; adding some more code, testing;
and so on.
I always thought about writing tests for the currently implemented feature
and satisfying them before a commit.
I think the weakness of this approach lies in the size of the team
collaborating on a project.
If you have a small number of frequently communicating developers, this
can work. If everyone starts uploading failing tests, all other developers
have to deal with them and see the results (which leads to the issues
described in my previous e-mail).
It is very difficult to distinguish tests that fail because things are not
implemented yet from those that fail because something is broken.
Moreover, committing failing tests carries the risk of committing tests
that will never succeed because they are simply not well written (this is
the case with at least one currently committed test). This also leads to
trouble.
Maybe it would be better to use the task-tracking system for new features,
or at least mark tests as not yet satisfied, so that the testing framework
could distinguish them automatically?
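A minimal sketch of the "mark as not yet satisfied" idea, using the
standard unittest module's expectedFailure decorator (the feature and test
names here are hypothetical placeholders):

```python
import io
import unittest


def hypothetical_feature():
    # Placeholder: the real feature does not exist yet.
    raise NotImplementedError


class TestFutureFeature(unittest.TestCase):
    @unittest.expectedFailure
    def test_not_yet_implemented(self):
        # Fails today, but the runner reports it as an "expected
        # failure" rather than as a broken build.
        self.assertEqual(hypothetical_feature(), 42)


# Run the suite quietly and inspect the result object.
suite = unittest.TestLoader().loadTestsFromTestCase(TestFutureFeature)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

With this marking, the suite still passes overall, and the expected
failures are listed separately, so they are easy to tell apart from real
breakage.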
I guess having a separate repository for tests of future features (one
branch per ticket) is a bit of an overkill on the infrastructure side, but
we could think about something better than what we have now.
I am using nose as a wrapper around unittest. nose has a way of skipping
tests. Check http://nose.readthedocs.org/en/latest/plugins/skip.html
We could use a similar concept, so that we know which tests are supposed
to fail and they do not bother us unless we want them to.
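A minimal sketch of that skipping concept, using the standard unittest
skip support (which nose's SkipTest plugin also understands; the test name
and skip reason are hypothetical):

```python
import io
import unittest


class TestKnownGap(unittest.TestCase):
    @unittest.skip("deliberately skipped until the underlying bug is fixed")
    def test_pending_fix(self):
        # This body is never executed while the skip decorator is in place.
        self.fail("this would fail today")


# Run the suite quietly and inspect the result object.
suite = unittest.TestLoader().loadTestsFromTestCase(TestKnownGap)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

Skipped tests are reported with their reason in the test summary, so a
reviewer can see at a glance which failures are deliberate.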
--
Alessio Deiana
INSPIRE Developer
GS-SIS-OA
CERN
--
Jan Åge Lavik
CERN System Librarian
GS-SIS
Office: 3-1-014
Mailbox: C27800