As the dumbass trying to merge a lot of the code at Wikidata last
year, I would say: stop bitching about whether to write tests or not.
Any tests are better than no tests; without tests, merging code is
pure gambling. Yes, you can create a small piece of code and be fairly
sure that your own code works when you merge it in, but you cannot be
sure that it will continue to work when a bunch of other nerds
wild-guess what your code is doing and then merge their own code in.
There is only one sure way to keep the code working, and that is by
adding tests. Those tests are not primarily for you, unless you do
TDD; they are for everyone else.

If you want a real comparison of what can be achieved with proper
testing, take a look at the Coverity Scan: 2012 Open Source Report
[1]; they track 254 active projects. It is worth checking what those
projects do to keep their code bug-free. (Amanda uses an automatic
gatekeeper, and _they_have_extensive_tests_.)

You should not write tests that test the internals of a unit; you
should write tests that operate on the API of the unit. You should be
able to change the internals of a unit without changing the tests. If
you must change the tests because of internal changes in a unit, then
there is probably something wrong with the tests.
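
As a rough illustration (the Counter class and its methods here are
made up, not anything from the Wikidata code base), a test like this
exercises only the public API, so the internal storage can change
freely without breaking it:

    import unittest

    class Counter:
        """Toy unit under test; internally it could use a plain int,
        a list of events, or anything else."""
        def __init__(self):
            self._events = []          # internal detail, free to change
        def increment(self):
            self._events.append(1)
        def value(self):
            return len(self._events)

    class CounterTest(unittest.TestCase):
        def test_increment_raises_value_by_one(self):
            c = Counter()
            c.increment()
            # Only the public API is touched; the test never inspects
            # c._events, so replacing the list with an integer counter
            # would not require changing this test.
            self.assertEqual(c.value(), 1)

    if __name__ == "__main__":
        unittest.main()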

Security issues in an implementation are most of the time not found by
unit tests, because the unit tests should test the API and not the
implementation. Another thing is that if the unit tests do in fact
test the API, the necessary seams are probably in place to do source
code fault injection, and that is a very efficient tool for finding
security issues. Take a look at "Software Security Assessment Tools
Review" [2] for a list of proper methods for security assessment.
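
To make the idea of seams concrete, here is a minimal sketch (the
function and its loader parameter are invented for illustration):
because the unit already takes its dependency through a seam, a test
can inject a fault there and check that the API fails safely instead
of passing hostile data through.

    import unittest
    from unittest import mock

    def read_config(loader):
        """Hypothetical unit: fetches raw config through the injected
        loader and is expected to validate what it gets back."""
        raw = loader()
        if not isinstance(raw, dict):
            raise ValueError("config must be a mapping")
        return raw

    class FaultInjectionTest(unittest.TestCase):
        def test_hostile_loader_is_rejected(self):
            # Inject the fault at the seam: the loader returns
            # attacker-controlled junk instead of a dict.
            hostile_loader = mock.Mock(return_value="'; DROP TABLE--")
            with self.assertRaises(ValueError):
                read_config(hostile_loader)

    if __name__ == "__main__":
        unittest.main()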

Also, tests are not a replacement for proper code review, but they
will speed up the process in those cases where bugs are detected
during testing. In some cases you can in fact use the time spent on
merges and the frequency with which you break the build as a measure
of how good your test coverage is: less coverage gives a higher break
frequency. (See the sketch below.)
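
As a back-of-the-envelope sketch of that metric (the data is
invented), break frequency is just broken builds over total merges:

    # Hypothetical CI history: True means the merge broke the build.
    merge_outcomes = [False, False, True, False, True, False, False, False]

    broken = sum(merge_outcomes)
    break_frequency = broken / len(merge_outcomes)
    print(f"break frequency: {break_frequency:.0%}")  # prints 25%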

TDD is nice, but often it turns into testing the internals of some
idea the developer has, which is wrong. It also has a strange tendency
to create lock-in on certain solutions, which can be a problem: you
choose an implementation by how easy it is to test, not by how
efficient it is. The same thing can be said about other formal
processes, so I'm not sure it is a valid point. It is said that those
familiar with TDD write the tests and the implementation in close to
the same time as the implementation alone. Rumor has it that this
comes from the fact that the prewritten tests make the implementation
more testable, and that this is easier than retrofitting tests onto
the implementation later. I don't know.

Code with some tests is sometimes compared to code with no tests at
all, and then claims are made that the code without tests is somehow
similarly bug-free to some degree. This argument usually doesn't hold,
as "some tests" isn't good enough: you need real numbers for the
coverage before you can make valid claims, and if the coverage is low
it can be very difficult to say anything about the real effects. If
you really want to make an effort to get bug-free code you need a lot
of tests, above 50% coverage and probably more like 80%. And not only
do you need a lot of tests, you need a lot of people to fix bugs,
because lots of tests will find more bugs to fix. It is like writing a
document with no spell checker versus using a spell checker.
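
Getting those real numbers is cheap. For example, with Python's
coverage.py package (assuming a tests/ directory laid out for unittest
discovery; the directory name is just an assumption here), something
like this prints a per-file coverage report:

    import coverage
    import unittest

    # Measure which lines the test suite actually executes.
    cov = coverage.Coverage()
    cov.start()

    # Discover and run the whole suite from the "tests" directory.
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.report()  # prints statement coverage percentages per file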


[1] http://www.coverity.com/library/pdf/coverity-scan-2011-open-source-integrity-report.pdf
[2] http://samate.nist.gov/docs/NAVSEA-Tools-Paper-2009-03-02.pdf
