Mark Wedel wrote:

>  While what tchize says about writing unit tests makes sense, I'm also a
> little skeptical that it will actually get done in the way described and/or
> have other bad effects.
>
>  For example, if there is the requirement that a unit test be written to
> test/confirm bug fixes, I could see it leading some people to say they don't
> really want to spend the time to write such a test when the fix itself is
> trivial.  And then they don't fix the bug.
>
>
If the bug is trivial, the unit test is trivial and can be written in a
minute or two. Writing the test is simply the same as discovering how
the bug can arise. If the bug happens on a specific map, for example,
then it's just a matter of copying that map into the test directory
under a specific name, writing a test case that uses that map (it should
take no more than 3 lines of code to initialise the test), and
identifying the wrong state of the object (which simply means locating
the object at X,Y on the map and checking a given value for
correctness).

>  Likewise, while writing unit tests before any large changes are done, I
> know from my experience finding time to actually code the change and do
> basic testing is difficult.  If it now takes 50% longer because unit tests
> also have to be written, that has various implications.
>
>
From my experience, unit testing reduces coding time in the long term
(fewer surprising bugs, less time spent wandering through the code to
find out which of your calls does something wrong, and so on).

>  In my ideal world, I'd love for there to be unit tests for everything.  I'm
> just not sure what level we can really achieve.  If we create/enforce
> draconian policies, I think the more likely scenario is people just stop
> writing new code, and not that they write unit tests (and/or they fork the
> code to a version such that they don't have to write unit tests).
>
>  I'm certainly not saying unit tests are a bad thing.  But I think we have
> to keep in mind the people who are likely to write the tests.
>
>  All that said, quick thoughts:
>
>  All that said, quick thoughts:
>
>1) Basic function/API tests should be easy to write in most cases (if I call
> xyz(a,b) does it do the right thing).  Those shouldn't be much of an issue.
>
>2) A lot of the code operates on extensive external state (map data,
> archetypes, etc).  So to test the function, you need to have a map set up
> in the right way (right objects on the map, etc).
>
>  In this case, you want something that makes such a test easy.  What may be
> easiest is in fact just a large set of test maps, and the unit test could
> effectively hard code those, with known coordinates (the test object being
> the only object at 0,0, and I know I want to insert it at 5,5).
>
>
That's the idea. If you want to test map behaviour X (say, the
'connected' behaviour in the code), you write a map (with a few
connected objects) and write a test that checks a specific object's
status (the connection) after loading. Of course, as part of the
unit-testing framework you need to write some helper methods, like
init_test_using_map(path_to_map).

So along with the framework we need to provide a toolkit that helps with
writing test cases in specific areas (e.g. a network toolkit, a
map-based test toolkit, a server-initialisation toolkit, and so on).
That's the part that will take most of the time, but it also only has to
be written once.

>  My thinking here is that if I can write the basic test to reproduce the
> bug in <10 minutes, I'll probably write the test.  If the framework is such
> that it will take me an hour, then I probably wouldn't.
>
>
If it takes an hour to write a test case for a specific bug, that means
either it took you that long to find the conditions under which the bug
arises (I could even argue that in some cases writing a unit test helps
us find those conditions faster), or the framework is so difficult to
use that no one can use it (which is not the case for most frameworks).
Basically, a unit test is just this:

- init needed server code (with the toolkits, 2 or 3 lines)
- do some method calls
- check_for_some_condition(boolean condition,
  char *error_message_if_condition_not_met)
- do some method calls
- check_for_some_condition(...)
- do some method calls
- check_for_some_condition(...)
- ...
- test finished

The main function and the fixture-creation code can nearly be cut and
pasted from any existing unit test.

>3) If the number of tests grows, it is probably desirable to be able to run
> some specific subset of tests, e.g. I don't want to have to wait 15 minutes
> for the server to run all the tests when I'm trying to fix some specific
> bug - I just want it to run my test.  After all is done, then perhaps
> running all tests before commit makes sense.
>
>
Of course, you should be able to run a specific fixture rather than all
fixtures. Here at work, running all the unit tests can take a few hours
(mainly because we also use a code-coverage tool which runs all the unit
tests in debug mode). When we are making changes, we only test them
against a specific fixture or set of fixtures; we do not run every test
case. Running all the test cases is done at night from a cvs checkout,
and we get the report online. This way we can keep an eye on what's
wrong.
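Assuming the tests were built with the Check framework, its CK_RUN_SUITE and CK_RUN_CASE environment variables select a single suite or test case without rebuilding anything; 'test_server' is a hypothetical name for the project's test binary.

```shell
# Run only one suite (or one case) of a Check-based test binary.
export CK_RUN_SUITE="map"        # restrict the run to the 'map' suite
export CK_RUN_CASE="connected"   # optionally narrow to one test case
# ./test_server                  # would now run just that subset
```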

It is a good thing, for example, to run all those tests before a release.



_______________________________________________
crossfire mailing list
crossfire@metalforge.org
http://mailman.metalforge.org/mailman/listinfo/crossfire
