I would classify what Adam does as "robustness" testing.

Often the first release can be classified as "working, in a perfect world".

Adam lives in a World of Evil.

Let me expand. Most of us (that is, "Not Adam") work during the Day and rest 
at Night. We don't call them "Day" and "Not Day", because Night implies a 
whole range of things not captured by a simple "Not Day" state.

So the extra testing Adam does covers more than "Not Perfect" implies, but it 
is well within what "Evil" includes.

By most people's measures, when it works in a perfect world (and we've proved 
this with our TDD approach), it does what is advertised and can be released.

But by having someone like Adam wreak havoc on our weak, naïve code, we improve 
its robustness in "less than perfect" conditions.

But coding for a perfect world and coding for Adam's world are really the same 
discipline, just taken to different levels. Coding for Evil isn't necessarily 
harder to do or test, but it requires more precision in defining the conditions 
under which you state that your code can be considered to be "working".

E.g. Adam gave the example of code that required a reference to a string as a 
parameter, but failed if you passed a reference to a constant string. If the 
doco for the sub in question had stated "pass a reference to a mutable string" 
rather than "pass a reference to a string", we would have stymied Adam's Evil 
World.

It is perhaps a bit harder in Perl to recognise where this precision is 
required - in Java and C/C++, the concepts of mutable and immutable are easily 
communicated in code, so, for example, you would expect the compiler to catch 
the passing of an immutable string where a mutable one is required.
This probably supports Adam's earlier point about TDD and loosely typed 
languages. Perhaps some of the new features in Perl6 will help here.
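
Since Perl 5 won't catch this at compile time, about the best we can do today 
is a run-time guard that makes the documented precondition explicit - a sketch 
only, reusing the hypothetical trim_in_place() from above:

    use Scalar::Util qw(readonly);
    use Carp qw(croak);

    sub trim_in_place {
        my ($ref) = @_;
        croak "trim_in_place() requires a reference to a MUTABLE string"
            if readonly($$ref);     # reject \"constant" up front, with a clear message
        $$ref =~ s/^\s+|\s+$//g;
        return;
    }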

One last point. Testing weird parameter combos and values is good, but 
robustness testing isn't limited to that. Things like network outages, database 
failures and daylight saving time adjustments are also extremely relevant to 
improving the robustness of our code, if it depends on those services. For 
this kind of complex "external system" testing, I have found the mock 
object approach to be superb - and it is usually part of the TDD development 
cycle where time permits.
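
As a rough sketch of what I mean (the database handle and the error message 
are invented for illustration), Test::MockObject lets you fake the outage 
without needing a real database to fall over:

    use Test::More tests => 1;
    use Test::MockObject;

    # Stand-in for a DBI-style handle whose every query fails, as if the
    # database had just gone away mid-run.
    my $dbh = Test::MockObject->new;
    $dbh->mock( do => sub { die "database unavailable\n" } );

    # The calling code should notice the failure rather than sail on blindly.
    my $survived = eval { $dbh->do('UPDATE sessions SET active = 0'); 1 };
    ok( !$survived, 'a database outage is detected rather than ignored' );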

Leif

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, 31 March 2006 1:54 PM
To: [EMAIL PROTECTED]
Cc: perl-qa@perl.org; [EMAIL PROTECTED]
Subject: Re: [OT] TDD only works for simple things...

Well, the weakness I speak of is not so much that it will never get 
to the point of being stable, but that it introduces a temptation to 
release early without taking the time to critically look at what might 
go wrong, based on your knowledge of how it is implemented.

So more of a timing thing than a "it will never get there" thing.

Adam K

chromatic wrote:
> On Thursday 30 March 2006 07:32, Adam Kennedy wrote:
> 
>> In contrast, as I hear chromatic express it, TDD largely involves
>> writing tests in advance, running the tests, then writing the code.
> 
> Not quite.  It means writing just enough tests for the next testable piece of 
> the particular feature you're implementing, running them to see that they 
> fail, writing the code to make them pass, then refactoring both.  Repeat.
> 
> The important point that people often miss at first is that it's a very, very 
> small cycle -- write a test, write a line of code.
> 
> (The second important point is "refactor immediately after they pass".)
> 
>> In my use of Test::MockObject and UNIVERSAL::isa/can I found I was
>> initially able to cause them to fail quite easily with fairly (to me)
>> trivially evil cases that would occur in real life.
> 
> For the most part, they weren't trivially easy cases that came up in my real 
> life, so I didn't think of them.  I don't feel particularly badly about that 
> either.  The code met my initial goals and only when someone savvy enough to 
> use the code in ways I had not anticipated found edge cases did the 
> problems come up.  I suspect you had little problem working around them until 
> I fixed them, too -- at least in comparison to a lot of other programmers 
> without your evil southern hemisphere nature and experience.
> 
>> This I think (but cannot prove) is a TDD weakness, in that it might
>> encourage not critically looking at the code after it's written to find
>> obvious places to pound on it, because you already wrote the tests and
>> they work, and it's very tempting to move on, release, and then wait for
>> reported bugs, then add a test for that case, fix it, and release again.
> 
> It seems more like a weakness of coding in general.  I don't release code 
> with 
> known bugs, but I expect people will report bugs.  Then I'll add test cases, 
> refactor, and learn from the experience.
> 
> Compare the previous version of UNIVERSAL::isa to the version I released.  
> Not 
> only does it have far fewer bugs, but it's at least an order of magnitude 
> more readable.  Without knowing how people will use it, it would have been 
> much more difficult to write the code as it stands now.
> 
> You see it as a failure or weakness of TDD.  I see it as a success -- there's 
> better code in the world now because of it.  TDD to me is one process of 
> soliciting and acting on feedback.  Releasing code with its tests is another.
> 
> It might be worth discussing the tradeoff between releasing early (with tests 
> and the expectation that bug reports will become tests) and releasing after 
> exhaustive "What could possibly ever go wrong here?" tests, but that's a 
> different discussion.
> 
> -- c