On 5/06/2006 10:30 PM, Bruno Desthuilliers wrote:
> John Machin wrote:
>> On 5/06/2006 10:38 AM, Bruno Desthuilliers wrote:
>>
>>> SuperHik wrote:
>>>
>>>> hi all,
>>>>
> (snip)
> 
>>>> I have an old(er) script with the following task - takes a string I 
>>>> copy-pasted and wich always has the same format:
>>>>
> (snip)
>>>
>>> def to_dict(items):
>>>     items = items.replace('\t', '\n').split('\n')
>>
>>
>> In case there are leading/trailing spaces on the keys:
> 
> There aren't. Test passes.
> 
> (snip)
> 
>> Fantastic -- at least for the OP's carefully copied-and-pasted input.
> 
> That was the spec, and my code passes the test.
> 
>> Meanwhile back in the real world,
> 
> The "real world" is mostly defined by customer's test set (is that the 
> correct translation for "jeu d'essai" ?). Code passes the test. period.

"Jeu d'essai" could be construed as "toss a coin" -- yup, that fits some 
user test sets I've seen.

In the real world, you are lucky to get a test set that covers all the 
user-expected "good" cases. They have to be driven with whips to think 
about the "bad" cases. Never come across a problem caused by "FOO " != 
"FOO"? You *have* led a charmed life, so far.
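To make that concrete (hypothetical data, and a whitespace-stripping 
variant of the earlier to_dict -- assuming the same tab-separated 
key/integer-value layout as the OP's copy-pasted input):

```python
# Hypothetical illustration: a key pasted with a trailing space
# silently fails an exact-match lookup.
d = {"FOO ": 1}          # the key carries an invisible trailing space
print("FOO" in d)        # False -- "FOO " != "FOO"

# Defensive variant: strip each field before building the dict.
def to_dict(items):
    fields = items.replace('\t', '\n').split('\n')
    return dict(zip((k.strip() for k in fields[::2]),
                    (int(v) for v in fields[1::2])))

print(to_dict("spam \t1\nham\t2"))   # {'spam': 1, 'ham': 2}
```

The strip() costs nothing when the input is clean and saves a 3 a.m. 
phone call when it isn't.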

> 
>> there might be problems with multiple tabs used for 'prettiness' 
>> instead of 1 tab, non-integer values, etc etc.
> 
> Which means that the spec and the customer's test set are wrong. Not my 
> responsibility.

That's what you think. The users, the pointy-haired boss, and the evil 
HR director may have other ideas :-)

> Anyway, I refuse to change anything in the parsing 
> algorithm before having another test set.
> 
>> In that case a loop approach that validated as it went and was able to 
>> report the position and contents of any invalid input might be better.
> 
> One doesn't know what *will* be better without actual facts. You can be 
> right (and, from my experience, you probably are !-), *but* you can be 
> wrong as well. Until you have a correct spec and test data set on which 
> the code fails, writing any other code is a waste of time. Better to 
> work on other parts of the system, and come back on this if and when the 
> need arises.

Unfortunately one is likely to be told in a Sunday 03:00 phone call that 
the "test data set on which the code fails" is somewhere in the 
production database :-(
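For what it's worth, the loop approach I had in mind is only a sketch 
along these lines (same format assumptions as before: one tab-separated 
key/value pair per line, integer values; extra tabs tolerated):

```python
def to_dict_checked(text):
    """Parse 'key<TAB>value' lines, reporting where the input goes bad.

    Assumes one tab-separated pair per line with an integer value;
    raises ValueError naming the line number and offending content.
    """
    result = {}
    for lineno, line in enumerate(text.split('\n'), 1):
        if not line.strip():
            continue                      # ignore blank lines
        # Filter out empty fields so multiple 'pretty' tabs still parse.
        parts = [p.strip() for p in line.split('\t') if p.strip()]
        if len(parts) != 2:
            raise ValueError("line %d: expected key<TAB>value, got %r"
                             % (lineno, line))
        key, value = parts
        try:
            result[key] = int(value)
        except ValueError:
            raise ValueError("line %d: non-integer value %r"
                             % (lineno, value))
    return result
```

So to_dict_checked("spam\t1\nham\tXX") blows up with "line 2: 
non-integer value 'XX'" instead of handing you a half-built dict -- 
which is the sort of message you want to find in the Sunday-morning 
traceback.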

Cheers,
John
-- 
http://mail.python.org/mailman/listinfo/python-list