Harald,
I know that this is part of the cost/gain trade-off, but please tell me that
you really need 4K rows to test something in a method.
Btw, I have reduced the time-to-run of those tests and I really hope
I'll never see something broken there; what I'm asking is that you try to pay
a bit of attention to whether we really need all those rows in the DB,
and add a comment here and there (in the test) to explain the
scenario tested by a method.

Harald, the time to run all NH's tests is, for me, important (if you
have a look at the SVN log I'm sure you will understand).

--
Fabio Maulo

P.S. I have changed the logic of the SQL generated for
binary-equality; when you have time, please have a look at the result.
Thanks.
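For anyone looking at the binary-equality change: the underlying problem is that C# `==` treats two nulls as equal, while SQL `=` yields UNKNOWN when either operand is NULL. The idea can be sketched like this (a hypothetical helper for illustration only, not NHibernate's actual code):

```python
def null_safe_eq(lhs: str, rhs: str) -> str:
    """Render C#-style equality (where null == null is true) as SQL.

    Plain `lhs = rhs` evaluates to UNKNOWN when either side is NULL, so
    two NULLs would not match; the extra clause restores C# semantics.
    Hypothetical helper for illustration, not NHibernate's actual code.
    """
    return f"({lhs} = {rhs} or ({lhs} is null and {rhs} is null))"

print(null_safe_eq("t0.Name", "t1.Name"))
# (t0.Name = t1.Name or (t0.Name is null and t1.Name is null))
```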


On 26/04/2011, at 03:39, Harald Mueller <[email protected]> wrote:

> As the culprit who wrote those full-coverage tests for NH-2583, I probably 
> have to stand up to this.
>
> First, I confess I never thought that those full-coverage tests would create 
> that many objects. Of course, we all know that full-coverage testing has this 
> effect/problem of exponential explosion. That is why I limited the property 
> values to zero, one, and the null values necessary for NH-2583. Adding 
> a few more nulls yesterday seems to have driven the object count up above 
> Fabio's threshold ...
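> To make the explosion concrete: with {0, 1, null} as the value set per property, full input coverage is the cross product, which grows as 3^n. A sketch (Python for brevity; the property count below is made up purely for illustration):

```python
from itertools import product

# Candidate values per property: the two "interesting" numbers plus null.
# The property count is a made-up example, not the NH-2583 figure.
values = [0, 1, None]
n_properties = 5

rows = list(product(values, repeat=n_properties))
print(len(rows))  # 3 ** 5 = 243 combinations for full input coverage

# Every extra property (or extra null variant) multiplies the row count,
# which is how a handful of properties reaches thousands of entities.
```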
>
> I actually chose full-coverage testing because it is obviously *simpler* than 
> selective testing - simply because you need no additional oracle that helps 
> you select the data that will be best at detecting errors: such an oracle can 
> itself be a source of bugs. This simplicity pays off in four ways:
>
> (1) The testing code was much easier to write ("just a generator").
> (2) It tests so much ... maybe you will believe me if I say how much better I 
> sleep when I do full-coverage tests.
> (3) You guys (and girls - or is that included in "you guys"?) are easier to 
> convince that something is wrong in existing code. Full coverage "hits 
> broadside" *if* there is a problem at all (but it still does not find 
> problems it is not designed to detect: ?? and ?: work incorrectly in my 
> implementation, and only Patrick's *thinking about it* could find that).
> (4) You and I are also easier to convince that a modification corrects a 
> problem "once and for all".
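> On the ?? and ?: mentioned in (3): C#'s null-coalescing and conditional operators have to be mapped onto SQL's three-valued logic, `??` essentially onto COALESCE and `?:` onto a CASE WHEN expression. A toy in-memory model of the intended semantics (hypothetical function names, not provider code):

```python
def coalesce(*args):
    """SQL-style COALESCE: first non-None argument, else None.
    This is what C#'s `a ?? b` should translate to."""
    for a in args:
        if a is not None:
            return a
    return None

def conditional(cond, if_true, if_false):
    """C#'s `cond ? x : y`; in SQL this becomes
    CASE WHEN cond THEN x ELSE y END.
    A NULL/UNKNOWN condition must fall through to the else branch."""
    return if_true if cond is True else if_false

print(coalesce(None, None, 3))      # 3
print(conditional(None, "x", "y"))  # y (UNKNOWN falls through to else)
```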
>
> And with the || problems having gone unnoticed for - if I see it correctly - 
> quite some time, I had, given their complexity in semantics and 
> design, the feeling that this was the right occasion to use the sledgehammer 
> method, for once. Processor time is cheaper than my thinking time ;-)
>
> There are, as I see it, 2 ways to go:
>
> (a) Pull back from full-coverage testing. This introduces the awful "Which 
> data do I select, based on *risk*?" problem.
> - Please let's *not* run a fixed random subset (i.e., a fixed seed for the 
> data-selection random generator).
> - And please also *not* a *different* random subset on every test run 
> (reproducing problems then becomes awful).
>
> (b) Take smaller subsets *algorithmically* that are still equivalent to 
> full-(input-)coverage tests. This can span the whole range from a doctoral 
> dissertation in test-case reduction to some simple heuristic ... but I'm at a 
> loss to see any "obvious" heuristic.
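> For what it's worth, one family of heuristics from the testing literature that fits (b) is pairwise (2-wise) coverage: keep every *pair* of (property, value) assignments represented in some row while dropping most full combinations. A naive greedy sketch, offered only as an illustration (Python for brevity; not a production covering-array generator):

```python
from itertools import combinations, product

def row_pairs(row):
    """All (prop_i, value_i, prop_j, value_j) pairs a row covers."""
    return {(i, row[i], j, row[j])
            for i, j in combinations(range(len(row)), 2)}

def pairwise_reduce(values, n_props):
    """Greedy pairwise reduction: repeatedly keep the full-coverage row
    that covers the most not-yet-seen value pairs, until every pair of
    (property, value) assignments appears in some kept row."""
    all_rows = list(product(values, repeat=n_props))
    wanted = {(i, a, j, b)
              for i, j in combinations(range(n_props), 2)
              for a in values for b in values}
    kept = []
    while wanted:
        best = max(all_rows, key=lambda row: len(wanted & row_pairs(row)))
        kept.append(best)
        wanted -= row_pairs(best)
    return kept

subset = pairwise_reduce([0, 1, None], 5)
print(len(subset), "rows instead of", 3 ** 5)  # far fewer than 243
```

> The trade-off is that pairwise only guarantees catching bugs triggered by interactions of *two* properties; higher-order interactions need k-wise coverage.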
>
> So much for a first head-scratching ...
>
> Harald M.
>
>
> -------- Original Message --------
>> Date: Tue, 26 Apr 2011 01:01:49 +0000
>> From: "Stephen Bohlen" <[email protected]>
>> To: [email protected]
>> Subject: Re: [nhibernate-development] Re: My feeling of our Tests
>
>> Is it unrealistic of me to expect that it should be possible to validate
>> the behavior of the LINQ provider with perhaps 2-3 rows of data for each
>> narrowly-targeted test case, rather than requiring such massive amounts of
>> test data?
>>
>> And even if more are needed, it's hard for me to believe that 5-10 rows per
>> test scenario (rather than the *thousands* mentioned here) wouldn't be
>> sufficient for all but the most complex test scenarios.
>>
>> Is this an unrealistic expectation (and if so, can someone help me
>> understand why this is a gross over-simplification of what's really needed)?
>>
>> I may be misunderstanding this, but it almost sounds like we're building a
>> perf-test suite for the LINQ provider rather than validating its
>> correctness ;)
>>
>> Am I just being obtuse here (entirely possible <g>) --?
>>
>> -Steve B.
>>
>> -----Original Message-----
>> From: Fabio Maulo <[email protected]>
>> Sender: [email protected]
>> Date: Mon, 25 Apr 2011 19:58:26
>> To: <[email protected]>
>> Reply-To: [email protected]
>> Subject: [nhibernate-development] Re: My feeling of our Tests
>>
>> If you have the feeling that our tests are again slower than
>> before, it is just because, for NH2583, we have some test methods (note:
>> *test-methods*) storing *4608* entities (yes! that is not a mistake; they
>> really are 4608).
>> Some others store "just" *1008*.
>>
>> I have reduced the time to run those tests from over 2 minutes to less than 1
>> minute on my machine...
>> We have done the possible and the impossible; for miracles I don't have more
>> time to spend there.
>> If you have a little bit of time, try to reduce the number of entities
>> needed to run a test that checks the LINQ behavior.
>>
>> Thanks.
>>
>> On Mon, Apr 25, 2011 at 6:20 PM, Fabio Maulo <[email protected]> wrote:
>>
>>> There are two areas where I hope I will never see a broken test (until
>>> yesterday there was only one):
>>> The first area was NHibernate.Test.Legacy, but now it is second in
>>> the ranking.
>>> The first one is now NHibernate.Test.NHSpecificTest.NH2583.
>>>
>>> God save the Queen!!
>>>
>>> --
>>> Fabio Maulo
>>>
>>>
>>
>>
>> --
>> Fabio Maulo
>>
>
