Hello,

Mark Miesfeld wrote:
> On Sun, Nov 30, 2008 at 12:24 PM, Rick McGuire <[EMAIL PROTECTED]> wrote:
>
>   
>> Ok, this is good.  This was a fix for a termination cleanup issue.
>> The test cases (particularly the API test cases) hammer this area
>> rather heavily, so it's not surprising that this might have
>> resulted in some slowdowns in the test suite.  I don't plan to do any
>> further investigation on this.
>>     
>
>   
Yes,
I only thought it might be interesting to know why the interpreter got
slower.
> Rick,
>
> I had intended to reply to this also.  To point out to Rainer that
> this was not necessarily a performance issue.
>
> I think I'll add a few observations anyway.
>
> First off the total test execution time of the test suite has slowly
> been increasing as we add tests.  I expect it to continue to increase.
>
>   
Yes,
but the number of tests was the same (in this case).
> Second, the increase in execution time will not be proportional to the
> number of tests added, because some types of tests will naturally take
> far longer to run.
>
>   
Agreed.
> Third, if you add some new tests that run really fast because of a bug
> in the interpreter, that then run much slower after the bug has been
> fixed, it is not really a degradation of performance.  The first
> really fast time was simply not the correct time.
>   
This is the case here (I think).
> I think what happened here was that Rick was adding a lot of tests to
> test the native API.  Eventually the tests uncovered some bugs.  When
> the bugs were fixed, the new tests took their real time to execute,
> adding substantially to the over-all execution time because of the
> nature of the tests.
>
> One of the long term goals is to come up with some true performance
> tests.  I visualize them as being run under the test framework, but
> separately.  We would run a set of the current type of tests to prove
> correctness.  Then run another set of tests to look at performance.
>
>   
This sounds very good.
> There is another type of tests that should also be run separately.
> That is a set of tests that show the release package is correct.  That
> set would for example execute all the included sample programs, not to
> necessarily prove correctness, but just to demonstrate that they
> actually run.  A few of these types of tests are already in the test
> suite.  These haven't been run separately up to now, because they have
> proved useful in demonstrating correctness.
>
> --
> Mark Miesfeld
>   
Bye
  Rainer Tammer


-------------------------------------------------------------------------
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
_______________________________________________
Oorexx-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oorexx-devel
