I don't want to insist on this, but: if you measure the runtime of your program, you get result = runtime + error. If you take the MIN over a series of measurements, you get MIN(result) = runtime + MIN(error), which yields the best available estimate of the true runtime.
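A minimal sketch of this argument (illustrative only; the noise model and the numbers are assumptions, not measurements from the actual benchmark): if contamination from other system activity can only add time, never subtract it, then MIN converges toward the true runtime while AVG stays biased upward by the mean noise.

```python
import random

# Assumed true runtime of the program under test, in milliseconds.
TRUE_RUNTIME = 100.0

random.seed(42)  # fixed seed so the sketch is reproducible

def measure():
    # Contamination is non-negative: other activity can only slow the
    # program down, never speed it up. Here it is modeled as exponential
    # noise with a mean of 5 ms (an arbitrary assumption).
    noise = random.expovariate(1 / 5.0)
    return TRUE_RUNTIME + noise

samples = [measure() for _ in range(1000)]

# MIN(result) = runtime + MIN(error): the least-contaminated run,
# which approaches the true runtime as the series grows.
print("min:", min(samples))                  # close to 100.0

# AVG(result) = runtime + AVG(error): biased upward by the mean noise.
print("avg:", sum(samples) / len(samples))   # close to 105.0
```

This also illustrates Serguei's counterpoint: the minimum of a series of 1000 is systematically smaller than the minimum of a series of 10, so MIN results are only comparable across programs when the series lengths match.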
On 17.02.2016 at 12:28, Serguei TARASSOV wrote:
> On 17/02/2016 12:00, fpc-pascal-requ...@lists.freepascal.org wrote:
>> Date: Tue, 16 Feb 2016 14:44:42 +0100
>> From: Adrian Veith <adr...@veith-system.de>
>> To: FPC-Pascal users discussions <fpc-pascal@lists.freepascal.org>
>> Subject: Re: [fpc-pascal] Happy tickets benchmark
>>
>> A small remark on your testing series: AVG makes no sense; you should
>> test against MIN. Why? The measured results are contaminated by other
>> activities on your system, so the fastest result is the most accurate,
>> because there is no way to make a program run faster, but there are
>> many ways to make it run slower.
> No, the test against MIN shows only the case in which the result was
> _minimally contaminated_ within the series. But we don't know whether
> that unused time was bigger or smaller than for the other program.
> Also, it is very probable that the minimal time in a series of 1000
> will be better than in a series of 10, and so on.
>
> The average approach smooths the contamination across the series for
> all programs. But you could use better approaches, such as directly
> measuring the CPU time consumed, or simply removing extreme values.
>
> I don't suppose it changes anything in the relative comparison, which
> is the goal of the test.
>
> Regards,
> Serguei
> _______________________________________________
> fpc-pascal maillist - fpc-pascal@lists.freepascal.org
> http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal