Re: [Factor-talk] benchmark is rigged?

2016-07-29, Alexander Ilin
Hello!

  I ran the same test suite a few times, adding and removing the /usepmtimer
option and rebooting the system between runs.

  The results were consistent: with the switch present, `benchmark` works
correctly; without it, it doesn't. When it misbehaved, it could measure an
80-second quotation and report that it ran in under one second.

  I was tempted to switch to the `now now swap time-` approach, but decided
against it, seeing that the boot.ini option has reliably fixed `benchmark`.
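
  Roughly, that approach would look like this (just a sketch: `elapsed` is an
illustrative name, and it assumes the quotation leaves the stack balanced):

USING: calendar kernel prettyprint ;

: elapsed ( quot -- duration )
    now [ call ] dip now swap time- ; inline

! Prints the wall-clock run time of do-smth in seconds.
[ do-smth ] elapsed duration>seconds .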

  GetTickCount is what I would normally use in Delphi, especially since I
don't need sub-second precision for my use case anyway, but in Factor I
decided to go with the standard library, and therefore the `benchmark` word.
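
  (For completeness: if I ever did need GetTickCount from Factor directly,
the alien FFI should be able to bind it. A minimal sketch, assuming the
current FUNCTION: syntax and that the kernel32 library is registered, as the
windows vocabularies do:)

USING: alien.syntax prettyprint windows.types ;

LIBRARY: kernel32

! Milliseconds since boot; the 32-bit counter wraps every ~49.7 days.
FUNCTION: DWORD GetTickCount ( )

GetTickCount .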

  I don't think this is something to be fixed in Factor. It's a Windows (or
maybe even a BIOS or CPU) problem, so it calls for the kind of fix I
implemented. It's on par with having the correct driver for the system
hardware: you can't expect application software to compensate for that.

  And yes, let's hope newer Windows versions and PCs are not going to have 
problems like that.

30.07.2016, 02:31, "Björn Lindqvist" :
> [quoted message trimmed; Björn's full message appears below]

---=---
 Александр



Re: [Factor-talk] benchmark is rigged?

2016-07-29, Björn Lindqvist
That is very interesting! I've looked at how Python does benchmarking
and it does not use QPC:

On Windows, QueryPerformanceCounter() is not used even though it has a
better resolution than GetTickCount(). It is not reliable and has too
many issues.

https://www.python.org/dev/peps/pep-0418/

So we could switch to using GetTickCount() instead, but the drawback is
that its resolution is abysmal. Then again, XP is a very old operating
system...
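
For context, `benchmark` is only as good as the clock behind nano-count: it
is essentially just a nano-count wrapper, roughly (a sketch of the idea, not
necessarily the exact library source):

USING: kernel math system ;

: benchmark ( quot -- runtime )
    nano-count [ call nano-count ] dip - ; inline

nano-count itself is a VM primitive, and on Windows it reads
QueryPerformanceCounter, so switching to GetTickCount() would be a change in
the VM, not in this word.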


2016-07-24 23:43 GMT+02:00 Alexander Ilin :
> [quoted messages trimmed; they appear in full below]



-- 
mvh/best regards Björn Lindqvist



Re: [Factor-talk] benchmark is rigged?

2016-07-24, Alexander Ilin
Hello!

  It looks like adding the /usepmtimer switch to boot.ini has fixed the
problem for that PC. At least the same test cases no longer reproduce the
error after a reboot.

  Source of inspiration: http://www.virtualdub.org/blog/pivot/entry.php?id=106
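
  For anyone else who needs it: the switch goes at the end of the OS entry in
the [operating systems] section of C:\boot.ini. An illustrative entry (the
ARC path varies per machine):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /usepmtimer

The switch makes Windows back QueryPerformanceCounter with the ACPI
power-management timer instead of the CPU's TSC, which is the part that goes
wrong on the affected machines.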

24.07.2016, 21:59, "Alexander Ilin" :
> [quoted message trimmed; the original message appears below]

---=---
 Александр



[Factor-talk] benchmark is rigged?

2016-07-24, Alexander Ilin
Hello!

  I'm having a weird problem with the benchmark word on my WinXP SP3 32-bit
machine, running the latest Factor from GitHub master.

  The benchmark word reports times at or under about 1 second
(1,000,000,000 nanoseconds) for some piece of code, but the actual run time
of the quotation is always about 5-6 seconds.

  For example, here's a sketch of a test session, done in the following
order without restarting the Factor instance.

  This code would show running time < 1 sec:

[ do-smth ] benchmark

  Then this code would show the correct wall time for both the time word and 
the benchmark word:

[ [ do-smth ] time ] benchmark

  This would show the correct timing for both words as well:

[ [ do-smth ] benchmark ] time

  Finally, returning to this code again measures incorrectly (< 1 sec):

[ do-smth ] benchmark

  Has anyone experienced anything like this before?

  Replacing [ do-smth ] with [ now do-smth now ] shows the correct time in
all cases, and by calculating the difference I can see that benchmark
measures incorrectly. But I can't figure out the reason.

  On the other hand, replacing [ do-smth ] with [ nano-count do-smth
nano-count ], I see that benchmark agrees with the inner nano-count
readings. So the problem is that for some reason nano-count returns a
smaller time difference for the code I run, depending on the way I run it:
with or without the "time" word.
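
  Concretely, the cross-check looks like this (with do-smth standing in for
the real code, as above):

USING: kernel math prettyprint system tools.time ;

! The quotation leaves the inner nano-count delta on the stack; benchmark
! then pushes its own (outer) measurement on top. Both are in nanoseconds
! and agree with each other, even when both are far below the wall-clock
! time.
[ nano-count do-smth nano-count swap - ] benchmark
. . ! prints benchmark's (outer) result first, then the inner delta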

---=---
 Александр
