That is very interesting! I've looked at how Python does benchmarking
and it does not use QPC:

On Windows, QueryPerformanceCounter() is not used even though it has a
better resolution than GetTickCount(). It is not reliable and has too
many issues.

https://www.python.org/dev/peps/pep-0418/

So we could switch to using GetTickCount() instead, but the drawback is
that its resolution is abysmal (on the order of 10-16 ms). Then again, XP
is a very old operating system...
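
For comparison, here is a minimal Win32 C sketch (illustrative only, not
the actual Factor VM code) of what the two timers report;
QueryPerformanceFrequency gives the nominal tick length, while
GetTickCount only advances in roughly 10-16 ms steps:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Low resolution: milliseconds since boot, updated every ~10-16 ms. */
    DWORD ticks = GetTickCount();

    /* High resolution, but on some XP-era machines it is backed by the TSC,
       which can drift or jump across cores (hence the /usepmtimer fix). */
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);

    printf("GetTickCount: %lu ms since boot\n", (unsigned long)ticks);
    printf("QPC tick length: %.0f ns\n", 1e9 / (double)freq.QuadPart);
    return 0;
}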


2016-07-24 23:43 GMT+02:00 Alexander Ilin <ajs...@yandex.ru>:
> Hello!
>
>   It looks like adding the /usepmtimer switch to the boot.ini has fixed the 
> problem for that PC. At least the same test cases no longer reproduce the 
> error after a reboot.
>
>   Source of inspiration: http://www.virtualdub.org/blog/pivot/entry.php?id=106
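>
>   For reference, the switch just goes at the end of the OS entry in boot.ini,
> e.g. (the ARC path below is a typical example and will differ per machine):
>
> multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /usepmtimer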
>
> 24.07.2016, 21:59, "Alexander Ilin" <ajs...@yandex.ru>:
>> Hello!
>>
>>   I'm having a weird problem with the benchmark word on my WinXP SP3 32-bit
>> machine, running the latest Factor from GitHub master.
>>
>>   The benchmark word reports times of about 1 second or less (1,000,000,000
>> nanoseconds) for some piece of code, but the actual run time of the quotation
>> is always about 5-6 seconds.
>>
>>   For example, here's a sketch of a test session, all in a single Factor
>> instance, with the steps run in the following order.
>>
>>   This code would show running time < 1 sec:
>>
>> [ do-smth ] benchmark
>>
>>   Then this code would show the correct wall time for both the time word and 
>> the benchmark word:
>>
>> [ [ do-smth ] time ] benchmark
>>
>>   This would show the correct timing for both words as well:
>>
>> [ [ do-smth ] benchmark ] time
>>
>>   Finally, running this same code again measures incorrectly (< 1 sec):
>>
>> [ do-smth ] benchmark
>>
>>   Has anyone experienced anything like this before?
>>
>>   Replacing [ do-smth ] with [ now do-smth now ] shows the correct time in
>> all cases, and by calculating the difference between the two timestamps I can
>> see that benchmark measures incorrectly. But I can't figure out the reason.
>>
>>   On the other hand, replacing [ do-smth ] with [ nano-count do-smth
>> nano-count ], I see that benchmark agrees with the counters. So the problem
>> is that, for some reason, nano-count returns a smaller time difference for
>> the code I run, depending on whether I run it with or without the "time" word.
>
> ---=====---
>  Александр
>



-- 
mvh/best regards Björn Lindqvist
