Stefan Krah added the comment:

Core 1 fluctuates even more (my machine only has 2 cores):
$ taskset -c 1 ./python telco.py full

Control totals:
  Actual   ['1004737.58', '57628.30', '25042.17']
  Expected ['1004737.58', '57628.30', '25042.17']

Elapsed time: 6.783009

$ taskset -c 1 ./python telco.py full

Control totals:
  Actual   ['1004737.58', '57628.30', '25042.17']
  Expected ['1004737.58', '57628.30', '25042.17']

Elapsed time: 7.335563

I have some of the same concerns as Serhiy. There's a lot of statistics going on in the benchmark suite -- is it really possible to separate that cleanly from the actual runtime of the benchmarks?

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26275>
_______________________________________
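To make the concern concrete, here is a minimal sketch of the repeated-run timing pattern such a suite uses. The workload, repeat count, and function names are illustrative only, not taken from telco.py or the benchmark suite; the point is that once raw samples are reduced to summary statistics, different statistics (min vs. mean) can tell different stories about the "actual" runtime, which is exactly what makes a clean separation hard.

```python
# Illustrative sketch, not the benchmark suite's real code:
# time a workload several times, then reduce the raw samples
# to summary statistics.
import time
import statistics

def bench(workload, repeat=5):
    """Run `workload` `repeat` times and return raw timings in seconds."""
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return samples

def summarize(samples):
    # min, mean, and stdev of the same samples can diverge noticeably
    # on a noisy machine -- the statistics layer shapes the result.
    return {
        "min": min(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }

samples = bench(lambda: sum(i * i for i in range(100_000)))
print(summarize(samples))
```

On a machine with fluctuating cores, the gap between min and mean in such a summary mirrors the elapsed-time spread seen in the two runs above.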