On 26/02/2018 12:06, Antoon Pardon wrote:
On 23-02-18 02:27, Steven D'Aprano wrote:
Why do you care about the 50 million calls? That's crazy -- the important
thing is *calculating the Fibonacci numbers as efficiently as possible*.

Not necessarily.

David Beazley in his talks sometimes uses an inefficient algorithm for calculating Fibonacci numbers because he needs something that uses the CPU intensively. Calculating the Fibonacci numbers as efficiently as possible would, in that context, defeat the purpose.

So in the context of a benchmark it is not unreasonable to assume those 50 million calls are the purpose, and not calculating the Fibonacci numbers as efficiently as possible.
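To illustrate the point about call volume (this is a minimal sketch of the kind of deliberately inefficient benchmark being discussed, not David Beazley's actual code; the `counter` argument is added here just to make the call count visible):

```python
def fib(n, counter):
    """Naive exponential-time Fibonacci; counter[0] tracks the call count."""
    counter[0] += 1
    if n < 2:
        return n
    return fib(n - 1, counter) + fib(n - 2, counter)

if __name__ == "__main__":
    counter = [0]
    result = fib(20, counter)
    # The number of calls is 2*fib(n+1) - 1, so it grows exponentially
    # with n; by around n = 36 it passes roughly 48 million calls.
    print(result, counter[0])   # 6765, with 21891 calls
```

The workload here is the exponential blizzard of function calls, which is exactly what makes it useful for stressing an interpreter's call machinery, and exactly what an "efficient" iterative or memoised version would eliminate.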

I don't think Steven is ever going to concede this point.

Because Python performs badly on it compared to Julia or C, and it's not possible to conveniently offload the task to some fast library, because the benchmark only exercises a handful of primitive byte-codes.

(I have the same trouble with my own interpreted language. Although somewhat brisker than CPython, it will always be much slower than a C-like language on such micro-benchmarks.

But I accept that; I don't have an army of people working on acceleration projects and tracing JIT compilers. To those people however, such a benchmark can be a useful yardstick of progress.)

--
bartc
--
https://mail.python.org/mailman/listinfo/python-list