On 22/02/2018 16:00, Steven D'Aprano wrote:
> On Thu, 22 Feb 2018 12:03:09 +0000, bartc wrote:

>> The idea of the Fibonacci benchmark is to test how effectively an
>> implementation manages large numbers of recursive function calls. Thus,
>> fib(36) would normally involve 48,315,633 calls.
>>
>> This version does only 37, giving a misleading impression.

> Who cares if it only does 37? There is *absolutely no benefit* to those
> additional 48,315,596 calls, and thanks to the benchmark I discovered
> that.[1]
>
> I want to know what is the fastest implementation of Fibonacci I can
> write. I'm not married to the idea that I have to make a ludicrous 48
> million function calls just to get the 36th Fibonacci number.

As I said, people keep missing the point. The fact that this uses a grossly inefficient way of calculating Fibonacci seems to blind them to any other consideration.

The actual result is irrelevant, so long as it's correct. The important thing is those 48 million calls.

If you want to know how well one language implementation deals with function calls compared to another, you will prefer a test that does 48 million function calls and little else, rather than one that does only three dozen.

And you will prefer one that is harder for a compiler to optimise away, as you want to know how fast those 48 million calls can be made, not how well a compiler can avoid making them!
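
For reference, here is the doubly-recursive shape being benchmarked, as a
minimal Python sketch. With the usual n < 2 base case, fib(n) makes
2*fib(n+1) - 1 calls, which is where the 48,315,633 figure for n=36 comes
from:

  def fib(n):
      # Deliberately naive doubly-recursive Fibonacci: nearly all of
      # the run time is function-call overhead, by design.
      if n < 2:
          return n
      return fib(n - 1) + fib(n - 2)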


>> (It then goes on to suggest using 'numba' and its JIT compiler, and
>> using that on an /iterative/ version of fib(). Way to miss the
>> point.)

> Indeed, you certainly have.


>> It might be a technique to bear in mind, but it is nonsensical to say
>> this gives a 17,000 times speed-up over the original code.

> That's *exactly what it did*.

By running a different program? Fine, then here's my submission for the N=20 case used in the tests:

  def fib(n): return 6765

You have to compare like with like when testing languages, otherwise there's no point. If you want to use an iterative algorithm, then do the same in Julia. But then these functions are so fast that it's not clear exactly what you are testing.
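
(For context, the iterative version that the article pointed numba at
would be something along these lines; a sketch of mine, not the article's
actual code:

  from numba import njit

  @njit
  def fib_iter(n):
      # Iterative Fibonacci: n additions instead of ~48 million calls,
      # so it measures loop and arithmetic speed, not call overhead.
      # Note that numba compiles this down to fixed-width machine
      # integers, so the result overflows beyond fib(92).
      a, b = 0, 1
      for _ in range(n):
          a, b = b, a + b
      return a

A different program entirely, which is the point being argued.)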


> Did your C code support numbers greater than 2**64? My Fibonacci function
> can calculate the 10,000th Fibonacci number in the blink of an eye:
>
> py> fibi(10000)
> 336447648764317832666216120051075...976171121233066073310059947366875
>
> Output truncated for brevity; it is a 2090-digit number.
>
> Admittedly it did take about a minute and a half to generate all 208988
> digits of the one millionth Fibonacci number, but the performance is fast
> enough for my needs.

(OK, my interpreter took nearly 4 minutes, but I use my own big integer algorithms. All jolly interesting, but a diversion.)
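
(Steven's fibi isn't shown above, but with Python's built-in
arbitrary-precision integers a plain iterative loop is presumably all it
takes:

  def fibi(n):
      # Same loop as before, minus numba: Python's ints grow as
      # needed, so the 2090-digit fib(10000) just works.
      a, b = 0, 1
      for _ in range(n):
          a, b = b, a + b
      return a

No fixed-width limit, at the cost of bignum arithmetic throughout.)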

The fact is that the vast majority of integer calculations don't need big integers (pretty much 100% of mine). Probably most don't even need 64 bits; 32 would do.

So, if there is a performance advantage in having separate 64-bit and big integer types, then why not make use of it?

There's been a lot of talk of the advantages of static typing, and here is one use for it.
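
To make that concrete from Python, you can force fixed-width 64-bit
arithmetic with numpy; a sketch (fib64 is my name for it):

  import numpy as np

  def fib64(n):
      # Forced 64-bit arithmetic: wraps around past fib(92), with a
      # RuntimeWarning from numpy; Python's own ints would simply
      # keep growing instead.
      a, b = np.int64(0), np.int64(1)
      for _ in range(n):
          a, b = b, a + b
      return a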

--
Bartc