def fib(n, a=1, b=1):
    # Accumulator version: a and b carry the two most recent
    # Fibonacci numbers, so this is linear in n rather than
    # exponential like the naive two-branch recursion.
    if n == 0:
        return a
    elif n == 1:
        return b
    return fib(n - 1, b, a + b)


And for the record:

fib(500)
225591516161936330872512695036072072046011324913758190588638866418474627738686883405015987052796968498626L
timeit.Timer('fib(500)', 'from __main__ import fib').timeit(1)
0.00083398818969726562
Less than a millisecond, versus millions of years for the OP's algorithm.
I know which I would choose. Faster hardware can only go so far in hiding
the effect of poor algorithms.

I agree with this 100%.

BTW, the timings are not that different for the naive recursion in the OP's version and yours.
fib(500) on my machine:
        OP's: 0.00116 s (far away from millions of years)
        This here: 0.000583 s
        Gabriel's while loop: 0.000246 s
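For reference, a "while loop" version along the lines of the one timed above might look like this (a hypothetical reconstruction, not Gabriel's actual code; it uses the same fib(0) == fib(1) == 1 convention as the accumulator version quoted at the top):

```python
def fib_iter(n):
    # Iterative Fibonacci: a and b hold the two most recent values.
    # Avoids recursion entirely, so there is no call overhead and no
    # recursion-depth limit to worry about.
    a, b = 1, 1
    while n > 0:
        a, b = b, a + b
        n -= 1
    return a
```

It computes the same values as the recursive accumulator version, just without the function-call overhead, which is consistent with it timing fastest of the three.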

The fastest algorithm I've found does fib(1000000) in 0.25 seconds on my machine.
However, I have not found a way to parallelize this algorithm, and I think it is not possible.
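The post doesn't name the algorithm, but one well-known candidate in this speed class is "fast doubling", which reaches fib(n) in O(log n) big-integer multiplications. A sketch (my own, using the standard fib(0) == 0 convention, unlike the quoted code above):

```python
def fib_pair(n):
    # Fast doubling: returns (fib(n), fib(n+1)) using the identities
    #   fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
    #   fib(2k+1) = fib(k)**2 + fib(k+1)**2
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # (fib(k), fib(k+1)) with k = n // 2
    c = a * (2 * b - a)          # fib(2k)
    d = a * a + b * b            # fib(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fast_fib(n):
    return fib_pair(n)[0]
```

Each call halves n, so fib(1000000) needs only about 20 recursion levels; the cost is dominated by the multiplications of ever-larger integers. That chain of dependent multiplications is also why the algorithm resists parallelization: each doubling step needs the result of the previous one.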

To come back to the original question: yes, it is possible, but there are quite a few restrictions. In the case of the naive recursion, I was able to get a speedup of more than 3x on a 4-core machine. With the highly optimized version mentioned above, I had no success in utilizing more than one core.
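One way the naive recursion can be spread over several cores (a sketch of my own, not Gerhard's actual code) is to expand the recurrence two levels, which yields independent subproblems that a process pool can compute in parallel: fib(n) = fib(n-2) + 2*fib(n-3) + fib(n-4).

```python
from concurrent.futures import ProcessPoolExecutor

def naive_fib(n):
    # Textbook exponential-time recursion (fib(0) == 0 convention).
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

def parallel_fib(n, workers=3):
    # Expanding fib(n) = fib(n-1) + fib(n-2) one more level gives
    # fib(n) = fib(n-2) + 2*fib(n-3) + fib(n-4): three independent
    # subproblems that separate processes can attack concurrently.
    if n < 4:
        return naive_fib(n)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        a, b, c = pool.map(naive_fib, [n - 2, n - 3, n - 4])
    return a + 2 * b + c
```

The subproblems shrink geometrically, so the load is uneven and the speedup stays well below the core count, which matches the "more than 3x on 4 cores" observation. Note that this only accelerates a hopeless O(2^n) algorithm; the serial fast algorithm still wins by an enormous margin.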

In conclusion I would say: yes, a single function can profit from multiple cores, but often a better algorithm delivers better results than more cores. That said, a slower but parallelizable algorithm might still outperform the best serial algorithm, given enough cores :).

Gerhard

--
http://mail.python.org/mailman/listinfo/python-list
