sturlamolden wrote:
> On Feb 10, 8:38 am, Paul McGuire wrote:
>
> > Even worse than linear, the function is recursive, which as I
> > understand it, is inherently a no-no when looking for code that is
> > parallel-friendly.
>
> There is no way to parallelize Fibonacci numbers computed with li
In article ,
Grant Edwards wrote:
>On 2009-02-20, Aahz wrote:
>> Steven D'Aprano wrote:
>>>
>>> As I understand it, there's very little benefit to multi-cores in
>>> Python due to the GIL.
>>
>> As phrased, your statement is completely wrong. Here's a more
>> correct phrasing: "For threaded c
On Fri, Feb 20, 2009 at 4:27 PM, Grant Edwards wrote:
> On one hand, the upshot of that is that by finding an
> appropriate library module you might gain some of the same
> benefits as removing the GIL.
>
> On the other hand, that doesn't help if you're doing something
> original enough that nobod
On 2009-02-20, Robert Kern wrote:
>
>> Do the crunchy bits of scipy/numpy, scientific python, vtk and
>> other compute-intensive libraries tend to release the GIL
>> while they're busy "computing"?
>
> Often. Not as often as they could, sometimes.
On one hand, the upshot of that is that by findin
On 2009-02-20 13:20, Unknown wrote:
On 2009-02-20, Aahz wrote:
Steven D'Aprano wrote:
As I understand it, there's very little benefit to multi-cores
in Python due to the GIL.
As phrased, your statement is completely wrong. Here's a more
correct phrasing: "For threaded compute-bound applica
On 2009-02-20, Aahz wrote:
> Steven D'Aprano wrote:
>
>> As I understand it, there's very little benefit to multi-cores
>> in Python due to the GIL.
>
> As phrased, your statement is completely wrong. Here's a more
> correct phrasing: "For threaded compute-bound applications
> written in pure
In article ,
Steven D'Aprano wrote:
>
>As I understand it, there's very little benefit to multi-cores in Python
>due to the GIL.
As phrased, your statement is completely wrong. Here's a more correct
phrasing: "For threaded compute-bound applications written in pure
Python, there's very little
In article ,
Steven D'Aprano wrote:
>> .
>> .
>> .
And now for my version (which admittedly isn't really mine, and returns
slightly incorrect fib(n) for large values of n, due to the limited
floating point precision).
>>
"Gabriel Genellina" writes:
> Even my Pentium I MMX 233MHz can compute fib(36) thousands of times faster
> than that with the right algorithm. So I don't see the point in
> parallelizing if you're going to get infinitely worse results...
The point is to test the parallelization scheme. Recursive
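For reference, one "right algorithm" of the kind Gabriel alludes to is fast doubling, which needs only O(log n) arithmetic steps. This is a sketch, not code from the thread, and it uses the standard convention fib(0) = 0, fib(1) = 1 rather than the OP's:

```python
# Fast doubling: computes fib(n) in O(log n) arithmetic steps.
def fib_pair(n):
    """Return (fib(n), fib(n+1))."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)        # fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
    d = a * a + b * b          # fib(2k+1) = fib(k)**2 + fib(k+1)**2
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_pair(n)[0]

print(fib(36))   # fast even on very modest hardware
```

With this, fib(36) is a handful of multiplications instead of tens of millions of recursive calls, which is why the old Pentium wins.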
On Wed, 18 Feb 2009 23:36:25 -0200, sturlamolden wrote:
On Feb 10, 7:28 am, oyster wrote:
Let's rewrite that slightly and see...
def fib(n, _map=None):
    if not _map: _map = map
    if n > 2:
        return sum(_map(fib, (n-1, n-2)))
    else:
        return 1
With two cores:
E:\>py
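For readers trying to reproduce the rewrite above, here is a sketch of how the _map hook can be fed a multiprocessing pool; the Pool(2) setup and the fib(30) call are my assumptions, not the elided session from the post:

```python
# Sketch only: wiring a multiprocessing pool into the _map hook of the
# rewrite above.  Pool(2) and fib(30) are assumed, not taken from the post.
from multiprocessing import Pool

def fib(n, _map=None):
    if not _map:
        _map = map
    if n > 2:
        # Only the outermost call uses the parallel map; the recursive
        # calls inside each worker fall back to the builtin map.
        return sum(_map(fib, (n - 1, n - 2)))
    else:
        return 1

if __name__ == '__main__':
    with Pool(2) as pool:
        print(fib(30, pool.map))   # the two top-level branches run in parallel
```

Only the top-level split is parallel, so the best case is roughly a 2x speedup over the (still exponential) naive recursion.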
On Feb 10, 7:28 am, oyster wrote:
> I mean this
> [code]
> def fib(n):
>     if n<=1:
>         return 1
>     return fib(n-1)+fib(n-2)
Let's rewrite that slightly and see...
def fib(n, _map=None):
    if not _map: _map = map
    if n > 2:
        return sum(_map(fib, (n-1, n-2)))
    else:
On Feb 10, 8:38 am, Paul McGuire wrote:
> Even worse than linear, the function is recursive, which as I
> understand it, is inherently a no-no when looking for code that is
> parallel-friendly.
There is no way to parallelize Fibonacci numbers computed with linear
time complexity, as we must know
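The linear-time computation sturlamolden refers to looks something like this (a minimal sketch in the OP's convention fib(0) = fib(1) = 1); each step needs the two values before it, which is exactly why it resists parallelization:

```python
# Minimal sketch of the linear-time iteration (OP's convention:
# fib(0) == fib(1) == 1).  Each new value depends on the two before it,
# so the loop is inherently serial.
def fib(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```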
On Tue, 17 Feb 2009 14:30:27 +, Cameron Laird wrote:
> In article ,
> Steven D'Aprano wrote:
> .
> .
> .
>>> And now for my version (which admittedly isn't really mine, and returns
>>> slightly incorrect fib(n) for large values
In article ,
Steven D'Aprano wrote:
.
.
.
>> And now for my version (which admittedly isn't really mine, and returns
>> slightly incorrect fib(n) for large values of n, due to the limited
>> floating point precision).
>
>The f
oyster writes:
> Hi, guys, my fib(xx) is just an example to show "what is a single
> function" and "what is the effect I expect to see when enable
> multi-core".
>
> My real purpose is to know "whether multi-core can help to improve
> the speed of a common function". But I know definitely that th
On Tue, Feb 10, 2009 at 8:57 PM, James Mills
wrote:
> On Wed, Feb 11, 2009 at 2:21 PM, oyster wrote:
>> My real purpose is to know "whether multi-core can help to improve the
>> speed of a common function". But I know definitely that the answer is
>> NO.
>
> As stated by others, and even myself,
On Wed, Feb 11, 2009 at 2:21 PM, oyster wrote:
> My real purpose is to know "whether multi-core can help to improve the
> speed of a common function". But I know definitely that the answer is
> NO.
As stated by others, and even myself,
it is not possible to just "automagically"
improve the execut
Hi, guys, my fib(xx) is just an example to show "what is a single
function" and "what is the effect I expect to see when enable
multi-core".
My real purpose is to know "whether multi-core can help to improve the
speed of a common function". But I know definitely that the answer is
NO.
On 2009-02-11 08:01:29 +1000, Steven D'Aprano
said:
On Tue, 10 Feb 2009 22:41:25 +1000, Gerhard Weis wrote:
btw. the timings are not that different for the naive recursion in OP's
version and yours.
fib(500) on my machine:
OP's: 0.00116 (far away from millions of years)
This here: 0
On Tue, 10 Feb 2009 02:05:35 -0800, Niklas Norrthon wrote:
> According to the common definition of fibonacci numbers fib(0) = 0, fib
> (1) = 1 and fib(n) = fib(n-1) + fib(n-2) for n > 1. So the number above
> is fib(501).
So it is. Oops, off by one error!
Or, if you prefer, it's the right algori
On Tue, 10 Feb 2009 22:41:25 +1000, Gerhard Weis wrote:
> btw. the timings are not that different for the naive recursion in OP's
> version and yours.
> fib(500) on my machine:
> OP's: 0.00116 (far away from millions of years)
> This here: 0.000583
I don't believe those timings are credib
On Tue, 10 Feb 2009 12:43:20 +, Lie Ryan wrote:
> Of course multi-core processor can improve single function using
> multiprocessing, as long as the function is parallelizable. The
> Fibonacci function is not a parallelizable function though.
As I understand it, there's very little benefit to
def fib(n, a=1, b=1):
    if n==0:
        return a
    elif n==1:
        return b
    return fib(n-1, b, a+b)
And for the record:
fib(500)
225591516161936330872512695036072072046011324913758190588638866418474627738686883405015987052796968498626L
timeit.Timer('fib(500)',
'from __main__ i
On Tue, 10 Feb 2009 14:28:15 +0800, oyster wrote:
> I mean this
> [code]
> def fib(n):
>     if n<=1:
>         return 1
>     return fib(n-1)+fib(n-2)
>
> useCore(1)
> timeit(fib(500)) #this show 20 seconds
>
> useCore(2)
> timeit(fib(500)) #this show 10 seconds [/code]
>
> Is it possible?
>
On 10 Feb, 09:45, Steven D'Aprano
wrote:
> On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
> > Once I have seen Haskell code, that ran fibonacci on a 4 core system.
>
> > The algorithm itself basically used an extra granularity parameter until
> > which new threads will be sparked. e.g. fib(
On Tue, Feb 10, 2009 at 12:45 AM, Steven D'Aprano
wrote:
> On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
>
>> Once I have seen Haskell code, that ran fibonacci on a 4 core system.
>>
>> The algorithm itself basically used an extra granularity parameter until
>> which new threads will be sp
On Tue, 10 Feb 2009 06:45:37 -0200, Steven D'Aprano wrote:
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
We tried this code on a 4 core machine using only 1 core and all 4 cores.
1 core wallclock: ~10s
4 core wallclock: ~3s
Three seconds to calculate fib(40) is terrible. Well,
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
> Once I have seen Haskell code, that ran fibonacci on a 4 core system.
>
> The algorithm itself basically used an extra granularity parameter until
> which new threads will be sparked. e.g. fib(40, 36) means, calculate
> fib(40) and spark thr
Once I have seen Haskell code, that ran fibonacci on a 4 core system.
The algorithm itself basically used an extra granularity parameter until
which new threads will be sparked. e.g. fib(40, 36) means, calculate
fib(40) and spark threads until n=36.
1. divide: fib(n-1), fib(n-2)
2. divide: fib(n
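A rough Python analog of the Haskell granularity scheme described above might look like this; it is my sketch, not Gerhard's code, with the threshold playing the role of the second argument in fib(40, 36):

```python
# Rough Python analog of the Haskell scheme (a sketch, not Gerhard's code):
# spawn worker processes only while n is above the granularity threshold,
# then finish each branch with the plain serial recursion.
from concurrent.futures import ProcessPoolExecutor

def fib_serial(n):
    if n <= 1:
        return 1
    return fib_serial(n - 1) + fib_serial(n - 2)

def fib(n, threshold):
    if n <= threshold:
        return fib_serial(n)
    with ProcessPoolExecutor(max_workers=2) as ex:
        left = ex.submit(fib, n - 1, threshold)
        right = ex.submit(fib, n - 2, threshold)
        return left.result() + right.result()

if __name__ == '__main__':
    print(fib(40, 36))   # analogous to the fib(40, 36) call above
```

The threshold should stay close to n: each level above it starts its own small pool of processes, so spawning too deeply costs more than the recursion saves.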
On Feb 10, 12:43 am, Chris Rebert wrote:
> Considering the GIL, I highly doubt it. Also, the algorithm is
> fundamentally linear -- each result depends on previous results -- so
> I don't think you can easily parallelize it (if at all). You'd
> probably get greater speedup using memoization.
>
On Tue, 10 Feb 2009 14:28:15 +0800, oyster wrote:
> I mean this
> [code]
> def fib(n):
>     if n<=1:
>         return 1
>     return fib(n-1)+fib(n-2)
>
> useCore(1)
> timeit(fib(500)) #this show 20 seconds
>
> useCore(2)
> timeit(fib(500)) #this show 10 seconds [/code]
>
> Is it possible?
Wh
On Mon, Feb 9, 2009 at 10:28 PM, oyster wrote:
> I mean this
> [code]
> def fib(n):
>     if n<=1:
>         return 1
>     return fib(n-1)+fib(n-2)
>
> useCore(1)
> timeit(fib(500)) #this show 20 seconds
>
> useCore(2)
> timeit(fib(500)) #this show 10 seconds
> [/code]
>
> Is it possible?
>
> and mo
I mean this
[code]
def fib(n):
    if n<=1:
        return 1
    return fib(n-1)+fib(n-2)
useCore(1)
timeit(fib(500)) #this show 20 seconds
useCore(2)
timeit(fib(500)) #this show 10 seconds
[/code]
Is it possible?
and more, can threads/multi-processors/clusters be used to improve fib?
thanx