sturlamolden sturlamol...@yahoo.no wrote:
On Feb 10, 8:38 am, Paul McGuire pt...@austin.rr.com wrote:
Even worse than linear, the function is recursive, which as I
understand it, is inherently a no-no when looking for code that is
parallel-friendly.
There is no way to parallelize
In article hfwdntzvr8unnwlunz2dnuvz_uown...@posted.visi,
Grant Edwards inva...@invalid wrote:
On 2009-02-20, Aahz a...@pythoncraft.com wrote:
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
As I understand it, there's very little benefit to multi-cores in
Python due to the GIL.
In article pan.2009.02.10.21.58...@remove.this.cybersource.com.au,
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
As I understand it, there's very little benefit to multi-cores in Python
due to the GIL.
As phrased, your statement is completely wrong. Here's a more correct
On 2009-02-20, Aahz a...@pythoncraft.com wrote:
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
As I understand it, there's very little benefit to multi-cores
in Python due to the GIL.
As phrased, your statement is completely wrong. Here's a more
correct phrasing: For
On 2009-02-20 13:20, Unknown wrote:
On 2009-02-20, Aahz a...@pythoncraft.com wrote:
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
As I understand it, there's very little benefit to multi-cores
in Python due to the GIL.
As phrased, your statement is completely wrong. Here's a
On 2009-02-20, Robert Kern robert.k...@gmail.com wrote:
Do the crunchy bits of scipy/numpy, scientific python, vtk and
other compute-intensive libraries tend to release the GIL
while they're busy computing?
Often. Not as often as they could, sometimes.
On one hand, the upshot of that is
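Grant's point cuts both ways: if the hot loop stays in pure Python, threads buy nothing, because CPython executes bytecode from only one thread at a time. A minimal sketch (illustrative only, not from the thread):

```python
import threading

# A deliberately CPU-bound, pure-Python task. Under CPython's GIL two
# of these in threads run correctly but not in parallel; a C extension
# (NumPy etc.) can release the GIL around its inner loop, which is why
# pushing the hot path into a compiled library can help.
def busy_sum(n, out, idx):
    total = 0
    for i in range(n):
        total += i
    out[idx] = total

results = [None, None]
threads = [threading.Thread(target=busy_sum, args=(100000, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # both correct: [4999950000, 4999950000]
```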
On Fri, Feb 20, 2009 at 4:27 PM, Grant Edwards inva...@invalid wrote:
On one hand, the upshot of that is that by finding an
appropriate library module you might gain some of the same
benefits as removing the GIL.
On the other hand, that doesn't help if you're doing something
original enough
On Wed, 18 Feb 2009 23:36:25 -0200, sturlamolden sturlamol...@yahoo.no
wrote:
On Feb 10, 7:28 am, oyster lepto.pyt...@gmail.com wrote:
Let's rewrite that slightly and see...
def fib(n, _map=None):
    if not _map: _map = map
    if n >= 2:
        return sum(_map(fib, (n-1, n-2)))
    else:
        return 1
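The `_map` rewrite above can be fed any map-alike; a thread-backed pool from `multiprocessing.dummy` has the right call signature. Note that only the two top-level subcalls get farmed out (the recursive calls fall back to the builtin `map`), and under the GIL even those two don't truly run in parallel -- which is rather the point of the thread. A sketch:

```python
from multiprocessing.dummy import Pool  # thread-backed, same API as Pool

def fib(n, _map=None):
    if not _map:
        _map = map
    if n >= 2:
        # only this call uses the supplied map; the recursion below it
        # uses the builtin, so at most two workers ever help
        return sum(_map(fib, (n - 1, n - 2)))
    return 1

pool = Pool(2)
print(fib(25, pool.map))  # 121393, same answer as the serial version
```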
Gabriel Genellina gagsl-...@yahoo.com.ar writes:
Even my Pentium I MMX 233MHz can compute fib(36) thousands of times faster
than that with the right algorithm. So I don't see the point in
parallelizing if you're going to get infinitely worse results...
The point is to test the parallelization
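Gabriel doesn't say which algorithm he means; fast doubling is one candidate that makes fib(36) -- or fib(500) -- essentially instant even on a Pentium. A sketch, using the standard fib(0) = 0 convention (elsewhere in the thread it's noted that the OP's version is offset by one):

```python
# Fast doubling: O(log n) arithmetic steps via the identities
#   fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
#   fib(2k+1) = fib(k)**2 + fib(k+1)**2
def fib_pair(n):
    """Return (fib(n), fib(n+1))."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)       # fib(2k)
    d = a * a + b * b         # fib(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_pair(n)[0]

print(fib(36))  # 14930352
```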
In article pan.2009.02.18.07.24...@remove.this.cybersource.com.au,
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
.
.
.
And now for my version (which admittedly isn't really mine, and returns
slightly incorrect
On Feb 10, 8:38 am, Paul McGuire pt...@austin.rr.com wrote:
Even worse than linear, the function is recursive, which as I
understand it, is inherently a no-no when looking for code that is
parallel-friendly.
There is no way to parallelize Fibonacci numbers computed with linear
time
On Feb 10, 7:28 am, oyster lepto.pyt...@gmail.com wrote:
I mean this
[code]
def fib(n):
    if n <= 1:
        return 1
    return fib(n-1)+fib(n-2)
Let's rewrite that slightly and see...
def fib(n, _map=None):
    if not _map: _map = map
    if n >= 2:
        return sum(_map(fib, (n-1, n-2)))
In article pan.2009.02.10.22.26...@remove.this.cybersource.com.au,
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
.
.
.
And now for my version (which admittedly isn't really mine, and returns
slightly incorrect
On Tue, 17 Feb 2009 14:30:27 +, Cameron Laird wrote:
In article pan.2009.02.10.22.26...@remove.this.cybersource.com.au,
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:
.
.
.
And now for my version (which
Once I have seen Haskell code, that ran fibonacci on a 4 core system.
The algorithm itself basically used an extra granularity parameter until
which new threads will be sparked. e.g. fib(40, 36) means, calculate
fib(40) and spark threads until n=36.
1. divide: fib(n-1), fib(n-2)
2. divide:
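The Haskell scheme sketched above translates to Python structurally: divide until n hits the granularity threshold, then hand the leaves to a worker pool. In CPython a thread pool will not actually use the extra cores for this pure-Python work (the GIL again); process-based workers would, at the cost of pickling overhead. The divide happens entirely in the caller so no worker ever blocks on another:

```python
from concurrent.futures import ThreadPoolExecutor

def fib_seq(n):
    # plain iterative fib, OP's convention: fib(0) = fib(1) = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def spark(n, threshold, pool):
    # naive divide, cut off at the threshold; only sequential leaf
    # jobs are submitted, and fib(n) is the sum of the leaf values
    if n <= threshold:
        return [pool.submit(fib_seq, n)]
    return spark(n - 1, threshold, pool) + spark(n - 2, threshold, pool)

with ThreadPoolExecutor(max_workers=4) as pool:
    # "fib(20, 16)": calculate fib(20), spark jobs until n = 16
    total = sum(f.result() for f in spark(20, 16, pool))
print(total)  # 10946
```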
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
Once I have seen Haskell code, that ran fibonacci on a 4 core system.
The algorithm itself basically used an extra granularity parameter until
which new threads will be sparked. e.g. fib(40, 36) means, calculate
fib(40) and spark threads
On Tue, 10 Feb 2009 06:45:37 -0200, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
We tried this code on a 4 core machine using only 1 core and all 4
cores. 1 core wallclock: ~10s
4 core wallclock: ~3s
Three seconds to
On Tue, Feb 10, 2009 at 12:45 AM, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
Once I have seen Haskell code, that ran fibonacci on a 4 core system.
The algorithm itself basically used an extra granularity parameter until
On 10 Feb, 09:45, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
On Tue, 10 Feb 2009 18:18:23 +1000, Gerhard Weis wrote:
Once I have seen Haskell code, that ran fibonacci on a 4 core system.
The algorithm itself basically used an extra granularity parameter until
which new
On Tue, 10 Feb 2009 14:28:15 +0800, oyster wrote:
I mean this
[code]
def fib(n):
    if n <= 1:
        return 1
    return fib(n-1)+fib(n-2)
useCore(1)
timeit(fib(500)) # this shows 20 seconds
useCore(2)
timeit(fib(500)) # this shows 10 seconds [/code]
Is it possible?
and more, can
def fib(n, a=1, b=1):
    if n == 0:
        return a
    elif n == 1:
        return b
    return fib(n-1, b, a+b)
And for the record:
fib(500)
225591516161936330872512695036072072046011324913758190588638866418474627738686883405015987052796968498626L
timeit.Timer('fib(500)',
'from __main__
On Tue, 10 Feb 2009 12:43:20 +, Lie Ryan wrote:
Of course multi-core processor can improve single function using
multiprocessing, as long as the function is parallelizable. The
Fibonacci function is not a parallelizable function though.
As I understand it, there's very little benefit to
On Tue, 10 Feb 2009 22:41:25 +1000, Gerhard Weis wrote:
btw. the timings are not that different for the naive recursion in OP's
version and yours.
fib(500) on my machine:
OP's: 0.00116 (far away from millions of years)
This here: 0.000583
I don't believe those timings are credible.
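One way millisecond timings for a naive-looking recursive fib(500) become plausible is memoization: caching every result turns the exponential recursion into a linear-time one. Whether that is what Gerhard's test did is not stated in the thread; this is just a sketch (OP's convention, fib(0) = fib(1) = 1):

```python
_memo = {0: 1, 1: 1}

def fib(n):
    # each value is computed once and cached, so fib(500) costs about
    # 500 additions instead of an astronomically large call tree
    if n not in _memo:
        _memo[n] = fib(n - 1) + fib(n - 2)
    return _memo[n]

print(fib(500))  # instant, not "millions of years"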
On Tue, 10 Feb 2009 02:05:35 -0800, Niklas Norrthon wrote:
According to the common definition of fibonacci numbers fib(0) = 0,
fib(1) = 1 and fib(n) = fib(n-1) + fib(n-2) for n > 1. So the number above
is fib(501).
So it is. Oops, off by one error!
Or, if you prefer, it's the right algorithm
On 2009-02-11 08:01:29 +1000, Steven D'Aprano
ste...@remove.this.cybersource.com.au said:
On Tue, 10 Feb 2009 22:41:25 +1000, Gerhard Weis wrote:
btw. the timings are not that different for the naive recursion in OP's
version and yours.
fib(500) on my machine:
OP's: 0.00116 (far
Hi, guys, my fib(xx) is just an example to show what a single
function is and what effect I expect to see when multi-core is
enabled.
My real purpose is to know whether multi-core can help to improve the
speed of a common function. But I know definitely that the answer is
NO.
--
On Wed, Feb 11, 2009 at 2:21 PM, oyster lepto.pyt...@gmail.com wrote:
My real purpose is to know whether multi-core can help to improve the
speed of a common function. But I know definitely that the answer is
NO.
As stated by others, and even myself,
it is not possible to just automagically
On Tue, Feb 10, 2009 at 8:57 PM, James Mills
prolo...@shortcircuit.net.au wrote:
On Wed, Feb 11, 2009 at 2:21 PM, oyster lepto.pyt...@gmail.com wrote:
My real purpose is to know whether multi-core can help to improve the
speed of a common function. But I know definitely that the answer is
NO.
oyster lepto.pyt...@gmail.com writes:
Hi, guys, my fib(xx) is just an example to show what a single
function is and what effect I expect to see when multi-core is
enabled.
My real purpose is to know whether multi-core can help to improve
the speed of a common function. But I know
I mean this
[code]
def fib(n):
    if n <= 1:
        return 1
    return fib(n-1)+fib(n-2)
useCore(1)
timeit(fib(500)) # this shows 20 seconds
useCore(2)
timeit(fib(500)) # this shows 10 seconds
[/code]
Is it possible?
and more, can threads/multi-processors/clusters be used to improve fib?
thanx
--
On Mon, Feb 9, 2009 at 10:28 PM, oyster lepto.pyt...@gmail.com wrote:
I mean this
[code]
def fib(n):
    if n <= 1:
        return 1
    return fib(n-1)+fib(n-2)
useCore(1)
timeit(fib(500)) # this shows 20 seconds
useCore(2)
timeit(fib(500)) # this shows 10 seconds
[/code]
Is it possible?
On Tue, 10 Feb 2009 14:28:15 +0800, oyster wrote:
I mean this
[code]
def fib(n):
    if n <= 1:
        return 1
    return fib(n-1)+fib(n-2)
useCore(1)
timeit(fib(500)) # this shows 20 seconds
useCore(2)
timeit(fib(500)) # this shows 10 seconds [/code]
Is it possible?
What does
On Feb 10, 12:43 am, Chris Rebert c...@rebertia.com wrote:
Considering the GIL, I highly doubt it. Also, the algorithm is
fundamentally linear -- each result depends on previous results -- so
I don't think you can easily parallelize it (if at all). You'd
probably get greater speedup using