Robin Becker wrote:
Robin Becker wrote:
Paul Rubin wrote:
This module might be of interest: http://poshmodule.sf.net
It seems it might be a bit out of date. I've emailed the author via sf, but no reply. Does anyone know if poshmodule works with latest stuff?
from the horse's mouth
Paul Rubin http wrote:
Jorgen Grahn [EMAIL PROTECTED] writes:
I feel the recent SMP hype (in general, and in Python) is a red herring. Why
do I need that extra performance? What application would use it?
How many MHz does the computer you're using right now have? When did
you buy it?
On 06 Sep 2005 14:08:03 -0700, Paul Rubin http wrote:
Jorgen Grahn [EMAIL PROTECTED] writes:
I feel the recent SMP hype (in general, and in Python) is a red herring. Why
do I need that extra performance? What application would use it?
How many MHz does the computer you're using right now
Paul Rubin wrote:
Jeremy Jones [EMAIL PROTECTED] writes:
to pass data around between processes. Or an idea I've been tinkering
with lately is to use a BSD DB between processes as a queue just like
Queue.Queue in the standard library does between threads. Or you
could use Pyro between
Jorgen Grahn wrote:
On Tue, 06 Sep 2005 08:57:14 +0100, Michael Sparks [EMAIL PROTECTED]
wrote: ...
Are you so sure? I suspect this is due to you being used to writing code
that is designed for a single CPU system. What if your basic model of
system creation changed to include system
Thomas Bellman wrote:
Michael Sparks [EMAIL PROTECTED] writes:
Similarly, from
a unix command line perspective, the following will automatically take
advantage of all the CPU's I have available:
(find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
No, it won't. At the
Steve Jorgensen [EMAIL PROTECTED] writes:
In this case, it would just be keeping a list of dirty hash tables, and
having a process that pulls the next one from the queue, and cleans it.
If typical Python programs spend enough time updating hash tables
for a hack like this to be of any
On 05 Sep 2005 23:31:13 -0700, Paul Rubin http://[EMAIL PROTECTED]
wrote:
Steve Jorgensen [EMAIL PROTECTED] writes:
In this case, it would just be keeping a list of dirty hash tables, and
having a process that pulls the next one from the queue, and cleans it.
If typical Python programs spend
Steve Jorgensen [EMAIL PROTECTED] writes:
Given that Python is highly dependent upon dictionaries, I would
think a lot of the processor time used by a Python app is spent in
updating hash tables. That guess could be right or wrong, but
assuming it's right, is that a design flaw? That's just
Jeremy Jones wrote:
Michael Sparks wrote:
Steve Jorgensen wrote:
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
I'm hoping
Michael Sparks wrote:
Jeremy Jones wrote:
snip
And maybe
Steve's magical thinking programming language will have a ton of merit.
I see no reason to use such derisory tones, though I'm sure you didn't mean
it that way. (I can see you mean it as extreme skepticism though :-)
On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks [EMAIL PROTECTED] wrote:
Steve Jorgensen wrote:
...
I don't get that. Python was never designed to be a high performance
language, so why add complexity to its implementation by giving it
high-performance capabilities like SMP?
It depends
On Tue, 06 Sep 2005 08:57:14 +0100, Michael Sparks [EMAIL PROTECTED] wrote:
...
Are you so sure? I suspect this is due to you being used to writing code
that is designed for a single CPU system. What if your basic model of
system creation changed to include system composition as well as
Michael Sparks [EMAIL PROTECTED] writes:
Similarly, from
a unix command line perspective, the following will automatically take
advantage of all the CPU's I have available:
(find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
No, it won't. At the most, it will use four
Jorgen Grahn [EMAIL PROTECTED] writes:
I feel the recent SMP hype (in general, and in Python) is a red herring. Why
do I need that extra performance? What application would use it?
How many MHz does the computer you're using right now have? When did
you buy it? Did you buy it to replace a
Thomas Bellman [EMAIL PROTECTED] writes:
And I'm fairly certain that 'sort' won't start spending CPU time
until it has collected all its input, so you won't gain much
there either.
For large input, sort uses the obvious in-memory-sort plus external-merge
algorithm, so it starts using CPU once
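The external-merge idea Paul is describing can be sketched in a few lines: sort fixed-size runs as the input arrives, then do a single k-way merge of the sorted runs, so CPU work starts before all input has been read. The chunk size of 4 is illustrative only; a real `sort` uses runs sized to available memory and spills them to temporary files.

```python
import heapq

def external_sort(stream, chunk_size=4):
    runs = []
    chunk = []
    for item in stream:
        chunk.append(item)
        if len(chunk) == chunk_size:
            runs.append(sorted(chunk))   # in-memory sort of one run
            chunk = []
    if chunk:
        runs.append(sorted(chunk))       # final, possibly short, run
    return list(heapq.merge(*runs))      # k-way merge of the sorted runs

print(external_sort([5, 3, 8, 1, 9, 2, 7, 4, 6]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
```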
On Tue, 6 Sep 2005 19:35:49 + (UTC), Thomas Bellman [EMAIL PROTECTED]
wrote:
Michael Sparks [EMAIL PROTECTED] writes:
Similarly, from
a unix command line perspective, the following will automatically take
advantage of all the CPU's I have available:
(find |while read i; do md5sum
[EMAIL PROTECTED] (Bengt Richter) writes:
And I'm fairly certain that 'sort' won't start spending CPU time
until it has collected all its input, so you won't gain much
there either.
Why wouldn't a large sequence sort be internally broken down into parallel
sub-sequence sorts and merges that
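Bengt's suggestion — parallel sub-sequence sorts followed by a merge — can be sketched directly with a process pool plus a k-way merge. This uses today's `multiprocessing` module, which did not exist when the thread was written; the four-way split is an arbitrary choice for illustration.

```python
import heapq
from multiprocessing import Pool

def parallel_sort(data, parts=4):
    # Split the sequence into roughly equal sub-sequences.
    step = max(1, len(data) // parts)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool() as pool:
        sorted_chunks = pool.map(sorted, chunks)   # sub-sorts run in parallel
    return list(heapq.merge(*sorted_chunks))       # single merge at the end

if __name__ == "__main__":
    print(parallel_sort([9, 1, 8, 2, 7, 3, 6, 4, 5]))
```

The merge step is sequential, so (as Amdahl's law predicts) the speedup is bounded, but the O(n log n) sub-sorts dominate for large n.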
Jorgen Grahn [EMAIL PROTECTED] writes:
But it's interesting that the Unix pipeline Just Works (TM) with so little
effort.
Yes it is. That's a result of two things:
1) The people who invented pipes were *very* smart (but not smart
enough to invent stderr at the same time :-).
2) Pipes use a
Jeremy Jones [EMAIL PROTECTED] writes:
1) find a good clean way to utilize muti-CPU machines and
I like SCOOP. But I'm still looking for alternatives.
2) come up with a simple, consistent, Pythonic concurrency paradigm.
That's the hard part. SCOOP attaches attributes to *variables*. It
also
Scott David Daniels [EMAIL PROTECTED] wrote:
Nick Craig-Wood wrote:
Splitting the GIL introduces performance and memory penalties
However it's crystal clear now the future is SMP. Modern chips seem to
have hit the GHz barrier, and now the easy meat for the processor
designers is to
Nick Craig-Wood [EMAIL PROTECTED] writes:
of is decrementing a reference count. Only one thread can be allowed to
DECREF at any given time for fear of leaking memory, even though it will
most often turn out the objects being DECREF'ed by distinct threads are
themselves distinct.
Paul Rubin http://phr.cx@NOSPAM.invalid wrote in message
news:[EMAIL PROTECTED]
Along with fixing the GIL, I think PyPy needs to give up on this
BASIC-style reference counting and introduce real garbage collection.
Lots of work has been done on concurrent GC and the techniques for it
are
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
I'm hoping python won't always be like this.
I don't get that. Python was
Steve Jorgensen wrote:
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
I'm hoping python won't always be like this.
I
On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks [EMAIL PROTECTED] wrote:
Steve Jorgensen wrote:
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL
Terry Reedy wrote:
Paul Rubin http://phr.cx@NOSPAM.invalid wrote in message
news:[EMAIL PROTECTED]
Along with fixing the GIL, I think PyPy needs to give up on this
BASIC-style reference counting and introduce real garbage collection.
Lots of work has been done on concurrent GC and the
On 2005-09-05, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
I'm hoping python won't always be like this.
Quite a few people are. :)
So, I believe
Steve Jorgensen wrote:
On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks [EMAIL PROTECTED] wrote:
Steve Jorgensen wrote:
On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only
On Mon, 05 Sep 2005 23:42:38 -0400, Jeremy Jones [EMAIL PROTECTED]
wrote:
Steve Jorgensen wrote:
...
That argument makes some sense, but I'm still not sure I agree. Rather than
make Python programmers have to deal with concurrency issues in every app to
get it to make good use of the hardware
Jeremy Jones [EMAIL PROTECTED] wrote:
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
I'm hoping python won't always be like this.
If you look at another well known open source program (the Linux
kernel) you'll see the progression I'm
[Jeremy Jones]
One Python process will only saturate one CPU (at a time) because
of the GIL (global interpreter lock).
[Nick Craig-Wood]
I'm hoping python won't always be like this.
Me too.
However it's crystal clear now the future is SMP.
Definitely.
So, I believe Python has got to
Nick Craig-Wood wrote:
Splitting the GIL introduces performance and memory penalties
However it's crystal clear now the future is SMP. Modern chips seem to
have hit the GHz barrier, and now the easy meat for the processor
designers is to multiply silicon and make multiple thread / core
Greetings, all.
I have a program I'm trying to speed up by putting it on a new machine.
The new machine is a Compaq W6000 2.0 GHz workstation with dual XEON
processors.
I've gained about 7x speed over my old machine, which was a 300 MHz AMD
K6II, but I think there ought to be an even greater speed
John Brawley [EMAIL PROTECTED] writes:
However, the thought occurs that Python (2.4.1) may not have the ability to
take advantage of the dual processors, so my question:
Does it?
No.
If not, who knows where there might be info from people trying to make
Python run 64-bit, on multiple
John Brawley wrote:
Greetings, all.
I have a program I'm trying to speed up by putting it on a new machine.
The new machine is a Compaq W6000 2.0 GHz workstation with dual XEON
processors.
I've gained about 7x speed over my old machine, which was a 300 MHz AMD
K6II, but I think there ought to be
Jeremy Jones [EMAIL PROTECTED] writes:
to pass data around between processes. Or an idea I've been tinkering
with lately is to use a BSD DB between processes as a queue just like
Queue.Queue in the standard library does between threads. Or you
could use Pyro between processes. Or CORBA.
I
Paul Rubin wrote:
Jeremy Jones [EMAIL PROTECTED] writes:
to pass data around between processes. Or an idea I've been tinkering
with lately is to use a BSD DB between processes as a queue just like
Queue.Queue in the standard library does between threads. Or you
could use Pyro between
John Brawley [EMAIL PROTECTED] wrote:
Greetings, all. I have a program I'm trying to speed up by putting it
on a new machine. The new machine is a Compaq W6000 2.0 GHz
workstation with dual XEON processors. I've gained about 7x speed
over my old machine, which was a 300 MHz AMD K6II, but I
39 matches