TheFlyingDutchman <[EMAIL PROTECTED]> writes:
> The very fastest Intel processor of the late 1990's that I found came
> out in October 1999 and had a speed around 783 MHz. Current fastest
> processors are something like 3.74 GHz, with larger caches. Memory is
> also faster and larger. It appears that someone running a non-GIL
> implementation of CPython today would have significantly faster
> performance than a GIL CPython implementation of the late 1990's.
> Correct me if I am wrong, but it seems that saying non-GIL CPython is
> too slow, while once valid, has become invalid due to the increase in
> computing power that has taken place.

This reasoning is invalid.  For one thing, disk and memory sizes and
network bandwidth have increased by a much larger factor than CPU
speed since the late 1990's.  A big disk drive in 1999 was maybe
20 GB; today it's 750 GB, almost 40x larger, far outstripping the
roughly 5x increase in CPU clock speed.  A fast business network
connection was a 1.4 Mbit/sec T-1 line; today it's often 100 Mbit or
more, again far outstripping CPU speed.  If Python was just fast
enough to firewall your T1 net connection or index your 20 GB hard
drive in 1999, it's way too slow to do the same with today's net
connections and hard drives, just because of that change in the
hardware landscape.  We have just about stopped seeing increases in
CPU clock speed: that 3.74 GHz figure was probably reached a couple of
years ago.  We get CPU speed increases now through parallelism, not
clock rate.  Intel and AMD both have 4-core CPUs now, and Intel has a
16-core chip coming.  Python is at a serious disadvantage compared
with other languages if the other languages keep up with these
developments and Python does not.
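A quick back-of-envelope check of those growth factors, using only the
figures quoted above (the 1999 and present-day numbers are the post's
own; nothing else is assumed):

```python
# Growth factors from the figures in the post: 1999 vs. today.
disk_1999_gb, disk_now_gb = 20, 750
cpu_1999_mhz, cpu_now_mhz = 783, 3740
net_1999_mbit, net_now_mbit = 1.4, 100

disk_growth = disk_now_gb / disk_1999_gb    # ~37.5x
cpu_growth = cpu_now_mhz / cpu_1999_mhz     # ~4.8x
net_growth = net_now_mbit / net_1999_mbit   # ~71x

# Data sizes and link speeds outgrew single-core CPU speed by roughly
# an order of magnitude, so per-byte work that was "fast enough" in
# 1999 takes far longer in wall-clock terms on today's workloads.
print(disk_growth / cpu_growth)
```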

Also, Python in the late 90's was pitched as a "scripting language",
intended for small throwaway tasks, while today it's used for complex
applications, and the language has evolved accordingly.  CPython is
way behind the times, not only because of the GIL, but because of its
slow bytecode interpreter, its non-compacting GC, and so on.  The
platitude that
performance doesn't matter, that programmer time is more valuable than
machine time, etc. is at best an excuse for laziness.  And more and
more often, in the application areas where Python is deployed, it's
just plain wrong.  Take web servers: a big site like Google has
something like a half million of them.  Even the comparatively wimpy
site where I work has a couple thousand.  If each server uses 150
watts of power (plus air conditioning), then if making the software 2x
faster lets us shut down 1000 of them, the savings in electricity
bills alone are larger than my salary.  Of course that doesn't include
environmental benefits, hardware and hosting costs, the costs and
headaches of administering that many boxes, etc.  For a lot of Python
users, significant speedups are a huge win.
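The electricity arithmetic above can be sketched as follows.  The
server count and per-server wattage are the post's figures; the
electricity price of $0.10/kWh is an assumed round number for
illustration, not a quoted cost:

```python
# Annual electricity savings from shutting down servers after a 2x
# software speedup.  Server count and wattage come from the post;
# the price per kWh is an assumed illustrative figure.
servers_shut_down = 1000
watts_per_server = 150          # excludes air conditioning
hours_per_year = 24 * 365
price_per_kwh = 0.10            # assumed, USD

kwh_saved = servers_shut_down * watts_per_server * hours_per_year / 1000
annual_savings = kwh_saved * price_per_kwh
print(round(annual_savings))    # on the order of $130,000/year
```

Even with these rough assumptions, the savings land in salary
territory, which is the post's point: machine time is not free at
scale.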

However, I don't think fixing CPython (through GIL removal or anything
else) is the answer, and Jython isn't the answer either.  Python's
future is in PyPy, or should be.  Why would a self-respecting Python
implementation be written in (yikes) C or (yucch) Java, if it can be
written in Python?  So I hope that PyPy's future directions include
true parallelism.
-- 
http://mail.python.org/mailman/listinfo/python-list