Walter:

>There's a lot of money and manpower behind Python. If this were true, why 
>hasn't this technology been done for Python?<

It has been done, more than once. One good JIT was Psyco, and more recently PyPy
is about to surpass Psyco in performance:
http://codespeak.net/pypy/dist/pypy/doc/

But LuaJIT is quite a bit better than Psyco; its author is very skilled (Psyco's 
author, Armin Rigo, is a skilled programmer too).

And more importantly, this whole discussion can't be reduced to just static vs. 
dynamic typing. Lua and Python are different languages: Python is probably more 
dynamic than Lua, and it is considerably more powerful. So creating a really 
efficient JIT for Python is much harder than doing it for Lua, and the two can't 
be compared directly.

The result is that currently PyPy is generally about 8-10 times slower than D 
code, while LuaJIT is about 2-3 times slower than well-compiled D, and 
JavaScript running on the V8 CrankShaft JIT is about 3-4 times slower than D. 
And Mozilla hopes to someday beat V8 through a local static type inferencer 
(probably to appear in Firefox 5? See the "(TypeInference)" entry here; it's 
still in development: http://arewefastyet.com/?a=b&view=regress ).


> Secondly, even Lua's proponents (like ilikecakes) says he uses Lua in hybrid
> configurations with C and C++, so there is clearly some sort of deficit with 
> Lua.

Lua is quite a simple language; there are some things you can't do well with 
it, so you need other languages. And even with a JIT, Lua is often not as fast 
as good C code. I have never said that Lua+JIT is better than D; I am not much 
interested in Lua.


> And, Python does not have the dynamic typing issue with floats that it does 
> with
> integers. Jitted floating point Python code *should* match performance with
> statically compiled floating point code.

Regarding strictly *dynamic typing*, Python floats are as dynamic as Python 
integers. You probably mean that Python floats are objects that wrap CPU 
doubles, while Python3 integers are multi-precision (Python2 integers are a 
hybrid of fixed and multi-precision).
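A minimal sketch of that difference in CPython (the variable names here are just for illustration):

```python
import sys

# A Python float wraps a fixed-size C double: finite range,
# arithmetic past that range overflows to IEEE infinity.
big_float = sys.float_info.max * 2
print(big_float)             # inf

# A Python 3 int is multi-precision: it simply grows as needed.
big_int = 2 ** 200
print(big_int.bit_length())  # 201
```

So both types are equally dynamically typed; only their internal representation differs.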

The fact that they are multi-precision is orthogonal to dynamic typing: you can 
use multi-precision integers in D too (BigInt). Multi-precision integers are 
not intrinsically dynamically typed.

Regarding multi-precision integers, the Common Lisp, OCaml and Factor languages 
show that tagged integers (which become multi-precision when the fixed-size 
stack-allocated ones overflow) are not as slow as you think. I have written 
small programs in Factor (a modern and quite refined stack-based language), and 
if you don't go past the precision of the tagged integers, they don't slow down 
the code much. Their slowdown is comparable to the difference in runtime 
performance of array-heavy D code compiled with and without the -release switch.
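CPython doesn't use tagged fixnums, but the programmer-visible semantics of that overflow-to-bignum scheme are the same, so a sketch of the behavior (not the tagged representation itself):

```python
import sys

# sys.maxsize is the largest machine-word-sized index value.
# Going one past it doesn't wrap around: the int transparently
# promotes to a wider representation, like a fixnum overflowing
# into a bignum in Lisp, OCaml or Factor.
n = sys.maxsize
print(n + 1 > n)  # True
```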

Bye,
bearophile
