Nicholas Clark wrote:

On Tue, Feb 11, 2003 at 10:49:14AM +0100, Leopold Toetsch wrote:

I failed to follow most of the specific details, and all of the x86 specific
stuff.

Basically, all non-JITted opcodes, which are currently called as functions (Parrot_jit_normal_op), would instead be jumped to inside the cgp_core; from there, execution continues in JITted code again with another jump.


Does the idea actually work at all?

I think so.


And it goes faster than the prederef computed goto core?

Should be so, yes. When there are a lot of JITted ops (mainly integer and float ops, with values kept in registers), JIT is faster than natively -O3-compiled C. (I hope that the optimizer in imcc can turn a lot of plain Perl code into native ints/floats.)
When there are many non-JITted function calls (e.g. string_concat), the CGP core wins compared to JIT. So combining (again :) the two fastest concepts gives you the best of both.


So, in comparison, how fast is Python...

A little slower than perl5 (2.0 MOps):

$ python examples/mops/mops.py
M op/s: 1.70850480747

$ imcc -P examples/assembly/mops_p.pasm
M op/s: 40.195826

$ imcc -j examples/assembly/mops_p.pasm
M op/s: 68.099593

(I took the PMC-based mops for comparison; of course, this measures
opcode dispatch only - or almost only.)


parrot/imcc: -O3 compiled on i386/linux, Athlon 800.


Nicholas Clark
leo

