On Sat, Nov 03, 2001 at 03:53:10PM -0500, James Mastros wrote:
>   Here's what I'm thinking: The high 16 bits (or so) of the opcode can identify
> the oplib (as an index to the table of oplibs, see the next para), the rest
> can identify the opcode.  This shouldn't be too bad a limit: 64k opcodes
> per oplib, 64k oplibs per packfile.
Unfortunately, this seems to be a bit on the slow side:

-g:
lilith:/usr/src/parrot-multioplib$ test_prog examples/assembly/mops.pbc
Iterations:    100000000
Estimated ops: 300000000
Elapsed time:  30.220650
M op/s:        9.926987

lilith:/usr/src/parrot$ test_prog examples/assembly/mops.pbc
Iterations:    100000000
Estimated ops: 300000000
Elapsed time:  27.459654
M op/s:        10.925119

-O3:
lilith:/usr/src/parrot-multioplib$ test_prog examples/assembly/mops.pbc
Iterations:    100000000
Estimated ops: 300000000
Elapsed time:  16.901975
M op/s:        17.749405

lilith:/usr/src/parrot$ test_prog examples/assembly/mops.pbc
Iterations:    100000000
Estimated ops: 300000000
Elapsed time:  14.848314
M op/s:        20.204314

Right now, this is my DO_OPS:
pc = interpreter->opcode_funcs[*pc >> 24][*pc & 0xFFF](pc, interpreter);
I tried a 16/16 split as well, but core.ops is too big (293 ops vs. 255 max).
I also tried a 17/15 split, which had about the same speed.
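
(For reference, here's a minimal sketch of the word packing that the DO_OPS
line above implies; the shift and mask just mirror that line, and opcode_t,
MAKE_OP and the little main are only for illustration, not anything in the
tree.)

    /* Sketch only -- field layout mirrors the *pc >> 24 / *pc & 0xFFF above. */
    #include <stdio.h>

    typedef unsigned int opcode_t;        /* assuming a 32-bit opcode word */

    #define MAKE_OP(lib, op) (((opcode_t)(lib) << 24) | ((opcode_t)(op) & 0xFFF))
    #define OP_LIB(word)     ((word) >> 24)       /* which oplib            */
    #define OP_NUM(word)     ((word) & 0xFFF)     /* op within that oplib   */

    int main(void) {
        opcode_t word = MAKE_OP(1, 42);   /* op 42 in oplib 1 */
        printf("lib=%u op=%u\n", OP_LIB(word), OP_NUM(word));
        return 0;
    }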

Ideas?

Right now, I'm thinking that the opcodes are going to be assigned to oplibs
linearly (e.g. the last opcode in core.ops is 292, so the first opcode in the
next oplib loaded will be 293).  However, this means that oplibs can't
expand portably.  It also just plain feels ugly.
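
To make that concrete, here's a rough sketch of what linear assignment looks
like on the dispatch side.  The names (oplib_info, find_oplib, DO_OP_LINEAR,
the function tables) and the second oplib's op count are made up for
illustration; only the 293-op figure for core.ops comes from above.

    typedef unsigned int opcode_t;
    typedef opcode_t *(*op_func_t)(opcode_t *pc, void *interp);

    /* One record per loaded oplib: the first global opcode number it owns,
       how many ops it has, and its function table. */
    typedef struct {
        unsigned int base;    /* first global opcode in this lib       */
        unsigned int count;   /* number of ops in this lib             */
        op_func_t   *funcs;   /* indexed by (global opcode - base)     */
    } oplib_info;

    extern op_func_t core_ops_funcs[];    /* hypothetical tables */
    extern op_func_t other_ops_funcs[];

    /* core.ops owns 0..292, so the next oplib loaded starts at 293. */
    static oplib_info oplibs[] = {
        {   0, 293, core_ops_funcs  },
        { 293,  10, other_ops_funcs },
    };

    static oplib_info *find_oplib(unsigned int op) {
        unsigned int i;
        for (i = 0; i < sizeof oplibs / sizeof oplibs[0]; i++)
            if (op >= oplibs[i].base && op < oplibs[i].base + oplibs[i].count)
                return &oplibs[i];
        return 0;
    }

    /* Dispatch: subtract the owning lib's base to get its local index. */
    #define DO_OP_LINEAR(pc, interp) \
        do { \
            oplib_info *lib = find_oplib(*(pc)); \
            (pc) = lib->funcs[*(pc) - lib->base]((pc), (interp)); \
        } while (0)

The non-portability shows up right in that table: if core.ops ever grows past
293 ops, every base after it shifts, and bytecode assembled against the old
numbering stops meaning what it used to.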

      -=- James Mastros
