Leo --

You need to account for the possibility that the number of ops in an
oplib could change over its versions, so depending on it being stable is
a losing proposition. That is exactly what is going on in the scheme
below, where you append oplibs in their entirety to the code segment's
optable.

I think this needs to be done at the op level, not at the oplib level
(as I've detailed before). I believe op{info,func} lookup by name is
fast enough that this can be done as a preamble without too much
trouble.
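
A rough sketch of the kind of preamble I have in mind -- all the names
here (find_op_by_fullname(), the seg_t layout) are made up for
illustration, not existing API:

    typedef void * (*op_func_t)(void *pc, void *interp);

    typedef struct {
        const char *full_name;   /* e.g. "add_i_i_i" */
        op_func_t   func;
    } op_info_t;

    typedef struct {
        int          n_ops_used;
        const char **op_names;       /* one full name per segment-local
                                        opcode number */
        op_func_t   *op_func_table;  /* pre-allocated; filled in below */
    } seg_t;

    /* hypothetical: search all loaded oplibs for an op by full name */
    extern op_info_t *find_op_by_fullname(const char *name);

    /* Run once when a byte-code segment is loaded: resolve every op
     * the segment uses by name, so nothing depends on an oplib keeping
     * its internal numbering stable across versions. */
    static int
    build_segment_optable(seg_t *seg)
    {
        int i;

        for (i = 0; i < seg->n_ops_used; i++) {
            op_info_t *info = find_op_by_fullname(seg->op_names[i]);

            if (!info)
                return 0;   /* unknown op -- refuse to run the segment */
            seg->op_func_table[i] = info->func;
        }
        return 1;
    }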

Generally, I'm in favor of a greater number of smaller oplibs, with the
option for static or dynamic linking.

The main objection I know of is the implications for the fast runops
cores. They tend to dislike dynamic optables. Although I haven't had the
time to do the JIT tinkering to prove it, I think the JIT machinery
might be sufficient to build the equivalent of a computed goto core for
a dynamic optable on the fly. IMO, solving that opens the door to very
dynamic stuff. However, it might still be a non-starter if we need
cgoto-like cores on architectures for which we are not going to have
JIT. I don't know what the long-term policy on non-JIT architectures is
(are we going to have them? does the cgoto core have to work on them?).
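
To make the cgoto objection concrete: a computed goto core dispatches
through an array of label addresses that is fixed when the core is
compiled, so ops that only show up at load time have no label to jump
through. A minimal sketch (hypothetical three-op set, using GCC's
labels-as-values extension):

    #include <stdio.h>

    typedef long opcode_t;

    /* Tiny cgoto core over a fixed op set: 0 = end, 1 = noop,
     * 2 = print. The label table is built at compile time, which is
     * exactly why a dynamically extended optable doesn't fit this
     * dispatch style. */
    static void
    cg_core(opcode_t *pc)
    {
        static void *labels[] = { &&op_end, &&op_noop, &&op_print };

        goto *labels[*pc];

    op_noop:
        pc += 1;
        goto *labels[*pc];

    op_print:
        printf("print: %ld\n", (long)pc[1]);
        pc += 2;
        goto *labels[*pc];

    op_end:
        return;
    }

    int
    main(void)
    {
        opcode_t prog[] = { 1, 2, 42, 0 };  /* noop, print 42, end */
        cg_core(prog);
        return 0;
    }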


Regards,

-- Gregor





Leopold Toetsch <[EMAIL PROTECTED]>
02/05/2003 06:28 AM

 
        To:     P6I <[EMAIL PROTECTED]>
        cc: 
        Subject:        [RfC] a scheme for core.ops extending


The interpreter depends on the various ops files for actually executing 
byte code.
Currently all ops files except obscure.ops get logically appended and 
built into core_ops_*.c. This makes for huge core_ops files, with a lot 
of unused operations, bad cache locality, and so on. It will get worse
as different HLs need more special ops, like the dotgnu.ops we already
have now.

So here is a scheme that allows for a small core.ops, easy extension,
and no execution speed penalty for non-core ops.

1) core.ops is really core (possibly minus the trigonometric ops, which
could go into math.ops)

2) There is one additional opcode

   oplib "opsname" # e.g. opsfile "math"

3) Additional opsfiles are either compiled statically into the
interpreter or loaded dynamically via Parrot_{dlopen, dlsym}

4) The B<oplib> opcodes above have to appear in the assembler (i.e.
imcc) in the same order as at runtime; simplest is to just put them at
the beginning.

5) When imcc encounters such a B<oplib>, imcc loads the op_info of the
opsfile and appends it to the core's op_info

6) imcc thus spits out consecutive opcode numbers, as if these opsfiles
were appended to core.ops in the order they are seen.

7) During runtime the op_func_tables get appended to core.ops'
op_func_table, and all loaded oplibs get a pointer to this common
op_func_table. So, due to 4), there is again an op_jump_table entry for
each opcode number. (A rough sketch of this step follows below.)
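
Here is a minimal sketch of how the runtime appending in 7) could look
-- the structures and names are illustrative, not the actual Parrot
ones, and error checking is omitted for brevity:

    #include <stdlib.h>
    #include <string.h>

    typedef void * (*op_func_t)(void *pc, void *interp);

    typedef struct {
        int        op_count;
        op_func_t *op_func_table;
    } oplib_t;

    static op_func_t *common_table;  /* starts as a copy of core.ops'
                                        op_func_table */
    static int        common_count;  /* ops in the common table so far */

    /* Append a just-loaded oplib's function table to the common table,
     * so its ops get the consecutive numbers imcc already assigned
     * in 6). */
    static void
    append_oplib(oplib_t *lib)
    {
        common_table = realloc(common_table,
                (common_count + lib->op_count) * sizeof (op_func_t));
        memcpy(common_table + common_count, lib->op_func_table,
                lib->op_count * sizeof (op_func_t));
        common_count += lib->op_count;

        /* per 7), the loaded lib dispatches through the common table */
        lib->op_func_table = common_table;
    }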

That's it, modulo some additions for prederef and JIT; compiled C would
need more work.

I wrote a test program showing what's going on, using the CGoto core:

core.ops  (5 ops): end, noop, oplib, "op 3", "op 4"
math.ops  (2 ops): add5, sub5 (internally 0 and 1, real numbers 5 and 6)
math2.ops (2 ops): mul5, div5 (=> 7 and 8)

Here is a test run of:
opcode_t prog3[] = {3,20, 4,21, 2,0, 2,1, 5,10, 7,10, 3,30, 8,20, 0};

op 3: 20
op 4: 21
oplib: math
*** found static lib 'math'
oplib: math2
*** found dynamic lib 'math2'
add5 5+10=15
mul5 5*10=50
op 3: 30
div5 20/5=4
end

The "2, 0" and "2, 1" ops are oplib "math" (statically linked) and oplib 
"math2" (dynamically linked), the 0/1 operands point into the 
constant_table.
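
Spelled out, with comments derived from the run above, prog3 decodes as:

opcode_t prog3[] = {
    3, 20,   /* op 3: 20            (core.ops)            */
    4, 21,   /* op 4: 21            (core.ops)            */
    2,  0,   /* oplib "math"        (constant 0, static)  */
    2,  1,   /* oplib "math2"       (constant 1, dynamic) */
    5, 10,   /* add5: 5 + 10 = 15   (math.ops)            */
    7, 10,   /* mul5: 5 * 10 = 50   (math2.ops)           */
    3, 30,   /* op 3: 30            (core.ops)            */
    8, 20,   /* div5: 20 / 5 = 4    (math2.ops)           */
    0        /* end                                       */
};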

Attached is the test program; systems without dlopen/dlsym would need
some work, as in platform.c.

Have fun,
leo


Attachment: ext_oplib.tgz