On 10/29/04 Leopold Toetsch wrote:
> >> Ugh, yeah, but what does that buy you?  In dynamic languages pure
> >> derivational typechecking is very close to useless.
> 
> > Actually, if I were to write a perl runtime for parrot, mono or
> > even the JVM I'd experiment with the same pattern.
> 
> For the latter two yes, but as Luke has outlined that doesn't really
> help for languages where methods are changing under the hood.

If a method changes, you just replace the pointer in the vtable
so it points to the new implementation. Invalidation works the
same way: you replace the slot with a stub that raises the
"method not found" error/exception.

> > You would assign small integer IDs to the names of the methods
> > and build a vtable indexed by the id.
> 
> Well, we already got a nice method cache, which makes lookup a
> vtable-like operation, i.e. an array lookup. But that's runtime only
> (and it needs invalidation still). So actually just the first method
> lookup is a hash operation.

And where is it cached and how? Take (sorry, still perl5 syntax:-):

        foreach $i (@list) {
                $i->method ();
        }

With the vtable idea, the low-level operations are (in pseudo-C):
        vtable = $i->vtable; // just a memory dereference
        code = vtable [method-constant-id]; // another mem deref
        run_code (code);

From your description it seems it would look like:
        vtable = $i->vtable;
        code = vtable->method_lookup ("method"); // C function call
        run_code (code);

Note that $i may be of different type for each loop iteration.
Even a cached lookup is going to be slower than a simple memory
dereference. Of course this only matters if the lookup is
actually a bottleneck in your method-call path.

> > matter in parrot as long as bytecode values are as big as C ints:-)
> 
> That ought to come ;) Cachegrind shows no problem with opcode fetch and
> you know, when it's compiled to JIT bytecode size doesn't matter anyway.
> We just avoid the opcode and operand decoding.

If you use a JIT, decode overhead is already very small:-) AFAIK, alpha
is the only interesting architecture that doesn't do byte access (at least
on older processors), so it may be a little inefficient there. But I think
you should optimize for the common case. On my machine, walking a byte
opcode array is about 15% faster than an int one (more if level-2 cache
or main memory is needed to hold it). The only real cost is loading int
values that don't fit in a byte, but those are less common than register
numbers, which currently take a whole int in your bytecode and could fit
in a byte.
Anyway, the two approaches may also balance out if the opcodes are
in read-only memory. The issue is that in perl, for example, so much is
supposed to happen at runtime (the 'use' operator changes the compiling
environment) that you actually need to compile at runtime in many cases,
not only for eval. That means emitting parrot bytecode in memory, and
this bytecode is per-process, so it increases memory usage and eventually
swapping activity. And as you say, since you JIT, this memory is wasted:
it goes unused soon after it is written.
Another issue is disk-load time: when you have small test apps it doesn't
matter, but when you start having bigger apps it might (even mmapping
has its cost, if you need a larger working set to load bytecodes).

BTW, in the computed goto code, make the array of address labels const:
it helps reduce the read-write working set, at least when parrot is built
as an executable.

lupus

-- 
-----------------------------------------------------------------
[EMAIL PROTECTED]                                     debian/rules
[EMAIL PROTECTED]                             Monkeys do it better
