Dan Sugalski wrote:
[snip]
> That's OK, since my example was wrong. (D'oh! Chalk it up to remnants of
> the martian death flu, along with too much blood in my caffeine stream) The
> example
> 
>   $foo{bar} = $baz + $xyzzy[42];
> 
> turns into
> 
>    baz->vtable->add[NATIVE](foo, baz, xyzzy, key);

How does the bytecode compiler know it should generate NATIVE integer
addition? Are you assuming 'use integer'? (see also the next comment)

> 
> with the add routine doing
> 
>      IV lside = baz->vtable->get_int[NATIVE](baz, key+1);
>      IV rside = xyzzy->vtable->get_int[NATIVE](xyzzy, key+2);

This code assumes $xyzzy[42] will fit in an IV. It could be a bigint,
a bigfloat, ... and could already overflow when *converted* to an IV,
not only when added to lside.
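To make the conversion problem concrete, here is a minimal sketch (my
own illustration, not proposed API; `double_fits_iv` is a hypothetical
helper, with a double standing in for a bigfloat value) of how a
narrowing get_int would have to range-check before it could hand back
an IV at all:

```c
#include <stdint.h>

/* Hypothetical IV type; Perl's real IV is chosen at configure time. */
typedef intptr_t IV;
#define IV_MAX INTPTR_MAX
#define IV_MIN INTPTR_MIN

/* Narrowing a wider value (a double here, standing in for bigfloat)
   to IV can overflow before any addition happens.  The upper bound is
   strict because (double)IV_MAX may round up past the true maximum,
   so this check is deliberately conservative. */
static int double_fits_iv(double v)
{
    return v >= (double)IV_MIN && v < (double)IV_MAX;
}
```

So a get_int[NATIVE] on such a value would need an overflow escape
hatch of its own, independent of the one in the add routine.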

I assume this is so because it's the add[NATIVE] implementation, right?
But how does the bytecode compiler (or any other stage between parsing
and execution) know that NATIVE will be appropriate?
Where does the assumption about both $baz and $xyzzy[42] come from?

>      IV sum;
>      bigstr *bigsum;
>      CHECK_OVERFLOW(bigstr, (sum = lside + rside));
> 
>      if (OVERFLOW) {
>        foo->vtable->set_integer[BIGINT](foo, bigsum, key[0]);
>      } else {
>        foo->vtable->set_integer[NATIVE](foo, sum, key[0]);
>      }
> 
> and foo's set_integer storing the passed data in the database as appropriate.
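For what it's worth, the CHECK_OVERFLOW step in the quoted code can be
done portably with a pre-check, no wider type needed. A sketch (my own;
`add_iv_checked` and the IV typedef are illustrative, not Parrot API):

```c
#include <stdint.h>

/* Hypothetical IV type; Perl's real IV is chosen at configure time. */
typedef intptr_t IV;
#define IV_MAX INTPTR_MAX
#define IV_MIN INTPTR_MIN

/* One plausible expansion of the CHECK_OVERFLOW idea: return 1 and
   leave *sum untouched if lside + rside would overflow (signalling
   "promote to bigint"), otherwise store the sum and return 0.  The
   comparisons are rearranged so they never overflow themselves. */
static int add_iv_checked(IV lside, IV rside, IV *sum)
{
    if ((rside > 0 && lside > IV_MAX - rside) ||
        (rside < 0 && lside < IV_MIN - rside))
        return 1;               /* would overflow: take the bigint path */
    *sum = lside + rside;
    return 0;
}
```

The branch in the quoted code would then pick set_integer[BIGINT] or
set_integer[NATIVE] based on that return value.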
> 
> >And if we replace that line for the correspondent set_string operation, I
> >don't see the need to have add in a vtable, because I cannot understand how
> >it could be implemented differently for another variable aside from foo.
> 
> Now, as to this...
> 
> What happens if you have an overloaded array on the left side? Or a complex
> number? Or a bigint? The point of having add in the vtable (along with a
> lot of the other stuff that's in there) is so we can have a lot of
> special-purpose code rather than a hunk of general-purpose code. The idea
> is to reduce the number of tests and branches and shrink the size of the
> code we actually execute so we don't blow processor cache or have to clear
> out execution pipelines.

Having the `key' data structure looks like a move from special-purpose
to general-purpose code to me. Is the idea that the key pointers will
simply be passed along most of the time, and that in the end there will
be few branches because e.g. an array value knows it has to do indexing
while a scalar value will ignore the key pointer?

Are there estimates of the relative frequencies of
    $a
vs. $a[] or $a{}
in Perl code?

How will non-constant keys be handled? Will key data structures be
created on the C stack or in reused buffers? Or will it work like this:
    1. one or more ops calculate the key and fetch the PMC 
       from the container,
    2. the PMC is passed to other functions with a NULL key entry.
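Just to check my reading of the "array indexes, scalar ignores" idea,
here is a toy mock-up (entirely hypothetical names and layout, not the
real PMC/vtable design) of key-consuming vs. key-ignoring get_int
entries:

```c
#include <stddef.h>

typedef long IV;

/* Hypothetical key: just an index for this sketch. */
typedef struct Key { size_t index; } Key;

typedef struct PMC PMC;
typedef struct {
    IV (*get_int)(PMC *self, const Key *key);
} Vtable;

struct PMC {
    const Vtable *vtable;
    IV scalar_val;
    IV *elems;                  /* used only by the array variant */
};

static IV scalar_get_int(PMC *self, const Key *key)
{
    (void)key;                  /* a scalar simply ignores the key */
    return self->scalar_val;
}

static IV array_get_int(PMC *self, const Key *key)
{
    return self->elems[key->index];   /* an array uses it to index */
}

static const Vtable scalar_vt = { scalar_get_int };
static const Vtable array_vt  = { array_get_int };
```

If that is the intent, the caller never branches on the container type;
the branch is folded into which vtable slot gets dispatched.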

> 
> >And the line that sums the two values doesn't take into account the fact
> >that baz has + overloaded to make some test if the values are strings, to
> >concatenate instead of sum. How would foo's vtable have to deal with that?
> 
> Well, in the updated (and actually correct... :) version it does, since
> it's baz's add code that gets called. If xyzzy is overloaded things get
> dicier. How that gets handled (when the right operand in an expression is
> overloaded, or both) is a language issue, and one that someone else (read:
> Larry) has to decide.
[snip]

-Edwin
