On Mon, Sep 30, 2013 at 5:33 PM, Sandro Magi <[email protected]> wrote:

> A properly implemented language founded on CR for mutable state could
> totally achieve within 10% IMO.
>

I can see how this is true if all your versioned elements are scalars or
reference types, but I don't see how this could possibly be true in the
general case of a larger sequential data structure.

For example, consider looping over a large value-type array. Scanning the
non-versioned array is cache-efficient sequential memory access. If we have
to version writes to this data, we have to choose between either (a)
expensively copying the entire array because we changed one entry, or (b)
paying an indirection overhead for every element of the array (which is
going to cost a lot more than 10% versus a sequential scan).
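
To make the cost of option (b) concrete, here's a rough C sketch (not BitC;
the types and layout are purely illustrative) contrasting a plain scan with
one that has to chase a per-element version pointer:

/* Hypothetical sketch: plain value-type scan vs. a scan where every
 * element hides behind a version record. */
#include <stddef.h>

typedef struct { double x, y, z; } point;          /* plain value type    */

typedef struct version {                           /* option (b): each    */
    point value;                                   /* element sits behind */
    struct version *older;                         /* a version chain     */
} version;

double sum_plain(const point *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)                 /* sequential, cache-  */
        s += a[i].x;                               /* friendly            */
    return s;
}

double sum_versioned(version *const *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)                 /* one dependent load  */
        s += a[i]->value.x;                        /* (likely cache miss) */
    return s;                                      /* per element         */
}

The versioned loop turns a streaming read into pointer chasing, which is
exactly where the "more than 10%" comes from.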

Even a large class-object of scalars suffers comparably disastrous cache
effects once its fields become versioned scalars, if you access many fields
of the object.
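
Roughly, in C terms (again purely illustrative; the two-entry slot is just
one possible versioning representation):

/* Hypothetical: if every scalar field carries its own version slot,
 * a record that used to fit in one cache line now spans several. */
typedef struct { int f[16]; } rec;                 /* 64 bytes: one line  */

typedef struct { int cur, old; unsigned ver; } vslot;   /* 12 bytes       */

typedef struct { vslot f[16]; } vrec;              /* 192 bytes: ~3 lines,
                                                      so touching all the
                                                      fields pulls in 3x
                                                      the data            */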

Do you see what I was getting at now?

I've been considering whether it's reasonable to build a value-type
implementation of the versions container. Of course it would have a fixed
number of in-place slots, after which it would need to 'overflow' through a
pointer, but I think this would be better than adding a pointer indirection
to every value type.
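
Something like the following C sketch is what I have in mind; the names,
the element type, and the slot count are placeholders, not a concrete
proposal for the runtime representation:

/* Keep the most recent few versions in-place (as values), and only
 * spill to a heap-allocated overflow chain when more are live. */
#define INLINE_SLOTS 4

typedef struct overflow {
    double value;
    unsigned ver;
    struct overflow *next;
} overflow;

typedef struct versions {
    double slot[INLINE_SLOTS];      /* newest versions stored in-place   */
    unsigned ver[INLINE_SLOTS];     /* version id per in-place slot      */
    unsigned count;                 /* how many in-place slots are used  */
    overflow *spill;                /* older versions, heap-allocated    */
} versions;

static inline double read_latest(const versions *v) {
    /* assumes at least one version is always kept in-place */
    return v->slot[v->count - 1];
}

The common case (few live versions) stays a straight in-place access, so a
reader only pays an indirection once the container has actually spilled.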