Bartosz Milewski wrote:
Andrei Alexandrescu Wrote:

How about creating a struct Value!T that transforms T (be it an
array or a class) into a value type? Then if you use Value!(int[]),
you're effectively dealing with values throughout (even though
internally they might be COWed). Sometimes I also see a need for
DeepValue!T which would e.g. duplicate transitively arrays of
arrays and full object trees. For the latter we need some more
introspection though. But we have everything in the language to make
Value and DeepValue work with arrays.

What do you think?
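A minimal sketch of what such a Value wrapper might look like for arrays (the name and the alias-this forwarding are illustrative assumptions; handling classes would additionally need some way to clone an object, which this sketch omits):

```d
// Hypothetical Value!T: wraps a D slice and gives it value semantics
// by duplicating the payload on construction and on every copy.
struct Value(T)
{
    T payload;

    this(T initial) { payload = initial.dup; }

    // Postblit runs after every copy of the struct; re-dup'ing the
    // payload severs any sharing with the source.
    this(this) { payload = payload.dup; }

    // Forward indexing, length, slicing, etc. to the payload.
    alias payload this;
}

unittest
{
    auto a = Value!(int[])([1, 2, 3]);
    auto b = a;        // deep copy via postblit, not a shared slice
    b[0] = 42;
    assert(a[0] == 1); // a is unaffected: value semantics
}
```

A DeepValue!T would presumably replace `.dup` with a transitive duplication, which is where the extra introspection comes in.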

I'm afraid this would further muddle the message: "If you want safe
arrays, use the Value device; if you want to live dangerously, use
the built-in type."

I think the message is "If you want values, use Value. If you want slices, use slices." To me that's rather a clear message.

I'd rather see the reverse: D arrays are safe to
use.

But that's backwards. You can do Value with slices. You can't do slices with values. The logical primitive to pick is the slice.

They have the usual reference semantics of dynamic arrays. But if
you expand them, the sharing goes away and you get a unique reference
to a copy. This is the "no gotcha" semantics, totally predictable,
easy to reason about.
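The gotcha in today's behavior that this proposal targets can be shown in a few lines (whether the final write is visible through `a` depends on whether the append reallocated, which is exactly the unpredictability being criticized):

```d
void main()
{
    int[] a = [1, 2, 3];
    int[] b = a;  // b is a slice sharing a's data
    b[0] = 42;    // visible through a: a[0] is now 42
    b ~= 4;       // append may or may not reallocate, depending on
                  // spare capacity at the end of the block
    b[1] = 99;    // after a reallocation this is NOT visible through a;
                  // without one, it is
}
```

Under the proposed semantics, the append would always leave `b` with a unique copy, so writes through `b` after expansion would never show through `a`.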

How the compiler supports that semantics while performing clever
optimizations is another story. It's fine if this part is hard. The
language can even impose complexity requirements, if you are sure
that they are possible to implement (it's enough to have one
compliant implementation to prove this point).

Well the problem is we don't have that. It's more difficult to "be fine" if the onus of implementation is on you.

By the way, what are the library algorithms that rely on O(1)
behavior of array append?

I don't know, but there are plenty of algorithms out there that rely on that.
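A typical shape of such an algorithm (an illustrative filter, not from any particular library): building a result by repeated `~=` is O(n) overall only if each append is amortized O(1); if every append copied the whole array, the loop would degrade to O(n^2).

```d
// Collect the even elements of input by repeated append.
// Relies on amortized O(1) ~= for overall O(n) behavior.
int[] evens(int[] input)
{
    int[] result;
    foreach (x; input)
        if (x % 2 == 0)
            result ~= x;
    return result;
}
```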


Andrei
