Bartosz Milewski wrote:
Andrei Alexandrescu wrote:

Bartosz Milewski wrote:
Andrei Alexandrescu wrote:

How about creating a struct Value!T that transforms T (be it an
array or a class) into a value type? Then if you use Value!(int[]),
you're effectively dealing with values throughout (even though
internally they might be COWed). Sometimes I also see a need for
DeepValue!T which would e.g. duplicate transitively arrays of
arrays and full object trees. For the latter we need some more
introspection though. But we have everything in the language to make
Value and DeepValue work with arrays.

What do you think?
I'm afraid this would further muddle the message: "If you want safe
arrays, use the Value device; if you want to live dangerously, use
the built-in type."
I think the message is "If you want values, use Value. If you want slices, use slices." To me that's rather a clear message.
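For concreteness, here is a minimal sketch of what Value for slices could look like. It is illustrative only: it copies eagerly on construction and on copy, where a real implementation might COW.

struct Value(T : E[], E)
{
    private T payload;

    this(T data) { payload = data.dup; }

    // Postblit runs right after a copy is made; re-dup the payload
    // so the copy shares nothing with the source.
    this(this) { payload = payload.dup; }

    ref E opIndex(size_t i) { return payload[i]; }
    size_t length() { return payload.length; }
}

unittest
{
    auto a = Value!(int[])([1, 2, 3]);
    auto b = a;        // copying duplicates the payload
    b[0] = 42;
    assert(a[0] == 1); // a is unaffected: value semantics
}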

I'd rather see the reverse: D arrays are safe to
use.
But that's backwards. You can do Value with slices. You can't do slices with values. The logical primitive to pick is the slice.
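The asymmetry is easy to demonstrate (a minimal sketch, not a library API):

void demo()
{
    int[] a = [1, 2, 3, 4];
    int[] b = a[1 .. 3];   // a slice: b views a's storage
    b[0] = 99;
    assert(a[1] == 99);    // the write is visible through a

    int[] c = a.dup;       // a value built from a slice: one dup
    c[0] = 7;
    assert(a[0] == 1);     // no sharing; the other direction,
                           // recovering sharing from an eagerly
                           // copied value, is not possible
}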


I know you don't like analogies, but to me it sounds like advice to a C++ programmer: if you want to control destruction, use unique_ptr (or shared_ptr). This, BTW, is how I program in C++, because I can't live with memory leaks and double deletions. What would your advice be to somebody who is learning D as her first language? Don't use arrays directly, use Value!(T[])? Should libraries be written in terms of Value!(T[])?

Some would use that, some would use T[]. For example, std.algorithm would derive little, though nonzero, benefit from Value!(T[]).

They have the usual reference semantics of dynamic arrays. But if
you expand them, the sharing goes away and you get a unique reference
to a copy. This is the "no gotcha" semantics, totally predictable,
easy to reason about.

How the compiler supports that semantics while performing clever
optimizations is another story. It's fine if this part is hard. The
language can even impose complexity requirements, if you are sure
that they are possible to implement (it's enough to have one
compliant implementation to prove this point).
Well, the problem is that we don't have that. It's more difficult to "be fine" if the onus of implementation is on you.
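For reference, here is what the proposed semantics would guarantee, as a hedged sketch: the assert holds under the proposal, whereas in D as it stands the outcome depends on whether ~= happened to reallocate.

void demo()
{
    int[] a = [1, 2, 3];
    int[] b = a;       // a and b share storage, as usual
    b ~= 4;            // proposal: expanding unshares, so b gets a
                       // unique copy of its data
    b[0] = 42;
    assert(a[0] == 1); // guaranteed under the proposal only
}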

By the way, what are the library algorithms that rely on O(1)
behavior of array append?
I don't know, but there are plenty of algorithms out there that rely on that.

I thought you had concrete examples.

I do (one famous example is in Windows 3), but I wouldn't think there's a need to argue that point. If we're facing the prospect of an append that always reallocates, we'd better not provide it at all. People here have been complaining about ~= being slow because it acquires a lock, even though its complexity is amortized O(1).
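For reference, the standard way append achieves amortized O(1) is geometric capacity growth. A hand-rolled, illustrative sketch (the runtime's ~= does its own bookkeeping, plus the locking mentioned above):

struct Buf(E)
{
    private E[] data;   // allocated block
    private size_t len; // elements in use

    void append(E x)
    {
        // Double the capacity when full: n appends then cost O(n)
        // copying in total, i.e. O(1) amortized per append.
        if (len == data.length)
            data.length = data.length ? data.length * 2 : 4;
        data[len++] = x;
    }

    E[] slice() { return data[0 .. len]; }
}

unittest
{
    Buf!int b;
    foreach (i; 0 .. 1000) b.append(i);
    assert(b.slice().length == 1000 && b.slice()[999] == 999);
}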


Andrei
