On Friday, 13 May 2016 at 16:05:21 UTC, Steven Schveighoffer wrote:
On 5/12/16 4:15 PM, Walter Bright wrote:
10. Autodecoded arrays cannot be RandomAccessRanges, losing a key benefit of being arrays in the first place.
I'll repeat what I said in the other thread.
The problem isn't auto-decoding. The problem is hijacking the char[] and wchar[] (and variants) array type to mean autodecoding non-arrays.
If you think this code makes sense, then my definition of sane varies slightly from yours:
import std.range;
alias R = char[];
static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == typeof(R.init[0])));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0]))
    && is(typeof(R.init[0 .. $])));
I think D would be fine if string meant some auto-decoding struct with an immutable(char)[] array backing. I can accept and work with that. I can transform that into a char[] that makes sense if I have no use for auto-decoding. As of today, I have to use byCodeUnit, or .representation, etc., and it's very unwieldy.
If I ran D, that's what I would do.
-Steve
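For what it's worth, a minimal sketch of the kind of wrapper Steve describes might look like this (the struct name and layout are my own invention; std.utf.decodeFront is real Phobos):

```d
import std.utf : decodeFront;

// Hypothetical sketch: a struct with an immutable(char)[] backing that
// decodes only through its range primitives, while .data stays a plain array.
struct DecodedString
{
    immutable(char)[] data; // raw UTF-8 code units, no decoding

    bool empty() const { return data.length == 0; }

    dchar front() const
    {
        auto tmp = data;        // decodeFront advances its argument,
        return tmp.decodeFront; // so decode a copy of the slice
    }

    void popFront() { data.decodeFront; }
}

void main()
{
    auto s = DecodedString("héllo");
    assert(s.front == 'h');     // the range view decodes to dchar
    assert(s.data.length == 6); // the backing array stays raw: 'é' is 2 code units
}
```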
Well, the "auto" part of autodecoding means "automatically doing it for plain strings", right? If you explicitly do decoding, I think it would just be "decoding"; there's no "auto" part.
I doubt anyone is going to complain if you add in a struct wrapper around a string that iterates over code units or graphemes. The issue most people have, as you say, is the fact that the default for strings is to decode.
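To make that default concrete, a small example (using Phobos's std.utf.byCodeUnit, which is today's opt-out):

```d
import std.range; // ElementType, isRandomAccessRange, front, walkLength
import std.utf : byCodeUnit;

void main()
{
    string s = "héllo";

    // The default: range primitives decode, so front yields a dchar code
    // point and the string is not a random-access range.
    static assert(is(ElementType!string == dchar));
    static assert(!isRandomAccessRange!string);
    assert(s.front == 'h');
    assert(s.walkLength == 5); // 5 code points...
    assert(s.length == 6);     // ...but 6 UTF-8 code units ('é' is two)

    // Opting out: byCodeUnit iterates the raw code units and gives a
    // random-access range again.
    auto units = s.byCodeUnit;
    static assert(isRandomAccessRange!(typeof(units)));
    assert(units.length == 6);
}
```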