On Friday, 3 June 2016 at 11:24:40 UTC, ag0aep6g wrote:
This is mostly me trying to make sense of the discussion.

So everyone hates autodecoding. But Andrei seems to hate it a good bit less than everyone else. As far as I could follow, he has one reason for that, which might not be clear to everyone:

char converts implicitly to dchar, so the compiler lets you search for a dchar in a range of chars. But that gives nonsensical results. For example, you won't find 'ö' in "ö".byChar, but you will find '¶' in there ('¶' is U+00B6, 'ö' is U+00F6, and 'ö' is encoded as 0xC3 0xB6 in UTF-8).

Do you mean that '¶' is represented internally as the single byte 0xB6 and can be handled as such without error? That would mean char literals are broken: the only valid UTF-8 representation of '¶' in memory is the two-byte sequence 0xC2 0xB6.
Sorry if I misunderstood; I'm only just starting to learn D.
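The quoted example can be checked at the byte level. A short sketch in Python (used here only as a stand-in, since the issue is purely about UTF-8 bytes versus code points; D's `.byChar` iterates the same UTF-8 code units, and `char` converts implicitly to `dchar`):

```python
# What "ö".byChar exposes in D: the raw UTF-8 code units of the string.
s = "ö".encode("utf-8")
print([hex(b) for b in s])            # ['0xc3', '0xb6']

pilcrow = ord('¶')                    # code point U+00B6
o_umlaut = ord('ö')                   # code point U+00F6

# Because char converts implicitly to dchar, a naive search compares
# each byte against the sought code point. The second byte of "ö"
# happens to equal '¶''s code point, so '¶' is "found" in "ö":
print(any(b == pilcrow for b in s))   # True

# while 'ö''s own code point 0xF6 never appears as a byte, so 'ö'
# is NOT found in its own byte representation:
print(any(b == o_umlaut for b in s))  # False
```

Note that '¶' as a *code point* is 0xB6, but its *UTF-8 encoding* is the two bytes 0xC2 0xB6; the nonsensical match above comes from comparing a code point against individual code units.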

