On Tuesday, 27 August 2013 at 11:43:29 UTC, Jason den Dulk wrote:
On Sunday, 25 August 2013 at 19:38:52 UTC, Paolo Invernizzi wrote:
Thanks, somewhat unintuitive.
It is a trap for the unwary, but in this case the benefits
outweigh the costs.
On Sunday, 25 August 2013 at 19:56:34 UTC, Jakob Ovrum wrote:
To get a range of UTF-8 or UTF-16 code units, the code units
have to be represented as something other than `char` and
`wchar`. For example, you can cast your string to
immutable(ubyte)[] to operate on that, then cast it back at a
later point.
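For instance, a minimal sketch of the cast-and-cast-back approach described above (the variable names are illustrative):

```d
import std.range : ElementType;

void main()
{
    string s = "hello";

    // Ranging over a string auto-decodes: the element type is dchar,
    // not char, because Phobos decodes the UTF-8 code units on the fly.
    static assert(is(ElementType!string == dchar));

    // Cast to immutable(ubyte)[] to iterate the raw code units instead.
    auto bytes = cast(immutable(ubyte)[]) s;
    static assert(is(ElementType!(typeof(bytes)) == ubyte));

    // Cast back at a later point.
    string t = cast(string) bytes;
    assert(t == "hello");
}
```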
Having to use `ubyte` would seem to defeat the purpose of having
`char`. If I were to have this:
import std.traits : isSomeChar;

auto no_convert(C)(C[] s) if (isSomeChar!C)
{
    struct No
    {
        private C[] s;
        this(C[] _s) { s = _s; }
        @property bool empty() { return s.length == 0; }
        @property C front() in { assert(s.length != 0); } body { return s[0]; }
        void popFront() in { assert(s.length != 0); } body { s = s[1 .. $]; }
    }
    return No(s);
}
its element type would be char for strings. Would this still
result in conversions if I used it with other algorithms?
It might, but that range of yours is underwhelming: no indexing,
no length, no nothing.
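For comparison, here is a sketch of what the range would need to stop being underwhelming: back/popBack, save, length, indexing, and slicing. The additions are illustrative, not from the thread, and the name `no_convert2` is made up to avoid clashing with the original:

```d
import std.traits : isSomeChar;

auto no_convert2(C)(C[] s) if (isSomeChar!C)
{
    struct No
    {
        private C[] s;
        @property bool empty() { return s.length == 0; }
        @property C front() { return s[0]; }
        void popFront() { s = s[1 .. $]; }
        // The pieces the reply says are missing:
        @property C back() { return s[$ - 1]; }
        void popBack() { s = s[0 .. $ - 1]; }
        @property size_t length() { return s.length; }
        C opIndex(size_t i) { return s[i]; }
        auto opSlice(size_t a, size_t b) { return No(s[a .. b]); }
        @property No save() { return this; }
    }
    return No(s);
}

unittest
{
    import std.range : isRandomAccessRange;
    auto r = no_convert2("abc");
    static assert(isRandomAccessRange!(typeof(r)));
    assert(r.length == 3 && r[1] == 'b');
}
```

With these primitives the struct satisfies the random-access range interface, so algorithms that check for length or indexing can use their fast paths instead of falling back to linear popFront loops.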
Why would you want to do *that* though? Is it because you have an
ASCII string? In that case, you should be interested in
std.encoding.AsciiChar and std.encoding.AsciiString.
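For the ASCII case, something along these lines should work. This is a sketch; it assumes std.encoding.transcode accepts a string source with an AsciiString output parameter:

```d
import std.encoding : AsciiString, transcode;

void main()
{
    string s = "hello";

    // AsciiString is immutable(AsciiChar)[], a ubyte-based element
    // type, so ranging over it yields single code units, no decoding.
    AsciiString a;
    transcode(s, a);
    assert(a.length == 5);

    // Reinterpret back to string when needed; this is valid because
    // ASCII is a subset of UTF-8.
    assert(cast(string) a == "hello");
}
```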