On Sunday, 13 October 2013 at 12:36:20 UTC, nickles wrote:
Why does <string>.length return the number of bytes and not the
number of UTF-8 characters, whereas <wstring>.length and
<dstring>.length return the number of UTF-16 and UTF-32
characters?

Wouldn't it be more consistent to have <string>.length return the
number of UTF-8 characters as well (instead of having to use
std.utf.count(<string>))?

Technically, UTF-16 can encode one character as two ushorts (a surrogate pair), so <wstring>.length returns the number of ushorts, i.e. UTF-16 code units, not the number of UTF-16 characters.
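
For example (a quick sketch of my own; the sample string uses U+1D11E, a character outside the Basic Multilingual Plane):

import std.stdio : writeln;
import std.utf : count;

void main()
{
    // U+1D11E needs 4 UTF-8 code units and a UTF-16 surrogate pair.
    string  s = "a\U0001D11E";
    wstring w = "a\U0001D11E"w;
    dstring d = "a\U0001D11E"d;

    writeln(s.length); // 5 -- chars   (UTF-8 code units)
    writeln(w.length); // 3 -- ushorts (UTF-16 code units)
    writeln(d.length); // 2 -- dchars  (UTF-32 code units)
    writeln(count(s)); // 2 -- code points; same result for w and d
}

Only for dstring does .length coincide with the number of code points; for char and wchar strings you need std.utf.count.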
