On Sunday, 13 October 2013 at 12:36:20 UTC, nickles wrote:
Why does <string>.length return the number of bytes and not the
number of UTF-8 characters, whereas <wstring>.length and
<dstring>.length return the number of UTF-16 and UTF-32
characters?

Wouldn't it be more consistent to have <string>.length return the
number of UTF-8 characters as well (instead of having to use
std.utf.count(<string>))?

Because `length` must be an O(1) operation for built-in arrays, and counting UTF-8 code points is O(n). Making it O(1) for UTF-8 strings would require storing an additional length field, which would make them binary-incompatible with every other array type.
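A quick illustration of the difference (a minimal sketch; the counts in the comments assume the literal "résumé", where each 'é' is U+00E9 and occupies two UTF-8 code units):

import std.stdio;
import std.utf : count;

void main()
{
    string  s = "résumé";   // UTF-8
    wstring w = "résumé"w;  // UTF-16
    dstring d = "résumé"d;  // UTF-32

    // .length is O(1) and counts code units, not characters.
    writeln(s.length);  // 8 (each 'é' takes 2 bytes)
    writeln(w.length);  // 6
    writeln(d.length);  // 6

    // std.utf.count walks the string (O(n)) and returns the
    // number of code points for any string type.
    writeln(count(s));  // 6
    writeln(count(w));  // 6
    writeln(count(d));  // 6
}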
