On 13-Oct-2013 16:36, nickles wrote:
> Why does <string>.length return the number of bytes and not the
> number of UTF-8 characters, whereas <wstring>.length and
> <dstring>.length return the number of UTF-16 and UTF-32
> characters?


???
This is simply wrong. All strings return the number of code units; it's only in UTF-32 that a code point (roughly, a character) happens to fit into a single code unit.
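A minimal sketch to illustrate ("möhre" is just an example string; "ö" takes two UTF-8 code units but one UTF-16/UTF-32 code unit):

import std.stdio : writeln;
import std.utf : count;

void main()
{
    string  s = "möhre";
    wstring w = "möhre"w;
    dstring d = "möhre"d;

    writeln(s.length); // 6 -- UTF-8 code units, not characters
    writeln(w.length); // 5 -- UTF-16 code units
    writeln(d.length); // 5 -- UTF-32 code units == code points here

    writeln(count(s)); // 5 -- code points, via std.utf.count
}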

> Wouldn't it be more consistent to have <string>.length return the
> number of UTF-8 characters as well (instead of having to use
> std.utf.count(<string>))?

It's consistent as is: in all three string types, .length counts code units.

--
Dmitry Olshansky
