https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70893

--- Comment #3 from Jonathan Wakely <redi at gcc dot gnu.org> ---
(In reply to Кирилл from comment #2)
> ...
> Just realized it's a wrong-endianness problem.
> codecvt_utf8_utf16 should assume utf16be by default, right? Apparently, no.

No. It works with the native endianness of the system, if I understand the spec
correctly, i.e. it's for converting between UTF-8 and in-memory UTF-16
sequences using the native endianness. But the spec is a total mess.
