https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108976

            Bug ID: 108976
           Summary: codecvt for Unicode allows surrogate code points
           Product: gcc
           Version: 13.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: libstdc++
          Assignee: unassigned at gcc dot gnu.org
          Reporter: dmjpp at hotmail dot com
  Target Milestone: ---

Valid Unicode text must never contain surrogate code POINTS. Surrogates are
allowed only in UTF-16, and there only as code UNITS, which must be properly
paired.
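
For illustration (my example, not code from libstdc++): U+10000 is stored in
UTF-16 as the properly paired code UNITS 0xD800 0xDC00, and decoding the pair
recovers the original code point:

#include <cassert>

int main()
{
    // U+10000 encoded in UTF-16 as a surrogate pair of code UNITS.
    char16_t pair[] = {0xD800, 0xDC00};
    // standard surrogate-pair decoding formula
    char32_t cp = 0x10000
                + ((char32_t(pair[0]) - 0xD800) << 10)
                + (char32_t(pair[1]) - 0xDC00);
    assert(cp == 0x10000);
    // By contrast, a lone 0xD800 stored as a code POINT (e.g. in a
    // char32_t string) is never valid Unicode text.
}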

UTF-8 in its strictest form must not contain surrogates, but in a slightly
relaxed form a surrogate is easily encoded as an ordinary 3-byte sequence. The
same can be said for UTF-32 and UCS-2. Only UTF-16 is immune to the surrogate
code POINT error, because there the values are treated as code UNITS.
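
For example, the surrogate code point U+D800 comes out of the generic 3-byte
UTF-8 pattern 1110xxxx 10xxxxxx 10xxxxxx as the byte sequence ED A0 80. A
sketch (encode3 is my hypothetical helper, not an identifier from libstdc++):

#include <cassert>

// Encode a code point in the range U+0800..U+FFFF with the generic
// 3-byte UTF-8 pattern. A strict encoder must refuse surrogates; a
// relaxed one produces them like any other code point in this range.
void encode3(char32_t c, unsigned char out[3])
{
    out[0] = 0xE0 | (c >> 12);
    out[1] = 0x80 | ((c >> 6) & 0x3F);
    out[2] = 0x80 | (c & 0x3F);
}

int main()
{
    unsigned char b[3];
    encode3(0xD800, b); // strict UTF-8 forbids this result
    assert(b[0] == 0xED && b[1] == 0xA0 && b[2] == 0x80);
    // This is also why patching the 0xEC lead bytes to 0xED in the
    // reproduction below yields U+D800..U+DFFF.
}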

The codecvt facets in libstdc++ currently allow surrogate code points in some
cases. Here is a minimal reproduction (the asserts express the correct
behavior):

#include <locale>
#include <cassert>

void u32()
{
    using namespace std;
    auto& f = use_facet<codecvt<char32_t, char, mbstate_t>>(locale::classic());

    char u8str[] = "\uC800\uCBFF\uCC00\uCFFF";
    // turn each 0xEC lead byte into 0xED
    u8str[0] = u8str[3] = u8str[6] = u8str[9] = 0xED;
    // now the string is D800, DBFF, DC00 and DFFF encoded in relaxed UTF-8
    // that allows surrogate code points.
    char32_t u32str[] = {0xD800, 0xDBFF, 0xDC00, 0xDFFF, 0};

    char32_t u32out[1];
    const char* from_next;
    char32_t* to_next;
    mbstate_t st = {};
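    // try to decode the first 3-byte sequence, i.e. the relaxed U+D800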
    auto res = f.in(st, u8str, u8str+3, from_next, u32out, u32out+1, to_next);
    assert(res == f.error);
    assert(from_next == u8str);
    assert(to_next == u32out);

    st = {};
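    // length() must likewise consume none of the invalid input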
    auto l = f.length(st, u8str, u8str+3, 1);
    assert(l == 0);

    char u8out[3];
    const char32_t* from_next2;
    char* to_next2;
    st = {};
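    // writing the lone surrogate U+D800 out to UTF-8 must also fail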
    res = f.out(st, u32str, u32str+1, from_next2, u8out, u8out+3, to_next2);
    assert(res == f.error);
    assert(from_next2 == u32str);
    assert(to_next2 == u8out);
}
void u16()
{
    using namespace std;
    auto& f = use_facet<codecvt<char16_t, char, mbstate_t>>(locale::classic());

    char u8str[] = "\uC800\uCBFF\uCC00\uCFFF";
    // turn each 0xEC lead byte into 0xED
    u8str[0] = u8str[3] = u8str[6] = u8str[9] = 0xED;
    // now the string is D800, DBFF, DC00 and DFFF encoded in relaxed UTF-8
    // that allows surrogates.

    char16_t u16out[1];
    const char* from_next;
    char16_t* to_next;
    mbstate_t st = {};
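    // same invalid input, now through the char16_t facet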
    auto res = f.in(st, u8str, u8str+3, from_next, u16out, u16out+1, to_next);
    assert(res == f.error);
    assert(from_next == u8str);
    assert(to_next == u16out);

    st = {};
    auto l = f.length(st, u8str, u8str+3, 1);
    assert(l == 0);
}
int main()
{
    u32();
    u16();
}
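
To reproduce: compile the program with, e.g., g++ -std=c++11 and run it. Since
the facets currently accept the surrogates, some of these asserts fail; all of
them should pass.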

From reading the file codecvt.cc, the following conversions have the bug:

- From UTF-8 to any other encoding.
- From UTF-32/UCS-4 to any other encoding.

The conversions that read from UCS-2 seem to me to report the error properly.
Reading from UTF-16 cannot have this bug by definition, and from what I
checked, the functions that read UTF-16 correctly treat unpaired surrogate
code units as an error.
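
The fix presumably amounts to rejecting values in the surrogate range when
decoding from UTF-8 and UTF-32/UCS-4. A minimal sketch of the needed check
(is_surrogate and valid_scalar are my hypothetical names, not identifiers
from codecvt.cc):

#include <cstdint>

// Hypothetical helpers, not the actual codecvt.cc code.
inline bool is_surrogate(std::uint32_t c)
{
    return c >= 0xD800 && c <= 0xDFFF;
}

// A decoder reading UTF-8 or UTF-32 input should report
// codecvt_base::error for anything that is not a Unicode scalar value.
inline bool valid_scalar(std::uint32_t c)
{
    return c <= 0x10FFFF && !is_surrogate(c);
}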
