As you yourself pointed out in an earlier message, "the compiler should not
try to reinterpret the byte sequences in any way". That is a truism.
The way I interpret this is that while it is not the compiler's job to make an
exact copy of the bytes ("raw" string literals are a C++ feature, not a C
one), it should get the bytes right as long as the source encoding agrees with
the execution encoding targeted by the compiler.
And that is the source of all the problems. Every compiler defaults to
whatever it wants, including falling back to the current locale's encoding
when none is specified. As long as defaults like that aren't standardized,
the problem will exist forever.

And you are right: even UTF-16 doesn't give a byte-for-byte copy of a string.

If a single C source file absolutely must produce constants under both
encodings, the options seem to come down to ASCII plus escape sequences as a
lowest common denominator, short of plain hexadecimal integer arrays, which
are entirely unreadable (except perhaps to a coder). I agree, however, that
this is far from an ideal solution.

That is a perfectly valid solution too. I'm just asking why no one will even
consider, just for one second, using a different compiler as a valid solution.
Wouldn't that be even simpler? After all, having read this thread, I now have
a few small doubts about whether I would ever use Clang for any project. I
like predictability, and Clang did something unpredictable in this situation.
Regards,
Andrew
_______________________________________________
Iup-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/iup-users
