On 08/20/2018 04:22 PM, Jeff Law wrote:
On 08/20/2018 04:16 PM, Joseph Myers wrote:
On Fri, 17 Aug 2018, Jeff Law wrote:

WCHAR_TYPE_SIZE is wrong because it doesn't account for flag_short_wchar.
As far as I can see only ada/gcc-interface/targtyps.c uses WCHAR_TYPE_SIZE
now.  TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT is what should be
used.
But that's specific to the c-family front-ends.
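
Concretely, in a c-family front end the element size Joseph describes
comes out as a one-liner; a sketch only, since the other front ends
don't set up wchar_type_node:

  /* Bytes per wide character; in the c-family front ends
     wchar_type_node already reflects -fshort-wchar.  */
  unsigned wchar_bytes = TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT;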

There's MODIFIED_WCHAR_TYPE which is ultimately used to build
wchar_type_node for the c-family front-ends.  Maybe we could construct
something from that.
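
For reference, MODIFIED_WCHAR_TYPE is a target-macro string (defined
in c-family/c-common.h) that already folds in -fshort-wchar, roughly:

  #define MODIFIED_WCHAR_TYPE \
          (flag_short_wchar ? "short unsigned int" : WCHAR_TYPE)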

If you do that, probably you want to move
fortran/trans-types.c:get_typenode_from_name (which converts the strings
used in target macros such as WCHAR_TYPE to the corresponding types) into
generic code.
I think we ultimately have to go down that path.  Or we have to make the
wchar types language independent.
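
For readers who haven't seen it, that Fortran helper just maps the
target-macro strings onto the global type nodes by name; an
abbreviated sketch (not the verbatim source):

  tree
  get_typenode_from_name (const char *name)
  {
    if (name == NULL || *name == '\0')
      return NULL_TREE;
    if (strcmp (name, "int") == 0)
      return integer_type_node;
    if (strcmp (name, "unsigned int") == 0)
      return unsigned_type_node;
    if (strcmp (name, "short unsigned int") == 0)
      return short_unsigned_type_node;
    if (strcmp (name, "long int") == 0)
      return long_integer_type_node;
    /* ... and so on for the remaining standard type names ...  */
    gcc_unreachable ();
  }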

My initial fooling around does something like this:

  count_by = 1;
  if (dir.specifier == 'S' || dir.modifier == FMT_LEN_l)
    {
      /* MODIFIED_WCHAR_TYPE names the wide character type and
         accounts for -fshort-wchar.  */
      tree node = get_identifier (MODIFIED_WCHAR_TYPE);
      if (node)
        count_by = TYPE_PRECISION (TREE_TYPE (node)) / BITS_PER_UNIT;
    }

Of course, I still have to fire up tests on AIX to know whether that,
or a variant using get_typenode_from_name, will DTRT.
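
The variant would presumably look like this (hypothetical, assuming
get_typenode_from_name has been moved into generic code per Joseph's
suggestion):

  count_by = 1;
  if (dir.specifier == 'S' || dir.modifier == FMT_LEN_l)
    {
      /* Resolve the wide character type by name;
         MODIFIED_WCHAR_TYPE already accounts for -fshort-wchar.  */
      tree node = get_typenode_from_name (MODIFIED_WCHAR_TYPE);
      if (node)
        count_by = TYPE_PRECISION (node) / BITS_PER_UNIT;
    }

That would also sidestep mapping an IDENTIFIER_NODE back to a type,
since the helper returns the type node directly.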

I still believe it would be simpler, more robust, and safer
to pass in a flag selecting whether to count bytes (the default,
for all callers except sprintf %ls) or elements of the string
type (for sprintf %ls).

It would restore all the strxxx optimizations (early folding)
for arrays of wide characters, make them work for
sprintf(d, "%s", L"...") and any other constant character
arrays, and make it possible for the same functions to detect
missing nuls in constant wide strings.
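
Concretely, the shape I have in mind is just an extra element-size
parameter on the length helper, e.g. on builtins.c's c_strlen; a
sketch only, with the default preserving byte counting for every
existing caller:

  /* Compute the length of SRC in units of ELTSIZE bytes: 1 for
     ordinary narrow strings, or
     TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT for %ls/%S.  */
  tree c_strlen (tree src, int only_value, unsigned eltsize = 1);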

Martin
