> Why are types only entered in integer_types if wider than long long?

IIRC it was so that TImode (__int128) could get into the array (it was
there before) without adding the other smaller types, which (I think)
don't need to be in there.  I don't recall why they're left out,
but... ah, I remember.  ITK has to be sorted by bitsize, and it's used
for type promotion; we don't want types promoted *to* the __intN
types, just *from* them.
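
Roughly, the promotion walk takes the first entry at least as wide as
the operand, so the sort order doubles as the promotion ranking.  A
sketch of the idea (illustrative names only, not the actual GCC code):

  /* Illustrative only: promotion scans the bitsize-sorted array and
     returns the first standard type that is wide enough, so any
     __intN entry in the array would become a promotion *target*.  */
  static tree
  promote_to_standard_type (int bitsize)
  {
    for (int i = 0; i < num_standard_integer_types; i++)
      if (standard_integer_types[i].bitsize >= bitsize)
        return standard_integer_types[i].type;
    return NULL_TREE;  /* nothing wide enough */
  }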

> > +static bool
> > +standard_type_bitsize (int bitsize)
> > +{
> > +  /* As a special exception, we always want __int128 enabled if possible.  */
> > +  if (bitsize == 128)
> > +    return false;
> > +  if (bitsize == CHAR_TYPE_SIZE
> > +      || bitsize == SHORT_TYPE_SIZE
> > +      || bitsize == INT_TYPE_SIZE
> > +      || bitsize == LONG_TYPE_SIZE
> > +      || (bitsize == LONG_LONG_TYPE_SIZE && LONG_LONG_TYPE_SIZE < GET_MODE_BITSIZE (TImode)))
> 
> I don't think there should be a special case depending on the size of long 
> long here.

I think it was from before I had the special case for "bitsize == 128".  I'll 
remove it.
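
With that clause gone, the function presumably reduces to something
like this (untested sketch, derived from the hunk quoted above):

  static bool
  standard_type_bitsize (int bitsize)
  {
    /* As a special exception, we always want __int128 enabled if
       possible.  */
    if (bitsize == 128)
      return false;
    return (bitsize == CHAR_TYPE_SIZE
            || bitsize == SHORT_TYPE_SIZE
            || bitsize == INT_TYPE_SIZE
            || bitsize == LONG_TYPE_SIZE
            || bitsize == LONG_LONG_TYPE_SIZE);
  }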

> > +      /* This must happen after the backend has a chance to process
> > +    command line options, but before the parsers are
> > +    initialized.  */
> > +      for (i = 0; i < NUM_INT_N_ENTS; i ++)
> > +   if (targetm.scalar_mode_supported_p (int_n_data[i].m)
> > +       && ! standard_type_bitsize (int_n_data[i].bitsize)
> > +       && int_n_data[i].bitsize <= HOST_BITS_PER_WIDE_INT * 2)
> > +     int_n_enabled_p[i] = true;
> 
> This HOST_BITS_PER_WIDE_INT * 2 check seems wrong.

That check was there before, for __int128; I left it as-is.  There is
no __int128 (currently) if it's bigger than HOST_BITS_PER_WIDE_INT * 2.
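
Spelled out, with the usual 64-bit HOST_WIDE_INT (my reading of the
rationale, not something the patch states):

  /* HOST_BITS_PER_WIDE_INT * 2 == 64 * 2 == 128 on typical hosts,
     i.e. the widest value that fits in the traditional (low, high)
     pair of HOST_WIDE_INTs used to store integer constants -- hence
     no __int128, or wider __intN, beyond that bound.  */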

> >  mode_for_size (unsigned int size, enum mode_class mclass, int limit)
> >  {
> > -  enum machine_mode mode;
> > +  enum machine_mode mode = BLKmode;
> > +  int i;
> 
> I don't see any need for this initialization to be added.

Removed.

> >    bprecision
> > -    = MIN (precision + BITS_PER_UNIT_LOG + 1, MAX_FIXED_MODE_SIZE);
> > +    = MIN (precision, MAX_FIXED_MODE_SIZE);
> 
> This change doesn't make sense to me - if it is needed, I think it should 
> be separated out, not included in this patch, which should simply be about 
> providing generic __intN support in the cases where __int128 is currently 
> supported.

Perhaps it belongs in the 1/5 patch, where I removed lots of "assume
all types are powers-of-two sizes" logic?  I split up the patch and
logs by hand from a master patch, so I'm not surprised a few hunks
ended up misplaced.

> > @@ -1157,13 +1165,13 @@ enum omp_clause_map_kind
> >    OMP_CLAUSE_MAP_ALLOC,
> >    OMP_CLAUSE_MAP_TO,
> >    OMP_CLAUSE_MAP_FROM,
> >    OMP_CLAUSE_MAP_TOFROM,
> >    /* The following kind is an internal only map kind, used for pointer based
> >       array sections.  OMP_CLAUSE_SIZE for these is not the pointer size,
> > -     which is implicitly POINTER_SIZE / BITS_PER_UNIT, but the bias.  */
> > +     which is implicitly POINTER_SIZE_UNITS, but the bias.  */
> 
> POINTER_SIZE_UNITS changes belong in another patch, not this one.

Again, probably the 1/5 patch.

> >    /* ptr_type_node can't be used here since ptr_mode is only set when
> >       toplev calls backend_init which is not done with -E  or pch.  */
> > -  psize = POINTER_SIZE / BITS_PER_UNIT;
> > +  psize = POINTER_SIZE_UNITS;
> 
> Likewise.

Likewise :-)

> > @@ -844,12 +846,47 @@ c_cpp_builtins (cpp_reader *pfile)
> >    builtin_define_type_max ("__LONG_LONG_MAX__", long_long_integer_type_node);
> >    builtin_define_type_minmax ("__WCHAR_MIN__", "__WCHAR_MAX__",
> >                           underlying_wchar_type_node);
> >    builtin_define_type_minmax ("__WINT_MIN__", "__WINT_MAX__", 
> > wint_type_node);
> >    builtin_define_type_max ("__PTRDIFF_MAX__", ptrdiff_type_node);
> >    builtin_define_type_max ("__SIZE_MAX__", size_type_node);
> > +  for (i = 0; i < NUM_INT_N_ENTS; i ++)
> > +    if (int_n_enabled_p[i])
> > +      {
> > +   char buf[35+20+20];
> > +   char buf_min[15+20];
> > +   char buf_max[15+20];
> > +
> > +   sprintf(buf_min, "__INT%d_MIN__", int_n_data[i].bitsize);
> > +   sprintf(buf_max, "__INT%d_MAX__", int_n_data[i].bitsize);
> 
> All this block of code appears to be new rather than replacing any 
> existing code doing something similar with __int128.  As such, I think 
> it's best considered separately from the main __intN support.

For each __int<N> we need to provide an __INT<N>_MIN__ and
__INT<N>_MAX__, just like for "char" we provide __CHAR_MIN__ and
__CHAR_MAX__.
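
For example, for N = 20 the predefines would expand to something like
this (illustrative values, assuming two's complement; the exact
spelling GCC emits may differ):

  #define __INT20_MAX__ 524287
  #define __INT20_MIN__ (-__INT20_MAX__ - 1)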

> Also note missing spaces before '(' in this code.

Fixed.
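
I.e., per GNU style:

  sprintf (buf_min, "__INT%d_MIN__", int_n_data[i].bitsize);
  sprintf (buf_max, "__INT%d_MAX__", int_n_data[i].bitsize);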

> Some of this may be needed for libstdc++, but not all.  As far as I can 
> tell, the existing __glibcxx_min, __glibcxx_max, __glibcxx_digits, 
> __glibcxx_digits10, __glibcxx_max_digits10 macros can be used in most 
> cases and avoid any need to predefine macros for the min or max of __intN; 
> you only need to communicate which types exist and their sizes in bits 
> (that is, a single macro predefine for each N, with anything else being 
> considered separately if otherwise thought desirable).

I tried that, and wasn't able to write a simpler inline macro than
the full one that lets GCC figure out the values.  Consider N = 20
and N = 128: one is not a whole number of bytes, and the other will
likely stress any arithmetic the macros try to do themselves.
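
To make the failure modes concrete (hypothetical macros, not proposed
code): a sizeof-based formula in the style of __glibcxx_digits, say

  #define naive_digits(T) ((int) sizeof (T) * __CHAR_BIT__ - 1)

is wrong for __int20, whose sizeof is padded out to whole bytes (so
this likely yields 31 rather than 19), while a value-based formula
such as

  #define naive_max(N) ((1LL << ((N) - 1)) - 1)

overflows the 64-bit arithmetic available to the macro once N
reaches 128.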

> > +         if (!in_system_header_at (input_location)
> > +             /* As a special exception, allow a type that's used
> > +                for __SIZE_TYPE__.  */
> > +             && int_n_data[specs->int_n_idx].bitsize != POINTER_SIZE)
> >             pedwarn (loc, OPT_Wpedantic,
> > -                    "ISO C does not support %<__int128%> type");
> > +                    "ISO C does not support %<__int%d%> types",
> > +                    int_n_data[specs->int_n_idx].bitsize);
> 
> I don't think there should be such a special exception.  Existing practice 
> is for testcases to do "__extension__ typedef __SIZE_TYPE__ size_t;" to 
> avoid -pedantic diagnostics for size_t being unsigned long long on Win64.

OK, I can do that instead.  There was some discussion about how to
handle size_t without warnings when it corresponded to an extension
type, and I thought the general consensus was to skip the warning
rather than break the code.
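
I.e. the testcases would do

  __extension__ typedef __SIZE_TYPE__ size_t;

rather than relying on a special exception in the front end.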

Note that the C++ libraries must refer to __intN throughout, as it's
a core type; they can't use size_t, because that's a typedef.  I.e.
even code that specifically uses "size_t" will end up using the
<__int20> templates.  Hopefully the "in system header" test will be
enough to handle uses inside the testsuite.


> > -  align = exact_log2 (POINTER_SIZE / BITS_PER_UNIT);
> > +  align = ceil_log2 (POINTER_SIZE_UNITS);
> 
> Again, belongs in a different patch.

Patch 1/5.
