Re: [patch] change specific int128 -> generic intN

2014-06-28 Thread Marc Glisse
(oops, the message got stuck in my mailer, should have been sent a while 
ago)


On Tue, 24 Jun 2014, DJ Delorie wrote:


Since the check for __STRICT_ANSI__ is removed, do we need to add
__extension__ in front of __GLIBCXX_TYPE_INT_N_0 to avoid warning with
-Wsystem-headers?


I copied the code from the __int128 case, and it explicitly bypassed
-Wsystem-headers...  so we don't have that problem.


Ah... indeed.


That seems complicated. You just need to call emit_support_tinfo_1 on
each of the types (see how fundamentals is used at the end of the
function), no need to put everything in the array.


Sure, *now* you tell me that :-)


Sorry, I should have made that clearer when I introduced 
emit_support_tinfo_1...



I can do it either way, but it's the same overhead to iterate through
the types.  Shoving it into the array is a bit more future-proof, but
keeping the index in sync is a bit of work if the table ever changes.


We are already going to have a second loop (not over the fundamentals array) 
calling emit_support_tinfo_1 for __float128 when available, so the array 
won't be complete anymore. More precisely, it will iterate either over all 
the types on which register_builtin_type was called, or over the float modes 
that don't correspond to float/double/long double (depending on how much 
they break ARM).



Your choice ;-)


Well, Jason's.

--
Marc Glisse


Re: [patch] change specific int128 -> generic intN

2014-06-23 Thread DJ Delorie

 Since the test on __STRICT_ANSI__ is removed for all other uses, it would 
 seem consistent to me to remove this one as well. Besides, you are already 
 testing __GLIBCXX_USE_INT_N_0, which as far as I understand is protected 
 by !flag_iso (with the exception of size_t).

Yup, I'll clean that up.

  -#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_INT128)
  +  // Conditionalizing on __STRICT_ANSI__ here will break any port that
  +  // uses one of these types for size_t.
  +#if defined(__GLIBCXX_USE_INT_N_0)
     template<>
  -    struct __is_integral_helper<__int128>
  +    struct __is_integral_helper<__GLIBCXX_TYPE_INT_N_0>
 
 Since the check for __STRICT_ANSI__ is removed, do we need to add
 __extension__ in front of __GLIBCXX_TYPE_INT_N_0 to avoid warning with
 -Wsystem-headers?

I copied the code from the __int128 case, and it explicitly bypassed
-Wsystem-headers...  so we don't have that problem.  Not sure if we
*should* have that problem, but we didn't before...  I can only assume
that either (1) someone found it too difficult to get __extension__ to
work right in all those cases, or (2) it's a bug.  Or (3) for some
reason they decided to do it that way anyway.  I don't know.

 That seems complicated. You just need to call emit_support_tinfo_1 on
 each of the types (see how fundamentals is used at the end of the
 function), no need to put everything in the array.

Sure, *now* you tell me that :-)

I can do it either way, but it's the same overhead to iterate through
the types.  Shoving it into the array is a bit more future-proof, but
keeping the index in sync is a bit of work if the table ever changes.
Your choice ;-)


Re: [patch] change specific int128 -> generic intN

2014-06-23 Thread DJ Delorie

 The changes to dwarf2asm.c, cppbuiltin.c, optabs.c, defaults.h, expr.c, 
 expmed.c, tree-dfa.c, simplify-rtx.c, lto-object.c, loop-iv.c, varasm.c, 
 the msp430 back end and some of the stor-layout.c changes don't look like 
 they should depend on the rest of the patch.  I think it would help review 
 if anything that can reasonably be separated from the main intN support is 
 posted separately, as a much smaller patch with its own self-contained 
 rationale (I presume all those changes should work fine without the main 
 intN support), and then the intN patch only contains things directly 
 related to intN support.

There are basically three types of changes working together here:

1. Changes that remove the "types are powers of two" assumptions
   throughout gcc.  At least, as many as I could find.  These could be
   applied without the intN support, but serve no purpose without it.

2. The intN patch itself.

3. The msp430 changes to use __int20 for size_t, which depend on
   the above two.

I'll split them up but I hope progress on set #2 isn't stalled on
bikeshedding set #1, especially since set #1 doesn't solve any
problems (or really, change *anything*) on its own.


Re: [patch] change specific int128 -> generic intN

2014-06-21 Thread Marc Glisse

(Adding libstdc++@ in Cc: so they see the patch at
https://gcc.gnu.org/ml/gcc-patches/2014-06/msg01655.html )

On Sat, 21 Jun 2014, DJ Delorie wrote:


New version of https://gcc.gnu.org/ml/gcc-patches/2014-04/msg00723.html

This is a big patch, but it includes all the features/changes/support
requested since the initial patch.  Tested with identical results
before/after on x86-64 EXCEPT for g++.dg/ext/int128-1.C, which assumes
__int128 is not supported with -std= but as per previous discussion,
it now is (because __intN might be used for size_t, which is
required).  Tested on msp430 with significant improvements but a few
regressions (expected).

This patch replaces the current int128 support with a generic intN
support, which happens to provide int128 as well as up to three
additional intN types per target.  I will post a separate patch for
the msp430 backend demonstrating the use of this new feature for
__int20, as well as a third to fix some problems with PSImode in
general.

The general idea is that genmodes has a new macro INT_N() for
tm-modes.def, which gives the mode and precision to use for
target-specific intN types.  These types are always created, but the
parser will error with "unsupported" if the target does not support
that mode for that compile (i.e. because of command line switches,
etc).  There's an INT_N(TI,128) in the common sources.

If the target defines any types larger than long long, those types
(like int128 before) will be used for type promotion, but types
smaller than long long will not, to avoid confusing the C type
promotion rules.  Otherwise, it's up to the source being compiled to
use specific __intN types as desired.
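
For illustration only (this is not the posted back-end patch): a target
would tap into this roughly as follows, with the PSImode/20-bit pairing
being the msp430 case discussed elsewhere in the thread.  The target's
tm-modes.def gains an INT_N (PSI, 20) entry next to the INT_N (TI, 128)
one in the common sources, and user code can then write:

  __int20 a = 0x12345;            /* accepted when the target enables
                                     PSImode for this compile */
  unsigned __int20 b = 0xfffffU;  /* likewise */
  /* On a compile where the mode is unavailable, the parser rejects
     __int20 as unsupported instead of silently creating the type.  */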


Nice.  A couple of quick comments on the parts I understand, while the
maintainers are resting for the weekend.


Index: libstdc++-v3/src/c++11/limits.cc
===

[...]

+#if !defined(__STRICT_ANSI__)


Since the test on __STRICT_ANSI__ is removed for all other uses, it would 
seem consistent to me to remove this one as well. Besides, you are already 
testing __GLIBCXX_USE_INT_N_0, which as far as I understand is protected 
by !flag_iso (with the exception of size_t).



-#if !defined(__STRICT_ANSI__) && defined(_GLIBCXX_USE_INT128)
+  // Conditionalizing on __STRICT_ANSI__ here will break any port that
+  // uses one of these types for size_t.
+#if defined(__GLIBCXX_USE_INT_N_0)
   template<>
-    struct __is_integral_helper<__int128>
+    struct __is_integral_helper<__GLIBCXX_TYPE_INT_N_0>


Since the check for __STRICT_ANSI__ is removed, do we need to add
__extension__ in front of __GLIBCXX_TYPE_INT_N_0 to avoid warning with
-Wsystem-headers?


--- gcc/cp/rtti.c   (revision 211858)
+++ gcc/cp/rtti.c   (working copy)
@@ -1506,31 +1506,44 @@ emit_support_tinfo_1 (tree bltn)

void
emit_support_tinfos (void)
{
  /* Dummy static variable so we can put nullptr in the array; it will be
 set before we actually start to walk the array.  */
-  static tree *const fundamentals[] =
+  static tree * fundamentals[] =
  {
    &void_type_node,
    &boolean_type_node,
    &wchar_type_node, &char16_type_node, &char32_type_node,
    &char_type_node, &signed_char_type_node, &unsigned_char_type_node,
    &short_integer_type_node, &short_unsigned_type_node,
    &integer_type_node, &unsigned_type_node,
    &long_integer_type_node, &long_unsigned_type_node,
    &long_long_integer_type_node, &long_long_unsigned_type_node,
-    &int128_integer_type_node, &int128_unsigned_type_node,
    &float_type_node, &double_type_node, &long_double_type_node,
    &dfloat32_type_node, &dfloat64_type_node, &dfloat128_type_node,
+#define FUND_INT_N_IDX 22
+// These eight are for intN_t nodes
+    &nullptr_type_node, &nullptr_type_node, &nullptr_type_node, &nullptr_type_node,
+    &nullptr_type_node, &nullptr_type_node, &nullptr_type_node, &nullptr_type_node,
    &nullptr_type_node,
    0
  };
-  int ix;
+  int ix, i;
  tree bltn_type, dtor;

+  ix = FUND_INT_N_IDX;
+  for (i = 0; i < NUM_INT_N_ENTS; i ++)
+    if (int_n_enabled_p[i])
+      {
+        fundamentals [ix++] = &int_n_trees[i].signed_type;
+        fundamentals [ix++] = &int_n_trees[i].unsigned_type;
+      }
+  fundamentals [ix++] = &nullptr_type_node;
+  fundamentals [ix++] = 0;
+
  push_abi_namespace ();
  bltn_type = xref_tag (class_type,
get_identifier ("__fundamental_type_info"),
/*tag_scope=*/ts_current, false);
  pop_abi_namespace ();
  if (!COMPLETE_TYPE_P (bltn_type))


That seems complicated. You just need to call emit_support_tinfo_1 on
each of the types (see how fundamentals is used at the end of the
function), no need to put everything in the array.
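
A minimal sketch of that simpler approach, reusing the int_n_trees /
int_n_enabled_p arrays from the patch (not the committed code):

  for (i = 0; i < NUM_INT_N_ENTS; i++)
    if (int_n_enabled_p[i])
      {
        emit_support_tinfo_1 (int_n_trees[i].signed_type);
        emit_support_tinfo_1 (int_n_trees[i].unsigned_type);
      }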

--
Marc Glisse


Re: [patch] change specific int128 -> generic intN

2014-06-21 Thread Joseph S. Myers
The changes to dwarf2asm.c, cppbuiltin.c, optabs.c, defaults.h, expr.c, 
expmed.c, tree-dfa.c, simplify-rtx.c, lto-object.c, loop-iv.c, varasm.c, 
the msp430 back end and some of the stor-layout.c changes don't look like 
they should depend on the rest of the patch.  I think it would help review 
if anything that can reasonably be separated from the main intN support is 
posted separately, as a much smaller patch with its own self-contained 
rationale (I presume all those changes should work fine without the main 
intN support), and then the intN patch only contains things directly 
related to intN support.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [patch] change specific int128 -> generic intN

2014-05-09 Thread Marc Glisse

On Thu, 8 May 2014, DJ Delorie wrote:


Assuming that the formula sizeof(type)*char_bit==precision works for all


It doesn't.  The MSP430 has __int20, for example.


Well, it wasn't a hard requirement, it is just that the library has to 
use a more complicated way to get the precision (use (unsigned TYPE)(-1) 
to get the unsigned max and compute the precision from that, probably).
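
A minimal sketch of that computation in C++11, with __GLIBCXX_TYPE_INT_N_0
standing in for whatever macro the patch ends up exposing (the helper name
here is made up):

  // Derive the precision of an extended integer type from its unsigned
  // max rather than from sizeof, so __int20 reports 20, not 32.
  template<typename _UTp>
    constexpr int
    __int_n_digits(_UTp __max = _UTp(-1), int __bits = 0)
    { return __max == 0 ? __bits
                        : __int_n_digits<_UTp>(__max >> 1, __bits + 1); }

  // e.g. __int_n_digits<unsigned __GLIBCXX_TYPE_INT_N_0>() == 20 on msp430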



Would it be acceptable for the compiler to always define a set of
macros for each of the intN types?


What set of macros do you have in mind?


I would have thought that would be discouraged,


If we can't think of another way...

--
Marc Glisse


Re: [patch] change specific int128 -> generic intN

2014-05-09 Thread DJ Delorie

 Well, it wasn't a hard requirement, it is just that the library has
 to use a more complicated way to get the precision (use (unsigned
 TYPE)(-1) to get the unsigned max and compute the precision from
 that, probably).

We could define macros for the precision too, and we already know the
max and min values as macros; it's just a matter of exporting that info
to the C++ headers somehow.

  Would it be acceptable for the compiler to always define a set of
  macros for each of the intN types?
 
 What set of macros do you have in mind?

In general, I meant.  They'd be predefined for pretty much every
compile, not just the C++ headers.
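
To make that concrete, the predefined macros being discussed could look
something like this on a target with __int20 (the type/bitsize macro names
are hypothetical at this point in the thread; the MIN/MAX values match the
-dM output posted elsewhere in the thread):

  #define __GLIBCXX_TYPE_INT_N_0 __int20
  #define __GLIBCXX_BITSIZE_INT_N_0 20
  #define __INT20_MAX__ 0x7ffffL
  #define __UINT20_MAX__ 0xfffffUL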


Re: [patch] change specific int128 -> generic intN

2014-05-08 Thread DJ Delorie

The libstdc++v3 headers have __int128 hard-coded all over the place.
Any suggestions on parameterizing those for the __intN types that are
actually supported by the target?


Re: [patch] change specific int128 -> generic intN

2014-05-08 Thread Marc Glisse

On Thu, 8 May 2014, DJ Delorie wrote:


The libstdc++v3 headers have __int128 hard-coded all over the place.
Any suggestions on parameterizing those for the __intN types that are
actually supported by the target?


(adding libstdc++@ in Cc:)

The first idea that comes to mind (so possibly not such a good one) is to 
provide predefined macros:

#define __EXTENDED_INTEGER_TYPE_1__ __int24
#define __EXTENDED_INTEGER_TYPE_2__ __int128
#undef __EXTENDED_INTEGER_TYPE_3__

Assuming that the formula sizeof(type)*char_bit==precision works for all 
types, it should be sufficient for the library (abs, type_traits and 
numeric_limits).


--
Marc Glisse


Re: [patch] change specific int128 -> generic intN

2014-05-08 Thread DJ Delorie

 Assuming that the formula sizeof(type)*char_bit==precision works for all 

It doesn't.  The MSP430 has __int20, for example.

Would it be acceptable for the compiler to always define a set of
macros for each of the intN types?  I would have thought that would be
discouraged, but it would be an easier way to handle it.



Re: [patch] change specific int128 -> generic intN

2014-05-07 Thread Joseph S. Myers
On Sun, 4 May 2014, DJ Delorie wrote:

  I'm not aware of any reason those macros need to have decimal values.  I'd 
  suggest removing the precomputed table and printing them in hex, which is 
  easy for values of any precision.
 
 Here's an independent change that removes the decimal table and
 replaces it with generated hex values.  I included the relevant output
 of gcc -E -dM also.

OK (presuming the usual bootstrap and regression test, which should 
provide a reasonably thorough test of this code through the stdint.h 
tests).

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [patch] change specific int128 -> generic intN

2014-05-07 Thread DJ Delorie

 OK (presuming the usual bootstrap and regression test, which should 
 provide a reasonably thorough test of this code through the stdint.h 
 tests).

Bootstrapped with and without the patch on x86-64, no regressions.
Committed.  Thanks!


Re: [patch] change specific int128 -> generic intN

2014-05-05 Thread Mike Stump
On May 4, 2014, at 2:09 PM, DJ Delorie <d...@redhat.com> wrote:
 Here's an independent change that removes the decimal table and
 replaces it with generated hex values.  I included the relevant output
 of gcc -E -dM also.

 +  /* Allows bit sizes up to 128 bits.  */
 +#define PBOH_SZ (128/4+4)

After tomorrow, this should be:

MAX_BITSIZE_MODE_ANY_INT, not 128.

 +  char value[PBOH_SZ];

:-)


Re: [patch] change specific int128 -> generic intN

2014-05-05 Thread DJ Delorie

 After tomorrow, this should be:
 
 MAX_BITSIZE_MODE_ANY_INT, not 128.

Heh.

Ok, after tomorrow, assume my patch has that instead :-)
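
Presumably the revised hunk then amounts to something like this (sketch,
not the committed code):

  +  /* Size the buffer from the widest supported integer mode instead of
  +     assuming 128 bits.  */
  +#define PBOH_SZ (MAX_BITSIZE_MODE_ANY_INT/4+4)
  +  char value[PBOH_SZ];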


Re: [patch] change specific int128 -> generic intN

2014-05-04 Thread DJ Delorie

 I'm not aware of any reason those macros need to have decimal values.  I'd 
 suggest removing the precomputed table and printing them in hex, which is 
 easy for values of any precision.

Here's an independent change that removes the decimal table and
replaces it with generated hex values.  I included the relevant output
of gcc -E -dM also.

Index: c-family/c-cppbuiltin.c
===
--- c-family/c-cppbuiltin.c (revision 209391)
+++ c-family/c-cppbuiltin.c (working copy)
@@ -1285,45 +1295,71 @@ builtin_define_constants (const char *ma
 static void
 builtin_define_type_max (const char *macro, tree type)
 {
   builtin_define_type_minmax (NULL, macro, type);
 }
 
+static void
+print_bits_of_hex (char *buf, int bufsz, int count)
+{
+  gcc_assert (bufsz > 3);
+  *buf++ = '0';
+  *buf++ = 'x';
+  bufsz -= 2;
+
+  gcc_assert (count > 0);
+
+  switch (count % 4) {
+  case 0:
+break;
+  case 1:
+*buf++ = '1';
+bufsz --;
+count -= 1;
+break;
+  case 2:
+*buf++ = '3';
+bufsz --;
+count -= 2;
+break;
+  case 3:
+*buf++ = '7';
+bufsz --;
+count -= 3;
+break;
+  }
+  while (count >= 4)
+{
+  gcc_assert (bufsz > 1);
+  *buf++ = 'f';
+  bufsz --;
+  count -= 4;
+}
+  gcc_assert (bufsz > 0);
+  *buf++ = 0;
+}
+
 /* Define MIN_MACRO (if not NULL) and MAX_MACRO for TYPE based on the
precision of the type.  */
 
 static void
 builtin_define_type_minmax (const char *min_macro, const char *max_macro,
tree type)
 {
-  static const char *const values[]
-    = { "127", "255",
-        "32767", "65535",
-        "2147483647", "4294967295",
-        "9223372036854775807", "18446744073709551615",
-        "170141183460469231731687303715884105727",
-        "340282366920938463463374607431768211455" };
+  /* Allows bit sizes up to 128 bits.  */
+#define PBOH_SZ (128/4+4)
+  char value[PBOH_SZ];
 
-  const char *value, *suffix;
+  const char *suffix;
   char *buf;
-  size_t idx;
+  int bits;
 
-  /* Pre-rendering the values mean we don't have to futz with printing a
- multi-word decimal value.  There are also a very limited number of
- precisions that we support, so it's really a waste of time.  */
-  switch (TYPE_PRECISION (type))
-{
-case 8:idx = 0; break;
-case 16:   idx = 2; break;
-case 32:   idx = 4; break;
-case 64:   idx = 6; break;
-case 128:  idx = 8; break;
-default:gcc_unreachable ();
-}
+  bits = TYPE_PRECISION (type) + (TYPE_UNSIGNED (type) ? 0 : -1);
+
+  print_bits_of_hex (value, PBOH_SZ, bits);
 
-  value = values[idx + TYPE_UNSIGNED (type)];
   suffix = type_suffix (type);
 
   buf = (char *) alloca (strlen (max_macro) + 1 + strlen (value)
  + strlen (suffix) + 1);
   sprintf (buf, "%s=%s%s", max_macro, value, suffix);
 


 $ ./cc1 -quiet -E -dM dj.c | egrep -i '(min|max)' | grep 0x | sort
#define __INT16_MAX__ 0x7fff
#define __INT20_MAX__ 0x7ffffL
#define __INT32_MAX__ 0x7fffffffL
#define __INT64_MAX__ 0x7fffffffffffffffLL
#define __INT8_MAX__ 0x7f
#define __INTMAX_MAX__ 0x7fffffffffffffffLL
#define __INTPTR_MAX__ 0x7fff
#define __INT_FAST16_MAX__ 0x7fff
#define __INT_FAST32_MAX__ 0x7fffffffL
#define __INT_FAST64_MAX__ 0x7fffffffffffffffLL
#define __INT_FAST8_MAX__ 0x7fff
#define __INT_LEAST16_MAX__ 0x7fff
#define __INT_LEAST32_MAX__ 0x7fffffffL
#define __INT_LEAST64_MAX__ 0x7fffffffffffffffLL
#define __INT_LEAST8_MAX__ 0x7f
#define __INT_MAX__ 0x7fff
#define __LONG_LONG_MAX__ 0x7fffffffffffffffLL
#define __LONG_MAX__ 0x7fffffffL
#define __PTRDIFF_MAX__ 0x7fff
#define __SCHAR_MAX__ 0x7f
#define __SHRT_MAX__ 0x7fff
#define __SIG_ATOMIC_MAX__ 0x7fff
#define __SIZE_MAX__ 0xffffU
#define __UINT16_MAX__ 0xffffU
#define __UINT20_MAX__ 0xfffffUL
#define __UINT32_MAX__ 0xffffffffUL
#define __UINT64_MAX__ 0xffffffffffffffffULL
#define __UINT8_MAX__ 0xff
#define __UINTMAX_MAX__ 0xffffffffffffffffULL
#define __UINTPTR_MAX__ 0xffffU
#define __UINT_FAST16_MAX__ 0xffffU
#define __UINT_FAST32_MAX__ 0xffffffffUL
#define __UINT_FAST64_MAX__ 0xffffffffffffffffULL
#define __UINT_FAST8_MAX__ 0xffffU
#define __UINT_LEAST16_MAX__ 0xffffU
#define __UINT_LEAST32_MAX__ 0xffffffffUL
#define __UINT_LEAST64_MAX__ 0xffffffffffffffffULL
#define __UINT_LEAST8_MAX__ 0xff
#define __WCHAR_MAX__ 0x7fffffffL
#define __WINT_MAX__ 0xffffU
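
For reference, a tiny standalone re-implementation of the hex rendering
above, handy for sanity-checking those values outside of cc1 (illustrative
only; the real code is the print_bits_of_hex in the patch):

  #include <assert.h>
  #include <stdio.h>
  #include <string.h>

  /* Same scheme as the patch: one leading digit for count % 4 bits,
     then one 'f' per remaining nibble.  */
  static void
  hex_of_bits (char *buf, int count)
  {
    static const char lead[] = { 0, '1', '3', '7' };
    *buf++ = '0';
    *buf++ = 'x';
    if (count % 4)
      *buf++ = lead[count % 4];
    for (count -= count % 4; count > 0; count -= 4)
      *buf++ = 'f';
    *buf = 0;
  }

  int
  main (void)
  {
    char buf[64];
    hex_of_bits (buf, 19);                 /* signed __int20 max */
    assert (strcmp (buf, "0x7ffff") == 0);
    hex_of_bits (buf, 20);                 /* unsigned __int20 max */
    assert (strcmp (buf, "0xfffff") == 0);
    hex_of_bits (buf, 15);                 /* __INT_MAX__ above */
    puts (buf);                            /* prints 0x7fff */
    return 0;
  }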


Re: [patch] change specific int128 -> generic intN

2014-05-02 Thread Joseph S. Myers
On Thu, 1 May 2014, DJ Delorie wrote:

 Do we have a routine to do that already?

I don't know.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [patch] change specific int128 -> generic intN

2014-05-01 Thread Joseph S. Myers
On Mon, 14 Apr 2014, DJ Delorie wrote:

 The only non-automatic part of this is the table of pre-computed
 constants in builtin_define_type_minmax() (c-family/c-cppbuiltin.c).

I'm not aware of any reason those macros need to have decimal values.  I'd 
suggest removing the precomputed table and printing them in hex, which is 
easy for values of any precision.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [patch] change specific int128 -> generic intN

2014-05-01 Thread DJ Delorie

Do we have a routine to do that already?


Re: [patch] change specific int128 -> generic intN

2014-04-17 Thread Marc Glisse

On Tue, 15 Apr 2014, DJ Delorie wrote:


I wasn't sure what to do with that array, since it was static and
couldn't have empty slots in it like the arrays in tree.h.  Also, do
we need to have *every* type in that list?  What's the rule for
whether a type gets installed there or not?  The comment says
"guaranteed to be in the runtime support", but does that mean for this
particular build (wrt multilibs)?  Not all intN types are guaranteed
(even the int128 types were not guaranteed to be supported before my
patch).  In other parts of the patch, just taking out the special case
for __int128 was sufficient to do the right thing for all __intN
types.


You need someone who understands this better than me (ask Jason). To be 
able to throw/catch a type, you need some typeinfo symbols. The front-end 
generates that for classes when they are defined. For fundamental types, 
it assumes libsupc++ will provide it, and the function you are modifying 
is the one generating libsupc++ (I am surprised your patch didn't cause 
any failure on x86_64, at least in abi_check). We need to generate the 
typeinfo for __intN, either in libsupc++, or in each TU, and since both 
cases will require code, I assume libsupc++ is preferable.
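
In other words (a minimal illustration, with __int20 standing in for any
enabled __intN):

  #include <typeinfo>

  // Any TU doing this needs typeinfo for the fundamental type.  For class
  // types the front end emits it with the class; for built-in types it is
  // expected to come from libsupc++, which is what emit_support_tinfos
  // populates when the library is built.
  const std::type_info&
  int20_info ()
  {
    return typeid (__int20);   // would also be needed to throw/catch one
  }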



I can certainly put the intN types in there, but note that it would
mean regenerating the fundamentals[] array at runtime to include those
types which are supported at the time.


After the patch I linked, it should just mean calling the helper function 
on your new types, no need to touch the array.



Do the entries in the array need to be in a particular order?


No, any random order would do.

--
Marc Glisse


Re: [patch] change specific int128 -> generic intN

2014-04-15 Thread DJ Delorie

I wasn't sure what to do with that array, since it was static and
couldn't have empty slots in it like the arrays in tree.h.  Also, do
we need to have *every* type in that list?  What's the rule for
whether a type gets installed there or not?  The comment says
"guaranteed to be in the runtime support", but does that mean for this
particular build (wrt multilibs)?  Not all intN types are guaranteed
(even the int128 types were not guaranteed to be supported before my
patch).  In other parts of the patch, just taking out the special case
for __int128 was sufficient to do the right thing for all __intN
types.

I can certainly put the intN types in there, but note that it would
mean regenerating the fundamentals[] array at runtime to include those
types which are supported at the time.

Do the entries in the array need to be in a particular order?


Re: [patch] change specific int128 -> generic intN

2014-04-14 Thread Marc Glisse

On Mon, 14 Apr 2014, DJ Delorie wrote:


* rtti.c (emit_support_tinfos): Remove __int128-specific entries.


Hello,

I wanted to check the interaction with:
http://gcc.gnu.org/ml/gcc-patches/2014-04/msg00618.html

but I don't see where you moved the __int128 typeinfo generation.