Implement TARGET_ATOMIC_ASSIGN_EXPAND_FENV for powerpc*-*-linux* soft-float and e500
	  == NULL_TREE)
+    {
+      atomic_update_decl
+	= build_decl (BUILTINS_LOCATION, FUNCTION_DECL,
+		      get_identifier ("__atomic_feupdateenv"),
+		      build_function_type_list (void_type_node,
+						const_double_ptr,
+						NULL_TREE));
+      TREE_PUBLIC (atomic_update_decl) = 1;
+      DECL_EXTERNAL (atomic_update_decl) = 1;
+    }
+
+  tree fenv_var = create_tmp_var (double_type_node, NULL);
+  mark_addressable (fenv_var);
+  tree fenv_addr = build1 (ADDR_EXPR, double_ptr_type_node, fenv_var);
+
+  *hold = build_call_expr (atomic_hold_decl, 1, fenv_addr);
+  *clear = build_call_expr (atomic_clear_decl, 0);
+  *update = build_call_expr (atomic_update_decl, 1,
+			     fold_convert (const_double_ptr, fenv_addr));
+#endif
+      return;
+    }
+
   tree mffs = rs6000_builtin_decls[RS6000_BUILTIN_MFFS];
   tree mtfsf = rs6000_builtin_decls[RS6000_BUILTIN_MTFSF];
   tree call_mffs = build_call_expr (mffs, 0);

Index: gcc/config.in
===================================================================
--- gcc/config.in	(revision 216974)
+++ gcc/config.in	(working copy)
@@ -1699,10 +1699,6 @@
 #undef HAVE_WORKING_VFORK
 #endif
 
-/* Define if isl is in use. */
-#ifndef USED_FOR_TARGET
-#undef HAVE_isl
-#endif
 
 /* Define if cloog is in use. */
 #ifndef USED_FOR_TARGET
@@ -1709,6 +1705,13 @@
 #undef HAVE_cloog
 #endif
 
+
+/* Define if isl is in use. */
+#ifndef USED_FOR_TARGET
+#undef HAVE_isl
+#endif
+
+
 /* Define if F_SETLKW supported by fcntl. */
 #ifndef USED_FOR_TARGET
 #undef HOST_HAS_F_SETLKW
@@ -1882,6 +1885,18 @@
 /* Define if your target C library provides the `dl_iterate_phdr' function. */
 #undef TARGET_DL_ITERATE_PHDR
 
+/* GNU C Library major version number used on the target, or 0. */
+#ifndef USED_FOR_TARGET
+#undef TARGET_GLIBC_MAJOR
+#endif
+
+
+/* GNU C Library minor version number used on the target, or 0. */
+#ifndef USED_FOR_TARGET
+#undef TARGET_GLIBC_MINOR
+#endif
+
+
 /* Define if your target C library provides stack protector support */
 #ifndef USED_FOR_TARGET
 #undef TARGET_LIBC_PROVIDES_SSP
Index: gcc/configure
===================================================================
--- gcc/configure	(revision 216974)
+++ gcc/configure	(working copy)
@@ -26700,6 +26700,16 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $glibc_version_major.$glibc_version_minor" >&5
 $as_echo "$glibc_version_major.$glibc_version_minor" >&6; }
 
+cat >>confdefs.h <<_ACEOF
+#define TARGET_GLIBC_MAJOR $glibc_version_major
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define TARGET_GLIBC_MINOR $glibc_version_minor
+_ACEOF
+
+
 # Check whether --enable-gnu-unique-object was given.
 if test "${enable_gnu_unique_object+set}" = set; then :
   enableval=$enable_gnu_unique_object; case $enable_gnu_unique_object in
Index: gcc/configure.ac
===================================================================
--- gcc/configure.ac	(revision 216974)
+++ gcc/configure.ac	(working copy)
@@ -4503,6 +4503,10 @@
      glibc_version_minor=0
      glibc_version_minor=`echo $glibc_version_minor_define | sed -e 's/.*__GLIBC_MINOR__[ 	]*//'`
   fi]])
 AC_MSG_RESULT([$glibc_version_major.$glibc_version_minor])
+AC_DEFINE_UNQUOTED([TARGET_GLIBC_MAJOR], [$glibc_version_major],
+[GNU C Library major version number used on the target, or 0.])
+AC_DEFINE_UNQUOTED([TARGET_GLIBC_MINOR], [$glibc_version_minor],
+[GNU C Library minor version number used on the target, or 0.])
 
 AC_ARG_ENABLE(gnu-unique-object,
 [AS_HELP_STRING([--enable-gnu-unique-object],

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH] Use CONVERT_EXPR_P and friends in the middle-end
On Fri, 31 Oct 2014, Richard Biener wrote:

> This fixes the few places where explicit checks for NOP_EXPR or
> CONVERT_EXPRs crept in.

The goal really should be to eliminate anything that distinguishes the two, and then combine them (eliminate NOP_EXPR) (as I said in https://gcc.gnu.org/ml/gcc-patches/2009-09/msg01975.html).

> A noticeable change may be the tree-eh.c one where we previously
> considered FP NOP_EXPRs trapping if flag_trapping_math ("Any fp
> arithmetic may trap") but now, like FP CONVERT_EXPRs, only when
> honor_nans (but for some reason the honor_nans cases don't check
> flag_trapping_math).  I'm not 100% sure which variant is more correct
> (this is FP -> FP conversions, thus widenings, truncations, converts
> from/to DFP).

Well, use of honor_nans there is confused.  (honor_nans is set in operation_could_trap_p in a way that checks flag_trapping_math && !flag_finite_math_only - but doesn't check HONOR_NANS on the relevant floating-point mode.)

Setting aside for the moment that -ftrapping-math covers both cases where actual trap handlers are called, and cases where exception flags are set without calling trap handlers (the latter being the only one covered by ISO C at present), the following applies:

* Conversions of quiet NaNs from one floating-point type to another do not raise exceptions.  Conversions of signaling NaNs do, however, and conversions of finite values can raise "inexact" (except for widening from a narrower to a wider type with the same radix) and "underflow" (except for widening, again, and with an exception to the exception in the case of __float80 to __float128 conversion with underflow traps enabled).

* Conversions from floating point to integer (FIX_TRUNC_EXPR) do however raise "invalid" for NaN (or infinite) arguments - and for finite arguments outside the range of the destination type (this includes -1 and below converted to unsigned types).  Whether they raise "inexact" for non-integer arguments is unspecified.

To a first approximation, even with -ffinite-math-only, assume with -ftrapping-math that "invalid" may be raised for such conversions because of out-of-range values (although the range of binary16 - currently only supported as ARM __fp16 - is narrow enough that if you ignore non-finite values, conversions to some signed integer types are guaranteed in-range).

It looks like the honor_nans argument was intended for the case of ordered comparisons, for which it's correct that quiet NaNs raise exceptions, and is being misused for conversions, where fp_operation && flag_trapping_math is the right thing to check (although there are certain subcases, depending on the types involved, where in fact you can't have traps).  That in turn is the default, suggesting just removing the CASE_CONVERT and FIX_TRUNC_EXPR cases (the effect of which is to treat certain conversions as trapping for -ffinite-math-only where previously they weren't treated as trapping).

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [Patch] MIPS configuration patch to enable --with-[arch,endian,abi]
On Fri, 31 Oct 2014, Steve Ellcey wrote:

> So the question is: should /lib and /usr/lib always be for the default
> GCC ABI (whatever that may be) or should /lib and /usr/lib always be
> for the MIPS (old) 32 ABI (with /lib32 and /usr/lib32 always being for
> the MIPS N32 ABI and /lib64 and /usr/lib64 always being for the MIPS 64
> ABI)?  I chose the latter as it seemed clearer and more consistent and
> that is why I also needed to change mips.h to add the overrides of
> STANDARD_STARTFILE_PREFIX_1 and STANDARD_STARTFILE_PREFIX_2.  These
> overrides are not needed if building a multilib GCC because then
> MULTILIB_OSDIRNAMES in t-linux64 takes care of everything, but they are
> needed if building a non-multilib GCC with a default ABI other than the
> old 32 bit ABI.

/lib and /usr/lib should always be for o32.

> +	if test "x$with_endian" != x; then
> +		default_mips_endian=$with_endian
> +	fi

install.texi currently says --with-endian is only for SH; you'll need to update the documentation to say what versions of this option are supported for MIPS.

Also, t-linux64 uses

MIPS_EL = $(if $(filter %el, $(firstword $(subst -, ,$(target)))),el)

to determine endianness for Debian multiarch purposes, and I think this will need to change when you allow --with-endian to control the endianness.

-- 
Joseph S. Myers
jos...@codesourcery.com
Make soft-fp symbols into compat symbols for powerpc*-*-linux*
 version
+    exit
+}

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [gofrontend-dev] Re: [PATCH 7/9] Gccgo port to s390[x] -- part I
On Thu, 30 Oct 2014, Dominik Vogt wrote:

> platforms need to be added.  Personally I cannot provide fixed tests
> for all the ABIs either, so my suggestion is to xfail the test on all
> targets except s390[x] and x86_64 and leave it to the

You should never do something in a test for x86_64 and not i?86, because they cover exactly the same set of targets (if only LP64 x86 / x86_64 is relevant, use { { i?86-*-* x86_64-*-* } && lp64 }).

-- 
Joseph S. Myers
jos...@codesourcery.com
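For reference, that selector would appear in a DejaGnu test directive along these lines (an illustrative sketch; the directive placement within a particular test file is an assumption):

```
/* { dg-do run { target { { i?86-*-* x86_64-*-* } && lp64 } } } */
```

The outer braces group the two target globs into one alternative set, which is then conjoined with the lp64 effective-target keyword.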
Re: [PATCH 06/10] Heart of the JIT implementation (was: Re: [PATCH 0/5] Merger of jit branch (v2))
On Thu, 30 Oct 2014, David Malcolm wrote:

> Looking at the build logs, I see: "-fPIC" within the xgcc args in the
> libgcc build logs, and

That seems to depend on t-libgcc-pic, but that appears to cover most likely hosts (including any where I can be confident PIC is actually needed for shared libraries).

> > It's certainly not clear that the -static-libstdc++ -static-libgcc
> > default for building the compiler executables is the right one for
> > building libgccjit.so.
>
> Agreed, but it's unclear to me what the default should be, and how to
> go about fixing it.  That said, it appears that people who want
> libgccjit.so to dynamically-link against libgcc and libstdc++ can
> already do so, by

Can do so for libgccjit.so but not the compiler executables?  (There are no doubt cases where it makes sense for the compiler executables to be dynamically linked with the shared libraries, but also I think cases for linking only libgccjit.so with the shared libraries.)

> Do you have thoughts on how I should address this?

No.

> Also, given that the code works as-is, is resolving this a blocker for
> merging the jit branch?  (I've been rebasing, and plan to repost the
> fixed-up patches for review shortly)

I don't see it as a blocker, but I would not be surprised if having the libraries statically linked into libgccjit.so causes problems (is it safe to have two completely separate copies of libstdc++ in the same process?  I don't know.).

-- 
Joseph S. Myers
jos...@codesourcery.com
Optimize powerpc*-*-linux* e500 hardfp/soft-fp use
compile
+# certain operations into a call to the libgcc function, which thus
+# needs to be defined elsewhere to use software floating point), also
+# define hardfp_exclusions to be a list of those functions,
+# e.g. unorddf2.
 
 # Functions parameterized by a floating-point mode M.
 hardfp_func_bases := addM3 subM3 negM2 mulM3 divM3
@@ -50,6 +57,8 @@
 hardfp_func_list += $(foreach pair, $(hardfp_truncations), \
 		      $(subst M,$(pair),truncM2))
 
+hardfp_func_list := $(filter-out $(hardfp_exclusions),$(hardfp_func_list))
+
 # Regexp for matching a floating-point mode.
 hardfp_mode_regexp := $(shell echo $(hardfp_float_modes) | sed 's/ /\\|/g')
Index: libgcc/config/t-softfp
===================================================================
--- libgcc/config/t-softfp	(revision 216787)
+++ libgcc/config/t-softfp	(working copy)
@@ -31,6 +31,10 @@
 # is a soft-float mode; for example, "sftf" where sf is hard-float and
 # tf is soft-float.
 #
+# If some additional functions should be built that are not implied by
+# the above settings, also define softfp_extras as a list of those
+# functions, e.g. unorddf2.
+#
 # If the libgcc2.c functions should not be replaced, also define:
 #
 #   softfp_exclude_libgcc2 := y
@@ -61,7 +65,8 @@
 		   $(foreach i,$(softfp_int_modes), \
 				$(softfp_floatint_funcs))) \
 	$(foreach e,$(softfp_extensions),extend$(e)2) \
-	$(foreach t,$(softfp_truncations),trunc$(t)2)
+	$(foreach t,$(softfp_truncations),trunc$(t)2) \
+	$(softfp_extras)
 
 ifeq ($(softfp_exclude_libgcc2),y)
 # This list is taken from mklibgcc.in and doesn't presently allow for
Index: libgcc/config.host
===================================================================
--- libgcc/config.host	(revision 216787)
+++ libgcc/config.host	(working copy)
@@ -1000,9 +1000,12 @@
 	soft)
 		tmake_file="${tmake_file} t-softfp-sfdf t-softfp"
 		;;
-	e500v1|e500v2)
-		tmake_file="${tmake_file} t-softfp-sfdf t-softfp-excl t-softfp"
+	e500v1)
+		tmake_file="${tmake_file} rs6000/t-e500v1-fp t-softfp t-hardfp"
 		;;
+	e500v2)
+		tmake_file="${tmake_file} t-hardfp-sfdf rs6000/t-e500v2-fp t-softfp t-hardfp"
+		;;
 	*)
 		echo "Unknown ppc_fp_type $ppc_fp_type" 1>&2
 		exit 1

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH] warning about const multidimensional array as function parameter
On Tue, 28 Oct 2014, Martin Uecker wrote:

> attached is a revised and extended patch.  Changes with respect to the
> previous patch are:

Thanks for the revised patch.  I've moved this to gcc-patches as the more appropriate mailing list for discussion of specific patches as opposed to more general questions.  It would also be a good idea to get started on the paperwork http://git.savannah.gnu.org/cgit/gnulib.git/plain/doc/Copyright/request-assign.future if you haven't already.

> Note that there is now a semantic (and not only diagnostic) change.
> Without this patch
>
>   const int a[1];
>   int b[1];
>   (x ? a : b)
>
> would return a 'void*' and a warning about pointer type mismatch.
> With this patch the conditional has type 'const int (*)[1]'.

I believe that is safe (in that that conditional expression isn't valid in ISO C).  What wouldn't be safe is making a conditional expression between "void *" and "const int (*)[]" have type "const void *" instead of "void *".

> * c-typeck.c: New behavior for pointers to arrays with qualifiers

Note that the ChangeLog entry should name the functions being changed and what changed in each function (it's also helpful to diff with "svn diff -x -up" so that the function names are visible in the diff).

> @@ -6090,7 +6105,31 @@
> 		   == c_common_signed_type (mvr))
> 	      && TYPE_ATOMIC (mvl) == TYPE_ATOMIC (mvr)))
>     {
> -      if (pedantic
> +      /* Warn about conversions for pointers to arrays with different
> +	 qualifiers on the element type.  Otherwise we only warn about
> +	 these as being incompatible pointers with -pedantic.  */
> +      if (OPT_Wdiscarded_array_qualifiers
> +	  && ((TREE_CODE (ttr) == ARRAY_TYPE)
> +	      || TREE_CODE (ttl) == ARRAY_TYPE))
> +	{
> +	  ttr = strip_array_types(ttr);

Note there should be a space before the open parenthesis.

> +	  ttl = strip_array_types(ttl);
> +
> +	  if (TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC (ttr)
> +	      & ~TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC (ttl))
> +	    WARN_FOR_QUALIFIERS (location, expr_loc,

WARN_FOR_QUALIFIERS uses pedwarn.  That means this is not safe, because pedwarns become errors with -pedantic-errors, but this includes cases that are valid in ISO C and so must not become errors with -pedantic-errors (such as converting "const int (*)[]" to "void *").  So you need a variant of WARN_FOR_QUALIFIERS that uses warning_at instead of pedwarn.

But then you *also* need to be careful not to lose errors with -pedantic-errors for cases where they are required by ISO C but not with the C++ handling of qualifiers (such as converting "const void *" to "const int (*)[]") - so you can't have an if / else chain where an earlier case gives a plain warning and stops a later case from running that would give a pedwarn required by ISO C.

I think the correct logic might be something like:

* If the existing check for discarding qualifiers applies, then: recheck the qualifiers after strip_array_types; give the existing WARN_FOR_QUALIFIERS diagnostic if either qualifiers are still being discarded with the C++-style interpretation, or -pedantic.

* As the next case after that existing check, see if qualifiers are being discarded with the C++-style interpretation even though they weren't with the C standard interpretation, and if so then give diagnostics using the new macro that uses warning_at instead of pedwarn.

(And otherwise the code would fall through to the existing cases relating to mismatch in signedness between two pointers to integers.)

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [C PATCH] Add 'aka's on type printing in diagnostics
On Sat, 25 Oct 2014, Marek Polacek wrote:

> +      pp_c_ws_string (cpp, "aka");

That should be _("aka"), as it's an English word, not a C syntax construct.  OK with that change.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH][ARM] Fix/revert fallout from machine_mode change
On Wed, 29 Oct 2014, Kyrill Tkachov wrote:

> Hi all,
>
> This fixes an arm build failure due to removing the 'enum' keyword from
> machine_mode.  Since libgcc2 is compiled with C rather than C++ we need
> it there for the definition of CUMULATIVE_ARGS.

But why is CUMULATIVE_ARGS needed for libgcc?  It's desirable to eliminate use of host-side headers in target-side code (I'd welcome more people picking up pieces of the target macros work described at https://gcc.gnu.org/wiki/Top-Level_Libgcc_Migration, though you shouldn't rely on the distinctions there about where I suggest a particular macro should move; it's quite likely there are better choices in various cases).

Thus, if something in host-side headers is causing problems in target-side code, I'd think the obvious fix is to condition out the relevant code when building for the target, rather than fixing it to work (although meaningless) for the target.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [AArch64, Docs, Patch] Add reference to ACLE in docs.
On Tue, 28 Oct 2014, Tejas Belagod wrote:

> Hi,
>
> Here is a patch that consolidates the AArch64 and ARM Intrinsics
> sections in extend.texi into one ACLE section to avoid information
> repetition, and adds a reference to the ARM C Language Extensions spec
> on infocenter.arm.com.

This seems to lose the information about which extensions are supported by GCC (given that not all of ACLE is supported; e.g. arm_acle.h has only the CRC intrinsics, while __fp16 isn't supported for AArch64 and the support for ARM corresponds to an older version of the specification).

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH 5/5] add libcc1
On Tue, 28 Oct 2014, Phil Muldoon wrote:

> Joseph,
>
> Hi, sorry for the troubles!  I am having difficulty seeing this fail on
> my system.  I built gmp from upstream, installed it, and pointed to the
> install location with --with-gmp.  Which stage does your build fail at?

To get the failure you need not to have GMP installed somewhere the bootstrap compiler would otherwise find (e.g. uninstall your system GMP package before testing).  The failure is building stage 1.

> I am actually not totally sure how to respect the --with-gmp argument
> in libcc1.  auto* tools are not my strongest skill. ;)  I notice
> gcc/configure.ac I think just exports the variables to Makefile.in from
> the main configure script.  Is that what we should do in this case?

Toplevel passes GMPINC down to subdirectories.  I think you should (a) copy AC_ARG_VAR(GMPINC,[How to find GMP include files]) from gcc/configure.ac; (b) copy "GMPINC = @GMPINC@" from gcc/Makefile.in; (c) add $(GMPINC) to AM_CPPFLAGS.

-- 
Joseph S. Myers
jos...@codesourcery.com
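Put together, the three steps suggested above would look roughly like this (a sketch only; the libcc1 file paths and the surrounding AM_CPPFLAGS contents are assumptions):

```
# libcc1/configure.ac
AC_ARG_VAR(GMPINC, [How to find GMP include files])

# libcc1/Makefile.am
GMPINC = @GMPINC@
AM_CPPFLAGS = $(GMPINC)
```

With that in place, the GMPINC value that toplevel passes down reaches the preprocessor flags of every compile in the subdirectory.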
Re: [pr/63582] Don't even store __int128 types if not supported.
On Sat, 25 Oct 2014, DJ Delorie wrote:

> Fixes PR/63582.  Tested with no regressions on x86-64 and ix86.  Ok?
>
> 	* tree.c (build_common_tree_nodes): Don't even store the __int128
> 	types if they're not supported.

OK.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [Patch] Add MIPS flag to avoid use of ldc1/sdc1/ldxc1/sdxc1
New command-line options need documenting in invoke.texi. -- Joseph S. Myers jos...@codesourcery.com
Only allow e500 double in SPE_SIMD_REGNO_P registers
rs6000_hard_regno_nregs_internal allows SPE vectors only in registers satisfying SPE_SIMD_REGNO_P (i.e. register numbers 0 to 31).  However, the corresponding test for e500 double treats all registers as being able to store a 64-bit value, rather than just those GPRs.

Logically this inconsistency is wrong; in addition, it causes problems unwinding from signal handlers.  linux-unwind.h uses ARG_POINTER_REGNUM as a place to store the return address from a signal handler, but this logic in rs6000_hard_regno_nregs_internal results in that being considered an 8-byte register, resulting in assertion failures.  (https://gcc.gnu.org/ml/gcc-patches/2014-09/msg02625.html first needs to be applied for unwinding to work in general on e500.)

This patch makes rs6000_hard_regno_nregs_internal handle the e500 double case consistently with SPE vectors.

Tested with no regressions with cross to powerpc-linux-gnuspe (given the aforementioned patch applied).  Failures of signal handling unwinding tests such as gcc.dg/cleanup-{8,9,10,11}.c are fixed by this patch.  OK to commit?

2014-10-24  Joseph Myers  <jos...@codesourcery.com>

	* config/rs6000/rs6000.c (rs6000_hard_regno_nregs_internal): Do
	not allow e500 double in registers not satisfying
	SPE_SIMD_REGNO_P.

Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c	(revision 216673)
+++ gcc/config/rs6000/rs6000.c	(working copy)
@@ -1721,7 +1721,7 @@ rs6000_hard_regno_nregs_internal (int regno, enum
      SCmode so as to pass the value correctly in a pair of
      registers.  */
   else if (TARGET_E500_DOUBLE && FLOAT_MODE_P (mode) && mode != SCmode
-	   && !DECIMAL_FLOAT_MODE_P (mode))
+	   && !DECIMAL_FLOAT_MODE_P (mode) && SPE_SIMD_REGNO_P (regno))
     reg_size = UNITS_PER_FP_WORD;
 
   else

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [C PATCH] Don't output warning twice (PR c/63626)
On Thu, 23 Oct 2014, Marek Polacek wrote:

> At present, we print the "inline function ... declared but never
> defined" warning twice.  The reason for that is that this warning is
> being printed in pop_scope, which is called when popping file scope
> (c_common_parse_file -> pop_file_scope), and when popping external
> scope (c_write_global_declarations).  I think we should not print this
> warning when popping the external scope.  We don't have to worry about
> nested functions here.
>
> Writing a proper testcase is a little bit tricky, but I hope what I did
> would work fine.
>
> Bootstrapped/regtested on x86_64-linux, ok for trunk?
>
> 2014-10-23  Marek Polacek  <pola...@redhat.com>
>
> 	PR c/63626
> 	* c-decl.c (pop_scope): Don't print warning in external_scope.
>
> 	* gcc.dg/pr63626.c: New test.

OK.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH 2/4] Add liboffloadmic
On Wed, 22 Oct 2014, Jakub Jelinek wrote:

> Also, do we really want the messy DOS/Windows '\r' in the messages on
> Unix-ish targets?  Shouldn't that be dependent on what target is the
> library configured for?

On platforms where it matters, I think it's still right to use \n only - if in the end something is output on a text stream, it's stdio's job to convert \n to \r\n as needed.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [build] Link genmatch with $(LIBINTL)
On Wed, 22 Oct 2014, Rainer Orth wrote:

> 2014-10-22  Rainer Orth  <r...@cebitec.uni-bielefeld.de>
>
> 	* Makefile.in (build/genmatch$(build_exeext)): Add $(LIBINTL)
> 	to BUILD_LIBS.  Add $(LIBINTL_DEP) dependency.

No, this doesn't look right.  A program built for the build system needs to use build versions of all relevant libraries, not host versions.  That means $(BUILD_LIBIBERTY) not host libiberty, and build versions of libcpp and libintl if those are now needed for something built for the build system.  That in turn needs toplevel changes to add libcpp and intl to build_modules.

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [build] Link genmatch with $(LIBINTL)
On Wed, 22 Oct 2014, Richard Biener wrote:

> On October 22, 2014 7:19:33 PM CEST, "Joseph S. Myers" <jos...@codesourcery.com> wrote:
> > On Wed, 22 Oct 2014, Rainer Orth wrote:
> > > 2014-10-22  Rainer Orth  <r...@cebitec.uni-bielefeld.de>
> > >
> > > 	* Makefile.in (build/genmatch$(build_exeext)): Add $(LIBINTL)
> > > 	to BUILD_LIBS.  Add $(LIBINTL_DEP) dependency.
> >
> > No, this doesn't look right.  A program built for the build system
> > needs to use build versions of all relevant libraries, not host
> > versions.  That means $(BUILD_LIBIBERTY) not host libiberty, and
> > build versions of libcpp and libintl if those are now needed for
> > something built for the build system.  That in turn needs toplevel
> > changes to add libcpp and intl to build_modules.
>
> I suppose we should build the build variant of libcpp without NLS
> support instead.

Indeed, that would avoid various complications such as configure options for where to find libiconv only being correct for the host and not the build system.

-- 
Joseph S. Myers
jos...@codesourcery.com
Optimize powerpc*-*-linux* 32-bit classic hard/soft float hardfp/soft-fp use
Continuing the cleanups of libgcc soft-fp configuration for powerpc*-*-linux* in preparation for implementing TARGET_ATOMIC_ASSIGN_EXPAND_FENV for soft-float and e500, this patch optimizes the choice of which functions to build for the 32-bit classic hard-float and soft-float cases.  (e500 will be dealt with in a separate patch which will need to add new features to t-hardfp and t-softfp; this patch keeps the status quo for e500.)

For hard-float, while the functions in question are part of the libgcc ABI there is no need for them to contain software floating point code: no newly built code should use them, and if anything does use them it's most efficient (space and speed) for them to pass straight through to floating-point hardware instructions; this case is made to use t-hardfp to achieve that.

For soft-float, direct use of soft-fp functions for operations involving DImode or unsigned integers is more efficient than using the libgcc2.c versions of those operations to convert to operations on other types (which then end up calling soft-fp functions for those other types, possibly more than once); this case is thus stopped from using t-softfp-excl.  (A future patch will stop the e500 cases from using t-softfp-excl as well.)

Tested with no regressions for crosses to powerpc-linux-gnu (soft float and classic hard float); also checked that the same set of symbols and versions is exported from shared libgcc before and after the patch.  OK to commit?

2014-10-23  Joseph Myers  <jos...@codesourcery.com>

	* configure.ac (ppc_fp_type): Set variable on powerpc*-*-linux*.
	* configure: Regenerate.
	* config.host (powerpc*-*-linux*): Use $ppc_fp_type to determine
	additions to tmake_file.  Use t-hardfp-sfdf and t-hardfp instead
	of soft-fp for 32-bit classic hard float.  Do not use
	t-softfp-excl for soft float.
Index: libgcc/config.host
===================================================================
--- libgcc/config.host	(revision 216564)
+++ libgcc/config.host	(working copy)
@@ -991,9 +991,23 @@
 	;;
 powerpc*-*-linux*)
 	tmake_file="${tmake_file} rs6000/t-ppccomm rs6000/t-savresfgpr rs6000/t-crtstuff rs6000/t-linux t-dfprules rs6000/t-ppc64-fp t-slibgcc-libgcc"
-	if test "${host_address}" = 32; then
+	case "$ppc_fp_type" in
+	64)
+		;;
+	hard)
+		tmake_file="${tmake_file} t-hardfp-sfdf t-hardfp"
+		;;
+	soft)
+		tmake_file="${tmake_file} t-softfp-sfdf t-softfp"
+		;;
+	e500v1|e500v2)
 		tmake_file="${tmake_file} t-softfp-sfdf t-softfp-excl t-softfp"
-	fi
+		;;
+	*)
+		echo "Unknown ppc_fp_type $ppc_fp_type" 1>&2
+		exit 1
+		;;
+	esac
 	extra_parts="$extra_parts ecrti.o ecrtn.o ncrti.o ncrtn.o"
 	md_unwind_header=rs6000/linux-unwind.h
 	;;
Index: libgcc/configure
===================================================================
--- libgcc/configure	(revision 216564)
+++ libgcc/configure	(working copy)
@@ -4376,6 +4376,29 @@ $as_echo "$libgcc_cv_mips_hard_float" >&6; }
 esac
 
+# Determine floating-point type for powerpc*-*-linux*.
+# Single-precision-only FPRs are not a supported configuration for
+# this target, so are not allowed for in this test.
+case ${host} in
+powerpc*-*-linux*)
+  cat > conftest.c <<EOF
+#ifdef __powerpc64__
+ppc_fp_type=64
+#elif defined _SOFT_FLOAT
+ppc_fp_type=soft
+#elif defined _SOFT_DOUBLE
+ppc_fp_type=e500v1
+#elif defined __NO_FPRS__
+ppc_fp_type=e500v2
+#else
+ppc_fp_type=hard
+#endif
+EOF
+  eval `${CC-cc} -E conftest.c | grep ppc_fp_type=`
+  rm -f conftest.c
+  ;;
+esac
+
 # Collect host-machine-specific information.
 . ${srcdir}/config.host
Index: libgcc/configure.ac
===================================================================
--- libgcc/configure.ac	(revision 216564)
+++ libgcc/configure.ac	(working copy)
@@ -320,6 +320,29 @@
      [libgcc_cv_mips_hard_float=no])])
 esac
 
+# Determine floating-point type for powerpc*-*-linux*.
+# Single-precision-only FPRs are not a supported configuration for
+# this target, so are not allowed for in this test.
+case ${host} in
+powerpc*-*-linux*)
+  cat > conftest.c <<EOF
+#ifdef __powerpc64__
+ppc_fp_type=64
+#elif defined _SOFT_FLOAT
+ppc_fp_type=soft
+#elif defined _SOFT_DOUBLE
+ppc_fp_type=e500v1
+#elif defined __NO_FPRS__
+ppc_fp_type=e500v2
+#else
+ppc_fp_type=hard
+#endif
+EOF
+  eval `${CC-cc} -E conftest.c | grep ppc_fp_type=`
+  rm -f conftest.c
+  ;;
+esac
+
 # Collect host-machine-specific information.
 . ${srcdir}/config.host

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH 1/4] Add mkoffload for Intel MIC
On Tue, 21 Oct 2014, Ilya Verbin wrote:

> +#include <libgen.h>
> +#include "config.h"
> +#include "system.h"

You should never include system headers before config.h because config.h may define feature test macros such as _FILE_OFFSET_BITS=64 that are ineffective if defined after any system header is included.

I don't see anything restricting this program to being built for GNU *hosts*.  Thus, it needs to be portable (to different hosts; obviously it's target-architecture-specific) rather than relying on glibc interfaces.  (Providing appropriate functions in libiberty is of course an option; thus, freely using obstacks is fine because they're in libiberty.)

> +#include "libgomp_target.h"

Where does this header come from?

> +  nextval = strchrnul (curval, ':');

I don't think strchrnul is portable (unless added to libiberty).

> +  if (!host_compiler)
> +    fatal_error ("COLLECT_GCC must be set.");

Diagnostics should not end with ".".

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH doc] Explain options precedence and difference between -pedantic-errors and -Werror=pedantic
On Tue, 21 Oct 2014, Manuel López-Ibáñez wrote:

> On 19 October 2014 18:08, Joseph S. Myers <jos...@codesourcery.com> wrote:
> > On Sat, 18 Oct 2014, Manuel López-Ibáñez wrote:
> > > What about this version?
> > >
> > >   Give an error whenever the @dfn{base standard} (see
> > >   @option{-Wpedantic}) requires a diagnostic, in cases where there
> > >   is undefined behavior at compile-time
> >
> > Only in *some* such cases of compile-time undefined behavior.
>
> New try:
>
>   Give an error whenever the @dfn{base standard} (see
>   @option{-Wpedantic}) requires a diagnostic, in some cases where there
>   is undefined behavior at compile-time and in some other cases that do
>   not prevent compilation of programs that are valid according to the
>   standard.  This is not equivalent to @option{-Werror=pedantic}, since
>   there are errors enabled by this option and not enabled by the latter
>   and vice versa.
>
> OK?

OK.

-- 
Joseph S. Myers
jos...@codesourcery.com
Do not build soft-fp code at all for powerpc64-linux-gnu
When I added support for using soft-fp in libgcc (https://gcc.gnu.org/ml/gcc-patches/2006-03/msg00689.html), libgcc configuration was still done in the gcc/ directory, meaning that the variables set in makefile fragments could not depend on the multilib being built.  Thus, building the soft-fp code for powerpc64-linux-gnu was disabled in the same way as had been done with fp-bit: the code was built, but with "#ifndef __powerpc64__" wrappers around it so that the resulting objects were empty.

Now that libgcc configuration is done in the toplevel libgcc directory, such uses of softfp_wrap_start / softfp_wrap_end are better replaced by configure-time conditionals that determine whether to use soft-fp for a given multilib.  This patch does so for powerpc*-*-linux*.  The same would appear to apply to powerpc*-*-freebsd* (using rs6000/t-freebsd64), but I have not made any changes there.  t-ppc64-fp is also used by AIX targets, but they don't use soft-fp anyway so the changes are of no consequence to them.

The same principle of replacing softfp_wrap_start / softfp_wrap_end with configure-time conditionals also applies to softfp_exclude_libgcc2, which was intended for cases where soft-fp is being used on hard-float multilibs and so it is desirable on those multilibs for a few functions to come from libgcc2.c rather than soft-fp (but the soft-fp versions would be more efficient on soft-float multilibs).  Now we have hardfp.c and t-hardfp, those are better to use in that case, to minimize the size of the bulk of the functions that are only present for ABI compatibility and should never be called by newly compiled code.  I intend followup patches to switch 32-bit hard-float multilibs to use t-hardfp as far as possible (for all non-libgcc2.c operations for classic hard float; for all except __unord* for e500v2; for all SFmode operations except __unordsf2 for e500v1).
After that will come making the soft-fp operations, in the remaining cases for which they are built because they are actually needed for code compiled by current GCC, into compat symbols when building for glibc 2.19 or later, so that the glibc versions (with exception and rounding mode support) get used instead (2.19 or later is needed for all the functions to be exported from glibc as non-compat symbols).  In turn, that is required before implementing TARGET_ATOMIC_ASSIGN_EXPAND_FENV for soft-float and e500, as that can only be properly effective when GCC-compiled code is actually interoperating correctly with the exception and rounding mode state used by fenv.h functions.

Tested with no regressions with cross to powerpc64-linux-gnu (in addition, verified that stripped libgcc_s.so.1 is identical before and after the patch).  OK to commit?

2014-10-22  Joseph Myers  <jos...@codesourcery.com>

	* config.host (powerpc*-*-linux*): Only use soft-fp for 32-bit
	configurations.
	* config/rs6000/t-ppc64-fp (softfp_wrap_start, softfp_wrap_end):
	Remove variables.

Index: libgcc/config/rs6000/t-ppc64-fp
===================================================================
--- libgcc/config/rs6000/t-ppc64-fp	(revision 216519)
+++ libgcc/config/rs6000/t-ppc64-fp	(working copy)
@@ -1,5 +1,2 @@
 # Can be used unconditionally, wrapped in __powerpc64__ || __64BIT__ || __ppc64__.
 LIB2ADD += $(srcdir)/config/rs6000/ppc64-fp.c
-
-softfp_wrap_start := '\#ifndef __powerpc64__'
-softfp_wrap_end := '\#endif'
Index: libgcc/config.host
===================================================================
--- libgcc/config.host	(revision 216519)
+++ libgcc/config.host	(working copy)
@@ -990,7 +990,10 @@
 	extra_parts="$extra_parts crtbeginS.o crtendS.o crtbeginT.o ecrti.o ecrtn.o ncrti.o ncrtn.o"
 	;;
 powerpc*-*-linux*)
-	tmake_file="${tmake_file} rs6000/t-ppccomm rs6000/t-savresfgpr rs6000/t-crtstuff rs6000/t-linux t-softfp-sfdf t-softfp-excl t-dfprules rs6000/t-ppc64-fp t-softfp t-slibgcc-libgcc"
+	tmake_file="${tmake_file} rs6000/t-ppccomm rs6000/t-savresfgpr rs6000/t-crtstuff rs6000/t-linux t-dfprules rs6000/t-ppc64-fp t-slibgcc-libgcc"
+	if test "${host_address}" = 32; then
+		tmake_file="${tmake_file} t-softfp-sfdf t-softfp-excl t-softfp"
+	fi
 	extra_parts="$extra_parts ecrti.o ecrtn.o ncrti.o ncrtn.o"
 	md_unwind_header=rs6000/linux-unwind.h
 	;;

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: The nvptx port [11/11] More tools.
On Mon, 20 Oct 2014, Bernd Schmidt wrote:

> These tools currently require GNU extensions - something I probably
> ought to fix if we decide to add them to the gcc build itself.

And as regards library use, I'd expect the sources to start with #includes of config.h and system.h (and so not include system headers directly if they are included by system.h) even if no other GCC headers are useful in any way.

-- Joseph S. Myers jos...@codesourcery.com
Re: [jit] Error-handling within gcc::jit::dump
On Mon, 20 Oct 2014, David Malcolm wrote:

> On Fri, 2014-10-17 at 21:52 +, Joseph S. Myers wrote:
> [...snip static linkage discussion...]
> > The dump file handling appears to have no I/O error checking (no
> > checking for error on fopen, nothing obvious to prevent fwrite to a
> > NULL m_file if fopen did have an error, no checking for error on
> > fclose (or fwrite)).
>
> Thanks.  Does the following look OK?  (I've committed it to branch
> dmalcolm/jit)

Seems fine with me.

-- Joseph S. Myers jos...@codesourcery.com
Re: [jit] Add Sphinx to install.texi
On Mon, 20 Oct 2014, David Malcolm wrote:

> +Necessary to regenerate @file{jit/docs/_build/texinfo} from the .rst

I'd say @file{.rst}, but otherwise looks OK to me.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PING][PATCH] GCC/test: Set timeout factor for c11-atomic-exec-5.c
On Mon, 20 Oct 2014, David Edelsohn wrote:

> On Mon, Oct 20, 2014 at 12:59 PM, Maciej W. Rozycki
> <ma...@codesourcery.com> wrote:
> > Hi,
> >
> > I thought http://gcc.gnu.org/ml/gcc-patches/2014-09/msg00242.html
> > would be folded into PowerPC TARGET_ATOMIC_ASSIGN_EXPAND_FENV
> > support, but I see r216437 went without it.  In that case would
> > someone please review my proposal as a separate change?
>
> The patch seems like a kludge work-around.  Joseph suggested that full
> support will require a newer GLIBC and detection in GCC.

No, it's support for soft-float and e500 in TARGET_ATOMIC_ASSIGN_EXPAND_FENV that will need that (along with libgcc changes to make libgcc's copies of the soft-fp functions into compat symbols when they are available in glibc).  That's nothing to do with the timeout issue.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH doc] Explain options precedence and difference between -pedantic-errors and -Werror=pedantic
On Sat, 18 Oct 2014, Manuel López-Ibáñez wrote:

> What about this version?
>
> Give an error whenever the @dfn{base standard} (see
> @option{-Wpedantic}) requires a diagnostic, in cases where there is
> undefined behavior at compile-time

Only in *some* such cases of compile-time undefined behavior.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fix PR preprocessor/42014
On Sat, 18 Oct 2014, Krzesimir Nowak wrote:

> +	    pp_verbatim (context->printer,
> +			 "%s from %r%s:%d%R", prefix, locus,
> +			 diagnostic_report_from (context, map,
> +						 "In file included"));

We don't want to split up diagnostic text like that, because for translation it may be necessary to translate the whole "In file included from" text together rather than expecting two fragments to go together in the same way they do in English.  Now, right now this message isn't marked for translation anyway (an independent bug there's no need for you to fix), but still as a design principle things should be structured to avoid splitting up English fragments like that.

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Another initialization fix (PR c/63567)
On Sun, 19 Oct 2014, Marek Polacek wrote:

> It turned out that there is another spot where we need to allow
> initializing objects with static storage duration with compound
> literals even in C99 -- when the compound literal is inside the
> initializer.  Fixed in the same way as previously.
>
> Bootstrapped/regtested on x86_64-linux, ok for trunk?
>
> 2014-10-18  Marek Polacek  <pola...@redhat.com>
>
> 	PR c/63567
> 	* c-typeck.c (output_init_element): Allow initializing objects
> 	with static storage duration with compound literals even in C99
> 	and add pedwarn for it.
>
> 	* gcc.dg/pr63567-3.c: New test.
> 	* gcc.dg/pr63567-4.c: New test.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [libatomic PATCH] Fix libatomic behavior for big endian toolchain
Changes to architecture-independent files must use architecture-independent conditionals, so __BYTE_ORDER__ not __ARMEB__. -- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Enable initializing statics with COMPOUND_LITERAL_EXPR in C99 (PR c/63567)
On Fri, 17 Oct 2014, Marek Polacek wrote:

> Building Linux kernel failed with 'error: initializer element is not
> constant', because they're initializing objects with static storage
> duration with (T){ ... } - and that isn't permitted in gnu99/gnu11.
> I think the Right Thing is to allow some latitude here and enable it
> even in gnu99/gnu11 unless -pedantic.  In gnu89, this will work as
> before even with -pedantic.

The Right Thing is for -pedantic not to cause errors, only warnings (-pedantic-errors being needed for an error).  So rather than having this conditional for whether to allow the extension at all, make the conditional code do a pedwarn (if flag_isoc99, otherwise there will already have been one for using a compound literal at all, and not for VECTOR_TYPE).  (I don't believe this can affect the semantics of valid code; in this case of require_constant with a compound literal, we know the code is invalid in ISO C terms, so it's safe to diagnose it then interpret it in a sensible way.)

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Make -Wno-implicit-int work in C99 mode
On Fri, 17 Oct 2014, Marek Polacek wrote:

> C99 mode warns about defaulting to int by default, but without the
> possibility to suppress the warning with -Wno-implicit-int.  This is
> likely to arouse the ire of the users, especially with the new
> default.  Therefore the following patch tweaks warn_implicit_int in
> such a way that -Wimplicit and -Wimplicit-int should work as intended
> (following the rule that more specific option takes precedence over
> the less specific).  There should be no changes in GNU89 mode.
>
> Bootstrapped/regtested on x86_64-linux, ok for trunk?

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: C/C++ diagnostics guidelines (was: Re: [C PATCH] Enable initializing statics with COMPOUND_LITERAL_EXPR in C99 (PR c/63567))
On Fri, 17 Oct 2014, Manuel López-Ibáñez wrote:

> Thus, I drafted some guidelines at:
> https://gcc.gnu.org/wiki/Better_Diagnostics#guidelines
>
> Please, could you take a look and comment whether I got it right/wrong?

Yes, that looks right to me.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Avoid the need to install when running the jit testsuite
On Fri, 17 Oct 2014, David Malcolm wrote:

> +# This symlink makes the full installation name of the driver be available
> +# from within the *build* directory, for use when running the JIT library
> +# from there (e.g. when running its testsuite).
> +$(FULL_DRIVER_NAME): ./xgcc
> +	$(LN) -s $< $@

I believe $(LN_S) would be normal, though (a) I don't see it being used anywhere, despite the definition, and (b) while the GNU Coding Standards still say "If you use symbolic links, you should implement a fallback for systems that don't have symbolic links", I'm doubtful that's of practical relevance to systems people are building GCC on any more.

I don't have any comments on the other parts of this patch.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH doc] Explain options precedence and difference between -pedantic-errors and -Werror=pedantic
On Fri, 17 Oct 2014, Manuel López-Ibáñez wrote:

> +Some options, such as @option{-Wall} and @option{-Wextra}, turn on other
> +options, such as @option{-Wunused}, which may turn on further options,
> +such as @option{-Wunused-value}. The combined effect of positive and
> +negative forms is that more specific options have priority over less
> +specific ones, independently of their position in the command-line. For
> +options of the same specificity, the last one takes effect. Options
> +enabled or disabled via pragmas (@pxref{Diagnostic Pragmas}) take effect
> +as if they appeared at the end of the command-line.

This part is OK.

> @@ -3318,8 +3327,8 @@
>  @item -pedantic-errors
>  @opindex pedantic-errors
> -Like @option{-Wpedantic}, except that errors are produced rather than
> -warnings.
> +This is equivalent to @option{-Werror=pedantic} plus making into errors
> +a few warnings that are not controlled by @option{-Wpedantic}.

But I think the previous version is better here.  Maybe at present your version is true, but in principle -Wpedantic can control warnings that aren't pedwarns.  Some of the -Wformat warnings are conditional on having both -Wformat and -Wpedantic enabled - we can only represent those using OPT_Wformat in the warning calls at present, but there's at least as much of a case for -Werror=pedantic to turn them into errors (while -pedantic-errors definitely should not turn them into errors, as the code is only invalid at runtime and is valid at compile time as long as it never gets executed).

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Enable initializing statics with COMPOUND_LITERAL_EXPR in C99 (PR c/63567)
On Fri, 17 Oct 2014, Marek Polacek wrote:

> Bootstrapped/regtested on x86_64-linux, ok for trunk?
>
> 2014-10-17  Marek Polacek  <pola...@redhat.com>
>
> 	PR c/63567
> 	* c-typeck.c (digest_init): Allow initializing objects with
> 	static storage duration with compound literals even in C99 and
> 	add pedwarn for it.
>
> 	* gcc.dg/pr61096-1.c: Change dg-error into dg-warning.
> 	* gcc.dg/pr63567-1.c: New test.
> 	* gcc.dg/pr63567-2.c: New test.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH 05/10] JIT-related changes outside of jit subdir
Although Sphinx isn't a build dependency, as a dependency for regenerating checked-in files I think it should be documented in install.texi (like autoconf, gettext, etc.). -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH 06/10] Heart of the JIT implementation (was: Re: [PATCH 0/5] Merger of jit branch (v2))
Does libgccjit.so end up getting linked with -static-libstdc++ -static-libgcc?  If so, that could be problematic (are static libstdc++ and libgcc necessarily built as PIC so it's even possible to embed them into a shared library?).  It's certainly not clear that the -static-libstdc++ -static-libgcc default for building the compiler executables is the right one for building libgccjit.so.

The dump file handling appears to have no I/O error checking (no checking for error on fopen, nothing obvious to prevent fwrite to a NULL m_file if fopen did have an error, no checking for error on fclose (or fwrite)).

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH][3/n] Merge from match-and-simplify, first patterns and questions
On Wed, 15 Oct 2014, Richard Biener wrote:

> Caveat2: the GENERIC code-path of match-and-simplify does not handle
> everything fold-const.c does - for example it does nothing on operands
> with side-effects - foo () * 0 is not simplified to (foo(), 0).  It
> also does not get the benefit from loose type-matching by means of the
> STRIP_[SIGN_]NOPS fold-const.c performs on operands before doing its
> pattern matching.  This means that when I remove stuff from
> fold-const.c there may be regressions that are not anticipated (in
> frontend code and for -O0 only - with optimization the pattern should
> apply on GIMPLE later).  So - are we happy to lose some oddball cases
> of GENERIC folding?  (hopefully oddball cases only...)

I don't see any problems with the side effects case; that seems much better only handled on GIMPLE.  It seems more plausible something could depend on STRIP_[SIGN_]NOPS calls.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH doc] Explain options precedence and difference between -pedantic-errors and -Werror=pedantic
On Sat, 18 Oct 2014, Manuel López-Ibáñez wrote:

> The previous version also does not match your description.  You are
> saying that
>
> -Wpedantic = warning(OPT_Wpedantic) + pedwarn(OPT_Wpedantic)
> and
> -pedantic-errors = pedwarn(OPT_Wpedantic) + pedwarn(0)
>
> The current version says that
>
> -Wpedantic = warning(OPT_Wpedantic) + pedwarn(OPT_Wpedantic)
> and
> -pedantic-errors = warning(OPT_Wpedantic) + pedwarn(OPT_Wpedantic)
>
> My proposal says that:
>
> -Wpedantic = warning(OPT_Wpedantic) + pedwarn(OPT_Wpedantic)
> and
> -pedantic-errors = warning(OPT_Wpedantic) + pedwarn(OPT_Wpedantic) + pedwarn(0)

None of those three descriptions seems helpful here.  The point of -pedantic is to give a diagnostic whenever the standard requires one (and possibly in some other cases).  The point of -Werror=pedantic is to give an error for diagnostics enabled by -pedantic (whether or not the standard requires a diagnostic in those cases, and whether or not the standard requires successful translation in those cases).  The point of -pedantic-errors is to give an error whenever the standard requires a diagnostic (and possibly in some other cases, but not cases where the standard requires successful translation).

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH doc] Explain options precedence and difference between -pedantic-errors and -Werror=pedantic
On Sat, 18 Oct 2014, Manuel López-Ibáñez wrote:

> Can we make "possibly in some other cases" more concrete?  Otherwise,

Cases where something about the code is not defined by the base standard, but a diagnostic is not required.  -pedantic may give a warning for such cases.  -pedantic-errors may give an error *if* there is compile-time undefined behavior (not if the not-definedness is something other than undefined behavior, or is undefined behavior only if the code in question is executed, although it will still give a warning for such cases if -pedantic does).

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH 2/5] gcc: configure and Makefile changes needed by jit
On Wed, 15 Oct 2014, David Malcolm wrote:

> As for the bindir in site.exp, Joseph asked me when the library
> invokes a driver to convert from .s to .so to:
>
> On Tue, 2014-09-23 at 23:27 +, Joseph S. Myers wrote:
> > * use the $(target_noncanonical)-gcc-$(version) name for the driver
> > rather than plain "gcc", to maximise the chance that it is actually
> > the same compiler the JIT library was built for (I realise you may
> > not actually depend on it being the same compiler, but that does
> > seem best; in principle in future it should be possible to load
> > multiple copies of the JIT library to JIT for different targets, so
> > that code for an offload accelerator can go through the JIT).
>
> ( https://gcc.gnu.org/ml/jit/2014-q3/msg00033.html )
>
> This full name is used when *installing* the driver, but doesn't exist
> within the build directory.  Hence when running the library, the
> installation bindir needs to be in the PATH.  In particular (in
> https://gcc.gnu.org/ml/jit/2014-q4/msg5.html ), when running the jit
> testsuite we rely on the driver having been installed, and in jit.exp
> we need to temporarily prepend the installation bindir onto the front
> of PATH when running test programs linked against libgccjit.so.
> Hence we need to know what bindir is from expect, hence we add it to
> site.exp.

Even if the driver's been installed, it might not be in the configured bindir but in some other DESTDIR.  Really, the need for an installed driver for testing should be avoided.  The ideal way to do that is for "make check" to install to a staging directory within the build directory (generally, the DejaGnu approach of passing lots of -B etc. options to tell bits of the toolchain how to find each other, with lots of relevant logic hardcoded inside DejaGnu itself, is problematic and the staging directory approach would be better, although it has various other complications given that GCC needs to find separately built / installed pieces such as binutils and runtime libraries).

Of course that's a much more general issue - I suppose someone with an installation in a DESTDIR can still test as-is by specifying a different value of bindir on the make command line that builds site.exp?

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Clamp down incomplete type error (PR c/63543)
On Wed, 15 Oct 2014, Marek Polacek wrote:

> We've got a complaint that the "dereferencing pointer to incomplete
> type" error is printed for all occurrences of the incomplete type,
> which is too verbose.  Also it'd be nicer to print the type as well.
> This patch fixes this; if we find an incomplete type, mark it with
> error node, then we don't print the error message more than once.

I don't like this approach of modifying the type; type nodes are shared objects and this could affect all sorts of other logic subsequently working with the type.  I think there should be some sort of annotation of the type (either in the type itself, or on the side) that *only* means an error has been given for the type being incomplete, rather than inserting error_mark_node into the type.

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Clamp down incomplete type error (PR c/63543)
On Wed, 15 Oct 2014, Jeff Law wrote:

> On 10/15/14 15:46, Joseph S. Myers wrote:
> > On Wed, 15 Oct 2014, Marek Polacek wrote:
> > > We've got a complaint that the "dereferencing pointer to
> > > incomplete type" error is printed for all occurrences of the
> > > incomplete type, which is too verbose.  Also it'd be nicer to
> > > print the type as well.  This patch fixes this; if we find an
> > > incomplete type, mark it with error node, then we don't print the
> > > error message more than once.
> >
> > I don't like this approach of modifying the type; type nodes are
> > shared objects and this could affect all sorts of other logic
> > subsequently working with the type.  I think there should be some
> > sort of annotation of the type (either in the type itself, or on the
> > side) that *only* means an error has been given for the type being
> > incomplete, rather than inserting error_mark_node into the type.
>
> Isn't slamming error_mark_node well established at this point?  In
> fact I recall seeing it documented to be used in this kind of way to
> prevent future errors.

Returning error_mark_node for the erroneous expression, yes - the pre-existing code already does that in this case.  The problem is that the insertion of error_mark_node into the type will lead to other uses of that type (including ones that have already been processed without errors) being affected, and the type itself isn't erroneous.  Indeed, the patch would create a pointer-to-error_mark type node, which is not something code in GCC would ever normally expect to handle (build_pointer_type_for_mode just returns error_mark_node if passed error_mark_node, so you can't get a POINTER_TYPE whose target is error_mark_node that way).

-- Joseph S. Myers jos...@codesourcery.com
Re: [pointer size] more edge cases
On Wed, 15 Oct 2014, DJ Delorie wrote:

> A few more cases where pointers were assumed to be whole bytes.  Ok?

I don't see what the stor-layout.c changes have to do with that description, or why they are correct (they look wrong to me; the existing addition of BITS_PER_UNIT_LOG + 1 looks logically correct for bitsizetype).  The other changes are OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [pointer size] more edge cases
On Wed, 15 Oct 2014, DJ Delorie wrote:

> > I don't see what the stor-layout.c changes have to do with that
> > description, or why they are correct (they look wrong to me; the
> > existing addition of BITS_PER_UNIT_LOG + 1 looks logically correct
> > for bitsizetype).
>
> sooo... the type for bitsizetype will *always* be a bigger type than
> sizetype?  Even when sizetype is already the largest type the target
> can handle naturally?

It's expected to be larger, so that it can handle +/- the whole address space, in bits.  But that's subject to the bounds of MAX_FIXED_MODE_SIZE and HOST_BITS_PER_DOUBLE_INT that are already present in the code.  (If any nontrivial, nonconstant calculation is being done in bitsizetype, that should be unusual; if anything, I'd rather not have bitsizetype at all, and use bytes and bits separately for calculations where bit offsets are relevant.  But under the existing logic, bitsizetype is expected to be bigger than sizetype, even if inefficient.)

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Fix up _Alignof with user aligned types (PR c/63495)
On Fri, 10 Oct 2014, Jakub Jelinek wrote:

> Hi!
>
> As the testcase shows, _Alignof can sometimes return smaller number
> than the minimum alignment.  That is because when laying out
> structures, fields with types with TYPE_USER_ALIGN set have also
> DECL_USER_ALIGN set and therefore neither BIGGEST_FIELD_ALIGNMENT nor
> ADJUST_FIELD_ALIGN is applied to them, but min_align_of_type was
> applying that unconditionally.
>
> Fixed thusly, bootstrapped/regtested on x86_64-linux and i686-linux,
> ok for trunk/4.9?
>
> 2014-10-10  Jakub Jelinek  <ja...@redhat.com>
>
> 	PR c/63495
> 	* stor-layout.c (min_align_of_type): Don't decrease alignment
> 	through BIGGEST_FIELD_ALIGNMENT or ADJUST_FIELD_ALIGN if
> 	TYPE_USER_ALIGN is set.
>
> 	* gcc.target/i386/pr63495.c: New test.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: Towards GNU11
On Thu, 9 Oct 2014, Marek Polacek wrote:

> On Wed, Oct 08, 2014 at 08:39:40PM -0600, Jeff Law wrote:
> > I like it.  And one could reasonably argue that now is the time to
> > change since that maximizes the time for folks to find broken code.
>
> Yep, this is definitely stage1 stuff.  We still have a few weeks, but
> I wouldn't want to rush such a change in the nick of time.
>
> > I'd go so far as to conditionally approve -- if other maintainers
> > don't shout out in the next week or so against, then I feel this
> > should go forward.
>
> Thanks.  I will wait at least until the end of next week.  I'd like to
> hear from Joseph ;).

I approve of the change in default (I just don't think it's a change to make on my say-so alone).

-- Joseph S. Myers jos...@codesourcery.com
Re: [Patch, MIPS] Add Octeon3 support
Patches adding new -march= values need to update invoke.texi. -- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Print header hints (PR c/59717)
On Tue, 7 Oct 2014, Marek Polacek wrote:

> 2014-10-07  Marek Polacek  <pola...@redhat.com>
>
> 	PR c/59717
> 	* c-decl.c (header_for_builtin_fn): New function.
> 	(implicitly_declare): Suggest which header to include.
>
> 	* gcc.dg/pr59717.c: New test.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [patch] Add -static-libquadmath option
Since -static-libquadmath introduces LGPL requirements on redistributing the resulting binaries (that you provide source or relinkable object files to allow relinking with modified versions of libquadmath) that don't otherwise generally apply simply through using GCC to build a program even if you link in GCC's other libraries statically, it would seem a good idea for the documentation of this option to make that explicit. -- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Use error_operand_p more
On Sun, 5 Oct 2014, Marek Polacek wrote:

> It occurred to me that we should probably use error_operand_p in the C
> FE where appropriate.  Following change is meant only as a little
> cleanup.
>
> Bootstrapped/regtested on x86_64-linux, ok for trunk?
>
> 2014-10-04  Marek Polacek  <pola...@redhat.com>
>
> 	* c-convert.c (convert): Use error_operand_p.
> 	* c-typeck.c (require_complete_type): Likewise.
> 	(really_atomic_lvalue): Likewise.
> 	(digest_init): Likewise.
> 	(handle_omp_array_sections_1): Likewise.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [patch] Add -static-libquadmath option
On Mon, 6 Oct 2014, Steve Kargl wrote:

> On Mon, Oct 06, 2014 at 08:38:14PM +, Joseph S. Myers wrote:
> > Since -static-libquadmath introduces LGPL requirements on
> > redistributing the resulting binaries (that you provide source or
> > relinkable object files to allow relinking with modified versions of
> > libquadmath) that don't otherwise generally apply simply through
> > using GCC to build a program even if you link in GCC's other
> > libraries statically, it would seem a good idea for the
> > documentation of this option to make that explicit.
>
> Or, change the license of libquadmath to be compatible with libgcc and
> libgfortran.

I believe we established when libquadmath was added that this wasn't an option as large parts of the code are not assigned to the FSF.  (Longer-term it might make sense to support TS 18661-3 in GCC and glibc, so that libquadmath isn't needed when using new glibc as the functions are available in libm under TS 18661-3 names such as sinf128.)

-- Joseph S. Myers jos...@codesourcery.com
Re: [google/gcc-4_9] Add gcc driver option -no-pie
If adding a new option, you need to document it in invoke.texi. -- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] gcc.c: Split up the driver's main into smaller functions
On Fri, 3 Oct 2014, David Malcolm wrote:

> The main function for the driver in gcc.c has grown from ~200 lines in
> its original form (way back in r262) to ~1000 lines today, with a
> dozen locals (if we include the params).
>
> The following patch splits it up into 15 smaller functions, moving the
> various locals into the places where they're needed, so we can easily
> see e.g. where argc/argv get read vs written.
>
> The functions are private methods of a new "driver" class to give an
> extra level of encapsulation beyond just being static in gcc.c, and so
> that we can hide some state as member data inside the driver instance.
>
> Turning them into named functions/methods also makes it easier to talk
> about the different phases of "main", and put breakpoints on them.
>
> Bootstrapped & regtested on x86_64-unknown-linux-gnu (Fedora 20).
>
> OK for trunk?

OK, minus the "if (0)" code:

+  if (0)
+    {
+      int i;
+      for (i = 0; i < argc; i++)
+	printf ("argc[%i]: %s\n", i, argv[i]);
+    }

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Don't warn about gnu_inline functions without definitions (PR c/63453)
On Fri, 3 Oct 2014, Marek Polacek wrote:

> While looking into something else I noticed that we produce the C99ish
> "inline function declared but never defined" warning even for
> functions marked as gnu_inline, if not in GNU89 or if -fgnu89-inline
> is not in effect, because the warning was guarded only by
> !flag_gnu89_inline.
>
> Bootstrapped/regtested on x86_64-linux, ok for trunk?

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH 2/5] Error out for Cilk_spawn or array expression in forbidden places
On Wed, 1 Oct 2014, Andi Kleen wrote:

> +/* Check that no array notation or spawn statement is in EXPR.
> +   If not true generate an error at LOC for WHAT.  */
> +
> +bool
> +check_no_cilk (tree expr, const char *what, location_t loc)
> +{
> +  if (!flag_cilkplus)
> +    return false;
> +  if (contains_array_notation_expr (expr))
> +    {
> +      loc = get_error_location (expr, loc);
> +      error_at (loc, "Cilk array notation cannot be used %s", what);
> +      return true;
> +    }
> +  if (walk_tree (&expr, contains_cilk_spawn_stmt_walker, NULL, NULL))
> +    {
> +      loc = get_error_location (expr, loc);
> +      error_at (loc, "%<_Cilk_spawn%> statement cannot be used %s", what);

You need to pass two complete error messages to this function for i18n purposes, rather than building up messages from sentence fragments.  If you call them e.g. array_gmsgid and spawn_gmsgid they should both get extracted by exgettext for translation.

-- Joseph S. Myers jos...@codesourcery.com
Re: Fix for FAIL: tmpdir-gcc.dg-struct-layout-1/t028 c_compat_x_tst.o compile, (internal compiler error)
On Tue, 30 Sep 2014, Richard Earnshaw wrote:

> GCC is written in C++ these days, so technically, you need the C++
> standard :-)

And, while C++14 requires plain int bit-fields to be signed, GCC is written in C++98/C++03.

-- Joseph S. Myers jos...@codesourcery.com
Re: [jit] Avoiding hardcoding gcc; supporting accelerators?
On Thu, 25 Sep 2014, David Malcolm wrote:

> Should this have the $(exeext) suffix seen in Makefile.in?
>   $(target_noncanonical)-gcc-$(version)$(exeext)

Depends on whether that's needed for the pex code to find it.

> As for (B), would it make sense to bake in the path to the binary into
> the pex invocation, and hence to turn off PEX_SEARCH?  If so,
> presumably I need to somehow expand the Makefile's value of $(bindir)
> into internal-api.c, right?  (I tried this in configure.ac, but merely
> got "$(exec_prefix)/bin" iirc).

An installation must be relocatable.  Thus, you can't just hardcode looking in the configured prefix; you'd need to locate it relative to libgccjit.so in some way (i.e. using make_relative_prefix, but I don't know offhand how libgccjit.so would locate itself).  A better long-term approach to this would be to extract the spec machinery from gcc.c (perhaps into a libdriver.a?) and run it directly from the jit library - but that's a rather involved patch, I suspect.  And you'd still need libgccjit.so to locate itself for proper relocatability in finding other pieces such as assembler and linker.

> I wonder if the appropriate approach here is to have a single library
> with multiple "plugin" backends e.g. one for the CPU, one for each GPU
> family, with the ability to load multiple backends at once.

If you can get that working, sure.  Unfortunately, "backend" is horribly overloaded here - I mean basically all of gcc here, everything other than the libgccjit.h API seen by client code.  (Though preferably as much as possible could be shared, i.e. properly define the parts of GCC that need building separately for each target and limit them as much as possible.  Joern's multi-target patches from 2010 that selectively built parts of GCC using namespaces while sharing others without an obvious clear separation seemed very fragile.  For something robust you either build everything separately for each target, or have a well-defined separation between bits needing building separately and bits that can be built once, and ways to avoid non-obvious target dependencies in bits built once.)

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH 1/n] OpenMP 4.0 offloading infrastructure
On Fri, 26 Sep 2014, Ilya Verbin wrote:

> 2014-09-26  Bernd Schmidt  <ber...@codesourcery.com>
> 	    Thomas Schwinge  <tho...@codesourcery.com>
> 	    Ilya Verbin  <ilya.ver...@intel.com>
> 	    Andrey Turetskiy  <andrey.turets...@intel.com>
>
> 	* configure: Regenerate.
> 	* configure.ac (--enable-as-accelerator-for)
> 	(--enable-offload-targets): New configure options.
> gcc/
> 	* Makefile.in (real_target_noncanonical, accel_dir_suffix)
> 	(enable_as_accelerator): New variables substituted by configure.
> 	(libsubdir, libexecsubdir, unlibsubdir): Tweak for the possibility
> 	of being configured as an offload compiler.
> 	(DRIVER_DEFINES): Pass new defines DEFAULT_REAL_TARGET_MACHINE
> 	and ACCEL_DIR_SUFFIX.
> 	(install-cpp, install-common, install_driver, install-gcc-ar): Do
> 	not install for the offload compiler.
> 	* config.in: Regenerate.
> 	* configure: Regenerate.
> 	* configure.ac (real_target_noncanonical, accel_dir_suffix)
> 	(enable_as_accelerator, enable_offload_targets): Compute new
> 	variables.
> 	(--enable-as-accelerator-for, --enable-offload-targets): New
> 	options.
> 	(ACCEL_COMPILER): Define if the compiler is built as the accel
> 	compiler.
> 	(OFFLOAD_TARGETS): List of target names suitable for offloading.
> 	(ENABLE_OFFLOADING): Define if list of offload targets is not
> 	empty.

Any patch adding new configure options needs to add documentation of the semantics of those options in install.texi.  I see no such documentation in this patch.

-- Joseph S. Myers jos...@codesourcery.com
Re: [jit] Eliminate fixed-size buffers used with vsnprintf
On Wed, 24 Sep 2014, David Malcolm wrote:

> The ideal I'm aiming for here is that a well-behaved library should
> never abort, so I've rewritten these functions to use vasprintf, and
> added error-handling checks to cover the case where malloc returns
> NULL within vasprintf.

GCC is designed on the basis of aborting on allocation failures - as is GMP, which allows custom allocation functions to be specified but still requires them to exit the program rather than return, longjmp or throw an exception.

> I believe this fixes the specific issues you pointed out (apart from
> the numerous missing API comments, which I'll do in a followup).
>
> Note that there's still a fixed-size buffer within
> gcc::jit::recording::context, the field:
>
>   char m_first_error_str[1024];
>
> Currently this is populated using strncpy followed by an explicit
> write of a truncation byte to make sure, but it *is* another
> truncation.  Presumably I should address this in a followup, by
> making that be dynamically-allocated?

Yes.  Arbitrary limits should be avoided in GNU.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] Power/GCC: Fix e500 vs non-e500 register save slot issue
On Wed, 24 Sep 2014, David Edelsohn wrote:

> > 2014-09-01  Maciej W. Rozycki  <ma...@codesourcery.com>
> >
> > gcc/
> > 	* config/rs6000/e500.h (HARD_REGNO_CALLER_SAVE_MODE): Remove
> > 	macro.
> > 	* config/rs6000/rs6000.h (HARD_REGNO_CALLER_SAVE_MODE): Handle
> > 	TARGET_E500_DOUBLE case here.
>
> This patch is okay.  The repeated testing of E500 seems like it could
> have been refactored.  The macro is becoming a little overly
> complicated as a CASE statement.
>
> Are you avoiding the special cases for TFmode and TDmode on e500 for
> a specific reason or simply matching current behavior?

I don't know what's right in the context of the present patch, but the
general principle for e500 is that TDmode is much like TImode and
DDmode is much like DImode, but TFmode is much like two of DFmode; that
was what I concluded when making DFP work for e500
<https://gcc.gnu.org/ml/gcc-patches/2008-06/msg00270.html>.

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH, testsuite]: PR 58757: Check for FP denormal values without triggering denormal exceptions
On Tue, 23 Sep 2014, Uros Bizjak wrote:

> Hello!
>
> Attached patch avoids triggering denormal exceptions when FP insns are
> used to check for non-zero denormal values.

But I thought the point of the test was to verify that the compiler's
understanding of the existence of subnormal values was consistent with
the processor.  If the processor is in a mode supporting such values,
the exceptions should be masked.  That is, the present test should pass
unconditionally; if it doesn't pass, that indicates a bug (which might
be appropriate for XFAILing).

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH, testsuite]: PR 58757: Check for FP denormal values without triggering denormal exceptions
On Tue, 23 Sep 2014, Uros Bizjak wrote:

> On Tue, Sep 23, 2014 at 7:57 PM, Joseph S. Myers
> <jos...@codesourcery.com> wrote:
>
> > > Attached patch avoids triggering denormal exceptions when FP insns
> > > are used to check for non-zero denormal values.
> >
> > But I thought the point of the test was to verify that the
> > compiler's understanding of the existence of subnormal values was
> > consistent with the processor.  If the processor is in a mode
> > supporting such values, the exceptions should be masked.  That is,
> > the present test should pass unconditionally; if it doesn't pass,
> > that indicates a bug (which might be appropriate for XFAILing).
>
> Alpha needs special instruction mode to process denormals.  Without
> this special mode the insn traps as soon as denormal value is
> processed.

Yes, but I thought the point of that PR was that unless -mieee was
given to support such values, *_TRUE_MIN should be the same as *_MIN,
reflecting that they aren't supported.  And so the failure is showing
that this bug is present (and so XFAILing with a comment referring to
the bug is appropriate, rather than changing the test to pass).

--
Joseph S. Myers
jos...@codesourcery.com
Re: [patch] moving macro definitions to defaults.h
                         Defaults FrontEnd
UINT_LEAST8_TYPE         Defaults FrontEnd
UNITS_PER_WORD           Defaults FrontEnd MiddleEnd Target
USE_GLD                  FrontEnd Driver
WCHAR_TYPE_SIZE          Defaults FrontEnd
WIDEST_HARDWARE_FP_SIZE  FrontEnd Target
WINT_TYPE                Defaults FrontEnd
WORDS_BIG_ENDIAN         Defaults FrontEnd MiddleEnd

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH] Merger of the dmalcolm/jit branch
Various *_finalize functions are missing comments explaining their
semantics.  Also the return type should be on the line before the
function name.

Shouldn't the jit.pdf, jit.install-html etc. Make-lang.in hooks
actually build / install the documentation for this JIT?

> +#include "config.h"
> +#include "system.h"
> +#include "ansidecl.h"
> +#include "coretypes.h"

The standard initial includes are config.h, system.h, coretypes.h.
system.h includes libiberty.h, which includes ansidecl.h, so direct
ansidecl.h includes shouldn't be needed anywhere.

> diff --git a/gcc/jit/internal-api.c b/gcc/jit/internal-api.c

Should start with the standard copyright and license header.  This
applies to all sources in gcc/jit/.

dump::write, recording::context::add_error_va and
recording::string::from_printf all use fixed-size buffers with
vsnprintf, but with no apparent reason to assume this can never result
in truncation, and missing API comments (lots of functions are missing
such comments ...) to state either the caller's responsibility to limit
the length of the result, or that the API provides for truncation.
Unless there's a definite reason truncation is needed, dynamic
allocation should be used.  A patch was submitted a while back to add
xasprintf and xvasprintf to libiberty -
<https://gcc.gnu.org/ml/gcc-patches/2009-11/msg01448.html> and
<https://gcc.gnu.org/ml/gcc-patches/2009-11/msg01449.html> (I don't
know if that's the most recent version) - which could be resurrected.
The code for compiling a .s file should:

* use choose_tmpdir from libiberty rather than hardcoding /tmp (or,
  better, create the files directly with make_temp_file, and delete
  them individually afterwards);

* use libiberty's pexecute to run subprocesses, not system (building up
  a string to pass to the shell always looks like a security hole,
  though in this case it may in fact be safe);

* use the $(target_noncanonical)-gcc-$(version) name for the driver
  rather than plain "gcc", to maximise the chance that it is actually
  the same compiler the JIT library was built for (I realise you may
  not actually depend on it being the same compiler, but that does seem
  best; in principle in future it should be possible to load multiple
  copies of the JIT library to JIT for different targets, so that code
  for an offload accelerator can go through the JIT).

The documentation referring to the dmalcolm/jit branch will of course
need updating to refer to trunk (and GCC 5 and later releases) once
this is on trunk.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Fix i386 FP_TRAPPING_EXCEPTIONS
On Fri, 19 Sep 2014, Joseph S. Myers wrote:

> On Thu, 18 Sep 2014, Joseph S. Myers wrote:
>
> > On Thu, 18 Sep 2014, Uros Bizjak wrote:
> >
> > > OK for mainline and release branches.
> >
> > I've omitted ia64 from the targets in the testcase in the release
> > branch version, given the lack of any definition of
> > FP_TRAPPING_EXCEPTIONS at all there.  (I think a definition as
> > (~_fcw & 0x3f) should work for ia64, but haven't tested that.)
>
> Here is an *untested* patch with that definition.
>
> 2014-09-19  Joseph Myers  <jos...@codesourcery.com>
>
> 	PR target/63312
> 	* config/ia64/sfp-machine.h (FE_EX_ALL, FP_TRAPPING_EXCEPTIONS):
> 	New macros.

Now committed after Andreas's testing reported in PR 63312.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Move dwarf2 frame tables to read-only section for AIX
On Mon, 22 Sep 2014, Andrew Dixie wrote:

> I altered the dwarf2 frame and exception table generation so the
> decision on whether to use a read-only or read-write section is an
> independent decision from how the frame tables are registered.
>
> I renamed EH_FRAME_IN_DATA_SECTION to EH_FRAME_THROUGH_COLLECT2, as
> it now supports read-only, has slightly changed semantics, and I think
> this name better reflects what it currently does rather than what it
> historically did.

If you rename a target macro, the old target macro name needs to be
poisoned in system.h.

> 2014-09-22  Andrew Dixie  <andr...@gentrack.com>
>
> 	Move exception tables to read-only memory on AIX.
> 	* dwarf2asm.c (dw2_asm_output_encoded_addr_rtx): Add call to
> 	ASM_OUTPUT_DWARF_DATAREL.
> 	* dwarf2out.c (switch_to_eh_frame_section): Use a read-only
> 	section even if EH_FRAME_SECTION_NAME is undefined.  Add call to
> 	EH_FRAME_THROUGH_COLLECT2.
> 	* except.c (switch_to_exception_section): Use a read-only
> 	section even if EH_FRAME_SECTION_NAME is undefined.
> 	* collect2.c (write_c_file_stat): Provide dbase on AIX.
> 	(scan_prog_file): Don't output __dso_handle nor
> 	__gcc_unwind_dbase.
> 	* config/rs6000/aix.h (ASM_PREFERRED_EH_DATA_FORMAT): Define.
> 	(EH_TABLES_CAN_BE_READ_ONLY): Define.
> 	(ASM_OUTPUT_DWARF_PCREL): Define.
> 	(ASM_OUTPUT_DWARF_DATAREL): Define.
> 	(EH_FRAME_IN_DATA_SECTION): Undefine.
> 	(EH_FRAME_THROUGH_COLLECT2): Define.
> 	* config/rs6000/rs6000-aix.c: New file.
> 	(rs6000_aix_asm_output_dwarf_pcrel): New function.
> 	(rs6000_aix_asm_output_dwarf_datarel): New function.
> 	* config/rs6000/rs6000.c (rs6000_xcoff_asm_init_sections):
> 	Remove assignment of exception_section.

This ChangeLog entry seems very incomplete.  It doesn't mention the
changes for other architectures, or to defaults.h, or to the
documentation, for example.

--
Joseph S. Myers
jos...@codesourcery.com
Re: [patch] moving macro definitions to defaults.h
On Mon, 22 Sep 2014, Andrew MacLeod wrote:

> Joseph's solution was to identify these and instead put a default
> definition in defaults.h ...  then change all the uses to #if
> instead..  ie,
>
> #if BLAH
>
> This way we can ensure that the definition has been seen, and it will
> be a compile error if not.

No, my suggestion was that whenever possible we should change
preprocessor conditionals - #ifdef or #if - into C conditionals - if
(MACRO).  Changing from #ifdef to #if does nothing to make a missing
tm.h include produce an error - the undefined macro simply quietly gets
treated as 0 in preprocessor conditionals.  To get an error from #if in
such cases, you'd need to build GCC with -Wundef (together with the
existing -Werror), and I'd guess there are plenty of places that are
not -Wundef clean at present.

Now, I think moves of defaults to defaults.h are generally a good idea,
and that moves from defined/undefined to true/false semantics are also
a good idea - even if the way the macro is used means you can't take
the further step of converting from #if to if ().  They don't solve the
problem of making a missing tm.h include immediately visible, but they
*do* potentially help with future automatic refactoring to convert
target macros into hooks.

Obviously such moves do require checking the definitions and uses of
the macros in question; you need to make sure you catch all places that
use #ifdef / #if defined etc. on the macro (and make sure they have the
same default).  And if you're changing the semantics of the macro from
defined / undefined to true / false, you need to watch out for any
existing definitions with an empty expansion, or an expansion to 0,
etc.

--
Joseph S. Myers
jos...@codesourcery.com
Re: [patch] moving macro definitions to defaults.h
On Mon, 22 Sep 2014, David Malcolm wrote:

> There appears to be a particular implicit order in which headers must
> be included.  I notice that e.g. tm.h has:
>
>   #ifndef GCC_TM_H
>   #define GCC_TM_H
>
> so if we're going with this "no header file includes any other header
> file" model, would it make sense to add something like:
>
>   #ifndef GCC_TM_H
>   #error "tm.h must have been included by this point"
>   /* We need tm.h here so that we can see: BAR, BAZ, QUUX, etc.  */
>   #endif
>
> to header files needing it, thus expressing the expected dependencies
> explicitly?

In principle, yes.  In practice, some headers have definitions that
depend on tm.h but for most users this doesn't matter.  For example,
flags.h depends on SWITCHABLE_TARGET.  (I think the fix there is to
make most users use options.h instead, and move miscellaneous
declarations from flags.h to other headers.)  In some cases, the target
macro may be used only in a macro expansion.  (BITS_PER_UNIT isn't
strictly a target macro any more, but when it was, its uses in tree.h
were an example of that.  tree.h still depends on the target macros
NO_DOLLAR_IN_LABEL, NO_DOT_IN_LABEL and
TARGET_DLLIMPORT_DECL_ATTRIBUTES, however, but we shouldn't make all
tree.h users include tm.h.)

--
Joseph S. Myers
jos...@codesourcery.com
Remove LIBGCC2_LONG_DOUBLE_TYPE_SIZE target macro
{LONG_DOUBLE_TYPE_SIZE}.  If you don't define this, the
-default is @code{LONG_DOUBLE_TYPE_SIZE}.
-@end defmac
-
 @defmac LIBGCC2_GNU_PREFIX
 This macro corresponds to the @code{TARGET_LIBFUNC_GNU_PREFIX}
 target hook and should be defined if that hook is overriden to be
 true.  It
Index: gcc/doc/tm.texi.in
===================================================================
--- gcc/doc/tm.texi.in	(revision 215458)
+++ gcc/doc/tm.texi.in	(working copy)
@@ -1384,13 +1384,6 @@
 the target machine.  If you don't define this, the
 @code{BITS_PER_UNIT * 16}.
 @end defmac
 
-@defmac LIBGCC2_LONG_DOUBLE_TYPE_SIZE
-Define this macro if @code{LONG_DOUBLE_TYPE_SIZE} is not constant or
-if you want routines in @file{libgcc2.a} for a size other than
-@code{LONG_DOUBLE_TYPE_SIZE}.  If you don't define this, the
-default is @code{LONG_DOUBLE_TYPE_SIZE}.
-@end defmac
-
 @defmac LIBGCC2_GNU_PREFIX
 This macro corresponds to the @code{TARGET_LIBFUNC_GNU_PREFIX}
 target hook and should be defined if that hook is overriden to be
 true.  It
Index: gcc/system.h
===================================================================
--- gcc/system.h	(revision 215458)
+++ gcc/system.h	(working copy)
@@ -936,7 +936,8 @@ extern void fancy_abort (const char *, int, const
 	EXTRA_CONSTRAINT_STR EXTRA_MEMORY_CONSTRAINT		\
 	EXTRA_ADDRESS_CONSTRAINT CONST_DOUBLE_OK_FOR_CONSTRAINT_P	\
 	CALLER_SAVE_PROFITABLE LARGEST_EXPONENT_IS_NORMAL	\
-	ROUND_TOWARDS_ZERO SF_SIZE DF_SIZE XF_SIZE TF_SIZE LIBGCC2_TF_CEXT
+	ROUND_TOWARDS_ZERO SF_SIZE DF_SIZE XF_SIZE TF_SIZE LIBGCC2_TF_CEXT \
+	LIBGCC2_LONG_DOUBLE_TYPE_SIZE
 
 /* Hooks that are no longer used.  */
 #pragma GCC poison LANG_HOOKS_FUNCTION_MARK LANG_HOOKS_FUNCTION_FREE \
Index: libgcc/dfp-bit.h
===================================================================
--- libgcc/dfp-bit.h	(revision 215458)
+++ libgcc/dfp-bit.h	(working copy)
@@ -34,19 +34,21 @@ see the files COPYING3 and COPYING.RUNTIME respect
 #include "tm.h"
 #include "libgcc_tm.h"
 
-#ifndef LIBGCC2_LONG_DOUBLE_TYPE_SIZE
-#define LIBGCC2_LONG_DOUBLE_TYPE_SIZE LONG_DOUBLE_TYPE_SIZE
-#endif
-
 /* We need to know the size of long double that the C library supports.
    Don't use LIBGCC2_HAS_XF_MODE or LIBGCC2_HAS_TF_MODE here because
    some targets set both of those.  */
 
+#ifndef __LIBGCC_XF_MANT_DIG__
+#define __LIBGCC_XF_MANT_DIG__ 0
+#endif
 #define LONG_DOUBLE_HAS_XF_MODE \
-  (BITS_PER_UNIT == 8 && LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 80)
+  (__LDBL_MANT_DIG__ == __LIBGCC_XF_MANT_DIG__)
 
+#ifndef __LIBGCC_TF_MANT_DIG__
+#define __LIBGCC_TF_MANT_DIG__ 0
+#endif
 #define LONG_DOUBLE_HAS_TF_MODE \
-  (BITS_PER_UNIT == 8 && LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 128)
+  (__LDBL_MANT_DIG__ == __LIBGCC_TF_MANT_DIG__)
 
 /* Depending on WIDTH, define a number of macros:
Index: libgcc/libgcc2.c
===================================================================
--- libgcc/libgcc2.c	(revision 215458)
+++ libgcc/libgcc2.c	(working copy)
@@ -1866,29 +1866,25 @@ NAME (TYPE x, int m)
 # define CTYPE	SCtype
 # define MODE	sc
 # define CEXT	__LIBGCC_SF_FUNC_EXT__
-# define NOTRUNC __FLT_EVAL_METHOD__ == 0
+# define NOTRUNC __LIBGCC_SF_EXCESS_PRECISION__
 #elif defined(L_muldc3) || defined(L_divdc3)
 # define MTYPE	DFtype
 # define CTYPE	DCtype
 # define MODE	dc
 # define CEXT	__LIBGCC_DF_FUNC_EXT__
-# if LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 64
-# define NOTRUNC 1
-# else
-# define NOTRUNC __FLT_EVAL_METHOD__ == 0 || __FLT_EVAL_METHOD__ == 1
-# endif
+# define NOTRUNC __LIBGCC_DF_EXCESS_PRECISION__
 #elif defined(L_mulxc3) || defined(L_divxc3)
 # define MTYPE	XFtype
 # define CTYPE	XCtype
 # define MODE	xc
 # define CEXT	__LIBGCC_XF_FUNC_EXT__
-# define NOTRUNC 1
+# define NOTRUNC __LIBGCC_XF_EXCESS_PRECISION__
 #elif defined(L_multc3) || defined(L_divtc3)
 # define MTYPE	TFtype
 # define CTYPE	TCtype
 # define MODE	tc
 # define CEXT	__LIBGCC_TF_FUNC_EXT__
-# define NOTRUNC 1
+# define NOTRUNC __LIBGCC_TF_EXCESS_PRECISION__
 #else
 # error
 #endif
Index: libgcc/libgcc2.h
===================================================================
--- libgcc/libgcc2.h	(revision 215458)
+++ libgcc/libgcc2.h	(working copy)
@@ -34,10 +34,6 @@
 extern void __clear_cache (char *, char *);
 extern void __eprintf (const char *, const char *, unsigned int,
 		       const char *) __attribute__ ((__noreturn__));
 
-#ifndef LIBGCC2_LONG_DOUBLE_TYPE_SIZE
-#define LIBGCC2_LONG_DOUBLE_TYPE_SIZE LONG_DOUBLE_TYPE_SIZE
-#endif
-
 #ifdef __LIBGCC_HAS_SF_MODE__
 #define LIBGCC2_HAS_SF_MODE 1
 #else

--
Joseph S. Myers
jos...@codesourcery.com
Re: Fix i386 FP_TRAPPING_EXCEPTIONS
On Thu, 18 Sep 2014, Joseph S. Myers wrote:

> On Thu, 18 Sep 2014, Uros Bizjak wrote:
>
> > OK for mainline and release branches.
>
> I've omitted ia64 from the targets in the testcase in the release
> branch version, given the lack of any definition of
> FP_TRAPPING_EXCEPTIONS at all there.  (I think a definition as
> (~_fcw & 0x3f) should work for ia64, but haven't tested that.)

Here is an *untested* patch with that definition.

2014-09-19  Joseph Myers  <jos...@codesourcery.com>

	PR target/63312
	* config/ia64/sfp-machine.h (FE_EX_ALL, FP_TRAPPING_EXCEPTIONS):
	New macros.

Index: libgcc/config/ia64/sfp-machine.h
===================================================================
--- libgcc/config/ia64/sfp-machine.h	(revision 215389)
+++ libgcc/config/ia64/sfp-machine.h	(working copy)
@@ -56,6 +56,9 @@
 #define FP_EX_OVERFLOW		0x08
 #define FP_EX_UNDERFLOW		0x10
 #define FP_EX_INEXACT		0x20
+#define FP_EX_ALL \
+  (FP_EX_INVALID | FP_EX_DENORM | FP_EX_DIVZERO | FP_EX_OVERFLOW \
+   | FP_EX_UNDERFLOW | FP_EX_INEXACT)
 
 #define _FP_TININESS_AFTER_ROUNDING 1
 
@@ -67,6 +70,8 @@
       __sfp_handle_exceptions (_fex);	\
   } while (0);
 
+#define FP_TRAPPING_EXCEPTIONS (~_fcw & FP_EX_ALL)
+
 #define FP_RND_NEAREST	0
 #define FP_RND_ZERO	0xc00L
 #define FP_RND_PINF	0x800L

--
Joseph S. Myers
jos...@codesourcery.com
Re: Fix i386 FP_TRAPPING_EXCEPTIONS
On Thu, 18 Sep 2014, Uros Bizjak wrote:

> OK for mainline and release branches.

I've omitted ia64 from the targets in the testcase in the release
branch version, given the lack of any definition of
FP_TRAPPING_EXCEPTIONS at all there.  (I think a definition as
(~_fcw & 0x3f) should work for ia64, but haven't tested that.)

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH 2/2] Add patch for debugging compiler ICEs.
On Thu, 11 Sep 2014, Maxim Ostapenko wrote:

> In general, when cc1 or cc1plus ICEs, we try to reproduce the bug by
> running the compiler 3 times and comparing stderr and stdout on each
> attempt with the respective ones gotten as the result of the previous
> compiler run (we use temporary dump files to do this).  If these files
> are identical, we add the GCC configuration (e.g. target, configure
> options and version), the compiler command line and the preprocessed
> source code into the last dump file, containing the backtrace.
>
> Following Jakub's approach, we trigger ICE_EXIT_CODE instead of
> FATAL_EXIT_CODE in case of a DK_FATAL error, to distinguish ICEs from
> other fatal errors, so the try_generate_repro routine will be able to
> run even if a fatal_error occurred in the compiler.

I still don't understand what's going on here with exit codes.

Suppose cc1 calls fatal_error (not for an ICE, not -Wfatal-errors - a
normal DK_FATAL arising from a call to fatal_error).  What exit code
does it exit with?  What path leads to that exit code?  How does the
driver distinguish this from an ICE?

Suppose cc1 calls internal_error.  What exit code does it exit with?
What path leads to that exit code?  How does the driver distinguish
this from a call to fatal_error?

What about the above exit codes was different before the patch, such
that the driver ICE detection can only work given the diagnostic.c
changes?

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH] Add header guard to several header files.
On Fri, 19 Sep 2014, Kito Cheng wrote:

> Hi Joseph:
>
> Here is the updated patch and ChangeLog.  However, I don't have commit
> rights yet; can you help me to commit it?  Thanks.

Committed.

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH][PING] Enable -fsanitize-recover for KASan
On Thu, 18 Sep 2014, Jakub Jelinek wrote:

> Seems for -fdelete-null-pointer-checks we got it wrong too; IMHO for
> -fsanitize={null,{,returns-}nonnull-attribute,undefined} we want to
> disable it unconditionally, regardless of whether that option appears
> on the command line or not.  And we handle it right for
>
>   -fdelete-null-pointer-checks -fsanitize=undefined
>
> but not for
>
>   -fsanitize=undefined -fdelete-null-pointer-checks
>
> Joseph, thoughts where to override it instead (I mean, after all
> options are processed)?

finish_options is the obvious place to do that.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Fix ARM ICE for register var asm (pc) (PR target/60606)
On Wed, 17 Sep 2014, Alan Lawrence wrote:

> We've just noticed this patch causes an ICE in
> gcc.c-torture/execute/scal-to-vec1.c at -O3 when running with -fPIC on
> arm-none-linux-gnueabi and arm-none-linux-gnueabihf; test logs:

Which part causes the ICE?  The arm_hard_regno_mode_ok change relating
to modes assigned to CC_REGNUM, the arm_regno_class change relating to
PC_REGNUM, or something else?  Either of those would indicate something
very strange going on in LRA (maybe something else needs to change
somewhere as well, to stop attempts to use CC_REGNUM or PC_REGNUM
inappropriately?).

--
Joseph S. Myers
jos...@codesourcery.com
Re: [PATCH] Add header guard to several header files.
On Wed, 17 Sep 2014, Kito Cheng wrote:

> Updated patch

OK, except for the changes to target-def.h and target-hooks-macros.h.
(Those aren't exactly normal headers that could reasonably be included
more than once in a source file; they have dependencies on where they
get included and what's defined before/after inclusion.  So while I
suspect the include guards would not cause any problems in those
headers, it's not obvious they're desirable either.)

--
Joseph S. Myers
jos...@codesourcery.com
Fix i386 FP_TRAPPING_EXCEPTIONS
The i386 sfp-machine.h defines FP_TRAPPING_EXCEPTIONS in a way that is
always wrong: it treats a set bit as indicating the exception is
trapping, when actually a set bit (both for 387 and SSE floating point)
indicates it is masked, and a clear bit indicates it is trapping.  This
patch fixes this bug.

Bootstrapped with no regressions on x86_64-unknown-linux-gnu.  OK to
commit?

Note to ia64 maintainers: it would be a good idea to add a definition
of FP_TRAPPING_EXCEPTIONS for ia64, and I expect the new test to fail
on ia64 until you do so.

libgcc:

2014-09-17  Joseph Myers  <jos...@codesourcery.com>

	PR target/63312
	* config/i386/sfp-machine.h (FP_TRAPPING_EXCEPTIONS): Treat
	clear bits, not set bits, as indicating trapping exceptions.

gcc/testsuite:

2014-09-17  Joseph Myers  <jos...@codesourcery.com>

	PR target/63312
	* gcc.dg/torture/float128-exact-underflow.c: New test.

Index: gcc/testsuite/gcc.dg/torture/float128-exact-underflow.c
===================================================================
--- gcc/testsuite/gcc.dg/torture/float128-exact-underflow.c	(revision 0)
+++ gcc/testsuite/gcc.dg/torture/float128-exact-underflow.c	(revision 0)
@@ -0,0 +1,41 @@
+/* Test that exact underflow in __float128 signals the underflow
+   exception if trapping is enabled, but does not raise the flag
+   otherwise.
+   */
+
+/* { dg-do run { target i?86-*-*gnu* x86_64-*-*gnu* ia64-*-*gnu* } } */
+/* { dg-options "-D_GNU_SOURCE" } */
+/* { dg-require-effective-target fenv_exceptions } */
+
+#include <fenv.h>
+#include <setjmp.h>
+#include <signal.h>
+#include <stdlib.h>
+
+volatile sig_atomic_t caught_sigfpe;
+sigjmp_buf buf;
+
+static void
+handle_sigfpe (int sig)
+{
+  caught_sigfpe = 1;
+  siglongjmp (buf, 1);
+}
+
+int
+main (void)
+{
+  volatile __float128 a = 0x1p-16382q, b = 0x1p-2q;
+  volatile __float128 r;
+  r = a * b;
+  if (fetestexcept (FE_UNDERFLOW))
+    abort ();
+  if (r != 0x1p-16384q)
+    abort ();
+  feenableexcept (FE_UNDERFLOW);
+  signal (SIGFPE, handle_sigfpe);
+  if (sigsetjmp (buf, 1) == 0)
+    r = a * b;
+  if (!caught_sigfpe)
+    abort ();
+  exit (0);
+}
Index: libgcc/config/i386/sfp-machine.h
===================================================================
--- libgcc/config/i386/sfp-machine.h	(revision 215323)
+++ libgcc/config/i386/sfp-machine.h	(working copy)
@@ -60,7 +60,7 @@
       __sfp_handle_exceptions (_fex);	\
   } while (0);
 
-#define FP_TRAPPING_EXCEPTIONS ((_fcw >> FP_EX_SHIFT) & FP_EX_ALL)
+#define FP_TRAPPING_EXCEPTIONS ((~_fcw >> FP_EX_SHIFT) & FP_EX_ALL)
 
 #define FP_ROUNDMODE (_fcw & FP_RND_MASK)
 #endif

--
Joseph S. Myers
jos...@codesourcery.com
Re: [C PATCH] Better diagnostics for C++ comments in C90 (PR c/61854)
On Wed, 17 Sep 2014, Marek Polacek wrote:

> Sure, updated.  Bootstrap in progress, regtested on x86_64-linux, ok
> for trunk?
>
> 2014-09-17  Marek Polacek  <pola...@redhat.com>
>
> 	PR c/61854
> libcpp/
> 	* init.c (struct lang_flags): Remove cplusplus_comments.
> 	(cpp_set_lang): Likewise.
> 	(post_options): Likewise.
> 	* lex.c (_cpp_lex_direct): Disallow C++ style comments in
> 	C90/C94.
> testsuite/
> 	* gcc.dg/cpp/pr61854-1.c: New test.
> 	* gcc.dg/cpp/pr61854-2.c: New test.
> 	* gcc.dg/cpp/pr61854-3.c: New test.
> 	* gcc.dg/cpp/pr61854-3.h: New test.
> 	* gcc.dg/cpp/pr61854-4.c: New test.
> 	* gcc.dg/cpp/pr61854-5.c: New test.
> 	* gcc.dg/cpp/pr61854-6.c: New test.
> 	* gcc.dg/cpp/pr61854-7.c: New test.
> 	* gcc.dg/cpp/pr61854-c90.c: New test.
> 	* gcc.dg/cpp/pr61854-c94.c: New test.

OK.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Fix pr61848, linux kernel miscompile
On Tue, 16 Sep 2014, Alan Modra wrote:

> gcc testsuite additions?  I decline.  It is too soon.  If you had read
> my patch submission you'll see that at some stage gcc was supposed to
> warn on conflicting section attributes, but hasn't done so for a very
> long time.  There needs to be some agreement on which direction we
> should go before I'm willing to spend even a small amount of time on
> the testsuite.

The point of testsuite additions is to verify the visible changes in
behavior intended to be caused by the patch (and, as applicable, that
the behavior doesn't change in other related areas where it's not meant
to change), rather than to test something that GCC doesn't do either
before or after the patch.  If the lack of tests is because the patch
is an RFC about what the desired behavior is, rather than an actual
submission for inclusion, then it's helpful to say so in the patch
submission.

> Also, a test for merging tls model attributes runs into the problem
> that this can only be done in a target independent way by looking at
> dumps, and the tls model dump is currently broken.

If there is a reason some aspect of the change can't readily be tested,
that should be stated in the patch submission (along with examples of
the affected code that can't readily be put into suitable form for the
testsuite).

> Come to think of it, what if I decline to make any testsuite
> additions?  I'm asking because you're a steering committee member, and

Then the patch isn't ready for review.  Documentation and testcases are
the first thing I look at when reviewing C front-end changes; the
testcases are the primary evidence that the patch does what it's meant
to do, and without them I won't generally try to review the code
changes.
There's no requirement for test-driven development, but personally I
prefer to write the documentation and tests before the rest of the
patch (and make sure the tests do fail with the unmodified compiler,
unless they are tests of related cases that already work but I want to
make sure don't get broken) - though in the course of implementing the
patch I expect to find other related cases that result in more tests
being written, and to modify exactly what I expect from the tests I
wrote earlier.

(I also find it a pain when backporting patches to packages that don't
expect testcases as a norm for all patches, if the author didn't
include testsuite coverage with their patch, as it makes it much harder
to tell if the backport is working properly.  Or if a problem was
caused by a patch that was committed without testcases - again, it's
hard to tell if a fix affects the fix to the original issue the patch
was supposed to address.)

--
Joseph S. Myers
jos...@codesourcery.com
Re: ptx preliminary address space fixes [1/4]
On Tue, 16 Sep 2014, Richard Biener wrote:

> Hmm.  How is it with other composite types like vectors and complex?
> It's bad that the middle-end needs to follow a specific frontend's
> need.  Why's the representation tied so closely together?

Complex types aren't derived types in C terms; they don't have an
element type, but a corresponding real type.  Vectors should presumably
be treated like complex types.  So both can have qualifiers.

> OTOH that address-spaces are qualifiers is an implementation detail
> (and maybe not the very best).  So I don't see how the C frontend
> needs to view them as qualifiers?

It's not an implementation detail, it's how TR 18037 defines them, and
thus how the C front end should represent them in order to follow the
requirements of TR 18037.  If something different is appropriate on
GIMPLE, when GIMPLE gets its own type system independent of trees then
the lowering could of course change this sort of thing.

(I think the fixed-point support, also from TR 18037, would better be
implemented through lowering from fixed-point types at front-end level
to special (e.g. saturating) operations on normal types and modes,
rather than carrying a load of special types and modes through to the
back end.)

--
Joseph S. Myers
jos...@codesourcery.com
Re: [patch] allowing configure --target=e500v[12]-etc
On Tue, 16 Sep 2014, Olivier Hainque wrote:

> 2014-09-16  Olivier Hainque  <hain...@adacore.com>
>
> toplevel/
> 	* config.sub: Accept e500v[12] cpu names.  Canonicalize to
> 	powerpc and add a "spe" suffix to the os name when required to
> 	select the proper ABI and not already there.

config.sub patches have to go to config-patches first; we only ever
import the latest unmodified config.sub and config.guess from
config.git, without making local changes.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Flatten function.h
On Tue, 16 Sep 2014, Andrew MacLeod wrote:

> I did an include file reduction on all the language/*.[ch] and core
> *.[ch] files, but left the target files with the full complement of 7
> includes that function.h used to have.  It's probably easier when this
> is all done to fully reduce the targets one at a time...  there are so
> many nooks and crannies I figured I'd bust something right now if I
> tried to do all the targets as well :-)

How did you determine what includes to remove?  You appear to have
removed tm.h includes from various files that do in fact use target
macros; maybe they get it indirectly included by some other header, but
I thought a principle of this flattening was to avoid relying on such
indirect inclusions.  Because of possible use of target macros in
#ifdef conditionals, compiling with the include removed is not a
sufficient condition for removing it.

cfgrtl.c
gimple-fold.c
mode-switching.c
tree-inline.c
vmsdbgout.c
fortran/f95-lang.c
fortran/trans-decl.c
objc/objc-act.c

--
Joseph S. Myers
jos...@codesourcery.com
Re: ptx preliminary address space fixes [1/4]
On Tue, 16 Sep 2014, Bernd Schmidt wrote:

> > It's not an implementation detail, it's how TR 18037 defines them,
> > and thus how the C front end should represent them in order to
> > follow the requirements of TR 18037.
>
> My position is that standards do not mandate how our internal data
> structures should look like, and we should be striving to make them
> consistent.

My position is that the structures in the front end should correspond
to how the language is actually defined, so that the most obvious way
of accessing some property of an entity in the front end actually gets
that property as it is defined in the standard, and not something
similar but confusingly different defined by GCC.  It's the job of
genericizing / gimplifying to convert from structures that closely
correspond to the source program and the language standard into ones
that are more convenient for language-independent processing and code
generation.

(That TYPE_MAIN_VARIANT maps an array of qualified type to an array of
the corresponding unqualified type necessitates lots of special cases
in the front end to avoid applying TYPE_MAIN_VARIANT to array types,
since in C terms array types are always unqualified and are unrelated
to an array of the corresponding unqualified element type.)

--
Joseph S. Myers
jos...@codesourcery.com
Remove LIBGCC2_TF_CEXT target macro
Index: libgcc/libgcc2.c
===================================================================
--- libgcc/libgcc2.c	(revision 215300)
+++ libgcc/libgcc2.c	(working copy)
@@ -1865,17 +1865,16 @@ NAME (TYPE x, int m)
 # define MTYPE	SFtype
 # define CTYPE	SCtype
 # define MODE	sc
-# define CEXT	f
+# define CEXT	__LIBGCC_SF_FUNC_EXT__
 # define NOTRUNC __FLT_EVAL_METHOD__ == 0
 #elif defined(L_muldc3) || defined(L_divdc3)
 # define MTYPE	DFtype
 # define CTYPE	DCtype
 # define MODE	dc
+# define CEXT	__LIBGCC_DF_FUNC_EXT__
 # if LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 64
-# define CEXT	l
 # define NOTRUNC 1
 # else
-# define CEXT
 # define NOTRUNC __FLT_EVAL_METHOD__ == 0 || __FLT_EVAL_METHOD__ == 1
 # endif
 #elif defined(L_mulxc3) || defined(L_divxc3)
@@ -1882,17 +1881,13 @@ NAME (TYPE x, int m)
 # define MTYPE	XFtype
 # define CTYPE	XCtype
 # define MODE	xc
-# define CEXT	l
+# define CEXT	__LIBGCC_XF_FUNC_EXT__
 # define NOTRUNC 1
 #elif defined(L_multc3) || defined(L_divtc3)
 # define MTYPE	TFtype
 # define CTYPE	TCtype
 # define MODE	tc
-# if LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 128
-# define CEXT	l
-# else
-# define CEXT	LIBGCC2_TF_CEXT
-# endif
+# define CEXT	__LIBGCC_TF_FUNC_EXT__
 # define NOTRUNC 1
 #else
 # error

--
Joseph S. Myers
jos...@codesourcery.com
Re: ptx preliminary address space fixes [1/4]
On Wed, 17 Sep 2014, Bernd Schmidt wrote:

> On 09/16/2014 11:18 PM, Joseph S. Myers wrote:
>> (That TYPE_MAIN_VARIANT maps an array of qualified type to an array of
>> corresponding unqualified type necessitates lots of special cases in
>> the front end to avoid applying TYPE_MAIN_VARIANT to array types,
>> since in C terms array types are always unqualified and are unrelated
>> to an array of corresponding unqualified element type.)
>
> Sounds like you want a c_type_main_variant wrapper then?  What exactly
> breaks if you ignore the problem and apply TYPE_MAIN_VARIANT to arrays?

Anything where the C standard defines something in terms of the unqualified versions of types, or the set of qualifiers on a type, operates incorrectly (tests compatibility of the wrong types, etc.) if you apply TYPE_MAIN_VARIANT to arrays.

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Better diagnostics for C++ comments in C90 (PR c/61854)
On Mon, 15 Sep 2014, Marek Polacek wrote:

> On Mon, Sep 15, 2014 at 05:49:25PM +, Joseph S. Myers wrote:
>> On Mon, 15 Sep 2014, Marek Polacek wrote:
>>> We must be careful to properly handle code such as 1 //**/ 2, which
>>> has a different meaning in C90 and GNU90 mode.  New testcases test
>>> this.
>>
>> I don't think there's sufficient allowance here for other valid cases.
>> It's valid to have // inside #if 0 in C90, for example, so that must
>> not be diagnosed (must not have a pedwarn or error, at least, that
>> is).
>
> Good point, sorry about that.  Luckily this can be fixed just by
> checking pfile->state.skipping.  New test added.

This is getting closer, but it looks like you still treat it as a line comment when being skipped for C90, when actually it's not safe to treat it like that; you have to produce a '/' preprocessing token and continue tokenizing the rest of the line.  Consider the following code:

int i = 0
#if 0
// /*
#else
// */ +1
#endif
;

For C90, i gets value 0.  With // comments, it gets value 1.

> +  /* In C89/C94, C++ style comments are forbidden.  */
> +  else if ((CPP_OPTION (pfile, lang) == CLK_STDC89
> +	    || CPP_OPTION (pfile, lang) == CLK_STDC94))
> +    {
> +      /* But don't be confused about // immediately followed by *.  */
> +      if (buffer->cur[1] == '*'
> +	  || pfile->state.in_directive)

And this comment needs updating to reflect that it's not just //* where // can appear in valid C90 code in a way incompatible with treating it as a comment.

-- Joseph S. Myers jos...@codesourcery.com
Re: [C PATCH] Better diagnostics for C++ comments in C90 (PR c/61854)
On Mon, 15 Sep 2014, Marek Polacek wrote:

> We must be careful to properly handle code such as 1 //**/ 2, which has
> a different meaning in C90 and GNU90 mode.  New testcases test this.

I don't think there's sufficient allowance here for other valid cases.  It's valid to have // inside #if 0 in C90, for example, so that must not be diagnosed (must not have a pedwarn or error, at least, that is).  It's also valid to have it in a macro expansion; e.g.:

#define h(x) #x
#define s(x) h(x)
#define foo //

and then s(foo) must expand to the string "//".

Clearly, in any case, with or without the diagnostics, these cases should have testcases in the testsuite.  But because // is only invalid in C90 if it actually results in two consecutive / tokens (not just preprocessing tokens), as such consecutive tokens are not part of any valid C90 program, a more conservative approach may be needed to avoid errors for valid cases.

-- Joseph S. Myers jos...@codesourcery.com
Re: Fix pr61848, linux kernel miscompile
On Mon, 15 Sep 2014, Alan Modra wrote:

> This patch cures the linux kernel boot failure when compiled using
> trunk gcc.  (Andrew, apologies for hijacking your bugzilla, I started
> work on this before finding the bugzilla..)

Please include testcases in your patch for each case that you fix.

-- Joseph S. Myers jos...@codesourcery.com
Re: Remove LIBGCC2_HAS_?F_MODE target macros
On Fri, 12 Sep 2014, paul_kon...@dell.com wrote:

>> * SFmode would always have been supported in libgcc (the condition was
>>   BITS_PER_UNIT == 8, true for all current targets), but pdp11
>>   defaults to 64-bit float, and in that case SFmode would fail
>>   scalar_mode_supported_p.  I don't know if libgcc actually built for
>>   pdp11 (and the port may well no longer be being used), but this
>>   patch adds a scalar_mode_supported_p hook to it to ensure SFmode is
>>   treated as supported.
>
> I thought it does build.  I continue to work to keep that port alive.
> The change looks fine.  The ideal solution, I think, would be to handle
> the choice of float length that the pdp11 target has via the multilib
> machinery.  Currently it does not do that.  If multilibs were added for
> that at some point, would that require a change of the code in that
> hook?

I think the ideal is for the back end to accept a mode in scalar_mode_supported_p if it can generate something sensible (either inline code or calls to libgcc functions) for arithmetic on that mode, rather than ICEs or otherwise invalid code, even if the libgcc functions don't actually exist.  (Thus, ix86_scalar_mode_supported_p always considers TFmode to be supported, whether or not the libgcc support is present.)  On that basis, my hook to treat SFmode as always supported for pdp11 (so it can be accessed with __attribute__((mode(SF))), whether or not it's also float) seems to be the right thing.

(Various back ends would, if they adopted my ideal, then also need to add the libgcc_floating_mode_supported_p hook to indicate the conditional lack of libgcc support for certain modes.  E.g. for several back ends, TFmode is only supported in libgcc if it's long double, and most of the runtime support is expected to be in libc not libgcc, under symbol names from some ABI for that architecture.  In those cases, building in the libgcc support for e.g. __multc3 in the absence of libc support would be problematic, because it would reference undefined libc functions.)

-- Joseph S. Myers jos...@codesourcery.com
Re: DBL_DENORM_MIN should never be 0
On Thu, 11 Sep 2014, Marc Glisse wrote:

> I don't know what kind of test you have in mind, so I added a runtime
> test.  I am just guessing that it probably fails on alpha because of
> PR 58757; I can't test.  Computing d+d may be even more likely to
> trigger potential issues, if that's the goal.

Yes, a runtime test.  I don't think there should be an xfail without it actually having been tested to fail (and then such an xfail should come with a comment referencing the bug filed in Bugzilla).

-- Joseph S. Myers jos...@codesourcery.com
Re: DBL_DENORM_MIN should never be 0
On Thu, 11 Sep 2014, Marc Glisse wrote:

> On Thu, 11 Sep 2014, Joseph S. Myers wrote:
>> Yes, a runtime test.  I don't think there should be an xfail without
>> it actually having been tested to fail (and then such an xfail should
>> come with a comment referencing the bug filed in Bugzilla).
>
> Would it be ok with the attached testcase then?  (same ChangeLog)

Yes, OK with that test.

-- Joseph S. Myers jos...@codesourcery.com
Remove LIBGCC2_HAS_?F_MODE target macros
--- libgcc/fixed-bit.h	(revision 215170)
+++ libgcc/fixed-bit.h	(working copy)
@@ -45,21 +45,18 @@ see the files COPYING3 and COPYING.RUNTIME respect
    Ex: If we define FROM_QQ and TO_SI, the conversion from QQ to SI is
    generated.  */
 
-#ifndef LIBGCC2_LONG_DOUBLE_TYPE_SIZE
-#define LIBGCC2_LONG_DOUBLE_TYPE_SIZE LONG_DOUBLE_TYPE_SIZE
+#ifdef __LIBGCC_HAS_SF_MODE__
+#define LIBGCC2_HAS_SF_MODE 1
+#else
+#define LIBGCC2_HAS_SF_MODE 0
 #endif
-#ifndef LIBGCC2_HAS_SF_MODE
-#define LIBGCC2_HAS_SF_MODE (BITS_PER_UNIT == 8)
+#ifdef __LIBGCC_HAS_DF_MODE__
+#define LIBGCC2_HAS_DF_MODE 1
+#else
+#define LIBGCC2_HAS_DF_MODE 0
 #endif
-#ifndef LIBGCC2_HAS_DF_MODE
-#define LIBGCC2_HAS_DF_MODE \
-  (BITS_PER_UNIT == 8 \
-   && (__SIZEOF_DOUBLE__ * __CHAR_BIT__ == 64 \
-       || LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 64))
-#endif
-
 typedef int QItype	__attribute__ ((mode (QI)));
 typedef unsigned int UQItype	__attribute__ ((mode (QI)));
 typedef int HItype	__attribute__ ((mode (HI)));
Index: libgcc/libgcc2.h
===================================================================
--- libgcc/libgcc2.h	(revision 215170)
+++ libgcc/libgcc2.h	(working copy)
@@ -38,25 +38,28 @@ extern void __eprintf (const char *, const char *,
 #define LIBGCC2_LONG_DOUBLE_TYPE_SIZE LONG_DOUBLE_TYPE_SIZE
 #endif
 
-#ifndef LIBGCC2_HAS_SF_MODE
-#define LIBGCC2_HAS_SF_MODE (BITS_PER_UNIT == 8)
+#ifdef __LIBGCC_HAS_SF_MODE__
+#define LIBGCC2_HAS_SF_MODE 1
+#else
+#define LIBGCC2_HAS_SF_MODE 0
 #endif
-#ifndef LIBGCC2_HAS_DF_MODE
-#define LIBGCC2_HAS_DF_MODE \
-  (BITS_PER_UNIT == 8 \
-   && (__SIZEOF_DOUBLE__ * __CHAR_BIT__ == 64 \
-       || LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 64))
+#ifdef __LIBGCC_HAS_DF_MODE__
+#define LIBGCC2_HAS_DF_MODE 1
+#else
+#define LIBGCC2_HAS_DF_MODE 0
 #endif
-#ifndef LIBGCC2_HAS_XF_MODE
-#define LIBGCC2_HAS_XF_MODE \
-  (BITS_PER_UNIT == 8 && LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 80)
+#ifdef __LIBGCC_HAS_XF_MODE__
+#define LIBGCC2_HAS_XF_MODE 1
+#else
+#define LIBGCC2_HAS_XF_MODE 0
 #endif
-#ifndef LIBGCC2_HAS_TF_MODE
-#define LIBGCC2_HAS_TF_MODE \
-  (BITS_PER_UNIT == 8 && LIBGCC2_LONG_DOUBLE_TYPE_SIZE == 128)
+#ifdef __LIBGCC_HAS_TF_MODE__
+#define LIBGCC2_HAS_TF_MODE 1
+#else
+#define LIBGCC2_HAS_TF_MODE 0
 #endif
 
 #ifndef __LIBGCC_SF_MANT_DIG__

-- Joseph S. Myers jos...@codesourcery.com
Re: [Ping v2][PATCH] Add patch for debugging compiler ICEs.
On Wed, 10 Sep 2014, Jakub Jelinek wrote:

> On Tue, Sep 09, 2014 at 10:51:23PM +, Joseph S. Myers wrote:
>> On Thu, 28 Aug 2014, Maxim Ostapenko wrote:
>>> diff --git a/gcc/diagnostic.c b/gcc/diagnostic.c
>>> index 0cc7593..67b8c5b 100644
>>> --- a/gcc/diagnostic.c
>>> +++ b/gcc/diagnostic.c
>>> @@ -492,7 +492,7 @@ diagnostic_action_after_output (diagnostic_context *context,
>>>        real_abort ();
>>>        diagnostic_finish (context);
>>>        fnotice (stderr, "compilation terminated.\n");
>>> -      exit (FATAL_EXIT_CODE);
>>> +      exit (ICE_EXIT_CODE);
>>
>> Why?  This is the case for fatal_error.  FATAL_EXIT_CODE seems right
>> for this, and ICE_EXIT_CODE wrong.
>
> So that the driver can understand the difference between an ICE and
> other fatal errors (e.g. sorry etc.).  Users are typically using the
> driver, and for them it matters what exit code is returned from the
> driver, not from cc1/cc1plus etc.

Well, I think the next revision of the patch submission needs more explanation in this area.  What exit codes do cc1 and the driver now return for (normal error, fatal error, ICE), and what do they return after the patch, and how does the change to the fatal_error case avoid incorrect changes if either cc1 or the driver called fatal_error (as opposed to either cc1 or the driver having an ICE)?  Maybe that explanation should be in the form of a comment on this exit call, explaining why the counterintuitive use of ICE_EXIT_CODE in the DK_FATAL case is correct.

-- Joseph S. Myers jos...@codesourcery.com
Re: DBL_DENORM_MIN should never be 0
On Wed, 10 Sep 2014, Marc Glisse wrote:

> Hello,
>
> according to the C++ standard, numeric_limits<T>::denorm_min should
> return min (not 0) when there are no denormals.  Tested with
> bootstrap+testsuite on x86_64-linux-gnu.  I also tested a basic make
> all-gcc for vax (the only target without denormals, apparently) and the
> macro did change as expected.  The next step might be to define
> has_denorm as false in more cases (-mno-ieee on alpha, -ffast-math on
> x86, etc.) but that's a different issue.
>
> (This is C++, but I believe Joseph is the floating-point expert, hence
> the cc.)

This is a C issue as well (for C11 *_TRUE_MIN).

> gcc/c-family/
> 2014-09-10  Marc Glisse  marc.gli...@inria.fr
>
> 	PR target/58757
> 	* c-cppbuiltin.c (builtin_define_float_constants): Correct
> 	__*_DENORM_MIN__ without denormals.

I think there should be some sort of testcase that these values aren't 0.

-- Joseph S. Myers jos...@codesourcery.com
Re: [PATCH] gcc parallel make check
On Wed, 10 Sep 2014, David Malcolm wrote:

> (A) test discovery: write out a fine-grained Makefile in which *every*
> testcase is its own make target (to the extreme limit of
> parallelizability, e.g. on the per-input-file level)

The DejaGnu design doesn't allow test discovery in general (as the set of tests can depend on the results of previous tests; tests are run through arbitrary Tcl code in .exp files which both enumerates them and runs them).  Hopefully the GCC tests are well enough structured not to run into this problem.  (Being able to enumerate tests separately from running them, and to run each test independently, is the QMTest model, though there are other issues with how that handles some things that come up in toolchain testing.)

-- Joseph S. Myers jos...@codesourcery.com
Re: Encode Wnormalized= in c.opt
On Fri, 5 Sep 2014, Manuel López-Ibáñez wrote:

> gcc/ChangeLog:
>
> 2014-09-05  Manuel López-Ibáñez  m...@gcc.gnu.org
>
> 	* doc/invoke.texi (Wnormalized=): Update.
>
> libcpp/ChangeLog:
>
> 2014-09-05  Manuel López-Ibáñez  m...@gcc.gnu.org
>
> 	* include/cpplib.h (struct cpp_options): Declare warn_normalize
> 	as int instead of enum.
>
> gcc/c-family/ChangeLog:
>
> 2014-09-05  Manuel López-Ibáñez  m...@gcc.gnu.org
>
> 	* c.opt (Wnormalized): New.
> 	(Wnormalized=): Use Enum and RejectNegative.
> 	* c-opts.c (c_common_handle_option): Do not handle Wnormalized
> 	here.
>
> gcc/testsuite/ChangeLog:
>
> 2014-09-05  Manuel López-Ibáñez  m...@gcc.gnu.org
>
> 	* gcc.dg/cpp/warn-normalized-3.c: Delete useless dg-prune-output.

OK.

-- Joseph S. Myers jos...@codesourcery.com
Re: auto generate cpp_reason to gcc OPT_W table
On Fri, 5 Sep 2014, Manuel López-Ibáñez wrote:

> This adds a new option property CppReason which maps to a warning
> reason code in cpplib.h.  This allows us to auto-generate
> cpp_reason_option_codes[], which maps from CPP warning codes to GCC
> ones, thus making it a bit harder to forget to update this table (which
> evidently has happened a lot in the past).
>
> Unfortunately, to use cpp warning codes we need to include cpplib.h in
> options.h, and this would conflict with other parts of the compiler,
> thus I protect the table with #ifdef GCC_C_COMMON_H, and make sure in
> c-common.c that cpplib.h is not included before c-common.h.
>
> This patch applies on top of the previous patch about Wnormalized= but
> it is mostly independent of it.
>
> Bootstrapped and regression tested on x86_64-linux-gnu.  OK?

OK.

-- Joseph S. Myers jos...@codesourcery.com