Hi everybody,

On 08/16/18 08:36, Bernd Edlinger wrote:
> Jeff Law wrote:
>> I wonder if the change to how we set up the initializers is ultimately
>> changing the section those go into and ultimately causing an overflow of
>> the .sdata section.
> 
> 
> Yes, that is definitely the case.
> Due to the -fmerge-all-constants option, named arrays with brace
> initializers look like string initializers and can go into the merge
> section if there are no embedded NUL chars.
> But these string constants can now be huge.
> 
> See my other patch about string merging:
> [PATCH] Handle not explicitly zero terminated strings in merge sections
> https://gcc.gnu.org/ml/gcc-patches/2018-08/msg00481.html
> 
> 
> Can this section overflow?
> 


Could someone try out whether this (untested) patch fixes the issue?


Thanks,
Bernd.


2018-08-18  Bernd Edlinger  <bernd.edlin...@hotmail.de>

	* expmed.c (simple_mem_bitfield_p): Do the shift right in signed
	arithmetic.
	* config/alpha/alpha.h (CONSTANT_ADDRESS_P): Avoid signed
	integer overflow.

Index: gcc/config/alpha/alpha.h
===================================================================
--- gcc/config/alpha/alpha.h	(revision 263611)
+++ gcc/config/alpha/alpha.h	(working copy)
@@ -678,7 +678,7 @@ enum reg_class {
 
 #define CONSTANT_ADDRESS_P(X)   \
   (CONST_INT_P (X)		\
-   && (unsigned HOST_WIDE_INT) (INTVAL (X) + 0x8000) < 0x10000)
+   && (UINTVAL (X) + 0x8000) < 0x10000)
 
 /* The macros REG_OK_FOR..._P assume that the arg is a REG rtx
    and check its validity for a certain class.
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 263611)
+++ gcc/expmed.c	(working copy)
@@ -579,8 +579,12 @@ static bool
 simple_mem_bitfield_p (rtx op0, poly_uint64 bitsize, poly_uint64 bitnum,
 		       machine_mode mode, poly_uint64 *bytenum)
 {
+  poly_int64 ibit = bitnum;
+  poly_int64 ibyte;
+  if (!multiple_p (ibit, BITS_PER_UNIT, &ibyte))
+    return false;
+  *bytenum = ibyte;
   return (MEM_P (op0)
-	  && multiple_p (bitnum, BITS_PER_UNIT, bytenum)
 	  && known_eq (bitsize, GET_MODE_BITSIZE (mode))
 	  && (!targetm.slow_unaligned_access (mode, MEM_ALIGN (op0))
 	      || (multiple_p (bitnum, GET_MODE_ALIGNMENT (mode))
