Re: Continue strict-volatile-bitfields fixes

2012-05-24 Thread Thomas Schwinge
Hi!

Ping.

On Wed, 16 May 2012 19:14:45 +0200, I wrote:
 Ping.
 
 On Wed, 09 May 2012 10:01:55 +0800, I wrote:
  On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek ja...@redhat.com wrote:
   On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
  GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources,
  lmode is VOIDmode.)
 
  Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
  or should a later optimization pass be able to figure out that
  »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
  common->code, and then be able to conflate these?  Any suggestions
  where/how to tackle this?
 
 The BIT_FIELD_REF is somewhat of a red-herring.  It is created by 
 fold-const.c
 in optimize_bit_field_compare, code that I think should be removed 
 completely.
 Or it needs to be made aware of strict-volatile bitfield and C++ 
 memory model
 details.
   
   I'd actually very much prefer the latter, just disable
   optimize_bit_field_compare for strict-volatile bitfield mode and when
   avoiding load data races in C++ memory model (that isn't going to be
   default, right?).  This optimization is useful, and it is solely about
   loads, so even C++ memory model usually shouldn't care.
  
  I can't comment on the C++ memory model bits, but I have now tested the
  following patch (fixes the issue for SH, no regressions for ARM, x86):
  
  gcc/
  * fold-const.c (optimize_bit_field_compare): Abort early in the strict
  volatile bitfields case.
  
  Index: fold-const.c
  ===
  --- fold-const.c(revision 186856)
  +++ fold-const.c(working copy)
  @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
 tree mask;
 tree offset;
   
  +  /* In the strict volatile bitfields case, doing code changes here may prevent
  +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
  +  if (flag_strict_volatile_bitfields > 0)
  +    return 0;
  +
     /* Get all the information about the extractions being done.  If the bit size
        if the same as the size of the underlying object, we aren't doing an
        extraction at all and so can do nothing.  We also don't want to


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-05-24 Thread Jakub Jelinek
On Thu, May 24, 2012 at 02:29:18PM +0200, Thomas Schwinge wrote:
 Hi!
 
 Ping.

Ok.

 * fold-const.c (optimize_bit_field_compare): Abort early in the strict
 volatile bitfields case.
   
   Index: fold-const.c
   ===
   --- fold-const.c  (revision 186856)
   +++ fold-const.c  (working copy)
   @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
  tree mask;
  tree offset;

    +  /* In the strict volatile bitfields case, doing code changes here may prevent
    +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
    +  if (flag_strict_volatile_bitfields > 0)
    +    return 0;
    +
       /* Get all the information about the extractions being done.  If the bit size
          if the same as the size of the underlying object, we aren't doing an
          extraction at all and so can do nothing.  We also don't want to

Jakub


Re: Continue strict-volatile-bitfields fixes

2012-05-16 Thread Thomas Schwinge
Hi!

Ping.

On Wed, 09 May 2012 10:01:55 +0800, I wrote:
 On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek ja...@redhat.com wrote:
  On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
 GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
 VOIDmode.)

 Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
 or should a later optimization pass be able to figure out that
 »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
 common->code, and then be able to conflate these?  Any suggestions
 where/how to tackle this?

The BIT_FIELD_REF is somewhat of a red-herring.  It is created by 
fold-const.c
in optimize_bit_field_compare, code that I think should be removed 
completely.
Or it needs to be made aware of strict-volatile bitfield and C++ memory 
model
details.
  
  I'd actually very much prefer the latter, just disable
  optimize_bit_field_compare for strict-volatile bitfield mode and when
  avoiding load data races in C++ memory model (that isn't going to be
  default, right?).  This optimization is useful, and it is solely about
  loads, so even C++ memory model usually shouldn't care.
 
 I can't comment on the C++ memory model bits, but I have now tested the
 following patch (fixes the issue for SH, no regressions for ARM, x86):
 
 gcc/
   * fold-const.c (optimize_bit_field_compare): Abort early in the strict
   volatile bitfields case.
 
 Index: fold-const.c
 ===
 --- fold-const.c  (revision 186856)
 +++ fold-const.c  (working copy)
 @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
tree mask;
tree offset;
  
 +  /* In the strict volatile bitfields case, doing code changes here may prevent
 +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
 +  if (flag_strict_volatile_bitfields > 0)
 +    return 0;
 +
    /* Get all the information about the extractions being done.  If the bit size
       if the same as the size of the underlying object, we aren't doing an
       extraction at all and so can do nothing.  We also don't want to


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-05-08 Thread Thomas Schwinge
Hi!

On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek ja...@redhat.com wrote:
 On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
VOIDmode.)
   
Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
or should a later optimization pass be able to figure out that
»BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
common->code, and then be able to conflate these?  Any suggestions
where/how to tackle this?
   
   The BIT_FIELD_REF is somewhat of a red-herring.  It is created by 
   fold-const.c
   in optimize_bit_field_compare, code that I think should be removed 
   completely.
   Or it needs to be made aware of strict-volatile bitfield and C++ memory 
   model
   details.
 
 I'd actually very much prefer the latter, just disable
 optimize_bit_field_compare for strict-volatile bitfield mode and when
 avoiding load data races in C++ memory model (that isn't going to be
 default, right?).  This optimization is useful, and it is solely about
 loads, so even C++ memory model usually shouldn't care.

I can't comment on the C++ memory model bits, but I have now tested the
following patch (fixes the issue for SH, no regressions for ARM, x86):

gcc/
* fold-const.c (optimize_bit_field_compare): Abort early in the strict
volatile bitfields case.

Index: fold-const.c
===
--- fold-const.c(revision 186856)
+++ fold-const.c(working copy)
@@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
   tree mask;
   tree offset;
 
+  /* In the strict volatile bitfields case, doing code changes here may prevent
+     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
+  if (flag_strict_volatile_bitfields > 0)
+    return 0;
+
   /* Get all the information about the extractions being done.  If the bit size
  if the same as the size of the underlying object, we aren't doing an
  extraction at all and so can do nothing.  We also don't want to


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-04-27 Thread Jakub Jelinek
On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
   GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
   VOIDmode.)
  
   Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
   or should a later optimization pass be able to figure out that
   »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
   common->code, and then be able to conflate these?  Any suggestions
   where/how to tackle this?
  
  The BIT_FIELD_REF is somewhat of a red-herring.  It is created by 
  fold-const.c
  in optimize_bit_field_compare, code that I think should be removed 
  completely.
  Or it needs to be made aware of strict-volatile bitfield and C++ memory 
  model
  details.

I'd actually very much prefer the latter, just disable
optimize_bit_field_compare for strict-volatile bitfield mode and when
avoiding load data races in C++ memory model (that isn't going to be
default, right?).  This optimization is useful, and it is solely about
loads, so even C++ memory model usually shouldn't care.
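
To make the load-only point concrete, here is a small sketch (an editor
illustration, not from the original mail; little-endian byte order and
the exact field layout are assumptions): a wide load that covers a
neighbouring field reads it but never writes it, so it cannot introduce
a data race, unlike a wide read-modify-write store would.

    #include <stdint.h>
    #include <string.h>

    struct pair { uint8_t a; uint8_t b; };

    /* Testing 'a' via a wide load also reads the byte holding 'b',
       but nothing is written back, so no race on 'b' is created.  */
    int test_a (struct pair *p)
    {
      uint16_t w;
      memcpy (&w, p, sizeof w);   /* 16-bit load covering a and b */
      return (uint8_t) w != 0;    /* use only a's bits (little endian) */
    }

    /* A wide *store* to update 'a' would also have to write b's byte;
       that is the case the C++ memory model forbids, and it is not
       what optimize_bit_field_compare generates.  */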

Jakub


Re: Continue strict-volatile-bitfields fixes

2012-04-26 Thread Thomas Schwinge
Hi!

On Wed, 25 Apr 2012 13:51:16 +0200, Richard Guenther 
richard.guent...@gmail.com wrote:
 On Wed, Apr 25, 2012 at 1:27 PM, Thomas Schwinge
 tho...@codesourcery.com wrote:
  On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
  Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:
 
      extern void abort (void);
 
      enum tree_code
      {
        BIND_EXPR,
      };
      struct tree_common
      {
        enum tree_code code:8;
      };
      void
      voidify_wrapper_expr (struct tree_common *common)
      {
        switch (common->code)
          {
          case BIND_EXPR:
            if (common->code != BIND_EXPR)
              abort ();
          }
      }
 
  The diff for -fdump-tree-all between compiling with
  -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
  with the following, right in 20030922-1.c.003t.original:
 
  diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
  --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
  +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
  @@ -7,7 +7,7 @@
   switch ((int) common->code)
     {
       case 0:;
  -      if (common->code != 0)
  +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
           {
             abort ();
           }
 
  That is, for -fno-strict-volatile-bitfields the second instance of
  »common->code« is a component_ref, whereas for
  -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
  optimized as intended, the latter will not.  So, what should be emitted
  in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
  what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
  later optimization stages learn to better deal with a bit_field_ref, and
  where would this have to happen?
 
  I figured out why this generic code is behaving differently for SH
  targets.  I compared to ARM as well as x86, for which indeed the
  optimization possibility is not lost even when compiling with
  -fstrict-volatile-bitfields.
 
  The component_ref inside the if statement (but not the one in the switch
  statement) is fed into optimize_bit_field_compare where it is optimized
  to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
  get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
  »#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
  SImode, hoping for better code generation »since it may eliminate
  subsequent memory access if subsequent accesses occur to other fields in
  the same word of the structure, but to different bytes«.  (Well, the
  converse happens here...)  After that, in optimize_bit_field_compare, for
  ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
  aborted, whereas for SH it will be performed, that is, the component_ref
  is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.
 
  If I manually force SH's SImode back to QImode (that is, abort
  optimize_bit_field_compare), the later optimization passes work as they
  do for ARM and x86.
 
  Before Bernd's r182545, optimize_bit_field_compare returned early (that
  is, aborted this optimization attempt), because »lbitsize ==
  GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
  VOIDmode.)
 
  Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
  or should a later optimization pass be able to figure out that
  »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
  common->code, and then be able to conflate these?  Any suggestions
  where/how to tackle this?
 
 The BIT_FIELD_REF is somewhat of a red-herring.  It is created by fold-const.c
 in optimize_bit_field_compare, code that I think should be removed completely.
 Or it needs to be made aware of strict-volatile bitfield and C++ memory model
 details.

As discussed with richi on IRC, I have now done testing (just the gcc
testsuite) with optimize_bit_field_compare completely removed.  For SH,
this cures my testcase posted above as well as the following FAILs,
all as expected:

-FAIL: gcc.dg/tree-ssa/20030922-1.c scan-tree-dump-times dom2 "if " 0
+PASS: gcc.dg/tree-ssa/20030922-1.c scan-tree-dump-times dom2 "if " 0

-FAIL: gcc.dg/tree-ssa/foldconst-3.c scan-tree-dump-not optimized "tree_code_type"
+PASS: gcc.dg/tree-ssa/foldconst-3.c scan-tree-dump-not optimized "tree_code_type"

-FAIL: gcc.dg/tree-ssa/vrp15.c scan-tree-dump-times vrp1 "tree_code_length.42." 1
+PASS: gcc.dg/tree-ssa/vrp15.c scan-tree-dump-times vrp1 "tree_code_length.42." 1

No regressions for ARM and x86.  I also manually confirmed that the
x86-specific tests introduced for PR37248 (gcc.target/i386/pr37248-*.c)
still produce the optimized (intermediate and assembly) code as suggested
in the *.c files.  (So, this optimization is (also) done elsewhere,
instead of in optimize_bit_field_compare.)  So, is removing it indeed the
way to go?

Instead of removing it completely 

Re: Continue strict-volatile-bitfields fixes

2012-04-25 Thread Thomas Schwinge
Hi!

On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
 Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:
 
 extern void abort (void);
 
 enum tree_code
 {
   BIND_EXPR,
 };
 struct tree_common
 {
   enum tree_code code:8;
 };
 void
 voidify_wrapper_expr (struct tree_common *common)
 {
    switch (common->code)
      {
      case BIND_EXPR:
        if (common->code != BIND_EXPR)
          abort ();
      }
 }
 
 The diff for -fdump-tree-all between compiling with
 -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
 with the following, right in 20030922-1.c.003t.original:
 
 diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
 --- fnsvb/20030922-1.c.003t.original2012-04-19 16:51:18.322150866 +0200
 +++ fsvb/20030922-1.c.003t.original 2012-04-19 16:49:18.132088498 +0200
 @@ -7,7 +7,7 @@
    switch ((int) common->code)
      {
        case 0:;
 -      if (common->code != 0)
 +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
        {
          abort ();
        }
 
 That is, for -fno-strict-volatile-bitfields the second instance of
 »common->code« is a component_ref, whereas for
 -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
 optimized as intended, the latter will not.  So, what should be emitted
 in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
 what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
 later optimization stages learn to better deal with a bit_field_ref, and
 where would this have to happen?

I figured out why this generic code is behaving differently for SH
targets.  I compared to ARM as well as x86, for which indeed the
optimization possibility is not lost even when compiling with
-fstrict-volatile-bitfields.

The component_ref inside the if statement (but not the one in the switch
statement) is fed into optimize_bit_field_compare where it is optimized
to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
»#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
SImode, hoping for better code generation »since it may eliminate
subsequent memory access if subsequent accesses occur to other fields in
the same word of the structure, but to different bytes«.  (Well, the
converse happens here...)  After that, in optimize_bit_field_compare, for
ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
aborted, whereas for SH it will be performed, that is, the component_ref
is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.

If I manually force SH's SImode back to QImode (that is, abort
optimize_bit_field_compare), the later optimization passes work as they
do for ARM and x86.

Before Bernd's r182545, optimize_bit_field_compare returned early (that
is, aborted this optimization attempt), because »lbitsize ==
GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
VOIDmode.)

Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
or should a later optimization pass be able to figure out that
»BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
common->code, and then be able to conflate these?  Any suggestions
where/how to tackle this?
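
For readers comparing the two forms, here is a C rendition (an editor
sketch, not from the original mail; it assumes little-endian layout and
a 4-byte storage unit for the bitfield):

    #include <string.h>

    struct tree_common { unsigned code : 8; };

    /* The COMPONENT_REF form: a narrow 8-bit field load.  */
    unsigned narrow (struct tree_common *common)
    {
      return common->code;
    }

    /* The folded form, mirroring BIT_FIELD_REF <*common, 32, 0> & 255:
       load the whole 32-bit storage unit and mask.  It yields the same
       value on a little-endian target, yet later passes do not prove
       it equal to the narrow form above.  */
    unsigned wide (struct tree_common *common)
    {
      unsigned w;
      memcpy (&w, common, sizeof w);   /* 32-bit load */
      return w & 255;
    }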


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-04-25 Thread Richard Guenther
On Wed, Apr 25, 2012 at 1:27 PM, Thomas Schwinge
tho...@codesourcery.com wrote:
 Hi!

 On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
 Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:

     extern void abort (void);

     enum tree_code
     {
       BIND_EXPR,
     };
     struct tree_common
     {
       enum tree_code code:8;
     };
     void
     voidify_wrapper_expr (struct tree_common *common)
     {
        switch (common->code)
          {
          case BIND_EXPR:
            if (common->code != BIND_EXPR)
             abort ();
         }
     }

 The diff for -fdump-tree-all between compiling with
 -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
 with the following, right in 20030922-1.c.003t.original:

 diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
 --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
 +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
 @@ -7,7 +7,7 @@
     switch ((int) common->code)
       {
         case 0:;
  -      if (common->code != 0)
  +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
          {
            abort ();
          }

 That is, for -fno-strict-volatile-bitfields the second instance of
 »common->code« is a component_ref, whereas for
 -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
 optimized as intended, the latter will not.  So, what should be emitted
 in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
 what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
 later optimization stages learn to better deal with a bit_field_ref, and
 where would this have to happen?

 I figured out why this generic code is behaving differently for SH
 targets.  I compared to ARM as well as x86, for which indeed the
 optimization possibility is not lost even when compiling with
 -fstrict-volatile-bitfields.

 The component_ref inside the if statement (but not the one in the switch
 statement) is fed into optimize_bit_field_compare where it is optimized
 to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
 get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
 »#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
 SImode, hoping for better code generation »since it may eliminate
 subsequent memory access if subsequent accesses occur to other fields in
 the same word of the structure, but to different bytes«.  (Well, the
 converse happens here...)  After that, in optimize_bit_field_compare, for
 ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
 aborted, whereas for SH it will be performed, that is, the component_ref
 is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.

 If I manually force SH's SImode back to QImode (that is, abort
 optimize_bit_field_compare), the later optimization passes work as they
 do for ARM and x86.

 Before Bernd's r182545, optimize_bit_field_compare returned early (that
 is, aborted this optimization attempt), because »lbitsize ==
 GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
 VOIDmode.)

 Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
 or should a later optimization pass be able to figure out that
 »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
 common->code, and then be able to conflate these?  Any suggestions
 where/how to tackle this?

The BIT_FIELD_REF is somewhat of a red-herring.  It is created by fold-const.c
in optimize_bit_field_compare, code that I think should be removed completely.
Or it needs to be made aware of strict-volatile bitfield and C++ memory model
details.

Richard.


 Grüße,
  Thomas


RE: Continue strict-volatile-bitfields fixes

2012-04-20 Thread Joey Ye


 -Original Message-
 From: Thomas Schwinge [mailto:tho...@codesourcery.com]
 Sent: Friday, April 20, 2012 01:46
 To: Bernd Schmidt; Richard Earnshaw
 Cc: Richard Guenther; Joey Ye; d...@redhat.com; GCC Patches; Mitchell,
 Mark
 Subject: Re: Continue strict-volatile-bitfields fixes
 That is, for -fno-strict-volatile-bitfields the second instance of
 »common->code« is a component_ref, whereas for
 -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
 optimized as intended, the latter will not.  So, what should be emitted
 in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
 what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
 later optimization stages learn to better deal with a bit_field_ref, and
 where would this have to happen?
r182545 does change compiler behavior for non-volatile bitfields, as shown in
the following ARM example.

typedef struct {
  unsigned short a:8, b:8;
} BitStruct;
BitStruct bits;

void foo()
{
bits.a=3;
}

compiled with latest trunk which includes r182545
1. -fstrict-volatile-bitfields -O0:
        ldrh    r2, [r3, #0]    @ movhi
        movs    r1, #3
        bfi     r2, r1, #0, #8
        strh    r2, [r3, #0]    @ movhi

2. -fno-strict-volatile-bitfields -O0:
        movs    r2, #3
        strb    r2, [r3, #0]

3. -fstrict-volatile-bitfields -Os:
        movs    r2, #3
        strb    r2, [r3, #0]

compiled without r182545:
4. -fstrict-volatile-bitfields -O0:
        movs    r2, #3
        strb    r2, [r3, #0]

Comparing 1 to 4, r182545 does change the behavior of non-volatile bitfields.

Compared to 2, 1 misses the opportunity to replace the bit-field insertion
with a plain byte store.  But in 3, the combine pass merges the bit-field
instructions into a single instruction.

So r182545 doesn't impact performance with optimization enabled, at least on
ARM.  Not sure if this combine always succeeds on all targets.

- Joey





Re: Continue strict-volatile-bitfields fixes

2012-04-19 Thread Thomas Schwinge
Hi!

On Wed, 18 Apr 2012 18:16:32 +0200, Bernd Schmidt ber...@codesourcery.com 
wrote:
 On 04/18/2012 06:14 PM, Richard Earnshaw wrote:
  On 18/04/12 16:37, Thomas Schwinge wrote:
  gcc/testsuite/
 * gcc.dg/tree-ssa/20030922-1.c: Compile with
 -fno-strict-volatile-bitfields.
 * gcc.dg/tree-ssa/foldconst-3.c: Likewise.
 * gcc.dg/tree-ssa/vrp15.c: Likewise.
 
  
  None of these have any volatile bitfields, so the option should be a no-op.

That's what I thought, too.  :-)

 The problem is that we have to treat normal bitfields differently as
 well, since a variable may later be declared as volatile.

Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:

extern void abort (void);

enum tree_code
{
  BIND_EXPR,
};
struct tree_common
{
  enum tree_code code:8;
};
void
voidify_wrapper_expr (struct tree_common *common)
{
  switch (common->code)
    {
    case BIND_EXPR:
      if (common->code != BIND_EXPR)
        abort ();
    }
}

The diff for -fdump-tree-all between compiling with
-fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
with the following, right in 20030922-1.c.003t.original:

diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
--- fnsvb/20030922-1.c.003t.original2012-04-19 16:51:18.322150866 +0200
+++ fsvb/20030922-1.c.003t.original 2012-04-19 16:49:18.132088498 +0200
@@ -7,7 +7,7 @@
   switch ((int) common->code)
     {
       case 0:;
-      if (common->code != 0)
+      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
      {
        abort ();
      }

That is, for -fno-strict-volatile-bitfields the second instance of
»common->code« is a component_ref, whereas for
-fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
optimized as intended, the latter will not.  So, what should be emitted
in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
later optimization stages learn to better deal with a bit_field_ref, and
where would this have to happen?


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-04-18 Thread Thomas Schwinge
Hi!

As it's been some time: this was the discussion about
-fstrict-volatile-bitfields (as enabled by default on SH, for example)
breaking some testsuite bits due to missed optimizations (even for
bitfields that are not volatile).

On Tue, 21 Feb 2012 09:40:07 +, Richard Earnshaw rearn...@arm.com wrote:
 On 20/02/12 17:51, Bernd Schmidt wrote:
  On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
  I'm not sure why it should be.  Can't a user write
 
  #ifdef __cplusplus
  #define BOOL bool
  #else
  #define bool _Bool
  #endif
 
  struct x {
volatile BOOL a : 1;
volatile BOOL b : 1;
volatile unsigned char c : 6;
volatile BOOL d : 1;
  } y;
 
  ?
 
  If you've got strict volatile bitfields, then the concept here is that
  the access uses the declared type for accessing the member.  Since in
  the ABI bool has a defined size, then it should access the member using
  that size.
 
  On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
  y.a to mean a read of a, b and c, but not d.
  
  What are your thoughts on the argument about enums?
 
 Similar.  A particular enumeration type has a defined size, so accesses
 should use that size.

In that case, would it be appropriate to apply the following?

gcc/testsuite/
* gcc.dg/tree-ssa/20030922-1.c: Compile with
-fno-strict-volatile-bitfields.
* gcc.dg/tree-ssa/foldconst-3.c: Likewise.
* gcc.dg/tree-ssa/vrp15.c: Likewise.

Index: gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c
===
--- gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c  (revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c  (working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fdump-tree-dom2" } */
+/* { dg-options "-O1 -fno-strict-volatile-bitfields -fdump-tree-dom2" } */
   
 extern void abort (void);
 
Index: gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c
===
--- gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c (revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c (working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-optimized -fno-short-enums" } */
+/* { dg-options "-O2 -fno-strict-volatile-bitfields -fdump-tree-optimized -fno-short-enums" } */
 typedef const union tree_node *const_tree;
 typedef struct
 {
Index: gcc/testsuite/gcc.dg/tree-ssa/vrp15.c
===
--- gcc/testsuite/gcc.dg/tree-ssa/vrp15.c   (revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/vrp15.c   (working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-vrp1" } */
+/* { dg-options "-O2 -fno-strict-volatile-bitfields -fdump-tree-vrp1" } */
 
 
 extern void abort (void) __attribute__ ((__noreturn__));


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-04-18 Thread Richard Earnshaw
On 18/04/12 16:37, Thomas Schwinge wrote:
 Hi!
 
 As it's been some time: this was the discussion about
 -fstrict-volatile-bitfields (as enabled by default on SH, for example)
 breaking some testsuite bits due to missed optimizations (even for
 bitfields that are not volatile).
 
 On Tue, 21 Feb 2012 09:40:07 +, Richard Earnshaw rearn...@arm.com wrote:
 On 20/02/12 17:51, Bernd Schmidt wrote:
 On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
 I'm not sure why it should be.  Can't a user write

 #ifdef __cplusplus
 #define BOOL bool
 #else
 #define bool _Bool
 #endif

 struct x {
   volatile BOOL a : 1;
   volatile BOOL b : 1;
   volatile unsigned char c : 6;
   volatile BOOL d : 1;
 } y;

 ?

 If you've got strict volatile bitfields, then the concept here is that
 the access uses the declared type for accessing the member.  Since in
 the ABI bool has a defined size, then it should access the member using
 that size.

 On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
 y.a reads a, b and c, but not d.

 What are your thoughts on the argument about enums?

 Similar.  A particular enumeration type has a defined size, so accesses
 should use that size.
 
 In that case, would it be appropriate to apply the following?
 
 gcc/testsuite/
   * gcc.dg/tree-ssa/20030922-1.c: Compile with
   -fno-strict-volatile-bitfields.
   * gcc.dg/tree-ssa/foldconst-3.c: Likewise.
   * gcc.dg/tree-ssa/vrp15.c: Likewise.


None of these have any volatile bitfields, so the option should be a no-op.





Re: Continue strict-volatile-bitfields fixes

2012-04-18 Thread Bernd Schmidt
On 04/18/2012 06:14 PM, Richard Earnshaw wrote:
 On 18/04/12 16:37, Thomas Schwinge wrote:
 gcc/testsuite/
  * gcc.dg/tree-ssa/20030922-1.c: Compile with
  -fno-strict-volatile-bitfields.
  * gcc.dg/tree-ssa/foldconst-3.c: Likewise.
  * gcc.dg/tree-ssa/vrp15.c: Likewise.

 
 None of these have any volatile bitfields, so the option should be a no-op.

The problem is that we have to treat normal bitfields differently as
well, since a variable may later be declared as volatile.
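
A sketch of that hazard (an editor illustration, not Bernd's code): the
field is declared without volatile, but the object may still be accessed
through a volatile-qualified lvalue later, so the layout decision has to
be made as if the field could become volatile.

    struct s { unsigned f : 8; };

    struct s obj;

    unsigned read_later_as_volatile (void)
    {
      /* The bitfield's mode was chosen without knowing this
         would happen.  */
      volatile struct s *p = &obj;
      return p->f;   /* access width must follow the declared type */
    }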


Bernd


Re: Continue strict-volatile-bitfields fixes

2012-02-21 Thread Richard Earnshaw
On 20/02/12 17:51, Bernd Schmidt wrote:
 On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
 I'm not sure why it should be.  Can't a user write

 #ifdef __cplusplus
 #define BOOL bool
 #else
 #define bool _Bool
 #endif

 struct x {
   volatile BOOL a : 1;
   volatile BOOL b : 1;
   volatile unsigned char c : 6;
   volatile BOOL d : 1;
 } y;

 ?

 If you've got strict volatile bitfields, then the concept here is that
 the access uses the declared type for accessing the member.  Since in
 the ABI bool has a defined size, then it should access the member using
 that size.

 On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
 y.a reads a, b and c, but not d.
 
 What are your thoughts on the argument about enums?
 
 
 Bernd
 

Similar.  A particular enumeration type has a defined size, so accesses
should use that size.

R.



Re: Continue strict-volatile-bitfields fixes

2012-02-20 Thread Richard Guenther
On Fri, Feb 17, 2012 at 9:51 PM, Thomas Schwinge
tho...@codesourcery.com wrote:
 Hi!

 How do we move this issue forward?

 On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt ber...@codesourcery.com 
 wrote:
 On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
  So, I still think this patch is the best way to go forward, and it
  does
  fix incorrect code generation. Would appreciate an OK.
 
  Ping.
 
  If you don't hear any objections within a week, please proceed.

 That was committed a while ago. The part in stor-layout.c that stops us
 from promoting bitfields to normal fields apparently caused some
 testsuite regressions in sh tests, where some optimization dump scans
 show that we can't perform the optimizations if there are BIT_FIELD_REFs
 rather than a normal member access.

 The testcases use things like
   enum something field:8;
 and I think applying strict-volatile-bitfields to enums is probably
 meaningless. Should we adjust semantics so that the compiler is free to
 optimize enum bitfields? The patch would look something like the below.
 Thomas has tested this on arm and sh with our internal tree.


 Bernd


  Index: gcc/stor-layout.c
  ===
  --- gcc/stor-layout.c (revision 355696)
  +++ gcc/stor-layout.c (working copy)
  @@ -665,8 +665,7 @@
             may make a volatile object later.  */
          if (TYPE_SIZE (type) != 0
              && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
  -           && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
  -           && flag_strict_volatile_bitfields <= 0)
  +           && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
           {
             enum machine_mode xmode
               = mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
  @@ -674,7 +673,12 @@

             if (xmode != BLKmode
                 && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
  -              && (known_align == 0 || known_align >= xalign))
  +              && (known_align == 0 || known_align >= xalign)
  +              && (flag_strict_volatile_bitfields <= 0
  +                  /* Same size makes strict volatile bitfields
  +                     meaningless.  */
  +                  || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
  +                  /* Strict volatile bitfields shouldn't apply to
  +                     enums.  */
  +                  || TREE_CODE (type) == ENUMERAL_TYPE))

What about BOOLEAN_TYPE bitfields?  Thus, why not explicitly
spell out && TREE_CODE (type) == INTEGER_TYPE?

               {
                 DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
                 DECL_MODE (decl) = xmode;



 Grüße,
  Thomas


Re: Continue strict-volatile-bitfields fixes

2012-02-20 Thread Richard Earnshaw
On 20/02/12 17:28, Bernd Schmidt wrote:
 On 02/20/2012 12:14 PM, Richard Guenther wrote:
 On Fri, Feb 17, 2012 at 9:51 PM, Thomas Schwinge
 tho...@codesourcery.com wrote:
 Hi!

 How do we move this issue forward?

 On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt ber...@codesourcery.com 
 wrote:
 That was committed a while ago. The part in stor-layout.c that stops us
 from promoting bitfields to normal fields apparently caused some
 testsuite regressions in sh tests, where some optimization dump scans
 show that we can't perform the optimizations if there are BIT_FIELD_REFs
 rather than a normal member access.

 The testcases use things like
   enum something field:8;
 and I think applying strict-volatile-bitfields to enums is probably
 meaningless. Should we adjust semantics so that the compiler is free to
 optimize enum bitfields? The patch would look something like the below.
 

 What about BOOLEAN_TYPE bitfields?  Thus, why not explicitly
 spell out && TREE_CODE (type) == INTEGER_TYPE?
 
 That would work for me, if we can all agree that
 -fstrict-volatile-bitfields should be restricted to INTEGER_TYPE.
 
 
 Bernd
 


I'm not sure why it should be.  Can't a user write

#ifdef __cplusplus
#define BOOL bool
#else
#define bool _Bool
#endif

struct x {
  volatile BOOL a : 1;
  volatile BOOL b : 1;
  volatile unsigned char c : 6;
  volatile BOOL d : 1;
} y;

?

If you've got strict volatile bitfields, then the concept here is that
the access uses the declared type for accessing the member.  Since in
the ABI bool has a defined size, then it should access the member using
that size.

On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
y.a reads a, b and c, but not d.


R.



Re: Continue strict-volatile-bitfields fixes

2012-02-17 Thread Thomas Schwinge
Hi!

How do we move this issue forward?

On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt ber...@codesourcery.com 
wrote:
 On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
  So, I still think this patch is the best way to go forward, and it
  does
  fix incorrect code generation. Would appreciate an OK.
 
  Ping.
  
  If you don't hear any objections within a week, please proceed.
 
 That was committed a while ago. The part in stor-layout.c that stops us
 from promoting bitfields to normal fields apparently caused some
 testsuite regressions in sh tests, where some optimization dump scans
 show that we can't perform the optimizations if there are BIT_FIELD_REFs
 rather than a normal member access.
 
 The testcases use things like
   enum something field:8;
 and I think applying strict-volatile-bitfields to enums is probably
 meaningless. Should we adjust semantics so that the compiler is free to
 optimize enum bitfields? The patch would look something like the below.
 Thomas has tested this on arm and sh with our internal tree.
 
 
 Bernd
 
 
 Index: gcc/stor-layout.c
 ===
 --- gcc/stor-layout.c (revision 355696)
 +++ gcc/stor-layout.c (working copy)
 @@ -665,8 +665,7 @@
        may make a volatile object later.  */
     if (TYPE_SIZE (type) != 0
         && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
 -       && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
 -       && flag_strict_volatile_bitfields <= 0)
 +       && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
       {
         enum machine_mode xmode
           = mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
 @@ -674,7 +673,12 @@

     if (xmode != BLKmode
         && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
 -       && (known_align == 0 || known_align >= xalign))
 +       && (known_align == 0 || known_align >= xalign)
 +       && (flag_strict_volatile_bitfields <= 0
 +           /* Same size makes strict volatile bitfields
 +              meaningless.  */
 +           || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
 +           /* Strict volatile bitfields shouldn't apply to
 +              enums.  */
 +           || TREE_CODE (type) == ENUMERAL_TYPE))
       {
         DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
         DECL_MODE (decl) = xmode;
 


Grüße,
 Thomas




Re: Continue strict-volatile-bitfields fixes

2012-01-23 Thread Bernd Schmidt
On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
 So, I still think this patch is the best way to go forward, and it
 does
 fix incorrect code generation. Would appreciate an OK.

 Ping.
 
 If you don't hear any objections within a week, please proceed.

That was committed a while ago. The part in stor-layout.c that stops us
from promoting bitfields to normal fields apparently caused some
testsuite regressions in sh tests, where some optimization dump scans
show that we can't perform the optimizations if there are BIT_FIELD_REFs
rather than a normal member access.

The testcases use things like
  enum something field:8;
and I think applying strict-volatile-bitfields to enums is probably
meaningless. Should we adjust semantics so that the compiler is free to
optimize enum bitfields? The patch would look something like the below.
Thomas has tested this on arm and sh with our internal tree.


Bernd


Index: gcc/stor-layout.c
===
--- gcc/stor-layout.c   (revision 355696)
+++ gcc/stor-layout.c   (working copy)
@@ -665,8 +665,7 @@
 may make a volatile object later.  */
 if (TYPE_SIZE (type) != 0
     && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-    && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
-    && flag_strict_volatile_bitfields <= 0)
+    && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
   {
     enum machine_mode xmode
       = mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
@@ -674,7 +673,12 @@

 if (xmode != BLKmode
     && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
-    && (known_align == 0 || known_align >= xalign))
+    && (known_align == 0 || known_align >= xalign)
+    && (flag_strict_volatile_bitfields <= 0
+        /* Same size makes strict volatile bitfields
+           meaningless.  */
+        || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
+        /* Strict volatile bitfields shouldn't apply to
+           enums.  */
+        || TREE_CODE (type) == ENUMERAL_TYPE))
{
  DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
  DECL_MODE (decl) = xmode;



Re: Continue strict-volatile-bitfields fixes

2012-01-23 Thread DJ Delorie

 and I think applying strict-volatile-bitfields to enums is probably
 meaningless.

MCU vendors would disagree.  If a location is volatile, it's most
likely hardware, and must be accessed in the user-specified way.
Randomly accessing adjacent locations could cause system failure.
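
A typical example of what DJ describes (hypothetical device and address,
purely for illustration): a memory-mapped peripheral register where only
accesses of the declared width are safe.

    /* Hypothetical 16-bit control register of an MCU peripheral.  */
    struct ctrl_reg {
      volatile unsigned short enable : 1;
      volatile unsigned short mode   : 3;
      volatile unsigned short        : 12;   /* reserved bits */
    };

    #define CTRL (*(struct ctrl_reg *) 0x40001000)   /* made-up address */

    void start_device (void)
    {
      /* With -fstrict-volatile-bitfields this is one 16-bit
         read-modify-write; a byte or 32-bit access could touch
         adjacent registers and cause the failures DJ mentions.  */
      CTRL.enable = 1;
    }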


Re: Continue strict-volatile-bitfields fixes

2012-01-23 Thread Bernd Schmidt
On 01/23/2012 08:51 PM, DJ Delorie wrote:
 
 and I think applying strict-volatile-bitfields to enums is probably
 meaningless.
 
 MCU vendors would disagree.  If a location is volatile, it's most
 likely hardware, and must be accessed in the user-specified way.
 Randomly accessing adjacent locations could cause system failure.

This is not an issue here. The optimization in question in stor-layout
tries to find out if a bitfield (volatile or not) is laid out in a way
that's equivalent to a natural machine mode. For example, a 2-byte
aligned 16 bit value can be HImode on most targets.

The earlier patch restricts this optimization for non-volatile bitfields
as well, on the grounds that an object may later be declared as volatile
even if the bitfield isn't declared volatile in the struct definition.

Strict-volatile-bitfields is concerned with the opposite case: ensuring
that an int x:8; is accessed only with int-sized accesses. My thought
was that this is probably pointless with enums, which I think aren't
usually thought of as a specific size unlike char/short/int. So, if we
see enum something x:8; we don't want to prevent the compiler from
treating this as a plain QImode non-bitfield member.
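
Written out as code (an editor sketch of the proposed semantics, not
committed behavior):

    enum tree_code { BIND_EXPR };

    struct s {
      enum tree_code code : 8;  /* enum: no fixed-size expectation, so
                                   the compiler would stay free to treat
                                   this as a plain QImode member */
      int flags : 8;            /* int: under -fstrict-volatile-bitfields,
                                   accessed only with int-sized (SImode)
                                   loads and stores */
    };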


Bernd


Re: Continue strict-volatile-bitfields fixes

2011-12-20 Thread Bernd Schmidt
On 11/29/11 17:35, Mitchell, Mark wrote:
 So, I still think this patch is the best way to go forward, and it
 does
 fix incorrect code generation. Would appreciate an OK.

 Ping.
 
 If you don't hear any objections within a week, please proceed.

Committed now after bootstrapping i686-linux and retesting arm-linux
(some timeout permutations in libstdc++, expected with my qemu setup).


Bernd


Re: Continue strict-volatile-bitfields fixes

2011-12-20 Thread Ye Joey
On Wed, Dec 21, 2011 at 12:46 AM, Bernd Schmidt ber...@codesourcery.com wrote:
 On 11/29/11 17:35, Mitchell, Mark wrote:
 So, I still think this patch is the best way to go forward, and it
 does
 fix incorrect code generation. Would appreciate an OK.

 Ping.

 If you don't hear any objections within a week, please proceed.

 Committed now after bootstrapping i686-linux and retesting arm-linux
 (some timeout permutations in libstdc++, expected with my qemu setup).


 Bernd
Noticed a typo in toplev.c:
s/fstrict-volatile-bitfield/fstrict-volatile-bitfields/

Also I'd like to add the following target-independent test case:

Joey Ye  joey...@arm.com
   * gcc.dg/volatile-bitfields-2.c: New test.

--- gcc/testsuite/gcc.dg/volatile-bitfields-2.c (revision 0)
+++ gcc/testsuite/gcc.dg/volatile-bitfields-2.c (revision 0)
@@ -0,0 +1,15 @@
+/* { dg-do run } */
+/* { dg-options "-fstrict-volatile-bitfields" } */
+
+extern void abort(void);
+struct thing {
+  volatile unsigned short a: 8;
+  volatile unsigned short b: 8;
+} t = {1,2};
+
+int main()
+{
+  t.a = 3;
+  if (t.a !=3 || t.b !=2) abort();
+  return 0;
+}


Re: Continue strict-volatile-bitfields fixes

2011-12-15 Thread Ye Joey
Ping, PR middle-end/51200

Tailored from Bernd's patch, with a target-independent test case added.
Now it is a pure middle-end fix.

OK for trunk and 4.6?

Bernd Schmidt  bernds_...@t-online.de
gcc/
* expr.c (store_field): Avoid a direct store if the mode is larger
than the size of the bit field.
* stor-layout.c (layout_decl): If flag_strict_volatile_bitfields,
treat non-volatile bit fields like volatile ones.
* toplev.c (process_options): Disallow combination of
-fstrict-volatile-bitfields and ABI versions less than 2.

gcc/testsuite/
* gcc.target/arm/volatile-bitfields-4.c: New test.
* c-c++-common/abi-bf.c: New test.

 Joey Ye  joey...@arm.com
* gcc.dg/volatile-bitfields-2.c: New test.
* g++.dg/abi/bitfield12.C: no-strict-volatile-bitfields.
Index: gcc/toplev.c
===
--- gcc/toplev.c	(revision 182353)
+++ gcc/toplev.c	(working copy)
@@ -1330,6 +1330,13 @@
 flag_ira_region
   = optimize_size || !optimize ? IRA_REGION_ONE : IRA_REGION_MIXED;
 
+  if (flag_strict_volatile_bitfields > 0 && !abi_version_at_least (2))
+    {
+      warning (0, "-fstrict-volatile-bitfields disabled; "
+	       "it is incompatible with ABI versions < 2");
+      flag_strict_volatile_bitfields = 0;
+    }
+
   /* Unrolling all loops implies that standard loop unrolling must also
  be done.  */
   if (flag_unroll_all_loops)
Index: gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c
===
--- gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c	(revision 0)
+++ gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c	(revision 0)
@@ -0,0 +1,30 @@
+/* { dg-require-effective-target arm_eabi } */
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+/* { dg-final { scan-assembler-times "ldr\[\\t \]+\[^\n\]*,\[\\t \]*\\\[\[^\n\]*\\\]" 2 } } */
+/* { dg-final { scan-assembler-times "str\[\\t \]+\[^\n\]*,\[\\t \]*\\\[\[^\n\]*\\\]" 2 } } */
+/* { dg-final { scan-assembler-not "strb" } } */
+
+struct thing {
+  unsigned a: 8;
+  unsigned b: 8;
+  unsigned c: 8;
+  unsigned d: 8;
+};
+
+struct thing2 {
+  volatile unsigned a: 8;
+  volatile unsigned b: 8;
+  volatile unsigned c: 8;
+  volatile unsigned d: 8;
+};
+
+void test1(volatile struct thing *t)
+{
+  t->a = 5;
+}
+
+void test2(struct thing2 *t)
+{
+  t->a = 5;
+}
Index: gcc/testsuite/gcc.dg/volatile-bitfields-2.c
===
--- gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
+++ gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
@@ -0,0 +1,15 @@
+/* { dg-do run } */
+/* { dg-options "-fstrict-volatile-bitfields" } */
+
+extern void abort(void);
+struct thing {
+  volatile unsigned short a: 8;
+  volatile unsigned short b: 8;
+} t = {1,2};
+
+int main()
+{
+  t.a = 3;
+  if (t.a !=3 || t.b !=2) abort();
+  return 0;
+}
Index: gcc/testsuite/g++.dg/abi/bitfield12.C
===
--- gcc/testsuite/g++.dg/abi/bitfield12.C	(revision 182353)
+++ gcc/testsuite/g++.dg/abi/bitfield12.C	(working copy)
@@ -1,4 +1,4 @@
-// { dg-options "-Wabi -fabi-version=1" }
+// { dg-options "-Wabi -fabi-version=1 -fno-strict-volatile-bitfields" }
 
 struct S { // { dg-warning "ABI" }
   char c : 1024; // { dg-warning "width" }
Index: gcc/testsuite/c-c++-common/abi-bf.c
===
--- gcc/testsuite/c-c++-common/abi-bf.c	(revision 0)
+++ gcc/testsuite/c-c++-common/abi-bf.c	(revision 0)
@@ -0,0 +1,3 @@
+/* { dg-warning "incompatible" } */
+/* { dg-do compile } */
+/* { dg-options "-fstrict-volatile-bitfields -fabi-version=1" } */
Index: gcc/expr.c
===
--- gcc/expr.c	(revision 182353)
+++ gcc/expr.c	(working copy)
@@ -6327,6 +6327,8 @@
 		|| bitpos % GET_MODE_ALIGNMENT (mode))
 	       && SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (target)))
 	      || (bitpos % BITS_PER_UNIT != 0)))
+      || (bitsize >= 0 && mode != BLKmode
+	  && GET_MODE_BITSIZE (mode) > bitsize)
   /* If the RHS and field are a constant size and the size of the
 	 RHS isn't the same size as the bitfield, we must use bitfield
 	 operations.  */
Index: gcc/stor-layout.c
===
--- gcc/stor-layout.c	(revision 182353)
+++ gcc/stor-layout.c	(working copy)
@@ -622,12 +622,13 @@
 	  /* See if we can use an ordinary integer mode for a bit-field.
 	     Conditions are: a fixed size that is correct for another mode,
 	     occupying a complete byte or bytes on proper boundary,
-	     and not volatile or not -fstrict-volatile-bitfields.  */
+	     and not -fstrict-volatile-bitfields.  If the latter is set,
+	     we unfortunately can't check TREE_THIS_VOLATILE, as a cast
+	     may make a volatile object later.  */
 	  if (TYPE_SIZE (type) != 0
 	   

Re: Continue strict-volatile-bitfields fixes

2011-11-29 Thread Bernd Schmidt
On 11/11/11 17:22, Bernd Schmidt wrote:
 On 11/11/11 16:30, Joey Ye wrote:
 -fstrict-volatile-bitfields doesn't work correctly in some cases 
 when storing into a volatile bit-field. 

 Bernd provided a fix here about 1 year ago:
 http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
 But it is pending to trunk. Here are my humble opinions and hopefully 
 we can revive it:
[...]
 So, I still think this patch is the best way to go forward, and it does
 fix incorrect code generation. Would appreciate an OK.

Ping.


Bernd


RE: Continue strict-volatile-bitfields fixes

2011-11-29 Thread Mitchell, Mark
  So, I still think this patch is the best way to go forward, and it
 does
  fix incorrect code generation. Would appreciate an OK.
 
 Ping.

If you don't hear any objections within a week, please proceed.

Thank you,

Mark


Re: Continue strict-volatile-bitfields fixes

2011-11-17 Thread Ye Joey
To raise awareness, there is now a tracker at http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51200

- Joey


RE: Continue strict-volatile-bitfields fixes

2011-11-13 Thread Mitchell, Mark
  Bernd provided a fix here about 1 year ago:
  http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
  But it is pending to trunk. Here are my humble opinions and hopefully
  we can revive it:

  Yeah. At the time I thought the objections were a bit pointless. At
 worst, the added code in some of the target ports is irrelevant, as
 admitted by DJ later in the thread, but nothing stops port maintainers
 from adding code to disallow -fabi-version for their port. Since none
 do
 this (AFAIK), I still believe it's best to make all ports behave
 identically with this patch.
 
 So, I still think this patch is the best way to go forward, and it does
 fix incorrect code generation. Would appreciate an OK.

I agree.  I remember reviewing this patch, and being disappointed that it 
wasn't included.  Given my lack of day-to-day involvement in GCC development, 
I'm not willing to simply say check it in, but I would encourage reviewers to 
accept the patch.

Mark


Re: Continue strict-volatile-bitfields fixes

2011-11-11 Thread Bernd Schmidt
On 11/11/11 16:30, Joey Ye wrote:
 -fstrict-volatile-bitfields doesn't work correctly in some cases 
 when storing into a volatile bit-field. 
 
 Bernd provided a fix here about 1 year ago:
 http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
 But it is pending to trunk. Here are my humble opinions and hopefully 
 we can revive it:
 
 1. The fix could have helped lots of those who use volatile bit-fields, 
 but has been blocked for a year by ABI version 1, a feature that I believe 
 no one nowadays is using with the latest gcc. Either erroring out on ABI
 version 1 for some targets, or just revising the failing ABI test case, is OK for me.

Yeah. At the time I thought the objections were a bit pointless. At
worst, the added code in some of the target ports is irrelevant, as
admitted by DJ later in the thread, but nothing stops port maintainers
from adding code to disallow -fabi-version for their port. Since none do
this (AFAIK), I still believe it's best to make all ports behave
identically with this patch.

So, I still think this patch is the best way to go forward, and it does
fix incorrect code generation. Would appreciate an OK.


Bernd