On Thu, Oct 10, 2019 at 8:06 PM Richard Sandiford
<richard.sandif...@arm.com> wrote:
>
> Wilco Dijkstra <wilco.dijks...@arm.com> writes:
> > ping
> >
> > Contrary to all documentation, SLOW_BYTE_ACCESS simply means accessing
> > bitfields by their declared type, which results in better code
> > generation on practically any target.
>
> The name is confusing, but the documentation looks accurate to me:
>
>     Define this macro as a C expression which is nonzero if accessing less
>     than a word of memory (i.e.@: a @code{char} or a @code{short}) is no
>     faster than accessing a word of memory, i.e., if such accesses
>     require more than one instruction or if there is no difference in cost
>     between byte and (aligned) word loads.
>
>     When this macro is not defined, the compiler will access a field by
>     finding the smallest containing object; when it is defined, a fullword
>     load will be used if alignment permits.  Unless byte accesses are
>     faster than word accesses, using word accesses is preferable since it
>     may eliminate subsequent memory access if subsequent accesses occur to
>     other fields in the same word of the structure, but to different bytes.
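
For illustration, a minimal C sketch of the case the documentation
describes (struct and field names are made up):

  struct s
  {
    unsigned int a : 7;
    unsigned int b : 9;
  };

  int
  sum (struct s *p)
  {
    /* With word-sized bitfield accesses the containing 32-bit word can
       be loaded once and both fields extracted from it; with "smallest
       containing object" accesses the compiler may emit two separate
       narrow loads of the same word.  */
    return p->a + p->b;
  }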
>
> > I'm thinking we should completely remove all trace of SLOW_BYTE_ACCESS
> > from GCC as it's confusing and useless.
>
> I disagree.  Some targets can optimise single-bit operations when the
> container is a byte, for example.
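
For example (a made-up sketch of that case):

  struct flags
  {
    unsigned char ready : 1;
    unsigned char done  : 1;
  };

  void
  mark_done (struct flags *f)
  {
    /* With a byte-sized container some targets can turn this into a
       single byte-wide read-modify-write (or even an "or" immediate on
       the byte in memory); forcing a word access widens both the load
       and the store.  */
    f->done = 1;
  }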

There's also less chance of store-to-load forwarding (STLF) issues when
_not_ forcing word accesses, because I think we do not use the same
macro to change code generation for writes (where a larger access
requires a read-modify-write cycle): stores would stay narrow while
loads get widened.

But this also means that a decision made with no information about the
context is going to be flawed.  Still, in general I'd do smaller reads
if they are not significantly more expensive on the target, but larger
writes (with the same constraint), precisely to avoid the STLF issue.
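
As a made-up example of the STLF concern:

  struct s
  {
    unsigned int a : 7;
    unsigned int b : 9;
  };

  int
  update (struct s *p)
  {
    /* If the store to b stays narrow (say a half-word read-modify-write)
       but the following read of a is widened to a full-word load of the
       same container, the load overlaps a smaller in-flight store; many
       cores cannot forward that and stall until the store retires.  */
    p->b = 3;
    return p->a;
  }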

Unfortunately the macro documentation doesn't say whether it applies to
reads, to writes, or to both...

> > OK for commit until we get rid of it?
> >
> > ChangeLog:
> > 2017-11-17  Wilco Dijkstra  <wdijk...@arm.com>
> >
> >     gcc/
> >         * config/aarch64/aarch64.h (SLOW_BYTE_ACCESS): Set to 1.
> > --
> > diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
> > index 056110afb228fb919e837c04aa5e5552a4868ec3..d8f4d129a02fb89eb00d256aba8c4764d6026078 100644
> > --- a/gcc/config/aarch64/aarch64.h
> > +++ b/gcc/config/aarch64/aarch64.h
> > @@ -769,14 +769,9 @@ typedef struct
> >     if given data not on the nominal alignment.  */
> >  #define STRICT_ALIGNMENT                TARGET_STRICT_ALIGN
> >
> > -/* Define this macro to be non-zero if accessing less than a word of
> > -   memory is no faster than accessing a word of memory, i.e., if such
> > -   accesses require more than one instruction or if there is no
> > -   difference in cost.
> > -   Although there's no difference in instruction count or cycles,
> > -   in AArch64 we don't want to expand to a sub-word to a 64-bit access
> > -   if we don't have to, for power-saving reasons.  */
> > -#define SLOW_BYTE_ACCESS               0
> > +/* Contrary to all documentation, this enables wide bitfield accesses,
> > +   which results in better code when accessing multiple bitfields.  */
> > +#define SLOW_BYTE_ACCESS               1
> >
> >  #define NO_FUNCTION_CSE 1
>
> I agree this makes sense from a performance point of view, and I think
> the existing comment is admitting that AArch64 has the properties that
> would normally cause us to set SLOW_BYTE_ACCESS to 1.  But the comment
> is claiming that there's a power-saving benefit to leaving it off.
>
> It seems like a weak argument though.  Bitfields are used when several
> values are packed into the same integer, so there's a high likelihood
> we'll need the whole integer anyway.  Avoiding the redundancies described
> in the documentation should, if anything, help with power usage.
>
> Maybe the main concern was using a 64-bit access when a 32-bit one
> would do, since 32-bit bitfield containers are the most common.  But the:
>
>          && GET_MODE_ALIGNMENT (mode) <= align
>
> condition in get_best_mode should avoid that unless the 64-bit
> access is naturally aligned.  (See the big comment above for the
> pros and cons of this.)
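
For instance (an illustrative sketch, assuming the usual 32-bit
alignment of int):

  struct s
  {
    unsigned int a : 3;
    unsigned int b : 5;
  };

  /* The container is an int with (typically) 32-bit alignment, so
     GET_MODE_ALIGNMENT (DImode) == 64 exceeds the known alignment and
     the check above rejects DImode; get_best_mode then settles on a
     32-bit access rather than a 64-bit one.  */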
>
> So I think we should change the macro value unless anyone can back up the
> power-saving claim.  Let's wait a week (more) to see if anyone objects.
>
> The comment change isn't OK though.  Please keep the first paragraph
> and just reword the second to say that's why we set the value to 1.
>
> Thanks,
> Richard
