Hi,
> Fair enough - we should just fix the test and move on.
Done.
> I would suggest in addition a transitional command-line option to switch
> between LRA and reload as a temporary measure so that folks can do some more
> experimenting for AArch32.
I have a patch which fixes the REG_NOTE issues …
AArch32:
No more issues in libstdc++ either (same reasons as AArch64), and
only 3 failures in the testsuite:
- The first one is invalid, as the test scans the assembler for
"ldaex\tr\[0-9\]+..." and it fails because with LRA the chosen
register is r12, and thus the instruction is "ldaex ip, …".
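The usual fix for this kind of spurious failure is to relax the scan-assembler pattern so it also accepts ip; a hypothetical sketch of such a DejaGnu directive (the exact testcase and full pattern are assumptions, not quoted in the thread):

```
/* Hypothetical relaxed directive: accept ip as well as r0-r12.  */
/* { dg-final { scan-assembler "ldaex\t(r\[0-9\]+|ip)" } } */
```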
Hi,
here is a status of LRA on ARM, after Vladimir's patch and a rebase of
my branch:
AArch64:
No more issues in the testsuite; the libstdc++ ones weren't LRA
specific, they happened at the pass manager level and were due to PCH
inclusion, but were fixed with the trunk update. Thus, I'll post a
patch …
> Yeah, good point. TBH I prefer it with separate ifs though, because the
> three cases are dealing with three different types of rtl (unary, binary
> and ternary). But I don't mind much either way.
Ok, it's fine for me too.
> The new patch looks good to me, thanks. Just one minor style nit:
>
Yvan Roux writes:
> Yes indeed! Here is a fixed patch.
>
> In strip_address_mutations we now have 3 if/else if statements with
> the same body which could be factorized in:
>
> if ((GET_RTX_CLASS (code) == RTX_UNARY)
> /* Things like SIGN_EXTEND, ZERO_EXTEND and TRUNCATE can be
>
Yes indeed! Here is a fixed patch.
In strip_address_mutations we now have 3 if/else-if statements with
the same body, which could be factorized as:
if ((GET_RTX_CLASS (code) == RTX_UNARY)
    /* Things like SIGN_EXTEND, ZERO_EXTEND and TRUNCATE can be
       used to convert between pointer sizes.  */ …
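The factorization discussed above can be sketched with a toy model; the enums and helpers below are simplified stand-ins for GCC's rtl types, not the real strip_address_mutations code:

```c
#include <stdbool.h>

/* Toy stand-ins for GCC's rtx classes and codes.  */
enum rtx_class { RTX_UNARY, RTX_BIN_ARITH, RTX_TERNARY, RTX_OBJ };
enum rtx_code { SIGN_EXTEND, ZERO_EXTEND, TRUNCATE, PLUS, FMA, REG };

static enum rtx_class
get_rtx_class (enum rtx_code code)
{
  switch (code)
    {
    case SIGN_EXTEND:
    case ZERO_EXTEND:
    case TRUNCATE:
      return RTX_UNARY;
    case PLUS:
      return RTX_BIN_ARITH;
    case FMA:
      return RTX_TERNARY;
    default:
      return RTX_OBJ;
    }
}

/* The factorized form: one condition on the rtx class replaces the
   three if/else-if arms that shared an identical body.  */
static bool
class_strips_p (enum rtx_code code)
{
  enum rtx_class cls = get_rtx_class (code);
  return cls == RTX_UNARY || cls == RTX_BIN_ARITH || cls == RTX_TERNARY;
}
```

This mirrors the review comment that the three cases differ only in arity (unary, binary, ternary rtl), so one class-based test covers them all.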
Yvan Roux writes:
> @@ -5454,6 +5454,16 @@ strip_address_mutations (rtx *loc, enum rtx_code
> *outer_code)
> /* Things like SIGN_EXTEND, ZERO_EXTEND and TRUNCATE can be
> used to convert between pointer sizes. */
> loc = &XEXP (*loc, 0);
> + else if (GET_RTX_CLASS (code …
> Endianness in the BYTES_BIG_ENDIAN sense shouldn't be a problem AFAIK.
> We just need to worry about BITS_BIG_ENDIAN. For:
>
> ({sign,zero}_extract:m X len pos)
>
> "pos" counts from the lsb if !BITS_BIG_ENDIAN and from the msb if
> BITS_BIG_ENDIAN. So I think the condition should be something …
Yvan Roux writes:
>> Yeah, but that's because strip_address_mutations doesn't consider
>> SIGN_EXTRACT to be a "mutation" as things stand. My point was that
>> I think it should, at least for the special extract-from-lsb case.
>> It then shouldn't be necessary to handle SIGN_EXTRACT in the other
> Yeah, but that's because strip_address_mutations doesn't consider
> SIGN_EXTRACT to be a "mutation" as things stand. My point was that
> I think it should, at least for the special extract-from-lsb case.
> It then shouldn't be necessary to handle SIGN_EXTRACT in the other
> address-analysis routines …
Yvan Roux writes:
> Thanks for noticing it Richard, I made a refactoring mistake and addr
> was supposed to be used instead of x. In fact on AArch64 it occurs
> that we don't have stripped rtxes at this step and we have some of the
> form below; this is why I added the strip.
>
> (insn 29 27 5 7
Thanks for noticing it Richard, I made a refactoring mistake and addr
was supposed to be used instead of x. In fact on AArch64 it occurs
that we don't have stripped rtxes at this step and we have some of the
form below; this is why I added the strip.
(insn 29 27 5 7 (set (mem:SI (plus:DI (sign_ex…
Richard Sandiford writes:
> I think SIGN_EXTRACT from the lsb (i.e. when the third operand is 0 for
> !BITS_BIG_ENDIAN or GET_MODE_PRECISION (mode) - 1 for BITS_BIG_ENDIAN)
Gah, "GET_MODE_PRECISION (mode) - size" for BITS_BIG_ENDIAN.
Richard
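Richard's corrected condition can be sketched as a tiny self-contained predicate; precision and bits_big_endian are plain parameters standing in for GET_MODE_PRECISION (mode) and the BITS_BIG_ENDIAN target macro:

```c
#include <stdbool.h>

/* For ({sign,zero}_extract:m X size pos), test whether the extracted
   field starts at the least-significant bit: pos == 0 when
   !BITS_BIG_ENDIAN, pos == GET_MODE_PRECISION (m) - size when
   BITS_BIG_ENDIAN (pos counts from the msb in that case).  */
static bool
extract_from_lsb_p (int precision, int size, int pos, bool bits_big_endian)
{
  return pos == (bits_big_endian ? precision - size : 0);
}
```

For example, an 8-bit field in a 32-bit value is an lsb extract at pos 0 when !BITS_BIG_ENDIAN, or at pos 24 (32 - 8) when BITS_BIG_ENDIAN.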
Yvan Roux writes:
> Thanks for the review. Here is the fixed self-contained patch to
> enable LRA on AArch32 and AArch64 (for those who want to give it a try).
> I'm still working on the issues described previously and will send
> rtlanal.c patch separately to prepare the move.
Looks like the rtl…
Hi,
Thanks for the review. Here is the fixed self-contained patch to
enable LRA on AArch32 and AArch64 (for those who want to give it a try).
I'm still working on the issues described previously and will send
rtlanal.c patch separately to prepare the move.
Thanks,
Yvan
On 9 September 2013 01:23
On 13-09-08 2:04 PM, Richard Sandiford wrote:
Yvan Roux writes:
@@ -5786,7 +5796,11 @@ get_index_scale (const struct address_info *info)
&& info->index_term == &XEXP (index, 0))
return INTVAL (XEXP (index, 1));
- if (GET_CODE (index) == ASHIFT
+ if ((GET_CODE (index) == ASHI…
Yvan Roux writes:
> @@ -5786,7 +5796,11 @@ get_index_scale (const struct address_info *info)
>&& info->index_term == &XEXP (index, 0))
> return INTVAL (XEXP (index, 1));
>
> - if (GET_CODE (index) == ASHIFT
> + if ((GET_CODE (index) == ASHIFT
> + || GET_CODE (index) == ASHIF…
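For context on the get_index_scale hunk above: a MULT index encodes the byte scale directly in its constant operand, while an ASHIFT index encodes the log2 of the scale, so the two arms compute the scale differently. A simplified stand-alone model (plain integers rather than GCC's rtx accessors, so the signatures are assumptions):

```c
/* Simplified model of the scale computation in get_index_scale:
   (mult X (const_int N))   -> scale is N itself;
   (ashift X (const_int N)) -> scale is 1 << N.  */
static long
scale_from_mult (long factor)
{
  return factor;
}

static long
scale_from_ashift (long shift_count)
{
  return 1L << shift_count;
}
```

So an index of (ashift X 2) and one of (mult X 4) both describe a scale of 4 bytes.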
On 13-08-30 9:09 AM, Yvan Roux wrote:
Hi,
here is a request for comments on the 2 attached patches which enable
the build of GCC on ARM with LRA. The patches introduce a new
undocumented option -mlra to use LRA instead of reload, as was
done in previous LRA support, which is here to ease the test and
comparison with reload …
Sorry for the previous off-the-list-html-format answer :(
On 30 August 2013 15:18, Richard Earnshaw wrote:
> On 30/08/13 14:09, Yvan Roux wrote:
>> Hi,
>>
>> here is a request for comments on the 2 attached patches which enable
>> the build of GCC on ARM with LRA. The patches introduce a new
>>
On 30/08/13 14:09, Yvan Roux wrote:
> Hi,
>
> here is a request for comments on the 2 attached patches which enable
> the build of GCC on ARM with LRA. The patches introduce a new
> undocumented option -mlra to use LRA instead of reload, as was
> done in previous LRA support, which is here to ease the test and
> comparison with reload …
Hi,
here is a request for comments on the 2 attached patches which enable
the build of GCC on ARM with LRA. The patches introduce a new
undocumented option -mlra to use LRA instead of reload, as was
done in previous LRA support, which is here to ease the test and
comparison with reload and n…