Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-27 Thread Eric Botcazou
 They don't need to be kept synchronised as such.  It's fine for the index
 to allow more than must_be_index_p.  But if you're not keen on the current
 structure, does the following look better?  Tested on x86_64-linux-gnu.
 
 Thanks,
 Richard
 
 
 gcc/
   * rtlanal.c (must_be_base_p, must_be_index_p): Delete.
   (binary_scale_code_p, get_base_term, get_index_term): New functions.
   (set_address_segment, set_address_base, set_address_index)
   (set_address_disp): Accept the argument unconditionally.
   (baseness): Remove must_be_base_p and must_be_index_p checks.
   (decompose_normal_address): Classify as much as possible in the
   main loop.

Yes, fine by me, thanks.

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-26 Thread Richard Sandiford
Eric Botcazou ebotca...@adacore.com writes:
 So in the set_* routines it isn't about whether the value is definitely
 a base or definitely an index.  It's just about drilling down through
 what we've already decided is a base or index to get the inner reg or mem,
 and knowing which XEXPs to look at.  We could instead have used a
 for_each_rtx, or something like that, without any code checks.  But I
 wanted to be precise about the types of address we allow, so that we can
 assert for things we don't understand.  In other words, it was designed
 to require the kind of extension Yvan is adding here.

 Does this mean that the design is to require a parallel implementation in the 
 predicates and in the set routines, i.e. each time you add a new case to the 
 predicates, you need to add it (or do something) to the set routines as well? 
  
 If so, that's a little weird, but OK, feel free to revert the de-duplication 
 part, but add comments saying that the functions must be kept synchronized.

They don't need to be kept synchronised as such.  It's fine for the index
to allow more than must_be_index_p.  But if you're not keen on the current
structure, does the following look better?  Tested on x86_64-linux-gnu.

Thanks,
Richard


gcc/
* rtlanal.c (must_be_base_p, must_be_index_p): Delete.
(binary_scale_code_p, get_base_term, get_index_term): New functions.
(set_address_segment, set_address_base, set_address_index)
(set_address_disp): Accept the argument unconditionally.
(baseness): Remove must_be_base_p and must_be_index_p checks.
(decompose_normal_address): Classify as much as possible in the
main loop.

Index: gcc/rtlanal.c
===
--- gcc/rtlanal.c   2013-09-25 22:42:16.870221821 +0100
+++ gcc/rtlanal.c   2013-09-26 09:11:41.273778750 +0100
@@ -5521,26 +5521,50 @@ strip_address_mutations (rtx *loc, enum
 }
 }
 
-/* Return true if X must be a base rather than an index.  */
+/* Return true if CODE applies some kind of scale.  The scaled value
+   is the first operand and the scale is the second.  */
 
 static bool
-must_be_base_p (rtx x)
+binary_scale_code_p (enum rtx_code code)
 {
-  return GET_CODE (x) == LO_SUM;
+  return (code == MULT
+          || code == ASHIFT
+          /* Needed by ARM targets.  */
+          || code == ASHIFTRT
+          || code == LSHIFTRT
+          || code == ROTATE
+          || code == ROTATERT);
 }
 
-/* Return true if X must be an index rather than a base.  */
+/* If *INNER can be interpreted as a base, return a pointer to the inner term
+   (see address_info).  Return null otherwise.  */
 
-static bool
-must_be_index_p (rtx x)
+static rtx *
+get_base_term (rtx *inner)
 {
-  return (GET_CODE (x) == MULT
-          || GET_CODE (x) == ASHIFT
-          /* Needed by ARM targets.  */
-          || GET_CODE (x) == ASHIFTRT
-          || GET_CODE (x) == LSHIFTRT
-          || GET_CODE (x) == ROTATE
-          || GET_CODE (x) == ROTATERT);
+  if (GET_CODE (*inner) == LO_SUM)
+    inner = strip_address_mutations (XEXP (*inner, 0));
+  if (REG_P (*inner)
+      || MEM_P (*inner)
+      || GET_CODE (*inner) == SUBREG)
+    return inner;
+  return 0;
+}
+
+/* If *INNER can be interpreted as an index, return a pointer to the inner term
+   (see address_info).  Return null otherwise.  */
+
+static rtx *
+get_index_term (rtx *inner)
+{
+  /* At present, only constant scales are allowed.  */
+  if (binary_scale_code_p (GET_CODE (*inner)) && CONSTANT_P (XEXP (*inner, 1)))
+    inner = strip_address_mutations (XEXP (*inner, 0));
+  if (REG_P (*inner)
+      || MEM_P (*inner)
+      || GET_CODE (*inner) == SUBREG)
+    return inner;
+  return 0;
 }
 
 /* Set the segment part of address INFO to LOC, given that INNER is the
@@ -5549,8 +5573,6 @@ must_be_index_p (rtx x)
 static void
 set_address_segment (struct address_info *info, rtx *loc, rtx *inner)
 {
-  gcc_checking_assert (GET_CODE (*inner) == UNSPEC);
-
   gcc_assert (!info->segment);
   info->segment = loc;
   info->segment_term = inner;
@@ -5562,12 +5584,6 @@ set_address_segment (struct address_info
 static void
 set_address_base (struct address_info *info, rtx *loc, rtx *inner)
 {
-  if (must_be_base_p (*inner))
-    inner = strip_address_mutations (XEXP (*inner, 0));
-  gcc_checking_assert (REG_P (*inner)
-                       || MEM_P (*inner)
-                       || GET_CODE (*inner) == SUBREG);
-
   gcc_assert (!info->base);
   info->base = loc;
   info->base_term = inner;
@@ -5579,12 +5595,6 @@ set_address_base (struct address_info *i
 static void
 set_address_index (struct address_info *info, rtx *loc, rtx *inner)
 {
-  if (must_be_index_p (*inner) && CONSTANT_P (XEXP (*inner, 1)))
-    inner = strip_address_mutations (XEXP (*inner, 0));
-  gcc_checking_assert (REG_P (*inner)
-                       || MEM_P (*inner)
-                       || GET_CODE (*inner) == SUBREG);
-
   gcc_assert (!info->index);
   

Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-25 Thread Yvan Roux
Hmm, I don't clearly see where we lose the XEXP (x, n) information
when calling must_be_base_p (*inner) and/or must_be_index_p (*inner) in
set_address_base and set_address_index.

BTW, the validation on ARM (AArch32 and AArch64) is clean.

Thanks,
Yvan

On 24 September 2013 18:36, Richard Sandiford
rdsandif...@googlemail.com wrote:
 Eric Botcazou ebotca...@adacore.com writes:
 Sorry, I'm not sure I understand well, it means that you prefer the
 shift_and_rotate_code_p notation, right?

 Let's do the following in addition to the lsb_bitfield_op_p thing:
   1. Replace the LO_SUM test in set_address_base by a call to must_be_base_p,
   2. Replace the MULT || ASHIFT test in set_address_index by a call to
 must_be_index_p,
   3. Add the new cases to must_be_index_p directly, with a comment saying
 that they are e.g. for the ARM.

 FWIW, I'd prefer to keep it as-is, since must_be_base_p (x) and
 must_be_index_p (x) don't imply that we should look specifically at
 XEXP (x, 0) (rather than just X, or XEXP (x, 1), etc.).  I think it's
 better to keep the code tests and the associated XEXPs together.

 Thanks,
 Richard


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-25 Thread Eric Botcazou
 2013-09-24  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com
 
 * rtlanal.c (lsb_bitfield_op_p): New predicate for bitfield
 operations from the least significant bit.
 (strip_address_mutations): Add bitfield operations handling.
 (must_be_index_p): Add shifting and rotate operations handling.
 (set_address_base): Use must_be_base_p predicate.
 (set_address_index): Use must_be_index_p predicate.

OK for mainline, if you add the missing space after GET_MODE in

 enum machine_mode mode = GET_MODE(XEXP (x, 0));

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-25 Thread Eric Botcazou
 FWIW, I'd prefer to keep it as-is, since must_be_base_p (x) and
 must_be_index_p (x) don't imply that we should look specifically at
 XEXP (x, 0) (rather than just X, or XEXP (x, 1), etc.).  I think it's
 better to keep the code tests and the associated XEXPs together.

Feel free to revert this part, but then add appropriate comments explaining 
why we are interested in LO_SUM for set_address_base and in MULT and the 5 
others for set_address_index.  If it's because the former is rather a base 
than an index and vice-versa for the latter, then it's even clearer with the 
predicates I think.

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-25 Thread Richard Sandiford
Eric Botcazou ebotca...@adacore.com writes:
 FWIW, I'd prefer to keep it as-is, since must_be_base_p (x) and
 must_be_index_p (x) don't imply that we should look specifically at
 XEXP (x, 0) (rather than just X, or XEXP (x, 1), etc.).  I think it's
 better to keep the code tests and the associated XEXPs together.

 Feel free to revert this part, but then add appropriate comments explaining 
 why we are interested in LO_SUM for set_address_base and in MULT and the 5 
 others for set_address_index.  If it's because the former is rather a base 
 than an index and vice-versa for the latter, then it's even clearer with the 
 predicates I think.

The idea is that must_be_base_p and must_be_index_p are used when deciding
how to classify the operands of a PLUS.  set_address_base and set_address_index
are called once we've decided which is which and want to record the choice.
There's no backing out by that stage.

So in the set_* routines it isn't about whether the value is definitely
a base or definitely an index.  It's just about drilling down through
what we've already decided is a base or index to get the inner reg or mem,
and knowing which XEXPs to look at.  We could instead have used a for_each_rtx,
or something like that, without any code checks.  But I wanted to be precise
about the types of address we allow, so that we can assert for things we
don't understand.  In other words, it was designed to require the kind
of extension Yvan is adding here.

E.g. although it's a bit of a stretch, it might be in future that
someone wants to allow NOT in an index.  It would then make sense
to add NOT to must_be_index_p.  But the existing check:

  if ((GET_CODE (*inner) == MULT || GET_CODE (*inner) == ASHIFT)
      && CONSTANT_P (XEXP (*inner, 1)))
    inner = strip_address_mutations (XEXP (*inner, 0));

wouldn't make sense as:

  if (must_be_index_p (*inner) && CONSTANT_P (XEXP (*inner, 1)))
    inner = strip_address_mutations (XEXP (*inner, 0));

in that case, since NOT only has one operand.  And it might be that
the NOT is allowed only outside the scale, only inside the scale,
or in both positions.

Not the best example, sorry.

Thanks,
Richard


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-25 Thread Eric Botcazou
 So in the set_* routines it isn't about whether the value is definitely
 a base or definitely an index.  It's just about drilling down through
 what we've already decided is a base or index to get the inner reg or mem,
 and knowing which XEXPs to look at.  We could instead have used a
 for_each_rtx, or something like that, without any code checks.  But I
 wanted to be precise about the types of address we allow, so that we can
 assert for things we don't understand.  In other words, it was designed
 to require the kind of extension Yvan is adding here.

Does this mean that the design is to require a parallel implementation in the 
predicates and in the set routines, i.e. each time you add a new case to the 
predicates, you need to add it (or do something) to the set routines as well?  
If so, that's a little weird, but OK, feel free to revert the de-duplication 
part, but add comments saying that the functions must be kept synchronized.

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Eric Botcazou
 Here is the new patch discussed in the other thread.
 
 Thanks
 Yvan
 
 2013-09-11  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com
 
   * rtlanal.c (lsb_bitfield_op_p): New predicate for bitfield operations
   from the least significant bit.
   (strip_address_mutations): Add bitfield operations handling.
   (shift_code_p): New predicate for shifting operations.
   (must_be_index_p): Add shifting operations handling.
   (set_address_index): Likewise.

+/* Return true if X is a sign_extract or zero_extract from the least
+   significant bit.  */
+
+static bool
+lsb_bitfield_op_p (rtx x)
+{
+  if (GET_RTX_CLASS (GET_CODE (x)) == RTX_BITFIELD_OPS)
+{
+  enum machine_mode mode = GET_MODE(x);
+  unsigned HOST_WIDE_INT len = INTVAL (XEXP (x, 1));
+  HOST_WIDE_INT pos = INTVAL (XEXP (x, 2));
+
+  return (pos == (BITS_BIG_ENDIAN ? GET_MODE_PRECISION (mode) - len : 0));

It seems strange to use the destination mode to decide whether this is the LSB 
of the source.

+/* Return true if X is a shifting operation.  */
+
+static bool
+shift_code_p (rtx x)
+{
+  return (GET_CODE (x) == ASHIFT
+  || GET_CODE (x) == ASHIFTRT
+  || GET_CODE (x) == LSHIFTRT
+  || GET_CODE (x) == ROTATE
+  || GET_CODE (x) == ROTATERT);
+}

ROTATE and ROTATERT aren't really shifting operations though, so are they 
really needed here?

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Yvan Roux
Hi Eric,

Thanks for the review.

 +/* Return true if X is a sign_extract or zero_extract from the least
 +   significant bit.  */
 +
 +static bool
 +lsb_bitfield_op_p (rtx x)
 +{
 +  if (GET_RTX_CLASS (GET_CODE (x)) == RTX_BITFIELD_OPS)
 +{
 +  enum machine_mode mode = GET_MODE(x);
 +  unsigned HOST_WIDE_INT len = INTVAL (XEXP (x, 1));
 +  HOST_WIDE_INT pos = INTVAL (XEXP (x, 2));
 +
 +  return (pos == (BITS_BIG_ENDIAN ? GET_MODE_PRECISION (mode) - len : 0));

 It seems strange to use the destination mode to decide whether this is the LSB
 of the source.

Indeed, I think it has to be the mode of loc, but I just wonder whether
it is not always the same, as the doc says that mode m is the same as
the mode that would be used for loc if it were a register.


 +/* Return true if X is a shifting operation.  */
 +
 +static bool
 +shift_code_p (rtx x)
 +{
 +  return (GET_CODE (x) == ASHIFT
 +  || GET_CODE (x) == ASHIFTRT
 +  || GET_CODE (x) == LSHIFTRT
 +  || GET_CODE (x) == ROTATE
 +  || GET_CODE (x) == ROTATERT);
 +}

 ROTATE and ROTATERT aren't really shifting operations though, so are they
 really needed here?

According to the GCC internals manual, ROTATE and ROTATERT are similar
to the shifting operations, but to be more accurate maybe we can rename
shift_code_p to shift_and_rotate_code_p.  Rotations are used in ARM
address calculation, and thus need to be handled in must_be_index_p
and set_address_index.

Thanks,
Yvan


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Eric Botcazou
 Indeed, I think it has to be the mode of loc, but I just wonder if it
 is not always the same, as in the doc it is written that mode m is the
 same as the mode that would be used for loc if it were a register.

So can we assert that we have a REG here and use GET_MODE (XEXP (x, 0))?  Or 
else return false if we don't have a REG.

 According to the GCC internals manual, ROTATE and ROTATERT are similar
 to the shifting operations, but to be more accurate maybe we can rename
 shift_code_p to shift_and_rotate_code_p.  Rotations are used in ARM
 address calculation, and thus need to be handled in must_be_index_p
 and set_address_index.

Egad.  I guess I just wanted to see it written down. :-)

Btw, are you sure that you don't need to modify must_be_index_p instead?

/* Return true if X must be an index rather than a base.  */

static bool
must_be_index_p (rtx x)
{
  return GET_CODE (x) == MULT || GET_CODE (x) == ASHIFT;
}

and call it from set_address_index?

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Yvan Roux
 So can we assert that we have a REG here and use GET_MODE (XEXP (x, 0))?  Or
 else return false if we don't have a REG.

I'm currently testing the patch with the modification below

+static bool
+lsb_bitfield_op_p (rtx x)
+{
+  if (GET_RTX_CLASS (GET_CODE (x)) == RTX_BITFIELD_OPS)
+{
+  enum machine_mode mode = GET_MODE(XEXP (x, 0));
+  unsigned HOST_WIDE_INT len = INTVAL (XEXP (x, 1));
+  HOST_WIDE_INT pos = INTVAL (XEXP (x, 2));
+
+  return (pos == (BITS_BIG_ENDIAN ? GET_MODE_PRECISION (mode) - len : 0));
+}
+  return false;
+}


 According to the GCC internals manual, ROTATE and ROTATERT are similar
 to the shifting operations, but to be more accurate maybe we can rename
 shift_code_p to shift_and_rotate_code_p.  Rotations are used in ARM
 address calculation, and thus need to be handled in must_be_index_p
 and set_address_index.

 Egad.  I guess I just wanted to see it written down. :-)

Sorry, I'm not sure I understand well, it means that you prefer the
shift_and_rotate_code_p notation, right?

 Btw, are you sure that you don't need to modify must_be_index_p instead?

 /* Return true if X must be an index rather than a base.  */

 static bool
 must_be_index_p (rtx x)
 {
   return GET_CODE (x) == MULT || GET_CODE (x) == ASHIFT;
 }

 and call it from set_address_index?

Yes, if we don't modify must_be_index_p we'll have failures when
enabling LRA on ARM.  You can find a history of the patch in the
thread named "RFC: patch to build GCC for arm with LRA", which started
more or less here:

http://gcc.gnu.org/ml/gcc-patches/2013-09/msg00526.html

Thanks,
Yvan

 --
 Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Eric Botcazou
 Sorry, I'm not sure I understand well, it means that you prefer the
 shift_and_rotate_code_p notation, right?

Let's do the following in addition to the lsb_bitfield_op_p thing:
  1. Replace the LO_SUM test in set_address_base by a call to must_be_base_p,
  2. Replace the MULT || ASHIFT test in set_address_index by a call to 
must_be_index_p,
  3. Add the new cases to must_be_index_p directly, with a comment saying that
they are e.g. for the ARM.

-- 
Eric Botcazou


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Yvan Roux
Thanks Eric, here is the new patch, validation is ongoing for ARM.

Yvan

2013-09-24  Yvan Roux  yvan.r...@linaro.org
Vladimir Makarov  vmaka...@redhat.com

* rtlanal.c (lsb_bitfield_op_p): New predicate for bitfield operations
from the least significant bit.
(strip_address_mutations): Add bitfield operations handling.
(must_be_index_p): Add shifting and rotate operations handling.
(set_address_base): Use must_be_base_p predicate.
(set_address_index): Use must_be_index_p predicate.



On 24 September 2013 17:26, Eric Botcazou ebotca...@adacore.com wrote:
 Sorry, I'm not sure I understand well, it means that you prefer the
 shift_and_rotate_code_p notation, right?

 Let's do the following in addition to the lsb_bitfield_op_p thing:
   1. Replace the LO_SUM test in set_address_base by a call to must_be_base_p,
   2. Replace the MULT || ASHIFT test in set_address_index by a call to
 must_be_index_p,
   3. Add the new cases to must_be_index_p directly, with a comment saying that
 they are e.g. for the ARM.

 --
 Eric Botcazou


rtlanal.patch
Description: Binary data


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-24 Thread Richard Sandiford
Eric Botcazou ebotca...@adacore.com writes:
 Sorry, I'm not sure I understand well, it means that you prefer the
 shift_and_rotate_code_p notation, right?

 Let's do the following in addition to the lsb_bitfield_op_p thing:
   1. Replace the LO_SUM test in set_address_base by a call to must_be_base_p,
   2. Replace the MULT || ASHIFT test in set_address_index by a call to 
 must_be_index_p,
   3. Add the new cases to must_be_index_p directly, with a comment saying
 that they are e.g. for the ARM.

FWIW, I'd prefer to keep it as-is, since must_be_base_p (x) and
must_be_index_p (x) don't imply that we should look specifically at
XEXP (x, 0) (rather than just X, or XEXP (x, 1), etc.).  I think it's
better to keep the code tests and the associated XEXPs together.

Thanks,
Richard


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-16 Thread Yvan Roux
Adding Eric and Steven in the loop as it is RTL related.

Thanks
Yvan

On 11 September 2013 21:08, Yvan Roux yvan.r...@linaro.org wrote:
 Here is the new patch discussed in the other thread.

 Thanks
 Yvan

 2013-09-11  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com

 * rtlanal.c (lsb_bitfield_op_p): New predicate for bitfield operations
 from the least significant bit.
 (strip_address_mutations): Add bitfield operations handling.
 (shift_code_p): New predicate for shifting operations.
 (must_be_index_p): Add shifting operations handling.
 (set_address_index): Likewise.


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-11 Thread Yvan Roux
New attempt, with fixes from Richard's comments (discussed in the other thread).

Thanks,
Yvan

2013-09-09  Yvan Roux  yvan.r...@linaro.org
Vladimir Makarov  vmaka...@redhat.com

* rtlanal.c (strip_address_mutations): Add bitfield operations
handling.
(shift_code_p): New predicate for shifting operations.
(must_be_index_p): Add shifting operations handling.
(set_address_index): Likewise.


On 9 September 2013 10:01, Yvan Roux yvan.r...@linaro.org wrote:
 Hi,

 here are the modifications, discussed in another thread, needed in
 rtlanal.c by ARM targets (AArch32 and AArch64) to build GCC with LRA.

 Is it ok for trunk ?

 Thanks,
 Yvan

 2013-09-09  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com

 * rtlanal.c (must_be_index_p, set_address_index): Add ASHIFTRT,
 LSHIFTRT, ROTATE, ROTATERT and SIGN_EXTRACT handling.
 (set_address_base): Add SIGN_EXTRACT handling.


arm-lra-rtl.patch
Description: Binary data


Re: [PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-11 Thread Yvan Roux
Here is the new patch discussed in the other thread.

Thanks
Yvan

2013-09-11  Yvan Roux  yvan.r...@linaro.org
Vladimir Makarov  vmaka...@redhat.com

* rtlanal.c (lsb_bitfield_op_p): New predicate for bitfield operations
from the least significant bit.
(strip_address_mutations): Add bitfield operations handling.
(shift_code_p): New predicate for shifting operations.
(must_be_index_p): Add shifting operations handling.
(set_address_index): Likewise.


On 11 September 2013 09:00, Yvan Roux yvan.r...@linaro.org wrote:
 New attempt, with fixes from Richard's comments (discussed in the other 
 thread).

 Thanks,
 Yvan

 2013-09-09  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com

 * rtlanal.c (strip_address_mutations): Add bitfield operations
 handling.
 (shift_code_p): New predicate for shifting operations.
 (must_be_index_p): Add shifting operations handling.
 (set_address_index): Likewise.


 On 9 September 2013 10:01, Yvan Roux yvan.r...@linaro.org wrote:
 Hi,

 here are the modifications, discussed in another thread, needed in
 rtlanal.c by ARM targets (AArch32 and AArch64) to build GCC with LRA.

 Is it ok for trunk ?

 Thanks,
 Yvan

 2013-09-09  Yvan Roux  yvan.r...@linaro.org
 Vladimir Makarov  vmaka...@redhat.com

 * rtlanal.c (must_be_index_p, set_address_index): Add ASHIFTRT,
 LSHIFTRT, ROTATE, ROTATERT and SIGN_EXTRACT handling.
 (set_address_base): Add SIGN_EXTRACT handling.


arm-lra-rtl.patch
Description: Binary data


[PATCH, ARM, LRA] Prepare ARM build with LRA

2013-09-09 Thread Yvan Roux
Hi,

here are the modifications, discussed in another thread, needed in
rtlanal.c by ARM targets (AArch32 and AArch64) to build GCC with LRA.

Is it ok for trunk ?

Thanks,
Yvan

2013-09-09  Yvan Roux  yvan.r...@linaro.org
Vladimir Makarov  vmaka...@redhat.com

* rtlanal.c (must_be_index_p, set_address_index): Add ASHIFTRT,
LSHIFTRT, ROTATE, ROTATERT and SIGN_EXTRACT handling.
(set_address_base): Add SIGN_EXTRACT handling.


arm-lra-rtl.patch
Description: Binary data