Hi all,
The following code sequence should be generated for the TLS local exec
model in the AArch64 backend:
add t0, tp, #:tprel_hi12:x1, lsl #12
add t0, t0, #:tprel_lo12_nc:x1
However, with the -S option we currently get the following codegen:
add t0, tp, #:tprel_hi12:x1 <-------- (1)
add t0, t0, #:tprel_lo12_nc:x1
At first glance this does not look correct: the tprel_hi12 value should be
shifted left by 12 bits before it is added to the thread pointer. However,
gas is able to detect the tprel_hi12 relocation modifier and rewrites the
instruction marked as (1) above into the shifted form, so the final
behaviour is correct. But I think the inconsistency is very confusing:
because of this, the assembly generated by GCC and the disassembly of the
object code differ.
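For reference, a minimal test case showing the mismatch could look like
the following (the variable and function names here are just for
illustration):

  __thread int x1;

  int
  get_x1 (void)
  {
    return x1;
  }

Compiling something like this with -O2 -ftls-model=local-exec -S on an
aarch64 target should show the unshifted tprel_hi12 add in the .s file,
while objdump -d on the assembled object shows the lsl #12 form that gas
produced.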
This patch should fix this small issue.
Okay to commit?
Regards,
Renlin Li
gcc/ChangeLog:
2015-01-20 Renlin Li <renlin...@arm.com>
* config/aarch64/aarch64.c (aarch64_load_symref_appropriately): Correct
the comment.
* config/aarch64/aarch64.md (tlsle_small_<mode>): Add the 12-bit left
shift for the high part.
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index ee9a962..cd01b82 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -692,8 +692,8 @@ tls_symbolic_operand_type (rtx addr)
Local Exec:
mrs tp, tpidr_el0
- add t0, tp, #:tprel_hi12:imm
- add t0, #:tprel_lo12_nc:imm
+ add t0, tp, #:tprel_hi12:imm, lsl #12
+ add t0, t0, #:tprel_lo12_nc:imm
*/
static void
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 597ff8c..90f7bf4 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -4039,7 +4039,7 @@
(match_operand 2 "aarch64_tls_le_symref" "S")]
UNSPEC_GOTSMALLTLS))]
""
- "add\\t%<w>0, %<w>1, #%G2\;add\\t%<w>0, %<w>0, #%L2"
+ "add\\t%<w>0, %<w>1, #%G2, lsl #12\;add\\t%<w>0, %<w>0, #%L2"
[(set_attr "type" "alu_sreg")
(set_attr "length" "8")]
)