Modified: trunk/Source/_javascript_Core/ChangeLog (173496 => 173497)
--- trunk/Source/_javascript_Core/ChangeLog 2014-09-10 23:48:10 UTC (rev 173496)
+++ trunk/Source/_javascript_Core/ChangeLog 2014-09-10 23:57:34 UTC (rev 173497)
@@ -1,3 +1,20 @@
+2014-09-10 Akos Kiss <[email protected]>
+
+ Apply ARM64-specific lowering to load/store instructions in offlineasm
+ https://bugs.webkit.org/show_bug.cgi?id=136569
+
+ Reviewed by Michael Saboff.
+
+ The standard RISC lowering of a load/store with a base + immediate
+ offset address is to move the offset into a temporary, add the base
+ to the temporary, and then change the load/store to use temporary +
+ 0 immediate offset as its address. On ARM64, however, a base +
+ register offset addressing mode is available, so the explicit
+ register addition is unnecessary: it is enough to rewrite the
+ load/store to use base + temporary as the address.
+
+ * offlineasm/arm64.rb: Added arm64LowerMalformedLoadStoreAddresses
+ and invoke it from Sequence#getModifiedListARM64.
+
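As an illustration (not part of the commit), the effect of the new pass on a hypothetical malformed load can be sketched in plain Ruby. The `offset_malformed?` and `lower` helpers below are invented for this sketch; only the `-255..4095` range check comes from the patch itself:

```ruby
# Standalone sketch (not WebKit code): models the decision made by
# arm64LowerMalformedLoadStoreAddresses. Immediate-offset load/store
# forms are accepted here only for offsets in -255..4095; anything
# outside that range is first moved into a temporary register.

# Returns true when the immediate offset cannot be encoded directly.
def offset_malformed?(offset)
  !(-255..4095).include?(offset)
end

# Hypothetical textual rewrite: "loadp OFFSET[base]" becomes a move
# into a temporary plus a base + register-offset load, mirroring the
# BaseIndex form the pass emits.
def lower(opcode, base, offset, dest)
  if offset_malformed?(offset)
    ["move #{offset}, tmp", "#{opcode} [#{base}, tmp], #{dest}"]
  else
    ["#{opcode} #{offset}[#{base}], #{dest}"]
  end
end

lower("loadp", "cfr", 8, "t0")
# => ["loadp 8[cfr], t0"]
lower("loadp", "cfr", 65536, "t0")
# => ["move 65536, tmp", "loadp [cfr, tmp], t0"]
```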
2014-09-10 Oliver Hunt <[email protected]>

 Rename JSVariableObject to JSEnvironmentRecord to align naming with ES spec

Modified: trunk/Source/_javascript_Core/offlineasm/arm64.rb (173496 => 173497)
--- trunk/Source/_javascript_Core/offlineasm/arm64.rb 2014-09-10 23:48:10 UTC (rev 173496)
+++ trunk/Source/_javascript_Core/offlineasm/arm64.rb 2014-09-10 23:57:34 UTC (rev 173497)
@@ -1,4 +1,5 @@
# Copyright (C) 2011, 2012, 2014 Apple Inc. All rights reserved.
+# Copyright (C) 2014 University of Szeged. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -197,6 +198,36 @@
# Actual lowering code follows.
#
+def arm64LowerMalformedLoadStoreAddresses(list)
+ newList = []
+
+ def isAddressMalformed(operand)
+ operand.is_a? Address and not (-255..4095).include? operand.offset.value
+ end
+
+ list.each {
+ | node |
+ if node.is_a? Instruction
+ if node.opcode =~ /^store/ and isAddressMalformed(node.operands[1])
+ address = node.operands[1]
+ tmp = Tmp.new(node.codeOrigin, :gpr)
+ newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
+ newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(node.codeOrigin, 0))], node.annotation)
+ elsif node.opcode =~ /^load/ and isAddressMalformed(node.operands[0])
+ address = node.operands[0]
+ tmp = Tmp.new(node.codeOrigin, :gpr)
+ newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
+ newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(node.codeOrigin, 0)), node.operands[1]], node.annotation)
+ else
+ newList << node
+ end
+ else
+ newList << node
+ end
+ }
+ newList
+end
+
class Sequence
def getModifiedListARM64
result = @list
@@ -204,6 +235,7 @@
result = riscLowerSimpleBranchOps(result)
result = riscLowerHardBranchOps64(result)
result = riscLowerShiftOps(result)
+ result = arm64LowerMalformedLoadStoreAddresses(result)
result = riscLowerMalformedAddresses(result) {
| node, address |
case node.opcode
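For contrast, the generic RISC fallback that riscLowerMalformedAddresses would otherwise apply needs an explicit addition. This standalone sketch (invented helper names, not WebKit code) shows the three-instruction sequence the new ARM64-specific pass avoids:

```ruby
# Standalone sketch (not WebKit code): the generic RISC lowering of a
# malformed base + immediate offset address. The offset is moved to a
# temporary, the base is added into the temporary, and the access is
# rewritten as temporary + 0 immediate offset.
def risc_lower(opcode, base, offset, dest)
  [
    "move #{offset}, tmp",
    "addp #{base}, tmp",
    "#{opcode} 0[tmp], #{dest}",
  ]
end

risc_lower("loadp", "cfr", 65536, "t0")
# => ["move 65536, tmp", "addp cfr, tmp", "loadp 0[tmp], t0"]
```

On ARM64 the middle `addp` is redundant because the hardware can fold the register addition into the load/store via its base + register offset addressing mode, which is exactly what the new pass exploits.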