Title: [292080] trunk/Source/JavaScriptCore
Revision: 292080
Author: commit-qu...@webkit.org
Date: 2022-03-29 16:21:12 -0700 (Tue, 29 Mar 2022)

Log Message

[JSC][ARMv7] Cleanup GPR numbering
https://bugs.webkit.org/show_bug.cgi?id=235027

Patch by Geza Lore <gl...@igalia.com> on 2022-03-29
Reviewed by Yusuke Suzuki.

- Make the lowest-numbered callee-save register be regCS0/LLInt csr0.
Some of the CSR store/restore code relies on this, and using a
numbering scheme consistent with the other targets (that is: regCS<N>
maps to a lower-numbered machine register than regCS<N+1>) eliminates
some ifdefs in LLInt and should prevent hard-to-find issues caused by
mismatches with the other targets, which all follow this rule.

- In the Thumb-2 instruction set, uses of r0-r7 can often be encoded
with a shorter, 16-bit instruction. Swap regT4/regT5 with regT7/regT6
so that the lower-numbered temporaries (which are usually used first)
map to the low-order registers, which yields denser code (see the
sketch after this list). This also simplifies BaselineJITRegisters.h
and saves about 1% of DFG code size.

- In offlineasm, prefer low order registers for temporaries.

- Also clean up baseline instanceof op implementation.
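
As an illustration of the first two points, here is a minimal standalone
C++ sketch. It is illustrative only: the enum values are assumptions
mirroring the new ARMv7 assignments in GPRInfo.h, not the real JSC
GPRReg type.

    // Illustrative only: assumed machine register numbering for ARMv7.
    enum ARMv7GPR { r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11 };

    constexpr ARMv7GPR regT4 = r4, regT5 = r5, regT6 = r8, regT7 = r9;
    constexpr ARMv7GPR regCS0 = r10, regCS1 = r11;

    // regCS<N> maps to a lower-numbered machine register than regCS<N+1>,
    // matching the convention used by the other targets.
    static_assert(regCS0 < regCS1, "CSR numbering follows machine register order");

    // The low-numbered temporaries now sit in r0-r7, which Thumb-2 can
    // usually encode with 16-bit instructions.
    static_assert(regT4 <= r7 && regT5 <= r7, "regT4/regT5 are Thumb-2 low registers");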

* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::initializeFromUnlinkedStructureStubInfo):
* jit/BaselineJITRegisters.h:
* jit/GPRInfo.h:
(JSC::GPRInfo::toIndex):
(JSC::PreferredArgumentImpl::preferredArgumentJSR):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emitSlow_op_instanceof):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_put_by_val):
(JSC::JIT::emitSlow_op_put_by_val):
(JSC::JIT::slow_op_put_by_val_callSlowOperationThenCheckExceptionGenerator):
(JSC::JIT::emit_op_put_private_name):
(JSC::JIT::emitSlow_op_put_private_name):
(JSC::JIT::slow_op_put_private_name_callSlowOperationThenCheckExceptionGenerator):
* llint/LowLevelInterpreter.asm:
* offlineasm/arm.rb:

Modified Paths

trunk/Source/JavaScriptCore/ChangeLog
trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
trunk/Source/JavaScriptCore/jit/BaselineJITRegisters.h
trunk/Source/JavaScriptCore/jit/GPRInfo.h
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
trunk/Source/JavaScriptCore/offlineasm/arm.rb

Diff

Modified: trunk/Source/JavaScriptCore/ChangeLog (292079 => 292080)


--- trunk/Source/JavaScriptCore/ChangeLog	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/ChangeLog	2022-03-29 23:21:12 UTC (rev 292080)
@@ -1,3 +1,47 @@
+2022-03-29  Geza Lore  <gl...@igalia.com>
+
+        [JSC][ARMv7] Cleanup GPR numbering
+        https://bugs.webkit.org/show_bug.cgi?id=235027
+
+        Reviewed by Yusuke Suzuki.
+
+        - Make the lower order callee save register be regCS0/llint csr0.
+        Some of the CSR store/restore code relies on this and using a
+        numbering scheme consistent with other targets (that is: regCS<N> maps
+        to a lower number machine register than regCS<N+1>) eliminates some
+        ifdefs in LLInt, and hopefully will prevent hard to find issues due to
+        the mismatch from other targets that all follow this rule.
+
+        - In the Thumb-2 instruction set, use of r0-r7 can often be encoded
+        using a shorter, 16-bit instruction. Swap regT4/regT5 with
+        regT7/regT6, so lower order temporaries (which are usually used first)
+        map to the lower order registers that can yield denser code. This
+        then simplifies BaselineJITRegisters.h, and also saves about ~1% DFG
+        code size.
+
+        - In offlineasm, prefer low order registers for temporaries.
+
+        - Also clean up baseline instanceof op implementation.
+
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::initializeFromUnlinkedStructureStubInfo):
+        * jit/BaselineJITRegisters.h:
+        * jit/GPRInfo.h:
+        (JSC::GPRInfo::toIndex):
+        (JSC::PreferredArgumentImpl::preferredArgumentJSR):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_instanceof):
+        (JSC::JIT::emitSlow_op_instanceof):
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emit_op_put_by_val):
+        (JSC::JIT::emitSlow_op_put_by_val):
+        (JSC::JIT::slow_op_put_by_val_callSlowOperationThenCheckExceptionGenerator):
+        (JSC::JIT::emit_op_put_private_name):
+        (JSC::JIT::emitSlow_op_put_private_name):
+        (JSC::JIT::slow_op_put_private_name_callSlowOperationThenCheckExceptionGenerator):
+        * llint/LowLevelInterpreter.asm:
+        * offlineasm/arm.rb:
+
 2022-03-29  Yusuke Suzuki  <ysuz...@apple.com>
 
         [JSC] Use spoolers in FTL OSR exit thunk

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp (292079 => 292080)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2022-03-29 23:21:12 UTC (rev 292080)
@@ -562,7 +562,7 @@
         baseGPR = BaselineJITRegisters::Instanceof::valueJSR.payloadGPR();
         valueGPR = BaselineJITRegisters::Instanceof::resultJSR.payloadGPR();
         regs.prototypeGPR = BaselineJITRegisters::Instanceof::protoJSR.payloadGPR();
-        m_stubInfoGPR = BaselineJITRegisters::Instanceof::stubInfoGPR;
+        m_stubInfoGPR = BaselineJITRegisters::Instanceof::FastPath::stubInfoGPR;
 #if USE(JSVALUE32_64)
         baseTagGPR = BaselineJITRegisters::Instanceof::valueJSR.tagGPR();
         valueTagGPR = InvalidGPRReg;
@@ -639,7 +639,7 @@
         baseGPR = BaselineJITRegisters::PutByVal::baseJSR.payloadGPR();
         regs.propertyGPR = BaselineJITRegisters::PutByVal::propertyJSR.payloadGPR();
         valueGPR = BaselineJITRegisters::PutByVal::valueJSR.payloadGPR();
-        m_stubInfoGPR = BaselineJITRegisters::PutByVal::FastPath::stubInfoGPR;
+        m_stubInfoGPR = BaselineJITRegisters::PutByVal::stubInfoGPR;
         if (accessType == AccessType::PutByVal)
             m_arrayProfileGPR = BaselineJITRegisters::PutByVal::profileGPR;
 #if USE(JSVALUE32_64)

Modified: trunk/Source/JavaScriptCore/jit/BaselineJITRegisters.h (292079 => 292080)


--- trunk/Source/JavaScriptCore/jit/BaselineJITRegisters.h	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/jit/BaselineJITRegisters.h	2022-03-29 23:21:12 UTC (rev 292080)
@@ -50,28 +50,33 @@
 }
 
 namespace Instanceof {
+    using SlowOperation = decltype(operationInstanceOfOptimize);
+
+    // Registers used on both Fast and Slow paths
+    constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
+    constexpr JSValueRegs valueJSR { preferredArgumentJSR<SlowOperation, 2>() };
+    constexpr JSValueRegs protoJSR { preferredArgumentJSR<SlowOperation, 3>() };
+
+    // Fast path only registers
+    namespace FastPath {
+        constexpr GPRReg stubInfoGPR { GPRInfo::argumentGPR1 };
+        constexpr GPRReg scratch1GPR { GPRInfo::argumentGPR0 };
+        constexpr GPRReg scratch2GPR {
 #if USE(JSVALUE64)
-    constexpr JSValueRegs resultJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs valueJSR { GPRInfo::argumentGPR2 };
-    constexpr JSValueRegs protoJSR { GPRInfo::argumentGPR3 };
-    constexpr GPRReg stubInfoGPR { GPRInfo::argumentGPR1 };
-    constexpr GPRReg scratch1GPR { GPRInfo::nonArgGPR0 };
-    constexpr GPRReg scratch2GPR { GPRInfo::nonArgGPR1 };
+            GPRInfo::regT4
 #elif USE(JSVALUE32_64)
-    constexpr JSValueRegs resultJSR { JSRInfo::jsRegT10 };
-    constexpr JSValueRegs valueJSR {
-#if CPU(MIPS)
-        GPRInfo::argumentGPR3, GPRInfo::argumentGPR2
-#else
-        JSRInfo::jsRegT32
+            GPRInfo::regT6
 #endif
-    };
-    constexpr JSValueRegs protoJSR { JSRInfo::jsRegT54 };
-    constexpr GPRReg stubInfoGPR { GPRInfo::regT1 };
-    constexpr GPRReg scratch1GPR { GPRInfo::regT6 };
-    constexpr GPRReg scratch2GPR { GPRInfo::regT7 };
-#endif
-    static_assert(noOverlap(valueJSR, protoJSR, stubInfoGPR, scratch1GPR, scratch2GPR));
+        };
+        static_assert(noOverlap(valueJSR, protoJSR, stubInfoGPR, scratch1GPR, scratch2GPR), "Required for DataIC");
+    }
+
+    // Slow path only registers
+    namespace SlowPath {
+        constexpr GPRReg globalObjectGPR { preferredArgumentGPR<SlowOperation, 0>() };
+        constexpr GPRReg stubInfoGPR { preferredArgumentGPR<SlowOperation, 1>() };
+        static_assert(noOverlap(globalObjectGPR, stubInfoGPR, valueJSR, protoJSR), "Required for call to slow operation");
+    }
 }
 
 namespace JFalse {
@@ -117,15 +122,9 @@
 
     // Fast path only registers
     namespace FastPath {
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT1 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT2 };
-        constexpr JSValueRegs dontClobberJSR { GPRInfo::regT3 };
-#elif USE(JSVALUE32_64)
         constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
         constexpr GPRReg scratchGPR { GPRInfo::regT3 };
-        constexpr JSValueRegs dontClobberJSR { GPRInfo::regT6, GPRInfo::regT7 };
-#endif
+        constexpr JSValueRegs dontClobberJSR { JSRInfo::jsRegT54 };
         static_assert(noOverlap(baseJSR, stubInfoGPR, scratchGPR, dontClobberJSR), "Required for DataIC");
     }
 
@@ -132,13 +131,9 @@
     // Slow path only registers
     namespace SlowPath {
         constexpr GPRReg globalObjectGPR { GPRInfo::regT2 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT2 };
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
         constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-#if USE(JSVALUE64)
         constexpr GPRReg propertyGPR { GPRInfo::regT4 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg propertyGPR { GPRInfo::regT7 };
-#endif
         static_assert(noOverlap(baseJSR, bytecodeOffsetGPR, stubInfoGPR, propertyGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, globalObjectGPR, stubInfoGPR, propertyGPR), "Required for call to slow operation");
     }
@@ -147,23 +142,13 @@
 namespace GetByIdWithThis {
     // Registers used on both Fast and Slow paths
     constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs thisJSR { GPRInfo::regT1 };
-#elif USE(JSVALUE32_64)
     constexpr JSValueRegs baseJSR { JSRInfo::jsRegT10 };
     constexpr JSValueRegs thisJSR { JSRInfo::jsRegT32 };
-#endif
 
     // Fast path only registers
     namespace FastPath {
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT3 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-#endif
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg scratchGPR { GPRInfo::regT5 };
         static_assert(noOverlap(baseJSR, thisJSR, stubInfoGPR, scratchGPR), "Required for DataIC");
     }
 
@@ -170,14 +155,15 @@
     // Slow path only registers
     namespace SlowPath {
         constexpr GPRReg globalObjectGPR { GPRInfo::regT4 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT4 };
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT5 };
+        constexpr GPRReg propertyGPR {
 #if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT3 };
+            GPRInfo::regT1
 #elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT6 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT7 };
+            GPRInfo::regT6
 #endif
+        };
         static_assert(noOverlap(baseJSR, thisJSR, bytecodeOffsetGPR, stubInfoGPR, propertyGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, thisJSR, globalObjectGPR, stubInfoGPR, propertyGPR), "Required for call to slow operation");
     }
@@ -186,23 +172,13 @@
 namespace GetByVal {
     // Registers used on both Fast and Slow paths
     constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT1 };
-#elif USE(JSVALUE32_64)
     constexpr JSValueRegs baseJSR { JSRInfo::jsRegT10 };
     constexpr JSValueRegs propertyJSR { JSRInfo::jsRegT32 };
-#endif
 
     // Fast path only registers
     namespace FastPath {
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT3 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-#endif
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg scratchGPR { GPRInfo::regT5 };
         static_assert(noOverlap(baseJSR, propertyJSR, stubInfoGPR, scratchGPR), "Required for DataIC");
     }
 
@@ -209,14 +185,15 @@
     // Slow path only registers
     namespace SlowPath {
         constexpr GPRReg globalObjectGPR { GPRInfo::regT4 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT4 };
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT5 };
+        constexpr GPRReg profileGPR {
 #if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-        constexpr GPRReg profileGPR { GPRInfo::regT3 };
+            GPRInfo::regT1
 #elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT6 };
-        constexpr GPRReg profileGPR { GPRInfo::regT7 };
+            GPRInfo::regT6
 #endif
+        };
         static_assert(noOverlap(baseJSR, propertyJSR, bytecodeOffsetGPR, stubInfoGPR, profileGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, propertyJSR, globalObjectGPR, stubInfoGPR, profileGPR), "Required for call to slow operation");
     }
@@ -224,43 +201,30 @@
 
 #if USE(JSVALUE64)
 namespace EnumeratorGetByVal {
-    static constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    static constexpr JSValueRegs propertyJSR { GPRInfo::regT1 };
-    static constexpr JSValueRegs resultJSR { GPRInfo::regT0 };
-    static constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-    // We rely on this when linking a CodeBlock and initializing registers for a GetByVal StubInfo.
-    static_assert(baseJSR == GetByVal::baseJSR);
-    static_assert(propertyJSR == GetByVal::propertyJSR);
-    static_assert(resultJSR == GetByVal::resultJSR);
-    static_assert(stubInfoGPR == GetByVal::FastPath::stubInfoGPR);
-
-    static constexpr GPRReg scratch1 { GPRInfo::regT3 };
-    static constexpr GPRReg scratch2 { GPRInfo::regT4 };
+    // We rely on using the same registers when linking a CodeBlock and initializing registers
+    // for a GetByVal StubInfo.
+    static constexpr JSValueRegs baseJSR { GetByVal::baseJSR };
+    static constexpr JSValueRegs propertyJSR { GetByVal::propertyJSR };
+    static constexpr JSValueRegs resultJSR { GetByVal::resultJSR };
+    static constexpr GPRReg stubInfoGPR { GetByVal::FastPath::stubInfoGPR };
+    static constexpr GPRReg scratch1 { GPRInfo::regT1 };
+    static constexpr GPRReg scratch2 { GPRInfo::regT3 };
     static constexpr GPRReg scratch3 { GPRInfo::regT5 };
+    static_assert(noOverlap(baseJSR, propertyJSR, stubInfoGPR, scratch1, scratch2, scratch3));
 }
 #endif
 
 namespace PutById {
     // Registers used on both Fast and Slow paths
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs valueJSR { GPRInfo::regT1 };
-#elif USE(JSVALUE32_64)
     constexpr JSValueRegs baseJSR { JSRInfo::jsRegT10 };
     constexpr JSValueRegs valueJSR { JSRInfo::jsRegT32 };
-#endif
 
     // Fast path only registers
     namespace FastPath {
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT2 };
-        constexpr GPRReg scratch2GPR { GPRInfo::regT4 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-        constexpr GPRReg scratch2GPR { baseJSR.tagGPR() }; // Reusing regT1 for better code size on ARM_THUMB2
-#endif
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg scratchGPR { GPRInfo::regT5 };
+        // Fine to use regT1, which also yields better code size on ARM_THUMB2
+        constexpr GPRReg scratch2GPR { GPRInfo::regT1 };
         static_assert(noOverlap(baseJSR, valueJSR, stubInfoGPR, scratchGPR), "Required for DataIC");
         static_assert(noOverlap(baseJSR.payloadGPR(), valueJSR, stubInfoGPR, scratchGPR, scratch2GPR), "Required for DataIC");
     }
@@ -267,17 +231,17 @@
 
     // Slow path only registers
     namespace SlowPath {
+        constexpr GPRReg globalObjectGPR {
 #if USE(JSVALUE64)
-        constexpr GPRReg globalObjectGPR { GPRInfo::regT2 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT2 };
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT4 };
+            GPRInfo::regT1
 #elif USE(JSVALUE32_64)
-        constexpr GPRReg globalObjectGPR { GPRInfo::regT6 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT6 };
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT4 };
+            GPRInfo::regT6
 #endif
+        };
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg propertyGPR { GPRInfo::regT5 };
+
         static_assert(noOverlap(baseJSR, valueJSR, bytecodeOffsetGPR, stubInfoGPR, propertyGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, valueJSR, globalObjectGPR, stubInfoGPR, propertyGPR), "Required for call to slow operation");
     }
@@ -284,39 +248,39 @@
 }
 
 namespace PutByVal {
-    // Registers used on both Fast and Slow paths
+    constexpr JSValueRegs baseJSR { JSRInfo::jsRegT10 };
+    constexpr JSValueRegs propertyJSR { JSRInfo::jsRegT32 };
+    constexpr JSValueRegs valueJSR { JSRInfo::jsRegT54 };
+    constexpr GPRReg profileGPR {
 #if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT1 };
-    constexpr JSValueRegs valueJSR { GPRInfo::regT2 };
-    constexpr GPRReg profileGPR { GPRInfo::regT3 };
+        GPRInfo::regT1
 #elif USE(JSVALUE32_64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT1, GPRInfo::regT0 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT3, GPRInfo::regT2 };
-    constexpr JSValueRegs valueJSR { GPRInfo::regT6, GPRInfo::regT7 };
-    constexpr GPRReg profileGPR { GPRInfo::regT5 };
+        GPRInfo::regT6
 #endif
+    };
+    constexpr GPRReg stubInfoGPR {
+#if USE(JSVALUE64)
+        GPRInfo::regT3
+#elif USE(JSVALUE32_64)
+        GPRInfo::regT7
+#endif
+    };
 
-    // Fast path only registers
-    namespace FastPath {
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
-        static_assert(noOverlap(baseJSR, propertyJSR, valueJSR, profileGPR, stubInfoGPR), "Required for DataIC");
-    }
+    static_assert(noOverlap(baseJSR, propertyJSR, valueJSR, profileGPR, stubInfoGPR), "Required for DataIC");
 
     // Slow path only registers
     namespace SlowPath {
         constexpr GPRReg globalObjectGPR {
 #if USE(JSVALUE64)
-                GPRInfo::regT5
+            GPRInfo::regT5
 #elif CPU(ARM_THUMB2)
-                // We are a bit short on registers on ARM_THUMB2, but we can just about get away with this
-                MacroAssemblerARMv7::s_scratchRegister
+            // We are a bit short on registers on ARM_THUMB2, but we can just about get away with this
+            MacroAssemblerARMv7::s_scratchRegister
 #else // Other JSVALUE32_64
-                GPRInfo::regT8
+            GPRInfo::regT8
 #endif
         };
         constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
         static_assert(noOverlap(baseJSR, propertyJSR, valueJSR, profileGPR, bytecodeOffsetGPR, stubInfoGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, propertyJSR, valueJSR, profileGPR, globalObjectGPR, stubInfoGPR), "Required for call to slow operation");
     }
@@ -331,17 +295,10 @@
 
 namespace InByVal {
     constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT0 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT1 };
-    constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-    constexpr GPRReg scratchGPR { GPRInfo::regT3 };
-#elif USE(JSVALUE32_64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT1, GPRInfo::regT0 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT3, GPRInfo::regT2 };
-    constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-    constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-#endif
+    constexpr JSValueRegs baseJSR { JSRInfo::jsRegT10 };
+    constexpr JSValueRegs propertyJSR { JSRInfo::jsRegT32 };
+    constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+    constexpr GPRReg scratchGPR { GPRInfo::regT5 };
     static_assert(baseJSR == GetByVal::baseJSR);
     static_assert(propertyJSR == GetByVal::propertyJSR);
 }
@@ -348,22 +305,13 @@
 
 namespace DelById {
     // Registers used on both Fast and Slow paths
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT1 };
-#elif USE(JSVALUE32_64)
     constexpr JSValueRegs baseJSR { JSRInfo::jsRegT32 };
-#endif
 
     // Fast path only registers
     namespace FastPath {
         constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT2 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-#endif
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg scratchGPR { GPRInfo::regT5 };
         static_assert(noOverlap(baseJSR, stubInfoGPR, scratchGPR), "Required for DataIC");
     }
 
@@ -370,16 +318,10 @@
     // Slow path only registers
     namespace SlowPath {
         constexpr GPRReg globalObjectGPR { GPRInfo::regT0 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT0 };
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT3 };
-        constexpr GPRReg ecmaModeGPR { GPRInfo::regT4 };
-#elif USE(JSVALUE32_64)
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
         constexpr GPRReg stubInfoGPR { GPRInfo::regT1 };
-        constexpr GPRReg propertyGPR { GPRInfo::regT6 };
-        constexpr GPRReg ecmaModeGPR { GPRInfo::regT7 };
-#endif
+        constexpr GPRReg propertyGPR { GPRInfo::regT4 };
+        constexpr GPRReg ecmaModeGPR { GPRInfo::regT5 };
         static_assert(noOverlap(baseJSR, bytecodeOffsetGPR, stubInfoGPR, propertyGPR, ecmaModeGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, globalObjectGPR, stubInfoGPR, propertyGPR, ecmaModeGPR), "Required for call to slow operation");
     }
@@ -387,40 +329,29 @@
 
 namespace DelByVal {
     // Registers used on both Fast and Slow paths
-#if USE(JSVALUE64)
-    constexpr JSValueRegs baseJSR { GPRInfo::regT1 };
-    constexpr JSValueRegs propertyJSR { GPRInfo::regT0 };
-#elif USE(JSVALUE32_64)
     constexpr JSValueRegs baseJSR { JSRInfo::jsRegT32 };
     constexpr JSValueRegs propertyJSR { JSRInfo::jsRegT10 };
-#endif
 
     // Fast path only registers
     namespace FastPath {
         constexpr JSValueRegs resultJSR { JSRInfo::returnValueJSR };
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT2 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-        constexpr GPRReg scratchGPR { GPRInfo::regT6 };
-#endif
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT4 };
+        constexpr GPRReg scratchGPR { GPRInfo::regT5 };
         static_assert(noOverlap(baseJSR, propertyJSR, stubInfoGPR, scratchGPR), "Required for DataIC");
     }
 
     // Slow path only registers
     namespace SlowPath {
+        constexpr GPRReg globalObjectGPR { GPRInfo::regT4 };
+        constexpr GPRReg bytecodeOffsetGPR { globalObjectGPR };
+        constexpr GPRReg stubInfoGPR { GPRInfo::regT5 };
+        constexpr GPRReg ecmaModeGPR {
 #if USE(JSVALUE64)
-        constexpr GPRReg globalObjectGPR { GPRInfo::regT2 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT2 };
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT3 };
-        constexpr GPRReg ecmaModeGPR { GPRInfo::regT4 };
+            GPRInfo::regT1
 #elif USE(JSVALUE32_64)
-        constexpr GPRReg globalObjectGPR { GPRInfo::regT4 };
-        constexpr GPRReg bytecodeOffsetGPR { GPRInfo::regT4 };
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT6 };
-        constexpr GPRReg ecmaModeGPR { GPRInfo::regT7 };
+            GPRInfo::regT6
 #endif
+        };
         static_assert(noOverlap(baseJSR, propertyJSR, bytecodeOffsetGPR, stubInfoGPR, ecmaModeGPR), "Required for call to CTI thunk");
         static_assert(noOverlap(baseJSR, propertyJSR, globalObjectGPR, stubInfoGPR, ecmaModeGPR), "Required for call to slow operation");
     }
@@ -431,11 +362,7 @@
     constexpr JSValueRegs brandJSR { GetByVal::propertyJSR }; // Required by shared slow path thunk
 
     namespace FastPath {
-#if USE(JSVALUE64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT2 };
-#elif USE(JSVALUE32_64)
-        constexpr GPRReg stubInfoGPR { GPRInfo::regT7 };
-#endif
+        constexpr GPRReg stubInfoGPR { GetByVal::FastPath::stubInfoGPR };
         static_assert(noOverlap(baseJSR, brandJSR, stubInfoGPR), "Required for DataIC");
     }
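
For readers unfamiliar with the preferredArgumentGPR/preferredArgumentJSR
helpers used above: the idea is to pick, for each slow-operation argument,
the register the C calling convention will pass it in, so the slow path
needs no extra shuffling before the call. A toy stand-in (hypothetical
names; ARMv7/AAPCS argument registers r0-r3 assumed, not the real GPRInfo.h
implementation) might look like this:

    // Toy model only; the real helpers consider the operation's full
    // signature, stack-passed arguments, and per-platform details.
    enum class GPR { r0, r1, r2, r3, r4, r5 };
    constexpr GPR armArgumentGPRs[] = { GPR::r0, GPR::r1, GPR::r2, GPR::r3 };

    template<unsigned ArgNum>
    constexpr GPR toyPreferredArgumentGPR()
    {
        // Route each of the first four word-sized arguments to the AAPCS
        // argument register it will be passed in.
        static_assert(ArgNum < 4, "toy model only covers register arguments");
        return armArgumentGPRs[ArgNum];
    }

    static_assert(toyPreferredArgumentGPR<0>() == GPR::r0, "argument 0 goes in r0");
    static_assert(toyPreferredArgumentGPR<1>() == GPR::r1, "argument 1 goes in r1");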
 

Modified: trunk/Source/JavaScriptCore/jit/GPRInfo.h (292079 => 292080)


--- trunk/Source/JavaScriptCore/jit/GPRInfo.h	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/jit/GPRInfo.h	2022-03-29 23:21:12 UTC (rev 292080)
@@ -566,12 +566,12 @@
     static constexpr GPRReg regT1 = ARMRegisters::r1;
     static constexpr GPRReg regT2 = ARMRegisters::r2;
     static constexpr GPRReg regT3 = ARMRegisters::r3;
-    static constexpr GPRReg regT4 = ARMRegisters::r8;
-    static constexpr GPRReg regT5 = ARMRegisters::r9;
-    static constexpr GPRReg regT6 = ARMRegisters::r5;
-    static constexpr GPRReg regT7 = ARMRegisters::r4;
-    static constexpr GPRReg regCS0 = ARMRegisters::r11;
-    static constexpr GPRReg regCS1 = ARMRegisters::r10;
+    static constexpr GPRReg regT4 = ARMRegisters::r4;
+    static constexpr GPRReg regT5 = ARMRegisters::r5;
+    static constexpr GPRReg regT6 = ARMRegisters::r8;
+    static constexpr GPRReg regT7 = ARMRegisters::r9;
+    static constexpr GPRReg regCS0 = ARMRegisters::r10;
+    static constexpr GPRReg regCS1 = ARMRegisters::r11;
     // These registers match the baseline JIT.
     static constexpr GPRReg callFrameRegister = ARMRegisters::fp;
     // These constants provide the names for the general purpose argument & return value registers.
@@ -579,7 +579,7 @@
     static constexpr GPRReg argumentGPR1 = ARMRegisters::r1; // regT1
     static constexpr GPRReg argumentGPR2 = ARMRegisters::r2; // regT2
     static constexpr GPRReg argumentGPR3 = ARMRegisters::r3; // regT3
-    static constexpr GPRReg nonArgGPR0 = ARMRegisters::r4; // regT7
+    static constexpr GPRReg nonArgGPR0 = ARMRegisters::r4; // regT4
     static constexpr GPRReg returnValueGPR = ARMRegisters::r0; // regT0
     static constexpr GPRReg returnValueGPR2 = ARMRegisters::r1; // regT1
     static constexpr GPRReg nonPreservedNonReturnGPR = ARMRegisters::r5;
@@ -603,7 +603,7 @@
         ASSERT(reg != InvalidGPRReg);
         ASSERT(static_cast<int>(reg) < 16);
         static const unsigned indexForRegister[16] =
-            { 0, 1, 2, 3, 7, 6, InvalidIndex, InvalidIndex, 4, 5, 9, 8, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
+            { 0, 1, 2, 3, 4, 5, InvalidIndex, InvalidIndex, 6, 7, 8, 9, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
         unsigned result = indexForRegister[reg];
         return result;
     }
@@ -1136,8 +1136,8 @@
         // the link register as a temporary.
         return pickJSR<OperationType, ArgNum>(
             GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2,
-            GPRInfo::argumentGPR3, GPRInfo::regT7,        GPRInfo::regT6,
-            GPRInfo::regT4,        GPRInfo::regT5,        ARMRegisters::lr);
+            GPRInfo::argumentGPR3, GPRInfo::regT4,        GPRInfo::regT5,
+            GPRInfo::regT6,        GPRInfo::regT7,        ARMRegisters::lr);
 #elif CPU(MIPS)
         return pickJSR<OperationType, ArgNum>(
             GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2,
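
A quick way to check the updated indexForRegister table against the new
regT0-regT7/regCS0/regCS1 assignments is to verify it is the inverse of the
register-for-index mapping. The standalone sketch below does that with plain
integers (assumed machine register numbers, not the real GPRInfo API):

    #include <cassert>

    int main()
    {
        constexpr unsigned InvalidIndex = 0xffffffffu;
        // The table from GPRInfo::toIndex() after this change.
        constexpr unsigned indexForRegister[16] = {
            0, 1, 2, 3, 4, 5, InvalidIndex, InvalidIndex,
            6, 7, 8, 9, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex
        };
        // Machine register numbers for regT0..regT7, regCS0, regCS1 (r0-r5, r8-r11).
        constexpr int registerForIndex[10] = { 0, 1, 2, 3, 4, 5, 8, 9, 10, 11 };

        for (int index = 0; index < 10; ++index)
            assert(indexForRegister[registerForIndex[index]] == static_cast<unsigned>(index));
        return 0;
    }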

Modified: trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp (292079 => 292080)


--- trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp	2022-03-29 23:21:12 UTC (rev 292080)
@@ -156,9 +156,9 @@
     using BaselineJITRegisters::Instanceof::resultJSR;
     using BaselineJITRegisters::Instanceof::valueJSR;
     using BaselineJITRegisters::Instanceof::protoJSR;
-    using BaselineJITRegisters::Instanceof::stubInfoGPR;
-    using BaselineJITRegisters::Instanceof::scratch1GPR;
-    using BaselineJITRegisters::Instanceof::scratch2GPR;
+    using BaselineJITRegisters::Instanceof::FastPath::stubInfoGPR;
+    using BaselineJITRegisters::Instanceof::FastPath::scratch1GPR;
+    using BaselineJITRegisters::Instanceof::FastPath::scratch2GPR;
 
     emitGetVirtualRegister(value, valueJSR);
     emitGetVirtualRegister(proto, protoJSR);
@@ -189,36 +189,29 @@
     addSlowCase();
     m_instanceOfs.append(gen);
 
+    setFastPathResumePoint();
     emitPutVirtualRegister(dst, resultJSR);
 }
 
-void JIT::emitSlow_op_instanceof(const JSInstruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+void JIT::emitSlow_op_instanceof(const JSInstruction*, Vector<SlowCaseEntry>::iterator& iter)
 {
     linkAllSlowCases(iter);
 
-    auto bytecode = currentInstruction->as<OpInstanceof>();
-    VirtualRegister resultVReg = bytecode.m_dst;
-    
     JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++];
-    
+
     Label coldPathBegin = label();
 
-    using SlowOperation = decltype(operationInstanceOfOptimize);
-    constexpr GPRReg globalObjectGPR = preferredArgumentGPR<SlowOperation, 0>();
-    constexpr GPRReg stubInfoGPR = preferredArgumentGPR<SlowOperation, 1>();
     using BaselineJITRegisters::Instanceof::valueJSR;
-    static_assert(valueJSR == preferredArgumentJSR<SlowOperation, 2>());
     using BaselineJITRegisters::Instanceof::protoJSR;
-    // On JSVALUE32_64, 'proto' will be passed on stack anyway
-    static_assert(protoJSR == preferredArgumentJSR<SlowOperation, 3>() || is32Bit());
-    static_assert(noOverlap(globalObjectGPR, stubInfoGPR, valueJSR, protoJSR));
+    using BaselineJITRegisters::Instanceof::SlowPath::globalObjectGPR;
+    using BaselineJITRegisters::Instanceof::SlowPath::stubInfoGPR;
 
     loadGlobalObject(globalObjectGPR);
     loadConstant(gen.m_unlinkedStubInfoConstantIndex, stubInfoGPR);
-    callOperation<SlowOperation>(
+    callOperation<decltype(operationInstanceOfOptimize)>(
         Address(stubInfoGPR, StructureStubInfo::offsetOfSlowOperation()),
-        resultVReg,
         globalObjectGPR, stubInfoGPR, valueJSR, protoJSR);
+    static_assert(BaselineJITRegisters::Instanceof::resultJSR == returnValueJSR);
     gen.reportSlowPathCall(coldPathBegin, Call());
 }
 

Modified: trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp (292079 => 292080)


--- trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp	2022-03-29 23:21:12 UTC (rev 292080)
@@ -381,7 +381,7 @@
     using BaselineJITRegisters::PutByVal::propertyJSR;
     using BaselineJITRegisters::PutByVal::valueJSR;
     using BaselineJITRegisters::PutByVal::profileGPR;
-    using BaselineJITRegisters::PutByVal::FastPath::stubInfoGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
 
     emitGetVirtualRegister(base, baseJSR);
     emitGetVirtualRegister(property, propertyJSR);
@@ -430,7 +430,7 @@
     ASSERT(BytecodeIndex(bytecodeOffset) == m_bytecodeIndex);
     JITPutByValGenerator& gen = m_putByVals[m_putByValIndex++];
 
-    using BaselineJITRegisters::PutByVal::SlowPath::stubInfoGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
     using BaselineJITRegisters::PutByVal::SlowPath::bytecodeOffsetGPR;
 
     Label coldPathBegin = label();
@@ -458,9 +458,9 @@
     using BaselineJITRegisters::PutByVal::propertyJSR;
     using BaselineJITRegisters::PutByVal::valueJSR;
     using BaselineJITRegisters::PutByVal::profileGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
     using BaselineJITRegisters::PutByVal::SlowPath::globalObjectGPR;
     using BaselineJITRegisters::PutByVal::SlowPath::bytecodeOffsetGPR;
-    using BaselineJITRegisters::PutByVal::SlowPath::stubInfoGPR;
 
     jit.emitCTIThunkPrologue();
 
@@ -493,7 +493,7 @@
     using BaselineJITRegisters::PutByVal::baseJSR;
     using BaselineJITRegisters::PutByVal::propertyJSR;
     using BaselineJITRegisters::PutByVal::valueJSR;
-    using BaselineJITRegisters::PutByVal::FastPath::stubInfoGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
 
     emitGetVirtualRegister(base, baseJSR);
     emitGetVirtualRegister(property, propertyJSR);
@@ -529,7 +529,7 @@
     JITPutByValGenerator& gen = m_putByVals[m_putByValIndex++];
 
     using BaselineJITRegisters::PutByVal::SlowPath::bytecodeOffsetGPR;
-    using BaselineJITRegisters::PutByVal::SlowPath::stubInfoGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
 
     Label coldPathBegin = label();
     linkAllSlowCases(iter);
@@ -556,9 +556,9 @@
     using BaselineJITRegisters::PutByVal::propertyJSR;
     using BaselineJITRegisters::PutByVal::valueJSR;
     using BaselineJITRegisters::PutByVal::profileGPR;
+    using BaselineJITRegisters::PutByVal::stubInfoGPR;
     using BaselineJITRegisters::PutByVal::SlowPath::globalObjectGPR;
     using BaselineJITRegisters::PutByVal::SlowPath::bytecodeOffsetGPR;
-    using BaselineJITRegisters::PutByVal::SlowPath::stubInfoGPR;
 
     jit.emitCTIThunkPrologue();
 

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (292079 => 292080)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2022-03-29 23:21:12 UTC (rev 292080)
@@ -302,12 +302,9 @@
     if C_LOOP or C_LOOP_WIN
         const PB = csr0
         const metadataTable = csr3
-    elsif ARMv7
+    elsif ARMv7 or MIPS
         const metadataTable = csr0
         const PB = csr1
-    elsif MIPS
-        const metadataTable = csr0
-        const PB = csr1
     else
         error
     end
@@ -842,18 +839,7 @@
     subp CalleeSaveSpaceStackAligned, sp
     if C_LOOP or C_LOOP_WIN
         storep metadataTable, -PtrSize[cfr]
-
-    # Next ARMv7 and MIPS differ in how we store metadataTable and PB,
-    # because this codes needs to be in sync with how registers are
-    # restored in Baseline JIT (specifically in emitRestoreCalleeSavesFor).
-    # emitRestoreCalleeSavesFor restores registers in order instead of by name.
-    # However, ARMv7 and MIPS differ in the order in which registers are assigned
-    # to metadataTable and PB, therefore they can also not have the same saving
-    # order.
-    elsif ARMv7
-        storep metadataTable, -4[cfr]
-        storep PB, -8[cfr]
-    elsif MIPS
+    elsif ARMv7 or MIPS
         storep PB, -4[cfr]
         storep metadataTable, -8[cfr]
     elsif ARM64 or ARM64E
@@ -882,12 +868,7 @@
 macro restoreCalleeSavesUsedByLLInt()
     if C_LOOP or C_LOOP_WIN
         loadp -PtrSize[cfr], metadataTable
-    # To understand why ARMv7 and MIPS differ in restore order,
-    # see comment in preserveCalleeSavesUsedByLLInt
-    elsif ARMv7
-        loadp -4[cfr], metadataTable
-        loadp -8[cfr], PB
-    elsif MIPS
+    elsif ARMv7 or MIPS
         loadp -4[cfr], PB
         loadp -8[cfr], metadataTable
     elsif ARM64 or ARM64E
@@ -950,12 +931,7 @@
             storeq csr4, 32[entryFrame]
             storeq csr5, 40[entryFrame]
             storeq csr6, 48[entryFrame]
-        # To understand why ARMv7 and MIPS differ in store order,
-        # see comment in preserveCalleeSavesUsedByLLInt
-        elsif ARMv7
-            storep csr1, [entryFrame]
-            storep csr0, 4[entryFrame]
-        elsif MIPS
+        elsif ARMv7 or MIPS
             storep csr0, [entryFrame]
             storep csr1, 4[entryFrame]
         elsif RISCV64
@@ -1031,12 +1007,7 @@
             loadq 32[temp], csr4
             loadq 40[temp], csr5
             loadq 48[temp], csr6
-        # To understand why ARMv7 and MIPS differ in restore order,
-        # see comment in preserveCalleeSavesUsedByLLInt
-        elsif ARMv7
-            loadp [temp], csr1
-            loadp 4[temp], csr0
-        elsif MIPS
+        elsif ARMv7 or MIPS
             loadp [temp], csr0
             loadp 4[temp], csr1
         elsif RISCV64
@@ -1810,32 +1781,37 @@
     _sanitizeStackForVMImpl:
         tagReturnAddress sp
         # We need three non-aliased caller-save registers. We are guaranteed
-        # this for a0, a1 and a2 on all architectures.
+        # this for a0, a1 and a2 on all architectures. Beware also that
+        # offlineasm might use temporary registers when lowering complex
+        # instructions on some platforms, which might be callee-save. To avoid
+        # this, we use the simplest instructions so we never need a temporary
+        # and hence don't clobber any callee-save registers.
         if X86 or X86_WIN
             loadp 4[sp], a0
         end
-        const vmOrStartSP = a0
         const address = a1
-        const zeroValue = a2
-    
-        loadp VM::m_lastStackTop[vmOrStartSP], address
-        move sp, zeroValue
-        storep zeroValue, VM::m_lastStackTop[vmOrStartSP]
-        move sp, vmOrStartSP
+        const scratch = a2
 
+        move VM::m_lastStackTop, scratch
+        addp scratch, a0
+        loadp [a0], address
+        move sp, scratch
+        storep scratch, [a0]
+        move sp, a0
+
         bpbeq sp, address, .zeroFillDone
         move address, sp
 
-        move 0, zeroValue
+        move 0, scratch
     .zeroFillLoop:
-        storep zeroValue, [address]
+        storep scratch, [address]
         addp PtrSize, address
-        bpa vmOrStartSP, address, .zeroFillLoop
+        bpa a0, address, .zeroFillLoop
 
     .zeroFillDone:
-        move vmOrStartSP, sp
+        move a0, sp
         ret
-    
+
     # VMEntryRecord* vmEntryRecord(const EntryFrame* entryFrame)
     global _vmEntryRecord
     _vmEntryRecord:
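
The rewritten _sanitizeStackForVMImpl above keeps the same behaviour while
sticking to instructions simple enough that offlineasm never needs a
(possibly callee-save) temporary when lowering them. In rough C++ terms
(struct and field names assumed for illustration, not the real JSC
declarations), the logic is:

    #include <cstring>

    struct VM { char* m_lastStackTop; }; // stand-in for the real JSC::VM field

    void sanitizeStackForVM(VM* vm, char* currentSP)
    {
        char* oldStackTop = vm->m_lastStackTop;
        vm->m_lastStackTop = currentSP;
        // The stack grows down: if the old stack top is deeper than the current
        // stack pointer, the memory in between is stale and gets zero-filled
        // (the assembly does this one pointer-sized store at a time).
        if (oldStackTop < currentSP)
            std::memset(oldStackTop, 0, static_cast<std::size_t>(currentSP - oldStackTop));
    }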

Modified: trunk/Source/JavaScriptCore/offlineasm/arm.rb (292079 => 292080)


--- trunk/Source/JavaScriptCore/offlineasm/arm.rb	2022-03-29 23:04:15 UTC (rev 292079)
+++ trunk/Source/JavaScriptCore/offlineasm/arm.rb	2022-03-29 23:21:12 UTC (rev 292080)
@@ -31,15 +31,17 @@
 #
 #  x0 => t0, a0, r0
 #  x1 => t1, a1, r1
-#  x2 => t2, a2, r2
-#  x3 => t3, a3, r3
-#  x6 =>            (callee-save scratch)
+#  x2 => t2, a2
+#  x3 => t3, a3
+#  x4 => t4                 (callee-save, PC)
+#  x5 => t5                 (callee-save)
+#  x6 => scratch            (callee-save)
 #  x7 => cfr
-#  x8 => t4         (callee-save)
-#  x9 => t5         (callee-save)
-# x10 => csr1       (callee-save, PB)
-# x11 => cfr, csr0  (callee-save, metadataTable)
-# x12 =>            (callee-save scratch)
+#  x8 => t6                 (callee-save)
+#  x9 => t7, also scratch!  (callee-save)
+# x10 => csr0               (callee-save, metadataTable)
+# x11 => csr1               (callee-save, PB)
+# x12 => scratch            (callee-save)
 #  lr => lr
 #  sp => sp
 #  pc => pc
@@ -69,7 +71,10 @@
     end
 end
 
-ARM_EXTRA_GPRS = [SpecialRegister.new("r6"), SpecialRegister.new("r4"), SpecialRegister.new("r12")]
+# These are allocated from the end. Use the low order r6 first, as it's often
+# cheaper to encode. r12 and r9 are equivalent, but r9 conflicts with t7, so use
+# r9 only as a last resort.
+ARM_EXTRA_GPRS = [SpecialRegister.new("r9"), SpecialRegister.new("r12"), SpecialRegister.new("r6")]
 ARM_EXTRA_FPRS = [SpecialRegister.new("d7")]
 ARM_SCRATCH_FPR = SpecialRegister.new("d6")
 OS_DARWIN = ((RUBY_PLATFORM =~ /darwin/i) != nil)
@@ -99,20 +104,22 @@
             "r1"
         when "t2", "a2"
             "r2"
-        when "a3"
+        when "t3", "a3"
             "r3"
-        when "t3"
-            "r3"
-        when "t4"
-            "r8"
+        when "t4" # LLInt PC
+            "r4"
         when "t5"
-            "r9"
+            "r5"
         when "cfr"
             "r7"
+        when "t6"
+            "r8"
+        when "t7"
+            "r9" # r9 is also a scratch register, so use carefully!
         when "csr0"
+            "r10"
+        when "csr1"
             "r11"
-        when "csr1"
-            "r10"
         when "lr"
             "lr"
         when "sp"