Revision: 283288
Author: sbar...@apple.com
Date: 2021-09-29 17:47:41 -0700 (Wed, 29 Sep 2021)
Log Message
We need to load the baseline JIT's constant pool register after OSR exit to checkpoints if we return to baseline code
https://bugs.webkit.org/show_bug.cgi?id=230972
<rdar://83659469>
Reviewed by Mark Lam and Yusuke Suzuki.
JSTests:
* stress/checkpoint-osr-exit-needs-to-reload-baseline-jit-constant-pool-gpr.js: Added.
(empty):
(empty2):
(test):
Source/JavaScriptCore:
Consider the following:
- We have a CodeBlock A.
- DFG or FTL compiles an exit to A when A is still LLInt code. This means
the OSR exit code will materialize registers as if A is LLInt.
- We tier up A to Baseline JIT code.
- Now, we take the exit to A as if it's LLInt. But the checkpoint OSR exit
code will actually jump to the tiered up baseline code when it's done,
because it determines where to jump at runtime. Because of this, when
we return from the checkpoint code, and if we are jumping into baseline
code, we must always load the constant pool register.
- There's no need to load the metadata register because that register is
shared with LLInt code, and will already contain the right value.
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::dispatchToNextInstructionDuringExit):
(JSC::LLInt::llint_slow_path_checkpoint_osr_exit_from_inlined_call):
(JSC::LLInt::llint_slow_path_checkpoint_osr_exit):
(JSC::LLInt::dispatchToNextInstruction): Deleted.
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter64.asm:
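The scenario described in the log message can be modeled outside of JSC. The sketch below is hypothetical (its names and values are illustrative, not JSC's actual data structures); it shows why the checkpoint exit must reload the constant pool register at runtime: the exit code was compiled assuming LLInt register state, but the tier it lands in is only known when the exit is taken.

```javascript
// Hypothetical model (not JSC code) of the register state a DFG/FTL OSR
// exit materializes when the target CodeBlock was still LLInt at compile
// time: no baseline constant pool is loaded.
function materializeExitState() {
    return { constantPoolGPR: null, metadataGPR: "metadata" };
}

// At runtime, the checkpoint exit discovers the *current* tier of the
// target CodeBlock and patches up the register state accordingly.
function finishCheckpointExit(currentTier, regs) {
    if (currentTier === "baseline") {
        // The CodeBlock tiered up after the exit was compiled, so the
        // constant pool GPR must be loaded here before jumping in.
        regs.constantPoolGPR = "baselineConstantPool";
    }
    // The metadata GPR is shared between LLInt and Baseline, so it
    // already holds the right value in either case.
    return regs;
}
```

If the target is still LLInt, nothing needs to change; the fix only affects the baseline path.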
Modified: trunk/JSTests/ChangeLog (283287 => 283288)
--- trunk/JSTests/ChangeLog 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/JSTests/ChangeLog 2021-09-30 00:47:41 UTC (rev 283288)
@@ -1,5 +1,18 @@
2021-09-29 Saam Barati <sbar...@apple.com>
+ We need to load the baseline JIT's constant pool register after OSR exit to checkpoints if we return to baseline code
+ https://bugs.webkit.org/show_bug.cgi?id=230972
+ <rdar://83659469>
+
+ Reviewed by Mark Lam and Yusuke Suzuki.
+
+ * stress/checkpoint-osr-exit-needs-to-reload-baseline-jit-constant-pool-gpr.js: Added.
+ (empty):
+ (empty2):
+ (test):
+
+2021-09-29 Saam Barati <sbar...@apple.com>
+
Code inside strength reduction can incorrectly prove that we know what lastIndex is
https://bugs.webkit.org/show_bug.cgi?id=230802
<rdar://problem/83543699>
Added: trunk/JSTests/stress/checkpoint-osr-exit-needs-to-reload-baseline-jit-constant-pool-gpr.js (0 => 283288)
--- trunk/JSTests/stress/checkpoint-osr-exit-needs-to-reload-baseline-jit-constant-pool-gpr.js (rev 0)
+++ trunk/JSTests/stress/checkpoint-osr-exit-needs-to-reload-baseline-jit-constant-pool-gpr.js 2021-09-30 00:47:41 UTC (rev 283288)
@@ -0,0 +1,15 @@
+function empty() {}
+function empty2() {}
+
+function test(arr) {
+ empty.apply(undefined, arr);
+ empty2();
+}
+
+for (let i = 0; i < 10000; i++) {
+ let arr = [];
+ for (let j = 0; j < i+10000; j++) {
+ arr.push(undefined);
+ }
+ test(arr);
+}
Modified: trunk/Source/JavaScriptCore/ChangeLog (283287 => 283288)
--- trunk/Source/JavaScriptCore/ChangeLog 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/Source/JavaScriptCore/ChangeLog 2021-09-30 00:47:41 UTC (rev 283288)
@@ -1,3 +1,34 @@
+2021-09-29 Saam Barati <sbar...@apple.com>
+
+ We need to load the baseline JIT's constant pool register after OSR exit to checkpoints if we return to baseline code
+ https://bugs.webkit.org/show_bug.cgi?id=230972
+ <rdar://83659469>
+
+ Reviewed by Mark Lam and Yusuke Suzuki.
+
+ Consider the following:
+ - We have a CodeBlock A.
+ - DFG or FTL compiles an exit to A when A is still LLInt code. This means
+ the OSR exit code will materialize registers as if A is LLInt.
+ - We tier up A to Baseline JIT code.
+ - Now, we take the exit to A as if it's LLInt. But the checkpoint OSR exit
+ code will actually jump to the tiered up baseline code when it's done,
+ because it determines where to jump at runtime. Because of this, when
+ we return from the checkpoint code, and if we are jumping into baseline
+ code, we must always load the constant pool register.
+ - There's no need to load the metadata register because that register is
+ shared with LLInt code, and will already contain the right value.
+
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompileMainPass):
+ * llint/LLIntSlowPaths.cpp:
+ (JSC::LLInt::dispatchToNextInstructionDuringExit):
+ (JSC::LLInt::llint_slow_path_checkpoint_osr_exit_from_inlined_call):
+ (JSC::LLInt::llint_slow_path_checkpoint_osr_exit):
+ (JSC::LLInt::dispatchToNextInstruction): Deleted.
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter64.asm:
+
2021-09-29 Basuke Suzuki <basuke.suz...@sony.com>
Suppress warnings for implicit copy assignment operator/copy constructor with clang 13
Modified: trunk/Source/JavaScriptCore/jit/JIT.cpp (283287 => 283288)
--- trunk/Source/JavaScriptCore/jit/JIT.cpp 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/Source/JavaScriptCore/jit/JIT.cpp 2021-09-30 00:47:41 UTC (rev 283288)
@@ -270,6 +270,18 @@
sizeMarker = m_vm->jitSizeStatistics->markStart(id, *this);
}
+#if ASSERT_ENABLED
+ if (opcodeID != op_catch) {
+ probeDebug([=] (Probe::Context& ctx) {
+ CodeBlock* codeBlock = ctx.fp<CallFrame*>()->codeBlock();
+ auto* constantPool = ctx.gpr<void*>(s_constantsGPR);
+ RELEASE_ASSERT(codeBlock->baselineJITConstantPool() == constantPool);
+ auto* metadata = ctx.gpr<void*>(s_metadataGPR);
+ RELEASE_ASSERT(codeBlock->metadataTable() == metadata);
+ });
+ }
+#endif
+
if (UNLIKELY(m_compilation)) {
add64(
TrustedImm32(1),
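The `probeDebug` check added above enforces a per-bytecode invariant: whenever baseline JIT code is executing at an opcode boundary, the constant pool and metadata GPRs must match what the current CodeBlock expects. A hypothetical sketch of that invariant (illustrative names, not JSC's API; `op_catch` is skipped in the real check, presumably because it re-materializes register state itself):

```javascript
// Hypothetical model of the invariant the new RELEASE_ASSERTs enforce at
// every (non-op_catch) bytecode boundary in baseline JIT code.
function checkBaselineRegisterInvariant(codeBlock, regs) {
    if (regs.constantPoolGPR !== codeBlock.baselineJITConstantPool)
        throw new Error("stale constant pool GPR");
    if (regs.metadataGPR !== codeBlock.metadataTable)
        throw new Error("stale metadata GPR");
    return true;
}
```

A stale constant pool GPR, as in the bug being fixed, trips this assertion immediately rather than producing a downstream crash.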
Modified: trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp (283287 => 283288)
--- trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp 2021-09-30 00:47:41 UTC (rev 283288)
@@ -2409,7 +2409,7 @@
valueRegister = iteratorResultObject.get(globalObject, vm.propertyNames->value);
}
-inline SlowPathReturnType dispatchToNextInstruction(ThrowScope& scope, CodeBlock* codeBlock, InstructionStream::Ref pc)
+inline SlowPathReturnType dispatchToNextInstructionDuringExit(ThrowScope& scope, CodeBlock* codeBlock, InstructionStream::Ref pc)
{
if (scope.exception())
return encodeResult(returnToThrow(scope.vm()), nullptr);
@@ -2427,7 +2427,7 @@
ASSERT(codeBlock->jitType() == JITType::BaselineJIT);
BytecodeIndex nextBytecodeIndex = pc.next().index();
auto nextBytecode = codeBlock->jitCodeMap().find(nextBytecodeIndex);
- return encodeResult(nullptr, nextBytecode.executableAddress());
+ return encodeResult(bitwise_cast<void*>(static_cast<uintptr_t>(1)), nextBytecode.executableAddress());
#endif
RELEASE_ASSERT_NOT_REACHED();
}
@@ -2479,7 +2479,7 @@
break;
}
- return dispatchToNextInstruction(scope, codeBlock, pc);
+ return dispatchToNextInstructionDuringExit(scope, codeBlock, pc);
}
extern "C" SlowPathReturnType llint_slow_path_checkpoint_osr_exit(CallFrame* callFrame, EncodedJSValue /* needed for cCall2 in CLoop */)
@@ -2524,7 +2524,7 @@
break;
}
- return dispatchToNextInstruction(scope, codeBlock, pc);
+ return dispatchToNextInstructionDuringExit(scope, codeBlock, pc);
}
extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM* vm, ProtoCallFrame* protoFrame)
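The change to `dispatchToNextInstructionDuringExit` above turns the first slot of the two-word return value into a tag: it was always `nullptr` before, and is now `1` when the jump target is Baseline JIT code. The assembly trampoline keys off that tag to decide whether to reload the constant pool. A hypothetical sketch of this protocol (illustrative names, not JSC's actual types):

```javascript
// Hypothetical model of the slow path's two-word result. Tag 0 mirrors
// encodeResult(nullptr, ...); tag 1 mirrors the new
// encodeResult(bitwise_cast<void*>(1), ...) for baseline targets.
function dispatchDuringExit(jitType, nextAddress) {
    if (jitType === "llint")
        return { tag: 0, target: nextAddress };
    // Baseline JIT target: tag 1 means "load the baseline constant pool".
    return { tag: 1, target: nextAddress };
}

// Hypothetical model of the trampoline's use of the tag, mirroring
// setupReturnToBaselineAfterCheckpointExitIfNeeded()'s bpneq on r0.
function trampoline(result, regs) {
    if (result.tag === 1)
        regs.PB = "jitConstantPool"; // loadBaselineJITConstantPool()
    return result.target;
}
```

Encoding the decision in the return value keeps the tier check in C++, where the CodeBlock's JIT type is easy to query, and leaves the assembly side a single compare-and-load.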
Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (283287 => 283288)
--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm 2021-09-30 00:47:41 UTC (rev 283288)
@@ -2482,6 +2482,32 @@
end)
+if JIT
+ macro loadBaselineJITConstantPool()
+ # Baseline uses LLInt's PB register for its JIT constant pool.
+ loadp CodeBlock[cfr], PB
+ loadp CodeBlock::m_jitData[PB], PB
+ loadp CodeBlock::JITData::m_jitConstantPool[PB], PB
+ end
+
+ macro setupReturnToBaselineAfterCheckpointExitIfNeeded()
+ # DFG or FTL OSR exit could have compiled an OSR exit to LLInt code.
+ # That means it set up registers as if execution would happen in the
+ # LLInt. However, during OSR exit for checkpoints, we might return to
+ # JIT code if it's already compiled. After the OSR exit gets compiled,
+ # we can tier up to JIT code. And checkpoint exit will jump to it.
+ # That means we always need to set up our constant pool GPR, because the OSR
+ # exit code might not have done it.
+ bpneq r0, 1, .notBaselineJIT
+ loadBaselineJITConstantPool()
+ .notBaselineJIT:
+
+ end
+else
+ macro setupReturnToBaselineAfterCheckpointExitIfNeeded()
+ end
+end
+
op(checkpoint_osr_exit_from_inlined_call_trampoline, macro ()
if (JSVALUE64 and not (C_LOOP or C_LOOP_WIN)) or ARMv7 or MIPS
restoreStackPointerAfterCall()
@@ -2505,8 +2531,10 @@
cCall2(_llint_slow_path_checkpoint_osr_exit_from_inlined_call)
end
+ setupReturnToBaselineAfterCheckpointExitIfNeeded()
restoreStateAfterCCall()
branchIfException(_llint_throw_from_slow_path_trampoline)
+
if ARM64E
move r1, a0
leap JSCConfig + constexpr JSC::offsetOfJSCConfigGateMap + (constexpr Gate::loopOSREntry) * PtrSize, a2
@@ -2528,6 +2556,7 @@
move cfr, a0
# We don't call saveStateForCCall() because we are going to use the bytecodeIndex from our side state.
cCall2(_llint_slow_path_checkpoint_osr_exit)
+ setupReturnToBaselineAfterCheckpointExitIfNeeded()
restoreStateAfterCCall()
branchIfException(_llint_throw_from_slow_path_trampoline)
if ARM64E
Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (283287 => 283288)
--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm 2021-09-30 00:21:20 UTC (rev 283287)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm 2021-09-30 00:47:41 UTC (rev 283288)
@@ -444,10 +444,7 @@
btpz r0, .recover
move r1, sp
- # Baseline uses LLInt's PB register for its JIT constant pool.
- loadp CodeBlock[cfr], PB
- loadp CodeBlock::m_jitData[PB], PB
- loadp CodeBlock::JITData::m_jitConstantPool[PB], PB
+ loadBaselineJITConstantPool()
if ARM64E
leap JSCConfig + constexpr JSC::offsetOfJSCConfigGateMap + (constexpr Gate::loopOSREntry) * PtrSize, a2